Re: solve javax.persistence.EntityNotFoundException with JPA (not by using @NotFound in HIBERNATE)

2015-09-14 Thread Craig L Russell
Hi Prabu,

Can you post your schema, entities, mapping (annotations in the entities are 
sufficient), and application code that demonstrates the issue? Perhaps filing a 
JIRA with attachments is the easiest way to do this.

Thanks,

Craig

> On Sep 14, 2015, at 6:36 PM, Prabu  wrote:
> 
> Hi Team,
> 
> I am using JPA in my project to retrieve data from the database.
> 
> I have a ManyToOne relationship from A (table1) to B (table2). If a
> referenced element is not present in B, I get
> 
> javax.persistence.EntityNotFoundException.
> 
> But I want the result to contain a null value when the element is not
> present in table2; I don't want the error thrown on the page.
> 
> How can I solve javax.persistence.EntityNotFoundException with JPA (not
> by using @NotFound in Hibernate)?
> 
> Could you please help.
> 
> -- 
> Thanks & Regards
> Prabu.N
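[For reference, one portable, JPA-only workaround that is sometimes suggested for this situation (class and column names below are hypothetical): instead of mapping a @ManyToOne that the provider must resolve, map the raw foreign-key column as a basic value and resolve the target on demand with em.find(), which returns null when the row is absent rather than throwing EntityNotFoundException. A sketch:]

```java
import javax.persistence.*;

@Entity
public class A {

    @Id
    private Long id;

    // Map the raw FK column instead of a @ManyToOne to B.
    @Column(name = "B_ID")
    private Long bId;

    // em.find() returns null when no matching row exists in table2,
    // instead of throwing EntityNotFoundException.
    public B findB(EntityManager em) {
        return bId == null ? null : em.find(B.class, bId);
    }
}
```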

Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: What is the best practice for handling results from JPQL query?

2013-02-01 Thread Craig L Russell

Hi Chris,

On Feb 1, 2013, at 1:28 PM, Chris Wolf wrote:


I notice that upon executing a query with the EM, it returns a
collection of type "DelegatingResultList". Can I just use that in
the business logic of my app, or should I copy it into a standard
ArrayList first?


Generally, any type of XXXList that you get from "someone else's
code" should be treated (not copied!) as a List, unless you know
that the behavior you need is only in XXXList and not in List. So in
your Java code,


List result = fooQuery.getResultList();

So carefully look at your requirements, but in general just use the  
List behavior until you find you need something more. And then  
ask here to see if there's a better way.
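[A small, self-contained illustration of the point, using only java.util types; FooResultList here is a made-up stand-in for a provider-specific class such as DelegatingResultList. Business code declares List and never names the concrete type, so the provider is free to return whatever implementation it likes:]

```java
import java.util.AbstractList;
import java.util.Arrays;
import java.util.List;

// Stand-in for a provider-specific list such as DelegatingResultList.
class FooResultList extends AbstractList<String> {
    private final List<String> delegate = Arrays.asList("a", "b", "c");
    @Override public String get(int i) { return delegate.get(i); }
    @Override public int size() { return delegate.size(); }
}

public class ListDemo {
    // Business code depends only on the List interface.
    static List<String> getResultList() {
        return new FooResultList();
    }

    public static void main(String[] args) {
        List<String> result = getResultList();
        System.out.println(result.size()); // prints 3
    }
}
```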


Craig


Thanks,


Chris


Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: [DISCUSS] JDO usage end-of-life?

2011-10-11 Thread Craig L Russell

Hi strub,

On Oct 11, 2011, at 10:26 AM, Mark Struberg wrote:

There is a JDO-3.0 spec out there since almost a year now. I'm not  
sure if we can/like to catch up.


There have been non-trivial but minor changes since JDO 2.0. IMHO  
"Catching up" would be a small task compared to re-activating the JDO  
support in OpenJPA.


Isn't there a pure JDO impl at db.apache.org which is up2date anyway?


No, the reference implementation for JDO is/has been DataNucleus. JDO  
1.0 had a reference implementation that didn't support relational  
databases, just a "toy" key/value store.


Best,

Craig



LieGrue,
strub



- Original Message -

From: Michael Dick 
To: users@openjpa.apache.org; d...@openjpa.apache.org
Cc:
Sent: Tuesday, October 11, 2011 4:21 AM
Subject: Re: [DISCUSS] JDO usage end-of-life?

There is at least some interest from a subset of our users. Matthew Adams
filed an issue, OPENJPA-1744
<https://issues.apache.org/jira/browse/OPENJPA-1744>, to add support for
JDO last July. I closed the issue, but Matthew responded
and the issue was reopened.

There hasn't been a lot of activity on the JIRA since then. There  
are some
users watching it, but no one has voted for it. If there's an  
outpouring of
support from the users list, and a committer (or aspiring
committer) is
interested in championing the effort, I'd be all for adding a JDO  
persona.
Absent a champion who is ready to dive into the code, I think that  
we should

clean up the references to jdo.

Even if OpenJPA removes the references to JDO, I'm sure a separate  
module
could be written that sits on top of our binaries. I suspect that's  
what BEA

/ Oracle did.

-mike



On Mon, Oct 10, 2011 at 8:18 AM, Kevin Sutter   
wrote:



Hi,
Sorry to cross post to both forums, but I wanted to ensure that I  
hit

everybody that might have an opinion on this JDO topic...

Is the JDO personality of OpenJPA still being utilized?  Marc's  
recent

post
about possibly pulling in javax.jdo.* packages during the  
enhancement
processing [1] reminded me that we still have old remnants of JDO  
(and

Kodo)
in the OpenJPA code base.  OpenJPA has never claimed support for  
JDO (nor
Kodo).  Way back when, BEA provided a JDO implementation as part  
of their
offering that sat on top of OpenJPA.  As far as I know, BEA (and  
Oracle)
only support the 1.1.x service stream of OpenJPA.  So, if we did  
this in

the
2.x stream, there should be no effects to that set of users.

Would there be a concern with the current users of OpenJPA to  
clean up the

code base and remove these JDO/Kodo references?  From a JPA/OpenJPA
perspective, you should see no differences in functionality.

Like I said, Marc's posting prompted me to revisit this topic.  I'm

just
exploring the option with no immediate plans of actually doing the  
work...


Thanks,
Kevin

[1]



http://openjpa.208410.n2.nabble.com/weird-JDO-Exception-when-using-OpenJPA-2-Enhancer-tc6870122.html






Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: No error message while I would wait for one

2011-03-05 Thread Craig L Russell

Hi,

Feel free to file a JIRA with a reproducible test case. Even better,  
attach a patch. ;-)


Craig

On Mar 5, 2011, at 12:46 PM, Jean-Baptiste BRIAUD -- Novlog wrote:


Hi,

2 persistent classes:
A
B extends A.

A has the primary key: id.

I was using a fetch plan to include id, but with the wrong class:
I mistakenly added a fetch plan for B.class, id.

id was null, because it ended up not included in the fetch plan.
This is OK, but when building the fetch plan, why not raise an error
message with something like:

"no attribute id on B"?

It took me ages to find my error.
After I added the correct fetch plan, A.class, id, everything works
fine.


What do you think ?


Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: "join fetch" doesnt work as expected

2011-02-22 Thread Craig L Russell

Hi,

*The association referenced by the right side of the FETCH JOIN clause
must be an association or element*
*collection that is referenced from an entity or embeddable that is
returned as a result of the query.*

I didn't write the specification, but the way I read it, the right  
side of the FETCH JOIN clause must be a member of the entity.


It would be good to ping the expert group about this issue.

Meantime, Kevin's proposed solution (multiple FETCH JOIN clauses)  
should work, as there's nothing that I can see in the specification to  
disallow it.
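[For concreteness, the multi-clause form for Marc's model might look something like the following. This is a sketch only: the JPA 2.0 spec does not sanction an identification variable on a fetch join, so whether a provider accepts this exact shape varies.]

```sql
SELECT b FROM Box b
  JOIN FETCH b.order
  JOIN b.order o
  JOIN FETCH o.orderPositions
WHERE b.id = ?1
```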


And it's worth noting that the proposed JSR for JPA.next explicitly  
calls for fetch groups and fetch plans to be addressed.


Craig

On Feb 22, 2011, at 1:56 PM, Kevin Sutter wrote:


Hi Marc,
If you can't tell, this question is making me curious...

What if you changed your query to be like this:

SELECT b FROM Box b JOIN FETCH b.order JOIN order o JOIN FETCH
o.orderPositions WHERE b.id = ?1

I don't even know if this will be processed correctly or if it's  
"legal" per

the spec, but on paper it looks good.  :-)

As I dug through our mail archives, I did see where the single level  
of

fetching is clarified:
http://openjpa.208410.n2.nabble.com/Problem-with-join-fetch-tc2958609.html

So, the basic question you posted is not a bug.  But, there are  
other ways
of getting around the issue per the posting above.  Another thing  
you may

want to look at is the OpenJPA fetch groups.  This would allow you to
specify programmatically when to lazy- vs. eager-load a relationship,
and how
many levels to traverse.  You can find more information on fetch  
groups here

[1].  Specifically, take a look at section 7.3 on per-field fetch
configuration.

Good luck,
Kevin

[1]
http://openjpa.apache.org/builds/latest/docs/manual/manual.html#ref_guide_fetch

On Tue, Feb 22, 2011 at 2:42 PM, Kevin Sutter   
wrote:


I looked at our existing junit tests and it looks like we're  
testing a
single level of fetching.  That is, if b <-> orders was 1:n instead  
of 1:1,
then you could do the "JOIN FETCH b.orders".  It looks like we have  
tested

that processing.  But, not going multiple levels...

If I look at the spec (section 4.4.5.3), it states:

*The syntax for a fetch join is*

*fetch_join ::= [ LEFT [OUTER] | INNER ] JOIN FETCH
join_association_path_expression*

*The association referenced by the right side of the FETCH JOIN  
clause

must be an association or element*
*collection that is referenced from an entity or embeddable that is
returned as a result of the query.*

Since you result of this query is returning "b", and your JOIN  
FETCH is
attempting to access b.order.orderPositions, orderPositions are not  
directly
referenced by b.  The spec is not clear on whether the additional  
levels of

traversal are expected.

Any other insights from other readers?

Kevin


On Tue, Feb 22, 2011 at 2:27 PM, Kevin Sutter   
wrote:



Hmmm...  It's my understanding that a JOIN FETCH should preload the
relationship, even if it's marked as being Lazy.  Have you  
performed a SQL
trace to see if we're even attempting to load the relationship?   
Unless

someone else speaks up, this would seem to be a bug.

Kevin


On Tue, Feb 22, 2011 at 12:08 PM, Marc Logemann   
wrote:



Hi,

assume this code:

  Query query = getEntityManager().createQuery(
      "SELECT b FROM Box b JOIN FETCH b.order.orderPositions WHERE b.id = ?1");
  query.setParameter(1, boxId);
  List boxList = query.getResultList();

The relationship is:

box  <-- 1:1 ---> order <-- 1:n --> orderPositions

When doing this query, I would expect that the orderPositions are
attached, but they are null (order is attached to the box as expected,
but that's 1:1). I checked this right after the query.getResultList()
call.

What am I missing here?

thanks for infos.

---
regards
Marc Logemann
http://www.logemann.org
http://www.logentis.de











Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: Does PrePersist work when merging entities? (2.0.1)

2011-02-09 Thread Craig L Russell
time enhancement.

Here's a sample Entity that I used for testing this.




import javax.persistence.*;

@Entity
@Access(AccessType.PROPERTY)
@Table(name="TEST")
public class Test {

  private int id;
  private String name;

  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  @Column(name="ID")
  public int getId() {
    return id;
  }
  public void setId(int id) {
    this.id = id;
  }

  @Column(name="NAME")
  public String getName() {
    return name;
  }
  public void setName(String name) {
    this.name = name;
  }

  @PrePersist
  void populateDBFields(){
    System.out.println("Hello, I happen prePersist!");
  }

  @PostLoad
  void populateTransientFields(){
    System.out.println("Hello, I happen postLoad!");
  }

  public static void main(String[] args) throws Exception {
    EntityManagerFactory factory =
        Persistence.createEntityManagerFactory("su3", null);
    EntityManager em = factory.createEntityManager();

    // Test t = new Test();
    // t.setName("name");
    // em.persist(t);

    Test t = em.find(Test.class, 1);
    t.setName("new name");
    em.merge(t);
    em.getTransaction().commit();
    em.close();
  }
}




Any clues?

Joel
























Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: Evicting

2011-02-09 Thread Craig L Russell

Hi Daryl,

I think the core issue is how your objects fit into the architecture.

JPA, Hibernate, JDO, EclipseLink, TOPLink, and other domain object  
model architectures share this architecture: the application interacts  
with a persistence manager that in turn manages a persistent object  
cache that manages a collection of domain objects that have no  
knowledge of the persistence environment.


This is very different from the Data Access Object pattern in which  
each object is responsible for its own persistent life cycle. From  
your brief description you're using the DAO pattern.


Craig

On Feb 9, 2011, at 5:43 AM, Daryl Stultz wrote:



If you want to isolate changes to job1 and job2 you would have to  
buffer

changes (e.g. setName) and only apply them when performing the save.



I am the developer of the core Java code of my application. I can  
deal with

issues like this when I have to, but I have people on staff writing
JavaScript extensions to the product who know nothing about how JPA  
works

and ideally shouldn't have to.




Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: Evicting

2011-02-08 Thread Craig L Russell

Hi Daryl,

It appears to me that the issue you have is the conflation of entities  
(job1 and job2) with EntityManagers (the same entity manager is used  
by both job1 and job2 since you are using a thread local EM).


If you want job1 and job2 to stay isolated then each of them would  
need its own entity manager. And you would need to have some other  
mechanism (not thread local EM) to group related entities.


Or have I misunderstood your example below? The key for me is the "not  
logical"


assertEquals("XX_newname_1", job1b.getName()); // !!! this is not  
logical since I didn't "save" job1


Well, reading the code, both job1 and job2 share the same entity  
manager so "begin transaction, save, commit" will save changed  
instances.


You don't mention it, but I assume that setName immediately changes  
the state of a Role. So the "save" doesn't really do anything.


If you want to isolate changes to job1 and job2 you would have to  
buffer changes (e.g. setName) and only apply them when performing the  
save.
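[As a plain-Java sketch of that buffering idea (BufferedRole is a hypothetical wrapper, not an OpenJPA API): setter calls accumulate in a pending map and only touch the underlying state when save() is called, so nothing leaks into a shared persistence context early.]

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical wrapper: buffer setter calls so the managed entity is
// untouched until save() applies them.
class BufferedRole {
    private final Map<String, Object> pending = new LinkedHashMap<>();
    private String name; // stands in for managed-entity state

    void setName(String n) { pending.put("name", n); } // buffered only
    String getName() { return name; }

    void save() { // apply buffered changes in one step, e.g. inside a transaction
        if (pending.containsKey("name")) {
            name = (String) pending.get("name");
        }
        pending.clear();
    }
}

public class BufferDemo {
    public static void main(String[] args) {
        BufferedRole job1 = new BufferedRole();
        job1.setName("XX_newname_1");
        System.out.println(job1.getName()); // still null: nothing applied yet
        job1.save();
        System.out.println(job1.getName()); // XX_newname_1
    }
}
```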


I also don't quite understand what you are trying to accomplish with:


deleteViaJdbc(job1b);


Is this intended to simulate a third party deleting data that you're  
using? If so, then the observed behavior seems rational.


Craig


On Feb 8, 2011, at 9:32 AM, Daryl Stultz wrote:

On Tue, Feb 8, 2011 at 11:08 AM, Michael Dick wrote:


getting the deleted object out of L1. I thought that if the object  
was
modified prior to the JDBC delete, then another object was  
modified and
"saved", the save transaction would cause the deleted but dirty  
object

still
in L1 to be saved as well (assuming it is still managed), but I  
can't

demonstrate this in a unit test.



Haven't tried that scenario myself, and I'd be interested to hear  
what you

find out if you have time to try it.

Here's a unit test that exposes what I consider a big problem with  
the JPA

architecture:

Role job1 = setup.insertJob("XX_1"); // instantiates and inserts via ThreadLocal EM
Role job2 = setup.insertJob("XX_2");
System.out.println("job1.getId() = " + job1.getId());
System.out.println("job2.getId() = " + job2.getId());
// both jobs "managed"
job1.setName("XX_newname_1");
job2.setName("XX_newname_2");
// both dirty and managed
job2.save(); // begin transaction, merge, commit
setup.resetEntityManagerForNewTransaction(); // close previous EM, open new one
Role job1b = getEntityManager().find(Role.class, job1.getId());
assertFalse(job1b == job1);
assertEquals("XX_newname_1", job1b.getName()); // !!! not logical since I didn't "save" job1
Role job2b = getEntityManager().find(Role.class, job2.getId());
assertFalse(job2b == job2);
assertEquals("XX_newname_2", job2b.getName()); // this is expected
// part two
job1b.setName("XX_newname_1b"); // job1b dirty
deleteViaJdbc(job1b);
job2b.setName("XX_newname_2b"); // job2b dirty
try {
    job2b.save();
    fail("Expected exception");
} catch (OptimisticLockException exc) { // trying to update deleted job
    exc.printStackTrace();
}
So I'm making changes to two objects, then saving just one of them.  
The side
effect is that both objects are saved. I repeat the process but  
delete the
changed object via JDBC. OpenJPA appears to be trying to save the  
deleted

object. Here's the output:

job1.getId() = 170
job2.getId() = 171

org.apache.openjpa.persistence.OptimisticLockException: An  
optimistic lock

violation was detected when flushing object instance
"com.sixdegreessoftware.apps.rss.model.Role-170" to the data store.   
This
indicates that the object was concurrently modified in another  
transaction.

FailedObject: com.sixdegreessoftware.apps.rss.model.Role-170

I don't think OpenJPA should be expected to handle my JDBC delete,  
I'm just
trying to illustrate why I called em.clear() after deleting via  
JDBC. It
would make the second problem go away but lead to lazy load  
problems. The
first problem marked with !!! is the "nature" of JPA I don't like  
and I'm
having to find a workaround for since I have to do the delete via  
JDBC. At

this point, I'm just hoping the "user" doesn't modify the object to be
deleted.

The main impact I foresee is that the other entities could have a  
reference

to a deleted row. Resulting in a constraint violation
at runtime. Then again, you're running that risk so long as you  
have the

legacy cron job that deletes rows unless your code gets a callback.



There are certain things the "user" should not expect to do after  
deleting
an object, so if find() fails, that's OK. But the above scenario is  
a little
harder to understand. Why can't I change a property on an object,  
delete it,
then save

Re: [ANNOUNCE] Welcome Heath Thomann as a new committer

2011-01-05 Thread Craig L Russell

Congratulations Heath!

Craig

On Jan 5, 2011, at 12:14 PM, Michael Dick wrote:

The OpenJPA PMC recently extended committership to Heath Thomann and  
he

accepted.

Welcome aboard Heath!

-mike


Craig L Russell
Secretary, Apache Software Foundation
Chair, OpenJPA PMC
c...@apache.org http://db.apache.org/jdo











Re: Latest OpenJPA trunk updates

2010-07-20 Thread Craig L Russell

Hooray.

Are the patches such that they could be applied to the 2.whatever.x  
branches? If anyone volunteers for this activity...


Craig

On Jul 20, 2010, at 2:47 PM, Donald Woods wrote:

As of r966020, the OpenJPA trunk (2.1.0-SNAPSHOT) now has the  
following

long sought after improvements -

OPENJPA-1732 LogFactory adapter for SLF4J
Now, the SLF4J API can be used by setting openjpa.Log=slf4j and
including the required slf4j-api jar and backend adapter on the classpath.

OPENJPA-1735 Mark commons-logging as provided in the build to remove
transient maven dependency
Now, users of openjpa-2.1.0-SNAPSHOT.jar no longer need to include a
dependency exclusion for commons-logging.


Enjoy!
Donald


Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: Opinion of statement: "The JPA specification doesn't have a way to deal with a shared primary key"

2010-07-17 Thread Craig L Russell
And I meant to add that you could also consider the two tables to be  
in an inheritance relationship in which the table with the "logical  
foreign key" maps to an Entity that is a subclass of the other Entity.


Craig

On Jul 16, 2010, at 10:39 PM, Craig L Russell wrote:

There's no common definition for "shared primary key" that I'm aware  
of.


But let's just go and create a definition that might make sense and  
see how JPA would deal with it.


If two tables have the same type of primary key, and some values of  
one table's PK are also values of another table's PK, this might be  
a definition of shared primary key.


You can further constrain the problem by requiring that all values  
of one table's PK must be values of another table's PK, in effect  
defining a logical foreign key, whether or not the logical foreign  
key is actually declared as a foreign key.


If there is no logical foreign key, then I'd agree that JPA doesn't  
address it. A relationship has to have an "owning" side and without  
any constraints, there cannot be an owner.


If there is a logical foreign key, then there are concepts that  
apply to modeling the tables' relationship.


If you want to model the two tables as a single Entity, then you can  
use secondary tables. This model assumes that for each row in the  
secondary table there is a corresponding row in the primary table.  
Fields in the Entity can be mapped to columns in the secondary table.


If you want to model the two tables as two Entities, then you can  
declare a one-to-one relationship between them. You can then  
navigate between instances of the two Entities, and have the fields  
of each Entity map to columns in each Entity's own primary table.


Craig

On Jul 16, 2010, at 6:57 PM, KARR, DAVID (ATTSI) wrote:


Would you say that the following statement is accurate or not?

"The JPA specification doesn't have a way to deal with a shared  
primary

key."

Isn't this what "secondary-table" addresses?  Are there other  
strategies

that apply to this?


Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: Opinion of statement: "The JPA specification doesn't have a way to deal with a shared primary key"

2010-07-16 Thread Craig L Russell

There's no common definition for "shared primary key" that I'm aware of.

But let's just go and create a definition that might make sense and  
see how JPA would deal with it.


If two tables have the same type of primary key, and some values of  
one table's PK are also values of another table's PK, this might be a  
definition of shared primary key.


You can further constrain the problem by requiring that all values of  
one table's PK must be values of another table's PK, in effect  
defining a logical foreign key, whether or not the logical foreign key  
is actually declared as a foreign key.


If there is no logical foreign key, then I'd agree that JPA doesn't  
address it. A relationship has to have an "owning" side and without  
any constraints, there cannot be an owner.


If there is a logical foreign key, then there are concepts that apply  
to modeling the tables' relationship.


If you want to model the two tables as a single Entity, then you can  
use secondary tables. This model assumes that for each row in the  
secondary table there is a corresponding row in the primary table.  
Fields in the Entity can be mapped to columns in the secondary table.


If you want to model the two tables as two Entities, then you can  
declare a one-to-one relationship between them. You can then navigate  
between instances of the two Entities, and have the fields of each  
Entity map to columns in each Entity's own primary table.
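[As a sketch of that one-to-one option (Customer/CustomerDetail are hypothetical names), the dependent entity's primary key doubles as the foreign key to the owner, which standard JPA expresses with @PrimaryKeyJoinColumn (or, in JPA 2.0, @MapsId):]

```java
import javax.persistence.*;

@Entity
public class Customer {
    @Id
    private Long id;
    // ...
}

@Entity
public class CustomerDetail {
    @Id
    private Long id; // same value as the Customer PK

    // Owning side: the PK column doubles as the FK to Customer.
    @OneToOne
    @PrimaryKeyJoinColumn
    private Customer customer;
}
```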


Craig

On Jul 16, 2010, at 6:57 PM, KARR, DAVID (ATTSI) wrote:


Would you say that the following statement is accurate or not?

"The JPA specification doesn't have a way to deal with a shared  
primary

key."

Isn't this what "secondary-table" addresses?  Are there other  
strategies

that apply to this?


Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: Disabling enhancement on my netbeans project

2010-07-11 Thread Craig L Russell
Be sure your class path at runtime has the enhanced classes before the  
un-enhanced classes.


Craig

On Jul 11, 2010, at 6:27 PM, C N Davies wrote:

Decompile them and you can see for sure.  "implements  
PersistenceCapable"


Chris


-Original Message-
From: jingmeifan [mailto:janef...@hotmail.com]
Sent: Monday, 12 July 2010 5:54 AM
To: users@openjpa.apache.org
Subject: Re: Disabling enhancement on my netbeans project


I already enhance the model classes at build time, but I still get


org.apache.openjpa.persistence.ArgumentException: This configuration
disallows runtime optimization, but the following listed types were  
not

enhanced at build time or at class load time with a javaagent: "
...

I checked the size of the class file before and after enhancement;
the file is bigger after enhancement.

--
View this message in context:
http://openjpa.208410.n2.nabble.com/Disabling-enhancement-on-my-netbeans-pro
ject-tp3861704p5280701.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.



Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: Can entity manager join jdbc transaction?

2010-06-02 Thread Craig L Russell

Hi,

You might consider turning this upside down and first starting a JPA  
transaction and then get the connection from OpenJPA to do your JDBC  
work. Then both JPA and your JDBC code would be working with the same  
connection.
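[A sketch of that inversion using OpenJPA's extended API: OpenJPAPersistence.cast() and OpenJPAEntityManager.getConnection() are real OpenJPA classes/methods (getConnection() returns Object, cast to java.sql.Connection for JDBC-backed configurations); the SQL here is a placeholder.]

```java
import java.sql.Connection;
import java.sql.Statement;
import javax.persistence.EntityManager;
import org.apache.openjpa.persistence.OpenJPAEntityManager;
import org.apache.openjpa.persistence.OpenJPAPersistence;

public class SharedConnectionSketch {
    void doWork(EntityManager em) throws Exception {
        OpenJPAEntityManager oem = OpenJPAPersistence.cast(em);
        oem.getTransaction().begin(); // start the JPA transaction first

        // Borrow OpenJPA's own connection for the raw JDBC work so both
        // sides see the same transactional state.
        Connection conn = (Connection) oem.getConnection();
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("UPDATE ..."); // hypothetical JDBC work
        } finally {
            conn.close(); // returns the connection to OpenJPA's control
        }

        oem.getTransaction().commit();
    }
}
```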


Craig

On Jun 2, 2010, at 8:43 AM, Daryl Stultz wrote:

Hello, my project has a lot of work done direct to JDBC and some new  
work
done through OpenJPA. I'm using 1.2.1 on Tomcat (non-container). I  
have a
situation where a JDBC connection is obtained and a transaction is  
started.

Then some OpenJPA querying is done. The Entity Manager obtains an
independent connection which means it doesn't read the state with  
respect to
the JDBC transaction. Is there any way to get an Entity Manager to  
join a
JDBC transaction? My setup uses a DataSource to supply a connection  
to the
Entity Manager. It seems I could modify this to return the JDBC  
connection,
but I'm guessing the EM would close it before my JDBC operation is  
done.


Thanks.

--
Daryl Stultz
_
6 Degrees Software and Consulting, Inc.
http://www.6degrees.com
http://www.opentempo.com
mailto:daryl.stu...@opentempo.com


Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: equals, hashcode, toString, etc, and field access

2010-05-30 Thread Craig L Russell
I'm interested in hearing what you would like to see with regard to  
entities after the persistence context in which they were valid no  
longer exists.


Thanks,

Craig

On May 28, 2010, at 7:18 AM, C N Davies wrote:

Darryl is right, I fought with this one for some time then it dawned  
upon me
that I was dealing with a detached entity that had a lazy loaded  
field. The
result of toString is like picking the lottery numbers.. pot luck!  
Now do
the same thing with runtime enhancement during development and  
deploy it

with build time enhancement.

Yet another reason to drag the JPA spec into the 20th century and do  
away

with this whole attached / detached state business.

Chris



-Original Message-
From: Daryl Stultz [mailto:daryl.stu...@opentempo.com]
Sent: Friday, 28 May 2010 10:23 PM
To: users@openjpa.apache.org
Subject: Re: equals, hashcode, toString, etc, and field access

On Thu, May 27, 2010 at 8:49 PM, Trenton D. Adams
wrote:



I mean I know if I'm doing lazy loading, toString won't get all the  
data,

cause it hasn't been enhanced.



Assuming the object is detached, yes. I believe the JPA spec does not
specify the behavior for attempted access of an unloaded property on a
detached entity. I believe OpenJPA returns null. This makes it very
difficult to tell if an association is null or not loaded. I have  
configured
OpenJPA to disallow access to unloaded properties of detached  
entities to
avoid the confusion. This means a toString method like yours in my  
project

could crash.

--
Daryl Stultz
_
6 Degrees Software and Consulting, Inc.
http://www.6degrees.com
http://www.opentempo.com
mailto:daryl.stu...@opentempo.com



Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: equals, hashcode, toString, etc, and field access

2010-05-27 Thread Craig L Russell
If you're using property access, then you should never access the
fields directly except in getters and setters. Doing so can cause wrong
results.


If you know that the property is always loaded (identity fields are  
always loaded into instances), then you don't have to worry about the  
messages.


Craig

On May 27, 2010, at 5:49 PM, Trenton D. Adams wrote:


Hi Guys,

The PCEnhancer complains about field access in some of these  
methods.  Do I need to worry about those particular methods?


As an example...
"org.adamsbros.rmi.entities.TelephonePK" uses property access, but  
its field "entityId" is accessed directly in method "toString"  
defined in "org.adamsbros.rmi.entities.TelephonePK"


I mean I know if I'm doing lazy loading, toString won't get all the  
data, cause it hasn't been enhanced.  That's basically what all the  
enhancement is for, right?  But, I don't think I care, do I?


Can I disable the analysis for certain methods?

Thanks.


Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: OpenJPA confusing classes

2010-05-25 Thread Craig L Russell

IMHO this is an OpenJPA bug.

OpenJPA should never use an identity object (type plus key) internally  
as a cache key that does not have the exact type.


If OpenJPA internally constructs an identity object (e,g, for find)  
that contains a possible supertype plus key and uses that to find an  
instance, that's ok, but the actual identity object with the actual  
type and key should always be used for cache keys.


The fact that Compatibility.UseStrictIdentityValue helped this case,  
should make fixing the bug easier.


Craig

On May 25, 2010, at 10:00 AM, Pinaki Poddar wrote:



I was not talking about toString() of users' persistent classes. It is
about the internal mechanics of how OpenJPA encodes/decodes a persistent
identity with its type information.
When an application does
  String id = "123ABC";
  X x = em.find(X.class, id);

under certain conditions, OpenJPA internally constructs an "id" object
xid (that is not of type String) using the type info (i.e. X.class) and
the stringified id value (i.e. the String value "123ABC"). xid encodes
both the type and key value. And the xid instance is the key for any
subsequent lookup in OpenJPA internal caches etc.
lookup in OpenJPA internal caches etc.
For the reported case, the process that encodes X.class + "123ABC" into
xid was confused into losing the type information of the actual class
and replaced it with the superclass.
Compatibility.UseStrictIdentityValue helped to resolve that confusion.

-
Pinaki
--
View this message in context: 
http://openjpa.208410.n2.nabble.com/OpenJPA-confusing-classes-tp5094249p5099457.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.


Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Fwd: [Travel Assistance] - Applications Open for ApacheCon NA 2010

2010-05-16 Thread Craig L Russell



Begin forwarded message:


From: "Gav..." 
Date: May 16, 2010 4:25:39 PM PDT
To: 
Subject: [Travel Assistance] - Applications Open for ApacheCon NA 2010
Reply-To: priv...@openjpa.apache.org

Hi PMC's

Please distribute this notice to your user and dev lists:

The Travel Assistance Committee is now taking in applications for  
those
wanting to attend ApacheCon North America (NA) 2010, which is taking  
place

between the 1st and 5th November in Atlanta.

The Travel Assistance Committee is looking for people who would like  
to be
able to attend ApacheCon, but who need some financial support in  
order to be
able to get there. There are limited places available, and all  
applications

will be scored on their individual merit.

Financial assistance is available to cover travel to the event,  
either in
part or in full, depending on circumstances. However, the support  
available

for those attending only the barcamp is smaller than that for people
attending the whole event. The Travel Assistance Committee aims to  
support
all ApacheCons, and cross-project events, and so it may be prudent  
for those

in Asia and the EU to wait for an event closer to them.

More information can be found on the main Apache website at
http://www.apache.org/travel/index.html - where you will also find a  
link to

the online application and details for submitting.

Applications for travel assistance are now being accepted, and will close
on the 7th July 2010.

Good luck to all those that will apply.

You are welcome to tweet, blog as appropriate.

Regards,

The Travel Assistance Committee.




Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: NoSuchMethodError: org.apache.openjpa.l ib.conf.IntValue.get()I

2010-05-06 Thread Craig L Russell
ClusterJPA has only been tested with OpenJPA 1.2.0. Can you try this  
and let me know?


Craig

On May 6, 2010, at 10:05 AM, Craig L Russell wrote:


Hi Marton,

The component you're using is indeed the new ClusterJPA that was  
shipped as part of MySQL Cluster 7.1.3 last month.


This is an open source project that is available from MySQL  
downloads at http://dev.mysql.com/downloads/cluster/


There are blogs that you might be able to use to help get set up:

http://ocklin.blogspot.com/2009/12/java-and-openjpa-for-mysql-cluster_14.html
http://www.clusterdb.com/

If you have more trouble setting up the configuration let me know.

Craig
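
For readers following along, a hedged sketch of the persistence.xml
properties typically shown in those ClusterJPA write-ups for enabling the
NDB broker factory (property names and values should be checked against
the ClusterJPA documentation for your release; host/port and database
names here are placeholders):

```xml
<persistence-unit name="clusterdb">
  <properties>
    <property name="openjpa.BrokerFactory" value="ndb"/>
    <property name="openjpa.ndb.connectString" value="localhost:1186"/>
    <property name="openjpa.ConnectionDriverName" value="com.mysql.jdbc.Driver"/>
    <property name="openjpa.ConnectionURL" value="jdbc:mysql://localhost:3306/clusterdb"/>
  </properties>
</persistence-unit>
```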

On May 6, 2010, at 12:10 AM, Marton R wrote:



Hi Pinaki

I tried it without BrokerFactory=ndb; in that case the exception is not
there, but of course the setting would be necessary.

1,
NdbOpenJPAConfigurationImpl.java is an official part of MySQL Cluster;
it is not our code base. It is part of clusterj-7.1.3.jar, which is used
by everyone (I think so). The source version is not published, or I don't
know where I could check it.

Sorry, I'm a newcomer to the Java world...
So maybe it is a configuration issue in my case, I mean incorrect
versions, or a missing jar... I have no idea.


2,
version id: openjpa-2.0.0-r422266:935683
Apache svn revision: 422266:935683

os.name: Linux
os.version: 2.6.21.7-hrt1-WR2.0ap_standard
os.arch: i386

java.version: 1.6.0_06
java.vendor: Sun Microsystems Inc.

thanks for your help
Marton


--
View this message in context: 
http://openjpa.208410.n2.nabble.com/NoSuchMethodError-org-apache-openjpa-l-ib-conf-IntValue-get-I-tp5002677p5012906.html


Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: NoSuchMethodError: org.apache.openjpa.l ib.conf.IntValue.get()I

2010-05-06 Thread Craig L Russell

Hi Marton,

The component you're using is indeed the new ClusterJPA that was  
shipped as part of MySQL Cluster 7.1.3 last month.


This is an open source project that is available from MySQL downloads  
at http://dev.mysql.com/downloads/cluster/


There are blogs that you might be able to use to help get set up:

http://ocklin.blogspot.com/2009/12/java-and-openjpa-for-mysql-cluster_14.html
http://www.clusterdb.com/

If you have more trouble setting up the configuration let me know.

Craig

On May 6, 2010, at 12:10 AM, Marton R wrote:



Hi Pinaki

I tried it without BrokerFactory=ndb; in that case the exception is not
there, but of course the setting would be necessary.

1,
NdbOpenJPAConfigurationImpl.java is an official part of MySQL Cluster;
it is not our code base. It is part of clusterj-7.1.3.jar, which is used
by everyone (I think so). The source version is not published, or I don't
know where I could check it.

Sorry, I'm a newcomer to the Java world...
So maybe it is a configuration issue in my case, I mean incorrect
versions, or a missing jar... I have no idea.


2,
version id: openjpa-2.0.0-r422266:935683
Apache svn revision: 422266:935683

os.name: Linux
os.version: 2.6.21.7-hrt1-WR2.0ap_standard
os.arch: i386

java.version: 1.6.0_06
java.vendor: Sun Microsystems Inc.

thanks for your help
Marton


--
View this message in context: 
http://openjpa.208410.n2.nabble.com/NoSuchMethodError-org-apache-openjpa-l-ib-conf-IntValue-get-I-tp5002677p5012906.html


Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: What is a NaN?

2010-05-03 Thread Craig L Russell
Apparently, fixed-point (integer) divide by zero causes an arithmetic
exception, but floating-point divide by zero does not.


Craig
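
A small self-contained Java sketch of that distinction (class name is
mine), contrasting IEEE 754 floating-point behavior with integer division:

```java
public class NanDemo {
    public static void main(String[] args) {
        // Floating point: 0.0 / 0.0 does NOT throw; it yields NaN (IEEE 754).
        double d = 0.0 / 0.0;
        System.out.println(Double.isNaN(d)); // true
        System.out.println(d == d);          // false: NaN compares unequal to itself

        // Fixed point (integer) division by zero DOES throw.
        try {
            int zero = Integer.parseInt("0"); // defeat constant folding
            System.out.println(zero / zero);
        } catch (ArithmeticException e) {
            System.out.println("integer divide by zero: " + e.getMessage());
        }
    }
}
```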

On May 3, 2010, at 12:11 PM, C N Davies wrote:

Thanks Richard, you were right: it was a 0.0 / 0.0, which I would have
thought would have generated a divide-by-zero exception (which I do
capture), but apparently not *shrug*. Also, 0 divided by zero I would
have thought was zero ... I learned 2 new things today!

Thanks!

Chris


-Original Message-
From: Landers, Richard [mailto:richard.land...@ct.gov]
Sent: Tuesday, 4 May 2010 3:38 AM
To: 'users@openjpa.apache.org'
Subject: RE: What is a NaN?


Could you have arithmetic gone bad before persisting?

0.0 / 0.0
Sqrt(-something)
Inf - Inf

http://www.concentric.net/~Ttwang/tech/javafloat.htm has other  
examples.



-Original Message-
From: C N Davies [mailto:c...@cndavies.com]
Sent: Monday, May 03, 2010 1:05 PM
To: users@openjpa.apache.org
Subject: RE: What is a NaN?

No, just a str8 insert :(


-Original Message-
From: Daryl Stultz [mailto:da...@6degrees.com]
Sent: Tuesday, 4 May 2010 2:54 AM
To: users@openjpa.apache.org
Subject: Re: What is a NaN?

On Mon, May 3, 2010 at 12:09 PM, C N Davies  wrote:


I guess it is "Not a Number",

A long shot: do you have any JavaScript involved?


--
Daryl Stultz
_
6 Degrees Software and Consulting, Inc.
http://www.6degrees.com
mailto:da...@6degrees.com






Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: NPE in enhanced entity (where I can't see it)

2010-04-30 Thread Craig L Russell

Hi,

If you're going to play with the values in the get/set methods, then I  
strongly encourage you to use persistent fields, and not persistent  
properties.


Guaranteed to work. Money back if not satisfied.

Craig
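
A minimal sketch of that suggestion (entity name is mine; JPA annotations
shown in comments so the snippet stands alone): with field access, the
enhancer reads and writes the field directly, so the setter logic is only
ever run by application code.

```java
public class FooEntity {
    // With FIELD access, the JPA annotations go on the fields, e.g.:
    //   @Column(length = 32)
    //   private String foo;
    // The enhancer then touches the field directly and never triggers
    // the setter during pcClearFields/pcNewInstance.
    private String foo;

    public String getFoo() {
        return foo;
    }

    public void setFoo(String foo) {
        // Safe under field access: only application code calls this.
        this.foo = (foo == null ? null : foo.toUpperCase());
    }

    public static void main(String[] args) {
        FooEntity e = new FooEntity();
        e.setFoo("abc");
        System.out.println(e.getFoo()); // ABC
    }
}
```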

On Apr 30, 2010, at 8:06 AM, Wes Wannemacher wrote:


Guys,

I have an entity that I decided to munge up a little bit... So,
normally, a setter would be coded like this -

public void setFoo(String foo) {
   this.foo = foo;
}

But, we decided to try to uppercase everything before it goes into the
database. Now the relevant section of code looks like this -

[code]
private String foo;

@Column(length=32)
public String getFoo() {
   return foo;
}

public void setFoo(String foo) {
   this.foo = (foo == null ? null : foo.toUpperCase()) ;
}
[/code]

Our build process enhances the entities (using the ant task which
maven calls during the process-classes phase). Then, it unit tests
them, and right away, every entity that we tried to do this with
bombed out. There was an NPE in pcsetEntityName. The relevant portion
of the stack trace is below -

Caused by: java.lang.NullPointerException
    at com.cdotech.amarg.entities.CDO_CageCode.pcsetCageCode(CDO_CageCode.java:55)
    at com.cdotech.amarg.entities.CDO_CageCode.pcClearFields(CDO_CageCode.java)
    at com.cdotech.amarg.entities.CDO_CageCode.pcNewInstance(CDO_CageCode.java)
    at org.apache.openjpa.kernel.SaveFieldManager.saveField(SaveFieldManager.java:132)
    at org.apache.openjpa.kernel.StateManag

I tried to look through the PCEnhancer, but I'm not familiar enough to
be effective figuring out the problem. At the same time, since the NPE
happens in the generated byte-code, it's *really* hard to figure out
what's going on. I'd like to provide more information, but this is as
much as I can see.

For right now, we moved the capitalizing logic out of the entity, but
was wondering if there is a way to move it back to the entity...

-Wes

--
Wes Wannemacher

Head Engineer, WanTii, Inc.
Need Training? Struts, Spring, Maven, Tomcat...
Ask me for a quote!


Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: Problem build and run hellojpa using Maven

2010-04-22 Thread Craig L Russell

Hi Chris,

If you want this fixed, please file a JIRA with a reproducible test  
case. The message should be useful and it's not. This is a usability  
issue that should be fixed. You can even fix it yourself and help  
everyone.


Thanks,

Craig
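
For readers hitting the "Ensure that it has been enhanced" error quoted
below: it usually means the entity classes were never run through the
OpenJPA enhancer. A hedged sketch of one common fix, build-time
enhancement via the Codehaus openjpa-maven-plugin (verify the plugin
version against your OpenJPA release):

```xml
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>openjpa-maven-plugin</artifactId>
    <version>1.0</version>
    <executions>
        <execution>
            <id>enhancer</id>
            <phase>process-classes</phase>
            <goals>
                <goal>enhance</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```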

On Apr 22, 2010, at 8:15 AM, C N Davies wrote:


Did you add the Message class to your persistence.xml?

I just love OpenJPA error messages, about as useful as tits on a bull:

"..The configuration property named "openjpa.Id" was not recognized  
and will be ignored, although the name

closely matches a valid property called "openjpa.Id" ..."

Looks like the same spelling and capitalisation to me, who the hell  
writes these messages? The only useful word in this message is "error"


Chris





-Original Message-
From: hezjing [mailto:hezj...@gmail.com]
Sent: Friday, 23 April 2010 12:46 AM
To: users@openjpa.apache.org
Subject: Problem build and run hellojpa using Maven

Hi

I have successfully ran the hellojpa sample as described in the  
getting

started with Netbeans and Ant.

Now, I'm trying to build and run the same sample using Maven.

When the goal exec:exec is executed, the following ArgumentException  
is

thrown:

0  WARN   [main] openjpa.Runtime - The configuration property named
"openjpa.Id" was not recognized and will be ignored, although the name
closely matches a valid property called "openjpa.Id".
Exception in thread "main" <1.2.2 nonfatal user error>
org.apache.openjpa.persistence.ArgumentException: Attempt to cast
instance "hellojpa.mess...@1094d48" to PersistenceCapable failed.
Ensure that it has been enhanced.
FailedObject: hellojpa.mess...@1094d48
    at org.apache.openjpa.kernel.BrokerImpl.assertPersistenceCapable(BrokerImpl.java:4377)
    at org.apache.openjpa.kernel.BrokerImpl.persist(BrokerImpl.java:2443)
    at org.apache.openjpa.kernel.BrokerImpl.persist(BrokerImpl.java:2304)
    at org.apache.openjpa.kernel.DelegatingBroker.persist(DelegatingBroker.java:1021)
    at org.apache.openjpa.persistence.EntityManagerImpl.persist(EntityManagerImpl.java:645)
    at hellojpa.Main.main(Main.java:54)


Here are the snippets of my pom.xml:


<dependency>
    <groupId>org.apache.openjpa</groupId>
    <artifactId>openjpa</artifactId>
    <version>1.2.2</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.apache.derby</groupId>
    <artifactId>derby</artifactId>
    <version>10.5.3.0</version>
</dependency>
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.1.1</version>
    <configuration>
        <executable>java</executable>
        <arguments>
            <argument>-Dopenjpa.ConnectionDriverName=org.apache.derby.jdbc.EmbeddedDriver</argument>
            <argument>-Dopenjpa.ConnectionURL=jdbc:derby:openjpa-database;create=true</argument>
            <argument>-Dopenjpa.ConnectionUserName=</argument>
            <argument>-Dopenjpa.ConnectionPassword=</argument>
            <argument>-Dopenjpa.jdbc.SynchronizeMappings=buildSchema</argument>
            <argument>-Dopenjpa.Log=DefaultLevel=WARN,SQL=TRACE</argument>
            <argument>-classpath</argument>
            <classpath/>
            <argument>hellojpa.Main</argument>
        </arguments>
    </configuration>
</plugin>



Do you have any idea of what could be the problem?


--

Hez



Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@oracle.com
P.S. A good JDO? O, Gasp!



Re: [DISCUSS] Upcoming OpenJPA 2.0.0 release plans

2010-03-04 Thread Craig L Russell

Hi Donald,

I'd like to get OPENJPA-1530 into the 2.0.0 release. It allows schema  
generation for non-innodb MySQL storage engines (needed to test MySQL  
cluster for example).


It would be great if someone could review the patch...

Thanks,

Craig

On Mar 4, 2010, at 8:28 AM, Donald Woods wrote:


As we're winding down the changes for the 2.0.0 release, I wanted to
alert everyone to the proposed release dates.

3/19 - Cut 2.0 branch
4/12 - Start release candidate vote

Once the branch is created, only changes approved by myself or Kevin
will be accepted into the branch.  Trunk (probably renamed to 2.1)  
will

still be open for any changes.

Also, please use this email thread to discuss any critical patches  
that

you would like to see considered for 2.0.0.


Thanks,
Donald


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: How to Setup a Named Query to Use Eager Fetch

2010-02-06 Thread Craig L Russell

Hi,

If you're "using the Hibernate driver" then this is probably the wrong  
list to ask. This list is for questions about the OpenJPA  
implementation of JPA...


Craig

On Feb 5, 2010, at 4:29 PM, cgray wrote:



Hello,

I'm seeing an oddity.  The first time I execute the named query below
(Country.findCountry), one SQL statement is created that fetches all
Country objects from the database.  The second time this named query gets
executed, it creates one SQL query per row in the database (over 200
rows).  It seems like it's lazy loading the second time; any ideas on how
to fetch this eagerly, or any other ideas to limit the second hit to one
query?

This is using the Hibernate driver.


@Entity
@Table(name = "COUNTRY")
@NamedQueries({@NamedQuery(name = "Country.findCountry",
        query = "SELECT c FROM Country c")})
public class Country implements Serializable {


   @Column(name = "COUNTRY_CODE")
   @Id
   private String countryCode;

--
View this message in context: 
http://n2.nabble.com/How-to-Setup-a-Named-Query-to-Use-Eager-Fetch-tp4523309p4523309.html


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: JPQL to get association members, but only ones that fulfill a condition

2010-02-03 Thread Craig L Russell

Hi,

On Feb 3, 2010, at 1:27 PM, KARR, DAVID (ATTCINW) wrote:


-Original Message-
From: craig.russ...@sun.com [mailto:craig.russ...@sun.com]
Sent: Wednesday, February 03, 2010 12:31 PM
To: users@openjpa.apache.org
Subject: Re: JPQL to get association members, but only ones that
fulfill a condition

Perhaps you could try using the bar as the root of the query by
selecting the bar and joining the foo parent. If the foo id is a
simple foreign key in bar this might be the most efficient way to get
the data anyway.


The association is through a join table.  This is making my brain  
hurt.

:)


You can still use the suggested technique. You have defined the  
association such that JPA can navigate the relationship from either  
side.


Craig


For now I'm going to just implement manual filtering outside of the
transaction, but I suppose I could build a native query for the  
filtered
list and ignore the mapping for the one-to-many and just use the  
native

query to build that filtered list.

I see that Hibernate has a "formula" attribute on a mapped field which
can specify a native SQL expression.  I guess that would have been
useful here.


On Feb 3, 2010, at 12:23 PM, KARR, DAVID (ATTCINW) wrote:


-Original Message-
From: Daryl Stultz [mailto:da...@6degrees.com]
Sent: Wednesday, February 03, 2010 12:00 PM
To: users@openjpa.apache.org
Subject: Re: JPQL to get association members, but only ones that
fulfill a condition

On Wed, Feb 3, 2010 at 2:12 PM, KARR, DAVID (ATTCINW)
wrote:


So I changed my query to:

select foo from packagepath.Foo foo left join fetch foo.childBars

as

bar

where foo.id=:id and current_date between bar.startDate and
bar.endDate

try this:


select distinct foo from packagepath.Foo foo
left join foo.childBars as bar
left join fetch foo.childBars
where foo.id=:id
and current_date between bar.startDate and bar.endDate

Notice "distinct". You might find it works without out it bug a bug
will
bite you later...

I'm not sure if you are expecting to get a subset of foo.childBars.
If
you
are, this won't work.


I don't understand the last statement here.

When I tried this strategy, it resulted in no rows returned, and I
know
that there's at least one "bar" with a current date range, but I

know

there are several that do not.  I tried both with and without
"distinct", with the same result.

I have a feeling I'm heading towards having to construct a specific
query for the bars that are child of this foo and are in the date
range.


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: JPQL to get association members, but only ones that fulfill a condition

2010-02-03 Thread Craig L Russell
Perhaps you could try using the bar as the root of the query by  
selecting the bar and joining the foo parent. If the foo id is a  
simple foreign key in bar this might be the most efficient way to get  
the data anyway.


Craig
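
A hedged JPQL sketch of that inversion, using the thread's Foo/Bar names
(entity and field names are assumed from the discussion):

```
select bar from Bar bar
where bar.foo.id = :id
  and current_date between bar.startDate and bar.endDate
```

This returns only the bars in the current date range; the parent foo is
reachable from each bar via the relationship.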

On Feb 3, 2010, at 12:23 PM, KARR, DAVID (ATTCINW) wrote:


-Original Message-
From: Daryl Stultz [mailto:da...@6degrees.com]
Sent: Wednesday, February 03, 2010 12:00 PM
To: users@openjpa.apache.org
Subject: Re: JPQL to get association members, but only ones that
fulfill a condition

On Wed, Feb 3, 2010 at 2:12 PM, KARR, DAVID (ATTCINW)
wrote:


So I changed my query to:

select foo from packagepath.Foo foo left join fetch foo.childBars as

bar

 where foo.id=:id and current_date between bar.startDate and
bar.endDate

try this:


select distinct foo from packagepath.Foo foo
left join foo.childBars as bar
left join fetch foo.childBars
where foo.id=:id
and current_date between bar.startDate and bar.endDate

Notice "distinct". You might find it works without out it bug a bug
will
bite you later...

I'm not sure if you are expecting to get a subset of foo.childBars.  
If

you
are, this won't work.


I don't understand the last statement here.

When I tried this strategy, it resulted in no rows returned, and I  
know

that there's at least one "bar" with a current date range, but I know
there are several that do not.  I tried both with and without
"distinct", with the same result.

I have a feeling I'm heading towards having to construct a specific
query for the bars that are child of this foo and are in the date  
range.


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Upcoming Live Webinars: New Java and JPA Access to the MySQL Cluster Database

2010-01-27 Thread Craig L Russell

Hi,

I thought this group might be interested in a new product that I've  
been working on that gives OpenJPA users a faster access path to the  
MySQL Cluster database.


<<
MySQL have been developing a new way of directly accessing the MySQL  
Cluster database using native Java interfaces.


Designed for Java developers, the new MySQL Cluster Connector for Java  
implements an easy-to-use and high performance native Java interface  
and OpenJPA plug-in that maps Java classes to tables stored in the  
high availability, real-time MySQL Cluster database.


By avoiding data transformations into SQL, users get lower data access  
latency & higher transaction throughput by directly accessing and  
persisting their mission-critical data.


When using a standard Java API and plug-in for OpenJPA, Java  
developers have a more natural programming method to directly manage  
their data, with a complete, feature-rich solution for Object/ 
Relational Mapping.


An introduction to the Connector, and how to get going is covered in a  
series of 2 forthcoming webinars. As always these are free to attend –  
you just need to register in advance


More details are available here:
http://www.clusterdb.com/mysql-cluster/upcoming-webinars-for-java-and-jpa-access-to-mysql-cluster/
>>

Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: How to release db connection into pool?

2010-01-21 Thread Craig L Russell

Thanks for using OpenJPA and the users alias. Good luck.

Craig

On Jan 21, 2010, at 7:35 PM, wang yu wrote:


Craig,
I have tested it. It's cool!
The db connection will be returned to the pool and I can reuse the
JPAEntityManager!
Thanks again.

Regards,
Yu Wang

On Fri, Jan 22, 2010 at 9:57 AM, wang yu  wrote:

Craig:
Thank you for your quick response.
If I close it, will the connection be returned to the pool or just
destroyed?


Regards,
Yu Wang

On Thu, Jan 21, 2010 at 8:56 PM, Craig L Russell wrote:

Hi Yu Wang,

You need to tell OpenJPA that you're not using the Connection any  
more by

calling close(). See this Example 4.4 in the user's manual:

import java.sql.*;
 import org.apache.openjpa.persistence.*;
 ... OpenJPAEntityManager kem = OpenJPAPersistence.cast(em);
Connection conn = (Connection) kem.getConnection();
 // do JDBC stuff
 conn.close();

Regards,

Craig
On Jan 21, 2010, at 1:26 AM, wang yu wrote:


Gurus:
I use OpenJPA 1.2.1 and dbcp:
<property name="openjpa.ConnectionDriverName"
          value="org.apache.commons.dbcp.BasicDataSource" />
<property name="openjpa.ConnectionProperties"
          value="driverClassName=oracle.jdbc.driver.OracleDriver,
                 url=jdbc:oracle:thin:@localhost:1521:orcl, username=,
                 password=XXX, maxActive=8, maxWait=1,
                 poolPreparedStatements=true" />

And I found the connection pool worked perfectly for JPA queries. But if I
use a JDBC query like the following:
OpenJPAEntityManager open_manager = OpenJPAPersistence
        .cast(entityManager);
Connection conn = (Connection) open_manager.getConnection();
java.sql.PreparedStatement PrepStmt = null;
java.sql.ResultSet sqlResults = null;
try {
    PrepStmt = conn
            .prepareStatement("select * from tsam.MON_BRIDGE");
    sqlResults = PrepStmt.executeQuery();
} catch (SQLException e) {
    log.error(e.getMessage());
} finally {
    try {
        if (sqlResults != null)
            sqlResults.close();
        if (PrepStmt != null)
            PrepStmt.close();
    } catch (SQLException e) {

    }
}

The connection cannot be put back into the pool, and the result is
running out of db connections.
What should I do?  Should I use createNativeQuery(String sql, Class
resultClass) to query with native SQL?


Regards,
Yu Wang


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!






Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: ClassCastException in pcReplaceField

2010-01-21 Thread Craig L Russell

Hi,

On Jan 21, 2010, at 7:23 PM, Russell Collins wrote:

While the initial thought is that the "equals()" method would work,  
the specification for Set requires a compareTo method be present by  
implementing the Comparable interface.


This is not my understanding. Set requires only hashCode and equals.  
Ordered collections (e.g. TreeSet) require the elements to implement  
Comparable or to have a Comparator specified at creation.


Maybe we're reading different parts of the spec...

Craig
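
A quick Java illustration of the distinction (class and field names are
mine): a plain HashSet needs only equals/hashCode, while a sorted TreeSet
throws ClassCastException for elements that are not Comparable when no
Comparator is supplied.

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;
import java.util.TreeSet;

public class SetContractDemo {
    // Value type with equals/hashCode but WITHOUT Comparable.
    static final class Key {
        final int id;
        Key(int id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).id == id;
        }
        @Override public int hashCode() { return Objects.hash(id); }
    }

    public static void main(String[] args) {
        Set<Key> hashSet = new HashSet<>();
        hashSet.add(new Key(1));
        hashSet.add(new Key(1));            // duplicate by equals/hashCode
        System.out.println(hashSet.size()); // 1: HashSet needs only equals/hashCode

        try {
            Set<Key> treeSet = new TreeSet<>();
            treeSet.add(new Key(1));        // TreeSet needs Comparable or a Comparator
        } catch (ClassCastException e) {
            System.out.println("TreeSet without Comparable: ClassCastException");
        }
    }
}
```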

Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: How to release db connection into pool?

2010-01-21 Thread Craig L Russell

Hi Yu Wang,

You need to tell OpenJPA that you're not using the Connection any more  
by calling close(). See this Example 4.4 in the user's manual:


import java.sql.*;
 import org.apache.openjpa.persistence.*;
 ... OpenJPAEntityManager kem = OpenJPAPersistence.cast(em);
Connection conn = (Connection) kem.getConnection();
 // do JDBC stuff
 conn.close();

Regards,

Craig
On Jan 21, 2010, at 1:26 AM, wang yu wrote:


Gurus:
I use OpenJPA 1.2.1 and dbcp:



And I found the connection pool worked perfectly for JPA queries. But if I
use a JDBC query like the following:
OpenJPAEntityManager open_manager = OpenJPAPersistence
        .cast(entityManager);
Connection conn = (Connection) open_manager.getConnection();
java.sql.PreparedStatement PrepStmt = null;
java.sql.ResultSet sqlResults = null;
try {
    PrepStmt = conn
            .prepareStatement("select * from tsam.MON_BRIDGE");
    sqlResults = PrepStmt.executeQuery();
} catch (SQLException e) {
    log.error(e.getMessage());
} finally {
    try {
        if (sqlResults != null)
            sqlResults.close();
        if (PrepStmt != null)
            PrepStmt.close();
    } catch (SQLException e) {

    }
}

The connection cannot be put back into the pool, and the result is
running out of db connections.

What should I do?  Should I use createNativeQuery(String sql, Class
resultClass) to query with native SQL?


Regards,
Yu Wang


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: openjpa ignoring column annotation

2010-01-11 Thread Craig L Russell

Hi,

Is there a reason you're using a native query for this, and not a  
JPAQL query?


What you get from a native query is data, not objects. Your code is  
looking for an Account object and should use a JPAQL query.


Craig
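
A hedged sketch of the JPAQL alternative for the lookup below (a fragment,
not a complete program: an open EntityManager `em` is assumed, and the
entity field name billingSystemAccountNumber is assumed from the column
name; parameter binding also replaces the string-concatenated SQL):

```java
Query q = em.createQuery(
    "select a from Account a where a.billingSystemAccountNumber = :acct");
q.setParameter("acct", billingSystemAccountNumber);
List results = q.getResultList();
Account rtn = results.isEmpty() ? null : (Account) results.get(0);
```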

On Jan 11, 2010, at 11:06 AM, racarlson wrote:



public class Database
{
    // ***
    private final static Logger LOG = Logger.getLogger( Database.class ) ; // log4j
    private static EntityManagerFactory entityManagerFactory ;
    private static final String JPA_FACTORY_NAME = "MyJpaFactory";

    // ***
    private void closeEntityManager (
            final EntityManager em )
    {
        try
        {
            if ( ( em != null ) &&
                 ( em.isOpen( ) ) )
            {
                em.close( ) ;
            }
        }
        catch ( final Exception e )
        {
            Database.LOG.error( e ) ;
        }
    }

    // ***
    private void setupEntityManagerFactory ( )
    {
        if ( Database.entityManagerFactory == null )
        {
            Database.entityManagerFactory =
                Persistence.createEntityManagerFactory( Database.JPA_FACTORY_NAME ) ;
        }
    }

    // ***
    private EntityManager getEntityManager ( )
    {
        this.setupEntityManagerFactory( ) ;
        final EntityManager entityManager =
            Database.entityManagerFactory.createEntityManager( ) ;
        entityManager.getTransaction( ).begin( );
        return entityManager ;
    }

    // ***
    private EntityManager getEntityManagerWithOutTransaction ( )
    {
        this.setupEntityManagerFactory( ) ;
        return Database.entityManagerFactory.createEntityManager( ) ;
    }

    // ***
    private void logQuery (
            final String temp )
    {
        if ( temp != null )
        {
            Database.LOG.debug( temp ) ;
        }
    }

    // ***
    public Account getAccountByBillingSystemAccountNumber (
            final String billingSystemAccountNumber )
    {
        final EntityManager em = this.getEntityManagerWithOutTransaction( ) ;
        return this.getAccountByBillingSystemAccountNumber(
            billingSystemAccountNumber ,
            em ,
            true ) ;
    }

    // ***
    protected Account getAccountByBillingSystemAccountNumber (
            final String billingSystemAccountNumber ,
            final EntityManager em ,
            final boolean canCloseEntityManager )
    {
        Account rtn = null ;
        try
        {
            final String sql =
                "select * from ACCOUNT a " +
                "where a.BILLING_SYSTEM_ACCOUNT_NUMBER = \'" +
                billingSystemAccountNumber + "\' " ;
            final Query q = em.createNativeQuery( sql , Account.class ) ;
            this.logQuery( sql ) ;
            if ( q != null )
            {
                rtn = ( Account ) q.getResultList( ).get( 0 );
            }
        }
        catch ( final NoResultException nre )
        {
            rtn = null ;
        }
        finally
        {
            if ( canCloseEntityManager )
            {
                this.closeEntityManager( em ) ;
            }
        }
        return rtn ;
    }
    // ***

    /* some other methods to access other tables
    ...
    */
}

--
View this message in context: 
http://n2.nabble.com/openjpa-ignoring-column-annotation-tp4286639p4287474.html

Re: Multibyte characters on SQL Server and Sybase

2010-01-11 Thread Craig L Russell


On Jan 11, 2010, at 10:57 AM, Michael Dick wrote:

On Mon, Jan 11, 2010 at 12:38 PM, Craig L Russell wrote:



Hi Mike,


On Jan 11, 2010, at 7:24 AM, Michael Dick wrote:

Hi Craig,


That sounds reasonable for this specific use case. I'm a little  
leery of
doing too much validation of the columnDefinition attribute,  
though. It

just
seems pretty easy for us to get it wrong (ie converting VARCHAR to
LVARCHAR
based on the column length, or some other optimization).



I'm really not suggesting that we do extensive analysis of the
columnDefinition. Just transforming NVARCHAR(n) which is ANSI  
standard SQL
into the dialect needed by non-ANSI databases, instead of simply  
passing the

columnDefinition as is to the DDL.



Where would we draw the line though? Just column types that we know  
won't
work? I could go along with that as long as we're clear on exactly  
what we

will change.


I'm suggesting that we look at the columnDefinition and if it contains  
NVARCHAR or NCHAR then we do a string substitution that is mediated by  
DBDictionary and implemented by a subclass.


Craig
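
A rough sketch of the proposed substitution, as a standalone helper
(class, method, and field names here are illustrative, not the actual
OpenJPA DBDictionary API): a dictionary subclass would supply its
dialect's spellings, and the ANSI NVARCHAR/NCHAR tokens in the
columnDefinition would be rewritten before DDL generation.

```java
// Hypothetical helper illustrating the proposed substitution: map ANSI
// national-character types in a columnDefinition to a dialect-specific
// spelling supplied by a DBDictionary subclass.
public class NationalCharSubstitution {
    private final String nvarcharTypeName; // e.g. "NVARCHAR2" on Oracle
    private final String ncharTypeName;    // e.g. "NCHAR"

    public NationalCharSubstitution(String nvarcharTypeName, String ncharTypeName) {
        this.nvarcharTypeName = nvarcharTypeName;
        this.ncharTypeName = ncharTypeName;
    }

    /** Rewrite NVARCHAR(n)/NCHAR(n) tokens in an ANSI columnDefinition. */
    public String apply(String columnDefinition) {
        return columnDefinition
            .replaceAll("(?i)\\bNVARCHAR\\b", nvarcharTypeName)
            .replaceAll("(?i)\\bNCHAR\\b", ncharTypeName);
    }

    public static void main(String[] args) {
        NationalCharSubstitution oracle =
            new NationalCharSubstitution("NVARCHAR2", "NCHAR");
        System.out.println(oracle.apply("NVARCHAR(256)")); // NVARCHAR2(256)
    }
}
```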




What about adding a variable to DBDictionary along the lines of
"preferNLSVarChar", and then we'll try to use the database's  
specific

NVARCHAR equivalent?



That's not the issue at all. As I understand it, the application  
has some
columns that have national use characters and those specific  
columns need to
be defined to use NVARCHAR or its non-ANSI dialect. Not all columns  
should

be NVARCHAR.





Marc, presumably you have different persistence unit definitions  
for each

database. If that's the case then you could use a different
xml-mapping-file
and set the columnDefinitions to the database specific type in the
xml-mapping-file.

Or even more hacky, you could just override the VARCHAR type in the
DBDictionary. I.e. add this property to persistence.xml:
<property name="openjpa.jdbc.DBDictionary(varCharTypeName=NVARCHAR)"/>


Either way the application will have to know the proper type, but  
at least

you can make some progress.



I think that either you or I misunderstand the issue. As I  
understand it,
the application knows the the column type (national use or not),  
and the
problem is how to get OpenJPA to generate the proper DDL for the  
database

variant.



It was me, thanks for setting me straight. The multiple mapping-file
approach would work for only a few columns, but DBDictionary hacks  
wouldn't

be optimal for most apps.

-mike

On Fri, Jan 8, 2010 at 1:34 PM, Craig L Russell 
wrote:




On Jan 8, 2010, at 11:00 AM, Marc.Boudreau wrote:


No, the problem is that code can be run on a variety of database

platforms
like DB2, SQL Server, Oracle, etc...
So if I use @Column(columnDefinition="NVARCHAR(256)"), it will  
only work

on
SQL Server and Sybase, because the other database platforms don't
recognize
the NVARCHAR type.


I see. How about having the DataDictionary process the  
columnDefinition

in
a database-specific way? IIRC, all of the databases support  
national use

character set columns but in their own way.

The columnDefinition is not further standardized in the  
specification so

we
can do anything we want to with it.

We could analyze the columnDefinition and look for the ANSI  
standard

strings NCHAR(n) and NVARCHAR(n) and translate these into the
database-specific type.

Craig




Craig L Russell wrote:



Hi,

On Jan 8, 2010, at 7:53 AM, Marc Boudreau wrote:



Currently, OpenJPA maps String fields to VARCHAR on SQLServer  
and

Sybase.
There doesn't appear to be a way to cause a String field to be
mapped to
NVARCHAR other than by using the @Column annotation and  
settings its

columnDefinition to "NVARCHAR".


What is the objection to using this technique on the columns  
that you

want to hold national use characters? It seems this use case is
exactly suited to this feature.

At the same time, blindly using NVARCHAR

for all String fields is too costly in terms of storage space  
on the
database.  It ends up limiting the maximum size of the column  
(less

characters can fit because more bytes are used to store them).

Unfortunately, the applications we write are required to be  
database

neutral because we support multiple vendors.

I'd like to start a discussion on this matter.  Here are a  
couple of

points
to lead us off...
What's the severity of this missing functionality?
Could an OpenJPA specific annotation be introduced to allow the
mapping
tool to use NVARCHAR instead of VARCHAR?.



Is the problem that the OpenJPA mapping tool doesn't support the
standard columnDefinition annotation in the way you want it to?

Craig





Marc Boudreau
Software Developer
IBM Cognos Content Manager
marc.boudr...@ca.ibm.com
Phone: 613-356-6412



Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!

Re: openjpa ignoring column annotation

2010-01-11 Thread Craig L Russell

Hi,

Could you post the code that stimulates this error? It looks like the  
query might actually be the problem...


Craig

On Jan 11, 2010, at 9:00 AM, racarlson wrote:



I have the following annotation and variable name (getter/setter) listed
below. If I change it to add an underscore it works, but with the column
name different from the getter/setter it gives me an error; Hibernate's JPA
implementation didn't do this. I also listed the error below. How do I get
this to work with OpenJPA, using a column name that differs from the
variable name? Our project is large and we can't go through changing all
the column names right now; we are trying to use OpenJPA instead of
Hibernate since it's built into the J2EE container.

    private java.sql.Timestamp dateCreated;

    @Column(name = "DATE_CREATED")
    public java.sql.Timestamp getDateCreated()
    {
        return this.dateCreated;
    }

    public void setDateCreated(java.sql.Timestamp dateCreated)
    {
        this.dateCreated = dateCreated;
    }

the error:
<1.0.0 nonfatal user error>
org.apache.openjpa.persistence.ArgumentException: Result type "class
Account" does not have any public fields or setter methods for the
projection or aggregate result element "DATE_CREATED", nor does it have a
generic put(Object,Object) method that can be used, nor does it have a
public constructor that takes the types null.
--
View this message in context: 
http://n2.nabble.com/openjpa-ignoring-column-annotation-tp4286639p4286639.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: Multibyte characters on SQL Server and Sybase

2010-01-11 Thread Craig L Russell

Hi Mike,

On Jan 11, 2010, at 7:24 AM, Michael Dick wrote:


Hi Craig,

That sounds reasonable for this specific use case. I'm a little  
leery of
doing too much validation of the columnDefinition attribute, though.  
It just
seems pretty easy for us to get it wrong (ie converting VARCHAR to  
LVARCHAR

based on the column length, or some other optimization).


I'm really not suggesting that we do extensive analysis of the  
columnDefinition. Just transforming NVARCHAR(n) which is ANSI standard  
SQL into the dialect needed by non-ANSI databases, instead of simply  
passing the columnDefinition as is to the DDL.


What about adding a variable to DBDictionary along the lines of
"preferNLSVarChar", and then we'll try to use the database's specific
NVARCHAR equivalent?


That's not the issue at all. As I understand it, the application has  
some columns that have national use characters and those specific  
columns need to be defined to use NVARCHAR or its non-ANSI dialect.  
Not all columns should be NVARCHAR.


Marc, presumably you have different persistence unit definitions for  
each
database. If that's the case then you could use a different xml- 
mapping-file

and set the columnDefinitions to the database specific type in the
xml-mapping-file.

Or even more hacky, you could just override the VARCHAR type in the
DBDictionary. Ie add this property to persistence.xml :
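The property element itself appears to have been stripped by the archive. A plausible form (VarcharTypeName is a real DBDictionary setting; the exact value shown is an assumption for SQL Server):

```xml
<!-- Sketch: force the DDL type used for all VARCHAR columns.
     The value shown is assumed, not taken from the original mail. -->
<property name="openjpa.jdbc.DBDictionary"
          value="sqlserver(VarcharTypeName=NVARCHAR)"/>
```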


Either way the application will have to know the proper type, but at least
you can make some progress.


I think that either you or I misunderstand the issue. As I understand  
it, the application knows the column type (national use or not),  
and the problem is how to get OpenJPA to generate the proper DDL for  
the database variant.


Craig


-mike

On Fri, Jan 8, 2010 at 1:34 PM, Craig L Russell  
wrote:




On Jan 8, 2010, at 11:00 AM, Marc.Boudreau wrote:


No, the problem is that code can be run on a variety of database  
platforms

like DB2, SQL Server, Oracle, etc...
So if I use @Column(columnDefinition="NVARCHAR(256)"), it will  
only work

on
SQL Server and Sybase, because the other database platforms don't
recognize
the NVARCHAR type.



I see. How about having the DataDictionary process the  
columnDefinition in
a database-specific way? IIRC, all of the databases support  
national use

character set columns but in their own way.

The columnDefinition is not further standardized in the  
specification so we

can do anything we want to with it.

We could analyze the columnDefinition and look for the ANSI standard
strings NCHAR(n) and NVARCHAR(n) and translate these into the
database-specific type.

Craig




Craig L Russell wrote:



Hi,

On Jan 8, 2010, at 7:53 AM, Marc Boudreau wrote:




Currently, OpenJPA maps String fields to VARCHAR on SQLServer and
Sybase.
There doesn't appear to be a way to cause a String field to be
mapped to
NVARCHAR other than by using the @Column annotation and settings  
its

columnDefinition to "NVARCHAR".



What is the objection to using this technique on the columns that  
you

want to hold national use characters? It seems this use case is
exactly suited to this feature.

At the same time, blindly using NVARCHAR
for all String fields is too costly in terms of storage space on  
the
database.  It ends up limiting the maximum size of the column  
(less

characters can fit because more bytes are used to store them).

Unfortunately, the applications we write are required to be  
database

neutral because we support multiple vendors.

I'd like to start a discussion on this matter.  Here are a  
couple of

points
to lead us off...
What's the severity of this missing functionality?
Could an OpenJPA specific annotation be introduced to allow the
mapping
tool to use NVARCHAR instead of VARCHAR?.



Is the problem that the OpenJPA mapping tool doesn't support the
standard columnDefinition annotation in the way you want it to?

Craig





Marc Boudreau
Software Developer
IBM Cognos Content Manager
marc.boudr...@ca.ibm.com
Phone: 613-356-6412



Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





--
View this message in context:
http://n2.nabble.com/Multibyte-characters-on-SQL-Server-and-Sybase-tp4274154p4274294.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.



Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: Multibyte characters on SQL Server and Sybase

2010-01-08 Thread Craig L Russell


On Jan 8, 2010, at 11:00 AM, Marc.Boudreau wrote:



No, the problem is that code can be run on a variety of database  
platforms

like DB2, SQL Server, Oracle, etc...
So if I use @Column(columnDefinition="NVARCHAR(256)"), it will only  
work on
SQL Server and Sybase, because the other database platforms don't  
recognize

the NVARCHAR type.


I see. How about having the DataDictionary process the  
columnDefinition in a database-specific way? IIRC, all of the  
databases support national use character set columns but in their own  
way.


The columnDefinition is not further standardized in the specification  
so we can do anything we want to with it.


We could analyze the columnDefinition and look for the ANSI standard  
strings NCHAR(n) and NVARCHAR(n) and translate these into the database- 
specific type.
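The translation idea can be sketched independently of OpenJPA. This is illustrative only, not DBDictionary API: the method name and the template parameters are invented, while NVARCHAR2 is Oracle's real national varchar type.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: detect ANSI NCHAR(n)/NVARCHAR(n) in a columnDefinition and
// rewrite them into a database-specific equivalent.
public class NationalTypeTranslator {
    private static final Pattern ANSI_NATIONAL =
            Pattern.compile("\\bN(VAR)?CHAR\\((\\d+)\\)", Pattern.CASE_INSENSITIVE);

    // varTemplate e.g. "NVARCHAR2(%s)" for Oracle; fixedTemplate for NCHAR(n)
    static String translate(String columnDefinition, String varTemplate,
                            String fixedTemplate) {
        Matcher m = ANSI_NATIONAL.matcher(columnDefinition);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // group(1) is "VAR" for NVARCHAR(n), null for NCHAR(n)
            String template = m.group(1) != null ? varTemplate : fixedTemplate;
            m.appendReplacement(sb, String.format(template, m.group(2)));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        // prints NVARCHAR2(256)
        System.out.println(translate("NVARCHAR(256)", "NVARCHAR2(%s)", "NCHAR(%s)"));
    }
}
```

Non-national types such as plain VARCHAR(n) pass through unchanged, which matches the intent of only touching the ANSI national-use strings.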


Craig



Craig L Russell wrote:


Hi,

On Jan 8, 2010, at 7:53 AM, Marc Boudreau wrote:




Currently, OpenJPA maps String fields to VARCHAR on SQLServer and
Sybase.
There doesn't appear to be a way to cause a String field to be
mapped to
NVARCHAR other than by using the @Column annotation and settings its
columnDefinition to "NVARCHAR".


What is the objection to using this technique on the columns that you
want to hold national use characters? It seems this use case is
exactly suited to this feature.


At the same time, blindly using NVARCHAR
for all String fields is too costly in terms of storage space on the
database.  It ends up limiting the maximum size of the column (less
characters can fit because more bytes are used to store them).

Unfortunately, the applications we write are required to be database
neutral because we support multiple vendors.

I'd like to start a discussion on this matter.  Here are a couple of
points
to lead us off...
 What's the severity of this missing functionality?
 Could an OpenJPA specific annotation be introduced to allow the
mapping
 tool to use NVARCHAR instead of VARCHAR?.


Is the problem that the OpenJPA mapping tool doesn't support the
standard columnDefinition annotation in the way you want it to?

Craig





Marc Boudreau
Software Developer
IBM Cognos Content Manager
marc.boudr...@ca.ibm.com
Phone: 613-356-6412


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





--
View this message in context: 
http://n2.nabble.com/Multibyte-characters-on-SQL-Server-and-Sybase-tp4274154p4274294.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: Multibyte characters on SQL Server and Sybase

2010-01-08 Thread Craig L Russell

Hi,

On Jan 8, 2010, at 7:53 AM, Marc Boudreau wrote:




Currently, OpenJPA maps String fields to VARCHAR on SQLServer and  
Sybase.
There doesn't appear to be a way to cause a String field to be  
mapped to

NVARCHAR other than by using the @Column annotation and settings its
columnDefinition to "NVARCHAR".


What is the objection to using this technique on the columns that you  
want to hold national use characters? It seems this use case is  
exactly suited to this feature.



At the same time, blindly using NVARCHAR
for all String fields is too costly in terms of storage space on the
database.  It ends up limiting the maximum size of the column (less
characters can fit because more bytes are used to store them).

Unfortunately, the applications we write are required to be database
neutral because we support multiple vendors.

I'd like to start a discussion on this matter.  Here are a couple of  
points

to lead us off...
  What's the severity of this missing functionality?
  Could an OpenJPA specific annotation be introduced to allow the  
mapping

  tool to use NVARCHAR instead of VARCHAR?.


Is the problem that the OpenJPA mapping tool doesn't support the  
standard columnDefinition annotation in the way you want it to?


Craig





Marc Boudreau
Software Developer
IBM Cognos Content Manager
marc.boudr...@ca.ibm.com
Phone: 613-356-6412


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: How to map "optional" joins without having to define optional intermediate entities?

2010-01-07 Thread Craig L Russell

Hi,

On Jan 7, 2010, at 2:57 PM, KARR, DAVID (ATTCINW) wrote:


Let's say that I have an entity called "Category" with a "name" field.
It's mapped to a table called "CATEGORY" and the "NAME" column.  This
part works fine.

Now I'm trying to add a mapping for a secondary table, called
"CATEGORY_ES" which has rows corresponding to many of the rows in
"CATEGORY", but not all of them.  This table also has a "NAME" column,
but it's in Spanish, instead of English.

I originally thought I would map CATEGORY_ES as a "secondary-table",  
but

that appears to not be possible, as there are rows in "CATEGORY" that
don't have corresponding rows in "CATEGORY_ES".  When I do a query for
rows that don't have a row in CATEGORY_ES, the query fails.

I originally had a "name" field, so I was thinking I would make that a
transient field, and also have "nameEN" and "nameES", and do a
translation after properties are set to determine what "name" is.

I might have to make the "Category" entity have a one-to-one field
called "categoryES", of type "CategoryES" (mapped to the obvious  
table)
which will either be set or not.  Does this look like the best way  
to do

this?


Yes. A OneToOne with cardinality 0..1 sounds like it exactly matches  
your description. The main requirement is that there is a column in  
the CATEGORY table that is the target of a foreign key in the  
CATEGORY_ES table.


The semantics of a one to one relationship are very similar to a  
secondary table so most of your application logic should work. Just  
check the reference for null (don't assume that the other side  
exists). If you query the CATEGORY_ES table you can go back to the  
CATEGORY table, but not necessarily the other way.
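A sketch of that 0..1 mapping in orm.xml terms. The table names come from the thread; the field names and the FK column names (ID, CATEGORY_ID) are assumptions. CategoryES owns the foreign key, Category holds the inverse side:

```xml
<entity class="CategoryES">
  <table name="CATEGORY_ES"/>
  <attributes>
    <!-- id mapping elided -->
    <one-to-one name="category">
      <join-column name="CATEGORY_ID" referenced-column-name="ID"/>
    </one-to-one>
  </attributes>
</entity>
<entity class="Category">
  <table name="CATEGORY"/>
  <attributes>
    <one-to-one name="categoryES" mapped-by="category"/>
  </attributes>
</entity>
```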


Craig

Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: OpenJPA 1.2.2 release.

2010-01-07 Thread Craig L Russell

+1

If there are any significant bugs fixed in the 2.x branches that  
haven't been back ported, now is the time to discuss them.


Craig

On Jan 6, 2010, at 6:49 PM, Michael Dick wrote:


Hi all,

There have been several requests for a 1.2.2 release recently, and I  
think

we're a bit overdue. The 1.2.x tree is in pretty good shape, is anyone
opposed to locking it down this week and starting a release on Monday?

-mike


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: Easier strategy to debug "Fail to convert to internal representation"

2009-12-23 Thread Craig L Russell

Hi,

Just speaking for myself, it's good practice to capture all the "user  
domain" information possible in exceptions, logs, and traces.


So it's fair to ask that unhelpful exceptions be "dressed"  
appropriately (to help users figure out what they've done wrong or to  
identify exactly where the implementation went wrong) by trying to add  
some context to the exceptions.


Craig

On Dec 23, 2009, at 2:22 PM, KARR, DAVID (ATTCINW) wrote:


So, I have this big orm.xml that I've been expanding on for the last
couple of weeks.  I've tested most of the relationships, but I'm sure
I've missed some.

I had connected a couple more relationships and ran a test, and I got
"SQLException: Fail to convert to internal representation".  This  
likely

means that the data type I specified for a field doesn't match its
representation.  So, I checked the last entities I worked on.  I  
didn't

see any problems there.  I concluded adding the new relationships must
have caused a row of some entity to appear that I haven't worked on  
for

a while, but which has this illegal mapping.

Unfortunately, this error and the previous debug output doesn't give  
me

any information about which entity and field is in error, so I had to
dig deeper.

In Eclipse I started to walk up the stack to see if I could find a  
place
that might possibly give me a clue about where I was.  So, I noticed  
in

"JDBCStoreManager.load()" that this is the first place where an
exception was caught (numerous stack entries below that were just
processing the exception).  I set a breakpoint in here right after it
obtained the "ClassMapping" object, which has the entity class in it.
By watching the printout of the ClassMapping object and noting whether
continuing hit the exception, I finally found the entity that had the
problem.  Once I found that, I inspected the fields and found the
problem.

Before I make a suggestion, is there some other information I could  
have

looked at to give me a clue about which entity was having the problem?

It seems to me that this "catch" clause in that method (shown below)  
is

missing the opportunity to provide a little more useful information.
The resulting SQLException doesn't tell me anything.  Is it reasonable
to enhance this to provide more information?

    } catch (SQLException se) {
        throw SQLExceptions.getStore(se, _dict);
    }
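A sketch of the kind of context that catch clause could attach before rethrowing. This is illustrative, not OpenJPA's actual code: the helper name and message format are invented, and entityClass stands in for what the ClassMapping in scope could supply.

```java
import java.sql.SQLException;

public class StoreExceptionContext {

    // Hypothetical helper: wrap the low-level SQLException with the entity
    // name so the user can tell which mapping failed to load.
    static RuntimeException withContext(SQLException se, String entityClass) {
        return new RuntimeException(
                "Error loading entity " + entityClass + ": " + se.getMessage(), se);
    }

    public static void main(String[] args) {
        SQLException se = new SQLException("Fail to convert to internal representation");
        RuntimeException wrapped = withContext(se, "com.example.Account");
        // the entity name now appears in the message, the SQLException
        // is preserved as the cause
        System.out.println(wrapped.getMessage());
    }
}
```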


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: unmanaged exception

2009-12-23 Thread Craig L Russell

Hi,

I surmise that you're using an automatically generated id for "a",  
otherwise accessing the id field would not normally trigger a database  
access.


But I also suspect that the real issue is that the unmanaged entity is  
not being handled properly...


Craig

On Dec 23, 2009, at 8:01 AM, Daryl Stultz wrote:

On Wed, Dec 23, 2009 at 10:42 AM, Daryl Stultz   
wrote:


So it seems I should at least not be referencing the id of "a"  
until after

the commit.



While this may have prevented this particular exception (or maybe it  
would
have happened upon commit anyway), I can't see that referencing the  
id of a
newly persisted entity before commit is a bad coding practice.  
Anyone have a

different opinion?

--
Daryl Stultz
_
6 Degrees Software and Consulting, Inc.
http://www.6degrees.com
mailto:da...@6degrees.com


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: How to get hardcoded ordinal values to map to specific Enum values?

2009-12-23 Thread Craig L Russell

Hi,

On Dec 23, 2009, at 10:15 AM, Pinaki Poddar wrote:

I wish again that OpenJPA or JPA in general had an extension  
mechanism

for the orm.xml.  This is something that might be better in XML
configuration than annotations.


I agree, and I believe this is a major deficiency in the JPA  
specification, and it's been raised numerous times.


The JPA spec does not provide for extension points, neither in  
annotations nor in xml. And the fact that all JCP specifications  
require that vendors must not extend the specified xml means that  
vendors must have their own separate annotations in their own name  
spaces, and they must have a completely different "orm.xml" in their  
own name spaces in order to complement (not extend) the orm.


Craig

Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: How to get hardcoded ordinal values to map to specific Enum values?

2009-12-19 Thread Craig L Russell

Hi,

On Dec 19, 2009, at 11:04 AM, KARR, DAVID (ATTCINW) wrote:


-Original Message-
From: craig.russ...@sun.com [mailto:craig.russ...@sun.com]
Sent: Friday, December 18, 2009 1:59 PM
To: users@openjpa.apache.org
Subject: Re: How to get hardcoded ordinal values to map to specific
Enum values?

Hi,

I haven't used this, but it seems that you should start looking at
EnumValueHandler:

http://openjpa.apache.org/builds/latest/docs/javadoc/org/apache/openjpa/jdbc/meta/strats/EnumValueHandler.html


It appears this feature is completely custom to OpenJpa, and isn't  
even
in the JPA 2.0 specification.  That's unfortunate, but it appears to  
be

the only way to do this.


I agree. It's too late for this cycle of JPA specification, but please  
take the time to let the JPA folks know what you'd like to see in  
future releases of the specification.


mailto:jsr-317-feedb...@sun.com

Craig

Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: How to get hardcoded ordinal values to map to specific Enum values?

2009-12-18 Thread Craig L Russell

Hi,

I haven't used this, but it seems that you should start looking at  
EnumValueHandler: http://openjpa.apache.org/builds/latest/docs/javadoc/org/apache/openjpa/jdbc/meta/strats/EnumValueHandler.html


Does anyone here know where to find examples of annotating a field  
with a custom ValueHandler?


Craig

On Dec 18, 2009, at 11:01 AM, KARR, DAVID (ATTCINW) wrote:


-Original Message-
From: Daryl Stultz [mailto:da...@6degrees.com]
Sent: Thursday, December 17, 2009 6:52 PM
To: users@openjpa.apache.org
Subject: Re: How to get hardcoded ordinal values to map to specific
Enum values?

On Thu, Dec 17, 2009 at 6:31 PM, Craig L Russell
wrote:


You would need your own value type as an extra attribute of your

Enum

class, and then use a special OpenJPA mapping to get the values to

and from

the database.



Maybe it's the "special OpenJPA mapping" info the OP is looking  
for. I

know
I'm interested in it.


Yes, I'm probably going to have to find a reasonable solution for  
this.

My database stores the custom ordinal values, and I have to map to a
real Enum value.



Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: How to get hardcoded ordinal values to map to specific Enum values?

2009-12-17 Thread Craig L Russell

Hi,

I'm not sure I understand. Are you asking how to get a Java enum to  
use your special values, or for OpenJPA to map your special type?


Java enum types assign the ordinal values without any help from the  
programmer. If you want your own values, ordinal isn't the right  
concept. You would need your own value type as an extra attribute of  
your Enum class, and then use a special OpenJPA mapping to get the  
values to and from the database.
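The "own value type as an extra attribute" part can be sketched in plain Java; the enum names and codes here are invented, and the database mapping itself is out of scope:

```java
import java.util.HashMap;
import java.util.Map;

enum Status {
    // Explicit database codes, independent of declaration order / ordinal()
    ACTIVE(10), SUSPENDED(20), CLOSED(99);

    private final int code;
    private static final Map<Integer, Status> BY_CODE = new HashMap<>();
    static {
        for (Status s : values()) BY_CODE.put(s.code, s);
    }

    Status(int code) { this.code = code; }

    int getCode() { return code; }

    static Status fromCode(int code) {
        Status s = BY_CODE.get(code);
        if (s == null) throw new IllegalArgumentException("Unknown code: " + code);
        return s;
    }
}

public class EnumCodeDemo {
    public static void main(String[] args) {
        System.out.println(Status.fromCode(20));   // prints SUSPENDED
        System.out.println(Status.ACTIVE.getCode()); // prints 10
    }
}
```

Reordering or adding constants then has no effect on the stored values, which is exactly what ordinal-based mapping cannot guarantee.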


Craig

On Dec 17, 2009, at 11:33 AM, KARR, DAVID (ATTCINW) wrote:


I'm trying to map a field that is essentially an enumerated type.  The
ordinal values are stored in the DB.  I can specify
"@Enumerated(EnumType.ORDINAL)" on the field, and then in the  
definition

of my Java enumerated type, I can define the possible values I can
expect.  What seems to be missing here is that I have to map specific
ordinal values.  I can't just assume the first value maps to "0",  
and so
on.  I don't see an obvious way to define an enumerated type where I  
can

set the ordinal values.  Am I missing something simple here?


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: Get "You cannot access the EntityTransaction when using managed transactions." when I implement @Transactional methods with OpenJPA

2009-12-17 Thread Craig L Russell


On Dec 17, 2009, at 12:58 PM, KARR, DAVID (ATTCINW) wrote:


Hi,

I'm happy to hear of your success.

If you would like to help future generations of OpenJPA developers
avoid what you had to experience, would you consider opening a JIRA  
to

suggest where the documentation could be improved?


I'm willing, but I don't know yet that I understand the details of  
what

I did wrong or right, so I'm not sure what additional statements could
be made.  The point about the mismatch between the transaction-type  
and
the "data-source" settings is a good candidate.  I guess I can at  
least

make that statement.


The main thing is to open an issue and propose where in the  
documentation *you* would expect the missing information to be. We can  
then discuss in the context of a "change request" what material should  
go there.


Thanks,

Craig




Thanks,

Craig




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: Get "You cannot access the EntityTransaction when using managed transactions." when I implement @Transactional methods with OpenJPA

2009-12-17 Thread Craig L Russell
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:635)
at com.att.ecom.dynamiccontent.service.CatalogService$$EnhancerByCGLIB$$5a7c3444.retrieveCatalogTree()
at com.att.ecom.dynamiccontent.content.Content.getCatalog(Content.java:35)



Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: Get "You cannot access the EntityTransaction when using managed transactions." when I implement @Transactional methods with OpenJPA

2009-12-16 Thread Craig L Russell

Hi,

There are two transaction models you can use in a Java EE container.  
If you use the JTA datasource, you need to use the Java EE transaction  
manager. If you use only a non-JTA datasource, you can manage the  
transactions using EntityTransaction.


I don't know the details with regard to integrating with Spring, but  
you might be ok with just using the non-JTA datasource in your  
environment. If you use the JTA datasource, you need to use the  
managed transaction interface (I recall you can look this up as a JNDI  
resource).
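The two models map to persistence.xml roughly like this (a sketch; the unit names and JNDI names are placeholders):

```xml
<!-- Container-managed: JTA datasource, container-managed transactions;
     EntityTransaction must not be used. -->
<persistence-unit name="managedPU" transaction-type="JTA">
  <jta-data-source>jdbc/MyJtaDS</jta-data-source>
</persistence-unit>

<!-- Application-managed: non-JTA datasource; transactions are driven
     through em.getTransaction() / EntityTransaction. -->
<persistence-unit name="localPU" transaction-type="RESOURCE_LOCAL">
  <non-jta-data-source>jdbc/MyNonJtaDS</non-jta-data-source>
</persistence-unit>
```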


Craig

On Dec 16, 2009, at 8:45 AM, KARR, DAVID (ATTCINW) wrote:

I have an app using Spring 2.5.6, OpenJPA 1.2.1, and WebLogic  
10.3.2.  I

specified a JTA datasource in the persistence.xml.  I have a Spring
controller that calls my DAO class which uses the EntityManager.  This
is working ok with respect to transactions.  As my app is only going  
to

be reading the database, I would think I wouldn't need transactions.
However, because of one problem I'm having with traversing an
association path, I thought I would try to implement a transactional
service layer, and do the association walking within that layer.

So, I added a class with a "@Transactional" method and put that in
between the Controller and the DAO.  Now, I'm seeing the following
exception stack trace:


Caused by:
org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is org.apache.openjpa.persistence.InvalidStateException: You cannot access the EntityTransaction when using managed transactions.
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:375)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:374)
at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:263)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:101)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:635)
at com.att.ecom.dynamiccontent.service.CatalogService$$EnhancerByCGLIB$$5a7c3444.retrieveCatalogTree()
at com.att.ecom.dynamiccontent.content.Content.getCatalog(Content.java:35)

----


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: ordered one-to-many with join-table

2009-12-14 Thread Craig L Russell

Hi,

Take a look at the OrderBy annotation, or the JPA 2.0 OrderColumn  
annotation for the standard way to represent ordered collections.


Here's the description from 7.6.3 of the OpenJPA user's guide:

Relational databases do not guarantee that records are returned in  
insertion order. If you want to make sure that your collection  
elements are loaded in the same order they were in when last stored,  
you must declare an order column. An order column can be declared  
using OpenJPA's org.apache.openjpa.persistence.jdbc.OrderColumn  
annotation or the JPA 2.0 javax.persistence.OrderColumn annotation or  
order-column orm element as defined in Section 3, “ XML Schema ”.  
OpenJPA's org.apache.openjpa.persistence.jdbc.OrderColumn annotation  
has the following properties:


• String name: Defaults to the name of the relationship property or field of the entity or embeddable class + _ORDER. To use the JPA 1.0 default order column name ORDR, set the Section 5.7, “ openjpa.Compatibility ” option UseJPA2DefaultOrderColumnName to false.
• boolean enabled
• int precision
• String columnDefinition
• boolean insertable
• boolean updatable
Order columns are always in the container table. You can explicitly  
turn off ordering (if you have enabled it by default via your mapping  
defaults) by setting the enabled property to false. All other  
properties correspond exactly to the same-named properties on the  
standard Column annotation, described in Section 3, “ Column ”.
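Put together for the SEQUENCE_NUM case described in this thread, a sketch of the orm.xml mapping. Only the SEQUENCE_NUM column name comes from the thread; the field, entity, join-table, and join-column names are assumptions:

```xml
<one-to-many name="elements" target-entity="Element">
  <!-- order-column lives in the join (container) table -->
  <order-column name="SEQUENCE_NUM"/>
  <join-table name="PARENT_ELEMENT">
    <join-column name="PARENT_ID"/>
    <inverse-join-column name="ELEMENT_ID"/>
  </join-table>
</one-to-many>
```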


Craig

On Dec 14, 2009, at 10:11 AM, KARR, DAVID (ATTCINW) wrote:

I have two entities with a one-to-many association from the first to  
the

second, and the database uses a join-table to represent this
association.  The join table also has a "SEQUENCE_NUM" column to
represent the required ordering of this element in the collection of
elements.  I understand how to use "join-table" and "join-column" to
describe the basic relationship, but I don't see any way to specify  
that

the collection should be ordered by a column value in the join table.


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: Issues using same domain classes in JPA and CXF/JAXB

2009-12-13 Thread Craig L Russell

Hi,

Just one other thing. Could you post your Category.java as well? I'm  
thinking that the problem might be in the Category class...


Craig

On Dec 13, 2009, at 5:11 PM, KARR, DAVID (ATTCINW) wrote:


-Original Message-
From: craig.russ...@sun.com [mailto:craig.russ...@sun.com]
Sent: Sunday, December 13, 2009 4:26 PM
To: users@openjpa.apache.org
Subject: Re: Issues using same domain classes in JPA and CXF/JAXB

Hi Rick,

I don't know about three kinds of enhancement. Build time runs before
the classes are put into the jars for runtime. Runtime enhancement
enhances classes during loading. Running without enhancement is not
runtime enhancement.


Ok, so I'm running without enhancement at this point.


On Dec 13, 2009, at 1:56 PM, Rick Curtis wrote:


I'm going to suggest you spend a few more cycles on getting
buildtime enhancement working as runtime enhanced classes has a
number of known issues... Enough issues that we have disabled this
support as the default behavior in trunk. HTH


I don't believe this is true. We disabled running *without
enhancement* but runtime (load time) enhancement should work just as
well as build time enhancement.


My attempt to set up load-time enhancement didn't work, due to  
struggles

with classloaders and other problems.  I'm hoping I can somehow get
build-time enhancement working.  Hopefully the JIRA I filed will  
make it
obvious either what I've done wrong, or what's wrong with the  
enhancer.



On Dec 13, 2009, at 3:53 PM, "KARR, DAVID (ATTCINW)"
 wrote:


-Original Message-
From: Rick Curtis [mailto:curti...@gmail.com]
Sent: Sunday, December 13, 2009 10:50 AM
To: users@openjpa.apache.org
Subject: Re: Issues using same domain classes in JPA and CXF/JAXB

Sorry I haven't followed this chain of emails, but what type of
enhancement are you using?


Well, presently I believe I'm just using "run-time" enhancement.  I
had
troubles with both "load-time" (javaagent) and "build-time"
enhancement
(enhancer task).  I'll eventually try to submit a ticket,
particularly
for the problems I had with build-time enhancement.


On Dec 13, 2009, at 12:04 PM, "KARR, DAVID (ATTCINW)"




wrote:


-Original Message-
From: craig.russ...@sun.com [mailto:craig.russ...@sun.com]
Sent: Saturday, December 12, 2009 6:18 PM
To: users@openjpa.apache.org
Subject: Re: Issues using same domain classes in JPA and

CXF/JAXB


Hi KARR, DAVID,

I'd say that not copying annotations over to enhanced classes is

a

deficiency, if not a bug in OpenJPA.

OpenJPA is not the only consumer of runtime annotations.

Can you please file a JIRA for this issue?


Done: <https://issues.apache.org/jira/browse/OPENJPA-1428>.

In the meantime, I have a workaround using a generic method that
basically creates an instance of my class, then uses
"BeanUtils.copyProperties()" to copy over everything.  That

object

then
serializes fine, because its class has the annotations.


On Dec 12, 2009, at 2:19 PM, KARR, DAVID (ATTCINW) wrote:


I'm building an app that retrieves data with OpenJPA and tries

to

serialize it in xml or json with CXF/JAXB.  I'm using

annotations

on

the
domain class to specify both the logical JPA (not physical) and

JAXB

behavior (with the physical JPA in XML config).  In theory I
would
think
this should work, but in my first test I found that CXF didn't
serialize
the object that I retrieved from JPA.

After some thinking, I thought to write some debug code that

prints

out
the runtime annotations on the class, both for the class of the
returned
instance, and the class that it's declared as.  What I found
(because I
realized I should have expected this) is that the runtime class

didn't

have the required annotations that the declared class did.

When

JPA

enhanced the classes, it didn't copy the annotations.

My app currently doesn't use build-time enhancement or the
javaagent.  I
can't remember exactly what OpenJPA does in that situation.  I

think

it's still enhancing the class, but on demand.

Is this issue with non-copied annotations really an issue, or

should

I

look elsewhere for why CXF isn't serializing my data (I'm
asking a
similar question on the CXF list)?


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: Issues using same domain classes in JPA and CXF/JAXB

2009-12-13 Thread Craig L Russell

Hi Rick,

I don't know about three kinds of enhancement. Build time runs before  
the classes are put into the jars for runtime. Runtime enhancement  
enhances classes during loading. Running without enhancement is not  
runtime enhancement.


On Dec 13, 2009, at 1:56 PM, Rick Curtis wrote:

I'm going to suggest you spend a few more cycles on getting  
buildtime enhancement working as runtime enhanced classes has a  
number of known issues... Enough issues that we have disabled this  
support as the default behavior in trunk. HTH


I don't believe this is true. We disabled running *without  
enhancement* but runtime (load time) enhancement should work just as  
well as build time enhancement.


Craig


Thanks,
Rick

On Dec 13, 2009, at 3:53 PM, "KARR, DAVID (ATTCINW)"  
 wrote:



-Original Message-
From: Rick Curtis [mailto:curti...@gmail.com]
Sent: Sunday, December 13, 2009 10:50 AM
To: users@openjpa.apache.org
Subject: Re: Issues using same domain classes in JPA and CXF/JAXB

Sorry I haven't followed this chain of emails, but what type of
enhancement are you using?


Well, presently I believe I'm just using "run-time" enhancement.  I  
had
troubles with both "load-time" (javaagent) and "build-time"  
enhancement
(enhancer task).  I'll eventually try to submit a ticket,  
particularly

for the problems I had with build-time enhancement.

On Dec 13, 2009, at 12:04 PM, "KARR, DAVID (ATTCINW)" >

wrote:


-Original Message-
From: craig.russ...@sun.com [mailto:craig.russ...@sun.com]
Sent: Saturday, December 12, 2009 6:18 PM
To: users@openjpa.apache.org
Subject: Re: Issues using same domain classes in JPA and CXF/JAXB

Hi KARR, DAVID,

I'd say that not copying annotations over to enhanced classes is a
deficiency, if not a bug in OpenJPA.

OpenJPA is not the only consumer of runtime annotations.

Can you please file a JIRA for this issue?


Done: <https://issues.apache.org/jira/browse/OPENJPA-1428>.

In the meantime, I have a workaround using a generic method that
basically creates an instance of my class, then uses
"BeanUtils.copyProperties()" to copy over everything.  That object
then
serializes fine, because its class has the annotations.


On Dec 12, 2009, at 2:19 PM, KARR, DAVID (ATTCINW) wrote:


I'm building an app that retrieves data with OpenJPA and tries to
serialize it in xml or json with CXF/JAXB.  I'm using annotations

on

the
domain class to specify both the logical JPA (not physical) and

JAXB
behavior (with the physical JPA in XML config).  In theory I  
would

think
this should work, but in my first test I found that CXF didn't
serialize
the object that I retrieved from JPA.

After some thinking, I thought to write some debug code that

prints

out
the runtime annotations on the class, both for the class of the
returned
instance, and the class that it's declared as.  What I found
(because I
realized I should have expected this) is that the runtime class

didn't

have the required annotations that the declared class did.  When

JPA

enhanced the classes, it didn't copy the annotations.

My app currently doesn't use build-time enhancement or the
javaagent.  I
can't remember exactly what OpenJPA does in that situation.  I

think

it's still enhancing the class, but on demand.

Is this issue with non-copied annotations really an issue, or

should

I
look elsewhere for why CXF isn't serializing my data (I'm  
asking a

similar question on the CXF list)?


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: Impact of not using a transactional service layer in a read-only JPA application?

2009-12-13 Thread Craig L Russell

Hi KARR, DAVID,

This is just a high level view, but if you're not modifying the  
database the only thing you should be concerned about is inconsistent  
data.


Within a single database request, results will be consistent assuming  
you use read-committed isolation. This is true even if you use auto- 
commit on your connection. But inconsistencies are possible between  
requests. For example, data that you read in one request might be  
deleted by the time you read again.


There is increasing overhead (performance degradation) associated with  
increasing consistency. The overhead might or might not be significant  
for your application.


The JPA specification focuses on optimistic consistency and in most  
cases you aren't getting higher levels of consistency than read- 
committed anyway.
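A sketch of the service-layer option from the question, assuming Spring and a placeholder `Item` entity; note that `readOnly` is a hint to the provider and driver, not a stronger isolation level:

```java
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CatalogService {

    @PersistenceContext
    private EntityManager em;

    // readOnly lets the provider skip flushing and dirty tracking; each
    // query still sees read-committed data, consistent per statement.
    @Transactional(readOnly = true)
    public List<?> listItems() {
        return em.createQuery("select i from Item i").getResultList();
    }
}
```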


Craig

On Dec 13, 2009, at 10:13 AM, KARR, DAVID (ATTCINW) wrote:

I'm constructing an application that at the present time and  
foreseeable

future, will just be reading from a database (Oracle) and serializing
those results to XML/JSON.  It will not be writing or modifying the
database.

I have a Controller layer, and I have a DAO layer.  I'm going to
assemble a service layer between those two, but I'm wondering  
whether I

should specify transactional semantics in that layer.  I can put
"readOnly" on it (using Spring's "Transactional" annotation), or  
perhaps

set the transaction attribute to "NOT_SUPPORTED".

I think I'd prefer to have the service layer be transactional, even if
it's only read-only, but I'm wondering what the impact will be either
way.  Will this give me overhead I don't need?  Will NOT using
transactional semantics possibly create some race condition that might
bite me somehow?


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: Issues using same domain classes in JPA and CXF/JAXB

2009-12-12 Thread Craig L Russell

Hi KARR, DAVID,

I'd say that not copying annotations over to enhanced classes is a  
deficiency, if not a bug in OpenJPA.


OpenJPA is not the only consumer of runtime annotations.

Can you please file a JIRA for this issue?

Thanks,

Craig

On Dec 12, 2009, at 2:19 PM, KARR, DAVID (ATTCINW) wrote:


I'm building an app that retrieves data with OpenJPA and tries to
serialize it in xml or json with CXF/JAXB.  I'm using annotations on  
the

domain class to specify both the logical JPA (not physical) and JAXB
behavior (with the physical JPA in XML config).  In theory I would  
think
this should work, but in my first test I found that CXF didn't  
serialize

the object that I retrieved from JPA.

After some thinking, I thought to write some debug code that prints  
out
the runtime annotations on the class, both for the class of the  
returned
instance, and the class that it's declared as.  What I found  
(because I

realized I should have expected this) is that the runtime class didn't
have the required annotations that the declared class did.  When JPA
enhanced the classes, it didn't copy the annotations.

My app currently doesn't use build-time enhancement or the  
javaagent.  I

can't remember exactly what OpenJPA does in that situation.  I think
it's still enhancing the class, but on demand.

Is this issue with non-copied annotations really an issue, or should I
look elsewhere for why CXF isn't serializing my data (I'm asking a
similar question on the CXF list)?


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Re: Fetch plan question

2009-12-07 Thread Craig L Russell


On Dec 7, 2009, at 1:44 PM, Jean-Baptiste BRIAUD -- Novlog wrote:


How do you do that ?

I'm using a new EM for one client-server network request but even in  
one client-server network request there may be several database  
request, so I would be interested to know how to you set a fetch  
plan for one request only.


There is a fetch plan for the em that can be modified. Subsequent  
operations use the modified fetch plan.


There is a fetch plan for a query that is initialized to the fetch  
plan for the em at the time you create the query. The fetch plan for  
the query can be modified and it only affects that query.


If you want to modify the fetch plan for a series of queries and then  
revert the fetch plan, take a look at  
OpenJPAEntityManager.pushFetchPlan and popFetchPlan.
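The push/pop idiom can be sketched like this — the `Magazine` entity and the "detail" fetch-group name are invented for illustration:

```java
import java.util.List;
import javax.persistence.EntityManager;
import org.apache.openjpa.persistence.FetchPlan;
import org.apache.openjpa.persistence.OpenJPAEntityManager;
import org.apache.openjpa.persistence.OpenJPAPersistence;

public class FetchScopeExample {

    public List<?> loadWithDetail(EntityManager em) {
        OpenJPAEntityManager oem = OpenJPAPersistence.cast(em);

        // Push a plan; the changes below stay in effect until popFetchPlan().
        FetchPlan plan = oem.pushFetchPlan();
        plan.addFetchGroup("detail");            // hypothetical fetch group

        try {
            return oem.createQuery("select m from Magazine m").getResultList();
        } finally {
            oem.popFetchPlan();                  // restore the previous plan
        }
    }
}
```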


Craig


On Dec 7, 2009, at 22:13 , Daryl Stultz wrote:


On Mon, Dec 7, 2009 at 3:48 PM, Jean-Baptiste BRIAUD -- Novlog <
j-b.bri...@novlog.com> wrote:


Now, thanks to you, I'm doing the following :
 final FetchPlan fetchPlan = entityManager.getFetchPlan();
 fetchPlan.clearFetchGroups();
 fetchPlan.clearFields();
 fetchPlan.removeFetchGroup(FetchGroup.NAME_DEFAULT);


FWIW, if you don't know already, modifying the fetch plan at the EM  
level
affects all subsequent queries. You can also modify the fetch plan  
for

individual queries.

--
Daryl Stultz
_
6 Degrees Software and Consulting, Inc.
http://www.6degrees.com
mailto:da...@6degrees.com




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Fetch plan question

2009-12-06 Thread Craig L Russell

Hi,

On Dec 5, 2009, at 3:27 AM, Jean-Baptiste BRIAUD -- Novlog wrote:


Hi,

I'm a big fan of fetch plan but there is something I don't understand.
I have a root class with many link or relation.
I'm using annotation and only annotation. Mose of the link are set  
with EAGER fetching and cascade ALL.
This is done so by default, fetch will be eager and action will  
cascade the attributes.

This is the wanted default behavior when no fetch plan is used.

I'm using fetch plan to override that behavior when I need it.
Unfortunately, because most of the time my relation are eagerly  
fetched by default it may have hidden bad behavior or bug to me.
When I use fetch plan to add some field to retrieve tham, it works  
but maybe due to the default behavior I have specified in the  
annotation.
However I had tested to add to fetch plan an attribute that was not  
eagerly retreived and it had worked when I was discovering the  
OpenJPA API.


The question come now : how to dynamically, using fetch plan,  
exclude (rather than adding) to the fetch plan some attribute not to  
retrieve ?


Can you show some snippet of code where you specify the fetch plan?  
When you retrieve the current fetch plan all of the defaults are in  
place so you need to remove them from the fetch plan if you don't want  
to fetch the fields.


If you have fetch groups defined you might need to remove these as  
well to completely remove the fields from the list of fields fetched.


It is something to override my default definition of fetching in the  
annotation.


I found the method fetplan.removeField(...) is it the right one to  
use to exclude an attribute from being processed (whatever it will  
be updated, read, ...) ?


Can that be mixed in the fetch plan : removeField and addField ?


Sure.


To speed up the code is there a way to exclude all attributes before  
adding the few I want to add ?
It would be quicker and simpler than having to remove one by one the  
field I didn't added.

fetplan.removeAllFields(Class) and the other way around :
fetplan.addAllFields(Class)


This sounds like a non-optimization to me. It just isn't that  
expensive to construct fetch plans.
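Mixing the two calls Craig confirms might look like this — the `Order` entity and its field names are placeholders:

```java
import javax.persistence.EntityManager;
import org.apache.openjpa.persistence.FetchPlan;
import org.apache.openjpa.persistence.OpenJPAPersistence;

public class FieldFilterExample {

    public void tunePlan(EntityManager em) {
        FetchPlan plan = OpenJPAPersistence.cast(em).getFetchPlan();

        // Exclude a relation that is mapped eager by default...
        plan.removeField(Order.class, "lineItems");
        // ...while pulling in a field that is lazy by default.
        plan.addField(Order.class, "auditTrail");
    }
}
```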


Craig


Thanks !


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: What happens with invalid query hints?

2009-12-02 Thread Craig L Russell
The specification offers this explanation (3.8.9 in the draft  
specification dated 2009-10-06):


Vendors are permitted to support the use of additional, vendor- 
specific locking hints.  Vendor-specific
hints must not use the javax.persistence namespace. Vendor-specific  
hints must be ignored if

they are not understood.

So the model is that there are three categories of hints:

1. hints that are known to OpenJPA, which are hints defined by the  
standard plus the hints that are defined by OpenJPA to extend the  
standard hints. These hints are defined in the OpenJPA name space  
"openjpa.hint.*" or in the standard hint name space  
"javax.persistence.*"


2. hints that are defined by other vendors in their own name space.

3. hints that are in the OpenJPA name space but are not known to  
OpenJPA. These are usually typos by the user. For example,  
"openjpa.hint.OOptimizeResultCount".


Category 1 hints are honored if possible (e.g. the database in use  
supports the hint).


Category 2 hints are ignored by OpenJPA. This allows you to use the  
same set of hints in a program with different persistence vendors.


Category 3 hints result in an exception. Typos should be caught and  
reported.
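Craig's three categories, written out as `setHint` calls; the Hibernate-style hint name in category 2 is purely illustrative:

```java
import javax.persistence.EntityManager;
import javax.persistence.Query;

public class HintExample {

    public void applyHints(EntityManager em) {
        Query q = em.createQuery("select p from Property p");

        // Category 1: known to OpenJPA, honored where the database allows.
        q.setHint("openjpa.hint.OptimizeResultCount", 1);

        // Category 2: another vendor's namespace -- silently ignored.
        q.setHint("org.hibernate.fetchSize", 50);

        // Category 3: OpenJPA namespace but unknown (a typo) -> exception.
        q.setHint("openjpa.hint.OOptimizeResultCount", 1);
    }
}
```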


Sometimes we developers assume that users read and understand the  
specification along with the OpenJPA user documentation. Bad assumption.


By the way, this model for hints follows the general model for  
properties in several JCP specifications.


Craig

On Dec 2, 2009, at 3:55 PM, Kevin Sutter wrote:

:-)  Yeah, that's kind of confusing, isn't it?  I'm assuming that it  
should

read as follows:

"Hints which can not be processed by a particular database are  
ignored.
Otherwise, invalid hints will result in an ArgumentException being  
thrown."


OpenJPA has defined certain hints (and JPA 2.0 has defined some some  
hints)
that could be used with Queries.  Some of these hints do not apply  
to all

databases.  So, if you try using one of the hints, for example
openjpa.hint.OptimizeResultCount and your designated database  
doesn't have
the capability to provide the necessary processing for this request,  
then it

would be ignored.

Otherwise, if you pass in a hint that OpenJPA knows nothing about  
(let's

just say openjpa.hint.FailFast), then you would receive an
ArgumentException.  The curious thing here is that this seems to  
contradict
the hint processing as defined by JPA 2.0.  I believe invalid hints  
are

supposed to be ignored to allow for programming model compatibilities
between JPA providers.  So, maybe there's some clean up necessary in  
our

documentation.

Does this help?  We should probably open a JIRA to address this
documentation hiccup.  If you don't have access to open a JIRA, then  
let me

know and I can open one.

Thanks,
Kevin

On Wed, Dec 2, 2009 at 2:36 PM, KARR, DAVID (ATTCINW)  
wrote:


I'm just reading through the docs (1.2.1) right now, and I noticed  
the

following statement in section 10.1.7,  "Query Hints":

"Invalid hints or hints which can not be processed by a particular
database are ignored. Otherwise, invalid hints will result in an
ArgumentException being thrown."

I'm a little confused by this.  Under what circumstances will a  
hint be

ignored, and when will it get an ArgumentException?



Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: OpenJPA 1.2.1 violates JPA 1.0 specification

2009-11-21 Thread Craig L Russell

Hi Amit,

Is the NoResultException the only exception in your transaction? If  
so, I agree that this should not roll back the transaction. And please  
file a JIRA for this.


"Surely there is a TCK test case" that covers this situation?
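The behavior Amit expects from section 3.7 can be sketched as follows; the entity and field names are placeholders:

```java
import javax.persistence.EntityManager;
import javax.persistence.NoResultException;

public class LookupExample {

    // Per JPA 1.0 section 3.7, catching NoResultException must leave an
    // active transaction usable; it must not be marked rollback-only.
    public Object findOrNull(EntityManager em, String id) {
        try {
            return em.createQuery("select p from Property p where p.dbID = :id")
                     .setParameter("id", id)
                     .getSingleResult();
        } catch (NoResultException e) {
            return null;   // the surrounding transaction should stay alive
        }
    }
}
```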

Craig

On Nov 21, 2009, at 6:25 AM, Amit Puri wrote:


Hi All

I have a weird problem out here which seems to be violating the JPA  
1.0

specification.
One of my query which throws a NoResultException results in the  
complete

transaction being
rollled back.I find the following in JPA spec which clearly says that
NoResultException
should not result in a roll back.

For your reference here is the section 3.7 of JPA spec from my copy.

Section 3.7 of the JPA 1.0 spec states that:
--
PersistenceException
The PersistenceException is thrown by the persistence provider when a
problem
occurs. It may be thrown to report that the invoked operation could  
not

complete because of an
unexpected error (e.g., failure of the persistence provider to open a
database connection).
All other exceptions defined by this specification are subclasses of  
the

PersistenceException.
All instances of PersistenceException except for instances of
NoResultException
and NonUniqueResultException will cause the current transaction,
if one is active, to be marked for rollback.
---

Here is the exception trace.

---
2009-11-19 12:15:58,531 ERROR user error>
org.apache.openjpa.persistence.NoResultException: The query on  
candidate

type "class ---" with filter
"---" was configured  
to have

a unique result, but no instance matched the query.
   at org.apache.openjpa.kernel.QueryImpl.singleResult(QueryImpl.java:1299)
   at org.apache.openjpa.kernel.QueryImpl.toResult(QueryImpl.java:1221)

...
...
2009-11-19 12:16:04,281 INFO  [Transaction] TX Required: Committing
transaction  
org.apache.geronimo.transaction.manager.transactioni...@4d004d

2009-11-19 12:16:04,640 WARN  [Transaction] Unexpected exception from
beforeCompletion; transaction will roll back

org.apache.openjpa.persistence.PersistenceException: The transaction  
has
been rolled back.  See the nested exceptions for details on the  
errors that

occurred.
   at org.apache.openjpa.kernel.BrokerImpl.newFlushException(BrokerImpl.java:2163)

   at org.apache.openjpa.kernel.BrokerImpl.flush(BrokerImpl.java:2010)
---

Please clarify.

Thanks
Amit


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: @Version annotation

2009-11-16 Thread Craig L Russell
import java.io.Serializable;
import javax.persistence.*;
import org.apache.openjpa.persistence.jdbc.ForeignKey;

/**
* @author Ken
*/
@Entity
@NamedQueries({
 @NamedQuery(name="all.properties",
 query="Select p From Property p"),
 @NamedQuery(name="elementTemplate.properties",
 query="Select p From Property p" +
   " where p.elementTemplate = :owner"),
 @NamedQuery(name="one.property",
 query="Select p From Property p" +
   " where p.dbID = :id")
})
@Table(name="Property")
public class Property implements Serializable {

 private static final long serialVersionUID = -696476498387460L;

 private String  dbID;
 private String  property;
 private String  value;
 private Element element;
 private ElementTemplate elementTemplate;

 /**
  * This is the default constructor.
  */
 public Property() {
 }

 /**
  * Set the property dbID.
  *
  * @param String dbID
  */
 public void setDbID(String dbID) {
   this.dbID = dbID;
 }

 /**
  * Get the property dbID.
  *
  * @return
  */
 @Id
 @GeneratedValue(strategy=GenerationType.AUTO, generator="uuid-hex")
 public String getDbID() {
   return this.dbID;
 }

 /**
  * Set the property property
  *
  * @param String property
  */
 public void setProperty(String property) {
   this.property = property;
 }

 /**
  * Get the property property.
  *
  * @return String
  */
 public String getProperty() {
   return property;
 }

 /**
  * Set the property value.
  *
  * @param String value
  */
 public void setValue(String value) {
   this.value = value;
 }

 /**
  * Get the property value.
  *
  * @return String
  */
 public String getValue() {
   return value;
 }

 /**
  * Set property element.
  *
  * @param Element element
  */
 public void setElement(Element element) {
   this.element = element;
 }

 /**
  * Get the property element.
  *
  * @return Element
  */
 @ManyToOne(fetch=FetchType.LAZY)
 @ForeignKey(name="FK_property_element")
 @JoinColumn(name="element")
 public Element getElement() {
   return element;
 }

 /**
  * Set the property elementTemplate.
  *
  * @param ElementTemplate elementTemplate
  */
 public void setElementTemplate(ElementTemplate elementTemplate) {
   this.elementTemplate = elementTemplate;
 }

 /**
  * Get the property elementTemplate.
  *
  * @return ElementTemplate
  */
 @ManyToOne(fetch=FetchType.LAZY)
 @ForeignKey(name="FK_property_elementTemplate")
 @JoinColumn(name="elementTemplate")
 public ElementTemplate getElementTemplate() {
   return elementTemplate;
 }

}

to update the element I do this:
Element element = em.find(Element.class, elementID);

//Add changements to the element


// update the element.
em.persist(element);

The update worked but I don't see any version number in my database  
field.


Please help :)


--
View this message in context: 
http://n2.nabble.com/Version-annotation-tp4013067p4013067.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.
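One thing stands out in the entity quoted above: it declares no @Version member, so the provider has no column to populate. A minimal property-access sketch — the field and column name `version` are assumptions, not from the thread:

```java
private int version;

/**
 * Optimistic-lock version, incremented by the provider on each update.
 */
@Version
@Column(name = "version")
public int getVersion() {
    return version;
}

public void setVersion(int version) {
    this.version = version;
}
```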






Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Fetchgroups recursion problem

2009-10-23 Thread Craig L Russell
One other thing to consider: there is a fetch plan for each query that  
may be different from the fetch plan for other operations. If you're  
modifying the fetch plan after the query is created, the modifications  
won't affect the query.
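In code, that means tuning the plan on the query object itself once the query exists; the JPQL and fetch-group names below are taken from the snippets in this thread:

```java
import javax.persistence.EntityManager;
import org.apache.openjpa.persistence.OpenJPAPersistence;
import org.apache.openjpa.persistence.OpenJPAQuery;

public class QueryPlanExample {

    public OpenJPAQuery buildQuery(EntityManager em) {
        // The query copies the EM's fetch plan at creation time, so later
        // changes to the EM plan are not seen; modify the query's own plan.
        OpenJPAQuery q = OpenJPAPersistence.cast(
                em.createQuery("select s from State s"));
        q.getFetchPlan()
         .setMaxFetchDepth(15)
         .addFetchGroups("State_OutgoingTransitions", "IncomingTransitions");
        return q;
    }
}
```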


Craig

On Oct 23, 2009, at 8:21 AM, Fay Wang wrote:

By default, the max fetch depth is set to -1 for no limit. However,  
this graph is an "indirect recursion", i.e., from State ->  
Transition -> State.  I think this makes Openjpa to stop  
prematurely...


Fay




- Original Message 
From: calin014 
To: users@openjpa.apache.org
Sent: Fri, October 23, 2009 2:13:56 AM
Subject: Re: Fetchgroups recursion problem


Thanks for the quick reply.

I tried that but with no luck.
I did something like this before executing the query:

OpenJPAEntityManager ojem = OpenJPAPersistence.cast(entityManager);
System.out.println("getMaxFetchDepth() before: " +
ojem.getFetchPlan().getMaxFetchDepth());
ojem.getFetchPlan().setMaxFetchDepth(15);
System.out.println("getMaxFetchDepth() after: " +
ojem.getFetchPlan().getMaxFetchDepth());
ojem.getFetchPlan().addFetchGroups("State_OutgoingTransitions",
"IncomingTransitions");

The output is:

getMaxFetchDepth() before: -1
getMaxFetchDepth() after: 15

The result is the same as described in the first post.

Am i doing it wrong?


Pinaki Poddar wrote:


Hi,
There are two attributes that control the closure of a graph as  
fetched by

a FetchPlan/FetchConfiguration, namely
  Recursion Depth
  Max Fetch Depth

According to the cited use case, Max Fetch Depth is the relevant  
attribute
that will control depth of traversal from a root entity (s1). By  
default,
the max fetch depth is set to 1 and hence the immediate neighbors  
of s1

are fetched and not s4 or s5 which is at depth 2 from s1.

Recursion depth, on the other hand, controls the depth of traversal  
for
recursive relation on types. If s1 had a recursive relation then  
recursion

depth would have controlled traversal of that relation path.




--
View this message in context: 
http://n2.nabble.com/Fetchgroups-recursion-problem-tp3874382p3877617.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.






Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Bug in Delete from query ?

2009-09-14 Thread Craig L Russell

Hi Fabien,

Sounds like a bug in the OpenJPA HSQLDB driver. Can you file a bug in  
JIRA (and if you have the time, provide a patch for the driver). Take  
a look at allowsAliasInBulkClause that configures the behavior of the  
DELETE FROM SQL translation.
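If the fix does turn out to be that dictionary flag, it would presumably be toggled through the DBDictionary plugin property in persistence.xml; the property casing and HSQL support shown here are assumptions, not verified against the driver:

```xml
<!-- persistence.xml: override a DBDictionary setting for HSQLDB -->
<property name="openjpa.jdbc.DBDictionary"
          value="hsql(AllowsAliasInBulkClause=false)"/>
```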


Thanks,

Craig

On Sep 14, 2009, at 12:19 AM, Fabien Charlet wrote:


Hello !

I'm quite new to JPA and OpenJPA.
I have developped a DAL using OpenJPA and HSQLDB for tests.

My problem is when I launch a "Delete from" query, HSQLDB complains
about a malformed SQL syntax.

Here is my query :

final Query q = entityManager.get().createQuery("DELETE FROM Log");
q.executeUpdate();

Where Log is an entity configured.
But OpenJPA generates the query :

DELETE FROM LOG t0

And HsqlDB replies :

Caused by: org.hsqldb.HsqlException: unexpected token: T0
at org.hsqldb.Error.error(Error.java:76)
at org.hsqldb.ParserBase.unexpectedToken(ParserBase.java:749)
at org.hsqldb.ParserCommand.compileStatement(ParserCommand.java:66)
at org.hsqldb.Session.compileStatement(Session.java:808)
at org.hsqldb.StatementManager.compile(StatementManager.java:418)
at org.hsqldb.Session.execute(Session.java:882)
	at org.hsqldb.jdbc.JDBCPreparedStatement.(JDBCPreparedStatement.java:3631)


I quickly looked at HsqlDB specs
(http://hsqldb.org/web/hsqlDocsFrame.html), where I find that the
query should be

DELETE FROM LOG

without the ending T0.

Is someone know this issue ?

I am using OpenJPA 1.2.1 and HsqlDB hsqldb-1.9.0-rc4

Thanks for help.

--

Cordialement

Fabien CHARLET


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Why OpenJPA rocks. was: Custom proxy : idea review needed

2009-08-05 Thread Craig L Russell

Hi Jean-Baptiste,

Thanks for your comments regarding the utility of the fetch plan in  
OpenJPA. The concept of a fetch plan with fetch groups is more than  
three years old but is still not widely adopted nor standardized in JPA.


Your description of "horizontal" versus "vertical" filtering is an  
interesting one. I hadn't heard of fetch plans described using this  
terminology before.


Thanks,

Craig

On Aug 5, 2009, at 7:33 AM, Jean-Baptiste BRIAUD -- Novlog wrote:

The need is simple: get trees of partially populated business object instances from the "big graph" of possible linked business classes.
Two rules come from old SQooL (a bad IT joke: old school, old SQL):

1. Only fetch from the DB what you need.
2. Make as few trips to the DB as possible; one request is better than several.

The DB will optimize even if the request looks complex.
In other words, let the DB optimize for you, as much as possible.

So, I'm in Java, and I want instances of my business classes with only the attributes I need (rule 1).
The attributes to be fetched or skipped can be @Basic ones or any relational ones like @ManyToOne, @OneToOne, ...

I also want to express that, ideally, in one request (rule 2).
I let the Java framework optimize for me and send as few SQL requests as possible.


Basically, there are two "filters" to distinguish:
* a vertical one that selects a few "rows" from the millions in the table: this is the WHERE clause
* a horizontal one that brings back only certain columns: this is the SELECT clause

Both are useful.

Vertical filters are well supported by all the frameworks I have had to use.

Horizontal filters have been poorly supported by the other frameworks I have had to use.
I can get back some instances (vertical filter) but with all attributes (no real horizontal filter).
Or I can get both working, but the result is an array of hash tables, and I need instances of my business classes.
This is really bad, since the hash map's keys are not the attribute (field) names but the positions in the SELECT clause!

Meta information is just lost in translation and nobody cares :-)

Some frameworks handle horizontal filtering poorly (better than not at all) via eager or lazy loading, but this only concerns relational attributes, excluding @Basic ones; also, most of the time it can't be set dynamically, and as a developer you have to provide specific constructors for your business class.
All that horizontal filtering has to be set statically, via annotations or more painfully via XML, but not dynamically.
In the end, it is a lot of work for the developer without getting a real horizontal filter.


As a conclusion, and as far as I know, OpenJPA was the only one able to give me back instances of my business classes (efficient vertical filter) partially populated according to my needs (efficient horizontal filter); both filters can be specified entirely dynamically if I want, but also statically via annotations.

This was possible with only one request.

Without any reference to an old OO DB, OpenJPA is a precious gemstone!


On Aug 5, 2009, at 15:36 , Pinaki Poddar wrote:



Hi,
I can tell OpenJPA rocks, what I did had been tested impossible to  
do with

other frameworks.


Good to know that you found OpenJPA useful.

Will you please elaborate which aspects/features of OpenJPA are  
distinct

from other frameworks in your view?





-
Pinaki
--
View this message in context: 
http://n2.nabble.com/Custom-proxy-%3A-idea-review-needed-tp3376839p3391748.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.





Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: [DISCUSS] Drop build support for Java 5?

2009-08-05 Thread Craig L Russell
behavior should be as JPA-like as possible, with the option for other
frameworks to change the configuration to suit their needs.





3. If the above appears to be a worthwhile target scenario to
support, then the dynamic class construction approach perhaps can
prove useful than hand-coding JDBC 4 dependency.



4. We take a decision regarding these aspects by mid-April and
announce it to be effective from, say, mid-June. I am not keen on
exact duration of the prior notice but 2 months looked to be
reasonable.




Fair enough. My concern lies mainly with the dynamic class construction
and the impact on performance. Introducing an additional code path in
order to support a backleveled JDK seems wrong to me. Maybe I'm too
anxious to be on the bleeding edge.

-mike








Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: MySQL ignoring BigDecimal precision.

2009-08-03 Thread Craig L Russell

Hi Chris,

Can you please file a JIRA with a reproducible test case?

Thanks,

Craig

On Aug 3, 2009, at 12:29 PM, C N Davies wrote:


I'm using MySQL and I find that when Open JPA generates the tables it
defaults any BigDecimal columns to Decimal(10,0) ignoring my  
annotation that

says I want 10,6.  I'm using:



   @Column(name = "chargerate", precision = 10, scale = 6, nullable = false)
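If the generated DDL ignores precision and scale, forcing the SQL type via columnDefinition is a possible workaround; this is a sketch only (the entity and field names are illustrative), not a confirmed fix for the reported bug:

```java
import java.math.BigDecimal;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Charge {
    @Id
    private long id;

    // columnDefinition replaces the SQL type the provider would generate,
    // so MySQL should get DECIMAL(10,6) regardless of dictionary defaults
    @Column(name = "chargerate", columnDefinition = "DECIMAL(10,6)", nullable = false)
    private BigDecimal chargeRate;
}
```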



The manual says there is an issue with Floats and Doubles but doesn't
mention BigDecimals:



"Floats and doubles may lose precision when stored in some  
datastores."




I'm using InnoDB tables on MySQL 5.1.32



Thanks



Chris







Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Shouldn't cascade all delete orphans?

2009-08-03 Thread Craig L Russell

Hi Chris,

Cascade ALL should not, and OpenJPA does not, include remove orphans.  
Simply removing an Entity from a relationship should not normally  
cause a life cycle change in the Entity.


There should be, and is in OpenJPA, a separate annotation for this  
kind of relationship. IIRC, remove orphans is being added in the JPA  
2.0 spec.
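In JPA 2.0 that separate annotation surfaced as the orphanRemoval attribute on the relationship. A minimal sketch (the entity names echo the question below; the Asset entity is reduced to an id):

```java
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
class Asset {
    @Id @GeneratedValue
    Long id;
}

@Entity
public class AssetRegister {
    @Id @GeneratedValue
    private Long id;

    // orphanRemoval = true deletes an Asset row once it is removed from
    // this collection; CascadeType.ALL alone does not imply that
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Asset> assets;
}
```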


Regards,

Craig

On Aug 3, 2009, at 11:08 AM, C N Davies wrote:

I'm 90% sure this worked OK when I was using Hibernate, but it doesn't
seem to work on OpenJPA. I have an entity which contains a collection;
each item of this collection is an entity, so the collection member

is annotated as cascade ALL. If I programmatically remove one of the
entities from the collection and then persist the main entity, the
object that was referenced in the list is not deleted from the DB. Is
this the expected

behaviour?



Here is a sample of what I mean:



Here is the main entity which as you see has a collection of Assets  
in it.




public class AssetRegister {

    @OneToMany(cascade = {CascadeType.ALL})

    private List<Asset> assets;

}



In my code I might do like this:



AssetRegister ar =  



List<Asset> assets = ar.getAssets();

assets.remove(asset);

ar.setAssets(assets);





The add and the delete to/from the collection work fine, and the join
in the asset register table is removed; however, the asset entity
itself is not removed from the asset table, and since it is a
OneToMany relationship the

asset is now orphaned.



I know I can programmatically delete the asset entity, but I want to
know if

this is the expected behaviour?



Thanks for any advice



Chris





Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Persist issue in multithreaded environment

2009-08-03 Thread Craig L Russell

Hi,

There should not be any need to set the Multithreaded flag to  
correctly use exactly one entity manager for a thread. The entity  
manager factory itself is thread safe.


The usual symptom for a multithreading problem is an exception thrown  
inside an implementation method that can't be explained by application  
code.


What are your symptoms?

Regards,

Craig

On Aug 3, 2009, at 6:43 AM, Claudio Di Vita wrote:





Jean-Baptiste BRIAUD -- Novlog wrote:


Did you compare, using ==, what you think are different instances of
EntityManager?
Reading your message tends to suggest that the EntityManager is shared,
and when you think you got a new one it is not a new one.



Why do I have to check an instance returned by a ThreadLocal variable??

I use the following ThreadLocal:

private static class ThreadLocalEntityManager extends
        ThreadLocal<EntityManager> {

    /* (non-Javadoc)
     * @see java.lang.ThreadLocal#get()
     */
    @Override
    public EntityManager get() {

        /* Get the current entity manager */
        EntityManager em = super.get();

        /* The entity manager was closed */
        if (!em.isOpen()) {

            /* Create a new entity manager */
            em = factory.createEntityManager();

            /* Update the entity manager */
            set(em);
        }

        return em;
    }

    /* (non-Javadoc)
     * @see java.lang.ThreadLocal#initialValue()
     */
    @Override
    protected EntityManager initialValue() {

        return factory.createEntityManager();
    }
}

Where factory is a static EntityManagerFactory.

What is going wrong ??

-
Not everything that can be counted counts, and not everything that  
counts can

be counted - Albert Einstein
--
View this message in context: 
http://n2.nabble.com/Persist-issue-in-multithreaded-environment-tp3377510p3377741.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Type of query's result

2009-07-28 Thread Craig L Russell

Hi,

If what you want is the result as an A with some number of fields  
populated, you can use a fetch plan with a fetch group that calls for  
just the fields that you need.


In my experience, there is not much performance difference between  
fetching 1 primitive field versus 10 primitive fields from the  
database. If you fetch 100 or 1000, there might be a difference.
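The fetch-plan approach can be sketched with OpenJPA's fetch-group extensions; the fetch-group name, entity, and fields below are illustrative, and this assumes the org.apache.openjpa.persistence API:

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import org.apache.openjpa.persistence.FetchAttribute;
import org.apache.openjpa.persistence.FetchGroup;
import org.apache.openjpa.persistence.OpenJPAEntityManager;
import org.apache.openjpa.persistence.OpenJPAPersistence;

@Entity
@FetchGroup(name = "nameOnly", attributes = { @FetchAttribute(name = "name") })
class Customer {
    @Id long id;
    String name;
    String address;   // not in the fetch group, so not loaded with it
}

class FetchPlanDemo {
    // Activate the custom fetch group; subsequent finds and queries on
    // this EntityManager load the fields the group lists (plus the
    // default fetch group, unless it is removed from the plan).
    static void useNameOnly(EntityManager em) {
        OpenJPAEntityManager oem = OpenJPAPersistence.cast(em);
        oem.getFetchPlan().addFetchGroup("nameOnly");
    }
}
```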


Craig

On Jul 28, 2009, at 1:08 PM, Jean-Baptiste BRIAUD -- Novlog wrote:


OK, that will solve from 40 to 60% of my problem :-)
In fact, if I understood correctly, this needs a specific constructor.

When request is static, it solve the problem.
But when the request is dynamic (the request's string provided at  
runtime) I can't provide the corresponding constructor at runtime.

Any idea for that cases ?

Related question: does it save a lot of time and resources to
restrict the attributes retrieved from the database?
In fact, one solution might be to always retrieve all attributes...
but I always learned from plain old SQL to use the select part of the
SQL request to restrict as much as possible the columns to retrieve.


Any ideas welcome !


On Jul 28, 2009, at 20:37 , Luis Fernando Planella Gonzalez wrote:


Use select new A(a.attribute1, a.attribute2) from A a
Create a constructor to accommodate the received parameters.
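A sketch of such a constructor-expression target; the class name and field types are illustrative, not from the thread:

```java
// Target of a JPQL constructor expression such as:
//   SELECT NEW com.example.ASummary(a.attribute1, a.attribute2) FROM A a
// The constructor parameters must line up with the SELECT items.
class ASummary {
    private final String attribute1;
    private final int attribute2;

    ASummary(String attribute1, int attribute2) {
        this.attribute1 = attribute1;
        this.attribute2 = attribute2;
    }

    String getAttribute1() { return attribute1; }
    int getAttribute2() { return attribute2; }
}
```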

On Tuesday, 28 July 2009, at 15:27:31, Jean-Baptiste BRIAUD --
Novlog wrote:

Hi,

If I use the following request, SELECT a FROM A a, I get a List<A>,
which is perfectly fine.
If I use SELECT a.attribute1, a.attribute2 FROM A a, I get a
List where each element is an array of 2 values, for
attribute1 and attribute2.

How can I have a List<A> where only attribute1 and attribute2 are set

instead of that List?

Thanks.
PS : I sent this message 3 hours ago but can't see it on the list...
so I decided to write this one.







Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: openJPA generates select per row - impossible to use for simple select statements

2009-07-15 Thread Craig L Russell

Hi,

Could I ask again what the code is doing, following the  
query.getResultList() ?


Where is the list of results being used? Are the results being  
serialized or detached?


Thanks,

Craig

On Jul 15, 2009, at 12:19 AM, om wrote:



Hi All!

I’m new to OpenJPA, and didn’t manage to get over the problem with a
simple
select statement for one object after a few days of investigation.  
Please

help!

For simple select from one object, OpenJPA ( same strategy for 1.0,  
1.2.1,
2.0) first generates the right query to retrieve all rows and fields,
and then

starts generating a query per object by primary key.

Code:

  StringBuffer queryBuf = new StringBuffer(
      "SELECT a FROM Account AS a WHERE a.date = :date ");
  PersistenceProviderImpl impl = new PersistenceProviderImpl();
  OpenJPAEntityManagerFactory fac =
      impl.createEntityManagerFactory("MainPersistence", System.getProperties());
  OpenJPAEntityManager man = fac.createEntityManager();
  Query query = man.createQuery(queryBuf.toString());
  query.setParameter("date", reportDate);
  List res = query.getResultList();


LOG TRACE

[7/14/09 16:57:50:475 MSD]  R 266  MainPersistence  TRACE   
openjpa.Runtime -
Query "SELECT a FROM Account AS a WHERE a.date = :date " is cached  
as target

query "null"  
[7/14/09 16:57:50:475 MSD] R 266  MainPersistence  TRACE   
openjpa.Query -
Executing query: [SELECT a FROM Account AS a WHERE a.date = :date]  
with

parameters: {date=java.util.GregorianCalendar[]}
[7/14/09 16:57:50:475 MSD] R 266  MainPersistence  TRACE   
openjpa.jdbc.SQL
-  executing prepstmnt 1388597956  
SELECT

t0.id, t0.version, t0.cur_code, t0.acc_date, t0.mask, t0.acc_name,
t0.acc_seq, t0.value FROM ACCOUNT t0 WHERE (t0.acc_date = ?)
[params=(Timestamp) 2009-07-03 00:00:00.0]
[7/14/09 16:57:50:553 MSD] R 344  MainPersistence  TRACE   
openjpa.jdbc.SQL -

 [78 ms] spent

[7/14/09 16:57:50:553 MSD] R 344  MainPersistence  TRACE   
openjpa.jdbc.SQL -

 executing prepstmnt 139855958 SELECT
t0.mask, t0.acc_name, t0.acc_seq, t0.value FROM ACCOUNT t0 WHERE  
t0.id = ?

[params=(long) 328]
[7/14/09 16:57:50:631 MSD] R 422  MainPersistence  TRACE   
[WebContainer : 2]

openjpa.jdbc.SQL -  [78 ms] spent
[7/14/09 16:57:50:631 MSD] R 422  MainPersistence  TRACE   
[WebContainer : 2]

openjpa.jdbc.SQL -  executing prepstmnt
646850190 SELECT t0.mask, t0.acc_name, t0.acc_seq, t0.value FROM  
ACCOUNT t0

WHERE t0.id = ? [params=(long) 329]
[7/14/09 16:57:50:709 MSD] R 500  MainPersistence  TRACE   
[WebContainer : 2]

openjpa.jdbc.SQL -  [78 ms] spent
[7/14/09 16:57:50:709 MSD] R 500  MainPersistence  TRACE   
[WebContainer : 2]

openjpa.jdbc.SQL -  executing prepstmnt
2146074602 SELECT t0.mask, t0.acc_name, t0.acc_seq, t0.value FROM  
ACCOUNT t0

WHERE t0.id = ? [params=(long) 330]
[7/14/09 16:57:50:787 MSD] R 578  MainPersistence  TRACE   
[WebContainer : 2]

openjpa.jdbc.SQL -  [78 ms] spent
..


I need just a list of detached objects to show in a grid. As seen
from the log trace above, the first query is enough to return all
necessary

objects and fields.
Why does OpenJPA make a select per object after that? In this case the
simple code above takes 37 seconds to retrieve 440 rows, while the
same JDBC

select and wrap takes 1.5 sec. I’ve tried different query hints and a
few OpenJPA versions, but with no result.


--
View this message in context: 
http://n2.nabble.com/openJPA-generates-select-per-row---impossible-to-use-for-simple-select-statements-tp3261512p3261512.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: openJPA generates select per row - impossible to use for select statements

2009-07-14 Thread Craig L Russell

Hi Mikhail V. Ostryanin.

The code you show is not sufficient to generate the log output.

What happens after you get the result list? What code actually  
generates the individual select calls?


Regards,

Craig

On Jul 14, 2009, at 7:07 AM, Ostryanin, Mikhail wrote:


Hi!

I’m new to OpenJPA, and didn’t manage to get over the problem with a
simple select statement for one object after a few days of
investigation. Please help!



For a simple select from one object, OpenJPA (same strategy for 1.0,
1.2.1, 2.0) first generates the right query to retrieve all fields,
and then starts generating a query per object by primary key.


Code:

  StringBuffer queryBuf = new StringBuffer(
      "SELECT a FROM Account AS a WHERE a.date = :date ");

  PersistenceProviderImpl impl = new PersistenceProviderImpl();

  OpenJPAEntityManagerFactory fac =
      impl.createEntityManagerFactory("MainPersistence", System.getProperties());

  OpenJPAEntityManager man = fac.createEntityManager();

  Query query = man.createQuery(queryBuf.toString());

  query.setParameter("date", reportDate);

  List res = query.getResultList();


LOG TRACE


[7/14/09 16:57:50:475 MSD]  R 266  MainPersistence  TRACE   
openjpa.Runtime - Query "SELECT a FROM Account AS a WHERE a.date  
= :date " is cached as target query "null"


[7/14/09 16:57:50:475 MSD] R 266  MainPersistence  TRACE   
openjpa.Query - Executing query: [SELECT a FROM Account AS a WHERE  
a.date = :date] with parameters: {date=java.util.GregorianCalendar[]}


 [7/14/09 16:57:50:475 MSD] R 266  MainPersistence  TRACE   
openjpa.jdbc.SQL -  executing  
prepstmnt 1388597956 SELECT t0.id, t0.version, t0.cur_code,  
t0.acc_date, t0.mask, t0.acc_name, t0.acc_seq, t0.value FROM ACCOUNT  
t0 WHERE (t0.acc_date = ?) [params=(Timestamp) 2009-07-03 00:00:00.0]


[7/14/09 16:57:50:553 MSD] R 344  MainPersistence  TRACE   
openjpa.jdbc.SQL -  [78 ms] spent



[7/14/09 16:57:50:553 MSD] R 344  MainPersistence  TRACE   
openjpa.jdbc.SQL -  executing  
prepstmnt 139855958 SELECT t0.mask, t0.acc_name, t0.acc_seq,  
t0.value FROM ACCOUNT t0 WHERE t0.id = ? [params=(long) 328]


[7/14/09 16:57:50:631 MSD] R 422  MainPersistence  TRACE   
[WebContainer : 2] openjpa.jdbc.SQL - 1329090360> [78 ms] spent


[7/14/09 16:57:50:631 MSD] R 422  MainPersistence  TRACE   
[WebContainer : 2] openjpa.jdbc.SQL - 1329090360> executing prepstmnt 646850190 SELECT t0.mask,  
t0.acc_name, t0.acc_seq, t0.value FROM ACCOUNT t0 WHERE t0.id = ?  
[params=(long) 329]


[7/14/09 16:57:50:709 MSD] R 500  MainPersistence  TRACE   
[WebContainer : 2] openjpa.jdbc.SQL - 1329090360> [78 ms] spent


[7/14/09 16:57:50:709 MSD] R 500  MainPersistence  TRACE   
[WebContainer : 2] openjpa.jdbc.SQL - 1329090360> executing prepstmnt 2146074602 SELECT t0.mask,  
t0.acc_name, t0.acc_seq, t0.value FROM ACCOUNT t0 WHERE t0.id = ?  
[params=(long) 330]


[7/14/09 16:57:50:787 MSD] R 578  MainPersistence  TRACE   
[WebContainer : 2] openjpa.jdbc.SQL - 1329090360> [78 ms] spent


..


I need just a list of detached objects to show in a grid. As seen
from the log trace above, the first query is enough to return all
necessary objects and fields.


Why does OpenJPA make a select per object after that? In this case the
simple code above takes 37 seconds to retrieve 440 rows, while the
same JDBC select and wrap takes 1.5 sec. I’ve tried different query
hints and a few OpenJPA versions, but with no result.




Best regards,

Mikhail V. Ostryanin
Sr.Developer/Analyst
UBS, Moscow

Tel: +7 495 648 22 14

mikhail.ostrya...@ubs.com



Visit our website at http://www.ubs.com

This message contains confidential information and is intended only
for the individual named.  If you are not the named addressee you
should not disseminate, distribute or copy this e-mail.  Please
notify the sender immediately by e-mail if you have received this
e-mail by mistake and delete this e-mail from your system.

E-mails are not encrypted and cannot be guaranteed to be secure or
error-free as information could be intercepted, corrupted, lost,
destroyed, arrive late or incomplete, or contain viruses.  The sender
therefore does not accept liability for any errors or omissions in the
contents of this message which arise as a result of e-mail  
transmission.

If verification is required please request a hard-copy version.  This
message is provided for informational purposes and should not be
construed as a solicitation or offer to buy or sell any securities
or related financial instruments.


UBS reserves the right to retain all messages. Messages are protected
and accessed only in legally justified cases.


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: JoinColumn: match its length to the referenced column?

2009-07-08 Thread Craig L Russell


On Jul 8, 2009, at 12:28 PM, Laird Nelson wrote:

One more bug confirmation for today: when generating DDL, I am  
noticing that
OpenJPA does not match the length or type of a JoinColumn to its  
referenced

column.

That is, suppose the referenced column is defined like this:
@Column(name = "title", length = 4, columnDefinition = "CHAR(4)")

...and in another class the JoinColumn is defined like this:
@JoinColumn(name = "title", referencedColumnName = "title")

When I run the DDL machinery in OpenJPA, the first column is correctly
defined as being a CHAR(4).  But the foreign key column is defined  
as being

VARCHAR(255).

EclipseLink handles this the way I would expect; I haven't tried  
Hibernate.


Should I file a JIRA on this?


Sure. Thanks,

Craig



Thanks,
Laird


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Bug confirmation? Cannot have more than one unique column

2009-07-08 Thread Craig L Russell

Sounds like a bug. If you open a JIRA it can be confirmed.

Then you can provide a patch and someone will test and commit it. ;-)

Craig

On Jul 8, 2009, at 11:46 AM, Laird Nelson wrote:


I wanted to confirm this bug here before I recorded it.

If I have two columns in my entity that are both marked @Column( ...  
unique
= true), then the DDL generation machinery of OpenJPA attempts to  
create two
unique constraints, one for each column--so far so good--but with  
the same
constraint name of _UNQ.  The H2 database, at least, does not permit  
two

unique constraints to have the same identifier.

Can someone confirm that this is a bug, and, if so, I will enter it  
into

JIRA.

Thanks,
Laird


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: [VOTE] OpenJPA Logo Contest

2009-07-07 Thread Craig L Russell


On Jul 7, 2009, at 9:00 AM, Kevin Sutter wrote:


Cool.  Can I now change my vote?  I now like #18 the best...  :-)


Generally, you can change your vote any time before voting ends...  
Just update the wiki page.


Craig


First - #18
Second - #13
Third - #10

Also, is the url a "given" or not?  I noticed that some of the  
submissions
have the url integrated with the design.  It would be nice to have  
the url
available for a quick reference, but is that a given wherever the  
logo is

used or not?

Thanks,
Kevin

On Tue, Jul 7, 2009 at 10:37 AM, Donald Woods   
wrote:



Two late entries (#16 and #17) from Pid have been added.
A requested variation for #13 has been added as #18.

Also, some modifications/clarification on voting -
The current voting will run through 23:59 GMT Thursday, July 9.
Then a "finalist" round of the top 5 logos will start and run  
through 23:50

GMT Tuesday, July 14.


-Donald



Donald Woods wrote:


It's time to vote on your favorite logo!

Anyone can vote for up to 3 logos and logo submitters can vote for  
their

own logos. For example,

First - #13
Second - #15
Third - #7

Please include all of your votes in a single email reply to this  
thread
and use the index on the wiki page to denote which entry you are  
voting on

(1 - 15.)


-Donald


Donald Woods wrote:


Announcing the OpenJPA Logo Contest!

Submissions: Accepted now through June 30, 2009
Rules and Guidelines: See the Logo Contest page [1] for more  
details.

Voting: Will occur from July 1 through July 14.
Winner: Will be announced on or after July 15.

[1] http://cwiki.apache.org/openjpa/logo-contest.html


-Donald






Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Best practice to avoid duplicates on @Column(unique=true)

2009-06-19 Thread Craig L Russell

Hi David,

On Jun 19, 2009, at 2:28 PM, David Goodenough wrote:


I realise that this is not strictly an OpenJPA problem, rather a more
general JPA one, but there are experts here who might know the answer.

I have a number of classes which represent tables which have not only
an Id field, but also various other fields which are marked as unique.

When I persist and then try to flush a new object which has a non- 
unique
value in the object (the user entered bad data) it breaks the  
transaction

and throws an error.  All of which is quite understandable.

The question is what is the best way to avoid it.  Do I have to  
build the

checking into the application, or is there a more generic way which I
can use as a validation technique before I try to persist the object.


You could check each user-entered field against the database by using  
a JPAQL query, e.g. "SELECT FROM FOO WHERE uniqueField1 EQ ? OR  
uniqueField2 EQ ?" and fail if there is already an instance in the  
database.
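One way to phrase that pre-check is sketched below as a small helper that builds a COUNT query string; the entity and field names are placeholders you would substitute for your own:

```java
// Builds JPQL like:
//   SELECT COUNT(f) FROM Foo f WHERE f.name = :p0 OR f.email = :p1
// Bind the user-entered values to :p0, :p1, ... and treat a non-zero
// count as a duplicate before attempting the persist/flush.
class UniqueCheck {
    static String buildQuery(String entity, String... fields) {
        StringBuilder jpql = new StringBuilder("SELECT COUNT(f) FROM ")
                .append(entity).append(" f WHERE ");
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) jpql.append(" OR ");
            jpql.append("f.").append(fields[i]).append(" = :p").append(i);
        }
        return jpql.toString();
    }
}
```

As noted next, such a check is still subject to transaction isolation, so the flush can fail anyway.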


But of course due to transaction isolation you might still encounter  
an exception when you flush. Depending on your database, and the  
isolation level you use, the above query might just lock the range  
that you query for... Ask your database vendor.


Craig



David


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Use getters/setters only

2009-06-16 Thread Craig L Russell

Hi Daryl,

On Jun 11, 2009, at 12:24 PM, Daryl Stultz wrote:


I am using field access. I placed my annotations on the fields simply
because I like them there. I didn't realize there was a functional
difference. Is there any advantage/disadvantage to field vs property  
access?
It seems property access has the potential gotcha, while field does  
not.

Perhaps there is some other cost...


For me, property access should only be used with persistent abstract  
classes or persistent interfaces. In these cases, the properties are  
implemented by OpenJPA and the user has no opportunity for mischief.


In the case of non-abstract classes, IMHO using properties mixes two  
concerns: the client view (which has to use the accessors to get to  
the data) and the persistence implementation view (which has to use  
the accessors to store the data in the database). When using property  
access for non-abstract classes, you can't put business logic into the  
accessors because this interferes with the persistence  
implementation's use of the accessors.
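A sketch of the field-access style, with business logic kept safely in the accessor (the entity and field names are illustrative):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

// Annotations on fields select field access: the provider reads and
// writes the fields directly, so the getter below can carry business
// logic without interfering with persistence.
@Entity
public class Customer {
    @Id
    private long id;

    private String name;

    public String getName() {
        // trimming here never disturbs what is stored in the database
        return name == null ? "" : name.trim();
    }

    public void setName(String name) {
        this.name = name;
    }
}
```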


HTH,

Craig

Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Can we have an entity with no @Id?

2009-05-20 Thread Craig L Russell

Hi,

Tables without primary keys is one of the features of JDO that was not  
adopted by JPA.


Maybe you should look at JDO implementations.

Craig

On May 20, 2009, at 8:12 AM, is_maximum wrote:



Hello

To Andrei: because it is simple to create an object and send it to be
persisted, this could nevertheless be a good idea.

And to Kevin: if in the secondary table we have only a foreign key to
distinguish records, that would be enough, because the id of the
secondary table is not used anywhere. All we need from these two
tables is a report that tells us what kind of events (secondary table)
occurred for a specific operation at a specific time (master table);
order is not important, since the time of the event orders the records
correctly. So what is this ID good for? If the ID exists, we have to
either create an Oracle sequence (a bottleneck in the database because
it takes up cache, particularly in a clustered environment), or select
the maximum ID (which requires scanning all the records), or create a
sequence table managed by the ORM (which ends up as a select statement
followed by an update), or create the id manually (which is almost
impossible).


Kevin Sutter wrote:


Hi is_maximum,
I'm still a little confused by your scenario.  Following your  
described
scenario...  Your master table would have an Id field, but your  
secondary

table would not have an explicit Id field.  The foreign key from your
master
to secondary would just be some arbitrary column from the secondary  
table?

Do I have that right?  And, why would removing an Id field help with
performance?  You mention to get rid of its sequence, but there's no
requirement to define an Id field with a sequence.

Even though I'm still a little confused by your scenario, there are a
couple
of items to be aware of from an OpenJPA perspective.  The JPA spec
requires
an Id field, but OpenJPA does not require one.  Well, not exactly.
Instead
of declaring an explicit Id field, you could instead declare an Id  
via the
@DataStoreId annotation [1].  This hides the Id field from your  
Entity

definition, but under the covers we still use an implicit Id field in
order
to insert and find records.
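The datastore-identity option can be sketched as below; this assumes OpenJPA's @DataStoreId extension annotation, and the entity name and fields are illustrative:

```java
import javax.persistence.Entity;
import org.apache.openjpa.persistence.DataStoreId;

// No @Id field in the class: OpenJPA manages a surrogate identity
// column under the covers and uses it for inserts and finds.
@Entity
@DataStoreId
public class EventDetail {
    private String eventType;
    private String message;
}
```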

Another possibility is coming with the updated JPA 2 specification  
and the
use of derived identities.  I'm probably stretching this one a bit,  
but

this
support would allow you to derive an identity for an Entity based  
on the
identity of a "parent" Entity.  This is normally used when the  
dependent
Entity is the owner of a many-to-one or one-to-one relationship to  
the
parent entity.  Here again, OpenJPA provides a similar  
functionality with

their Entities as Identity fields.

So, bottom line is that some type of Identity is required for proper
Entity
definition and usage.  But, OpenJPA (and eventually the JPA 2 spec)
provides
for some methods to get around the explicit definition of an Id  
field.


Hope this helps,
Kevin

[1]
http://openjpa.apache.org/builds/latest/docs/manual/manual.html#ref_guide_pc_oid_datastore
[2]
http://openjpa.apache.org/builds/latest/docs/manual/manual.html#ref_guide_pc_oid_entitypk

On Wed, May 20, 2009 at 7:14 AM, is_maximum  wrote:



Hi
We have some tables in which the id is not important and is actually
useless.
For example, we have two logging tables: one is the master and the
other keeps details. A foreign key from the master table is enough,
and the detail table has no relationship with other tables, so if we
remove

its
ID we can get rid of its sequence. This would be great in terms of
performance, since this table is expected to hold lots of records
and

its
purpose is to preserve events that took place in the system.

Now my question is how to remove the @Id from the entity, since
OpenJPA
complains if the entity has no field marked as id.

Thanks
--
View this message in context:
http://n2.nabble.com/Can-we-have-an-entity-with-no-%40Id--tp2945752p2945752.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.








-
--
Regards
Mohammad
http://pixelshot.wordpress.com Pixelshot
--
View this message in context: 
http://n2.nabble.com/Can-we-have-an-entity-with-no-%40Id--tp2945752p2946693.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.



Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: @Version annotated Field not included in SELECT (prepstmnt)

2009-05-20 Thread Craig L Russell

Hi Heiko,


The following types are supported for version properties: int,
Integer, short, Short, long, Long, Timestamp.


Try changing the type of the version field and see if that helps.
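Applied to the entity quoted below, the change would look like this sketch, swapping the unsupported BigDecimal for long:

```java
import javax.persistence.Column;
import javax.persistence.Version;

public class PartnerRolle {
    // long is among the supported version types
    // (int, Integer, short, Short, long, Long, Timestamp)
    @Version
    @Column(name = "VERSION")
    private long version;
}
```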

If so, please file a JIRA to give an error message instead of a silent  
fail.


Thanks,

Craig

On May 20, 2009, at 2:10 AM, it-media.k...@daimler.com wrote:


Hello,

I'm having an issue in openJPA 1.2.1 with a entity model as follows:

@Entity
@Table(name = "PARTNER", schema = "PART")
@NamedQuery(name = "getPartner",
            query = "SELECT p FROM Partner p where p.partKey = :partKey")
public class Partner
{
    @Id
    @Column(name = "PART_KEY")
    private BigDecimal partKey;

    @OneToMany(mappedBy = "partner", fetch = FetchType.LAZY)
    private List<PartnerRolle> rollen;
}

@Entity
@Table(name = "PARTNERROLLE")
public class PartnerRolle
{
    @EmbeddedId
    private PartnerRolleKey key;

    @Version
    @Column(name = "VERSION")
    private BigDecimal version;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "PART_KEY")
    private Partner partner;
}

When I ask for a Partner and later on call getRollen(), in the
corresponding SELECT (prepstmt) the column VERSION of PARTNERROLLE is
NOT queried (not listed in the SELECT). When I remove the @Version
annotation, it works correctly and VERSION is filled with the correct
value. It

only works as expected when I do NOT put @Version on that column.

I doubt this is correct behaviour, because even though it is a simple
query, ALL annotated columns should be returned, whether or not they
changed. I

was relying on the return of the VERSION field and ran into a
NullPointerException because the BigDecimal was not correctly
instantiated.


Any suggestion, even if it is "this is correct behaviour, because ..."
would be appreciated.

Thanx,

Best regards,

Heiko

If you are not the intended addressee, please inform us immediately  
that you have received this e-mail in error, and delete it. We thank  
you for your cooperation.


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: strange JPA Enhance stack

2009-05-19 Thread Craig L Russell

Hi Marc,

As Mike says, we could be a bit more friendly. By the way, SERP has  
been very reliable so this hasn't been much of an issue to date.


Could you please file a JIRA so we don't forget about it?

Thanks,

Craig

On May 19, 2009, at 5:53 AM, Michael Dick wrote:


Hi Marc,

Sounds like an unchecked exception thrown by Serp and we could be a  
bit
friendlier about how we handle it. At least dumping the classname  
that we're
trying to load would help. Adding a try / catch might be helpful.  
Which

version of OpenJPA are you using?

My guess is that it is OpenJDK related and the compiled class is not
matching Serp's expectation. I haven't seen this exception before  
though and

I can't tell you why you're getting it.

-mike

On Tue, May 19, 2009 at 3:46 AM, Marc Logemann   
wrote:



David,

no, have not done this. It's quite a huge effort to create a project from
scratch, deploy it to SCM, and configure TeamCity to "compile/test" it. I
still think it's not a library issue because I use Ivy within Ant and the
build system resolves libs the same way as it's done on each developer
machine. The stack doesn't look like a missing lib to me. And it's definitely
also not an Ant issue because I am using the same Ant target on the dev
machines.

I hoped somebody who developed the enhancer could tell me something about the
stack. I mean something like "this problem can only occur under this or that
condition".

In any case... do we agree that an IllegalArgumentException in SERP shouldn't
be the first exception on the stack? I would definitely expect an enhancer
exception saying what's wrong at an upper level, something like "Enhancement
failed on class X". The root cause would of course be the SERP
exception.

---
regards
Marc Logemann
http://www.logemann.org
http://www.logentis.de




Am 18.05.2009 um 16:29 schrieb Rick Curtis:




-Marc

Any luck with the suggestion that David made?


David Beer-2 wrote:



On Sat, 16 May 2009 15:32:36 +0200
Marc Logemann  wrote:

Hi Marc

It can't be OpenJDK 6 related, as I use it here for both my
development and continuous build system (Hudson under Tomcat).

Are you using Ant or Maven with the build process? I have seen on lists
that this can sometimes be a problem. Can you create a small project
which has, say, just one class to enhance and see if that works through
your build system? I am thinking that it may be a classpath or library
problem.

David




--
View this message in context:
http://n2.nabble.com/strange-JPA-Enhance-stack- 
tp2912505p2933425.html

Sent from the OpenJPA Users mailing list archive at Nabble.com.






Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Runtime Enhancement: Problems with Ant Task in Eclipse

2009-05-18 Thread Craig L Russell

Hi Naomi,

On May 18, 2009, at 1:11 PM, naomi-...@gmx.de wrote:


Hi,

so far, I just worked with one binary folder and did not create jars
from the project either. I checked the whole workspace and the Eclipse
classpath but did not find any double classes or imports. I even
removed all classes from the binary folder and rebuilt them.

I debugged my application and had a look at the entity classes. They
contain a pcDetachedState and a pcStateManager field. Is this a
sign that the classes have been enhanced, or do these fields also
occur in unenhanced classes?


These are a sign that the classes have been enhanced. I'm stumped.  
Perhaps someone could give you the location of the code inside openjpa  
that checks to see if the classes have been enhanced. There might be a  
bug there.
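For a quick runtime sanity check, something along these lines can be used to look for the enhancer-woven fields. This is a sketch of my own (not an OpenJPA API), and the "pc" name-prefix heuristic is an assumption based on the pcStateManager/pcDetachedState fields mentioned above:

```java
import java.lang.reflect.Field;

// Sketch: heuristically detect bytecode enhancement by looking for the
// fields the OpenJPA enhancer weaves in (e.g. pcStateManager,
// pcDetachedState). Not an OpenJPA API - just reflection over field names.
public class EnhancementCheck {
    public static boolean looksEnhanced(Class<?> c) {
        for (Field f : c.getDeclaredFields()) {
            if (f.getName().startsWith("pc")) {
                return true; // found an enhancer-added field
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // String is obviously not an enhanced entity, so this prints false.
        System.out.println(looksEnhanced(String.class));
    }
}
```

Pass it one of your entity classes after the build; if it returns false, the class that was loaded is not the enhanced one.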


Craig



-Naomi

 Original-Nachricht 

Datum: Mon, 18 May 2009 09:17:33 -0700
Von: Craig L Russell 
An: users@openjpa.apache.org
Betreff: Re: Runtime Enhancement: Problems with Ant Task in Eclipse



Hi Naomi,

On May 18, 2009, at 8:34 AM, naomi-...@gmx.de wrote:


Hey David,

thank you for the tip!

I checked one of my entities with javap:

1. After a clean and manually invoking the enhance task
2. After execution of my application

Both times the class has pc* methods, so it seems that they are
enhanced and not overwritten by Eclipse.
So why the error message? :(


Perhaps there is a packaging issue. Is it possible that there are
multiple versions of the classes in your classpath when you run your
project?

Regards,

Craig



-Naomi

 Original-Nachricht 

Datum: Mon, 18 May 2009 11:13:02 -0400
Von: David Ezzio 
An: users@openjpa.apache.org
Betreff: Re: Runtime Enhancement: Problems with Ant Task in Eclipse



Hi Naomi,

Three easy ways to verify that your classes have been enhanced.

One, install DJ Decompiler (Windows) or another decompiler and verify
that the class file is enhanced.

Two, run the JDK command:
javap -c <fully.qualified.EntityClass>
and look for a bunch of methods with names that start with "pc".

Three, do a clean (unenhanced compile) and note file sizes.
Then do the enhancement step and expect to see a 4+ KB gain in file
sizes for enhanced files.

Once you can tell unequivocally whether a file is enhanced, I'm  
sure

you'll have luck troubleshooting the issue.

Cheers,

David

naomi-...@gmx.de wrote:

Hi Rick,

I also had that thought, but the message lists all of my entities,
so I

think they all have not been enhanced.


I based my setup on the following tutorial for creating and invoking the build
script:






http://webspherepersistence.blogspot.com/2009/04/openjpa-enhancement-eclipse-builder.html


I edited the XML to avoid setting and checking the arguments and

additionally added my entities to the build path.


I also invoked the script on Eclipse's "Clean" process (does not
make

sense, I think, does it?) and I started it "manually" (right click
on the
enhance task and choosing "run as" - "Ant build").


-Naomi

 Original-Nachricht 

Datum: Mon, 18 May 2009 07:06:16 -0700 (PDT)
Von: Rick Curtis 
An: users@openjpa.apache.org
Betreff: Re: Runtime Enhancement: Problems with Ant Task in  
Eclipse



Is it possible that only a portion of your Entities are being enhanced by the
build script? How are you invoking the ant build script?

-Rick
--
View this message in context:




http://n2.nabble.com/Runtime-Enhancement%3A-Problems-with-Ant-Task-in-Eclipse-tp2932839p2933295.html

Sent from the OpenJPA Users mailing list archive at Nabble.com.





Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Runtime Enhancement: Problems with Ant Task in Eclipse

2009-05-18 Thread Craig L Russell

Hi Naomi,

On May 18, 2009, at 8:34 AM, naomi-...@gmx.de wrote:


Hey David,

thank you for the tip!

I checked one of my entities with javap:

1. After a clean and manually invoking the enhance task
2. After execution of my application

Both times the class has pc* methods, so it seems that they are  
enhanced and not overwritten by Eclipse.

So why the error message? :(


Perhaps there is a packaging issue. Is it possible that there are  
multiple versions of the classes in your classpath when you run your  
project?


Regards,

Craig



-Naomi

 Original-Nachricht 

Datum: Mon, 18 May 2009 11:13:02 -0400
Von: David Ezzio 
An: users@openjpa.apache.org
Betreff: Re: Runtime Enhancement: Problems with Ant Task in Eclipse



Hi Naomi,

Three easy ways to verify that your classes have been enhanced.

One, install DJ Decompiler (Windows) or another decompiler and verify
that the class file is enhanced.

Two, run the JDK command:
javap -c <fully.qualified.EntityClass>
and look for a bunch of methods with names that start with "pc".

Three, do a clean (unenhanced compile) and note file sizes.
Then do the enhancement step and expect to see a 4+ KB gain in file
sizes for enhanced files.

Once you can tell unequivocally whether a file is enhanced, I'm sure
you'll have luck troubleshooting the issue.

Cheers,

David

naomi-...@gmx.de wrote:

Hi Rick,

I also had that thought, but the message lists all of my entities,  
so I

think they all have not been enhanced.


I based my setup on the following tutorial for creating and invoking the build
script:




http://webspherepersistence.blogspot.com/2009/04/openjpa-enhancement-eclipse-builder.html


I edited the XML to avoid setting and checking the arguments and

additionally added my entities to the build path.


I also invoked the script on Eclipse's "Clean" process (does not  
make
sense, I think, does it?) and I started it "manually" (right click  
on the

enhance task and choosing "run as" - "Ant build").


-Naomi

 Original-Nachricht 

Datum: Mon, 18 May 2009 07:06:16 -0700 (PDT)
Von: Rick Curtis 
An: users@openjpa.apache.org
Betreff: Re: Runtime Enhancement: Problems with Ant Task in Eclipse


Is it possible that only a portion of your Entities are being enhanced by the
build script? How are you invoking the ant build script?

-Rick
--
View this message in context:


http://n2.nabble.com/Runtime-Enhancement%3A-Problems-with-Ant-Task-in-Eclipse-tp2932839p2933295.html

Sent from the OpenJPA Users mailing list archive at Nabble.com.






Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: TABLE_PER_CLASS does not work as described in JSR-220 (JPA-1.0)

2009-05-11 Thread Craig L Russell

Hi Serge,

Could you please open a JIRA so we can track this issue?

Thanks,

Craig

On May 11, 2009, at 11:03 AM, Serge Bogatyrev wrote:

This mixed strategy can be very useful, but obviously there is an
incompatibility with the standard. I think an additional annotation
could be used to change the standard behaviour. So, in this example the strict
TABLE_PER_CLASS strategy should work without any additional
annotations, and to get the mixed strategy we should add an @Inheritance
annotation to some class in the hierarchy (e.g. AbstractFoo).


Pinaki Poddar:
You are right. OpenJPA should use the inheritance strategy used at  
the root

of the hierarchy throughout the derived tree. The extra strategy
specification perhaps is resulting from the facility to support mixed
strategy. Needs further investigation...


hallmit wrote:

Thanks guys, I put  
@Inheritance(strategy=InheritanceType.TABLE_PER_CLASS)

to AbstractFoo and now it works fine...I have also put
@Inheritance(strategy=InheritanceType.TABLE_PER_CLASS) to the first
concrete class of my class hierarchy otherwise the SINGLE_TABLE  
strategy

is used for the branch. 
I thought it was sufficient to annotate the root class with
TABLE_PER_CLASS and so all classes in the hierarchy would have the  
same

strategy...apparently not for OpenJPA...

The JSR 220 (JPA 1.0) spec says:

"The Inheritance annotation defines the inheritance strategy to be  
used

for an entity class hierarchy.
It is specified on the entity class that is the root of the entity  
class

hierarchy."
...

In any case thank you very much for your help!







-
Pinaki Poddar  http://ppoddar.blogspot.com/
 http://www.linkedin.com/in/pinakipoddar
OpenJPA PMC Member/Committer
JPA Expert Group Member





Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: TABLE_PER_CLASS does not work as described in JSR-200 (JPA-1.0)

2009-05-07 Thread Craig L Russell

Hi,

OpenJPA allows mixing inheritance types in the hierarchy.

Can you try annotating each class with the @Inheritance annotation?

The inheritance strategy for the abstract class should be subclass to  
avoid having a table for the abstract class.


Craig

On May 7, 2009, at 7:30 AM, Leonardo NOLETO wrote:



Hi,

I have a big problem with TABLE_PER_CLASS mapping... I want to map each
concrete class in my class hierarchy to a separate table (in accordance with
JSR-220 page 39 - Table per Concrete Class Strategy).

When I use OpenJPA as provider, I have the following behavior :

@Entity
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
abstract class RootClass { /* persistent properties */ }

@Entity
@Table(name="BAR_TABLE")
class Bar extends RootClass {...}

@Entity
abstract class AbstractFoo extends RootClass { /* extra attributes */ }

@Entity
@Table(name="FOO1_TABLE")
class Foo1 extends AbstractFoo {...}

@Entity
@Table(name="FOO2_TABLE")
class Foo2 extends AbstractFoo {...}

The image below helps to visualize my problem:

http://n2.nabble.com/file/n2828426/tpc_issue_openjpa2.png

If I use Hibernate as provider this works fine.

Does anyone know why?

I use openjpa-1.2.0.

--
View this message in context: 
http://n2.nabble.com/TABLE_PER_CLASS-does-not-work-as-described-in-JSR-200-%28JPA-1.0%29-tp2828426p2828426.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.



Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: OpenJPA replaces a collection field

2009-04-29 Thread Craig L Russell

Hi Paul,

Thanks for following up on this.

On Apr 28, 2009, at 11:19 PM, Paul Copeland wrote:

Here is an OpenJPA 1.2.1 example that demonstrates the "OpenJPA  
replaces a collection field" problem. This may not be technically a  
bug but it can be a problem for the unaware user.


The JPA specification is silent on this issue.

However, the JDO specification requires that the collection field be  
replaced at makePersistent (not flush). Since OpenJPA is based on  
Kodo, which implemented JDO, this behavior is historic.


The reason for replacing the user's collection is to be able to track  
changes made to the collection after makePersistent and before commit.  
The reason to do it at makePersistent is so the user knows when it was  
done.


This happens with a new persisted object with a collection field  
that is initially null.  If the null collection is initialized to an  
empty collection it will correctly register additions until it is  
flushed.  Then the collection field is replaced and the old  
collection is stale.





In this example the flush is an explicit call.  But the same problem  
happens when the object is implicitly flushed to get the generated  
identity for a foreign key.


This is more interesting. Does the implicit flush not happen at commit  
or user flush?


Craig



In the main program the behavior changes when "em.flush()" is  
commented out.


= LazyLoadTest.java ==
package com.jpatest;

import javax.persistence.*;
import java.util.*;

import com.jpatest.persistence.*;

public class LazyLoadTest
{
   public static void main(String args[])
   throws Throwable
   {
   EntityManagerFactory emf = Persistence.createEntityManagerFactory("jpatest");
   EntityManager em = emf.createEntityManager();
   em.getTransaction().begin();
   ListHolder holder = new ListHolder();
   em.persist(holder);
   List<ListMember> listReference = holder.getMembers();
   em.persist(new ListMember(holder));
   System.err.println("id=" + holder.id + " size[expect=1]=" + listReference.size());

   em.persist(new ListMember(holder));
   System.err.println("id=" + holder.id + " size[expect=2]=" + listReference.size());

   em.flush(); // this causes the problem when uncommented
   em.persist(new ListMember(holder));
   System.err.println("id=" + holder.id + " size[expect=3]=" + listReference.size());

   em.getTransaction().commit();
   em.close();
   }
}


= ListHolder.java ==
package com.jpatest.persistence;

import java.util.*;
import javax.persistence.*;

@Entity
@Table (name="list_holder")
public class ListHolder
  implements java.io.Serializable
{
  @GeneratedValue(strategy=GenerationType.IDENTITY)
  @Id public long id;
  @Version private long version;

  @OneToMany(mappedBy="holder", fetch=FetchType.LAZY,
 cascade={CascadeType.PERSIST, CascadeType.REMOVE})
  private List<ListMember> members;

  public ListHolder() {}

  public List<ListMember> getMembers()
  {
  if (members == null) {
  System.err.println(" ListHolder members == null");
  members = new ArrayList<ListMember>();
  }
  return members;
  }
}

= ListMember.java ==
package com.jpatest.persistence;

import javax.persistence.*;

@Entity
@Table (name="list_member")
public class ListMember
  implements java.io.Serializable
{
  @GeneratedValue(strategy=GenerationType.IDENTITY)
  @Id private long id;
  @Version private long version;

  @ManyToOne(optional=false, fetch=FetchType.LAZY, cascade=CascadeType.ALL)
  private ListHolder holder;

  protected ListMember() {}
  public ListMember(ListHolder holder)
  {
  this.holder = holder;
  holder.getMembers().add(this);
  }
}
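The effect described above can be reproduced in plain Java without any provider involved. The sketch below is my own stand-in (not OpenJPA types); the field swap simulates the collection being replaced by a tracking proxy at flush time:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: why a reference captured from the collection field goes stale
// once the field itself is replaced. "Holder" is a stand-in for the entity;
// the field swap simulates what a provider may do at flush.
public class StaleReferenceDemo {
    static class Holder {
        List<String> members = new ArrayList<>();
    }

    static int[] run() {
        Holder h = new Holder();
        List<String> kept = h.members;          // caller keeps the raw field value
        h.members.add("a");
        h.members = new ArrayList<>(h.members); // simulated proxy replacement
        h.members.add("b");
        return new int[] { kept.size(), h.members.size() };
    }

    public static void main(String[] args) {
        int[] sizes = run();
        System.out.println("stale reference sees " + sizes[0]); // 1
        System.out.println("owner's field sees " + sizes[1]);   // 2
    }
}
```

Reading the collection through the owner (holder.getMembers()) after the flush avoids the stale view.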




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



smime.p7s
Description: S/MIME cryptographic signature


Re: Does OpenJPA replace Collections?

2009-04-13 Thread Craig L Russell

Hi Paul,

On Apr 13, 2009, at 8:51 PM, Paul Copeland wrote:


Hi Craig -

Do you mean a JIRA about the case where "OpenJPA will decide to  
replace the collection"?  I agree it is not a bug, just a potential  
issue for the unwary.


If OpenJPA replaces collections without some good reason, I think it's  
worth a JIRA to make sure it isn't a bug.



Would it be useful to create a small example and post it here?


Yes.

Craig



- Paul

On 4/13/2009 7:41 PM, Craig L Russell wrote:

Hi Paul,

On Apr 13, 2009, at 5:15 PM, Paul Copeland wrote:

Craig - Thanks for the responses. This confirms that for a new  
Entity a collection field may be null unless the application  
initializes it.


When you say "flushed" does that include calling  
EntityManager.flush() before the transaction is committed?  The  
spec says the field can be null until it is "fetched".  My  
expectation is that the field may remain null even after calling  
EntityManager.flush().


The surprising thing is that when you add elements to such an  
application initialized empty collection, in some situations  
OpenJPA will decide to replace the collection.  At that point if  
you are holding a reference to  the value returned by  
getMyPcList() that collection will then be stale, possibly leading  
to inconsistent results for the caller.


This is worth a JIRA if only to clarify the code and the behavior  
that the code exposes.


Craig



- Paul

(other comments below)

On 4/13/2009 4:18 PM, Craig L Russell wrote:

Hi Paul,

On Apr 13, 2009, at 9:04 AM, Paul Copeland wrote:

Are there any responses from the OpenJPA experts on my two  
assertions below?  If the assertions seem wrong I will put  
together examples that demonstrate the behavior.  If the  
assertions are correct that is not necessary.


From JPA Spec Section 2.1.7 - "If there are no associated  
entities for a multi-valued relationship of an entity fetched  
from the database,
the persistence provider is responsible for returning an empty  
collection as the value of the relationship."


Note the words "fetched from the database". My reading of this  
is that if the Entity is new and has not been flushed to the  
database (even though persist() has been called) the value could  
be null rather than an empty collection. So the behavior of  
OpenJPA returning null (assertion #1) would be consistent with  
the spec.


That's how I read it as well. Until the new entity is flushed,  
there's no reason to have the entity manager provider messing  
with the values.



- Paul

On 4/9/2009 12:22 PM, Paul Copeland wrote:

Thanks for the assistance Craig -

Here are two assertions that I have observed in my testing with  
OpenJPA 1.2.1 -


(1) A Field Access persistent collection has a null value when  
the field is accessed if the collection is empty. This is the  
state of the field in the transaction after the entity is first  
persisted before the transaction is committed (these are the  
conditions that occur in my process).  Corollary - the null  
field is NOT automatically changed to an empty Collection when  
first accessed. A method returning the collection field will  
return null.


This is discussed above. The entity is not "fetched" but rather  
newly persisted.



(2) The value of a null collection field (state as in #1 above)  
that has been assigned to an initialized non-null value may be  
automatically replaced before the transaction is committed at  
which point references to the assigned value will be stale and  
no longer updated (for instance when entities are added to the collection).


This is discussed above. Until flush, any user changes to the  
collection should be reflected in the database.


But one other thing to consider. It's the application's  
responsibility to manage both sides of a relationship to be  
consistent at commit. So if you're looking to update only the  
other side of a relationship you're in trouble unless you use  
some OpenJPA special techniques.


Good point about updating both sides of the relation.  In this  
case I am using the OpenJPA API to detect if the other side has  
not been loaded yet and only updating the other side when  
necessary.  This is to avoid loading a potentially very large  
collection that is not going to be used during the life of that  
EntityManager.  If and when the other side is loaded OpenJPA will  
include the new elements then.


This does not change the question about null or empty collections  
however.



Craig



If the experts believe either of these assertions are incorrect  
then I definitely want to investigate further.


- Paul

(further comments below)


On 4/9/2009 11:13 AM, Craig L Russell wrote:

Hi Paul,

On Apr 9, 2009, at 9:40 AM, Paul Copeland wrote:


Couple of clarifications -

A lazily loaded FIELD ACCESS collection is a null value when  
initially accessed if the Collection is EMPTY.

Re: Does OpenJPA replace Collections?

2009-04-13 Thread Craig L Russell

Hi Paul,

On Apr 13, 2009, at 5:15 PM, Paul Copeland wrote:

Craig - Thanks for the responses. This confirms that for a new  
Entity a collection field may be null unless the application  
initializes it.


When you say "flushed" does that include calling  
EntityManager.flush() before the transaction is committed?  The spec  
says the field can be null until it is "fetched".  My expectation is  
that the field may remain null even after calling  
EntityManager.flush().


The surprising thing is that when you add elements to such an  
application initialized empty collection, in some situations OpenJPA  
will decide to replace the collection.  At that point if you are  
holding a reference to  the value returned by getMyPcList() that  
collection will then be stale, possibly leading to inconsistent  
results for the caller.


This is worth a JIRA if only to clarify the code and the behavior that  
the code exposes.


Craig



- Paul

(other comments below)

On 4/13/2009 4:18 PM, Craig L Russell wrote:

Hi Paul,

On Apr 13, 2009, at 9:04 AM, Paul Copeland wrote:

Are there any responses from the OpenJPA experts on my two  
assertions below?  If the assertions seem wrong I will put  
together examples that demonstrate the behavior.  If the  
assertions are correct that is not necessary.


From JPA Spec Section 2.1.7 - "If there are no associated entities  
for a multi-valued relationship of an entity fetched from the  
database,
the persistence provider is responsible for returning an empty  
collection as the value of the relationship."


Note the words "fetched from the database". My reading of this is  
that if the Entity is new and has not been flushed to the database  
(even though persist() has been called) the value could be null  
rather than an empty collection. So the behavior of OpenJPA  
returning null (assertion #1) would be consistent with the spec.


That's how I read it as well. Until the new entity is flushed,  
there's no reason to have the entity manager provider messing with  
the values.



- Paul

On 4/9/2009 12:22 PM, Paul Copeland wrote:

Thanks for the assistance Craig -

Here are two assertions that I have observed in my testing with  
OpenJPA 1.2.1 -


(1) A Field Access persistent collection has a null value when  
the field is accessed if the collection is empty. This is the  
state of the field in the transaction after the entity is first  
persisted before the transaction is committed (these are the  
conditions that occur in my process).  Corollary - the null field  
is NOT automatically changed to an empty Collection when first  
accessed. A method returning the collection field will return null.


This is discussed above. The entity is not "fetched" but rather  
newly persisted.



(2) The value of a null collection field (state as in #1 above)  
that has been assigned to an initialized non-null value may be  
automatically replaced before the transaction is committed at  
which point references to the assigned value will be stale and no  
longer updated (for instance when entities are added to the collection).


This is discussed above. Until flush, any user changes to the  
collection should be reflected in the database.


But one other thing to consider. It's the application's  
responsibility to manage both sides of a relationship to be  
consistent at commit. So if you're looking to update only the other  
side of a relationship you're in trouble unless you use some  
OpenJPA special techniques.


Good point about updating both sides of the relation.  In this case  
I am using the OpenJPA API to detect if the other side has not been  
loaded yet and only updating the other side when necessary.  This is  
to avoid loading a potentially very large collection that is not  
going to be used during the life of that EntityManager.  If and when  
the other side is loaded OpenJPA will include the new elements then.


This does not change the question about null or empty collections  
however.



Craig



If the experts believe either of these assertions are incorrect  
then I definitely want to investigate further.


- Paul

(further comments below)


On 4/9/2009 11:13 AM, Craig L Russell wrote:

Hi Paul,

On Apr 9, 2009, at 9:40 AM, Paul Copeland wrote:


Couple of clarifications -

A lazily loaded FIELD ACCESS collection is a null value when  
initially accessed if the Collection is EMPTY (I said "null"  
incorrectly below).


My comment below was intended to compare your "if null then  
initialize" paradigm with my "initialize to an empty collection  
during construction". So if the first time you access the  
collection it is null your code sets the value to an empty  
collection. My recommended code would never encounter a null  
collection.


Your way works (as do other ways).  :-)



The test I have shows this behavior for a newly persisted  
Entity during the same transaction where em.persist(entity) is called.

Re: Does OpenJPA replace Collections?

2009-04-13 Thread Craig L Russell

Hi Paul,

On Apr 13, 2009, at 9:04 AM, Paul Copeland wrote:

Are there any responses from the OpenJPA experts on my two  
assertions below?  If the assertions seem wrong I will put together  
examples that demonstrate the behavior.  If the assertions are  
correct that is not necessary.


From JPA Spec Section 2.1.7 - "If there are no associated entities  
for a multi-valued relationship of an entity fetched from the  
database,
the persistence provider is responsible for returning an empty  
collection as the value of the relationship."


Note the words "fetched from the database". My reading of this is  
that if the Entity is new and has not been flushed to the database  
(even though persist() has been called) the value could be null  
rather than an empty collection. So the behavior of OpenJPA  
returning null (assertion #1) would be consistent with the spec.


That's how I read it as well. Until the new entity is flushed, there's  
no reason to have the entity manager provider messing with the values.



- Paul

On 4/9/2009 12:22 PM, Paul Copeland wrote:

Thanks for the assistance Craig -

Here are two assertions that I have observed in my testing with  
OpenJPA 1.2.1 -


(1) A Field Access persistent collection has a null value when the  
field is accessed if the collection is empty. This is the state of  
the field in the transaction after the entity is first persisted  
before the transaction is committed (these are the conditions that  
occur in my process).  Corollary - the null field is NOT  
automatically changed to an empty Collection when first accessed. A  
method returning the collection field will return null.


This is discussed above. The entity is not "fetched" but rather newly  
persisted.



(2) The value of a null collection field (state as in #1 above)  
that has been assigned to an initialized non-null value may be  
automatically replaced before the transaction is committed at which  
point references to the assigned value will be stale and no longer  
updated (for instance when entities are added to the collection).


This is discussed above. Until flush, any user changes to the  
collection should be reflected in the database.


But one other thing to consider. It's the application's responsibility  
to manage both sides of a relationship to be consistent at commit. So  
if you're looking to update only the other side of a relationship  
you're in trouble unless you use some OpenJPA special techniques.
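In plain Java, keeping both sides consistent is usually done in a constructor or an "add" helper, as the ListMember example in this thread does. A minimal sketch with my own stand-in classes (no JPA involved):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: maintain both sides of a bidirectional one-to-many by hand.
// Stand-in classes only; the provider persists what the owning side says,
// but in-memory consistency of both sides is the application's job.
public class BothSidesDemo {
    static class Holder {
        final List<Member> members = new ArrayList<>();
    }

    static class Member {
        Holder holder;
        Member(Holder h) {
            this.holder = h;     // set the owning side...
            h.members.add(this); // ...and mirror it on the inverse side
        }
    }

    public static void main(String[] args) {
        Holder h = new Holder();
        Member m = new Member(h);
        System.out.println(h.members.contains(m) && m.holder == h); // true
    }
}
```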


Craig



If the experts believe either of these assertions are incorrect  
then I definitely want to investigate further.


- Paul

(further comments below)


On 4/9/2009 11:13 AM, Craig L Russell wrote:

Hi Paul,

On Apr 9, 2009, at 9:40 AM, Paul Copeland wrote:


Couple of clarifications -

A lazily loaded FIELD ACCESS collection is a null value when  
initially accessed if the Collection is EMPTY (I said "null"  
incorrectly below).


My comment below was intended to compare your "if null then  
initialize" paradigm with my "initialize to an empty collection  
during construction". So if the first time you access the  
collection it is null your code sets the value to an empty  
collection. My recommended code would never encounter a null  
collection.


Your way works (as do other ways).  :-)



The test I have shows this behavior for a newly persisted Entity  
during the same transaction where em.persist(entity) is called.  
This is with a LAZY loaded collection.


During persist, the provider should not replace fields. Replacing  
fields behavior should happen at commit (flush) time. So if you  
never explicitly initialize a field, it should have its Java  
default value until flush.


This is NOT what I am seeing.  In fact the replacement happens  
during the transaction under certain conditions where the proxy is  
apparently created during the transaction some time after the call  
to em.persist(entity) and before commit.




If you're talking about wrapping the persistent collection with an  
unmodifiable collection then you're talking about adding more  
objects. I thought you were trying to avoid any object construction?


I would construct the unmodifiable collection (if the idiom worked)  
only if and when the value is accessed and has already been  
loaded.  Other things being equal, I don't want to construct tens  
of thousands of Collections in a tight loop that are never used.   
Given database latencies it is a small point in overall  
performance.  As I said, there are good arguments either way and  
your recommendation is one reasonable approach, but apparently not  
a JPA requirement.




In some applications there is a difference between an empty  
collection and a null collection. There are properties that allow  
that behavior to be implemented as well, although that's non- 
standard and a bit more complicated.


It might be easier to

Re: Does OpenJPA replace Collections?

2009-04-09 Thread Craig L Russell

Hi Paul,

On Apr 9, 2009, at 9:40 AM, Paul Copeland wrote:


Couple of clarifications -

A lazily loaded FIELD ACCESS collection is a null value when  
initially accessed if the Collection is EMPTY (I said "null"  
incorrectly below).


My comment below was intended to compare your "if null then  
initialize" paradigm with my "initialize to an empty collection during  
construction". So if the first time you access the collection it is  
null your code sets the value to an empty collection. My recommended  
code would never encounter a null collection.
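The two styles being compared can be sketched side by side in plain Java (stand-in classes, not the entities from this thread; note the field is deliberately not final, since a provider may need to replace it):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the two initialization styles under discussion.
public class InitStyles {
    // Style 1: lazy "if null then initialize" in the accessor.
    static class LazyHolder {
        private List<String> members; // null until first accessed
        List<String> getMembers() {
            if (members == null) {
                members = new ArrayList<>();
            }
            return members;
        }
    }

    // Style 2: initialize at construction; callers never observe null.
    static class EagerHolder {
        private List<String> members = new ArrayList<>();
        List<String> getMembers() { return members; }
    }

    public static void main(String[] args) {
        System.out.println(new LazyHolder().getMembers().isEmpty());  // true
        System.out.println(new EagerHolder().getMembers().isEmpty()); // true
    }
}
```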


The test I have shows this behavior for a newly persisted Entity  
during the same transaction where em.persist(entity) is called. This  
is with a LAZY loaded collection.


During persist, the provider should not replace fields. Replacing  
fields behavior should happen at commit (flush) time. So if you never  
explicitly initialize a field, it should have its Java default value  
until flush.


If you're talking about wrapping the persistent collection with an  
unmodifiable collection then you're talking about adding more objects.  
I thought you were trying to avoid any object construction?


In some applications there is a difference between an empty collection  
and a null collection. There are properties that allow that behavior  
to be implemented as well, although that's non-standard and a bit more  
complicated.


It might be easier to look at a test case because I think we're  
talking past each other.


Craig



On 4/9/2009 9:26 AM, Paul Copeland wrote:

Hi Craig -

My experience is not what you are describing.  A lazily loaded  
FIELD ACCESS collection is a null value when initially accessed if  
the Collection is null (possibly a PROPERTY ACCESS collection  
behaves differently as mentioned by Pinaki; I haven't tested that).


To repeat what is below -

   getMyPcList()

returns null if the Collection is empty unless you initialize the  
value with "new ArrayList()".  This is what my testing shows with  
1.2.1 - I wish it weren't this way since that might make it  
possible to use the Collections.unmodifiableList() idiom (as it is  
that idiom has unreliable behavior).   If the experts are pretty  
sure that I am wrong about this then I definitely want to  
investigate it further.  I'd like to hear more.
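For reference, the unmodifiable-list idiom Paul refers to (Collections.unmodifiableList) wraps the live backing list as a view rather than copying it, which is why it only behaves reliably when the backing field is guaranteed non-null. A plain-Java sketch, with no persistence provider involved:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UnmodifiableViewDemo {
    public static void main(String[] args) {
        List<String> backing = new ArrayList<String>();
        // The view delegates to the backing list; no elements are copied.
        List<String> view = Collections.unmodifiableList(backing);

        backing.add("a");                  // changes show through the view
        System.out.println(view.size());   // 1

        try {
            view.add("b");                 // callers cannot mutate the view
        } catch (UnsupportedOperationException e) {
            System.out.println("read-only");
        }
    }
}
```

Because the wrapper is a single small object per call, constructing it lazily in a getter is cheap; the fragile part is only that Collections.unmodifiableList(null) throws a NullPointerException.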


I don't think you have given a reason to require initializing the  
Collection at construction time or at first access -- there are  
reasonable aesthetic and performance arguments either way.


- Paul


On 4/9/2009 7:01 AM, Craig L Russell wrote:

Hi Paul,

I like to think of entities as POJOs first, so I can test them  
without requiring them to be persistent. So if you want code to be  
able to add elements to collections, the collections must not be  
null.


If you construct the field as null and then "lazily" instantiate  
an empty collection, then anyway you end up with an empty  
collection the first time you access the field. And constructing  
an empty collection should not be even a blip on your performance  
metric.


Considering everything, I still recommend that you instantiate an  
empty collection when you construct an entity.


Craig

On Apr 8, 2009, at 10:21 AM, Paul Copeland wrote:


Pinaki -

I tried your suggestion of not initializing the value of myPcList  
and I get a null pointer exception when adding to an empty list.


I noticed your example was for Property access and Russell (and  
I) were talking about Field access.  Do you agree that it is  
necessary to initialize an empty list when using Field access?


On Craig's advice to always construct a new ArrayList(), why is  
that necessary instead of just constructing it in the getter when  
it tests to null?  Otherwise you are constructing an ArrayList  
that is unnecessary when the List is NOT empty (usually) and also  
unnecessary in the case of LAZY loading if the List is never  
accessed (perhaps also a frequent case).  In some applications  
you might create lots of these objects and normal optimization is  
to avoid calling constructors unnecessarily.  Just want to be  
clear about whether it is necessary.


- Paul

On 4/8/2009 9:43 AM, Paul Copeland wrote:

Thanks Pinaki -

I think you are saying that at some point the proxy object does  
replace the local List.  Is that right?


I have seen that model - if (myPcList == null) myPcList = new  
ArrayList() - in various examples (not sure where now).  Thanks  
for clearing that up.  But then Craig Russell contradicts you in  
his reply (below) where he recommends always initializing the  
Collection in the constructor (which seems like a performance  
anti-pattern of wasted constructor calls since usually it will  
be replaced by the proxy).   Are you and Craig saying opposite  
things here?


In my testing when the List is empty - (myPcList == null) - does  
indeed evaluate to true.


  

Re: Does OpenJPA replace Collections?

2009-04-09 Thread Craig L Russell

Hi Paul,

I like to think of entities as POJOs first, so I can test them without  
requiring them to be persistent. So if you want code to be able to add  
elements to collections, the collections must not be null.


If you construct the field as null and then "lazily" instantiate an  
empty collection, then anyway you end up with an empty collection the  
first time you access the field. And constructing an empty collection  
should not be even a blip on your performance metric.


Considering everything, I still recommend that you instantiate an  
empty collection when you construct an entity.


Craig

On Apr 8, 2009, at 10:21 AM, Paul Copeland wrote:


Pinaki -

I tried your suggestion of not initializing the value of myPcList  
and I get a null pointer exception when adding to an empty list.


I noticed your example was for Property access and Russell (and I)  
were talking about Field access.  Do you agree that it is necessary  
to initialize an empty list when using Field access?


On Craig's advice to always construct a new ArrayList(), why is that  
necessary instead of just constructing it in the getter when it  
tests to null?  Otherwise you are constructing an ArrayList that is  
unnecessary when the List is NOT empty (usually) and also  
unnecessary in the case of LAZY loading if the List is never  
accessed (perhaps also a frequent case).  In some applications you  
might create lots of these objects and normal optimization is to  
avoid calling constructors unnecessarily.  Just want to be clear  
about whether it is necessary.


- Paul

On 4/8/2009 9:43 AM, Paul Copeland wrote:

Thanks Pinaki -

I think you are saying that at some point the proxy object does  
replace the local List.  Is that right?


I have seen that model - if (myPcList == null) myPcList = new  
ArrayList() - in various examples (not sure where now).  Thanks for  
clearing that up.  But then Craig Russell contradicts you in his  
reply (below) where he recommends always initializing the  
Collection in the constructor (which seems like a performance anti- 
pattern of wasted constructor calls since usually it will be  
replaced by the proxy).   Are you and Craig saying opposite things  
here?


In my testing when the List is empty - (myPcList == null) - does  
indeed evaluate to true.


 getMyPcList().add(new MyPcObject())

Therefore I thought the above would cause a null pointer exception  
when the List is empty.  You say that won't happen so I'll give it  
a try!


- Paul


On 4/8/2009 3:16 AM, Pinaki Poddar wrote:

Hi,
According to JPA spec:
"If there are no associated entities for a multi-valued  
relationship of an entity fetched from the database,
the persistence provider is responsible for returning an empty  
collection as the value of the relationship."


That is what OpenJPA does. So the application do not need to  
return an empty list for a null (initialized) list.


OpenJPA proxies all returned collections. So application code can  
simply do the following


// In the domain class
private List<MyPcObject> myPcList = null; // never explicitly initialized

@OneToMany(mappedBy="ownerSide", fetch=FetchType.LAZY, cascade=CascadeType.PERSIST)
public List<MyPcObject> getMyPcList() {
    return myPcList; // return as it is
}

// In the application
List<MyPcObject> list = owner.getMyPcList();
assertNotNull(list);
assertTrue(java.util.List.class.isInstance(list));
assertNotSame(java.util.ArrayList.class, list.getClass());
list.add(new MyPcObject());
owner.setMyPcList(list);




On Apr 7, 2009, at 11:10 PM, Paul Copeland wrote:



Can OpenJPA replace a Collection when it is loaded?

With the code below when the list is initially empty you need to   
create a List (ArrayList) so you can add elements to it. When I   
persisted new objects on the ManyToOne side and added them to  
the  List that worked.  But the first time the List was loaded it  
seemed  to replace my ArrayList with the newly loaded data and  
made an older  reference to the ArrayList stale (no longer  
updated when more  elements were added to myPcList).  This was  
all in one transaction.


So now I wonder if the initial null List is a special case or if   
OpenJPA might replace the Collection anytime it decides to load  
it  again.  Anyone know the answer?




If the list is persistent and the class is enhanced, the  
collection  will always reflect what's in the database.


If I don't create an initial ArrayList how can I add elements  
when  the List is empty?




I'd recommend always having a non-empty list. Initialize it in  
the  constructor to an empty list and don't check it after that.


Here's what it would look like:

@OneToMany(mappedBy="ownerSide", fetch=FetchType.LAZY, cascade=CascadeType.PERSIST)
private List<MyPcObject> myPcList = new ArrayList<MyPcObject>();

List<MyPcObject> getMyPcList()
{
    return myPcList;
}
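Stripped of the persistence annotations, the trade-off under discussion can be shown in plain Java. This is only a sketch using the thread's hypothetical MyPcObject name; no OpenJPA behavior is involved:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the thread's hypothetical element type.
class MyPcObject {}

class Owner {
    // Craig's recommendation: the field is never null after construction.
    private List<MyPcObject> myPcList = new ArrayList<MyPcObject>();

    List<MyPcObject> getMyPcList() { return myPcList; }
}

class NullableOwner {
    // The "leave it null" style: the getter can return null.
    private List<MyPcObject> myPcList;

    List<MyPcObject> getMyPcList() { return myPcList; }
}

public class CollectionInitDemo {
    public static void main(String[] args) {
        Owner owner = new Owner();
        owner.getMyPcList().add(new MyPcObject());      // safe: list exists
        System.out.println(owner.getMyPcList().size()); // 1

        NullableOwner other = new NullableOwner();
        try {
            other.getMyPcList().add(new MyPcObject());
        } catch (NullPointerException e) {
            System.out.println("NPE");                  // this branch runs
        }
    }
}
```

With the field initialized at construction, callers never need a null check; with the nullable style, every caller must guard against a NullPointerException.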




Craig
Craig L Russell
Architect, Sun Java Enterprise System http://db.a

Re: detachment of fetch-groups doesnt work

2009-04-09 Thread Craig L Russell

It's worth a JIRA.

Detachment occurs either at context end or at explicit detach.

Does this anomalous behavior occur if you explicitly detach your  
entities, or only implicitly at context end?


And have you changed the openjpa.DetachState configuration property  
which governs how objects are detached?
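For reference, that property is set in persistence.xml. A minimal sketch, assuming the documented values loaded (the default), fetch-groups, and all:

```xml
<persistence-unit name="example">
  <properties>
    <!-- Controls which fields remain available after detachment. -->
    <property name="openjpa.DetachState" value="fetch-groups"/>
  </properties>
</persistence-unit>
```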


Craig

On Apr 8, 2009, at 11:57 PM, Marc Logemann wrote:


is this worth a JIRA ?

---
regards
Marc Logemann
http://www.logemann.org
http://www.logentis.de




Am 08.04.2009 um 23:50 schrieb Tedman Leung:


just as additional information, this is also true about any lazy
PersistentCollection too. I found that if I accessed the collection  
while
it is in an attached state the values are available, but as soon as  
it
becomes detached the collection becomes null even if I had just  
accessed

it prior to detachment.

I'm not entirely sure if this is a bug or if it's supposed to work  
this way

when detached.

On Wed, Apr 08, 2009 at 05:18:48PM +0200, Marc Logemann wrote:

Hi,

with OpenJPA 1.2.0 i am having some problems detaching attributes  
which

are in a fetch-group. My persistence.xml is:



My Domain class header:

@FetchGroups({
  @FetchGroup(name="posDetail", attributes={
  @FetchAttribute(name="deliveryAddresses")
  })
})
public class Order {

  @OneToMany(mappedBy = "order", cascade = CascadeType.ALL, fetch =
FetchType.LAZY)
  List deliveryAddresses;
...
}

(in fact i also have the @LoadFetchGroup("posDetail")  to be  
sure.)


Now when i am leaving my DAO layer which means that the persistence
contexts ends, the delivery address is "null" even though its in the
fetch group. I even queried for the fetch group in the DAO before
leaving it via:

OpenJPAQuery oQuery = OpenJPAPersistence.cast(
        em.createQuery("select o from Order o where o.oid = ?1"));
oQuery.setParameter(1, oid);
oQuery.getFetchPlan().setMaxFetchDepth(3).addFetchGroup("posDetail");
List list = oQuery.getResultList();
if (list != null && list.size() > 0) {
    return (Order) list.iterator().next();
}

I know it must be a detach issue because with the following
persistence.xml it works (but i definitely wont use this config in
production)



Am i missing something here? When i debug my DAO, the  
deliveryAddress

attribute is populated but as soon as i leave my DAO, its lost.

---
regards
Marc Logemann
http://www.logemann.org
http://www.logentis.de






--
 Ted Leung
      
ted...@sfu.ca


I can speak Canadian, American, Australian, and little English.





Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Does OpenJPA replace Collections?

2009-04-08 Thread Craig L Russell


On Apr 8, 2009, at 2:18 AM, Craig L Russell wrote:



On Apr 7, 2009, at 11:10 PM, Paul Copeland wrote:


Can OpenJPA replace a Collection when it is loaded?

With the code below when the list is initially empty you need to  
create a List (ArrayList) so you can add elements to it. When I  
persisted new objects on the ManyToOne side and added them to the  
List that worked.  But the first time the List was loaded it seemed  
to replace my ArrayList with the newly loaded data and made an  
older reference to the ArrayList stale (no longer updated when more  
elements were added to myPcList).  This was all in one transaction.


So now I wonder if the initial null List is a special case or if  
OpenJPA might replace the Collection anytime it decides to load it  
again.  Anyone know the answer?


If the list is persistent and the class is enhanced, the collection  
will always reflect what's in the database.



If I don't create an initial ArrayList how can I add elements when  
the List is empty?


I'd recommend always having a non-empty list. Initialize it in the  
constructor to an empty list and don't check it after that.


Oops, scratch that. I meant:
I'd recommend always having a *non-null* list. Initialize it in the  
constructor to an empty list and don't check it after that.


Craig



Here's what it would look like:


@OneToMany(mappedBy="ownerSide", fetch=FetchType.LAZY, cascade=CascadeType.PERSIST)
private List<MyPcObject> myPcList = new ArrayList<MyPcObject>();

List<MyPcObject> getMyPcList()
{
    return myPcList;
}



Craig




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!



Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Does OpenJPA replace Collections?

2009-04-08 Thread Craig L Russell


On Apr 7, 2009, at 11:10 PM, Paul Copeland wrote:


Can OpenJPA replace a Collection when it is loaded?

With the code below when the list is initially empty you need to  
create a List (ArrayList) so you can add elements to it. When I  
persisted new objects on the ManyToOne side and added them to the  
List that worked.  But the first time the List was loaded it seemed  
to replace my ArrayList with the newly loaded data and made an older  
reference to the ArrayList stale (no longer updated when more  
elements were added to myPcList).  This was all in one transaction.


So now I wonder if the initial null List is a special case or if  
OpenJPA might replace the Collection anytime it decides to load it  
again.  Anyone know the answer?


If the list is persistent and the class is enhanced, the collection  
will always reflect what's in the database.



If I don't create an initial ArrayList how can I add elements when  
the List is empty?


I'd recommend always having a non-empty list. Initialize it in the  
constructor to an empty list and don't check it after that.


Here's what it would look like:


@OneToMany(mappedBy="ownerSide", fetch=FetchType.LAZY, cascade=CascadeType.PERSIST)
private List<MyPcObject> myPcList = new ArrayList<MyPcObject>();

List<MyPcObject> getMyPcList()
{
    return myPcList;
}



Craig




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Multiple join fetches

2009-04-06 Thread Craig L Russell

Hi Daryl,

On Apr 6, 2009, at 8:03 AM, Daryl Stultz wrote:


Hey all,

I'm new to JPA. I'm trying to learn to write queries. Suppose I have
GrandParents, Parents and Children entities with the obvious  
relationship.
How do I write a query that fetches all three levels? I want  
something like

this:

select gp from GrandParent as gp
from GrandParent
join fetch gp.parents
join fetch gp.parents.children

The first fetch works (if I comment out the second) but the second  
doesn't.
I've tried a number of variations but can't seem to get the second  
level to

fetch.
(Posted the above to general JPA forum.)
I am told JPA 1.0 supports only one level fetch. Is there some OpenJPA
extension for fetching more? I don't really need this right now, I'm  
just

trying to understand it all.


Fetch groups are intended to support this use case. Not JPA standard,  
and also not guaranteed to fetch all instances in one database round  
trip. But it might work for you.


Craig



Thanks.

--
Daryl Stultz
_
6 Degrees Software and Consulting, Inc.
http://www.6degrees.com
mailto:da...@6degrees.com


Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: enum = null becomes blank string in db

2009-04-03 Thread Craig L Russell
I'd expect that if the enum field is null, then null should be  
persisted. Not blank. That's what I thought was a bug.
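For readers unfamiliar with the mapping involved: @Enumerated(EnumType.STRING) stores the constant's name() and restores it with valueOf() on read, which is why a blank string in the column cannot be mapped back. A plain-Java sketch of that round trip (Color is a hypothetical enum, not from this thread):

```java
public class EnumStringMappingDemo {
    // Hypothetical enum standing in for the thread's unnamed type.
    enum Color { RED, GREEN }

    public static void main(String[] args) {
        // A STRING-mapped enum column holds the constant's name()...
        String stored = Color.GREEN.name();
        System.out.println(stored);                  // GREEN

        // ...and is restored with valueOf() on read.
        Color restored = Color.valueOf(stored);
        System.out.println(restored == Color.GREEN); // true

        // A zero-length string has no matching constant, hence the
        // "No enum const class ..." failure on read.
        try {
            Color.valueOf("");
        } catch (IllegalArgumentException e) {
            System.out.println("no matching constant");
        }
    }
}
```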


Craig

On Apr 3, 2009, at 1:52 PM, Michael Dick wrote:


Have you tried the following ?



STRING


By default OpenJPA doesn't check for constraints on your columns. So  
if your
mappings (or annotations) aren't consistent with the constraints in  
the

database you can run into problems.

Alternatively you can configure OpenJPA to read the data from the  
database

by adding this property:


If you've tried either of those and we're still persisting a null  
value then

it's definitely a bug.

-mike

On Fri, Apr 3, 2009 at 1:51 PM, Craig L Russell  
wrote:



Hi Adam,

Sounds like a bug. Can you please file a JIRA?

Thanks,

Craig


On Apr 3, 2009, at 9:26 AM, Adam Hardy wrote:

Just tested this with static enhancement against mysql and have the  
same
problem. OpenJPA is inserting a blank string into the not-null  
field when

the enum variable is null.

Is this a bug or to be expected?

Regards
Adam

Adam Hardy on 01/04/09 17:38, wrote:


I have an entity bean with this property in v1.2.0 and H2 db:


STRING

I just discovered that I can set the property on the bean to null  
and

save it to a field in the DB with a not-null constraint. It saves a
zero-length string.
On reading back the row however OpenJPA throws this:

org.apache.openjpa.persistence.PersistenceException: No enum  
const class

org.permacode.patternrepo.PatternRepoNumericDisplay.
Surely this is inconsistent? Shouldn't I get an error when trying  
to do

the write first of all?
Admittedly I have yet to test it with pre-enhanced beans but I  
figured it

would be the same (or is that a completely different code base?)





Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: enum = null becomes blank string in db

2009-04-03 Thread Craig L Russell

Hi Adam,

Sounds like a bug. Can you please file a JIRA?

Thanks,

Craig

On Apr 3, 2009, at 9:26 AM, Adam Hardy wrote:

Just tested this with static enhancement against mysql and have the  
same problem. OpenJPA is inserting a blank string into the not-null  
field when the enum variable is null.


Is this a bug or to be expected?

Regards
Adam

Adam Hardy on 01/04/09 17:38, wrote:

I have an entity bean with this property in v1.2.0 and H2 db:

 
 STRING

I just discovered that I can set the property on the bean to null  
and save it to a field in the DB with a not-null constraint. It  
saves a zero-length string.

On reading back the row however OpenJPA throws this:
  
org.apache.openjpa.persistence.PersistenceException: No enum const  
class org.permacode.patternrepo.PatternRepoNumericDisplay.
Surely this is inconsistent? Shouldn't I get an error when trying  
to do the write first of all?
Admittedly I have yet to test it with pre-enhanced beans but I  
figured it would be the same (or is that a completely different  
code base?)




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: how to store collection of enums as strings

2009-03-31 Thread Craig L Russell
I'd like to see a simple way to persist an EnumSet (which if I  
understand this use case is really what is desired):


@Enumerated(STRING)
@ElementCollection
private EnumSet myEnums = new EnumSet();

If no volunteers to implement this one (too bad it's not in the  
specification) then this would be my second choice:


@Enumerated(STRING)
@ElementCollection
private Set myEnums = new HashSet();

The @Enumerated tells us that you want the String value of the enum to  
be persisted (not the int value); the @ElementCollection tells us to  
persist the values individually and not as a serialized blob.
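A side note on the first form: in plain Java, EnumSet has no public constructor; instances come from factory methods such as EnumSet.noneOf, so any implementation would presumably build on those. A runnable sketch (Suit is a hypothetical enum):

```java
import java.util.EnumSet;

public class EnumSetDemo {
    enum Suit { CLUBS, DIAMONDS, HEARTS, SPADES }

    public static void main(String[] args) {
        // EnumSet is created via factory methods, not "new EnumSet()".
        EnumSet<Suit> suits = EnumSet.noneOf(Suit.class);
        suits.add(Suit.HEARTS);
        suits.add(Suit.SPADES);
        System.out.println(suits.size()); // 2

        // A string-per-element mapping, as @Enumerated(STRING) plus
        // @ElementCollection would imply, is each constant's name().
        for (Suit s : suits) {
            System.out.println(s.name());
        }
    }
}
```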


Craig

On Mar 31, 2009, at 8:46 AM, Jody Grassel wrote:

It sounds like you're trying to persist a collection of enumerations  
as a

set of data and not as a reference to other entities, is that correct?

If so, wouldn't a combination of @PersistentCollection and  
Externalization

(section 6.6 in the manual) provide the function you are looking for?


On Wed, Mar 25, 2009 at 11:48 AM, Tedman Leung  wrote:

Anyone know how to store a collection of enums as Strings instead  
of their

ordinal values? (preferably with annotations...)

i.e.
  @ManyToMany
  private Set myEnums=new HashSet();


--
 Ted Leung
  
ted...@sfu.ca


It's time for a new bike when the bulb in your shift light burns out.



Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Is JPA 2 going to have something like JAXBs @XmlJavaTypeAdapter ?

2009-03-30 Thread Craig L Russell
For instance, I can't seem to do the simplest thing with JPA  
such as

   take a java.util.UUID and have OpenJPA map it into a STRING.

   It looks like there is no chance of getting this into JPA 2  
since they

   are in final draft. I just can't imagine such a useful tool *not*
   existing in a powerful API like JPA, so what gives? Have I missed
   something fundamental?

   Thank you,
   Ryan





Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: What is ManagedInterface

2009-03-09 Thread Craig L Russell

Hi is_,

On Mar 9, 2009, at 6:45 AM, is_maximum wrote:



Hi

Can anyone explain what is the ManagedInterface good for? What  
benefit would

achieve if we define all of our entities as interfaces?


If your entities are pure data (no behavior) then defining them as  
interfaces reduces mindless code generation for the implementation of  
get and set methods. All you do is declare the methods and OpenJPA  
does the rest.


Craig



thanks
--
View this message in context: 
http://n2.nabble.com/What-is-ManagedInterface-tp2449023p2449023.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.



Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Oracle LONG datatypes as persistent field problem

2009-03-05 Thread Craig L Russell

Hi Ram,

I haven't heard of this issue (retrieving blob data is a basic feature).

It sounds like the best way forward would be to have a reproducing  
test case attached to a JIRA issue to see what might be causing this  
particular failure. Can you file an issue?


Thanks,

Craig

On Mar 5, 2009, at 6:51 AM, RamAESIS wrote:



Hi,

I have an Oracle LONG type as a column for persistence in the  
entity. While

retrieving this entity JPA throws and exception saying "Stream already
closed". The reason for this exception is that the connection stream  
is
closed once we retrieve the data from LONG columns in oracle JDBC  
driver.
The solution/workaround is to retrieve the value of LONG column  in  
the end.
But even if i change the order of declaration so that the LONG  
column is at
the end the SQL PrepareStatement is generated with column names  
arranged in
alphabetical order i.e. LONG column comes in the middle. Is there  
any way to
force the ordering of columns in generated PreparedStatement, please  
help...


Regards,
Ram
--
View this message in context: 
http://n2.nabble.com/Oracle-LONG-datatypes-as-persistent-field-problem-tp2429848p2429848.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.



Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Quick question re date, time, timestamp or java.util.Date/Calendar

2009-03-05 Thread Craig L Russell

Hi Adam,

I think there is a misunderstanding. From the spec, 2.2:
The persistent fields or properties of an entity may be of the  
following types: Java primitive types;
java.lang.String; other Java serializable types (including wrappers of  
the primitive types,

java.math.BigInteger, java.math.BigDecimal, java.util.Date,
java.util.Calendar[5], java.sql.Date, java.sql.Time, java.sql.Timestamp,
byte[], Byte[], char[], Character[], and user-defined types that  
implement the Serial-
izable interface); enums; entity types; collections of entity types;  
embeddable classes (see Section

2.5); collections of basic and embeddable types (see Section 2.6).

So there is no problem using a java.sql Time, Date, or Timestamp as a  
persistent field or property type.


The @Temporal annotation was introduced so the provider would be able  
to figure out the correct methods to persist java.util.Date and  
java.util.Calendar, since these have no standard representation in the  
database.


Your code might work if you simply omit the @Temporal annotation  
entirely.
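Part of the reason java.sql.Timestamp needs no @Temporal hint is that it is a subclass of java.util.Date, so its JDBC mapping is unambiguous. A plain-Java sketch of that relationship (the millisecond value is arbitrary):

```java
import java.sql.Timestamp;
import java.util.Date;

public class TemporalTypesDemo {
    public static void main(String[] args) {
        long millis = 1236250000000L;      // fixed instant for repeatability

        Date utilDate = new Date(millis);
        Timestamp ts = new Timestamp(millis);

        // java.sql.Timestamp IS-A java.util.Date, which is why a provider
        // can map it without @Temporal.
        System.out.println(ts instanceof Date);  // true

        // A known wrinkle: equality between the two is asymmetric.
        System.out.println(utilDate.equals(ts)); // true (compares millis)
        System.out.println(ts.equals(utilDate)); // false (not a Timestamp)
    }
}
```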


Craig

On Mar 5, 2009, at 4:39 AM, Adam Hardy wrote:

Actually the JPA spec (1.0 and 2.0) has a knock-on effect concerning  
the use of entity beans in the front-end.


Since I must use either java.util.Date or Calendar as the type for  
my temporal properties, I can't rely on the property type to  
distinguish between times and dates when it comes to displaying the  
values or for parsing incoming HTTP parameters.


This gives the programmer extra coding burden in the front-end, and  
I can't see any counter-balancing advantage in the persistence layer  
from banning the use of java.sql.Date and Time.


Have I missed something or is this an improvement that should be put  
into JPA 2 before it goes final?




Adam Hardy on 04/03/09 23:54, wrote:

Thanks Mike.
Looks like the same wording in JPA 2.0 too.
Regards Adam
Michael Dick on 04/03/09 19:39, wrote:

Hi Adam,
Looks like we're less stringent about the @Temporal annotation.  
I'd have to

look closer to see that's the case.
Regarding the JPA 2.0 spec you can find a copy of the public  
review draft here http://jcp.org/aboutJava/communityprocess/pr/jsr317/index.html

-mike
On Wed, Mar 4, 2009 at 10:57 AM, Adam Hardy wrote:
I converted my project over from java.util.Date to  
java.sql.Timestamp for
entity fields after I figured that would give me more room to  
maneuver

with a new requirement for time fields.
It went smoothly with OpenJPA and made the MVC layer's type  
converter code a cinch to refactor.
However I then ran my tests under Hibernate JPA and Toplink  
Essentials,
and both complained bitterly that I was violating the spec and  
threw exceptions.
Looking through the JPA 1 spec, I see where I have transgressed  
(9.1.20):
"The Temporal annotation must be specified for persistent fields  
or properties of type java.util.Date and java.util.Calendar. It  
may only be specified for fields or properties of these types."
Is the OpenJPA interpretations deliberately including Timestamp  
or is that considered an OpenJPA feature?

Is there any change in JPA 2?
Also, can anyone give a URL for the JPA 2 spec pdf? Google turned  
up nothing.




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: How to get beter stacktraces????

2009-02-23 Thread Craig L Russell
I've seen in these cases that it's hard to get the details of what went  
wrong since the error is being reported to you at commit time while  
the error actually occurred earlier.


Getting the earlier stack trace is what is needed, and this requires  
some coordination between WebLogic and OpenJPA.


In EJB, there is a separation of concerns (security, blah blah) that  
means that errors that occur while processing a request are not  
reported to the caller of the business method. The earlier OpenJPA  
exception may have been swallowed by WebLogic. [Certain attack models  
might make use of the specific error stack to discover privileged  
information about the implementation of the system.] Of course, this  
doesn't help the current model which is not trying to attack the  
system, but debug it. So to find the underlying error you may have to  
activate some server side debug tracing and/or logging.
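On the OpenJPA side, raising the log level often captures the original failure before the container wraps it. A minimal persistence.xml sketch, assuming the standard openjpa.Log property (category names and levels here are illustrative):

```xml
<persistence-unit name="example">
  <properties>
    <!-- Record OpenJPA's own diagnostics, including SQL, so the
         underlying error is visible before the commit-time wrap. -->
    <property name="openjpa.Log"
              value="DefaultLevel=WARN, Runtime=TRACE, SQL=TRACE"/>
  </properties>
</persistence-unit>
```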


The WebLogic folks might be able to help with the logging  
configuration needed to get the info you need.


Craig

On Feb 23, 2009, at 4:04 PM, kurt_cobain wrote:



Yeah I saw those; but in tracing down into the code, I can see that an
exception actually never gets thrown. What I get is an exception from
weblogic like this:

Feb 23, 2009 5:00:01 PM CST> occurred during commit of transaction
Name=[EJB com (com ...EntityName)],Xid=BEA1-0064C38439F43057A4BD(1453943),
Status=Rolled back. [Reason=<1.0.0 fatal store error>
org.apache.openjpa.util.StoreException: The transaction has been rolled
back.  See the nested exceptions for details on the errors that occurred.],
numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=0,seconds left=30,
XAServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=
(ServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=
(state=rolledback,assigned=AdminServer),
xar=weblogic.jdbc.wrapper.jtsxaresourcei...@866d46,re-Registered=false),
SCInfo[base_domain+AdminServer]=(state=rolledback),
properties=({weblogic.transaction.name=[EJB
com...sessionBean.methodName(package.Class)],
weblogic.jdbc=t3://10.200.10.89:7001}),
OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=
(CoordinatorURL=AdminServer+10.200.10.89:7001+base_domain+t3+,
XAResources={TagStore, LEGACYHOSTDS, VPS,
weblogic.jdbc.wrapper.JTSXAResourceImpl,
CSCUD1},NonXAResources={})],
CoordinatorURL=AdminServer+10.200.10.89:7001+base_domain+t3+):
weblogic.transaction.RollbackException: The transaction has been rolled
back.  See the nested exceptions for details on the errors that occurred.
    at weblogic.transaction.internal.TransactionImpl.throwRollbackException(TransactionImpl.java:1818)
    at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:333)
    at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:227)
    at weblogic.ejb.container.internal.BaseRemoteObject.postInvoke1(BaseRemoteObject.java:607)
    at weblogic.ejb.container.internal.StatelessRemoteObject.postInvoke1(StatelessRemoteObject.java:57)
    at weblogic.ejb.container.internal.BaseRemoteObject.postInvokeTxRetry(BaseRemoteObject.java:427)


--
View this message in context: 
http://n2.nabble.com/How-to-get-beter-stacktraces-tp2375022p2375201.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.
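
The message above says to "see the nested exceptions," and in container
logs like this the root cause is usually several levels deep in the cause
chain. A small illustrative helper (not part of any WebLogic or OpenJPA
API) that walks the chain so the underlying failure is visible:

```java
// Illustrative helper: print every level of an exception's cause chain,
// so the root cause behind a wrapped RollbackException can be seen.
public class CauseWalker {

    public static void printCauses(Throwable t) {
        int depth = 0;
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            System.out.println("[" + depth++ + "] "
                    + cur.getClass().getName() + ": " + cur.getMessage());
        }
    }

    public static void main(String[] args) {
        // Simulated nesting similar to the trace above (names are made up).
        Throwable root = new IllegalStateException("constraint violated");
        Throwable store = new RuntimeException("fatal store error", root);
        Throwable rollback = new RuntimeException(
                "The transaction has been rolled back.", store);
        printCauses(rollback);
    }
}
```

Calling this from a catch block (or a log interceptor) in the bean's
postInvoke path typically surfaces the SQL or constraint error that the
top-level RollbackException hides.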



Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





Re: Strange error Please ensure that your database records are in the correct format

2009-02-12 Thread Craig L Russell

Hi Pinaki,

On Feb 12, 2009, at 5:02 PM, Pinaki Poddar wrote:



1. org.apache.renamed.openjpa.jdbc.meta.strats

Why is the package name different?

2. From the stack trace, the field strategy looks suspicious. It may have
been related to the fact that someone renamed/refactored the openjpa
packages.

3. IANAL, but is repackaging/renaming open source software kosher from a
licensing point of view?


Yes, the Apache license is very liberal, so renaming packages is
perfectly fine, as long as the conditions of the license are adhered to.

Of course, if the renaming causes problems, it might be very hard to
get anyone to look at the problem.
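
Such wholesale package renaming is usually done at build time with a
relocation tool; a minimal sketch using the Maven Shade plugin's
relocation feature (the plugin version and the renamed prefix are
illustrative, chosen to match the `org.apache.renamed.openjpa` prefix in
the stack trace above):

```xml
<!-- pom.xml fragment: rewrite OpenJPA class references under a new prefix -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>1.2</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>org.apache.openjpa</pattern>
            <shadedPattern>org.apache.renamed.openjpa</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Note that relocation rewrites bytecode references but not class names
stored as strings in configuration (e.g. provider names in
persistence.xml), which is a common source of strategy-loading failures
after repackaging.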


Craig





Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





[Travel Assistance] Applications for ApacheCon EU 2009 - Now Open

2009-01-23 Thread Craig L Russell

The Travel Assistance Committee is now accepting applications for those
wanting to attend ApacheCon EU 2009 between the 23rd and 27th March 2009
in Amsterdam.

The Travel Assistance Committee is looking for people who would like to
be able to attend ApacheCon EU 2009 who need some financial support in
order to get there. There are very few places available and the criteria
are high; that aside, applications are open to all open source developers
who feel that their attendance would benefit themselves, their
project(s), the ASF or open source in general.

Financial assistance is available for travel, accommodation and entrance
fees either in full or in part, depending on circumstances. It is
intended that all our ApacheCon events are covered, so it may be prudent
for those in the United States or Asia to wait until an event closer to
them comes up - you are all welcome to apply for ApacheCon EU of course,
but there must be compelling reasons for you to attend an event further
away than your home location for your application to be considered above
those closer to the event location.

More information can be found on the main Apache website at
http://www.apache.org/travel/index.html - where you will also find a
link to the online application form.

Time is very tight for this event, so applications are open now and will
end on the 4th February 2009 - to give enough time for travel
arrangements to be made.

Good luck to all those that apply.


Regards,
The Travel Assistance Committee
--




--
Tony Stevenson
t...@pc-tony.com  //  pct...@apache.org  // pct...@freenode.net
http://blog.pc-tony.com/

1024D/51047D66 ECAF DC55 C608 5E82 0B5E  3359 C9C7 924E 5104 7D66
--


-

Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!


