[jira] Assigned: (JDO-261) TestHashSetCollections, TestSetCollections : schema incorrect

2005-12-15 Thread Andy Jefferson (JIRA)
 [ http://issues.apache.org/jira/browse/JDO-261?page=all ]

Andy Jefferson reassigned JDO-261:
--

Assign To: Michelle Caisse  (was: Andy Jefferson)

Well, if you look at the message received
Add request failed : INSERT INTO datastoreidentity0.HASHSET_OF_OBJECT2 (IDENTIFIER,COLLVAL,ADPT_PK_IDX) VALUES (?,?,?)
you have a field that has a serialised element. You have an ORM definition of





So in the join table we have a FK back to the owner, a value column, and we 
need to impose a PK (since I have no spec that defines how to specify that no 
PK is required).
I don't see any primary-key specification, so how does JPOX know what the PK 
of this join table is?

> TestHashSetCollections, TestSetCollections : schema incorrect
> -
>
>  Key: JDO-261
>  URL: http://issues.apache.org/jira/browse/JDO-261
>  Project: JDO
> Type: Bug
>   Components: tck20
> Reporter: Andy Jefferson
> Assignee: Michelle Caisse

>
> HashSetCollections/SetCollections are mapped incorrectly. They should have a 
> primary-key specified in the metadata to tell the JDO implementation which 
> columns to use for PK. Without this the JDO implementation can do whatever it 
> likes wrt defining a PK. This includes adding its own adapter columns. 

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Assigned: (JDO-261) TestHashSetCollections, TestSetCollections : schema incorrect

2005-12-15 Thread Michelle Caisse (JIRA)
 [ http://issues.apache.org/jira/browse/JDO-261?page=all ]

Michelle Caisse reassigned JDO-261:
---

Assign To: Andy Jefferson  (was: Michelle Caisse)

I fixed the ORM metadata for datastore identity with revision 357130, but the 
error persists. The error also occurs with application identity, which does 
not require the primary-key specification in the mapping.

test(org.apache.jdo.tck.models.fieldtypes.TestHashSetCollections)javax.jdo.JDODataStoreException: Add request failed : INSERT INTO applicationidentity0.HASHSET_OF_OBJECT2 (IDENTIFIER,COLLVAL,ADPT_PK_IDX) VALUES (?,?,?)

FailedObject:[Ljava.lang.Object;@102720c

    at org.jpox.store.rdbms.scostore.NormalSetStore.addAll(NormalSetStore.java:657)
    at org.jpox.store.mapping.CollectionMapping.postUpdate(CollectionMapping.java:282)
    at org.jpox.store.rdbms.request.UpdateRequest.execute(UpdateRequest.java:282)
    at org.jpox.store.rdbms.table.ClassTable.update(ClassTable.java:2118)
    at org.jpox.store.StoreManager.update(StoreManager.java:780)
    at org.jpox.state.StateManagerImpl.flush(StateManagerImpl.java:4401)
    at org.jpox.state.StateManagerImpl.runReachability(StateManagerImpl.java:3154)
    at org.jpox.AbstractPersistenceManager.preCommit(AbstractPersistenceManager.java:3145)
    at org.jpox.NonmanagedTransaction.commit(NonmanagedTransaction.java:435)
    at org.apache.jdo.tck.models.fieldtypes.TestHashSetCollections.runTest(TestHashSetCollections.java:96)
    at org.apache.jdo.tck.models.fieldtypes.TestHashSetCollections.test(TestHashSetCollections.java:75)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at org.apache.jdo.tck.JDO_Test.runBare(JDO_Test.java:204)
    at org.apache.jdo.tck.util.BatchTestRunner.start(BatchTestRunner.java:120)
    at org.apache.jdo.tck.util.BatchTestRunner.main(BatchTestRunner.java:95)



> TestHashSetCollections, TestSetCollections : schema incorrect
> -
>
>  Key: JDO-261
>  URL: http://issues.apache.org/jira/browse/JDO-261
>  Project: JDO
> Type: Bug
>   Components: tck20
> Reporter: Andy Jefferson
> Assignee: Andy Jefferson

>
> HashSetCollections/SetCollections are mapped incorrectly. They should have a 
> primary-key specified in the metadata to tell the JDO implementation which 
> columns to use for PK. Without this the JDO implementation can do whatever it 
> likes wrt defining a PK. This includes adding its own adapter columns. 

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Subversion repository

2005-12-15 Thread Craig L Russell
Hi Henri,

Assume this is fine unless I get some strong objection in tomorrow's JDO conference call. In which case I'll send you a message.

Thanks for all,
Craig

On Dec 15, 2005, at 8:53 PM, Henri Yandell wrote:

Sorry for the lack of reply until now, I've only just got back online since Brian mentioned this to me.

How does Saturday, 20:00 US/Eastern time sound?

Hen
[ASF SVN gopher]

On Wed, 14 Dec 2005, Craig L Russell wrote:

Hey,

We're going to be moving the repo from incubator to db, so the URL for checkout and commit will change. We (Apache infra) are planning on using the svn move command, so all the history will be preserved.

Once the change takes place the old repo won't work any more. You can check out from the new repo. If you have changes in an active workspace you will need to svn switch it to the new repository. Or perhaps better, check in before the move.

I'd like to plan for the move to happen over this coming weekend.

If this message affects you and it is incomprehensible, please let me know.

Thanks,
Craig

Craig Russell
Architect, Sun Java Enterprise System http://java.sun.com/products/jdo
408 276-5638 mailto:[EMAIL PROTECTED]
P.S. A good JDO? O, Gasp!



[jira] Closed: (JDO-260) TestHashMapStringKeyCollections.test : schema incorrect

2005-12-15 Thread Michelle Caisse (JIRA)
 [ http://issues.apache.org/jira/browse/JDO-260?page=all ]
 
Michelle Caisse closed JDO-260:
---

Resolution: Fixed

The JDO metadata used the serialized attribute on the field element rather than 
the serialized-value attribute on the map element for three fields.
Fixed with revision 357103.

> TestHashMapStringKeyCollections.test : schema incorrect
> ---
>
>  Key: JDO-260
>  URL: http://issues.apache.org/jira/browse/JDO-260
>  Project: JDO
> Type: Bug
>   Components: tck20
> Reporter: Andy Jefferson
> Assignee: Michelle Caisse

>
> Test TestHashMapStringKeyCollections (datastore-identity) fails with
> ERROR 42X14: 'HASH_MAP_OF_STRING_SIMPLE_INTERFACE90' is not a column in table 
> or VTI 'DATASTOREIDENTITY0.HASHMAPSTRINGKEY_COLLECTIONS'.
> The schema is inconsistent with the MetaData. If you compare it with the 
> MapStringKeyCollections case you find that this same field is mapped 
> differently in the metadata. 

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Clarifications on fetch depth

2005-12-15 Thread Jörg von Frantzius
Sorry, that was meant to go to Marko only; please excuse the German being 
used here...


Jörg von Frantzius wrote:

Hello Marko,

I heard from Alexander earlier that you also use detaching to synchronize 
databases. I do the same, and it works fine with the current state of the 
specification and of JPOX. Above all, it works without my code having to know 
which class an object has, and with one and the same fetch-group name by which 
a fetch group is defined for all classes.


Please explain to me how I can detach an arbitrary object with exactly depth 1 
(that is, the object itself and everything directly reachable from it) if I 
cannot define a fetch-depth for *all* fields?


We can happily talk on the phone about this tomorrow; I think that will leave 
fewer misunderstandings. You'll find my phone number below.


Regards,
Jörg

Marco Schulze wrote:

Alexander Bieber wrote:

In my opinion it would be better to change the spec back to its 
previous version, so that fetch-depth applies only to 
self-referencing (=recursive) fields (direct _and_ indirect). A hard 
limit could be set on detachCopy with an additional parameter 
detachDepth that will apply to the object graph of the top-level 
object that is detached.


Hello all!

I totally agree that the new behaviour not only makes life more 
complicated, but implies design problems as well, and that a change 
back to the old behaviour would be very helpful. But instead of a new 
parameter for the detachCopy method, I'd like to mention a possible 
alternative: a getter-setter-pair for the fetchplan:


   PersistenceManager.getFetchPlan().getMaxFetchDepth();
   PersistenceManager.getFetchPlan().setMaxFetchDepth(mfd);

This maximum fetch-depth should be relative to the root of the object 
graph, while the fetch-depth declared per field only applies to 
relative self-referencing - i.e. recursion (= old behaviour).


IMHO both, a detachCopy variant and a fetchplan property, are adequate. 
But the new behaviour of fetch-groups is not logical, because the 
definition of a fetch-group happens per field/class while its effects 
depend on the use case. IMHO, parameters that apply to the use case 
should be defined at runtime (i.e. parameter/getter-setter) while static 
definitions should be use-case-independent. I hope you understand 
what I mean...


Any other opinions?

Best regards, Marco.








--
Dipl.-Inf. Jörg von Frantzius | artnology GmbH | Milastr. 4 | 10437 Berlin
Tel +49 (0)30 4435 099 26 | Fax +49 (0)30 4435 099 99 | http://www.artnology.com



Re: Clarifications on fetch depth

2005-12-15 Thread Jörg von Frantzius

Hello Marko,

I heard from Alexander earlier that you also use detaching to synchronize 
databases. I do the same, and it works fine with the current state of the 
specification and of JPOX. Above all, it works without my code having to know 
which class an object has, and with one and the same fetch-group name by which 
a fetch group is defined for all classes.


Please explain to me how I can detach an arbitrary object with exactly depth 1 
(that is, the object itself and everything directly reachable from it) if I 
cannot define a fetch-depth for *all* fields?


We can happily talk on the phone about this tomorrow; I think that will leave 
fewer misunderstandings. You'll find my phone number below.


Regards,
Jörg

Marco Schulze wrote:

Alexander Bieber wrote:

In my opinion it would be better to change the spec back to its 
previous version, so that fetch-depth applies only to 
self-referencing (=recursive) fields (direct _and_ indirect). A hard 
limit could be set on detachCopy with an additional parameter 
detachDepth that will apply to the object graph of the top-level 
object that is detached.


Hello all!

I totally agree that the new behaviour not only makes life more 
complicated, but implies design problems as well, and that a change 
back to the old behaviour would be very helpful. But instead of a new 
parameter for the detachCopy method, I'd like to mention a possible 
alternative: a getter-setter-pair for the fetchplan:


   PersistenceManager.getFetchPlan().getMaxFetchDepth();
   PersistenceManager.getFetchPlan().setMaxFetchDepth(mfd);

This maximum fetch-depth should be relative to the root of the object 
graph, while the fetch-depth declared per field only applies to 
relative self-referencing - i.e. recursion (= old behaviour).


IMHO both, a detachCopy variant and a fetchplan property, are adequate. 
But the new behaviour of fetch-groups is not logical, because the 
definition of a fetch-group happens per field/class while its effects 
depend on the use case. IMHO, parameters that apply to the use case 
should be defined at runtime (i.e. parameter/getter-setter) while static 
definitions should be use-case-independent. I hope you understand what 
I mean...


Any other opinions?

Best regards, Marco.





--
Dipl.-Inf. Jörg von Frantzius | artnology GmbH | Milastr. 4 | 10437 Berlin
Tel +49 (0)30 4435 099 26 | Fax +49 (0)30 4435 099 99 | http://www.artnology.com



Re: Clarifications on fetch depth

2005-12-15 Thread Marco Schulze

Alexander Bieber wrote:

In my opinion it would be better to change the spec back to its 
previous version, so that fetch-depth applies only to self-referencing 
(=recursive) fields (direct _and_ indirect). A hard limit could be set 
on detachCopy with an additional parameter detachDepth that will apply 
to the object graph of the top-level object that is detached.


Hello all!

I totally agree that the new behaviour not only makes life more 
complicated, but implies design problems as well, and that a change back 
to the old behaviour would be very helpful. But instead of a new 
parameter for the detachCopy method, I'd like to mention a possible 
alternative: a getter-setter-pair for the fetchplan:


   PersistenceManager.getFetchPlan().getMaxFetchDepth();
   PersistenceManager.getFetchPlan().setMaxFetchDepth(mfd);

This maximum fetch-depth should be relative to the root of the object 
graph, while the fetch-depth declared per field only applies to relative 
self-referencing - i.e. recursion (= old behaviour).


IMHO both, a detachCopy variant and a fetchplan property, are adequate. But 
the new behaviour of fetch-groups is not logical, because the definition 
of a fetch-group happens per field/class while its effects depend on the 
use case. IMHO, parameters that apply to the use case should be defined 
at runtime (i.e. parameter/getter-setter) while static definitions should 
be use-case-independent. I hope you understand what I mean...


Any other opinions?

Best regards, Marco.
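
For illustration, a minimal sketch of how the proposed fetch-plan property could 
be used together with detachCopy(). setMaxFetchDepth is the method name proposed 
above, not necessarily a final API, and the class and method names here are made 
up for the example:

    import javax.jdo.PersistenceManager;

    public class DetachExample {
        // Detach an object plus everything directly reachable from it,
        // using the proposed maximum fetch-depth on the fetch plan.
        public Object detachWithDepthOne(PersistenceManager pm, Object root) {
            // Limit the detached graph relative to the root of the object graph.
            pm.getFetchPlan().setMaxFetchDepth(1);
            // Per-field fetch-depth would then only govern recursion on
            // self-referencing fields (the old behaviour).
            return pm.detachCopy(root);
        }
    }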


JDO TCK Conference Call Friday, Dec 16, 9 am PST

2005-12-15 Thread Michelle Caisse

Hi,

We will have our regular meeting Friday, December 16 at 9 am PST to 
discuss JDO TCK issues and status.


Dial-in numbers are:
866 230-6968   294-0479#
International: +1 865 544-7856

Agenda:

1. Test status (Michael W)
2. Graduation issues, changes to the repository (Craig)
3. Query tests (Michael W)
4. Fieldtypes test status (Michelle)
5. Detached objects (Matthew)
6. getObjectsById tests (Geoff)
7. JPOX fixes/issues (Erik)

Action Items from weeks past:

[Dec 9 2005] What optional feature is making inheritance mapping 3 fail? 
AI:  Craig discuss with expert group.


[Dec 9 2005]: Michael sent a message to the expert group regarding 
lost updates to relationship fields in case the non-owning side made the 
update. AI: Craig reply and propose a spec change to require not losing 
updates. This is incompatible with the current draft of EJB 3.


[Dec 9 2005]: Nontransactional write semantics appear to differ between 
optimistic and datastore transactions. Is this intentional? AI: Craig 
discuss with expert group.


[Dec 2 2005] Error message in the enhancer log: tag doesn't conform to 
the DTD. AI Michelle file a JIRA issue. Done.


[Dec 2 2005]  Is it allowed to have more  actual than formal parameters? 
No. AI Craig check to see if the spec  disallows this. AI Michael: raise 
an issue with the experts. Experts agree. More actual than formal 
parameters are allowed. Done.


[Dec 2 2005] Test ThreadSafe has a bug with multiple threads; sometimes 
two  threads succeed; looks like a timing bug in the test case. AI 
Michael  file a JIRA and assign it to Martin. Done.


[Dec 2 2005] Inheritance 3 fails. Optimization of inheritance 1 where 
there is no  table for abstract classes. JPOX doesn't support it. AI 
Craig:  discuss this mapping with expert group. It might be an optional 
feature.


[Nov 18 2005] AI: Erik look at JDO-206. 

[Nov 18 2005] AI: BEA to sign the donation paperwork for their test  
suite. Review how to merge their test cases into JDO TCK.


[Nov 4 2005] AI Martin: Update Martin's wiki with discussion of JDK 1.5 
issues. In progress.


[Oct 14 2005] AI: Michelle distill the mapping support that JPOX has  
into a list of features that are/are not supported.


[Oct 14 2005] AI: Craig discuss mapping options with expert group.

[Oct 14 2005] AI: Push jars to Apache repository (Craig) In progress.  
Several things need to be updated, including project.properties,  
project.xml and maven.xml.


[Sep 2 2005] AI: To recruit members, update the web site. Articles on  
TheServerSide directing attention to the site. T-shirts, logo. AI:  
Craig write a ServerSide article.


[Aug 12 2005] AI: Craig to propose release of API20 and the entire 11  
release (API, RI, TCK). This generated a large response on the  
incubator alias.


[Aug 5 2005] AI: Brian McCallister can send info on the instructions  
how to sync Apache and ibiblio.  Re: Brian Topping needs info on  
creating a maven package.


[July 29 2005] AI: Michelle Chapter 18 wiki needs to be updated to  
include all JDO metadata elements and attributes. [not done]


[July 29 2005] AI: (Craig, Brian T.) Need some permissions that Brian  
is working on.


[Jul 8 2005] AI: Double-check locking in the PMF (Martin) Martin has  
implemented and will check in.


[April 15 2005] AI: Brian Topping will update the wiki to tell how to  
access our releases area.


[April 15 2005] AI: Brian Topping will do the maven goal for creating  
and uploading the snapshots. He will create a directory parallel to  
trunk called "releases" and put the snapshots there.


[May 13 2005] AI: Brian Topping will arrange for automated nightly  builds.

[May 13 2005] AI: Martin Zaun will investigate JSR 294 (Java 5) to  see 
impact on enhancer. Done.


[May 20 2005] AI: Craig to define the JCP distributions and see if  
maven can help.


Clarifications on fetch depth

2005-12-15 Thread Alexander Bieber

Hi all,

with a new version of JPOX I've noticed a change in the behaviour 
concerning fetch-depth. After searching the archives of this list I now 
believe I understand that the intended use of fetch-depth is to restrict 
the depth of a field's object graph upon detaching.
Nevertheless I see problems with that. Before, the depth of the 
field-based object graph could be defined by the fetch-plan that a user 
had set upon detaching. The only way of breaking this was by the use of 
"self-referencing" fields. Doing so, it was possible to load large 
amounts of the datastore by simply detaching one object with 
"unsuitable" fetch-groups. Using the fetch-depth attribute to limit the 
graph depth of a detached field surely solves this problem, but it also 
breaks some functionality.
Imagine a PC A holding a list of PCs B that have members of a third PC 
C. Now, with the restriction on the depth of all fields, when detaching 
A's list a user would have to define different fetch groups for A's list 
depending on whether B's C member should be included or not. Before, this 
could be done by including a fetch group for B's C in the fetch-plan or not.
In my opinion this also introduces some application design issues, as a 
developer cannot be sure that a field included in a defined fetch-group 
will actually be included upon detaching. It is no longer possible 
(except with fetch-depth=0) to define "generic" fetch-groups that the 
user can combine to define what he wants to retrieve; instead, the 
developer has to define fetch-groups for each possible use case and 
might have to change the application backend for new frontends and 
use cases.
In my opinion it would be better to change the spec back to its 
previous version, so that fetch-depth applies only to self-referencing 
(=recursive) fields (direct _and_ indirect). A hard limit could be set 
on detachCopy with an additional parameter detachDepth that would apply 
to the object graph of the top-level object that is detached.


Any comments/replies are appreciated.

Best regards Alexander Bieber
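
As an aside, a minimal sketch of the usage pattern described above, where the 
caller combines "generic" fetch groups at detach time instead of defining one 
group per use case. The class and group names (A, B, "A.bList", "B.cMember") 
are made up for illustration:

    import java.util.Collection;
    import javax.jdo.PersistenceManager;

    public class GenericFetchGroups {
        // Detach a collection of A instances; the caller decides whether the
        // Bs' C members come along by activating an extra fetch group.
        public Collection detachAs(PersistenceManager pm, Collection as, boolean includeCs) {
            pm.getFetchPlan().addGroup("A.bList");       // always fetch A's list of Bs
            if (includeCs) {
                pm.getFetchPlan().addGroup("B.cMember"); // optionally fetch each B's C
            }
            // With the old behaviour the detached graph followed the active groups;
            // with a per-field fetch-depth limit, B.cMember may be cut off.
            return pm.detachCopyAll(as);
        }
    }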



[jira] Commented: (JDO-220) JPOX does not call jdoPostLoad() on queried instances or does not load fetch groups

2005-12-15 Thread Andy Jefferson (JIRA)
[ 
http://issues.apache.org/jira/browse/JDO-220?page=comments#action_12360512 ] 

Andy Jefferson commented on JDO-220:


The test checks that the second of the fields ("number2") is not loaded. So what 
about the case where the object has been pulled from the cache? The field 
"number2" will still be loaded, and consequently that part of the test will fail.

> JPOX  does not call jdoPostLoad() on queried instances or does not load fetch 
> groups
> 
>
>  Key: JDO-220
>  URL: http://issues.apache.org/jira/browse/JDO-220
>  Project: JDO
> Type: Bug
>   Components: tck20
> Reporter: Michael Watzek
> Assignee: Erik Bengtson

>
> Query test case GetFetchPlan fails throwing the exception below.
> The test case queries an instance of PCClass. PCClass has two persistent 
> fields and two corresponding transient fields which are set by jdoPostLoad(). 
> Furthermore, PCClass has two fetch groups. Each persistent field is contained 
> in one of those fetch groups. The test case checks if the queried instance 
> has the right values wrt transient fields. This check fails.
> junit.framework.AssertionFailedError: Assertion A14.6-21 (FetchPan) failed: 
> Field PCClass.number1 is in the default fetch group and should have been 
> loaded. The jdoPostLoad() callback has copied the field value to a transient 
> field which has an unexpected value: 0
>   at junit.framework.Assert.fail(Assert.java:47)
>   at 
> org.apache.jdo.tck.query.api.GetFetchPlan.checkDefaultFetchGroup(GetFetchPlan.java:94)
>   at 
> org.apache.jdo.tck.query.api.GetFetchPlan.testPositive(GetFetchPlan.java:64)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:324)
>   at junit.framework.TestCase.runTest(TestCase.java:154)
>   at org.apache.jdo.tck.JDO_Test.runBare(JDO_Test.java:204)
>   at junit.framework.TestResult$1.protect(TestResult.java:106)
>   at junit.framework.TestResult.runProtected(TestResult.java:124)
>   at junit.framework.TestResult.run(TestResult.java:109)
>   at junit.framework.TestCase.run(TestCase.java:118)
>   at junit.framework.TestSuite.runTest(TestSuite.java:208)
>   at junit.framework.TestSuite.run(TestSuite.java:203)
>   at junit.framework.TestSuite.runTest(TestSuite.java:208)
>   at junit.framework.TestSuite.run(TestSuite.java:203)
>   at junit.textui.TestRunner.doRun(TestRunner.java:116)
>   at junit.textui.TestRunner.doRun(TestRunner.java:109)
>   at 
> org.apache.jdo.tck.util.BatchTestRunner.start(BatchTestRunner.java:120)
>   at org.apache.jdo.tck.util.BatchTestRunner.main(BatchTestRunner.java:95)

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (JDO-241) JPOX returns wrong query result for non-extent queries.

2005-12-15 Thread Andy Jefferson (JIRA)
[ 
http://issues.apache.org/jira/browse/JDO-241?page=comments#action_12360498 ] 

Andy Jefferson commented on JDO-241:


If you look at the test you find that the "candidate collection" passed in 
consists of transient instances, which hence have no identity (or at least 
JDOHelper.getObjectId() returns null). You are expecting the JDO implementation 
to find these instances without allowing it to know their identities?

To quote the spec [14.6] :

For portability, the elements in the collection must be persistent instances 
associated with the same
PersistenceManager as the Query instance. An implementation might support 
transient instances in the collection.


I notice that the second sentence here uses the word "might". Hence the test 
cannot impose the restriction, since it's an optional feature.
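
For illustration, a minimal sketch of the portable form implied by that spec 
wording: the candidate collection holds persistent instances managed by the 
same PersistenceManager as the Query. Person is the TCK candidate class from 
the query below; the method and variable names are made up for the example:

    import java.util.Collection;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Query;
    import org.apache.jdo.tck.pc.company.Person;

    public class CandidateCollectionExample {
        // Run a query against an explicit candidate collection. The candidates
        // are made persistent first (within an active transaction) so they have
        // an identity the implementation can use; passing transient instances
        // is only an optional ("might") feature.
        public Collection queryCandidates(PersistenceManager pm, Collection people) {
            pm.makePersistentAll(people);
            Query q = pm.newQuery(Person.class);
            q.setCandidates(people);
            return (Collection) q.execute();
        }
    }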

> JPOX returns wrong query result for non-extent queries.
> ---
>
>  Key: JDO-241
>  URL: http://issues.apache.org/jira/browse/JDO-241
>  Project: JDO
> Type: Bug
>   Components: tck20
> Reporter: Michael Watzek
> Assignee: Erik Bengtson

>
> Test case DistinctCandidateInstances fails because JPOX returns an empty 
> collection for the query below. The query uses a candidate collection.
> 14:22:46,781 (main) DEBUG [org.apache.jdo.tck] - Executing JDO query: SELECT 
> FROM org.apache.jdo.tck.pc.company.Person
> 14:22:46,796 (main) DEBUG [org.apache.jdo.tck] - Query result: []
> 14:22:46,812 (main) DEBUG [org.apache.jdo.tck] - Wrong query result: 
> expected: [FullTimeEmployee(1, emp1Last, emp1First, born 10/Jun/1970, phone 
> {work=123456-1, home=}, hired 1/Jan/1999, weeklyhours 40.0, $2.0), 
> FullTimeEmployee(2, emp2Last, emp2First, born 22/Dec/1975, phone 
> {work=123456-2, home=}, hired 1/Jul/2003, weeklyhours 40.0, $1.0), 
> PartTimeEmployee(3, emp3Last, emp3First, born 5/Sep/1972, phone 
> {work=123456-3, home=}, hired 15/Aug/2002, weeklyhours 19.0, $15000.0), 
> PartTimeEmployee(4, emp4Last, emp4First, born 6/Sep/1973, phone 
> {work=124456-3, home=3343}, hired 15/Apr/2001, weeklyhours 0.0, $13000.0), 
> FullTimeEmployee(5, emp5Last, emp5First, born 5/Jul/1962, phone 
> {work=126456-3, home=3363}, hired 15/Aug/1998, weeklyhours 0.0, $45000.0), 
> FullTimeEmployee(1, emp1Last, emp1First, born 10/Jun/1970, phone 
> {work=123456-1, home=}, hired 1/Jan/1999, weeklyhours 40.0, $2.0), 
> FullTimeEmployee(2, emp2Last, emp2First, born 22/Dec/1975, phone 
> {work=123456-2, home=}, hired 1/Jul/2003, weeklyhours 40.0, $1.0), 
> PartTimeEmployee(3, emp3Last, emp3First, born 5/Sep/1972, phone 
> {work=123456-3, home=}, hired 15/Aug/2002, weeklyhours 19.0, $15000.0), 
> PartTimeEmployee(4, emp4Last, emp4First, born 6/Sep/1973, phone 
> {work=124456-3, home=3343}, hired 15/Apr/2001, weeklyhours 0.0, $13000.0), 
> FullTimeEmployee(5, emp5Last, emp5First, born 5/Jul/1962, phone 
> {work=126456-3, home=3363}, hired 15/Aug/1998, weeklyhours 0.0, $45000.0)]
> got:  []
> 14:22:46,812 (main) INFO  [org.apache.jdo.tck] - Exception during setUp or 
> runtest: 
> junit.framework.AssertionFailedError: Assertion A14.6.9-2 
> (DistintCandidateInstances) failed: 
> Wrong query result: 
> expected: [FullTimeEmployee(1, emp1Last, emp1First, born 10/Jun/1970, phone 
> {work=123456-1, home=}, hired 1/Jan/1999, weeklyhours 40.0, $2.0), 
> FullTimeEmployee(2, emp2Last, emp2First, born 22/Dec/1975, phone 
> {work=123456-2, home=}, hired 1/Jul/2003, weeklyhours 40.0, $1.0), 
> PartTimeEmployee(3, emp3Last, emp3First, born 5/Sep/1972, phone 
> {work=123456-3, home=}, hired 15/Aug/2002, weeklyhours 19.0, $15000.0), 
> PartTimeEmployee(4, emp4Last, emp4First, born 6/Sep/1973, phone 
> {work=124456-3, home=3343}, hired 15/Apr/2001, weeklyhours 0.0, $13000.0), 
> FullTimeEmployee(5, emp5Last, emp5First, born 5/Jul/1962, phone 
> {work=126456-3, home=3363}, hired 15/Aug/1998, weeklyhours 0.0, $45000.0), 
> FullTimeEmployee(1, emp1Last, emp1First, born 10/Jun/1970, phone 
> {work=123456-1, home=}, hired 1/Jan/1999, weeklyhours 40.0, $2.0), 
> FullTimeEmployee(2, emp2Last, emp2First, born 22/Dec/1975, phone 
> {work=123456-2, home=}, hired 1/Jul/2003, weeklyhours 40.0, $1.0), 
> PartTimeEmployee(3, emp3Last, emp3First, born 5/Sep/1972, phone 
> {work=123456-3, home=}, hired 15/Aug/2002, weeklyhours 19.0, $15000.0), 
> PartTimeEmployee(4, emp4Last, emp4First, born 6/Sep/1973, phone 
> {work=124456-3, home=3343}, hired 15/Apr/2001, weeklyhours 0.0, $13000.0), 
> FullTimeEmployee(5, emp5Last, emp5First, born 5/Jul/1962, phone 
> {work=126456-3, home=3363}, hired 15/Aug/1998, weeklyhours 0.0, $45000.0)]
> got:  []
>   at junit.framework.Assert.fail(Assert.java:47)
>   at org.apache.jdo.tck.JDO_Test.fail(JDO_Test.java:546)
>   at org.apache.jdo.tck.query.QueryTest.queryFailed(QueryTest.java:500)
>   at 
> org.apache.jdo.tck.quer