[jira] Commented: (DERBY-253) Client should throw not implemented exception for deprecated setUnicodeStream/getUnicodeStream

2006-07-12 Thread Knut Anders Hatlen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-253?page=comments#action_12420797 ] 

Knut Anders Hatlen commented on DERBY-253:
--

The patch looks good. I will run some tests and commit it.

> Client should throw not implemented exception for deprecated 
> setUnicodeStream/getUnicodeStream
> --
>
>  Key: DERBY-253
>  URL: http://issues.apache.org/jira/browse/DERBY-253
>  Project: Derby
> Type: Bug

>   Components: Network Client, JDBC
> Versions: 10.1.1.0
> Reporter: Kathey Marsden
> Assignee: Olav Sandstaa
>  Fix For: 10.2.0.0
>  Attachments: derby253.diff
>
> setUnicodeStream and getUnicodeStream are deprecated APIs. The network client's 
> PreparedStatement.setUnicodeStream() and ResultSet.getUnicodeStream() should 
> throw not-implemented exceptions rather than trying to handle these calls.
> Note: The current client implementations of setUnicodeStream() and 
> getUnicodeStream() are broken and can cause unexpected errors.
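For context, a minimal sketch of the requested behaviour follows. It is not the
actual Derby client code; the class name, SQLState and message text are
assumptions chosen only for illustration.

    import java.io.InputStream;
    import java.sql.SQLException;

    // Hypothetical sketch only -- not the real org.apache.derby.client implementation.
    public class NotImplementedSketch {
        // A deprecated JDBC method that rejects the call outright instead of
        // attempting a conversion that is known to be broken.
        public void setUnicodeStream(int parameterIndex, InputStream x, int length)
                throws SQLException {
            // "0A000" (feature not supported) is an assumed SQLState for illustration.
            throw new SQLException(
                "setUnicodeStream() is deprecated and not implemented", "0A000");
        }
    }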

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-836) ResultSetMetaData.getColumnDisplaySize sometimes returns wrong values for DECIMAL columns

2006-07-12 Thread Mayuresh Nirhali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-836?page=all ]

Mayuresh Nirhali updated DERBY-836:
---

Attachment: derby836-v6.diff

Thanks, Dan, for pointing out the error.

I have now added diffs for the master files that were failing due to this 
change, and a complete patch is attached as derby836-v6.diff.




> ResultSetMetaData.getColumnDisplaySize sometimes returns wrong values for 
> DECIMAL columns
> -
>
>  Key: DERBY-836
>  URL: http://issues.apache.org/jira/browse/DERBY-836
>  Project: Derby
> Type: Bug

>   Components: JDBC, Newcomer
> Versions: 10.2.0.0
> Reporter: Daniel John Debrunner
> Assignee: Mayuresh Nirhali
> Priority: Minor
>  Attachments: derby836-v2.diff, derby836-v3.diff, derby836-v4.diff, 
> derby836-v6.diff, derby836.diff, derby836_v5.diff
>
> DECIMAL(10,0)
> max display width value:   -1234567890  length 11
> embedded : 11 correct
> client: 12 WRONG
> DECIMAL(10,10)
> max display width value:   -0.1234567890  length 13
> embedded : 13 correct
> client: 12 WRONG
> DECIMAL(10,2)
> max display width value:   -12345678.90  length 12
> embedded : 13 WRONG
> client: 12 correct
> I've added output early on in jdbcapi/metadata_test.java (and hence the tests 
> metadata.jar and odbc_metadata.java) to show this issue:
> E.g. for embedded
> DECIMAL(10,0) -- precision: 10 scale: 0 display size: 12 type name: DECIMAL
> DECIMAL(10,10) -- precision: 10 scale: 10 display size: 12 type name: DECIMAL
> DECIMAL(10,2) -- precision: 10 scale: 2 display size: 12 type name: DECIMAL
> I will add this test output once DERBY-829 is fixed so as not to cause 
> conflicts.
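For anyone who wants to reproduce the numbers above, here is a minimal sketch
that prints the reported display size next to the precision and scale. The
database URL and table name are placeholders; run it once against the embedded
driver and once against the client driver to compare.

    import java.sql.*;

    public class DecimalDisplaySize {
        public static void main(String[] args) throws Exception {
            // Placeholder embedded URL; swap in the client driver/URL to compare.
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn = DriverManager.getConnection("jdbc:derby:testdb;create=true");
            Statement s = conn.createStatement();
            s.executeUpdate(
                "CREATE TABLE dec_t (c1 DECIMAL(10,0), c2 DECIMAL(10,10), c3 DECIMAL(10,2))");
            ResultSetMetaData md = s.executeQuery("SELECT * FROM dec_t").getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                System.out.println("DECIMAL(" + md.getPrecision(i) + "," + md.getScale(i)
                        + ") -- display size: " + md.getColumnDisplaySize(i));
            }
            conn.close();
        }
    }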

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: prioritized 10.2 bug list

2006-07-12 Thread Kathey Marsden

   Andrew McIntyre wrote:


On 7/12/06, Rick Hillegas <[EMAIL PROTECTED]> wrote:


People have targeted bugs for fixing in 10.2 but haven't assigned
the bugs to themselves. What does this mean? It could mean any of the
following:


[snip many different things it  may or may not mean]

I think Jira is a communication tool, so Fix Version (and all the 
other fields) should mean the same thing to everyone so that reports have 
meaning.


I would like to see:

10.2 Assigned issues and Critical/Blocker issues are ones that we plan 
to fix for the release.
10.2 Unassigned issues are issues that members of the community think 
would offer a high return on time investment and can reasonably be fixed 
before the release.


This would require a lot of iterative maintenance by the community to 
push unassigned issues off as the release comes nearer and they are no 
longer realistic, but that could serve as a good form of community bug 
review, I think.  For the 10.1.3 release I started the page 
http://wiki.apache.org/db-derby/HighValueFixCandidates to help 
facilitate community bug review, but it is kind of a pain to maintain 
(you can see no one has updated it for 10.2 yet and some of those bugs 
are fixed).  The justification and reasoning could be moved into a comment 
on the issue when it is changed to 10.2.


Thoughts?

Kathey

Note the High Value Fix Definition is below.  It could be refined.

Members of the development community and those involved in support and 
QA can list fixes that they see as giving a potentially high return on 
time investment based on:


   * Frequency and likelihood the issue might be hit by users.

   * Estimated difficulty of fix.

   * Risk to existing users if the fix is implemented.

   * Severity of the issue.

   * Availability of a workaround.

   * The amount of user/developer time wasted by the issue.




[jira] Commented: (DERBY-1453) jdbcapi/blobclob4BLOB.java fails with 10.1 client and 10.2 server

2006-07-12 Thread Kathey Marsden (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1453?page=comments#action_12420767 ] 

Kathey Marsden commented on DERBY-1453:
---

Thanks for investigating this issue, Rajesh.  I agree that the priority can be 
lowered from Critical, perhaps even to Lowest, or the issue marked as Won't Fix.  
Running the 10.1 tests against 10.2 is a good exercise for this release but a 
doomed enterprise long term.  A more sustainable compatibility testing framework, 
especially for LOBs and metadata, will be required, I think.

I filed DERBY-1509 for the issue of the test printing FAIL for cases that 
PASS.  I seem to recall this has confused people and wasted a lot of people's 
time in the past as well.  That would be a higher-value fix, I think, than trying 
to manage multiple masters in 10.1.


> jdbcapi/blobclob4BLOB.java fails with 10.1 client and 10.2 server
> -
>
>  Key: DERBY-1453
>  URL: http://issues.apache.org/jira/browse/DERBY-1453
>  Project: Derby
> Type: Test

>   Components: Network Server, Network Client
> Versions: 10.2.0.0, 10.1.3.0
>  Environment: derbyclient.jar and derbyTesting.jar from 10.1
> all other jars from 10.2
> Reporter: Deepa Remesh
> Priority: Critical
>  Fix For: 10.2.0.0

>
> Diff is:
> *** Start: blobclob4BLOB jdk1.5.0_02 DerbyNetClient derbynetmats:jdbcapi 
> 2006-06-23 02:09:39 ***
> 510a511,513
> > FAIL -- unexpected exception 
> > SQLSTATE(40XL1): A lock could not be obtained within the time requested
> > START: clobTest93
> 512,513d514
> < START: clobTest93
> < clobTest92 finished
> 766 del
> < EXPECTED SQLSTATE(XCL30): An IOException was thrown when reading a 
> 'java.sql.String' from an InputStream. SQLSTATE: XJ001: Java exception: 
> 'ERROR 40XD0: Container has been closed: java.io.IOException'.
> 766a767
> > EXPECTED SQLSTATE(XJ073): The data in this BLOB or CLOB is no longer 
> > available.  The BLOB or CLOBs transaction may be committed, or its 
> > connection is closed.
> 769 del
> < EXPECTED SQLSTATE(XCL30): An IOException was thrown when reading a 'BLOB' 
> from an InputStream. SQLSTATE: XJ001: Java exception: 'ERROR 40XD0: Container 
> has been closed: java.io.IOException'.
> 769a770
> > EXPECTED SQLSTATE(XJ073): The data in this BLOB or CLOB is no longer 
> > available.  The BLOB or CLOBs transaction may be committed, or its 
> > connection is closed.
> Test Failed.
> *** End:   blobclob4BLOB jdk1.5.0_02 DerbyNetClient derbynetmats:jdbcapi 
> 2006-06-23 02:10:20 ***

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Created: (DERBY-1509) blobclob4BLOB expected test output prints "FAIL -- unexpected exception" for exceptions that are expected

2006-07-12 Thread Kathey Marsden (JIRA)
blobclob4BLOB expected test output prints "FAIL -- unexpected exception" for 
exceptions that are expected
-

 Key: DERBY-1509
 URL: http://issues.apache.org/jira/browse/DERBY-1509
 Project: Derby
Type: Bug

  Components: Test  
Versions: 10.1.3.2
Reporter: Kathey Marsden
Priority: Minor


The test jdbcapi/blobclob4BLOB.java has a checked-in master which shows test 
cases failing.  As Rajesh pointed out in DERBY-1453, some of these exceptions 
are expected. This can be very confusing when diagnosing issues with this test. 
It would be much better if the test printed "PASSED: Expected exception" 
instead of "FAIL -- unexpected exception" for expected exceptions.  Here 
is one example that Rajesh analyzed and found really should be a pass case.


START: clobTest92
FAIL -- unexpected exception 
SQLSTATE(40XL1): A lock could not be obtained within the time requested

There are others that need to be checked to see if they too are just a problem 
with the test or an actual product issue.
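A tiny sketch of the kind of reporting change being suggested is shown below.
The helper is hypothetical, not the existing test code.

    import java.sql.SQLException;

    // Hypothetical helper -- not the actual blobclob4BLOB code.
    public class ExpectedExceptionReporter {
        static void report(SQLException se, String expectedSQLState) {
            if (expectedSQLState != null && expectedSQLState.equals(se.getSQLState())) {
                System.out.println("PASSED: Expected exception "
                        + se.getSQLState() + ": " + se.getMessage());
            } else {
                System.out.println("FAIL -- unexpected exception");
                System.out.println("SQLSTATE(" + se.getSQLState() + "): " + se.getMessage());
            }
        }
    }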


-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-1395) Change the client SQLState to match that of embedded for the exception thrown on a closed statement whose connection is also closed

2006-07-12 Thread David Van Couvering (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1395?page=comments#action_12420761 ] 

David Van Couvering commented on DERBY-1395:


When the connection is closed, I think the client's behaviour is more 
informative, since it tells you that the connection is closed rather than just 
saying the statement is closed.  I would argue that I should fix the embedded 
side to return 08003 when the connection is closed; that would actually be more 
consistent with the standard.

However, in the case where the connection is open and the statement is closed, I 
will fix the client to use XJ012 instead of XCL31.
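For reference, a quick way to see which SQLState a driver reports for this
scenario; the driver class and JDBC URL are placeholders and should be swapped
for the embedded or client combination being compared.

    import java.sql.*;

    public class ClosedStatementState {
        public static void main(String[] args) throws Exception {
            // Placeholder driver/URL; substitute the client driver and URL as needed.
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn = DriverManager.getConnection("jdbc:derby:testdb;create=true");
            Statement stmt = conn.createStatement();
            stmt.close();
            conn.close();   // both the statement and its connection are now closed
            try {
                stmt.execute("VALUES 1");
            } catch (SQLException se) {
                // Embedded reports XJ012; the client reported 08003 after DERBY-843.
                System.out.println("SQLState: " + se.getSQLState()
                        + ", message: " + se.getMessage());
            }
        }
    }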

> Change the client SQLState to match that of embedded for the exception thrown 
> on a closed statement whose connection is also closed
> ---
>
>  Key: DERBY-1395
>  URL: http://issues.apache.org/jira/browse/DERBY-1395
>  Project: Derby
> Type: Improvement

>   Components: Network Client
> Versions: 10.2.0.0, 10.1.3.0
> Reporter: Deepa Remesh
> Assignee: David Van Couvering
> Priority: Trivial

>
> Scenario: Both connection and statement are closed and an operation is 
> performed on a closed statement. SQLStates are as follows:
> Embedded: SQLSTATE: XJ012, Message: Statement Closed
> Client before DERBY-843 fix: SQLSTATE = null, message = Statement closed
> Client after DERBY-843 fix: SQLSTATE: 08003, Message: connection closed
> This issue is related to the effort started in DERBY-254.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-1453) jdbcapi/blobclob4BLOB.java fails with 10.1 client and 10.2 server

2006-07-12 Thread Rajesh Kartha (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1453?page=comments#action_12420757 ] 

Rajesh Kartha commented on DERBY-1453:
--


I did some investigation on this. There are two separate issues:

A) Lock timeout while updating the int column

Scenario:
1) Open connection (conn1), set autocommit false
2) Select the int and clob columns from the table
3) Create another connection (conn2), set autocommit false
4) Attempt to update the clob column - passes
5) Attempt to update the int column - fails with a lock timeout

Since locks are not obtained on clobs, step 4 works fine, whereas the update 
of the int column (step 5) waits for the lock taken in step 2 
to be released.

I verified this behaviour with different combinations:
10.2 Embedded 
10.2 Server and 10.2 Derby Client
10.1 Embedded
10.2 Server and 10.1 Derby Client

10.1 Server and 10.1 Derby Client - the update does happen!

Only with the 10.1 Network Server (10.1.3.2 - (420033)) does the update of the int 
column (step 5) go through, which I think is not correct behaviour; the 
10.1 Network Server seems to be the issue.
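For reference, a rough sketch of scenario A follows. The table and column names
are placeholders, and whether steps 4 and 5 succeed or time out is exactly the
behaviour under discussion.

    import java.sql.*;

    public class ClobLockScenario {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and schema; the real test uses its own tables.
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn1 = DriverManager.getConnection("jdbc:derby:testdb;create=true");
            conn1.setAutoCommit(false);                                  // step 1
            Statement s1 = conn1.createStatement();
            ResultSet rs = s1.executeQuery("SELECT id, doc FROM docs");  // step 2
            while (rs.next()) { rs.getInt(1); rs.getClob(2); }

            Connection conn2 = DriverManager.getConnection("jdbc:derby:testdb");
            conn2.setAutoCommit(false);                                  // step 3
            Statement s2 = conn2.createStatement();
            s2.executeUpdate("UPDATE docs SET doc = 'x' WHERE id = 1");  // step 4
            s2.executeUpdate("UPDATE docs SET id = 2 WHERE id = 1");     // step 5
        }
    }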

B) The error messages are different

This would require a master update on the 10.1 branch to fix the output 
expected when testing against a 10.2 server.

Both of the above issues are targeted at the 10.1 branch, hence I plan to:
- change the fix version to 10.1.3.2
- lower the priority from 'Critical'

unless I hear otherwise.





> jdbcapi/blobclob4BLOB.java fails with 10.1 client and 10.2 server
> -
>
>  Key: DERBY-1453
>  URL: http://issues.apache.org/jira/browse/DERBY-1453
>  Project: Derby
> Type: Test

>   Components: Network Server, Network Client
> Versions: 10.2.0.0, 10.1.3.0
>  Environment: derbyclient.jar and derbyTesting.jar from 10.1
> all other jars from 10.2
> Reporter: Deepa Remesh
> Priority: Critical
>  Fix For: 10.2.0.0

>
> Diff is:
> *** Start: blobclob4BLOB jdk1.5.0_02 DerbyNetClient derbynetmats:jdbcapi 
> 2006-06-23 02:09:39 ***
> 510a511,513
> > FAIL -- unexpected exception 
> > SQLSTATE(40XL1): A lock could not be obtained within the time requested
> > START: clobTest93
> 512,513d514
> < START: clobTest93
> < clobTest92 finished
> 766 del
> < EXPECTED SQLSTATE(XCL30): An IOException was thrown when reading a 
> 'java.sql.String' from an InputStream. SQLSTATE: XJ001: Java exception: 
> 'ERROR 40XD0: Container has been closed: java.io.IOException'.
> 766a767
> > EXPECTED SQLSTATE(XJ073): The data in this BLOB or CLOB is no longer 
> > available.  The BLOB or CLOBs transaction may be committed, or its 
> > connection is closed.
> 769 del
> < EXPECTED SQLSTATE(XCL30): An IOException was thrown when reading a 'BLOB' 
> from an InputStream. SQLSTATE: XJ001: Java exception: 'ERROR 40XD0: Container 
> has been closed: java.io.IOException'.
> 769a770
> > EXPECTED SQLSTATE(XJ073): The data in this BLOB or CLOB is no longer 
> > available.  The BLOB or CLOBs transaction may be committed, or its 
> > connection is closed.
> Test Failed.
> *** End:   blobclob4BLOB jdk1.5.0_02 DerbyNetClient derbynetmats:jdbcapi 
> 2006-06-23 02:10:20 ***

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Closed: (DERBY-1507) lang/xmlBinding.java fails with Security Exception

2006-07-12 Thread A B (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1507?page=all ]
 
A B closed DERBY-1507:
--


> lang/xmlBinding.java fails with Security Exception
> ---
>
>  Key: DERBY-1507
>  URL: http://issues.apache.org/jira/browse/DERBY-1507
>  Project: Derby
> Type: Bug

>   Components: Test
> Versions: 10.2.0.0
> Reporter: A B
> Assignee: A B
> Priority: Minor
>  Fix For: 10.2.0.0
>  Attachments: d1507_v1.patch, d1507_v2.patch
>
> I recently tried to run the lang/xmlBinding.java test and I noticed that the 
> test fails with a Security Exception:
> > FAIL: Unexpected exception:
> > ERROR 2200L: XMLPARSE operand is not an XML document; see next exception 
> > for details.
> > java.security.AccessControlException: access denied (java.io.FilePermission 
> > {user.dir}/personal.dtd read)
> >   at 
> > java.security.AccessControlContext.checkPermission(AccessControlContext.java(Compiled
> >  Code))
> >   at 
> > java.security.AccessController.checkPermission(AccessController.java(Compiled
> >  Code))
> >   at 
> > java.lang.SecurityManager.checkPermission(SecurityManager.java(Compiled 
> > Code))
> >   at java.lang.SecurityManager.checkRead(SecurityManager.java(Compiled 
> > Code))
> This failure does not show up in the nightlies because the XML tests are not 
> currently run as part of derbyall (see DERBY-563, DERBY-567).
> I looked at this a bit and eventually realized that the test itself has all 
> the permissions it needs, but Xerces, which Derby uses to parse XML documents, 
> does not.  More specifically, an XML document can include a pointer to a 
> schema document, which Xerces will then try to read.  In order for that to 
> work the Xerces classes would have to have read permission on user.dir, but 
> we can't add that permission to the derby_tests.policy file because the 
> Xerces classes could be in a Xerces jar anywhere on the system, or they could 
> be included in the JVM's own jar (e.g. IBM 1.4).  And further, when DERBY-567 is 
> fixed the parser that is used could vary from JVM to JVM, so it might not be 
> Xerces but some other parser that needs read permission. 
> One workaround would be to grant read FilePermission on {user.dir} to "all" 
> (when I did that the test ran cleanly), but it seems to me like that would 
> defeat the purpose of much of the security manager testing.  So until a better
> option arises, I think the only (or at least, the easiest) option is to just 
> run this specific test with no security manager.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Resolved: (DERBY-1507) lang/xmlBinding.java fails with Security Exception

2006-07-12 Thread A B (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1507?page=all ]
 
A B resolved DERBY-1507:


Fix Version: 10.2.0.0
 Resolution: Fixed
 Derby Info:   (was: [Patch Available])

Verified fix by running lang/xmlBinding.java after the commit.  Thanks Andrew.

> lang/xmlBinding.java fails with Security Exception
> ---
>
>  Key: DERBY-1507
>  URL: http://issues.apache.org/jira/browse/DERBY-1507
>  Project: Derby
> Type: Bug

>   Components: Test
> Versions: 10.2.0.0
> Reporter: A B
> Assignee: A B
> Priority: Minor
>  Fix For: 10.2.0.0
>  Attachments: d1507_v1.patch, d1507_v2.patch
>
> I recently tried to run the lang/xmlBinding.java test and I noticed that the 
> test fails with a Security Exception:
> > FAIL: Unexpected exception:
> > ERROR 2200L: XMLPARSE operand is not an XML document; see next exception 
> > for details.
> > java.security.AccessControlException: access denied (java.io.FilePermission 
> > {user.dir}/personal.dtd read)
> >   at 
> > java.security.AccessControlContext.checkPermission(AccessControlContext.java(Compiled
> >  Code))
> >   at 
> > java.security.AccessController.checkPermission(AccessController.java(Compiled
> >  Code))
> >   at 
> > java.lang.SecurityManager.checkPermission(SecurityManager.java(Compiled 
> > Code))
> >   at java.lang.SecurityManager.checkRead(SecurityManager.java(Compiled 
> > Code))
> This failure does not show up in the nightlies because the XML tests are not 
> currently run as part of derbyall (see DERBY-563, DERBY-567).
> I looked at this a bit and eventually realized that the test itself has all 
> the permissions it needs, but Xerces, which Derby uses to parse XML documents, 
> does not.  More specifically, an XML document can include a pointer to a 
> schema document, which Xerces will then try to read.  In order for that to 
> work the Xerces classes would have to have read permission on user.dir, but 
> we can't add that permission to the derby_tests.policy file because the 
> Xerces classes could be in a Xerces jar anywhere on the system, or they could 
> be included in the JVM's own jar (e.g. IBM 1.4).  And further, when DERBY-567 is 
> fixed the parser that is used could vary from JVM to JVM, so it might not be 
> Xerces but some other parser that needs read permission. 
> One workaround would be to grant read FilePermission on {user.dir} to "all" 
> (when I did that the test ran cleanly), but it seems to me like that would 
> defeat the purpose of much of the security manager testing.  So until a better
> option arises, I think the only (or at least, the easiest) option is to just 
> run this specific test with no security manager.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Resolved: (DERBY-1419) derbyall/i18nTest/MessageBundleTest.diff test failed in nightly run, jdk15

2006-07-12 Thread David Van Couvering (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1419?page=all ]
 
David Van Couvering resolved DERBY-1419:


Resolution: Fixed

Fixed, removed from derbyall. 

> derbyall/i18nTest/MessageBundleTest.diff test failed in nightly run, jdk15
> --
>
>  Key: DERBY-1419
>  URL: http://issues.apache.org/jira/browse/DERBY-1419
>  Project: Derby
> Type: Bug

>   Components: Services
>  Environment: Generating report for RunSuite derbyall  null null null true 
> -- Java Information --
> Java Version:1.5.0_02
> Java Vendor: Sun Microsystems Inc.
> Java home:   c:\cloudtst\jartest\jdk15\jre
> Java classpath:  
> c:/cloudtst/jartest/classes/derby.jar;c:/cloudtst/jartest/classes/derbyLocale_zh_TW.jar;c:/cloudtst/jartest/classes/derbyLocale_zh_CN.jar;c:/cloudtst/jartest/classes/derbyLocale_pt_BR.jar;c:/cloudtst/jartest/classes/derbyLocale_ko_KR.jar;c:/cloudtst/jartest/classes/derbyLocale_ja_JP.jar;c:/cloudtst/jartest/classes/derbyLocale_it.jar;c:/cloudtst/jartest/classes/derbyLocale_fr.jar;c:/cloudtst/jartest/classes/derbyLocale_es.jar;c:/cloudtst/jartest/classes/derbyLocale_de_DE.jar;c:/cloudtst/jartest/classes/derbytools.jar;c:/cloudtst/jartest/classes/derbynet.jar;c:/cloudtst/jartest/classes/derbyclient.jar;;c:/cloudtst/jartest/classes/derbyrun.jar;c:/cloudtst/jartest/classes/derbyTesting.jar;c:/cloudtst/jartest/classes/maps.jar;c:/cloudtst/jartest/classes/functionTests.jar;c:/cloudtst/jartest/classes/csext.jar;c:/cloudtst/jartest/tools/java/junit.jar;c:/cloudtst/jartest/tools/java/jndi/fscontext.jar;c:/cloudtst/jartest/tools/java/RmiJdbc.jar;c:/cloudtst/jartest/drda/jcc/2.6/db2jcc.jar;c:/cloudtst/jartest/drda/jcc/2.6/db2jcc_license_c.jar
> OS name: Windows XP
> OS architecture: x86
> OS version:  5.1
> Java user name:  cloudtest
> Java user home:  C:\Documents and Settings\cloudtest
> Java user dir:   C:\cloudtst\jartest\JarResults.2006-06-14\jdk15_derbyall
> java.specification.name: Java Platform API Specification
> java.specification.version: 1.5
> - Derby Information 
> JRE - JDBC: J2SE 5.0 - JDBC 3.0
> [C:\cloudtst\jartest\classes\derby.jar] 10.2.0.4 alpha - (414425)
> [C:\cloudtst\jartest\classes\derbytools.jar] 10.2.0.4 alpha - (414425)
> [C:\cloudtst\jartest\classes\derbynet.jar] 10.2.0.4 alpha - (414425)
> [C:\cloudtst\jartest\classes\derbyclient.jar] 10.2.0.4 alpha - (414425)
> [C:\cloudtst\jartest\drda\jcc\2.6\db2jcc.jar] 2.6 - (90)
> [C:\cloudtst\jartest\drda\jcc\2.6\db2jcc_license_c.jar] 2.6 - (90)
> --
> - Locale Information -
> Current Locale :  [English/United States [en_US]]
> Found support for locale: [de_DE]
>version: 10.2.0.4 alpha - (414425)
> Found support for locale: [es]
>version: 10.2.0.4 alpha - (414425)
> Found support for locale: [fr]
>version: 10.2.0.4 alpha - (414425)
> Found support for locale: [it]
>version: 10.2.0.4 alpha - (414425)
> Found support for locale: [ja_JP]
>version: 10.2.0.4 alpha - (414425)
> Found support for locale: [ko_KR]
>version: 10.2.0.4 alpha - (414425)
> Found support for locale: [pt_BR]
>version: 10.2.0.4 alpha - (414425)
> Found support for locale: [zh_CN]
>version: 10.2.0.4 alpha - (414425)
> Found support for locale: [zh_TW]
>version: 10.2.0.4 alpha - (414425)
> --
> Reporter: Mike Matrigali
> Assignee: David Van Couvering
> Priority: Minor
>  Fix For: 10.2.0.0

>
> here is the diff:
> * Diff file derbyall/i18nTest/MessageBundleTest.diff
> *** Start: MessageBundleTest jdk1.5.0_02 derbyall:i18nTest 2006-06-14 
> 22:43:27 ***
> 1 del
> < ERROR: Message id XSDAN.S in SQLState.java was not found in 
> messages_en.properties
> 2 del
> < ERROR: Message id X0RQ3.C in SQLState.java was not found in 
> messages_en.properties
> 3 del
> < ERROR: Message id XSAX1 in SQLState.java was not found in 
> messages_en.properties
> 4 del
> < ERROR: Message id XCL32.S in SQLState.java was not found in 
> messages_en.properties
> 5 del
> < ERROR: Message id J030 in MessageId.java was not found in 
> messages_en.properties
> 6 del
> < ERROR: Message id J029 in MessageId.java was not found in 
> messages_en.properties
> 6 add
> > Exception in thread "main" java.lang.NoClassDefFoundError: 
> > org.apache.derby.shared.common.reference.SQLState
> Test Failed.
> *** End:   MessageBundleTest jdk1.5.0_02 derbyall:i18nTest 2006-06-14 
> 22:43:30 ***
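The kind of check behind those ERROR lines can be sketched roughly as follows.
The message ids are taken from the output above, but the bundle name and lookup
code are hypothetical, not the actual MessageBundleTest.

    import java.util.MissingResourceException;
    import java.util.ResourceBundle;

    public class MessageIdCheck {
        public static void main(String[] args) {
            // Placeholder bundle name; Derby's real message bundle lives in its own package.
            ResourceBundle bundle = ResourceBundle.getBundle("messages_en");
            String[] ids = { "XSDAN.S", "X0RQ3.C", "XSAX1", "XCL32.S", "J030", "J029" };
            for (int i = 0; i < ids.length; i++) {
                try {
                    bundle.getString(ids[i]);
                } catch (MissingResourceException mre) {
                    System.out.println("ERROR: Message id " + ids[i]
                            + " was not found in messages_en.properties");
                }
            }
        }
    }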

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-1015) Define interface between network server and engine through Java interfaces.

2006-07-12 Thread David Van Couvering (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1015?page=comments#action_12420750 ] 

David Van Couvering commented on DERBY-1015:


I will take care of committing the second patch as well.

David

> Define interface between network server and engine through Java interfaces.
> ---
>
>  Key: DERBY-1015
>  URL: http://issues.apache.org/jira/browse/DERBY-1015
>  Project: Derby
> Type: Improvement

>   Components: JDBC
> Reporter: Daniel John Debrunner
> Assignee: Daniel John Debrunner
>  Fix For: 10.2.0.0
>  Attachments: Derby1015.p2.diff.txt, derby1015.diff.txt, 
> derby1015.p2.stat.txt, derby1015.stat.txt
>
> The API between the network server and engine is not well defined, leading to 
> inconsistent and multiple ways of handling the different objects returned, such 
> as reflection, explicit casting, etc. This in turn has led to bugs such as 
> DERBY-966, DERBY-1005, and DERBY-1006, and to access by the application to 
> underlying objects that should be hidden.
> Define interfaces, such as EngineConnection, that both EmbedConnection and 
> BrokeredConnection implement. Thus the network server can rely on the fact 
> that any connection it obtains will implement EngineConnection, and call the 
> required methods through that interface.
> Most likely we will need EngineConnection, EnginePreparedStatement and 
> EngineResultSet. These interfaces would be internal to Derby and not exposed 
> to applications.
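Schematically, the pattern described above looks like the sketch below. The
names are simplified and the method is hypothetical, not the actual Derby
interface.

    import java.sql.Connection;
    import java.sql.SQLException;

    // Schematic only -- the real interfaces live in org.apache.derby.iapi.jdbc.
    interface EngineConnection extends Connection {
        // Hypothetical engine-specific operation the network server needs to call.
        void resetForPooling() throws SQLException;
    }

    // Both EmbedConnection and BrokeredConnection would implement EngineConnection,
    // so the network server can code against this single interface instead of using
    // reflection or casting to concrete implementation classes.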

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Reopened: (DERBY-1015) Define interface between network server and engine through Java interfaces.

2006-07-12 Thread Sunitha Kambhampati (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1015?page=all ]
 
Sunitha Kambhampati reopened DERBY-1015:



Thanks, David, for the review and commit.  Revision 421435 only 
took care of derby1015.diff.txt; that commit missed adding two files, 
EnginePreparedStatement and EngineParameterMetaData. 

I am reopening this issue because there is another patch that needs to be 
committed: derby1015.p2.diff.txt.

Can someone commit this too if it looks OK?  Thanks. 

> Define interface between network server and engine through Java interfaces.
> ---
>
>  Key: DERBY-1015
>  URL: http://issues.apache.org/jira/browse/DERBY-1015
>  Project: Derby
> Type: Improvement

>   Components: JDBC
> Reporter: Daniel John Debrunner
> Assignee: Daniel John Debrunner
>  Fix For: 10.2.0.0
>  Attachments: Derby1015.p2.diff.txt, derby1015.diff.txt, 
> derby1015.p2.stat.txt, derby1015.stat.txt
>
> The API between the network server and engine is not well defined, leading to 
> inconsistent and multiple ways of handling the different objects returned, such 
> as reflection, explicit casting, etc. This in turn has led to bugs such as 
> DERBY-966, DERBY-1005, and DERBY-1006, and to access by the application to 
> underlying objects that should be hidden.
> Define interfaces, such as EngineConnection, that both EmbedConnection and 
> BrokeredConnection implement. Thus the network server can rely on the fact 
> that any connection it obtains will implement EngineConnection, and call the 
> required methods through that interface.
> Most likely we will need EngineConnection, EnginePreparedStatement and 
> EngineResultSet. These interfaces would be internal to Derby and not exposed 
> to applications.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: svn commit: r421435 - in /db/derby/code/trunk/java: drda/org/apache/derby/impl/drda/ engine/org/apache/derby/iapi/jdbc/ engine/org/apache/derby/impl/jdbc/

2006-07-12 Thread David Van Couvering

Thanks for catching that, Sunitha; I added them.

David

Sunitha Kambhampati wrote:
Thanks, David, for the commit. I think you forgot to add the engine 
interfaces; maybe you missed an svn add?


Thanks
Sunitha.

[EMAIL PROTECTED] wrote:


Author: davidvc
Date: Wed Jul 12 15:05:50 2006
New Revision: 421435

URL: http://svn.apache.org/viewvc?rev=421435&view=rev
Log:
DERBY-1015: Define interface between network server and engine through 
Java interfaces.  Contributed by Sunitha Kambhampati.  Passes 
derbynetclientmats.



Modified:
   
db/derby/code/trunk/java/drda/org/apache/derby/impl/drda/DRDAConnThread.java 

   
db/derby/code/trunk/java/drda/org/apache/derby/impl/drda/DRDAStatement.java 

   
db/derby/code/trunk/java/engine/org/apache/derby/iapi/jdbc/BrokeredPreparedStatement.java 

   
db/derby/code/trunk/java/engine/org/apache/derby/impl/jdbc/EmbedParameterSetMetaData.java 

   
db/derby/code/trunk/java/engine/org/apache/derby/impl/jdbc/EmbedPreparedStatement.java 



Modified: 
db/derby/code/trunk/java/drda/org/apache/derby/impl/drda/DRDAConnThread.java 

URL: 
http://svn.apache.org/viewvc/db/derby/code/trunk/java/drda/org/apache/derby/impl/drda/DRDAConnThread.java?rev=421435&r1=421434&r2=421435&view=diff 

== 

--- 
db/derby/code/trunk/java/drda/org/apache/derby/impl/drda/DRDAConnThread.java 
(original)
+++ 
db/derby/code/trunk/java/drda/org/apache/derby/impl/drda/DRDAConnThread.java 
Wed Jul 12 15:05:50 2006

@@ -56,7 +56,7 @@
import org.apache.derby.iapi.services.sanity.SanityManager;
import org.apache.derby.iapi.services.stream.HeaderPrintWriter;
import org.apache.derby.iapi.tools.i18n.LocalizedResource;
-import org.apache.derby.impl.jdbc.EmbedParameterSetMetaData;
+import org.apache.derby.iapi.jdbc.EngineParameterMetaData;
import org.apache.derby.impl.jdbc.EmbedSQLException;
import org.apache.derby.impl.jdbc.Util;

@@ -3948,7 +3948,7 @@
String strVal;
PreparedStatement ps = stmt.getPreparedStatement();
int codePoint;
-EmbedParameterSetMetaData pmeta = null;
+EngineParameterMetaData pmeta = null;
Vector paramDrdaTypes = new Vector();
Vector paramLens = new Vector();
ArrayList paramExtPositions = null;
@@ -4095,7 +4095,7 @@
 * @throws SQLException
 */
private ArrayList readAndSetParams(int i, DRDAStatement stmt, int
-   drdaType, 
EmbedParameterSetMetaData pmeta,
+   drdaType, 
EngineParameterMetaData pmeta,

   ArrayList paramExtPositions,
   int paramLenNumBytes)
throws DRDAProtocolException, SQLException
@@ -5804,7 +5804,7 @@
{
PreparedStatement ps = stmt.getPreparedStatement();
ResultSetMetaData rsmeta = ps.getMetaData();
-EmbedParameterSetMetaData pmeta = stmt.getParameterMetaData();
+EngineParameterMetaData pmeta = stmt.getParameterMetaData();
int numElems = 0;
if (e == null || e instanceof SQLWarning)
{
@@ -5856,7 +5856,7 @@

ResultSet rs = null;
ResultSetMetaData rsmeta = null;
-EmbedParameterSetMetaData pmeta = null;
+EngineParameterMetaData pmeta = null;
if (!stmt.needsToSendParamData)
rs = stmt.getResultSet();
if (rs == null)// this is a CallableStatement, use 
parameter meta data

@@ -5952,7 +5952,7 @@
 * @throws SQLException
 */
private void writeSQLDTAGRP(DRDAStatement stmt, ResultSetMetaData 
rsmeta, -EmbedParameterSetMetaData pmeta,

+EngineParameterMetaData pmeta,
int colStart, int colEnd, boolean first)
throws DRDAProtocolException, SQLException
{
@@ -6695,7 +6695,7 @@
 * @throws DRDAProtocolException
 * @throws SQLException
 */
-private void writeSQLDAGRP(ResultSetMetaData rsmeta, 
EmbedParameterSetMetaData pmeta, int elemNum, boolean rtnOutput)
+private void writeSQLDAGRP(ResultSetMetaData rsmeta, 
EngineParameterMetaData pmeta, int elemNum, boolean rtnOutput)

throws DRDAProtocolException, SQLException
{
//jdbc uses offset of 1
@@ -6831,14 +6831,14 @@
}

  -private void writeSQLUDTGRP(ResultSetMetaData rsmeta, 
EmbedParameterSetMetaData pmeta, int jdbcElemNum, boolean rtnOutput)
+private void writeSQLUDTGRP(ResultSetMetaData rsmeta, 
EngineParameterMetaData pmeta, int jdbcElemNum, boolean rtnOutput)

throws DRDAProtocolException,SQLException
{
writer.writeByte(CodePoint.NULLDATA);

}

-private void writeSQLDOPTGRP(ResultSetMetaData rsmeta, 
EmbedParameterSetMetaData pmeta, int jdbcElemNum, boolean rtnOutput)
+private void writeSQLDOPTGRP(ResultSetMetaData rsmeta, 
EngineParameterMetaData pmeta, int jdbcElemNum, boolean rtnOutput)

throws DRD

Re: Re: prioritized 10.2 bug list

2006-07-12 Thread Andrew McIntyre

On 7/12/06, Rick Hillegas <[EMAIL PROTECTED]> wrote:

People have targeted bugs for fixing in 10.2 but haven't assigned
the bugs to themselves. What does this mean? It could mean any of the
following:

1) These bugs are stretch goals which the reporters hope someone will
address before we cut the branch.


Or perhaps they simply noticed something outside their area of expertise
that appears to be a must-fix-before-release issue.


2) The reporters used JIRA to record FIXME messages to themselves and
simply forgot to assign the bugs to themselves.


Certainly possible. I did that with a recent sysinfo issue, and then
somebody else fixed it before I got back to it. So even if you forget
to assign yourself to it, that doesn't mean someone else won't work on it!
:-)


3) Something else. What do you think it means?


Similar to 1, the copyright notice issue is an example of a mandate
from the ASF. We won't likely see many of these, but someone in the
community will have to address the issue before the community can
finish the release.


As you can see, I am confused too. My noodle is particularly baked by
bugs which are

a) unassigned
b) targeted for 10.2
c) marked low priority

What does that mean?


Nice-to-haves. For example, there might be JIRA issues for bugs with
functionality introduced in 10.2 that the person who introduced the
functionality filed but didn't assign themselves to. They might not be
showstoppers for the release, they may just need a release note or a
note in the docs.


I have been assuming that "Fix in 10.2" means that the community really
wants the issue resolved before we cut the branch. I have tried to
ensure that "Fix in 10.2" includes all Blocker and Critical issues.
Beyond that, I have tried not to dictate which of the Major issues get
rolled into 10.2. Should I? Instead, I have left this decision to the
community's collective judgement.


Well, in some sense you get to dictate what goes into the release as
release manager, since you should certainly feel free to bump an issue
out to a later release if it simply won't make the train in time and
it's not a showstopper quality issue. If someone just can't get their
'major' feature or bug fix in around the time you want this release
train to leave the station, it will just have to wait for the next. Of
course, you should always be sensitive to the community as well. If
somebody in the community upgrades an issue to blocker, pay attention
to what they have to say. If somebody files a new issue with blocker,
but really just needs a release note, don't be afraid to say it. It's
a balancing act, and these things tend to work themselves out in the
end.


As you can see, I'm unclear about how to handle unassigned low-priority
bugs targeted for 10.2. Do we really want these to trump untargeted
Major bugs?


If you see a Major bug that seems like a must-fix for 10.2 that
currently has no target release, even if you don't plan on working on
it yourself, I would say mark it Fix In 10.2. Since you're
the release manager, you can certainly punt any unassigned Major bugs
out to a future release if they aren't showstoppers, and with
increasing zeal the closer you get to the release date.


If we get through all of our Major 10.2 issues, the
community might want to make another pass through the untargeted Major
bugs and promote some of them to 10.2 ahead of the cruft.


+1


Please bear with me. This is my first attempt to manage a release and I
welcome your advice about how to make the process more sensible.


Isn't it fun? "Like herding cats" was one description I've heard. :-)


Kathey Marsden wrote:

> Rick, I don't understand this list. If I think a bug is a good
> candidate for 10.2, do I just mark it Fix In 10.2 and leave it
> unassigned?
> I had thought Fix In 10.2 just meant someone planned to fix it for 10.2.

It sounds as though you would like a consistency checker which binds the
concepts of Assignment and ReleaseTarget. A person should not be allowed
to assign a bug without specifying a target release. And vice versa, a
person should not be able to specify a target release without assigning
the bug to someone--presumably themselves. Do I understand you correctly?


I don't agree with either of these. A person should be allowed to
assign themselves to a bug without specifying what release they are
working on. In the case of a complicated feature, e.g. the BOOLEAN
work, just because someone is working on it today does not mean they
know what release it will end up going into today.

And it's certainly possible that someone could identify an issue, like
the copyright rototill, that's a must-fix for 10.2 and specify that as
the target release without ever intending to do any work on it.

Of course, JIRA doesn't have a mechanism for enforcing these
constraints anyway, so there's no way to enforce either of these
things even if we wanted to. :-)

I think the bottom line is that if anyone browsing through JIRA sees
anything that look

Re: Optimizer patch reviews? (DERBY-781, DERBY-1357)

2006-07-12 Thread Satheesh Bandaram
I will start to review these patches with the goal of committing them.
If anyone has comments or suggestions, please share with the group...

Satheesh

Army wrote:

> I posted two patches for some optimizer changes a little over a week
> ago: one for DERBY-781 and one for DERBY-1357.
>
> Has anyone had a chance to review either of them, or is anyone
> planning to?  I'm hoping to have these reviewed and committed sometime
> in the next few days so that I'm not forced to try to address issues
> at the last minute for the first 10.2 release candidate.
>
> Optimizer changes can sometimes be rather tricky, so the sooner the
> review--and the more eyes on the code--the better.
>
> The DERBY-1357 changes are quite small and are very easily reviewable,
> while the DERBY-781 changes are more involved.  Anyone have some time
> to review either of these patches?
>
> Many thanks,
> Army
>
>




[jira] Commented: (DERBY-533) Re-enable national character datatypes

2006-07-12 Thread Satheesh Bandaram (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-533?page=comments#action_12420746 ] 

Satheesh Bandaram commented on DERBY-533:
-

I earlier had some interest in enabling national characters for 10.2, but after 
the dates for 10.2 became clearer, I decided it wouldn't be feasible to research 
and implement this functionality in the remaining time. I will add a comment to 
the JIRA entry stating this. I also don't have an itch to look into this issue for 
10.3, though I think this is very useful functionality.

> Re-enable national character datatypes
> --
>
>  Key: DERBY-533
>  URL: http://issues.apache.org/jira/browse/DERBY-533
>  Project: Derby
> Type: New Feature

>   Components: SQL
> Versions: 10.1.1.0
> Reporter: Rick Hillegas

>
> SQL 2003 coyly defines national character types as "implementation defined". 
> Accordingly, there is considerable variability in how these datatypes behave. 
> Oracle and MySQL use these datatypes to store unicode strings. This would not 
> distinguish national from non-national character types in Derby since Derby 
> stores all strings as unicode sequences.
> The national character datatypes (NCHAR, NVARCHAR, NCLOB and their synonyms) 
> used to exist in Cloudscape but were disabled in Derby. The disabling comment 
> in the grammar says "need to re-enable according to SQL standard". Does this 
> mean that the types were removed because they chafed against SQL 2003? If so, 
> what are their defects?
> --
> Cloudscape 3.5 provided the following support for national character types:
> - NCHAR and NVARCHAR were legal datatypes.
> - Ordering operations on these datatypes were determined by the collating 
> sequence associated with the locale of the database.
> - The locale was a DATABASE-wide property which could not be altered.
> - Ordering on non-national character datatypes was lexicographic, that is, 
> character by character.
> --
> Oracle 9i provides the following support for national character types:
> - NCHAR, NVARCHAR2, and NCLOB datatypes are used to store unicode strings.
> - Sort order can be overridden per SESSION or even per QUERY, which means 
> that these overridden sort orders are not supported by indexes.
> --
> DB2 does not appear to support national character types. Nor does its DRDA 
> data interchange protocol.
> --
> MySQL provides the following support for national character types:
> - National Char and National Varchar datatypes are used to hold unicode 
> strings. I cannot find a national CLOB type.
> - The character set and sort order can be changed at SERVER-wide, TABLE-wide, 
> or COLUMN-specific levels.
> --
> If we removed the disabling logic in Derby, I believe that the following 
> would happen:
> - We would get NCHAR, NVARCHAR, and NCLOB datatypes.
> - These would sort according to the locale that was bound to the database 
> when it was created.
> - We would have to build DRDA transport support for these types.
> The difference between national and non-national datatypes would be their 
> sort order.
> I am keenly interested in understanding what defects (other than DRDA 
> support) should be addressed in the disabled implementation.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: svn commit: r421435 - in /db/derby/code/trunk/java: drda/org/apache/derby/impl/drda/ engine/org/apache/derby/iapi/jdbc/ engine/org/apache/derby/impl/jdbc/

2006-07-12 Thread Sunitha Kambhampati
Thanks, David, for the commit. I think you forgot to add the engine 
interfaces; maybe you missed an svn add?


Thanks
Sunitha.

[EMAIL PROTECTED] wrote:


Author: davidvc
Date: Wed Jul 12 15:05:50 2006
New Revision: 421435

URL: http://svn.apache.org/viewvc?rev=421435&view=rev
Log:
DERBY-1015: Define interface between network server and engine through 
Java interfaces.  Contributed by Sunitha Kambhampati.  Passes 
derbynetclientmats.



Modified:
   db/derby/code/trunk/java/drda/org/apache/derby/impl/drda/DRDAConnThread.java
   db/derby/code/trunk/java/drda/org/apache/derby/impl/drda/DRDAStatement.java
   
db/derby/code/trunk/java/engine/org/apache/derby/iapi/jdbc/BrokeredPreparedStatement.java
   
db/derby/code/trunk/java/engine/org/apache/derby/impl/jdbc/EmbedParameterSetMetaData.java
   
db/derby/code/trunk/java/engine/org/apache/derby/impl/jdbc/EmbedPreparedStatement.java

Modified: 
db/derby/code/trunk/java/drda/org/apache/derby/impl/drda/DRDAConnThread.java
URL: 
http://svn.apache.org/viewvc/db/derby/code/trunk/java/drda/org/apache/derby/impl/drda/DRDAConnThread.java?rev=421435&r1=421434&r2=421435&view=diff
==
--- 
db/derby/code/trunk/java/drda/org/apache/derby/impl/drda/DRDAConnThread.java 
(original)
+++ 
db/derby/code/trunk/java/drda/org/apache/derby/impl/drda/DRDAConnThread.java 
Wed Jul 12 15:05:50 2006
@@ -56,7 +56,7 @@
import org.apache.derby.iapi.services.sanity.SanityManager;
import org.apache.derby.iapi.services.stream.HeaderPrintWriter;
import org.apache.derby.iapi.tools.i18n.LocalizedResource;
-import org.apache.derby.impl.jdbc.EmbedParameterSetMetaData;
+import org.apache.derby.iapi.jdbc.EngineParameterMetaData;
import org.apache.derby.impl.jdbc.EmbedSQLException;
import org.apache.derby.impl.jdbc.Util;

@@ -3948,7 +3948,7 @@
String strVal;
PreparedStatement ps = stmt.getPreparedStatement();
int codePoint;
-   EmbedParameterSetMetaData pmeta = null;
+   EngineParameterMetaData pmeta = null;
Vector paramDrdaTypes = new Vector();
Vector paramLens = new Vector();
ArrayList paramExtPositions = null;
@@ -4095,7 +4095,7 @@
 * @throws SQLException
 */
private ArrayList readAndSetParams(int i, DRDAStatement stmt, int
-  
drdaType, EmbedParameterSetMetaData pmeta,
+  
drdaType, EngineParameterMetaData pmeta,
   
ArrayList paramExtPositions,
   int 
paramLenNumBytes)
throws DRDAProtocolException, SQLException
@@ -5804,7 +5804,7 @@
{
PreparedStatement ps = stmt.getPreparedStatement();
ResultSetMetaData rsmeta = ps.getMetaData();
-   EmbedParameterSetMetaData pmeta = stmt.getParameterMetaData();
+   EngineParameterMetaData pmeta = stmt.getParameterMetaData();
int numElems = 0;
if (e == null || e instanceof SQLWarning)
{
@@ -5856,7 +5856,7 @@

ResultSet rs = null;
ResultSetMetaData rsmeta = null;
-   EmbedParameterSetMetaData pmeta = null;
+   EngineParameterMetaData pmeta = null;
if (!stmt.needsToSendParamData)
rs = stmt.getResultSet();
if (rs == null) // this is a CallableStatement, use 
parameter meta data
@@ -5952,7 +5952,7 @@
 * @throws SQLException
 */
	private void writeSQLDTAGRP(DRDAStatement stmt, ResultSetMetaData rsmeta, 
-EmbedParameterSetMetaData pmeta,

+   
EngineParameterMetaData pmeta,
int colStart, 
int colEnd, boolean first)
throws DRDAProtocolException, SQLException
{
@@ -6695,7 +6695,7 @@
 * @throws DRDAProtocolException
 * @throws SQLException
 */
-   private void writeSQLDAGRP(ResultSetMetaData rsmeta, 
EmbedParameterSetMetaData pmeta, int elemNum, boolean rtnOutput)
+   private void writeSQLDAGRP(ResultSetMetaData rsmeta, 
EngineParameterMetaData pmeta, int elemNum, boolean rtnOutput)
throws DRDAProtocolException, SQLException
{
//jdbc uses offset of 1
@@ -6831,14 +6831,14 @@
}

  
-	private void writeSQLUDTGRP(ResultSetMetaData rsmeta, EmbedParameterSetMetaData pmeta, int jdbcElemNum, boolean rtnOutput)

+   private void writeSQLUDTGRP(ResultSetMetaData rsmeta, 
EngineParameterMetaData pmeta, int jdbcElemNum, boolean rtnOutput)
throws DRDAProtocolException,SQLExceptio

Re: Google SOC:MySQL to Derby Migration tool design question

2006-07-12 Thread Satheesh Bandaram
Jean T. Anderson wrote:

>One thing to consider is DdlUtils is database agnostic. For example,
>adding support for "create view" doesn't mean just adding it for Derby,
>but also adding it for every database supported (see the list at
>http://db.apache.org/ddlutils/database-support.html ).
>  
>
This is important... While DdlUtils' goal is to support all databases,
Ramin is (for now) fixing his target database to Derby and his source
database to MySQL for his migration utility. It doesn't seem like Ramin
should take on adding database-specific code to DdlUtils (for information
that DatabaseMetaData might not expose directly) and testing on the 13
databases that DdlUtils currently supports.

Here are the issues I see with using DdlUtils:

   1. It doesn't support many database objects, like views, procedures,
      functions and check constraints. It may not even be DdlUtils' goal
      to support database-specific objects that other databases may not
      support.
   2. Database migration is likely to require some intervention when
      automatic migration doesn't succeed. When this happens, would
      users have to modify the XML files that DdlUtils generates and also
      the other schema files that the migration utility generates? That
      doesn't seem right... having to modify XML files for tables, but
      other output files for views or constraints etc.
   3. DdlUtils is still under development and has not had an official
      release yet. While it may or may not be stable enough, should the
      Derby community vote to include pre-released software in Derby's
      official releases? I, for one, would like to see this MySQL-to-Derby
      migration in the 10.2 release in some form.

It is also possible to develop MySQL-specific schema capture (for views
and other objects) just for this utility and then contribute that logic
to the DdlUtils project if there is interest. To do a complete and successful
migration, some database-specific operations must be performed, and that
may not be among DdlUtils' goals for now.
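As a starting point for the MySQL-specific capture mentioned above, standard
DatabaseMetaData can at least enumerate the views, even though it does not
expose their definitions. The URL and credentials below are placeholders.

    import java.sql.*;

    public class ListMySQLViews {
        public static void main(String[] args) throws Exception {
            // Placeholder URL/credentials for the source MySQL database.
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/sourcedb", "user", "password");
            DatabaseMetaData dmd = conn.getMetaData();
            // Standard JDBC call: list objects of type VIEW in the current catalog.
            ResultSet rs = dmd.getTables(conn.getCatalog(), null, "%",
                    new String[] { "VIEW" });
            while (rs.next()) {
                System.out.println("view: " + rs.getString("TABLE_NAME"));
            }
            conn.close();
        }
    }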

Satheesh

>You might consider posting to ddlutils-dev@db.apache.org to ask what
>level of effort people think might be required to implement the missing
>features.
>
> -jean
>
>
>  
>
>>Thanks
>>Ramin
>>
>>On 7/11/06, Bryan Pendleton <[EMAIL PROTECTED]> wrote:
>>
>>
>>
The DdlUtils tool seems not to be capable of migrating views, CHECK
constraints, and stored procedures. I would like to know what you
think: can the DdlUtils tool be reused for migrating the tables and
indexes, with DatabaseMetaData used for migrating views and stored
procedures?


>>>Perhaps another possibility would be for you to improve DdlUtils so
>>>that it has these desirable features. The end result would be a better
>>>DdlUtils *and* a MySQL-to-Derby migration tool.
>>>
>>>thanks,
>>>
>>>bryan
>>>
>>>
>>>  
>>>
>
>
>  
>




[jira] Commented: (DERBY-1507) lang/xmlBinding.java fails with Security Exception

2006-07-12 Thread Andrew McIntyre (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1507?page=comments#action_12420744 ] 

Andrew McIntyre commented on DERBY-1507:


Committed d1507_v2.patch with revision 421448. Please resolve and close this 
issue if there is no further work to be done.

> lang/xmlBinding.java fails with Security Exception
> ---
>
>  Key: DERBY-1507
>  URL: http://issues.apache.org/jira/browse/DERBY-1507
>  Project: Derby
> Type: Bug

>   Components: Test
> Versions: 10.2.0.0
> Reporter: A B
> Assignee: A B
> Priority: Minor
>  Attachments: d1507_v1.patch, d1507_v2.patch
>
> I recently tried to run the lang/xmlBinding.java test and I noticed that the 
> test fails with a Security Exception:
> > FAIL: Unexpected exception:
> > ERROR 2200L: XMLPARSE operand is not an XML document; see next exception 
> > for details.
> > java.security.AccessControlException: access denied (java.io.FilePermission 
> > {user.dir}/personal.dtd read)
> >   at 
> > java.security.AccessControlContext.checkPermission(AccessControlContext.java(Compiled
> >  Code))
> >   at 
> > java.security.AccessController.checkPermission(AccessController.java(Compiled
> >  Code))
> >   at 
> > java.lang.SecurityManager.checkPermission(SecurityManager.java(Compiled 
> > Code))
> >   at java.lang.SecurityManager.checkRead(SecurityManager.java(Compiled 
> > Code))
> This failure does not show up in the nightlies because the XML tests are not 
> currently run as part of derbyall (see DERBY-563, DERBY-567).
> I looked at this a bit and eventually realized that the test itself has all 
> the permissions it needs, but Xerces, which Derby uses to parse XML documents, 
> does not.  More specifically, an XML document can include a pointer to a 
> schema document, which Xerces will then try to read.  In order for that to 
> work the Xerces classes would have to have read permission on user.dir, but 
> we can't add that permission to the derby_tests.policy file because the 
> Xerces classes could be in a Xerces jar anywhere on the system, or they could 
> be included in the JVM's own jar (e.g. IBM 1.4).  And further, when DERBY-567 is 
> fixed the parser that is used could vary from JVM to JVM, so it might not be 
> Xerces but some other parser that needs read permission. 
> One workaround would be to grant read FilePermission on {user.dir} to "all" 
> (when I did that the test ran cleanly), but it seems to me like that would 
> defeat the purpose of much of the security manager testing.  So until a better
> option arises, I think the only (or at least, the easiest) option is to just 
> run this specific test with no security manager.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-1507) lang/xmlBinding.java fails with Security Exception

2006-07-12 Thread A B (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1507?page=all ]

A B updated DERBY-1507:
---

Derby Info: [Patch Available]

> lang/xmlBinding.java fails with Security Exception
> ---
>
>  Key: DERBY-1507
>  URL: http://issues.apache.org/jira/browse/DERBY-1507
>  Project: Derby
> Type: Bug

>   Components: Test
> Versions: 10.2.0.0
> Reporter: A B
> Assignee: A B
> Priority: Minor
>  Attachments: d1507_v1.patch, d1507_v2.patch
>
> I recently tried to run the lang/xmlBinding.java test and I noticed that the 
> test fails with a Security Exception:
> > FAIL: Unexpected exception:
> > ERROR 2200L: XMLPARSE operand is not an XML document; see next exception 
> > for details.
> > java.security.AccessControlException: access denied (java.io.FilePermission 
> > {user.dir}/personal.dtd read)
> >   at 
> > java.security.AccessControlContext.checkPermission(AccessControlContext.java(Compiled
> >  Code))
> >   at 
> > java.security.AccessController.checkPermission(AccessController.java(Compiled
> >  Code))
> >   at 
> > java.lang.SecurityManager.checkPermission(SecurityManager.java(Compiled 
> > Code))
> >   at java.lang.SecurityManager.checkRead(SecurityManager.java(Compiled 
> > Code))
> This failure does not show up in the nightlies because the XML tests are not 
> currently run as part of derbyall (see DERBY-563, DERBY-567).
> I looked at this a bit and eventually realized that the test itself has all 
> the permissions it needs, but Xerces, which Derby uses to parse XML documents, 
> does not.  More specifically, an XML document can include a pointer to a 
> schema document, which Xerces will then try to read.  In order for that to 
> work the Xerces classes would have to have read permission on user.dir, but 
> we can't add that permission to the derby_tests.policy file because the 
> Xerces classes could be in a Xerces jar anywhere on the system, or they could 
> be included in the JVM's own jar (e.g. IBM 1.4).  And further, when DERBY-567 is 
> fixed the parser that is used could vary from JVM to JVM, so it might not be 
> Xerces but some other parser that needs read permission. 
> One workaround would be to grant read FilePermission on {user.dir} to "all" 
> (when I did that the test ran cleanly), but it seems to me like that would 
> defeat the purpose of much of the security manager testing.  So until a better
> option arises, I think the only (or at least, the easiest) option is to just 
> run this specific test with no security manager.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-1507) lang/xmlBinding.java fails with Security Exception

2006-07-12 Thread A B (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1507?page=all ]

A B updated DERBY-1507:
---

Attachment: d1507_v2.patch

Attaching a second version of the patch (d1507_v2.patch) that creates a 
test-specific policy file instead of turning the security manager off.  Thanks to 
Andrew and Myrna for the suggestion!
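For reference, a test-specific policy file of the kind described would contain a
grant along these lines. This is a hypothetical fragment in Java policy syntax,
not the actual file from the patch.

    // Hypothetical fragment -- not the policy file committed with d1507_v2.patch.
    grant {
        // Allow the XML parser to read the DTD referenced by the test document.
        permission java.io.FilePermission "${user.dir}${/}personal.dtd", "read";
    };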

> lang/xmlBinding.java fails with Security Expression
> ---
>
>  Key: DERBY-1507
>  URL: http://issues.apache.org/jira/browse/DERBY-1507
>  Project: Derby
> Type: Bug

>   Components: Test
> Versions: 10.2.0.0
> Reporter: A B
> Assignee: A B
> Priority: Minor
>  Attachments: d1507_v1.patch, d1507_v2.patch
>
> I recently tried to run the lang/xmlBinding.java test and I noticed that the 
> test fails with a Security Exception:
> > FAIL: Unexpected exception:
> > ERROR 2200L: XMLPARSE operand is not an XML document; see next exception 
> > for details.
> > java.security.AccessControlException: access denied (java.io.FilePermission 
> > {user.dir}/personal.dtd read)
> >   at 
> > java.security.AccessControlContext.checkPermission(AccessControlContext.java(Compiled
> >  Code))
> >   at 
> > java.security.AccessController.checkPermission(AccessController.java(Compiled
> >  Code))
> >   at 
> > java.lang.SecurityManager.checkPermission(SecurityManager.java(Compiled 
> > Code))
> >   at java.lang.SecurityManager.checkRead(SecurityManager.java(Compiled 
> > Code))
> This failure does not show up in the nightlies because the XML tests are not 
> currently run as part of derbyall (see DERBY-563, DERBY-567).
> I looked a this a bit and eventually realized that the test itself has all 
> the permission it needs, but Xerces, which Derby uses to parse XML documents, 
> does not.  More specifically, and XML document can include a pointer to a 
> schema document, which Xerces will then try to read.  In order for that to 
> work the Xerces classes would have to have read permission on user.dir, but 
> we can't add that permission to the derby_tests.policy file because the 
> Xerces classes could be in a Xerces jar anywhere on the system, or they could 
> be included the JVM's own jar (ex. IBM 1.4).  And further, when DERBY-567 is 
> fixed the parser that is used could vary from JVM to JVM, so it might not be 
> Xerces but some other parser that needs read permission. 
> One workaround would be to grant read FilePermission on {user.dir} to "all" 
> (when I did that the test ran cleanly), but it seems to me like that would 
> defeat the purpose of much of the security manager testing.  So until a better
> option arises, I think the only (or at least, the easiest) option is to just 
> run this specific test with no security manager.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: prioritized 10.2 bug list

2006-07-12 Thread Rick Hillegas

Hi Kathey,

This is a good topic to discuss. Thank you for raising it. There are a 
lot of seemingly independent variables in a JIRA and I don't think that 
the community fills in these variables consistently. Maybe that's 
inevitable since we don't enforce any consistency rules when updating a 
JIRA. People have targetted bugs for fixing in 10.2 but haven't assigned 
the bugs to themselves. What does this mean? It could mean any of the 
following:


1) These bugs are stretch goals which the reporters hope someone will 
address before we cut the branch.


2) The reporters used JIRA to record FIXME messages to themselves and 
simply forgot to assign the bugs to themselves.


3) Something else. What do you think it means?

As you can see, I am confused too. My noodle is particularly baked by 
bugs which are


a) unassigned
b) targeted for 10.2
c) marked low priority

What does that mean?

I have been assuming that "Fix in 10.2" means that the community really 
wants the issue resolved before we cut the branch. I have tried to 
ensure that "Fix in 10.2" includes all Blocker and Critical issues. 
Beyond that, I have tried not to dictate which of the Major issues get 
rolled into 10.2. Should I? Instead, I have left this decision to the 
community's collective judgement.


As you can see, I'm unclear about how to handle unassigned low priority 
bugs targeted for 10.2. Do we really want these to trump untargeted 
Major bugs? If we get through all of our Major 10.2 issues, the 
community might want to make another pass through the untargeted Major 
bugs and promote some of them to 10.2 ahead of the cruft.


Please bear with me. This is my first attempt to manage a release and I 
welcome your advice about how to make the process more sensible.


Kathey Marsden wrote:


Rick Hillegas wrote:

Thanks to everyone for helping clean up JIRA and clarify the issues 
we want to address in 10.2. It would be great if we could march in 
priority order through the issues listed in the Open 10.2 Issues 
report at 
http://wiki.apache.org/db-derby/TenTwoRelease#head-7cf194b6c7305a0e83d0c9c422f0632215f6cb19. 



Rick I don't understand this list.If I think a bug is a good 
candidate for  10.2, do I just mark it fix in 10.2  and leave it 
unassigned?

I had thought Fix In 10.2 just meant someone planned to fix it for 10.2



It sounds as though you would like a consistency checker which binds the 
concepts of Assignment and ReleaseTarget. A person should not be allowed 
to assign a bug without specifying a target release. And vice-versa, a 
person should not be able to specify a target release without assigning 
the bug to someone--presumably themself. Do I understand you correctly?




Thanks for the clarification.

Kathey






[jira] Resolved: (DERBY-1015) Define interface between network server and engine through Java interfaces.

2006-07-12 Thread David Van Couvering (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1015?page=all ]
 
David Van Couvering resolved DERBY-1015:


Resolution: Fixed
Derby Info:   (was: [Patch Available])

Committed revision 421435.  Passes derbynetclientmats on JDK 1.5

> Define interface between network server and engine through Java interfaces.
> ---
>
>  Key: DERBY-1015
>  URL: http://issues.apache.org/jira/browse/DERBY-1015
>  Project: Derby
> Type: Improvement

>   Components: JDBC
> Reporter: Daniel John Debrunner
> Assignee: Daniel John Debrunner
>  Fix For: 10.2.0.0
>  Attachments: Derby1015.p2.diff.txt, derby1015.diff.txt, 
> derby1015.p2.stat.txt, derby1015.stat.txt
>
> API between the network server and engine is not well defined, leading to 
> inconsistent & multiple ways of handling the different objects returned, such 
> as reflection, explicit casting, etc. This in turn has led to bugs such as 
> DERBY-966, DERBY-1005, and DERBY-1006, and access to underlying objects by 
> the application that should be hidden.
> Define interfaces, such as EngineConnection, that both EmbedConnection and 
> BrokeredConnection implement. Thus the network server can rely on the fact 
> that any connection it obtains will implement EngineConnection, and call the 
> required methods through that interface.
> Most likely we will need EngineConnection, EnginePreparedStatement and 
> EngineResultSet.  These interfaces would be internal to Derby and not exposed 
> to applications.
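
As a rough illustration of the interface pattern described above (the method shown is a placeholder, not the committed Derby API; the real interfaces expose whatever engine-specific operations the network server needs):

// Illustrative sketch only -- the member below is hypothetical.
// The idea: both EmbedConnection and BrokeredConnection implement a shared
// internal interface, so the network server can call engine-specific methods
// without reflection or casts to concrete classes.
public interface EngineConnection extends java.sql.Connection {
    // example of an engine-specific hook the server could call directly
    void markAsNetworkServerConnection();
}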

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-1507) lang/xmlBinding.java fails with Security Expression

2006-07-12 Thread A B (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1507?page=all ]

A B updated DERBY-1507:
---

Derby Info:   (was: [Patch Available])

Thanks for the suggestion Andrew; I overlooked the test-specific policy file 
option.  I'll try that and post another patch.

> lang/xmlBinding.java fails with Security Expression
> ---
>
>  Key: DERBY-1507
>  URL: http://issues.apache.org/jira/browse/DERBY-1507
>  Project: Derby
> Type: Bug

>   Components: Test
> Versions: 10.2.0.0
> Reporter: A B
> Assignee: A B
> Priority: Minor
>  Attachments: d1507_v1.patch
>
> I recently tried to run the lang/xmlBinding.java test and I noticed that the 
> test fails with a Security Exception:
> > FAIL: Unexpected exception:
> > ERROR 2200L: XMLPARSE operand is not an XML document; see next exception 
> > for details.
> > java.security.AccessControlException: access denied (java.io.FilePermission 
> > {user.dir}/personal.dtd read)
> >   at 
> > java.security.AccessControlContext.checkPermission(AccessControlContext.java(Compiled
> >  Code))
> >   at 
> > java.security.AccessController.checkPermission(AccessController.java(Compiled
> >  Code))
> >   at 
> > java.lang.SecurityManager.checkPermission(SecurityManager.java(Compiled 
> > Code))
> >   at java.lang.SecurityManager.checkRead(SecurityManager.java(Compiled 
> > Code))
> This failure does not show up in the nightlies because the XML tests are not 
> currently run as part of derbyall (see DERBY-563, DERBY-567).
> I looked a this a bit and eventually realized that the test itself has all 
> the permission it needs, but Xerces, which Derby uses to parse XML documents, 
> does not.  More specifically, and XML document can include a pointer to a 
> schema document, which Xerces will then try to read.  In order for that to 
> work the Xerces classes would have to have read permission on user.dir, but 
> we can't add that permission to the derby_tests.policy file because the 
> Xerces classes could be in a Xerces jar anywhere on the system, or they could 
> be included the JVM's own jar (ex. IBM 1.4).  And further, when DERBY-567 is 
> fixed the parser that is used could vary from JVM to JVM, so it might not be 
> Xerces but some other parser that needs read permission. 
> One workaround would be to grant read FilePermission on {user.dir} to "all" 
> (when I did that the test ran cleanly), but it seems to me like that would 
> defeat the purpose of much of the security manager testing.  So until a better
> option arises, I think the only (or at least, the easiest) option is to just 
> run this specific test with no security manager.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: [jira] Updated: (DERBY-1507) lang/xmlBinding.java fails with Security Expression

2006-07-12 Thread Myrna van Lunteren

On 7/12/06, A B (JIRA)  wrote:

[ http://issues.apache.org/jira/browse/DERBY-1507?page=all ]

A B updated DERBY-1507:
---

   Attachment: d1507_v1.patch

Attaching a simple patch to disable security manager for lang/xmlBinding.java.  
If anyone has any better alternatives, please let me know.  Otherwise, this is 
a one-line change so if someone could commit, that'd be great.


Would this be a situation where a test specific policy file could help?

Myrna



> lang/xmlBinding.java fails with Security Expression
> ---
>
>  Key: DERBY-1507
>  URL: http://issues.apache.org/jira/browse/DERBY-1507
>  Project: Derby
> Type: Bug

>   Components: Test
> Versions: 10.2.0.0
> Reporter: A B
> Assignee: A B
> Priority: Minor
>  Attachments: d1507_v1.patch
>
> I recently tried to run the lang/xmlBinding.java test and I noticed that the 
test fails with a Security Exception:
> > FAIL: Unexpected exception:
> > ERROR 2200L: XMLPARSE operand is not an XML document; see next exception 
for details.
> > java.security.AccessControlException: access denied (java.io.FilePermission 
{user.dir}/personal.dtd read)
> >   at 
java.security.AccessControlContext.checkPermission(AccessControlContext.java(Compiled 
Code))
> >   at 
java.security.AccessController.checkPermission(AccessController.java(Compiled Code))
> >   at 
java.lang.SecurityManager.checkPermission(SecurityManager.java(Compiled Code))
> >   at java.lang.SecurityManager.checkRead(SecurityManager.java(Compiled 
Code))
> This failure does not show up in the nightlies because the XML tests are not 
currently run as part of derbyall (see DERBY-563, DERBY-567).
> I looked a this a bit and eventually realized that the test itself has all 
the permission it needs, but Xerces, which Derby uses to parse XML documents, does 
not.  More specifically, and XML document can include a pointer to a schema 
document, which Xerces will then try to read.  In order for that to work the 
Xerces classes would have to have read permission on user.dir, but we can't add 
that permission to the derby_tests.policy file because the Xerces classes could be 
in a Xerces jar anywhere on the system, or they could be included the JVM's own 
jar (ex. IBM 1.4).  And further, when DERBY-567 is fixed the parser that is used 
could vary from JVM to JVM, so it might not be Xerces but some other parser that 
needs read permission.
> One workaround would be to grant read FilePermission on {user.dir} to "all" 
(when I did that the test ran cleanly), but it seems to me like that would defeat the 
purpose of much of the security manager testing.  So until a better
> option arises, I think the only (or at least, the easiest) option is to just 
run this specific test with no security manager.

--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
  http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
  http://www.atlassian.com/software/jira




[jira] Updated: (DERBY-1507) lang/xmlBinding.java fails with Security Expression

2006-07-12 Thread A B (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1507?page=all ]

A B updated DERBY-1507:
---

Derby Info: [Patch Available]

> lang/xmlBinding.java fails with Security Expression
> ---
>
>  Key: DERBY-1507
>  URL: http://issues.apache.org/jira/browse/DERBY-1507
>  Project: Derby
> Type: Bug

>   Components: Test
> Versions: 10.2.0.0
> Reporter: A B
> Assignee: A B
> Priority: Minor
>  Attachments: d1507_v1.patch
>
> I recently tried to run the lang/xmlBinding.java test and I noticed that the 
> test fails with a Security Exception:
> > FAIL: Unexpected exception:
> > ERROR 2200L: XMLPARSE operand is not an XML document; see next exception 
> > for details.
> > java.security.AccessControlException: access denied (java.io.FilePermission 
> > {user.dir}/personal.dtd read)
> >   at 
> > java.security.AccessControlContext.checkPermission(AccessControlContext.java(Compiled
> >  Code))
> >   at 
> > java.security.AccessController.checkPermission(AccessController.java(Compiled
> >  Code))
> >   at 
> > java.lang.SecurityManager.checkPermission(SecurityManager.java(Compiled 
> > Code))
> >   at java.lang.SecurityManager.checkRead(SecurityManager.java(Compiled 
> > Code))
> This failure does not show up in the nightlies because the XML tests are not 
> currently run as part of derbyall (see DERBY-563, DERBY-567).
> I looked a this a bit and eventually realized that the test itself has all 
> the permission it needs, but Xerces, which Derby uses to parse XML documents, 
> does not.  More specifically, and XML document can include a pointer to a 
> schema document, which Xerces will then try to read.  In order for that to 
> work the Xerces classes would have to have read permission on user.dir, but 
> we can't add that permission to the derby_tests.policy file because the 
> Xerces classes could be in a Xerces jar anywhere on the system, or they could 
> be included the JVM's own jar (ex. IBM 1.4).  And further, when DERBY-567 is 
> fixed the parser that is used could vary from JVM to JVM, so it might not be 
> Xerces but some other parser that needs read permission. 
> One workaround would be to grant read FilePermission on {user.dir} to "all" 
> (when I did that the test ran cleanly), but it seems to me like that would 
> defeat the purpose of much of the security manager testing.  So until a better
> option arises, I think the only (or at least, the easiest) option is to just 
> run this specific test with no security manager.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-1507) lang/xmlBinding.java fails with Security Expression

2006-07-12 Thread Andrew McIntyre (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1507?page=comments#action_12420737 ] 

Andrew McIntyre commented on DERBY-1507:


Instead of disabling the security manager, why not grant permission to read 
user.dir to all in a test-specific policy file for this test?

i.e. add an xmlBinding.policy that contains:

grant {
  permission java.io.FilePermission "${user.dir}/personal.dtd" "read";
};

> lang/xmlBinding.java fails with Security Expression
> ---
>
>  Key: DERBY-1507
>  URL: http://issues.apache.org/jira/browse/DERBY-1507
>  Project: Derby
> Type: Bug

>   Components: Test
> Versions: 10.2.0.0
> Reporter: A B
> Assignee: A B
> Priority: Minor
>  Attachments: d1507_v1.patch
>
> I recently tried to run the lang/xmlBinding.java test and I noticed that the 
> test fails with a Security Exception:
> > FAIL: Unexpected exception:
> > ERROR 2200L: XMLPARSE operand is not an XML document; see next exception 
> > for details.
> > java.security.AccessControlException: access denied (java.io.FilePermission 
> > {user.dir}/personal.dtd read)
> >   at 
> > java.security.AccessControlContext.checkPermission(AccessControlContext.java(Compiled
> >  Code))
> >   at 
> > java.security.AccessController.checkPermission(AccessController.java(Compiled
> >  Code))
> >   at 
> > java.lang.SecurityManager.checkPermission(SecurityManager.java(Compiled 
> > Code))
> >   at java.lang.SecurityManager.checkRead(SecurityManager.java(Compiled 
> > Code))
> This failure does not show up in the nightlies because the XML tests are not 
> currently run as part of derbyall (see DERBY-563, DERBY-567).
> I looked a this a bit and eventually realized that the test itself has all 
> the permission it needs, but Xerces, which Derby uses to parse XML documents, 
> does not.  More specifically, and XML document can include a pointer to a 
> schema document, which Xerces will then try to read.  In order for that to 
> work the Xerces classes would have to have read permission on user.dir, but 
> we can't add that permission to the derby_tests.policy file because the 
> Xerces classes could be in a Xerces jar anywhere on the system, or they could 
> be included the JVM's own jar (ex. IBM 1.4).  And further, when DERBY-567 is 
> fixed the parser that is used could vary from JVM to JVM, so it might not be 
> Xerces but some other parser that needs read permission. 
> One workaround would be to grant read FilePermission on {user.dir} to "all" 
> (when I did that the test ran cleanly), but it seems to me like that would 
> defeat the purpose of much of the security manager testing.  So until a better
> option arises, I think the only (or at least, the easiest) option is to just 
> run this specific test with no security manager.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Derby Internals Wiki

2006-07-12 Thread Daniel John Debrunner
Jean T. Anderson wrote:

> Sanket Sharma wrote:
> 
>>Hi,
>>
>>While reading Derby source code for my project, I thought It will be
>>good to share my knowledge with other developers. Since my project is
>>about adding JMX to Derby, it will interact with a lot of internal API
>>calls. As I continue to read and understand code, I think will good if
>>I can document all this somewhere. Is there any Derby Internals wiki
>>page where I can post all this information?
> 
> 
> Not on the wiki that I'm aware of, but some internals writeups are on
> the web site here:
> 
> http://db.apache.org/derby/papers/index.html
> 
> 
>>If it is not already there, I'll add a page to my JMX wiki page and
>>start documenting everything there. It may be later linked to Derby
>>main page.
>>
>>I welcome any comments and suggestions.
> 
> 
> Anything you can add to the knowledge is good and doing it on the wiki
> is fine.

Agreed, I'm adding some pages in this general area, links to
descriptions of functionality, design specs etc. You can link your
information off the JMX page and then later link it into the area I'm
adding.

Thanks,
Dan.



[jira] Updated: (DERBY-1508) improve store costing of fetch of single row where not all columns are fetched.

2006-07-12 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1508?page=all ]

Mike Matrigali updated DERBY-1508:
--


One idea is to put part of the estimate back on the datatype: have the store 
ask each datatype about its size.  When the
costing work was originally done there was no size information available in the 
datatype, but since then I believe support
has been added for estimating at least the in-memory size of the datatypes - 
either this or something similar may be usable
now to improve the costing.  Maybe the question is average size, maybe maximum 
size.  Then some calculation from this
static information about the datatypes and the average actual row size could 
come up with an average estimate of the overflow
lengths of long columns.
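
As a rough illustration of this idea (class and method names below are hypothetical, not Derby's actual HeapCostController API), per-column estimated widths could be combined with the measured average row length so that a huge unreferenced long column no longer inflates the fetch cost:

class ColumnSubsetCostSketch {
    // estimatedColumnWidths: estimated average width of each column (e.g. asked of the datatype)
    // requestedColumns:      which columns the query actually fetches
    // avgRowLengthOnDisk:    average row length derived from row count and container size
    // costPerByte:           cost of pulling one byte through the cache
    static double estimateFetchCost(long[] estimatedColumnWidths,
                                    boolean[] requestedColumns,
                                    double avgRowLengthOnDisk,
                                    double costPerByte) {
        long total = 0;
        long requested = 0;
        for (int i = 0; i < estimatedColumnWidths.length; i++) {
            total += estimatedColumnWidths[i];
            if (requestedColumns[i]) {
                requested += estimatedColumnWidths[i];
            }
        }
        if (total == 0) {
            // no per-column estimates available; fall back to the current behavior
            return avgRowLengthOnDisk * costPerByte;
        }
        // Apportion the measured row length by the requested columns' share of the
        // estimated widths, so only the bytes the query needs are charged.
        double requestedBytes = avgRowLengthOnDisk * ((double) requested / (double) total);
        return requestedBytes * costPerByte;
    }
}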

> improve store costing of fetch of single row where not all columns are 
> fetched.
> ---
>
>  Key: DERBY-1508
>  URL: http://issues.apache.org/jira/browse/DERBY-1508
>  Project: Derby
> Type: Improvement

>   Components: Store
> Reporter: Mike Matrigali
> Priority: Minor

>
> Currently HeapCostController ignores information about what subset of columns 
> is being requested.  For instance
> in getFetchFromRowLocationCost() validColumns argument is unused.  In 
> getScanCost() , scanColumnList is unused.
> Mostly this probably does no matter as the cost of getting the row dominates 
> the per column subset cost.  The area
> where this matters is the case of long columns.  The cost of fetching 
> multiple row keyed by a row location  with a 2 gigabyte column that is not in 
> the select list  is currently way smaller than the cost of doing the same 
> query by scanning the table for the same set of rows.  
> Currently the heap estimate associates cost with total average row length, 
> gotten by using the # of rows and the amount of space in the 
> container.  It does not have a statistic available of the average size of 
> each column.  At it's level  all column lengths are variable and could
> possibly be > 2 gig.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Created: (DERBY-1508) improve store costing of fetch of single row where not all columns are fetched.

2006-07-12 Thread Mike Matrigali (JIRA)
improve store costing of fetch of single row where not all columns are fetched.
---

 Key: DERBY-1508
 URL: http://issues.apache.org/jira/browse/DERBY-1508
 Project: Derby
Type: Improvement

  Components: Store  
Reporter: Mike Matrigali
Priority: Minor


Currently HeapCostController ignores information about what subset of columns 
is being requested.  For instance,
in getFetchFromRowLocationCost() the validColumns argument is unused, and in 
getScanCost() scanColumnList is unused.

Mostly this probably does not matter, as the cost of getting the row dominates 
the per-column subset cost.  The area
where this matters is the case of long columns.  The cost of fetching multiple 
rows keyed by a row location with a 2 gigabyte column that is not in 
the select list is currently way smaller than the cost of doing the same query 
by scanning the table for the same set of rows.  

Currently the heap estimate associates cost with the total average row length, 
gotten by using the # of rows and the amount of space in the 
container.  It does not have a statistic available for the average size of each 
column.  At its level all column lengths are variable and could
possibly be > 2 gig.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Derby Internals Wiki

2006-07-12 Thread Jean T. Anderson
Sanket Sharma wrote:
> Hi,
> 
> While reading Derby source code for my project, I thought It will be
> good to share my knowledge with other developers. Since my project is
> about adding JMX to Derby, it will interact with a lot of internal API
> calls. As I continue to read and understand code, I think will good if
> I can document all this somewhere. Is there any Derby Internals wiki
> page where I can post all this information?

Not on the wiki that I'm aware of, but some internals writeups are on
the web site here:

http://db.apache.org/derby/papers/index.html

> If it is not already there, I'll add a page to my JMX wiki page and
> start documenting everything there. It may be later linked to Derby
> main page.
> 
> I welcome any comments and suggestions.

Anything you can add to the knowledge is good and doing it on the wiki
is fine.

 -jean



[jira] Commented: (DERBY-1483) Java Function defined with a BIGINT parameter invokes the method with a signature of method(long) rather than method(Long)

2006-07-12 Thread Kathey Marsden (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1483?page=comments#action_12420731 ] 

Kathey Marsden commented on DERBY-1483:
---

Is this current doc  reference sufficient?

http://db.apache.org/derby/docs/10.1/ref/rrefjdbc75719.html
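
For reference, a minimal form of the external method that works with the BIGINT mapping described in the report quoted below (this simply mirrors the working bigintToHexString2 variant; Derby passes the BIGINT argument as the Java primitive long):

public class DerbyJavaUtils {
    // The method must declare long, not java.lang.Long; Long.toHexString is
    // static, so no boxing or instance is needed.
    public static String bigintToHexString(long value) {
        return Long.toHexString(value);
    }
}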



> Java Function defined with a BIGINT parameter invokes the method with a 
> signature of method(long) rather than method(Long)
> --
>
>  Key: DERBY-1483
>  URL: http://issues.apache.org/jira/browse/DERBY-1483
>  Project: Derby
> Type: Bug

>   Components: Documentation
> Versions: 10.1.2.1
> Reporter: Stan Bradbury
> Priority: Minor

>
> Calling a function passing BIGINT to a method accepting Long fails with the 
> message:
> ERROR 42X50: No method was found that matched the method call 
> derbyJavaUtils.bigintToHexString(long), tried all combinations of object and 
> primitive types and any possible type conversion for any  parameters the 
> method call may have. The method might exist but it is not public and/or 
> static, or the parameter types are not method invocation convertible.
> The method needs to accept the primitive type long to work.  BIGINT is 
> documented as having a compile-time type of java.lang.Long - this is why I 
> expected the example method to work: see the Reference manual: 
> http://db.apache.org/derby/docs/10.1/ref/rrefsqlj30435.html.
>   
> Example: define the function bigintToHexString to accept a BIGINT parameter 
> (see below) and reference the corresponding  java method bigintToHexString 
> (shown below) that accepts a Long.  Add the jarfile with the class to the DB, 
> setup the database classpath and invoke with the query shown.
>   >>> Java Class:
> import java.sql.*;
> public class derbyJavaUtils
> {
> // bigintToHexString
> public static String bigintToHexString(Long myBigint)
> {
>   return myBigint.toHexString(myBigint.longValue());
> }
> // bigintToHexString2 - this will work if used for the function
> public static String bigintToHexString2(long myBigint)
> {
>   Long myLong = null;
>   return myLong.toHexString(myBigint);
> }
> }
>  >> COMPILE IT AND JAR IT :  jar -cvf derbyJavaUtils.jar DerbyJavaUtils.class
> >> Setup the function as follows in a database:
>   .. CALL sqlj.install_jar( 'derbyJavaUtils.jar','APP.derbyJavaUtils',0);
>   .. CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY('derby.database.classpath', 
> 'APP.derbyJavaUtils');
>   .. CREATE FUNCTION app.bigintToHexString(hexString bigint)
> RETURNS VARCHAR(16)
> PARAMETER STYLE JAVA NO SQL
> LANGUAGE JAVA 
> EXTERNAL NAME 'derbyJavaUtils.bigintToHexString'
>   === One possible test query:
> select  'C' ||  bigintToHexString2(CONGLOMERATENUMBER) ||  '.dat', TABLENAME, 
> ISINDEX
> from SYS.SYSCONGLOMERATES a, SYS.SYSTABLES b
>   where a.TABLEID = b.TABLEID
>   As mentioned in the code comments, the method bigintToHexString2 will work 
> if used for the function.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-1507) lang/xmlBinding.java fails with Security Expression

2006-07-12 Thread A B (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1507?page=all ]

A B updated DERBY-1507:
---

Attachment: d1507_v1.patch

Attaching a simple patch to disable security manager for lang/xmlBinding.java.  
If anyone has any better alternatives, please let me know.  Otherwise, this is 
a one-line change so if someone could commit, that'd be great.

> lang/xmlBinding.java fails with Security Expression
> ---
>
>  Key: DERBY-1507
>  URL: http://issues.apache.org/jira/browse/DERBY-1507
>  Project: Derby
> Type: Bug

>   Components: Test
> Versions: 10.2.0.0
> Reporter: A B
> Assignee: A B
> Priority: Minor
>  Attachments: d1507_v1.patch
>
> I recently tried to run the lang/xmlBinding.java test and I noticed that the 
> test fails with a Security Exception:
> > FAIL: Unexpected exception:
> > ERROR 2200L: XMLPARSE operand is not an XML document; see next exception 
> > for details.
> > java.security.AccessControlException: access denied (java.io.FilePermission 
> > {user.dir}/personal.dtd read)
> >   at 
> > java.security.AccessControlContext.checkPermission(AccessControlContext.java(Compiled
> >  Code))
> >   at 
> > java.security.AccessController.checkPermission(AccessController.java(Compiled
> >  Code))
> >   at 
> > java.lang.SecurityManager.checkPermission(SecurityManager.java(Compiled 
> > Code))
> >   at java.lang.SecurityManager.checkRead(SecurityManager.java(Compiled 
> > Code))
> This failure does not show up in the nightlies because the XML tests are not 
> currently run as part of derbyall (see DERBY-563, DERBY-567).
> I looked a this a bit and eventually realized that the test itself has all 
> the permission it needs, but Xerces, which Derby uses to parse XML documents, 
> does not.  More specifically, and XML document can include a pointer to a 
> schema document, which Xerces will then try to read.  In order for that to 
> work the Xerces classes would have to have read permission on user.dir, but 
> we can't add that permission to the derby_tests.policy file because the 
> Xerces classes could be in a Xerces jar anywhere on the system, or they could 
> be included the JVM's own jar (ex. IBM 1.4).  And further, when DERBY-567 is 
> fixed the parser that is used could vary from JVM to JVM, so it might not be 
> Xerces but some other parser that needs read permission. 
> One workaround would be to grant read FilePermission on {user.dir} to "all" 
> (when I did that the test ran cleanly), but it seems to me like that would 
> defeat the purpose of much of the security manager testing.  So until a better
> option arises, I think the only (or at least, the easiest) option is to just 
> run this specific test with no security manager.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Google SOC:MySQL to Derby Migration tool design question

2006-07-12 Thread Jean T. Anderson
Jean T. Anderson wrote:
> Ramin Moazeni wrote:
> 
>>Hi Bryan,
>>
>>I am not sure this would be feasible given the amount of time I have
>>to finish this project as well as my familiarity with DdlUtils code
>>base. But if everybody agrees to it, I can start working on it.
>>
> One thing to consider is DdlUtils is database agnostic. For example,
> adding support for "create view" doesn't mean just adding it for Derby,
> but also adding it for every database supported (see the list at
> http://db.apache.org/ddlutils/database-support.html ).
> 
> You might consider posting to ddlutils-dev@db.apache.org to ask what
> level of effort people think might be required to implement the missing
> features.

Some discussion regarding constraints, triggers, and stored procedures
is here:

http://issues.apache.org/jira/browse/DDLUTILS-28


 -jean


>  -jean
> 
> 
> 
>>Thanks
>>Ramin
>>
>>On 7/11/06, Bryan Pendleton <[EMAIL PROTECTED]> wrote:
>>
>>
The DdlUtils tool seems not be capable of migrating views, CHECK
constraints,  and stored procedures. I would like to know what do you
think if DdlUtils tool can be reused for migrating the tables and
Indexes, and use the DatabaseMetadata for migrating views and stored
procedures? .
>>>
>>>Perhaps another possibility would be for you to improve DdlUtils so
>>>that it has these desirable features. The end result would be a better
>>>DdlUtils *and* a MySQL-to-Derby migration tool.
>>>
>>>thanks,
>>>
>>>bryan
>>>
>>>
> 
> 



[jira] Created: (DERBY-1507) lang/xmlBinding.java fails with Security Expression

2006-07-12 Thread A B (JIRA)
lang/xmlBinding.java fails with Security Expression
---

 Key: DERBY-1507
 URL: http://issues.apache.org/jira/browse/DERBY-1507
 Project: Derby
Type: Bug

  Components: Test  
Versions: 10.2.0.0
Reporter: A B
 Assigned to: A B 
Priority: Minor


I recently tried to run the lang/xmlBinding.java test and I noticed that the 
test fails with a Security Exception:

> FAIL: Unexpected exception:
> ERROR 2200L: XMLPARSE operand is not an XML document; see next exception for 
> details.
> java.security.AccessControlException: access denied (java.io.FilePermission 
> {user.dir}/personal.dtd read)
>   at 
> java.security.AccessControlContext.checkPermission(AccessControlContext.java(Compiled
>  Code))
>   at 
> java.security.AccessController.checkPermission(AccessController.java(Compiled 
> Code))
>   at 
> java.lang.SecurityManager.checkPermission(SecurityManager.java(Compiled Code))
>   at java.lang.SecurityManager.checkRead(SecurityManager.java(Compiled 
> Code))

This failure does not show up in the nightlies because the XML tests are not 
currently run as part of derbyall (see DERBY-563, DERBY-567).

I looked at this a bit and eventually realized that the test itself has all the 
permissions it needs, but Xerces, which Derby uses to parse XML documents, does 
not.  More specifically, an XML document can include a pointer to a schema 
document, which Xerces will then try to read.  In order for that to work the 
Xerces classes would have to have read permission on user.dir, but we can't add 
that permission to the derby_tests.policy file because the Xerces classes could 
be in a Xerces jar anywhere on the system, or they could be included in the JVM's 
own jar (e.g. IBM 1.4).  And further, when DERBY-567 is fixed the parser that is 
used could vary from JVM to JVM, so it might not be Xerces but some other 
parser that needs read permission. 

One workaround would be to grant read FilePermission on {user.dir} to "all" 
(when I did that the test ran cleanly), but it seems to me like that would 
defeat the purpose of much of the security manager testing.  So until a better
option arises, I think the only (or at least, the easiest) option is to just 
run this specific test with no security manager.
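
As a minimal, self-contained illustration of the failure mode (this is just a sketch, not the Derby test harness code): with a security manager installed and no FilePermission granted for the file, any attempt to read the DTD fails the same way the parser's read does.

import java.io.File;
import java.io.FileInputStream;

public class DtdReadSketch {
    public static void main(String[] args) throws Exception {
        // Resolve the path first; reading system properties also needs a
        // permission once a security manager is installed.
        File dtd = new File(System.getProperty("user.dir"), "personal.dtd");

        // Install a security manager governed by whatever policy is in effect.
        System.setSecurityManager(new SecurityManager());

        // Unless the policy grants
        //   permission java.io.FilePermission "${user.dir}/personal.dtd" "read";
        // this throws java.security.AccessControlException -- the same check the
        // XML parser hits when it follows the document's DTD reference.
        new FileInputStream(dtd).close();
    }
}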

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-1506) full table scans of tables which don't use indexes, which have blobs, but don't reference blob data still read all pages of the table

2006-07-12 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1506?page=all ]

Mike Matrigali updated DERBY-1506:
--


I haven't thought about this much, but the following approaches would all solve 
the problem, some more easily than others; 
these address the long column issue (I don't think the long row issue is as 
important):

1) Provide an alternate heap and/or container implementation where the overflow 
pointer of an overflow column points to a page 
in another container, thus effectively moving "blob" space out of the current 
container.  We would need to decide how many 
blob spaces to allow per table.  Some options are: 1 per table, N per table (each 
growing to X bytes, where X may be the max
file size on the device), 1 per blob column, ...

   I lean toward moving the blob space out of the current space rather than 
segmenting the current space into blob 
and non-blob space.  This would allow a possibly easier path in the 
future to features like non-logged blobs.

2) Provide an alternate (upgraded) container implementation where the page map 
tracks page type in addition to allocation state, or keep separate page maps 
per page type.  The scan of "main" pages could then use the page maps to get 
to the "next" main page efficiently (a rough sketch of this idea follows after 
this list).  We should be careful not to make these new page maps a concurrency 
problem where multiple scans block each other on access to the page maps.

3) For already indexed tables, figure out a way for the optimizer to use the 
index for the scan (I am likely to report this as a 
separate JIRA issue). 

4) For unindexed tables, assuming a fix for 3 is implemented, we could create an 
internal index on the table that would
use existing technology and basically provide the functionality of #2, at 
the cost of maintaining the index.  My initial
take is that it is reasonable to assume some sort of index (or 
primary key) on large tables, in applications
that care about performance.
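
Purely as an illustration of approach 2 (the names here are hypothetical and this is not Derby's actual allocation-map code), a page map that remembers page type lets a heap scan jump to the next "main" page without faulting overflow pages into the cache:

import java.util.BitSet;

class TypedPageMap {
    private final BitSet allocated = new BitSet();
    private final BitSet mainPage = new BitSet();   // true = main page, false = overflow page

    void allocate(int pageNumber, boolean isMainPage) {
        allocated.set(pageNumber);
        mainPage.set(pageNumber, isMainPage);
    }

    // Returns the next allocated "main" page at or after 'from', or -1 if none.
    int nextMainPage(int from) {
        for (int p = allocated.nextSetBit(from); p >= 0; p = allocated.nextSetBit(p + 1)) {
            if (mainPage.get(p)) {
                return p;        // the scan reads only this page
            }
            // overflow page: skipped without ever being read into the cache
        }
        return -1;
    }
}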


> full table scans of tables which don't use indexes, which have blobs, but 
> don't reference blob data still read all pages of the table
> -
>
>  Key: DERBY-1506
>  URL: http://issues.apache.org/jira/browse/DERBY-1506
>  Project: Derby
> Type: Improvement

>   Components: Store
> Versions: 10.1.3.1
> Reporter: Mike Matrigali
> Priority: Minor

>
> A full table scan that does not use an index always reads every allocated 
> page of the table through the cache.  In two cases 
> this means logically it reads more data that necessary: long rows (where the 
> head columns span multiple pages) and long columns
> (columns where the data in a single column spans multiple pages).  In both 
> these cases the space for the "overflow" portion of the
> row or column is currently stored in the same space as the regular "main" 
> pages.  The current implementation of a table scan of 
> a heap container is to call raw store to give it a linear list of main pages 
> with rows, raw store conglomerate implementations step through each allocated 
> page in the container and returns the "main" pages (reading the overflow 
> pages into cache, identifying them, and skipping them) 
> the access layer which then returns rows as requested to the query processing 
> layer.
> If a table contains rows with very long columns (ie. 2gig blobs), and the 
> tablescan does not request the blob data then a lot of data
> is read from disk but not required by the query. 
> A more unusual case is a table scan on requiring a few columns from a table 
> made up of  2 gig rows made up of all less than 32k columns.,
> in this case also derby  will read all pages as part of a tablescan even if 
> only the first column is the only required column of the chain.
> Note that this is only a problem in tablescan of heap tables.  In both cases 
> if an index is used to get the row, then ONLY the required data is
> read from disk.  In the long column case the main row has only a pointer to 
> the overflow chain for the blob and it will not be read unless the
> blob data is required.  In the long row case data, columns appear in the 
> container in the order they are created in the original "create table"
> statement.  Data is read from disk into cache for all columns from the 1st up 
> to the "last"  one referenced in the query.  Data objects are only
> instantiated from the cache data for the columns referenced in the query.
> I have marked this low in priority as I believe that full, unindexed tables 
> scans of tables with gigabyte blobs are not the normal case.  Seems like most 
> applications would do keyed lookups of the table.But there may be apps 
> that need to
> do full table reporting on the non'-blob data in such a table.

-- 
This message is automatically generated by JIRA.

[jira] Updated: (DERBY-1499) test run exits for _foundation run because unloadEmbeddedDriver uses driverManager, which isn't available.

2006-07-12 Thread Myrna van Lunteren (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1499?page=all ]

Myrna van Lunteren updated DERBY-1499:
--

Attachment: DERBY-1499_102_20060712.diff

I rethought my earlier solution, so the newer patch - DERBY-1499_20060712.diff 
- changes RunList.java to check on the util.TestUtil field HAVE_DRIVER_CLASS. 
This way the solution is not dependent on any jvm version.
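
A minimal sketch of the kind of guard this describes (simplified, not the actual RunList.java diff; only the HAVE_DRIVER_CLASS name comes from the patch description): gate any DriverManager use on whether the class is present in the running JVM rather than on a JVM version check.

public class DriverUnloadGuard {
    // In the harness this role is played by util.TestUtil.HAVE_DRIVER_CLASS.
    static final boolean HAVE_DRIVER_CLASS = haveClass("java.sql.DriverManager");

    static boolean haveClass(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (Throwable t) {
            // ClassNotFoundException/NoClassDefFoundError on Foundation-class JVMs
            return false;
        }
    }

    static void unloadEmbeddedDriverIfPossible() {
        if (HAVE_DRIVER_CLASS) {
            // it is safe to reference java.sql.DriverManager here
        }
    }
}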

> test run exits for _foundation run because unloadEmbeddedDriver uses 
> driverManager, which isn't available.
> --
>
>  Key: DERBY-1499
>  URL: http://issues.apache.org/jira/browse/DERBY-1499
>  Project: Derby
> Type: Bug

>   Components: Test
> Versions: 10.2.0.0
>  Environment: windows - with wctme5.7_foundation 
> Reporter: Myrna van Lunteren
> Assignee: Myrna van Lunteren
>  Attachments: DERBY-1499_102_20060711.diff, DERBY-1499_102_20060712.diff
>
> The wctme5.7_foundation runs have been exiting out for a while. 
> We only run this environment weekly and were under the impression there was a 
> time out. But I ran the suite, capturing the output in a file, and the last 
> part of the run shows the problem is with unloadEmbeddedDriver:
> Now do RunList
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> java.sql.DriverManager
>   at 
> org.apache.derbyTesting.functionTests.harness.RunList.unloadEmbeddedDriver(RunList.java:1636)
>   at 
> org.apache.derbyTesting.functionTests.harness.RunList.runSuites(RunList.java:276)
>   at 
> org.apache.derbyTesting.functionTests.harness.RunList.(RunList.java:167)
>   at 
> org.apache.derbyTesting.functionTests.harness.RunSuite.getSuitesList(RunSuite.java:208)
>   at 
> org.apache.derbyTesting.functionTests.harness.RunSuite.main(RunSuite.java:147)
> UnloadEmbeddedDriver was put in to allow running with useprocess=false and 
> autoload of the driver, so it's needed. 
> But maybe we can add an appropriate if condition so it doesn't try to do this 
> with foundation.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-1506) full table scans of tables which don't use indexes, which have blobs, but don't reference blob data still read all pages of the table

2006-07-12 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1506?page=all ]

Mike Matrigali updated DERBY-1506:
--


The following approach was suggested by [EMAIL PROTECTED]

Maybe a simple fix would be to segment the pages in the container. Store the
relational information in the pages at the "top" of the container, and a
reference to the first page of "Blob space". Each row would then store
a reference to the blob pages that are relative to the start of the
Blobspace.





I wouldn't call this a bug per se, but a design issue.
I would suggest opening up a JIRA case to focus on the data storage rather
than the optimizer.

Clearly Derby's storage of Blobs is inefficient.

So, the key would be to make improvements to Derby's storage that would
have minimal impact on existing implementations.

If we extend the container concept to be more intelligent about the data and
its pages in the "table", we can greatly improve performance.  By segmenting
the blobs away from the rest of the table data, we could increase the
efficiency of the table, with the only sacrifice being an extra jump to get
to the front of the blob. (Relatively speaking, this cost would be minimal
considering that we're talking about blobs that for the most part span
multiple pages.)






> full table scans of tables which don't use indexes, which have blobs, but 
> don't reference blob data still read all pages of the table
> -
>
>  Key: DERBY-1506
>  URL: http://issues.apache.org/jira/browse/DERBY-1506
>  Project: Derby
> Type: Improvement

>   Components: Store
> Versions: 10.1.3.1
> Reporter: Mike Matrigali
> Priority: Minor

>
> A full table scan that does not use an index always reads every allocated 
> page of the table through the cache.  In two cases 
> this means logically it reads more data that necessary: long rows (where the 
> head columns span multiple pages) and long columns
> (columns where the data in a single column spans multiple pages).  In both 
> these cases the space for the "overflow" portion of the
> row or column is currently stored in the same space as the regular "main" 
> pages.  The current implementation of a table scan of 
> a heap container is to call raw store to give it a linear list of main pages 
> with rows, raw store conglomerate implementations step through each allocated 
> page in the container and returns the "main" pages (reading the overflow 
> pages into cache, identifying them, and skipping them) 
> the access layer which then returns rows as requested to the query processing 
> layer.
> If a table contains rows with very long columns (ie. 2gig blobs), and the 
> tablescan does not request the blob data then a lot of data
> is read from disk but not required by the query. 
> A more unusual case is a table scan on requiring a few columns from a table 
> made up of  2 gig rows made up of all less than 32k columns.,
> in this case also derby  will read all pages as part of a tablescan even if 
> only the first column is the only required column of the chain.
> Note that this is only a problem in tablescan of heap tables.  In both cases 
> if an index is used to get the row, then ONLY the required data is
> read from disk.  In the long column case the main row has only a pointer to 
> the overflow chain for the blob and it will not be read unless the
> blob data is required.  In the long row case data, columns appear in the 
> container in the order they are created in the original "create table"
> statement.  Data is read from disk into cache for all columns from the 1st up 
> to the "last"  one referenced in the query.  Data objects are only
> instantiated from the cache data for the columns referenced in the query.
> I have marked this low in priority as I believe that full, unindexed tables 
> scans of tables with gigabyte blobs are not the normal case.  Seems like most 
> applications would do keyed lookups of the table.But there may be apps 
> that need to
> do full table reporting on the non'-blob data in such a table.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-1506) full table scans of tables which don't use indexes, which have blobs, but don't reference blob data still read all pages of the table

2006-07-12 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1506?page=all ]

Mike Matrigali updated DERBY-1506:
--

Description: 
A full table scan that does not use an index always reads every allocated page 
of the table through the cache.  In two cases 
this means it logically reads more data than necessary: long rows (where the 
head columns span multiple pages) and long columns
(columns where the data in a single column spans multiple pages).  In both 
these cases the space for the "overflow" portion of the
row or column is currently stored in the same space as the regular "main" 
pages.  The current implementation of a table scan of 
a heap container is to call raw store to give it a linear list of main pages 
with rows; the raw store conglomerate implementations step through each allocated 
page in the container and return the "main" pages (reading the overflow pages 
into cache, identifying them, and skipping them) to 
the access layer, which then returns rows as requested to the query processing 
layer.

If a table contains rows with very long columns (i.e. 2 gig blobs), and the 
tablescan does not request the blob data, then a lot of data
is read from disk but not required by the query. 
A more unusual case is a table scan requiring a few columns from a table 
made up of 2 gig rows composed entirely of columns smaller than 32k;
in this case Derby will also read all pages as part of a tablescan even if 
the first column is the only required column of the chain.

Note that this is only a problem in tablescans of heap tables.  In both cases, if 
an index is used to get the row, then ONLY the required data is
read from disk.  In the long column case the main row has only a pointer to the 
overflow chain for the blob and it will not be read unless the
blob data is required.  In the long row case, columns appear in the 
container in the order they were created in the original "create table"
statement.  Data is read from disk into cache for all columns from the 1st up 
to the "last" one referenced in the query.  Data objects are only
instantiated from the cache data for the columns referenced in the query.

I have marked this low in priority as I believe that full, unindexed table 
scans of tables with gigabyte blobs are not the normal case.  Seems like most 
applications would do keyed lookups of the table.  But there may be apps that 
need to
do full table reporting on the non-blob data in such a table.

  was:
A full table scan that does not use an index always reads every allocated page 
of the table through the cache.  In two cases 
this means logically it reads more data that necessary: long rows (where the 
head columns span multiple pages) and long columns
(columns where the data in a single column spans multiple pages).  In both 
these cases the space for the "overflow" portion of the
row or column is currently stored in the same space as the regular "main" 
pages.  The current implementation of a table scan of 
a heap container is to call raw store to give it a linear list of main pages 
with rows, raw store conglomerate implementations step through each allocated 
page in the container and returns the "main" pages (reading the overflow pages 
into cache, identifying them, and skipping them) 
the access layer which then returns rows as requested to the query processing 
layer.

If a table contains rows with very long columns (ie. 2gig blobs), and the 
tablescan does not request the blob data then a lot of data
is read from disk but not required by the query. 
A more unusual case is a table scan on requiring a few columns from a table 
made up of  2 gig rows made up of all less than 32k columns.,
in this case also derby  will read all pages as part of a tablescan even if 
only the first column is the only required column of the chain.

Note that this is only a problem in tablescan of heap tables.  In both cases if 
an index is used to get the row, then ONLY the required data is
read from disk.  In the long column case the main row has only a pointer to the 
overflow chain for the blob and it will not be read unless the
blob data is required.  In the long row case data, columns appear in the 
container in the order they are created in the original "create table"
statement.  Data is read from disk into cache for all columns from the 1st up 
to the "last"  one referenced in the query.  Data objects are only
instantiated from the cache data for the columns referenced in the query.


This improvement report was prompted by the following performance report on the 
derby list:
Hi all,

When experimenting with BLOB's I ran into a performance issue
that I cannot completely explain, but it could be a bug.

Given the following table:

CREATE TABLE BLOB_TABLE (
BLOB_ID BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT 
BY 1),
BLOB_SIZE BIGINT NOT NULL,
BLOB_CRC BIGINT NOT NULL,
BLOB_NAME VARCHAR(255) NOT NULL,
BLOB_DATA BLOB(2G) NOT NULL,
UNIQUE (BLOB_CRC, BLOB_

Re: prioritized 10.2 bug list

2006-07-12 Thread Kathey Marsden

Rick Hillegas wrote:

Thanks to everyone for helping clean up JIRA and clarify the issues we 
want to address in 10.2. It would be great if we could march in 
priority order through the issues listed in the Open 10.2 Issues 
report at 
http://wiki.apache.org/db-derby/TenTwoRelease#head-7cf194b6c7305a0e83d0c9c422f0632215f6cb19. 



Rick, I don't understand this list.  If I think a bug is a good 
candidate for  10.2, do I just mark it fix in 10.2  and leave it unassigned?

I had thought Fix In 10.2 just meant someone planned to fix it for 10.2

Thanks for the clarification.

Kathey




[jira] Created: (DERBY-1506) full table scans of tables which don't use indexes, which have blobs, but don't reference blob data still read all pages of the table

2006-07-12 Thread Mike Matrigali (JIRA)
full table scans of tables which don't use indexes, which have blobs, but don't 
reference blob data still read all pages of the table
-

 Key: DERBY-1506
 URL: http://issues.apache.org/jira/browse/DERBY-1506
 Project: Derby
Type: Improvement

  Components: Store  
Versions: 10.1.3.1
Reporter: Mike Matrigali
Priority: Minor


A full table scan that does not use an index always reads every allocated page 
of the table through the cache.  In two cases 
this means it logically reads more data than necessary: long rows (where the 
head columns span multiple pages) and long columns
(columns where the data in a single column spans multiple pages).  In both 
these cases the space for the "overflow" portion of the
row or column is currently stored in the same space as the regular "main" 
pages.  The current implementation of a table scan of 
a heap container is to call raw store to give it a linear list of main pages 
with rows; the raw store conglomerate implementations step through each allocated 
page in the container and return the "main" pages (reading the overflow pages 
into cache, identifying them, and skipping them) to 
the access layer, which then returns rows as requested to the query processing 
layer.

If a table contains rows with very long columns (i.e. 2 gig blobs), and the 
tablescan does not request the blob data, then a lot of data
is read from disk but not required by the query. 
A more unusual case is a table scan requiring a few columns from a table 
made up of 2 gig rows composed entirely of columns smaller than 32k;
in this case Derby will also read all pages as part of a tablescan even if 
the first column is the only required column of the chain.

Note that this is only a problem in tablescans of heap tables.  In both cases, if 
an index is used to get the row, then ONLY the required data is
read from disk.  In the long column case the main row has only a pointer to the 
overflow chain for the blob and it will not be read unless the
blob data is required.  In the long row case, columns appear in the 
container in the order they were created in the original "create table"
statement.  Data is read from disk into cache for all columns from the 1st up 
to the "last" one referenced in the query.  Data objects are only
instantiated from the cache data for the columns referenced in the query.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Derby Internals Wiki

2006-07-12 Thread Sanket Sharma

Hi,

While reading the Derby source code for my project, I thought it would be
good to share my knowledge with other developers. Since my project is
about adding JMX to Derby, it will interact with a lot of internal API
calls. As I continue to read and understand the code, I think it will be good if
I can document all this somewhere. Is there a Derby Internals wiki
page where I can post all this information?
If it is not already there, I'll add a page to my JMX wiki page and
start documenting everything there. It may later be linked to the Derby
main page.

I welcome any comments and suggestions.

Best Regards,
Sanket Sharma


Re: Google SOC:MySQL to Derby Migration tool design question

2006-07-12 Thread Jean T. Anderson
Ramin Moazeni wrote:
> Hi Bryan,
> 
> I am not sure this would be feasible given the amount of time I have
> to finish this project as well as my familiarity with DdlUtils code
> base. But if everybody agrees to it, I can start working on it.
> 

One thing to consider is DdlUtils is database agnostic. For example,
adding support for "create view" doesn't mean just adding it for Derby,
but also adding it for every database supported (see the list at
http://db.apache.org/ddlutils/database-support.html ).

You might consider posting to ddlutils-dev@db.apache.org to ask what
level of effort people think might be required to implement the missing
features.

 -jean


> Thanks
> Ramin
> 
> On 7/11/06, Bryan Pendleton <[EMAIL PROTECTED]> wrote:
> 
>> > The DdlUtils tool seems not be capable of migrating views, CHECK
>> > constraints,  and stored procedures. I would like to know what do you
>> > think if DdlUtils tool can be reused for migrating the tables and
>> > Indexes, and use the DatabaseMetadata for migrating views and stored
>> > procedures? .
>>
>> Perhaps another possibility would be for you to improve DdlUtils so
>> that it has these desirable features. The end result would be a better
>> DdlUtils *and* a MySQL-to-Derby migration tool.
>>
>> thanks,
>>
>> bryan
>>
>>



[jira] Reopened: (DERBY-630) create trigger fails with null pointer exception

2006-07-12 Thread Susan Cline (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-630?page=all ]
 
Susan Cline reopened DERBY-630:
---


DERBY-630 is listed as a duplicate of DERBY-85 and was marked fixed in release 
10.1.3, however, I believe it is not a duplicate, and although DERBY-85 is 
fixed in 10.1.3, DERBY-630 is not.

the test case for DERBY-85 consists of this:

creates a table in the non-default schema 
creates a trigger in the default schema on a table in the non-default schema.

the test case for DERBY-630 consists of this:

creates a table in the non-default schema 
creates a trigger in the non-default schema on a table in the non-default schema

Below is the output from the two test cases run against the 10.1.3 release,
plus an additional one that shows creating a table in the default schema, then 
a trigger in the default schema on the table in the default schema does succeed.


DERBY- 85:

ij> connect 'jdbc:derby:firstDB;create=true;user=someUser;password=somePwd'; 
ij> create table itko.t1 (i int); 
0 rows inserted/updated/deleted 
ij> create trigger trig1 after update on itko.t1 for each row mode db2sql select * from sys.systables; 
0 rows inserted/updated/deleted 

DERBY-630 (simplified test case):

ij> connect 'jdbc:derby:myDB;create=true;user=someUser;password=somePwd'; 
ij> create table itko.t1 (i int); 
0 rows inserted/updated/deleted 
ij> create trigger itko.trig1 after update on itko.t1 for each row mode db2sql select * from sys.systables; 
ERROR XJ001: Java exception: ': java.lang.NullPointerException'. 

Test case showing creating everything in the default schema is OK:

ij> connect 'jdbc:derby:newDB;create=true;user=someUser;password=somePwd'; 
ij> create table t1 (i int); 
0 rows inserted/updated/deleted 
ij> create trigger trig1 after update on t1 for each row mode db2sql select * from sys.systables; 
0 rows inserted/updated/deleted 

Sysinfo:

java org.apache.derby.tools.sysinfo
-- Java Information --
Java Version:1.5.0_06
Java Vendor: Sun Microsystems Inc.
Java home:   C:\JDK\jdk1.5.0_06\jre
Java classpath:  derby.jar;.;derbytools.jar;
OS name: Windows XP
OS architecture: x86
OS version:  5.1
Java user name:  slc
Java user home:  C:\Documents and Settings\Administrator
Java user dir:   C:\projects\gilles_tool\releases\Nov_refresh\installs\db-derby-
10.1.3.1-lib\lib
java.specification.name: Java Platform API Specification
java.specification.version: 1.5
- Derby Information 
JRE - JDBC: J2SE 5.0 - JDBC 3.0
[C:\projects\gilles_tool\releases\Nov_refresh\installs\db-derby-10.1.3.1-lib\lib
\derby.jar] 10.1.3.1 - (417277)
[C:\projects\gilles_tool\releases\Nov_refresh\installs\db-derby-10.1.3.1-lib\lib
\derbytools.jar] 10.1.3.1 - (417277)
--
- Locale Information -
--


> create trigger fails with null pointer exception
> 
>
>  Key: DERBY-630
>  URL: http://issues.apache.org/jira/browse/DERBY-630
>  Project: Derby
> Type: Bug

>   Components: SQL
> Versions: 10.1.1.0
>  Environment: windows 2000, sun jdk 1.5.0
> Reporter: mardacay

>
> When I create a brand new database and execute the following statements, all 
> in one transaction or each of them in their own transaction, then it fails at 
> trigger creation with a null pointer exception. If I exclude the schema names 
> from the statements, then it runs fine. (If S1 is omitted from every statement 
> then it runs fine.) Once the version without the schema names runs fine, I can 
> run the version that has schema names fine also. 
> create schema S1;
> create table
>   S1.PRODUCT(
> PRODUCT_ID VARCHAR(255) unique not null,
> VERSION BIGINT
>   );
>   
> create table
>   S1.CATEGORY(
> CAT_ID VARCHAR(255),
> NAME varchar(255) not null,
> VERSION BIGINT
>   );
> create table
>   S1.PROD_IN_CAT(
> CAT_ID VARCHAR(255) not null,
> PRODUCT_ID VARCHAR(255) not null,
> VERSION BIGINT
>   );
>   
> create trigger S1.product_v 
> after update of version on S1.product
> referencing new as n
> for each row
> mode db2sql
>   update S1.prod_in_cat set version = n.version where 
> S1.prod_in_cat.product_id=n.product_id;
> java.lang.NullPointerException
>   at 
> org.apache.derby.impl.sql.catalog.SYSSTATEMENTSRowFactory.makeSYSSTATEMENTSrow(Unknown
>  Source)
>   at 
> org.apache.derby.impl.sql.catalog.DataDictionaryImpl.addSPSDescriptor(Unknown 
> Source)
>   at 
> org.apache.derby.impl.sql.execute.CreateTriggerConstantAction.createSPS(Unknown
>  Source)
>   at 
> org.apache.derby.impl.sql.execute.CreateTriggerConstantAction.executeConstantAction(Unknown
>  Source)
> Stopping progress indicator for: Executing SQL
>   at org.apache.derby.impl.sql.execute.MiscResultSet.open(Unknown Source)
>   at org.apa

Re: Patch review and turnaround as 10.2 approaches

2006-07-12 Thread Daniel John Debrunner
Kathey Marsden wrote:

> I think
> a big part of the problem is that there is a myth that a committer with
> expertise in a certain area needs to be the *first* to look at a patch,
> but they only need to be the last. 

Part of that is a "myth" as well: the last person to look at a patch
needs to be a committer in order to commit it, with no requirement to be an
expert in that area. The committer just has to make a judgement as to whether
the patch should be committed; they can rely on other reviewers'
comments and/or their own thoughts to do that.

Dan.




Re: CacheManager create() semantics

2006-07-12 Thread Mike Matrigali


Gokul Soundararajan wrote:

Hi all,

I'm implementing a new replacement algorithm (clock-pro) for the CacheManager.
In this algorithm, there is a concept of a history item. Basically, after an
eviction, the algorithm remembers the page id for some time. I was wondering on
how this affects the semantics of create().

Just to refresh, in Clock, create() creates a new entry and adds to the cache.
But, it throws an exception if the cache already contains the item. In my
implementation, the new item may not exist, exist, or exist as a history item.
Should I throw an exception if I find it as a history item?

If I am interpreting it right, the "history" is an internal 
data structure of the cache implementation for help with replacement
decisions.  Given that, I would say that create() semantics should not
change, and that the internal implementation of clock-pro somehow
manages the "history" internally without exposing it to the cache
user.

At this point do you have a writeup of the replacement algorithm you
are implementing?


Thanks,

Gokul
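
A minimal sketch of the semantics discussed above, assuming a simplified cache API; the class and method names below are made up for illustration and are not Derby's actual CacheManager interface. create() keeps its existing contract of failing only when a live entry already exists, while the history of evicted keys stays an internal detail of the replacement policy:

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative cache skeleton, not Derby code.
class SimpleClockProCache<K, V> {
    private final Map<K, V> entries = new HashMap<K, V>();
    // Keys of recently evicted items, kept only to guide replacement decisions.
    private final Set<K> history = new HashSet<K>();

    /** Creates a new entry; throws only if a live entry already exists. */
    public synchronized void create(K key, V value) {
        if (entries.containsKey(key)) {
            throw new IllegalStateException("already in cache: " + key);
        }
        // A history item is not a live entry, so it does not cause a failure;
        // it is simply forgotten once the key is resident again.
        history.remove(key);
        entries.put(key, value);
    }

    /** Evicts an entry and remembers its key as history. */
    public synchronized V evict(K key) {
        V removed = entries.remove(key);
        if (removed != null) {
            history.add(key);
        }
        return removed;
    }
}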







Re: Language based matching

2006-07-12 Thread Satheesh Bandaram
Rick Hillegas wrote:

> At one point I was keen on re-enabling the national string types. Now
> I am leaning toward implementing the ANSI collation language. I think
> this is more powerful. In particular, it lets you support more than
> one language-sensitive ordering in the same database.

I also had interest in enabling national characters for 10.2, but after
dates for 10.2 became clearer, I decided it wouldn't be feasible to
research and implement this functionality in the remaining time. I will
add a comment to the JIRA entry stating this. I also don't have an itch to look
into this issue for 10.3, though I think this is very useful functionality.

Satheesh




[jira] Created: (DERBY-1505) Reference Manual - Limitations of triggered-SQL-statement do not match current behaviour

2006-07-12 Thread Deepa Remesh (JIRA)
Reference Manual - Limitations of triggered-SQL-statement do not match current 
behaviour


 Key: DERBY-1505
 URL: http://issues.apache.org/jira/browse/DERBY-1505
 Project: Derby
Type: Bug

  Components: Documentation  
Versions: 10.2.0.0
Reporter: Deepa Remesh
Priority: Minor


Reference manual (Section CREATE TRIGGER statement --> Triggered-SQL-statement 
) needs to be updated to match the current behaviour:

1) I think the following statements are not fully correct:
   # It must not create, alter, or drop the table upon which the trigger is 
defined.
   # It must not add an index to or remove an index from the table on which the 
trigger is defined.
   # It must not add a trigger to or drop a trigger from the table upon which 
the trigger is defined.

These actions are not allowed on any table, not just the "table upon which 
the trigger is defined". In general, the code does not allow execution of any 
DDL statement within a trigger. I think we can replace the above statements 
with something like "It cannot contain DDL statements."

2) The following statement is not valid:
"The triggered-SQL-statement can reference database objects other than the 
table upon which the trigger is declared."

The triggered-SQL-statement can reference the table upon which the trigger is 
declared. Hence this statement can be removed.

It would be good if someone familiar in this area can confirm the proposed 
changes are okay.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



CacheManager create() semantics

2006-07-12 Thread Gokul Soundararajan
Hi all,

I'm implementing a new replacement algorithm (clock-pro) for the CacheManager.
In this algorithm, there is a concept of a history item. Basically, after an
eviction, the algorithm remembers the page id for some time. I was wondering on
how this affects the semantics of create().

Just to refresh, in Clock, create() creates a new entry and adds to the cache.
But, it throws an exception if the cache already contains the item. In my
implementation, the new item may not exist, exist, or exist as a history item.
Should I throw an exception if I find it as a history item?

Thanks,

Gokul



prioritized 10.2 bug list

2006-07-12 Thread Rick Hillegas
Thanks to everyone for helping clean up JIRA and clarify the issues we 
want to address in 10.2. It would be great if we could march in priority 
order through the issues listed in the Open 10.2 Issues report at 
http://wiki.apache.org/db-derby/TenTwoRelease#head-7cf194b6c7305a0e83d0c9c422f0632215f6cb19.


Here's a quick summary of where we stand:

3 Blockers
4 Criticals
61 Majors

It would be great if we could get the two unclaimed Blockers assigned 
this week:


DERBY-1377 - Rototill copyrights.
DERBY-936 - Rototill release ids in our user guides.

Thanks,
-Rick


[jira] Updated: (DERBY-1377) Update copyright headers to comply with new ASF policy

2006-07-12 Thread Jean T. Anderson (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1377?page=all ]

Jean T. Anderson updated DERBY-1377:


Description: 
A new copyright header policy will take effect for distributions released 
starting on Sep 1, 2006. Committers will receive notification, but a heads up 
with details is in the legal-discuss thread starting with 
http://mail-archives.apache.org/mod_mbox/www-legal-discuss/200606.mbox/[EMAIL 
PROTECTED]
Date was 1-Aug-2006, is now 1-Sep-2006:
http://mail-archives.apache.org/mod_mbox/www-legal-discuss/200607.mbox/[EMAIL 
PROTECTED]

  was:
A new copyright header policy will take effect for distributions released 
starting on Aug 1, 2006. Committers will receive notification, but a heads up 
with details is in the legal-discuss thread starting with 
http://mail-archives.apache.org/mod_mbox/www-legal-discuss/200606.mbox/[EMAIL 
PROTECTED]



> Update copyright headers to comply with new ASF policy
> --
>
>  Key: DERBY-1377
>  URL: http://issues.apache.org/jira/browse/DERBY-1377
>  Project: Derby
> Type: Bug

>   Components: Documentation
> Versions: 10.2.0.0
> Reporter: Jean T. Anderson
> Priority: Blocker
>  Fix For: 10.2.0.0

>
> A new copyright header policy will take effect for distributions released 
> starting on Sep 1, 2006. Committers will receive notification, but a heads up 
> with details is in the legal-discuss thread starting with 
> http://mail-archives.apache.org/mod_mbox/www-legal-discuss/200606.mbox/[EMAIL 
> PROTECTED]
> Date was 1-Aug-2006, is now 1-Sep-2006:
> http://mail-archives.apache.org/mod_mbox/www-legal-discuss/200607.mbox/[EMAIL 
> PROTECTED]

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-1377) Update copyright headers to comply with new ASF policy

2006-07-12 Thread Jean T. Anderson (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1377?page=comments#action_12420707 ] 

Jean T. Anderson commented on DERBY-1377:
-

Minor note: due date moved from 1-August- 2006 to 1-September-2006; see
http://mail-archives.apache.org/mod_mbox/www-legal-discuss/200607.mbox/[EMAIL 
PROTECTED]


> Update copyright headers to comply with new ASF policy
> --
>
>  Key: DERBY-1377
>  URL: http://issues.apache.org/jira/browse/DERBY-1377
>  Project: Derby
> Type: Bug

>   Components: Documentation
> Versions: 10.2.0.0
> Reporter: Jean T. Anderson
> Priority: Blocker
>  Fix For: 10.2.0.0

>
> A new copyright header policy will take effect for distributions released 
> starting on Aug 1, 2006. Committers will receive notification, but a heads up 
> with details is in the legal-discuss thread starting with 
> http://mail-archives.apache.org/mod_mbox/www-legal-discuss/200606.mbox/[EMAIL 
> PROTECTED]

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Closed: (DERBY-758) CachedPage.readPage() can loop forever on insane builds if the read from the container keep failing with IO Exceptions.

2006-07-12 Thread Suresh Thalamati (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-758?page=all ]
 
Suresh Thalamati closed DERBY-758:
--


> CachedPage.readPage() can loop forever on insane builds if  the read from the 
> container keep failing with  IO Exceptions.
> -
>
>  Key: DERBY-758
>  URL: http://issues.apache.org/jira/browse/DERBY-758
>  Project: Derby
> Type: Bug

>   Components: Store
> Reporter: Suresh Thalamati
> Assignee: Mike Matrigali
>  Fix For: 10.2.0.0, 10.1.3.0

>
> org.apache.derby.impl.store.raw.data.CachedPage.readPage() loops forever if a 
> read from the container 
> keeps failing with an IOException.  On debug builds it marks the system as 
> corrupt, but in non-debug builds
> it just keeps retrying to read the page from the disk. 
> I think that is not good: if a disk fails for some reason when attempting to 
> read a page, Derby will just 
> hog the CPU and the user will not know why. 
>  

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Patch review and turnaround as 10.2 approaches

2006-07-12 Thread Daniel John Debrunner
Kathey Marsden wrote:

> I would like to propose that every active  developer  registered in Jira
>  make efforts to reduce the patch backlog by trying to do something to
> move along at least 2 patches outside of our  personal 10.2 line items 
> each week.See:
> http://wiki.apache.org/db-derby/PatchListMaintenance

+1 but extend it to anyone on this list, anyone can review a patch. We
have ~12 committers, ~40 registered Jira developers but ~200 people
subscribed to this list, that's a lot of eyes and could address a great
number of patches per week.

Remember also that if you have submitted a patch then get some good
karma by reviewing someone else's patch, eventually the karma will come
back to you as people review your patch. Good karma also has the
positive effect of making people pick your patches up sooner since you
demonstrate community spirit, and it will help in becoming a committer
and thus being able to submit your own patches.

And feel free to review a patch even if it has already been reviewed; the
more eyes the better.

Thanks to Kathey for setting up the PatchListMaintenance page.

Dan.






[jira] Closed: (DERBY-239) Need a online backup feature that does not block update operations when online backup is in progress.

2006-07-12 Thread Suresh Thalamati (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-239?page=all ]
 
Suresh Thalamati closed DERBY-239:
--


> Need a online backup feature  that does not block update operations   when 
> online backup is in progress.
> 
>
>  Key: DERBY-239
>  URL: http://issues.apache.org/jira/browse/DERBY-239
>  Project: Derby
> Type: New Feature

>   Components: Store
> Versions: 10.1.1.0
> Reporter: Suresh Thalamati
> Assignee: Suresh Thalamati
>  Fix For: 10.2.0.0
>  Attachments: obtest_customer.jar, onlinebackup.html, onlinebackup1.html, 
> onlinebackup2.html, onlinebackup_1.diff, onlinebackup_2.diff, 
> onlinebackup_3.diff, onlinebackup_4.diff, onlinebackup_5.diff, 
> onlinebackup_6.diff, onlinebackup_7.diff, onlinebackup_8.diff
>
> Currently Derby allows users to perform online backups using the 
> SYSCS_UTIL.SYSCS_BACKUP_DATABASE() procedure, but while the backup is in 
> progress, update operations are temporarily blocked; read operations can 
> still proceed.
> Blocking update operations can be a real issue, specifically in client/server 
> environments, because user requests will be blocked for a long time if a 
> backup is in progress on the server.
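
For context, a minimal sketch of how the existing backup procedure mentioned above is invoked from JDBC; the database name and backup directory are illustrative values only:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class BackupExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
        // While this call runs, update transactions are blocked (the behaviour
        // this issue removes); read operations can still proceed.
        CallableStatement cs = conn.prepareCall("CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE(?)");
        cs.setString(1, "/backups/myDB");  // illustrative backup location
        cs.execute();
        cs.close();
        conn.close();
    }
}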

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Closed: (DERBY-523) Non logged operation that starts before the log archive mode is enabled can not be recovered during rollforward recovery.

2006-07-12 Thread Suresh Thalamati (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-523?page=all ]
 
Suresh Thalamati closed DERBY-523:
--


> Non logged operation that starts before the log archive mode is enabled  can 
> not be recovered during rollforward recovery.
> --
>
>  Key: DERBY-523
>  URL: http://issues.apache.org/jira/browse/DERBY-523
>  Project: Derby
> Type: Bug

>   Components: Store
> Reporter: Suresh Thalamati
> Assignee: Suresh Thalamati
>  Fix For: 10.2.0.0

>
> Once the log archive mode that is required for roll-forward recovery is 
> enabled, all operations are logged, including operations that are not 
> normally logged, like create index. But I think Derby currently does not 
> correctly handle the case of non-logged operations that were started 
> before log archive mode is enabled. 
> This issue was discussed along with real-time online backup (DERBY-239) on 
> the list. The conclusion was to block the existing system procedure if 
> non-logged operations were in progress while enabling the log archive mode, 
> and to add new system procedures that will take extra parameters to decide 
> whether to block/fail. 

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Closed: (DERBY-437) SYSCS_UTIL.SYSCS_COMPRESS_TABLE does not work on tables that are created with delimited identifier names.

2006-07-12 Thread Suresh Thalamati (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-437?page=all ]
 
Suresh Thalamati closed DERBY-437:
--


> SYSCS_UTIL.SYSCS_COMPRESS_TABLE  does not work  on tables that are created 
> with delimited identifier names.
> ---
>
>  Key: DERBY-437
>  URL: http://issues.apache.org/jira/browse/DERBY-437
>  Project: Derby
> Type: Bug

>   Components: SQL
> Versions: 10.0.2.2
> Reporter: Suresh Thalamati
> Assignee: Suresh Thalamati
>  Fix For: 10.2.0.0, 10.1.3.0, 10.1.2.4
>  Attachments: derby437.diff
>
> The COMPRESS_TABLE procedure forms an SQL statement underneath, so if the user does 
> not pass quoted names, it does not work with delimited table/schema names.
> eg: create table "Order"(a int ) ;
> ij> call SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP' , 'Order'  ,1) ;
> ERROR 38000: The exception 'SQL Exception: Syntax error: Encountered "Order" 
> at
> line 1, column 17.' was thrown while evaluating an expression.
> ERROR 42X01: Syntax error: Encountered "Order" at line 1, column 17.
> With quoted names it works fine.
> ij> call SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP' , '"Order"'  ,1) ;
> 0 rows inserted/updated/deleted
> If it is expected that the user pass quoted names for 
> SYSCS_UTIL.SYSCS_COMPRESS_TABLE, then it is ok.
> But doc is not clear:
> COMPRESS_TABLE doc in the reference manual:
> TABLENAME
>An input argument of type VARCHAR(128) that specifies the table name
>of the table. The string must exactly match the case of the table
>name, and the argument of "Fred" will be passed to SQL as the
>delimited identifier 'Fred'. Passing a null will result in an error.
> So either doc has to be fixed or code needs to be fixed to handle  quoted 
> names for compress table.
> I think the code has to be fixed to be consistent with other system procedures, 
> i.e.
> If you  created a schema, table or column name as a non-delimited identifier, 
> you must pass the name in all upper case. If you created a schema, table or 
> column name as a delimited identifier, you must pass the name in the same 
> case as it was created.
> For example:
> create table "Order"(a int ) ;
> call SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP' , 'Order'  ,1) ;
> create table t1( a int ) 
> call SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP' , 'T1'  ,1) ;

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Closed: (DERBY-1045) forupdate.sql, holdCursorIJ.sql ..etc are failing when run with 10.1 client against trunk

2006-07-12 Thread Suresh Thalamati (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1045?page=all ]
 
Suresh Thalamati closed DERBY-1045:
---


> forupdate.sql, holdCursorIJ.sql ..etc are failing when run with 10.1 client 
> against trunk
> ---
>
>  Key: DERBY-1045
>  URL: http://issues.apache.org/jira/browse/DERBY-1045
>  Project: Derby
> Type: Bug

>   Components: Build tools, JDBC
> Versions: 10.2.0.0
>  Environment: Java Version:1.5.0_02
> Java Vendor: Sun Microsystems Inc.
> Java home:   d:\dev\src\jdk15\jre
> Java classpath:  
> d:/dev/src/classes/derby.jar;d:/dev/src/classes/derbytools.jar;d:/dev/src/classes/derbynet.jar;d:/dev/src/classes/v10_1_client/derbyclient.jar;d:/dev/src/classes/functionTests.jar;d:/dev/src/classes/v10_1_client/derbyTesting.jar;d:/dev/src/tools/java/jndi/fscontext.jar;d:/dev/src/tools/java/junit.jar;d:/dev/src/classes/derbyLocale_zh_TW.jar;d:/dev/src/classes/derbyLocale_zh_CN.jar;d:/dev/src/classes/derbyLocale_pt_BR.jar;d:/dev/src/classes/derbyLocale_ko_KR.jar;d:/dev/src/classes/derbyLocale_ja_JP.jar;d:/dev/src/classes/derbyLocale_it.jar;d:/dev/src/classes/derbyLocale_fr.jar;d:/dev/src/classes/derbyLocale_es.jar;d:/dev/src/classes/derbyLocale_de_DE.jar;
> OS name: Windows 2000
> OS architecture: x86
> OS version:  5.0
> Java user dir:   
> D:\dev\src\JarResults.2006-02-22\jdk15_derbynetclientmats_client_v101
> java.specification.name: Java Platform API Specification
> java.specification.version: 1.5
> - Derby Information 
> JRE - JDBC: J2SE 5.0 - JDBC 3.0
> [D:\dev\src\classes\derby.jar] 10.2.0.0 alpha - (380027)
> [D:\dev\src\classes\derbytools.jar] 10.2.0.0 alpha - (380027)
> [D:\dev\src\classes\derbynet.jar] 10.2.0.0 alpha - (380027)
> [D:\dev\src\classes\v10_1_client\derbyclient.jar] 10.1.2.3 - (379660)
> Reporter: Suresh Thalamati
> Assignee: Andrew McIntyre
>  Fix For: 10.2.0.0
>  Attachments: derbynetclientmats_report.txt
>
> derbynetclientmats/derbynetmats/derbynetmats.fail:lang/forupdate.sql
> derbynetclientmats/derbynetmats/derbynetmats.fail:lang/holdCursorIJ.sql
> 
> derbynetclientmats/derbynetmats/derbynetmats.fail:store/holdCursorJDBC30.sql
> derbynetclientmats/derbynetmats/derbynetmats.fail:jdbcapi/LOBTest.java
> 
> derbynetclientmats/derbynetmats/derbynetmats.fail:jdbcapi/blobclob4BLOB.java
> 
> derbynetclientmats/derbynetmats/derbynetmats.fail:jdbcapi/parameterMapping.java
> 
> derbynetclientmats/derbynetmats/derbynetmats.fail:jdbcapi/setTransactionIsolation.java
> 
> derbynetclientmats/derbynetmats/derbynetmats.fail:jdbcapi/metadataJdbc20.java
> 
> derbynetclientmats/derbynetmats/derbynetmats.fail:jdbcapi/connectionJdbc20.java
> Sample diff :
> *** Start: forupdate jdk1.5.0_02 DerbyNetClient derbynetmats:derbynetmats 
> 2006-02-23 07:30:19 ***
> 23 del
> < ERROR 42X01: Syntax error: Encountered "" at line 1, column 23.
> 23a23
> > ERROR 42X01: Syntax error: Encountered "" at line 3, column 23.
> 59 del
> < ERROR (no SQLState): Invalid cursor name "C1" in the Update/Delete 
> statement.
> 59a59
> > ERROR 42X30: Cursor 'SQL_CURLH000C1' not found. Verify that autocommit is 
> > OFF.
> 132 del
> < ERROR (no SQLState): Invalid cursor name "C4" in the Update/Delete 
> statement.
> 132a132
> > ERROR 42X30: Cursor 'SQL_CURLH000C1' not found. Verify that autocommit is 
> > OFF.
> 135 del
> < ERROR (no SQLState): Invalid cursor name "C4" in the Update/Delete 
> statement.
> 135a135
> > ERROR 42X30: Cursor 'SQL_CURLH000C1' not found. Verify that autocommit is 
> > OFF.
> 180 del
> < ERROR 42X01: Syntax error: Encountered "." at line 1, column 34.
> 180a180
> > ERROR 42X01: Syntax error: Encountered "." at line 3, column 34.
> Test Failed.
> *** End:   forupdate jdk1.5.0_02 DerbyNetClient derbynetmats:derbynetmats 2006

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-1015) Define interface between network server and engine through Java interfaces.

2006-07-12 Thread Daniel John Debrunner (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1015?page=comments#action_12420702 ] 

Daniel John Debrunner commented on DERBY-1015:
--

Since David is further ahead than me here, I will defer to him for the commit.

> Define interface between network server and engine through Java interfaces.
> ---
>
>  Key: DERBY-1015
>  URL: http://issues.apache.org/jira/browse/DERBY-1015
>  Project: Derby
> Type: Improvement

>   Components: JDBC
> Reporter: Daniel John Debrunner
> Assignee: Daniel John Debrunner
>  Fix For: 10.2.0.0
>  Attachments: Derby1015.p2.diff.txt, derby1015.diff.txt, 
> derby1015.p2.stat.txt, derby1015.stat.txt
>
> The API between the network server and engine is not well defined, leading to 
> inconsistent & multiple ways of handling the different objects returned, such 
> as reflection, explicit casting, etc. This in turn has led to bugs such as 
> DERBY-966, DERBY-1005, and DERBY-1006, and access to underlying objects by 
> the application that should be hidden.
> Define interfaces, such as EngineConnection, that both EmbedConnection and 
> BrokeredConnection implement. Thus the network server can rely on the fact 
> that any connection it obtains will implement EngineConnection, and call the 
> required methods through that interface.
> Most likely we will need EngineConnection, EnginePreparedStatement and 
> EngineResultSet. These interfaces would be internal to Derby and not exposed 
> to applications.
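
For illustration only, a minimal sketch of the kind of internal interface the description proposes; the interface name comes from the issue text, but the method shown and its signature are assumptions, not Derby's actual code:

import java.sql.Connection;
import java.sql.SQLException;

// Hypothetical sketch of an engine-internal interface; EmbedConnection and
// BrokeredConnection would both implement it, so the network server can call
// engine-specific behaviour without reflection or casts to concrete classes.
interface EngineConnection extends Connection {
    // Illustrative method only (an assumption, not the real API): something the
    // network server needs beyond plain java.sql.Connection.
    String getCurrentSchemaName() throws SQLException;
}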

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-1500) PreparedStatement#setObject(int parameterIndex, Object x) throws SQL Exception when binding Short value in embedded mode

2006-07-12 Thread Andrew McIntyre (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1500?page=all ]

Andrew McIntyre updated DERBY-1500:
---

Component: JDBC

> PreparedStatement#setObject(int parameterIndex, Object x) throws SQL 
> Exception when binding Short value in embedded mode
> 
>
>  Key: DERBY-1500
>  URL: http://issues.apache.org/jira/browse/DERBY-1500
>  Project: Derby
> Type: Bug

>   Components: JDBC
> Versions: 10.1.1.0, 10.1.3.1
>  Environment: WindowsXP
> Reporter: Markus Fuchs
>  Attachments: ShortTest.java
>
> When trying to insert a row into the table 
> SHORT_TEST( ID int, SHORT_VAL smallint)
> an exception is thrown, if the object value given to 
> PreparedStatement#setObject(int parameterIndex, Object x) is of type Short. 
> The exception thrown is:
> --- SQLException ---
> SQLState:  22005
> Message:  An attempt was made to get a data value of type 'SMALLINT' from a 
> data value of type 'java.lang.Short'.
> ErrorCode:  2
> SQL Exception: An attempt was made to get a data value of type 'SMALLINT' 
> from a data value of type 'java.lang.Short'.
>   at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
>   at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
>   at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown 
> Source)
>   at org.apache.derby.impl.jdbc.EmbedConnection.newSQLException(Unknown 
> Source)
>   at org.apache.derby.impl.jdbc.ConnectionChild.newSQLException(Unknown 
> Source)
>   at 
> org.apache.derby.impl.jdbc.EmbedPreparedStatement.dataTypeConversion(Unknown 
> Source)
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement.setObject(Unknown 
> Source)
> Tested on Derby 10.1.1.0 and 10.1.3.1. The same test runs fine in network 
> mode.
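
A minimal standalone sketch of the failing call described above; the attached ShortTest.java is the authoritative reproduction, and the database name used here is made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ShortBindExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection("jdbc:derby:shortDemo;create=true");
        conn.createStatement().executeUpdate(
                "CREATE TABLE SHORT_TEST (ID INT, SHORT_VAL SMALLINT)");

        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO SHORT_TEST (ID, SHORT_VAL) VALUES (?, ?)");
        ps.setInt(1, 1);
        // Per the report, binding a java.lang.Short via setObject raises SQLState 22005
        // in embedded mode, while the same call succeeds over the network client.
        ps.setObject(2, new Short((short) 42));
        ps.executeUpdate();
        ps.close();
        conn.close();
    }
}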

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Patch review and turnaround as 10.2 approaches

2006-07-12 Thread Kathey Marsden

David Van Couvering wrote:

+1.  If we aren't responsive to new contributor patches, then we won't 
have a lot of new contributors hanging around :).  I for one will 
*try* to work on 2 patches a week -- it depends on whether the patches 
submitted fall within my range of expertise.


That should not be a constraint in helping patches along.
Right now someone who knew nothing about the code could review the patch 
list, try out patches and see if they are current.  That would reduce 
the list a lot I think!  See 
http://wiki.apache.org/db-derby/PatchListMaintenance


Beyond that, just trying patches to see if they work, reviewing the code 
for just general good java practice, comments etc is valuable.  I think 
a big part of the problem is that there is a myth that a committer with 
expertise in a certain area needs to be the *first* to look at a patch, 
but they only need to be the last.  The work can be distributed across 
the community.   As a committer I am sure you have run derbyall enough 
times and seen it fail to know just running tests on patches is 
extremely valuable.


Here are a couple examples of efforts I made recently to  try to address 
patches from independent contributors even though I am no expert:


DERBY-578 (patch posted April 24)
- Pinged Rick and begged him to review which he did.
   - Synched up Manish's patch, ran tests, added comments and submitted 
a new patch. 
   - Committed and unchecked patch available.


DERBY-1208 (posted April 15)
   - Tried out the patch and threatened to commit it so someone 
qualified would feel compelled to jump in before I did it.

   - Unchecked  patch available after Dan's review.

DERBY-974 (Fix suggestion posted Feb 13. No formal patch)
   - Looked at the proposed change and suggested  we get a Derby 
reproducible case and invited Michael Hackett to become active in the 
community again and promised to not make him wait months before 
responding if he did.


Plus hopefully provided tools to move things along by adding content to
http://wiki.apache.org/db-derby/PatchListMaintenance
http://wiki.apache.org/db-derby/PatchAdvice

We all can contribute no matter our skill level in a particular area.

Kathey




[jira] Updated: (DERBY-1156) allow the encrypting of an existing unencrypted db and allow the re-encrypting of an existing encrypted db

2006-07-12 Thread Suresh Thalamati (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1156?page=all ]

Suresh Thalamati updated DERBY-1156:


Attachment: reencrypt_4.diff

DERBY-1156 (partial)
This patch adds some code required to support reconfiguration (re-encryption) of
an already existing encrypted database with a new password (secret key)
or an external user-specified encryption key.

-- disables encryption/re-encryption of an existing database if there 
   are any global transactions in the prepared state after recovery. 

-- disables encryption/re-encryption of an existing database if the database 
   is soft-upgraded to 10.2. 

-- Added a test that tests re-encryption of an encrypted database
   when global transactions are in the prepared state after recovery. 

Tested the upgrade manually, will add the test case later. 

TESTS: the derbyall test suite passed on Windows XP/JDK 1.4.2

It would be great if someone can review this patch. 

svn stat:
M  java\engine\org\apache\derby\impl\store\raw\xact\XactFactory.java
M  java\engine\org\apache\derby\impl\store\raw\xact\TransactionTable.java
M  java\engine\org\apache\derby\impl\store\raw\log\ReadOnly.java
M  java\engine\org\apache\derby\impl\store\raw\log\LogToFile.java
M  java\engine\org\apache\derby\impl\store\raw\RawStore.java
M  java\engine\org\apache\derby\iapi\store\raw\log\LogFactory.java
M  java\engine\org\apache\derby\iapi\store\raw\xact\TransactionFactory.java
M  java\engine\org\apache\derby\iapi\store\raw\RawStoreFactory.java
M  java\engine\org\apache\derby\loc\messages_en.properties
M  java\shared\org\apache\derby\shared\common\reference\SQLState.java
M  java\testing\org\apache\derbyTesting\functionTests\tests\store\copyfiles.ant
A  java\testing\org\apache\derbyTesting\functionTests\tests\store\encryptDatabaseTest2_app.properties
A  java\testing\org\apache\derbyTesting\functionTests\tests\store\encryptDatabaseTest2.sql
A  java\testing\org\apache\derbyTesting\functionTests\master\encryptDatabaseTest2.out
M  java\testing\org\apache\derbyTesting\functionTests\suites\encryptionAll.runall

> allow the encrypting of an existing unencrypted db and allow the 
> re-encrypting of an existing encrypted db
> --
>
>  Key: DERBY-1156
>  URL: http://issues.apache.org/jira/browse/DERBY-1156
>  Project: Derby
> Type: Improvement

>   Components: Store
> Versions: 10.1.2.3
> Reporter: Mike Matrigali
> Assignee: Suresh Thalamati
> Priority: Minor
>  Fix For: 10.2.0.0
>  Attachments: encryptspec.html, reencrypt_1.diff, reencrypt_2.diff, 
> reencrypt_3.diff, reencrypt_4.diff
>
> encrypted database to be re-encrypted with a new password.
> Here are some ideas for an initial implementation.
> The easiest way to do this is to make sure we have exclusive access to the
> data and that no log is required in the new copy of the db.  I want to avoid
> the log as it also is encrypted.  Here is my VERY high level plan:
> 1) Force exclusive access by putting all the work in the low level store,
>offline boot method.  We will do redo recovery as usual, but at the end
>there will be an entry point to do the copy/encrypt operation.
> copy/encrypt process:
> 0) The request to encrypt/re-encrypt the db will be handled with a new set
>of url flags passed into store at boot time.  The new flags will provide
>the same inputs as the current encrypt flags.  So at high level the
>request will be "connect db old_encrypt_url_flags; new_encrypt_url_flags".
>TODO - provide exact new flag syntax.
> 1) Open a transaction do all logged work to do the encryption.  All logging
>will be done with existing encryption.
> 2) Copy and encrypt every db file in the database.  The target files will
>be in the data directory.  There will be a new suffix to track the new
>files, similar to the current process used for handling drop table in
>a transaction consistent manner without logging the entire table to the 
> log.
>Entire encrypted destination file is guaranteed synced to disk before
>transaction commits.  I don't think this part needs to be logged.
>Files will be read from the cache using existing mechanism and written
>directly into new encrypted files (new encrypted data does not end up in
>the cache).
> 3) Switch encrypted files for old files.  Do this under a new log operation
>so the process can be correctly rolled back if the encrypt db operation
>transaction fails.  Rollback will do file at a time switches, no reading
>of encrypted data is necessary.
> 4) log a "change encryption of db" log record, but do not update
>system.properties with the change.
> 5) commit transaction.
> 6) update system.properties and sync changes.
> 7) TODO - need some way to handle crash between steps 5 and 6.
> 6) checkpoint all data, at this point guaranteed that there is no outstanding
>transaction, so after checkpoint is done there is no need for the log.

[jira] Updated: (DERBY-1501) PreparedStatement#setNull(int parameterIndex, int sqlType) throws SQL Exception if given sqlType is LONGVARBINARY in embedded mode

2006-07-12 Thread Andrew McIntyre (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1501?page=all ]

Andrew McIntyre updated DERBY-1501:
---

Component: JDBC

> PreparedStatement#setNull(int parameterIndex, int sqlType) throws SQL 
> Exception if given sqlType is LONGVARBINARY in embedded mode
> --
>
>  Key: DERBY-1501
>  URL: http://issues.apache.org/jira/browse/DERBY-1501
>  Project: Derby
> Type: Bug

>   Components: JDBC
> Versions: 10.1.1.0
>  Environment: WindowsXP
> Reporter: Markus Fuchs
>  Attachments: ByteArrayTest.java
>
> When inserting a row into following table
> BYTEARRAY_TEST( ID int, BYTEARRAY_VAL blob)
> PreparedStatement#setNull(int parameterIndex, int sqlType) throws SQL 
> Exception if given sqlType is LONGVARBINARY. You must give sqlType BLOB to 
> make the insert work. The same test works using sqlType LONGVARBINARY in 
> network mode. The following combinations don't work:
> Column type   sqlType not working   mandatory sqlType
> BLOB          LONGVARBINARY         BLOB
> CLOB          LONGVARCHAR           CLOB
> The issue here is, first, that Derby behaves differently in network and 
> embedded mode, and secondly, that it should accept LONGVARBINARY/LONGVARCHAR for 
> BLOB/CLOB columns.
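
A minimal standalone sketch of the call that triggers the reported failure in embedded mode; the attached ByteArrayTest.java is the authoritative reproduction, and the database name used here is made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Types;

public class SetNullExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection("jdbc:derby:nullDemo;create=true");
        conn.createStatement().executeUpdate(
                "CREATE TABLE BYTEARRAY_TEST (ID INT, BYTEARRAY_VAL BLOB)");

        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO BYTEARRAY_TEST (ID, BYTEARRAY_VAL) VALUES (?, ?)");
        ps.setInt(1, 1);
        // Per the report, this throws in embedded mode but works over the network
        // client; using Types.BLOB instead is the workaround described above.
        ps.setNull(2, Types.LONGVARBINARY);
        ps.executeUpdate();
        ps.close();
        conn.close();
    }
}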

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-1156) allow the encrypting of an existing unencrypted db and allow the re-encrypting of an existing encrypted db

2006-07-12 Thread Suresh Thalamati (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1156?page=comments#action_12420699 ] 

Suresh Thalamati commented on DERBY-1156:
-

Committed reencrypt_3.diff patch  to trunk with revision 416536

> allow the encrypting of an existing unencrypted db and allow the 
> re-encrypting of an existing encrypted db
> --
>
>  Key: DERBY-1156
>  URL: http://issues.apache.org/jira/browse/DERBY-1156
>  Project: Derby
> Type: Improvement

>   Components: Store
> Versions: 10.1.2.3
> Reporter: Mike Matrigali
> Assignee: Suresh Thalamati
> Priority: Minor
>  Fix For: 10.2.0.0
>  Attachments: encryptspec.html, reencrypt_1.diff, reencrypt_2.diff, 
> reencrypt_3.diff
>
> encrypted database to be re-encrypted with a new password.
> Here are some ideas for an initial implementation.
> The easiest way to do this is to make sure we have exclusive access to the
> data and that no log is required in the new copy of the db.  I want to avoid
> the log as it also is encrypted.  Here is my VERY high level plan:
> 1) Force exclusive access by putting all the work in the low level store,
>offline boot method.  We will do redo recovery as usual, but at the end
>there will be an entry point to do the copy/encrypt operation.
> copy/encrypt process:
> 0) The request to encrypt/re-encrypt the db will be handled with a new set
>of url flags passed into store at boot time.  The new flags will provide
>the same inputs as the current encrypt flags.  So at high level the
>request will be "connect db old_encrypt_url_flags; new_encrypt_url_flags".
>TODO - provide exact new flag syntax.
> 1) Open a transaction do all logged work to do the encryption.  All logging
>will be done with existing encryption.
> 2) Copy and encrypt every db file in the database.  The target files will
>be in the data directory.  There will be a new suffix to track the new
>files, similar to the current process used for handling drop table in
>a transaction consistent manner without logging the entire table to the 
> log.
>Entire encrypted destination file is guaranteed synced to disk before
>transaction commits.  I don't think this part needs to be logged.
>Files will be read from the cache using existing mechanism and written
>directly into new encrypted files (new encrypted data does not end up in
>the cache).
> 3) Switch encrypted files for old files.  Do this under a new log operation
>so the process can be correctly rolled back if the encrypt db operation
>transaction fails.  Rollback will do file at a time switches, no reading
>of encrypted data is necessary.
> 4) log a "change encryption of db" log record, but do not update
>system.properties with the change.
> 5) commit transaction.
> 6) update system.properties and sync changes.
> 7) TODO - need some way to handle crash between steps 5 and 6.
> 6) checkpoint all data, at this point guaranteed that there is no outstanding
>transaction, so after checkpoint is done there is no need for the log.
> ISSUES:
> o there probably should be something that catches a request to encrypt to
>   whatever db was already encrypted with.
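
Purely as an illustration of what the boot-time request might look like from JDBC; the plan above explicitly leaves the exact flag syntax as a TODO, so the attribute names used below (newBootPassword in particular) are hypothetical, not a committed interface:

import java.sql.Connection;
import java.sql.DriverManager;

public class ReencryptExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        // Hypothetical attributes: the old boot password plus a new one are passed
        // at boot time, and the store would then run the copy/encrypt steps above.
        Connection conn = DriverManager.getConnection(
                "jdbc:derby:encryptedDB;bootPassword=oldSecret123;newBootPassword=newSecret456");
        conn.close();
    }
}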

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Resolved: (DERBY-1480) Crash with JVM 1.4.2_08-b03

2006-07-12 Thread Andrew McIntyre (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1480?page=all ]
 
Andrew McIntyre resolved DERBY-1480:


Resolution: Invalid

> I will post a bugreport at sun.

Great, thanks! 

Closing this bug as Invalid as I believe it to be a JVM issue.

> Crash with JVM 1.4.2_08-b03
> ---
>
>  Key: DERBY-1480
>  URL: http://issues.apache.org/jira/browse/DERBY-1480
>  Project: Derby
> Type: Bug

> Versions: 10.0.2.1
>  Environment: solaris 10
> Reporter: Holger Tewis

>
> Derby crashes.
> derby.logs says:
> 2006-07-05 20:24:16.522 GMT:
>  Booting Derby version The Apache Software Foundation - Apache Derby - 
> 10.0.2.1 - (106978): instance c013800d-010c-405c-c29d-cbf29dbb
> on database directory /derby
> Database Class Loader started - derby.database.classpath=''
> hs_err_pid19973.log
> says:
> Unexpected Signal : 11 occurred at PC=0xF9DAE628
> Function=org.apache.derby.impl.store.raw.data.BasePage.shiftUp(I)Lorg/apache/derby/impl/store/raw/data/StoredRecordHeader;
>  (compiled Java code)
> Library=(N/A)
> Current Java thread:
> Dynamic libraries:
> 0x1 /opt/java/j2sdk1.4.2_08/jre/bin/java
> 0xff3f8000  /lib/libthread.so.1
> 0xff3a  /lib/libdl.so.1
> 0xff28  /lib/libc.so.1
> 0xff27  /platform/SUNW,Sun-Fire-V440/lib/libc_psr.so.1
> 0xfec0  /opt/java/j2sdk1.4.2_08/jre/lib/sparc/client/libjvm.so
> 0xff23  /usr/lib/libCrun.so.1
> 0xff20  /lib/libsocket.so.1
> 0xff10  /lib/libnsl.so.1
> 0xff1e  /lib/libm.so.1
> 0xff26  /usr/lib/libsched.so.1
> 0xfeb0  /lib/libm.so.2
> 0xff0d  /lib/libscf.so.1
> 0xff0b  /lib/libdoor.so.1
> 0xff09  /lib/libuutil.so.1
> 0xff07  /lib/libmd5.so.1
> 0xff05  /platform/SUNW,Sun-Fire-V440/lib/libmd5_psr.so.1
> 0xff03  /lib/libmp.so.2
> 0xfead  /opt/java/j2sdk1.4.2_08/jre/lib/sparc/native_threads/libhpi.so
> 0xfeaa  /opt/java/j2sdk1.4.2_08/jre/lib/sparc/libverify.so
> 0xfea6  /opt/java/j2sdk1.4.2_08/jre/lib/sparc/libjava.so
> 0xfea4  /opt/java/j2sdk1.4.2_08/jre/lib/sparc/libzip.so
> 0xf051  /opt/java/j2sdk1.4.2_08/jre/lib/sparc/libnet.so
> 0xefc6  /opt/java/j2sdk1.4.2_08/jre/lib/sparc/libioser12.so
> 0xefc4  /opt/java/j2sdk1.4.2_08/jre/lib/sparc/libnio.so
> 0xefa6  /lib/librt.so.1
> 0xef3d  /lib/libaio.so.1
> 0xef8e  /usr/lib/libsendfile.so.1
> Heap at VM Abort:
> Heap
>  def new generation   total 4032K, used 2264K [0xf180, 0xf1c0, 
> 0xf280)
>   eden space 3968K,  55% used [0xf180, 0xf1a26360, 0xf1be)
>   from space 64K, 100% used [0xf1bf, 0xf1c0, 0xf1c0)
>   to   space 64K,   0% used [0xf1be, 0xf1be, 0xf1bf)
>  tenured generation   total 6824K, used 5573K [0xf280, 0xf2eaa000, 
> 0xf580)
>the space 6824K,  81% used [0xf280, 0xf2d71750, 0xf2d71800, 0xf2eaa000)
>  compacting perm gen  total 10752K, used 10644K [0xf580, 0xf628, 
> 0xf980)
>the space 10752K,  99% used [0xf580, 0xf62652a8, 0xf6265400, 
> 0xf628)
> Local Time = Wed Jul  5 22:24:25 2006
> Elapsed Time = 13
> #
> # HotSpot Virtual Machine Error : 11
> # Error ID : 4F530E43505002EF 01
> # Please report this error at
> # http://java.sun.com/cgi-bin/bugreport.cgi
> #
> # Java VM: Java HotSpot(TM) Client VM (1.4.2_08-b03 mixed mode)
> #

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-1005) getHoldability does not return CLOSE_CURSORS_AT_COMMIT in a global transaction

2006-07-12 Thread Deepa Remesh (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1005?page=comments#action_12420697 ] 

Deepa Remesh commented on DERBY-1005:
-

This issue was not fixed in 10.1.2.3. There was one more patch for this issue 
which went in later (svn revision# 412258). Currently there is no way to edit 
the issue and remove 10.1.2.3 from fix version as it is an archived fix 
version. Hence adding this comment. 

The fix is available in 10.1.3.


> getHoldability does not return CLOSE_CURSORS_AT_COMMIT in a global transaction
> --
>
>  Key: DERBY-1005
>  URL: http://issues.apache.org/jira/browse/DERBY-1005
>  Project: Derby
> Type: Bug

>   Components: Network Client
> Versions: 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.1.2.3
> Reporter: Kathey Marsden
> Assignee: Daniel John Debrunner
>  Fix For: 10.2.0.0, 10.1.3.0, 10.1.2.3

>
> Holdability for a connection should automatically become 
> CLOSE_CURSORS_AT_COMMIT for a global transaction.
> For client xa Connection.getHoldability returns HOLD_CURSORS_OVER_COMMIT 
> within a global transaction.
> This issue was discovered when converting checkDataSource30.java to run with 
> client and related code was disabled for client testing.
> To reproduce,  take out if (TestUtil.isEmbeddedFramework())   for this code 
> in jdbcapi/checkDataSource30.java
> if (TestUtil.isEmbeddedFramework())
>   {
>   // run only for embedded
>   // Network XA BUG: getHoldability does not 
> return CLOSE_CURSORS_AT_COMMIT for global transaction
>   System.out.println("Notice that connection's 
> holdability at this point is CLOSE_CURSORS_AT_COMMIT because it is part of 
> the global transaction");
>   System.out.println("CONNECTION(in xa 
> transaction) HOLDABILITY " + (conn1.getHoldability() == 
> ResultSet.HOLD_CURSORS_OVER_COMMIT));
>   }

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Google SOC:MySQL to Derby Migration tool design question

2006-07-12 Thread Ramin Moazeni

Hi David

I do not believe the split will be exposed to the normal user.
However, in case of an error, the user might have to follow two
different paths to fix the problem. For example, modifying xml files
for ddlutils component and modifying sql statements
for the DatabaseMetadata approach.

I hope I answered your question

Thanks,
Ramin

On 7/11/06, David Van Couvering <[EMAIL PROTECTED]> wrote:

If this is the simplest approach in terms of design and code re-use it
sounds good to me, *as long as* the user doesn't have this split exposed
to him/her, and instead sees one seamless tool and user experience.

In particular, if Things Go Wrong, what is the user flow/experience --
how does the user back things out, figure out what went wrong, etc.
Does this approach complicate error determination and recovery?

Thanks,

David

Ramin Moazeni wrote:
> Hello
>
> As per my earlier post regarding the design document for MySQL to
> Derby Migration tool located at
> http://wiki.apache.org/db-derby/MysqlDerbyMigration/DesignDocument, I
> proposed two approches: 1) based on the use of DatabaseMetaData and 2)
> using DdlUtils tool.
>
> The DdlUtils tool seems not be capable of migrating views, CHECK
> constraints,  and stored procedures. I would like to know what do you
> think if DdlUtils tool can be reused for migrating the tables and
> Indexes, and use the DatabaseMetadata for migrating views and stored
> procedures? .
>
> Your comments are appreciated.
>
> Thanks
> Ramin Moazeni



Re: Patch review and turnaround as 10.2 approaches

2006-07-12 Thread David Van Couvering
+1.  If we aren't responsive to new contributor patches, then we won't 
have a lot of new contributors hanging around :).  I for one will *try* 
to work on 2 patches a week -- it depends on whether the patches 
submitted fall within my range of expertise.


David

Kathey Marsden wrote:
In almost every feature release I have been involved in,   folks start 
having trouble getting reviews in the final push as everyone scrambles 
to get their own work done.   Add to that the fact that we do a 
generally horrible job in keeping up with patches especially for 
independent contributors who are expected to wait  months for review; we 
are headed for trouble.  I think we will see a situation where  actual  
completed fixes will be left out of the release because they are 
ignored.  And that is bad for Derby.


I would like to propose that every active  developer  registered in Jira 
 make efforts to reduce the patch backlog by trying to do something to 
move along at least 2 patches outside of our  personal 10.2 line items  
each week.See:

http://wiki.apache.org/db-derby/PatchListMaintenance

and in general work to improve patch throughput by  heeding the advice at:
http://wiki.apache.org/db-derby/PatchAdvice

Critical to avoiding  fixes being left behind will be getting reviews 
from outside the committer base and  even just folks trying out patches 
for fix verification if they don't know the code.If a change has  
been tried out  and reviewed before a committer looks at it,  it can get 
in much, much, faster.


Thanks

Kathey



Re: Removing an archived fix version from the fix version list of an issue

2006-07-12 Thread Kathey Marsden

Andrew McIntyre wrote:



So, I think tracking snapshots while development is occurring, but
merging them to the release version for searching purposes to keep
things tidy is the way to go. Once there's an official release, we're
not likely to see any bug reports against snapshots so I'm not sure
there's any need to track them past an official release.

For Fix version I think this is very clear.  Developers always mark the 
lowest fix version as sysinfo reports it, release notes for releases 
are correct, and users looking for fixes know exactly what version they 
can pick up to get the fix. 

For Affects Version, it might be a little confusing for  folks reporting 
bugs because the version showed by sysinfo might be  gone,  but it is 
good practice to verify against the latest release on the branch before 
filing a bug anyway, so I think its ok.


For historical tracking past the release, really only the svn # is all 
that is interesting, but it certainly would be helpful if the svn 
revision of the change was more integrated with Jira and you could 
query Jira for things like what fixes went in in a certain svn range, etc.


All that said,  I think it would be good to make sure we do the right 
thing here and make sure we are not blowing away versions that might be 
needed.
I can't think of  why they would be needed, but I have not done a 
thorough analysis.  Before you punch in merge it might be good to  send 
out a summary of the new process and a final "going, going..."  message 
to anyone who might see value in keeping this history.


BTW, Deepa, I would say just put a comment in DERBY-1005 for now about 
the fix  version being wrong.


Kathey



Re: UnsupportedClassVersionError running xa suite under jdk1.3

2006-07-12 Thread Rick Hillegas

Thanks, Myrna, this does the trick!

Cheers,
-Rick

Myrna van Lunteren wrote:


On 7/12/06, Rick Hillegas <[EMAIL PROTECTED]> wrote:


I'm seeing the following error when I run the xa suite under jdk1.3.
This looks like an environmental problem to me. Would appreciate advice
on the accepted way to fix my environment for running the tests under
jdk1.3.

The suite dies trying to load javax.transaction.xa.Xid. This is a class
which is not present in the 1.3 jdk but which appears in 1.4 and later.
It appears to me that since the loader can't find this class in the jdk,
it grabs it from geronimo-spec-jta-1.0.1B-rc4.jar, which I'm suspecting
causes the class version error.

My classpath includes the following jars from trunk/tools/java

db2jcc.jargeronimo-spec-jta-1.0.1B-rc4.jar
javacc.jar  junit.jar   xml-apis.jar
db2jcc_license_c.jar  geronimo-spec-servlet-2.4-rc4.jar
jce1_2_2.jarservlet.jar
empty.jar jakarta-oro-2.0.8.jar
jdbc2_0-stdext.jar  xercesImpl.jar

Here's the error:

Exception in thread "main" java.lang.UnsupportedClassVersionError:
javax/transaction/xa/Xid (Unsupported major.minor ver
sion 48.0)
   at java.lang.ClassLoader.defineClass0(Native Method)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:488)
   at
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:106)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:243)
   at java.net.URLClassLoader.access$100(URLClassLoader.java:51)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:190)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:183)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:294)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:288)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:250)
   at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:310)
   at java.lang.Class.forName0(Native Method)
   at java.lang.Class.forName(Class.java:115)
   at
org.apache.derbyTesting.functionTests.harness.RunList.shouldSkipTest(Unknown 


Source)
   at
org.apache.derbyTesting.functionTests.harness.RunList.setSuiteProperties(Unknown 


Source)
   at
org.apache.derbyTesting.functionTests.harness.RunList.runSuites(Unknown
Source)
   at
org.apache.derbyTesting.functionTests.harness.RunList.(Unknown 
Source)

   at
org.apache.derbyTesting.functionTests.harness.RunSuite.getSuitesList(Unknown 


Source)
   at
org.apache.derbyTesting.functionTests.harness.RunSuite.main(Unknown 
Source)


Thanks,
-Rick


Hi Rick,

I have a jta1_2.jar in my jvm's jre/lib/ext which has this class.

If you have that puppy also, then maybe not having the geronimo jars
in the classpath will make a difference? Do classes from the classpath
get loaded before classes in the jre/lib/ext?

Myrna





Re: Language based matching

2006-07-12 Thread Rick Hillegas

Kathey Marsden wrote:


Rick Hillegas wrote:

[ some interesting stuff about performance]

LIKE is going to be a pile of work. I think your LOCALE_MATCHES 
function will have to duplicate a lot of the code in Derby. At the 
end of the day, you will replace LIKE with LOCALE_MATCHES and so lose 
the performance-enhancing query pre-processing which DERBY does for 
%. Here the weeds have become too thick for me.



So are you saying the answer to the question:
"Is there some easy Java regular expression matching function  like 
String.matches(Collator collator, String pattern, String value)? "


is "No"

Kathey

I don't know. I haven't looked deeply into this. Cursory googling didn't 
find anything.


Patch review and turnaround as 10.2 approaches

2006-07-12 Thread Kathey Marsden
In almost every feature release I have been involved in,   folks start 
having trouble getting reviews in the final push as everyone scrambles 
to get their own work done.   Add to that the fact that we do a 
generally horrible job in keeping up with patches especially for 
independent contributors who are expected to wait  months for review; we 
are headed for trouble.  I think we will see a situation where  actual  
completed fixes will be left out of the release because they are 
ignored.  And that is bad for Derby.


I would like to propose that every active  developer  registered in Jira 
 make efforts to reduce the patch backlog by trying to do something to 
move along at least 2 patches outside of our  personal 10.2 line items  
each week.  See:

http://wiki.apache.org/db-derby/PatchListMaintenance

and in general work to improve patch throughput by  heeding the advice at:
http://wiki.apache.org/db-derby/PatchAdvice

Critical to avoiding fixes being left behind will be getting reviews 
from outside the committer base, and even just folks trying out patches 
for fix verification if they don't know the code.  If a change has 
been tried out and reviewed before a committer looks at it, it can get 
in much, much faster.


Thanks

Kathey



[jira] Commented: (DERBY-1015) Define interface between network server and engine through Java interfaces.

2006-07-12 Thread David Van Couvering (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1015?page=comments#action_12420677 ] 

David Van Couvering commented on DERBY-1015:


I looked at the patches, and they look quite good, very simple and direct, and 
creating what I think is a very useful and important abstraction between the 
network server and the engine.  

I think it would be good to complete the abstraction and not depend directly on 
any engine classes, including EmbedSQLException, but I would argue that should 
be a separate JIRA.

I'll work on getting this committed.
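
For readers following along, a rough illustration of the interface pattern the 
issue describes (a sketch only; the interface content below is hypothetical, 
not Derby's actual internal API):

// The engine-side classes (EmbedConnection, BrokeredConnection) implement one
// shared internal interface, and the network server calls through that
// interface instead of casting to concrete engine classes or using reflection.
public interface EngineConnection extends java.sql.Connection {
    // hypothetical example of something the server needs from any engine connection
    boolean transactionIsIdle() throws java.sql.SQLException;
}

// elsewhere in the network server (also hypothetical):
//   EngineConnection conn = (EngineConnection) database.getConnection();
//   if (conn.transactionIsIdle()) { ... }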

> Define interface between network server and engine through Java interfaces.
> ---
>
>  Key: DERBY-1015
>  URL: http://issues.apache.org/jira/browse/DERBY-1015
>  Project: Derby
> Type: Improvement

>   Components: JDBC
> Reporter: Daniel John Debrunner
> Assignee: Daniel John Debrunner
>  Fix For: 10.2.0.0
>  Attachments: Derby1015.p2.diff.txt, derby1015.diff.txt, 
> derby1015.p2.stat.txt, derby1015.stat.txt
>
> API between the network server and engine is not well defined, leading to 
> inconsistent & multiple ways of handling the different objects returned, such 
> as reflection, explicit casting etc. This in turn has lead to bugs such as 
> DERBY-966 . DERBY-1005, and DERBY-1006, and access to underlying objects by 
> the application that should be hidden.
> Define interfaces, such as EngineConnection, that both EmbedConnection and 
> BrokeredConnection implement. Thus the network server can rely on the fact 
> that any connection it obtains will implement EngineConnection, and call the 
> required methods through that interface.
> Most likely will need EngineConnection, EnginePreparedStatement and 
> EngineResultSet.. These interfaces would be internal to derby and not exposed 
> to applications.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Language based matching

2006-07-12 Thread Kathey Marsden

Rick Hillegas wrote:

[ some interesting stuff about performance]

LIKE is going to be a pile of work. I think your LOCALE_MATCHES 
function will have to duplicate a lot of the code in Derby. At the end 
of the day, you will replace LIKE with LOCALE_MATCHES and so lose the 
performance-enhancing query pre-processing which DERBY does for %. 
Here the weeds have become too thick for me.



So are you saying the answer to the question:
"Is there some easy Java regular expression matching function  like 
String.matches(Collator collator, String pattern, String value)? "


is "No"

Kathey



Re: DERBY-1015 patches for review.

2006-07-12 Thread Daniel John Debrunner
Sunitha Kambhampati wrote:

> Last week, I attached two small patches to DERBY-1015, and one of the
> patches solves the case for DERBY-1227. "Define interface between
> network server and engine through Java interfaces".
> http://issues.apache.org/jira/browse/DERBY-1015
> 
> Can someone please review these patches.

I will look at them since I submitted the original issue.

Dan.



DERBY-1015 patches for review.

2006-07-12 Thread Sunitha Kambhampati
Last week, I attached two small patches to DERBY-1015, and one of the 
patches solves the case for DERBY-1227. "Define interface between 
network server and engine through Java interfaces". 
http://issues.apache.org/jira/browse/DERBY-1015


Can someone please review these patches.

Thanks much,
Sunitha. 



[jira] Updated: (DERBY-1330) Provide runtime privilege checking for grant/revoke functionality

2006-07-12 Thread Mamta A. Satoor (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1330?page=all ]

Mamta A. Satoor updated DERBY-1330:
---

Attachment: Derby1330uuidIndexForPermsSystemTablesV6diff.txt
Derby1330uuidIndexForPermsSystemTablesV6stat.txt

As per Dan's suggestion, I have moved the resetting of the permission descriptor's 
uuid into the DataDictionary.addRemovePermissionsDescriptor method. derbyall 
runs fine with no new failures. I have also added a couple of tests to 
lang\grantRevokeDDL.sql.

Please find this change in the attached 
Derby1330uuidIndexForPermsSystemTablesV6diff.txt. svn stat -q output is in 
Derby1330uuidIndexForPermsSystemTablesV6stat.txt

> Provide runtime privilege checking for grant/revoke functionality
> -
>
>  Key: DERBY-1330
>  URL: http://issues.apache.org/jira/browse/DERBY-1330
>  Project: Derby
> Type: Sub-task

>   Components: SQL
> Versions: 10.2.0.0
> Reporter: Mamta A. Satoor
> Assignee: Mamta A. Satoor
>  Attachments: AuthorizationModelForDerbySQLStandardAuthorization.html, 
> AuthorizationModelForDerbySQLStandardAuthorizationV2.html, 
> Derby1330PrivilegeCollectionV2diff.txt, 
> Derby1330PrivilegeCollectionV2stat.txt, 
> Derby1330PrivilegeCollectionV3diff.txt, 
> Derby1330PrivilegeCollectionV3stat.txt, 
> Derby1330ViewPrivilegeCollectionV1diff.txt, 
> Derby1330ViewPrivilegeCollectionV1stat.txt, 
> Derby1330uuidIndexForPermsSystemTablesV4diff.txt, 
> Derby1330uuidIndexForPermsSystemTablesV4stat.txt, 
> Derby1330uuidIndexForPermsSystemTablesV5diff.txt, 
> Derby1330uuidIndexForPermsSystemTablesV5stat.txt, 
> Derby1330uuidIndexForPermsSystemTablesV6diff.txt, 
> Derby1330uuidIndexForPermsSystemTablesV6stat.txt
>
> Additional work needs to be done for grant/revoke to make sure that only 
> users with required privileges can access various database objects. In order 
> to do that, first we need to collect the privilege requirements for various 
> database objects and store them in SYS.SYSREQUIREDPERM. Once we have this 
> information then when a user tries to access an object, the required 
> SYS.SYSREQUIREDPERM privileges for the object will be checked against the 
> user privileges in SYS.SYSTABLEPERMS, SYS.SYSCOLPERMS and 
> SYS.SYSROUTINEPERMS. The database object access will succeed only if the user 
> has the necessary privileges.
> SYS.SYSTABLEPERMS, SYS.SYSCOLPERMS and SYS.SYSROUTINEPERMS are already 
> populated by Satheesh's work on DERBY-464. But SYS.SYSREQUIREDPERM doesn't 
> have any information in it at this point and hence no runtime privilege 
> checking is getting done at this point.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Google SOC:MySQL to Derby Migration tool design question

2006-07-12 Thread Kathey Marsden

Ramin Moazeni wrote:


Hi Bryan,

I am not sure this would be feasible given the amount of time I have
to finish this project as well as my familiarity with DdlUtils code
base. But if everybody agrees to it, I can start working on it.


Hi Ramin,

I think that even if your project scope was reduced to accommodate doing 
it the right way, this incremental  contribution would provide the 
greatest value.  I wonder if Martin or someone from the DDLUtils 
project might be willing to co-mentor here, so that Ramin can work 
toward a MySQL to Derby Migration tool through DDLUtils.


Kathey








Re: Language based matching

2006-07-12 Thread Rick Hillegas

Hi Kathey,

My gut feeling is that you are headed off into the tall weeds here. That 
said, let me walk with you part way into the swamp. Another feature you 
might want would be DERBY-481, computed columns. This would help you get 
better performance. So, for instance, you could declare your data like this:


create table foo
(
    name varchar(20),
    nameKey varchar(60) for bit data generated always as
        ( locale_order( 'pl', 'PL', name ) )
);
create index foo_idx on foo( nameKey );

Lacking DERBY-481, you might get away with

create table foo
(
    name varchar(20),
    nameKey varchar(60) for bit data
);
create index foo_idx on foo( nameKey );

and then use triggers or procedures to populate the nameKey column based 
on the value in name.


Then you could get decent performance if you did ORDER BY and GROUP BY 
on foo.nameKey. Other operations might look like this:


select * from foo
where nameKey in ( locale_order( 'pl', 'PL', 'dsfaf' ), ... );

select * from foo
where nameKey between locale_order( 'pl', 'PL', 'lkjh' ) and 
locale_order( 'pl', 'PL', 'mnbv' );


select * from foo
where nameKey < locale_order( 'pl', 'PL', 'asdfgf' );
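
For what it's worth, a LOCALE_ORDER function along these lines could probably 
be backed by java.text.Collator collation keys. A minimal sketch (class and 
method names are made up, and whether Derby's FOR BIT DATA ordering matches a 
byte-wise comparison of these keys is an assumption to verify):

import java.text.CollationKey;
import java.text.Collator;
import java.util.Locale;

public class LocaleOrder {
    // Hypothetical body for a SQL function such as
    //   LOCALE_ORDER(language, country, value) RETURNS VARCHAR (60) FOR BIT DATA
    // The returned bytes are a collation key, so comparing the byte arrays
    // reproduces the locale's collation order.
    public static byte[] localeOrder(String language, String country, String value) {
        if (value == null) {
            return null;
        }
        Collator collator = Collator.getInstance(new Locale(language, country));
        CollationKey key = collator.getCollationKey(value);
        return key.toByteArray();
    }
}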

LIKE is going to be a pile of work. I think your LOCALE_MATCHES function 
will have to duplicate a lot of the code in Derby. At the end of the 
day, you will replace LIKE with LOCALE_MATCHES and so lose the 
performance-enhancing query pre-processing which DERBY does for %. Here 
the weeds have become too thick for me.




Kathey Marsden wrote:


Rick Hillegas wrote:



3) The locale-sensitive meaning of <, =, and > affected the operation 
of all orderings of national strings, including sorts, indexes, 
unions, group-by's, like's, between's, and in's.


At one point I was keen on re-enabling the national string types. Now 
I am leaning toward implementing the ANSI collation language. I think 
this is more powerful. In particular, it lets you support more than 
one language-sensitive ordering in the same database.


You and your customer face a hard problem trying to migrate national 
strings from Cloudscape 5.1.60 into Derby 10.1.3 or 10.2. I'm at a 
loss how to do this in a way that preserves Cloudscape's performance.




Thank you so much Rick for helping me understand this stuff. For now 
lets just assume this is just a small dataset and  set  performance 
aside I am interested to know
1)   When might Locale specific matching  be different  in the 
context  WHERE value LIKE '%<   >%'  (or whatever language we use)  
besides the deprecated Norwegian 'aa' and when might this be useful?  
Is it somehow related to bidirectional data like Hebrew and Arabic?


I'm afraid I don't understand the question. I think you are going to 
have to duplicate the LIKE processing code, splicing special characters 
into subkeys created by LOCALE_ORDER. I don't understand the issues with 
Semitic languages but I suspect that Arabic orthography creates some 
interesting cases.


 2)  Is there some easy java code that can be used to accomplish 
writing a LOCALE_MATCHES(pattern,value) function?


I'm afraid I can't point you at anything easier than Derby's code.

For the other functionality  I have these equivalent functions to 
offer as a workaround (see 
http://wiki.apache.org/db-derby/LanguageBasedOrdering)


ORDER BY -  Use ORDER BY expression with  LOCALE_ORDER function 
implemented with Collator.getCollationKey() 
<, =,  > , BETWEEN  -  Use LOCALE_COMPARE function implemented with 
Collator.compare()
IN - Since this is an exact match, would the non-locale specific 
matching work ok here?


I'm not sure I understand the question. I don't think you can get around 
wrapping locale_order around the left and right expressions:


select * from bar where locale_order( 'pl', 'PL', name ) in ( select 
locale_order( 'pl', 'PL', name ) from wibble );


GROUP-BY - No solution yet but GROUP BY expression in progress will 
allow LOCALE_ORDER to be used.
*LIKE -*  Is there some easy Java regular expression 
matching function like String.matches(Collator collator, String 
pattern, String value)? I can't find it.  The code in 
org.apache.derby.iapi.types.Like looks pretty involved, but perhaps 
that is what is needed.  I just want to confirm before I go down 
that path and try to figure it out.


I agree that this looks pretty involved.



Thanks

Kathey

P.S.  I once came very close to getting a cash register meant to 
interface to a gas pump working in a Deli  with a scale  until Mother 
Nature stepped in and raised the Russian River to the point that it 
swallowed the whole thing up, so I have been known to try too hard for 
a workaround.  If trying to work around Locale-specific processing 
in Derby with FUNCTIONS is a doomed enterprise, I welcome that 
perspective as historically I sometimes don't know when to give up.






[jira] Updated: (DERBY-551) Allow invoking java stored procedures from inside a trigger. Make CALL a valid statement in the trigger body.

2006-07-12 Thread Deepa Remesh (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-551?page=all ]

Deepa Remesh updated DERBY-551:
---

Attachment: derby-551-patch2-v1.diff

Thanks Dan for committing patch1. 

Based on Dan's suggestion, I am working on adding a check to the parser for 
disallowing procedures that modify SQL data in before triggers. However, this 
patch is not quite ready.

Meantime, I am attaching a follow-up patch 'derby-551-patch2-v1.diff' which 
adds more comments to InternalTriggerExecutionContext.validateStatement. This 
patch only adds comments and does not change any code. Please take a look at it 
and commit if okay.

> Allow invoking java stored procedures from inside a trigger. Make CALL a 
> valid statement in the trigger body.
> -
>
>  Key: DERBY-551
>  URL: http://issues.apache.org/jira/browse/DERBY-551
>  Project: Derby
> Type: New Feature

>   Components: SQL
> Versions: 10.1.1.0
>  Environment: All platforms
> Reporter: Satheesh Bandaram
> Assignee: Deepa Remesh
>  Fix For: 10.2.0.0
>  Attachments: ProcedureInTrigger_Tests_v1.html, derby-551-draft1.diff, 
> derby-551-draft1.status, derby-551-draft2.status, derby-551-draft3.diff, 
> derby-551-draft3.status, derby-551-patch1-v1.diff, 
> derby-551-patch1-v1.status, derby-551-patch2-v1.diff, derby-551draft2.diff
>
> Derby currently doesn't allow CALL statement to be used in a trigger body. It 
> would be great to allow java stored procedure invocation inside a trigger. 
> Since Derby doesn't have SQL procedure language, triggers can only execute a 
> single SQL statement. If we allow stored procedures in triggers, it would be 
> possible to write a trigger that involves more than just one SQL statement. 
> Functions are currently allowed, but they are read-only.
> I believe it is fairly easy to support this enhancement. Need good amount of 
> testing though.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Google SOC:MySQL to Derby Migration tool design question

2006-07-12 Thread Ramin Moazeni

Hi Bryan,

I am not sure this would be feasible given the amount of time I have
to finish this project as well as my familiarity with DdlUtils code
base. But if everybody agrees to it, I can start working on it.

Thanks
Ramin

On 7/11/06, Bryan Pendleton <[EMAIL PROTECTED]> wrote:

> The DdlUtils tool seems not be capable of migrating views, CHECK
> constraints,  and stored procedures. I would like to know what do you
> think if DdlUtils tool can be reused for migrating the tables and
> Indexes, and use the DatabaseMetadata for migrating views and stored
> procedures? .

Perhaps another possibility would be for you to improve DdlUtils so
that it has these desirable features. The end result would be a better
DdlUtils *and* a MySQL-to-Derby migration tool.

thanks,

bryan




Re: UnsupportedClassVersionError running xa suite under jdk1.3

2006-07-12 Thread Myrna van Lunteren

On 7/12/06, Rick Hillegas <[EMAIL PROTECTED]> wrote:

I'm seeing the following error when I run the xa suite under jdk1.3.
This looks like an environmental problem to me. Would appreciate advice
on the accepted way to fix my environment for running the tests under
jdk1.3.

The suite dies trying to load javax.transaction.xa.Xid. This is a class
which is not present in the 1.3 jdk but which appears in 1.4 and later.
It appears to me that since the loader can't find this class in the jdk,
it grabs it from geronimo-spec-jta-1.0.1B-rc4.jar, which I'm suspecting
causes the class version error.

My classpath includes the following jars from trunk/tools/java

db2jcc.jargeronimo-spec-jta-1.0.1B-rc4.jar
javacc.jar  junit.jar   xml-apis.jar
db2jcc_license_c.jar  geronimo-spec-servlet-2.4-rc4.jar
jce1_2_2.jarservlet.jar
empty.jar jakarta-oro-2.0.8.jar
jdbc2_0-stdext.jar  xercesImpl.jar

Here's the error:

Exception in thread "main" java.lang.UnsupportedClassVersionError:
javax/transaction/xa/Xid (Unsupported major.minor ver
sion 48.0)
   at java.lang.ClassLoader.defineClass0(Native Method)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:488)
   at
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:106)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:243)
   at java.net.URLClassLoader.access$100(URLClassLoader.java:51)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:190)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:183)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:294)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:288)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:250)
   at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:310)
   at java.lang.Class.forName0(Native Method)
   at java.lang.Class.forName(Class.java:115)
   at
org.apache.derbyTesting.functionTests.harness.RunList.shouldSkipTest(Unknown
Source)
   at
org.apache.derbyTesting.functionTests.harness.RunList.setSuiteProperties(Unknown
Source)
   at
org.apache.derbyTesting.functionTests.harness.RunList.runSuites(Unknown
Source)
   at
org.apache.derbyTesting.functionTests.harness.RunList.(Unknown Source)
   at
org.apache.derbyTesting.functionTests.harness.RunSuite.getSuitesList(Unknown
Source)
   at
org.apache.derbyTesting.functionTests.harness.RunSuite.main(Unknown Source)

Thanks,
-Rick


Hi Rick,

I have a jta1_2.jar in my jvm's jre/lib/ext which has this class.

If you have that puppy also, then maybe not having the geronimo jars
in the classpath will make a difference? Do classes from the classpath
get loaded before classes in the jre/lib/ext?
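
(One quick way to check which jar the class actually comes from is to ask for 
its code source -- a small diagnostic sketch using standard APIs, probing the 
class named in the error above:)

public class WhereIsXid {
    public static void main(String[] args) throws Exception {
        // Prints the jar (or directory) the class was actually loaded from,
        // which shows whether jre/lib/ext or the classpath won.
        Class c = Class.forName("javax.transaction.xa.Xid");
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        System.out.println(src == null
                ? "no CodeSource (boot class path)"
                : src.getLocation().toString());
    }
}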

Myrna


[jira] Updated: (DERBY-1417) Add new, lengthless overloads to the streaming api

2006-07-12 Thread Kristian Waagan (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1417?page=all ]

Kristian Waagan updated DERBY-1417:
---

Attachment: derby-1417-3a-embimpl-and-tests.diff
derby-1417-3a-embimpl-and-tests.stat

'derby-1417-3a-embimpl-and-tests.diff' provides tests and implementations for 
the following methods on the embedded side:
[ResultSet]
  public void updateAsciiStream(int columnIndex, InputStream x)
  public void updateBinaryStream(int columnIndex, InputStream x)
  public void updateCharacterStream(int columnIndex, Reader x)
  public void updateAsciiStream(String columnName, InputStream x)
  public void updateBinaryStream(String columnName, InputStream x)
  public void updateCharacterStream(String columnName, Reader reader)
  public void updateBlob(int columnIndex, InputStream x)
  public void updateBlob(String columnName, InputStream x)
  public void updateClob(int columnIndex, Reader x)
  public void updateClob(String columnName, Reader x)
[PreparedStatement]
  public void setBinaryStream(int parameterIndex, InputStream x)
  public void setAsciiStream(int parameterIndex, InputStream x)
  public void setCharacterStream(int parameterIndex, Reader reader)
  public void setClob(int parameterIndex, Reader reader)
  public void setBlob(int parameterIndex, InputStream inputStream)

*IMPORTANT*: This patch must be built with Mustang build 91 for the tests to 
compile!

Some of the tests are temporarily disabled for the client driver. These will be 
enabled when the client implementation is submitted.

I made some changes to ReaderToUTF8Stream, and to the 
setXXXStreamInternal methods. I would appreciate it if someone took a look at them.

Derbyall ran cleanly minus the 'dynamic' JDBC 4 tests (VerifySignatures, 
ClosedObjects, UnsupportedVetter).
I plan to do some additional testing with large LOBs, and will report back on 
this. These tests will not run as part of any suite (due to time and memory 
requirements), but I might submit the code for inclusion anyway.


To the committers: Please do not commit this before Mustang build 91 is out!
(must be available at http://download.java.net/jdk6/binaries/)
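
For context, a minimal usage sketch of what the new lengthless overloads look 
like from application code (assuming a Mustang/JDBC 4 runtime; the table 
BLOBTABLE and its columns are made up):

import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class LengthlessInsert {
    // Inserts a BLOB without knowing its length up front.
    public static void insert(Connection conn, String fileName) throws Exception {
        InputStream in = new FileInputStream(fileName);
        PreparedStatement ps =
            conn.prepareStatement("INSERT INTO BLOBTABLE(ID, DATA) VALUES (?, ?)");
        ps.setInt(1, 1);
        // New in JDBC 4: no length argument required.
        ps.setBinaryStream(2, in);
        ps.executeUpdate();
        ps.close();
        in.close();
    }
}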

> Add new, lengthless overloads to the streaming api
> --
>
>  Key: DERBY-1417
>  URL: http://issues.apache.org/jira/browse/DERBY-1417
>  Project: Derby
> Type: New Feature

>   Components: JDBC
> Versions: 10.2.0.0
> Reporter: Rick Hillegas
> Assignee: Kristian Waagan
>  Fix For: 10.2.0.0
>  Attachments: derby-1417-01-castsInTests.diff, 
> derby-1417-1a-notImplemented.diff, derby-1417-1a-notImplemented.stat, 
> derby-1417-2a-rstest-refactor.diff, derby-1417-3a-embimpl-and-tests.diff, 
> derby-1417-3a-embimpl-and-tests.stat
>
> The JDBC4 Expert Group has approved a new set of overloads for the streaming 
> methods. These overloads do not take a length argument. Here are the new 
> overloads:
> PreparedStatement.setAsciiStream(int parameterIndex, java.io.InputStream x)
> PreparedStatement.setBinaryStream(int parameterIndex, java.io.InputStream x)
> PreparedStatement.setCharacterStream(int parameterIndex, java.io.Reader 
> reader)
> PreparedStatement.setNCharacterStream(int parameterIndex, java.io.Reader 
> reader)
> PreparedStatement.setBlob(int parameterIndex, java.io.InputStream inputStream)
> PreparedStatement.setClob(int parameterIndex, java.io.Reader reader)
> PreparedStatement.setNClob(int parameterIndex, java.io.Reader reader)
> CallableStatement.setAsciiStream(java.lang.String parameterName, 
> java.io.InputStream x)
> CallableStatement.setBinaryStream(java.lang.String parameterName, 
> java.io.InputStream x)
> CallableStatement.setCharacterStream(java.lang.String parameterName, 
> java.io.Reader reader)
> CallableStatement.setNCharacterStream(java.lang.String parameterName, 
> java.io.Reader reader)
> CallableStatement.setBlob(java.lang.String parameterName, java.io.InputStream 
> inputStream)
> CallableStatement.setClob(java.lang.String parameterName, java.io.Reader 
> reader)
> CallableStatement.setNClob(java.lang.String parameterName, java.io.Reader 
> reader)
> ResultSet.updateAsciiStream(int columnIndex, java.io.InputStream x)
> ResultSet.updateAsciiStream(java.lang.String columnLabel, java.io.InputStream 
> x)
> ResultSet.updateBinaryStream(int columnIndex, java.io.InputStream x)
> ResultSet.updateBinaryStream(java.lang.String columnLabel, 
> java.io.InputStream x)
> ResultSet.updateCharacterStream(int columnIndex, java.io.Reader x)
> ResultSet.updateCharacterStream(java.lang.String columnLabel, java.io.Reader 
> x)
> ResultSet.updateNCharacterStream(int columnIndex, java.io.Reader x)
> ResultSet.updateNCharacterStream(java.lang.String columnLabel, java.io.Reader 
> x)  
> ResultSet.updateBlob(int columnIndex, java.io.InputStream inputStream)
> ResultSet.updateBlob(java.lang.String columnLabel, java.io.InputStream 
> inputStream)
> ResultSet.updateClob(int colum

Re: Revoke REFERENCES privilege and drop foreign key constraint

2006-07-12 Thread Mamta Satoor
Yes, that is what I had originally tried, which is to have
ConstraintDescriptor.makeInvalid make the following call when it receives the
REVOKE_PRIVILEGE action:

getDataDictionary().dropConstraintDescriptor(getTableDescriptor(), this, lcc.getTransactionExecute());

But it looks like that is not sufficient to have all the other data structures
clean themselves of the constraint descriptor. For instance, after the call
above, the TableDescriptor still had the conglomerate of the constraint's
backing index attached to it.

thanks,
Mamta

On 7/12/06, Daniel John Debrunner <[EMAIL PROTECTED]> wrote:
Mamta Satoor wrote:
> Hi,
>
> I spent some time prototyping revoke privilege for foreign key constraint
> based on my proposal earlier in this thread.
> I added following code to ConstraintDescriptor.makeInvalid
>
>  if (action == DependencyManager.REVOKE_PRIVILEGE)
>  {
>   PreparedStatement ps = lcc.prepareInternalStatement("alter table "+
> table.getQualifiedName() + " drop constraint " + constraintName);
>
>   ResultSet rs = ps.execute(lcc, true, 0L);
>   rs.close();
>   rs.finish();
>   return;
>
>  }
>
> This works fine as long as the user who issued the revoke references
> privilege is a dba or owner of the table on which foreign key constraint is
> defined. But for any other user, the revoke references privilege barfs
> saying that user can't perform the operation in that schema.

I would have expected the ConstraintDescriptor to drop itself directly
using the DataDictionary apis, rather than go back through SQL. Would
that be possible?

Dan.


[jira] Updated: (DERBY-781) Materialize union subqueries in select list where possible to avoid creating invariant resultsets many times.

2006-07-12 Thread A B (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-781?page=all ]

A B updated DERBY-781:
--

Attachment: d781_v2.patch

Attaching an updated patch, d781_v2.patch, that is synced with the latest 
codeline and that also has a small fix to the lang/subquery test (there was a 
typo in the first patch).  Other than the minor test fix, this patch is 
identical to the _v1 patch.

Still awaiting review, if anyone has the time...

> Materialize union subqueries in select list where possible to avoid creating 
> invariant resultsets many times.
> -
>
>  Key: DERBY-781
>  URL: http://issues.apache.org/jira/browse/DERBY-781
>  Project: Derby
> Type: Improvement

>   Components: SQL
> Versions: 10.1.1.0, 10.2.0.0
>  Environment: generic
> Reporter: Satheesh Bandaram
> Assignee: A B
>  Attachments: DERBY-781_v1.html, d781_v1.patch, d781_v1.stat, d781_v2.patch
>
> Derby's handling of union subqueries in from list can be improved by 
> materializing invariant resultsets once, rather than creating them many times.
> For example:
> create view V1 as select i, j from T1 union select i,j from T2;
> create view V2 as select a,b from T3 union select a,b from T4;
> insert into T1 values (1,1), (2,2), (3,3), (4,4), (5,5);
> For a query like select * from V1, V2 where V1.j = V2.b and V1.i in 
> (1,2,3,4,5), it is possible the resultset for V2 is created 5 times. 
> (assuming V2 is chosen as the inner table) This can be very costly if 
> the underlying selects can take long time and also may perform union many 
> times.
> Enhance materialization logic in setOperatorNode.java. It currently returns 
> FALSE always.
> public boolean performMaterialization(JBitSet outerTables)
>   throws StandardException
> {
>   // RESOLVE - just say no to materialization right now - should be a 
> cost based decision
>   return false;
>   /* Actual materialization, if appropriate, will be placed by our parent 
> PRN.
>* This is because PRN might have a join condition to apply.  
> (Materialization
>* can only occur before that.
>*/
>   //return true;
> } 

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Optimizer patch reviews? (DERBY-781, DERBY-1357)

2006-07-12 Thread Army
I posted two patches for some optimizer changes a little over a week ago: one 
for DERBY-781 and one for DERBY-1357.


Has anyone had a chance to review either of them, or is anyone planning to?  I'm 
hoping to have these reviewed and committed sometime in the next few days so 
that I'm not forced to try to address issues at the last minute for the first 
10.2 release candidate.


Optimizer changes can sometimes be rather tricky, so the sooner the review--and 
the more eyes on the code--the better.


The DERBY-1357 changes are quite small and are very easily reviewable, while the 
DERBY-781 changes are more involved.  Anyone have some time to review either of 
these patches?


Many thanks,
Army



[jira] Commented: (DERBY-1504) Fix way to measure wasteful use of memory when LOB is communicated between network server and driver

2006-07-12 Thread Tomohito Nakayama (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1504?page=comments#action_12420641 ] 

Tomohito Nakayama commented on DERBY-1504:
--

By "this WorkBook" I mean serverMemoryUsage.xls
 ( 
http://issues.apache.org/jira/secure/attachment/12336743/serverMemoryUsage.xls 
).

> Fix way to measure wasteful use of memory when LOB is communicated between 
> network server and driver
> 
>
>  Key: DERBY-1504
>  URL: http://issues.apache.org/jira/browse/DERBY-1504
>  Project: Derby
> Type: Sub-task

>   Components: Miscellaneous
> Reporter: Tomohito Nakayama
> Assignee: Tomohito Nakayama
>  Attachments: MeasureMemoryUsageOfNetworkServer.java, serverMemoryUsage.xls
>
> I will upload the example of measurement in this issue and 
> fix it as way of measurement.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-1504) Fix way to measure wasteful use of memory when LOB is communicated between network server and driver

2006-07-12 Thread Tomohito Nakayama (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1504?page=comments#action_12420640 ] 

Tomohito Nakayama commented on DERBY-1504:
--

I will measure server-side memory usage using MeasureMemoryUsageOfNetworkServer.java
(http://issues.apache.org/jira/secure/attachment/12336742/MeasureMemoryUsageOfNetworkServer.java).

I hope the totalMemory shown in this WorkBook is reduced through DERBY-550.

> Fix way to measure wasteful use of memory when LOB is communicated between 
> network server and driver
> 
>
>  Key: DERBY-1504
>  URL: http://issues.apache.org/jira/browse/DERBY-1504
>  Project: Derby
> Type: Sub-task

>   Components: Miscellaneous
> Reporter: Tomohito Nakayama
> Assignee: Tomohito Nakayama
>  Attachments: MeasureMemoryUsageOfNetworkServer.java, serverMemoryUsage.xls
>
> I will upload the example of measurement in this issue and 
> fix it as way of measurement.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Revoke REFERENCES privilege and drop foreign key constraint

2006-07-12 Thread Daniel John Debrunner
Mamta Satoor wrote:
> Hi,
> 
> I spent some time prototyping revoke privilege for foreign key constraint
> based on my proposal earlier in this thread.
> I added following code to ConstraintDescriptor.makeInvalid
> 
>  if (action == DependencyManager.REVOKE_PRIVILEGE)
>  {
>   PreparedStatement ps = lcc.prepareInternalStatement("alter table "+
> table.getQualifiedName() + " drop constraint " + constraintName);
> 
>   ResultSet rs = ps.execute(lcc, true, 0L);
>   rs.close();
>   rs.finish();
>   return;
> 
>  }
> 
> This works fine as long as the user who issued the revoke references
> privilege is a dba or owner of the table on which foreign key constraint is
> defined. But for any other user, the revoke references privilege barfs
> saying that user can't perform the operation in that schema.

I would have expected the ConstraintDescriptor to drop itself directly
using the DataDictionary apis, rather than go back through SQL. Would
that be possible?

Dan.




[jira] Updated: (DERBY-1504) Fix way to measure wasteful use of memory when LOB is communicated between network server and driver

2006-07-12 Thread Tomohito Nakayama (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1504?page=all ]

Tomohito Nakayama updated DERBY-1504:
-

Attachment: serverMemoryUsage.xls

This workbook is created from log when BlobOutOfMem.java is executed.
( http://issues.apache.org/jira/secure/attachment/12336598/BlobOutOfMem.java )

The log is created by MeasureMemoryUsageOfNetworkServer.java
(http://issues.apache.org/jira/secure/attachment/12336742/MeasureMemoryUsageOfNetworkServer.java)



> Fix way to measure wasteful use of memory when LOB is communicated between 
> network server and driver
> 
>
>  Key: DERBY-1504
>  URL: http://issues.apache.org/jira/browse/DERBY-1504
>  Project: Derby
> Type: Sub-task

>   Components: Miscellaneous
> Reporter: Tomohito Nakayama
> Assignee: Tomohito Nakayama
>  Attachments: MeasureMemoryUsageOfNetworkServer.java, serverMemoryUsage.xls
>
> I will upload the example of measurement in this issue and 
> fix it as way of measurement.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



UnsupportedClassVersionError running xa suite under jdk1.3

2006-07-12 Thread Rick Hillegas
I'm seeing the following error when I run the xa suite under jdk1.3. 
This looks like an environmental problem to me. Would appreciate advice 
on the accepted way to fix my environment for running the tests under 
jdk1.3.


The suite dies trying to load javax.transaction.xa.Xid. This is a class 
which is not present in the 1.3 jdk but which appears in 1.4 and later. 
It appears to me that since the loader can't find this class in the jdk, 
it grabs it from geronimo-spec-jta-1.0.1B-rc4.jar, which I'm suspecting 
causes the class version error.


My classpath includes the following jars from trunk/tools/java

db2jcc.jargeronimo-spec-jta-1.0.1B-rc4.jar   
javacc.jar  junit.jar   xml-apis.jar
db2jcc_license_c.jar  geronimo-spec-servlet-2.4-rc4.jar  
jce1_2_2.jarservlet.jar
empty.jar jakarta-oro-2.0.8.jar  
jdbc2_0-stdext.jar  xercesImpl.jar


Here's the error:

Exception in thread "main" java.lang.UnsupportedClassVersionError: 
javax/transaction/xa/Xid (Unsupported major.minor ver

sion 48.0)
   at java.lang.ClassLoader.defineClass0(Native Method)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:488)
   at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:106)

   at java.net.URLClassLoader.defineClass(URLClassLoader.java:243)
   at java.net.URLClassLoader.access$100(URLClassLoader.java:51)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:190)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:183)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:294)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:288)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:250)
   at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:310)
   at java.lang.Class.forName0(Native Method)
   at java.lang.Class.forName(Class.java:115)
   at 
org.apache.derbyTesting.functionTests.harness.RunList.shouldSkipTest(Unknown 
Source)
   at 
org.apache.derbyTesting.functionTests.harness.RunList.setSuiteProperties(Unknown 
Source)
   at 
org.apache.derbyTesting.functionTests.harness.RunList.runSuites(Unknown 
Source)
   at 
org.apache.derbyTesting.functionTests.harness.RunList.(Unknown Source)
   at 
org.apache.derbyTesting.functionTests.harness.RunSuite.getSuitesList(Unknown 
Source)
   at 
org.apache.derbyTesting.functionTests.harness.RunSuite.main(Unknown Source)


Thanks,
-Rick


[jira] Updated: (DERBY-1504) Fix way to measure wasteful use of memory when LOB is communicated between network server and driver

2006-07-12 Thread Tomohito Nakayama (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1504?page=all ]

Tomohito Nakayama updated DERBY-1504:
-

Attachment: MeasureMemoryUsageOfNetworkServer.java

This program starts the NetworkServer and 
prints memory usage information to System.out every millisecond.
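
(Roughly speaking, the sampling loop it describes amounts to something like 
the sketch below; the real attached program also starts the network server in 
the same VM:)

public class MemorySampler {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            // used = heap currently allocated minus the free part of it
            long used = rt.totalMemory() - rt.freeMemory();
            System.out.println(System.currentTimeMillis() + "\t" + used
                    + "\t" + rt.totalMemory() + "\t" + rt.maxMemory());
            Thread.sleep(1);
        }
    }
}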


> Fix way to measure wasteful use of memory when LOB is communicated between 
> network server and driver
> 
>
>  Key: DERBY-1504
>  URL: http://issues.apache.org/jira/browse/DERBY-1504
>  Project: Derby
> Type: Sub-task

>   Components: Miscellaneous
> Reporter: Tomohito Nakayama
> Assignee: Tomohito Nakayama
>  Attachments: MeasureMemoryUsageOfNetworkServer.java
>
> I will upload the example of measurement in this issue and 
> fix it as way of measurement.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-551) Allow invoking java stored procedures from inside a trigger. Make CALL a valid statement in the trigger body.

2006-07-12 Thread Daniel John Debrunner (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-551?page=comments#action_12420633 ] 

Daniel John Debrunner commented on DERBY-551:
-

Patch derby-551-patch1-v1.diff committed revision 421281. Thanks Deepa

> Allow invoking java stored procedures from inside a trigger. Make CALL a 
> valid statement in the trigger body.
> -
>
>  Key: DERBY-551
>  URL: http://issues.apache.org/jira/browse/DERBY-551
>  Project: Derby
> Type: New Feature

>   Components: SQL
> Versions: 10.1.1.0
>  Environment: All platforms
> Reporter: Satheesh Bandaram
> Assignee: Deepa Remesh
>  Fix For: 10.2.0.0
>  Attachments: ProcedureInTrigger_Tests_v1.html, derby-551-draft1.diff, 
> derby-551-draft1.status, derby-551-draft2.status, derby-551-draft3.diff, 
> derby-551-draft3.status, derby-551-patch1-v1.diff, 
> derby-551-patch1-v1.status, derby-551draft2.diff
>
> Derby currently doesn't allow CALL statement to be used in a trigger body. It 
> would be great to allow java stored procedure invocation inside a trigger. 
> Since Derby doesn't have SQL procedure language, triggers can only execute a 
> single SQL statement. If we allow stored procedures in triggers, it would be 
> possible to write a trigger that involves more than just one SQL statement. 
> Functions are currently allowed, but they are read-only.
> I believe it is fairly easy to support this enhancement. Need good amount of 
> testing though.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Maximum size of materialized LOB

2006-07-12 Thread Kristian Waagan

Hello,

I have been playing around with the lengthless streaming overloads in 
JDBC4. In the discussion of DERBY-1471, it was suggested that we forget 
about layer B streaming in DRDA for the moment and instead implement 
a much simpler approach. When we have what we need, we can improve the 
lengthless overloads.


The planned approach on the client side is to create a LOB and have it 
materialize the whole stream in memory to determine the length. The LOB 
is then sent to the server by using the existing methods/API, which 
require a length argument.
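
In other words, something along these lines on the client side (just a sketch 
of the materialization step; the real change will live inside the client LOB 
classes):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class MaterializeStream {
    // Reads the whole stream into memory so that its length is known,
    // allowing the existing length-based API to be used afterwards.
    public static byte[] materialize(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        return out.toByteArray();  // length is now the array length
    }
}

This copying is also what drives the "at least 2x LOB size" client memory 
requirement mentioned further down.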


I have seen that the maximum possible size of a byte array is 
Integer.MAX_VALUE. However, when I tried this out, I was not able to 
create such a big array in all VMs. I got this error message:

java.lang.OutOfMemoryError: Requested array size exceeds VM limit

Does anyone know what's causing these reduced sizes?
Here are the numbers (all Sun VMs):
1.3 2^31 -20
1.4 2^31 -20
1.5, -d32   2^31 -20
1.5, -d64   2^31 -40
1.6, -d32   2^31 -1  < as expected


Thus, the maximum LOB size in Derby for the new lengthless overloads is 
limited by two factors (seen from the client):

1) Memory on the client (need at least 2xLOB size + overhead)
2) Maximum size of byte array (in range [2^31 -40, 2^31 -1])

Since 1) will be in effect most of the time (still not common with >4G 
RAM is it?), 2) will almost never be seen. I don't plan to support LOBs 
of 2^31 -1 bytes if the VM doesn't support that big byte arrays...


Out of curiosity, does anyone know the max limit for byte arrays on 
other VMs?
I attached the little program I used, don't forget to set the maximum 
heap size high enough.





--
Kristian
public class MaxByteArraySize {

    public static void main(String[] args) {
        int size = Integer.MAX_VALUE;
        byte[] b;
        while (true) {
            System.out.print(size);
            try {
                b = new byte[size];
                System.out.println("   SUCCESS!");
                break;
            } catch (Throwable t) {
                size--;
                System.out.println("   FAILED!");
            }
        }
    }
}


Re: [jira] Commented: (DERBY-550) BLOB : java.lang.OutOfMemoryError with network JDBC driver (org.apache.derby.jdbc.ClientDriver)

2006-07-12 Thread TomohitoNakayama

Hello Andreas.



//Even if OutOfMemoryError happens, it would not be a problem in an 
environment with a very small amount of memory 



I am not sure I understand why it would not be a problem in an 
environment with a very small amount of memory. The side-effects of 
OutOfMemoryError would be the same:


* they may occur anywhere in the user application
* client would hang if the DRDAConnThread stops 


Here, I think there is a difference of opinion between us.

I think expanding an object into memory is not completely wrong,
if there is some reason for it.
// And I think there may be such a reason in the case of server to 
client,
// such as that the program works in both client and server separately, 
though I'm not sure yet.
  I doubt whether it is practical to stream from server to client 
on demand.
  Now I'm reading the DRDA spec and trying to find the answer, though I have not 
found it yet  ...


Then, in my opinion, an OutOfMemoryError in an environment with very 
small memory is not always a problem.

It may be just a natural consequence.

I think you don't agree with this opinion of mine,
which is natural because we are not copies of each other.

However, the current implementation on the network server side seems to expand 
the object into memory more than once.

I think this behavior is very problematic,
because we could escape this problem just by sharing the memory into which the 
object was expanded in the network server.

I think this behavior should be fixed.

I think you would agree with this latter opinion.

Best regards.

Andreas Korneliussen wrote:


TomohitoNakayama wrote:


Hello.

Regretfully I'm not sure whether I can fix this issue by the 10.2 release,
because it is not clear where memory is used wastefully and
how we can judge whether we have solved this issue.



Hi.
I am not sure I could fix this for 10.2 either, therefore I suggested 
the other approach, which I would hope gives a less severe problem for 
the user to handle.


On the client side the problem occurs when 
NetStatementReply.copyEXTDTA(..) streams all the data over from the 
server and puts it into a byte array which is put into NetCursor. This 
array is later used to create LOBs. To fix it on the client side, the 
LOB implemtation classes would need to be reimplemented, so that they 
will do the streaming.


On the server side, the problem occurs in 
DRDAConnThread.readAndSetExtParam calls reader.getExtData(..). See my 
previous comments, where I suggested a solution.


There may of course be other problems also; however, this is what I found 
when tracing this problem with a debugger.


//Even if OutOfMemoryError happens, it would not be a problem in an 
environment with a very small amount of memory 



I am not sure I understand why it would not be a problem in an 
environment with a very small amount of memory. The side-effects of 
OutOfMemoryError would be the same:


* they may occur anywhere in the user application
* client would hang if the DRDAConnThread stops



//I think a criterion for judging whether this issue is resolved is needed ...

However, I think throwing an Exception based on the amount of spare memory 
would be too cunning a behavior ..



I think it would be good to use a conservative approach to avoid 
consuming all heap space in the VM.




Now ... I'm in a dilemma.
At least, I won't veto it.



Well, I am not sure I will do it then; the best thing is to fix the real 
problem.


Regards

Andreas


Best regards.


Andreas Korneliussen (JIRA) wrote:

   [ 
http://issues.apache.org/jira/browse/DERBY-550?page=comments#action_12420591 
]

Andreas Korneliussen commented on DERBY-550:


Unless the streaming could be fixed for 10.2 so that we avoid 
OutOfMemoryError on the receiver side, I would propose the following:


We know the size of the LOB, and can check if it can go into memory 
(using the Runtime class). If it cannot go into memory, we can throw 
an SQLException, instead of consuming all memory in the VM until we 
get OutOfMemoryError.


By using this approach, we avoid the following: 
* Side-effects on other connections in the VM: Although it is the 
LOB which is taking almost all the memory in the VM, the 
OutOfMemoryError may be thrown in another thread in the VM, causing 
side-effects on other connections or on the application itself.
* Currently, if the Network server goes out of memory when streaming 
data, the DRDAConnThread will stop. This causes hangs in the user 
applications.
If the streaming is fixed, there is no need to do this. Does anyone 
plan to fix the streaming issues for 10.2 ? If not, I will make a 
couple of JIRA issues to do the work of avoiding OutOfMemoryError by 
checking size before allocating the byte arrays.
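
A minimal sketch of the kind of conservative check proposed above (the 
headroom factor and the SQLState below are placeholders, not the actual 
implementation):

import java.sql.SQLException;

public class LobAllocationCheck {
    // Refuse to materialize a LOB that clearly cannot fit in the heap,
    // instead of running the VM into OutOfMemoryError.
    public static byte[] allocateLobBuffer(long lobLength) throws SQLException {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        long potentiallyFree = rt.maxMemory() - used;
        // Leave headroom for the rest of the VM (the factor of 2 is a guess).
        if (lobLength > Integer.MAX_VALUE || lobLength > potentiallyFree / 2) {
            throw new SQLException("LOB of length " + lobLength
                    + " is too large to materialize in memory", "XJ001");
        }
        return new byte[(int) lobLength];
    }
}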



 

BLOB : java.lang.OutOfMemoryError with network JDBC driver 
(org.apache.derby.jdbc.ClientDriver)
--- 



Key: DERBY-550
URL: http://issues.apache.org/jira/browse/DERBY-550
Pro

[jira] Commented: (DERBY-551) Allow invoking java stored procedures from inside a trigger. Make CALL a valid statement in the trigger body.

2006-07-12 Thread Daniel John Debrunner (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-551?page=comments#action_12420629 ] 

Daniel John Debrunner commented on DERBY-551:
-

Just to clarify, it seems that this patch does not disable MODIFIES SQL DATA 
procedures in a before trigger, only DDL statements in all triggers and DML 
actions in a before trigger. Is that correct?
You indicate that the checks might move once the code has settled; I think the 
comments in InternalTriggerExecutionContext.validateStatement could be enhanced 
with your knowledge. For example, adding comments to the DDL check that DDL 
statements as the trigger's action statement are disallowed by the parser and 
that the check is for statements executed by procedures run within a trigger 
context. Similar comments for before triggers, making it clear the multiple 
ways DML is disallowed, e.g. currently the DML is disallowed at compile time.
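
For reference, a sketch of the case this feature enables, issued through JDBC 
(the procedure, external class and table names are made up; syntax per the 
Derby documentation):

import java.sql.Connection;
import java.sql.Statement;

public class TriggerCallSketch {
    // Creates a procedure and an AFTER trigger whose action is a CALL --
    // the case this patch enables. A before trigger would not be allowed
    // to call a procedure that modifies SQL data.
    public static void create(Connection conn) throws Exception {
        Statement s = conn.createStatement();
        s.execute("CREATE PROCEDURE LOG_CHANGE() LANGUAGE JAVA PARAMETER STYLE JAVA"
                + " MODIFIES SQL DATA EXTERNAL NAME 'MyProcs.logChange'");
        s.execute("CREATE TRIGGER T1_TRIG AFTER INSERT ON T1"
                + " FOR EACH STATEMENT MODE DB2SQL CALL LOG_CHANGE()");
        s.close();
    }
}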

> Allow invoking java stored procedures from inside a trigger. Make CALL a 
> valid statement in the trigger body.
> -
>
>  Key: DERBY-551
>  URL: http://issues.apache.org/jira/browse/DERBY-551
>  Project: Derby
> Type: New Feature

>   Components: SQL
> Versions: 10.1.1.0
>  Environment: All platforms
> Reporter: Satheesh Bandaram
> Assignee: Deepa Remesh
>  Fix For: 10.2.0.0
>  Attachments: ProcedureInTrigger_Tests_v1.html, derby-551-draft1.diff, 
> derby-551-draft1.status, derby-551-draft2.status, derby-551-draft3.diff, 
> derby-551-draft3.status, derby-551-patch1-v1.diff, 
> derby-551-patch1-v1.status, derby-551draft2.diff
>
> Derby currently doesn't allow CALL statement to be used in a trigger body. It 
> would be great to allow java stored procedure invocation inside a trigger. 
> Since Derby doesn't have SQL procedure language, triggers can only execute a 
> single SQL statement. If we allow stored procedures in triggers, it would be 
> possible to write a trigger that involves more than just one SQL statement. 
> Functions are currently allowed, but they are read-only.
> I believe it is fairly easy to support this enhancement. Need good amount of 
> testing though.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[PATCH] DERBY-551: Allow invoking java stored procedures from inside a trigger. Make CALL a valid statement in the trigger body.

2006-07-12 Thread Deepa Remesh

I have a patch (derby-551-patch1-v1.diff) for this issue pending
review: http://issues.apache.org/jira/browse/DERBY-551#action_12420336

I would appreciate if someone can look at this patch.

Thanks,
Deepa


[jira] Commented: (DERBY-1130) Client should not allow databaseName to be set with setConnectionAttributes

2006-07-12 Thread Deepa Remesh (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1130?page=comments#action_12420626 ] 

Deepa Remesh commented on DERBY-1130:
-

For Kathey's question:  What is the exception for embedded if create=true is 
not specified? e.g. ds.setConnectionAttributes("databaseName=wombat") 
Exception for embedded is: XJ004 - Database '' not found.

I think we do not have the same SQL state in the client for the above 
exception. The corresponding SQLState is "08004 - The connection was refused 
because the database {0} was not found."

As the SQLStates do not match, I was thinking it would be okay to catch the 
exception at client itself and throw "08001 - Required property databaseName 
not set. " if database name is not set using setDatabaseName on the client data 
source. In case the database name is set using setDatabaseName and we try to 
over-ride it using "databaseName" property in setConnectionAttributes, the 
over-riding will fail. Only the database name set using setDatabaseName will be 
used. This was the behaviour in the patch I was working on.

However, if the above exception is confusing, I'll try to set a dummy database 
name on the client and send it to server to get back "08004 - The connection 
was refused because the database {0} was not found.". Please share your 
thoughts on this.
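
For comparison, the supported way to name the database on the client data 
source looks roughly like this (server and port values are just examples):

import java.sql.Connection;
import org.apache.derby.jdbc.ClientDataSource;

public class DataSourceSketch {
    public static Connection connect() throws Exception {
        ClientDataSource ds = new ClientDataSource();
        ds.setServerName("localhost");
        ds.setPortNumber(1527);
        // Set the database name with the standard property ...
        ds.setDatabaseName("wombat");
        // ... and use connection attributes only for the remaining attributes.
        ds.setConnectionAttributes("create=true");
        return ds.getConnection();
    }
}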

> Client should not allow databaseName to be set with setConnectionAttributes
> ---
>
>  Key: DERBY-1130
>  URL: http://issues.apache.org/jira/browse/DERBY-1130
>  Project: Derby
> Type: Bug

>   Components: Network Client
> Versions: 10.1.1.0, 10.1.1.1, 10.1.1.2, 10.1.2.0, 10.1.2.1, 10.1.2.2, 
> 10.1.2.3, 10.2.0.0, 10.1.3.0, 10.1.2.4
> Reporter: Kathey Marsden
> Assignee: Deepa Remesh

>
> Per this thread,  setConnectionAttributes should not set databaseName. 
> http://www.nabble.com/double-check-on-checkDataSource-t1187602.html#a3128621
> Currently this is allowed for client but should be disabled.  I think it is 
> OK to change because we have documented that client will be changed to match 
> embedded for implementation defined behaviour.   Hopefully its use is rare as 
> most folks would use the standard setDatabaseName.  Still there should be a 
> release not when the change is made and it would be better to change it 
> sooner than later:
> Below is the repro. 
> Here is the output with Client
> D>java DatabaseNameWithSetConnAttr
> ds.setConnectionAttributes(databaseName=wombat;create=true)
> ds.getDatabaseName() = null (should be null)
> FAIL: Should not have been able to set databaseName with connection attributes
> Also look for tests  disabled with this bug number in the test 
> checkDataSource30.java
> import java.sql.*;
> import java.lang.reflect.Method;
> public class DatabaseNameWithSetConnAttr{
>   public static void main(String[] args) {
>   try {
>   
>   String attributes = "databaseName=wombat;create=true";
>   org.apache.derby.jdbc.ClientDataSource ds = new
>   org.apache.derby.jdbc.ClientDataSource();
>   //org.apache.derby.jdbc.EmbeddedDataSource ds = new
>   //org.apache.derby.jdbc.EmbeddedDataSource();
>   System.out.println("ds.setConnectionAttributes(" + 
> attributes + ")");
>   ds.setConnectionAttributes(attributes);
>   System.out.println("ds.getDatabaseName() = " +
>  ds.getDatabaseName() 
> + " (should be null)" );
>   Connection conn  = ds.getConnection();
>   } catch (SQLException e) {
>   String sqlState = e.getSQLState();
>   if (sqlState != null && 
> sqlState.equals("XJ041"))
>   {
>   System.out.println("PASS: An exception was 
> thrown trying to get a connetion from a datasource after setting databaseName 
> with setConnectionAttributes");
>   System.out.println("EXPECTED EXCEPTION: " + 
> e.getSQLState() 
>  + " 
> - " + e.getMessage());
>   return;
>   }
>   while (e != null)
>   {
>   System.out.println("FAIL - UNEXPECTED 
> EXCEPTION: " + e.getSQLState());
>   e.printStackTrace();
>   e = e.getNextException();
>   }
>   return;
>   }
>   System.out.println("FAIL: Should not have been able to set 
> databaseName with connection attributes");
>  

[jira] Updated: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

2006-07-12 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-802?page=all ]

Andreas Korneliussen updated DERBY-802:
---

Attachment: derby-802v2.diff

The attached diff (derby-802v2.diff) has one change compared to the first diff:
* The logic for undoing the projection is moved to ProjectRestrictResultSet and 
takes advantage of the projectMappings array already built there.

> OutofMemory Error when reading large blob when statement type is 
> ResultSet.TYPE_SCROLL_INSENSITIVE
> --
>
>  Key: DERBY-802
>  URL: http://issues.apache.org/jira/browse/DERBY-802
>  Project: Derby
> Type: Bug

>   Components: JDBC
> Versions: 10.0.2.0, 10.0.2.1, 10.0.2.2, 10.1.1.0, 10.2.0.0, 10.1.2.0, 
> 10.1.1.1, 10.1.1.2, 10.1.2.1, 10.1.3.0, 10.1.2.2
>  Environment: all
> Reporter: Sunitha Kambhampati
> Assignee: Andreas Korneliussen
> Priority: Minor
>  Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff
>
> Grégoire Dubois on the list reported this problem.  From his mail: the 
> reproduction is attached below. 
> When the statement type is set to ResultSet.TYPE_SCROLL_INSENSITIVE, an 
> OutOfMemoryError is thrown when reading large blobs. 
> import java.sql.*;
> import java.io.*;
>
> /**
>  *
>  * @author greg
>  */
> public class derby_filewrite_fileread {
>
>     private static File file = new File("/mnt/BigDisk/Clips/BabyMamaDrama-JShin.wmv");
>     private static File destinationFile = new File("/home/greg/DerbyDatabase/" + file.getName());
>
>     /** Creates a new instance of derby_filewrite_fileread */
>     public derby_filewrite_fileread() {
>     }
>
>     public static void main(String args[]) {
>         try {
>             Class.forName("org.apache.derby.jdbc.EmbeddedDriver").newInstance();
>             Connection connection = DriverManager.getConnection(
>                     "jdbc:derby:/home/greg/DerbyDatabase/BigFileTestDB;create=true", "APP", "");
>             connection.setAutoCommit(false);
>
>             Statement statement = connection.createStatement(
>                     ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
>             ResultSet result = statement.executeQuery("SELECT TABLENAME FROM SYS.SYSTABLES");
>
>             // Create table if it doesn't already exist.
>             boolean exist = false;
>             while (result.next()) {
>                 if ("db_file".equalsIgnoreCase(result.getString(1)))
>                     exist = true;
>             }
>             if (!exist) {
>                 System.out.println("Create table db_file.");
>                 statement.execute("CREATE TABLE db_file (" +
>                         " name  VARCHAR(40)," +
>                         " file  BLOB(2G) NOT NULL)");
>                 connection.commit();
>             }
>
>             // Read file from disk, write on DB.
>             System.out.println("1 - Read file from disk, write on DB.");
>             PreparedStatement preparedStatement = connection.prepareStatement(
>                     "INSERT INTO db_file(name,file) VALUES (?,?)");
>             FileInputStream fileInputStream = new FileInputStream(file);
>             preparedStatement.setString(1, file.getName());
>             preparedStatement.setBinaryStream(2, fileInputStream, (int) file.length());
>             preparedStatement.execute();
>             connection.commit();
>             System.out.println("2 - END OF Read file from disk, write on DB.");
>
>             // Read file from DB, and write on disk.
>             System.out.println("3 - Read file from DB, and write on disk.");
>             result = statement.executeQuery(
>                     "SELECT file FROM db_file WHERE name='" + file.getName() + "'");
>             byte[] buffer = new byte[1024];
>             result.next();
>             BufferedInputStream inputStream =
>                     new BufferedInputStream(result.getBinaryStream(1), 1024);
>             FileOutputStream outputStream = new FileOutputStream(destinationFile);
>             int readBytes = 0;
>             while (readBytes != -1) {
>                 readBytes = inputStream.read(buffer, 0, buffer.length);
>                 if (readBytes != -1)
>                     outputStream.write(buffer, 0, readBytes);
>             }
>             inputStream.close();
>             outputStream.close();
>             System.out.println("4 - END OF Read file from DB, and write on disk.");
>         } catch (Exception e) {
>             e.printStackTrace(System.err);
>         }
>     }
> }
> It returns
> 1 - Read file from disk, write on DB.
> 2 - END OF Read file from disk, write on DB.
> 3 - Read file from DB, and write on disk.
> java.lang.OutOfMemoryError
> if the file is ~10MB or more

-- 
This message is automatically generated by JIRA.

Re: [jira] Commented: (DERBY-550) BLOB : java.lang.OutOfMemoryError with network JDBC driver (org.apache.derby.jdbc.ClientDriver)

2006-07-12 Thread Andreas Korneliussen

TomohitoNakayama wrote:

Hello.

Regretfully, I'm not sure whether I can fix this issue before the 10.2 release,
because it is not clear where memory is being used wastefully or
how we can judge whether we have solved this issue.



Hi.
I am not sure I could fix this for 10.2 either; therefore I suggested 
the other approach, which I hope gives the user a less severe problem 
to handle.


On the client side the problem occurs when 
NetStatementReply.copyEXTDTA(..) streams all the data over from the 
server and puts it into a byte array, which is put into NetCursor. This 
array is later used to create LOBs. To fix it on the client side, the 
LOB implementation classes would need to be reimplemented so that they 
do the streaming.


On the server side, the problem occurs when 
DRDAConnThread.readAndSetExtParam calls reader.getExtData(..). See my 
previous comments, where I suggested a solution.


There may of course be other problems as well; however, this is what I 
found when tracing the problem with a debugger.


//Even if OutOfMemoryError happens, it would not be a problem in an 
environment with a very small amount of memory 


I am not sure I understand why it would not be a problem in an 
environment with a very small amount of memory. The side effects of an 
OutOfMemoryError would be the same:


* it may occur anywhere in the user application
* the client would hang if the DRDAConnThread stops



//I think a criterion for judging whether this issue is resolved is needed ...

However, I think throwing an exception based on the amount of spare memory 
would be too cunning a behavior ...


I think it would be good to use a conservative approach to avoid 
consuming all heap space in the VM.




Now ... I'm in a dilemma.
At least, I won't veto it.



Well, I am not sure I will do it then; the best thing would be to fix the real problem.

Regards

Andreas

Best regards.


Andreas Korneliussen (JIRA) wrote:

   [ 
http://issues.apache.org/jira/browse/DERBY-550?page=comments#action_12420591 
]

Andreas Korneliussen commented on DERBY-550:


Unless the streaming could be fixed for 10.2 so that we avoid 
OutOfMemoryError on the receiver side, I would propose the following:


We know the size of the LOB, and can check if it can go into memory 
(using the Runtime class). If it cannot go into memory, we can throw 
an SQLException, instead of consuming all memory in the VM until we 
get OutOfMemoryError.
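
A minimal sketch of that check is below, under the assumption of a hypothetical helper (allocateLobBuffer is not an existing Derby method); the essential part is comparing the declared LOB length with the heap the VM can still obtain before doing the allocation.

import java.sql.SQLException;

/** Sketch only: refuse to materialize a LOB that cannot fit in the heap. */
public class LobMemoryCheck {

    static byte[] allocateLobBuffer(long declaredLength) throws SQLException {
        Runtime rt = Runtime.getRuntime();
        // Heap still obtainable: unused part of the current heap plus room left to grow.
        long available = rt.freeMemory() + (rt.maxMemory() - rt.totalMemory());
        if (declaredLength > Integer.MAX_VALUE || declaredLength > available) {
            throw new SQLException("LOB of " + declaredLength
                    + " bytes is too large to materialize in memory");
        }
        return new byte[(int) declaredLength];
    }

    public static void main(String[] args) throws SQLException {
        byte[] small = allocateLobBuffer(1024);
        System.out.println("Allocated " + small.length + " bytes");
    }
}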


By using this approach, we avoid the following problems:
* Side effects on other connections in the VM: although it is the LOB 
which is taking almost all the memory in the VM, the OutOfMemoryError 
may be thrown in another thread in the VM, causing side effects on 
other connections or on the application itself.
* Currently, if the Network Server goes out of memory when streaming 
data, the DRDAConnThread will stop. This causes hangs in the user 
applications.
If the streaming is fixed, there is no need to do this. Does anyone 
plan to fix the streaming issues for 10.2 ? If not, I will make a 
couple of JIRA issues to do the work of avoiding OutOfMemoryError by 
checking size before allocating the byte arrays.



 

BLOB : java.lang.OutOfMemoryError with network JDBC driver 
(org.apache.derby.jdbc.ClientDriver)
--- 



Key: DERBY-550
URL: http://issues.apache.org/jira/browse/DERBY-550
Project: Derby
        Type: Bug

 Components: JDBC, Network Server
   Versions: 10.1.1.0
Environment: Any environment.
   Reporter: Grégoire Dubois
   Assignee: Tomohito Nakayama
Attachments: BlobOutOfMem.java

When the org.apache.derby.jdbc.ClientDriver driver is used to access the
Derby database over the network, the driver writes the whole file
into memory (RAM) before sending it to the database.
Writing small files (smaller than 5 MB) into the database works fine,
but it is impossible to write big files (40 MB, for example, or more)
without getting the exception java.lang.OutOfMemoryError.
The org.apache.derby.jdbc.EmbeddedDriver doesn't have this problem.
Here follows some code that creates a database and a table, and tries to
write a BLOB. Two parameters need to be changed for the code to work for
you: DERBY_DBMS_PATH and FILE

import NetNoLedge.Configuration.Configs;
import org.apache.derby.drda.NetworkServerControl;
import java.net.InetAddress;
import java.io.*;
import java.sql.*;
/**
*
* @author  greg
*/
public class DerbyServer_JDBC_BLOB_test {

    // The unique instance of DerbyServer in the application.
    private static DerbyServer_JDBC_BLOB_test derbyServer;

    private NetworkServerControl server;

    private static final String DERBY_JDBC_DRIVER = "org.apache.derby.jdbc.ClientDriver";

    private static final String DERBY_DATABASE_NAME = "Test";

    // ###
    // ### SET HERE THE EXISTING PATH YOU WANT 
    // 
