[jira] Created: (DERBY-508) Wrong classname in javadoc for ClientDataSource

2005-08-16 Thread Knut Anders Hatlen (JIRA)
Wrong classname in javadoc for ClientDataSource
---

 Key: DERBY-508
 URL: http://issues.apache.org/jira/browse/DERBY-508
 Project: Derby
Type: Bug
  Components: Documentation  
Versions: 10.2.0.0
Reporter: Knut Anders Hatlen
Priority: Trivial


The javadoc for org.apache.derby.jdbc.ClientDataSource says:

The class ClientDataSource can be used in a connection pooling environment, 
and the class ClientXADataSource can be used in a distributed, and pooling 
environment.

The correct phrase should be: "The class ClientConnectionPoolDataSource can be 
used in a connection pooling environment".
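For reference, a sketch of how the corrected class-level comment might read (hypothetical excerpt; not the actual source file, only the wording reported above):

```java
// Hypothetical excerpt of the corrected class-level javadoc for
// org.apache.derby.jdbc.ClientDataSource, using the phrase from this report.
/**
 * The class ClientConnectionPoolDataSource can be used in a connection
 * pooling environment, and the class ClientXADataSource can be used in a
 * distributed, and pooling environment.
 */
class ClientDataSource {
}
```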

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-508) Wrong classname in javadoc for ClientDataSource

2005-08-16 Thread Knut Anders Hatlen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-508?page=all ]

Knut Anders Hatlen updated DERBY-508:
-

Attachment: DERBY-508.ClientDataSource-javadoc.diff

Attached a patch which fixes the incorrect classname.

 Wrong classname in javadoc for ClientDataSource
 ---

  Key: DERBY-508
  URL: http://issues.apache.org/jira/browse/DERBY-508
  Project: Derby
 Type: Bug
   Components: Documentation
 Versions: 10.2.0.0
 Reporter: Knut Anders Hatlen
 Priority: Trivial
  Attachments: DERBY-508.ClientDataSource-javadoc.diff

 The javadoc for org.apache.derby.jdbc.ClientDataSource says:
 The class ClientDataSource can be used in a connection pooling environment, 
 and the class ClientXADataSource can be used in a distributed, and pooling 
 environment.
 The correct phrase should be: The class ClientConnectionPoolDataSource can 
 be used in a connection pooling environment 




Re: [jira] Updated: (DERBY-508) Wrong classname in javadoc for ClientDataSource

2005-08-16 Thread Bernt M. Johnsen
I'll review and (possibly) commit it.

 Knut Anders Hatlen (JIRA) wrote (2005-08-16 08:45:57):
  [ http://issues.apache.org/jira/browse/DERBY-508?page=all ]
 
 Knut Anders Hatlen updated DERBY-508:
 -
 
 Attachment: DERBY-508.ClientDataSource-javadoc.diff
 
 Attached a patch which fixes the incorrect classname.
 
  Wrong classname in javadoc for ClientDataSource
  ---
 
   Key: DERBY-508
   URL: http://issues.apache.org/jira/browse/DERBY-508
   Project: Derby
  Type: Bug
Components: Documentation
  Versions: 10.2.0.0
  Reporter: Knut Anders Hatlen
  Priority: Trivial
   Attachments: DERBY-508.ClientDataSource-javadoc.diff
 
  The javadoc for org.apache.derby.jdbc.ClientDataSource says:
  The class ClientDataSource can be used in a connection pooling 
  environment, and the class ClientXADataSource can be used in a distributed, 
  and pooling environment.
  The correct phrase should be: The class ClientConnectionPoolDataSource can 
  be used in a connection pooling environment 
 
 

-- 
Bernt Marius Johnsen, Database Technology Group, 
Sun Microsystems, Trondheim, Norway




Developer status in Jira

2005-08-16 Thread Knut Anders Hatlen
Hi,

Could someone give me developer status in Jira? I am doing a little
research on DERBY-504 and would like to assign the bug to myself. My
username is knutanders.

Thanks!

-- 
Knut Anders



[jira] Updated: (DERBY-496) unit test 'org.apache.derbyTesting.unitTests.services.T_Diagnosticable' was failed

2005-08-16 Thread Tomohito Nakayama (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-496?page=all ]

Tomohito Nakayama updated DERBY-496:


Attachment: DERBY-496.patch

I have uploaded the patch.


The description follows.

Modification:
* Removed the whole diag package from derbyTesting.jar.
* Added the missing DiagnosticableGeneric class to the diag package.

Test:
* Ran the derbyall suite with the following two classpath configurations, which 
contain derbyTesting.jar in different places, and confirmed no errors.

CLASSPATH=$DERBY_INSTALL/jars/sane/derbyTesting.jar:$DERBY_INSTALL/tools/java/jakarta-oro-2.0.8.jar:$DERBY_INSTALL/jars/sane/derbynet.jar:$DERBY_INSTALL/jars/sane/derby.jar:$DERBY_INSTALL/jars/sane/derbytools.jar:$DERBY_INSTALL/jars/sane/derbyclient.jar:$DERBY_INSTALL/tools/java/db2jcc.jar:$DERBY_INSTALL/tools/java/db2jcc_license_c.jar:$DERBY_INSTALL/jars/sane/derbyLocale_es.jar:$DERBY_INSTALL/jars/sane/derbyLocale_de_DE.jar:$DERBY_INSTALL/jars/sane/derbyLocale_fr.jar:$DERBY_INSTALL/jars/sane/derbyLocale_it.jar:$DERBY_INSTALL/jars/sane/derbyLocale_ko_KR.jar:$DERBY_INSTALL/jars/sane/derbyLocale_pt_BR.jar:$DERBY_INSTALL/jars/sane/derbyLocale_zh_CN.jar:$DERBY_INSTALL/jars/sane/derbyLocale_zh_TW.jar:$DERBY_INSTALL/jars/sane/derbyLocale_ja_JP.jar:$CLASSPATH


CLASSPATH=$DERBY_INSTALL/tools/java/jakarta-oro-2.0.8.jar:$DERBY_INSTALL/jars/sane/derbynet.jar:$DERBY_INSTALL/jars/sane/derby.jar:$DERBY_INSTALL/jars/sane/derbytools.jar:$DERBY_INSTALL/jars/sane/derbyclient.jar:$DERBY_INSTALL/tools/java/db2jcc.jar:$DERBY_INSTALL/tools/java/db2jcc_license_c.jar:$DERBY_INSTALL/jars/sane/derbyLocale_es.jar:$DERBY_INSTALL/jars/sane/derbyLocale_de_DE.jar:$DERBY_INSTALL/jars/sane/derbyLocale_fr.jar:$DERBY_INSTALL/jars/sane/derbyLocale_it.jar:$DERBY_INSTALL/jars/sane/derbyLocale_ko_KR.jar:$DERBY_INSTALL/jars/sane/derbyLocale_pt_BR.jar:$DERBY_INSTALL/jars/sane/derbyLocale_zh_CN.jar:$DERBY_INSTALL/jars/sane/derbyLocale_zh_TW.jar:$DERBY_INSTALL/jars/sane/derbyLocale_ja_JP.jar:$DERBY_INSTALL/jars/sane/derbyTesting.jar:$CLASSPATH



 unit test 'org.apache.derbyTesting.unitTests.services.T_Diagnosticable' was 
 failed
 --

  Key: DERBY-496
  URL: http://issues.apache.org/jira/browse/DERBY-496
  Project: Derby
 Type: Bug
   Components: Build tools
  Environment: [EMAIL PROTECTED]:~$ cat /proc/version
 Linux version 2.4.27-2-386 ([EMAIL PROTECTED]) (gcc version 3.3.5 (Debian 
 1:3.3.5-12)) #1 Mon May 16 16:47:51 JST 2005
 [EMAIL PROTECTED]:~$ java -version
 java version "1.4.2_08"
 Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_08-b03)
 Java HotSpot(TM) Client VM (build 1.4.2_08-b03, mixed mode)
 Reporter: Tomohito Nakayama
 Assignee: Tomohito Nakayama
  Attachments: DERBY-496.patch, derby.log

 As a summary:
 I found the following exception in derby.log.
 [main] FAIL - org.apache.derbyTesting.unitTests.harness.T_Fail: Test failed - 
 DiagnosticUtil.toDiagString() failed, got: (T_DiagTestClass1.toString(): 
 object with diag interface), expected: (D_T_DiagTestClass1: object with diag 
 interface).
 org.apache.derbyTesting.unitTests.harness.T_Fail: Test failed - 
 DiagnosticUtil.toDiagString() failed, got: (T_DiagTestClass1.toString(): 
 object with diag interface), expected: (D_T_DiagTestClass1: object with diag 
 interface).
   at 
 org.apache.derbyTesting.unitTests.harness.T_Fail.testFailMsg(T_Fail.java:95)
   at 
 org.apache.derbyTesting.unitTests.services.T_Diagnosticable.t_001(T_Diagnosticable.java:105)
   at 
 org.apache.derbyTesting.unitTests.services.T_Diagnosticable.runTestSet(T_Diagnosticable.java:207)
   at 
 org.apache.derbyTesting.unitTests.harness.T_MultiIterations.runTests(T_MultiIterations.java:94)
   at 
 org.apache.derbyTesting.unitTests.harness.T_Generic.Execute(T_Generic.java:117)
   at 
 org.apache.derbyTesting.unitTests.harness.BasicUnitTestManager.runATest(BasicUnitTestManager.java:183)
   at 
 org.apache.derbyTesting.unitTests.harness.BasicUnitTestManager.runTests(BasicUnitTestManager.java:245)
   at 
 org.apache.derbyTesting.unitTests.harness.BasicUnitTestManager.boot(BasicUnitTestManager.java:92)
   at 
 org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java:1996)
   at 
 org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java:290)
   at 
 org.apache.derby.impl.services.monitor.BaseMonitor.bootService(BaseMonitor.java:1834)
   at 
 org.apache.derby.impl.services.monitor.BaseMonitor.startServices(BaseMonitor.java:966)
   at 
 org.apache.derby.impl.services.monitor.BaseMonitor.runWithState(BaseMonitor.java:398)
   at 
 org.apache.derby.impl.services.monitor.FileMonitor.init(FileMonitor.java:57)
   at 
 org.apache.derby.iapi.services.monitor.Monitor.startMonitor(Monitor.java:288)
   at 
 org.apache.derbyTesting.unitTests.harness.UnitTestMain.main(UnitTestMain.java:50)

-- 

Too long text in a line ... (Re: [jira] Updated: (DERBY-496) unit test 'org.apache.derbyTesting.unitTests.services.T_Diagnosticable' was failed)

2005-08-16 Thread TomohitoNakayama

Hello.

Uh-oh ... the text in some of those lines is too long.

Please scroll to the right ...

Best regards.


/*

Tomohito Nakayama
[EMAIL PROTECTED]
[EMAIL PROTECTED]
[EMAIL PROTECTED]

Naka
http://www5.ocn.ne.jp/~tomohito/TopPage.html

*/
- Original Message - 
From: Tomohito Nakayama (JIRA) derby-dev@db.apache.org

To: derby-dev@db.apache.org
Sent: Tuesday, August 16, 2005 7:22 PM
Subject: [jira] Updated: (DERBY-496) unit test 
'org.apache.derbyTesting.unitTests.services.T_Diagnosticable' was failed




[jira] Resolved: (DERBY-508) Wrong classname in javadoc for ClientDataSource

2005-08-16 Thread Bernt M. Johnsen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-508?page=all ]
 
Bernt M. Johnsen resolved DERBY-508:


Fix Version: 10.2.0.0
 Resolution: Fixed

 Wrong classname in javadoc for ClientDataSource
 ---

  Key: DERBY-508
  URL: http://issues.apache.org/jira/browse/DERBY-508
  Project: Derby
 Type: Bug
   Components: Documentation
 Versions: 10.2.0.0
 Reporter: Knut Anders Hatlen
 Priority: Trivial
  Fix For: 10.2.0.0
  Attachments: DERBY-508.ClientDataSource-javadoc.diff

 The javadoc for org.apache.derby.jdbc.ClientDataSource says:
 The class ClientDataSource can be used in a connection pooling environment, 
 and the class ClientXADataSource can be used in a distributed, and pooling 
 environment.
 The correct phrase should be: The class ClientConnectionPoolDataSource can 
 be used in a connection pooling environment 




[jira] Updated: (DERBY-463) Successive writes to a java.sql.Blob.setBinaryStream(long) seem to reset the file pointer

2005-08-16 Thread Fernanda Pizzorno (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-463?page=all ]

Fernanda Pizzorno updated DERBY-463:


Attachment: DERBY-463.diff

Changed the method write(byte[], int, int) in 
java/client/org/apache/derby/client/am/BlobOutputStream.java; offset_ was not 
being incremented.
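A minimal model of the bug and the fix (hypothetical and heavily simplified: the real BlobOutputStream writes into the client-side blob representation; only the offset_ bookkeeping is the point here):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical simplified model of the client BlobOutputStream bug:
// write(byte[], int, int) copied the bytes but never advanced the stream's
// internal offset, so each successive call overwrote the previous chunk.
class SketchBlobOutputStream extends OutputStream {
    private final ByteArrayOutputStream blob = new ByteArrayOutputStream();
    private long offset_;

    SketchBlobOutputStream(long pos) {
        offset_ = pos;
    }

    @Override
    public void write(int b) throws IOException {
        write(new byte[] { (byte) b }, 0, 1);
    }

    @Override
    public void write(byte[] b, int off, int len) {
        blob.write(b, off, len);
        offset_ += len; // the fix: advance the position after each chunk
    }

    long position() { return offset_; }

    byte[] content() { return blob.toByteArray(); }
}
```

With the increment in place, two consecutive 1K writes land at positions 1 and 1025 instead of both landing at position 1, which matches the symptom in the report (only the last chunk survived).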

 Successive writes to a java.sql.Blob.setBinaryStream(long) seem to reset the 
 file pointer
 -

  Key: DERBY-463
  URL: http://issues.apache.org/jira/browse/DERBY-463
  Project: Derby
 Type: Bug
   Components: JDBC
 Versions: 10.0.2.1
  Environment: Sun java full version 1.4.2_05-b04
 Linux x86
 Derby is run in network server mode
 Reporter: Laurenz Albe
 Assignee: Fernanda Pizzorno
  Attachments: DERBY-463.diff

 I have a table
 PEOPLE(SEQ_ID INT NOT NULL PRIMARY KEY, PICTURE BLOB).
 A row is inserted; both values are not NULL.
 From inside a JDBC program, I select the Blob for update.
 I then get the Blob output stream with a call to
   Blob.setBinaryStream(long)
 To this stream I write several times with
   OutputStream.write(byte[], int, int)
 I close the stream, update the selected row with the new Blob and commit.
 The new value of the Blob now is exactly the value of the last content of the 
 byte[], and it is as if the previous calls to write() had never taken place, 
 or as if the file pointer of the output stream had been reset between the 
 calls.
 A sample program follows; the size of the input file picture.jpg is 23237, 
 and the length of the Blob after the program has run is 23237 % 1024 = 709.
 ---- sample program ----
 import java.sql.*;

 class TestApp {
     private TestApp() {}

     public static void main(String[] args)
             throws ClassNotFoundException, SQLException, java.io.IOException {
         // try to load JDBC driver
         Class.forName("com.ibm.db2.jcc.DB2Driver");
         // open the input file
         java.io.InputStream instream = new java.io.FileInputStream("picture.jpg");
         // login to database
         Connection conn = DriverManager.getConnection(
                 "jdbc:derby:net://dbtuxe/testdb", "laurenz", "apassword");
         conn.setAutoCommit(false);
         // select Blob for update
         PreparedStatement stmt = conn.prepareStatement(
                 "SELECT PICTURE FROM PEOPLE WHERE SEQ_ID=? FOR UPDATE OF PICTURE");
         stmt.setInt(1, 1);
         ResultSet rs = stmt.executeQuery();
         // get Blob output stream
         rs.next();
         Blob blob = rs.getBlob(1);
         java.io.OutputStream outstream = blob.setBinaryStream(1L);
         // copy the input file to the Blob in chunks of 1K
         byte[] buf = new byte[1024];
         int count;
         while (-1 != (count = instream.read(buf))) {
             outstream.write(buf, 0, count);
             System.out.println("Written " + count + " bytes to Blob");
         }
         // close streams
         instream.close();
         outstream.close();
         // update Blob with new value
         String cursor = rs.getCursorName();
         PreparedStatement stmt2 = conn.prepareStatement(
                 "UPDATE PEOPLE SET PICTURE=? WHERE CURRENT OF " + cursor);
         stmt2.setBlob(1, blob);
         stmt2.executeUpdate();
         // clean up
         stmt2.close();
         stmt.close();
         conn.commit();
         conn.close();
     }
 }




[jira] Created: (DERBY-509) DERBY-132 resolved ? Table not automatically compressed

2005-08-16 Thread Volker Edelmann (JIRA)
DERBY-132 resolved ? Table not automatically compressed 


 Key: DERBY-509
 URL: http://issues.apache.org/jira/browse/DERBY-509
 Project: Derby
Type: Bug
Versions: 10.1.1.0
 Environment: JDK 1.4.2, JDK 1.5.0,
Windows XP

Reporter: Volker Edelmann


I tried a test program that repeatedly inserts a bunch of data into one table 
and repeatedly deletes a bunch of data:

derby.executeSelect("select count(*) c from rclvalues");

TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 200);
// insert 2.000.000 rows
derby.executeDelete("delete from rclvalues where MOD(id, 3) = 0");

TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 100);
derby.executeDelete("delete from rclvalues where MOD(id, 5) = 0");

derby.executeSelect("select count(*) c from rclvalues");

At the end of the operation, the table contains approximately the same number 
of rows, but the size of the database has grown from 581 MB to 1.22 GB. From 
the description of item DERBY-132, I had hoped that Derby does the compression 
now (version 10.1.x). Did I overlook that I still have to use 
SYSCS_UTIL.SYSCS_COMPRESS_TABLE?
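For the record, the manual reclaim the reporter asks about would look like this in an ij session (the schema name APP is an assumption; the table name is taken from the report):

```sql
-- Hypothetical ij session; 'APP' is the assumed default schema.
-- The third argument (1) requests a sequential, lower-memory compress.
CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'RCLVALUES', 1);
```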




[jira] Updated: (DERBY-463) Successive writes to a java.sql.Blob.setBinaryStream(long) seem to reset the file pointer

2005-08-16 Thread Fernanda Pizzorno (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-463?page=all ]

Fernanda Pizzorno updated DERBY-463:


Attachment: DERBY-463.diff

 Successive writes to a java.sql.Blob.setBinaryStream(long) seem to reset the 
 file pointer
 -

  Key: DERBY-463
  URL: http://issues.apache.org/jira/browse/DERBY-463
  Project: Derby
 Type: Bug
   Components: JDBC
 Versions: 10.0.2.1
  Environment: Sun java full version 1.4.2_05-b04
 Linux x86
 Derby is run in network server mode
 Reporter: Laurenz Albe
 Assignee: Fernanda Pizzorno
  Attachments: DERBY-463.diff





Re: [jira] Updated: (DERBY-463) Successive writes to a java.sql.Blob.setBinaryStream(long) seem to reset the file pointer

2005-08-16 Thread Oyvind . Bakksjo

Fernanda Pizzorno (JIRA) wrote:

 [ http://issues.apache.org/jira/browse/DERBY-463?page=all ]

Fernanda Pizzorno updated DERBY-463:


Attachment: DERBY-463.diff

Changed the method write(byte[], int, int) in 
java/client/org/apache/derby/client/am/BlobOutputStream.java; offset_ was not 
being incremented.


I'll review your patch.

--
Øyvind Bakksjø
Sun Microsystems, Database Technology Group
Haakon VII gt. 7b, N-7485 Trondheim, Norway
Tel: x43419 / +47 73842119, Fax: +47 73842101


[jira] Created: (DERBY-510) DERBY-132 resolved ? Table not automatically compressed

2005-08-16 Thread Volker Edelmann (JIRA)
DERBY-132 resolved ? Table not automatically compressed 


 Key: DERBY-510
 URL: http://issues.apache.org/jira/browse/DERBY-510
 Project: Derby
Type: Bug
Versions: 10.1.1.0
 Environment: JDK 1.4.2, JDK 1.5.0
Windows XP
Reporter: Volker Edelmann


I tried a test program that repeatedly inserts a bunch of data into one table 
and repeatedly deletes a bunch of data:

// table is not empty when the test program starts
derby.executeSelect("select count(*) c from rclvalues");

TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 200);
// insert 2.000.000 rows
derby.executeDelete("delete from rclvalues where MOD(id, 3) = 0");

TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 100);
derby.executeDelete("delete from rclvalues where MOD(id, 5) = 0");

derby.executeSelect("select count(*) c from rclvalues");

At the end of the operation, the table contains approximately the same number 
of rows, but the size of the database has grown from 581 MB to 1.22 GB. From 
the description of item DERBY-132, I had hoped that Derby does the compression 
now (version 10.1.x).





[jira] Updated: (DERBY-463) Successive writes to a java.sql.Blob.setBinaryStream(long) seem to reset the file pointer

2005-08-16 Thread Fernanda Pizzorno (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-463?page=all ]

Fernanda Pizzorno updated DERBY-463:


Attachment: DERBY-463.stat

 Successive writes to a java.sql.Blob.setBinaryStream(long) seem to reset the 
 file pointer
 -

  Key: DERBY-463
  URL: http://issues.apache.org/jira/browse/DERBY-463
  Project: Derby
 Type: Bug
   Components: JDBC
 Versions: 10.0.2.1
  Environment: Sun java full version 1.4.2_05-b04
 Linux x86
 Derby is run in network server mode
 Reporter: Laurenz Albe
 Assignee: Fernanda Pizzorno
  Attachments: DERBY-463.diff, DERBY-463.stat





[jira] Commented: (DERBY-477) JDBC client and embedded drivers differ wrt API subset supported

2005-08-16 Thread Fernanda Pizzorno (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-477?page=comments#action_1231 ] 

Fernanda Pizzorno commented on DERBY-477:
-

ResultSet#relative(int rows) behaves differently in embedded and client/server 
mode when positioning before the first row or after the last row.

The embedded driver shows the behaviour described in the JDBC 3.0 
specification: it returns false and positions the cursor either before the 
first or after the last row. The client/server driver, however, returns true, 
increments/decrements the current row by "rows", and does not set the position 
to either before-first or after-last.

I have run a test with a result set with 10 rows, where I positioned on row 5 
and moved +20 and -20 using relative(int rows). With the embedded driver the 
method returned false, the current row was set to 0, and isAfterLast() and 
isBeforeFirst() returned true (for +20 and -20 respectively). With the 
client/server driver the method returned true, the current row was set to 25 
and -15 (for +20 and -20 respectively), and isAfterLast() and isBeforeFirst() 
returned false.
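The spec-compliant behaviour can be sketched with a tiny stand-alone cursor model (hypothetical; the real test of course runs against a live ResultSet, which this sketch does not attempt to reproduce):

```java
// Hypothetical model of the JDBC 3.0 contract for ResultSet.relative(int):
// moving past either end must leave the cursor before-first/after-last and
// return false, instead of keeping an out-of-range row number like 25 or -15.
final class CursorModel {
    private final int rowCount;
    private int row; // 1-based current row; 0 means before-first/after-last

    CursorModel(int rowCount, int startRow) {
        this.rowCount = rowCount;
        this.row = startRow;
    }

    boolean relative(int rows) {
        int target = row + rows;
        if (target < 1 || target > rowCount) {
            row = 0;      // cursor parks outside the result set
            return false; // as the embedded driver does
        }
        row = target;
        return true;
    }

    int currentRow() { return row; }
}
```

With 10 rows and the cursor on row 5, relative(+20) and relative(-20) both return false and leave the cursor outside the result set, matching the embedded driver's behaviour described above.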

 JDBC client and embedded drivers differ wrt API subset supported
 

  Key: DERBY-477
  URL: http://issues.apache.org/jira/browse/DERBY-477
  Project: Derby
 Type: Improvement
   Components: JDBC, Network Client, Network Server
 Versions: 10.1.1.0, 10.1.1.1, 10.2.0.0
 Reporter: Dag H. Wanvik
  Attachments: api-diffs.txt

 After having noticed some differences (mail url below) in Clob and
 Blob support, I did a walkthrough of both drivers of things marked as
 not implemented, not supported, etc., to see how they align.
 [http://mail-archives.apache.org/mod_mbox/db-derby-dev/200507.mbox/[EMAIL 
 PROTECTED],
 Below is a summary of my findings. I attach a file api-diffs.txt
 which shows all not implemented methods in the interfaces I looked
 at, also for those cases where the drivers are in agreement.
 Caveat 1: This is all based on visual inspection of the code only.
 Caveat 2: I mostly looked at the top-level API implementation code, so
 there may be cases where the cut-off occurs at a lower level in the
 code which I missed.
 Caveat 3: The client uses lots of different strings to signal that a
 method is not implemented; e.g. "Driver not capable", "Driver not
 capable: X", "Driver no capable" (sic), "not supported by server",
 "under construction", "JDBC 3 method called - not yet supported",
 "jdbc 2 method not yet implemented", "X is not supported". I may have
 missed some...
 On each line I list the method with signature, and status for the
 embedded driver and the client driver, in that order. I use NI to
 mark "not implemented", and a + to mark (apparently) implemented.
 The next step should be to check the tests and the documentation, as
 soon as we agree on how the harmonization should be carried out.
   Embedded Client
 ResultSet#  
 getUnicodeStream(int columnIndex)   NI   +
 getUnicodeStream(String columnName) NI   +
 updateBlob(int columnIndex, Blob x) +   NI
 updateBlob(String columnName, Blob x)   +   NI
 updateClob(int columnIndex, Clob x) +   NI
 updateClob(String columnName, Clob x)   +   NI
 CallableStatement#   
 getBlob(int i)  NI   +   
 getBytes(int parameterIndex)NI   + 
 getClob(int i)  NI   + 
 registerOutParameter(int paramIndex, int sqlType,
  String typeName)   NI   +²
 ² bug: does nothing!
 Should be filed as separate bug?
 Blob#
 setBinaryStream(long pos)   NI   +¹  
 setBytes(long pos, byte[] bytes)NI   +   
 setBytes(long pos, byte[] bytes, int offset, int len)   NI   +   
 truncate(long len)  NI   +   
 ¹bug reported by Laurentz Albe
 http://mail-archives.apache.org/mod_mbox/db-derby-dev/200507.mbox/[EMAIL 
 PROTECTED]
 Should be filed as separate bug?
 Clob#
 setAsciiStream(long pos)NI   +  
 setCharacterStream(long pos)NI   +  
 setString(long pos, String str) NI   +  
 setString(long pos, String str, int offset, int len)NI   +  
 truncate(long len)  NI   +  
 PreparedStatement#
 setNull(int paramIndex, int sqlType, String typeName)   NI   +³
 setUnicodeStream(int parameterIndex, InputStream x, int length) NI   +
 ³bug: ignores typename
 Should be 
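A script that wants to tally these discrepancies mechanically has to match all of the message variants listed in Caveat 3. Below is a minimal, hypothetical helper for doing that; the class name is invented and the string list simply mirrors the variants quoted above, so it is illustrative rather than exhaustive.

```java
// Hypothetical helper: classifies an SQLException message as one of the
// "not implemented" variants cataloged in Caveat 3. Illustrative only;
// this is not part of Derby.
public class NotImplementedProbe {
    private static final String[] MARKERS = {
        "Driver not capable",
        "Driver no capable",   // sic, as observed in the client
        "not supported by server",
        "under construction",
        "JDBC 3 method called - not yet supported",
        "jdbc 2 method not yet implemented",
        "is not supported"
    };

    public static boolean looksNotImplemented(String message) {
        if (message == null) return false;
        for (String marker : MARKERS) {
            if (message.contains(marker)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(looksNotImplemented("Driver not capable: setBytes"));
    }
}
```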

Re: bug-check

2005-08-16 Thread Rick Hillegas
If we do revisit the common code problem, I'd like to throw something 
else onto the pile of common code: the DRDA constants. Methinks the 
network client and server should share these constants rather than clone 
them.


Cheers,
-Rick

David Van Couvering wrote:

Thanks, Satheesh.  Moving the engine assert mechanism over would 
involve either more cutting and pasting or revisiting the common jar 
file problem.  Personally, if we do any assert support in the client, 
I would like to just use JDK 1.4 assertions (and have it be a no-op 
for JDK 1.3 builds).
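A minimal sketch of the JDK 1.4 approach David describes: rely on the built-in `assert` keyword, which is a no-op unless the JVM runs with `-ea`. The class and method names here are hypothetical, not Derby's actual sanity-check API.

```java
// Sketch: JDK 1.4 assertions wrapped in a helper, so call sites stay
// cheap when assertions are disabled. Names are illustrative only.
public class ClientAssert {
    public static boolean check(boolean condition, String message) {
        assert condition : message;   // no-op unless run with "java -ea"
        return condition;             // returned so callers can also branch
    }

    public static void main(String[] args) {
        check(args != null, "args must never be null");
        System.out.println("assertions "
            + (ClientAssert.class.desiredAssertionStatus() ? "enabled" : "disabled"));
    }
}
```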


At any rate, to keep things contained, I am going to just continue 
using these error messages as written now, and we can address the 
issue around using asserts in the client as a separate JIRA item.


David

Satheesh Bandaram wrote:


I think some of them were inserted during development to support some
kind of assertions. We could change them now as appropriate. Should we
consider using engine's ASSERT() mechanism in the client too?

Satheesh

David Van Couvering wrote:

 


I meant suffix not suffice

David Van Couvering wrote:

  


Hi, all.  I am noticing that messages for exceptions thrown in
org.apache.derby.jdbc.ClientBaseDataSource often have the suffice
"bug check: ", for example "bug check: corresponding property field
does not exist".

Does anyone have any history on this and why this is there?  Is this
correct, or should I be fixing something as I extract these messages
into a properties file?

Thanks,

David
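Extracting hard-coded strings like these into a properties file usually means looking each message up by key at the throw site. A hedged sketch under assumed names (the key `noPropertyField` and the loading mechanism are illustrative; Derby's real message keys live in its own resource bundles):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Sketch of externalizing client error messages into a properties file.
// Key names and loading from an inline string are illustrative only.
public class MessageLookup {
    private static final Properties MESSAGES = new Properties();
    static {
        try {
            MESSAGES.load(new StringReader(
                "noPropertyField=bug check: corresponding property field does not exist\n"));
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static String get(String key) {
        // fall back to a marker string rather than null for unknown keys
        return MESSAGES.getProperty(key, "unknown message: " + key);
    }

    public static void main(String[] args) {
        System.out.println(get("noPropertyField"));
    }
}
```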



  



 





[jira] Commented: (DERBY-463) Successive writes to a java.sql.Blob.setBinaryStream(long) seem to reset the file pointer

2005-08-16 Thread Laurenz Albe (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-463?page=comments#action_12318896 ] 

Laurenz Albe commented on DERBY-463:


While you are at it:

In my program there is a line of code:
  java.io.OutputStream outstream = blob.setBinaryStream(1l);

My original attempt was with blob.setBinaryStream(0l), as it should be in my
opinion, but then the first byte written to the Blob does not get written,
i.e. the resulting Blob is one byte short. By trial and error I found that it
works when I use 1 instead of 0.

Is that also a bug or is that intended?

 Successive writes to a java.sql.Blob.setBinaryStream(long) seem to reset the 
 file pointer
 -

  Key: DERBY-463
  URL: http://issues.apache.org/jira/browse/DERBY-463
  Project: Derby
 Type: Bug
   Components: JDBC
 Versions: 10.0.2.1
  Environment: Sun java full version 1.4.2_05-b04
 Linux x86
 Derby is run in network server mode
 Reporter: Laurenz Albe
 Assignee: Fernanda Pizzorno
  Attachments: DERBY-463.diff, DERBY-463.stat

 I have a table
 PEOPLE(SEQ_ID INT NOT NULL PRIMARY KEY, PICTURE BLOB).
 A row is inserted; both values are not NULL.
 From inside a JDBC program, I select the Blob for update.
 I then get the Blob output stream with a call to
   Blob.setBinaryStream(long)
 To this stream I write several times with
   OutputStream.write(byte[], int, int)
 I close the stream, update the selected row with the new Blob and commit.
 The new value of the Blob now is exactly the value of the last content of the 
 byte[],
 and it is like the previous calls to write() have never taken place, or as if 
 the file pointer
 of the output stream has been reset between the calls.
 A sample program follows; the size of the input file picture.jpg is 23237, 
 the length
 of the Blob after the program has run is 23237 % 1024 = 709
  ----- sample program -----
 import java.sql.*;
 class TestApp {
    private TestApp() {}
    public static void main(String[] args)
      throws ClassNotFoundException, SQLException, java.io.IOException {
       // try to load JDBC driver
       Class.forName("com.ibm.db2.jcc.DB2Driver");
       // open the input file
       java.io.InputStream instream = new
         java.io.FileInputStream("picture.jpg");
       // login to database
       Connection conn = DriverManager.getConnection(
         "jdbc:derby:net://dbtuxe/testdb", "laurenz", "apassword");
       conn.setAutoCommit(false);
       // select Blob for update
       PreparedStatement stmt = conn.prepareStatement(
         "SELECT PICTURE FROM PEOPLE WHERE SEQ_ID=? FOR UPDATE OF PICTURE");
       stmt.setInt(1, 1);
       ResultSet rs = stmt.executeQuery();
       // get Blob output stream
       rs.next();
       Blob blob = rs.getBlob(1);
       java.io.OutputStream outstream = blob.setBinaryStream(1l);
       // copy the input file to the Blob in chunks of 1K
       byte[] buf = new byte[1024];
       int count;
       while (-1 != (count = instream.read(buf))) {
          outstream.write(buf, 0, count);
          System.out.println("Written " + count + " bytes to Blob");
       }
       // close streams
       instream.close();
       outstream.close();
       // update Blob with new value
       String cursor = rs.getCursorName();
       PreparedStatement stmt2 = conn.prepareStatement(
         "UPDATE PEOPLE SET PICTURE=? WHERE CURRENT OF " + cursor);
       stmt2.setBlob(1, blob);
       stmt2.executeUpdate();
       // clean up
       stmt2.close();
       stmt.close();
       conn.commit();
       conn.close();
    }
 }

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Created: (DERBY-511) Test of shutting down database when active connections alive.

2005-08-16 Thread Tomohito Nakayama (JIRA)
Test of shutting down database when active connections alive.
-

 Key: DERBY-511
 URL: http://issues.apache.org/jira/browse/DERBY-511
 Project: Derby
Type: Test
  Components: Test  
Reporter: Tomohito Nakayama


Confirm that no trouble happens when a database is shut down while active 
connections to it still exist.

This test is made to compensate for the ignored error message found in 
DERBY-273.




Re: [jira] Updated: (DERBY-496) unit test 'org.apache.derbyTesting.unitTests.services.T_Diagnosticable' was failed

2005-08-16 Thread Andrew McIntyre
On Aug 16, 2005, at 3:22 AM, Tomohito Nakayama (JIRA) wrote:

 [ http://issues.apache.org/jira/browse/DERBY-496?page=all ]

 Tomohito Nakayama updated DERBY-496:

 Attachment: DERBY-496.patch

Hi Tomohito,

There is an easier way to accomplish this. Instead of using Ant to add a line to derby.list, you could add a line to tools/jar/extraDBMSclasses.properties:

Index: extraDBMSclasses.properties
===================================================================
--- extraDBMSclasses.properties (revision 232331)
+++ extraDBMSclasses.properties (working copy)
@@ -36,7 +36,9 @@
 derby.module.database.consistency.checker=org.apache.derby.iapi.db.ConsistencyChecker
 derby.module.database.optimizer.trace=org.apache.derby.iapi.db.OptimizerTrace
+derby.module.diag.diagnosticablegeneric=org.apache.derby.iapi.services.diag.DiagnosticableGeneric
+
 derby.module.classFactory.signedJar.jdk12=org.apache.derby.impl.services.reflect.JarFileJava2
 derby.module.internalUtil.classsizecatalogimpl=org.apache.derby.iapi.services.cache.ClassSizeCatalog

extraDBMSclasses.properties is a list of classes to include in derby.jar which are not put into the list of classes generated by the dependency checker (org.apache.derbyBuild.classlister) from the list of modules in modules.properties.

andrew

[jira] Commented: (DERBY-463) Successive writes to a java.sql.Blob.setBinaryStream(long) seem to reset the file pointer

2005-08-16 Thread Fernanda Pizzorno (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-463?page=comments#action_12318899 ] 

Fernanda Pizzorno commented on DERBY-463:
-

It also surprised me that it would start form 1, but it is intended to be so 
that the first byte is 1 and not 0.
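This matches the java.sql.Blob contract, where positions are 1-based: the first byte of the BLOB is at position 1, so `setBinaryStream(1)` writes from the start. The following is a tiny in-memory model of that convention only, not the driver's implementation; the class name is invented.

```java
// In-memory model of JDBC's 1-based Blob positions: position 1 addresses
// the first byte, as java.sql.Blob#setBytes/#setBinaryStream specify.
// Illustration of the convention only, not Derby's implementation.
public class BlobPositionModel {
    private byte[] data = new byte[0];

    // pos is 1-based, as in java.sql.Blob#setBytes
    public int setBytes(long pos, byte[] bytes) {
        if (pos < 1) throw new IllegalArgumentException("pos must be >= 1");
        int start = (int) (pos - 1);   // convert to a 0-based array index
        byte[] grown = new byte[Math.max(data.length, start + bytes.length)];
        System.arraycopy(data, 0, grown, 0, data.length);
        System.arraycopy(bytes, 0, grown, start, bytes.length);
        data = grown;
        return bytes.length;
    }

    public long length() { return data.length; }

    public static void main(String[] args) {
        BlobPositionModel blob = new BlobPositionModel();
        blob.setBytes(1, new byte[] {10, 20, 30});  // writes bytes 1..3
        System.out.println(blob.length());          // no byte is lost
    }
}
```

Passing 0 here fails fast, which mirrors why Laurenz's `setBinaryStream(0l)` attempt dropped the first byte.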

 Successive writes to a java.sql.Blob.setBinaryStream(long) seem to reset the 
 file pointer
 -

  Key: DERBY-463
  URL: http://issues.apache.org/jira/browse/DERBY-463
  Project: Derby
 Type: Bug
   Components: JDBC
 Versions: 10.0.2.1
  Environment: Sun java full version 1.4.2_05-b04
 Linux x86
 Derby is run in network server mode
 Reporter: Laurenz Albe
 Assignee: Fernanda Pizzorno
  Attachments: DERBY-463.diff, DERBY-463.stat





Patch tool acts strange

2005-08-16 Thread Craig Russell
Hi Philip,

Philip Wilder (JIRA) wrote:

 - The SVN patch tool seems to act very strangely for updatableResultSet.out, deleting then adding lines that were identical. I cannot account for this behavior.

In my experience, this is due to a white space change. For example, adding or removing a blank, or replacing a tab with blanks. There is a change, just not easily viewed with the naked eye.

Many "diff" editors will allow you to "enable white space diff" so you can see the changes.

Craig

Craig Russell
Architect, Sun Java Enterprise System
http://java.sun.com/products/jdo
408 276-5638 mailto:[EMAIL PROTECTED]
P.S. A good JDO? O, Gasp!


Re: Patch tool acts strange

2005-08-16 Thread Thomas Lecavelier
Hi,

I met also this kind of problem because some of my co-workers used their
IDE in 'windows end line' markers where I was using unix end line markers.

If this could help you...

-- Tom

Craig Russell a écrit :
 
 In my experience, this is due to a white space change. For example,
 adding or removing a blank, or replacing a tab with blanks. There is a
 change, just not easily viewed with the naked eye.
 



signature.asc
Description: OpenPGP digital signature


Re: Patch tool acts strange

2005-08-16 Thread Philip Wilder

Thomas Lecavelier wrote:


Hi,

I met also this kind of problem because some of my co-workers used their
IDE in 'windows end line' markers where I was using unix end line markers.

If this could help you...

-- Tom

Craig Russell a écrit :
 


In my experience, this is due to a white space change. For example,
adding or removing a blank, or replacing a tab with blanks. There is a
change, just not easily viewed with the naked eye.

   



 

Ah, I think this explanation has merit. In the case of my patch it was 
not just a few lines that were replaced but every line :-S


With this in mind it would probably be prudent to run the derbynetclient 
suite (or at least the lang/updatableResultSet.java test) in a Linux 
environment before this patch gets released.


Philip


Re: [jira] Created: (DERBY-510) DERBY-132 resolved ? Table not automatically compressed

2005-08-16 Thread Mike Matrigali
Full compression of derby tables is not done automatically, I
am looking for input on how to schedule such an operation.  An
operation like this is going to have a large cpu, i/o, and
possible temporary disk space impact on the rest of the server.
As a zero admin db I think we should figure out some way to
do this automatically, but I think there are a number of
applications which would not be happy with such a performance
impact not under their control.

My initial thoughts are to pick a default time frame, say
once every 30 days to check for table level events like
compression and statistics generation and then execute the operations
at low priority.  Also add some sort of parameter so that
applications could disable the automatic background jobs.

Note that derby does automatically reclaim space from deletes
for subsequent inserts, but the granularity currently is at
a page level.  So deleting every 3rd or 5th row is the worst
case behavior.  The page level decision was a tradeoff: reclaiming the
space is time consuming, so we did not want to schedule the work on a
row-by-row basis.  Currently we schedule the work when all the rows on
a page are marked deleted.
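The worst-case claim above can be made concrete with a small simulation of the page-level rule: a page's space is reclaimed only when every row on it is deleted. Rows-per-page is an assumed constant here for illustration; real Derby pages hold a variable number of rows.

```java
// Simulation of page-level space reclamation as described above: a page
// is reclaimed only when ALL of its rows are deleted. The fixed
// rows-per-page value is an illustrative assumption.
public class PageReclaimModel {
    public static int reclaimablePages(int totalRows, int rowsPerPage, int deleteEveryNth) {
        int pages = (totalRows + rowsPerPage - 1) / rowsPerPage;
        int reclaimed = 0;
        for (int p = 0; p < pages; p++) {
            boolean allDeleted = true;
            int last = Math.min((p + 1) * rowsPerPage, totalRows);
            for (int r = p * rowsPerPage; r < last; r++) {
                // rows are numbered from 1; every Nth row is deleted
                if ((r + 1) % deleteEveryNth != 0) { allDeleted = false; break; }
            }
            if (allDeleted) reclaimed++;
        }
        return reclaimed;
    }

    public static void main(String[] args) {
        // Deleting every 3rd of 10,000 rows (100 rows/page) frees no pages,
        // even though a third of the table is dead space.
        System.out.println(reclaimablePages(10000, 100, 3));
    }
}
```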

Volker Edelmann (JIRA) wrote:

 DERBY-132 resolved ? Table not automatically compressed 
 
 
  Key: DERBY-510
  URL: http://issues.apache.org/jira/browse/DERBY-510
  Project: Derby
 Type: Bug
 Versions: 10.1.1.0
  Environment: JDK 1.4.2, JDK 1.5.0
 Windows XP
 Reporter: Volker Edelmann
 
 
  I tried a test-program that repeatedly inserts a bunch of data into 1 table 
 and repeatedly deletes a bunch of data.
 
 // table is not empty when test-program starts
 derby.executeSelect("select count(*) c from rclvalues");

 TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 200); // insert 2.000.000 rows
 derby.executeDelete("delete from rclvalues where MOD(id, 3) = 0");
 TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 100);
 derby.executeDelete("delete from rclvalues where MOD(id, 5) = 0");

 derby.executeSelect("select count(*) c from rclvalues");
 
 At the end of the operation, the table contains approximately the same number 
 of rows, but the size of the database has grown from 581 MB to 1.22 GB.
 From the description of item DERBY-132, I hoped that Derby does the
 compression now (version 10.1.X.X).
 
 


Re: [VOTE] Policy for supported platforms

2005-08-16 Thread Rick Hillegas

Hi Andrew,

I've made the following changes to STATUS, which I'm submitting in the 
attached diff file:


1) Reworded the preamble to indicate that Derby is a full-fledged Apache 
DB project.


2) Listed 10.1.1.0 as released.

3) Noted our graduation.

4) Recorded the platform support vote.

I didn't know what you meant about listing the new committers (Bernt, 
Oyvind, and David); they are already at the end of the committer list.

By the way, does anyone know what the "type" column in the "Detailed 
references" table means? For the original committers, this column holds 
some kind of login name or abbreviation which, at least in some cases, 
isn't the individual's JIRA id. Currently, this column is blank for the 
new committers.

Thanks in advance to some industrious committer for checking in these 
changes,

-Rick

Andrew McIntyre wrote:



On Aug 15, 2005, at 12:16 PM, Rick Hillegas wrote:

For the record, this proposal passed with four +1 votes and no  other 
votes:


 Lance Andersen
 Rick Hillegas
 Shreyas Kaushik
 Francois Orsini

Does someone (maybe me?) need to inscribe/enshrine/link/post this  fact?



The mailing list archives will preserve the decision for posterity.

This is also just the sort of thing that is good to have in the  
STATUS file at the top of the tree. If anyone has some time to update  
it, it would be good to record our recent graduation from the  
Incubator, addition of new committers (just the votes are currently  
recorded there), the recent release of 10.1, and recent voting  
activity such as this into STATUS.


I won't have time to update it till later this week, maybe not till  
next week, but if some industrious Derby developer takes the time to  
produce a patch to update the STATUS file, I'm sure one of the  
committers will give it their immediate attention. (hint, hint)


andrew



Index: STATUS
===
--- STATUS  (revision 233011)
+++ STATUS  (working copy)
@@ -3,30 +3,24 @@
 
 Web site: http://incubator.apache.org/derby/
 
-Incubator Status
+Project Status
 
   Description
 
-  Derby is a snapshot of the IBM's Cloudscape Java relational database. IBM 
is
-  opening the code by contributing it to The Apache Software Foundation and 
-  basing future versions of IBM Cloudscape on the Apache-managed code.
+  Derby began as a snapshot of the IBM's Cloudscape Java relational 
database. IBM
+  contributed the code to The Apache Software Foundation and 
+  bases future versions of IBM Cloudscape on the Apache-managed code.
 
-  To participate in the Derby podling, you should join the mailing list. Just 
-  send an empty message to [EMAIL PROTECTED] .
+  To participate in Derby, you should join the mailing list. Just 
+  send an empty message to [EMAIL PROTECTED]
 
-  The initial goal of the project while in incubation is to build a viable 
-  developer community around the codebase.
+  Derby graduated from incubation in July, 2005 and is now a
+  full-fledged Apache DB project.
 
-  The second goal of Derby-in-incubation is to successfully produce a release. 
-  Since Derby is in incubation, such a release would not have formal standing; 
-  it will serve as a proof-of-concept to demonstrate to the developers' and 
-  incubator's satisfaction that this aspect of the project is health and 
-  understood.
-
 Project info
 
-  * The Apache DB project will own the Derby subproject, and the subproject 
will
-follow the Apache DB PMC's direction. Significant contributors to this sub-
+  * The Apache DB project owns the Derby subproject, and the subproject
+follows the Apache DB PMC's direction. Significant contributors to this 
sub-
 project (for example, after a significant interval of sustained
 contribution) will be proposed for commit access to the codebase.
 
@@ -155,11 +149,12 @@
 Has the Incubator decided that the project has accomplished all of the above
 tasks?
 
+Graduated from Incubator in July 2005.
 
 Releases:
 
  * 10.0.2.1 : Released 12/06/2004
- * 10.1.0.0 : In development
+ * 10.1.1.0 : Released 8/3/2005
 
 PENDING ISSUES
 ==
@@ -235,7 +230,13 @@
   and support from Lance J Anderson, Suresh Thalamati, James Eacret, Dag H. 
Wanvik,
   Susan Cline, Olav Sandstaa, Manjula G Kutty
   
+[VOTE] to sunset support for jdk1.3 and to support jdbc4.0 in release 10.2.
+  See email thread: [VOTE] Policy for supported platforms. Vote
+  ended on August 12, 2005.
+  Passed with +1 support votes from Lance Andersen, Rick Hillegas,
+  Shreyas Kaushik, and Francois Orsini.
 
+
 OTHER NEWS
 ==
 
@@ -248,16 +249,4 @@
 RELEASE STATUS
 ==
 
-Derby PPMC voted to release 10.1 - waiting for permission from Incubator PMC to
-distribute the software.
-
-Any release must be approved by the Incubator PMC and must clearly be marked as
-an incubator release, according to the Apache Incubator guidelines:
-
-http://incubator.apache.org/incubation/Incubation_Policy.html#Releases%0A

Re: [jira] Created: (DERBY-505) Add system procedure to allow setting statement timeout

2005-08-16 Thread Mike Matrigali
I am wondering why this is necessary, since there is a way to do
this through jdbc - why add a different way to do this?  I assume
users could always create their own procedure if they needed it.
What is the circumstance in which you need this from SQL rather
than JDBC?

To me this just doesn't seem like the right use of the derby
provided system procedures.

We added the system utility system procedures as a last resort
for the things which had no sql standard, like backup and import.  Any
use of system procedure is non-standard and will cause issues for
database portability, so I think it is important to not add to them
if it is not necessary.

If there really is a need to do this from sql rather than jdbc
I would prefer in the following order:
1) let users create their own procedure using existing available syntax
2) do the setting as a property rather than a system procedure


Oyvind Bakksjo (JIRA) wrote:

 Add system procedure to allow setting statement timeout
 ---
 
  Key: DERBY-505
  URL: http://issues.apache.org/jira/browse/DERBY-505
  Project: Derby
 Type: New Feature
   Components: SQL  
 Versions: 10.1.1.0
 Reporter: Oyvind Bakksjo
  Assigned to: Oyvind Bakksjo 
 Priority: Minor
 
 
 Propose to add a system procedure:
 
   SYSCS_UTIL.SYSCS_SET_STATEMENT_TIMEOUT(INT)
 
 This procedure will enable the query timeout functionality not only through 
 JDBC, but also through SQL. I suggest the following semantics:
 
 The timeout value (in seconds) set with this procedure will apply to all 
 subsequent statements executed on the current connection (the same connection 
 on which the procedure was called), until a different value is set with the 
 same procedure. A value of 0 indicates no timeout. Supplying a negative value 
 will cause an exception. For each executed statement, the semantics are the 
 same as for using Statement.setQueryTimeout() through JDBC.
 


[jira] Assigned: (DERBY-504) SELECT DISTINCT returns duplicates when selecting from subselects

2005-08-16 Thread Knut Anders Hatlen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-504?page=all ]

Knut Anders Hatlen reassigned DERBY-504:


Assign To: Knut Anders Hatlen

 SELECT DISTINCT returns duplicates when selecting from subselects
 -

  Key: DERBY-504
  URL: http://issues.apache.org/jira/browse/DERBY-504
  Project: Derby
 Type: Bug
   Components: SQL
 Versions: 10.2.0.0
  Environment: Latest development sources (SVN revision 232227), Sun JDK 1.5, 
 Solaris/x86
 Reporter: Knut Anders Hatlen
 Assignee: Knut Anders Hatlen
 Priority: Minor


 When one performs a select distinct on a table generated by a subselect, 
 there sometimes are duplicates in the result. The following example shows the 
 problem:
 ij> CREATE TABLE names (id INT PRIMARY KEY, name VARCHAR(10));
 0 rows inserted/updated/deleted
 ij> INSERT INTO names (id, name) VALUES
    (1, 'Anna'), (2, 'Ben'), (3, 'Carl'),
    (4, 'Carl'), (5, 'Ben'), (6, 'Anna');
 6 rows inserted/updated/deleted
 ij> SELECT DISTINCT(name) FROM (SELECT name, id FROM names) AS n;
 NAME  
 --
 Anna  
 Ben   
 Carl  
 Carl  
 Ben   
 Anna  
 Six names are returned, although only three names should have been returned.
 When the result is explicitly sorted (using ORDER BY) or the id column is 
 removed from the subselect, the query returns three names as expected:
 ij> SELECT DISTINCT(name) FROM (SELECT name, id FROM names) AS n ORDER BY 
 name;
 NAME  
 --
 Anna  
 Ben   
 Carl  
 3 rows selected
 ij> SELECT DISTINCT(name) FROM (SELECT name FROM names) AS n;
 NAME  
 --
 Anna  
 Ben   
 Carl  
 3 rows selected




[jira] Closed: (DERBY-508) Wrong classname in javadoc for ClientDataSource

2005-08-16 Thread Knut Anders Hatlen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-508?page=all ]
 
Knut Anders Hatlen closed DERBY-508:


Assign To: Knut Anders Hatlen

 Wrong classname in javadoc for ClientDataSource
 ---

  Key: DERBY-508
  URL: http://issues.apache.org/jira/browse/DERBY-508
  Project: Derby
 Type: Bug
   Components: Documentation
 Versions: 10.2.0.0
 Reporter: Knut Anders Hatlen
 Assignee: Knut Anders Hatlen
 Priority: Trivial
  Fix For: 10.2.0.0
  Attachments: DERBY-508.ClientDataSource-javadoc.diff

 The javadoc for org.apache.derby.jdbc.ClientDataSource says:
 The class ClientDataSource can be used in a connection pooling environment, 
 and the class ClientXADataSource can be used in a distributed, and pooling 
 environment.
 The correct phrase should be: The class ClientConnectionPoolDataSource can 
 be used in a connection pooling environment 




Re: [jira] Created: (DERBY-505) Add system procedure to allow setting statement timeout

2005-08-16 Thread Satheesh Bandaram




I agree with these comments... In fact, I added similar comments to the
bug on 14th Aug. In case Jira didn't send that message out, here it is:

I would like to hear reasoning behind this new feature
request. I see following issues with the suggestion: 
  
1) System procedures and functions are used for admin and diagnostic
purposes typically. Since there is no standard for these, every
database vendor has their own way to perform admin and diagnostics.
However, this proposal seems to define application behavior based on
system procedure. 
  
2) I would like to know why JDBC's setQueryTimeout mechanism is not
sufficient... Not sure what the bug comment means by "query timeout
functionality not only through JDBC, but also through SQL". Derby
supports SQL only through JDBC currently. If the comment is referring to
IJ, that is also a JDBC application and could be programmed to support
query timeout using JDBC.
  


Satheesh

Mike Matrigali wrote:

  I am wondering why this is necessary, since there is a way to do
this through jdbc - why add a different way to do this?  I assume
users could always create their own procedure if they needed it.
What is the circumstance that you need this from SQL rather
than JDBC.

To me this just doesn't seem like the right use of the derby
provided system procedures.

We added the system utility system procedures as a last resort
for the things which had no sql standard, like backup and import.  Any
use of system procedure is non-standard and will cause issues for
database portability, so I think it is important to not add to them
if it is not necessary.

If there really is a need to do this from sql rather than jdbc
I would prefer in the following order:
1) let users create their own procedure using existing available syntax
2) do the setting as a property rather than a system procedure


Oyvind Bakksjo (JIRA) wrote:

  
  
Add system procedure to allow setting statement timeout
---

 Key: DERBY-505
 URL: http://issues.apache.org/jira/browse/DERBY-505
 Project: Derby
Type: New Feature
  Components: SQL  
Versions: 10.1.1.0
Reporter: Oyvind Bakksjo
 Assigned to: Oyvind Bakksjo 
Priority: Minor


Propose to add a system procedure:

  SYSCS_UTIL.SYSCS_SET_STATEMENT_TIMEOUT(INT)

This procedure will enable the query timeout functionality not only through JDBC, but also through SQL. I suggest the following semantics:

The timeout value (in seconds) set with this procedure will apply to all subsequent statements executed on the current connection (the same connection on which the procedure was called), until a different value is set with the same procedure. A value of 0 indicates no timeout. Supplying a negative value will cause an exception. For each executed statement, the semantics are the same as for using Statement.setQueryTimeout() through JDBC.


  
  


  






Re: Patch tool acts strange

2005-08-16 Thread Mamta Satoor
Hi,

I was wondering if this can be resolved by setting svn:eol-style on master/updatableResultSet.out. When I list the properties for this master, it doesn't show any properties:

$ svn proplist --verbose java/testing/org/apache/derbyTesting/functionTests/master/updatableResultSet.out
$

There is an updatableResultSet.out master specific to DerbyNet, and it has the appropriate property set on it:

$ svn proplist --verbose java/testing/org/apache/derbyTesting/functionTests/master/DerbyNet/updatableResultSet.out
Properties on 'java\testing\org\apache\derbyTesting\functionTests\master\DerbyNet\updatableResultSet.out':
 svn:eol-style : native

Mamta

On 8/16/05, Philip Wilder [EMAIL PROTECTED] wrote:
 Thomas Lecavelier wrote:
  Hi,
  I met also this kind of problem because some of my co-workers used their
  IDE in 'windows end line' markers where I was using unix end line markers.
  If this could help you...
  -- Tom
 Craig Russell a écrit :
  In my experience, this is due to a white space change. For example,
  adding or removing a blank, or replacing a tab with blanks. There is a
  change, just not easily viewed with the naked eye.
 Ah, I think this explanation has merit. In the case of my patch it was
 not just a few lines that were replaced but every line :-S
 With this in mind it would probably be prudent to run the derbynetclient
 suite (or at least the lang/updatableResultSet.java test) in a Linux
 environment before this patch gets released.
 Philip


Re: Patch tool acts strange

2005-08-16 Thread Jean T. Anderson

Mamta Satoor wrote:

Hi,
 
I was wondering if this can be resolved by setting svn:eol-style on 
master/updatableResultSet.out. When I list the properties for this 
master, it doesn't show any properties
$ svn proplist --verbose 
java/testing/org/apache/derbyTesting/functionTests/master/updatableResultSet.out

$


http://www.apache.org/dev/version-control.html links to a wonderful file 
to add to your ~/.subversion/config file that automatically sets 
svn:eol-style to native for many files with specific extensions:


http://www.apache.org/dev/svn-eol-style.txt

 -jean
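For reference, the linked svn-eol-style.txt works through Subversion's auto-props mechanism. A short sketch of the kind of entries that end up in ~/.subversion/config (the extensions shown are illustrative, not the full Apache list):

```
[miscellany]
enable-auto-props = yes

[auto-props]
*.java = svn:eol-style=native
*.out = svn:eol-style=native
*.properties = svn:eol-style=native
```

With this in place, newly added files matching those patterns get svn:eol-style=native automatically, avoiding the whole-file line-ending diffs discussed above.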


Re: [jira] Created: (DERBY-510) DERBY-132 resolved ? Table not automatically compressed

2005-08-16 Thread Rick Hillegas

Hi Mike,

I like your suggestions that a low priority thread should perform the 
compressions and that we should expose a knob for disabling this thread. 
Here are some further suggestions:


Compressing all the tables and recalculating all the statistics once a 
month could cause quite a hiccup for a large database. Maybe we could do 
something finer grained. For instance, we could try to make it easy to 
ask some question like "Is more than 20% of this table's space dead?" No 
doubt there are some tricky issues in maintaining a per-table dead-space 
counter and in keeping that counter from being a sync point during 
writes. However, if we could answer a question like that, then we could 
pay the compression/reoptimization penalty as we go rather than 
incurring a heavy, monthly lump-sum tax.


Cheers,
-Rick

Mike Matrigali wrote:


Full compression of derby tables is not done automatically, I
am looking for input on how to schedule such an operation.  An
operation like this is going to have a large cpu, i/o, and
possible temporary disk space impact on the rest of the server.
As a "zero admin" db I think we should figure out some way to
do this automatically, but I think there are a number of
applications which would not be happy with such a performance
impact not under their control.

My initial thoughts are to pick a default time frame, say
once every 30 days to check for table level events like
compression and statistics generation and then execute the operations
at low priority.  Also add some sort of parameter so that
applications could disable the automatic background jobs.

Note that derby does automatically reclaim space from deletes
for subsequent inserts, but the granularity currently is at
a page level.  So deleting every 3rd or 5th row is the worst
case behavior.  The page level decision was a tradeoff as
reclaiming the space is time consuming so did not want to
schedule to work on a row by row basis.  Currently we schedule
the work when all the rows on a page are marked deleted.
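A toy model of the page-level tradeoff described above (the 10-rows-per-page figure and the counts are invented for illustration; this is not Derby code):

```java
// A page's space is only reclaimed once ALL of its rows are deleted,
// so deleting every 3rd row reclaims nothing at page granularity.
public class PageReclaimSketch {
    // How many of `pages` pages (each holding `rowsPerPage` rows) become
    // completely empty when every `step`-th row is deleted.
    public static int emptyPages(int pages, int rowsPerPage, int step) {
        int empty = 0;
        for (int p = 0; p < pages; p++) {
            boolean allDeleted = true;
            for (int r = 0; r < rowsPerPage; r++) {
                int rowId = p * rowsPerPage + r + 1;
                if (rowId % step != 0) {   // this row survives the delete
                    allDeleted = false;
                    break;
                }
            }
            if (allDeleted) empty++;
        }
        return empty;
    }
}
```

With step 3, a third of the rows are gone but no page is ever completely emptied, so no page can be reclaimed; with step 1 every page empties and all the space comes back.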

Volker Edelmann (JIRA) wrote:

 

DERBY-132 resolved ? Table not automatically compressed 



Key: DERBY-510
URL: http://issues.apache.org/jira/browse/DERBY-510
Project: Derby
   Type: Bug
   Versions: 10.1.1.0
Environment: JDK 1.4.2, JDK 1.5.0

Windows XP
   Reporter: Volker Edelmann


I tried a test-program that repeatedly inserts a bunch of data into 1 table and 
repeatedly deletes a bunch of data.

   // table is not empty when test-program starts
   derby.executeSelect("select count(*) c from rclvalues");

   TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 200); // insert 2.000.000 rows
   derby.executeDelete("delete from rclvalues where MOD(id, 3) = 0");
   TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 100);
   derby.executeDelete("delete from rclvalues where MOD(id, 5) = 0");

   derby.executeSelect("select count(*) c from rclvalues");

At the end of the operation, the table contains approximately the same number 
of rows, but the size of the database has grown from 581 MB to 1.22 GB. From 
the description of item DERBY-132, I hoped that Derby does the compression now 
(version 10.1.X.X).



   





[jira] Reopened: (DERBY-488) DatabaseMetaData.getColumns() fails on iSeries JDK 1.4 with verfier error on generated class.

2005-08-16 Thread Deepa Remesh (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-488?page=all ]
 
Deepa Remesh reopened DERBY-488:


 Assign To: Deepa Remesh  (was: Daniel John Debrunner)

This error has to be ported to 10.0 branch also.

 DatabaseMetaData.getColumns() fails on iSeries JDK 1.4 with verfier error on 
 generated class.
 -

  Key: DERBY-488
  URL: http://issues.apache.org/jira/browse/DERBY-488
  Project: Derby
 Type: Bug
   Components: SQL
 Versions: 10.1.1.0
 Reporter: Daniel John Debrunner
 Assignee: Deepa Remesh
  Fix For: 10.2.0.0, 10.1.1.0
  Attachments: patch488.txt

 Analysis shows that 
 --
 The problem is occurring starting at offset 2007 in method e23.  There is an 
 invokeinterface to method setWidth(int, int, boolean) of class 
 VariableSizeDataValue.  This invoke returns a value of class 
 DataValueDescriptor.  That value is in turn stored in field e142 at offset 
 2015 in method e23.  The problem is that field e142 is a NumberDataValue, and 
 DataValueDescriptor is not a valid subclass of NumberDataValue.  Thus the 
 store is not allowed.
 --
 Looking at the generated setWidth() calls I see one in BinaryOperatorNode 
 where the return (DataValueDescriptor) is not cast to the type of the field 
 it is stored in. 
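The missing cast can be shown with a self-contained sketch; the interfaces below are stand-ins for Derby's DataValueDescriptor/NumberDataValue, not the real classes:

```java
public class CastSketch {
    interface DataValueDescriptor {}
    interface NumberDataValue extends DataValueDescriptor {}
    static class NumberImpl implements NumberDataValue {}

    // mirrors setWidth(): declared to return the supertype
    static DataValueDescriptor setWidth(NumberDataValue v) { return v; }

    // plays the role of the generated field e142
    public static NumberDataValue field;

    public static void main(String[] args) {
        NumberDataValue v = new NumberImpl();
        // field = setWidth(v);              // rejected: a DataValueDescriptor
        //                                   // is not automatically a
        //                                   // NumberDataValue
        field = (NumberDataValue) setWidth(v);  // the fix: explicit downcast
    }
}
```

The Java compiler would force this cast; the bytecode generator omitted it, which is exactly what the verifier on iSeries complains about.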

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: [jira] Created: (DERBY-510) DERBY-132 resolved ? Table not automatically compressed

2005-08-16 Thread Francois Orsini
Maybe something like a HouseKeeper module servicing
tasks/chores, with one of them being data compression, triggered
during Derby idle times... configuration settings/properties could
let the service chore know which tables to compress, if not all of
them... just some thoughts...

--francois

On 8/16/05, Mike Matrigali [EMAIL PROTECTED] wrote:
 Full compression of derby tables is not done automatically, I
 am looking for input on how to schedule such an operation.  An
 operation like this is going to have a large cpu, i/o, and
 possible temporary disk space impact on the rest of the server.
 As a zero admin db I think we should figure out some way to
 do this automatically, but I think there are a number of
 applications which would not be happy with such a performance
 impact not under their control.
 
 My initial thoughts are to pick a default time frame, say
 once every 30 days to check for table level events like
 compression and statistics generation and then execute the operations
 at low priority.  Also add some sort of parameter so that
 applications could disable the automatic background jobs.
 
 Note that derby does automatically reclaim space from deletes
 for subsequent inserts, but the granularity currently is at
 a page level.  So deleting every 3rd or 5th row is the worst
 case behavior.  The page level decision was a tradeoff as
 reclaiming the space is time consuming so did not want to
 schedule to work on a row by row basis.  Currently we schedule
 the work when all the rows on a page are marked deleted.
 
 Volker Edelmann (JIRA) wrote:
 
  DERBY-132 resolved ? Table not automatically compressed
  
 
   Key: DERBY-510
   URL: http://issues.apache.org/jira/browse/DERBY-510
   Project: Derby
  Type: Bug
  Versions: 10.1.1.0
   Environment: JDK 1.4.2, JDK 1.5.0
  Windows XP
  Reporter: Volker Edelmann
 
 
   I tried a test-program that repeatedly inserts a bunch of data into 1 
  table and repeatedly deletes a bunch of data.
 
   // table is not empty when test-program starts
   derby.executeSelect("select count(*) c from rclvalues");

   TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 200); // insert 2.000.000 rows
   derby.executeDelete("delete from rclvalues where MOD(id, 3) = 0");
   TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 100);
   derby.executeDelete("delete from rclvalues where MOD(id, 5) = 0");

   derby.executeSelect("select count(*) c from rclvalues");
 
  At the end of the operation, the table contains approximately the same 
  number of rows. But the size of the database has grown from
  581 MB to 1.22 GB. From the description of item DERBY-132, I hoped that 
  Derby does the compression now ( version 10.1.X.X.).
 
 



Re: [jira] Created: (DERBY-510) DERBY-132 resolved ? Table not automatically compressed

2005-08-16 Thread Rick Hillegas

Continuing to maunder, let me fine-tune this a bit:

1) Imagine that, on an ongoing basis we maintain some CompressionMetric, 
which measures whether a given table needs compression/reoptimization. 
Dead space might be part of this metric or not. Time since last 
compression could be part of the metric. The metric could be as crude or 
fancy as we like.


2) At some point, based on its CompressionMetric, a table Qualifies for 
compression/reoptimization.


3) At some fairly fine-grained interval, a low priority thread wakes up, 
looks for Qualifying tables, and compresses/reoptimizes them. By 
default, this thread runs in a 0-administration database, but we expose 
a knob for scheduling/disabling the thread.


Your original proposal is a degenerate case of this approach and maybe 
it's the first solution we implement. However, we can get fancier as we 
need to support bigger datasets.


Cheers,
-Rick
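A minimal sketch of steps 1-3 above, assuming an invented 20% dead-space threshold and a plain java.lang.Thread for the low-priority worker (the real implementation would hang off Derby's module system):

```java
public class CompressionScheduler {
    // Steps 1/2: the CompressionMetric and the qualification test.
    // The 20% threshold is an arbitrary illustration, not a Derby default.
    public static boolean qualifies(long deadBytes, long totalBytes) {
        return totalBytes > 0 && (double) deadBytes / totalBytes > 0.20;
    }

    // Step 3: a low-priority daemon thread that wakes up periodically
    // and compresses whatever tables currently qualify.
    public static Thread start(Runnable compressQualifyingTables, long intervalMs) {
        Thread t = new Thread(() -> {
            while (true) {
                try { Thread.sleep(intervalMs); }
                catch (InterruptedException e) { return; }  // the "disable" knob
                compressQualifyingTables.run();
            }
        }, "auto-compress");
        t.setDaemon(true);
        t.setPriority(Thread.MIN_PRIORITY);
        t.start();
        return t;
    }
}
```

Exposing the interval (and interrupting the thread to disable it) gives the scheduling/disabling knob Rick asks for.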

Rick Hillegas wrote:


Hi Mike,

I like your suggestions that a low priority thread should perform the 
compressions and that we should expose a knob for disabling this 
thread. Here are some further suggestions:


Compressing all the tables and recalculating all the statistics once a 
month could cause quite a hiccup for a large database. Maybe we could 
do something finer grained. For instance, we could try to make it easy 
to ask some question like Is more than 20% of this table's space 
dead? No doubt there are some tricky issues in maintaining a 
per-table dead-space counter and in keeping that counter from being a 
sync point during writes. However, if we could answer a question like 
that, then we could pay the compression/reoptimization penalty as we 
go rather than incurring a heavy, monthly lump-sum tax.


Cheers,
-Rick

Mike Matrigali wrote:


Full compression of derby tables is not done automatically, I
am looking for input on how to schedule such an operation.  An
operation like this is going to have a large cpu, i/o, and
possible temporary disk space impact on the rest of the server.
As a zero admin db I think we should figure out some way to
do this automatically, but I think there are a number of
applications which would not be happy with such a performance
impact not under their control.

My initial thoughts are to pick a default time frame, say
once every 30 days to check for table level events like
compression and statistics generation and then execute the operations
at low priority.  Also add some sort of parameter so that
applications could disable the automatic background jobs.

Note that derby does automatically reclaim space from deletes
for subsequent inserts, but the granularity currently is at
a page level.  So deleting every 3rd or 5th row is the worst
case behavior.  The page level decision was a tradeoff as
reclaiming the space is time consuming so did not want to
schedule to work on a row by row basis.  Currently we schedule
the work when all the rows on a page are marked deleted.

Volker Edelmann (JIRA) wrote:

 

DERBY-132 resolved ? Table not automatically compressed 



Key: DERBY-510
URL: http://issues.apache.org/jira/browse/DERBY-510
Project: Derby
   Type: Bug
   Versions: 10.1.1.0Environment: JDK 1.4.2, JDK 1.5.0
Windows XP
   Reporter: Volker Edelmann


I tried a test-program that repeatedly inserts a bunch of data into 
1 table and repeatedly deletes a bunch of data.


   // table is not empty when test-program starts
   derby.executeSelect("select count(*) c from rclvalues");

   TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 200); // insert 2.000.000 rows
   derby.executeDelete("delete from rclvalues where MOD(id, 3) = 0");
   TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 100);
   derby.executeDelete("delete from rclvalues where MOD(id, 5) = 0");

   derby.executeSelect("select count(*) c from rclvalues");

At the end of the operation, the table contains approximately the 
same number of rows. But the size of the database has grown from
581 MB to 1.22 GB. From the description of item DERBY-132, I hoped 
that Derby does the compression now ( version 10.1.X.X.). 

  








Re: Patch tool acts strange

2005-08-16 Thread Philip Wilder

Mamta Satoor wrote:


Hi,
 
I was wondering if this can be resolved by setting svn:eol-style on 
master/updatableResultSet.out. When I list the properties for this 
master, it doesn't show any properties
$ svn proplist --verbose 
java/testing/org/apache/derbyTesting/functionTests/master/updatableResultSet.out

$
 
There is an updatableResultSet.out master specific to DerbyNet and it 
has the appropriate property set on it.
$ svn proplist --verbose 
java/testing/org/apache/derbyTesting/functionTests/master/DerbyNet/updatableResultSet.out
Properties on 
'java\testing\org\apache\derbyTesting\functionTests\master\DerbyNet\updatableResultSet.out': 


  svn:eol-style : native
 
Mamta
 


It looks like at least the majority of the files in the master directory 
have svn:eol-style set to native. Even if this doesn't solve my problem 
I suspect it is something that should be done anyway.


Philip


sharing code between the client and server

2005-08-16 Thread Rick Hillegas
When we last visited this issue (July 2005 thread named "Size of common 
jar file"), we decided not to do anything until we had to. Well, I would 
like to start writing/refactoring some small chunks of network code for 
sharing by the client and server. My naive approach would be to do the 
following.


o Create a new fork in the source code: java/common. This would be 
parallel to java/client and java/server.


o This fork of the tree would hold sources in these packages: 
org.apache.derby.common...


o The build would compile this fork into classes/org/apache/derby/common/...

o The jar-building targets would be smart enough to include these 
classes in derby.jar, derbyclient.jar, and derbytools.jar.
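A hypothetical Ant fragment for that last bullet (all target and property names here are invented; the real Derby build files differ):

```xml
<!-- hypothetical: include the shared classes in each deliverable jar -->
<jar destfile="${jars.dir}/derbyclient.jar">
  <fileset dir="${classes.dir}" includes="org/apache/derby/client/**"/>
  <fileset dir="${classes.dir}" includes="org/apache/derby/common/**"/>
</jar>
<jar destfile="${jars.dir}/derby.jar">
  <fileset dir="${classes.dir}" includes="org/apache/derby/impl/**"/>
  <fileset dir="${classes.dir}" includes="org/apache/derby/common/**"/>
</jar>
```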


As I recall, there was an edge case: including a derby.jar from one 
release and a derbyclient.jar from another release in the same VM. I 
think that a customer should expect problems if they mix and match jar 
files from different releases put out by a vendor. It's an old 
deficiency in the CLASSPATH model. With judicious use of ClassLoaders, I 
think customers can hack around this edge case.


I welcome your feedback.

Cheers,
-Rick


[jira] Commented: (DERBY-509) DERBY-132 resolved ? Table not automatically compressed

2005-08-16 Thread Jean T. Anderson (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-509?page=comments#action_12318940 ] 

Jean T. Anderson commented on DERBY-509:


This seems to me like it would be an excellent question for [EMAIL PROTECTED]. 
Could you repost it there? That way others on that list will also benefit from 
the reply.

I realize sometimes it isn't obvious how to post to the derby lists. First you 
need to subscribe by sending email to:

  [EMAIL PROTECTED]

The first time you post to that list there will be a delay because the first 
post is moderated. Later posts will automatically hit the list.

More information about the derby lists is here:
   http://db.apache.org/derby/derby_mail.html

 DERBY-132 resolved ? Table not automatically compressed
 ---

  Key: DERBY-509
  URL: http://issues.apache.org/jira/browse/DERBY-509
  Project: Derby
 Type: Bug
 Versions: 10.1.1.0
  Environment: JDK 1.4.2, JDK 1.5.0,
 Windows XP
 Reporter: Volker Edelmann


 I tried a test-program that repeatedly inserts a bunch of  data into 1 table 
 and repeatedly deletes a bunch of data. 
   derby.executeSelect("select count(*) c from rclvalues");
   TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 200); // insert 2.000.000 rows
   derby.executeDelete("delete from rclvalues where MOD(id, 3) = 0");
   TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 100);
   derby.executeDelete("delete from rclvalues where MOD(id, 5) = 0");
   derby.executeSelect("select count(*) c from rclvalues");
 At the end of the operation, the table contains approximately the same number 
 of rows. But the size of the database  has grown  from
 581 MB to 1.22 GB. From the description of item DERBY-132, I hoped that Derby 
 does the compression now ( version 10.1.X.X.).
 Did I overlook that I still have to use SYSCS_UTIL.SYSCS_COMPRESS_TABLE?
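As the thread discusses, full compression is still a manual call in 10.1. A JDBC sketch of that call (the procedure and its (schema, table, sequential) parameters are Derby's documented SYSCS_UTIL.SYSCS_COMPRESS_TABLE; the wrapper class is invented):

```java
import java.sql.CallableStatement;
import java.sql.Connection;

public class CompressCall {
    public static final String SQL =
        "CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE(?, ?, ?)";

    // Reclaims the space of one table; schema/table names are matched
    // against the catalogs as stored (usually upper case).
    public static void compress(Connection conn, String schema, String table)
            throws Exception {
        CallableStatement cs = conn.prepareCall(SQL);
        try {
            cs.setString(1, schema);
            cs.setString(2, table);
            cs.setShort(3, (short) 1);  // non-zero = sequential, uses less temp space
            cs.execute();
        } finally {
            cs.close();
        }
    }
}
```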




Re: Patch tool acts strange

2005-08-16 Thread Andrew McIntyre
On 8/16/05, Jean T. Anderson [EMAIL PROTECTED] wrote:
 
 http://www.apache.org/dev/version-control.html links to a wonderful file
 to add to your ~/.subversion/config file that automatically sets
 svn:eol-style to native for many files with specific extensions:
 
 http://www.apache.org/dev/svn-eol-style.txt

This is indeed very handy, but there are a lot of .out files in our
master directory, so you should also add:

*.out = svn:eol-style=native

to your Subversion config if you use the Apache svn config template.

andrew


drda product identifier

2005-08-16 Thread Rick Hillegas
DRDA publishes a list of supporting vendors (see 
http://www.opengroup.org/dbiop/prodid.htm). Each of these vendors has 
its own product id. Derby isn't in this list although Cloudscape half 
is. Right now:


o The Derby network client makes up its own product identifier: DNC.

o The Derby network server uses the Cloudscape product identifier: CSS.

This seems a little odd to me:

o Should Derby use the id of a product sold by IBM? This could give rise 
to lots of mischief.


o Other vendors who supply both clients and servers use the same product 
identifier for both. Why don't we follow this common practice?


Derby could apply for its own product identifier to be used by both 
client and server. I'm happy to push this application forward if the 
list thinks that's the right thing to do.


Feedback, other thoughts?

Thanks,
-Rick


Re: [VOTE] Policy for supported platforms

2005-08-16 Thread Andrew McIntyre
On 8/16/05, Rick Hillegas [EMAIL PROTECTED] wrote:
 Hi Andrew,
 
 I've made the following changes to STATUS, which I'm submitting in the
 attached diff file:
 
 1) Reworded the preamble to indicate that Derby is a full-fledged Apache
 DB project.
 
 2) Listed 10.1.1.0 as released.
 
 3) Noted our graduation.
 
 4) Recorded the platform support vote.

Committed revision 233074. Thanks for helping keep STATUS up to date!

 I didn't know what you meant about listing the new committers (Bernt,
 Oyvind, and David). They are already at the end of the committer list.

I think I was looking at an outdated copy, sorry.

 By the way, does anyone know what the "type" column in the "Detailed
 references" table means? For the original committers, this
 column holds some kind of login name or abbreviation which, at least in
 some cases, isn't the individual's JIRA id. Currently, this
 column is blank for the new committers.

The type column is from a table on the original Derby page at the
Incubator website. That info is archived at
http://incubator.apache.org/projects/derby.html.  Now that we've
graduated from the Incubator, most of the incubation status info at
the top of STATUS can go, including this section. I'll leave that for
the next person who updates the file.

cheers,
andrew


Re: [jira] Created: (DERBY-510) DERBY-132 resolved ? Table not automatically compressed

2005-08-16 Thread Mike Matrigali
thanks, this is useful.

It turns out that the compressions metric is not too hard, as the
internal space vti already basically does the work - at least at
a page level granularity.  I think there may be some use for an
admin thread to maintain info about this compression metric over time
and then use that info to decide to do the compression.  For instance
if a table total size is staying constant but the free space is
growing and shrinking over time then there is no need to compress -
similarly if the table is growing and free space is staying constant
again no need to compress.  Another issue is picking
 a good default for how much free space is enough: any space, 10%, 20%,
1 meg, 10 meg, ...

As you point out, scheduling a compression should be based on whether
it will compress.  A similar problem is that currently
there is no way to update the cardinality statistic.  This statistic
is currently updated only when an index is created explicitly, or
when an index is updated internally as part of a compress table.
Unfortunately a number of applications tend to create empty tables and
indexes and then load the data, so never get this statistic correct.

Keeping with the zero admin goal it would be better to figure out a way
to automatically update this, rather than require an explicit call from
the user.   Currently the code to generate this statistic requires
a scan of the entire index and a compare of every row to the next one -
producing a single statistic for every leading set of columns in an
index.  It is used basically to determine the average number of keys
per value for a given value in an index.  Note that other histogram
type information used by other db's are gathered straight from the
btree, and thus don't require any type of statistic maintenance.

Any good ideas on how to tell when we should update that statistic?
Some options include:
o when the table has grown by X%
o time based
o when the number of rows has changed by X%
o some sort of sampling scheme compared with a stored result
o a default for a small number of rows, and once when the table reaches N rows.
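The scan Mike describes (walk the sorted index once, comparing each row to the next) reduces to something like this sketch, with the index modeled as a sorted int array rather than a real btree:

```java
public class CardinalitySketch {
    // Average number of rows per distinct value of the leading column:
    // one pass over the sorted keys, one compare of each key against
    // its predecessor to count distinct values.
    public static double avgRowsPerValue(int[] sortedKeys) {
        if (sortedKeys.length == 0) return 0.0;
        int distinct = 1;
        for (int i = 1; i < sortedKeys.length; i++) {
            if (sortedKeys[i] != sortedKeys[i - 1]) {
                distinct++;
            }
        }
        return (double) sortedKeys.length / distinct;
    }
}
```

The cost is the full-scan-plus-compare Mike mentions, which is why it only happens today during index creation or compress table.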



Rick Hillegas wrote:
 Continuing to maunder, let me fine-tune this a bit:
 
 1) Imagine that, on an ongoing basis we maintain some CompressionMetric,
 which measures whether a given table needs compression/reoptimization.
 Dead space might be part of this metric or not. Time since last
 compression could be part of the metric. The metric could be as crude or
 fancy as we like.
 
 2) At some point, based on its CompressionMetric, a table Qualifies for
 compression/reoptimization.
 
 3) At some fairly fine-grained interval, a low priority thread wakes up,
 looks for Qualifying tables, and compresses/reoptimizes them. By
 default, this thread runs in a 0-administration database, but we expose
 a knob for scheduling/disabling the thread.
 
 Your original proposal is a degenerate case of this approach and maybe
 it's the first solution we implement. However, we can get fancier as we
 need to support bigger datasets.
 
 Cheers,
 -Rick
 
 Rick Hillegas wrote:
 
 Hi Mike,

 I like your suggestions that a low priority thread should perform the
 compressions and that we should expose a knob for disabling this
 thread. Here are some further suggestions:

 Compressing all the tables and recalculating all the statistics once a
 month could cause quite a hiccup for a large database. Maybe we could
 do something finer grained. For instance, we could try to make it easy
 to ask some question like "Is more than 20% of this table's space
 dead?" No doubt there are some tricky issues in maintaining a
 per-table dead-space counter and in keeping that counter from being a
 sync point during writes. However, if we could answer a question like
 that, then we could pay the compression/reoptimization penalty as we
 go rather than incurring a heavy, monthly lump-sum tax.

 Cheers,
 -Rick

 Mike Matrigali wrote:

 Full compression of derby tables is not done automatically, I
 am looking for input on how to schedule such an operation.  An
 operation like this is going to have a large cpu, i/o, and
 possible temporary disk space impact on the rest of the server.
 As a zero admin db I think we should figure out some way to
 do this automatically, but I think there are a number of
 applications which would not be happy with such a performance
 impact not under their control.

 My initial thoughts are to pick a default time frame, say
 once every 30 days to check for table level events like
 compression and statistics generation and then execute the operations
 at low priority.  Also add some sort of parameter so that
 applications could disable the automatic background jobs.

 Note that derby does automatically reclaim space from deletes
 for subsequent inserts, but the granularity currently is at
 a page level.  So deleting every 3rd or 5th row is the worst
 case behavior.  The page level decision was a tradeoff as
 reclaiming the space is time 

Re: [VOTE] Policy for supported platforms

2005-08-16 Thread Rick Hillegas

Thanks, Andrew.

-Rick

Andrew McIntyre wrote:


On 8/16/05, Rick Hillegas [EMAIL PROTECTED] wrote:
 


Hi Andrew,

I've made the following changes to STATUS, which I'm submitting in the
attached diff file:

1) Reworded the preamble to indicate that Derby is a full-fledged Apache
DB project.

2) Listed 10.1.1.0 as released.

3) Noted our graduation.

4) Recorded the platform support vote.
   



Committed revision 233074. Thanks for helping keep STATUS up to date!

 


I didn't know what you meant about listing the new committers (Bernt,
Oyvind, and David). They are already at the end of the committer list.
   



I think I was looking at an outdated copy, sorry.

 


By the way, does anyone know what the type column in the Detailed
references table means? For the original committers, this
column holds some kind of login name or abbreviation which, at least in
some cases, isn't the individual's JIRA id. Currently, this
column is blank for the new committers.
   



The type column is from a table on the original Derby page at the
Incubator website. That info is archived at
http://incubator.apache.org/projects/derby.html.  Now that we've
graduated from the Incubator, most of the incubation status info at
the top of STATUS can go, including this section. I'll leave that for
the next person who updates the file.

cheers,
andrew
 





[jira] Updated: (DERBY-456) Can't load the jdbc driver(org.apache.derby.jdbc.EmbeddedDriver) when the derby.jar is contained in a directory having exclamation mark(!)

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-456?page=all ]

Mike Matrigali updated DERBY-456:
-

Component: JDBC

 Can't load the jdbc driver(org.apache.derby.jdbc.EmbeddedDriver) when the 
 derby.jar is contained in a directory having exclamation mark(!)
 

  Key: DERBY-456
  URL: http://issues.apache.org/jira/browse/DERBY-456
  Project: Derby
 Type: Bug
   Components: JDBC
 Versions: 10.0.2.1
  Environment: created a simple program to load the jdbc driver when 
 derby.jar is contained in the directory test!Derby. Notice the 
 exclamation mark in the directory name. If the exclamation mark is removed 
 from the directory name then it works fine.
 I am using WinXP Pro(SP2).
 Reporter: aakash agrawal
 Priority: Blocker


 The test program is as below:
 import org.apache.derby.jdbc.EmbeddedDriver;

 public class testDerby {
     public static void main(String[] args) {
         try {
             new EmbeddedDriver();
         } catch (Exception e) {
             System.out.println(e.getMessage());
             e.printStackTrace();
         }
     }
 }
 The following exception is thrown: 
 XBM02.D : [0] org.apache.derby.iapi.services.stream.InfoStreams
 ERROR XBM02: XBM02.D : [0] org.apache.derby.iapi.services.stream.InfoStreams
   at 
 org.apache.derby.iapi.error.StandardException.newException(StandardException.java)
   at 
 org.apache.derby.iapi.services.monitor.Monitor.missingImplementation(Monitor.java)
   at 
 org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java)
   at 
 org.apache.derby.impl.services.monitor.BaseMonitor.startModule(BaseMonitor.java)
   at 
 org.apache.derby.iapi.services.monitor.Monitor.startSystemModule(Monitor.java)
   at 
 org.apache.derby.impl.services.monitor.BaseMonitor.runWithState(BaseMonitor.java)
   at 
 org.apache.derby.impl.services.monitor.FileMonitor.&lt;init&gt;(FileMonitor.java)
   at 
 org.apache.derby.iapi.services.monitor.Monitor.startMonitor(Monitor.java)
   at org.apache.derby.iapi.jdbc.JDBCBoot.boot(JDBCBoot.java)
   at org.apache.derby.jdbc.EmbeddedDriver.boot(EmbeddedDriver.java)
   at org.apache.derby.jdbc.EmbeddedDriver.&lt;clinit&gt;(EmbeddedDriver.java)
   at testDerby.main(testDerby.java:6)
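The trace above doesn't show where the path is parsed, but one plausible failure mode (an assumption, not confirmed against the Derby source) is splitting a jar: URL on the first '!' character instead of the full "!/" separator, which truncates the archive path when a directory name contains '!':

```java
public class BangPathSketch {
    // buggy: cut at the first '!' anywhere in the URL
    public static String naiveArchive(String jarUrl) {
        int i = jarUrl.indexOf('!');
        return i < 0 ? jarUrl : jarUrl.substring(0, i);
    }

    // only the "!/" archive/entry separator should count
    public static String correctArchive(String jarUrl) {
        int i = jarUrl.indexOf("!/");
        return i < 0 ? jarUrl : jarUrl.substring(0, i);
    }

    public static void main(String[] args) {
        String url = "jar:file:/C:/test!Derby/derby.jar!/";
        System.out.println(naiveArchive(url));   // jar:file:/C:/test  (wrong)
        System.out.println(correctArchive(url)); // jar:file:/C:/test!Derby/derby.jar
    }
}
```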




[jira] Updated: (DERBY-269) Provide some way to update index cardinality statistics (e.g. reimplement update statistics)

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-269?page=all ]

Mike Matrigali updated DERBY-269:
-

  Component: SQL
Description: 
Performance problems are being reported that can be resolved by updating the 
cardinality statistics used by the optimizer.  Currently the only time the 
statistics are guaranteed to be up-to-date is when the index is first 
created on a fully populated table.  This is most easily accomplished on an 
existing table by using the command: 

   alter table table-name compress [sequential]  

Compress table is an I/O intensive task.  A better way to achieve this would be 
to re-enable parser support for the 'update statistics' command or re-implement 
the update in some other fashion.

  was:
Performance problems are being reported that can be resolved by updating the 
cardinality statistics used by the optimizer.  Currently the only time the 
statistics are guaranteed to be up-to-date is when the index is first 
created on a fully populated table.  This is most easily accomplished on an 
existing table by using the command: 

   alter table table-name compress [sequential]  

Compress table is an I/O intensive task.  A better way to achieve this would be 
to re-enable parser support for the 'update statistics' command or re-implement 
the update in some other fashion.

Environment: 

 Provide some way to update index cardinality statistics (e.g. reimplement 
 update statistics)
 

  Key: DERBY-269
  URL: http://issues.apache.org/jira/browse/DERBY-269
  Project: Derby
 Type: New Feature
   Components: SQL
 Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.0.2.2
 Reporter: Stan Bradbury


 Performance problems are being reported that can be resolved by updating the 
 cardinality statistics used by the optimizer.  Currently the only time the 
 statistics are guaranteed to be up-to-date is when the index is first 
 created on a fully populated table.  This is most easily accomplished on an 
 existing table by using the command: 
alter table table-name compress [sequential]  
 Compress table is an I/O intensive task.  A better way to achieve this would 
 be to re-enable parser support for the 'update statistics' command or 
 re-implement the update in some other fashion.




[jira] Commented: (DERBY-260) Derby is not stable on multiprocessor or Hyperthreating architectures.

2005-08-16 Thread Mike Matrigali (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-260?page=comments#action_12318943 ] 

Mike Matrigali commented on DERBY-260:
--

Roland, did you ever get any more information about this? Should we close this 
as not a Derby issue, or at least as not reproducible?

 Derby is not stable on multiprocessor or Hyperthreating architectures.
 --

  Key: DERBY-260
  URL: http://issues.apache.org/jira/browse/DERBY-260
  Project: Derby
 Type: Bug
  Environment: P4 3GHz with Hyperthreating, 512 Mb, WIN2000 
 Reporter: Roland Beuker


 Hello,
 I am using IBM Cloudscape Version 10.0 with Hibernate and it works great. But 
 at the time I was deploying my project on a target with a Hyperthreading CPU 
 things went wrong (database very unstable). It seems that Cloudscape is not 
 functioning on multiprocessor or Hyperthreading architectures. Does anyone 
 have some more information?
 Regards,
 Roland Beuker 




[jira] Updated: (DERBY-185) Update incubator status page

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-185?page=all ]

Mike Matrigali updated DERBY-185:
-

  Component: Web Site
Description: 
Update the page

http://incubator.apache.org/projects/derby.html

to reflect current status. I need to figure out what to edit and where we are. 
Please attach any comments to this issue


Environment: 

 Update incubator status page
 

  Key: DERBY-185
  URL: http://issues.apache.org/jira/browse/DERBY-185
  Project: Derby
 Type: Task
   Components: Web Site
 Reporter: Jeremy Boynes
 Assignee: Jeremy Boynes


 Update the page
 http://incubator.apache.org/projects/derby.html
 to reflect current status. I need to figure out what to edit and where we 
 are. Please attach any comments to this issue




[jira] Updated: (DERBY-216) expand largeCodeGen.java test

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-216?page=all ]

Mike Matrigali updated DERBY-216:
-

  Component: Test
Description: 
The largeCodeGen test needs to be expanded to include other cases that generate 
large amounts of byte code. 

For example:
 large IN clause
 large insert statement that inserts many rows
 SQL statements with large constant values 

It is best if the various tests just use a variable that can be bumped higher 
and higher for testing, and if individual cases are isolated.

Possible approaches: think of ways to make SQL statements really big that will 
take different code paths.

Look in the code for instances of statementNumHitLimit and create cases that 
pass through that code.  Those cases may pass, but the hope is to get rid of 
these calls in favor of splitting the code in a centralized way, so add the 
tests to largeCodeGen even if they don't fail.
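As a sketch of the kind of test generator meant here (the class and method names are ours, not actual largeCodeGen code), a test can build a SELECT with an arbitrarily large IN list and simply bump the term count until code generation hits its limit:

```java
// Hypothetical helper, not actual largeCodeGen code: build a SELECT with
// a very large IN list to stress Derby's byte-code generator.
public class LargeInClauseSketch {
    static String buildLargeInClause(int terms) {
        StringBuilder sql = new StringBuilder("SELECT * FROM t WHERE id IN (");
        for (int i = 0; i < terms; i++) {
            if (i > 0) sql.append(", ");
            sql.append(i); // each literal contributes to the generated code
        }
        return sql.append(")").toString();
    }

    public static void main(String[] args) {
        // bump this count higher and higher, per the suggestion above
        String sql = buildLargeInClause(10000);
        System.out.println(sql.length());
    }
}
```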

 




Environment: 

 expand largeCodeGen.java test
 -

  Key: DERBY-216
  URL: http://issues.apache.org/jira/browse/DERBY-216
  Project: Derby
 Type: Sub-task
   Components: Test
 Reporter: Kathey Marsden


 The largeCodeGen test needs to be expanded to include other cases that 
 generate large amounts of byte code. 
 For example:
  large IN clause
  large insert statement that inserts many rows
  SQL statements with large constant values 
 It is best if the various tests just use a variable that can be bumped higher 
 and higher for testing, and if individual cases are isolated.
 Possible approaches: think of ways to make SQL statements really big that 
 will take different code paths.
 Look in the code for instances of statementNumHitLimit and create cases that 
 pass through that code.  Those cases may pass, but the hope is to get rid of 
 these calls in favor of splitting the code in a centralized way, so add the 
 tests to largeCodeGen even if they don't fail.
  




[jira] Updated: (DERBY-159) When Derby runs in Network Server mode, client does not receive warnings generated by Derby

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-159?page=all ]

Mike Matrigali updated DERBY-159:
-

  Component: Network Server
Description: 
The code below demonstrates that warnings generated by Derby running in Network 
Server mode do not make their way to the client. The client code is trying to 
create the database db1drda, which already exists. The server generates a 
warning for that, but the client code below does not print it.

con = DriverManager.getConnection(
    "jdbc:derby:net://localhost:1527/db1drda;create=true:retrieveMessagesFromServerOnGetMessage=true;",
    "app", "app");
SQLWarning warnings1 = con.getWarnings();
System.out.println("database exists, should get warning");
while (warnings1 != null)
{
    System.out.println("warnings on connection = " + warnings1);
    warnings1 = warnings1.getNextWarning();
}
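For reference, the warning-draining loop can be isolated into a small helper (the class and method names here are ours, not from the bug report); this is the same chain walk the client code above relies on, so it is easy to assert on what the client actually received:

```java
import java.sql.SQLWarning;
import java.util.ArrayList;
import java.util.List;

// Sketch: drain a java.sql.SQLWarning chain (e.g. from Connection.getWarnings())
// into a list of messages so no warning is silently lost.
public class WarningDrain {
    static List<String> drain(SQLWarning w) {
        List<String> messages = new ArrayList<>();
        for (; w != null; w = w.getNextWarning()) {
            messages.add(w.getMessage());
        }
        return messages;
    }
}
```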





Environment: 

 When Derby runs in Network Server mode, client does not receive warnings 
 generated by Derby
 ---

  Key: DERBY-159
  URL: http://issues.apache.org/jira/browse/DERBY-159
  Project: Derby
 Type: Bug
   Components: Network Server
 Reporter: Mamta A. Satoor


 The code below demonstrates that warnings generated by Derby running in 
 Network Server mode do not make their way to the client. The client code is 
 trying to create the database db1drda, which already exists. The server 
 generates a warning for that, but the client code below does not print it.
 con = DriverManager.getConnection(
     "jdbc:derby:net://localhost:1527/db1drda;create=true:retrieveMessagesFromServerOnGetMessage=true;",
     "app", "app");
 SQLWarning warnings1 = con.getWarnings();
 System.out.println("database exists, should get warning");
 while (warnings1 != null)
 {
   System.out.println("warnings on connection = " + warnings1);
   warnings1 = warnings1.getNextWarning();
 }




[jira] Updated: (DERBY-150) Disable transaction logging

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-150?page=all ]

Mike Matrigali updated DERBY-150:
-

  Component: Store
Summary: Disable transaction logging  (was: Disable logging)
Description: 
As a means of improving performance, I would like to be able to disable logging 
completely.

I use Derby in applications that never use rollback, and I suspect that quite a 
lot of time is going to logging.

Thanks



Note that even if your application never explicitly calls rollback, many 
internal operations in Derby use rollback, e.g.: lock timeout, lock deadlock, 
statement errors, server crash.

If such a feature were provided, what would you expect from the database if the 
machine crashed:
1) no database expected after crash
2) a bootable database that could have inconsistent data (i.e. partial data 
from a transaction)
3) a consistent database from some arbitrary point in time.
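For what it's worth, Derby reads most such tuning knobs as system properties at boot. The sketch below uses derby.system.durability=test, the property Derby eventually provided to relax log syncing; if your version predates it, treat the name as illustrative. Under it, a machine crash can leave the database unbootable or inconsistent, i.e. outcome 1 or 2 from the list above:

```java
// Hedged sketch: relax Derby's log syncing for test runs only.
// The property must be set before the embedded driver boots the database;
// crash recovery guarantees no longer hold once it is on.
public class DurabilityKnob {
    public static void relaxDurabilityForTests() {
        System.setProperty("derby.system.durability", "test");
    }
}
```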

 Disable transaction logging
 ---

  Key: DERBY-150
  URL: http://issues.apache.org/jira/browse/DERBY-150
  Project: Derby
 Type: New Feature
   Components: Store
  Environment: all
 Reporter: Barnet Wagman


 As a means of improving performance, I would like to be able to disable 
 logging completely.
 I use Derby in applications that never use rollback, and I suspect that quite 
 a lot of time is going to logging.
 Thanks




[jira] Updated: (DERBY-151) Thread termination - XSDG after operation is 'complete'

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-151?page=all ]

Mike Matrigali updated DERBY-151:
-

  Component: Store
Description: 
I've encountered what appears to be a bug related to threading. After an INSERT 
operation, if the invoking thread terminates too quickly, Derby throws an XSDG.

The bug is a bit difficult to isolate but it occurs consistently in the 
following situation (with a particular database and an operation of a 
particular size):

Derby is running in embedded mode with autocommit on.  
The application performs an INPUT operation from a thread that is not the main 
thread.  The INPUT is issued using a PreparedStatement.  The INPUT adds ~ 256 
records of six fields each. (Note that INSERTs of this size seem to work fine 
in other contexts.)
 
The preparedStatement.executeUpdate() seems to execute successfully; at least it 
returns without throwing an exception. 

The thread that invoked the INPUT operation then terminates (but NOT the 
application).  The next INPUT operation then results in an

ERROR XSDG1: Page Page(7,Container(0, 1344)) could not be written to disk, 
please check if disk is full.

The disk is definitely not full.

HOWEVER, if I put the calling thread to sleep for a second before it exits, the 
problem does not occur.

I'm not quite sure what to make of this.  I was under the impression that most 
of Derby's activity occurs in the application's threads.  Could Derby be 
creating a child thread from the application thread, which dies when the 
parent thread terminates?

Thanks







Without some sort of reproducible case, I don't think this issue will be 
addressed.  Also, always include all the stack trace information from derby.log 
whenever possible.  

My best guess is that somehow a thread interrupt is being sent to the thread 
issuing the I/O; possibly the interrupt is even being posted to the thread 
before the execute statement is called.  

To answer the question: the I/O described above could be issued either by the 
thread doing the insert, or by a background thread executing a checkpoint. The 
stack trace would tell which.  
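A minimal, self-contained demonstration of that interrupt guess (our own demo, not Derby code): java.nio file channels are interruptible, so a thread whose interrupt status is already set has its channel closed on the next I/O call, which in Derby would surface as a failed page write:

```java
import java.io.File;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;

// Demo: a pending thread interrupt makes an interruptible channel's
// next I/O operation fail with ClosedByInterruptException.
public class InterruptedWriteDemo {
    public static boolean writeFailsWhenInterrupted() throws Exception {
        File f = File.createTempFile("demo", ".dat");
        f.deleteOnExit();
        Thread.currentThread().interrupt();            // post an interrupt
        try (FileChannel ch = FileChannel.open(f.toPath(), StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap(new byte[]{1}));  // channel is closed here
            return false;
        } catch (ClosedByInterruptException e) {
            return true;
        } finally {
            Thread.interrupted();                      // clear the flag
        }
    }
}
```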

 Thread termination - XSDG after operation is 'complete'
 

  Key: DERBY-151
  URL: http://issues.apache.org/jira/browse/DERBY-151
  Project: Derby
 Type: Bug
   Components: Store
 Versions: 10.0.2.1
  Environment: Linux kernel 2.4.21-243-athlon (SuSE 9.0)
 Reporter: Barnet Wagman


 I've encountered what appears to be a bug related to threading. After an 
 INSERT operation, if the invoking thread terminates too quickly, Derby throws 
 an XSDG.
 The bug is a bit difficult to isolate but it occurs consistently in the 
 following situation (with a particular database and an operation of a 
 particular size):
 Derby is running in embedded mode with autocommit on.  
 The application performs an INPUT operation from a thread that is not the 
 main thread.  The INPUT is issued using a PreparedStatement.  The INPUT adds 
 ~ 256 records of six fields each. (Note that INSERTs of this size seem to 
 work fine in other contexts.)
  
 The preparedStatement.executeUpdate() seems to execute successfully; at least 
 it returns without throwing an exception. 
 The thread that invoked the INPUT operation then terminates (but NOT the 
 application).  The next INPUT operation then results in an
 ERROR XSDG1: Page Page(7,Container(0, 1344)) could not be written to disk, 
 please check if disk is full.
 

[jira] Updated: (DERBY-147) ERROR 42X79 not consistent? - same column name specified twice

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-147?page=all ]

Mike Matrigali updated DERBY-147:
-

  Component: SQL
Description: 
This happens from JDBC or ij. Here is the output from ij:

ij version 10.0 
CONNECTION0* - jdbc:derby:phsDB 
* = current connection 
ij> select a1.XXX_foreign, a1.native, a1.kind, a1.XXX_foreign FROM 
slg_name_lookup a1 ORDER BY a1.XXX_foreign;
ERROR 42X79: Column name 'XXX_FOREIGN' appears more than once in the result of 
the query expression. 

But when removing the ORDER BY and keeping the two identical column names, it 
works:

ij> select a1.XXX_foreign, a1.native, a1.kind, a1.XXX_foreign FROM 
slg_name_lookup a1;
XXX_FOREIGN |NATIVE |KIND|XXX_FOREIGN
---
0 rows selected 
ij> 

So it seems to be OK to specify the same column twice - as long as you do not 
add the ORDER BY clause.  

I would of course like the system to allow this - but at least it should be 
consistent and either allow both or neither of the two queries above.
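Until the behaviour is made consistent, a possible workaround (a hypothetical sketch of ours, not from the report) is to alias the repeated column so every result column has a distinct name, which lets the ORDER BY resolve unambiguously:

```sql
-- alias the second occurrence; XXX_FOREIGN_2 is a made-up name
SELECT a1.XXX_foreign, a1.native, a1.kind,
       a1.XXX_foreign AS XXX_FOREIGN_2
FROM slg_name_lookup a1
ORDER BY a1.XXX_foreign;
```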




Environment: 

 ERROR 42X79 not consistent? - same column name specified twice
 ---

  Key: DERBY-147
  URL: http://issues.apache.org/jira/browse/DERBY-147
  Project: Derby
 Type: Bug
   Components: SQL
 Reporter: Bernd Ruehlicke
  Attachments: derby-147-10.0.2.1.diff, derby-147.diff

 This happens from JDBC or ij. Here is the output from ij:
 ij version 10.0 
 CONNECTION0* - jdbc:derby:phsDB 
 * = current connection 
 ij> select a1.XXX_foreign, a1.native, a1.kind, a1.XXX_foreign FROM 
 slg_name_lookup a1 ORDER BY a1.XXX_foreign;
 ERROR 42X79: Column name 'XXX_FOREIGN' appears more than once in the result 
 of the query expression. 
 But when removing the ORDER BY and keeping the two identical column names, 
 it works:
 ij> select a1.XXX_foreign, a1.native, a1.kind, a1.XXX_foreign FROM 
 slg_name_lookup a1;
 XXX_FOREIGN |NATIVE |KIND|XXX_FOREIGN
 ---
 0 rows selected 
 ij> 
 So it seems to be OK to specify the same column twice - as long as you do 
 not add the ORDER BY clause.  
 I would of course like the system to allow this - but at least it should 
 be consistent and either allow both or neither of the two queries above.




[jira] Updated: (DERBY-119) Add ALTER TABLE option to change column from NULL to NOT NULL

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-119?page=all ]

Mike Matrigali updated DERBY-119:
-

  Component: SQL
Description: 
There was a thread about this on the Cloudscape forum

http://www-106.ibm.com/developerworks/forums/dw_thread.jsp?message=4103269cat=19thread=59941forum=370#4103269

Since this describes the problem, I will just copy the content of that entry as 
my description.


The content of this was 


Hi,

I stumbled across a behaviour of cloudscape which is not a bug but IMHO an 
implementation choice. To assign a primary key to a table using ALTER TABLE all 
columns must be declared NOT NULL first, which can only be specified upon 
column creation (no ALTER TABLE statement exists to change the NOT NULL 
property of a column).

Most databases I know do two things differently:
1) when a primary key is assigned all pk columns are automatically set to NOT 
NULL, if one of them contains NULL values, the ALTER TABLE statement fails
2) it is possible to alter the column to set the NOT NULL property after column 
creation (fails when there are already records containing NULL values)

If I have understood the limitations correctly in Cloudscape I have no choice 
but to remove and re-add the column which is supposed to be used in the primary 
key, if it is not already declared as NOT NULL. This means that in the case of 
a table containing valid data (unique and not null) in the column in all 
records, I would have to export the data, remove and re-add the column and 
reimport that data, which would not be necessary e.g. in Oracle or MaxDB.

Is it possible to change that behaviour or is there a good reason for it? It 
looks as if it makes the life of the user more difficult than necessary for 
certain metadata manipulations. Making it possible to alter the NOT NULL 
property of a column would solve this and IMHO having a primary key constraint 
do this implicitly makes sense as well. 

Thanks in advance for any insight on this,

Robert
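To make the request concrete, here is a sketch of the wished-for statements (the syntax is hypothetical at the time of this report and is shown only to illustrate the feature):

```sql
-- change the nullability after column creation; this should fail if any
-- row already holds NULL in the column
ALTER TABLE my_table ALTER COLUMN my_col NOT NULL;

-- with point 1) above, this would implicitly set the pk columns NOT NULL
ALTER TABLE my_table ADD CONSTRAINT my_pk PRIMARY KEY (my_col);
```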




Environment: 

 Add ALTER TABLE option to change column from NULL to NOT NULL
 -

  Key: DERBY-119
  URL: http://issues.apache.org/jira/browse/DERBY-119
  Project: Derby
 Type: New Feature
   Components: SQL
 Reporter: Bernd Ruehlicke


 There was a thread about this on the Cloudscape forum
 http://www-106.ibm.com/developerworks/forums/dw_thread.jsp?message=4103269cat=19thread=59941forum=370#4103269
 Since this describes the problem, I will just copy the content of that entry 
 as my description.
 The content of this was 
 
 Hi,
 I stumbled across a behaviour of cloudscape which is not a bug but IMHO an 
 implementation choice. To assign a primary key to a table using ALTER TABLE 
 all columns must be declared NOT NULL first, which can only be specified upon 
 column creation (no ALTER TABLE statement exists to change the NOT NULL 
 property of a column).
 Most databases I know do two things differently:
 1) when a primary key is assigned all pk columns are automatically set to NOT 
 NULL, if one of them 

[jira] Updated: (DERBY-103) Global Oracle/Axion style sequence generator

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-103?page=all ]

Mike Matrigali updated DERBY-103:
-

  Component: SQL
Description: 
The identity column generator is just not enough. It would be wonderful if 
Derby had commands like:

CREATE SEQUENCE mySeq;

values mySeq.nextval;
values mySeq.currval;

CREATE TABLE my_table (
id integer default mySeq.nextval,
value varchar(40)
)

DROP SEQUENCE mySeq;
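For comparison, the closest facility available today is an identity column, which is per-table rather than a global sequence (a sketch using Derby's identity syntax):

```sql
CREATE TABLE my_table (
    id INTEGER GENERATED ALWAYS AS IDENTITY,
    value VARCHAR(40)
);
-- unlike a sequence, this generator cannot be shared across tables
-- or queried with nextval/currval
```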





Environment: 

 Global Oracle/Axion style sequence generator
 

  Key: DERBY-103
  URL: http://issues.apache.org/jira/browse/DERBY-103
  Project: Derby
 Type: Wish
   Components: SQL
 Reporter: Bernd Ruehlicke


 The identity column generator is just not enough. It would be wonderful if 
 Derby had commands like:
 CREATE SEQUENCE mySeq;
 values mySeq.nextval;
 values mySeq.currval;
 CREATE TABLE my_table (
 id integer default mySeq.nextval,
 value varchar(40)
 )
 DROP SEQUENCE mySeq;




[jira] Updated: (DERBY-85) NPE when creating a trigger on a table and default schema doesn't exist.

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-85?page=all ]

Mike Matrigali updated DERBY-85:


  Component: SQL
Description: 
BACKGROUND:

When connecting to a Derby db with a user id and password, the default schema 
is USER.  For example, if I connect with:

ij> connect 'jdbc:derby:myDB;user=someUser;password=somePwd';

then the default schema is SOMEUSER.

PROBLEM:

It turns out that if a table t1 exists in a non-default schema and the default 
schema (in this case, SOMEUSER) doesn't exist yet (because no objects have 
been created in that schema), then attempts to create a trigger on t1 using its 
qualified name will lead to a null pointer exception in the Derby engine.

REPRO:

In ij:

-- Create database with default schema SOMEUSER.
ij> connect 'jdbc:derby:myDB;create=true;user=someUser;password=somePwd';

-- Create table t1 in a non-default schema; in this case, call it ITKO.
ij> create table itko.t1 (i int);
0 rows inserted/updated/deleted

-- Now schema ITKO exists, and T1 exists in schema ITKO, but default schema 
SOMEUSER does NOT exist, because we haven't created any objects in that schema 
yet.

-- So now we try to create a trigger in the ITKO (i.e. the non-default) 
schema...
ij> create trigger trig1 after update on itko.t1 for each row mode db2sql 
select * from sys.systables;
ERROR XJ001: Java exception: ': java.lang.NullPointerException'.

A look at the derby.log file shows the stack trace given below.  In short, it 
looks like the compilation schema field of SYS.SYSTRIGGERS isn't getting set, 
and so it ends up being null.  That causes the NPE in subsequent processing...

java.lang.NullPointerException
at 
org.apache.derby.impl.sql.catalog.SYSSTATEMENTSRowFactory.makeSYSSTATEMENTSrow(SYSSTATEMENTSRowFactory.java:200)
at 
org.apache.derby.impl.sql.catalog.DataDictionaryImpl.addSPSDescriptor(DataDictionaryImpl.java:2890)
at 
org.apache.derby.impl.sql.execute.CreateTriggerConstantAction.createSPS(CreateTriggerConstantAction.java:354)
at 
org.apache.derby.impl.sql.execute.CreateTriggerConstantAction.executeConstantAction(CreateTriggerConstantAction.java:258)
at 
org.apache.derby.impl.sql.execute.MiscResultSet.open(MiscResultSet.java:56)
at 
org.apache.derby.impl.sql.GenericPreparedStatement.execute(GenericPreparedStatement.java:366)
at 
org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(EmbedStatement.java:1100)
at 
org.apache.derby.impl.jdbc.EmbedStatement.execute(EmbedStatement.java:509)
at 
org.apache.derby.impl.jdbc.EmbedStatement.execute(EmbedStatement.java:467)
at org.apache.derby.impl.tools.ij.ij.executeImmediate(ij.java:299)
at org.apache.derby.impl.tools.ij.utilMain.doCatch(utilMain.java:433)
at org.apache.derby.impl.tools.ij.utilMain.go(utilMain.java:310)
at org.apache.derby.impl.tools.ij.Main.go(Main.java:210)
at org.apache.derby.impl.tools.ij.Main.mainCore(Main.java:176)
at org.apache.derby.impl.tools.ij.Main14.main(Main14.java:56)
at org.apache.derby.tools.ij.main(ij.java:60)
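Until the NPE is fixed, a possible workaround (our suggestion, not verified against this exact repro) is to make sure the default schema exists, or to switch the current schema, before creating the trigger:

```sql
-- materialize the default schema so trigger compilation has a schema:
CREATE SCHEMA someUser;
-- or compile in the schema that owns the table:
SET SCHEMA itko;
CREATE TRIGGER trig1 AFTER UPDATE ON itko.t1 FOR EACH ROW MODE DB2SQL
    SELECT * FROM sys.systables;
```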


[jira] Updated: (DERBY-80) RFE: System Function for Diagnostic Info

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-80?page=all ]

Mike Matrigali updated DERBY-80:


  Component: Services
Description: 
Feature Request:
A system function that lists internal diagnostic information:

- A list of all Derby properties with their actual values.

- The size and usage of the page cache.

- Location of the system directory, temp directories, log file, 

- The names and state of all open databases.

- A list of the active sessions, transactions, etc.

etc.

Reason: I find it difficult to control the effect of setting Derby properties, 
e.g. derby.storage.pageCacheSize.
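A partial workaround exists for the properties part of this request: individual values can be read with the SYSCS_UTIL.SYSCS_GET_DATABASE_PROPERTY system function (to our understanding it returns NULL for properties that are not explicitly set, so it does not reveal effective defaults):

```sql
VALUES SYSCS_UTIL.SYSCS_GET_DATABASE_PROPERTY('derby.storage.pageCacheSize');
```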


Environment: 

 RFE: System Function for Diagnostic Info
 

  Key: DERBY-80
  URL: http://issues.apache.org/jira/browse/DERBY-80
  Project: Derby
 Type: New Feature
   Components: Services
 Reporter: Christian d'Heureuse


 Feature Request:
 A system function that lists internal diagnostic information:
 - A list of all Derby properties with their actual values.
 - The size and usage of the page cache.
 - Location of the system directory, temp directories, log file, 
 - The names and state of all open databases.
 - A list of the active sessions, transactions, etc.
 etc.
 Reason: I find it difficult to control the effect of setting Derby 
 properties, e.g. derby.storage.pageCacheSize.




[jira] Updated: (DERBY-37) detection of incorrect types comparison is done at ? parameters

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-37?page=all ]

Mike Matrigali updated DERBY-37:


  Component: SQL
Description: 
java code:

PreparedStatement ps = conn.prepareStatement(statement);

This statement:

SELECT 
THIS.CODE_EID,THIS.COMPOSED_EID,'org.jpox.samples.applicationidentity.ChildComposedIntID'
 as JPOXMETADATA ,THIS.INTEGER_IDX AS 
JPOXORDER0,ELEMENT_1.CODE,ELEMENT_1.COMPOSED,ELEMENT_1.DESCRIPTION,ELEMENT_1.NAME
 FROM NORMALLISTAPPLICATIONID_COMPOS THIS INNER JOIN CHILDCOMPOSEDINTID 
ELEMENT ON THIS.CODE_EID = ELEMENT.CODE AND THIS.COMPOSED_EID = 
ELEMENT.COMPOSED INNER JOIN COMPOSEDINTID ELEMENT_1 ON ELEMENT.CODE = 
ELEMENT_1.CODE AND ELEMENT.COMPOSED = ELEMENT_1.COMPOSED WHERE 
THIS.NORMALLISTAPPLICATIONID_ID_OID = ? AND THIS.INTEGER_IDX = ? ORDER BY 
JPOXORDER0

results in:

SQL Exception: Comparisons between 'VARCHAR' and 'INTEGER' are not supported.

possible cause:

The INTEGER_IDX is an INTEGER column. While running the prepareStatement, 
JDBC/Cloudscape thinks I'm comparing INTEGER_IDX with ? (question mark) 
(INTEGER vs VARCHAR). This is not true, ? (question mark) is a parameter that 
will be later substituted in my code by an integer value.

  was:
java code:

PreparedStatement ps = conn.prepareStatement(statement);

This statement:

SELECT 
THIS.CODE_EID,THIS.COMPOSED_EID,'org.jpox.samples.applicationidentity.ChildComposedIntID'
 as JPOXMETADATA ,THIS.INTEGER_IDX AS 
JPOXORDER0,ELEMENT_1.CODE,ELEMENT_1.COMPOSED,ELEMENT_1.DESCRIPTION,ELEMENT_1.NAME
 FROM NORMALLISTAPPLICATIONID_COMPOS THIS INNER JOIN CHILDCOMPOSEDINTID 
ELEMENT ON THIS.CODE_EID = ELEMENT.CODE AND THIS.COMPOSED_EID = 
ELEMENT.COMPOSED INNER JOIN COMPOSEDINTID ELEMENT_1 ON ELEMENT.CODE = 
ELEMENT_1.CODE AND ELEMENT.COMPOSED = ELEMENT_1.COMPOSED WHERE 
THIS.NORMALLISTAPPLICATIONID_ID_OID = ? AND THIS.INTEGER_IDX = ? ORDER BY 
JPOXORDER0

results in:

SQL Exception: Comparisons between 'VARCHAR' and 'INTEGER' are not supported.

possible cause:

The INTEGER_IDX is an INTEGER column. While running the prepareStatement, 
JDBC/Cloudscape thinks I'm comparing INTEGER_IDX with ? (question mark) 
(INTEGER vs VARCHAR). This is not true, ? (question mark) is a parameter that 
will be later substituted in my code by an integer value.


 detection of incorrect types comparison is done at ? parameters
 ---

  Key: DERBY-37
  URL: http://issues.apache.org/jira/browse/DERBY-37
  Project: Derby
 Type: Bug
   Components: SQL
  Environment: Cloudscape 10 beta 
 Reporter: Erik Bengtson


 java code:
 PreparedStatement ps = conn.prepareStatement(statement);
 This statement:
 SELECT 
 THIS.CODE_EID,THIS.COMPOSED_EID,'org.jpox.samples.applicationidentity.ChildComposedIntID'
  as JPOXMETADATA ,THIS.INTEGER_IDX AS 
 JPOXORDER0,ELEMENT_1.CODE,ELEMENT_1.COMPOSED,ELEMENT_1.DESCRIPTION,ELEMENT_1.NAME
  FROM NORMALLISTAPPLICATIONID_COMPOS THIS INNER JOIN CHILDCOMPOSEDINTID 
 ELEMENT ON THIS.CODE_EID = ELEMENT.CODE AND THIS.COMPOSED_EID = 
 ELEMENT.COMPOSED INNER JOIN COMPOSEDINTID ELEMENT_1 ON ELEMENT.CODE = 
 ELEMENT_1.CODE AND ELEMENT.COMPOSED = ELEMENT_1.COMPOSED WHERE 
 THIS.NORMALLISTAPPLICATIONID_ID_OID = ? AND THIS.INTEGER_IDX = ? ORDER BY 
 JPOXORDER0
 results in:
 SQL Exception: Comparisons between 'VARCHAR' and 'INTEGER' are not supported.
 possible cause:
 The INTEGER_IDX is an INTEGER column. While running the prepareStatement, 
 JDBC/Cloudscape thinks I'm comparing INTEGER_IDX with ? (question mark) 
 (INTEGER vs VARCHAR). This is not true, ? (question mark) is a parameter that 
 will be later substituted in my code by an integer value.
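A hedged workaround sketch (my suggestion, not from the report): wrap the parameter in an explicit CAST so the prepare-time type check has a concrete type to compare against the INTEGER column, instead of guessing. The helper name castToInteger is hypothetical:

```java
// Hypothetical helper: rewrite a bare "?" placeholder as an explicitly
// typed parameter, e.g. "THIS.INTEGER_IDX = CAST(? AS INTEGER)", so the
// comparison is INTEGER vs INTEGER at prepare time.
public class CastParam {
    static String castToInteger(String placeholder) {
        return "CAST(" + placeholder + " AS INTEGER)";
    }

    public static void main(String[] args) {
        String where = "WHERE THIS.INTEGER_IDX = " + castToInteger("?");
        System.out.println(where); // prints WHERE THIS.INTEGER_IDX = CAST(? AS INTEGER)
    }
}
```

The value is still bound normally afterwards, e.g. with ps.setInt(...).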

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-341) Client should disallow XAConnection getConnection() when a global transaction has been started and a logical connection has already been obtained

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-341?page=all ]

Mike Matrigali updated DERBY-341:
-

Component: JDBC

 Client should disallow XAConnection getConnection() when a global transaction 
 has been started and a logical connection has already been obtained
 -

  Key: DERBY-341
  URL: http://issues.apache.org/jira/browse/DERBY-341
  Project: Derby
 Type: Bug
   Components: JDBC
 Versions: 10.1.1.0
 Reporter: Kathey Marsden
  Fix For: 10.2.0.0


 If a global transaction has been started and a logical connection has already
 been obtained, the client should disallow XAConnection getConnection().
 Repro:
 With the client the script below does not give an error.
 ij> connect 'wombat;create=true';
 ij> disconnect;
 ij> xa_datasource 'wombat';
 ij> xa_connect user 'APP' password 'xxx';
 Connection number: 3.
 ij> -- start new transaction
 xa_start xa_noflags 0;
 ij> xa_getconnection;
 ij> -- Should not be able to get connection again
 xa_getconnection;
 With embedded we get:
 ERROR XJ059: Cannot close a connection while a global transaction is still 
 active.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Closed: (DERBY-351) auto sequencing columns

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-351?page=all ]
 
Mike Matrigali closed DERBY-351:


Resolution: Invalid

Just send mail to derby user list for questions, don't file JIRA issues.

 auto sequencing columns
 ---

  Key: DERBY-351
  URL: http://issues.apache.org/jira/browse/DERBY-351
  Project: Derby
 Type: Bug
 Reporter: Wendy Gibbons


 I am trying to create a sequence on a Derby table. I can not find any of the
 ways I would normally do this: auto-sequencing columns, or creating a separate
 sequence (and using it in the insert).
 How am I supposed to implement a sequence, please? Sorry, but I couldn't find a
 bulletin board or anywhere else to ask for help.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-410) ClientDataSource should not require serverName/portNumber to be set

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-410?page=all ]

Mike Matrigali updated DERBY-410:
-

Component: JDBC

 ClientDataSource should not require serverName/portNumber to  be set
 

  Key: DERBY-410
  URL: http://issues.apache.org/jira/browse/DERBY-410
  Project: Derby
 Type: Bug
   Components: JDBC
 Versions: 10.1.1.0, 10.2.0.0
 Reporter: Kathey Marsden
 Assignee: Philip Wilder


 The ClientDataSource property serverName should default to localhost but
 is currently required.
 http://incubator.apache.org/derby/docs/adminguide/cadminappsclient.html
 See the repro for DERBY-409
 and comment out the lines
 ds.setServerName("localhost");
 ds.setPortNumber(1527);
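A hedged sketch of the requested defaulting behavior (localhost, and Derby's documented default port 1527); the class and helper names are hypothetical and this is not the actual ClientDataSource code:

```java
// Hypothetical illustration of the requested behavior: an unset serverName
// falls back to "localhost" and an unset portNumber to 1527 (Derby's
// default network server port), instead of the connection attempt failing.
public class DataSourceDefaults {
    static String serverNameOrDefault(String configured) {
        return (configured == null || configured.isEmpty()) ? "localhost" : configured;
    }

    static int portOrDefault(Integer configured) {
        return configured == null ? 1527 : configured;
    }

    public static void main(String[] args) {
        System.out.println(serverNameOrDefault(null)); // prints localhost
        System.out.println(portOrDefault(null));       // prints 1527
    }
}
```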

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-439) IDENTITY_VAL_LOCAL() returns null after INSERT INTO table VALUES(DEFAULT)

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-439?page=all ]

Mike Matrigali updated DERBY-439:
-

Component: SQL

 IDENTITY_VAL_LOCAL() returns null after INSERT INTO table VALUES(DEFAULT)
 -

  Key: DERBY-439
  URL: http://issues.apache.org/jira/browse/DERBY-439
  Project: Derby
 Type: Bug
   Components: SQL
 Versions: 10.0.2.1
  Environment: Mandriva 2005 Linux. Derby 10.0.2.1
 Reporter: Andy Jefferson


 I have a table as follows
 CREATE TABLE MYTABLE
 (
 MYTABLE_ID BIGINT NOT NULL generated always as identity (start with 1)
 )
 and then I issue
 INSERT INTO MYTABLE VALUES (DEFAULT);
 followed by
 VALUES IDENTITY_VAL_LOCAL();
 This returns null!
 If instead my table was
 CREATE TABLE MYTABLE
 (
 MYTABLE_ID BIGINT NOT NULL generated always as identity (start with 1),
 NAME VARCHAR(20) NULL
 )
 and I then issue
 INSERT INTO MYTABLE (NAME) VALUES ('NEW NAME');
 followed by
 VALUES IDENTITY_VAL_LOCAL();
 I get the value assigned to the identity column correctly.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-412) Connection toString should show type information and the meaning of the identifier that it prints

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-412?page=all ]

Mike Matrigali updated DERBY-412:
-

Component: JDBC

 Connection toString should show type information and  the meaning of the 
 identifier that it prints
 --

  Key: DERBY-412
  URL: http://issues.apache.org/jira/browse/DERBY-412
  Project: Derby
 Type: Bug
   Components: JDBC
 Versions: 10.1.1.0, 10.2.0.0
 Reporter: Kathey Marsden
 Assignee: David Van Couvering
  Fix For: 10.2.0.0


 After the change for DERBY-243 the connection toString() output is an
 integer which corresponds to the SESSIONID.  The output should identify the type
 and also the meaning of the identifier that it prints.  Perhaps a format that
 appends the default toString output with the sessionid information as it
 prints in the derby.log would be more informative.
 [EMAIL PROTECTED] (SESSIONID = 2)
 Ultimately this could be expanded to included other diagnostic information e.g
 [EMAIL PROTECTED] (XID = 132), (SESSIONID = 5), (DATABASE = wombat), (DRDAID 
 = NF01.H324-940125304405039114{7})

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-421) starting an XA transaction resets the isolation level set with SET CURRENT ISOLATION

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-421?page=all ]

Mike Matrigali updated DERBY-421:
-

Component: JDBC

 starting an XA transaction resets the isolation level set with SET CURRENT 
 ISOLATION
 

  Key: DERBY-421
  URL: http://issues.apache.org/jira/browse/DERBY-421
  Project: Derby
 Type: Sub-task
   Components: JDBC
 Reporter: Kathey Marsden


 When an XA Transaction is started the isolation level set with SET CURRENT 
 ISOLATION gets reset to CS.
 Embedded setTransactionIsolation does not have this problem, but this problem
 is the root cause of DERBY-414 because the client implements
 setTransactionIsolation by sending SET CURRENT ISOLATION.
 $ java TestSetCurrentIsolation
 Database product: Apache Derby
 Database version: 10.2.0.0 alpha
 Driver name:  Apache Derby Embedded JDBC Driver
 Driver version:   10.2.0.0 alpha
 SET CURRENT ISOLATION = UR
 CURRENT ISOLATION: UR
 getTransactionIsolation:TRANSACTION_READ_UNCOMMITTED:1
 Isolation level after xa start
 CURRENT ISOLATION: CS
 getTransactionIsolation:TRANSACTION_READ_COMMITTED:2
 $
 import java.sql.*;
 import javax.sql.*;
 import javax.transaction.xa.*;
 public class TestSetCurrentIsolation
 {
 public static void main(String[] args) throws Throwable
 {
 try
 {
  final org.apache.derby.jdbc.EmbeddedXADataSource ds =
  new org.apache.derby.jdbc.EmbeddedXADataSource();
  ds.setDatabaseName("C:\\drivers\\derby\\databases\\SCHEDDB");
  ds.setUser("dbuser1");
  ds.setPassword("**");
 XAConnection xaConn = ds.getXAConnection();
 Connection conn = xaConn.getConnection();
 conn.setAutoCommit(true);
 System.out.println("Database product: " +
 conn.getMetaData().getDatabaseProductName());
 System.out.println("Database version: " +
 conn.getMetaData().getDatabaseProductVersion());
 System.out.println("Driver name: " +
 conn.getMetaData().getDriverName());
 System.out.println("Driver version: " +
 conn.getMetaData().getDriverVersion());
 Statement stmt = conn.createStatement();
 System.out.println("SET CURRENT ISOLATION = UR");
 stmt.executeUpdate("SET CURRENT ISOLATION = UR");
 showIsolationLevel(conn);
 conn.setAutoCommit(false);
 XAResource xaRes = xaConn.getXAResource();
 Xid xid = new TestXid(1, (byte) 32, (byte) 32);
 xaRes.start(xid, XAResource.TMNOFLAGS);
 System.out.println("Isolation level after xa start");
 showIsolationLevel(conn);
 
 xaRes.end(xid, XAResource.TMSUCCESS);
 xaRes.rollback(xid);
 conn.close();
 xaConn.close();
 }
 catch (SQLException sqlX)
 {
 System.out.println("Error on thread 1.");
 do sqlX.printStackTrace();
 while ((sqlX = sqlX.getNextException()) != null);
 }
 catch (Throwable th)
 {
 System.out.println("Error on thread 1.");
 do th.printStackTrace();
 while ((th = th.getCause()) != null);
 }
 }
   /**
* @param conn
* @throws SQLException
*/
   private static void showIsolationLevel(Connection conn) throws
 SQLException {
   PreparedStatement ps = conn.prepareStatement("VALUES CURRENT ISOLATION");
   ResultSet rs = ps.executeQuery();
   //ResultSet rs = conn.createStatement().executeQuery("VALUES CURRENT ISOLATION");
   rs.next();
   System.out.println("CURRENT ISOLATION: " + rs.getString(1));
   System.out.println("getTransactionIsolation: " +
   getIsoLevelName(conn.getTransactionIsolation()));
   }
 
   public static String getIsoLevelName(int level)
   {
   switch (level) {
   case java.sql.Connection.TRANSACTION_REPEATABLE_READ:
   return "TRANSACTION_REPEATABLE_READ: " + level;
   case java.sql.Connection.TRANSACTION_READ_COMMITTED:
   return "TRANSACTION_READ_COMMITTED: " + level;
   case java.sql.Connection.TRANSACTION_SERIALIZABLE:
   return "TRANSACTION_SERIALIZABLE: " + level;
   case java.sql.Connection.TRANSACTION_READ_UNCOMMITTED:
   return "TRANSACTION_READ_UNCOMMITTED: " + level;
   }
   return "UNEXPECTED_ISO_LEVEL";
   }
 }
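The repro calls new TestXid(1, (byte) 32, (byte) 32), but the TestXid class is not included in the mail. Below is a guessed minimal stand-in (field layout assumed: format id, then a one-byte global transaction id and a one-byte branch qualifier) that compiles against the JDK's javax.transaction.xa.Xid interface, sufficient to run the test program:

```java
import javax.transaction.xa.Xid;

// Assumed minimal Xid implementation; the real TestXid used in the repro
// may differ, but any valid Xid with a distinct gtrid/bqual should do.
public class TestXid implements Xid {
    private final int formatId;
    private final byte[] gtrid;
    private final byte[] bqual;

    public TestXid(int formatId, byte gtrid, byte bqual) {
        this.formatId = formatId;
        this.gtrid = new byte[] { gtrid };
        this.bqual = new byte[] { bqual };
    }

    public int getFormatId() { return formatId; }
    public byte[] getGlobalTransactionId() { return gtrid.clone(); }
    public byte[] getBranchQualifier() { return bqual.clone(); }
}
```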

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of 

[jira] Updated: (DERBY-1) Can't create a new db on OS X

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1?page=all ]

Mike Matrigali updated DERBY-1:
---

  Component: Store
Description: 
This problem does not occur when I use the same jars on Linux.

I am unable to create a new database in ij by using the following command:

connect 'jdbc:derby:testdb;create=true';

I get the following output:

ERROR XJ041: Failed to create database 'testdb', see the next exception for 
details.
ERROR XBM01: Startup failed due to an exception, see next exception for details.
ERROR XJ001: Java exception: 
'/Users/tom/dev/java/derby-bin/lib/testdb/log/log1.dat (File exists): 
java.io.FileNotFoundException'.

All users have write permissions to the directory so it's not getting blocked 
there.  I'm not sure what's going on.  I've included the contents of derby.log 
below.  I've also included the result of running sysinfo on my machine below 
that.


2004-09-24 20:33:53.762 GMT:
 Booting Derby version IBM Corp. - Apache Derby - 10.0.2.0 - (30301): instance 
c013800d-00ff-3226-5601-0015bd70
on database directory /Users/tom/dev/java/derby-bin/lib/testdb 


2004-09-24 20:33:53.821 GMT:
Shutting down instance c013800d-00ff-3226-5601-0015bd70

2004-09-24 20:33:53.837 GMT Thread[main,5,main] Cleanup action starting
ERROR XBM01: Startup failed due to an exception, see next exception for details.
at 
org.apache.derby.iapi.error.StandardException.newException(StandardException.java)
at 
org.apache.derby.iapi.services.monitor.Monitor.exceptionStartingModule(Monitor.java)
at org.apache.derby.impl.store.raw.log.LogToFile.boot(LogToFile.java)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java)
at 
org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.startModule(BaseMonitor.java)
at 
org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Monitor.java)
at 
org.apache.derby.impl.store.raw.data.BaseDataFileFactory.bootLogFactory(BaseDataFileFactory.java)
at 
org.apache.derby.impl.store.raw.data.BaseDataFileFactory.setRawStoreFactory(BaseDataFileFactory.java)
at org.apache.derby.impl.store.raw.RawStore.boot(RawStore.java)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java)
at 
org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.startModule(BaseMonitor.java)
at 
org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Monitor.java)
at 
org.apache.derby.impl.store.access.RAMAccessManager.boot(RAMAccessManager.java)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java)
at 
org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.startModule(BaseMonitor.java)
at 
org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Monitor.java)
at org.apache.derby.impl.db.BasicDatabase.bootStore(BasicDatabase.java)
at org.apache.derby.impl.db.BasicDatabase.boot(BasicDatabase.java)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java)
at 
org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.bootService(BaseMonitor.java)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(BaseMonitor.java)
at 
org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Monitor.java)
at 
org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(EmbedConnection.java)
at 
org.apache.derby.impl.jdbc.EmbedConnection.&lt;init&gt;(EmbedConnection.java)
at 
org.apache.derby.impl.jdbc.EmbedConnection20.&lt;init&gt;(EmbedConnection20.java)
at 
org.apache.derby.impl.jdbc.EmbedConnection30.&lt;init&gt;(EmbedConnection30.java)
at org.apache.derby.jdbc.Driver30.getNewEmbedConnection(Driver30.java)
at org.apache.derby.jdbc.Driver169.connect(Driver169.java)
at java.sql.DriverManager.getConnection(DriverManager.java:512)
at java.sql.DriverManager.getConnection(DriverManager.java:140)
at org.apache.derby.impl.tools.ij.ij.dynamicConnection(ij.java)
at org.apache.derby.impl.tools.ij.ij.ConnectStatement(ij.java)
at org.apache.derby.impl.tools.ij.ij.ijStatement(ij.java)
at org.apache.derby.impl.tools.ij.utilMain.go(utilMain.java)
at org.apache.derby.impl.tools.ij.Main.go(Main.java)
at org.apache.derby.impl.tools.ij.Main.mainCore(Main.java)
at org.apache.derby.impl.tools.ij.Main14.main(Main14.java)
at 

[jira] Updated: (DERBY-273) The derbynet/dataSourcePermissions_net.java test fails intermittently

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-273?page=all ]

Mike Matrigali updated DERBY-273:
-

  Component: Network Server
Description: 
The test fails in the derbyall/derbynetclientmats/derbynetmats suite stack with 
the following diff:
*** Start: dataSourcePermissions_net jdk1.4.2 DerbyNetClient 
derbynetmats:derbynetmats 2005-05-11 04:24:11 ***
17a18,19
 org.apache.derby.iapi.services.context.ShutdownException: 
 agentThread[DRDAConnThread_2,5,derby.daemons]
Test Failed.


  was:
The test fails in the derbyall/derbynetclientmats/derbynetmats suite stack with 
the following diff:
*** Start: dataSourcePermissions_net jdk1.4.2 DerbyNetClient 
derbynetmats:derbynetmats 2005-05-11 04:24:11 ***
17a18,19
 org.apache.derby.iapi.services.context.ShutdownException: 
 agentThread[DRDAConnThread_2,5,derby.daemons]
Test Failed.



 The derbynet/dataSourcePermissions_net.java test fails intermittently
 -

  Key: DERBY-273
  URL: http://issues.apache.org/jira/browse/DERBY-273
  Project: Derby
 Type: Bug
   Components: Network Server
  Environment: 1.4.2 JVM (both Sun and IBM)
 Reporter: Jack Klebanoff
 Assignee: Tomohito Nakayama


 The test fails in the derbyall/derbynetclientmats/derbynetmats suite stack 
 with the following diff:
 *** Start: dataSourcePermissions_net jdk1.4.2 DerbyNetClient 
 derbynetmats:derbynetmats 2005-05-11 04:24:11 ***
 17a18,19
  org.apache.derby.iapi.services.context.ShutdownException: 
  agentThread[DRDAConnThread_2,5,derby.daemons]
 Test Failed.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Closed: (DERBY-185) Update incubator status page

2005-08-16 Thread Jean T. Anderson (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-185?page=all ]
 
Jean T. Anderson closed DERBY-185:
--

Resolution: Fixed

Derby graduated. jta did a final update of the incubator status page on 
2005-08-01, revision 226892, and there is no further need to update it.

 Update incubator status page
 

  Key: DERBY-185
  URL: http://issues.apache.org/jira/browse/DERBY-185
  Project: Derby
 Type: Task
   Components: Web Site
 Reporter: Jeremy Boynes
 Assignee: Jeremy Boynes


 Update the page
 http://incubator.apache.org/projects/derby.html
 to reflect current status. I need to figure out what to edit and where we 
 are. Please attach any comments to this issue

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-500) Update/Select failure when BLOB/CLOB fields updated in several rows by PreparedStatement using setBinaryStream and setCharacterStream

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-500?page=all ]

Mike Matrigali updated DERBY-500:
-

Component: JDBC

 Update/Select failure when BLOB/CLOB fields updated in several rows by 
 PreparedStatement using setBinaryStream and setCharacterStream
 -

  Key: DERBY-500
  URL: http://issues.apache.org/jira/browse/DERBY-500
  Project: Derby
 Type: Bug
   Components: JDBC
 Versions: 10.1.1.0
  Environment: Windows 2000, java SDK 1.4
 Reporter: Peter Kovgan


 I have a table containing BLOB and CLOB fields.
 The create-table string is:
 private static final String CREATE = "CREATE TABLE ta ( " +
 "ta_id INTEGER NOT NULL, " +
 "mname VARCHAR( 254 ) NOT NULL, " +
 "mvalue INT NOT NULL, " +
 "mdate DATE NOT NULL, " +
 "bytedata BLOB NOT NULL, " +
 "chardata CLOB NOT NULL, " +
 "PRIMARY KEY ( ta_id ))";
 Then I insert 2000 rows in the table.
 Then I update all 2000 rows with the command:
 private static final String UPDATE = "UPDATE ta " +
   "SET bytedata=? ,chardata=? " +
   "WHERE mvalue=?";
 /** create blob and clob arrays **/
 int len1 = 1;     // blob data length
 int len2 = 15000; // clob data length
 byte buf[] = new byte[len1];
 for (int i = 0; i < len1; i++) {
   buf[i] = (byte) 45;
 }
 ByteArrayInputStream bais = new ByteArrayInputStream(buf);
 
 char[] bufc = new char[len2];
 for (int i = 0; i < bufc.length; i++) {
   bufc[i] = 'b';
 }
 CharArrayReader car = new CharArrayReader(bufc);
 /***/
 PreparedStatement pstmt = connection.prepareStatement(UPDATE);
 pstmt.setBinaryStream(1, bais, len1);
 pstmt.setCharacterStream(2, car, len2);
 pstmt.setInt(3, 5000);
 int updated = pstmt.executeUpdate();
 pstmt.close();
 System.out.println("updated = " + updated);
 All 2000 rows were updated, because I receive the output: updated = 2000
 But if I run a select (SELECT bytedata, chardata FROM ta) after the update,
 the select fails with the error:
 ERROR XSDA7: Restore of a serializable or SQLData object of class , attempted 
 to
  read more data than was originally stored
 at 
 org.apache.derby.iapi.error.StandardException.newException(StandardEx
 ception.java)
 at 
 org.apache.derby.impl.store.raw.data.StoredPage.readRecordFromArray(S
 toredPage.java)
 at 
 org.apache.derby.impl.store.raw.data.StoredPage.restoreRecordFromSlot
 (StoredPage.java)
 at 
 org.apache.derby.impl.store.raw.data.BasePage.fetchFromSlot(BasePage.
 java)
 at 
 org.apache.derby.impl.store.access.conglomerate.GenericScanController
 .fetchRows(GenericScanController.java)
 at 
 org.apache.derby.impl.store.access.heap.HeapScan.fetchNextGroup(HeapS
 can.java)
 at 
 org.apache.derby.impl.sql.execute.BulkTableScanResultSet.reloadArray(
 BulkTableScanResultSet.java)
 at 
 org.apache.derby.impl.sql.execute.BulkTableScanResultSet.getNextRowCo
 re(BulkTableScanResultSet.java)
 at 
 org.apache.derby.impl.sql.execute.NestedLoopJoinResultSet.getNextRowC
 ore(NestedLoopJoinResultSet.java)
 at 
 org.apache.derby.impl.sql.execute.NestedLoopLeftOuterJoinResultSet.ge
 tNextRowCore(NestedLoopLeftOuterJoinResultSet.java)
 at 
 org.apache.derby.impl.sql.execute.ProjectRestrictResultSet.getNextRow
 Core(ProjectRestrictResultSet.java)
 at 
 org.apache.derby.impl.sql.execute.SortResultSet.getRowFromResultSet(S
 ortResultSet.java)
 at 
 org.apache.derby.impl.sql.execute.SortResultSet.getNextRowFromRS(Sort
 ResultSet.java)
 at 
 org.apache.derby.impl.sql.execute.SortResultSet.loadSorter(SortResult
 Set.java)
 at 
 org.apache.derby.impl.sql.execute.SortResultSet.openCore(SortResultSe
 t.java)
 at 
 org.apache.derby.impl.sql.execute.BasicNoPutResultSetImpl.open(BasicN
 oPutResultSetImpl.java)
 at 
 org.apache.derby.impl.sql.GenericPreparedStatement.execute(GenericPre
 paredStatement.java)
 at 
 org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(EmbedState
 ment.java)
 at 
 org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(Em
 bedPreparedStatement.java)
 at 
 org.apache.derby.impl.jdbc.EmbedPreparedStatement.execute(EmbedPrepar
 edStatement.java)
 at com.beep_beep.dbtest.complex.Benchmark.testSelect(Unknown Source)
 at 
 com.beep_beep.dbtest.complex.Benchmark.executeSimplestBigTable(Unknown Sour
 ce)
 at com.beep_beep.dbtest.complex.Benchmark.testBigTable(Unknown Source)
 at 
 com.beep_beep.dbtest.complex.Benchmark.executeDegradationBenchmark(Unknown
 Source)
 at com.beep_beep.dbtest.complex.Benchmark.main(Unknown Source)
 From the stack trace and from console I see that Update 

[jira] Closed: (DERBY-509) DERBY-132 resolved ? Table not automatically compressed

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-509?page=all ]
 
Mike Matrigali closed DERBY-509:


Resolution: Invalid

This is the current expected behavior.  A full compress is not done
automatically; at the page level, deleted space is reused by subsequent
inserts. This test case is the worst-case scenario, as only every 3rd or 5th
row is deleted.  I will file a separate enhancement to somehow automatically
run compress table.

 DERBY-132 resolved ? Table not automatically compressed
 ---

  Key: DERBY-509
  URL: http://issues.apache.org/jira/browse/DERBY-509
  Project: Derby
 Type: Bug
 Versions: 10.1.1.0
  Environment: JDK 1.4.2, JDK 1.5.0,
 Windows XP
 Reporter: Volker Edelmann


 I tried a test program that repeatedly inserts a bunch of data into one table
 and repeatedly deletes a bunch of data.
   derby.executeSelect("select count(*) c from rclvalues");
   TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 200);
 // insert 2.000.000 rows
 derby.executeDelete("delete from rclvalues where MOD(id, 3) = 0");
  
   TestQueries.executeBulkInsertAnalyst(derby.getConnection(), 100);
 derby.executeDelete("delete from rclvalues where MOD(id, 5) = 0");
   derby.executeSelect("select count(*) c from rclvalues");
 At the end of the operation, the table contains approximately the same number
 of rows, but the size of the database has grown from
 581 MB to 1.22 GB. From the description of item DERBY-132, I hoped that Derby
 does the compression now (version 10.1.X.X).
 Did I overlook something, or do I still have to use SYSCS_UTIL.SYSCS_COMPRESS_TABLE?
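For completeness, the manual compression referred to here is Derby's SYSCS_UTIL.SYSCS_COMPRESS_TABLE system procedure, which takes a schema name, a table name, and a SEQUENTIAL flag. A small sketch (the helper class and method names are hypothetical) that builds the CALL statement to execute via JDBC:

```java
// Hypothetical helper: format the CALL statement for Derby's compress
// procedure. A non-zero third argument requests the slower sequential mode;
// 0 uses more memory but is faster.
public class CompressCall {
    static String compressTableSql(String schema, String table) {
        return "CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('" + schema + "', '" + table + "', 1)";
    }

    public static void main(String[] args) {
        // prints CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'RCLVALUES', 1)
        System.out.println(compressTableSql("APP", "RCLVALUES"));
    }
}
```

The resulting string would be passed to Statement.execute() on an open connection.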

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Created: (DERBY-512) enhance existing automatic space reclamation for tables

2005-08-16 Thread Mike Matrigali (JIRA)
enhance existing automatic space reclamation for tables
---

 Key: DERBY-512
 URL: http://issues.apache.org/jira/browse/DERBY-512
 Project: Derby
Type: Improvement
  Components: Store  
Reporter: Mike Matrigali
Priority: Minor


The current space reclamation system does a good job of reusing space if 
inserts follow deletes and if the 
deletes result in freeing complete pages.  In other cases unused space can grow 
in tables and can only
be reclaimed by explicitly calling the compress table interfaces by hand.

As a zero-admin database, Derby should provide some way to automatically 
reclaim this space.  

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-293) Correlate client connection with embedded connection

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-293?page=all ]

Mike Matrigali updated DERBY-293:
-

Component: JDBC

 Correlate client connection with embedded connection
 

  Key: DERBY-293
  URL: http://issues.apache.org/jira/browse/DERBY-293
  Project: Derby
 Type: Improvement
   Components: Network Server
 Versions: 10.1.1.0
  Environment: N/A
 Reporter: David Van Couvering
 Priority: Minor
  Fix For: 10.2.0.0


 There should be a way for someone to correlate a given embedded connection 
 with its matching network client connection, if such a client connection 
 exists.  
 See 
 http://article.gmane.org/gmane.comp.apache.db.derby.devel/3748
 and
 http://article.gmane.org/gmane.comp.apache.db.derby.devel/3942
 for some background info on how to get useful information out of the DRDA 
 protocol
 stream to accomplish this.
 This could be done either by modifying the toString() method of an embedded
 connection to show its associated network client connection information or (my
 preference) include this information in the proposed Connection VTI (see 
 DERBY-292).  I am worried that if we use toString() for this, the output will 
 be overly long and complicated; also, over a period of time the same embedded 
 connection may be associated with multiple client connections, resulting in a 
 changing toString() value for the embedded connection.  This seems 
 problematic if we are intending toString() to uniquely identify a connection 
 for the lifetime of the connection -- this would be a good goal to have as it 
 would enable us to do some useful debugging using the VTIs.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-292) Add a Connection VTI

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-292?page=all ]

Mike Matrigali updated DERBY-292:
-

Component: JDBC

 Add a Connection VTI
 

  Key: DERBY-292
  URL: http://issues.apache.org/jira/browse/DERBY-292
  Project: Derby
 Type: Improvement
   Components: JDBC
 Versions: 10.1.1.0
  Environment: N/A
 Reporter: David Van Couvering
 Priority: Minor
  Fix For: 10.2.0.0


 We should add a new VTI that lists all active connections and provides 
 details about each connection, such as owning user, the associated client 
 connection if there is one, etc.  Linked with the other VTIs this would be a 
 very useful debugging tool.  One of the columns should be the unique 
 connection string for that connection, once we have this available with 
 DERBY-243.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-293) Correlate client connection with embedded connection

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-293?page=all ]

Mike Matrigali updated DERBY-293:
-

Component: Network Server
   (was: JDBC)

 Correlate client connection with embedded connection
 

  Key: DERBY-293
  URL: http://issues.apache.org/jira/browse/DERBY-293
  Project: Derby
 Type: Improvement
   Components: Network Server
 Versions: 10.1.1.0
  Environment: N/A
 Reporter: David Van Couvering
 Priority: Minor
  Fix For: 10.2.0.0


 There should be a way for someone to correlate a given embedded connection 
 with its matching network client connection, if such a client connection 
 exists.  
 See 
 http://article.gmane.org/gmane.comp.apache.db.derby.devel/3748
 and
 http://article.gmane.org/gmane.comp.apache.db.derby.devel/3942
 for some background info on how to get useful information out of the DRDA 
 protocol
 stream to accomplish this.
 This could be done either by modifying the toString() method of an embedded
 connection to show its associated network client connection information or (my
 preference) include this information in the proposed Connection VTI (see 
 DERBY-292).  I am worried that if we use toString() for this, the output will 
 be overly long and complicated; also, over a period of time the same embedded 
 connection may be associated with multiple client connections, resulting in a 
 changing toString() value for the embedded connection.  This seems 
 problematic if we are intending toString() to uniquely identify a connection 
 for the lifetime of the connection -- this would be a good goal to have as it 
 would enable us to do some useful debugging using the VTIs.




[jira] Updated: (DERBY-118) Allow any build-in function as default values in table create for columns

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-118?page=all ]

Mike Matrigali updated DERBY-118:
-

  Component: SQL
Description: 
It is ok in ij to do a   values char(current_date)   but it is not allowed to 
use char(current_date) as a default value for columns; for example

CREATE TABLE DOSENOTWORK (num int, created_by varchar(40) default user, 
create_date_string varchar(40) default char(current_date))

Request: It should be allowed to use any built-in function which returns a valid 
type as part of the default value spec.


There was an e-mail thread for this and the core content/answer was:

Bernd Ruehlicke wrote:
 
 CREATE TABLE DOSENOTWORK (num int, created_by varchar(40) default 
 user, create_date_string varchar(40) default char(current_date))
 
 give an error as below - any idea why ?!??!
 

The rules for what is acceptable as a column default in Derby say that the only 
valid functions are datetime functions. 
  The logic that enforces this can be seen in the defaultTypeIsValid method 
of the file:

./java/engine/org/apache/derby/impl/sql/compile/ColumnDefinitionNode.java

The Derby Reference Manual also states this same restriction (albeit rather 
briefly):



Column Default

For the definition of a default value, a ConstantExpression is an expression 
that does not refer to any table. It can include constants, date-time special 
registers, current schemas, users, and null.



A date-time special register here means a date-time function such as 
date(current_date) in your first example. 
Since the function char is NOT a date-time function, it will throw an error.

I believe this restriction was put in place as part of the DB2 compatibility 
work that was done in Cloudscape a while back.

Hope that answers your question,
Army
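
To make Army's explanation concrete, here is a small illustrative SQL sketch of 
what the restriction accepts and rejects. The table and column names are 
hypothetical (borrowed from the report above), not from any Derby test:

```sql
-- Accepted: constants, USER, and date-time special registers as defaults.
CREATE TABLE works (
    num INT,
    created_by VARCHAR(40) DEFAULT USER,      -- special register: allowed
    created_on DATE DEFAULT CURRENT_DATE      -- date-time register: allowed
);

-- Rejected: char() is not a date-time function, so this default is refused
-- by the check in ColumnDefinitionNode.defaultTypeIsValid:
--
-- CREATE TABLE doesnotwork (
--     create_date_string VARCHAR(40) DEFAULT CHAR(CURRENT_DATE)
-- );
```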





Environment: 

 Allow any build-in function as default values in table create for columns
 -

  Key: DERBY-118
  URL: http://issues.apache.org/jira/browse/DERBY-118
  Project: Derby
 Type: Improvement
   Components: SQL
 Reporter: Bernd Ruehlicke
 Priority: Minor


 It is ok in ij to do a   values char(current_date)   but it is not allowed to 
 use char(current_date) as a default value for columns; for example
 CREATE TABLE DOSENOTWORK (num int, created_by varchar(40) default user, 
 create_date_string varchar(40) default char(current_date))
 Request: It should be allowed to use any built-in function which returns a 
 valid type as part of the default value spec.
 There was an e-mail thread for this and the core content/answer was:
 Bernd Ruehlicke wrote:
  
  CREATE TABLE DOSENOTWORK (num int, created_by varchar(40) default 
  user, create_date_string varchar(40) default char(current_date))
  
  give an error as below - any idea why ?!??!
  
 The rules for what is acceptable as a column default in Derby say that the 
 only valid functions are datetime functions. 
   The logic that enforces this can be seen in the defaultTypeIsValid method 
 of the file:
 ./java/engine/org/apache/derby/impl/sql/compile/ColumnDefinitionNode.java
 The Derby Reference Manual also states this same restriction (albeit rather 
 briefly):
 
 Column Default
 For the definition of a default value, 

[jira] Updated: (DERBY-152) ERROR 42X01: Syntax error: Encountered commit at line 1, column 1.

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-152?page=all ]

Mike Matrigali updated DERBY-152:
-

  Component: SQL
Description: 
The following totally valid code (but of course a little insane - since you could 
use _connection.commit() ) throws ERROR 42X01 - which it should not do.

  Statement s = _connection.createStatement();
  try
  {
      s.execute("commit");
  }
  catch(Exception e)
  {
      e.printStackTrace();
      System.out.println("WOW - what happened here ?");
  };



Environment: 

 ERROR 42X01: Syntax error: Encountered commit at line 1, column 1.
 

  Key: DERBY-152
  URL: http://issues.apache.org/jira/browse/DERBY-152
  Project: Derby
 Type: Wish
   Components: SQL
 Reporter: Bernd Ruehlicke
 Priority: Minor


 The following totally valid code (but of course a little insane - since you could 
 use _connection.commit() ) throws ERROR 42X01 - which it should not do.
   Statement s = _connection.createStatement();
   try
   {
       s.execute("commit");
   }
   catch(Exception e)
   {
       e.printStackTrace();
       System.out.println("WOW - what happened here ?");
   };




[jira] Updated: (DERBY-110) hsqldb is faster than derby doing inserts

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-110?page=all ]

Mike Matrigali updated DERBY-110:
-

  Component: Performance
Summary: hsqldb is faster than derby doing inserts  (was: performance)
Description: 
1. create a db named systable in derby and hsqldb (another open source dbms);
2. create a table named aTable ('CREATE TABLE aTable(id INTEGER NOT NULL, name 
VARCHAR(255) NOT NULL, description VARCHAR(255))');
3. insert ten thousand rows into table aTable
for(int i=1; i <= 10000; i++) {
    sql = "INSERT INTO aTable VALUES(" + i + ", 'haha', 'zhang'' test')";
    stmt.execute(sql);
    System.out.println(i);
}
4. derby took 50390 milliseconds;
   hsqldb took 4250 milliseconds;
 
5. conclusion: hsqldb has better performance.
   Maybe Derby needs to improve its performance.


Environment: 
CPU 2.40GHz
windows 2000





 hsqldb is faster than derby doing inserts
 -

  Key: DERBY-110
  URL: http://issues.apache.org/jira/browse/DERBY-110
  Project: Derby
 Type: Test
   Components: Performance
 Versions: 10.0.2.1
  Environment: CPU 2.40GHz
 windows 2000
 Reporter: Zhang Jinsheng
 Priority: Minor


 1. create a db named systable in derby and hsqldb (another open source dbms);
 2. create a table named aTable ('CREATE TABLE aTable(id INTEGER NOT NULL, name 
 VARCHAR(255) NOT NULL, description VARCHAR(255))');
 3. insert ten thousand rows into table aTable
 for(int i=1; i <= 10000; i++) {
   sql = "INSERT INTO aTable VALUES(" + i + ", 'haha', 'zhang'' test')";
   stmt.execute(sql);
   System.out.println(i);
 }
 4. derby took 50390 milliseconds;
    hsqldb took 4250 milliseconds;
 5. conclusion: hsqldb has better performance.
    Maybe Derby needs to improve its performance.




[jira] Updated: (DERBY-4) order by is not supported for insert ... select

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-4?page=all ]

Mike Matrigali updated DERBY-4:
---

  Component: SQL
Description: 
When filling a table with insert ... select ..., order by cannot be 
specified.

There is no method to copy a table sorted into another table (except using 
export/import). This would be useful to optimize performance for big tables, or 
to create identity values that are ascending (related to another column).

Example:

create table temp1 (
   s varchar(10));

insert into temp1 values 'x','a','c','b','a';

create table temp2 (
   i integer not null
  generated always as identity
  primary key,
   s varchar(10));

insert into temp2 (s)
   select s from temp1 order by s;

-- Error: order by is not allowed.

-- trying to use group by instead of order by:

insert into temp2 (s)
   select s from temp1 group by s;
select * from temp2;

-- group by did not sort the table.
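
For clarity, this is the behavior being requested. The expected contents below 
are illustrative only: they assume identity values would be assigned in the 
sorted row order, which is exactly what the current grammar rejects:

```sql
-- Desired but currently rejected by Derby's grammar:
insert into temp2 (s)
   select s from temp1 order by s;

-- If accepted, temp2 would be expected to contain the values
-- 'x','a','c','b','a' sorted ascending, with ascending identity values:
-- i | s
-- 1 | a
-- 2 | a
-- 3 | b
-- 4 | c
-- 5 | x
```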



Environment: 

 order by is not supported for insert ... select
 ---

  Key: DERBY-4
  URL: http://issues.apache.org/jira/browse/DERBY-4
  Project: Derby
 Type: New Feature
   Components: SQL
 Reporter: Christian d'Heureuse
 Priority: Minor


 When filling a table with insert ... select ..., order by cannot be 
 specified.
 There is no method to copy a table sorted into another table (except using 
 export/import). This would be useful to optimize performance for big tables, 
 or to create identity values that are ascending (related to another column).
 Example:
 create table temp1 (
s varchar(10));
 insert into temp1 values 'x','a','c','b','a';
 create table temp2 (
i integer not null
   generated always as identity
   primary key,
s varchar(10));
 insert into temp2 (s)
select s from temp1 order by s;
 -- Error: order by is not allowed.
 -- trying to use group by instead of order by:
 insert into temp2 (s)
select s from temp1 group by s;
 select * from temp2;
 -- group by did not sort the table.




[jira] Updated: (DERBY-321) ASSERT FAILED invalid space required 10012 newDataToWrite.getUsed() 10012 nextRecordOffset 189 ... on a database recoverty.

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-321?page=all ]

Mike Matrigali updated DERBY-321:
-

  Component: Store
Description: 
While doing random crash/recovery tests, I got the following assert failure:

org.apache.derby.iapi.services.sanity.AssertFailure: ASSERT FAILED invalid 
space required 10012 newDataToWrite.getUsed() 10012 nextRecordOffset 1890 
newOffset 1890 reservedSpaceFieldId 5 startField 0 newEndFieldExclusive 8 
newFieldCount 8 oldFieldCount 6 slot 3 freeSpace 5439 unusedSpace 0 page 
Page(454,Container(0, 1152))



2005-05-24 18:28:45.234 GMT:
 Booting Derby version The Apache Software Foundation - Apache Derby - 10.1.0.0 
alpha - (1): instance c013800d-0104-0ff6-9447-00109d80
on database directory G:\stresstests\csitm 


Exception trace: 

org.apache.derby.iapi.services.sanity.AssertFailure: ASSERT FAILED invalid 
space required 10012 newDataToWrite.getUsed() 10012 nextRecordOffset 1890 
newOffset 1890 reservedSpaceFieldId 5 startField 0 newEndFieldExclusive 8 
newFieldCount 8 oldFieldCount 6 slot 3 freeSpace 5439 unusedSpace 0 page 
Page(454,Container(0, 1152))

at 
org.apache.derby.iapi.services.sanity.SanityManager.THROWASSERT(SanityManager.java:150)

at 
org.apache.derby.impl.store.raw.data.StoredPage.storeRecordForUpdate(StoredPage.java:7449)

at 
org.apache.derby.impl.store.raw.data.StoredPage.storeRecord(StoredPage.java:7098)

at 
org.apache.derby.impl.store.raw.data.UpdateOperation.undoMe(UpdateOperation.java:200)

at 
org.apache.derby.impl.store.raw.data.PhysicalUndoOperation.doMe(PhysicalUndoOperation.java:146)

at 
org.apache.derby.impl.store.raw.log.FileLogger.logAndUndo(FileLogger.java:532)

at org.apache.derby.impl.store.raw.xact.Xact.logAndUndo(Xact.java:361)

at 
org.apache.derby.impl.store.raw.log.FileLogger.undo(FileLogger.java:1014)

at org.apache.derby.impl.store.raw.xact.Xact.abort(Xact.java:906)

at 
org.apache.derby.impl.store.raw.xact.XactFactory.rollbackAllTransactions(XactFactory.java:498)

at 
org.apache.derby.impl.store.raw.log.LogToFile.recover(LogToFile.java:1082)

at org.apache.derby.impl.store.raw.RawStore.boot(RawStore.java:323)

at 
org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java:1985)

at 
org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java:284)

at 
org.apache.derby.impl.services.monitor.BaseMonitor.startModule(BaseMonitor.java:539)

at 
org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Monitor.java:418)

at 
org.apache.derby.impl.store.access.RAMAccessManager.boot(RAMAccessManager.java:994)

at 
org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java:1985)

at 
org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java:284)

at 
org.apache.derby.impl.services.monitor.BaseMonitor.startModule(BaseMonitor.java:539)

at 
org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Monitor.java:418)

at 
org.apache.derby.impl.db.BasicDatabase.bootStore(BasicDatabase.java:752)

at org.apache.derby.impl.db.BasicDatabase.boot(BasicDatabase.java:173)

at 
org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java:1985)

at 
org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java:284)

at 
org.apache.derby.impl.services.monitor.BaseMonitor.bootService(BaseMonitor.java:1832)

at 
org.apache.derby.impl.services.monitor.BaseMonitor.startProviderService(BaseMonitor.java:1698)

at 
org.apache.derby.impl.services.monitor.BaseMonitor.findProviderAndStartService(BaseMonitor.java:1577)

at 
org.apache.derby.impl.services.monitor.BaseMonitor.startPersistentService(BaseMonitor.java:996)

at 
org.apache.derby.impl.services.monitor.BaseMonitor.startPersistentService(BaseMonitor.java:988)

at 
org.apache.derby.iapi.services.monitor.Monitor.startPersistentService(Monitor.java:533)

at 
org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(EmbedConnection.java:1548)

at 
org.apache.derby.impl.jdbc.EmbedConnection.init(EmbedConnection.java:193)

at 
org.apache.derby.impl.jdbc.EmbedConnection30.init(EmbedConnection30.java:72)

at 
org.apache.derby.jdbc.Driver30.getNewEmbedConnection(Driver30.java:73)

at org.apache.derby.jdbc.InternalDriver.connect(InternalDriver.java:183)

at java.sql.DriverManager.getConnection(DriverManager.java:512)

at java.sql.DriverManager.getConnection(DriverManager.java:140)

at org.apache.derby.impl.tools.ij.ij.dynamicConnection(ij.java:836)

at org.apache.derby.impl.tools.ij.ij.ConnectStatement(ij.java:698)

at org.apache.derby.impl.tools.ij.ij.ijStatement(ij.java:528)

at 

[jira] Updated: (DERBY-368) Make it possible to alter generatedColumnOption of column

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-368?page=all ]

Mike Matrigali updated DERBY-368:
-

Component: SQL

 Make it possible to alter generatedColumnOption of column
 -

  Key: DERBY-368
  URL: http://issues.apache.org/jira/browse/DERBY-368
  Project: Derby
 Type: Sub-task
   Components: SQL
 Reporter: Tomohito Nakayama
 Priority: Minor


 This corresponds to solution-2 and solution-3 of DERBY-167.




[jira] Updated: (DERBY-408) Fix formatting of manuals in PDF output

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-408?page=all ]

Mike Matrigali updated DERBY-408:
-

Component: Documentation

 Fix formatting of manuals in PDF output
 ---

  Key: DERBY-408
  URL: http://issues.apache.org/jira/browse/DERBY-408
  Project: Derby
 Type: Bug
   Components: Documentation
  Environment: all
 Reporter: Jeff Levitt
 Priority: Minor
  Fix For: 10.2.0.0


 1.  The syntax boxes in many of the Derby manuals seem to be output with extra 
 end-of-line feeds in the PDFs.  Some syntax boxes print one word per line.   
 For example:
 http://incubator.apache.org/derby/docs/tools/tools-single.html#rtoolsijpropref10135
 This might be a bug with the DITA toolkit, because the DITA source files don't 
 have these end-of-line feeds in them.
 This bug was originally reported in the doc reviews for version 10.1:
 http://issues.apache.org/jira/browse/DERBY-383
 (see Myrna's comments)
 2.  Based on http://issues.apache.org/jira/browse/DERBY-384 comments to the 
 doc review (see Sunitha's comments), we need to figure out how to get the 
 table numbers to ascend.  Currently, they all output as table 1.




[jira] Updated: (DERBY-415) sysinfo with -cp client option should not print error saying DB2 jar file and driver class are missing

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-415?page=all ]

Mike Matrigali updated DERBY-415:
-

Component: Services

 sysinfo with -cp client option should not print error saying DB2 jar file and 
 driver class are missing
 --

  Key: DERBY-415
  URL: http://issues.apache.org/jira/browse/DERBY-415
  Project: Derby
 Type: Bug
   Components: Services
 Versions: 10.1.1.0
 Reporter: David Van Couvering
 Priority: Minor


 If you run
   java org.apache.derby.tools.sysinfo -cp client SimpleApp.class
 you get
 FOUND IN CLASS PATH:
 Derby Client libraries (derbyclient.jar)
 user-specified class (SimpleApp)
 NOT FOUND IN CLASS PATH:
 Derby Client libraries (db2jcc.jar)
 (com.ibm.db2.jcc.DB2Driver not found.)
 The NOT FOUND IN CLASSPATH output is confusing and invalid because we're 
 testing the network client, not the DB2 JCC client.




[jira] Updated: (DERBY-442) Update triggers on tables with blob columns stream blobs into memory even when the blobs are not referenced/accessed.

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-442?page=all ]

Mike Matrigali updated DERBY-442:
-

Component: SQL

 Update triggers on tables with blob columns stream blobs into memory even 
 when the blobs are not referenced/accessed.
 -

  Key: DERBY-442
  URL: http://issues.apache.org/jira/browse/DERBY-442
  Project: Derby
 Type: Sub-task
   Components: SQL
 Versions: 10.2.0.0
 Reporter: A B
 Priority: Minor
  Fix For: 10.2.0.0
  Attachments: d442.java

 Suppose I have 1) a table t1 with blob data in it, and 2) an UPDATE trigger 
 tr1 defined on that table, where the triggered-SQL-action for tr1 does 
 NOT reference any of the blob columns in the table.  [ Note that this is 
 different from DERBY-438 because DERBY-438 deals with triggers that _do_ 
 reference the blob column(s), whereas this issue deals with triggers that do 
 _not_ reference the blob columns--but I think they're related, so I'm 
 creating this as subtask to 438 ].  In such a case, if the trigger is fired, 
 the blob data will be streamed into memory and thus consume JVM heap, even 
 though it (the blob data) is never actually referenced/accessed by the 
 trigger statement.
 For example, suppose we have the following DDL:
 create table t1 (id int, status smallint, bl blob(2G));
 create table t2 (id int, updated int default 0);
 create trigger tr1 after update of status on t1 referencing new as n_row 
 for each row mode db2sql update t2 set updated = updated + 1 where t2.id = 
 n_row.id;
 Then if t1 and t2 both have data and we make a call to:
 update t1 set status = 3;
 the trigger tr1 will fire,  which will cause the blob column in t1 to be 
 streamed into memory for each row affected by the trigger.  The result is 
 that, if the blob data is large, we end up using a lot of JVM memory when we 
 really shouldn't have to (at least, in _theory_ we shouldn't have to...).
 Ideally, Derby could figure out whether or not the blob column is referenced, 
 and avoid streaming the lob into memory whenever possible (hence this is 
 probably more of an enhancement request than a bug)...




[jira] Updated: (DERBY-441) Javadoc for Client data source and driver files needs cleanup

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-441?page=all ]

Mike Matrigali updated DERBY-441:
-

Component: Network Client

 Javadoc for Client data source and driver files needs cleanup
 -

  Key: DERBY-441
  URL: http://issues.apache.org/jira/browse/DERBY-441
  Project: Derby
 Type: Bug
   Components: Network Client
 Versions: 10.1.1.0
 Reporter: Daniel John Debrunner
 Assignee: Daniel John Debrunner
 Priority: Minor


 No comments on ClientDriver
 Many protected/public  fields/methods appear in the published javadoc, but 
 are not intended to be part of the api.
 E.g. instance fields of ClientBaseDataSource, propertyKey constants in 
 ClientBaseDataSource
 Need to warn about conflicting attributes in connectionAttributes property 
 (for Embedded as well)




[jira] Updated: (DERBY-65) Network Server user ID and password encryption requires IBMJCE

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-65?page=all ]

Mike Matrigali updated DERBY-65:


  Component: Network Server
Description: 
If you set securityMechanism=9 as a JCC (DB2 Universal Driver) property when 
connecting to Derby a java.lang.ClassNotFoundException is returned in an error 
because the code in the org.apache.derby.impl.drda.EncryptionManager 
constructor does the following:

try {
  if (java.security.Security.getProvider("IBMJCE") == null) // IBMJCE is not installed, install it.
    java.security.Security.addProvider((java.security.Provider)
        Class.forName("IBMJCE").newInstance());
  SNIP
}
catch (java.lang.ClassNotFoundException e) {
  throw new SQLException("java.lang.ClassNotFoundException is caught" +
      " when initializing EncryptionManager '" + e.getMessage() + "'");
}

Some improvements could also be made to related documentation:

http://incubator.apache.org/derby/manuals/admin/hubprnt16.html should probably 
be improved to describe the valid values for all properties (e.g. 
securityMechanism ) or have links (or a comment) to other manuals that have 
further information on the properties.

==

Here is how to reproduce the problem using the ij tool:

D:\Derby_snapshots\svnversion_46005>java -cp 
.;.\lib\derby.jar;.\lib\derbynet.jar;.\lib\derbytools.jar;..\db2jcc\lib\db2jcc.jar;..\db2jcc\lib\db2jcc_license_c.jar
  -Dij.driver=com.ibm.db2.jcc.DB2Driver -Dij.user=wkpoint -Dij.password=wppass 
-Dij.protocol=jdbc:derby:net://localhost:1527/ org.apache.derby.tools.ij
ij version 10.0 (C) Copyright IBM Corp. 1997, 2004.
ij> connect 
'testDB3;create=true:retrieveMessagesFromServerOnGetMessage=true;securityMechanism=9;';
ERROR (no SQLState): java.lang.ClassNotFoundException is caught when 
initializing EncryptionManager 'IBMJCE'
ij>


-- Java Information --
Java Version:1.4.2_05
Java Vendor: Sun Microsystems Inc.
Java home:   C:\Program Files\Java\j2re1.4.2_05
Java classpath:  
.;.\lib\derby.jar;.\lib\derbynet.jar;.\lib\derbytools.jar;..\db2jcc\lib\db2jcc.jar;..\db2jcc\lib\db2jcc_license_c.j
ar
OS name: Windows XP
OS architecture: x86
OS version:  5.1
Java user name:  sissonj
Java user home:  C:\Documents and Settings\john
Java user dir:   D:\Derby_snapshots\svnversion_46005
- Derby Information 
[D:\Derby_snapshots\svnversion_46005\lib\derby.jar] 10.0.2.0 - (46005)
[D:\Derby_snapshots\svnversion_46005\lib\derbynet.jar] 10.0.2.0 - (46005)
[D:\Derby_snapshots\svnversion_46005\lib\derbytools.jar] 10.0.2.0 - (46005)
[D:\Derby_snapshots\db2jcc\lib\db2jcc.jar] 2.4 - (17)
[D:\Derby_snapshots\db2jcc\lib\db2jcc_license_c.jar] 2.4 - (17)
--
- Locale Information -
--


[jira] Updated: (DERBY-223) Change programs under demo directory use consistent package names so IDEs do not report errors

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-223?page=all ]

Mike Matrigali updated DERBY-223:
-

  Component: Demos/Scripts
Description: 
Currently if you build the demos under eclipse, it gives the following errors:

Severity  Description  Resource  In Folder  Location  Creation Time
2  The declared package does not match the expected package nserverdemo
   SimpleNetworkClientSample.java  derby-trunk/java/demo/nserverdemo  line 1
   April 14, 2005 9:32:35 AM
2  The declared package does not match the expected package nserverdemo
   SimpleNetworkServerSample.java  derby-trunk/java/demo/nserverdemo  line 1
   April 14, 2005 9:32:35 AM
2  The declared package does not match the expected package simple
   SimpleApp.java  derby-trunk/java/demo/simple  line 1  April 14, 2005 9:32:35 AM

The following demo src files (and their associated documentation for running 
them) should be changed so that they specify the package nserverdemo so they 
are consistent with the other java source files in the same directory:

java/demo/nserverdemo/SimpleNetworkClientSample.java
java/demo/nserverdemo/SimpleNetworkServerSample.java

The following demo src file (and its associated documentation for running 
it) should be changed so that it specifies the package simple so it is 
consistent with the other java source files in the simple directory:

java/demo/simple/SimpleApp.java
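
For illustration, a sketch of what the fixed first lines of such a demo file might look like (the class body here is hypothetical; only the package declaration is the point):

```java
// Hypothetical sketch: only the package declaration matters here.
// It must name the directory the file lives in (java/demo/nserverdemo),
// which is what the Eclipse "declared package" errors are about.
package nserverdemo;

public class SimpleNetworkClientSample {
    public static void main(String[] args) {
        System.out.println("package: "
                + SimpleNetworkClientSample.class.getPackageName());
    }
}
```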


Environment: 

 Change programs under demo directory use consistent package names so IDEs do 
 not report errors
 --

  Key: DERBY-223
  URL: http://issues.apache.org/jira/browse/DERBY-223
  Project: Derby
 Type: Improvement
   Components: Demos/Scripts
 Reporter: John Sisson
 Priority: Trivial


 Currently if you build the demos under eclipse, it gives the following errors:
 Severity  Description ResourceIn Folder   Location
 Creation Time
 2 The declared package does not match the expected package nserverdemo
 SimpleNetworkClientSample.java  derby-trunk/java/demo/nserverdemo   line 
 1  April 14, 2005 9:32:35 AM
 2 The declared package does not match the expected package nserverdemo
 SimpleNetworkServerSample.java  derby-trunk/java/demo/nserverdemo   line 
 1  April 14, 2005 9:32:35 AM
 2 The declared package does not match the expected package simple 
 SimpleApp.java  derby-trunk/java/demo/simpleline 1  April 14, 2005 
 9:32:35 AM
 The following demo src files (and their associated documentation for running 
 them) should be changed so that they specify the package nserverdemo so 
 they are consistent with the other java source files in the same directory:
 java/demo/nserverdemo/SimpleNetworkClientSample.java
 java/demo/nserverdemo/SimpleNetworkServerSample.java
 The following demo src files (and their associated documentation for running 
 them) should be changed so that they specify the package simple so they are 
 consistent with the other java source files in the nserverdemo directory:
 java/demo/simple/SimpleApp.java

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-284) Show warning message , if hard upgrade was not executed because upgrade=true was designated on or after 2nd connection.

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-284?page=all ]

Mike Matrigali updated DERBY-284:
-

  Component: JDBC
Description: 
Show a warning message if hard upgrade was not executed.
This happens when upgrade=true was specified on or after the 2nd connection.


In the mail of Re: upgrading trouble ... (Re: Patch again for DERBY-167.),
Daniel John Debrunner wrote:

 The upgrade=true has to be set on the connection that boots the
 database, the first connection to the database. In your example where
 you use update=true, that booted the old database in soft upgrade mode.
 Hard upgrade did not happen because the connection with the correct
 upgrade=true was not the first so it was a normal connection to the
 database.
 
 A warning if the upgrade has no effect might be a good idea, either
 no-upgrade required or upgrade on booted database not possible.
 
 Dan.
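
A short sketch of the point Dan makes: upgrade=true only matters on the connection that boots the database, i.e. the very first one. The helper below is hypothetical; it just shows how the boot URL would be formed.

```java
public class UpgradeUrl {
    // Hypothetical helper illustrating the rule above: upgrade=true only
    // takes effect on the connection that boots the database; on later
    // connections it is silently ignored, which is what the proposed
    // warning would flag.
    static String bootUrl(String dbName, boolean hardUpgrade) {
        return "jdbc:derby:" + dbName + (hardUpgrade ? ";upgrade=true" : "");
    }

    public static void main(String[] args) {
        // The first (booting) connection must carry the attribute:
        System.out.println(bootUrl("myDB", true));
    }
}
```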





Environment: 

 Show warning message , if hard upgrade was not executed  because upgrade=true 
 was designated on or after 2nd connection.
 

  Key: DERBY-284
  URL: http://issues.apache.org/jira/browse/DERBY-284
  Project: Derby
 Type: Improvement
   Components: JDBC
 Reporter: Tomohito Nakayama
 Priority: Trivial


 Show warning message , 
 if hard upgrade was not executed.
 This happens when upgrade=true was designated on or after 2nd connection.
 In the mail of Re: upgrading trouble ... (Re: Patch again for DERBY-167.),
 Daniel John Debrunner wrote:
  The upgrade=true has to be set on the connection that boots the
  database, the first connection to the database. In your example where
  you use update=true, that booted the old database in soft upgrade mode.
  Hard upgrade did not happen because the connection with the correct
  upgrade=true was not the first so it was a normal connection to the
  database.
  
  A warning if the upgrade has no effect might be a good idea, either
  no-upgrade required or upgrade on booted database not possible.
  
  Dan.




[jira] Updated: (DERBY-385) servlet Back to Main Page link points to csnet instead of derbynet

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-385?page=all ]

Mike Matrigali updated DERBY-385:
-

Component: Network Server

 servlet Back to Main Page link points to csnet instead of derbynet
 --

  Key: DERBY-385
  URL: http://issues.apache.org/jira/browse/DERBY-385
  Project: Derby
 Type: Bug
   Components: Network Server
 Versions: 10.1.1.0
 Reporter: Myrna van Lunteren
 Assignee: Myrna van Lunteren
 Priority: Trivial
  Fix For: 10.2.0.0
  Attachments: servlet_385.diff

 The link at the top of the servlet 
 java/drda/org/apache/derby/drda/NetServlet.java links to csnet (because of 
 the static String SERVLET_ADDRESS). 
 However, the servlet address is now derbynet.
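
A sketch of what the one-line fix presumably looks like (the surrounding class here is hypothetical; NetServlet's actual declaration may differ):

```java
public class NetServletAddress {
    // Hypothetical reconstruction: the stale value was "csnet",
    // left over from before the servlet address was renamed to "derbynet".
    private static final String SERVLET_ADDRESS = "derbynet";

    static String backLink() {
        return "<a href=\"/" + SERVLET_ADDRESS + "\">Back to Main Page</a>";
    }

    public static void main(String[] args) {
        System.out.println(backLink());
    }
}
```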




[jira] Updated: (DERBY-112) Variable name 'enum' used in a couple of places

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-112?page=all ]

Mike Matrigali updated DERBY-112:
-

  Component: SQL
Description: 
I noticed the variable name 'enum' was used in a couple of places like 
SQLParser.java and ij.java. 'enum' is now a keyword in Java 1.5. I just changed 
the variable name to avoid any future conflicts.

This is my first code submission. Please let me know whether all submitted 
changes should be linked to a corresponding item in JIRA.
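
A minimal illustration of the rename (class and variable names here are hypothetical, not the actual Derby code):

```java
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.Vector;

public class EnumRename {
    // Before Java 5, "enum" was a legal identifier and a common name for
    // Enumeration loop variables; renaming it (here to "e") keeps the
    // code compiling under JDK 1.5+, where "enum" is a keyword.
    static List<String> copyElements(Vector<String> v) {
        List<String> out = new ArrayList<>();
        for (Enumeration<String> e = v.elements(); e.hasMoreElements(); ) {
            out.add(e.nextElement());
        }
        return out;
    }

    public static void main(String[] args) {
        Vector<String> v = new Vector<>();
        v.add("a");
        v.add("b");
        System.out.println(copyElements(v));
    }
}
```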


 Variable name 'enum' used in a couple of places
 ---

  Key: DERBY-112
  URL: http://issues.apache.org/jira/browse/DERBY-112
  Project: Derby
 Type: Improvement
   Components: SQL
  Environment: win xp
 Reporter: Jonathan Nash
 Priority: Trivial


 I noticed the variable name 'enum' was used in a couple of places like 
 SQLParser.java and ij.java. 'enum' is now a keyword in Java 1.5. I just 
 changed the variable name to avoid any future conflicts.
 This is my first code submission. Please let me know if all changes submitted 
 should be linked to a corresponding item in JIRA?




[jira] Closed: (DERBY-133) Autocommit turned false and rollbacks

2005-08-16 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-133?page=all ]
 
Mike Matrigali closed DERBY-133:


Resolution: Cannot Reproduce

We never could get enough info to reproduce the problem.

 Autocommit turned false and rollbacks
 -

  Key: DERBY-133
  URL: http://issues.apache.org/jira/browse/DERBY-133
  Project: Derby
 Type: Improvement
   Components: Store
 Versions: 10.1.1.0
  Environment: Windows XP Environment
 Reporter: Anil Rao


 I have two tables, Employee and Salary. Salary is a child of the Employee 
 table, with a foreign key and a 1-to-many relationship between the two tables.
 In my Java file I have a connection to the database with autocommit set to 
 false, and two connection threads inserting into the employee and salary 
 tables. I made an insert into the salary table and then into the employee 
 table, with that employee not yet in the employee table, and the insert went 
 through fine. Then I made an insert into the employee table and it went fine.
 In the next set of transactions the salary table insert went through fine, 
 but the employee insert did not go through; the transaction on the employee 
 insert was rolled back, but not the salary table insert.
 Can anyone please tell me whether there is any setting I need to make this 
 work correctly?
 An example of the Java code and the table script follows.
 Script to create the Employee and Salary tables in any Derby database:
 CREATE TABLE employee( empid INTEGER NOT NULL,
 full_name VARCHAR(30) NOT NULL,
 salary DECIMAL(10,2) NOT NULL );
 CREATE TABLE salary(
 empid INTEGER NOT NULL,
 pay_date DATE NOT NULL);
 ALTER TABLE employee ADD CONSTRAINT emp_pk PRIMARY KEY (empid);
 ALTER TABLE salary ADD CONSTRAINT salary_fk1
 FOREIGN KEY (empid)
 REFERENCES employee(empid);
 -- Java Code for inserts.
 import java.sql.Connection;
 /*
  * Embedded Connection.
  */
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.Properties;
 public class EmConst
 {
     /* the default framework is embedded */
     public String framework = "embedded";
     public String driver = "org.apache.derby.jdbc.EmbeddedDriver";
     public String protocol = "jdbc:derby:";
     public static void main(String[] args)
     {
         new EmConst().go(args);
     }
     void go(String[] args)
     {
         /* parse the arguments to determine which framework is desired */
         parseArguments(args);
         System.out.println("SimpleApp starting in " + framework + " mode.");
         try
         {
             /*
                The driver is installed by loading its class.
                In an embedded environment, this will start up Derby, since it
                is not already running.
              */
             Class.forName(driver).newInstance();
             System.out.println("Loaded the appropriate driver.");
             Connection conn = null;
             Properties props = new Properties();
             props.put("user", "");
             props.put("password", "");
             /*
                The connection specifies create=true to cause
                the database to be created. To remove the database,
                remove the directory derbyDB and its contents.
                The directory derbyDB will be created under
                the directory that the system property
                derby.system.home points to, or the current
                directory if derby.system.home is not set.
              */
             conn = DriverManager.getConnection(protocol +
                     "Emp;create=true", props);
             System.out.println("Connected to and created database derbyDB");
             conn.setAutoCommit(false);
             /*
                Creating a statement lets us issue commands against
                the connection.
              */
             Statement s = conn.createStatement();
             /*
                We create a table, add a few rows, and update one.
              */
             s.execute("create TABLE employee(empid INTEGER NOT NULL,full_name"
                     + " VARCHAR(30) NOT NULL,salary DECIMAL(10,2) NOT NULL )");
             System.out.println("Created table Employee");
             s.execute("create TABLE salary(empid INTEGER NOT NULL,pay_date"
                     + " DATE NOT NULL)");
             System.out.println("Created table Salary");
             s.execute("insert into employee values (100,'John',100)");
             System.out.println("Inserted John Record");
             s.execute("insert into salary values (100,'01/01/2003')");
             System.out.println("Inserted John Salary");
             s.execute("insert into salary values (200,'01/01/2003')");
             System.out.println("Inserted Pat Salary");
             s.execute("insert into employee values (200,'Patt','200')");
             System.out.println("Inserted Pat Record");
             s.execute("select

Re: sharing code between the client and server

2005-08-16 Thread Rick Hillegas

Hey Dan,

I'm going to hold off on this until you get back. It would be nice to 
work out a code-sharing model soon. My particular issue here is that I 
want to add some new constants to the network layer and it seems brittle 
to me to have to make identical edits in two sets of files.


Cheers,
-Rick

David Van Couvering wrote:

You go, Rick!  I think the edge case is going to bite you, though.  I 
don't think you can wave your hands and say customers can just write a 
classloader to fix the problem.


If I remember correctly, the motivation for the edge case was to allow 
different versions of the network driver and embedded driver running 
next to each other.


I think this was motivated by some IBM customers.  My question is: is 
the real motivation compatibility between client and server?  If 
so, it seems to me that what you really want is for a new version of 
the network client driver to be backward compatible with an older 
version of the server running elsewhere, or, vice versa, a newer 
version of the server to be backward compatible with an older version 
of the client.  This was managed at Sybase with the TDS protocol using 
a handshake at login time where the client and server agree on which 
version of the protocol to run.  Perhaps this is what we want to do 
here.


If the motivation was something else, I'd like to understand it 
better.  Dan D. was the main person who brought this up.  Is Dan back 
yet?


Thanks,

David

Rick Hillegas wrote:

When we last visited this issue (July 2005 thread named "Size of 
common jar file"), we decided not to do anything until we had to. 
Well, I would like to start writing/refactoring some small chunks of 
network code for sharing by the client and server. My naive approach 
would be to do the following.


o Create a new fork in the source code: java/common. This would be 
parallel to java/client and java/server.


o This fork of the tree would hold sources in these packages: 
org.apache.derby.common...


o The build would compile this fork into 
classes/org/apache/derby/common/...


o The jar-building targets would be smart enough to include these 
classes in derby.jar, derbyclient.jar, and derbytools.jar.


As I recall, there was an edge case: including a derby.jar from one 
release and a derbyclient.jar from another release in the same VM. I 
think that a customer should expect problems if they mix and match 
jar files from different releases put out by a vendor. It's an old 
deficiency in the CLASSPATH model. With judicious use of 
ClassLoaders, I think customers can hack around this edge case.
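
One way to sketch that hack-around (hypothetical jar paths; the point is that each release gets its own loader with a null parent, so the shared common classes from the two releases never collide):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class IsolatedLoaders {
    public static void main(String[] args) throws Exception {
        // Hypothetical sketch: loading two Derby releases in one VM by
        // giving each its own class loader with a null parent, so classes
        // shared between derby.jar and derbyclient.jar cannot clash.
        URL[] oldRelease = { new URL("file:lib/10.0/derby.jar") };
        URL[] newRelease = { new URL("file:lib/10.1/derbyclient.jar") };
        try (URLClassLoader oldCl = new URLClassLoader(oldRelease, null);
             URLClassLoader newCl = new URLClassLoader(newRelease, null)) {
            // Each loader resolves its own copy of the common classes.
            System.out.println(oldCl != newCl);
        }
    }
}
```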


I welcome your feedback.

Cheers,
-Rick







[jira] Commented: (DERBY-396) Support for ALTER STATEMENT to DROP , MODIFY, RENAME a COLUMN

2005-08-16 Thread Kumar Matcha (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-396?page=comments#action_12318968 ] 

Kumar Matcha commented on DERBY-396:


Derby is supposed to be SQL-92E compliant and to support key features of the 
SQL-99 standard. If you take a look at the SQL-92 standard for the ALTER 
TABLE statement:

<alter table statement> ::=
  ALTER TABLE <table name> <alter table action>

<alter table action> ::=
    <add column definition>
  | <alter column definition>
  | <drop column definition>
  | <add table constraint definition>
  | <drop table constraint definition>

So it is supposed to provide support for dropping a column. Although RENAME 
COLUMN is vendor specific, the agreement between Oracle and Postgres is a 
good enough de facto standard.
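
For illustration, the kinds of statements the grammar above would permit (hypothetical here, since Derby did not support them when this issue was filed; column names are made up):

```sql
-- Hypothetical statements under the <alter table action> grammar above;
-- Derby did not yet support these at the time of this issue.
ALTER TABLE employee DROP COLUMN middle_name;
ALTER TABLE employee ALTER COLUMN salary SET DATA TYPE DECIMAL(12,2);
-- RENAME COLUMN is vendor specific (Oracle/PostgreSQL style):
ALTER TABLE employee RENAME COLUMN full_name TO name;
```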



 Support for ALTER STATEMENT to DROP ,  MODIFY, RENAME a COLUMN
 --

  Key: DERBY-396
  URL: http://issues.apache.org/jira/browse/DERBY-396
  Project: Derby
 Type: New Feature
   Components: SQL
  Environment: LINUX 
 Reporter: Kumar Matcha
 Priority: Blocker


 The ALTER TABLE statement should support dropping a column, modifying a 
 column to a different data type, and renaming a column.
