[jira] Updated: (DERBY-1696) transaction may sometimes keep lock on a row after moving off the resultset in scrollable updatable resultset

2006-09-27 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1696?page=all ]

Andreas Korneliussen updated DERBY-1696:


Assignee: (was: Andreas Korneliussen)

 transaction may sometimes keep lock on a row after moving off the resultset 
 in scrollable updatable resultset
 -

 Key: DERBY-1696
 URL: http://issues.apache.org/jira/browse/DERBY-1696
 Project: Derby
  Issue Type: Bug
  Components: SQL, Store
Affects Versions: 10.2.1.5, 10.2.2.0, 10.3.0.0
Reporter: Andreas Korneliussen
 Attachments: DERBY-1696.diff, DERBY-1696.stat, DERBY-1696v2.diff


 If an application does the following:
  Statement s = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, 
   ResultSet.CONCUR_UPDATABLE);
  ResultSet rs = s.executeQuery("select * from t1");
  rs.afterLast();
  rs.last();
  rs.next();
 then, in transaction isolation level read-committed/read-uncommitted, the 
 last row is still locked with an update lock.
 This is detected by running the JUnit testcase 
 ConcurrencyTest.testUpdatePurgedTuple1 in the DerbyNetClient framework.
 (NOTE: the bug is revealed by this test because the network server does a 
 rs.last() as the first operation on a scrollable updatable resultset to count 
 the number of rows.)
 What triggers this bug seems to be the repositioning of the cursor after all 
 records from the underlying source scan have been inserted into the 
 hashtable. When moving off the result set (to afterLast() or beforeFirst()), 
 no action is taken to release the lock on the current row.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (DERBY-1846) Create a script that allows users to easily update their Derby jars with the JDBC4 classes.

2006-09-22 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1846?page=comments#action_12436798 ] 

Andreas Korneliussen commented on DERBY-1846:
-

I have successfully used this script now on Solaris x86. My previous test was 
from the 10.2 branch.
To test it, I did the following:
1. Compiled my sources without jdbc4: ant clobber all buildcleanjars
2. In my trunk directory, added a softlink: ln -s jars/sane lib (to simulate a 
distribution)
3. Set DERBY_HOME to my trunk directory
4. Set JAVA_HOME to my jdk1.6.0 installation

Now, I ran the script, and it compiled all the jdbc4 stuff, and added the 
classes to the jar files.

I think this script will be very useful if we make a binary release of Derby 
with JDBC4 shipped as source. Then users can compile jdbc4 without 
downloading Java 1.3, Java 1.4, Ant, and the other libraries which are 
required to compile the Derby engine.


 Create a script that allows users to easily update their Derby jars with the 
 JDBC4 classes.
 ---

 Key: DERBY-1846
 URL: http://issues.apache.org/jira/browse/DERBY-1846
 Project: Derby
  Issue Type: Improvement
  Components: Demos/Scripts
Affects Versions: 10.2.1.0
Reporter: Andrew McIntyre
 Assigned To: Andrew McIntyre
 Fix For: 10.2.1.0

 Attachments: derby-1846-netconnection.diff, derby-1846-v1.diff, 
 derby-1846-v2.diff, derby1846-batchFix_v1.diff, modules.properties, 
 output_batchFix_v1.txt


 Since the resolution of the JDBC 4 licensing issue was to not ship a build 
 that includes Derby's JDBC 4 code, but to continue to ship the Derby source 
 files for it, a script which automatically compiles and updates the Derby 
 jars with the JDBC 4 classes would be useful.





[jira] Updated: (DERBY-1862) Simple hash improves performance

2006-09-22 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1862?page=all ]

Andreas Korneliussen updated DERBY-1862:


Attachment: DERBY-1862v3.diff

Attaching a modified patch in which I have taken the advice of not creating 
the map object in the constructor, and of using ReuseFactory to get Integer 
objects. Synchronization is done on "this" to protect the map from concurrent 
access while it is being created/populated.
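For readers following along, here is a minimal, self-contained sketch of the
approach described above: the name-to-number map is built lazily, inside a
synchronized block, on the first findColumn call. The class and method names
below are illustrative assumptions, not the actual patch (which is in
DERBY-1862v3.diff):

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public final class FindColumnSketch {
    private final String[] columnNames;        // column names, index 0 = column 1
    private Map<String, Integer> nameToNumber; // built lazily on first use

    public FindColumnSketch(String[] columnNames) {
        this.columnNames = columnNames.clone();
    }

    // Returns the 1-based column number for a case-insensitive column name,
    // or -1 if no such column exists.
    public int findColumn(String name) {
        synchronized (this) {
            // Populate the map on the first call, under synchronization,
            // so concurrent callers never observe a half-built map.
            if (nameToNumber == null) {
                nameToNumber = new HashMap<>();
                for (int i = 0; i < columnNames.length; i++) {
                    nameToNumber.put(
                        columnNames[i].toUpperCase(Locale.ENGLISH),
                        Integer.valueOf(i + 1)); // boxed ints, analogous to ReuseFactory
                }
            }
            Integer n = nameToNumber.get(name.toUpperCase(Locale.ENGLISH));
            return (n == null) ? -1 : n.intValue();
        }
    }
}
```

Synchronizing on "this" mirrors the patch description: the only contended
work is the one-time construction of the map, after which lookups are cheap
hash probes instead of a linear scan over column names.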

 Simple hash improves performance
 

 Key: DERBY-1862
 URL: http://issues.apache.org/jira/browse/DERBY-1862
 Project: Derby
  Issue Type: Improvement
  Components: Performance
Affects Versions: 10.1.2.1, 10.1.3.1
 Environment: WinXp, JRE 1.5_6., Hibernate 3.1
Reporter: Tore Andre Olmheim
 Attachments: DERBY-1696v2.diff, DERBY-1862.diff, DERBY-1862v2.diff, 
 DERBY-1862v3.diff


 We are currently developing a system where we load between 1000 and 5000 
 objects in one go. The user can load different chunks of objects at any time 
 as he/she is navigating. 
 The system consists of a Java application which accesses Derby via Hibernate.
 During profiling we discovered that org.apache.derby.iapi.util.StringUtil 
 is the biggest bottleneck in the system.
 The method SQLEqualsIgnoreCase(String s1, String s2) performs toUpperCase on 
 both s1 and s2 on every call.
 By putting the uppercase value into a Hashtable, using the input string as 
 the key, we increased performance by about 40%. 
 Our test users report that the system now seems to run at double speed. 
 The class calling StringUtil.SQLEqualsIgnoreCase in this case is 
 org.apache.derby.impl.jdbc.EmbedResultSet.
 This class should also be checked, as it seems to do a lot of looping. 
 It might be a candidate for hashing, as stated in the code:
 // REVISIT: we might want to cache our own info...
 Here is a diff against the 10.1.3.1 source for 
 org.apache.derby.iapi.util.StringUtil:
 22a23
 > import java.util.Hashtable;
 319c320,326
 < return s1.toUpperCase(Locale.ENGLISH).equals(s2.toUpperCase(Locale.ENGLISH));
 ---
 > {
 >     String s1Up = (String) uppercaseMap.get(s1);
 >     if (s1Up == null)
 >     {
 >         s1Up = s1.toUpperCase(Locale.ENGLISH);
 >         uppercaseMap.put(s1, s1Up);
 >     }
 320a328,332
 >     String s2Up = (String) uppercaseMap.get(s2);
 >     if (s2Up == null)
 >     {
 >         s2Up = s2.toUpperCase(Locale.ENGLISH);
 >         uppercaseMap.put(s2, s2Up);
 >     }
 321a334
 >     return s1Up.equals(s2Up);
 322a336,339
 >     //return s1.toUpperCase(Locale.ENGLISH).equals(s2.toUpperCase(Locale.ENGLISH));
 > }
 > 
 > private static Hashtable uppercaseMap = new Hashtable();





[jira] Closed: (DERBY-1564) wisconsin.java test failed in DerbyNet or DerbyNetClient frameworks, VM for network server got OutOfMemoryError

2006-09-21 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1564?page=all ]

Andreas Korneliussen closed DERBY-1564.
---


Thanks for resolving this issue, I agree it can be closed.


 wisconsin.java test failed in DerbyNet or DerbyNetClient frameworks, VM for 
 network server got OutOfMemoryError
 ---

 Key: DERBY-1564
 URL: http://issues.apache.org/jira/browse/DERBY-1564
 Project: Derby
  Issue Type: Bug
  Components: Network Server, Test, Regression Test Failure
Affects Versions: 10.2.1.0
 Environment: Solaris Sparc, Java 5 or 6, DerbyNet or DerbyNetClient 
 framework.
Reporter: Andreas Korneliussen
 Assigned To: John H. Embretsen
 Fix For: 10.2.1.0

 Attachments: port-wisconsin-from-10.2.0.4-to-10.1_v1.stat, 
 port-wisconsin-from-10.2.0.4-to-10.1_v1.zip, wisconsin.tar.gz


 The wisconsin test failed on some Solaris (sparc) platforms during testing of 
 the 10.2.0.4 snapshot, in either the DerbyNet or DerbyNetClient framework. 
 No output in the outfile. On some platforms the DerbyNet.err file has one 
 message:
 Exception in thread Thread-2 java.lang.OutOfMemoryError: Java heap space
 On some platforms the OutOfMemoryError is also (or instead) reported in the 
 derby.log file.
 All test machines had 2 CPUs and 2 GB of RAM.
 Here is a list of platforms where it failed:
 Java 6 (Mustang, build 91) :
 --
 Solaris 10 (sparc)
 derbyall/derbynetmats/derbynetmats.fail:lang/wisconsin.java
 Solaris 8 (sparcN-2)
 derbyall/derbynetmats/derbynetmats.fail:lang/wisconsin.java
 Solaris 10, local zone (sparc_zone1)
 derbyall/derbynetmats/derbynetmats.fail:lang/wisconsin.java
 Solaris 10, local zone (sparc_zone3)
 derbynetclientmats/derbynetmats/derbynetmats.fail:lang/wisconsin.java
 Solaris 10, global zone (zones)
 derbynetmats/derbynetmats.fail:lang/wisconsin.java
 Java 5 (Sun's HotSpot VM, v1.5.0):
 ---
 Solaris 9 (sparcN-1) 
 derbyall/derbynetclientmats/derbynetmats.fail:lang/wisconsin.java
 Solaris 8 (sparcN-2)
 derbyall/derbynetmats/derbynetmats.fail:lang/wisconsin.java 
 See http://www.nabble.com/10.2.0.4-Test-results-p5485739.html for details.





[jira] Commented: (DERBY-1846) Create a script that allows users to easily update their Derby jars with the JDBC4 classes.

2006-09-21 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1846?page=comments#action_12436565 ] 

Andreas Korneliussen commented on DERBY-1846:
-

I think the script update-with-jdbc4, as it is now in SVN, has a dangling-else 
problem when JAVA_HOME is set: the script sets my JAVA_HOME to /usr/j2se.

After modifying the script from:
-if [ -z $JAVA_HOME ]; then
-  if [ -n $darwin ]; then
-JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home
-  fi else if [ -d /usr/j2se -a -x /usr/j2se/bin/javac ]; then
-JAVA_HOME=/usr/j2se

to:

+
+if [ -n "$JAVA_HOME" ]; then
+  :  # JAVA_HOME is already set; keep it
+else
+  if [ -n "$darwin" ]; then
+    JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home
+  else
+    if [ -d /usr/j2se -a -x /usr/j2se/bin/javac ]; then
+      JAVA_HOME=/usr/j2se
+    fi
+  fi
+fi

I got past this problem. I am not a shell expert, so there is probably a 
better solution.


 Create a script that allows users to easily update their Derby jars with the 
 JDBC4 classes.
 ---

 Key: DERBY-1846
 URL: http://issues.apache.org/jira/browse/DERBY-1846
 Project: Derby
  Issue Type: Improvement
  Components: Demos/Scripts
Affects Versions: 10.2.1.0
Reporter: Andrew McIntyre
 Assigned To: Andrew McIntyre
 Fix For: 10.2.1.0

 Attachments: derby-1846-v1.diff, derby-1846-v2.diff, 
 derby1846-batchFix_v1.diff, modules.properties, output_batchFix_v1.txt


 Since the resolution of the JDBC 4 licensing issue was to not ship a build 
 that includes Derby's JDBC 4 code, but to continue to ship the Derby source 
 files for it, a script which automatically compiles and updates the Derby 
 jars with the JDBC 4 classes would be useful.





[jira] Updated: (DERBY-1862) Simple hash improves performance

2006-09-20 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1862?page=all ]

Andreas Korneliussen updated DERBY-1862:


Attachment: DERBY-1696v2.diff

The attached patch builds a map from column names to column numbers. The map 
is populated when the first call to findColumn is made.


 Simple hash improves performance
 

 Key: DERBY-1862
 URL: http://issues.apache.org/jira/browse/DERBY-1862
 Project: Derby
  Issue Type: Improvement
  Components: Performance
Affects Versions: 10.1.2.1, 10.1.3.1
 Environment: WinXp, JRE 1.5_6., Hibernate 3.1
Reporter: Tore Andre Olmheim
 Attachments: DERBY-1696v2.diff, DERBY-1862.diff, DERBY-1862v2.diff


 We are currently developing a system where we load between 1000 and 5000 
 objects in one go. The user can load different chunks of objects at any time 
 as he/she is navigating. 
 The system consists of a Java application which accesses Derby via Hibernate.
 During profiling we discovered that org.apache.derby.iapi.util.StringUtil 
 is the biggest bottleneck in the system.
 The method SQLEqualsIgnoreCase(String s1, String s2) performs toUpperCase on 
 both s1 and s2 on every call.
 By putting the uppercase value into a Hashtable, using the input string as 
 the key, we increased performance by about 40%. 
 Our test users report that the system now seems to run at double speed. 
 The class calling StringUtil.SQLEqualsIgnoreCase in this case is 
 org.apache.derby.impl.jdbc.EmbedResultSet.
 This class should also be checked, as it seems to do a lot of looping. 
 It might be a candidate for hashing, as stated in the code:
 // REVISIT: we might want to cache our own info...
 Here is a diff against the 10.1.3.1 source for 
 org.apache.derby.iapi.util.StringUtil:
 22a23
 > import java.util.Hashtable;
 319c320,326
 < return s1.toUpperCase(Locale.ENGLISH).equals(s2.toUpperCase(Locale.ENGLISH));
 ---
 > {
 >     String s1Up = (String) uppercaseMap.get(s1);
 >     if (s1Up == null)
 >     {
 >         s1Up = s1.toUpperCase(Locale.ENGLISH);
 >         uppercaseMap.put(s1, s1Up);
 >     }
 320a328,332
 >     String s2Up = (String) uppercaseMap.get(s2);
 >     if (s2Up == null)
 >     {
 >         s2Up = s2.toUpperCase(Locale.ENGLISH);
 >         uppercaseMap.put(s2, s2Up);
 >     }
 321a334
 >     return s1Up.equals(s2Up);
 322a336,339
 >     //return s1.toUpperCase(Locale.ENGLISH).equals(s2.toUpperCase(Locale.ENGLISH));
 > }
 > 
 > private static Hashtable uppercaseMap = new Hashtable();





[jira] Updated: (DERBY-1862) Simple hash improves performance

2006-09-20 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1862?page=all ]

Andreas Korneliussen updated DERBY-1862:


Attachment: DERBY-1862v2.diff

I attached an incorrect patch (DERBY-1696v2.diff) by mistake. The correct 
patch is DERBY-1862v2.diff.


 Simple hash improves performance
 

 Key: DERBY-1862
 URL: http://issues.apache.org/jira/browse/DERBY-1862
 Project: Derby
  Issue Type: Improvement
  Components: Performance
Affects Versions: 10.1.2.1, 10.1.3.1
 Environment: WinXp, JRE 1.5_6., Hibernate 3.1
Reporter: Tore Andre Olmheim
 Attachments: DERBY-1696v2.diff, DERBY-1862.diff, DERBY-1862v2.diff


 We are currently developing a system where we load between 1000 and 5000 
 objects in one go. The user can load different chunks of objects at any time 
 as he/she is navigating. 
 The system consists of a Java application which accesses Derby via Hibernate.
 During profiling we discovered that org.apache.derby.iapi.util.StringUtil 
 is the biggest bottleneck in the system.
 The method SQLEqualsIgnoreCase(String s1, String s2) performs toUpperCase on 
 both s1 and s2 on every call.
 By putting the uppercase value into a Hashtable, using the input string as 
 the key, we increased performance by about 40%. 
 Our test users report that the system now seems to run at double speed. 
 The class calling StringUtil.SQLEqualsIgnoreCase in this case is 
 org.apache.derby.impl.jdbc.EmbedResultSet.
 This class should also be checked, as it seems to do a lot of looping. 
 It might be a candidate for hashing, as stated in the code:
 // REVISIT: we might want to cache our own info...
 Here is a diff against the 10.1.3.1 source for 
 org.apache.derby.iapi.util.StringUtil:
 22a23
 > import java.util.Hashtable;
 319c320,326
 < return s1.toUpperCase(Locale.ENGLISH).equals(s2.toUpperCase(Locale.ENGLISH));
 ---
 > {
 >     String s1Up = (String) uppercaseMap.get(s1);
 >     if (s1Up == null)
 >     {
 >         s1Up = s1.toUpperCase(Locale.ENGLISH);
 >         uppercaseMap.put(s1, s1Up);
 >     }
 320a328,332
 >     String s2Up = (String) uppercaseMap.get(s2);
 >     if (s2Up == null)
 >     {
 >         s2Up = s2.toUpperCase(Locale.ENGLISH);
 >         uppercaseMap.put(s2, s2Up);
 >     }
 321a334
 >     return s1Up.equals(s2Up);
 322a336,339
 >     //return s1.toUpperCase(Locale.ENGLISH).equals(s2.toUpperCase(Locale.ENGLISH));
 > }
 > 
 > private static Hashtable uppercaseMap = new Hashtable();





[jira] Updated: (DERBY-1862) Simple hash improves performance

2006-09-18 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1862?page=all ]

Andreas Korneliussen updated DERBY-1862:


Attachment: DERBY-1862.diff

Attached is a patch which uses another approach to improve the 
SQLEqualsIgnoreCase method. The patch checks the identity and length of the 
strings to be compared before doing conversions to uppercase with the English 
locale. 

String.toUpperCase(..) with the English locale should return a string with 
the same number of characters, so it should be valid to compare the number of 
characters before doing any conversion.

The patch posted as part of the description will leak memory, since strings 
are never removed from the upperCaseMap.
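As a rough, self-contained sketch of the approach described above (not the
actual DERBY-1862.diff; the class and method names here are assumptions), the
identity-and-length short-circuit could look like this:

```java
import java.util.Locale;

public final class StringUtilSketch {
    // Sketch of an allocation-free fast path for case-insensitive SQL
    // identifier comparison: check reference equality and length before
    // paying for two toUpperCase conversions.
    public static boolean sqlEqualsIgnoreCase(String s1, String s2) {
        if (s1 == s2) {
            return true; // same object (or both null): trivially equal
        }
        if (s1 == null || s2 == null || s1.length() != s2.length()) {
            // Relies on the assumption stated above: English-locale
            // uppercasing preserves the number of characters, so strings
            // of different lengths can never compare equal.
            return false;
        }
        return s1.toUpperCase(Locale.ENGLISH)
                 .equals(s2.toUpperCase(Locale.ENGLISH));
    }
}
```

Unlike the Hashtable approach in the description, this holds no references
to compared strings, so there is nothing to leak.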

 Simple hash improves performance
 

 Key: DERBY-1862
 URL: http://issues.apache.org/jira/browse/DERBY-1862
 Project: Derby
  Issue Type: Improvement
  Components: Performance
Affects Versions: 10.1.3.1, 10.1.2.1
 Environment: WinXp, JRE 1.5_6., Hibernate 3.1
Reporter: Tore Andre Olmheim
 Attachments: DERBY-1862.diff


 We are currently developing a system where we load between 1000 and 5000 
 objects in one go. The user can load different chunks of objects at any time 
 as he/she is navigating. 
 The system consists of a Java application which accesses Derby via Hibernate.
 During profiling we discovered that org.apache.derby.iapi.util.StringUtil 
 is the biggest bottleneck in the system.
 The method SQLEqualsIgnoreCase(String s1, String s2) performs toUpperCase on 
 both s1 and s2 on every call.
 By putting the uppercase value into a Hashtable, using the input string as 
 the key, we increased performance by about 40%. 
 Our test users report that the system now seems to run at double speed. 
 The class calling StringUtil.SQLEqualsIgnoreCase in this case is 
 org.apache.derby.impl.jdbc.EmbedResultSet.
 This class should also be checked, as it seems to do a lot of looping. 
 It might be a candidate for hashing, as stated in the code:
 // REVISIT: we might want to cache our own info...
 Here is a diff against the 10.1.3.1 source for 
 org.apache.derby.iapi.util.StringUtil:
 22a23
 > import java.util.Hashtable;
 319c320,326
 < return s1.toUpperCase(Locale.ENGLISH).equals(s2.toUpperCase(Locale.ENGLISH));
 ---
 > {
 >     String s1Up = (String) uppercaseMap.get(s1);
 >     if (s1Up == null)
 >     {
 >         s1Up = s1.toUpperCase(Locale.ENGLISH);
 >         uppercaseMap.put(s1, s1Up);
 >     }
 320a328,332
 >     String s2Up = (String) uppercaseMap.get(s2);
 >     if (s2Up == null)
 >     {
 >         s2Up = s2.toUpperCase(Locale.ENGLISH);
 >         uppercaseMap.put(s2, s2Up);
 >     }
 321a334
 >     return s1Up.equals(s2Up);
 322a336,339
 >     //return s1.toUpperCase(Locale.ENGLISH).equals(s2.toUpperCase(Locale.ENGLISH));
 > }
 > 
 > private static Hashtable uppercaseMap = new Hashtable();





[jira] Commented: (DERBY-1177) updateObject with null as argument causes network driver to fail with NullPointerException

2006-09-18 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1177?page=comments#action_12435486 ] 

Andreas Korneliussen commented on DERBY-1177:
-

REL NOTE:

Problem 1:
The Derby client driver throws an exception when 
ResultSet.updateObject() is called with a null parameter. This is different 
from the embedded driver, which updates the column with a SQL NULL value.

Symptoms:
Applications which use ResultSet.updateObject(..) with null values will get an 
SQLException from the client driver; with the embedded driver, the same call 
will update the column to SQL NULL.

Cause:
Incorrect behaviour in client driver.

Solution:
Fixed the client driver to behave like the embedded driver: 
ResultSet.updateObject(..) with a null parameter will set the column value to 
SQL NULL.

Workaround:
Instead of using ResultSet.updateObject(..) with a null parameter, the client 
application can use ResultSet.updateNull(..).


Problem 2:
In the client JDBC driver, after calling ResultSet.updateNull(..), the 
methods ResultSet.wasNull() and ResultSet.getXXX(..) return the same values 
as before updateNull(..) was called. 

Symptoms:
In the client JDBC driver, after calling ResultSet.updateNull(..), the 
methods ResultSet.wasNull() and ResultSet.getXXX(..) return the same values 
as before updateNull(..) was called. 

Cause:
Incorrect behaviour in client driver.

Solution:
Fixed the client driver to behave like the embedded driver: after calling 
ResultSet.updateNull(..), ResultSet.wasNull() will return true, and 
ResultSet.getXXX(..) will return the values expected when the column is SQL 
NULL. 

Workaround:
NA


 updateObject with null as argument causes network driver to fail with 
 NullPointerException
 --

 Key: DERBY-1177
 URL: http://issues.apache.org/jira/browse/DERBY-1177
 Project: Derby
  Issue Type: Bug
  Components: Network Client
Affects Versions: 10.2.1.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
Priority: Minor
 Fix For: 10.2.1.0

 Attachments: DERBY-1177.diff, DERBY-1177.stat, DERBY-1177v2.diff, 
 DERBY-1177v2.stat, DERBY-1177v3.diff, derbyall_report.txt, 
 derbyall_report.txt, UpdateXXXTest.java


 Calling ResultSet.updateObject(column, object) causes the network driver to 
 give NullPointerException if the object parameter is null.
 Stack trace from test:
 Test output:
 E.
 Time: 7.597
 There was 1 error:
 1) 
 testUpdateObjectWithNull(org.apache.derbyTesting.functionTests.tests.jdbcapi.UpdateXXXTest)java.lang.NullPointerException
 at 
 org.apache.derby.client.am.CrossConverters.setObject(CrossConverters.java:845)
 at 
 org.apache.derby.client.am.ResultSet.updateObject(ResultSet.java:3073)
 at 
 org.apache.derbyTesting.functionTests.tests.jdbcapi.UpdateXXXTest.testUpdateObjectWithNull(UpdateXXXTest.java:215)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 Will attach the test.
 To run:
 java -Dframework=DerbyNetClient 
 org.apache.derbyTesting.functionTests.harness.RunTest 
 jdbcapi/UpdateXXXTest.junit
 The test does not fail with the embedded framework.





[jira] Commented: (DERBY-1696) transaction may sometimes keep lock on a row after moving off the resultset in scrollable updatable resultset

2006-09-16 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1696?page=comments#action_12435207 ] 

Andreas Korneliussen commented on DERBY-1696:
-

I will update the patch for this issue once DERBY-1799 is reviewed and 
committed, since these patches have a minor conflict. Basically, the same 
approach (using reopenScan()) will be used in this patch to release locks 
when moving to afterLast()/beforeFirst().

 transaction may sometimes keep lock on a row after moving off the resultset 
 in scrollable updatable resultset
 -

 Key: DERBY-1696
 URL: http://issues.apache.org/jira/browse/DERBY-1696
 Project: Derby
  Issue Type: Bug
  Components: SQL, Store
Affects Versions: 10.2.1.0, 10.2.2.0, 10.3.0.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1696.diff, DERBY-1696.stat, DERBY-1696v2.diff


 If an application does the following:
  Statement s = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, 
   ResultSet.CONCUR_UPDATABLE);
  ResultSet rs = s.executeQuery("select * from t1");
  rs.afterLast();
  rs.last();
  rs.next();
 then, in transaction isolation level read-committed/read-uncommitted, the 
 last row is still locked with an update lock.
 This is detected by running the JUnit testcase 
 ConcurrencyTest.testUpdatePurgedTuple1 in the DerbyNetClient framework.
 (NOTE: the bug is revealed by this test because the network server does a 
 rs.last() as the first operation on a scrollable updatable resultset to count 
 the number of rows.)
 What triggers this bug seems to be the repositioning of the cursor after all 
 records from the underlying source scan have been inserted into the 
 hashtable. When moving off the result set (to afterLast() or beforeFirst()), 
 no action is taken to release the lock on the current row.





[jira] Commented: (DERBY-1659) Document describe and show tables functionality

2006-09-14 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1659?page=comments#action_12434702 ] 

Andreas Korneliussen commented on DERBY-1659:
-

I have just a minor comment on the doc: I have been testing this feature, and 
found that describe can also be used for describing views. 

ij> show views;
TABLE_SCHEM |TABLE_NAME|REMARKS 

MYPP|V1|

1 row selected
ij> describe mypp.v1;
COLUMN_NAME |TYPE_NAME|DEC|NUM|COLUM|COLUMN_DEF|CHAR_OCTE|IS_NULL
--
C   |INTEGER  |0   |10  |10|NULL  |NULL  |NO  

1 row selected

Therefore, I propose that the doc is modified to say that describe can also 
describe views, and that the syntax of the describe command could be, e.g.:
describe view_name|table_name



 Document describe and show tables functionality
 ---

 Key: DERBY-1659
 URL: http://issues.apache.org/jira/browse/DERBY-1659
 Project: Derby
  Issue Type: Sub-task
  Components: Documentation
Reporter: David Van Couvering
 Assigned To: Andrew McIntyre
 Fix For: 10.3.0.0

 Attachments: derby-1659-andrew-1-html.zip, derby-1659-andrew-1.diff, 
 Describe.html, ShowProcedures.html, ShowSchemas.html, ShowSynonyms.html, 
 ShowTable.html, ShowViews.html


 Need to add documentation for this new feature





[jira] Commented: (DERBY-1691) jdbcapi/blobclob4BLOB.java fails under DerbyNet framework with JCC 2.6

2006-09-11 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1691?page=comments#action_12433805 ] 

Andreas Korneliussen commented on DERBY-1691:
-

Patch looks good. Got one conflict in DerbyNet.exclude, which I resolved. I 
will run some tests and commit.



 jdbcapi/blobclob4BLOB.java fails under DerbyNet framework with JCC 2.6
 --

 Key: DERBY-1691
 URL: http://issues.apache.org/jira/browse/DERBY-1691
 Project: Derby
  Issue Type: Bug
  Components: Test, Regression Test Failure
Affects Versions: 10.2.1.0
 Environment: Linux 2.6.9-5.ELsmp Sun jdk 1.5.0_07-b03
Reporter: Rajesh Kartha
 Assigned To: Øystein Grøvlen
 Fix For: 10.2.1.0

 Attachments: derby-1651.diff


 With JCC 2.6, jdbcapi/blobclob4BLOB.java fails. The diff did not show 
 anything alarming, so I am guessing it may be a master update. The test 
 passed fine with DerbyClient.
 *** Start: blobclob4BLOB jdk1.5.0_06 DerbyNet derbynetmats:jdbcapi 2006-08-11 
 23:29:48 ***
 466a467,474
  EXPECTED SQLSTATE(22018): Invalid character string format for type INTEGER.
  end clobTest54
  START: clobTest6
  EXPECTED SQLSTATE(null): Invalid position 0 or length 5
  EXPECTED SQLSTATE(null): Invalid position 1 or length -76
  EXPECTED SQLSTATE(null): Invalid position 1 or length -1
  EXPECTED SQLSTATE(null): Invalid position 0 or length 0
  FAIL -- unexpected exception:java.lang.StringIndexOutOfBoundsException: 
  String index out of range: -1
 468,475d475
  EXPECTED SQLSTATE(22018): Invalid character string format for type INTEGER.
  end clobTest54
  START: clobTest6
  EXPECTED SQLSTATE(null): Invalid position 0 or length 5
  EXPECTED SQLSTATE(null): Invalid position 1 or length -76
  EXPECTED SQLSTATE(null): Invalid position 1 or length -1
  EXPECTED SQLSTATE(null): Invalid position 0 or length 0
  FAIL -- unexpected exception:java.lang.StringIndexOutOfBoundsException: 
 String index out of range: -1
 775a776,782
  blobTest54 finished
  START: blobTest6
  EXPECTED SQLSTATE(null): Invalid position 0 or length 5
  EXPECTED SQLSTATE(null): Invalid position 1 or length -76
  EXPECTED SQLSTATE(null): Invalid position 1 or length -1
  EXPECTED SQLSTATE(null): Invalid position 0 or length 0
  FAIL -- unexpected exception:java.lang.NegativeArraySizeException
 777,783d783
  blobTest54 finished
  START: blobTest6
  EXPECTED SQLSTATE(null): Invalid position 0 or length 5
  EXPECTED SQLSTATE(null): Invalid position 1 or length -76
  EXPECTED SQLSTATE(null): Invalid position 1 or length -1
  EXPECTED SQLSTATE(null): Invalid position 0 or length 0
  FAIL -- unexpected exception:java.lang.NegativeArraySizeException
 789 del
  com.ibm.db2.jcc.c.SqlException: Operation 'CREATE TRIGGER' cannot be 
 performed on object 'TESTBLOB' because there is an open ResultSet dependent 
 on that object.
 789a789
  com.ibm.db2.jcc.a.SqlException: Operation 'CREATE TRIGGER' cannot be 
  performed on object 'TESTBLOB' because there is an open ResultSet dependent 
  on that object.
 Test Failed.
 *** End:   blobclob4BLOB jdk1.5.0_06 DerbyNet derbynetmats:jdbcapi 2006-08-11 
 23:30:46 ***





[jira] Resolved: (DERBY-1691) jdbcapi/blobclob4BLOB.java fails under DerbyNet framework with JCC 2.6

2006-09-11 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1691?page=all ]

Andreas Korneliussen resolved DERBY-1691.
-

Resolution: Fixed

Committed revision 442199.


 jdbcapi/blobclob4BLOB.java fails under DerbyNet framework with JCC 2.6
 --

 Key: DERBY-1691
 URL: http://issues.apache.org/jira/browse/DERBY-1691
 Project: Derby
  Issue Type: Bug
  Components: Test, Regression Test Failure
Affects Versions: 10.2.1.0
 Environment: Linux 2.6.9-5.ELsmp Sun jdk 1.5.0_07-b03
Reporter: Rajesh Kartha
 Assigned To: Øystein Grøvlen
 Fix For: 10.2.1.0

 Attachments: derby-1651.diff


 With JCC 2.6, jdbcapi/blobclob4BLOB.java fails. The diff did not show 
 anything alarming, so I am guessing it may be a master update. The test 
 passed fine with DerbyClient.
 *** Start: blobclob4BLOB jdk1.5.0_06 DerbyNet derbynetmats:jdbcapi 2006-08-11 
 23:29:48 ***
 466a467,474
  EXPECTED SQLSTATE(22018): Invalid character string format for type INTEGER.
  end clobTest54
  START: clobTest6
  EXPECTED SQLSTATE(null): Invalid position 0 or length 5
  EXPECTED SQLSTATE(null): Invalid position 1 or length -76
  EXPECTED SQLSTATE(null): Invalid position 1 or length -1
  EXPECTED SQLSTATE(null): Invalid position 0 or length 0
  FAIL -- unexpected exception:java.lang.StringIndexOutOfBoundsException: 
  String index out of range: -1
 468,475d475
  EXPECTED SQLSTATE(22018): Invalid character string format for type INTEGER.
  end clobTest54
  START: clobTest6
  EXPECTED SQLSTATE(null): Invalid position 0 or length 5
  EXPECTED SQLSTATE(null): Invalid position 1 or length -76
  EXPECTED SQLSTATE(null): Invalid position 1 or length -1
  EXPECTED SQLSTATE(null): Invalid position 0 or length 0
  FAIL -- unexpected exception:java.lang.StringIndexOutOfBoundsException: 
 String index out of range: -1
 775a776,782
  blobTest54 finished
  START: blobTest6
  EXPECTED SQLSTATE(null): Invalid position 0 or length 5
  EXPECTED SQLSTATE(null): Invalid position 1 or length -76
  EXPECTED SQLSTATE(null): Invalid position 1 or length -1
  EXPECTED SQLSTATE(null): Invalid position 0 or length 0
  FAIL -- unexpected exception:java.lang.NegativeArraySizeException
 777,783d783
  blobTest54 finished
  START: blobTest6
  EXPECTED SQLSTATE(null): Invalid position 0 or length 5
  EXPECTED SQLSTATE(null): Invalid position 1 or length -76
  EXPECTED SQLSTATE(null): Invalid position 1 or length -1
  EXPECTED SQLSTATE(null): Invalid position 0 or length 0
  FAIL -- unexpected exception:java.lang.NegativeArraySizeException
 789 del
  com.ibm.db2.jcc.c.SqlException: Operation 'CREATE TRIGGER' cannot be 
 performed on object 'TESTBLOB' because there is an open ResultSet dependent 
 on that object.
 789a789
  com.ibm.db2.jcc.a.SqlException: Operation 'CREATE TRIGGER' cannot be 
  performed on object 'TESTBLOB' because there is an open ResultSet dependent 
  on that object.
 Test Failed.
 *** End:   blobclob4BLOB jdk1.5.0_06 DerbyNet derbynetmats:jdbcapi 2006-08-11 
 23:30:46 ***

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Updated: (DERBY-1800) testSetBinaryStreamLengthLessOnBlobTooLong fails with 'Unexpected SQL state. expected:22001 but was:58009'

2006-09-04 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1800?page=all ]

Andreas Korneliussen updated DERBY-1800:


Attachment: DERBY-1800.diff

The problem is caused by not flushing the blob data after an exception occurs.
The attached patch makes the test pass. I will test the patch more before 
committing.
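The fix described above ("flushing the blob data after an exception") amounts to draining the rest of the value from the wire so the protocol stream stays usable for the error reply. A minimal, Derby-independent sketch of that pattern (class and method names are hypothetical, not the actual patch):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch: after an exception interrupts reading a BLOB value,
// the remaining bytes of the value must still be consumed from the input
// stream so that subsequent protocol messages are read from the right offset.
public class DrainDemo {
    // Skip up to 'remaining' bytes; returns how many were actually drained.
    static long drain(InputStream in, long remaining) throws IOException {
        long drained = 0;
        while (remaining > 0) {
            long skipped = in.skip(remaining);
            if (skipped <= 0) {
                if (in.read() < 0) break; // stream ended early
                skipped = 1;
            }
            remaining -= skipped;
            drained += skipped;
        }
        return drained;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[100]);
        in.read(new byte[10]);             // value partially read, then an "exception"
        System.out.println(drain(in, 90)); // prints 90: the leftover bytes are consumed
    }
}
```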


 testSetBinaryStreamLengthLessOnBlobTooLong fails with 'Unexpected SQL state. 
 expected:22001 but was:58009'
 --

 Key: DERBY-1800
 URL: http://issues.apache.org/jira/browse/DERBY-1800
 Project: Derby
  Issue Type: Bug
  Components: Regression Test Failure
Affects Versions: 10.3.0.0
 Environment: All
Reporter: Ole Solberg
 Assigned To: Andreas Korneliussen
Priority: Minor
 Attachments: DERBY-1800.diff


 Has failed since 438528, changes:  
 http://www.multinet.no/~solberg/public/Apache/Daily/jvm1.6/UpdateInfo/438528.txt
 Logs in 
 http://www.multinet.no/~solberg/public/Apache/Daily/jvm1.6/Limited/testSummary-438528.html.
 E.g.
 *** Start: _Suite jdk1.6.0-rc DerbyNetClient derbynetclientmats:jdbc40 
 2006-08-31 02:52:09 ***
 0 add
  F.
  There was 1 failure:
  1) 
  testSetBinaryStreamLengthLessOnBlobTooLong(org.apache.derbyTesting.functionTests.tests.jdbc4.PreparedStatementTest)junit.framework.ComparisonFailure:
   Unexpected SQL state. expected:22001 but was:58009
  FAILURES!!!
  Tests run: 2048,  Failures: 1,  Errors: 0
 Test Failed.
 *** End:   _Suite jdk1.6.0-rc DerbyNetClient derbynetclientmats:jdbc40 
 2006-08-31 02:52:26 ***

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Resolved: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server do not need to read it into memory before inserting it into the DB

2006-09-04 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1559?page=all ]

Andreas Korneliussen resolved DERBY-1559.
-

Fix Version/s: 10.2.2.0
   Resolution: Fixed

 when receiving a single EXTDTA object representing a BLOB, the server do not 
 need to read it into memory before inserting it into the DB
 

 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.1.0, 10.3.0.0, 10.2.2.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Fix For: 10.2.2.0, 10.3.0.0

 Attachments: DERBY-1559.diff, DERBY-1559.stat, DERBY-1559v2.diff, 
 DERBY-1559v3.diff, DERBY-1559v4.diff, DERBY-1559v5.diff, DERBY-1559v6.diff, 
 serverMemoryUsage.xls


 When streaming a BLOB from the Network Client to the Network Server, the 
 Network Server currently reads all the data from the stream and puts it into 
 a byte array.
 The blob data is then inserted into the DB by using
 PreparedStatement.setBytes(..)
 and later
 PreparedStatement.execute()
 To avoid an OutOfMemoryError if the size of the BLOB is larger than the total 
 memory in the VM, we could make the network server create a stream which 
 reads data when doing PreparedStatement.execute().  The DB will then stream 
 the BLOB data directly from the network input stream to disk.
 I intend to make a patch which does this if there is only one EXTDTA object 
 (BLOB) sent from the client in the statement, as it will simplify the 
 implementation. Later this can be improved further to include CLOBs, and 
 possibly the cases where there are multiple EXTDTA objects.
 --
 CLOBs are more complex, as there needs to be some character encoding. This 
 can be achieved by using an InputStreamReader and 
 PreparedStatement.setCharacterStream(..). However, the size of the stream is 
 not necessarily the same as the size of the raw binary data, and to do this 
 for CLOBs I would need the embedded prepared statements to support the new 
 setCharacterStream() overloads in JDBC4 (which do not include a length 
 attribute).
 --
 Multiple EXTDTA objects are also more complex, since one would need to have 
 fully read the previous object before it is possible to read the next.
 --
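The memory argument in the description above can be sketched in plain Java (illustrative only, not Derby code; class name and the 8 KB chunk size are invented for the example): materializing the value with setBytes(..) costs a buffer as large as the whole BLOB, while a stream consumed during execute() only ever holds one chunk.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative contrast between materializing a "BLOB" stream into one
// byte array and consuming it in fixed-size chunks. Each method returns
// the size of the largest buffer it held at once.
public class StreamVsBytes {
    // setBytes-style: peak memory equals the full value size.
    static int materialize(InputStream in, int size) throws IOException {
        byte[] all = new byte[size];
        int off = 0, n;
        while (off < size && (n = in.read(all, off, size - off)) > 0) off += n;
        return all.length;
    }

    // setBinaryStream-style: peak memory is just the chunk buffer,
    // regardless of how large the value is.
    static int streamed(InputStream in) throws IOException {
        byte[] chunk = new byte[8192];
        while (in.read(chunk) > 0) { /* each chunk would go to disk here */ }
        return chunk.length;
    }

    public static void main(String[] args) throws IOException {
        int size = 1 << 20; // 1 MB stand-in for a BLOB
        System.out.println(materialize(new ByteArrayInputStream(new byte[size]), size));
        System.out.println(streamed(new ByteArrayInputStream(new byte[size])));
    }
}
```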

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Closed: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server do not need to read it into memory before inserting it into the DB

2006-09-04 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1559?page=all ]

Andreas Korneliussen closed DERBY-1559.
---


 when receiving a single EXTDTA object representing a BLOB, the server do not 
 need to read it into memory before inserting it into the DB
 

 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.1.0, 10.3.0.0, 10.2.2.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Fix For: 10.3.0.0, 10.2.2.0

 Attachments: DERBY-1559.diff, DERBY-1559.stat, DERBY-1559v2.diff, 
 DERBY-1559v3.diff, DERBY-1559v4.diff, DERBY-1559v5.diff, DERBY-1559v6.diff, 
 serverMemoryUsage.xls


 When streaming a BLOB from the Network Client to the Network Server, the 
 Network Server currently reads all the data from the stream and puts it into 
 a byte array.
 The blob data is then inserted into the DB by using
 PreparedStatement.setBytes(..)
 and later
 PreparedStatement.execute()
 To avoid an OutOfMemoryError if the size of the BLOB is larger than the total 
 memory in the VM, we could make the network server create a stream which 
 reads data when doing PreparedStatement.execute().  The DB will then stream 
 the BLOB data directly from the network input stream to disk.
 I intend to make a patch which does this if there is only one EXTDTA object 
 (BLOB) sent from the client in the statement, as it will simplify the 
 implementation. Later this can be improved further to include CLOBs, and 
 possibly the cases where there are multiple EXTDTA objects.
 --
 CLOBs are more complex, as there needs to be some character encoding. This 
 can be achieved by using an InputStreamReader and 
 PreparedStatement.setCharacterStream(..). However, the size of the stream is 
 not necessarily the same as the size of the raw binary data, and to do this 
 for CLOBs I would need the embedded prepared statements to support the new 
 setCharacterStream() overloads in JDBC4 (which do not include a length 
 attribute).
 --
 Multiple EXTDTA objects are also more complex, since one would need to have 
 fully read the previous object before it is possible to read the next.
 --

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Updated: (DERBY-1696) transaction may sometimes keep lock on a row after moving off the resultset in scrollable updatable resultset

2006-09-04 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1696?page=all ]

Andreas Korneliussen updated DERBY-1696:


Attachment: DERBY-1696v2.diff

The attached patch also releases the lock on indexed scans, and additionally 
extends the tests.

 transaction may sometimes keep lock on a row after moving off the resultset 
 in scrollable updatable resultset
 -

 Key: DERBY-1696
 URL: http://issues.apache.org/jira/browse/DERBY-1696
 Project: Derby
  Issue Type: Bug
  Components: SQL, Store
Affects Versions: 10.2.1.0, 10.2.2.0, 10.3.0.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1696.diff, DERBY-1696.stat, DERBY-1696v2.diff


 If an application does the following:
  Statement s = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, 
   ResultSet.CONCUR_UPDATABLE);
  ResultSet rs = s.executeQuery("select * from t1");
  rs.afterLast();
  rs.last();
  rs.next();
 After doing this in transaction isolation level 
 read-committed/read-uncommitted, the last row is still locked with an update 
 lock.
 This is detected by running the JUnit testcase 
 ConcurrencyTest.testUpdatePurgedTuple1 in the DerbyNetClient framework.
 (NOTE: the bug is revealed by this test because the network server does a 
 rs.last() as the first operation on a scrollable updatable resultset to count 
 the number of rows.)
 What triggers this bug seems to be the repositioning of the cursor after all 
 records from the underlying source scan have been inserted into the 
 hashtable. When moving off the result set (to afterLast() or beforeFirst()) 
 no action is taken to release the lock on the current row.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Updated: (DERBY-1696) transaction may sometimes keep lock on a row after moving off the resultset in scrollable updatable resultset

2006-09-04 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1696?page=all ]

Andreas Korneliussen updated DERBY-1696:


Derby Info:   (was: [Patch Available])

Removing the Patch Available flag, since the second patch makes SURQueryMixTest 
fail.

 transaction may sometimes keep lock on a row after moving off the resultset 
 in scrollable updatable resultset
 -

 Key: DERBY-1696
 URL: http://issues.apache.org/jira/browse/DERBY-1696
 Project: Derby
  Issue Type: Bug
  Components: SQL, Store
Affects Versions: 10.2.1.0, 10.2.2.0, 10.3.0.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1696.diff, DERBY-1696.stat, DERBY-1696v2.diff


 If an application does the following:
  Statement s = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, 
   ResultSet.CONCUR_UPDATABLE);
  ResultSet rs = s.executeQuery("select * from t1");
  rs.afterLast();
  rs.last();
  rs.next();
 After doing this in transaction isolation level 
 read-committed/read-uncommitted, the last row is still locked with an update 
 lock.
 This is detected by running the JUnit testcase 
 ConcurrencyTest.testUpdatePurgedTuple1 in the DerbyNetClient framework.
 (NOTE: the bug is revealed by this test because the network server does a 
 rs.last() as the first operation on a scrollable updatable resultset to count 
 the number of rows.)
 What triggers this bug seems to be the repositioning of the cursor after all 
 records from the underlying source scan have been inserted into the 
 hashtable. When moving off the result set (to afterLast() or beforeFirst()) 
 no action is taken to release the lock on the current row.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (DERBY-1800) testSetBinaryStreamLengthLessOnBlobTooLong fails with 'Unexpected SQL state. expected:22001 but was:58009'

2006-09-04 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1800?page=comments#action_12432508 ] 

Andreas Korneliussen commented on DERBY-1800:
-

Tests show that the failure has been fixed. Derbyall passes. Running the 
jdbc40 suite fails with more or less the same errors on client and embedded, 
indicating that those failures are not related to this fix or to DERBY-1559.

JDBC40_EMBEDDED:

Generating report for RunSuite jdbc40  null null null true 
-- Java Information --
Java Version:1.6.0-rc
Java Vendor: Sun Microsystems Inc.
Java home:   /usr/local/java/jdk1.6.0_b98/jre
Java classpath:  
/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derby.jar:/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derbyLocale_de_DE.jar:/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derbyLocale_es.jar:/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derbyLocale_fr.jar:/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derbyLocale_it.jar:/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derbyLocale_ja_JP.jar:/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derbyLocale_ko_KR.jar:/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derbyLocale_pt_BR.jar:/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derbyLocale_zh_CN.jar:/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derbyLocale_zh_TW.jar:/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derbyTesting.ja
 r:/expor
OS name: SunOS
OS architecture: x86
OS version:  5.10
Java user name:  ak136785
Java user home:  /home/ak136785
Java user dir:   
/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jdbc40_embedded
java.specification.name: Java Platform API Specification
java.specification.version: 1.6
- Derby Information 
JRE - JDBC: Java SE 6 - JDBC 4.0
[/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derby.jar]
 10.3.0.0 alpha - (438476:438489M)
[/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derbytools.jar]
 10.3.0.0 alpha - (438476:438489M)
[/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derbynet.jar]
 10.3.0.0 alpha - (438476:438489M)
[/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/derbyclient.jar]
 10.3.0.0 alpha - (438476:438489M)
[/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/jcc/db2jcc.jar]
 2.4 - (17)
[/export/home/tmp/ak136785/derbyall-20060904-1131-438476-438489M/jars/jcc/db2jcc_license_c.jar]
 2.4 - (17)
--
- Locale Information -
Current Locale :  [English/United States [en_US]]
Found support for locale: [de_DE]
 version: 10.3.0.0 alpha - (438476:438489M)
Found support for locale: [es]
 version: 10.3.0.0 alpha - (438476:438489M)
Found support for locale: [fr]
 version: 10.3.0.0 alpha - (438476:438489M)
Found support for locale: [it]
 version: 10.3.0.0 alpha - (438476:438489M)
Found support for locale: [ja_JP]
 version: 10.3.0.0 alpha - (438476:438489M)
Found support for locale: [ko_KR]
 version: 10.3.0.0 alpha - (438476:438489M)
Found support for locale: [pt_BR]
 version: 10.3.0.0 alpha - (438476:438489M)
Found support for locale: [zh_CN]
 version: 10.3.0.0 alpha - (438476:438489M)
Found support for locale: [zh_TW]
 version: 10.3.0.0 alpha - (438476:438489M)
--
Test environment information:
COMMAND LINE STYLE: jdk13
TEST CANONS: master
--
--
Summary results:

Test Run Started: 2006-09-04 14:05:59.0
Test Run Duration: 00:02:19

10 Tests Run
90% Pass (9 tests passed)
10% Fail (1 tests failed)
0 Suites skipped
--
Failed tests in: jdbc40_fail.txt
--
Passed tests in: jdbc40_pass.txt
--
System properties in: jdbc40_prop.txt
--
--
Failure Details:
* Diff file jdbc40/jdbc40/_Suite.diff
*** Start: _Suite jdk1.6.0-rc jdbc40:jdbc40 2006-09-04 14:08:02 ***
0 add
  ...E.E.E.E.E.E.E.E.E.E.E.E...
  ..F..F.F.F...
  E.E.E.E.E.E.E.E.E.E.E.E..
  There were 24 errors:
  1) 
  testUnsupportedSetObject_NCHAR(org.apache.derbyTesting.functionTests.tests.jdbc4.SetObjectUnsupportedTest)java.sql.SQLException:
   An attempt was made to get a data value of type 'VARCHAR' from a data 
  value of 

[jira] Updated: (DERBY-1800) testSetBinaryStreamLengthLessOnBlobTooLong fails with 'Unexpected SQL state. expected:22001 but was:58009'

2006-09-04 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1800?page=all ]

Andreas Korneliussen updated DERBY-1800:


Component/s: Network Server

 testSetBinaryStreamLengthLessOnBlobTooLong fails with 'Unexpected SQL state. 
 expected:22001 but was:58009'
 --

 Key: DERBY-1800
 URL: http://issues.apache.org/jira/browse/DERBY-1800
 Project: Derby
  Issue Type: Bug
  Components: Regression Test Failure, Network Server
Affects Versions: 10.3.0.0
 Environment: All
Reporter: Ole Solberg
 Assigned To: Andreas Korneliussen
Priority: Minor
 Attachments: DERBY-1800.diff


 Has failed since 438528, changes:  
 http://www.multinet.no/~solberg/public/Apache/Daily/jvm1.6/UpdateInfo/438528.txt
 Logs in 
 http://www.multinet.no/~solberg/public/Apache/Daily/jvm1.6/Limited/testSummary-438528.html.
 E.g.
 *** Start: _Suite jdk1.6.0-rc DerbyNetClient derbynetclientmats:jdbc40 
 2006-08-31 02:52:09 ***
 0 add
  F.
  There was 1 failure:
  1) 
  testSetBinaryStreamLengthLessOnBlobTooLong(org.apache.derbyTesting.functionTests.tests.jdbc4.PreparedStatementTest)junit.framework.ComparisonFailure:
   Unexpected SQL state. expected:22001 but was:58009
  FAILURES!!!
  Tests run: 2048,  Failures: 1,  Errors: 0
 Test Failed.
 *** End:   _Suite jdk1.6.0-rc DerbyNetClient derbynetclientmats:jdbc40 
 2006-08-31 02:52:26 ***

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Resolved: (DERBY-1800) testSetBinaryStreamLengthLessOnBlobTooLong fails with 'Unexpected SQL state. expected:22001 but was:58009'

2006-09-04 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1800?page=all ]

Andreas Korneliussen resolved DERBY-1800.
-

Fix Version/s: 10.3.0.0
   Resolution: Fixed

Committed revision 440062.


 testSetBinaryStreamLengthLessOnBlobTooLong fails with 'Unexpected SQL state. 
 expected:22001 but was:58009'
 --

 Key: DERBY-1800
 URL: http://issues.apache.org/jira/browse/DERBY-1800
 Project: Derby
  Issue Type: Bug
  Components: Regression Test Failure, Network Server
Affects Versions: 10.3.0.0
 Environment: All
Reporter: Ole Solberg
 Assigned To: Andreas Korneliussen
Priority: Minor
 Fix For: 10.3.0.0

 Attachments: DERBY-1800.diff


 Has failed since 438528, changes:  
 http://www.multinet.no/~solberg/public/Apache/Daily/jvm1.6/UpdateInfo/438528.txt
 Logs in 
 http://www.multinet.no/~solberg/public/Apache/Daily/jvm1.6/Limited/testSummary-438528.html.
 E.g.
 *** Start: _Suite jdk1.6.0-rc DerbyNetClient derbynetclientmats:jdbc40 
 2006-08-31 02:52:09 ***
 0 add
  F.
  There was 1 failure:
  1) 
  testSetBinaryStreamLengthLessOnBlobTooLong(org.apache.derbyTesting.functionTests.tests.jdbc4.PreparedStatementTest)junit.framework.ComparisonFailure:
   Unexpected SQL state. expected:22001 but was:58009
  FAILURES!!!
  Tests run: 2048,  Failures: 1,  Errors: 0
 Test Failed.
 *** End:   _Suite jdk1.6.0-rc DerbyNetClient derbynetclientmats:jdbc40 
 2006-08-31 02:52:26 ***

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (DERBY-1799) SUR: current row not locked when renavigating to it, in queries with indexes

2006-09-01 Thread Andreas Korneliussen (JIRA)
SUR: current row not locked when renavigating to it, in queries with indexes


 Key: DERBY-1799
 URL: http://issues.apache.org/jira/browse/DERBY-1799
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.2.1.0, 10.2.2.0, 10.3.0.0
Reporter: Andreas Korneliussen


This problem is detected in transactions with isolation level 
read-committed/read-uncommitted.

We have a table (T) which has a primary key (a), and a query which does 
'select A from T' (an indexed select).

If the result set is scrollable updatable, we expect the current row to be 
locked with an update lock. This does not seem to happen when repositioning to 
a row which has already been fetched previously.

The result is that either the wrong row is locked or, if the result set has 
been in the after-last position, no row is locked.

Output from ij:
ij> get scroll insensitive cursor c1 as 'select a from t for update';
ij> next c1;
A          
-----------
1          
ij> select * from SYSCS_DIAG.LOCK_TABLE;
XID|TYPE |MODE|TABLENAME|LOCKNAME |STATE|TABLETYPE|LOCKCOUNT|INDEXNAME
----------------------------------------------------------------------
243|ROW  |U   |T        |(1,7)    |GRANT|T        |1        |NULL
243|ROW  |S   |T        |(1,1)    |GRANT|T        |1        |SQL060901103455010
243|TABLE|IX  |T        |Tablelock|GRANT|T        |4        |NULL

3 rows selected

ij> after last c1;
No current row
ij> previous c1;
A          
-----------
3          
ij> select * from SYSCS_DIAG.LOCK_TABLE;
XID|TYPE |MODE|TABLENAME|LOCKNAME |STATE|TABLETYPE|LOCKCOUNT|INDEXNAME
----------------------------------------------------------------------
243|TABLE|IX  |T        |Tablelock|GRANT|T        |4        |NULL

1 row selected

The last select shows that no row is locked at this point; however, we expect 
one row to be locked.



-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (DERBY-1696) transaction may sometimes keep lock on a row after moving off the resultset in scrollable updatable resultset

2006-09-01 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1696?page=comments#action_12432096 ] 

Andreas Korneliussen commented on DERBY-1696:
-

Thanks. Better testing with indexes also uncovered another locking issue 
w.r.t. SUR: DERBY-1799.
I will upload a new patch for this issue.

 transaction may sometimes keep lock on a row after moving off the resultset 
 in scrollable updatable resultset
 -

 Key: DERBY-1696
 URL: http://issues.apache.org/jira/browse/DERBY-1696
 Project: Derby
  Issue Type: Bug
  Components: SQL, Store
Affects Versions: 10.2.1.0, 10.3.0.0, 10.2.2.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1696.diff, DERBY-1696.stat


 If an application does the following:
  Statement s = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, 
   ResultSet.CONCUR_UPDATABLE);
  ResultSet rs = s.executeQuery("select * from t1");
  rs.afterLast();
  rs.last();
  rs.next();
 After doing this in transaction isolation level 
 read-committed/read-uncommitted, the last row is still locked with an update 
 lock.
 This is detected by running the JUnit testcase 
 ConcurrencyTest.testUpdatePurgedTuple1 in the DerbyNetClient framework.
 (NOTE: the bug is revealed by this test because the network server does a 
 rs.last() as the first operation on a scrollable updatable resultset to count 
 the number of rows.)
 What triggers this bug seems to be the repositioning of the cursor after all 
 records from the underlying source scan have been inserted into the 
 hashtable. When moving off the result set (to afterLast() or beforeFirst()) 
 no action is taken to release the lock on the current row.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Assigned: (DERBY-1800) testSetBinaryStreamLengthLessOnBlobTooLong fails with 'Unexpected SQL state. expected:22001 but was:58009'

2006-09-01 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1800?page=all ]

Andreas Korneliussen reassigned DERBY-1800:
---

Assignee: Andreas Korneliussen

 testSetBinaryStreamLengthLessOnBlobTooLong fails with 'Unexpected SQL state. 
 expected:22001 but was:58009'
 --

 Key: DERBY-1800
 URL: http://issues.apache.org/jira/browse/DERBY-1800
 Project: Derby
  Issue Type: Bug
  Components: Regression Test Failure
Affects Versions: 10.3.0.0
 Environment: All
Reporter: Ole Solberg
 Assigned To: Andreas Korneliussen
Priority: Minor

 Has failed since 438528, changes:  
 http://www.multinet.no/~solberg/public/Apache/Daily/jvm1.6/UpdateInfo/438528.txt
 Logs in 
 http://www.multinet.no/~solberg/public/Apache/Daily/jvm1.6/Limited/testSummary-438528.html.
 E.g.
 *** Start: _Suite jdk1.6.0-rc DerbyNetClient derbynetclientmats:jdbc40 
 2006-08-31 02:52:09 ***
 0 add
  F.
  There was 1 failure:
  1) 
  testSetBinaryStreamLengthLessOnBlobTooLong(org.apache.derbyTesting.functionTests.tests.jdbc4.PreparedStatementTest)junit.framework.ComparisonFailure:
   Unexpected SQL state. expected:22001 but was:58009
  FAILURES!!!
  Tests run: 2048,  Failures: 1,  Errors: 0
 Test Failed.
 *** End:   _Suite jdk1.6.0-rc DerbyNetClient derbynetclientmats:jdbc40 
 2006-08-31 02:52:26 ***

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (DERBY-1800) testSetBinaryStreamLengthLessOnBlobTooLong fails with 'Unexpected SQL state. expected:22001 but was:58009'

2006-09-01 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1800?page=comments#action_12432140 ] 

Andreas Korneliussen commented on DERBY-1800:
-

I will check if this is caused by DERBY-1559, and if that is the case, fix the 
issue.

 testSetBinaryStreamLengthLessOnBlobTooLong fails with 'Unexpected SQL state. 
 expected:22001 but was:58009'
 --

 Key: DERBY-1800
 URL: http://issues.apache.org/jira/browse/DERBY-1800
 Project: Derby
  Issue Type: Bug
  Components: Regression Test Failure
Affects Versions: 10.3.0.0
 Environment: All
Reporter: Ole Solberg
 Assigned To: Andreas Korneliussen
Priority: Minor

 Has failed since 438528, changes:  
 http://www.multinet.no/~solberg/public/Apache/Daily/jvm1.6/UpdateInfo/438528.txt
 Logs in 
 http://www.multinet.no/~solberg/public/Apache/Daily/jvm1.6/Limited/testSummary-438528.html.
 E.g.
 *** Start: _Suite jdk1.6.0-rc DerbyNetClient derbynetclientmats:jdbc40 
 2006-08-31 02:52:09 ***
 0 add
  F.
  There was 1 failure:
  1) 
  testSetBinaryStreamLengthLessOnBlobTooLong(org.apache.derbyTesting.functionTests.tests.jdbc4.PreparedStatementTest)junit.framework.ComparisonFailure:
   Unexpected SQL state. expected:22001 but was:58009
  FAILURES!!!
  Tests run: 2048,  Failures: 1,  Errors: 0
 Test Failed.
 *** End:   _Suite jdk1.6.0-rc DerbyNetClient derbynetclientmats:jdbc40 
 2006-08-31 02:52:26 ***

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Updated: (DERBY-1740) Change error message to indicate encryptionkey length to be atleast 16 characters instead of 8 characters

2006-08-31 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1740?page=all ]

Andreas Korneliussen updated DERBY-1740:


Urgency:   (was: Urgent)

Unsetting the urgency flag. I think the urgency should be set to normal, as it 
is a matter of an incorrect error message, not a bug which may cause data 
corruption etc.

 Change error message to indicate encryptionkey length to be atleast 16 
 characters instead of 8 characters
 -

 Key: DERBY-1740
 URL: http://issues.apache.org/jira/browse/DERBY-1740
 Project: Derby
  Issue Type: Bug
Affects Versions: 10.0.2.0
 Environment: Any
Reporter: Rajesh Kartha
Priority: Minor
 Fix For: 10.2.1.0

 Attachments: derby-1740-1a.diff


 While attempting to create an encrypted database with a key length of 14 
 characters, it fails with an error message indicating the key length should 
 be at least 8 characters.
 --
 -- Attempt to encrypt using a key of length 14
 --
 ij> connect 
 'jdbc:derby:adb;create=true;dataEncryption=true;encryptionAlgorithm=DES/CBC/NoPadding;encryptionKey=11223344556677';
 ERROR XJ041: Failed to create database 'adb', see the next exception for 
 details.
 ERROR XBM01: Startup failed due to an exception. See next exception for 
 details.
 ERROR XBCX2: Initializing cipher with a boot password that is too short. The 
 password must be at least 8 characters long.
 --
 -- Requires 16 characters for the encryptionKey
 --
 ij> connect 
 'jdbc:derby:adb;create=true;dataEncryption=true;encryptionAlgorithm=DES/CBC/NoPadding;encryptionKey=1122334455667788';
 ij>
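A plausible reading of the 16-vs-8 discrepancy in the ij session above, sketched under the assumption that the encryptionKey attribute is a hexadecimal string decoded into raw key bytes: DES requires an 8-byte key, and 8 bytes take 16 hex characters, so the error message counts key bytes while the user counts characters. The class name and `fromHex` helper below are hypothetical, for illustration only.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class KeyLengthDemo {
    // Decode a hex string (as the encryptionKey attribute appears to be)
    // into raw key bytes: two hex characters per byte.
    static byte[] fromHex(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++)
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        return out;
    }

    public static void main(String[] args) throws Exception {
        // The working 16-character key decodes to exactly the 8 bytes DES
        // requires; the failing 14-character key yields only 7 bytes.
        byte[] key = fromHex("1122334455667788");
        System.out.println(key.length); // 8

        // An 8-byte key initializes DES/CBC/NoPadding without complaint.
        Cipher c = Cipher.getInstance("DES/CBC/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "DES"),
               new IvParameterSpec(new byte[8]));
        System.out.println(c.doFinal(new byte[8]).length); // 8: one DES block
    }
}
```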

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Updated: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server do not need to read it into memory before inserting it into the DB

2006-08-30 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1559?page=all ]

Andreas Korneliussen updated DERBY-1559:


Fix Version/s: 10.3.0.0
   Derby Info:   (was: [Patch Available])

Committed revision 438478 at trunk.


 when receiving a single EXTDTA object representing a BLOB, the server do not 
 need to read it into memory before inserting it into the DB
 

 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.1.0, 10.3.0.0, 10.2.2.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Fix For: 10.3.0.0

 Attachments: DERBY-1559.diff, DERBY-1559.stat, DERBY-1559v2.diff, 
 DERBY-1559v3.diff, DERBY-1559v4.diff, DERBY-1559v5.diff, DERBY-1559v6.diff, 
 serverMemoryUsage.xls


 When streaming a BLOB from the Network Client to the Network Server, the 
 Network Server currently reads all the data from the stream and puts it 
 into a byte array.
 The blob data is then inserted into the DB by using
 PreparedStatement.setBytes(..)
 and later
 PreparedStatement.execute()
 To avoid an OutOfMemoryError if the size of the Blob is larger than the 
 total memory in the VM, we could make the network server create a stream 
 which reads data when doing PreparedStatement.execute(). The DB will then 
 stream the BLOB data directly from the network inputstream onto the disk.
 I intend to make a patch which does this if there is only one EXTDTA object 
 (BLOB) sent from the client in the statement, as it will simplify the 
 implementation. Later this can be improved further to include CLOBs, and 
 possibly the cases where there are multiple EXTDTA objects.
 --
 CLOBs are more complex, as there needs to be some character encoding. This 
 can be achieved by using an InputStreamReader and using 
 PreparedStatement.setCharacterStream(..). However, the size of the stream is 
 not necessarily the same as the size of the raw binary data, and to do this 
 for CLOBs, I would need the embedded prepared statements to support the new 
 setCharacterStream() overloads in JDBC4 (which do not include a length 
 attribute).
 --
 Multiple EXTDTA objects are also more complex, since one would need to have 
 fully read the previous object before it is possible to read the next.
 --
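The streaming approach described above is essentially a bounded-buffer copy: instead of materializing the whole BLOB in a byte array before PreparedStatement.setBytes(..), the data is read from the network input stream in fixed-size chunks while execute() runs, so peak memory is one buffer rather than the whole BLOB. A minimal stand-alone sketch of that copy loop (class name and buffer size are illustrative, not Derby's actual network-server code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BoundedCopy {
    // Copy the BLOB in fixed-size chunks; peak heap use is one buffer,
    // not the whole BLOB as with PreparedStatement.setBytes(..).
    static long copy(InputStream in, OutputStream out, int bufSize) throws IOException {
        byte[] buf = new byte[bufSize];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] blob = new byte[1 << 20]; // stand-in for a large BLOB
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        System.out.println(copy(new ByteArrayInputStream(blob), sink, 8192));
    }
}
```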





[jira] Updated: (DERBY-1696) transaction may sometimes keep lock on a row after moving off the resultset in scrollable updatable resultset

2006-08-30 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1696?page=all ]

Andreas Korneliussen updated DERBY-1696:


Derby Info: [Patch Available]

 transaction may sometimes keep lock on a row after moving off the resultset 
 in scrollable updatable resultset
 -

 Key: DERBY-1696
 URL: http://issues.apache.org/jira/browse/DERBY-1696
 Project: Derby
  Issue Type: Bug
  Components: SQL, Store
Affects Versions: 10.2.1.0, 10.2.2.0, 10.3.0.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1696.diff, DERBY-1696.stat


 If an application does the following:
  Statement s = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, 
   ResultSet.CONCUR_UPDATABLE);
  ResultSet rs = s.executeQuery("select * from t1");
  rs.afterLast();
  rs.last();
  rs.next();
 After doing this in transaction isolation level 
 read-committed/read-uncommitted, the last row is still locked with an update 
 lock.
 This is detected by running the JUnit test case 
 ConcurrencyTest.testUpdatePurgedTuple1 in the DerbyNetClient framework.
 (NOTE: the bug is revealed by this test, because the network server does a 
 rs.last() as the first operation on a scrollable updatable resultset to 
 count the number of rows.)
 What triggers this bug seems to be the repositioning of the cursor after all 
 records from the underlying source scan have been inserted into the 
 hashtable. When moving off the result set (to afterLast() or 
 beforeFirst()), no action is taken to release the lock on the current row.





[jira] Commented: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server do not need to read it into memory before inserting it into the DB

2006-08-29 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1559?page=comments#action_12431227 ] 

Andreas Korneliussen commented on DERBY-1559:
-

Thanks for the review.
I will run some tests, and then commit the patch on the trunk.

 when receiving a single EXTDTA object representing a BLOB, the server do not 
 need to read it into memory before inserting it into the DB
 

 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.1.0, 10.3.0.0, 10.2.2.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1559.diff, DERBY-1559.stat, DERBY-1559v2.diff, 
 DERBY-1559v3.diff, DERBY-1559v4.diff, DERBY-1559v5.diff, serverMemoryUsage.xls


 When streaming a BLOB from the Network Client to the Network Server, the 
 Network Server currently reads all the data from the stream and puts it 
 into a byte array.
 The blob data is then inserted into the DB by using
 PreparedStatement.setBytes(..)
 and later
 PreparedStatement.execute()
 To avoid an OutOfMemoryError if the size of the Blob is larger than the 
 total memory in the VM, we could make the network server create a stream 
 which reads data when doing PreparedStatement.execute(). The DB will then 
 stream the BLOB data directly from the network inputstream onto the disk.
 I intend to make a patch which does this if there is only one EXTDTA object 
 (BLOB) sent from the client in the statement, as it will simplify the 
 implementation. Later this can be improved further to include CLOBs, and 
 possibly the cases where there are multiple EXTDTA objects.
 --
 CLOBs are more complex, as there needs to be some character encoding. This 
 can be achieved by using an InputStreamReader and using 
 PreparedStatement.setCharacterStream(..). However, the size of the stream is 
 not necessarily the same as the size of the raw binary data, and to do this 
 for CLOBs, I would need the embedded prepared statements to support the new 
 setCharacterStream() overloads in JDBC4 (which do not include a length 
 attribute).
 --
 Multiple EXTDTA objects are also more complex, since one would need to have 
 fully read the previous object before it is possible to read the next.
 --





[jira] Updated: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server do not need to read it into memory before inserting it into the DB

2006-08-29 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1559?page=all ]

Andreas Korneliussen updated DERBY-1559:


Attachment: DERBY-1559v6.diff

Found a bug while testing the patch DERBY-1559v5.diff:

In case of a DRDAProtocolException while doing the streaming of the data, the 
DDMReader.readLOBContinuationStream(.) method called agent.handleException(..). 
This will cause the connection to be rolled back from within the stream 
(within an execute statement), and the engine throws an exception:

Execution failed because of Permanent Agent Error: SVRCOD = 40; RDBNAM = 
/export/home/tmp/db/bigdb2;create=true; diagnostic msg = Cannot issue rollback 
in a nested connection when there is a pending operation in the parent 
connection.
org.apache.derby.impl.drda.DRDAProtocolException: Execution failed because of 
Permanent Agent Error: SVRCOD = 40; RDBNAM = 
/export/home/tmp/db/bigdb2;create=true; diagnostic msg = Cannot issue rollback 
in a nested connection when there is a pending operation in the parent 
connection.

The side-effect of this error is quite severe, since it seems that the 
connection will never be rolled back.

The attached patch DERBY-1559v6.diff addresses this by having 
DDMReader.readLOBContinuationStream(.) only log the exception. The connection 
will be rolled back later by DRDAConnThread when the exception comes out of 
statement.execute().
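The fix follows a general pattern: a stream that hits a protocol failure should not trigger a rollback from inside the read, but only record the error and surface it as an IOException, so that the failure propagates out of PreparedStatement.execute(), where the connection can be rolled back safely. A hypothetical sketch of that pattern (the class name and the use of RuntimeException as a stand-in for the protocol error are illustrative, not Derby's actual DDMReader code):

```java
import java.io.IOException;
import java.io.InputStream;

public class DeferredErrorStream extends InputStream {
    private final InputStream in;
    private Exception logged; // stand-in for logging through the DRDA agent

    public DeferredErrorStream(InputStream in) {
        this.in = in;
    }

    @Override
    public int read() throws IOException {
        try {
            return in.read();
        } catch (RuntimeException protocolError) { // stand-in for a protocol failure
            // Only log here: no rollback from inside the stream.
            logged = protocolError;
            // Rethrow as IOException so it surfaces from PreparedStatement.execute(),
            // where the connection can be rolled back safely.
            throw new IOException("protocol error while streaming EXTDTA: " + protocolError);
        }
    }
}
```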


 when receiving a single EXTDTA object representing a BLOB, the server do not 
 need to read it into memory before inserting it into the DB
 

 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.1.0, 10.3.0.0, 10.2.2.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1559.diff, DERBY-1559.stat, DERBY-1559v2.diff, 
 DERBY-1559v3.diff, DERBY-1559v4.diff, DERBY-1559v5.diff, DERBY-1559v6.diff, 
 serverMemoryUsage.xls


 When streaming a BLOB from the Network Client to the Network Server, the 
 Network Server currently reads all the data from the stream and puts it 
 into a byte array.
 The blob data is then inserted into the DB by using
 PreparedStatement.setBytes(..)
 and later
 PreparedStatement.execute()
 To avoid an OutOfMemoryError if the size of the Blob is larger than the 
 total memory in the VM, we could make the network server create a stream 
 which reads data when doing PreparedStatement.execute(). The DB will then 
 stream the BLOB data directly from the network inputstream onto the disk.
 I intend to make a patch which does this if there is only one EXTDTA object 
 (BLOB) sent from the client in the statement, as it will simplify the 
 implementation. Later this can be improved further to include CLOBs, and 
 possibly the cases where there are multiple EXTDTA objects.
 --
 CLOBs are more complex, as there needs to be some character encoding. This 
 can be achieved by using an InputStreamReader and using 
 PreparedStatement.setCharacterStream(..). However, the size of the stream is 
 not necessarily the same as the size of the raw binary data, and to do this 
 for CLOBs, I would need the embedded prepared statements to support the new 
 setCharacterStream() overloads in JDBC4 (which do not include a length 
 attribute).
 --
 Multiple EXTDTA objects are also more complex, since one would need to have 
 fully read the previous object before it is possible to read the next.
 --





[jira] Updated: (DERBY-1696) transaction may sometimes keep lock on a row after moving off the resultset in scrollable updatable resultset

2006-08-29 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1696?page=all ]

Andreas Korneliussen updated DERBY-1696:


Attachment: DERBY-1696.diff
DERBY-1696.stat

The attached patch addresses this bug by having the SQL-execution layer use 
ScanController.reopenScan() to release the lock.
In the store layer, GenericScanController.reopenScan() has been modified to 
release the read-lock after the read, while 
GenericScanController.reopenAfterEndTransaction() now additionally may set the 
rowLocations-invalidated flag even if the scan_state is SCAN_HOLD_INIT. This is 
because previously the scan_state SCAN_HOLD_INIT would (as it was used) 
guarantee that no RowLocations had been read from the scan.

Testing: one new test added to ConcurrencyTest. HoldabilityTest had to be 
modified: if a compress happens while there is an open scrollable updatable 
resultset, no updates will be allowed, even if no scrolling has happened.


 transaction may sometimes keep lock on a row after moving off the resultset 
 in scrollable updatable resultset
 -

 Key: DERBY-1696
 URL: http://issues.apache.org/jira/browse/DERBY-1696
 Project: Derby
  Issue Type: Bug
  Components: SQL, Store
Affects Versions: 10.2.1.0, 10.2.2.0, 10.3.0.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1696.diff, DERBY-1696.stat


 If an application does the following:
  Statement s = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, 
   ResultSet.CONCUR_UPDATABLE);
  ResultSet rs = s.executeQuery("select * from t1");
  rs.afterLast();
  rs.last();
  rs.next();
 After doing this in transaction isolation level 
 read-committed/read-uncommitted, the last row is still locked with an update 
 lock.
 This is detected by running the JUnit test case 
 ConcurrencyTest.testUpdatePurgedTuple1 in the DerbyNetClient framework.
 (NOTE: the bug is revealed by this test, because the network server does a 
 rs.last() as the first operation on a scrollable updatable resultset to 
 count the number of rows.)
 What triggers this bug seems to be the repositioning of the cursor after all 
 records from the underlying source scan have been inserted into the 
 hashtable. When moving off the result set (to afterLast() or 
 beforeFirst()), no action is taken to release the lock on the current row.





[jira] Updated: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server do not need to read it into memory before inserting it into the DB

2006-08-28 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1559?page=all ]

Andreas Korneliussen updated DERBY-1559:


Attachment: DERBY-1559v5.diff

The attached patch makes DDMReader handle the DRDAProtocolException and throw 
an IOException from it, instead of doing this from the stream class.

 when receiving a single EXTDTA object representing a BLOB, the server do not 
 need to read it into memory before inserting it into the DB
 

 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.1.0, 10.3.0.0, 10.2.2.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1559.diff, DERBY-1559.stat, DERBY-1559v2.diff, 
 DERBY-1559v3.diff, DERBY-1559v4.diff, DERBY-1559v5.diff, serverMemoryUsage.xls


 When streaming a BLOB from the Network Client to the Network Server, the 
 Network Server currently reads all the data from the stream and puts it 
 into a byte array.
 The blob data is then inserted into the DB by using
 PreparedStatement.setBytes(..)
 and later
 PreparedStatement.execute()
 To avoid an OutOfMemoryError if the size of the Blob is larger than the 
 total memory in the VM, we could make the network server create a stream 
 which reads data when doing PreparedStatement.execute(). The DB will then 
 stream the BLOB data directly from the network inputstream onto the disk.
 I intend to make a patch which does this if there is only one EXTDTA object 
 (BLOB) sent from the client in the statement, as it will simplify the 
 implementation. Later this can be improved further to include CLOBs, and 
 possibly the cases where there are multiple EXTDTA objects.
 --
 CLOBs are more complex, as there needs to be some character encoding. This 
 can be achieved by using an InputStreamReader and using 
 PreparedStatement.setCharacterStream(..). However, the size of the stream is 
 not necessarily the same as the size of the raw binary data, and to do this 
 for CLOBs, I would need the embedded prepared statements to support the new 
 setCharacterStream() overloads in JDBC4 (which do not include a length 
 attribute).
 --
 Multiple EXTDTA objects are also more complex, since one would need to have 
 fully read the previous object before it is possible to read the next.
 --





[jira] Assigned: (DERBY-1696) transaction may sometimes keep lock on a row after moving off the resultset in scrollable updatable resultset

2006-08-25 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1696?page=all ]

Andreas Korneliussen reassigned DERBY-1696:
---

Assignee: Andreas Korneliussen

 transaction may sometimes keep lock on a row after moving off the resultset 
 in scrollable updatable resultset
 -

 Key: DERBY-1696
 URL: http://issues.apache.org/jira/browse/DERBY-1696
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.2.1.0, 10.3.0.0, 10.2.2.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen

 If an application does the following:
  Statement s = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, 
   ResultSet.CONCUR_UPDATABLE);
  ResultSet rs = s.executeQuery("select * from t1");
  rs.afterLast();
  rs.last();
  rs.next();
 After doing this in transaction isolation level 
 read-committed/read-uncommitted, the last row is still locked with an update 
 lock.
 This is detected by running the JUnit test case 
 ConcurrencyTest.testUpdatePurgedTuple1 in the DerbyNetClient framework.
 (NOTE: the bug is revealed by this test, because the network server does a 
 rs.last() as the first operation on a scrollable updatable resultset to 
 count the number of rows.)
 What triggers this bug seems to be the repositioning of the cursor after all 
 records from the underlying source scan have been inserted into the 
 hashtable. When moving off the result set (to afterLast() or 
 beforeFirst()), no action is taken to release the lock on the current row.





[jira] Commented: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server do not need to read it into memory before inserting it into the DB

2006-08-25 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1559?page=comments#action_12430519 ] 

Andreas Korneliussen commented on DERBY-1559:
-

 One last thought, then: does it make any sense to have 
 DDMReader.readLobContinuationString
 throw an IOException rather than a DRDAProtocolException?
 
 Then you wouldn't need any try/catch blocks in EXTDTAReaderInputStream at 
 all, but could just
 allow the IOExceptions to escape upward to wherever they will be caught 
 higher up in the stack.
 

Yes, that makes sense. I will try that out.
Thanks.

Andreas

 when receiving a single EXTDTA object representing a BLOB, the server do not 
 need to read it into memory before inserting it into the DB
 

 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.1.0, 10.3.0.0, 10.2.2.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1559.diff, DERBY-1559.stat, DERBY-1559v2.diff, 
 DERBY-1559v3.diff, DERBY-1559v4.diff, serverMemoryUsage.xls


 When streaming a BLOB from the Network Client to the Network Server, the 
 Network Server currently reads all the data from the stream and puts it 
 into a byte array.
 The blob data is then inserted into the DB by using
 PreparedStatement.setBytes(..)
 and later
 PreparedStatement.execute()
 To avoid an OutOfMemoryError if the size of the Blob is larger than the 
 total memory in the VM, we could make the network server create a stream 
 which reads data when doing PreparedStatement.execute(). The DB will then 
 stream the BLOB data directly from the network inputstream onto the disk.
 I intend to make a patch which does this if there is only one EXTDTA object 
 (BLOB) sent from the client in the statement, as it will simplify the 
 implementation. Later this can be improved further to include CLOBs, and 
 possibly the cases where there are multiple EXTDTA objects.
 --
 CLOBs are more complex, as there needs to be some character encoding. This 
 can be achieved by using an InputStreamReader and using 
 PreparedStatement.setCharacterStream(..). However, the size of the stream is 
 not necessarily the same as the size of the raw binary data, and to do this 
 for CLOBs, I would need the embedded prepared statements to support the new 
 setCharacterStream() overloads in JDBC4 (which do not include a length 
 attribute).
 --
 Multiple EXTDTA objects are also more complex, since one would need to have 
 fully read the previous object before it is possible to read the next.
 --





[jira] Updated: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server do not need to read it into memory before inserting it into the DB

2006-08-24 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1559?page=all ]

Andreas Korneliussen updated DERBY-1559:


Attachment: DERBY-1559v4.diff

Thanks for reviewing the patch. Attached is a patch where I try to address the 
issue w.r.t. preserving the information in the DRDAProtocolException, if thrown 
during streaming. The DRDAProtocolException will be logged from the DDMReader, 
which has access to the DRDAConnThread, before being thrown again. 

I have also considered some other options, like making DRDAProtocolException 
inherit from IOException, or making a new IOException subclass which is able to 
preserve the stack trace from the cause (in JDK 1.3). I did not do that, since:
1. by making DRDAProtocolException inherit from IOException, I would probably 
need to go through all the code and check for catching of IOException (which 
would then also catch DRDAProtocolException);
2. since we will probably soon stop supporting 1.3, I did not create a new 
IOException subclass.

(Ironically, DRDAProtocolException seems usually to be thrown as a consequence 
of an IOException.)
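For context on option 2 above: Throwable only gained getCause()/initCause() in JDK 1.4, so on JDK 1.3 an IOException subclass would have had to carry and print the cause itself. A sketch of what such a subclass might have looked like (the class name is hypothetical):

```java
import java.io.IOException;
import java.io.PrintStream;

// Hypothetical subclass preserving the cause's stack trace on JDK 1.3,
// where Throwable has no getCause()/initCause().
public class CausedIOException extends IOException {
    private final Throwable cause;

    public CausedIOException(String message, Throwable cause) {
        super(message);
        this.cause = cause;
    }

    // Print this exception's trace followed by the cause's, mimicking
    // the "Caused by:" chaining that JDK 1.4 added to Throwable.
    public void printStackTrace(PrintStream s) {
        super.printStackTrace(s);
        if (cause != null) {
            s.println("Caused by:");
            cause.printStackTrace(s);
        }
    }

    public static void main(String[] args) {
        CausedIOException e =
            new CausedIOException("stream failed", new RuntimeException("protocol error"));
        System.out.println(e.getMessage());
    }
}
```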

 when receiving a single EXTDTA object representing a BLOB, the server do not 
 need to read it into memory before inserting it into the DB
 

 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.1.0, 10.3.0.0, 10.2.2.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1559.diff, DERBY-1559.stat, DERBY-1559v2.diff, 
 DERBY-1559v3.diff, DERBY-1559v4.diff, serverMemoryUsage.xls


 When streaming a BLOB from the Network Client to the Network Server, the 
 Network Server currently reads all the data from the stream and puts it 
 into a byte array.
 The blob data is then inserted into the DB by using
 PreparedStatement.setBytes(..)
 and later
 PreparedStatement.execute()
 To avoid an OutOfMemoryError if the size of the Blob is larger than the 
 total memory in the VM, we could make the network server create a stream 
 which reads data when doing PreparedStatement.execute(). The DB will then 
 stream the BLOB data directly from the network inputstream onto the disk.
 I intend to make a patch which does this if there is only one EXTDTA object 
 (BLOB) sent from the client in the statement, as it will simplify the 
 implementation. Later this can be improved further to include CLOBs, and 
 possibly the cases where there are multiple EXTDTA objects.
 --
 CLOBs are more complex, as there needs to be some character encoding. This 
 can be achieved by using an InputStreamReader and using 
 PreparedStatement.setCharacterStream(..). However, the size of the stream is 
 not necessarily the same as the size of the raw binary data, and to do this 
 for CLOBs, I would need the embedded prepared statements to support the new 
 setCharacterStream() overloads in JDBC4 (which do not include a length 
 attribute).
 --
 Multiple EXTDTA objects are also more complex, since one would need to have 
 fully read the previous object before it is possible to read the next.
 --





[jira] Updated: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server do not need to read it into memory before inserting it into the DB

2006-08-23 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1559?page=all ]

Andreas Korneliussen updated DERBY-1559:


Attachment: DERBY-1559v3.diff

Attached is an updated patch where the conflicts with DERBY-1610 have been 
resolved.

Additionally, to avoid the failures in jdbcapi/parameterMapping, I have 
modified the code so that it only uses setBinaryStream if it is a BLOB 
column; otherwise it continues to use setBytes.

I have also done some memory profiling, where the code inserts one BLOB of 
size 64 MB streamed from the network client. 

The results are:
Without any changes: max memory usage 350 MB

With a special patch where setBytes was replaced by setBinaryStream: 176 MB

With this patch: 40 MB.


 when receiving a single EXTDTA object representing a BLOB, the server do not 
 need to read it into memory before inserting it into the DB
 

 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.1.0, 10.3.0.0, 10.2.2.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1559.diff, DERBY-1559.stat, DERBY-1559v2.diff, 
 DERBY-1559v3.diff, serverMemoryUsage.xls


 When streaming a BLOB from the Network Client to the Network Server, the 
 Network Server currently reads all the data from the stream and puts it 
 into a byte array.
 The blob data is then inserted into the DB by using
 PreparedStatement.setBytes(..)
 and later
 PreparedStatement.execute()
 To avoid an OutOfMemoryError if the size of the Blob is larger than the 
 total memory in the VM, we could make the network server create a stream 
 which reads data when doing PreparedStatement.execute(). The DB will then 
 stream the BLOB data directly from the network inputstream onto the disk.
 I intend to make a patch which does this if there is only one EXTDTA object 
 (BLOB) sent from the client in the statement, as it will simplify the 
 implementation. Later this can be improved further to include CLOBs, and 
 possibly the cases where there are multiple EXTDTA objects.
 --
 CLOBs are more complex, as there needs to be some character encoding. This 
 can be achieved by using an InputStreamReader and using 
 PreparedStatement.setCharacterStream(..). However, the size of the stream is 
 not necessarily the same as the size of the raw binary data, and to do this 
 for CLOBs, I would need the embedded prepared statements to support the new 
 setCharacterStream() overloads in JDBC4 (which do not include a length 
 attribute).
 --
 Multiple EXTDTA objects are also more complex, since one would need to have 
 fully read the previous object before it is possible to read the next.
 --





[jira] Commented: (DERBY-1558) enable more testcases in ConcurrencyTest

2006-08-22 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1558?page=comments#action_12429682 ] 

Andreas Korneliussen commented on DERBY-1558:
-

Committed revision 433607.


 enable more testcases in ConcurrencyTest
 

 Key: DERBY-1558
 URL: http://issues.apache.org/jira/browse/DERBY-1558
 Project: Derby
  Issue Type: Test
  Components: Test
Affects Versions: 10.2.1.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
Priority: Minor
 Fix For: 10.3.0.0

 Attachments: DERBY-1558.diff, DERBY-1558.stat


 A number of test cases which depend on SUR are not enabled in the 
 ConcurrencyTest.
 These test cases can be enabled. The test should also set some properties to 
 reduce the lock timeout, so that it runs faster.
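Assuming the properties meant here are Derby's standard lock-timeout settings, lowering them for the test could look like this (the values are illustrative):

```java
public class FastLockTimeouts {
    public static void main(String[] args) {
        // Lower Derby's lock-wait and deadlock timeouts (in seconds) before
        // booting the engine, so lock-conflict test cases fail fast instead
        // of waiting out the default timeouts.
        System.setProperty("derby.locks.waitTimeout", "3");
        System.setProperty("derby.locks.deadlockTimeout", "2");
        System.out.println(System.getProperty("derby.locks.waitTimeout"));
    }
}
```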





[jira] Resolved: (DERBY-1558) enable more testcases in ConcurrencyTest

2006-08-22 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1558?page=all ]

Andreas Korneliussen resolved DERBY-1558.
-

Resolution: Fixed

 enable more testcases in ConcurrencyTest
 

 Key: DERBY-1558
 URL: http://issues.apache.org/jira/browse/DERBY-1558
 Project: Derby
  Issue Type: Test
  Components: Test
Affects Versions: 10.2.1.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
Priority: Minor
 Fix For: 10.3.0.0

 Attachments: DERBY-1558.diff, DERBY-1558.stat


 A number of test cases which depend on SUR are not enabled in the 
 ConcurrencyTest.
 These test cases can be enabled. The test should also set some properties to 
 reduce the lock timeout, so that it runs faster.





[jira] Closed: (DERBY-1558) enable more testcases in ConcurrencyTest

2006-08-22 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1558?page=all ]

Andreas Korneliussen closed DERBY-1558.
---


 enable more testcases in ConcurrencyTest
 

 Key: DERBY-1558
 URL: http://issues.apache.org/jira/browse/DERBY-1558
 Project: Derby
  Issue Type: Test
  Components: Test
Affects Versions: 10.2.1.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
Priority: Minor
 Fix For: 10.3.0.0

 Attachments: DERBY-1558.diff, DERBY-1558.stat


 A number of test cases which depend on SUR are not enabled in the 
 ConcurrencyTest.
 These test cases can be enabled. The test should also set some properties to 
 reduce the lock timeout, so that it runs faster.





[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429361 ] 

Andreas Korneliussen commented on DERBY-418:


I think that the lack of synchronization when calling 
singleUseActivation.markUnused() may cause other threads not to see that the 
field inUse has been modified in the activation. Since it is the finalizer 
thread which calls this, it could mean that the thread which checks the inUse 
flag to close the activation will not see that it has been modified to false.
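The hazard described here is the standard unsynchronized-flag visibility problem: without synchronization or a volatile declaration, a write made by the finalizer thread may never become visible to the thread polling the flag. A minimal sketch of the volatile fix (class and method names mirror the description but are illustrative, not Derby's actual activation code):

```java
public class Activation {
    // Declaring the flag volatile guarantees that the write made by the
    // finalizer thread in markUnused() becomes visible to the thread that
    // polls inUse() to decide when to close the activation.
    private volatile boolean inUse = true;

    public void markUnused() {
        inUse = false;
    }

    public boolean inUse() {
        return inUse;
    }

    public static void main(String[] args) {
        Activation a = new Activation();
        a.markUnused();
        System.out.println(a.inUse());
    }
}
```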

 outofmemory error when running large query in autocommit=false mode
 ---

 Key: DERBY-418
 URL: http://issues.apache.org/jira/browse/DERBY-418
 Project: Derby
  Issue Type: Bug
  Components: Store
Affects Versions: 10.1.1.0
 Environment: I can reproduce this problem on Win2k/ T42 laptop. 
 jdk141. 
Reporter: Sunitha Kambhampati
 Assigned To: Mayuresh Nirhali
 Fix For: 10.2.1.0

 Attachments: AutoCommitTest.java


 On the derby-user list, Chris reported this problem with his application and 
 also a repro for the problem. I am logging the jira issue so it doesn't get 
 lost in all the mail. 
 (http://www.mail-archive.com/derby-user@db.apache.org/msg01258.html)
 --from Chris's post--
 I'm running a set of ~5 queries on one table, using inserts and updates, 
 and I want to be able to roll them back, so I turned off autocommit using 
 setAutoCommit(false). As the update runs, the memory used by the JVM 
 increases continually until I get the following exception about 20% of the 
 way through:
 ERROR 40XT0: An internal error was identified by RawStore module.
at 
 org.apache.derby.iapi.error.StandardException.newException(StandardException.java)
at org.apache.derby.impl.store.raw.xact.Xact.setActiveState(Xact.java)
at org.apache.derby.impl.store.raw.xact.Xact.openContainer(Xact.java)
at 
 org.apache.derby.impl.store.access.conglomerate.OpenConglomerate.init(OpenConglomerate.java)
at org.apache.derby.impl.store.access.heap.Heap.open(Heap.java)
at 
 org.apache.derby.impl.store.access.RAMTransaction.openConglomerate(RAMTransaction.java)
at 
 org.apache.derby.impl.store.access.RAMTransaction.openConglomerate(RAMTransaction.java)
at 
 org.apache.derby.impl.sql.catalog.DataDictionaryImpl.getDescriptorViaIndex(DataDictionaryImpl.java)
at 
 org.apache.derby.impl.sql.catalog.DataDictionaryImpl.locateSchemaRow(DataDictionaryImpl.java)
at 
 org.apache.derby.impl.sql.catalog.DataDictionaryImpl.getSchemaDescriptor(DataDictionaryImpl.java)
at 
 org.apache.derby.impl.sql.compile.QueryTreeNode.getSchemaDescriptor(QueryTreeNode.java)
at 
 org.apache.derby.impl.sql.compile.QueryTreeNode.getSchemaDescriptor(QueryTreeNode.java)
at 
 org.apache.derby.impl.sql.compile.FromBaseTable.bindTableDescriptor(FromBaseTable.java)
at 
 org.apache.derby.impl.sql.compile.FromBaseTable.bindNonVTITables(FromBaseTable.java)
at org.apache.derby.impl.sql.compile.FromList.bindTables(FromList.java)
at 
 org.apache.derby.impl.sql.compile.SelectNode.bindNonVTITables(SelectNode.java)
at 
 org.apache.derby.impl.sql.compile.DMLStatementNode.bindTables(DMLStatementNode.java)
at 
 org.apache.derby.impl.sql.compile.DMLStatementNode.bind(DMLStatementNode.java)
at 
 org.apache.derby.impl.sql.compile.ReadCursorNode.bind(ReadCursorNode.java)
at org.apache.derby.impl.sql.compile.CursorNode.bind(CursorNode.java)
at 
 org.apache.derby.impl.sql.GenericStatement.prepMinion(GenericStatement.java)
at 
 org.apache.derby.impl.sql.GenericStatement.prepare(GenericStatement.java)
at 
 org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(GenericLanguageConnectionContext.java)
at org.apache.derby.impl.jdbc.EmbedStatement.execute(EmbedStatement.java)
at 
 org.apache.derby.impl.jdbc.EmbedStatement.executeQuery(EmbedStatement.java)
at vi.hotspot.database.DataInterface._query(DataInterface.java:181)
at vi.hotspot.database.DataInterface.query(DataInterface.java:160)
at 
 vi.hotspot.database.UpdateManager.updatePartialTable(UpdateManager.java:518)
at 
 vi.hotspot.database.UpdateManager.updatePartialTables(UpdateManager.java:619)
at vi.hotspot.database.UpdateManager.run(UpdateManager.java:924)
at java.lang.Thread.run(Thread.java:534)
 vi.hotspot.exception.ServerTransactionException
at 
 vi.hotspot.database.UpdateManager.updatePartialTable(UpdateManager.java:555)
at 
 vi.hotspot.database.UpdateManager.updatePartialTables(UpdateManager.java:619)
at vi.hotspot.database.UpdateManager.run(UpdateManager.java:924)
at java.lang.Thread.run(Thread.java:534)
 Derby is running in standalone mode. 


[jira] Resolved: (DERBY-1712) Add a JUnit test decorator that starts the NetworkServer at setUp and stops it at tearDown.

2006-08-21 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1712?page=all ]

Andreas Korneliussen resolved DERBY-1712.
-

Resolution: Fixed

 Add a JUnit test decorator that starts the NetworkServer at setUp and stops 
 it at tearDown.
 ---

 Key: DERBY-1712
 URL: http://issues.apache.org/jira/browse/DERBY-1712
 Project: Derby
  Issue Type: Improvement
  Components: Test
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Fix For: 10.3.0.0

 Attachments: DERBY-1712.diff, DERBY-1712.stat


 Add a JUnit test decorator that starts the NetworkServer at setUp and stops 
 it at tearDown.





[jira] Closed: (DERBY-1712) Add a JUnit test decorator that starts the NetworkServer at setUp and stops it at tearDown.

2006-08-21 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1712?page=all ]

Andreas Korneliussen closed DERBY-1712.
---


 Add a JUnit test decorator that starts the NetworkServer at setUp and stops 
 it at tearDown.
 ---

 Key: DERBY-1712
 URL: http://issues.apache.org/jira/browse/DERBY-1712
 Project: Derby
  Issue Type: Improvement
  Components: Test
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Fix For: 10.3.0.0

 Attachments: DERBY-1712.diff, DERBY-1712.stat


 Add a JUnit test decorator that starts the NetworkServer at setUp and stops 
 it at tearDown.





[jira] Updated: (DERBY-1558) enable more testcases in ConcurrencyTest

2006-08-21 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1558?page=all ]

Andreas Korneliussen updated DERBY-1558:


Attachment: DERBY-1558.diff
DERBY-1558.stat

Attaching a patch which enables more testcases in ConcurrencyTest.
The patch makes use of SystemPropertiesTestSetup within the 
ConcurrencyTest.suite method to reduce the lock timeout. It additionally makes 
use of NetworkServerTestSetup from the _Suite.suite() method. Therefore, 
_Suite_app.properties has been added to disable starting of the network 
server.

This change makes it possible to run the _Suite test from any JUnit test 
runner without starting the network server manually.
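The decorator idea used by setups like NetworkServerTestSetup can be sketched in plain Java (no JUnit dependency; the class and method names below are made up for illustration): a wrapper starts a resource before the wrapped test runs and stops it afterwards, even on failure.

```java
// Minimal illustration of the "test decorator" idea: wrap a test so
// that a resource (here, a pretend network server) is started before
// the test runs and stopped afterwards. Names are hypothetical; the
// real NetworkServerTestSetup lives in Derby's test framework.
public class DecoratorSketch {
    interface Test { void run(); }

    static class NetworkServerDecorator implements Test {
        private final Test wrapped;
        NetworkServerDecorator(Test wrapped) { this.wrapped = wrapped; }

        private void setUp()    { System.out.println("server started"); }
        private void tearDown() { System.out.println("server stopped"); }

        public void run() {
            setUp();
            try {
                wrapped.run();
            } finally {
                tearDown();   // stop the server even if the test fails
            }
        }
    }

    public static void main(String[] args) {
        Test suite = new Test() {
            public void run() { System.out.println("running suite"); }
        };
        new NetworkServerDecorator(suite).run();
    }
}
```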


 enable more testcases in ConcurrencyTest
 

 Key: DERBY-1558
 URL: http://issues.apache.org/jira/browse/DERBY-1558
 Project: Derby
  Issue Type: Test
  Components: Test
Affects Versions: 10.2.1.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
Priority: Minor
 Fix For: 10.3.0.0

 Attachments: DERBY-1558.diff, DERBY-1558.stat


 A number of testcases,  which depend on SUR, are not enabled in the 
 ConcurrencyTest.
 These testcases can be enabled. The test should also set some properties to 
 reduce lock timeout, so that it runs faster.





[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429386 ] 

Andreas Korneliussen commented on DERBY-418:


The finalizer thread is, as per the javadoc, guaranteed not to hold any 
user-visible synchronization locks when finalize is invoked. If the finalizer 
synchronizes in the same order as the other methods, it should not introduce 
any deadlocks (you may get lock waiting, but not deadlock).
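The lock-ordering point can be shown with a small sketch (not Derby code): two threads that always acquire the same pair of locks in the same order can block each other briefly but cannot deadlock; reversing the order in one thread is what creates the deadlock condition.

```java
// Two threads acquiring the same pair of locks in the SAME order:
// this can cause lock waiting but not deadlock. Swapping the order
// in one thread would create the classic deadlock cycle.
public class LockOrderSketch {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    static void doWork(String name) {
        synchronized (lockA) {        // always A first...
            synchronized (lockB) {    // ...then B
                System.out.println(name + " done");
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> doWork("t1"));
        Thread t2 = new Thread(() -> doWork("t2"));
        t1.start(); t2.start();
        t1.join(); t2.join();   // completes: consistent order avoids deadlock
        System.out.println("no deadlock");
    }
}
```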

 outofmemory error when running large query in autocommit=false mode
 ---

 Key: DERBY-418
 URL: http://issues.apache.org/jira/browse/DERBY-418
 Project: Derby
  Issue Type: Bug
  Components: Store
Affects Versions: 10.1.1.0
 Environment: I can reproduce this problem on Win2k/ T42 laptop. 
 jdk141. 
Reporter: Sunitha Kambhampati
 Assigned To: Mayuresh Nirhali
 Fix For: 10.2.1.0

 Attachments: AutoCommitTest.java



[jira] Commented: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server do not need to read it into memory before inserting it into the DB

2006-08-21 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1559?page=comments#action_12429394 ] 

Andreas Korneliussen commented on DERBY-1559:
-

DERBY-1535 caused conflicts with this patch during development. Now, DERBY-1610 
will cause conflicts with it. I think this patch is bigger and more complex 
than the patches in DERBY-1610, so it is probably better to commit this patch 
first, then update, resolve conflicts, and commit the patch in DERBY-1610. 

Is that OK?

 when receiving a single EXTDTA object representing a BLOB, the server do not 
 need to read it into memory before inserting it into the DB
 

 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.1.0, 10.3.0.0, 10.2.2.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1559.diff, DERBY-1559.stat, DERBY-1559v2.diff, 
 serverMemoryUsage.xls


 When streaming a BLOB from the Network Client to the Network Server, the 
 Network Server currently reads all the data from the stream and puts it into 
 a byte array.
 The blob data is then inserted into the DB by using
 PreparedStatement.setBytes(..)
 and later
 PreparedStatement.execute()
 To avoid OutOfMemoryError if the size of the Blob is larger than the total 
 memory in the VM, we could make the network server create a stream which 
 reads data when doing PreparedStatement.execute(). The DB will then stream 
 the BLOB data directly from the network inputstream to the disk.
 I intend to make a patch which does this if there is only one EXTDTA object 
 (BLOB) sent from the client in the statement, as it will simplify the 
 implementation. Later this can be improved further to include CLOBs, and 
 possibly the cases where there are multiple EXTDTA objects.
 --
 CLOBs are more complex, as there needs to be some character encoding. This 
 can be achieved by using an InputStreamReader and using 
 PreparedStatement.setCharacterStream(..). However, the size of the stream is 
 not necessarily the same as the size of the raw binary data, and to do this 
 for CLOBs, I would need the embedded prepared statements to support the new 
 setCharacterStream() overloads in JDBC4 (which do not include a length 
 attribute).
 --
 Multiple EXTDTA objects are also more complex, since one would need to have 
 fully read the previous object before it is possible to read the next.
 --
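From the JDBC API's point of view, the difference between the current and the proposed approach can be sketched as below (table and column names are made up; the main method only shows the two call shapes and does not open a real connection):

```java
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch of the two approaches: materializing the BLOB as a byte array
// versus handing the engine a stream that is consumed during execute().
// Table and column names are hypothetical.
public class BlobStreamSketch {
    // Before: the whole BLOB is held in a byte array in memory.
    static void insertMaterialized(Connection con, byte[] data)
            throws SQLException {
        PreparedStatement ps =
            con.prepareStatement("INSERT INTO blobs(data) VALUES (?)");
        ps.setBytes(1, data);
        ps.execute();
        ps.close();
    }

    // After: the data is pulled from the stream during execute(), so
    // only a buffer's worth needs to be in memory at a time.
    static void insertStreamed(Connection con, InputStream in, int length)
            throws SQLException {
        PreparedStatement ps =
            con.prepareStatement("INSERT INTO blobs(data) VALUES (?)");
        ps.setBinaryStream(1, in, length);
        ps.execute();
        ps.close();
    }

    public static void main(String[] args) {
        // No database here; this class only illustrates the call shapes.
        System.out.println("sketch compiles");
    }
}
```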





[jira] Commented: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server do not need to read it into memory before inserting it into the DB

2006-08-21 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1559?page=comments#action_12429406 ] 

Andreas Korneliussen commented on DERBY-1559:
-

I will wait for DERBY-1610 to be fixed, to avoid the risk.


 when receiving a single EXTDTA object representing a BLOB, the server do not 
 need to read it into memory before inserting it into the DB
 

 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.1.0, 10.3.0.0, 10.2.2.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1559.diff, DERBY-1559.stat, DERBY-1559v2.diff, 
 serverMemoryUsage.xls






[jira] Commented: (DERBY-1387) Add JMX extensions to Derby

2006-08-21 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1387?page=comments#action_12429432 ] 

Andreas Korneliussen commented on DERBY-1387:
-

Hi, thanks for providing this patch. I have tried compiling it, and I have 
the following comments:

1. Most of the classes have multiple entries in the patch file, so when 
applying the patch, I got the same class multiple times in the same file.

2. After fixing 1, all of the code except one class compiles with JDK 1.4. 
The class which depends on JDK 1.5 is BasicManagementService: it uses 
java.lang.management.ManagementFactory to create the MBeanServer. Instead I 
would propose simply using javax.management.MBeanServerFactory, which would 
allow the code to be compiled on JDK 1.4.

If you do need the JDK 1.5 libraries, the build files must be set up so that 
it is still possible to compile on JDK 1.4 (i.e. by skipping the JMX targets).
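The suggested JDK 1.4-compatible route can be sketched as follows (the MBean and its domain name are made up for the example; only the use of MBeanServerFactory instead of ManagementFactory is the point here):

```java
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;

// Sketch: create an MBeanServer via javax.management.MBeanServerFactory
// (available since JDK 1.4's JMX APIs) rather than
// java.lang.management.ManagementFactory (JDK 1.5+).
public class MBeanServerSketch {
    // Standard MBean pattern: Foo implements FooMBean.
    public interface CounterMBean { int getCount(); }
    public static class Counter implements CounterMBean {
        public int getCount() { return 42; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = MBeanServerFactory.createMBeanServer();
        ObjectName name = new ObjectName("example.derby:type=Counter");
        server.registerMBean(new Counter(), name);

        // Read the attribute back through the server to show it works.
        Object count = server.getAttribute(name, "Count");
        System.out.println("Count = " + count);
    }
}
```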


 Add JMX extensions to Derby
 ---

 Key: DERBY-1387
 URL: http://issues.apache.org/jira/browse/DERBY-1387
 Project: Derby
  Issue Type: New Feature
  Components: Services
Reporter: Sanket Sharma
 Assigned To: Sanket Sharma
 Attachments: derbyjmx.patch, Requirements for JMX Updated.html, 
 Requirements for JMX.html, Requirements for JMX.zip


 This is a draft requirement specification for adding monitoring and 
 management extensions to Apache Derby using JMX. The requirements document 
 has been uploaded to JIRA as well as the Derby wiki page at 
 http://wiki.apache.org/db-derby/_Requirement_Specifications_for_Monitoring_%26_Management_Extensions_using_JMX
 Developers and users are requested to please look at the document (the 
 feature list in particular) and add their own rating to features by adding 
 a column to the table.
 Comments are welcome.





[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429438 ] 

Andreas Korneliussen commented on DERBY-418:


I am not sure what you mean. 

Could you give an example?

I agree that it is possible to get a deadlock in the finalize() method if it 
obtains its locks in a different order than another user thread or another 
finalizer thread. If it obtains the locks in the same order, the condition for 
a deadlock is not there. If multiple objects being garbage collected share 
mutexes, they need to acquire the locks in the same order - or else you may 
get deadlock.

 outofmemory error when running large query in autocommit=false mode
 ---

 Key: DERBY-418
 URL: http://issues.apache.org/jira/browse/DERBY-418
 Project: Derby
  Issue Type: Bug
  Components: Store
Affects Versions: 10.1.1.0
 Environment: I can reproduce this problem on Win2k/ T42 laptop. 
 jdk141. 
Reporter: Sunitha Kambhampati
 Assigned To: Mayuresh Nirhali
 Fix For: 10.2.1.0

 Attachments: AutoCommitTest.java



[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429473 ] 

Andreas Korneliussen commented on DERBY-418:


Assuming the Derby embedded JDBC driver is thread-safe, it should be safe for 
a result set to call its own close() method in its finalizer. If you get a 
deadlock in the finalizer, it proves that it is also possible to write a 
multithreaded program which gets deadlocks when calling ResultSet.close(), and 
Derby is then not really MT-safe.

If this happens, I think it is better to fix the embedded driver so that it 
really becomes MT-safe than to avoid synchronization in the finalizer threads.

As for the suggested change in 1142, I would note that if there is no 
synchronization in the finalizer and you set a field in an object from it, 
there is no guarantee that other threads will see the modification of the 
field (unless, I think, it is volatile). However, I think Mayuresh has been 
working on this issue, so maybe he has tried that approach?

Another approach could be to use a WeakHashMap to store the activations in, 
instead of a Vector. If all objects referring to an activation have been 
garbage collected, the activation will be removed from the WeakHashMap.
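The WeakHashMap idea can be sketched as follows ("Activation" here is just a placeholder class, not Derby's): keys held only weakly become eligible for collection once no strong references remain, so a tracked object drops out of the map on garbage collection instead of leaking.

```java
import java.util.Map;
import java.util.WeakHashMap;

// Sketch of the WeakHashMap suggestion: entries whose keys are no
// longer strongly reachable are expunged after garbage collection,
// so activations tracked this way would not leak.
public class WeakMapSketch {
    static class Activation { }

    public static void main(String[] args) {
        Map<Activation, Boolean> activations =
            new WeakHashMap<Activation, Boolean>();

        Activation act = new Activation();
        activations.put(act, Boolean.TRUE);
        System.out.println("tracked: " + activations.size());

        act = null;      // drop the only strong reference
        System.gc();     // a hint; reclamation is not guaranteed...

        // ...so the entry *may* still be present here; the point is
        // only that the map never pins the key against collection.
        System.out.println("after gc: at most "
            + activations.size() + " entry");
    }
}
```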


 outofmemory error when running large query in autocommit=false mode
 ---

 Key: DERBY-418
 URL: http://issues.apache.org/jira/browse/DERBY-418
 Project: Derby
  Issue Type: Bug
  Components: Store
Affects Versions: 10.1.1.0
 Environment: I can reproduce this problem on Win2k/ T42 laptop. 
 jdk141. 
Reporter: Sunitha Kambhampati
 Assigned To: Mayuresh Nirhali
 Fix For: 10.2.1.0

 Attachments: AutoCommitTest.java



[jira] Updated: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server do not need to read it into memory before inserting it into the DB

2006-08-18 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1559?page=all ]

Andreas Korneliussen updated DERBY-1559:


  Urgency: Normal
Affects Version/s: 10.2.2.0
   10.3.0.0
   Derby Info: [Patch Available]

I will consider committing the patch next week, unless anyone objects.

 when receiving a single EXTDTA object representing a BLOB, the server do not 
 need to read it into memory before inserting it into the DB
 

 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.1.0, 10.3.0.0, 10.2.2.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1559.diff, DERBY-1559.stat, DERBY-1559v2.diff, 
 serverMemoryUsage.xls






[jira] Updated: (DERBY-1558) enable more testcases in ConcurrencyTest

2006-08-17 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1558?page=all ]

Andreas Korneliussen updated DERBY-1558:


Summary: enable more testcases in ConcurrencyTest  (was: include 
ConcurrencyTest into a suite)
Description: 
A number of testcases,  which depend on SUR, are not enabled in the 
ConcurrencyTest.
These testcases can be enabled. The test should also set some properties to 
reduce lock timeout, so that it runs faster.

  was:The test jdbcapi/ConcurrencyTest.junit is currently not included in any 
suites. Also, a number of testcases are not enabled in that test, which depend 
on SUR. These testcases can be enabled. The test should also set some 
properties to reduce lock timeout, so that it runs faster.


Updated summary and description of this JIRA.

 enable more testcases in ConcurrencyTest
 

 Key: DERBY-1558
 URL: http://issues.apache.org/jira/browse/DERBY-1558
 Project: Derby
  Issue Type: Test
  Components: Test
Affects Versions: 10.2.0.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
Priority: Minor
 Fix For: 10.3.0.0


 A number of testcases,  which depend on SUR, are not enabled in the 
 ConcurrencyTest.
 These testcases can be enabled. The test should also set some properties to 
 reduce lock timeout, so that it runs faster.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (DERBY-1712) Add a JUnit test decorator that starts the NetworkServer at setUp and stops it at tearDown.

2006-08-17 Thread Andreas Korneliussen (JIRA)
Add a JUnit test decorator that starts the NetworkServer at setUp and stops it 
at tearDown.
---

 Key: DERBY-1712
 URL: http://issues.apache.org/jira/browse/DERBY-1712
 Project: Derby
  Issue Type: Improvement
  Components: Test
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Fix For: 10.3.0.0


Add a JUnit test decorator that starts the NetworkServer at setUp and stops it 
at tearDown.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Updated: (DERBY-1712) Add a JUnit test decorator that starts the NetworkServer at setUp and stops it at tearDown.

2006-08-17 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1712?page=all ]

Andreas Korneliussen updated DERBY-1712:


Attachment: DERBY-1712.diff
DERBY-1712.stat

The patch (DERBY-1712.diff) has a new class NetworkServerTestSetup. This class 
is put into a new package for junit components, called 
org.apache.derbyTesting.junit, and the patch therefore also contains a new 
build file. TestConfiguration had to be modified so that the new class could 
use the DERBY_TEST_CONFIG attribute.
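The decorator described above can be sketched roughly as follows. The Server interface and class names are illustrative assumptions, not Derby's actual API; a plain Runnable stands in for the wrapped JUnit test.

```java
// Sketch of the decorator idea behind NetworkServerTestSetup: start the
// server before the wrapped test runs and stop it afterwards, even when
// the test fails. All names here are illustrative assumptions.
interface Server {
    void start();
    void stop();
}

class ServerTestSetup {
    private final Server server;
    private final Runnable test;

    ServerTestSetup(Server server, Runnable test) {
        this.server = server;
        this.test = test;
    }

    // Mirrors JUnit's TestSetup: setUp starts the server, the decorated
    // test runs, and tearDown stops the server in a finally block.
    void run() {
        server.start();     // setUp
        try {
            test.run();     // the decorated test
        } finally {
            server.stop();  // tearDown always runs
        }
    }
}
```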



 Add a JUnit test decorator that starts the NetworkServer at setUp and stops 
 it at tearDown.
 ---

 Key: DERBY-1712
 URL: http://issues.apache.org/jira/browse/DERBY-1712
 Project: Derby
  Issue Type: Improvement
  Components: Test
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Fix For: 10.3.0.0

 Attachments: DERBY-1712.diff, DERBY-1712.stat


 Add a JUnit test decorator that starts the NetworkServer at setUp and stops 
 it at tearDown.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Updated: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server does not need to read it into memory before inserting it into the DB

2006-08-15 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1559?page=all ]

Andreas Korneliussen updated DERBY-1559:


Attachment: DERBY-1559v2.diff

Attached is an updated patch which preserves the changes in DERBY-1535 when 
receiving multiple EXTDTA objects.

 when receiving a single EXTDTA object representing a BLOB, the server does 
 not need to read it into memory before inserting it into the DB
 

 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.0.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1559.diff, DERBY-1559.stat, DERBY-1559v2.diff


 When streaming a BLOB from the Network Client to the Network Server, the 
 Network server currently reads all the data from the stream and puts it into 
 a byte array.
 The blob data is then inserted into the DB by using
 PreparedStatement.setBytes(..)
 and later
 PreparedStatement.execute()
 To avoid an OutOfMemoryError if the size of the Blob is greater than the 
 total memory in the VM, we could make the network server create a stream 
 which reads data when doing PreparedStatement.execute().  The DB will then 
 stream the BLOB data directly from the network inputstream onto the disk.
 I intend to make a patch which does this if there is only one EXTDTA object 
 (BLOB) sent from the client in the statement, as it will simplify the 
 implementation. Later this can be improved further to include CLOBs, and 
 possibly to include the cases where there are multiple EXTDTA objects.
 --
 CLOBs are more complex, as there needs to be some character encoding. This 
 can be achieved by using an InputStreamReader and using 
 PreparedStatement.setCharacterStream(..). However, the size of the stream is 
 not necessarily the same as the size of the raw binary data, and to do this 
 for CLOBs, I would need the embedded prepared statements to support the new 
 setCharacterStream() overloads in JDBC4 (which do not include a length 
 attribute).
 --
 Multiple EXTDTA objects are also more complex, since one would need to have 
 fully read the previous object before it is possible to read the next.
 --

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (DERBY-1696) transaction may sometimes keep lock on a row after moving off the resultset in scrollable updatable resultset

2006-08-15 Thread Andreas Korneliussen (JIRA)
transaction may sometimes keep lock on a row after moving off the resultset in 
scrollable updatable resultset
-

 Key: DERBY-1696
 URL: http://issues.apache.org/jira/browse/DERBY-1696
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.2.0.0, 10.2.2.0, 10.3.0.0
Reporter: Andreas Korneliussen


If an application does the following:

 Statement s = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, 
  ResultSet.CONCUR_UPDATABLE);
 ResultSet rs = s.executeQuery("select * from t1");
 rs.afterLast();
 rs.last();
 rs.next();

After doing this in transaction isolation level 
read-committed/read-uncommitted, the last row is still locked with an update 
lock.

This is detected by running the JUnit testcase 
ConcurrencyTest.testUpdatePurgedTuple1 in the DerbyNetClient framework.
(NOTE: the bug is revealed by this test, because the network server does a 
rs.last() as the first operation on a scrollable updatable resultset to count 
number of rows)

What triggers this bug seems to be the repositioning of the cursor after all 
the underlying records have been inserted into the hashtable from the source 
scan. When moving off the result set (to afterLast() or beforeFirst()), no 
action is taken to release the lock on the current row.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (DERBY-1598) unable to boot existing database from network server when running with security manager

2006-07-27 Thread Andreas Korneliussen (JIRA)
unable to boot existing database from network server when running with 
security manager


 Key: DERBY-1598
 URL: http://issues.apache.org/jira/browse/DERBY-1598
 Project: Derby
  Issue Type: Bug
  Components: Store
Affects Versions: 10.2.0.0
Reporter: Andreas Korneliussen
Priority: Blocker


Myrna van Lunteren reported the following:

I ran into the following interesting situation with permissions
granted as per derby_tests.policy, and I'm hoping someone can answer
my questions:
- start networkserver with derby_tests.policy as described in the
remote server testing section of the java/testing/README.htm, but with
-h srvhostname
- start an ij session, connect to the server creating a database
- disconnect, exit ij, shutdown networkserver
so far ok
- start networkserver again just like before
- start ij again just like before, connect to the same database again
results in:
ERROR XJ040: DERBY SQL error: SQLCODE: -1, SQLSTATE: XJ040, SQLERRMC:
Failed to start database 'bladb', see the next exception for
details.::SQLSTATE: XJ001Java exception: 'access denied
(java.io.FilePermission
/home/myrna/tsttmp5/srv/bladb/log/logmirror.ctrl read):
java.security.AccessControlException'.

One can dis- and reconnect fine as long as the network server is up,
but once it has been bounced, reconnect fails.

derby.log shows no stack trace, even though the following properties
are set in derby.properties in derby.system.home:
derby.infolog.append=true
derby.language.logStatementText=true
derby.stream.error.logSeverityLevel=0
--
...
2006-07-26 23:49:38.402 GMT Thread[DRDAConnThread_3,5,main] (DATABASE
= bladb), (DRDAID = {1}), Failed to start database 'bladb', see the
next exception for details.
2006-07-26 23:49:38.404 GMT Thread[DRDAConnThread_3,5,main] (DATABASE
= bladb), (DRDAID = {1}), Java exception: 'access denied
(java.io.FilePermission
/home/myrna/tsttmp5/srv/bladb/log/logmirror.ctrl read):
java.security.AccessControlException'.


The error goes away when I add the following permissions to derbynet.jar:
 // all databases under derby.system.home
 permission java.io.FilePermission "${derby.system.home}${/}-",
"read, write, delete";
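In a complete policy file, a permission like the one quoted above sits inside a grant block along these lines (the codeBase URL is an assumption; it should point at the actual location of derbynet.jar):

```
grant codeBase "file:/path/to/derby/lib/derbynet.jar" {
  // all databases under derby.system.home
  permission java.io.FilePermission "${derby.system.home}${/}-",
      "read,write,delete";
};
```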


I have reproduced this problem manually. After adding some tracing calls in 
..drda.Database.makeConnection() I got this stack trace:
java.sql.SQLException: Failed to start database 
'/export/home/tmp/devel/derbydev/testing/testdb', see the next exception for 
details.
at 
org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(SQLExceptionFactory.java:44)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Util.java:88)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Util.java:94)
at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Util.java:173)
at 
org.apache.derby.impl.jdbc.EmbedConnection.newSQLException(EmbedConnection.java:1955)
at 
org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(EmbedConnection.java:1619)
at 
org.apache.derby.impl.jdbc.EmbedConnection.init(EmbedConnection.java:216)
at 
org.apache.derby.impl.jdbc.EmbedConnection30.init(EmbedConnection30.java:72)
at 
org.apache.derby.jdbc.Driver30.getNewEmbedConnection(Driver30.java:73)
at org.apache.derby.jdbc.InternalDriver.connect(InternalDriver.java:209)
at 
org.apache.derby.jdbc.AutoloadedDriver.connect(AutoloadedDriver.java:116)
at org.apache.derby.impl.drda.Database.makeConnection(Database.java:232)
at 
org.apache.derby.impl.drda.DRDAConnThread.getConnFromDatabaseName(DRDAConnThread.java:1191)
at 
org.apache.derby.impl.drda.DRDAConnThread.verifyUserIdPassword(DRDAConnThread.java:1169)
at 
org.apache.derby.impl.drda.DRDAConnThread.parseSECCHK(DRDAConnThread.java:2758)
at 
org.apache.derby.impl.drda.DRDAConnThread.parseDRDAConnection(DRDAConnThread.java:1031)
at 
org.apache.derby.impl.drda.DRDAConnThread.processCommands(DRDAConnThread.java:874)
at 
org.apache.derby.impl.drda.DRDAConnThread.run(DRDAConnThread.java:254)
NEXT Exception follows
java.security.AccessControlException: access denied (java.io.FilePermission 
/export/home/tmp/devel/derbydev/testing/testdb/log/logmirror.ctrl read)
at 
java.security.AccessControlContext.checkPermission(AccessControlContext.java:269)
at 
java.security.AccessController.checkPermission(AccessController.java:401)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:524)
at java.lang.SecurityManager.checkRead(SecurityManager.java:863)
at java.io.File.exists(File.java:678)
at 
org.apache.derby.impl.store.raw.log.LogToFile.boot(LogToFile.java:2987)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java:1996)
at 

[jira] Closed: (DERBY-1598) unable to boot existing database from network server when running with security manager

2006-07-27 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1598?page=all ]

Andreas Korneliussen closed DERBY-1598.
---

Resolution: Duplicate

Duplicate: http://issues.apache.org/jira/browse/DERBY-1241

 unable to boot existing database from network server when running with 
 security manager
 

 Key: DERBY-1598
 URL: http://issues.apache.org/jira/browse/DERBY-1598
 Project: Derby
  Issue Type: Bug
  Components: Store
Affects Versions: 10.2.0.0
Reporter: Andreas Korneliussen
Priority: Blocker

 Myrna van Lunteren reported the following:
 Quote
 I ran into the following interesting situation with permissions
 granted as per derby_tests.policy, and I'm hoping someone can answer
 my questions:
 - start networkserver with derby_tests.policy as described in the
 remote server testing section of the java/testing/README.htm, but with
 -h srvhostname
 - start an ij session, connect to the server creating a database
 - disconnect, exit ij, shutdown networkserver
 so far ok
 - start networkserver again just like before
 - start ij again just like before, connect to the same database again
 results in:
 ERROR XJ040: DERBY SQL error: SQLCODE: -1, SQLSTATE: XJ040, SQLERRMC:
 Failed to start database 'bladb', see the next exception for
 details.::SQLSTATE: XJ001Java exception: 'access denied
 (java.io.FilePermission
 /home/myrna/tsttmp5/srv/bladb/log/logmirror.ctrl read):
 java.security.AccessControlException'.
 One can dis- and reconnect fine as long as the network server is up,
 but once it has been bounced, reconnect fails.
 derby.log shows no stack trace, even though the following properties
 are set in derby.properties in derby.system.home:
 derby.infolog.append=true
 derby.language.logStatementText=true
 derby.stream.error.logSeverityLevel=0
 --
 ...
 2006-07-26 23:49:38.402 GMT Thread[DRDAConnThread_3,5,main] (DATABASE
 = bladb), (DRDAID = {1}), Failed to start database 'bladb', see the
 next exception for details.
 2006-07-26 23:49:38.404 GMT Thread[DRDAConnThread_3,5,main] (DATABASE
 = bladb), (DRDAID = {1}), Java exception: 'access denied
 (java.io.FilePermission
 /home/myrna/tsttmp5/srv/bladb/log/logmirror.ctrl read):
 java.security.AccessControlException'.
 
 The error goes away when I add the following permissions to derbynet.jar:
  // all databases under derby.system.home
  permission java.io.FilePermission "${derby.system.home}${/}-",
 "read, write, delete";
 End Quote 
 I have reproduced this problem manually. After adding some tracing calls in 
 ..drda.Database.makeConnection() I got this stack trace:
 java.sql.SQLException: Failed to start database 
 '/export/home/tmp/devel/derbydev/testing/testdb', see the next exception for 
 details.
 at 
 org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(SQLExceptionFactory.java:44)
 at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Util.java:88)
 at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Util.java:94)
 at 
 org.apache.derby.impl.jdbc.Util.generateCsSQLException(Util.java:173)
 at 
 org.apache.derby.impl.jdbc.EmbedConnection.newSQLException(EmbedConnection.java:1955)
 at 
 org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(EmbedConnection.java:1619)
 at 
 org.apache.derby.impl.jdbc.EmbedConnection.init(EmbedConnection.java:216)
 at 
 org.apache.derby.impl.jdbc.EmbedConnection30.init(EmbedConnection30.java:72)
 at 
 org.apache.derby.jdbc.Driver30.getNewEmbedConnection(Driver30.java:73)
 at 
 org.apache.derby.jdbc.InternalDriver.connect(InternalDriver.java:209)
 at 
 org.apache.derby.jdbc.AutoloadedDriver.connect(AutoloadedDriver.java:116)
 at 
 org.apache.derby.impl.drda.Database.makeConnection(Database.java:232)
 at 
 org.apache.derby.impl.drda.DRDAConnThread.getConnFromDatabaseName(DRDAConnThread.java:1191)
 at 
 org.apache.derby.impl.drda.DRDAConnThread.verifyUserIdPassword(DRDAConnThread.java:1169)
 at 
 org.apache.derby.impl.drda.DRDAConnThread.parseSECCHK(DRDAConnThread.java:2758)
 at 
 org.apache.derby.impl.drda.DRDAConnThread.parseDRDAConnection(DRDAConnThread.java:1031)
 at 
 org.apache.derby.impl.drda.DRDAConnThread.processCommands(DRDAConnThread.java:874)
 at 
 org.apache.derby.impl.drda.DRDAConnThread.run(DRDAConnThread.java:254)
 NEXT Exception follows
 java.security.AccessControlException: access denied (java.io.FilePermission 
 /export/home/tmp/devel/derbydev/testing/testdb/log/logmirror.ctrl read)
 at 
 java.security.AccessControlContext.checkPermission(AccessControlContext.java:269)
 at 
 java.security.AccessController.checkPermission(AccessController.java:401)
 at 

[jira] Commented: (DERBY-1241) logmirror.ctrl is getting accessed outside the privileged block when the checkpoint instant is invalid on log factory boot method.

2006-07-27 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1241?page=comments#action_12423822 ] 

Andreas Korneliussen commented on DERBY-1241:
-

The following scenario reproduced this problem:

1. Start Network server with security manager enabled
2. Use ij to connect to the server and create a database
3. Stop Network server using Ctrl-c (i.e. do not shut down the database gracefully)
4. Restart Network server with security manager enabled
5. Use ij to connect to the server and connect to the database previously 
created



 logmirror.ctrl  is getting accessed outside the privileged block when the 
 checkpoint instant is invalid  on log factory boot method.
 

 Key: DERBY-1241
 URL: http://issues.apache.org/jira/browse/DERBY-1241
 Project: Derby
  Issue Type: Bug
  Components: Store
Reporter: Suresh Thalamati
 Assigned To: Suresh Thalamati
Priority: Minor

 This problem was reported on the derby-dev list  by Olav Sandstaa , filing 
 jira entry  for it. 
 Olav Sandstaa wrote:
  Rick Hillegas [EMAIL PROTECTED] wrote:
 
  java.sql.SQLException: Java exception: 'access denied 
  (java.io.FilePermission 
  /export/home/tmp/derbyjdbc4/DerbyNetClient/TestConnectionMethods/wombat/log/logmirror.ctrl
   read): java.security.AccessControlException'.
  at 
  java.security.AccessControlContext.checkPermission(AccessControlContext.java:321)
  at 
  java.security.AccessController.checkPermission(AccessController.java:546)
  at java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
  at java.lang.SecurityManager.checkRead(SecurityManager.java:871)
  at java.io.File.exists(File.java:731)
  at 
  org.apache.derby.impl.store.raw.log.LogToFile.boot(LogToFile.java:2940)
  at 
  org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java:1996)
  at 
  org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java:290)
  at 
  org.apache.derby.impl.services.monitor.BaseMonitor.startModule(BaseMonitor.java:542)
  at 
  org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Monitor.java:418)
  at 
  org.apache.derby.impl.store.raw.data.BaseDataFileFactory.bootLogFactory(BaseDataFileFactory.java:1762)
  at 
  org.apache.derby.impl.store.raw.data.BaseDataFileFactory.setRawStoreFactory(BaseDataFileFactory.java:1218)
  at org.apache.derby.impl.store.raw.RawStore.boot(RawStore.java:250)
  at 
  org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java:1996)
  at 
  org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java:290)
  at 
  org.apache.derby.impl.services.monitor.BaseMonitor.startModule(BaseMonitor.java:542)
  at 
  org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Monitor.java:418)
  at 
  org.apache.derby.impl.store.access.RAMAccessManager.boot(RAMAccessManager.java:987)
  at 
  org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java:1996)
  at 
  org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java:290)
  at 
  org.apache.derby.impl.services.monitor.BaseMonitor.startModule(BaseMonitor.java:542)
  at 
  org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Monitor.java:418)
  at 
  org.apache.derby.impl.db.BasicDatabase.bootStore(BasicDatabase.java:738)
  at org.apache.derby.impl.db.BasicDatabase.boot(BasicDatabase.java:178)
  at 
  org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java:1996)
  at 
  org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java:290)
  at 
  org.apache.derby.impl.services.monitor.BaseMonitor.bootService(BaseMonitor.java:1831)
  at 
  org.apache.derby.impl.services.monitor.BaseMonitor.startProviderService(BaseMonitor.java:1697)
  at 
  org.apache.derby.impl.services.monitor.BaseMonitor.findProviderAndStartService(BaseMonitor.java:1577)
  at 
  org.apache.derby.impl.services.monitor.BaseMonitor.startPersistentService(BaseMonitor.java:990)
  at 
  org.apache.derby.iapi.services.monitor.Monitor.startPersistentService(Monitor.java:541)
  at 
  org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(EmbedConnection.java:1586)
  at 
  org.apache.derby.impl.jdbc.EmbedConnection.init(EmbedConnection.java:216)
  at 
  org.apache.derby.impl.jdbc.EmbedConnection30.init(EmbedConnection30.java:72)
  at 
  org.apache.derby.impl.jdbc.EmbedConnection40.init(EmbedConnection40.java:48)
  at 
  org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Driver40.java:62)
  at org.apache.derby.jdbc.InternalDriver.connect(InternalDriver.java:199)
  at org.apache.derby.impl.drda.Database.makeConnection(Database.java:231)
  

[jira] Commented: (DERBY-1545) derbynet/testProtocol.java fails with security manager enabled

2006-07-27 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1545?page=comments#action_12423828 ] 

Andreas Korneliussen commented on DERBY-1545:
-

Improved the original patch by also granting permission to 
${derbyTesting.serverhost}.
This should make it possible to also run this test when the server host is on a 
remote machine.
Committed revision 426048.


 derbynet/testProtocol.java fails with security manager enabled
 --

 Key: DERBY-1545
 URL: http://issues.apache.org/jira/browse/DERBY-1545
 Project: Derby
  Issue Type: Bug
  Components: Regression Test Failure
Affects Versions: 10.2.0.0
 Environment: Solaris 10 x86, Sun JVM 1.5.0
Reporter: Knut Anders Hatlen
 Assigned To: Andreas Korneliussen
 Fix For: 10.2.0.0

 Attachments: DERBY-1545.diff, DERBY-1545.stat


 The tinderbox test started failing after revision 423676 which enabled 
 security manager for derbynet/testProtocol.java. See 
 http://www.multinet.no/~solberg/public/Apache/TinderBox_Derby/testlog/SunOS-5.10_i86pc-i386/423706-derbyall_diff.txt
 Exception in thread main java.security.AccessControlException: access 
 denied (java.net.SocketPermission 127.0.0.1:1527 connect,resolve)
   at 
 java.security.AccessControlContext.checkPermission(AccessControlContext.java:264)
   at 
 java.security.AccessController.checkPermission(AccessController.java:427)
   at java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
   at java.lang.SecurityManager.checkConnect(SecurityManager.java:1034)
   at java.net.Socket.connect(Socket.java:501)
   at java.net.Socket.connect(Socket.java:457)
   at java.net.Socket.init(Socket.java:365)
   at java.net.Socket.init(Socket.java:178)
   at org.apache.derby.impl.drda.TestProto.getConnection(Unknown Source)
   at org.apache.derby.impl.drda.TestProto.init(Unknown Source)
   at 
 org.apache.derbyTesting.functionTests.tests.derbynet.testProtocol.executeFile(Unknown
  Source)
   at 
 org.apache.derbyTesting.functionTests.tests.derbynet.testProtocol.main(Unknown
  Source)

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Updated: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server does not need to read it into memory before inserting it into the DB

2006-07-27 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1559?page=all ]

Andreas Korneliussen updated DERBY-1559:


Attachment: DERBY-1559.diff
DERBY-1559.stat

Attached is a patch which does the following:

EXTDTAReaderInputStream: this is a new class which is a subclass of 
InputStream. It is capable of reading EXTDTA from the network using the 
DDMReader.

DDMReader: added helper methods to create an EXTDTAReaderInputStream and to 
let it fetch more data.

DRDAConnThread: 
* When handling an EXCSQLSTT request, DRDAConnThread will execute the 
statement right after calling readAndSetAllExtParams(..). This is because the 
call to stmt.execute() will start the streaming of the LOB data.  
readAndSetExtParam(..) is implemented so that it will only stream the LOB for 
the last EXTDTA parameter; otherwise it will use the old mechanism of reading 
everything and creating a byte array. 

Using this patch, I have successfully inserted a 1GB blob streamed from the 
Network client to the Network server, running the Network server with 64MB of 
max heap space. I have also run derbyall with this patch, with no failures.
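The execute-before-reading design described above can be illustrated with a minimal on-demand stream: statement execution drives the reads, pulling more data only as it is consumed. SegmentSource stands in for Derby's DDMReader; every name in this sketch is an assumption for illustration.

```java
import java.io.InputStream;
import java.util.ArrayDeque;
import java.util.Deque;

// SegmentSource stands in for the network reader: it hands out chunks of
// EXTDTA data one at a time, as they would arrive over DRDA.
class SegmentSource {
    private final Deque<byte[]> segments = new ArrayDeque<>();
    SegmentSource(byte[]... segs) {
        for (byte[] s : segs) segments.add(s);
    }
    // Returns the next chunk from the "network", or null at the end.
    byte[] nextSegment() {
        return segments.poll();
    }
}

class OnDemandInputStream extends InputStream {
    private final SegmentSource source;
    private byte[] current = new byte[0];
    private int pos = 0;

    OnDemandInputStream(SegmentSource source) {
        this.source = source;
    }

    @Override
    public int read() {
        // Fetch the next segment only when the current one is exhausted,
        // so the consumer (statement execution) paces the network reads.
        while (current != null && pos == current.length) {
            current = source.nextSegment();
            pos = 0;
        }
        if (current == null) {
            return -1; // end of the EXTDTA data
        }
        return current[pos++] & 0xFF;
    }

    // Helper for the sketch: drain the stream into a String.
    static String drain(OnDemandInputStream in) {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1) {
            sb.append((char) b);
        }
        return sb.toString();
    }
}
```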


 when receiving a single EXTDTA object representing a BLOB, the server does 
 not need to read it into memory before inserting it into the DB
 

 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.0.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1559.diff, DERBY-1559.stat


 When streaming a BLOB from the Network Client to the Network Server, the 
 Network server currently reads all the data from the stream and puts it into 
 a byte array.
 The blob data is then inserted into the DB by using
 PreparedStatement.setBytes(..)
 and later
 PreparedStatement.execute()
 To avoid an OutOfMemoryError if the size of the Blob is greater than the 
 total memory in the VM, we could make the network server create a stream 
 which reads data when doing PreparedStatement.execute().  The DB will then 
 stream the BLOB data directly from the network inputstream onto the disk.
 I intend to make a patch which does this if there is only one EXTDTA object 
 (BLOB) sent from the client in the statement, as it will simplify the 
 implementation. Later this can be improved further to include CLOBs, and 
 possibly to include the cases where there are multiple EXTDTA objects.
 --
 CLOBs are more complex, as there needs to be some character encoding. This 
 can be achieved by using an InputStreamReader and using 
 PreparedStatement.setCharacterStream(..). However, the size of the stream is 
 not necessarily the same as the size of the raw binary data, and to do this 
 for CLOBs, I would need the embedded prepared statements to support the new 
 setCharacterStream() overloads in JDBC4 (which do not include a length 
 attribute).
 --
 Multiple EXTDTA objects are also more complex, since one would need to have 
 fully read the previous object before it is possible to read the next.
 --

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (DERBY-1564) wisconsin.sql test failed in DerbyNet framework, VM for network server got OutOfMemoryError

2006-07-27 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1564?page=comments#action_12423858 ] 

Andreas Korneliussen commented on DERBY-1564:
-

I think experimenting with different heap sizes is slightly off-track, since 
all it may prove is that Derby 10.2 may use more memory than Derby 10.1 
(setting aside the fact that the test has changed). 

This error was seen in a very specific environment: new VMs (JVM 1.5 and JVM 
1.6), on a Sparc platform (real multi-CPU) and in a Solaris Zones environment 
(which will affect timing). This test has previously had timing issues.  Would 
it be possible to run the test in the same environment where it failed, using 
the derbyTesting.jar from 10.2 against Derby jar files from 10.1? If the error 
is reproduced, it indicates that this is not a regression, though it may still 
be a serious bug.

---
The tests were changed in DERBY-937. One change was to allow the optimizer to 
use as much time as it wants in order to find the perfect query plan. The 
other change was to run compress on the tables.  This happens before the test 
starts running the sql script which gives output. To me, this indicates that 
the added compress may have played a part in the issue (since it did not give 
any output).


 wisconsin.sql test failed in DerbyNet framework, VM for network server got 
 OutOfMemoryError
 ---

 Key: DERBY-1564
 URL: http://issues.apache.org/jira/browse/DERBY-1564
 Project: Derby
  Issue Type: Bug
  Components: Network Server, Test
Affects Versions: 10.2.0.0
 Environment: Solaris Sparc, Java 6, running in a Solaris Zone. 
 DerbyNet framework.
Reporter: Andreas Korneliussen
Priority: Critical
 Attachments: wisconsin.tar.gz


 The wisconsin test failed when running it in DerbyNet framework. No output in 
 the outfile. The DerbyNet.err file has one message:
 Exception in thread Thread-2 java.lang.OutOfMemoryError: Java heap space
 The test was run against 10.2.0.4 snapshot release.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (DERBY-1600) JUnit test SURQueryMixTest does not follow the typical pattern for JUnit tests.

2006-07-27 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1600?page=comments#action_12423905 ] 

Andreas Korneliussen commented on DERBY-1600:
-

See:
http://junit.sourceforge.net/doc/cookstour/cookstour.htm

run() is a template method, it will call:

setUp()
runTest()
tearDown()

The default implementation of runTest() uses reflection to find a test method 
to call. This is usually the easiest way to achieve reuse of the code. In this 
case, I wanted the test to run the same test code, however on a variation of 
SQL queries and data models, and it made more sense to solve it by overriding 
runTest() and passing the variations as constructor parameters.

I do not see how an existing junit test would slow down any progress of moving 
other tests to junit.
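The template method structure discussed above can be sketched as follows. MiniTestCase and QueryVariantTest are illustrative stand-ins, not JUnit's or Derby's actual classes.

```java
// Minimal sketch of JUnit 3's template method structure: run() fixes the
// setUp/runTest/tearDown order, and subclasses override the steps.
abstract class MiniTestCase {
    protected void setUp() {}
    protected void tearDown() {}
    protected abstract void runTest();

    // run() is the template method: the order is fixed here, behaviour is
    // plugged in by the overridden steps.
    public final void run() {
        setUp();
        try {
            runTest();
        } finally {
            tearDown();
        }
    }
}

// A parameterised test in the spirit of SURQueryMixTest: the query variation
// is passed through the constructor and runTest is overridden directly.
class QueryVariantTest extends MiniTestCase {
    private final String query;
    private final java.util.List<String> executed;

    QueryVariantTest(String query, java.util.List<String> executed) {
        this.query = query;
        this.executed = executed;
    }

    @Override
    protected void runTest() {
        executed.add(query); // stand-in for running the SQL against Derby
    }
}
```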

 JUnit test SURQueryMixTest does not follow the typical pattern for JUnit 
 tests.
 ---

 Key: DERBY-1600
 URL: http://issues.apache.org/jira/browse/DERBY-1600
 Project: Derby
  Issue Type: Bug
  Components: Test
Affects Versions: 10.2.0.0
Reporter: Daniel John Debrunner
 Fix For: 10.2.0.0


 SURQueryMixTest overrides runtest instead of providing individual test 
 methods, I think this means its setUp and tearDown methods would not be 
 called.
 It does not have such methods at present, but if someone needed to add 
 them, they would waste time figuring out why the methods were never 
 called.
 JUnit tests in derby should follow the established pattern, see 
 http://junit.sourceforge.net/doc/cookbook/cookbook.htm
 http://wiki.apache.org/db-derby/DerbyJUnitTesting
 This slows down the progress of moving all tests to JUnit.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (DERBY-1587) INTEGER function cannot be abbreviated

2006-07-26 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1587?page=comments#action_12423578 ] 

Andreas Korneliussen commented on DERBY-1587:
-

I think the patch looks good and can be committed. 

 INTEGER function cannot be abbreviated
 --

 Key: DERBY-1587
 URL: http://issues.apache.org/jira/browse/DERBY-1587
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1
Reporter: Christian d'Heureuse
 Assigned To: Yip Ng
Priority: Minor
 Attachments: derby1587trunkdiff1.txt, derby1587trunkstat1.txt


 The reference manual 
 (http://db.apache.org/derby/docs/10.1/ref/rrefbuiltinteger.html) states that 
 the INTEGER function can be abbreviated by INT, but Derby 10.1.3.1 does not 
 accept that.
 Example:
   VALUES INTEGER(1.5);
   -> OK
   VALUES INT(1.5);
   -> ERROR 42X80: VALUES clause must contain at least one element.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (DERBY-1545) derbynet/testProtocol.java fails with security manager enabled

2006-07-26 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1545?page=comments#action_12423599 ] 

Andreas Korneliussen commented on DERBY-1545:
-

The test code (TestProto, which is packaged into derbyTesting.jar) does new 
Socket(hostName, 1527). derbyTesting.jar does not have permission to connect 
and resolve. A fix could be to grant that permission in testProtocol.policy.
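For reference, such a grant might look like the following fragment of testProtocol.policy. This is only a sketch: the `${derbyTesting.codejar}` variable and the literal host/port are assumptions, not the actual contents of the committed policy file.

```
// Hypothetical testProtocol.policy fragment (codeBase URL is an assumption)
grant codeBase "${derbyTesting.codejar}derbyTesting.jar" {
  permission java.net.SocketPermission "127.0.0.1:1527", "connect,resolve";
};
```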


 derbynet/testProtocol.java fails with security manager enabled
 --

 Key: DERBY-1545
 URL: http://issues.apache.org/jira/browse/DERBY-1545
 Project: Derby
  Issue Type: Bug
  Components: Regression Test Failure
Affects Versions: 10.2.0.0
 Environment: Solaris 10 x86, Sun JVM 1.5.0
Reporter: Knut Anders Hatlen

 The tinderbox test started failing after revision 423676 which enabled 
 security manager for derbynet/testProtocol.java. See 
 http://www.multinet.no/~solberg/public/Apache/TinderBox_Derby/testlog/SunOS-5.10_i86pc-i386/423706-derbyall_diff.txt
 Exception in thread "main" java.security.AccessControlException: access 
 denied (java.net.SocketPermission 127.0.0.1:1527 connect,resolve)
   at 
 java.security.AccessControlContext.checkPermission(AccessControlContext.java:264)
   at 
 java.security.AccessController.checkPermission(AccessController.java:427)
   at java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
   at java.lang.SecurityManager.checkConnect(SecurityManager.java:1034)
   at java.net.Socket.connect(Socket.java:501)
   at java.net.Socket.connect(Socket.java:457)
   at java.net.Socket.<init>(Socket.java:365)
   at java.net.Socket.<init>(Socket.java:178)
   at org.apache.derby.impl.drda.TestProto.getConnection(Unknown Source)
   at org.apache.derby.impl.drda.TestProto.<init>(Unknown Source)
   at 
 org.apache.derbyTesting.functionTests.tests.derbynet.testProtocol.executeFile(Unknown
  Source)
   at 
 org.apache.derbyTesting.functionTests.tests.derbynet.testProtocol.main(Unknown
  Source)

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Assigned: (DERBY-1545) derbynet/testProtocol.java fails with security manager enabled

2006-07-26 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1545?page=all ]

Andreas Korneliussen reassigned DERBY-1545:
---

Assignee: Andreas Korneliussen

 derbynet/testProtocol.java fails with security manager enabled
 --

 Key: DERBY-1545
 URL: http://issues.apache.org/jira/browse/DERBY-1545
 Project: Derby
  Issue Type: Bug
  Components: Regression Test Failure
Affects Versions: 10.2.0.0
 Environment: Solaris 10 x86, Sun JVM 1.5.0
Reporter: Knut Anders Hatlen
 Assigned To: Andreas Korneliussen

 The tinderbox test started failing after revision 423676 which enabled 
 security manager for derbynet/testProtocol.java. See 
 http://www.multinet.no/~solberg/public/Apache/TinderBox_Derby/testlog/SunOS-5.10_i86pc-i386/423706-derbyall_diff.txt
 Exception in thread "main" java.security.AccessControlException: access 
 denied (java.net.SocketPermission 127.0.0.1:1527 connect,resolve)
   at 
 java.security.AccessControlContext.checkPermission(AccessControlContext.java:264)
   at 
 java.security.AccessController.checkPermission(AccessController.java:427)
   at java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
   at java.lang.SecurityManager.checkConnect(SecurityManager.java:1034)
   at java.net.Socket.connect(Socket.java:501)
   at java.net.Socket.connect(Socket.java:457)
   at java.net.Socket.<init>(Socket.java:365)
   at java.net.Socket.<init>(Socket.java:178)
   at org.apache.derby.impl.drda.TestProto.getConnection(Unknown Source)
   at org.apache.derby.impl.drda.TestProto.<init>(Unknown Source)
   at 
 org.apache.derbyTesting.functionTests.tests.derbynet.testProtocol.executeFile(Unknown
  Source)
   at 
 org.apache.derbyTesting.functionTests.tests.derbynet.testProtocol.main(Unknown
  Source)

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Updated: (DERBY-1545) derbynet/testProtocol.java fails with security manager enabled

2006-07-26 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1545?page=all ]

Andreas Korneliussen updated DERBY-1545:


Attachment: DERBY-1545.diff
DERBY-1545.stat

Attached is a patch (DERBY-1545.diff + stat) which gives socket permission to 
derbyTesting.jar.

 derbynet/testProtocol.java fails with security manager enabled
 --

 Key: DERBY-1545
 URL: http://issues.apache.org/jira/browse/DERBY-1545
 Project: Derby
  Issue Type: Bug
  Components: Regression Test Failure
Affects Versions: 10.2.0.0
 Environment: Solaris 10 x86, Sun JVM 1.5.0
Reporter: Knut Anders Hatlen
 Assigned To: Andreas Korneliussen
 Attachments: DERBY-1545.diff, DERBY-1545.stat


 The tinderbox test started failing after revision 423676 which enabled 
 security manager for derbynet/testProtocol.java. See 
 http://www.multinet.no/~solberg/public/Apache/TinderBox_Derby/testlog/SunOS-5.10_i86pc-i386/423706-derbyall_diff.txt
 Exception in thread "main" java.security.AccessControlException: access 
 denied (java.net.SocketPermission 127.0.0.1:1527 connect,resolve)
   at 
 java.security.AccessControlContext.checkPermission(AccessControlContext.java:264)
   at 
 java.security.AccessController.checkPermission(AccessController.java:427)
   at java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
   at java.lang.SecurityManager.checkConnect(SecurityManager.java:1034)
   at java.net.Socket.connect(Socket.java:501)
   at java.net.Socket.connect(Socket.java:457)
   at java.net.Socket.<init>(Socket.java:365)
   at java.net.Socket.<init>(Socket.java:178)
   at org.apache.derby.impl.drda.TestProto.getConnection(Unknown Source)
   at org.apache.derby.impl.drda.TestProto.<init>(Unknown Source)
   at 
 org.apache.derbyTesting.functionTests.tests.derbynet.testProtocol.executeFile(Unknown
  Source)
   at 
 org.apache.derbyTesting.functionTests.tests.derbynet.testProtocol.main(Unknown
  Source)

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Resolved: (DERBY-1587) INTEGER function cannot be abbreviated

2006-07-26 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1587?page=all ]

Andreas Korneliussen resolved DERBY-1587.
-

Fix Version/s: 10.2.0.0
   Resolution: Fixed
   Derby Info:   (was: [Patch Available])

Thanks for contributing.

Committed revision 425726.

 INTEGER function cannot be abbreviated
 --

 Key: DERBY-1587
 URL: http://issues.apache.org/jira/browse/DERBY-1587
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1
Reporter: Christian d'Heureuse
 Assigned To: Yip Ng
Priority: Minor
 Fix For: 10.2.0.0

 Attachments: derby1587trunkdiff1.txt, derby1587trunkstat1.txt


 The reference manual 
 (http://db.apache.org/derby/docs/10.1/ref/rrefbuiltinteger.html) states that 
 the INTEGER function can be abbreviated by INT, but Derby 10.1.3.1 does not 
 accept that.
 Example:
   VALUES INTEGER(1.5);
   -> OK
   VALUES INT(1.5);
   -> ERROR 42X80: VALUES clause must contain at least one element.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Resolved: (DERBY-1545) derbynet/testProtocol.java fails with security manager enabled

2006-07-26 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1545?page=all ]

Andreas Korneliussen resolved DERBY-1545.
-

Fix Version/s: 10.2.0.0
   Resolution: Fixed

Committed revision 425732.


 derbynet/testProtocol.java fails with security manager enabled
 --

 Key: DERBY-1545
 URL: http://issues.apache.org/jira/browse/DERBY-1545
 Project: Derby
  Issue Type: Bug
  Components: Regression Test Failure
Affects Versions: 10.2.0.0
 Environment: Solaris 10 x86, Sun JVM 1.5.0
Reporter: Knut Anders Hatlen
 Assigned To: Andreas Korneliussen
 Fix For: 10.2.0.0

 Attachments: DERBY-1545.diff, DERBY-1545.stat


 The tinderbox test started failing after revision 423676 which enabled 
 security manager for derbynet/testProtocol.java. See 
 http://www.multinet.no/~solberg/public/Apache/TinderBox_Derby/testlog/SunOS-5.10_i86pc-i386/423706-derbyall_diff.txt
 Exception in thread "main" java.security.AccessControlException: access 
 denied (java.net.SocketPermission 127.0.0.1:1527 connect,resolve)
   at 
 java.security.AccessControlContext.checkPermission(AccessControlContext.java:264)
   at 
 java.security.AccessController.checkPermission(AccessController.java:427)
   at java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
   at java.lang.SecurityManager.checkConnect(SecurityManager.java:1034)
   at java.net.Socket.connect(Socket.java:501)
   at java.net.Socket.connect(Socket.java:457)
   at java.net.Socket.init(Socket.java:365)
   at java.net.Socket.init(Socket.java:178)
   at org.apache.derby.impl.drda.TestProto.getConnection(Unknown Source)
   at org.apache.derby.impl.drda.TestProto.init(Unknown Source)
   at 
 org.apache.derbyTesting.functionTests.tests.derbynet.testProtocol.executeFile(Unknown
  Source)
   at 
 org.apache.derbyTesting.functionTests.tests.derbynet.testProtocol.main(Unknown
  Source)

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Closed: (DERBY-1545) derbynet/testProtocol.java fails with security manager enabled

2006-07-26 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1545?page=all ]

Andreas Korneliussen closed DERBY-1545.
---


 derbynet/testProtocol.java fails with security manager enabled
 --

 Key: DERBY-1545
 URL: http://issues.apache.org/jira/browse/DERBY-1545
 Project: Derby
  Issue Type: Bug
  Components: Regression Test Failure
Affects Versions: 10.2.0.0
 Environment: Solaris 10 x86, Sun JVM 1.5.0
Reporter: Knut Anders Hatlen
 Assigned To: Andreas Korneliussen
 Fix For: 10.2.0.0

 Attachments: DERBY-1545.diff, DERBY-1545.stat


 The tinderbox test started failing after revision 423676 which enabled 
 security manager for derbynet/testProtocol.java. See 
 http://www.multinet.no/~solberg/public/Apache/TinderBox_Derby/testlog/SunOS-5.10_i86pc-i386/423706-derbyall_diff.txt
 Exception in thread "main" java.security.AccessControlException: access 
 denied (java.net.SocketPermission 127.0.0.1:1527 connect,resolve)
   at 
 java.security.AccessControlContext.checkPermission(AccessControlContext.java:264)
   at 
 java.security.AccessController.checkPermission(AccessController.java:427)
   at java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
   at java.lang.SecurityManager.checkConnect(SecurityManager.java:1034)
   at java.net.Socket.connect(Socket.java:501)
   at java.net.Socket.connect(Socket.java:457)
   at java.net.Socket.<init>(Socket.java:365)
   at java.net.Socket.<init>(Socket.java:178)
   at org.apache.derby.impl.drda.TestProto.getConnection(Unknown Source)
   at org.apache.derby.impl.drda.TestProto.<init>(Unknown Source)
   at 
 org.apache.derbyTesting.functionTests.tests.derbynet.testProtocol.executeFile(Unknown
  Source)
   at 
 org.apache.derbyTesting.functionTests.tests.derbynet.testProtocol.main(Unknown
  Source)

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (DERBY-1595) Network server fails with DRDAProtocolException if a BLOB with size 2147483647 is streamed from client

2006-07-26 Thread Andreas Korneliussen (JIRA)
Network server fails with DRDAProtocolException if a BLOB with size 2147483647 
is streamed from client
--

 Key: DERBY-1595
 URL: http://issues.apache.org/jira/browse/DERBY-1595
 Project: Derby
  Issue Type: Bug
  Components: Network Server
Affects Versions: 10.2.0.0
Reporter: Andreas Korneliussen
Priority: Minor


When executing a program which inserts a BLOB of size 2GB-1, the Network server 
fails with DRDAProtocolException.  This happens before it starts handling the 
actual LOB data:

java org.apache.derby.drda.NetworkServerControl start
Apache Derby Network Server - 10.2.0.4 alpha started and ready to accept 
connections on port 1527 at 2006-07-26 14:15:21.284 GMT
Execution failed because of a Distributed Protocol Error:  DRDA_Proto_SYNTAXRM; 
CODPNT arg  = 0; Error Code Value = c
org.apache.derby.impl.drda.DRDAProtocolException
at 
org.apache.derby.impl.drda.DRDAConnThread.throwSyntaxrm(DRDAConnThread.java:441)
at 
org.apache.derby.impl.drda.DDMReader.readLengthAndCodePoint(DDMReader.java:554)
at org.apache.derby.impl.drda.DDMReader.getCodePoint(DDMReader.java:617)
at 
org.apache.derby.impl.drda.DRDAConnThread.parseSQLDTA_work(DRDAConnThread.java:4072)
at 
org.apache.derby.impl.drda.DRDAConnThread.parseSQLDTA(DRDAConnThread.java:3928)
at 
org.apache.derby.impl.drda.DRDAConnThread.parseEXCSQLSTTobjects(DRDAConnThread.java:3806)
at 
org.apache.derby.impl.drda.DRDAConnThread.parseEXCSQLSTT(DRDAConnThread.java:3640)
at 
org.apache.derby.impl.drda.DRDAConnThread.processCommands(DRDAConnThread.java:928)
at 
org.apache.derby.impl.drda.DRDAConnThread.run(DRDAConnThread.java:254)
null
org.apache.derby.impl.drda.DRDAProtocolException
at 
org.apache.derby.impl.drda.DRDAConnThread.throwSyntaxrm(DRDAConnThread.java:441)
at 
org.apache.derby.impl.drda.DDMReader.readLengthAndCodePoint(DDMReader.java:554)
at org.apache.derby.impl.drda.DDMReader.getCodePoint(DDMReader.java:617)
at 
org.apache.derby.impl.drda.DRDAConnThread.parseSQLDTA_work(DRDAConnThread.java:4072)
at 
org.apache.derby.impl.drda.DRDAConnThread.parseSQLDTA(DRDAConnThread.java:3928)
at 
org.apache.derby.impl.drda.DRDAConnThread.parseEXCSQLSTTobjects(DRDAConnThread.java:3806)
at 
org.apache.derby.impl.drda.DRDAConnThread.parseEXCSQLSTT(DRDAConnThread.java:3640)
at 
org.apache.derby.impl.drda.DRDAConnThread.processCommands(DRDAConnThread.java:928)
at 
org.apache.derby.impl.drda.DRDAConnThread.run(DRDAConnThread.java:254)




-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Resolved: (DERBY-1296) Setting property derby.system.bootAll causes an Exception

2006-07-25 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1296?page=all ]

Andreas Korneliussen resolved DERBY-1296.
-

Resolution: Fixed
Derby Info:   (was: [Patch Available])

Thanks for resolving the conflict.

Committed revision 425388.

 Setting property derby.system.bootAll causes an Exception
 -

 Key: DERBY-1296
 URL: http://issues.apache.org/jira/browse/DERBY-1296
 Project: Derby
  Issue Type: Bug
  Components: Services
Affects Versions: 10.1.3.1
 Environment: Windows XP
Reporter: David Heath
 Assigned To: Fernanda Pizzorno
Priority: Critical
 Fix For: 10.2.0.0

 Attachments: derby-1296.diff, derby-1296.stat, derby-1296v2.diff, 
 derby-1296v2.stat


 After creating 3 databases under c:\databases\sample - I wanted to get a list 
 of available databases, I followed the example in the DriverPropertyInfo 
 array example in the developer guide and used the following routine:
 ...
   private static void test2() {
 String driverName = "org.apache.derby.jdbc.EmbeddedDriver";
 String url = "jdbc:derby:";
 Properties p = System.getProperties();
 p.put("derby.system.home", "C:\\databases\\sample");
 p.put("derby.system.bootAll", "true");
 try {
   Class.forName(driverName);
   Driver driver = DriverManager.getDriver(url);
   Properties info = new Properties();
   DriverPropertyInfo[] attributes = driver.getPropertyInfo(url, info);
   for (DriverPropertyInfo attribute : attributes) {
     System.out.print(attribute.name);
     System.out.print(" : ");
     if (attribute.choices != null) {
       System.out.print(Arrays.toString(attribute.choices));
     }
     System.out.print(" : ");
     System.out.println(attribute.value);
   }
 }
 catch(Exception exp) {
   exp.printStackTrace();
 }
 try {
   DriverManager.getConnection("jdbc:derby:;shutdown=true");
 }
 }
 catch(Exception exp) {
 }
   }
 When run, the following exception occurred:
 Exception in thread "main" java.lang.ExceptionInInitializerError
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Unknown Source)
 at Test.test2(Test.java:20)
 at Test.main(Test.java:8)
 Caused by: java.lang.NullPointerException
 at org.apache.derby.impl.services.monitor.BaseMonitor.bootProviderServices(Unknown Source)
 at org.apache.derby.impl.services.monitor.BaseMonitor.bootPersistentServices(Unknown Source)
 at org.apache.derby.impl.services.monitor.BaseMonitor.runWithState(Unknown Source)
 at org.apache.derby.impl.services.monitor.FileMonitor.<init>(Unknown Source)
 at org.apache.derby.iapi.services.monitor.Monitor.startMonitor(Unknown Source)
 at org.apache.derby.iapi.jdbc.JDBCBoot.boot(Unknown Source)
 at org.apache.derby.jdbc.EmbeddedDriver.boot(Unknown Source)
 at org.apache.derby.jdbc.EmbeddedDriver.<clinit>(Unknown Source)
 ... 4 more
 If I comment out:
 // p.put("derby.system.bootAll", "true");
 The program runs, but no databases are listed.
 The output from java org.apache.derby.tools.sysinfo:
 -- Java Information --
 Java Version:1.5.0_05
 Java Vendor: Sun Microsystems Inc.
 Java home:   C:\Program Files\Java\jre1.5.0_05
 Java classpath:  C:\tools\derby\db-derby-10.1.2.1-bin\lib\derby.jar;C:\tools\derby\db-derby-10.1.2.1-bin\lib\derbytools.jar;;C:\tools\Java\jdk1.5.0_05\lib\tools.jar;C:\tools\log4j\logging-log4j-1.2.12\dist\lib\log4j-1.2.12.jar;C:\dev_deploy\plugins\com.x4m.util_1.0.0.jar;C:\dev_deploy\plugins\com.x4m.uomcrs_1.0.0.jar;C:\dev_deploy\plugins\com.x4m.database_1.0.0.jar;C:\dev_deploy\plugins\com.x4m.feature_1.0.0.jar;C:\dev_deploy\plugins\org.eclipse.core.runtime_3.1.0.jar;C:\dev_deploy\plugins\org.eclipse.osgi_3.1.0.jar;C:\dev_deploy\plugins\com.x4m.database_1.0.0.jar;C:\david\novice\syncservices\build\class
 OS name: Windows XP
 OS architecture: x86
 OS version:  5.1
 Java user name:  David
 Java user home:  C:\Documents and Settings\David
 Java user dir:   C:\david\novice\derby
 java.specification.name: Java Platform API Specification
 java.specification.version: 1.5
 - Derby Information 
 JRE - JDBC: J2SE 5.0 - JDBC 3.0
 [C:\tools\derby\db-derby-10.1.2.1-bin\lib\derby.jar] 10.1.2.1 - (330608)
 [C:\tools\derby\db-derby-10.1.2.1-bin\lib\derbytools.jar] 10.1.2.1 - (330608)
 --
 - Locale Information -
 --
 I have read most of the documentation and can find no other way to get a list 
 of available catalogs - thus I do not know of a workaround for this problem.
 David Heath
 Transform Software 

[jira] Commented: (DERBY-1535) Trial 2 for DERBY-550, improve use of Engine from NetworkServer and reduce memory usage

2006-07-25 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1535?page=comments#action_12423375 ] 

Andreas Korneliussen commented on DERBY-1535:
-

I think there could be a problem here:

DRDAConnThread:
-   ps.setBytes(i+1, paramBytes);
+   ps.setBinaryStream(i+1, 
+   
   new ByteArrayInputStream(paramBytes),
+   

If paramBytes is null, it would previously be interpreted by the engine as a 
NULL value being inserted into the database.
When creating a new ByteArrayInputStream from a null paramBytes, I guess it 
would either be interpreted as inserting no data into the BLOB, or a 
NullPointerException gets thrown.
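The concern above can be sketched in isolation. The helper name below is hypothetical, not Derby's actual code: setBytes(i, null) means SQL NULL, so any setBytes-to-setBinaryStream rewrite has to pass null through rather than wrap it, because new ByteArrayInputStream(null) throws NullPointerException in the constructor.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical helper showing the null case a streaming rewrite must preserve.
public class NullParamDemo {
    static InputStream toStream(byte[] paramBytes) {
        // new ByteArrayInputStream(null) would throw NullPointerException
        // immediately; map a null byte array to a null stream instead.
        return (paramBytes == null) ? null : new ByteArrayInputStream(paramBytes);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(toStream(null));                  // prints null
        System.out.println(toStream(new byte[]{1}).read());  // prints 1
    }
}
```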


 Trial 2 for DERBY-550, improve use of Engine from NetworkServer and reduce 
 memory usage
 ---

 Key: DERBY-1535
 URL: http://issues.apache.org/jira/browse/DERBY-1535
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Reporter: Tomohito Nakayama
 Assigned To: Tomohito Nakayama
 Attachments: DERBY-1513_1535.patch, DERBY-1513_1535_2.patch, 
 DERBY-1535.patch, serverMemoryUsage.xls, serverMemoryUsage_1513_1535.xls


 Through DERBY-1513, Trial 1 for DERBY-550, 
 it was suggested that NetworkServer seems to use Engine inefficiently and use 
 too much of memory.
 This task try to improve the use of Engine from NetworkServer and try to 
 reduce memory usage.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (DERBY-1296) Setting property derby.system.bootAll causes an Exception

2006-07-24 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1296?page=comments#action_12423000 ] 

Andreas Korneliussen commented on DERBY-1296:
-

Thanks for making the patch for this critical issue. It seems there is a 
conflict when applying it, due to changes in BaseJDBCTestCase. Could you please 
resolve the conflict?
I do also suggest renaming the test case class from bootAllTest to BootAllTest.


 Setting property derby.system.bootAll causes an Exception
 -

 Key: DERBY-1296
 URL: http://issues.apache.org/jira/browse/DERBY-1296
 Project: Derby
  Issue Type: Bug
  Components: Services
Affects Versions: 10.1.3.1
 Environment: Windows XP
Reporter: David Heath
 Assigned To: Fernanda Pizzorno
Priority: Critical
 Fix For: 10.2.0.0

 Attachments: derby-1296.diff, derby-1296.stat


 After creating 3 databases under c:\databases\sample - I wanted to get a list 
 of available databases, I followed the example in the DriverPropertyInfo 
 array example in the developer guide and used the following routine:
 ...
   private static void test2() {
 String driverName = "org.apache.derby.jdbc.EmbeddedDriver";
 String url = "jdbc:derby:";
 Properties p = System.getProperties();
 p.put("derby.system.home", "C:\\databases\\sample");
 p.put("derby.system.bootAll", "true");
 try {
   Class.forName(driverName);
   Driver driver = DriverManager.getDriver(url);
   Properties info = new Properties();
   DriverPropertyInfo[] attributes = driver.getPropertyInfo(url, info);
   for (DriverPropertyInfo attribute : attributes) {
     System.out.print(attribute.name);
     System.out.print(" : ");
     if (attribute.choices != null) {
       System.out.print(Arrays.toString(attribute.choices));
     }
     System.out.print(" : ");
     System.out.println(attribute.value);
   }
 }
 catch(Exception exp) {
   exp.printStackTrace();
 }
 try {
   DriverManager.getConnection("jdbc:derby:;shutdown=true");
 }
 }
 catch(Exception exp) {
 }
   }
 When run, the following exception occurred:
 Exception in thread "main" java.lang.ExceptionInInitializerError
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Unknown Source)
 at Test.test2(Test.java:20)
 at Test.main(Test.java:8)
 Caused by: java.lang.NullPointerException
 at org.apache.derby.impl.services.monitor.BaseMonitor.bootProviderServices(Unknown Source)
 at org.apache.derby.impl.services.monitor.BaseMonitor.bootPersistentServices(Unknown Source)
 at org.apache.derby.impl.services.monitor.BaseMonitor.runWithState(Unknown Source)
 at org.apache.derby.impl.services.monitor.FileMonitor.<init>(Unknown Source)
 at org.apache.derby.iapi.services.monitor.Monitor.startMonitor(Unknown Source)
 at org.apache.derby.iapi.jdbc.JDBCBoot.boot(Unknown Source)
 at org.apache.derby.jdbc.EmbeddedDriver.boot(Unknown Source)
 at org.apache.derby.jdbc.EmbeddedDriver.<clinit>(Unknown Source)
 ... 4 more
 If I comment out:
 // p.put("derby.system.bootAll", "true");
 The program runs, but no databases are listed.
 The output from java org.apache.derby.tools.sysinfo:
 -- Java Information --
 Java Version:1.5.0_05
 Java Vendor: Sun Microsystems Inc.
 Java home:   C:\Program Files\Java\jre1.5.0_05
 Java classpath:  C:\tools\derby\db-derby-10.1.2.1-bin\lib\derby.jar;C:\tools\derby\db-derby-10.1.2.1-bin\lib\derbytools.jar;;C:\tools\Java\jdk1.5.0_05\lib\tools.jar;C:\tools\log4j\logging-log4j-1.2.12\dist\lib\log4j-1.2.12.jar;C:\dev_deploy\plugins\com.x4m.util_1.0.0.jar;C:\dev_deploy\plugins\com.x4m.uomcrs_1.0.0.jar;C:\dev_deploy\plugins\com.x4m.database_1.0.0.jar;C:\dev_deploy\plugins\com.x4m.feature_1.0.0.jar;C:\dev_deploy\plugins\org.eclipse.core.runtime_3.1.0.jar;C:\dev_deploy\plugins\org.eclipse.osgi_3.1.0.jar;C:\dev_deploy\plugins\com.x4m.database_1.0.0.jar;C:\david\novice\syncservices\build\class
 OS name: Windows XP
 OS architecture: x86
 OS version:  5.1
 Java user name:  David
 Java user home:  C:\Documents and Settings\David
 Java user dir:   C:\david\novice\derby
 java.specification.name: Java Platform API Specification
 java.specification.version: 1.5
 - Derby Information 
 JRE - JDBC: J2SE 5.0 - JDBC 3.0
 [C:\tools\derby\db-derby-10.1.2.1-bin\lib\derby.jar] 10.1.2.1 - (330608)
 [C:\tools\derby\db-derby-10.1.2.1-bin\lib\derbytools.jar] 10.1.2.1 - (330608)
 --
 - Locale Information -
 --
 I have read most of the documentation and can find no other 

[jira] Created: (DERBY-1558) include ConcurrencyTest into a suite

2006-07-21 Thread Andreas Korneliussen (JIRA)
include ConcurrencyTest into a suite


 Key: DERBY-1558
 URL: http://issues.apache.org/jira/browse/DERBY-1558
 Project: Derby
  Issue Type: Test
  Components: Test
Affects Versions: 10.2.0.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen
Priority: Minor


The test jdbcapi/ConcurrencyTest.junit is currently not included in any suites. 
Also, a number of test cases in that test which depend on SUR (scrollable 
updatable resultsets) are not enabled; these can now be enabled. The test 
should also set some properties to reduce the lock timeout, so that it runs faster.
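The lock-timeout tuning mentioned above could use Derby's standard lock properties (values are in seconds). The numbers below are illustrative only, not the values actually chosen for the test:

```
# Illustrative derby.properties fragment: make lock conflicts fail fast
# instead of waiting the default 60s (wait) / 20s (deadlock detection).
derby.locks.waitTimeout=4
derby.locks.deadlockTimeout=2
```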

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (DERBY-1559) when receiving a single EXTDTA object representing a BLOB, the server do not need to read it into memory before inserting it into the DB

2006-07-21 Thread Andreas Korneliussen (JIRA)
when receiving a single EXTDTA object representing a BLOB, the server do not 
need to read it into memory before inserting it into the DB


 Key: DERBY-1559
 URL: http://issues.apache.org/jira/browse/DERBY-1559
 Project: Derby
  Issue Type: Sub-task
  Components: Network Server
Affects Versions: 10.2.0.0
Reporter: Andreas Korneliussen
 Assigned To: Andreas Korneliussen


When streaming a BLOB from the Network Client to the Network Server, the 
Network Server currently reads all the data from the stream and puts it into a 
byte array.
The blob data is then inserted into the DB by using
PreparedStatement.setBytes(..)

and later

PreparedStatement.execute()

To avoid an OutOfMemoryError when the size of the Blob is larger than the total 
memory in the VM, we could make the network server create a stream which reads 
data when doing PreparedStatement.execute(). The DB will then stream the BLOB 
data directly from the network input stream to disk.

I intend to make a patch which does this if there is only one EXTDTA object 
(BLOB) sent from the client in the statement, as it will simplify the 
implementation. Later this can be improved further to include CLOBs, and 
possibly the cases where there are multiple EXTDTA objects.

--
CLOBs are more complex, as character encoding is involved. This can be 
achieved by using an InputStreamReader and 
PreparedStatement.setCharacterStream(..). However, the size of the stream is not 
necessarily the same as the size of the raw binary data, and to do this for 
CLOBs, I would need the embedded prepared statements to support the new 
setCharacterStream() overloads in JDBC4 (which do not include a length attribute).
--
Multiple EXTDTA objects are also more complex, since one would need to have 
fully read the previous object before it is possible to read the next.
--
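To illustrate why streaming helps, here is a self-contained sketch (not Derby's code) of an InputStream that produces its payload on demand. A server handing such a stream to PreparedStatement.setBinaryStream(i, in, length), instead of materializing a byte[] for setBytes, can move an arbitrarily large BLOB while keeping only a small fixed buffer live.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

// Hypothetical stand-in for the network input stream: bytes are produced
// on demand, so the full payload never exists as a byte[] in the heap.
class GeneratedBlobStream extends InputStream {
    private long remaining;
    GeneratedBlobStream(long length) { this.remaining = length; }

    @Override
    public int read() {
        if (remaining == 0) return -1; // end of stream
        remaining--;
        return 0x42;                   // arbitrary payload byte
    }

    @Override
    public int read(byte[] b, int off, int len) {
        if (remaining == 0) return -1;
        int n = (int) Math.min(len, remaining);
        Arrays.fill(b, off, off + n, (byte) 0x42);
        remaining -= n;
        return n;
    }
}

public class StreamingDemo {
    public static void main(String[] args) throws IOException {
        // A real server would pass this stream to
        // ps.setBinaryStream(1, in, length) rather than consume it here.
        InputStream in = new GeneratedBlobStream(1_000_000);
        byte[] buf = new byte[8192]; // only this small buffer is ever live
        long total = 0;
        int n;
        while ((n = in.read(buf)) > 0) total += n;
        System.out.println(total); // prints 1000000
    }
}
```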


-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (DERBY-1564) wisconsin.sql test failed in DerbyNet framework, VM for network server got OutOfMemoryError

2006-07-21 Thread Andreas Korneliussen (JIRA)
wisconsin.sql test failed in DerbyNet framework, VM for network server got 
OutOfMemoryError
---

 Key: DERBY-1564
 URL: http://issues.apache.org/jira/browse/DERBY-1564
 Project: Derby
  Issue Type: Bug
  Components: Network Server, Test
Affects Versions: 10.2.0.0
 Environment: Solaris Sparc, Java 6, running in a Solaris Zone. 
DerbyNet framework.
Reporter: Andreas Korneliussen


The wisconsin test failed when running it in DerbyNet framework. No output in 
the outfile. The DerbyNet.err file has one message:

Exception in thread "Thread-2" java.lang.OutOfMemoryError: Java heap space

The test was run against 10.2.0.4 snapshot release.






[jira] Updated: (DERBY-1564) wisconsin.sql test failed in DerbyNet framework, VM for network server got OutOfMemoryError

2006-07-21 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1564?page=all ]

Andreas Korneliussen updated DERBY-1564:


Attachment: wisconsin.tar.gz

Attaching log from test (wisconsin.tar.gz)

 wisconsin.sql test failed in DerbyNet framework, VM for network server got 
 OutOfMemoryError
 ---

 Key: DERBY-1564
 URL: http://issues.apache.org/jira/browse/DERBY-1564
 Project: Derby
  Issue Type: Bug
  Components: Network Server, Test
Affects Versions: 10.2.0.0
 Environment: Solaris Sparc, Java 6, running in a Solaris Zone. 
 DerbyNet framework.
Reporter: Andreas Korneliussen
 Attachments: wisconsin.tar.gz


 The wisconsin test failed when running it in DerbyNet framework. No output in 
 the outfile. The DerbyNet.err file has one message:
 Exception in thread "Thread-2" java.lang.OutOfMemoryError: Java heap space
 The test was run against 10.2.0.4 snapshot release.





[jira] Commented: (DERBY-1564) wisconsin.sql test failed in DerbyNet framework, VM for network server got OutOfMemoryError

2006-07-21 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1564?page=comments#action_12422684 ] 

Andreas Korneliussen commented on DERBY-1564:
-

Do note that there was no output in the outfile, and the diff file is 
therefore the entire master file.

Below is some output from the report file, which indicates the test ran for 
more than 1 hour before failing:
* Diff file derbyall/derbynetmats/DerbyNet/derbynetmats/wisconsin.diff
*** Start: wisconsin jdk1.6.0-rc DerbyNet derbynetmats:derbynetmats 2006-07-19 
16:46:20 ***
0 del
 ij -- This test is an adaptation of the Wisconsin benchmark, as documented in
 - The Benchmark Handbook, Second Edition (edited by Jim Gray).  The 
structure

... SNIP

 0 rows inserted/updated/deleted
 ij
Test Failed.
*** End:   wisconsin jdk1.6.0-rc DerbyNet derbynetmats:derbynetmats 2006-07-19 
17:49:18 ***


 wisconsin.sql test failed in DerbyNet framework, VM for network server got 
 OutOfMemoryError
 ---

 Key: DERBY-1564
 URL: http://issues.apache.org/jira/browse/DERBY-1564
 Project: Derby
  Issue Type: Bug
  Components: Network Server, Test
Affects Versions: 10.2.0.0
 Environment: Solaris Sparc, Java 6, running in a Solaris Zone. 
 DerbyNet framework.
Reporter: Andreas Korneliussen
 Attachments: wisconsin.tar.gz


 The wisconsin test failed when running it in DerbyNet framework. No output in 
 the outfile. The DerbyNet.err file has one message:
 Exception in thread "Thread-2" java.lang.OutOfMemoryError: Java heap space
 The test was run against 10.2.0.4 snapshot release.





[jira] Commented: (DERBY-1511) SELECT clause without a WHERE, causes an Exception when extracting a Blob from a database

2006-07-20 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1511?page=comments#action_12422368 ] 

Andreas Korneliussen commented on DERBY-1511:
-

Yes.

If you modify the code and replace readTable1(..) with con.commit(), you will 
see this issue with autocommit off:

The problem seems to be related to BulkTableScanResultSet. On the first call to 
next() it builds an array of rows (rowArray), and feeds the result set with 
rows from that array. However, after a commit, the Conglomerate is closed. The 
BulkTableScanResultSet will continue to feed rows from the rowArray, and the 
BLOB columns in these rows depend on reading data from the Conglomerate (which 
is closed).

When a WHERE clause is added, the engine will not use BulkTableScanResultSet; 
the same is true if the statement has concurrency CONCUR_UPDATABLE.
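In application code the symptom can be avoided by reading the BLOB's stream to completion before anything commits the transaction, since the buffered row only keeps a reference into the container that the commit closes. A minimal sketch of that workaround pattern (the class name is illustrative, not Derby code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

class EagerBlobRead {
    // Fully materialize a stream-backed column value while its source is
    // still open; afterwards the bytes no longer depend on the source.
    public static byte[] materialize(InputStream column) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int read;
        while ((read = column.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    }
}
```

Calling this on the result of rs.getBinaryStream(..) before any intermediate commit copies the BLOB bytes out of the conglomerate while it is still open.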



 SELECT clause without a WHERE, causes an Exception when extracting a Blob 
 from a database
 -

 Key: DERBY-1511
 URL: http://issues.apache.org/jira/browse/DERBY-1511
 Project: Derby
  Issue Type: Bug
  Components: Miscellaneous
Affects Versions: 10.1.2.1
 Environment: Windows XP
Reporter: David Heath
Priority: Minor

 An exception occurs when extracting a Blob from a database. 
 The following code will ALWAYS fail with the Exception:
 java.io.IOException: ERROR 40XD0: Container has been closed
 at org.apache.derby.impl.store.raw.data.OverflowInputStream.fillByteHolder(Unknown Source)
 at org.apache.derby.impl.store.raw.data.BufferedByteHolderInputStream.read(Unknown Source)
 at java.io.DataInputStream.read(Unknown Source)
 at java.io.FilterInputStream.read(Unknown Source)
 at java.io.ObjectInputStream$PeekInputStream.read(Unknown Source)
 at java.io.ObjectInputStream$PeekInputStream.readFully(Unknown Source)
 at java.io.ObjectInputStream$BlockDataInputStream.readDoubles(Unknown Source)
 at java.io.ObjectInputStream.readArray(Unknown Source)
 at java.io.ObjectInputStream.readObject0(Unknown Source)
 at java.io.ObjectInputStream.readObject(Unknown Source)
 at BlobTest.readRows(BlobTest.java:82)
 at BlobTest.main(BlobTest.java:24)
 CODE:
 import java.io.*;
 import java.sql.*;
 import java.util.*;
 public class BlobTest
 {
   private static final String TABLE1 = "CREATE TABLE TABLE_1 ( "
  + "ID INTEGER NOT NULL, "
  + "COL_2 INTEGER NOT NULL, "
  + "PRIMARY KEY (ID) )";
   private static final String TABLE2 = "CREATE TABLE TABLE_2 ( "
  + "ID INTEGER NOT NULL, "
  + "COL_BLOB BLOB, "
  + "PRIMARY KEY (ID) )";
   public static void main(String... args) {
 try {
   createDBandTables();
   Connection con = getConnection();
   addRows(con, 1, 1);
   addRows(con, 1, 2);
   readRows(con, 1);
   con.close();
 }
 catch(Exception exp) {
   exp.printStackTrace();
 }
   }
   private static void addRows(Connection con, int size, int id) 
  throws Exception
   {
 String sql = "INSERT INTO TABLE_1 VALUES(?, ?)";
 PreparedStatement pstmt = con.prepareStatement(sql);
 pstmt.setInt(1, id);
 pstmt.setInt(2, 2);
 pstmt.executeUpdate();
 pstmt.close();
 double[] array = new double[size];
 array[size-1] = 1.23;
 sql = "INSERT INTO TABLE_2 VALUES(?, ?)";
 pstmt = con.prepareStatement(sql);
 pstmt.setInt(1, id);
 ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
 ObjectOutputStream objStream = new ObjectOutputStream(byteStream);
 objStream.writeObject(array); // Convert object to byte stream 
 objStream.flush();
 objStream.close();
 byte[] bytes = byteStream.toByteArray();
 ByteArrayInputStream inStream = new ByteArrayInputStream(bytes);
 pstmt.setBinaryStream(2, inStream, bytes.length);
 pstmt.executeUpdate();
 pstmt.close();
   }
   private static void readRows(Connection con, int id) throws Exception
   {
 String sql = "SELECT * FROM TABLE_2";
 //String sql = "SELECT * FROM TABLE_2 WHERE ID > 0";
 Statement stmt = con.createStatement();
 ResultSet rs = stmt.executeQuery(sql);
 while (rs.next()) {
   rs.getInt(1);
   InputStream stream = rs.getBinaryStream(2);
   ObjectInputStream objStream = new ObjectInputStream(stream);
   Object obj = objStream.readObject();
   double[] array = (double[]) obj;
   System.out.println(array.length);
   readTable1(con, id);
 }
 rs.close();
 stmt.close();
   

[jira] Updated: (DERBY-1351) lang/forupdate.sql fails with derbyclient in the 10.1 branch

2006-07-20 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1351?page=all ]

Andreas Korneliussen updated DERBY-1351:


Derby Info:   (was: [Patch Available])

svn merge -r420534:420535 https://svn.apache.org/repos/asf/db/derby/code/trunk
A
java/testing/org/apache/derbyTesting/functionTests/tests/lang/forupdate_sed.properties
C
java/testing/org/apache/derbyTesting/functionTests/master/DerbyNet/forupdate.out
C
java/testing/org/apache/derbyTesting/functionTests/master/DerbyNetClient/forupdate.out
[EMAIL PROTECTED]:/4clean/10.1 svn diff 
java/testing/org/apache/derbyTesting/functionTests/master/DerbyNet/forupdate.out

Since this issue gives a merge conflict, I am removing the patch-available flag.

 lang/forupdate.sql fails with derbyclient in the 10.1 branch
 

 Key: DERBY-1351
 URL: http://issues.apache.org/jira/browse/DERBY-1351
 Project: Derby
  Issue Type: Bug
  Components: Test
Affects Versions: 10.1.2.5, 10.2.0.0
 Environment: Windows 2000 and Sun jdk 15
Reporter: Rajesh Kartha
 Assigned To: Fernanda Pizzorno
Priority: Minor
 Fix For: 10.1.4.0, 10.2.0.0

 Attachments: derby-1351.diff, derby-1351.stat


 Derby 10.1 branch - 10.1.2.5 - (409283)
 *** Start: forupdate jdk1.5.0_02 DerbyNetClient derbynetmats:derbynetmats 
 2006-05-24 21:24:26 ***
 333 del
  SQL_CURLH000C3
 333a333
  SQL_CURLH000C1
 393 del
  SQL_CURLH000C3
 393a393
  SQL_CURLH000C1
 Test Failed.
 *** End:   forupdate jdk1.5.0_02 DerbyNetClient derbynetmats:derbynetmats 
 2006-05-24 21:24:41 ***





[jira] Resolved: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

2006-07-18 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-802?page=all ]

Andreas Korneliussen resolved DERBY-802.


Fix Version/s: 10.2.0.0
   Resolution: Fixed
   Derby Info:   (was: [Patch Available])

Thanks for reviewing.

Committed revision 423034.


 OutofMemory Error when reading large blob when statement type is 
 ResultSet.TYPE_SCROLL_INSENSITIVE
 --

 Key: DERBY-802
 URL: http://issues.apache.org/jira/browse/DERBY-802
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 
 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
 Environment: all
Reporter: Sunitha Kambhampati
 Assigned To: Andreas Korneliussen
Priority: Minor
 Fix For: 10.2.0.0

 Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff, 
 derby-802v3.diff, derby-802v3.stat


 Grégoire Dubois on the list reported this problem.  From his mail: the 
 reproduction is attached below. 
 When statement type is set to ResultSet.TYPE_SCROLL_INSENSITIVE, outofmemory 
 exception is thrown when reading large blobs. 
 import java.sql.*;
 import java.io.*;
 /**
 *
 * @author greg
 */
 public class derby_filewrite_fileread {

 private static File file = new File("/mnt/BigDisk/Clips/BabyMamaDrama-JShin.wmv");
 private static File destinationFile = new File("/home/greg/DerbyDatabase/"+file.getName());

 /** Creates a new instance of derby_filewrite_fileread */
 public derby_filewrite_fileread() {

 }

 public static void main(String args[]) {
 try {
 Class.forName("org.apache.derby.jdbc.EmbeddedDriver").newInstance();
 Connection connection = DriverManager.getConnection("jdbc:derby:/home/greg/DerbyDatabase/BigFileTestDB;create=true", "APP", "");
 connection.setAutoCommit(false);

 Statement statement = connection.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
 ResultSet result = statement.executeQuery("SELECT TABLENAME FROM SYS.SYSTABLES");

 // Create table if it doesn't already exist.
 boolean exist=false;
 while ( result.next() ) {
 if ("db_file".equalsIgnoreCase(result.getString(1)))
 exist=true;
 }
 if ( !exist ) {
 System.out.println("Create table db_file.");
 statement.execute("CREATE TABLE db_file ("+
 "name  VARCHAR(40),"+
 "file  BLOB(2G) NOT NULL)");
 connection.commit();
 }

 // Read file from disk, write on DB.
 System.out.println("1 - Read file from disk, write on DB.");
 PreparedStatement preparedStatement=connection.prepareStatement("INSERT INTO db_file(name,file) VALUES (?,?)");
 FileInputStream fileInputStream = new FileInputStream(file);
 preparedStatement.setString(1, file.getName());
 preparedStatement.setBinaryStream(2, fileInputStream, (int)file.length());
 preparedStatement.execute();
 connection.commit();
 System.out.println("2 - END OF Read file from disk, write on DB.");


 // Read file from DB, and write on disk.
 System.out.println("3 - Read file from DB, and write on disk.");
 result = statement.executeQuery("SELECT file FROM db_file WHERE name='"+file.getName()+"'");
 byte[] buffer = new byte [1024];
 result.next();
 BufferedInputStream inputStream=new BufferedInputStream(result.getBinaryStream(1),1024);
 FileOutputStream outputStream = new FileOutputStream(destinationFile);
 int readBytes = 0;
 while (readBytes!=-1) {
 readBytes=inputStream.read(buffer,0,buffer.length);
 if ( readBytes != -1 )
 outputStream.write(buffer, 0, readBytes);
 }
 inputStream.close();
 outputStream.close();
 System.out.println("4 - END OF Read file from DB, and write on disk.");
 }
 catch (Exception e) {
 e.printStackTrace(System.err);
 }
 }
 }
 It returns
 1 - Read file from disk, write on DB.
 2 - END OF Read file from disk, write on DB.
 3 - Read file from DB, and write on disk.
 java.lang.OutOfMemoryError
 if the file is ~10MB or more


[jira] Commented: (DERBY-694) Statement exceptions cause all the connection's result sets to be closed with the client driver

2006-07-18 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-694?page=comments#action_12421848 ] 

Andreas Korneliussen commented on DERBY-694:


I think a test was disabled in DERBY-506. According to DERBY-705, it can be 
enabled once this issue has been fixed. I do not know if that test is for 
testing this specific issue, or if it just failed due to this bug.

 Statement exceptions cause all the connection's result sets to be closed with 
 the client driver
 ---

 Key: DERBY-694
 URL: http://issues.apache.org/jira/browse/DERBY-694
 Project: Derby
  Issue Type: Bug
  Components: Network Client
Affects Versions: 10.1.1.1
Reporter: Oyvind Bakksjo
 Assigned To: V.Narayanan
Priority: Minor
 Attachments: DERBY-694.html, DERBY-694_upload_v1.diff, 
 DERBY-694_upload_v1.stat, StatementRollbackTest.java


 Scenario:
 Autocommit off. Have two prepared statements, calling executeQuery() on both, 
 giving me two result sets. Can fetch data from both with next(). If one 
 statement gets an exception (say, caused by a division by zero), not only 
 this statement's result set is closed, but also the other open resultset. 
 This happens with the client driver, whereas in embedded mode, the other 
 result set is unaffected by the exception in the first result set (as it 
 should be).





[jira] Updated: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

2006-07-17 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-802?page=all ]

Andreas Korneliussen updated DERBY-802:
---

Attachment: derby-802v3.diff
derby-802v3.stat

Attached is a patch (derby-802v3.diff and derby-802v3.stat) which uses 
project mappings calculated from ProjectRestrictResultSet. Four new test cases 
with projections have been added to BLOBTest.junit.

 OutofMemory Error when reading large blob when statement type is 
 ResultSet.TYPE_SCROLL_INSENSITIVE
 --

 Key: DERBY-802
 URL: http://issues.apache.org/jira/browse/DERBY-802
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 
 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
 Environment: all
Reporter: Sunitha Kambhampati
 Assigned To: Andreas Korneliussen
Priority: Minor
 Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff, 
 derby-802v3.diff, derby-802v3.stat


 Grégoire Dubois on the list reported this problem.  From his mail: the 
 reproduction is attached below. 
 When statement type is set to ResultSet.TYPE_SCROLL_INSENSITIVE, outofmemory 
 exception is thrown when reading large blobs. 


[jira] Commented: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

2006-07-14 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-802?page=comments#action_12421136 ] 

Andreas Korneliussen commented on DERBY-802:


Thank you very much for the review comments. I agree that the undo-projection 
code can be improved (something I tried to do in the v2 diff). Your suggested 
approach sounds promising; I will try it out and add more tests to BLOBTest.

 OutofMemory Error when reading large blob when statement type is 
 ResultSet.TYPE_SCROLL_INSENSITIVE
 --

 Key: DERBY-802
 URL: http://issues.apache.org/jira/browse/DERBY-802
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 
 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
 Environment: all
Reporter: Sunitha Kambhampati
 Assigned To: Andreas Korneliussen
Priority: Minor
 Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff


 Grégoire Dubois on the list reported this problem.  From his mail: the 
 reproduction is attached below. 
 When statement type is set to ResultSet.TYPE_SCROLL_INSENSITIVE, outofmemory 
 exception is thrown when reading large blobs. 
 import java.sql.*;
 import java.io.*;
 /**
 *
 * @author greg
 */
 public class derby_filewrite_fileread {

 private static File file = new 
 File(/mnt/BigDisk/Clips/BabyMamaDrama-JShin.wmv);
 private static File destinationFile = new 
 File(/home/greg/DerbyDatabase/+file.getName());

 /** Creates a new instance of derby_filewrite_fileread */
 public derby_filewrite_fileread() {   

 }

 public static void main(String args[]) {
 try {
 
 Class.forName(org.apache.derby.jdbc.EmbeddedDriver).newInstance();
 Connection connection = DriverManager.getConnection 
 (jdbc:derby:/home/greg/DerbyDatabase/BigFileTestDB;create=true, APP, );
 connection.setAutoCommit(false);

 Statement statement = 
 connection.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, 
 ResultSet.CONCUR_READ_ONLY);
 ResultSet result = statement.executeQuery(SELECT TABLENAME FROM 
 SYS.SYSTABLES);

 // Create table if it doesn't already exists.
 boolean exist=false;
 while ( result.next() ) {
 if (db_file.equalsIgnoreCase(result.getString(1)))
 exist=true;
 }
 if ( !exist ) {
 System.out.println(Create table db_file.);
 statement.execute(CREATE TABLE db_file (+
 name  VARCHAR(40),+
 file  BLOB(2G) NOT 
 NULL));
 connection.commit();
 }

 // Read file from disk, write on DB.
 System.out.println(1 - Read file from disk, write on DB.);
 PreparedStatement 
 preparedStatement=connection.prepareStatement(INSERT INTO db_file(name,file) 
 VALUES (?,?));
 FileInputStream fileInputStream = new FileInputStream(file);
 preparedStatement.setString(1, file.getName());
 preparedStatement.setBinaryStream(2, fileInputStream, 
 (int)file.length());   
 preparedStatement.execute();
 connection.commit();
 System.out.println(2 - END OF Read file from disk, write on 
 DB.);


 // Read file from DB, and write on disk.
 System.out.println(3 - Read file from DB, and write on disk.);
 result = statement.executeQuery(SELECT file FROM db_file WHERE 
 name='+file.getName()+');
 byte[] buffer = new byte [1024];
 result.next();
 BufferedInputStream inputStream=new 
 BufferedInputStream(result.getBinaryStream(1),1024);
 FileOutputStream outputStream = new 
 FileOutputStream(destinationFile);
 int readBytes = 0;
 while (readBytes!=-1) {
 readBytes=inputStream.read(buffer,0,buffer.length);
 if ( readBytes != -1 )
 outputStream.write(buffer, 0, readBytes);
 } 
 inputStream.close();
 outputStream.close();
 System.out.println(4 - END OF Read file from DB, and write on 
 disk.);
 }
 catch (Exception e) {
 e.printStackTrace(System.err);
 }
 }
 }
 It returns
 1 - Read file from disk, write on DB.
 2 - END OF Read file from disk, write on DB.
 3 - Read file from DB, and write on disk.
 java.lang.OutOfMemoryError
 if the file is ~10MB or more


[jira] Commented: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

2006-07-13 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-802?page=comments#action_12420819 ] 

Andreas Korneliussen commented on DERBY-802:


Withdrawing derby-802v2.diff, since it makes SURQueryMixTest.junit fail.
The first patch has been run with no failures in derbyall.
I will commit tomorrow unless I receive any review comments.

 OutofMemory Error when reading large blob when statement type is 
 ResultSet.TYPE_SCROLL_INSENSITIVE
 --

  Key: DERBY-802
  URL: http://issues.apache.org/jira/browse/DERBY-802
  Project: Derby
 Type: Bug

   Components: JDBC
 Versions: 10.0.2.0, 10.0.2.1, 10.0.2.2, 10.1.1.0, 10.2.0.0, 10.1.2.0, 
 10.1.1.1, 10.1.1.2, 10.1.2.1, 10.1.3.0, 10.1.2.2
  Environment: all
 Reporter: Sunitha Kambhampati
 Assignee: Andreas Korneliussen
 Priority: Minor
  Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff

 Grégoire Dubois on the list reported this problem.  From his mail: the 
 reproduction is attached below. 
 When statement type is set to ResultSet.TYPE_SCROLL_INSENSITIVE, outofmemory 
 exception is thrown when reading large blobs. 


[jira] Reopened: (DERBY-1497) assert failure in MessageUtil, because exception thrown with too many parameters when handling OutOfMemoryError

2006-07-13 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1497?page=all ]
 
Andreas Korneliussen reopened DERBY-1497:
-


Will update the fix so that it uses a constructor of DisconnectException which 
takes a Throwable.
After applying the new fix, the stack trace for the exception contains the 
following information when running the BlobOutOfMem repro in DERBY-550 on Java 
6 (Mustang):

java.sql.SQLException: Attempt to fully materialize lob data that is too large 
for the JVM.  The connection has been terminated.
at 
org.apache.derby.client.am.SQLExceptionFactory40.getSQLException(SQLExceptionFactory40.java:94)
at 
org.apache.derby.client.am.SqlException.getSQLException(SqlException.java:344)
at org.apache.derby.client.am.ResultSet.next(ResultSet.java:278)
at derbytest.BlobOutOfMem.main(BlobOutOfMem.java:104)
Caused by: org.apache.derby.client.am.DisconnectException: Attempt to fully 
materialize lob data that is too large for the JVM.  The connection has been 
terminated.
at 
org.apache.derby.client.net.NetStatementReply.copyEXTDTA(NetStatementReply.java:1486)
at 
org.apache.derby.client.net.NetResultSetReply.parseCNTQRYreply(NetResultSetReply.java:139)
at 
org.apache.derby.client.net.NetResultSetReply.readFetch(NetResultSetReply.java:41)
at 
org.apache.derby.client.net.ResultSetReply.readFetch(ResultSetReply.java:40)
at 
org.apache.derby.client.net.NetResultSet.readFetch_(NetResultSet.java:205)
at org.apache.derby.client.am.ResultSet.flowFetch(ResultSet.java:4160)
at 
org.apache.derby.client.net.NetCursor.getMoreData_(NetCursor.java:1182)
at org.apache.derby.client.am.Cursor.stepNext(Cursor.java:176)
at org.apache.derby.client.am.Cursor.next(Cursor.java:195)
at org.apache.derby.client.am.ResultSet.nextX(ResultSet.java:299)
at org.apache.derby.client.am.ResultSet.next(ResultSet.java:269)
... 1 more
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2786)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
at org.apache.derby.client.net.Reply.getData(Reply.java:786)
at 
org.apache.derby.client.net.NetStatementReply.copyEXTDTA(NetStatementReply.java:1478)
... 11 more
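The fix amounts to attaching the OutOfMemoryError as the *cause* of the higher-level exception, rather than passing it as an argument to a message that takes no parameters. A minimal sketch of that pattern, using plain RuntimeException rather than the actual DisconnectException API:

```java
class ChainedExceptionDemo {
    // Wrap a low-level error as the cause of a higher-level exception,
    // instead of passing it as a message argument (which the message
    // formatter rejects when the message id takes zero parameters).
    public static RuntimeException wrap(String message, Throwable cause) {
        return new RuntimeException(message, cause);
    }
}
```

The cause then shows up as the "Caused by:" entries in the stack trace above, while the message text itself stays parameter-free.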


 assert failure in MessageUtil, because exception thrown with too many 
 parameters when handling OutOfMemoryError
 ---

  Key: DERBY-1497
  URL: http://issues.apache.org/jira/browse/DERBY-1497
  Project: Derby
 Type: Sub-task

   Components: Network Client
 Versions: 10.2.0.0
 Reporter: Andreas Korneliussen
 Assignee: Andreas Korneliussen
 Priority: Trivial
  Fix For: 10.2.0.0
  Attachments: DERBY-1497.diff

 If the VM throws an OutOfMemoryError, which is caught in
 NetStatementReply.copyEXTDTA:
 protected void copyEXTDTA(NetCursor netCursor) throws DisconnectException 
 {
 try {
 parseLengthAndMatchCodePoint(CodePoint.EXTDTA);
 byte[] data = null;
 if (longValueForDecryption_ == null) {
 data = (getData(null)).toByteArray();
 } else {
 data = longValueForDecryption_;
 dssLength_ = 0;
 longValueForDecryption_ = null;
 }
 netCursor.extdtaData_.add(data);
 } catch (java.lang.OutOfMemoryError e) { // <--- outofmemory
 agent_.accumulateChainBreakingReadExceptionAndThrow(new 
 DisconnectException(agent_,
 new ClientMessageId(SQLState.NET_LOB_DATA_TOO_LARGE_FOR_JVM),
 e));  // <-- message does not take parameters, causing assert failure
 }
 } 
 Instead of getting the message: java.sql.SQLException: Attempt to fully 
 materialize lob data that is too large for the JVM.  The connection has been 
 terminated.
 I am getting an assert: 
 Exception in thread "main" 
 org.apache.derby.shared.common.sanity.AssertFailure: ASSERT FAILED Number of 
 parameters expected for message id 58009.C.6 (0) does not match number of 
 arguments received (1)
 at 
 org.apache.derby.shared.common.sanity.SanityManager.ASSERT(SanityManager.java:119)
  




[jira] Resolved: (DERBY-1497) assert failure in MessageUtil, because exception thrown with too many parameters when handling OutOfMemoryError

2006-07-13 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1497?page=all ]
 
Andreas Korneliussen resolved DERBY-1497:
-

Resolution: Fixed

Committed revision 421566.






[jira] Closed: (DERBY-1497) assert failure in MessageUtil, because exception thrown with too many parameters when handling OutOfMemoryError

2006-07-13 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1497?page=all ]
 
Andreas Korneliussen closed DERBY-1497:
---






[jira] Commented: (DERBY-1486) ERROR 40XD0 - When extracting Blob from a database

2006-07-12 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1486?page=comments#action_12420571 ] 

Andreas Korneliussen commented on DERBY-1486:
-

When executing a statement in autocommit mode, the transaction will be 
committed.

You are using forward-only, holdable result sets, which means that the result 
set will not be closed on commit. However, since it is forward-only, the 
current position of the cursor is no longer valid, and you should call next() 
to get to a valid cursor position.

I would therefore expect the first example to fail, but not the second.
First:

if (rs.next()) {
  rs.getInt(1);

  readTable1(con, id);   // causes transaction commit

  InputStream stream = rs.getBinaryStream(2);
  ObjectInputStream objStream = new ObjectInputStream(stream);
  Object obj = objStream.readObject();
  double[] array = (double[]) obj;
  System.out.println(array.length);
}

Second:

while (rs.next()) {
  rs.getInt(1);

  InputStream stream = rs.getBinaryStream(2);
  ObjectInputStream objStream = new ObjectInputStream(stream);
  Object obj = objStream.readObject();
  double[] array = (double[]) obj;
  System.out.println(array.length);

  readTable1(con, id);   // causes transaction commit
}

I have reproduced the problem. I found that if I modified the SELECT statement 
from "SELECT * FROM TABLE_2" to "SELECT * FROM TABLE_2 WHERE ID > 0", I was 
able to run the second example without failures.




  ERROR 40XD0 - When extracting Blob from a database
 --

  Key: DERBY-1486
  URL: http://issues.apache.org/jira/browse/DERBY-1486
  Project: Derby
 Type: Bug

   Components: Miscellaneous
 Versions: 10.1.2.1
  Environment: Windows XP
 Reporter: David Heath


 An exception occurs when extracting a Blob from a database.
 The following code will ALWAYS fail with the exception:
 java.io.IOException: ERROR 40XD0: Container has been closed
 at org.apache.derby.impl.store.raw.data.OverflowInputStream.fillByteHolder(Unknown Source)
 at org.apache.derby.impl.store.raw.data.BufferedByteHolderInputStream.read(Unknown Source)
 at java.io.DataInputStream.read(Unknown Source)
 at java.io.FilterInputStream.read(Unknown Source)
 at java.io.ObjectInputStream$PeekInputStream.read(Unknown Source)
 at java.io.ObjectInputStream$PeekInputStream.readFully(Unknown Source)
 at java.io.ObjectInputStream$BlockDataInputStream.readDoubles(Unknown Source)
 at java.io.ObjectInputStream.readArray(Unknown Source)
 at java.io.ObjectInputStream.readObject0(Unknown Source)
 at java.io.ObjectInputStream.readObject(Unknown Source)
 at BlobTest.readRows(BlobTest.java:81)
 at BlobTest.main(BlobTest.java:23)

 CODE:

 import java.io.*;
 import java.sql.*;
 import java.util.*;

 public class BlobTest
 {
   private static final String TABLE1 = "CREATE TABLE TABLE_1 ( "
       + "ID INTEGER NOT NULL, "
       + "COL_2 INTEGER NOT NULL, "
       + "PRIMARY KEY (ID) )";
   private static final String TABLE2 = "CREATE TABLE TABLE_2 ( "
       + "ID INTEGER NOT NULL, "
       + "COL_BLOB BLOB, "
       + "PRIMARY KEY (ID) )";

   public static void main(String... args) {
     try {
       createDBandTables();
       Connection con = getConnection();
       addRows(con, 1, 1);
       readRows(con, 1);
       con.close();
     }
     catch (Exception exp) {
       exp.printStackTrace();
     }
   }

   private static void addRows(Connection con, int size, int id)
       throws Exception
   {
     String sql = "INSERT INTO TABLE_1 VALUES(?, ?)";
     PreparedStatement pstmt = con.prepareStatement(sql);
     pstmt.setInt(1, id);
     pstmt.setInt(2, 2);
     pstmt.executeUpdate();
     pstmt.close();
     double[] array = new double[size];
     array[size-1] = 1.23;
     sql = "INSERT INTO TABLE_2 VALUES(?, ?)";
     pstmt = con.prepareStatement(sql);
     pstmt.setInt(1, id);
     ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
     ObjectOutputStream objStream = new ObjectOutputStream(byteStream);
     objStream.writeObject(array); // Convert object to byte stream
     byte[] bytes = byteStream.toByteArray();
     ByteArrayInputStream inStream = new ByteArrayInputStream(bytes);
     pstmt.setBinaryStream(2, inStream, bytes.length);
     pstmt.executeUpdate();
     pstmt.close();
   }

   private static void readRows(Connection con, int id) throws Exception
   {
     String sql = "SELECT * FROM TABLE_2";
     Statement stmt = con.createStatement();
     ResultSet rs = stmt.executeQuery(sql);
     if (rs.next()) {
       rs.getInt(1);
 

[jira] Commented: (DERBY-550) BLOB : java.lang.OutOfMemoryError with network JDBC driver (org.apache.derby.jdbc.ClientDriver)

2006-07-12 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-550?page=comments#action_12420591 ] 

Andreas Korneliussen commented on DERBY-550:


Unless the streaming can be fixed for 10.2 so that we avoid OutOfMemoryError 
on the receiver side, I would propose the following:

We know the size of the LOB, and can check whether it fits in memory (using 
the Runtime class). If it cannot fit in memory, we can throw an SQLException 
instead of consuming all memory in the VM until we get an OutOfMemoryError.

By using this approach, we avoid the following problems:
* Side-effects on other connections in the VM: although it is the LOB which is 
taking almost all the memory in the VM, the OutOfMemoryError may be thrown in 
another thread in the VM, causing side-effects on other connections or on the 
application itself.
* Currently, if the network server goes out of memory when streaming data, the 
DRDAConnThread will stop. This causes hangs in the user applications.

If the streaming is fixed, there is no need to do this. Does anyone plan to 
fix the streaming issues for 10.2? If not, I will create a couple of JIRA 
issues to do the work of avoiding OutOfMemoryError by checking the size before 
allocating the byte arrays.
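The size check proposed above could look roughly like this — a minimal sketch using only the Runtime class, with a hypothetical helper name (`checkLobFitsInMemory`); the real client code would raise a proper SQLException with the appropriate SQLState rather than a plain runtime exception:

```java
public final class LobMemoryCheck {

    /**
     * Throws if a LOB of the given length clearly cannot be materialized
     * in the current JVM heap. Hypothetical helper, not Derby's actual API.
     */
    public static void checkLobFitsInMemory(long lobLength) {
        Runtime rt = Runtime.getRuntime();
        // Heap currently in use:
        long used = rt.totalMemory() - rt.freeMemory();
        // Memory that could still be made available to this JVM:
        long potentiallyAvailable = rt.maxMemory() - used;
        if (lobLength > potentiallyAvailable) {
            throw new IllegalStateException(
                "LOB of " + lobLength + " bytes is too large for the JVM ("
                + potentiallyAvailable + " bytes potentially available)");
        }
    }
}
```

The estimate is necessarily conservative (other threads may allocate concurrently), but it lets the connection fail with a clear message before the allocation takes down the whole VM.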


 BLOB : java.lang.OutOfMemoryError with network JDBC driver 
 (org.apache.derby.jdbc.ClientDriver)
 ---

  Key: DERBY-550
  URL: http://issues.apache.org/jira/browse/DERBY-550
  Project: Derby
 Type: Bug

   Components: JDBC, Network Server
 Versions: 10.1.1.0
  Environment: Any environment.
 Reporter: Grégoire Dubois
 Assignee: Tomohito Nakayama
  Attachments: BlobOutOfMem.java

 Using the org.apache.derby.jdbc.ClientDriver driver to access the
 Derby database over the network, the driver writes the whole file into
 memory (RAM) before sending it to the database.
 Writing small files (smaller than 5 MB) into the database works fine,
 but it is impossible to write big files (40 MB for example, or more) without
 getting the exception java.lang.OutOfMemoryError.
 The org.apache.derby.jdbc.EmbeddedDriver doesn't have this problem.
 Here follows some code that creates a database and a table, and tries to
 write a BLOB. Two parameters have to be changed for the code to work for you:
 DERBY_DBMS_PATH and FILE

 import NetNoLedge.Configuration.Configs;
 import org.apache.derby.drda.NetworkServerControl;
 import java.net.InetAddress;
 import java.io.*;
 import java.sql.*;

 /**
  * @author greg
  */
 public class DerbyServer_JDBC_BLOB_test {

     // The unique instance of DerbyServer in the application.
     private static DerbyServer_JDBC_BLOB_test derbyServer;

     private NetworkServerControl server;

     private static final String DERBY_JDBC_DRIVER =
         "org.apache.derby.jdbc.ClientDriver";
     private static final String DERBY_DATABASE_NAME = "Test";

     // ###
     // ### SET HERE THE EXISTING PATH YOU WANT
     // ###
     private static final String DERBY_DBMS_PATH = "/home/greg/DatabaseTest";
     // ###
     // ###

     private static int derbyPort = 9157;
     private static String userName = "user";
     private static String userPassword = "password";

     // ###
     // ### DEFINE HERE THE PATH TO THE FILE YOU WANT TO WRITE INTO THE DATABASE ###
     // ### TRY A 100 kB-3 MB FILE, AND AFTER A 40 MB OR BIGGER FILE ###
     // ###
     private static final File FILE = new File("/home/greg/01.jpg");
     // ###
     // ###

     /**
      * Used to test the server.
      */
     public static void main(String args[]) {
         try {
             DerbyServer_JDBC_BLOB_test.launchServer();
             DerbyServer_JDBC_BLOB_test server = getUniqueInstance();
             server.start();
             System.out.println("Server started");

             // After the server has been started, launch a first connection
             // to the database to
             // 1) Create the database if it doesn't exist already,
             // 2) Create the tables if they don't exist already.
             Class.forName(DERBY_JDBC_DRIVER).newInstance();
             Connection connection = DriverManager.getConnection
 

[jira] Commented: (DERBY-1486) ERROR 40XD0 - When extracting Blob from a database

2006-07-12 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1486?page=comments#action_12420614 ] 

Andreas Korneliussen commented on DERBY-1486:
-

Executing a new statement will cause a commit of the current transaction, 
which is in line with the standard. The result set from the previous statement 
does not get closed until you call close() on it, until it has reached the 
end, or until the transaction is rolled back. Since it is holdable, it is not 
closed when there is a commit.

However, this is from the Derby doc:

For non-scrollable result sets, immediately following a commit, the only valid 
operations that can be performed on the ResultSet object are:

* positioning the result set to the next row with ResultSet.next().
* closing the result set with ResultSet.close().

I think it is a bug that the second example does not work when the select 
statement is "SELECT * FROM TABLE_2".

[jira] Commented: (DERBY-1486) ERROR 40XD0 - When extracting Blob from a database

2006-07-12 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1486?page=comments#action_12420618 ] 

Andreas Korneliussen commented on DERBY-1486:
-

To clarify: this applies only in autocommit mode:
"Executing a new statement will cause a commit of the current transaction."

  ERROR 40XD0 - When extracting Blob from a database
 --

  Key: DERBY-1486
  URL: http://issues.apache.org/jira/browse/DERBY-1486
  Project: Derby
 Type: Bug

   Components: Miscellaneous
 Versions: 10.1.2.1
  Environment: Windows XP
 Reporter: David Heath


 An exception occurs when extracting a Blob from a database.
 The following code will ALWAYS fail with the exception:
 java.io.IOException: ERROR 40XD0: Container has been closed
 at org.apache.derby.impl.store.raw.data.OverflowInputStream.fillByteHolder(Unknown Source)
 at org.apache.derby.impl.store.raw.data.BufferedByteHolderInputStream.read(Unknown Source)
 at java.io.DataInputStream.read(Unknown Source)
 at java.io.FilterInputStream.read(Unknown Source)
 at java.io.ObjectInputStream$PeekInputStream.read(Unknown Source)
 at java.io.ObjectInputStream$PeekInputStream.readFully(Unknown Source)
 at java.io.ObjectInputStream$BlockDataInputStream.readDoubles(Unknown Source)
 at java.io.ObjectInputStream.readArray(Unknown Source)
 at java.io.ObjectInputStream.readObject0(Unknown Source)
 at java.io.ObjectInputStream.readObject(Unknown Source)
 at BlobTest.readRows(BlobTest.java:81)
 at BlobTest.main(BlobTest.java:23)

 CODE:

 import java.io.*;
 import java.sql.*;
 import java.util.*;

 public class BlobTest
 {
   private static final String TABLE1 = "CREATE TABLE TABLE_1 ( "
       + "ID INTEGER NOT NULL, "
       + "COL_2 INTEGER NOT NULL, "
       + "PRIMARY KEY (ID) )";
   private static final String TABLE2 = "CREATE TABLE TABLE_2 ( "
       + "ID INTEGER NOT NULL, "
       + "COL_BLOB BLOB, "
       + "PRIMARY KEY (ID) )";

   public static void main(String... args) {
     try {
       createDBandTables();
       Connection con = getConnection();
       addRows(con, 1, 1);
       readRows(con, 1);
       con.close();
     }
     catch (Exception exp) {
       exp.printStackTrace();
     }
   }

   private static void addRows(Connection con, int size, int id)
       throws Exception
   {
     String sql = "INSERT INTO TABLE_1 VALUES(?, ?)";
     PreparedStatement pstmt = con.prepareStatement(sql);
     pstmt.setInt(1, id);
     pstmt.setInt(2, 2);
     pstmt.executeUpdate();
     pstmt.close();
     double[] array = new double[size];
     array[size-1] = 1.23;
     sql = "INSERT INTO TABLE_2 VALUES(?, ?)";
     pstmt = con.prepareStatement(sql);
     pstmt.setInt(1, id);
     ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
     ObjectOutputStream objStream = new ObjectOutputStream(byteStream);
     objStream.writeObject(array); // Convert object to byte stream
     byte[] bytes = byteStream.toByteArray();
     ByteArrayInputStream inStream = new ByteArrayInputStream(bytes);
     pstmt.setBinaryStream(2, inStream, bytes.length);
     pstmt.executeUpdate();
     pstmt.close();
   }

   private static void readRows(Connection con, int id) throws Exception
   {
     String sql = "SELECT * FROM TABLE_2";
     Statement stmt = con.createStatement();
     ResultSet rs = stmt.executeQuery(sql);
     if (rs.next()) {
       rs.getInt(1);
       readTable1(con, id);
       InputStream stream = rs.getBinaryStream(2);
       ObjectInputStream objStream = new ObjectInputStream(stream);
       Object obj = objStream.readObject();   // FAILS HERE
       double[] array = (double[]) obj;
       System.out.println(array.length);
     }
     rs.close();
     stmt.close();
   }

   private static void readTable1(Connection con, int id) throws Exception {
     String sql = "SELECT ID FROM TABLE_1 WHERE ID=" + id;
     Statement stmt = con.createStatement();
     ResultSet rs = stmt.executeQuery(sql);
     if (rs.next()) {
     }
     rs.close();
     stmt.close();
   }

   private static Connection getConnection() throws Exception {
     String driver = "org.apache.derby.jdbc.EmbeddedDriver";
     Properties p = System.getProperties();
     p.put("derby.system.home", "C:\\databases\\sample");

     Class.forName(driver);
     String url = "jdbc:derby:derbyBlob";
     Connection con = DriverManager.getConnection(url);
     return con;
   }

   private static void createDBandTables() throws Exception {
     String driver = "org.apache.derby.jdbc.EmbeddedDriver";
     Properties p = System.getProperties();
     p.put("derby.system.home",
[jira] Updated: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

2006-07-12 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-802?page=all ]

Andreas Korneliussen updated DERBY-802:
---

Attachment: derby-802v2.diff

The attached diff (derby-802v2.diff) has one change compared to the first diff:
* The logic for undoing the projection is moved to ProjectRestrictResultSet 
and takes advantage of the projectMappings array already built there.
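As an illustration of the idea only (hypothetical names, not Derby's actual code): a projection mapping records, for each projected column, its position in the underlying row, and undoing the projection means scattering the projected values back through that same mapping.

```java
import java.util.Arrays;

public final class ProjectionDemo {

    /**
     * Apply a projection: result column i comes from base column
     * projectMapping[i] (1-based, as JDBC numbers columns).
     */
    static Object[] project(Object[] baseRow, int[] projectMapping) {
        Object[] out = new Object[projectMapping.length];
        for (int i = 0; i < projectMapping.length; i++) {
            out[i] = baseRow[projectMapping[i] - 1];
        }
        return out;
    }

    /**
     * Undo the projection: write projected values back into their
     * positions in a row of the base shape; unprojected columns stay null.
     */
    static Object[] unproject(Object[] projectedRow, int[] projectMapping,
                              int baseColumnCount) {
        Object[] base = new Object[baseColumnCount];
        for (int i = 0; i < projectMapping.length; i++) {
            base[projectMapping[i] - 1] = projectedRow[i];
        }
        return base;
    }
}
```

Reusing the mapping that the projection node already built avoids recomputing column positions in the scrollable result set layer.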

 OutofMemory Error when reading large blob when statement type is 
 ResultSet.TYPE_SCROLL_INSENSITIVE
 --

  Key: DERBY-802
  URL: http://issues.apache.org/jira/browse/DERBY-802
  Project: Derby
 Type: Bug

   Components: JDBC
 Versions: 10.0.2.0, 10.0.2.1, 10.0.2.2, 10.1.1.0, 10.2.0.0, 10.1.2.0, 
 10.1.1.1, 10.1.1.2, 10.1.2.1, 10.1.3.0, 10.1.2.2
  Environment: all
 Reporter: Sunitha Kambhampati
 Assignee: Andreas Korneliussen
 Priority: Minor
  Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff

 Grégoire Dubois on the list reported this problem.  From his mail: the
 reproduction is attached below.
 When the statement type is set to ResultSet.TYPE_SCROLL_INSENSITIVE, an
 OutOfMemory exception is thrown when reading large blobs.

 import java.sql.*;
 import java.io.*;

 /**
  * @author greg
  */
 public class derby_filewrite_fileread {

     private static File file = new
         File("/mnt/BigDisk/Clips/BabyMamaDrama-JShin.wmv");
     private static File destinationFile = new
         File("/home/greg/DerbyDatabase/" + file.getName());

     /** Creates a new instance of derby_filewrite_fileread */
     public derby_filewrite_fileread() {
     }

     public static void main(String args[]) {
         try {
             Class.forName("org.apache.derby.jdbc.EmbeddedDriver").newInstance();
             Connection connection = DriverManager.getConnection
                 ("jdbc:derby:/home/greg/DerbyDatabase/BigFileTestDB;create=true",
                  "APP", "");
             connection.setAutoCommit(false);

             Statement statement =
                 connection.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                     ResultSet.CONCUR_READ_ONLY);
             ResultSet result = statement.executeQuery(
                 "SELECT TABLENAME FROM SYS.SYSTABLES");

             // Create table if it doesn't already exist.
             boolean exist = false;
             while ( result.next() ) {
                 if ("db_file".equalsIgnoreCase(result.getString(1)))
                     exist = true;
             }
             if ( !exist ) {
                 System.out.println("Create table db_file.");
                 statement.execute("CREATE TABLE db_file (" +
                     "name  VARCHAR(40)," +
                     "file  BLOB(2G) NOT NULL)");
                 connection.commit();
             }

             // Read file from disk, write on DB.
             System.out.println("1 - Read file from disk, write on DB.");
             PreparedStatement preparedStatement =
                 connection.prepareStatement(
                     "INSERT INTO db_file(name,file) VALUES (?,?)");
             FileInputStream fileInputStream = new FileInputStream(file);
             preparedStatement.setString(1, file.getName());
             preparedStatement.setBinaryStream(2, fileInputStream,
                 (int)file.length());
             preparedStatement.execute();
             connection.commit();
             System.out.println("2 - END OF Read file from disk, write on DB.");

             // Read file from DB, and write on disk.
             System.out.println("3 - Read file from DB, and write on disk.");
             result = statement.executeQuery(
                 "SELECT file FROM db_file WHERE name='" + file.getName() + "'");
             byte[] buffer = new byte[1024];
             result.next();
             BufferedInputStream inputStream = new
                 BufferedInputStream(result.getBinaryStream(1), 1024);
             FileOutputStream outputStream = new
                 FileOutputStream(destinationFile);
             int readBytes = 0;
             while (readBytes != -1) {
                 readBytes = inputStream.read(buffer, 0, buffer.length);
                 if ( readBytes != -1 )
                     outputStream.write(buffer, 0, readBytes);
             }
             inputStream.close();
             outputStream.close();
             System.out.println("4 - END OF Read file from DB, and write on disk.");
         }
         catch (Exception e) {
             e.printStackTrace(System.err);
         }
     }
 }

 It returns
 1 - Read file from disk, write on DB.
 2 - END OF Read file from disk, write on DB.
 3 - Read file from DB, and write on disk.
 java.lang.OutOfMemoryError
 if the file is ~10 MB or more


[jira] Updated: (DERBY-1497) assert failure in MessageUtil, because exception thrown with too many parameters when handling OutOfMemoryError

2006-07-11 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1497?page=all ]

Andreas Korneliussen updated DERBY-1497:


Attachment: DERBY-1497.diff

The attached patch fixes the problem by not providing the error object as a 
parameter to the message.
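The assert fires because message 58009.C.6 declares no placeholders while one argument was supplied. A minimal sketch of that kind of parameter-count sanity check (hypothetical class and method names, not Derby's actual MessageUtil):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class MessageParamCheck {

    // Matches {0}, {1}, ... placeholders in a message template.
    private static final Pattern PLACEHOLDER = Pattern.compile("\\{(\\d+)\\}");

    /** Count the distinct numbered placeholders in a message template. */
    static int expectedParams(String template) {
        Set<Integer> seen = new HashSet<>();
        Matcher m = PLACEHOLDER.matcher(template);
        while (m.find()) {
            seen.add(Integer.valueOf(m.group(1)));
        }
        return seen.size();
    }

    /** Mirror of the sanity check: argument count must match the template. */
    static void checkArgs(String messageId, String template, Object... args) {
        int expected = expectedParams(template);
        if (expected != args.length) {
            throw new AssertionError("ASSERT FAILED Number of parameters expected"
                + " for message id " + messageId + " (" + expected
                + ") does not match number of arguments received ("
                + args.length + ")");
        }
    }
}
```

Dropping the extra argument, rather than changing the message template, is the smaller fix since the chained exception already carries the OutOfMemoryError.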





[jira] Created: (DERBY-1498) avoid tracing of LOB data in DRDAConnThread.readAndSetExtParams

2006-07-11 Thread Andreas Korneliussen (JIRA)
avoid tracing of LOB data in DRDAConnThread.readAndSetExtParams
---

 Key: DERBY-1498
 URL: http://issues.apache.org/jira/browse/DERBY-1498
 Project: Derby
Type: Sub-task

  Components: Network Server  
Versions: 10.2.0.0
Reporter: Andreas Korneliussen
 Assigned to: Andreas Korneliussen 
Priority: Trivial


In DRDAConnThread.readAndSetExtParams(..), all the bytes of a LOB are 
concatenated to a string and traced when running in debug mode. This makes it 
harder to debug OutOfMemory errors.





[jira] Updated: (DERBY-1498) avoid tracing of LOB data in DRDAConnThread.readAndSetExtParams

2006-07-11 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1498?page=all ]

Andreas Korneliussen updated DERBY-1498:


Attachment: DERBY-1498.diff

Attached is a patch which instead logs the length of the byte array.
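A sketch of the idea (hypothetical helper, not the actual DRDAConnThread code): when tracing an EXTDTA parameter, emit only a short summary including the array length instead of concatenating every byte into one huge string.

```java
public final class TraceSummary {

    /**
     * Summarize an EXTDTA byte array for the trace log: report only its
     * length, never its (possibly multi-megabyte) contents.
     */
    static String extdtaSummary(int paramIndex, byte[] data) {
        if (data == null) {
            return "EXTDTA p[" + paramIndex + "]: null";
        }
        return "EXTDTA p[" + paramIndex + "]: " + data.length + " bytes";
    }
}
```

Besides keeping debug logs readable, this avoids allocating a second giant object while investigating exactly the out-of-memory conditions the trace is meant to help with.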





[jira] Updated: (DERBY-1498) avoid tracing of LOB data in DRDAConnThread.readAndSetExtParams

2006-07-11 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1498?page=all ]

Andreas Korneliussen updated DERBY-1498:


Derby Info: [Patch Available]





[jira] Updated: (DERBY-1497) assert failure in MessageUtil, because exception thrown with too many parameters when handling OutOfMemoryError

2006-07-11 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1497?page=all ]

Andreas Korneliussen updated DERBY-1497:


Derby Info: [Patch Available]

 assert failure in MessageUtil, because exception thrown with too many 
 parameters when handling OutOfMemoryError
 ---

  Key: DERBY-1497
  URL: http://issues.apache.org/jira/browse/DERBY-1497
  Project: Derby
 Type: Sub-task

   Components: Network Client
 Versions: 10.2.0.0
 Reporter: Andreas Korneliussen
 Assignee: Andreas Korneliussen
 Priority: Trivial
  Attachments: DERBY-1497.diff

 If the VM throws an OutOfMemoryError, which is caught in
 NetStatementReply.copyEXTDTA:

 protected void copyEXTDTA(NetCursor netCursor) throws DisconnectException {
     try {
         parseLengthAndMatchCodePoint(CodePoint.EXTDTA);
         byte[] data = null;
         if (longValueForDecryption_ == null) {
             data = (getData(null)).toByteArray();
         } else {
             data = longValueForDecryption_;
             dssLength_ = 0;
             longValueForDecryption_ = null;
         }
         netCursor.extdtaData_.add(data);
     } catch (java.lang.OutOfMemoryError e) { // <-- OutOfMemoryError caught here
         agent_.accumulateChainBreakingReadExceptionAndThrow(new DisconnectException(agent_,
             new ClientMessageId(SQLState.NET_LOB_DATA_TOO_LARGE_FOR_JVM),
             e)); // <-- message takes no parameters, causing assert failure
     }
 }

 Instead of getting the message: java.sql.SQLException: Attempt to fully
 materialize lob data that is too large for the JVM.  The connection has been
 terminated.
 I am getting an assert:

 Exception in thread "main"
 org.apache.derby.shared.common.sanity.AssertFailure: ASSERT FAILED Number of
 parameters expected for message id 58009.C.6 (0) does not match number of
 arguments received (1)
     at org.apache.derby.shared.common.sanity.SanityManager.ASSERT(SanityManager.java:119)
  
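A standalone sketch of why the assert fires, under stated assumptions: the class `MessageCheckSketch`, its `format` helper, and the `DECLARED_PARAMS` constant are hypothetical stand-ins for Derby's `MessageUtil` machinery, used only to illustrate the parameter-count check:

```java
import java.text.MessageFormat;

// Illustrative sketch (not Derby's MessageUtil): a message id declared with
// zero parameters asserts when the caller supplies one argument.
public class MessageCheckSketch {
    // Hypothetical registry entry: message text plus its declared parameter count.
    static final String MSG_58009 =
        "Attempt to fully materialize lob data that is too large for the JVM.";
    static final int DECLARED_PARAMS = 0;

    static String format(String template, int declared, Object... args) {
        // Mirrors the sanity check: argument count must match the declaration.
        if (declared != args.length) {
            throw new AssertionError("Number of parameters expected ("
                + declared + ") does not match number of arguments received ("
                + args.length + ")");
        }
        return MessageFormat.format(template, args);
    }

    public static void main(String[] args) {
        // Passing the caught Throwable as a message argument trips the check:
        try {
            format(MSG_58009, DECLARED_PARAMS, new OutOfMemoryError());
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
        // Passing no message arguments matches the declared count:
        System.out.println(format(MSG_58009, DECLARED_PARAMS));
    }
}
```

In this framing the fix is simply to stop handing the exception to the message formatter as a substitution argument (it can still travel as the chained cause), so the argument count matches the zero parameters the message declares.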




[jira] Resolved: (DERBY-1497) assert failure in MessageUtil, because exception thrown with too many parameters when handling OutOfMemoryError

2006-07-11 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1497?page=all ]
 
Andreas Korneliussen resolved DERBY-1497:
-

Fix Version: 10.2.0.0
 Resolution: Fixed
 Derby Info:   (was: [Patch Available])

Committed revision 420821.


 assert failure in MessageUtil, because exception thrown with too many 
 parameters when handling OutOfMemoryError
 ---

  Key: DERBY-1497
  URL: http://issues.apache.org/jira/browse/DERBY-1497
  Project: Derby
 Type: Sub-task

   Components: Network Client
 Versions: 10.2.0.0
 Reporter: Andreas Korneliussen
 Assignee: Andreas Korneliussen
 Priority: Trivial
  Fix For: 10.2.0.0
  Attachments: DERBY-1497.diff

 If the VM throws an OutOfMemoryError, which is caught in
 NetStatementReply.copyEXTDTA:

 protected void copyEXTDTA(NetCursor netCursor) throws DisconnectException {
     try {
         parseLengthAndMatchCodePoint(CodePoint.EXTDTA);
         byte[] data = null;
         if (longValueForDecryption_ == null) {
             data = (getData(null)).toByteArray();
         } else {
             data = longValueForDecryption_;
             dssLength_ = 0;
             longValueForDecryption_ = null;
         }
         netCursor.extdtaData_.add(data);
     } catch (java.lang.OutOfMemoryError e) { // <-- OutOfMemoryError caught here
         agent_.accumulateChainBreakingReadExceptionAndThrow(new DisconnectException(agent_,
             new ClientMessageId(SQLState.NET_LOB_DATA_TOO_LARGE_FOR_JVM),
             e)); // <-- message takes no parameters, causing assert failure
     }
 }

 Instead of getting the message: java.sql.SQLException: Attempt to fully
 materialize lob data that is too large for the JVM.  The connection has been
 terminated.
 I am getting an assert:

 Exception in thread "main"
 org.apache.derby.shared.common.sanity.AssertFailure: ASSERT FAILED Number of
 parameters expected for message id 58009.C.6 (0) does not match number of
 arguments received (1)
     at org.apache.derby.shared.common.sanity.SanityManager.ASSERT(SanityManager.java:119)
  




[jira] Closed: (DERBY-1497) assert failure in MessageUtil, because exception thrown with too many parameters when handling OutOfMemoryError

2006-07-11 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1497?page=all ]
 
Andreas Korneliussen closed DERBY-1497:
---


 assert failure in MessageUtil, because exception thrown with too many 
 parameters when handling OutOfMemoryError
 ---

  Key: DERBY-1497
  URL: http://issues.apache.org/jira/browse/DERBY-1497
  Project: Derby
 Type: Sub-task

   Components: Network Client
 Versions: 10.2.0.0
 Reporter: Andreas Korneliussen
 Assignee: Andreas Korneliussen
 Priority: Trivial
  Fix For: 10.2.0.0
  Attachments: DERBY-1497.diff

 If the VM throws an OutOfMemoryError, which is caught in
 NetStatementReply.copyEXTDTA:

 protected void copyEXTDTA(NetCursor netCursor) throws DisconnectException {
     try {
         parseLengthAndMatchCodePoint(CodePoint.EXTDTA);
         byte[] data = null;
         if (longValueForDecryption_ == null) {
             data = (getData(null)).toByteArray();
         } else {
             data = longValueForDecryption_;
             dssLength_ = 0;
             longValueForDecryption_ = null;
         }
         netCursor.extdtaData_.add(data);
     } catch (java.lang.OutOfMemoryError e) { // <-- OutOfMemoryError caught here
         agent_.accumulateChainBreakingReadExceptionAndThrow(new DisconnectException(agent_,
             new ClientMessageId(SQLState.NET_LOB_DATA_TOO_LARGE_FOR_JVM),
             e)); // <-- message takes no parameters, causing assert failure
     }
 }

 Instead of getting the message: java.sql.SQLException: Attempt to fully
 materialize lob data that is too large for the JVM.  The connection has been
 terminated.
 I am getting an assert:

 Exception in thread "main"
 org.apache.derby.shared.common.sanity.AssertFailure: ASSERT FAILED Number of
 parameters expected for message id 58009.C.6 (0) does not match number of
 arguments received (1)
     at org.apache.derby.shared.common.sanity.SanityManager.ASSERT(SanityManager.java:119)
  



