Thoughts about community involvement in important issues (spinoff from getGeneratedKeys())

2006-07-13 Thread Kathey Marsden
This was in a thread about getGeneratedKeys() 


Lance J. Andersen wrote about JDBC spec and compatibility discussions:

The discussions are healthy, though, so we don't just jump into the fire; I 
just wish there were more people contributing besides us sometimes 


Kathey wrote:

Well, it just takes time, I think, and sometimes it is like the hard work 
of rubbing two sticks together to start a fire.  Once it catches, 
watch out!

Here are some examples 

1) I used to think I was one of a very few that cared at all about 
compatibility. 
   I'd wake up each day to new proposals like: "Today let's change the 
internal DRDAID and break every client in the world" or "Let's add a 
new security permission that needs to be in every deployed policy 
file."  Now I see lots of folks working on compat.  Dave published the 
guidelines.  Rick immediately asked, with this issue, if it was OK to 
change it in 10.2, and talked about polling the user community.  Lots of 
developers are thinking about it with their changes.  No good testing 
yet, though.  There is still far to go.


2) I used to think I was one of a very few who cared about all of the 
outstanding defects piling up.
   But in 10.1.3 I saw 90 bugs fixed and contributions from 36 
developers and testers from all over the world.

3) Two weeks ago I thought I was one of the very few who cared about Jira 
maintenance and having the data mean anything.  I put out some reports 
but got no bites.  This week, Rick put out a request and Andrew helped 
facilitate it with easy links.  I went to a meeting and came back to a mass 
Jira mail attack from folks cleaning up issues.


4) Earlier this week I thought I was one of the very few who cared 
about quick patch turnaround and patch list maintenance.
   Oh well, there are still 22 outstanding.  Next week, who knows. 

The sad thing is that there is a really long list of very important things 
that need to be on the right people's radar.  Getting them into 
the hands of the right people who care, presented in a way that is 
meaningful to them, is hard.  This community is also preparing for an 
explosion of users and developers, I think, as folks want to come 
in, fix their bug, and get out.  My opinion is that we are not nearly prepared on 
many fronts, from community issues like patch turnaround and code 
formatting trip wires, to really serious bugs like the corruption caused 
by DERBY-700, security issues like DERBY-65/528, and low-hanging-fruit 
compliance issues like DERBY-790.  How important are those 
relative to changing getGeneratedKeys()?  Different for each person, I guess.


But we need to keep on communicating and try to find new ways to frame 
these issues in the right way and provide easy access for the people who 
care.  As soon as you showed me a user hitting trouble with 
getGeneratedKeys() you softened me up #:).  Maybe there is a Wiki page 
you could make to bring attention to JDBC compliance issues in Derby too.  
This would help to give us the big picture of what the outstanding 
issues are. 


Kathey




[jira] Updated: (DERBY-1462) Test harness incorrectly catches db.lck WARNINGS generated during STORE tests and lists the tests as failed in *_fail.txt.

2006-07-13 Thread Myrna van Lunteren (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1462?page=all ]

Myrna van Lunteren updated DERBY-1462:
--

Derby Info: [Patch Available]

> Test harness incorrectly catches db.lck WARNINGS generated during STORE tests 
> and lists the tests as failed in *_fail.txt.
> --
>
>  Key: DERBY-1462
>  URL: http://issues.apache.org/jira/browse/DERBY-1462
>  Project: Derby
> Type: Test

>   Components: Test
> Versions: 10.1.2.5, 10.1.2.4, 10.1.2.3, 10.1.2.2, 10.1.2.1, 10.1.3.0, 
> 10.1.3.1
>  Environment: IBM 1.3.1 JRE for LINUX (and possibly other JRE 1.3.1 
> environments)
> Reporter: Stan Bradbury
> Assignee: Myrna van Lunteren
> Priority: Minor
>  Attachments: DERBY-1462_102_20060713.diff, DERBY-1462_102_20060713.stat
>
> The following store tests from derbyall do not shut down cleanly, so they leave the 
> db.lck file on disk.  This is OK! It is done by design to test recovery.  THE 
> PROBLEM: when run on Linux using IBM JRE 1.3.1 SP 10, the test harness 'sees' 
> the warnings and lists the tests as having failed.  The harness should ignore 
> these warnings, as the tests proceed and complete cleanly.
> Tests INCORRECTLY reported as failed:
> derbyall/derbynetclientmats/derbynetmats.fail:stress/stress.multi
> derbyall/derbynetmats/derbynetmats.fail:stress/stress.multi
> derbyall/storeall/storeall.fail:storetests/st_1.sql
> derbyall/storeall/storeall.fail:unit/recoveryTest.unit
> derbyall/storeall/storeall.fail:store/LogChecksumRecovery.java
> derbyall/storeall/storeall.fail:store/LogChecksumRecovery1.java
> derbyall/storeall/storeall.fail:store/MaxLogNumberRecovery.java
> derbyall/storeall/storeall.fail:store/oc_rec1.java
> derbyall/storeall/storeall.fail:store/oc_rec2.java
> derbyall/storeall/storeall.fail:store/oc_rec3.java
> derbyall/storeall/storeall.fail:store/oc_rec4.java
> derbyall/storeall/storeall.fail:store/dropcrash.java
> derbyall/storeall/storeall.fail:store/dropcrash2.java
> Example Error message:
> WARNING: Derby (instance FILTERED-UUID) is attempting to boot the 
> database 
> csf:/local1/131TST/Store1/storeall/storerecovery/storerecovery/wombat even 
> though Derby (instance FILTERED-UUID) may still be active.  Only one 
> instance of Derby should boot a database at a time. Severe and 
> non-recoverable corruption can result and may have already occurred.




[jira] Commented: (DERBY-1486) ERROR 40XD0 - When extracting Blob from a database

2006-07-13 Thread David Heath (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1486?page=comments#action_12420992 ] 

David Heath commented on DERBY-1486:


I appreciate the response of the community and also the communication from 
Lance. 

However, I am unsure how to close out this bug. It looks like the majority of 
the issues were encountered due to the way I interleaved ResultSets - these I 
can fix. (If someone has access to the JavaDocs for the java.sql package, 
changing them to make this clear would really help.)

However, as mentioned in my last post and endorsed by Andreas, there seems to 
be a bug in Derby:

Secondly, the fact that the second example only works in Derby if the select 
statement is changed from SELECT * FROM TABLE_2 to SELECT * FROM TABLE_2 WHERE 
ID>0 implies there is a bug somewhere in Derby.

I do not think the above issue will affect my code (as I generally use a WHERE 
clause) - however I would hate for the bug to just roll under the carpet. Thus, 
should I just close this bug report? Should I close this bug report and create 
a new one, with lower priority? Other suggestions appreciated.


>  ERROR 40XD0 - When extracting Blob from a database
> --
>
>  Key: DERBY-1486
>  URL: http://issues.apache.org/jira/browse/DERBY-1486
>  Project: Derby
> Type: Bug

>   Components: Miscellaneous
> Versions: 10.1.2.1
>  Environment: Windows XP
> Reporter: David Heath

>
> An exception occurs when extracting a Blob from a database. 
> The following code will ALWAYS fail with the exception:
> java.io.IOException: ERROR 40XD0: Container has been closed
> at 
> org.apache.derby.impl.store.raw.data.OverflowInputStream.fillByteHolder(Unknown
>  Source)
> at 
> org.apache.derby.impl.store.raw.data.BufferedByteHolderInputStream.read(Unknown
>  Source)
> at java.io.DataInputStream.read(Unknown Source)
> at java.io.FilterInputStream.read(Unknown Source)
> at java.io.ObjectInputStream$PeekInputStream.read(Unknown Source)
> at java.io.ObjectInputStream$PeekInputStream.readFully(Unknown Source)
> at java.io.ObjectInputStream$BlockDataInputStream.readDoubles(Unknown 
> Source)
> at java.io.ObjectInputStream.readArray(Unknown Source)
> at java.io.ObjectInputStream.readObject0(Unknown Source)
> at java.io.ObjectInputStream.readObject(Unknown Source)
> at BlobTest.readRows(BlobTest.java:81)
> at BlobTest.main(BlobTest.java:23)
> CODE:
> import java.io.*;
> import java.sql.*;
> import java.util.*;
> public class BlobTest
> {
>   private static final String TABLE1 = "CREATE TABLE TABLE_1 ( "
>  + "ID INTEGER NOT NULL, "
>  + "COL_2 INTEGER NOT NULL, "
>  + "PRIMARY KEY (ID) )";
>   private static final String TABLE2 = "CREATE TABLE TABLE_2 ( "
>  + "ID INTEGER NOT NULL, "
>  + "COL_BLOB BLOB, "
>  + "PRIMARY KEY (ID) )";
>   public static void main(String... args) {
> try {
>   createDBandTables();
>   Connection con = getConnection();
>   addRows(con, 1, 1);
>   readRows(con, 1);
>   con.close();
> }
> catch(Exception exp) {
>   exp.printStackTrace();
> }
>   }
>   private static void addRows(Connection con, int size, int id) 
>  throws Exception
>   {
> String sql = "INSERT INTO TABLE_1 VALUES(?, ?)";
> PreparedStatement pstmt = con.prepareStatement(sql);
> pstmt.setInt(1, id);
> pstmt.setInt(2, 2);
> pstmt.executeUpdate();
> pstmt.close();
> double[] array = new double[size];
> array[size-1] = 1.23;
> sql = "INSERT INTO TABLE_2 VALUES(?, ?)";
> pstmt = con.prepareStatement(sql);
> pstmt.setInt(1, id);
> ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
> ObjectOutputStream objStream = new ObjectOutputStream(byteStream);
> objStream.writeObject(array); // Convert object to byte stream 
> byte[] bytes = byteStream.toByteArray();
> ByteArrayInputStream inStream = new ByteArrayInputStream(bytes);
> pstmt.setBinaryStream(2, inStream, bytes.length);
> pstmt.executeUpdate();
> pstmt.close();
>   }
>   private static void readRows(Connection con, int id) throws Exception
>   {
> String sql = "SELECT * FROM TABLE_2";
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery(sql);
> if (rs.next()) {
>   rs.getInt(1);
>   readTable1(con, id);
>   InputStream stream = rs.getBinaryStream(2);
>   ObjectInputStream objStream = new ObjectInputStream(stream);
>   Object obj = objStream.readObject();   // FAILS HERE
>   double[] array = (double[]) obj;
>   System.out.pri

Re: behavior of Statement.getGeneratedKeys()

2006-07-13 Thread Kathey Marsden
Somehow I managed to get into a side conversation with Lance in the 
follow-up to this.  I am not sure how I did it with my mailer, but I 
responded directly to him.  Below is my response to Lance, which is 
relevant to this issue.  I will send a separate mail with additional 
follow-up that changed topics.


My response.

Lance J. Andersen wrote:

Understand, the original intent of this thread was also to try and 
understand why this behavior was there, and now I know.



I am sorry I didn't clarify this properly.  For some reason I thought I 
had, but looking back at the thread I don't see it.  I guess I just assumed 
everyone knows this by now.  Before there was talk 
of contributing the Cloudscape code to Apache, there was an effort to 
make Cloudscape compatible with DB2, and that is the reason for this 
behavior.  I don't think it is relevant to Derby's current decisions, 
except in cases like this where changing it might impact existing 
users.  Also, any general impact on the ability to migrate to other 
databases, as defined by our charter 
http://db.apache.org/derby/derby_charter.html, might sometimes be a 
consideration, I think.


Kathey






Re: Google SOC:MySQL to Derby Migration tool design question

2006-07-13 Thread Daniel John Debrunner
David Van Couvering wrote:

> I'm all for helping out other projects, but IMHO Derby could really use
> this migration tool.  We could let the DDL team know these issues exist,
> participate in discussions with them, while at the same time scratching
> our own itch and getting a migration tool done before Ramin disappears...

I guess I was assuming that Derby would continue to get the tool even if
some of the code was contributed to DDL utils, as the tool would use
ddlutils.

Seems Ramin is getting a good view of open source. :-)

Options seem to be ...

  - Add code to Derby that might be better suited to the charter of our
sister DB project, ddlutils. Will the community reject it as outside the
charter, or not in the best interests of Apache?

  - Add code to ddlutils and Derby and have to deal with two communities
and no mentor in the ddlutils project.

I do think we should treat any submission by a Google Summer of Code
student like any other contribution and not take any shortcuts or make
decisions that go against the charter or are not in the best interest of the
Apache DB project or Apache itself.

One possible outcome is that Ramin completes the code, it's submitted to
Derby attached to a Jira entry, but the portions that are ddlutils-relevant
get committed there and the Derby-specific code here.  (Someone
would have to drive that, though.)

And hopefully the whole GoogleSoC experience is so good that Ramin
continues to be involved in Derby and other open source projects rather
than just "disappearing" at the end of the GoogleSoC.

Thanks,
Dan.




Re: behavior of Statement.getGeneratedKeys()

2006-07-13 Thread Daniel John Debrunner
Lance J. Andersen wrote:
> I think it can be improved, but the javadoc indicates (for executeUpdate)
> that the array is ignored if the statement is not able to return an
> autogenerated key, and getGeneratedKeys says it will return an empty
> ResultSet if it cannot return generated keys.

Do the definitions of the columns in the empty ResultSet have to match
the definitions of the columns it can't return because they are not
generated?

:-)

Just kidding, though "does empty mean no *columns* and no rows" might be a
discussion for some future derby/apache event that involves beer. :-)

Dan.
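
For reference, a minimal sketch of the JDBC calls under discussion (the table
and column names are hypothetical); how many columns the returned ResultSet
carries when nothing was generated is exactly what is being debated:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class GeneratedKeysSketch {
        public static void main(String[] args) throws Exception {
            Connection conn =
                DriverManager.getConnection("jdbc:derby:demoDB;create=true");
            Statement stmt = conn.createStatement();
            stmt.executeUpdate("CREATE TABLE ORDERS(" +
                "ID INT GENERATED ALWAYS AS IDENTITY, ITEM VARCHAR(32))");

            // Ask the driver to make any auto-generated keys available.
            stmt.executeUpdate("INSERT INTO ORDERS(ITEM) VALUES('widget')",
                Statement.RETURN_GENERATED_KEYS);

            // Per the javadoc this is an empty ResultSet when no keys were
            // generated; the question above is what "empty" means.
            ResultSet keys = stmt.getGeneratedKeys();
            while (keys.next()) {
                System.out.println("generated key: " + keys.getString(1));
            }
            conn.close();
        }
    }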



[jira] Closed: (DERBY-222) Add project that uses derby to web site integration summary page

2006-07-13 Thread Jean T. Anderson (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-222?page=all ]
 
Jean T. Anderson closed DERBY-222:
--

Resolution: Fixed

This idea was implemented as a Derby Wiki page: 
http://wiki.apache.org/db-derby/UsesOfDerby

> Add project that uses derby to web site  integration summary page
> -
>
>  Key: DERBY-222
>  URL: http://issues.apache.org/jira/browse/DERBY-222
>  Project: Derby
> Type: Task

>   Components: Web Site
> Reporter: Jean T. Anderson
> Assignee: Jean T. Anderson
> Priority: Trivial

>
> Jeremy suggested using Jira to add projects/products that use derby to 
> http://incubator.apache.org/derby/integrate/misc.html; see
> http://mail-archives.eu.apache.org/mod_mbox/db-derby-user/200503.mbox/[EMAIL 
> PROTECTED]
> This task streamlines the process. Simply add a comment to this Jira issue 
> with the information you'd like added, and the assignee will add it. Please 
> provide:
>  - Product name
>  - Product URL
>  - License
>  - Description
>  - Product category (is it a Web App Server? a GUI Tool?)




[jira] Closed: (DERBY-580) Re-arrange javadoc on the Derby site.

2006-07-13 Thread Jean T. Anderson (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-580?page=all ]
 
Jean T. Anderson closed DERBY-580:
--

Resolution: Fixed

> Re-arrange javadoc on the Derby site.
> -
>
>  Key: DERBY-580
>  URL: http://issues.apache.org/jira/browse/DERBY-580
>  Project: Derby
> Type: Improvement

>   Components: Web Site
> Reporter: Daniel John Debrunner
> Assignee: Jean T. Anderson

>
> 1) The published javadoc (ant target  publishedapi) is part of the manuals, 
> thus should be included (linked to) in the list under 10.0 and 10.1 manuals.
> 2) The javadoc link on the home tab should be removed.
> 3)  All the javadoc for the source code (the links from the link in 2) should 
> be in the papers section.
> Currently only one is, the engine javadoc.




[jira] Closed: (DERBY-659) Submitting article on creating a Derby Demo web app with Tomcat

2006-07-13 Thread Jean T. Anderson (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-659?page=all ]
 
Jean T. Anderson closed DERBY-659:
--

Resolution: Fixed

> Submitting article on creating a Derby Demo web app with Tomcat
> ---
>
>  Key: DERBY-659
>  URL: http://issues.apache.org/jira/browse/DERBY-659
>  Project: Derby
> Type: Task

>   Components: Web Site
> Reporter: Stan Bradbury
> Assignee: Jean T. Anderson
> Priority: Minor
>  Attachments: TomcatDemoSubmit.zip
>
> Submitting an article with step-by-step instructions (a deployment template) 
> showing how to add Derby to a Tomcat server (v5.5.12) and then deploy the 
> JPetStore application as a demo of using Derby with Tomcat.
>   TITLE:  Embedding Apache Derby in Tomcat and creating an iBATIS JPetStore 
> Demo
> The attached zip file (TomcatDemoSubmit.zip) contains the Forrest XML source for 
> the article and two associated files.  The files should be placed in the 
> standard locations:
> DerbyTomcat5512JPetStor.xml - the integrate directory
> TomcatDeployOrig.gif  - the images directory
> DerbyJPetStore4Tomcat.zip   - the binaries directory
> Request that with this change we also standardize the website labels for the 
> three JPetStore integration papers to:   Demo.  Please make the 
> label for this paper be:
>   Tomcat Demo
> And the formerly submitted papers labels:
>   WebSphere Demo
>   Geronimo Demo




Re: Problems in SQLBinary when passing in streams with unknown length (SQL, store)

2006-07-13 Thread Daniel John Debrunner
Kristian Waagan wrote:
> Hello,
> 
> I just discovered that we are having problems with the length-less
> overloads in the embedded driver. Before I add any Jiras, I would like
> some feedback from the community. There are for sure problems in
> SQLBinary.readFromStream(). I would also appreciate if someone with
> knowledge of the storage layer can tell me if we are facing trouble
> there as well.
> 
> SQL layer
> =
> SQLBinary.readFromStream()
>   1) The method does not support streaming.
>  It will either grow the buffer array to twice its size, or possibly
>  more if the available() method of the input stream returns a
>  non-zero value, until all data is read. This approach causes an
>  OutOfMemoryError if the stream data cannot fit into memory.

I think this is because the maximum size for this data type is 255
bytes, so memory usage was not a concern.
SQLBinary corresponds to CHAR FOR BIT DATA, the sub-classes correspond
to the larger data types.

One question that has been nagging me is that the standard response to
why the existing JDBC methods had to declare the length was that the
length was required up-front by most (some?) database engines. Did this
requirement suddenly disappear? I assume it was discussed in the JDBC
4.0 expert group.

I haven't looked at your implementation for this, but the root cause may
be that derby does need to verify that the supplied value does not
exceed the declared length for the data type. Prior to any change for
lengthless overloads the incoming length was checked before the data was
inserted into the store. I wonder if with your change it is still
checking the length prior to storing it, but reading the entire value
into a byte array in order to determine its length.
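
Not having looked at the patch either, here is a minimal sketch (all names
hypothetical) of how a declared-length check could be enforced while
streaming, rather than by buffering the whole value just to learn its length:

    import java.io.FilterInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Fails as soon as more than maxLength bytes have been read, so the
    // declared-length check never requires holding the whole value in memory.
    public class LengthCheckingInputStream extends FilterInputStream {
        private final long maxLength;
        private long bytesRead;

        public LengthCheckingInputStream(InputStream in, long maxLength) {
            super(in);
            this.maxLength = maxLength;
        }

        public int read() throws IOException {
            int b = super.read();
            if (b != -1 && ++bytesRead > maxLength) {
                throw new IOException("value exceeds declared maximum length " + maxLength);
            }
            return b;
        }

        public int read(byte[] buf, int off, int len) throws IOException {
            int n = super.read(buf, off, len);
            if (n > 0 && (bytesRead += n) > maxLength) {
                throw new IOException("value exceeds declared maximum length " + maxLength);
            }
            return n;
        }
    }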

>   2) Might enter endless loop.
>  If the available() method of the input stream returns 0, and the
>  data in the stream is larger than the initial buffer array, an
>  endless loop will be entered. The problem is that the length
>  argument of read(byte[],int,int) is set to 0. We don't read any
>  more data and the stream is never exhausted.

That seems like a bug; available() is basically a useless method.

> 
> To me, relying on available() to determine if the stream is exhausted
> seems wrong. Also, subclasses of InputStream will return 0 if they don't
> override the method.
> I wrote a simple workaround for 2), but then the OutOfMemoryError
> comes into play for large data.
> 
> 
> Store layer
> ===
> I haven't had time to study the store layer, and know very little about
> it. I hope somebody can give me some quick answers here.
>   3) Is it possible to stream directly to the store layer if you don't
>  know the length of the data?
>  Can meta information (page headers, record headers etc.) be updated
>  "as we go", or must the size be specified when the insert is
>  started?

Yes, the store can handle this.

Dan.



[jira] Updated: (DERBY-1462) Test harness incorrectly catches db.lck WARNINGS generated during STORE tests and lists the tests as failed in *_fail.txt.

2006-07-13 Thread Myrna van Lunteren (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1462?page=all ]

Myrna van Lunteren updated DERBY-1462:
--

Attachment: DERBY-1462_102_20060713.stat
DERBY-1462_102_20060713.diff

Attached patch for 10.2 - DERBY-1462_102_20060713.* - adds _sed.properties 
files for all the tests that are listed as failing, plus for st_derby715.java 
which also fails.

In addition, I had to modify RunTest.java to pick up the stress_sed.properties 
file (which was not happening because the multi test sits in a different 
location from the rest of the tests).

Finally, I modified two build.xml files so they no longer need copyfiles.ant - which 
is how other test dirs have their build.xml now (eliminating the maintenance pain 
of copyfiles.ant).

I ran the troublesome test suites with ibm131 and ibm142 on Linux, with the only 
failures in stress.multi; these appeared to be the known intermittent problem (I 
reran that until it passed once...) and are at least not related to this issue.

> Test harness incorrectly catches db.lck WARNINGS generated during STORE tests 
> and lists the tests as failed in *_fail.txt.
> --
>
>  Key: DERBY-1462
>  URL: http://issues.apache.org/jira/browse/DERBY-1462
>  Project: Derby
> Type: Test

>   Components: Test
> Versions: 10.1.2.5, 10.1.2.4, 10.1.2.3, 10.1.2.2, 10.1.2.1, 10.1.3.0, 
> 10.1.3.1
>  Environment: IBM 1.3.1 JRE for LINUX (and possibly other JRE 1.3.1 
> environments)
> Reporter: Stan Bradbury
> Assignee: Myrna van Lunteren
> Priority: Minor
>  Attachments: DERBY-1462_102_20060713.diff, DERBY-1462_102_20060713.stat
>
> The following store tests from derbyall do not shut down cleanly, so they leave the 
> db.lck file on disk.  This is OK! It is done by design to test recovery.  THE 
> PROBLEM: when run on Linux using IBM JRE 1.3.1 SP 10, the test harness 'sees' 
> the warnings and lists the tests as having failed.  The harness should ignore 
> these warnings, as the tests proceed and complete cleanly.
> Tests INCORRECTLY reported as failed:
> derbyall/derbynetclientmats/derbynetmats.fail:stress/stress.multi
> derbyall/derbynetmats/derbynetmats.fail:stress/stress.multi
> derbyall/storeall/storeall.fail:storetests/st_1.sql
> derbyall/storeall/storeall.fail:unit/recoveryTest.unit
> derbyall/storeall/storeall.fail:store/LogChecksumRecovery.java
> derbyall/storeall/storeall.fail:store/LogChecksumRecovery1.java
> derbyall/storeall/storeall.fail:store/MaxLogNumberRecovery.java
> derbyall/storeall/storeall.fail:store/oc_rec1.java
> derbyall/storeall/storeall.fail:store/oc_rec2.java
> derbyall/storeall/storeall.fail:store/oc_rec3.java
> derbyall/storeall/storeall.fail:store/oc_rec4.java
> derbyall/storeall/storeall.fail:store/dropcrash.java
> derbyall/storeall/storeall.fail:store/dropcrash2.java
> Example Error message:
> WARNING: Derby (instance FILTERED-UUID) is attempting to boot the 
> database 
> csf:/local1/131TST/Store1/storeall/storerecovery/storerecovery/wombat even 
> though Derby (instance FILTERED-UUID) may still be active.  Only one 
> instance of Derby should boot a database at a time. Severe and 
> non-recoverable corruption can result and may have already occurred.




[jira] Commented: (DERBY-1462) Test harness incorrectly catches db.lck WARNINGS generated during STORE tests and lists the tests as failed in *_fail.txt.

2006-07-13 Thread Myrna van Lunteren (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1462?page=comments#action_12420977 ] 

Myrna van Lunteren commented on DERBY-1462:
---

Cut-and-paste of a comment on the list by Mike Matrigali:
--- I agree; I think this is a problem with these particular tests on pre-1.4
JVMs on non-Windows platforms - their output is different - and it would
not be right to somehow get the harness to suppress every connect warning
printed.  Fixing it with test-specific seds is fine.

I checked all the tests below except for the stress ones; they are all
set up to run on an existing db and are coded to crash with no cleanup so
that the subsequent test will run recovery.  A side effect of that crash on
pre-1.4 JVMs on non-Windows platforms is that it leaves a lock file
around; Derby can't tell whether it is an active lock file, and thus the
reconnect correctly prints a warning in those cases.

The stress ones are a little puzzling; I didn't think we reconnected to
the same db - I have not really looked at how we run those tests in
the network server framework.  Is it likely we are trying to run those
tests on an existing db?


I think that with the stress test and the network server, the last tester gets 
interrupted because of the timeout (10 minutes), so that process leaves the db.lck 
behind, which causes the final.sql to hit the WARNING, so that's to be expected too.


> Test harness incorrectly catches db.lck WARNINGS generated during STORE tests 
> and lists the tests as failed in *_fail.txt.
> --
>
>  Key: DERBY-1462
>  URL: http://issues.apache.org/jira/browse/DERBY-1462
>  Project: Derby
> Type: Test

>   Components: Test
> Versions: 10.1.2.5, 10.1.2.4, 10.1.2.3, 10.1.2.2, 10.1.2.1, 10.1.3.0, 
> 10.1.3.1
>  Environment: IBM 1.3.1 JRE for LINUX (and possibly other JRE 1.3.1 
> environments)
> Reporter: Stan Bradbury
> Assignee: Myrna van Lunteren
> Priority: Minor

>
> The following store tests from derbyall do not shut down cleanly, so they leave the 
> db.lck file on disk.  This is OK! It is done by design to test recovery.  THE 
> PROBLEM: when run on Linux using IBM JRE 1.3.1 SP 10, the test harness 'sees' 
> the warnings and lists the tests as having failed.  The harness should ignore 
> these warnings, as the tests proceed and complete cleanly.
> Tests INCORRECTLY reported as failed:
> derbyall/derbynetclientmats/derbynetmats.fail:stress/stress.multi
> derbyall/derbynetmats/derbynetmats.fail:stress/stress.multi
> derbyall/storeall/storeall.fail:storetests/st_1.sql
> derbyall/storeall/storeall.fail:unit/recoveryTest.unit
> derbyall/storeall/storeall.fail:store/LogChecksumRecovery.java
> derbyall/storeall/storeall.fail:store/LogChecksumRecovery1.java
> derbyall/storeall/storeall.fail:store/MaxLogNumberRecovery.java
> derbyall/storeall/storeall.fail:store/oc_rec1.java
> derbyall/storeall/storeall.fail:store/oc_rec2.java
> derbyall/storeall/storeall.fail:store/oc_rec3.java
> derbyall/storeall/storeall.fail:store/oc_rec4.java
> derbyall/storeall/storeall.fail:store/dropcrash.java
> derbyall/storeall/storeall.fail:store/dropcrash2.java
> Example Error message:
> WARNING: Derby (instance FILTERED-UUID) is attempting to boot the 
> database 
> csf:/local1/131TST/Store1/storeall/storerecovery/storerecovery/wombat even 
> though Derby (instance FILTERED-UUID) may still be active.  Only one 
> instance of Derby should boot a database at a time. Severe and 
> non-recoverable corruption can result and may have already occurred.




Re: Revoke REFERENCES privilege and drop foreign key constraint

2006-07-13 Thread Mamta Satoor
Hi,
 
While working on revoke privilege, I realized that when a table/view/routine is dropped, we do not drop the privileges that were defined on those objects. This is a known issue and Satheesh already has plans to work on it. But, out of curiosity, I was looking at DropTableConstantAction.executeConstantAction and found the following piece of code in there:
   DropTriggerConstantAction.dropTriggerDescriptor(lcc, dm, dd, tc, trd, activation); 
So it seems that, with triggers, I might get lucky: when TRIGGER or any other privilege on which a trigger depends is revoked, I can simply call the static method 
   DropTriggerConstantAction.dropTriggerDescriptor(lcc, dm, dd, tc, trd, activation); 
and hence there is no need to go through the SQL layer, which will be good for performance.

I also looked at DropViewConstantAction.executeConstantAction(Activation) and found the following piece of code there:
   vd.dropViewWork(dd, dm, lcc, tc, sd, td, false);
So, again, when a privilege is revoked such that a view will be impacted, I can simply call 
   vd.dropViewWork(dd, dm, lcc, tc, sd, td, false);
when the view gets the REVOKE_PRIVILEGE action from the dependency manager. (I am not going into the details of what happens if there is another privilege which can replace the privilege being revoked, so that there is no need to drop the view; this discussion is just at a high level.)

So far so good for views and triggers; most likely there is no need to go through the SQL layer in order to drop them.

For constraints, there is a static method in DropConstraintConstantAction called dropConstraintAndIndex, but unfortunately it does not do the work of invoking the dependency manager for all the invalidation actions. I am not sure why this static method was implemented in such a manner that only part of the work of drop constraint has been abstracted. Maybe someone on the list knows about this and can share their thoughts about the peculiar DropConstraintConstantAction implementation.

So, in short, it seems that after all I might not have to go through the SQL layer, most likely, for views and triggers. As for constraints, I am not sure if I can avoid issuing SQL until I understand how (and if) the dependency work and dropConstraintAndIndex can be abstracted out into a standalone method. If anyone has any thoughts, please share them here.
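
For reference, "going through the SQL layer" here would amount to issuing
ordinary DDL when the revoke invalidates a dependent object. A minimal sketch
(object names hypothetical, and ignoring the question of which authorization
the statements run under, which Dan raises in the reply quoted below):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RevokeCleanupSketch {
        // Drops dependent objects with plain DDL instead of calling the
        // ConstantAction internals directly; all object names are made up.
        static void dropDependents(Connection conn) throws Exception {
            Statement s = conn.createStatement();
            s.executeUpdate("DROP TRIGGER SALES_AUDIT_TRIG");
            s.executeUpdate("DROP VIEW SALES_SUMMARY_V");
            s.executeUpdate("ALTER TABLE ORDERS DROP CONSTRAINT ORDERS_CUST_FK");
            s.close();
        }

        public static void main(String[] args) throws Exception {
            dropDependents(DriverManager.getConnection("jdbc:derby:salesdb"));
        }
    }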

 
thanks,
Mamta 
On 7/13/06, Daniel John Debrunner <[EMAIL PROTECTED]> wrote:

Bryan Pendleton wrote:
> [ Possible re-send of this message; I've been having email problems, sorry. ]
>
>> I looked through alter table constant action to see what happens
>> when a user issues a drop constraint foreignkeyname and it seems
>> like there is lot more involved than simply calling the data
>> dictionary to drop the constraint descriptor.
>
> What about re-factoring this code and moving the extra code out of
> AlterTableConstantAction's drop constraint subroutine and into the data
> dictionary's drop constraint routine.

I think that's pushing too much knowledge of the SQL system into the
DataDictionary. A constraint may share the underlying index with other
SQL indexes, thus dropping the constraint must check usage on the
underlying index etc.

> Then, we could share this code between alter table drop constraint, and
> revoke privilege.

The ConstantAction class for the drop constraint already contains the
logic, thus it could be the share point. Though as Mamta showed, we
already have an easy api to do the sharing, at the SQL level. All that is
required is a mechanism to run a SQL statement as another user; I think
this is something that will be required in the future, so it seems like a
good thing to add.

Dan.


Re: [jira] Commented: (DERBY-550) BLOB : java.lang.OutOfMemoryError with network JDBC driver (org.apache.derby.jdbc.ClientDriver)

2006-07-13 Thread David Van Couvering
Thanks, Bryan, I was not aware you could do that in JDBC -- I never 
programmed BLOBs in JDBC.  Definitely puts some constraints on the 
network logic.  If you can't cache it in memory, the network layer has 
to do something with that data, and know how to get back to it when you 
ask for it, and have protocol support for "can I have that BLOB again?" 
 That does get pretty tricky.


David

Bryan Pendleton wrote:

David Van Couvering wrote:
I guess what I was assuming was, if the application goes off and does 
something else, we can notice that and either raise an exception 
("you're not done with that BLOB column yet") or flush the rest of the 
BLOB data, since it's obvious they won't be getting back to it (e.g. 
if they send another query or do ResultSet.next(), it's clear they're 
done with the BLOB column). 


Are you sure that's acceptable JDBC behavior? My (very old) copy of the
JDBC spec says things like:

  The standard behavior for a Blob instance is to remain valid until the
  transaction in which it was created is either committed or rolled back.

So if I do something like:

  ResultSet rs = stmt.executeQuery("SELECT DATA FROM TABLE1");
  rs.first();
  Blob data = rs.getBlob("DATA");
  InputStream blobStream = data.getBinaryStream();

I think I am supposed to be allowed to access blobStream quite some time later,
even if I do other things on the connection in the meantime.

But I confess I don't do a lot of BLOB programming in JDBC, so maybe I'm
manufacturing bogeymen that don't actually exist.

thanks,

bryan



Problems in SQLBinary when passing in streams with unknown length (SQL, store)

2006-07-13 Thread Kristian Waagan

Hello,

I just discovered that we are having problems with the length-less
overloads in the embedded driver. Before I add any Jiras, I would like
some feedback from the community. There are for sure problems in
SQLBinary.readFromStream(). I would also appreciate if someone with
knowledge of the storage layer can tell me if we are facing trouble
there as well.

SQL layer
=
SQLBinary.readFromStream()
  1) The method does not support streaming.
 It will either grow the buffer array to twice its size, or possibly
 more if the available() method of the input stream returns a
 non-zero value, until all data is read. This approach causes an
 OutOfMemoryError if the stream data cannot fit into memory.

  2) Might enter endless loop.
 If the available() method of the input stream returns 0, and the
 data in the stream is larger than the initial buffer array, an
 endless loop will be entered. The problem is that the length
 argument of read(byte[],int,int) is set to 0. We don't read any
 more data and the stream is never exhausted.

To me, relying on available() to determine if the stream is exhausted
seems wrong. Also, subclasses of InputStream will return 0 if they don't
override the method.
I wrote a simple workaround for 2), but then the OutOfMemoryError
comes into play for large data.
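
For illustration, a minimal sketch of a read loop that detects end-of-stream
from the return value of read() rather than from available(); buffering into
memory here is only to keep the example short and does not address the
OutOfMemoryError for values larger than the heap:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public final class StreamUtil {
        // read() returning -1, not available(), signals end-of-stream, so
        // streams whose available() is always 0 still terminate the loop.
        public static byte[] readFully(InputStream in) throws IOException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }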


Store layer
===
I haven't had time to study the store layer, and know very little about
it. I hope somebody can give me some quick answers here.
  3) Is it possible to stream directly to the store layer if you don't
 know the length of the data?
 Can meta information (page headers, record headers etc.) be updated
 "as we go", or must the size be specified when the insert is
 started?

I started looking at stuff in the store layer, but got a bit
overwhelmed, so instead of studying all the code, I'll post a few
questions and go to bed :)
I observed that inserts that go through the JDBC calls with length
arguments, go to BasePage.insertAllowOverflow if the column is
long. The application stream is finally read (through various wrapper
streams) from MemByteHolder.write(InputStream,long).
Can we get there without information about the data length?

I'll dig further another day, but as this is getting pretty
complex I'd like to get some help and pieces of advice from the experts.




Thanks,
--
Kristian


Re: [jira] Commented: (DERBY-550) BLOB : java.lang.OutOfMemoryError with network JDBC driver (org.apache.derby.jdbc.ClientDriver)

2006-07-13 Thread Bryan Pendleton

David Van Couvering wrote:
I guess what I was assuming was, if the application goes off and does 
something else, we can notice that and either raise an exception 
("you're not done with that BLOB column yet") or flush the rest of the 
BLOB data, since it's obvious they won't be getting back to it (e.g. if 
they send another query or do ResultSet.next(), it's clear they're done 
with the BLOB column). 


Are you sure that's acceptable JDBC behavior? My (very old) copy of the
JDBC spec says things like:

  The standard behavior for a Blob instance is to remain valid until the
  transaction in which it was created is either committed or rolled back.

So if I do something like:

  ResultSet rs = stmt.executeQuery("SELECT DATA FROM TABLE1");
  rs.first();
  Blob data = rs.getBlob("DATA");
  InputStream blobStream = data.getBinaryStream();

I think I am supposed to be allowed to access blobStream quite some time later,
even if I do other things on the connection in the meantime.

But I confess I don't do a lot of BLOB programming in JDBC, so maybe I'm
manufacturing bogeymen that don't actually exist.

thanks,

bryan



Re: [jira] Commented: (DERBY-550) BLOB : java.lang.OutOfMemoryError with network JDBC driver (org.apache.derby.jdbc.ClientDriver)

2006-07-13 Thread David Van Couvering
I guess what I was assuming was, if the application goes off and does 
something else, we can notice that and either raise an exception 
("you're not done with that BLOB column yet") or flush the rest of the 
BLOB data, since it's obvious they won't be getting back to it (e.g. if 
they send another query or do ResultSet.next(), it's clear they're done 
with the BLOB column).  This is something that could be caught the next 
time the am code hits the network layer for something.  The network 
layer would notice there was a half-processed BLOB and Do The Right Thing.


Lots of hand-waving, but it seems to me that this is solvable, and it 
seems quite problematic to me that as it stands we just shoot the user 
in the foot if they happen to have a BLOB column that's larger than 
available memory, and there's nothing they can do to mitigate that.


David


Bryan Pendleton wrote:

David Van Couvering wrote:
... We should not allow any other requests to be sent to the server 
over this connection until the BLOB data is processed or cancelled 


I'm not quite sure how we would do this. What is preventing the client from,
say, calling ResultSet.next(), or going off to some other Statement object
and running some other query, etc.?

The DRDA protocol has a whole lot of logic regarding exactly how
the requestor and the server communicate about the state of the conversation,
what statements are in play and how to request or respond with actions on them,
what results have already been communicated, how to pick up and resume a
partially-fetched set of query results, etc.
DRDA may well have all the protocol mechanisms in place for suspending an
externalized data object partway through, picking it up later, etc., and we
may already have all that support in place in our network client libraries.

But I think there's some DRDA research waiting to be done here, that was the
main point I was trying to raise.

thanks,

bryan




Re: [jira] Commented: (DERBY-550) BLOB : java.lang.OutOfMemoryError with network JDBC driver (org.apache.derby.jdbc.ClientDriver)

2006-07-13 Thread Bryan Pendleton

David Van Couvering wrote:
... We should not 
allow any other requests to be sent to the server over this connection 
until the BLOB data is processed or cancelled 


I'm not quite sure how we would do this. What is preventing the client from,
say, calling ResultSet.next(), or going off to some other Statement object
and running some other query, etc.?

The DRDA protocol has a whole lot of logic regarding exactly how
the requestor and the server communicate about the state of the conversation,
what statements are in play and how to request or respond with actions on them,
what results have already been communicated, how to pick up and resume a
partially-fetched set of query results, etc.

DRDA may well have all the protocol mechanisms in place for suspending an
externalized data object partway through, picking it up later, etc., and we
may already have all that support in place in our network client libraries.

But I think there's some DRDA research waiting to be done here, that was the
main point I was trying to raise.

thanks,

bryan




Re: Choice of JMX implementations

2006-07-13 Thread Sanket Sharma

On 7/14/06, David Van Couvering <[EMAIL PROTECTED]> wrote:

> Looks like we may have a winner... It would be interesting to see how
> they use it -- do they have it available only for pre-Java 5 VMs, or is
> it used regardless of what VM is being used?

It can be used regardless of the VM. The minimum requirement is Java 1.3.
On Java 1.5 and above, command-line options can be used to give
precedence to the MX4J libraries instead of Sun's libraries.

As of now, I'm testing my code against the reference implementation.
I'll take some more time to evaluate the pros and cons of each of them
before actually making my changes permanent.

> How big is the MX4J jar file?

MX4J's jars are 399 KB for the JMX implementation and 168 KB for the remote
functionality. The second one is not really needed right now.

> And what time is it in India, Sanket?

It's almost 5 AM! Time to go to bed :-)  lol!!

Regards,
Sanket


Re: [jira] Commented: (DERBY-550) BLOB : java.lang.OutOfMemoryError with network JDBC driver (org.apache.derby.jdbc.ClientDriver)

2006-07-13 Thread David Van Couvering
Hi, Bryan, perhaps I'm missing your point, but isn't it the case that the 
server's thread simply blocks when its network buffer fills up, and that 
the only effect is that this particular thread is put to sleep?  Then one 
of the following scenarios occurs:


- The client pulls the next batch data into its local buffer, and the 
server thread can then send more data; this continues until all data is 
processed.


- The client maintains the connection but never gets around to 
processing the data.  This seems unusual, but it's fine.  We should not 
allow any other requests to be sent to the server over this connection 
until the BLOB data is processed or cancelled (and this raises the 
question: *can* you cancel?  I don't know JDBC well enough to answer this).


- The client connection goes away, at which point TCP-IP notifies the 
server socket, and the server-side thread can be cleaned up and put back 
on the free queue or whatever it is we do with it.


What deadlock situation are you concerned about?

David

Bryan Pendleton (JIRA) wrote:
[ http://issues.apache.org/jira/browse/DERBY-550?page=comments#action_12420877 ] 


Bryan Pendleton commented on DERBY-550:
---


RE: Andreas's observation that LOB implementation classes would need to be 
reimplemented, so that they will do the streaming.

I'm wondering whether this might accidentally introduce other new and 
undesirable behaviors.

The net effect of a change like this is that the LOB data will remain in the 
network pipe, or be queued on the server side, until the blob is read by the 
client.

But what if that takes a long time, or in fact never happens?

We might need a way to "cancel" a partially-sent BLOB object which was returned 
by the server but which the client for whatever reason never decided to read.

The current "greedy" algorithm seems to ensure that we minimize the risk of 
producer-consumer deadlocks of various sorts, at the expense of accumulating the entire 
data into memory.

I hope this makes sense. I don't know of an actual problem here; I just have a 
funny feeling that this change is going to be rather tricky to accomplish.





BLOB : java.lang.OutOfMemoryError with network JDBC driver 
(org.apache.derby.jdbc.ClientDriver)
---

 Key: DERBY-550
 URL: http://issues.apache.org/jira/browse/DERBY-550
 Project: Derby
Type: Bug



  Components: JDBC, Network Server
Versions: 10.1.1.0
 Environment: Any environment.
Reporter: Grégoire Dubois
Assignee: Tomohito Nakayama
 Attachments: BlobOutOfMem.java

When using the org.apache.derby.jdbc.ClientDriver driver to access the
Derby database over the network, the driver writes the whole file into memory
(RAM) before sending it to the database.
Writing small files (smaller than 5 MB) into the database works fine,
but it is impossible to write big files (40 MB for example, or more) without
getting the exception java.lang.OutOfMemoryError.
The org.apache.derby.jdbc.EmbeddedDriver doesn't have this problem.
Here follows some code that creates a database and a table, and tries to write a
BLOB. Two parameters have to be changed for the code to work for you:
DERBY_DBMS_PATH and FILE
import NetNoLedge.Configuration.Configs;
import org.apache.derby.drda.NetworkServerControl;
import java.net.InetAddress;
import java.io.*;
import java.sql.*;
/**
 *
 * @author  greg
 */
public class DerbyServer_JDBC_BLOB_test {

// The unique instance of DerbyServer in the application.

private static DerbyServer_JDBC_BLOB_test derbyServer;

private NetworkServerControl server;

private static final String DERBY_JDBC_DRIVER = "org.apache.derby.jdbc.ClientDriver";

private static final String DERBY_DATABASE_NAME = "Test";

// ###

// ### SET HERE THE EXISTING PATH YOU WANT 
// ###
private static final String DERBY_DBMS_PATH =  "/home/greg/DatabaseTest";
// ###
// ###


private static int derbyPort = 9157;

private static String userName = "user";
private static String userPassword = "password";

// ###
// # DEFINE HERE THE PATH TO THE FILE YOU WANT TO WRITE INTO THE DATABASE ###
// # TRY A 100kb-3Mb FILE, AND AFTER A 40Mb OR BIGGER FILE #
// ###
private static final File FILE = new File("/home/greg/01.jpg");
// #

Re: Choice of JMX implementations

2006-07-13 Thread Jean T. Anderson
Sanket Sharma wrote:
> Following apache projects are also known to use MX4J:
> Geronimo,
> Avalon-Phoenix
> Tomcat 

The Avalon project closed down [1], but Geronimo and Tomcat are both alive
and kicking.

 -jean

[1] http://avalon.apache.org/closed.html

> Regards,
> Sanket
> 
> On 7/14/06, Jean T. Anderson <[EMAIL PROTECTED]> wrote:
> 
>> thanks for checking the hivemind download, Andreas.
>>
>> I checked the geronimo.1.1 download and it does include the mx4j jars.
>> Its NOTICE file also has this attribution:
>>
>> > =========================================================
>> > ==                     MX4J Notice                     ==
>> > =========================================================
>> >
>> > This product includes software developed by the MX4J project
>> > (http://sourceforge.net/projects/mx4j).
>>
>>
>>  -jean
>>
>>
>>
>> Andreas Korneliussen wrote:
>> > Jean T. Anderson wrote:
>> >
>> >> Andrew McIntyre wrote:
>> >>
>> >>> If the goal is to repackage any of these, I'm not sure that will be
>> >>> possible with any of the following, except for Apache Commons
>> >>> Modelling, but is that actually an implementation?
>> >>>
>> >>> For information on compatibility of other open source licenses with
>> >>> the ASL, see: http://people.apache.org/~cliffs/3party.html
>> >>>
>> >>> On 7/13/06, Sanket Sharma <[EMAIL PROTECTED]> wrote:
>> >>
>> >>
>> >> ...
>> >>
>>  4. MX4J
>> >>>
>> >>>
>> >>> This has a modified BSD license with an advertising clause, and a
>> >>> restriction to downstream projects on naming. Not that we'd ever name
>> >>> our project MX4J, but it's an extra restriction that isn't in the
>> ASL,
>> >>> so we might need to get a determination from legal-discuss on whether
>> >>> this is acceptable to redistribute.
>> >>
>> >>
>> >>
>> >> here's one precedent for using MX4J at apache (and there might be
>> more):
>> >>
>> >>
>> http://jakarta.apache.org/hivemind/hivemind-jmx/setupJMXImplementation.html
>>
>> >>
>> >>
>> >>
>> > They are using it for development, and testing. However the users need
>> > to  download mx4j themselves, or download another JMX
>> implementation, or
>> > use java 5.
>> >
>> > See http://jakarta.apache.org/hivemind/hivemind-jmx/quickstart.html
>> >
>> > I downloaded hivemind-1-1-1, and could not find any redistribution of
>> > mx4j there.
>> >
>> > Andreas
>> >
>> >
>> >>  -jean
>> >>
>> >
>> >
>>
>>
> 



[jira] Commented: (DERBY-1156) allow the encrypting of an existing unencrypted db and allow the re-encrypting of an existing encrypted db

2006-07-13 Thread Suresh Thalamati (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1156?page=comments#action_12420960 ] 

Suresh Thalamati commented on DERBY-1156:
-

Fixed the comments as suggested by Mike, and committed the reencrypt_4.diff 
patch to trunk with revision 421721.

> allow the encrypting of an existing unencrypted db and allow the 
> re-encrypting of an existing encrypted db
> --
>
>  Key: DERBY-1156
>  URL: http://issues.apache.org/jira/browse/DERBY-1156
>  Project: Derby
> Type: Improvement

>   Components: Store
> Versions: 10.1.2.3
> Reporter: Mike Matrigali
> Assignee: Suresh Thalamati
> Priority: Minor
>  Fix For: 10.2.0.0
>  Attachments: encryptspec.html, reencrypt_1.diff, reencrypt_2.diff, 
> reencrypt_3.diff, reencrypt_4.diff
>
> encrypted database to be re-encrypted with a new password.
> Here are some ideas for an initial implementation.
> The easiest way to do this is to make sure we have exclusive access to the
> data and that no log is required in the new copy of the db.  I want to avoid
> the log as it also is encrypted.  Here is my VERY high level plan:
> 1) Force exclusive access by putting all the work in the low level store,
>offline boot method.  We will do redo recovery as usual, but at the end
>there will be an entry point to do the copy/encrypt operation.
> copy/encrypt process:
> 0) The request to encrypt/re-encrypt the db will be handled with a new set
>of url flags passed into store at boot time.  The new flags will provide
>the same inputs as the current encrypt flags.  So at high level the
>request will be "connect db old_encrypt_url_flags; new_encrypt_url_flags".
>TODO - provide exact new flag syntax.
> 1) Open a transaction do all logged work to do the encryption.  All logging
>will be done with existing encryption.
> 2) Copy and encrypt every db file in the database.  The target files will
>be in the data directory.  There will be a new suffix to track the new
>files, similar to the current process used for handling drop table in
>a transaction consistent manner without logging the entire table to the 
> log.
>Entire encrypted destination file is guaranteed synced to disk before
>transaction commits.  I don't think this part needs to be logged.
>Files will be read from the cache using existing mechanism and written
>directly into new encrypted files (new encrypted data does not end up in
>the cache).
> 3) Switch encrypted files for old files.  Do this under a new log operation
>so the process can be correctly rolled back if the encrypt db operation
>transaction fails.  Rollback will do file at a time switches, no reading
>of encrypted data is necessary.
> 4) log a "change encryption of db" log record, but do not update
>system.properties with the change.
> 5) commit transaction.
> 6) update system.properties and sync changes.
> 7) TODO - need someway to handle crash between steps 5 and 6.
> 6) checkpoint all data, at this point guaranteed that there is no outstanding
>transaction, so after checkpoint is done there is no need for the log.
> ISSUES:
> o there probably should be something that catches a request to encrypt to
>   whatever db was already encrypted with.




Re: [jira] Commented: (DERBY-1466) Network Server should flush the PrintWriter after console output

2006-07-13 Thread Bryan Pendleton

Kathey Marsden commented on DERBY-1466:
---

 If nothing goes wrong this is just the startup and the shutdown message. 


I agree with your overall point, and agree that the console should flush.

I just wanted to point out that I think the above is true only for 10.2 and
later releases; in 10.1 the console has a message ("Connection number 472")
on each new connection that is accepted.

I still don't think it's a performance problem, just wanted to point this
boring little detail out :)

thanks,

bryan



Re: Choice of JMX implementations

2006-07-13 Thread Andreas Korneliussen

Sanket Sharma wrote:
...

> My recommendation is to use either XMOJO or MX4J. Both of them are
> open source and support JDK 1.3 and above, which is what Derby is
> supported on.
>
> Comments and opinion will be appriciated.
>
>> Is it necessary to choose a specific JMX implementation? Aren't these
>> just implementations of the same JCP spec, so the interfaces/classes
>> should be compatible?
>
> They are implementations of the same JCP spec and it is not really that big
> of an issue. The issue arises only when someone is using JDK < 1.5,
> which does not carry an implementation by default. Since most of
> Derby's code is currently being built against JDK 1.3 and 1.4 (which
> do not carry such an implementation), it gave me a chance to look at
> alternatives and I just thought it would be good to discuss it.
> Currently, I'm experimenting with the reference implementation of JDK
> 6, which forces me to build my code against three different JDKs. It
> will be the same for JDK 1.5 as well. For building with JDK 1.4 and 1.3, I
> will need an implementation. That's when the issue surfaces.
> Asking the user to download the reference implementation from Sun.com
> can be considered as an alternative.

I understand.  I think it should be possible to build Derby with JMX 
features with JDK 1.4/1.3. To do it, we can ask the *developer*
to download a JMX library in the BUILDING.TXT instructions 
(http://svn.apache.org/repos/asf/db/derby/code/trunk/BUILDING.txt), or 
make a JMX library available from the svn repository.


Then, at a later point, we could decide if we want to distribute a JMX
library with Derby itself.

In terms of licensing, it seems like it is possible to redistribute mx4j
to end-users. This also probably means that we could put the mx4j jar
files into the svn repository, so that developers do not need to download them
manually.

Regards

Andreas
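
For what it's worth, a minimal sketch of why the choice matters mostly for
packaging rather than for code: the MBean below (a made-up example, not an
existing Derby class) is written purely against javax.management and should
run unchanged on MX4J, the JDK 5 built-in MBean server, or any other
compliant implementation:

    import javax.management.MBeanServer;
    import javax.management.MBeanServerFactory;
    import javax.management.ObjectName;

    public class JmxSketch {
        // Standard-MBean naming convention: FooMBean interface + Foo class.
        public interface EngineStatusMBean {
            int getActiveConnections();
        }

        public static class EngineStatus implements EngineStatusMBean {
            public int getActiveConnections() { return 0; }  // placeholder value
        }

        public static void main(String[] args) throws Exception {
            // Any JMX 1.2 implementation on the classpath provides this factory.
            MBeanServer server = MBeanServerFactory.createMBeanServer();
            ObjectName name = new ObjectName("org.apache.derby:type=EngineStatus");
            server.registerMBean(new EngineStatus(), name);
            System.out.println(server.getAttribute(name, "ActiveConnections"));
        }
    }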


Re: Choice of JMX implementations

2006-07-13 Thread David Van Couvering
Looks like we may have a winner... It would be interesting to see how 
they use it -- do they have it available only for pre-Java 5 VMs, or is 
it used regardless of what VM is being used?


How big is the MX4J jar file?

And what time is it in India, Sanket?

David

Sanket Sharma wrote:

Following apache projects are also known to use MX4J:
Geronimo,
Avalon-Phoenix
Tomcat

Regards,
Sanket

On 7/14/06, Jean T. Anderson <[EMAIL PROTECTED]> wrote:

thanks for checking the hivemind download, Andreas.

I checked the geronimo.1.1 download and it does include the mx4j jars.
Its NOTICE file also has this attribution:

> =========================================================
> ==                     MX4J Notice                     ==
> =========================================================
>
> This product includes software developed by the MX4J project
> (http://sourceforge.net/projects/mx4j).


 -jean



Andreas Korneliussen wrote:
> Jean T. Anderson wrote:
>
>> Andrew McIntyre wrote:
>>
>>> If the goal is to repackage any of these, I'm not sure that will be
>>> possible with any of the following, except for Apache Commons
>>> Modelling, but is that actually an implementation?
>>>
>>> For information on compatibility of other open source licenses with
>>> the ASL, see: http://people.apache.org/~cliffs/3party.html
>>>
>>> On 7/13/06, Sanket Sharma <[EMAIL PROTECTED]> wrote:
>>
>>
>> ...
>>
 4. MX4J
>>>
>>>
>>> This has a modified BSD license with an advertising clause, and a
>>> restriction to downstream projects on naming. Not that we'd ever name
>>> our project MX4J, but it's an extra restriction that isn't in the 
ASL,

>>> so we might need to get a determination from legal-discuss on whether
>>> this is acceptable to redistribute.
>>
>>
>>
>> here's one precedent for using MX4J at apache (and there might be more):
>>
>> http://jakarta.apache.org/hivemind/hivemind-jmx/setupJMXImplementation.html


>>
>>
>>
> They are using it for development, and testing. However the users need
> to download mx4j themselves, or download another JMX implementation, or
> use java 5.
>
> See http://jakarta.apache.org/hivemind/hivemind-jmx/quickstart.html
>
> I downloaded hivemind-1-1-1, and could not find any redistribution of
> mx4j there.
>
> Andreas
>
>
>>  -jean
>>
>
>




Re: [jira] Commented: (DERBY-1466) Network Server should flush the PrintWriter after console output

2006-07-13 Thread Sunitha Kambhampati

Mike Matrigali wrote:

Is there any way to determine if the existing printWriter is 
autoflush, if so it might be worth checking before wrapping?


No. There is no API available on PrintWriter to check whether the writer
is in autoflush mode.

http://java.sun.com/j2se/1.5.0/docs/api/java/io/PrintWriter.html

Thanks,
Sunitha.


[jira] Commented: (DERBY-1459) Early load of Derby driver with JDBC 4.0 autoloading can lead to derby properties not being processed or derby boot time actions consuming resources when a connection is

2006-07-13 Thread Rick Hillegas (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1459?page=comments#action_12420955 ] 

Rick Hillegas commented on DERBY-1459:
--

Committed at revision 421717. Derbyall ran cleanly against jar files and 
against the classtree on jdks 1.6, 1.5, and 1.4. The following issues occurred 
running against 1.3:

1) Initially, the encryption tests failed. The encryption tests passed after I 
installed the Sun JCE jars.

2) OutBufferedStream hung on 1.3. It also hung even on a clean subversion 
workspace without this patch.


> Early load of Derby driver with JDBC 4.0 autoloading can lead to derby 
> properties not being processed or derby boot time actions consuming resources 
> when a connection is made with another driver
> --
>
>  Key: DERBY-1459
>  URL: http://issues.apache.org/jira/browse/DERBY-1459
>  Project: Derby
> Type: Bug

>   Components: JDBC, Services
> Versions: 10.2.0.0
>  Environment: JDK 1.6 
> Reporter: Kathey Marsden
> Assignee: Rick Hillegas
> Priority: Critical
>  Fix For: 10.2.0.0
>  Attachments: autoloading_scenarios.html, bug1459_v01.diff, bug1459_v02.diff, 
> bug1459_v03.diff, bug1459_v04.diff
>
> The addition of support for autoloading of Derby drivers, DERBY-930, caused 
> two potentially serious regressions for applications.
> 1) Early load of the driver can mean that derby system properties, such as 
> derby.system.home, may not be processed by the driver because they are set 
> after the driver is loaded.
> 2) Early load of the driver can mean that boot time operations, such as starting 
> the network server with derby.drda.startNetworkServer, can happen even when Derby 
> is never used, if a connection is made to another database such as Oracle.
> The attached file autoloading_scenarios.html  shows scenarios that show these 
> regressions plus another case that will regress if boot time operations are 
> moved to the first Derby embedded connection.   I don't know what solution is 
> available that would handle all three cases.
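
For illustration only (this sketch is not part of the report), the ordering
problem in regression 1) can be seen in a small program that sets the property
before loading the driver explicitly; the path and database name below are
made up. Under JDBC 4.0 autoloading the driver may already have been loaded
before main() runs, so a System.setProperty call like this can come too late.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ExplicitLoadExample {
        public static void main(String[] args) throws Exception {
            // Set the Derby property first, then load the driver explicitly.
            System.setProperty("derby.system.home", "/tmp/derbyhome");
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn =
                DriverManager.getConnection("jdbc:derby:testdb;create=true");
            conn.close();
        }
    }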

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Choice of JMX implementations

2006-07-13 Thread Sanket Sharma

Following apache projects are also known to use MX4J:
Geronimo,
Avalon-Phoenix,
Tomcat

Regards,
Sanket

On 7/14/06, Jean T. Anderson <[EMAIL PROTECTED]> wrote:

thanks for checking the hivemind download, Andreas.

I checked the geronimo.1.1 download and it does include the mx4j jars.
Its NOTICE file also has this attribution:

> =
> ==  MX4J Notice==
> =
>
> This product includes software developed by the MX4J project
> (http://sourceforge.net/projects/mx4j).


 -jean



Andreas Korneliussen wrote:
> Jean T. Anderson wrote:
>
>> Andrew McIntyre wrote:
>>
>>> If the goal is to repackage any of these, I'm not sure that will be
>>> possible with any of the following, except for Apache Commons
>>> Modelling, but is that actually an implementation?
>>>
>>> For information on compatibility of other open source licenses with
>>> the ASL, see: http://people.apache.org/~cliffs/3party.html
>>>
>>> On 7/13/06, Sanket Sharma <[EMAIL PROTECTED]> wrote:
>>
>>
>> ...
>>
 4. MX4J
>>>
>>>
>>> This has a modified BSD license with an advertising clause, and a
>>> restriction to downstream projects on naming. Not that we'd ever name
>>> our project MX4J, but it's an extra restriction that isn't in the ASL,
>>> so we might need to get a determination from legal-discuss on whether
>>> this is acceptable to redistribute.
>>
>>
>>
>> here's one precedent for using MX4J at apache (and there might be more):
>>
>> http://jakarta.apache.org/hivemind/hivemind-jmx/setupJMXImplementation.html
>>
>>
>>
> They are using it for development, and testing. However the users need
> to  download mx4j themselves, or download another JMX implementation, or
> use java 5.
>
> See http://jakarta.apache.org/hivemind/hivemind-jmx/quickstart.html
>
> I downloaded hivemind-1-1-1, and could not find any redistribution of
> mx4j there.
>
> Andreas
>
>
>>  -jean
>>
>
>




Re: [jira] Commented: (DERBY-1466) Network Server should flush the PrintWriter after console output

2006-07-13 Thread David Van Couvering

+1

Kathey Marsden (JIRA) wrote:
[ http://issues.apache.org/jira/browse/DERBY-1466?page=comments#action_12420953 ] 


Kathey Marsden commented on DERBY-1466:
---

Hi Sunitha,

I do not have an opinion on the implementation, but I think it is important for the 
console PrintWriter to flush whenever there is a message. I don't think it 
is a performance issue.  If nothing goes wrong, this is just the startup and the 
shutdown message.  If something does go wrong, we *really* want to see it, 
because if there is a crash of some sort it will never be seen, especially 
with DERBY-1456 outstanding.

Kathey



Network Server should flush the PrintWriter after console output


 Key: DERBY-1466
 URL: http://issues.apache.org/jira/browse/DERBY-1466
 Project: Derby
Type: Improvement



  Components: Network Server
Versions: 10.1.2.1
Reporter: Kathey Marsden



If Network Server is started with a PrintWriter specified for console output it 
will not automatically flush output such as  starting the server.  This can be 
confusing as the console output shows no activity.
Users currently need to specify the PrintWriter to autoflush  e.g.
 starterWriter = new PrintWriter(new FileOutputStream(new File(SERVER_START_LOG)),true); 
derbyServer = new NetworkServerControl();
 derbyServer.start(starterWriter); 
For repro see:

http://www.nabble.com/Questions-about-Network-Server-API-Behavior-p5055814.html
And change the following line in the program to not autoflush as follows:
starterWriter = new PrintWriter(new FileOutputStream(new File(SERVER_START_LOG)),false); 




[jira] Commented: (DERBY-1466) Network Server should flush the PrintWriter after console output

2006-07-13 Thread Kathey Marsden (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1466?page=comments#action_12420953 ] 

Kathey Marsden commented on DERBY-1466:
---

Hi Sunitha,

I do not have an opinion on the implementation, but I think it is important for the 
console PrintWriter to flush whenever there is a message. I don't think it 
is a performance issue.  If nothing goes wrong, this is just the startup and the 
shutdown message.  If something does go wrong, we *really* want to see it, 
because if there is a crash of some sort it will never be seen, especially 
with DERBY-1456 outstanding.

Kathey


> Network Server should flush the PrintWriter after console output
> 
>
>  Key: DERBY-1466
>  URL: http://issues.apache.org/jira/browse/DERBY-1466
>  Project: Derby
> Type: Improvement

>   Components: Network Server
> Versions: 10.1.2.1
> Reporter: Kathey Marsden

>
> If Network Server is started with a PrintWriter specified for console output 
> it will not automatically flush output such as  starting the server.  This 
> can be confusing as the console output shows no activity.
> Users currently need to specify the PrintWriter to autoflush  e.g.
>  starterWriter = new PrintWriter(new FileOutputStream(new 
> File(SERVER_START_LOG)),true); 
> derbyServer = new NetworkServerControl();
>  derbyServer.start(starterWriter); 
> For repro see:
> http://www.nabble.com/Questions-about-Network-Server-API-Behavior-p5055814.html
> And change the following line in the program to not autoflush as follows:
> starterWriter = new PrintWriter(new FileOutputStream(new 
> File(SERVER_START_LOG)),false); 

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: [jira] Commented: (DERBY-1466) Network Server should flush the PrintWriter after console output

2006-07-13 Thread Mike Matrigali



Sunitha Kambhampati (JIRA) wrote:
[ http://issues.apache.org/jira/browse/DERBY-1466?page=comments#action_12420949 ] 


Sunitha Kambhampati commented on DERBY-1466:


I took a quick look at this and want to  start a discussion on how this could be solved. 


Per the Java API for PrintWriter:
-- There is no method to turn autoflush on for an existing PrintWriter object.
-- when autoflush is on, flush will happen on the println statement only.

some solution options:
1) wrap the user's writer object with a PrintWriter and set autoflush to 
true
-- change in code only required in two places.
-- maybe ugly to wrap the object.

2)  explicitly call flush in all places after  we write to this writer object.
 
In Eclipse, searched for places where the logWriter in NetworkServerControl is referenced, here is the list

NetworkServerControlImpl - org.apache.derby.impl.drda - java/drda - ks_trunk
consoleExceptionPrintTrace(Throwable) (3 potential matches)
consoleMessage(String) (3 potential matches)
executeWork(String[]) (potential match)
setLogWriter(PrintWriter) (potential match)
shutdown() (2 potential matches)

So that is not a lot of places, so option 2 should be simple too, unless we 
forget to call flush in one of the places.
--

There is one case where the writer is passed to exception.printStackTrace. If 
this gets flushed correctly in autoflush mode, option #1 seems ok to me, with 
fewer changes than #2.

I was a bit concerned whether setting autoflush would have any performance impact.  
But maybe it is ok since console output is not expected to be verbose anyway.

I am ok with either solution; at this point I would lean toward the 1st, 
assuming that console output doesn't happen very often.  Would there 
ever be a case where performance mattered and we would only want some 
of these calls autoflushed and not others?

Is there any way to determine if the existing printWriter is autoflush, 
if so it might be worth checking before wrapping?
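
A minimal sketch of option 1, assuming the caller hands in a PrintWriter; the
helper name below is made up for illustration and is not the actual Derby
code. It also bears on the question above: PrintWriter exposes no getter for
the autoflush flag, so the wrapping has to be unconditional.

    import java.io.PrintWriter;

    final class ConsoleWriterUtil {
        private ConsoleWriterUtil() {}

        static PrintWriter makeAutoFlushing(PrintWriter userWriter) {
            // PrintWriter(Writer, boolean) turns on autoflush for println calls.
            // There is no API to ask an existing PrintWriter whether it already
            // autoflushes, so we always wrap.
            return new PrintWriter(userWriter, true);
        }
    }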




[jira] Updated: (DERBY-1453) jdbcapi/blobclob4BLOB.java fails with 10.1 client and 10.2 server

2006-07-13 Thread Rajesh Kartha (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1453?page=all ]

Rajesh Kartha updated DERBY-1453:
-

   type: Bug  (was: Test)
Fix Version: 10.1.3.2
 (was: 10.2.0.0)
   Priority: Minor  (was: Critical)

changing to Bug in 10.1 branch with lower priority

> jdbcapi/blobclob4BLOB.java fails with 10.1 client and 10.2 server
> -
>
>  Key: DERBY-1453
>  URL: http://issues.apache.org/jira/browse/DERBY-1453
>  Project: Derby
> Type: Bug

>   Components: Network Server, Network Client
> Versions: 10.2.0.0, 10.1.3.0
>  Environment: derbyclient.jar and derbyTesting.jar from 10.1
> all other jars from 10.2
> Reporter: Deepa Remesh
> Priority: Minor
>  Fix For: 10.1.3.2

>
> Diff is:
> *** Start: blobclob4BLOB jdk1.5.0_02 DerbyNetClient derbynetmats:jdbcapi 
> 2006-06-23 02:09:39 ***
> 510a511,513
> > FAIL -- unexpected exception 
> > SQLSTATE(40XL1): A lock could not be obtained within the time requested
> > START: clobTest93
> 512,513d514
> < START: clobTest93
> < clobTest92 finished
> 766 del
> < EXPECTED SQLSTATE(XCL30): An IOException was thrown when reading a 
> 'java.sql.String' from an InputStream. SQLSTATE: XJ001: Java exception: 
> 'ERROR 40XD0: Container has been closed: java.io.IOException'.
> 766a767
> > EXPECTED SQLSTATE(XJ073): The data in this BLOB or CLOB is no longer 
> > available.  The BLOB or CLOBs transaction may be committed, or its 
> > connection is closed.
> 769 del
> < EXPECTED SQLSTATE(XCL30): An IOException was thrown when reading a 'BLOB' 
> from an InputStream. SQLSTATE: XJ001: Java exception: 'ERROR 40XD0: Container 
> has been closed: java.io.IOException'.
> 769a770
> > EXPECTED SQLSTATE(XJ073): The data in this BLOB or CLOB is no longer 
> > available.  The BLOB or CLOBs transaction may be committed, or its 
> > connection is closed.
> Test Failed.
> *** End:   blobclob4BLOB jdk1.5.0_02 DerbyNetClient derbynetmats:jdbcapi 
> 2006-06-23 02:10:20 ***

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: [jira] Commented: (DERBY-1462) Test harness incorrectly catches db.lck WARNINGS generated during STORE tests and lists the tests as failed in *_fail.txt.

2006-07-13 Thread Mike Matrigali




Myrna van Lunteren (JIRA) wrote:
[ http://issues.apache.org/jira/browse/DERBY-1462?page=comments#action_12420945 ] 


Myrna van Lunteren commented on DERBY-1462:
---

I do not believe this is a test harness issue.

If I run these steps with another jvm, these warnings do not appear.

I agree, I think this is a problem with these particular tests on pre-1.4 
jvms, on non-windows platforms - their output is different - it would 
not be right to somehow get the harness to suppress every connect warning 
printed.  Fixing with test-specific seds is fine.


I checked all the tests below except for the stress ones; they all are 
set up to run on an existing db and are coded to crash with no cleanup so 
that the subsequent test will run recovery.  A side effect of that crash on 
pre-1.4 jvms on non-windows platforms is that it leaves a lock file 
around, which derby can't tell is an active lock file, and thus the 
reconnect correctly prints a warning in those cases.

The stress ones are a little puzzling; I didn't think we reconnected to 
the same db - I have not really looked at how we run those tests in 
the network server framework.  Is it likely we are trying to run those 
tests on an existing db?


I experimented, and I can make the warning appear with this ibm 1.3.1 SR10 with 
only ij, but not with another jvm by doing the following:
- start ij:
   java org.apache.derby.tools.ij
   ij>connect 'jdbc:derby:tstdb;create=true';
   ij> create table t1 (c1 int);
   ij> disconnect;
   ij> exit;
- ls tstdb and observe there is no db.lck file
-  start ij again, select from t, and ctrl-c out:
   java org.apache.derby.tools.ij
   ij> connect 'jdbc:derby:tstdb';
   ij> select * from t1;
   CTRL-C
- ls tstdb and observe there is a db.lck file now
- start ij again, and connect again, and see that only with ibm 1.3.1 do you 
see the warning.

I have no jdk131 installed on the machine I was doing this on. Maybe someone 
else can verify this behavior for that.

I worked on making _sed.properties files for all the affected tests (for 10.2, we have one extra test failing - st_derby715.java), but this does not work for the stress tests, and possibly this is not the right thing to do in the first place.




Test harness incorrectly catches db.lck WARNINGS generated during STORE tests 
and lists the tests as failed in *_fail.txt.
--

Key: DERBY-1462
URL: http://issues.apache.org/jira/browse/DERBY-1462
Project: Derby
   Type: Test




 Components: Test
   Versions: 10.1.2.5, 10.1.2.4, 10.1.2.3, 10.1.2.2, 10.1.2.1, 10.1.3.0, 
10.1.3.1
Environment: IBM 1.3.1 JRE for LINUX (and possibly other JRE 1.3.1 environments)
   Reporter: Stan Bradbury
   Assignee: Myrna van Lunteren
   Priority: Minor




The following store tests from derbyall do not shutdown cleanly so leave the 
db.lck file on disk.  This is OK! It is done by design to test recovery.  THE 
PROBLEM, when run on Linux using IBM JRE 1.3.1 sp 10 the test harness 'sees' 
the warnings and lists the tests as having failed.  The harness should ignore 
these warnings as the tests proceed and complete cleanly.
Tests INCORRECTLY reported as failed:
derbyall/derbynetclientmats/derbynetmats.fail:stress/stress.multi
derbyall/derbynetmats/derbynetmats.fail:stress/stress.multi
derbyall/storeall/storeall.fail:storetests/st_1.sql
derbyall/storeall/storeall.fail:unit/recoveryTest.unit
derbyall/storeall/storeall.fail:store/LogChecksumRecovery.java
derbyall/storeall/storeall.fail:store/LogChecksumRecovery1.java
derbyall/storeall/storeall.fail:store/MaxLogNumberRecovery.java
derbyall/storeall/storeall.fail:store/oc_rec1.java
derbyall/storeall/storeall.fail:store/oc_rec2.java
derbyall/storeall/storeall.fail:store/oc_rec3.java
derbyall/storeall/storeall.fail:store/oc_rec4.java
derbyall/storeall/storeall.fail:store/dropcrash.java
derbyall/storeall/storeall.fail:store/dropcrash2.java
Example Error message:
WARNING: Derby (instance FILTERED-UUID) is attempting to boot the 
database csf:/local1/131TST/Store1/storeall/storerecovery/storerecovery/wombat 
even though Derby (instance FILTERED-UUID) may still be active.  Only 
one instance of Derby should boot a database at a time. Severe and 
non-recoverable corruption can result and may have already occurred.







[jira] Commented: (DERBY-1466) Network Server should flush the PrintWriter after console output

2006-07-13 Thread Sunitha Kambhampati (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1466?page=comments#action_12420949 ] 

Sunitha Kambhampati commented on DERBY-1466:


I took a quick look at this and want to  start a discussion on how this could 
be solved. 

Per the Java API for PrintWriter:
-- There is no method to turn autoflush on for an existing PrintWriter object.
-- when autoflush is on, flush will happen on the println statement only.

some solution options:
1) wrap the user's writer object with a PrintWriter and set autoflush to 
true
-- change in code only required in two places.
-- maybe ugly to wrap the object.

2)  explicitly call flush in all places after  we write to this writer object.
 
In Eclipse, searched for places where the logWriter in NetworkServerControl is 
referenced, here is the list
NetworkServerControlImpl - org.apache.derby.impl.drda - java/drda - ks_trunk
consoleExceptionPrintTrace(Throwable) (3 potential matches)
consoleMessage(String) (3 potential matches)
executeWork(String[]) (potential match)
setLogWriter(PrintWriter) (potential match)
shutdown() (2 potential matches)

So that is not a lot of places, so option 2 should be simple too, unless we 
forget to call flush in one of the places.
--

There is one case where the writer is passed to exception.printStackTrace. If 
this gets flushed correctly in autoflush mode, option #1 seems ok to me, with 
fewer changes than #2.

I was a bit concerned whether setting autoflush would have any performance impact.  
But maybe it is ok since console output is not expected to be verbose anyway.

Comments/thoughts ?

Thanks.

> Network Server should flush the PrintWriter after console output
> 
>
>  Key: DERBY-1466
>  URL: http://issues.apache.org/jira/browse/DERBY-1466
>  Project: Derby
> Type: Improvement

>   Components: Network Server
> Versions: 10.1.2.1
> Reporter: Kathey Marsden

>
> If Network Server is started with a PrintWriter specified for console output 
> it will not automatically flush output such as  starting the server.  This 
> can be confusing as the console output shows no activity.
> Users currently need to specify the PrintWriter to autoflush  e.g.
>  starterWriter = new PrintWriter(new FileOutputStream(new 
> File(SERVER_START_LOG)),true); 
> derbyServer = new NetworkServerControl();
>  derbyServer.start(starterWriter); 
> For repro see:
> http://www.nabble.com/Questions-about-Network-Server-API-Behavior-p5055814.html
> And change the following line in the program to not autoflush as follows:
> starterWriter = new PrintWriter(new FileOutputStream(new 
> File(SERVER_START_LOG)),false); 

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Google SOC:MySQL to Derby Migration tool design question

2006-07-13 Thread David Van Couvering
I'm all for helping out other projects, but IMHO Derby could really use 
this migration tool.  We could let the DDL team know these issues exist, 
participate in discussions with them, while at the same time scratching 
our own itch and getting a migration tool done before Ramin disappears...


David

Satheesh Bandaram wrote:

Daniel John Debrunner wrote:


Isn't the point of Google summer of code to introduce students to open
source development, and this switch to ddlutils and additional community
involvement is typical of open source? Scratch your own itch but also
benefit the community as well.

Dan
 


I am all for contributing code to ddlUtils. The only question is whether we should
develop these modules for Derby *first* in a generic way and then
contribute a modified version to ddlUtils, or directly modify ddlUtils and
hope they will include it quickly enough for Ramin to use in his tool.
From what I understand he has another month to complete his migration
tool. He may or may not have any time to work on this after that point.
While ddlUtils may accept code from Ramin, they may want to make it work
on their supported platforms before they include it in their
distribution. They also have multiple ways to invoke ddlUtils and need
to store the database schema in Turbine XML form. So, there may be much more
additional work that is needed before the contribution from Ramin could be
seen as complete.

Satheesh




[jira] Commented: (DERBY-1462) Test harness incorrectly catches db.lck WARNINGS generated during STORE tests and lists the tests as failed in *_fail.txt.

2006-07-13 Thread Myrna van Lunteren (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1462?page=comments#action_12420945 ] 

Myrna van Lunteren commented on DERBY-1462:
---

I do not believe this is a test harness issue.

If I run these steps with another jvm, these warnings do not appear.

I experimented, and I can make the warning appear with this ibm 1.3.1 SR10 with 
only ij, but not with another jvm by doing the following:
- start ij:
   java org.apache.derby.tools.ij
   ij>connect 'jdbc:derby:tstdb;create=true';
   ij> create table t1 (c1 int);
   ij> disconnect;
   ij> exit;
- ls tstdb and observe there is no db.lck file
-  start ij again, select from t, and ctrl-c out:
   java org.apache.derby.tools.ij
   ij> connect 'jdbc:derby:tstdb';
   ij> select * from t1;
   CTRL-C
- ls tstdb and observe there is a db.lck file now
- start ij again, and connect again, and see that only with ibm 1.3.1 do you 
see the warning.

I have no jdk131 installed on the machine I was doing this on. Maybe someone 
else can verify this behavior for that.

I worked on making _sed.properties files for all the affected tests (for 10.2, 
we have one extra test failing - st_derby715.java), but this does not work for 
the stress tests, and possibly this is not the right thing to do in the first 
place.

> Test harness incorrectly catches db.lck WARNINGS generated during STORE tests 
> and lists the tests as failed in *_fail.txt.
> --
>
>  Key: DERBY-1462
>  URL: http://issues.apache.org/jira/browse/DERBY-1462
>  Project: Derby
> Type: Test

>   Components: Test
> Versions: 10.1.2.5, 10.1.2.4, 10.1.2.3, 10.1.2.2, 10.1.2.1, 10.1.3.0, 
> 10.1.3.1
>  Environment: IBM 1.3.1 JRE for LINUX (and possibly other JRE 1.3.1 
> environments)
> Reporter: Stan Bradbury
> Assignee: Myrna van Lunteren
> Priority: Minor

>
> The following store tests from derbyall do not shutdown cleanly so leave the 
> db.lck file on disk.  This is OK! It is done by design to test recovery.  THE 
> PROBLEM, when run on Linux using IBM JRE 1.3.1 sp 10 the test harness 'sees' 
> the warnings and lists the tests as having failed.  The harness should ignore 
> these warnings as the tests proceed and complete cleanly.
> Tests INCORRECTLY reported as failed:
> derbyall/derbynetclientmats/derbynetmats.fail:stress/stress.multi
> derbyall/derbynetmats/derbynetmats.fail:stress/stress.multi
> derbyall/storeall/storeall.fail:storetests/st_1.sql
> derbyall/storeall/storeall.fail:unit/recoveryTest.unit
> derbyall/storeall/storeall.fail:store/LogChecksumRecovery.java
> derbyall/storeall/storeall.fail:store/LogChecksumRecovery1.java
> derbyall/storeall/storeall.fail:store/MaxLogNumberRecovery.java
> derbyall/storeall/storeall.fail:store/oc_rec1.java
> derbyall/storeall/storeall.fail:store/oc_rec2.java
> derbyall/storeall/storeall.fail:store/oc_rec3.java
> derbyall/storeall/storeall.fail:store/oc_rec4.java
> derbyall/storeall/storeall.fail:store/dropcrash.java
> derbyall/storeall/storeall.fail:store/dropcrash2.java
> Example Error message:
> WARNING: Derby (instance FILTERED-UUID) is attempting to boot the 
> database 
> csf:/local1/131TST/Store1/storeall/storerecovery/storerecovery/wombat even 
> though Derby (instance FILTERED-UUID) may still be active.  Only one 
> instance of Derby should boot a database at a time. Severe and 
> non-recoverable corruption can result and may have already occurred.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-1438) Text written by SQLException.toString differs between client and embedded driver

2006-07-13 Thread David Van Couvering (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1438?page=all ]

David Van Couvering updated DERBY-1438:
---

Attachment: DERBY-1438.diff

We can't override the toString() method on the network client, as what's being 
returned to users is a vanilla java.sql.SQLException (or one of its subclasses 
in JDBC4).  So it's going to do the default behavior and there's nothing we can 
do about it.

In the embedded driver, you'll also get the default toString() output, because 
in SQLExceptionFactory40 we're converting to vanilla java.sql.SQLException 
classes and their subtypes.

Running pre-Java SE 6, the user gets an 
org.apache.derby.impl.jdbc.EmbedSQLException class, and that's kind of ugly to 
print out, and still wouldn't be consistent with the client.

What I have done (see attached patch) is change the toString() override method 
in EmbedSQLException to print out "java.sql.SQLException" rather than just "SQL 
Exception".  This is correct, in that EmbedSQLException is a subclass of 
java.sql.SQLException, and makes the client and the embedded drivers consistent.

Unless anyone objects, I'll commit this (and resulting master output changes) 
after running derbyall.

David
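
As a rough sketch of the effect of that change (this is not the actual
DERBY-1438 patch, and DemoSQLException is a made-up class, not the real
EmbedSQLException), overriding toString() this way produces the same prefix
the client driver already prints for a plain java.sql.SQLException:

    import java.sql.SQLException;

    class DemoSQLException extends SQLException {
        DemoSQLException(String message, String sqlState) {
            super(message, sqlState);
        }

        public String toString() {
            // Same prefix a plain java.sql.SQLException prints by default.
            return "java.sql.SQLException: " + getMessage();
        }

        public static void main(String[] args) {
            // SQLState 42X05 is used here only as an illustrative value.
            System.out.println(new DemoSQLException(
                    "Table/View 'DERBYDB' does not exist.", "42X05"));
        }
    }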

> Text written by SQLException.toString differs between client and embedded 
> driver
> 
>
>  Key: DERBY-1438
>  URL: http://issues.apache.org/jira/browse/DERBY-1438
>  Project: Derby
> Type: Improvement

>   Components: JDBC, Newcomer
> Versions: 10.2.0.0
>  Environment: Sun JDK 1.5
> Reporter: Olav Sandstaa
> Assignee: David Van Couvering
> Priority: Trivial
>  Attachments: DERBY-1438.diff
>
> The first part of the string written by SQLException.toString() differs
> between the Derby client driver and the embedded driver. The embedded
> driver writes:
>SQL Exception: Table/View 'DERBYDB' does not exist.
> while the client driver writes:
>java.sql.SQLException: Table/View 'DERBYDB' does not exist.
> It would be good if we changed this so the same text is written by
> both drivers. This reduces the difference seen when changing between
> client and embedded Derby and it makes it possible to reduce the amount
> of sed-ing or the number of master file variants for some tests.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: behavior of Statement.getGeneratedKeys()

2006-07-13 Thread Lance J. Andersen



Kathey Marsden wrote:

Lance J. Andersen wrote:

This issue pointed out a problem in the JDBC EoD RI which made the 
assumption that the value returned matched the column type in the 
base table.


A Derby user encountered this issue as well, trying to use 10.2 and 
JDBC EoD  http://binkley.blogspot.com/2006/04/nifty-jdbc-40.html.
Well, it appears that the behavior in Derby was copied from the IBM DB2 
driver, I am afraid, which did not come up on my EG call discussion 
yesterday as a difference in behavior; but that happens sometimes 
without specific testing.  Nothing, sadly, is ever easy, is it...







So here is a  benefit.  The change  may ease migration to Derby for 
apps that make this assumption.

It would help with some databases such as Oracle for sure.

  I hit a similar thing recently that Derby
Clob.getSubString does not support a zero offset and DDLUtils  
expected it to.  (That one is still on my list to file.  I don't know 
yet if that is a Derby bug or not. )   Another similar case is  
DERBY-1501 where it would be nice if Derby were more forgiving of 
non-portable apps.  Of course in both of those other cases we would 
just be adding to existing support, not changing existing behavior, and 
there is a risk to apps that develop on Derby and expect to be able 
to move without changes to another db.


Anyway I think if you would like to make this change it would be 
reasonable to file a Jira issue and pursue due diligence with the user 
community.
Understand, the original intent of this thread was also to try and 
understand why this behavior was there, and now I know.
I'll get  in touch with some of the users I work with and see if it 
might be an issue, but if limited to what has been outlined so far I 
tend to think it won't conflict with most typical usage cases.   I 
think that basically folks are going to be calling getLong() or 
getInt() on the  ResultSet returned and not getObject.  If they are 
looking at the metadata they are expecting it to be as you describe.  
But I will wait until we hear more. My biggest concerns with the 
change are:


1) The precedent it sets: that we can change compliant, documented 
behaviour like this.  But reading the ForwardCompatibility goal I 
feel reassured that maybe this is ok.


"The goal is to allow any application written against the public 
interfaces of an older version of Derby to run, without any changes, 
against a newer version of Derby."


Maybe, though, the ForwardCompatibility guidelines should have 
information on due diligence when making an incompatible change.


2) The potential high risk and impact of the code change for 
client/server  as outlined in my earlier mail.


Kathey
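
For reference, a sketch of the usage pattern described above: most
applications read the generated key with getLong()/getInt() on the returned
ResultSet rather than calling getObject() or inspecting the metadata. The
table and column names are invented for this example.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class GeneratedKeysExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn =
                DriverManager.getConnection("jdbc:derby:demodb;create=true");
            Statement s = conn.createStatement();
            s.executeUpdate("CREATE TABLE customers(" +
                    "id INT GENERATED ALWAYS AS IDENTITY, name VARCHAR(30))");
            s.executeUpdate("INSERT INTO customers(name) VALUES ('k')",
                    Statement.RETURN_GENERATED_KEYS);
            ResultSet keys = s.getGeneratedKeys();
            if (keys.next()) {
                // Works whether the driver reports the key as the declared
                // column type or as a wider numeric type such as DECIMAL.
                System.out.println("generated key: " + keys.getLong(1));
            }
            keys.close();
            s.close();
            conn.close();
        }
    }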



Re: [jira] Commented: (DERBY-1501) PreparedStatement#setNull(int parameterIndex, int sqlType) throws SQL Exception if given sqlType is LONGVARBINARY in embedded mode

2006-07-13 Thread Lance J. Andersen




I am not sure why the wording was added to the overloaded setNull 
method which was added in JDBC 3.

I probably would expect it to not ignore the specified SQL type, in
order to make sure the action requested is valid.  I would have to
check the SQL standard and discuss this with the EG further, but it is
something else to try and clean up; I have added it to my ever-growing
to-do list.


Daniel John Debrunner (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1501?page=comments#action_12420620 ] 

Daniel John Debrunner commented on DERBY-1501:
--

Knut Anders indicates

setNull(int,int,String)
 - If a JDBC driver does not need the type code or type name
  information, it may ignore it. 
setNull(int,int)
You must specify the parameter's SQL type.

Interesting, here the issue is about setNull(int,int) which doesn't have that comment about ignoring typeCode.
Could the omission be intentional and the wording in setNull(int,int,String) meant to be clearer, so that
one of typeCode or typeName could be ignored, but not both?

With setNull(1, Types.LONGVARBINARY) it is saying send a NULL of LONGVARBINARY to the engine,
the engine should then treat it the same as a cast of a LONGVARCHAR FOR BIT DATA to the target type.




  
  
PreparedStatement#setNull(int parameterIndex, int sqlType) throws SQL Exception if given sqlType is LONGVARBINARY in embedded mode
--

 Key: DERBY-1501
 URL: http://issues.apache.org/jira/browse/DERBY-1501
 Project: Derby
Type: Bug

  
  
  
  
Versions: 10.1.1.0
 Environment: WindowsXP
Reporter: Markus Fuchs
 Attachments: ByteArrayTest.java

When inserting a row into the following table
BYTEARRAY_TEST( ID int, BYTEARRAY_VAL blob)
PreparedStatement#setNull(int parameterIndex, int sqlType) throws an SQLException if the given sqlType is LONGVARBINARY. You must give sqlType BLOB to make the insert work. The same test works using sqlType LONGVARBINARY in network mode. The following combinations don't work:
Column type   sqlType not working   mandatory sqlType
BLOB          LONGVARBINARY         BLOB
CLOB          LONGVARCHAR           CLOB
The issue here is that, first, Derby behaves differently in network and embedded mode, and second, it should accept LONGVARBINARY/LONGVARCHAR for BLOB/CLOB columns.
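
A small sketch of the calls the report describes, assuming the BYTEARRAY_TEST
table above already exists; whether the first setNull is accepted is exactly
what this issue is about (it reportedly fails in embedded mode but works
through the network client):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Types;

    public class SetNullExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn = DriverManager.getConnection("jdbc:derby:demodb");
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO BYTEARRAY_TEST(ID, BYTEARRAY_VAL) VALUES (?, ?)");
            ps.setInt(1, 1);
            try {
                ps.setNull(2, Types.LONGVARBINARY); // rejected by embedded Derby per this report
                ps.executeUpdate();
            } catch (SQLException e) {
                System.out.println("LONGVARBINARY rejected: " + e.getSQLState());
                ps.setNull(2, Types.BLOB);          // the column's declared type works
                ps.executeUpdate();
            }
            ps.close();
            conn.close();
        }
    }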

  
  
  





Re: behavior of Statement.getGeneratedKeys()

2006-07-13 Thread Kathey Marsden

Lance J. Andersen wrote:

This issue pointed out a problem in the JDBC EoD RI which made the 
assumption that the value returned matched the column type in the base 
table.


A Derby user encountered this issue as well, trying to use 10.2 and 
JDBC EoD  http://binkley.blogspot.com/2006/04/nifty-jdbc-40.html.



So here is a  benefit.  The change  may ease migration to Derby for apps 
that make this assumption.   I hit a similar thing recently that Derby
Clob.getSubString does not support a zero offset and DDLUtils  expected 
it to.  (That one is still on my list to file.  I don't know yet if that 
is a Derby bug or not. )   Another similar case is  DERBY-1501 where it 
would be nice if Derby were more forgiving of non-portable apps.  Of 
course in both of those other cases we would just be adding to existing 
support, not changing existing behavior, and there is a risk to apps 
that develop on Derby and expect to be able to move without changes to 
another db.


Anyway I think if you would like to make this change it would be 
reasonable to file a Jira issue and pursue due diligence with the user 
community.  I'll get  in touch with some of the users I work with and 
see if it might be an issue, but if limited to what has been outlined so 
far I tend to think it won't conflict with most typical usage cases.   I 
think that basically folks are going to be calling getLong() or getInt() 
on the  ResultSet returned and not getObject.  If they are looking at 
the metadata they are expecting it to be as you describe.  But I will 
wait until we hear more. 
My biggest concerns with the change are:


1) The precedent it sets: that we can change compliant, documented 
behaviour like this.  But reading the ForwardCompatibility goal I feel 
reassured that maybe this is ok.


"The goal is to allow any application written against the public 
interfaces of an older version of Derby to run, without any changes, 
against a newer version of Derby."


Maybe, though, the ForwardCompatibility guidelines should have information 
on due diligence when making an incompatible change.


2) The potential high risk and impact of the code change for 
client/server  as outlined in my earlier mail.


Kathey



Re: [jira] Updated: (DERBY-1156) allow the encrypting of an existing unencrypted db and allow the re-encrypting of an existing encrypted db

2006-07-13 Thread Mike Matrigali



Suresh Thalamati wrote:




I think returning OK (true) is not the right thing to do unless I really 
check the versions by reading them from the control files, etc.


Current usage of this function is to make sure the database is at the right 
version before doing any writes that will break the soft-upgrade.


If someone in the future implements a read-only feature that requires a 
version check, they can implement this method. Not my itch at the 
moment :-)


That's fine, just leave it. I just looked again and see that 
unimplemented is standard for the readonly implementation.  I was 
confused that you were counting on an exception; now I see it just 
never will be called for readonly - it is only there to fulfill the 
interface.  I was thinking that Readonly extended the default impl, 
and didn't understand why it didn't just inherit the impl.

Before 10.2 goes out, I would like to see a test showing what the user gets
when they try to reencrypt a read-only db, but no need for this 
incremental commit.








Re: Google SOC:MySQL to Derby Migration tool design question

2006-07-13 Thread Satheesh Bandaram
Daniel John Debrunner wrote:

>Isn't the point of Google summer of code to introduce students to open
>source development, and this switch to ddlutils and additional community
>involvement is typical of open source? Scratch your own itch but also
>benefit the community as well.
>
>Dan
>  
>
I am all for contributing code to ddlUtils. The only question is whether we should
develop these modules for Derby *first* in a generic way and then
contribute a modified version to ddlUtils, or directly modify ddlUtils and
hope they will include it quickly enough for Ramin to use in his tool.
From what I understand he has another month to complete his migration
tool. He may or may not have any time to work on this after that point.
While ddlUtils may accept code from Ramin, they may want to make it work
on their supported platforms before they include it in their
distribution. They also have multiple ways to invoke ddlUtils and need
to store the database schema in Turbine XML form. So, there may be much more
additional work that is needed before the contribution from Ramin could be
seen as complete.

Satheesh




Re: Choice of JMX implementations

2006-07-13 Thread Jean T. Anderson
thanks for checking the hivemind download, Andreas.

I checked the geronimo.1.1 download and it does include the mx4j jars.
Its NOTICE file also has this attribution:

> =
> ==  MX4J Notice==
> =
> 
> This product includes software developed by the MX4J project
> (http://sourceforge.net/projects/mx4j).


 -jean



Andreas Korneliussen wrote:
> Jean T. Anderson wrote:
> 
>> Andrew McIntyre wrote:
>>
>>> If the goal is to repackage any of these, I'm not sure that will be
>>> possible with any of the following, except for Apache Commons
>>> Modelling, but is that actually an implementation?
>>>
>>> For information on compatibility of other open source licenses with
>>> the ASL, see: http://people.apache.org/~cliffs/3party.html
>>>
>>> On 7/13/06, Sanket Sharma <[EMAIL PROTECTED]> wrote:
>>
>>
>> ...
>>
 4. MX4J
>>>
>>>
>>> This has a modified BSD license with an advertising clause, and a
>>> restriction to downstream projects on naming. Not that we'd ever name
>>> our project MX4J, but it's an extra restriction that isn't in the ASL,
>>> so we might need to get a determination from legal-discuss on whether
>>> this is acceptable to redistribute.
>>
>>
>>
>> here's one precedent for using MX4J at apache (and there might be more):
>>
>> http://jakarta.apache.org/hivemind/hivemind-jmx/setupJMXImplementation.html
>>
>>
>>
> They are using it for development, and testing. However the users need
> to  download mx4j themselves, or download another JMX implementation, or
> use java 5.
> 
> See http://jakarta.apache.org/hivemind/hivemind-jmx/quickstart.html
> 
> I downloaded hivemind-1-1-1, and could not find any redistribution of
> mx4j there.
> 
> Andreas
> 
> 
>>  -jean
>>
> 
> 



Re: Choice of JMX implementations

2006-07-13 Thread Daniel John Debrunner


Sanket Sharma wrote:

> They are implementations of the same JCP spec and it is not really that big
> of an issue. The issue arises only when someone is using JDK < 1.5,
> which does not carry an implementation by default. Since most of
> Derby's code is currently being built against JDK 1.3 and 1.4 (which
> do not carry such an implementation), it gave me a chance to look at
> alternatives and I thought it would be good to discuss it.
> Currently, I'm experimenting with the reference implementation of JDK
> 6, which forces me to build my code against three different JDKs. It
> will be the same for JDK 1.5 as well. For building with JDK 1.4 and 1.3, I
> will need an implementation. That's when the issue surfaces.
> Asking the user to download the reference implementation from Sun.com
> can be considered as an alternative.

Since JMX is a new feature, there is no requirement for it to work with
Derby on JDK 1.3/1.4. Ie. it's fine to say if people want to use JMX and
Derby they have to run with JDK 1.5 or later.

Dan.






Re: Choice of JMX implementations

2006-07-13 Thread Andreas Korneliussen

Jean T. Anderson wrote:

Andrew McIntyre wrote:


If the goal is to repackage any of these, I'm not sure that will be
possible with any of the following, except for Apache Commons
Modelling, but is that actually an implementation?

For information on compatibility of other open source licenses with
the ASL, see: http://people.apache.org/~cliffs/3party.html

On 7/13/06, Sanket Sharma <[EMAIL PROTECTED]> wrote:


...


4. MX4J


This has a modified BSD license with an advertising clause, and a
restriction to downstream projects on naming. Not that we'd ever name
our project MX4J, but it's an extra restriction that isn't in the ASL,
so we might need to get a determination from legal-discuss on whether
this is acceptable to redistribute.



here's one precedent for using MX4J at apache (and there might be more):

http://jakarta.apache.org/hivemind/hivemind-jmx/setupJMXImplementation.html



They are using it for development, and testing. However the users need
to  download mx4j themselves, or download another JMX implementation, or
use java 5.

See http://jakarta.apache.org/hivemind/hivemind-jmx/quickstart.html

I downloaded hivemind-1-1-1, and could not find any redistribution of
mx4j there.

Andreas



 -jean






Re: Choice of JMX implementations

2006-07-13 Thread Sanket Sharma

Nope.

Geronimo is an application server built around a JMX-style architecture.


On 7/14/06, Jean T. Anderson <[EMAIL PROTECTED]> wrote:

Sanket Sharma wrote:
> Just wanted an opinion about JMX implementation to use for Derby. I
> have listed the better known implementations below with my comments:

is Geronimo an option?

http://geronimo.apache.org/api/org/apache/geronimo/kernel/jmx/package-summary.html

 -jean



Re: Choice of JMX implementations

2006-07-13 Thread Sanket Sharma

I think that the RI of JMX also has HttpAdaptor.


It was released as part of the sun.* packages, which are not officially
supported by Sun. And I think it has been removed from JDK 6, though I am
not very sure.


> My recommendation is to use either XMOJO or MX4J. Both of them are
> open source and support JDK 1.3 and above, which is what Derby is
> supported on.
>
> Comments and opinion will be appriciated.
>
Is it necessary to choose a specific JMX implementation ? Aren't these
just implementations of the same JCP spec, so the interfaces/classes
should be compatible ?


They are implementations of the same JCP spec and it is not really that big
of an issue. The issue arises only when someone is using JDK < 1.5,
which does not carry an implementation by default. Since most of
Derby's code is currently being built against JDK 1.3 and 1.4 (which
do not carry such an implementation), it gave me a chance to look at
alternatives and I thought it would be good to discuss it.
Currently, I'm experimenting with the reference implementation of JDK
6, which forces me to build my code against three different JDKs. It
will be the same for JDK 1.5 as well. For building with JDK 1.4 and 1.3, I
will need an implementation. That's when the issue surfaces.
Asking the user to download the reference implementation from Sun.com
can be considered as an alternative.



I might recommend using the reference implementation during the
development of this feature, because then you may avoid being dependent
on specific add-on features from a specific library. Or is there a
specific feature you really would like to use, which is not available in
the RI ?


There is no feature in particular that I would like to use. I will use only
standard features defined in the corresponding JCP. The only issue, in
my opinion, was that my choice of JMX implementation was adding the
requirement for a third JDK version.


XMOJO is distributed under LGPL, could that be a problem ?


I will check against the Apache license and get back.


Regards,
Sanket


Sincerely
-- Andreas




Re: Choice of JMX implementations

2006-07-13 Thread David Van Couvering
I want to retract my statement about removing support for system 
properties; that would be a major backward compatibility issue. What was I thinking?


I *can* see, however, that system properties could become a layer on top of a 
core JMX implementation, so that internally the only way the system is 
configured and managed is through the JMX service.  But this really 
shouldn't happen until JMX is "just there" for all VMs that we support; 
I agree with Dan that it would not be good to *require* a separate JMX jar 
file for Derby to run.


That said, I think the advantages of JMX are significant enough that 
many people will want to use it, and I think there's value in 
redistributing MX4J if we think it's up to snuff.


David

David Van Couvering wrote:
My understanding from Sanket's design is it uses the module 
architecture, so it's pluggable, and that it isn't even started by 
default, you have to enable it.  No new requirements on Derby unless you 
*want* to use JMX.


I would argue, however, that we should keep open the possibility of JMX 
becoming, over time, the primary configuration and management framework.  
Supporting both JMX and system properties may become a bit time-consuming.  
Perhaps this could happen, for example, when we EOL JDK 1.4 support, so that JMX 
is "just there" and doesn't require a separate runtime jar file.


David

Daniel John Debrunner wrote:

Sanket Sharma wrote:


Just wanted an opinion about JMX implementation to use for Derby. I
have listed the better known implementations below with my comments:

[snip]

Comments and opinion will be appriciated.


Sounds like a pluggable JMX implementation would be best, rather than
forcing an infrastructure on a derby user.

I hope that the JMX stuff is optional, and I can continue to run Derby
without any JMX booting or requiring any JMX libraries.

Thanks,
Dan.
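
To make the "JMX as a management framework" idea above concrete, here is a
minimal, hypothetical sketch of exposing one setting as an MBean attribute
using the platform MBeanServer that ships with Java 5 and later. None of
these class names are real Derby classes; only the default network server
port number (1527) is taken from Derby.

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    interface NetworkServerSettingsMBean {
        int getPortNumber();
        void setPortNumber(int port);
    }

    class NetworkServerSettings implements NetworkServerSettingsMBean {
        private int port = 1527;
        public int getPortNumber() { return port; }
        public void setPortNumber(int port) { this.port = port; }
    }

    public class JmxConfigSketch {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            server.registerMBean(new NetworkServerSettings(),
                    new ObjectName("org.apache.derby.demo:type=NetworkServerSettings"));
            System.out.println("MBean registered; attach a JMX console to change the port");
            Thread.sleep(60000);  // keep the JVM alive so a JMX client can connect
        }
    }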



Re: Choice of JMX implementations

2006-07-13 Thread Sanket Sharma

Thanks Andrew for pointing out the legal implications.

Apache Commons Modeler is not an implementation; it only facilitates the coding of MBeans.


On 7/14/06, Andrew McIntyre <[EMAIL PROTECTED]> wrote:

If the goal is to repackage any of these, I'm not sure that will be
possible with any of the following, except for Apache Commons
Modelling, but is that actually an implementation?

For information on compatibility of other open source licenses with
the ASL, see: http://people.apache.org/~cliffs/3party.html

On 7/13/06, Sanket Sharma <[EMAIL PROTECTED]> wrote:
> Just wanted an opinion about JMX implementation to use for Derby. I
> have listed the better known implementations below with my comments:
>
> 1. For Sun JDK/JVM prior to version 1.5, Sun's reference implementation is
> available as a separate jar download. Applications running on JVM 1.3
> and 1.4 will need to download and install this jar.

We can't repackage this jar, as the terms of Sun's BCL are
incompatible with the ASL. But perhaps we could detect its presence
and start the JMX services if an implementation is present.


Maybe we can put it under "Required Software" in the Derby INSTALL and
BUILD documents, or we can make the entire service optional. If the
user is aware of the requirements and wants to use JMX, they can either
install JRE 1.5 or above, or download the jar, set the classpath, and
start the application with a command-line option to start the JMX modules.
We would specify the details in the BUILD and INSTALL documents, the way the
BUILD document guides a user to download JCE and other optional components.


>  3. Apache Commons Modeller framework

Sounds like this would aid your development, but do you still need an
implementation? At any rate, we could repackage it if its needed at
runtime.


Yes, I will still require an implementation.


>  4. MX4J

This has a modified BSD license with an advertising clause, and a
restriction to downstream projects on naming. Not that we'd ever name
our project MX4J, but it's an extra restriction that isn't in the ASL,
so we might need to get a determination from legal-discuss on whether
this is acceptable to redistribute.



I was not really aware of the legal implications and would like to
thank you for pointing them out. I will read the Apache license terms and
get back to you in a while.


Best Regards
Sanket.


Re: Choice of JMX implementations

2006-07-13 Thread Jean T. Anderson
Andrew McIntyre wrote:
> If the goal is to repackage any of these, I'm not sure that will be
> possible with any of the following, except for Apache Commons
> Modelling, but is that actually an implementation?
> 
> For information on compatibility of other open source licenses with
> the ASL, see: http://people.apache.org/~cliffs/3party.html
> 
> On 7/13/06, Sanket Sharma <[EMAIL PROTECTED]> wrote:
...
> 
>>  4. MX4J
> 
> This has a modified BSD license with an advertising clause, and a
> restriction to downstream projects on naming. Not that we'd ever name
> our project MX4J, but it's an extra restriction that isn't in the ASL,
> so we might need to get a determination from legal-discuss on whether
> this is acceptable to redistribute.

here's one precedent for using MX4J at apache (and there might be more):

http://jakarta.apache.org/hivemind/hivemind-jmx/setupJMXImplementation.html


 -jean



Re: New/replacement developer area on Wiki

2006-07-13 Thread David Van Couvering

Great work, thanks, Dan!

Daniel John Debrunner wrote:

I've created a new page on the wiki as a proposed starting point for
developers/contributors/comitters. This would replace the (imho)
somewhat unfriendly bulleted list of un-ordered links that we have
today. The intention is to provide a more readable way to navigate the
links, biased towards getting new developers into the project.

http://wiki.apache.org/db-derby/DerbyDev

This mostly points to existing content, on the wiki or the web-site, but
there is one major new area.

http://wiki.apache.org/db-derby/HowItWorks

This is meant to allow people to find technical (developer) information
about derby easily. My idea is that all technical documents would be
linked from this set of pages (to keep the pages manageable the main
sections of Derby are broken into individual pages). For example all
functional and design specifications would be linked from here, I filled
in a couple of examples, either linking into Jira attachments or wiki
pages (e.g. see http://wiki.apache.org/db-derby/JdbcDriverLinks ).
Of course, this being the wiki, I'm hoping that the writers of existing
functional specs or writeups etc. would add the links into this
structure themselves.

I think all the links in these new pages are valid, except the one for
incremental development. This is the one I was planning to add to the
wiki but got side-tracked on the re-org when I couldn't find the patch
advice page easily.

I would appreciate any feedback, unless I hear otherwise I will replace
the current bulleted list with just a link to this new layout. I will
leave any links on the front page from the existing list that are not
covered in the re-org.

Thanks,
Dan.









Re: Choice of JMX implementations

2006-07-13 Thread David Van Couvering
That doesn't *look* like a JMX implementation, but a set of utilities on 
top of JMX.   Nice idea, though.


David

Jean T. Anderson wrote:

Sanket Sharma wrote:

Just wanted an opinion about JMX implementation to use for Derby. I
have listed the better known implementations below with my comments:


is Geronimo an option?

http://geronimo.apache.org/api/org/apache/geronimo/kernel/jmx/package-summary.html

 -jean


Re: Choice of JMX implementations

2006-07-13 Thread David Van Couvering
My understanding from Sanket's design is it uses the module 
architecture, so it's pluggable, and that it isn't even started by 
default, you have to enable it.  No new requirements on Derby unless you 
*want* to use JMX.


I would argue, however, that we should keep open the possibility of JMX 
becoming, over time, the primary configuration and management framework.  
Supporting both JMX and system properties may become a bit time-consuming.  
Perhaps this could happen, for example, when we EOL JDK 1.4 support, so that JMX 
is "just there" and doesn't require a separate runtime jar file.


David

Daniel John Debrunner wrote:

Sanket Sharma wrote:


Just wanted an opinion about JMX implementation to use for Derby. I
have listed the better known implementations below with my comments:

[snip]

Comments and opinion will be appriciated.


Sounds like a pluggable JMX implementation would be best, rather than
forcing an infrastructure on a derby user.

I hope that the JMX stuff is optional, and I can continue to run Derby
without any JMX booting or requiring any JMX libraries.

Thanks,
Dan.



Re: Choice of JMX implementations

2006-07-13 Thread David Van Couvering
I like the idea of detecting the presence of a JMX implementation and 
starting the service if it exists.  We'd still have to indicate exactly 
what JMX implementations and versions we have tested with (and provide 
links to the download page), so that users know what will work.


That said, if we *can* redistribute MX4J and we think it works well, 
redistributing it would be a nice thing to do for our users, so they 
don't have to take an extra step to be able to get JMX functionality for 
Derby.


The nice thing is for anyone using JDK 1.5 or higher this is a non-issue.

What about J2ME, are there versions of J2ME that we support that don't 
come with JMX, or are we covered?


Thanks,

David

Andrew McIntyre wrote:

If the goal is to repackage any of these, I'm not sure that will be
possible with any of the following, except for Apache Commons
Modelling, but is that actually an implementation?

For information on compatibility of other open source licenses with
the ASL, see: http://people.apache.org/~cliffs/3party.html

On 7/13/06, Sanket Sharma <[EMAIL PROTECTED]> wrote:

Just wanted an opinion about JMX implementation to use for Derby. I
have listed the better known implementations below with my comments:

1. For Sun JDK/JVM prior to version 1.5, Sun's reference implementation is
available as a separate jar download. Applications running on JVM 1.3
and 1.4 will need to download and install this jar.


We can't repackage this jar, as the terms of Sun's BCL are
incompatible with the ASL. But perhaps we could detect its presence
and start the JMX services if an implementation is present.


 2. XMOJO Project


This is GPL licensed. Currently ASF policy is not to redistribute
GPL-licensed jars.


 3. Apache Commons Modeller framework


Sounds like this would aid your development, but do you still need an
implementation? At any rate, we could repackage it if its needed at
runtime.


 4. MX4J


This has a modified BSD license with an advertising clause, and a
restriction to downstream projects on naming. Not that we'd ever name
our project MX4J, but it's an extra restriction that isn't in the ASL,
so we might need to get a determination from legal-discuss on whether
this is acceptable to redistribute.

andrew
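
A rough sketch of the "detect its presence" idea mentioned above, assuming a
simple classpath check is enough; the startManagementService() hook is a
placeholder, not real Derby code:

    public class JmxDetector {
        public static void main(String[] args) {
            boolean jmxAvailable;
            try {
                // Present in JDK 1.5+ or when a separate JMX jar is on the classpath.
                Class.forName("javax.management.MBeanServerFactory");
                jmxAvailable = true;
            } catch (ClassNotFoundException e) {
                jmxAvailable = false;
            }
            if (jmxAvailable) {
                System.out.println("JMX found - management service could be started here");
                // startManagementService();  // hypothetical hook
            } else {
                System.out.println("No JMX implementation found - skipping JMX services");
            }
        }
    }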


Re: Choice of JMX implementations

2006-07-13 Thread Jean T. Anderson
Sanket Sharma wrote:
> Just wanted an opinion about JMX implementation to use for Derby. I
> have listed the better known implementations below with my comments:

is Geronimo an option?

http://geronimo.apache.org/api/org/apache/geronimo/kernel/jmx/package-summary.html

 -jean


Re: Choice of JMX implementations

2006-07-13 Thread Andreas Korneliussen

Sanket Sharma wrote:

Just wanted an opinion about which JMX implementation to use for Derby. I
have listed the better-known implementations below with my comments:

1. Sun JDK 1.5 and above: Comes prepackaged with JMX. Requires no
additional jars to be deployed with the client. Leads to a smaller
footprint. On the negative side, it will always require Derby to run
on JRE 1.5 and above. May not be a good choice, as some deployments
might be using old JVM versions. For Sun JDK/JVM prior to version 1.5,
Sun's reference implementation is available as a separate jar
download. Applications running on JVM 1.3 and 1.4 will need to
download and install this jar.

 2. XMOJO Project:  This JMX implementation supports JDK version 1.2 and above,
which means it imposes no additional JDK requirements. However,
additional jars will need to be deployed on the client for them to use JMX.
They can either be bundled with Derby or can be separately installed
on the client. Another point to note is that XMojo itself requires more
jars to be present (xalan.jar, crimson.jar, jaxp.jar and
org.mortbay.jetty.jar). It supports construction of MBeans through
metadata specified in an XML file. Utility methods convert this
metadata into ModelMBeanInfo, which can then be used to construct a
ModelMBean.

 3. Apache Commons Modeller framework:  The Commons Modeller
framework is another Apache project, developed under the Jakarta
project. Although it is not an implementation in itself, it
facilitates the writing of MBeans by using XML files. The metadata for
MBeans is specified in an XML file which is parsed to generate
MBeans. It is known to work with all major JMX implementations.

 4. MX4J: Another very popular JMX framework known to work with JDK
1.3 and above. It also supports automatic generation of ModelMBeans
using XDoclets. Needs additional jars to be deployed. Supports HTTP
adapters for access via HTTP.



I think that the RI of JMX also has an HttpAdaptor.


My recommendation is to use either XMOJO or MX4J. Both of them are
open source and support JDK 1.3 and above, which is what Derby is
supported on.

Comments and opinions will be appreciated.

Is it necessary to choose a specific JMX implementation? Aren't these 
just implementations of the same JCP spec, so the interfaces/classes 
should be compatible?


I might recommend using the reference implementation during the 
development of this feature, because then you may avoid being dependent 
on add-on features of a particular library. Or is there a 
specific feature you really would like to use which is not available in 
the RI?


XMOJO is distributed under the LGPL; could that be a problem?

Sincerely
-- Andreas



Best Regards,
Sanket Sharma




Re: Choice of JMX implementations

2006-07-13 Thread Daniel John Debrunner
Sanket Sharma wrote:

> Just wanted an opinion about which JMX implementation to use for Derby. I
> have listed the better-known implementations below with my comments:
[snip]
> Comments and opinions will be appreciated.

Sounds like a pluggable JMX implementation would be best, rather than
forcing an infrastructure on a derby user.

I hope that the JMX stuff is optional, and I can continue to run Derby
without any JMX booting or requiring any JMX libraries.

Thanks,
Dan.



New/replacement developer area on Wiki

2006-07-13 Thread Daniel John Debrunner

I've created a new page on the wiki as a proposed starting point for
developers/contributors/committers. This would replace the (imho)
somewhat unfriendly bulleted list of un-ordered links that we have
today. The intention is to provide a more readable way to navigate the
links, biased towards getting new developers into the project.

http://wiki.apache.org/db-derby/DerbyDev

This mostly points to existing content, on the wiki or the web-site, but
there is one major new area.

http://wiki.apache.org/db-derby/HowItWorks

This is meant to allow people to find technical (developer) information
about Derby easily. My idea is that all technical documents would be
linked from this set of pages (to keep the pages manageable, the main
sections of Derby are broken into individual pages). For example, all
functional and design specifications would be linked from here; I filled
in a couple of examples, either linking into Jira attachments or wiki
pages (e.g. see http://wiki.apache.org/db-derby/JdbcDriverLinks ).
Of course, this being the wiki, I'm hoping that the writers of existing
functional specs or writeups etc. would add the links into this
structure themselves.

I think all the links in these new pages are valid, except the one for
incremental development. This is the one I was planning to add to the
wiki but got side-tracked on the re-org when I couldn't find the patch
advice page easily.

I would appreciate any feedback; unless I hear otherwise, I will replace
the current bulleted list with just a link to this new layout. I will
leave any links on the front page from the existing list that are not
covered in the re-org.

Thanks,
Dan.









Re: Choice of JMX implementations

2006-07-13 Thread Andrew McIntyre

If the goal is to repackage any of these, I'm not sure that will be
possible with any of the following, except for Apache Commons
Modelling, but is that actually an implementation?

For information on compatibility of other open source licenses with
the ASL, see: http://people.apache.org/~cliffs/3party.html

On 7/13/06, Sanket Sharma <[EMAIL PROTECTED]> wrote:

Just wanted an opinion about JMX implementation to use for Derby. I
have listed the better known implementations below with my comments:

1. For Sun JDK/JVM prior to version 1.5, Sun's reference implementation is
available as a separate jar download. Applications running on JVM 1.3
and 1.4 will need to download and install this jar.


We can't repackage this jar, as the terms of Sun's BCL are
incompatible with the ASL. But perhaps we could detect its presence
and start the JMX services if an implementation is present.


 2. XMOJO Project


This is GPL licensed. Currently ASF policy is not to redistribute
GPL-licensed jars.


 3. Apache Commons Modeller framework


Sounds like this would aid your development, but do you still need an
implementation? At any rate, we could repackage it if it's needed at
runtime.


 4. MX4J


This has a modified BSD license with an advertising clause, and a
restriction to downstream projects on naming. Not that we'd ever name
our project MX4J, but it's an extra restriction that isn't in the ASL,
so we might need to get a determination from legal-discuss on whether
this is acceptable to redistribute.

andrew


Re: [jira] Updated: (DERBY-1156) allow the encrypting of an existing unencrypted db and allow the re-encrypting of an existing encrypted db

2006-07-13 Thread Suresh Thalamati

Thanks for taking time to review the patch Mike. My comments are in-line.

Mike Matrigali (JIRA) wrote:

 [ http://issues.apache.org/jira/browse/DERBY-1156?page=all ]

Mike Matrigali updated DERBY-1156:
--


Here are my comments on review of the reencrypt_4.diff; I am also
running storeall, but it has not finished yet:


minor typos:
XactFactory.java line 835 - trasaction  --> transactions
TransactionTable.java: add comment that hasPreparedXact(boolean recovered) is
also MT unsafe.



I will fix the comments.



Is there a test for readonly db?



There is a store/TurnsReadOnly.java test, but that is not included 
in the regression test suites because cleanup will fail. Maybe 
there are some other tests, or maybe not :-)




It is a little weird to throw an exception from the ReadOnly impl of
checkVersion(), given the name of the routine.  I understand what the
comment is saying.  It seems unexpected for this routine to not return
ok for a "current" db.




I think returning OK (true) is not the right thing to do unless I 
really check the versions by reading them from the control 
files, etc.


The current usage of this function is to make sure the database is at the 
right version before doing any writes that would break soft upgrade.


If someone in the future implements a read-only feature that requires 
a version check, they can implement this method. Not my itch at the 
moment :-)



Other choices are to throw the error:
1) StandardException.newException(SQLState.DATABASE_READ_ONLY);
   instead of unimplemented.

2) Make sure checkVersion(...) is not called, by doing the read-only db 
check in the calling method.


I was planning to do option 2 later. If you think option 1 is 
better, I am OK with it too.





stuff for now or later:
o it may be interesting to add the following to your tests on XA
o have a global xact in the log that is aborted during recovery (i.e.
  not yet prepared).



o have a global xact in the log that is prepared and committed.

  I don't think there are code issues with these paths, mostly just easy
  cases to add and will verify future code doesn't break it.

o add test for readonly db and reencrypt.
o add test for upgrade fail.  Is there an existing framework for soft
  upgrade testing in 10.2?




Thanks, those are good test cases; I will work on them as a 
separate patch.



-suresh






Re: behavior of Statement.getGeneratedKeys()

2006-07-13 Thread Lance J. Andersen




I think it can be improved, but the javadocs indicate (for executeUpdate)
that the array is ignored if the statement is not able to return an
auto-generated key, and getGeneratedKeys says it will return an empty
ResultSet if it cannot return generated keys.



Daniel John Debrunner wrote:

  Lance J. Andersen wrote:

  
  
I discussed this briefly at my JDBC EG meeting yesterday.

As I expected, all of the vendors on the call indicated that they return
the same data type for the key returned in the case of a column defined as
an identity via the getGeneratedKeys() method.  The consensus was that this
is what a user would expect.

As to the unique key question posed by Dan, this is going to be an
ongoing EG discussion, as some vendors do return identity column values
in cases that are not unique (because the backend, like Derby, allows for
it), which gets complex as some vendors also in some cases support
returning a ROWID currently (but this is a different scenario than
using a defined column in the table).

  
  
Beyond that, it's also unclear what a driver should do if the application
requests columns in the ResultSet that are not generated or identity
columns.

E.g. with the example in section 13.6 of JDBC 4 what is the expected
behaviour if the column list is any of:

 {"ORDER_ID", "ISBN"}
 {"ISBN"}
 {"ORDER_DATE"}
 {"ORDER_ID", "ORDER_DATE"}

where ORDER_DATE is a column that has a default of CURRENT_DATE (ie.
value not provided by the INSERT).

Dan.





  





[jira] Commented: (DERBY-1330) Provide runtime privilege checking for grant/revoke functionality

2006-07-13 Thread Mamta A. Satoor (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1330?page=comments#action_12420911 ] 

Mamta A. Satoor commented on DERBY-1330:


Just wondered if anyone got a chance to look at the patch 
Derby1330uuidIndexForPermsSystemTablesV6diff.txt. This one is a pretty 
localized change.

> Provide runtime privilege checking for grant/revoke functionality
> -
>
>  Key: DERBY-1330
>  URL: http://issues.apache.org/jira/browse/DERBY-1330
>  Project: Derby
> Type: Sub-task

>   Components: SQL
> Versions: 10.2.0.0
> Reporter: Mamta A. Satoor
> Assignee: Mamta A. Satoor
>  Attachments: AuthorizationModelForDerbySQLStandardAuthorization.html, 
> AuthorizationModelForDerbySQLStandardAuthorizationV2.html, 
> Derby1330PrivilegeCollectionV2diff.txt, 
> Derby1330PrivilegeCollectionV2stat.txt, 
> Derby1330PrivilegeCollectionV3diff.txt, 
> Derby1330PrivilegeCollectionV3stat.txt, 
> Derby1330ViewPrivilegeCollectionV1diff.txt, 
> Derby1330ViewPrivilegeCollectionV1stat.txt, 
> Derby1330uuidIndexForPermsSystemTablesV4diff.txt, 
> Derby1330uuidIndexForPermsSystemTablesV4stat.txt, 
> Derby1330uuidIndexForPermsSystemTablesV5diff.txt, 
> Derby1330uuidIndexForPermsSystemTablesV5stat.txt, 
> Derby1330uuidIndexForPermsSystemTablesV6diff.txt, 
> Derby1330uuidIndexForPermsSystemTablesV6stat.txt
>
> Additional work needs to be done for grant/revoke to make sure that only 
> users with required privileges can access various database objects. In order 
> to do that, first we need to collect the privilege requirements for various 
> database objects and store them in SYS.SYSREQUIREDPERM. Once we have this 
> information then when a user tries to access an object, the required 
> SYS.SYSREQUIREDPERM privileges for the object will be checked against the 
> user privileges in SYS.SYSTABLEPERMS, SYS.SYSCOLPERMS and 
> SYS.SYSROUTINEPERMS. The database object access will succeed only if the user 
> has the necessary privileges.
> SYS.SYSTABLEPERMS, SYS.SYSCOLPERMS and SYS.SYSROUTINEPERMS are already 
> populated by Satheesh's work on DERBY-464. But SYS.SYSREQUIREDPERM doesn't 
> have any information in it at this point and hence no runtime privilege 
> checking is getting done at this point.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-1501) PreparedStatement#setNull(int parameterIndex, int sqlType) throws SQL Exception if given sqlType is LONGVARBINARY in embedded mode

2006-07-13 Thread Markus Fuchs (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1501?page=comments#action_12420910 ] 

Markus Fuchs commented on DERBY-1501:
-

I would certainly appreciate it if the second issue were handled as suggested 
by Knut:

"Derby doesn't need the type code, so ignoring it would be OK."

Thanks!
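
For reference, a minimal sketch of the two calls being compared in this report; the table
name and columns come from the issue description, and the comments restate the reported
behaviour rather than any spec requirement:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Types;

public class SetNullSketch {
    // conn is assumed to be an open connection to a database containing the
    // table from the report: BYTEARRAY_TEST (ID int, BYTEARRAY_VAL blob).
    static void insertNullBlob(Connection conn, boolean useLongVarBinary) throws Exception {
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO BYTEARRAY_TEST (ID, BYTEARRAY_VAL) VALUES (?, ?)");
        ps.setInt(1, 1);
        if (useLongVarBinary) {
            // Reported to throw an SQLException in embedded mode,
            // while the same call works through the network client.
            ps.setNull(2, Types.LONGVARBINARY);
        } else {
            // Reported to work in both embedded and network mode.
            ps.setNull(2, Types.BLOB);
        }
        ps.executeUpdate();
        ps.close();
    }
}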

> PreparedStatement#setNull(int parameterIndex, int sqlType) throws SQL 
> Exception if given sqlType is LONGVARBINARY in embedded mode
> --
>
>  Key: DERBY-1501
>  URL: http://issues.apache.org/jira/browse/DERBY-1501
>  Project: Derby
> Type: Bug

>   Components: JDBC
> Versions: 10.1.1.0
>  Environment: WindowsXP
> Reporter: Markus Fuchs
>  Attachments: ByteArrayTest.java
>
> When inserting a row into following table
> BYTEARRAY_TEST( ID int, BYTEARRAY_VAL blob)
> PreparedStatement#setNull(int parameterIndex, int sqlType) throws SQL 
> Exception if given sqlType is LONGVARBINARY. You must give sqlType BLOB to 
> make the insert work. The same test works using sqlType LONGVARBINARY in 
> network mode. The following combinations don't work:
> Column type    sqlType not working    mandatory sqlType
> BLOB           LONGVARBINARY          BLOB
> CLOB           LONGVARCHAR            CLOB
> The issue here is that, first, Derby behaves differently in network and 
> embedded mode, and secondly, it should accept LONGVARBINARY/LONGVARCHAR for 
> BLOB/CLOB columns.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-1156) allow the encrypting of an existing unencrypted db and allow the re-encrypting of an existing encrypted db

2006-07-13 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1156?page=all ]

Mike Matrigali updated DERBY-1156:
--


storeall ran fine with sun jdk1.4.2 on xp - with patch reencrypt_4

> allow the encrypting of an existing unencrypted db and allow the 
> re-encrypting of an existing encrypted db
> --
>
>  Key: DERBY-1156
>  URL: http://issues.apache.org/jira/browse/DERBY-1156
>  Project: Derby
> Type: Improvement

>   Components: Store
> Versions: 10.1.2.3
> Reporter: Mike Matrigali
> Assignee: Suresh Thalamati
> Priority: Minor
>  Fix For: 10.2.0.0
>  Attachments: encryptspec.html, reencrypt_1.diff, reencrypt_2.diff, 
> reencrypt_3.diff, reencrypt_4.diff
>
> encrypted database to be re-encrypted with a new password.
> Here are some ideas for an initial implementation.
> The easiest way to do this is to make sure we have exclusive access to the
> data and that no log is required in the new copy of the db.  I want to avoid
> the log as it also is encrypted.  Here is my VERY high level plan:
> 1) Force exclusive access by putting all the work in the low level store,
>offline boot method.  We will do redo recovery as usual, but at the end
>there will be an entry point to do the copy/encrypt operation.
> copy/encrypt process:
> 0) The request to encrypt/re-encrypt the db will be handled with a new set
>of url flags passed into store at boot time.  The new flags will provide
>the same inputs as the current encrypt flags.  So at high level the
>request will be "connect db old_encrypt_url_flags; new_encrypt_url_flags".
>TODO - provide exact new flag syntax.
> 1) Open a transaction do all logged work to do the encryption.  All logging
>will be done with existing encryption.
> 2) Copy and encrypt every db file in the database.  The target files will
>be in the data directory.  There will be a new suffix to track the new
>files, similar to the current process used for handling drop table in
>a transaction consistent manner without logging the entire table to the 
> log.
>Entire encrypted destination file is guaranteed synced to disk before
>transaction commits.  I don't think this part needs to be logged.
>Files will be read from the cache using existing mechanism and written
>directly into new encrypted files (new encrypted data does not end up in
>the cache).
> 3) Switch encrypted files for old files.  Do this under a new log operation
>so the process can be correctly rolled back if the encrypt db operation
>transaction fails.  Rollback will do file at a time switches, no reading
>of encrypted data is necessary.
> 4) log a "change encryption of db" log record, but do not update
>system.properties with the change.
> 5) commit transaction.
> 6) update system.properties and sync changes.
> 7) TODO - need some way to handle crash between steps 5 and 6.
> 8) checkpoint all data, at this point guaranteed that there is no outstanding
>transaction, so after checkpoint is done there is no need for the log.
> ISSUES:
> o there probably should be something that catches a request to encrypt to
>   whatever db was already encrypted with.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Choice of JMX implementations

2006-07-13 Thread Sanket Sharma

Just wanted an opinion about which JMX implementation to use for Derby. I
have listed the better-known implementations below with my comments:

1. Sun JDK 1.5 and above: Comes prepackaged with JMX. Requires no
additional jars to be deployed with the client. Leads to a smaller
footprint. On the negative side, it will always require Derby to run
on JRE 1.5 and above. May not be a good choice, as some deployments
might be using old JVM versions. For Sun JDK/JVM prior to version 1.5,
Sun's reference implementation is available as a separate jar
download. Applications running on JVM 1.3 and 1.4 will need to
download and install this jar.

 2. XMOJO Project:  This JMX implementation supports JDK version 1.2 and above,
which means it imposes no additional JDK requirements. However,
additional jars will need to be deployed on the client for them to use JMX.
They can either be bundled with Derby or can be separately installed
on the client. Another point to note is that XMojo itself requires more
jars to be present (xalan.jar, crimson.jar, jaxp.jar and
org.mortbay.jetty.jar). It supports construction of MBeans through
metadata specified in an XML file. Utility methods convert this
metadata into ModelMBeanInfo, which can then be used to construct a
ModelMBean.

 3. Apache Commons Modeller framework:  The Commons Modeller
framework is another Apache project, developed under the Jakarta
project. Although it is not an implementation in itself, it
facilitates the writing of MBeans by using XML files. The metadata for
MBeans is specified in an XML file which is parsed to generate
MBeans. It is known to work with all major JMX implementations.

 4. MX4J: Another very popular JMX framework known to work with JDK
1.3 and above. It also supports automatic generation of ModelMBeans
using XDoclets. Needs additional jars to be deployed. Supports HTTP
adapters for access via HTTP.

My recommendation is to use either XMOJO or MX4J. Both of them are
open source and support JDK 1.3 and above, which is what Derby is
supported on.

Comments and opinions will be appreciated.

Best Regards,
Sanket Sharma


Re: is there an existing framework for adding 10.2 soft upgrade regression tests?

2006-07-13 Thread Deepa Remesh

On 7/13/06, Mike Matrigali <[EMAIL PROTECTED]> wrote:

I am just looking for an example of how to add a simple test to make
sure a hard upgrade only feature fails in soft upgrade mode in 10.2.



Please take a look at
org.apache.derbyTesting.functionTests.tests.upgradeTests.UpgradeTester.java
which has tests for different upgrade scenarios. The various case*
methods test some features in different upgrade modes.

Thanks,
Deepa
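
For anyone adding such a case, here is a hypothetical sketch, not UpgradeTester's actual
API, of the kind of check a soft-upgrade case method performs: a hard-upgrade-only
feature is expected to fail cleanly while the database is booted in soft-upgrade mode.
The helper name and the idea of passing in the feature's SQL are assumptions for this example.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class SoftUpgradeCheckSketch {
    /**
     * Execute a statement that exercises a hard-upgrade-only feature and
     * report whether it failed, as it should while the database is booted
     * in soft-upgrade mode.
     */
    public static void checkFeatureFails(Connection conn, String featureSql) {
        Statement s = null;
        try {
            s = conn.createStatement();
            s.execute(featureSql);
            System.out.println("FAIL: feature succeeded in soft-upgrade mode");
        } catch (SQLException se) {
            System.out.println("PASS: expected failure, SQLState=" + se.getSQLState());
        } finally {
            try { if (s != null) s.close(); } catch (SQLException e) { /* ignore */ }
        }
    }
}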


Re: Jira list request

2006-07-13 Thread Laura Stewart
*** (LS) Welcome Kim!  I have been out on vacation and just now catching up on the lists... I was delighted to 
see your note and to know that you are going to be working on the Derby documentation; that is my focus as well.

Laura

On 6/30/06, Andrew McIntyre <[EMAIL PROTECTED]> wrote:
On 6/30/06, Kim Haase <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I'm a new subscriber to derby-dev. I will be working on Derby
> documentation as a technical writer for Sun Microsystems. I have created
> a Jira userid, chaase3. Could you please add it to the derby-developers
> Jira list?

Hi Kim, I've added you to the derby-developers group in JIRA.

andrew

-- 
Laura Stewart


[jira] Updated: (DERBY-1156) allow the encrypting of an existing unencrypted db and allow the re-encrypting of an existing encrypted db

2006-07-13 Thread Mike Matrigali (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1156?page=all ]

Mike Matrigali updated DERBY-1156:
--


Here are my comments on review of the reencrypt_4.diff; I am also running 
storeall, but it has not finished yet:

minor typos:
XactFactory.java line 835 - trasaction  --> transactions
TransactionTable.java: add comment that hasPreparedXact(boolean recovered) is
also MT unsafe.

Is there a test for readonly db?

It is a little weird to throw an exception from the ReadOnly impl of
checkVersion(), given the name of the routine.  I understand what the
comment is saying.  It seems unexpected for this routine to not return
ok for a "current" db.


stuff for now or later:
o it may be interesting to add the following to your tests on XA
o have a global xact in the log that is aborted during recovery (i.e.
  not yet prepared).
o have a global xact in the log that is prepared and committed.

  I don't think there are code issues with these paths, mostly just easy
  cases to add and will verify future code doesn't break it.

o add test for readonly db and reencrypt.
o add test for upgrade fail.  Is there an existing framework for soft
  upgrade testing in 10.2?

> allow the encrypting of an existing unencrypted db and allow the 
> re-encrypting of an existing encrypted db
> --
>
>  Key: DERBY-1156
>  URL: http://issues.apache.org/jira/browse/DERBY-1156
>  Project: Derby
> Type: Improvement

>   Components: Store
> Versions: 10.1.2.3
> Reporter: Mike Matrigali
> Assignee: Suresh Thalamati
> Priority: Minor
>  Fix For: 10.2.0.0
>  Attachments: encryptspec.html, reencrypt_1.diff, reencrypt_2.diff, 
> reencrypt_3.diff, reencrypt_4.diff
>
> encrypted database to be re-encrypted with a new password.
> Here are some ideas for an initial implementation.
> The easiest way to do this is to make sure we have exclusive access to the
> data and that no log is required in the new copy of the db.  I want to avoid
> the log as it also is encrypted.  Here is my VERY high level plan:
> 1) Force exclusive access by putting all the work in the low level store,
>offline boot method.  We will do redo recovery as usual, but at the end
>there will be an entry point to do the copy/encrypt operation.
> copy/encrypt process:
> 0) The request to encrypt/re-encrypt the db will be handled with a new set
>of url flags passed into store at boot time.  The new flags will provide
>the same inputs as the current encrypt flags.  So at high level the
>request will be "connect db old_encrypt_url_flags; new_encrypt_url_flags".
>TODO - provide exact new flag syntax.
> 1) Open a transaction do all logged work to do the encryption.  All logging
>will be done with existing encryption.
> 2) Copy and encrypt every db file in the database.  The target files will
>be in the data directory.  There will be a new suffix to track the new
>files, similar to the current process used for handling drop table in
>a transaction consistent manner without logging the entire table to the 
> log.
>Entire encrypted destination file is guaranteed synced to disk before
>transaction commits.  I don't think this part needs to be logged.
>Files will be read from the cache using existing mechanism and written
>directly into new encrypted files (new encrypted data does not end up in
>the cache).
> 3) Switch encrypted files for old files.  Do this under a new log operation
>so the process can be correctly rolled back if the encrypt db operation
>transaction fails.  Rollback will do file at a time switches, no reading
>of encrypted data is necessary.
> 4) log a "change encryption of db" log record, but do not update
>system.properties with the change.
> 5) commit transaction.
> 6) update system.properties and sync changes.
> 7) TODO - need some way to handle crash between steps 5 and 6.
> 8) checkpoint all data, at this point guaranteed that there is no outstanding
>transaction, so after checkpoint is done there is no need for the log.
> ISSUES:
> o there probably should be something that catches a request to encrypt to
>   whatever db was already encrypted with.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



is there an existing framework for adding 10.2 soft upgrade regression tests?

2006-07-13 Thread Mike Matrigali

I am just looking for an example of how to add a simple test to make
sure a hard upgrade only feature fails in soft upgrade mode in 10.2.



Re: behavior of Statement.getGeneratedKeys()

2006-07-13 Thread Daniel John Debrunner
Lance J. Andersen wrote:

> I discussed this briefly at my JDBC EG meeting yesterday.
> 
> As I expected, all of the vendors on the call indicated that they return
> the same data type for the key returned in the case of a column defined as
> an identity via the getGeneratedKeys() method.  The consensus was that this
> is what a user would expect.
> 
> As to the unique key question posed by Dan, this is going to be an
> ongoing EG discussion, as some vendors do return identity column values
> in cases that are not unique (because the backend, like Derby, allows for
> it), which gets complex as some vendors also in some cases support
> returning a ROWID currently (but this is a different scenario than
> using a defined column in the table).

Beyond that, it's also unclear what a driver should do if the application
requests columns in the ResultSet that are not generated or identity
columns.

E.g. with the example in section 13.6 of JDBC 4 what is the expected
behaviour if the column list is any of:

 {"ORDER_ID", "ISBN"}
 {"ISBN"}
 {"ORDER_DATE"}
 {"ORDER_ID", "ORDER_DATE"}

where ORDER_DATE is a column that has a default of CURRENT_DATE (ie.
value not provided by the INSERT).

Dan.







Re: behavior of Statement.getGeneratedKeys()

2006-07-13 Thread Lance J. Andersen

I discussed this briefly at my JDBC EG meeting yesterday.

As I expected, all of the vendors on the call indicated that they return 
the same data type for the key returned in the case of a column defined as 
an identity via the getGeneratedKeys() method.  The consensus was that this 
is what a user would expect.


As to the unique key question posed by Dan, this is going to be an 
ongoing EG discussion, as some vendors do return identity column values 
in cases that are not unique (because the backend, like Derby, allows for 
it), which gets complex as some vendors also in some cases support 
returning a ROWID currently (but this is a different scenario than 
using a defined column in the table).



The behavior of the JDBC methods on the ResultSet/ResultSetMetaData does 
not change.  What will differ is the column type reported by 
ResultSetMetaData.


This issue pointed out a problem in the JDBC EoD RI, which made the 
assumption that the value returned matched the column type in the base 
table.


A Derby user encountered this issue as well, trying to use 10.2 and JDBC 
EoD  http://binkley.blogspot.com/2006/04/nifty-jdbc-40.html.



HTH
-lance





Rick Hillegas wrote:

Hi Kathey,

Thanks for your responses. Some replies follow. Regards-Rick

Kathey Marsden wrote:


Rick Hillegas wrote:


I'd like to try to summarize where I think the discussion stands:

1) Lance, our JDBC expert, has confirmed that this is not a 
compliance problem. That means this is not a bug.


2) Lance would like to change the behavior of 
Statement.getGeneratedKeys(). Currently this method always returns a 
ResultSet whose column has the canonical type DECIMAL( 31, 0). He 
would like this method to return a ResultSet whose column type 
changes depending on the type of the actual autogenerated column in 
the affected table; that is, the column could have type SMALLINT, 
INT, or BIGINT.


3) It does not seem that this change would have a very big impact on 
customers. At least, we have not been able to imagine how this would 
impact customers adversely. However, this is just theory and we have 
not polled the user community yet.



We not only have not polled the user community, we do not have 
anything we can poll them with yet.  getGeneratedKeys returns a 
result set.  Users will call certain methods on that ResultSet, and 
the return values will be different.   We need to define what those 
are and the potential impact.  Then we map them to the user symptoms, 
and then we can define scenarios that might be affected.  If it is 
important that we break our current documented behavior, we have to 
take these painful steps to assess risk.  A vague poll, without 
understanding the possible impact ourselves and presenting it 
clearly, is not effective or fair to the user base, as we found with 
DERBY-1459.
Can you please complete the list below with any other changes in the 
result set returned by getGeneratedKeys, or confirm that there are no 
other calls impacted?  Let's not include the likelihood of each happening 
yet.  We just want to understand what has changed and what symptoms 
users might see.
I agree that with what we have so far the risk is low, but we need to go 
through the whole exercise.  How has the result set returned 
changed?  What symptoms might users see?  Define user scenarios and 
risk. Then poll the user community.


Certainly there would be  these changes for the ResultSet returned by 
getGeneratedKeys():


o  getMetaData()  would correspond to the ResultSetMetaData of the 
base table column and so will have different types, column widths, etc., 
so formatting and other decisions based on this information may be 
affected.


Agreed.

o  getObject()  would return a different type, and applications 
making casts based on the assumption that it is a BigDecimal may see cast 
exceptions or other problematic behavior because of that assumption 
(see the sketch after this exchange).


Agreed.

o getString()  would return a different String representation which  
might  be problematic if a particular format was expected and  parsed.


This doesn't appear to be true for the small integers with which I've 
experimented. Are there problems in the toString() methods of 
BigDecimal and (perhaps) Derby's j2me decimal object?




Would other ResultSet methods be affected?  For instance, 
would getInt(), getLong(), getShort(), etc. all still work as they 
did before and return the same values?


They should.
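
To make the getObject() concern above concrete, here is a minimal, hypothetical
application sketch (not from this thread): the first pattern relies on the currently
documented DECIMAL(31, 0) key type and would break if getObject() started returning
Integer/Long for SMALLINT/INT/BIGINT identity columns, while the second pattern is
unaffected by the proposed change.

import java.math.BigDecimal;
import java.sql.ResultSet;
import java.sql.Statement;

public class GeneratedKeysSketch {
    // stmt is assumed to have just executed an INSERT with RETURN_GENERATED_KEYS.
    static long readKey(Statement stmt) throws Exception {
        ResultSet keys = stmt.getGeneratedKeys();
        keys.next();

        // Pattern that assumes today's documented behaviour (DECIMAL(31,0)):
        // this cast would throw ClassCastException if getObject() started
        // returning a different numeric type for the identity column.
        BigDecimal asDecimal = (BigDecimal) keys.getObject(1);

        // Pattern that works the same way before and after the proposed change.
        long asLong = keys.getLong(1);

        keys.close();
        return asLong;
    }
}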





So what do we think?

A) Does anyone prefer the current behavior over Lance's proposed 
behavior?




Only in that it saves a lot of time in risk assessment, and not 
changing it prevents us from setting a precedent for changing 
documented and compliant behaviour to something else with no really 
significant benefit to users, but rather just for the sake of tidiness 
and the convenience of writing code that is not guaranteed to be 
portable to other JDBC drivers.
http://wiki.apache.org/db-derby/ForwardCompatibility#head-1aa64e215b1979230b8d9440e3e21d43c3

[jira] Commented: (DERBY-1130) Client should not allow databaseName to be set with setConnectionAttributes

2006-07-13 Thread Kathey Marsden (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1130?page=comments#action_12420895 ] 

Kathey Marsden commented on DERBY-1130:
---

Your approach sounds ok to me, since the SQL states are already different.

> Client should not allow databaseName to be set with setConnectionAttributes
> ---
>
>  Key: DERBY-1130
>  URL: http://issues.apache.org/jira/browse/DERBY-1130
>  Project: Derby
> Type: Bug

>   Components: Network Client
> Versions: 10.1.1.0, 10.1.1.1, 10.1.1.2, 10.1.2.0, 10.1.2.1, 10.1.2.2, 
> 10.1.2.3, 10.2.0.0, 10.1.3.0, 10.1.2.4
> Reporter: Kathey Marsden
> Assignee: Deepa Remesh

>
> Per this thread,  setConnectionAttributes should not set databaseName. 
> http://www.nabble.com/double-check-on-checkDataSource-t1187602.html#a3128621
> Currently this is allowed for client but should be disabled.  I think it is 
> OK to change because we have documented that client will be changed to match 
> embedded for implementation-defined behaviour.   Hopefully its use is rare, as 
> most folks would use the standard setDatabaseName.  Still, there should be a 
> release note when the change is made, and it would be better to change it 
> sooner rather than later:
> Below is the repro. 
> Here is the output with Client
> D>java DatabaseNameWithSetConnAttr
> ds.setConnectionAttributes(databaseName=wombat;create=true)
> ds.getDatabaseName() = null (should be null)
> FAIL: Should not have been able to set databaseName with connection attributes
> Also look for tests  disabled with this bug number in the test 
> checkDataSource30.java
> import java.sql.*;
> import java.lang.reflect.Method;
> public class DatabaseNameWithSetConnAttr{
>   public static void main(String[] args) {
>   try {
>   
>   String attributes = "databaseName=wombat;create=true";
>   org.apache.derby.jdbc.ClientDataSource ds = new
>   org.apache.derby.jdbc.ClientDataSource();
>   //org.apache.derby.jdbc.EmbeddedDataSource ds = new
>   //org.apache.derby.jdbc.EmbeddedDataSource();
>   System.out.println("ds.setConnectionAttributes(" + 
> attributes + ")");
>   ds.setConnectionAttributes(attributes);
>   System.out.println("ds.getDatabaseName() = " +
>  ds.getDatabaseName() 
> + " (should be null)" );
>   Connection conn  = ds.getConnection();
>   } catch (SQLException e) {
>   String sqlState = e.getSQLState();
>   if (sqlState != null && 
> sqlState.equals("XJ041"))
>   {
>   System.out.println("PASS: An exception was 
> thrown trying to get a connetion from a datasource after setting databaseName 
> with setConnectionAttributes");
>   System.out.println("EXPECTED EXCEPTION: " + 
> e.getSQLState() 
>  + " 
> - " + e.getMessage());
>   return;
>   }
>   while (e != null)
>   {
>   System.out.println("FAIL - UNEXPECTED 
> EXCEPTION: " + e.getSQLState());
>   e.printStackTrace();
>   e = e.getNextException();
>   }
>   return;
>   }
>   System.out.println("FAIL: Should not have been able to set 
> databaseName with connection attributes");
>   }
> }

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-550) BLOB : java.lang.OutOfMemoryError with network JDBC driver (org.apache.derby.jdbc.ClientDriver)

2006-07-13 Thread Tomohito Nakayama (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-550?page=comments#action_12420894 ] 

Tomohito Nakayama commented on DERBY-550:
-

Configuring the -Xmx256m option for jdb under NetworkServer, I tested again.
Then, I found a different stack for the error.

DRDAConnThread_3[1] where
  [1] org.apache.derby.iapi.services.io.DynamicByteArrayOutputStream.<init> 
(DynamicByteArrayOutputStream.java:63)
  [2] org.apache.derby.impl.store.raw.data.BasePage.insertAllowOverflow 
(BasePage.java:821)
  [3] org.apache.derby.impl.store.raw.data.BasePage.insert (BasePage.java:694)
  [4] org.apache.derby.impl.store.access.heap.HeapController.doInsert 
(HeapController.java:306)
  [5] org.apache.derby.impl.store.access.heap.HeapController.insert 
(HeapController.java:573)
  [6] org.apache.derby.impl.sql.execute.RowChangerImpl.insertRow 
(RowChangerImpl.java:447)
  [7] org.apache.derby.impl.sql.execute.InsertResultSet.normalInsertCore 
(InsertResultSet.java:995)
  [8] org.apache.derby.impl.sql.execute.InsertResultSet.open 
(InsertResultSet.java:522)
  [9] org.apache.derby.impl.sql.GenericPreparedStatement.execute 
(GenericPreparedStatement.java:357)
  [10] org.apache.derby.impl.jdbc.EmbedStatement.executeStatement 
(EmbedStatement.java:1,181)
  [11] org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement 
(EmbedPreparedStatement.java:1,510)
  [12] org.apache.derby.impl.jdbc.EmbedPreparedStatement.execute 
(EmbedPreparedStatement.java:1,188)
  [13] org.apache.derby.impl.drda.DRDAStatement.execute (DRDAStatement.java:559)
  [14] org.apache.derby.impl.drda.DRDAConnThread.parseEXCSQLSTT 
(DRDAConnThread.java:3,655)
  [15] org.apache.derby.impl.drda.DRDAConnThread.processCommands 
(DRDAConnThread.java:928)
  [16] org.apache.derby.impl.drda.DRDAConnThread.run (DRDAConnThread.java:254)
DRDAConnThread_3[1] eval java.lang.Runtime.getRuntime().maxMemory()
 java.lang.Runtime.getRuntime().maxMemory() = 266403840
DRDAConnThread_3[1] 

I think it is remarkable that the error happened in engine code in this case.

> BLOB : java.lang.OutOfMemoryError with network JDBC driver 
> (org.apache.derby.jdbc.ClientDriver)
> ---
>
>  Key: DERBY-550
>  URL: http://issues.apache.org/jira/browse/DERBY-550
>  Project: Derby
> Type: Bug

>   Components: JDBC, Network Server
> Versions: 10.1.1.0
>  Environment: Any environment.
> Reporter: Grégoire Dubois
> Assignee: Tomohito Nakayama
>  Attachments: BlobOutOfMem.java
>
> Using the org.apache.derby.jdbc.ClientDriver driver to access the
> Derby database through the network, the driver is writing the whole file into 
> memory (RAM) before sending
> it to the database.
> Writing small files (smaller than 5 MB) into the database works fine,
> but it is impossible to write big files (40 MB, for example, or more) without 
> getting the
> exception java.lang.OutOfMemoryError.
> The org.apache.derby.jdbc.EmbeddedDriver doesn't have this problem.
> Here follows some code that creates a database, a table, and tries to write a 
> BLOB. 2 parameters are to be changed for the code to work for you: 
> DERBY_DBMS_PATH and FILE
> import NetNoLedge.Configuration.Configs;
> import org.apache.derby.drda.NetworkServerControl;
> import java.net.InetAddress;
> import java.io.*;
> import java.sql.*;
> /**
>  *
>  * @author  greg
>  */
> public class DerbyServer_JDBC_BLOB_test {
> 
> // The unique instance of DerbyServer in the application.
> private static DerbyServer_JDBC_BLOB_test derbyServer;
> 
> private NetworkServerControl server;
> 
> private static final String DERBY_JDBC_DRIVER = 
> "org.apache.derby.jdbc.ClientDriver";
> private static final String DERBY_DATABASE_NAME = "Test";
> 
> // ###
> // ### SET HERE THE EXISTING PATH YOU WANT 
> // ###
> private static final String DERBY_DBMS_PATH =  "/home/greg/DatabaseTest";
> // ###
> // ###
> 
> 
> private static int derbyPort = 9157;
> private static String userName = "user";
> private static String userPassword = "password";
> 
> // 
> ###
> // # DEFINE HERE THE PATH TO THE FILE YOU WANT TO WRITE INTO 
> THE DATABASE ###
> // # TRY A 100kb-3Mb FILE, AND AFTER A 40Mb OR BIGGER FILE 
> #
> // 
> ###
> private static final File FILE = new File("/home/greg/01.jpg");
> // 
> ###

Re: Derby Internals Wiki

2006-07-13 Thread Mike Matrigali



Daniel John Debrunner wrote:

Jean T. Anderson wrote:



Sanket Sharma wrote:



Hi,

While reading the Derby source code for my project, I thought it would be
good to share my knowledge with other developers. Since my project is
about adding JMX to Derby, it will interact with a lot of internal API
calls. As I continue to read and understand the code, I think it will be good if
I can document all this somewhere. Is there any Derby Internals wiki
page where I can post all this information?

I would also encourage, where it makes sense, adding the documentation to
the actual code/javadoc and then, if you want, linking to the javadoc from
the wiki.



Re: Google SOC:MySQL to Derby Migration tool design question

2006-07-13 Thread Jean T. Anderson
Daniel John Debrunner wrote:
> Satheesh Bandaram wrote: 
>>Jean T. Anderson wrote:
>>
>>>One thing to consider is DdlUtils is database agnostic. For example,
>>>adding support for "create view" doesn't mean just adding it for Derby,
>>>but also adding it for every database supported (see the list at
>>>http://db.apache.org/ddlutils/database-support.html ).
>>>
>>This is important... While ddlUtils goal is to support all databases,
>>Ramin is attempting to fix his target database to Derby and to make
>>source database mySQL for his migration utility. (for now) It doesn't
>>seem like Ramin should take on adding database specific code to ddlUtils
>>(that DatabaseMetadata might not expose directly) and to test on 13
>>databases that ddlUtils currently supports.
> 
> Isn't the point of Google Summer of Code to introduce students to open
> source development? This switch to ddlutils and the additional community
> involvement is typical of open source: scratch your own itch, but also
> benefit the community as well.

It's also possible DdlUtils might welcome partial contributions (and
that other hands might dive in to fill in other databases). For anyone
interested, a DdlUtils discussion thread started at:

http://mail-archives.apache.org/mod_mbox/db-ddlutils-dev/200607.mbox/[EMAIL 
PROTECTED]

and here's the tiny url:
   http://tinyurl.com/rhnk2

-jean




[jira] Commented: (DERBY-550) BLOB : java.lang.OutOfMemoryError with network JDBC driver (org.apache.derby.jdbc.ClientDriver)

2006-07-13 Thread Tomohito Nakayama (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-550?page=comments#action_12420891 ] 

Tomohito Nakayama commented on DERBY-550:
-

I executed BlobOutOfMem.java with Network Server running on jdb and
tried to recognize circumstance where OutOfMemoryError happens.

The result was as next.

[EMAIL PROTECTED]:~/derby/test/20060714$ startDebugNetworkServer.ksh 
Initializing jdb ...
> catch java.lang.OutOfMemoryError
Deferring all java.lang.OutOfMemoryError.
It will be set after the class is loaded.
> run
run org.apache.derby.drda.NetworkServerControl start -h localhost -p 1527
Set uncaught java.lang.Throwable
Set deferred all java.lang.OutOfMemoryError
Set deferred uncaught java.lang.Throwable
> 
VM Started: Apache Derby Network Server - 10.2.0.4 alpha started and ready to 
accept connections on port 1527 at 2006-07-13 15:40:31.278 GMT 

Exception occurred: java.lang.OutOfMemoryError (uncaught)
Exception occurred: java.lang.OutOfMemoryError 
(uncaught)"thread=DRDAConnThread_3", java.io.ByteArrayOutputStream.(), 
line=59 bci=37

DRDAConnThread_3[1] where
  [1] java.io.ByteArrayOutputStream.<init> (ByteArrayOutputStream.java:59)
  [2] org.apache.derby.impl.drda.DDMReader.getExtData (DDMReader.java:958)
  [3] org.apache.derby.impl.drda.DDMReader.getExtData (DDMReader.java:944)
  [4] org.apache.derby.impl.drda.DRDAConnThread.readAndSetExtParam 
(DRDAConnThread.java:4,355)
  [5] org.apache.derby.impl.drda.DRDAConnThread.readAndSetAllExtParams 
(DRDAConnThread.java:4,320)
  [6] org.apache.derby.impl.drda.DRDAConnThread.parseEXCSQLSTTobjects 
(DRDAConnThread.java:3,811)
  [7] org.apache.derby.impl.drda.DRDAConnThread.parseEXCSQLSTT 
(DRDAConnThread.java:3,640)
  [8] org.apache.derby.impl.drda.DRDAConnThread.processCommands 
(DRDAConnThread.java:928)
  [9] org.apache.derby.impl.drda.DRDAConnThread.run (DRDAConnThread.java:254)
DRDAConnThread_3[1] up
DRDAConnThread_3[2] list
954
955 
956 if (desiredLength != -1) {
957// allocate a stream based on a known amount of data
958 => baos = new ByteArrayOutputStream ((int) desiredLength);
959 }
960 else {
961// allocate a stream to hold an unknown amount of data
962baos = new ByteArrayOutputStream ();
963//isLengthAndNullabilityUnknown = true;
DRDAConnThread_3[2] up
DRDAConnThread_3[3] list
940 
941
942 byte[] getExtData (boolean checkNullability) throws 
DRDAProtocolException
943 {
944 =>  return  getExtData(ddmScalarLen, checkNullability);
945 }
946
947
948 byte[] getExtData (long desiredLength, boolean checkNullability) throws 
DRDAProtocolException
949  {
DRDAConnThread_3[3] up
DRDAConnThread_3[4] list
4,351   FdocaConstants.isNullable(drdaType))
4,352   checkNullability = true;
4,353   
4,354   try {   
4,355 =>byte[] paramBytes = 
reader.getExtData(checkNullability);
4,356   String paramString = null;
4,357   switch (drdaType)
4,358   {
4,359   case  
DRDAConstants.DRDA_TYPE_LOBBYTES:
4,360   case  
DRDAConstants.DRDA_TYPE_NLOBBYTES:
DRDAConnThread_3[4] 

I confirm this is the part Andreas mentioned in his previous comment.

I think this is one of the most suspect places.
However, I think there may be other places where memory is wasted,
because reports from others about memory usage indicated that the amount of memory 
used was much larger than the actual size of the LOB being streamed.
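
To make the buffering concern concrete, here is a generic sketch (not Derby's actual
DDMReader code) contrasting an up-front allocation sized to the whole LOB with copying
through a fixed-size buffer; all names are hypothetical:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkedCopySketch {
    /** Buffers the entire stream in memory first; this is the pattern that hurts for a 40 MB BLOB. */
    static byte[] readAll(InputStream in, int declaredLength) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream(declaredLength);
        byte[] buf = new byte[8192];
        for (int n = in.read(buf); n != -1; n = in.read(buf)) {
            baos.write(buf, 0, n);
        }
        return baos.toByteArray();
    }

    /** Streams the data through a fixed-size buffer instead, so memory use stays roughly constant. */
    static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        for (int n = in.read(buf); n != -1; n = in.read(buf)) {
            out.write(buf, 0, n);
        }
    }
}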

> BLOB : java.lang.OutOfMemoryError with network JDBC driver 
> (org.apache.derby.jdbc.ClientDriver)
> ---
>
>  Key: DERBY-550
>  URL: http://issues.apache.org/jira/browse/DERBY-550
>  Project: Derby
> Type: Bug

>   Components: JDBC, Network Server
> Versions: 10.1.1.0
>  Environment: Any environment.
> Reporter: Grégoire Dubois
> Assignee: Tomohito Nakayama
>  Attachments: BlobOutOfMem.java
>
> Using the org.apache.derby.jdbc.ClientDriver driver to access the
> Derby database through the network, the driver is writing the whole file into 
> memory (RAM) before sending
> it to the database.
> Writing small files (smaller than 5 MB) into the database works fine,
> but it is impossible to write big files (40 MB, for example, or more) without 
> getting the
> exception java.lang.OutOfMemoryError.
> The org.apache.derby.jdbc.EmbeddedDriver doesn't have this problem.
> Here follows some code that creates a database, a table, and tries to write a 
> BLOB. 2 parameters are to be changed for the code to work for you: 
> DERBY_DBMS_PATH and FILE
> import NetNoLedge.Configuration.Configs;
> import org.apache.derby.drda.NetworkServer

Re: Google SOC:MySQL to Derby Migration tool design question

2006-07-13 Thread Daniel John Debrunner
Satheesh Bandaram wrote:

> Jean T. Anderson wrote:
> 
> 
>>One thing to consider is DdlUtils is database agnostic. For example,
>>adding support for "create view" doesn't mean just adding it for Derby,
>>but also adding it for every database supported (see the list at
>>http://db.apache.org/ddlutils/database-support.html ).
>> 
>>
> 
> This is important... While ddlUtils goal is to support all databases,
> Ramin is attempting to fix his target database to Derby and to make
> source database mySQL for his migration utility. (for now) It doesn't
> seem like Ramin should take on adding database specific code to ddlUtils
> (that DatabaseMetadata might not expose directly) and to test on 13
> databases that ddlUtils currently supports.

Isn't the point of Google Summer of Code to introduce students to open
source development? This switch to ddlutils and the additional community
involvement is typical of open source: scratch your own itch, but also
benefit the community as well.

Dan.



[jira] Commented: (DERBY-1274) Network Server does not shutdown the databases it has booted when started and shutdown from the command line

2006-07-13 Thread Daniel John Debrunner (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1274?page=comments#action_12420888 ] 

Daniel John Debrunner commented on DERBY-1274:
--

In many cases "Cloudscape" should be replaced by "Derby", but not in *all* cases. 
Thus automated blanket changes are not appropriate; each change needs to be 
looked at individually. Examples are when the code is referring to Cloudscape 
releases prior to it being open sourced as Derby.

> Network Server does not shutdown the databases it has booted when started and 
> shutdown from the command line
> 
>
>  Key: DERBY-1274
>  URL: http://issues.apache.org/jira/browse/DERBY-1274
>  Project: Derby
> Type: Bug

>   Components: Network Server
> Versions: 10.2.0.0, 10.1.2.3
> Reporter: Kathey Marsden
> Assignee: Fernanda Pizzorno
>  Attachments: derby-1274.diff, derby-1274.stat, derby-1274v2.diff, 
> derby-1274v3.diff, derby-1274v3.stat, derby-1274v4.diff, derby-1274v4.stat
>
> If the network server is started and shut down from the command line, it does not 
> shut down the databases.   This is evidenced by the fact that the db.lck 
> file remains after the following steps.
> java org.apache.derby.drda.NetworkServerControl start &
> 
> java org.apache.derby.drda.NetworkServerControl shutdown
>  There is much discussion about the correct behavior of NetworkServer in this 
> regard related to embedded server scenarios in DERBY-51, but it seems clear 
> that in this case the databases should be shut down.
>  
>  
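
For background, a minimal sketch of how an application normally asks the embedded engine
to shut down, which is what releasing db.lck amounts to; this is standard documented Derby
JDBC usage shown only as context, not the Network Server code under discussion:

import java.sql.DriverManager;
import java.sql.SQLException;

public class ShutdownSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        try {
            // Shuts down the whole embedded engine; Derby signals a clean
            // shutdown by throwing an SQLException with SQLState XJ015.
            DriverManager.getConnection("jdbc:derby:;shutdown=true");
        } catch (SQLException se) {
            System.out.println("engine shutdown, SQLState=" + se.getSQLState());
        }
    }
}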

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

2006-07-13 Thread Fernanda Pizzorno (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-802?page=comments#action_12420883 ] 

Fernanda Pizzorno commented on DERBY-802:
-

I am reviewing your patch (derby-802.diff), but I won't have time to finish 
today. Can you wait a bit longer before you commit?

> OutofMemory Error when reading large blob when statement type is 
> ResultSet.TYPE_SCROLL_INSENSITIVE
> --
>
>  Key: DERBY-802
>  URL: http://issues.apache.org/jira/browse/DERBY-802
>  Project: Derby
> Type: Bug

>   Components: JDBC
> Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 10.1.2.0, 
> 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
>  Environment: all
> Reporter: Sunitha Kambhampati
> Assignee: Andreas Korneliussen
> Priority: Minor
>  Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff
>
> Grégoire Dubois on the list reported this problem.  From his mail: the 
> reproduction is attached below. 
> When the statement type is set to ResultSet.TYPE_SCROLL_INSENSITIVE, an 
> OutOfMemoryError is thrown when reading large BLOBs. 
> import java.sql.*;
> import java.io.*;
> /**
> *
> * @author greg
> */
> public class derby_filewrite_fileread {
>
> private static File file = new 
> File("/mnt/BigDisk/Clips/BabyMamaDrama-JShin.wmv");
> private static File destinationFile = new 
> File("/home/greg/DerbyDatabase/"+file.getName());
>
> /** Creates a new instance of derby_filewrite_fileread */
> public derby_filewrite_fileread() {   
>
> }
>
> public static void main(String args[]) {
> try {
> 
> Class.forName("org.apache.derby.jdbc.EmbeddedDriver").newInstance();
> Connection connection = DriverManager.getConnection 
> ("jdbc:derby:/home/greg/DerbyDatabase/BigFileTestDB;create=true", "APP", "");
> connection.setAutoCommit(false);
>
> Statement statement = 
> connection.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, 
> ResultSet.CONCUR_READ_ONLY);
> ResultSet result = statement.executeQuery("SELECT TABLENAME FROM 
> SYS.SYSTABLES");
>
> // Create table if it doesn't already exists.
> boolean exist=false;
> while ( result.next() ) {
> if ("db_file".equalsIgnoreCase(result.getString(1)))
> exist=true;
> }
> if ( !exist ) {
> System.out.println("Create table db_file.");
> statement.execute("CREATE TABLE db_file ("+
>" name  VARCHAR(40),"+
>" file  BLOB(2G) NOT 
> NULL)");
> connection.commit();
> }
>
> // Read file from disk, write on DB.
> System.out.println("1 - Read file from disk, write on DB.");
> PreparedStatement 
> preparedStatement=connection.prepareStatement("INSERT INTO db_file(name,file) 
> VALUES (?,?)");
> FileInputStream fileInputStream = new FileInputStream(file);
> preparedStatement.setString(1, file.getName());
> preparedStatement.setBinaryStream(2, fileInputStream, 
> (int)file.length());   
> preparedStatement.execute();
> connection.commit();
> System.out.println("2 - END OF Read file from disk, write on 
> DB.");
>
>
> // Read file from DB, and write on disk.
> System.out.println("3 - Read file from DB, and write on disk.");
> result = statement.executeQuery("SELECT file FROM db_file WHERE 
> name='"+file.getName()+"'");
> byte[] buffer = new byte [1024];
> result.next();
> BufferedInputStream inputStream=new 
> BufferedInputStream(result.getBinaryStream(1),1024);
> FileOutputStream outputStream = new 
> FileOutputStream(destinationFile);
> int readBytes = 0;
> while (readBytes!=-1) {
> readBytes=inputStream.read(buffer,0,buffer.length);
> if ( readBytes != -1 )
> outputStream.write(buffer, 0, readBytes);
> } 
> inputStream.close();
> outputStream.close();
> System.out.println("4 - END OF Read file from DB, and write on 
> disk.");
> }
> catch (Exception e) {
> e.printStackTrace(System.err);
> }
> }
> }
> It returns
> 1 - Read file from disk, write on DB.
> 2 - END OF Read file from disk, write on DB.
> 3 - Read file from DB, and write on disk.
> java.lang.OutOfMemoryError
> if the file is ~10MB or more

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:

[jira] Commented: (DERBY-1430) Test parameterMapping.java often fails with DerbyNetClient on Solarisx86

2006-07-13 Thread Mayuresh Nirhali (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1430?page=comments#action_12420882 ] 

Mayuresh Nirhali commented on DERBY-1430:
-

Here is my analysis on this issue so far:

This bug is seen with the jdbcapi/parameterMapping.java test (in the DerbyNetClient 
framework), but only inconsistently so far. I have run this test standalone several 
times in the last few days and have not seen this failure even once. I tried 
running the derbynetclientmats suite and have seen the exact failure only once. 
The platform here is identical to the one specified in the JIRA entry: SolX86, v10, 
with JVM 1.5.

However, I could reproduce this failure standalone while my machine 
was running the derbynetclientmats suite simultaneously. Here, I suspect some sort 
of race condition between clients trying to connect to the server listening on port 
1527. Another type of error, similar to the one mentioned in the JIRA entry, can also 
be observed for the same test when the test is run in this scenario. The other 
error seen is as below:


> FAIL unexpected exception -  (58009):Insufficient data while reading from the
network - expected a minimum of 6 bytes and received only -1 bytes.  The 
connection has been terminated.java.sql.SQLException: Insufficient data while 
reading from the network - expected a minimum of 6 bytes and received only -1 
bytes.  The connection has been terminated.
> Caused by: org.apache.derby.client.am.DisconnectException: Insufficient data 
> while reading from the network - expected a minimum of 6 bytes and received 
> only
-1 bytes.  The connection has been terminated.
>   ... 2 more
Test Failed.


I also looked at the test report history and found that the exact error was 
first seen on June 2nd (r411220), but with a different test. A lot of other 
tests have failed since then due to the same issue. This means that this issue is 
not particular to the test parameterMapping.java.

In the recent past, this issue has been seen only with the test 
parameterMapping.java, but inconsistently. Also, it seems to happen only when a 
bunch of tests (a suite) is run in the DerbyNetClient framework.

I haven't got to the root cause of this yet. I would like to understand how 
does the harness handle client requests ? Can 2 requests be active/valid in any 
way at a time ?? any pointers on harness design would be very helpful.

any inputs on what other info could help and how to gather ??
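
One way to rule out the suspected collision on port 1527 when reproducing
standalone is to point a standalone server at a different port using the public
NetworkServerControl API; a minimal sketch (port 1528 and the console writer are
arbitrary choices, not harness settings):

    import java.io.PrintWriter;
    import java.net.InetAddress;
    import org.apache.derby.drda.NetworkServerControl;

    public class StandalonePortRun {
        public static void main(String[] args) throws Exception {
            // Start a server on a port that cannot collide with the suite's server on 1527.
            NetworkServerControl server =
                new NetworkServerControl(InetAddress.getByName("localhost"), 1528);
            server.start(new PrintWriter(System.out, true));
            // Wait until the server answers a ping before running the test against it.
            for (int i = 0; i < 10; i++) {
                try { server.ping(); break; }
                catch (Exception e) { Thread.sleep(1000); }
            }
            // ... run the standalone test against port 1528 here ...
            server.shutdown();
        }
    }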


> Test parameterMapping.java often fails with DerbyNetClient on Solarisx86
> 
>
>  Key: DERBY-1430
>  URL: http://issues.apache.org/jira/browse/DERBY-1430
>  Project: Derby
> Type: Bug

>   Components: Regression Test Failure
> Versions: 10.2.0.0
>  Environment: derbyall on Solaris x86.  
> Reporter: Øystein Grøvlen
>  Fix For: 10.2.0.0
>  Attachments: derby.log
>
> parameterMapping.java has lately failed about every other day in the nightly 
> test on Solaris x86.   First time seen on June 4.  (Note that the computer 
> that this is run on has had its disk cache turned off lately.  Maybe there is 
> a connection?)  The test gets the following exception:
> FAIL unexpected exception -  (58009):Insufficient data while reading from the 
> network - expected a minimum of 6 bytes and received only -1 bytes.  The 
> connection has been terminated.java.sql.SQLException: Insufficient data while 
> reading from the network - expected a minimum of 6 bytes and received only -1 
> bytes.  The connection has been terminated.
>   at 
> org.apache.derby.client.am.SQLExceptionFactory.getSQLException(Unknown Source)
>   at org.apache.derby.client.am.SqlException.getSQLException(Unknown 
> Source)
>   at org.apache.derby.client.am.Connection.prepareStatement(Unknown 
> Source)
>   at 
> org.apache.derbyTesting.functionTests.tests.jdbcapi.parameterMapping.main(Unknown
>  Source)
> Caused by: org.apache.derby.client.am.DisconnectException: Insufficient data 
> while reading from the network - expected a minimum of 6 bytes and received 
> only -1 bytes.  The connection has been terminated.
>   at org.apache.derby.client.net.Reply.fill(Unknown Source)
>   at org.apache.derby.client.net.Reply.ensureALayerDataInBuffer(Unknown 
> Source)
>   at org.apache.derby.client.net.Reply.readDssHeader(Unknown Source)
>   at org.apache.derby.client.net.Reply.startSameIdChainParse(Unknown 
> Source)
>   at 
> org.apache.derby.client.net.NetStatementReply.readPrepareDescribeOutput(Unknown
>  Source)
>   at 
> org.apache.derby.client.net.StatementReply.readPrepareDescribeOutput(Unknown 
> Source)
>   at 
> org.apache.derby.client.net.NetStatement.readPrepareDescribeOutput_(Unknown 
> Source)
>   at 
> org.apache.derby.client.am.Statement.readPrepareDescribeOutput(Unknown Source)
>   at 
> org.apache.derby.client.am.PreparedStatement.readPrepareDescribeInputOutput(Unknown
>  Source)
>   at 
>

[jira] Commented: (DERBY-1274) Network Server does not shutdown the databases it has booted when started and shutdown from the command line

2006-07-13 Thread Fernanda Pizzorno (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1274?page=comments#action_12420881 ] 

Fernanda Pizzorno commented on DERBY-1274:
--

I saw that the code I uncommented said "Cloudscape" and thought of replacing it
with "Derby", but "Cloudscape" is used much more widely in that file
(NetworkServerControlImpl.java), so I did not change it. In my opinion it would
be better to replace all occurrences of "Cloudscape" with "Derby" than to change
only the part of the code I uncommented, but I don't think that should be done
as part of this issue. If nobody objects, I will create a JIRA issue for
replacing all occurrences of "Cloudscape" with "Derby" in
NetworkServerControlImpl.java.

> Network Server does not shutdown the databases it has booted when started and 
> shutdown from the command line
> 
>
>  Key: DERBY-1274
>  URL: http://issues.apache.org/jira/browse/DERBY-1274
>  Project: Derby
> Type: Bug

>   Components: Network Server
> Versions: 10.2.0.0, 10.1.2.3
> Reporter: Kathey Marsden
> Assignee: Fernanda Pizzorno
>  Attachments: derby-1274.diff, derby-1274.stat, derby-1274v2.diff, 
> derby-1274v3.diff, derby-1274v3.stat, derby-1274v4.diff, derby-1274v4.stat
>
> If the network server is started and shut down from the command line, it does
> not shut down the database. This is evidenced by the fact that the db.lck
> file remains after the following steps.
> java org.apache.derby.drda.NetworkServerControl start &
> 
> java org.apache.derby.drda.NetworkServerControl shutdown
>  There is much discussion about the correct behavior of NetworkServer in this 
> regard related to embedded server scenarios in DERBY-51, but it seems clear 
> in this  case the databases should be shutdown.
>  
>  

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Revoke REFERENCES privilege and drop foreign key constraint

2006-07-13 Thread Daniel John Debrunner
Bryan Pendleton wrote:
> [ Possible re-send of this message; I've been having email problems,
> sorry. ]
> 
>> I looked through alter table constant action to see what happens
>> when a user issues a drop constraint foreignkeyname, and it seems
>> like there is a lot more involved than simply calling the data
>> dictionary to drop the constraint descriptor.
> 
> What about refactoring this code and moving the extra code out of
> AlterTableConstantAction's drop constraint subroutine and into the data
> dictionary's drop constraint routine?

I think that's pushing too much knowledge of the SQL system into the
DataDictionary. A constraint may share the underlying index with other
SQL indexes, thus dropping the constraint must check usage on the
underlying index etc.

> Then, we could share this code between alter table drop constraint, and
> revoke privilege.

The ConstantAction class for the drop constraint already contains the
logic, so it could be the sharing point. Though, as Mamta showed, we
already have an easy API to do the sharing at the SQL level. All that is
required is a mechanism to run a SQL statement as another user; I think
that is something we will need in the future anyway, so it seems like a
good thing to add.

Dan.
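
For the record, the scenario whose logic is being shared looks like this at the
JDBC level; a minimal sketch with hypothetical users, tables and constraint
names (it assumes SQL authorization is enabled, and the comment on the final
REVOKE describes the intended effect under discussion, not existing behaviour):

    import java.sql.Connection;
    import java.sql.Statement;

    class RevokeReferencesSketch {
        // asAlice and asBob are connections authenticated as the respective users.
        static void sketch(Connection asAlice, Connection asBob) throws Exception {
            Statement alice = asAlice.createStatement();
            Statement bob = asBob.createStatement();
            alice.execute("CREATE TABLE ALICE.T (ID INT PRIMARY KEY)");
            // ALICE lets BOB reference ALICE.T.
            alice.execute("GRANT REFERENCES ON ALICE.T TO BOB");
            // BOB creates a foreign key that depends on that privilege.
            bob.execute("CREATE TABLE BOB.S (ID INT, CONSTRAINT FK_T "
                    + "FOREIGN KEY (ID) REFERENCES ALICE.T (ID))");
            // Revoking the privilege should drop BOB's dependent constraint, i.e.
            // behave as if BOB had run: ALTER TABLE BOB.S DROP CONSTRAINT FK_T
            alice.execute("REVOKE REFERENCES ON ALICE.T FROM BOB");
            alice.close();
            bob.close();
        }
    }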



Re: Optimizer patch reviews? (DERBY-781, DERBY-1357)

2006-07-13 Thread Army

Bryan Pendleton wrote:

> I will give you whatever feedback I have by the end of this weekend,

Thanks Bryan!

> although I don't expect to have many substantive comments to make.

Comments of any kind are appreciated--so please comment away...

Thanks again,
Army



Re: Revoke REFERENCES privilege and drop foreign key constraint

2006-07-13 Thread Bryan Pendleton

[ Possible re-send of this message; I've been having email problems, sorry. ]

> I looked through alter table constant action to see what happens
> when a user issues a drop constraint foreignkeyname, and it seems
> like there is a lot more involved than simply calling the data
> dictionary to drop the constraint descriptor.

What about refactoring this code and moving the extra code out of
AlterTableConstantAction's drop constraint subroutine and into the data
dictionary's drop constraint routine?

Then, we could share this code between alter table drop constraint, and
revoke privilege.

thanks,

bryan



[jira] Commented: (DERBY-550) BLOB : java.lang.OutOfMemoryError with network JDBC driver (org.apache.derby.jdbc.ClientDriver)

2006-07-13 Thread Bryan Pendleton (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-550?page=comments#action_12420877 ] 

Bryan Pendleton commented on DERBY-550:
---


RE: Andreas's observation that the LOB implementation classes would need to be
reimplemented so that they do the streaming.

I'm wondering whether this might accidentally introduce other new and 
undesirable behaviors.

The net effect of a change like this is that the LOB data will remain in the 
network pipe, or be queued on the server side, until the blob is read by the 
client.

But what if that takes a long time, or in fact never happens?

We might need a way to "cancel" a partially-sent BLOB object which was returned 
by the server but which the client for whatever reason never decided to read.

The current "greedy" algorithm seems to ensure that we minimize the risk of 
producer-consumer deadlocks of various sorts, at the expense of accumulating 
the entire data into memory.

I hope this makes sense. I don't know of an actual problem here; I just have a 
funny feeling that this change is going to be rather tricky to accomplish.
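
To make the "never decided to read" case concrete, here is a minimal sketch of a
consumer that abandons the value partway through, using only standard JDBC calls
and the table from the repro program attached to this issue:

    import java.io.InputStream;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    class AbandonedBlobReadSketch {
        // Reads only the first few bytes of a BLOB column and then gives up.
        static void peekAndAbandon(Connection connection) throws Exception {
            Statement statement = connection.createStatement();
            ResultSet result = statement.executeQuery("SELECT file FROM db_file");
            if (result.next()) {
                InputStream in = result.getBinaryStream(1);
                byte[] header = new byte[16];
                in.read(header);   // look at the first few bytes only
                // ... decide we are not interested in the rest ...
            }
            // With today's greedy client the whole value is already in memory here.
            // With a streaming client, the unread remainder is still in the pipe or
            // queued on the server when we close, which is the case raised above.
            result.close();
            statement.close();
        }
    }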




> BLOB : java.lang.OutOfMemoryError with network JDBC driver 
> (org.apache.derby.jdbc.ClientDriver)
> ---
>
>  Key: DERBY-550
>  URL: http://issues.apache.org/jira/browse/DERBY-550
>  Project: Derby
> Type: Bug

>   Components: JDBC, Network Server
> Versions: 10.1.1.0
>  Environment: Any environment.
> Reporter: Grégoire Dubois
> Assignee: Tomohito Nakayama
>  Attachments: BlobOutOfMem.java
>
> Using the org.apache.derby.jdbc.ClientDriver driver to access the
> Derby database over the network, the driver writes the whole file into
> memory (RAM) before sending it to the database.
> Writing small files (smaller than 5 MB) into the database works fine,
> but it is impossible to write big files (40 MB for example, or more) without
> getting the exception java.lang.OutOfMemoryError.
> The org.apache.derby.jdbc.EmbeddedDriver doesn't have this problem.
> Here follows some code that creates a database, a table, and tries to write a
> BLOB. Two parameters need to be changed for the code to work for you:
> DERBY_DBMS_PATH and FILE
> import NetNoLedge.Configuration.Configs;
> import org.apache.derby.drda.NetworkServerControl;
> import java.net.InetAddress;
> import java.io.*;
> import java.sql.*;
>
> /**
>  * @author greg
>  */
> public class DerbyServer_JDBC_BLOB_test {
>
>     // The unique instance of DerbyServer in the application.
>     private static DerbyServer_JDBC_BLOB_test derbyServer;
>
>     private NetworkServerControl server;
>
>     private static final String DERBY_JDBC_DRIVER = "org.apache.derby.jdbc.ClientDriver";
>     private static final String DERBY_DATABASE_NAME = "Test";
>
>     // ### SET HERE THE EXISTING PATH YOU WANT ###
>     private static final String DERBY_DBMS_PATH = "/home/greg/DatabaseTest";
>
>     private static int derbyPort = 9157;
>     private static String userName = "user";
>     private static String userPassword = "password";
>
>     // ### DEFINE HERE THE PATH TO THE FILE YOU WANT TO WRITE INTO THE DATABASE ###
>     // ### TRY A 100kb-3Mb FILE, AND AFTER A 40Mb OR BIGGER FILE ###
>     private static final File FILE = new File("/home/greg/01.jpg");
>
>     /**
>      * Used to test the server.
>      */
>     public static void main(String args[]) {
>         try {
>             DerbyServer_JDBC_BLOB_test.launchServer();
>             DerbyServer_JDBC_BLOB_test server = getUniqueInstance();
>             server.start();
>             System.out.println("Server started");
>
>             // After the server has been started, launch a first connection to the database to
>             // 1) Create the database if it doesn't exist already,
>             // 2) Create the tables if they don't exist already.
>             Class.forName(DERBY_JDBC_DRIVER).newInstance();
>             Connection connection = DriverManager.getConnection
>                 ("jdbc:derby://localhost:"+derbyPort+"/"+DERBY_DATABASE_NAME+";create=true",
>                 userNa

Re: Optimizer patch reviews? (DERBY-781, DERBY-1357)

2006-07-13 Thread Bryan Pendleton

> I posted two patches for some optimizer changes a little over a week
> ago: one for DERBY-781 and one for DERBY-1357.


Hi Army,

I have been reading your wonderful DERBY-781 document. I will give you
whatever feedback I have by the end of this weekend, although I don't
expect to have many substantive comments to make.

Thanks very much for putting the effort into writing up the changes;
it is not wasted work.

bryan





Re: Language based matching

2006-07-13 Thread Bryan Pendleton
"Is there some easy Java regular expression matching function  like 
String.matches(Collator collator, String pattern, String value)? "


Here's an article from 1999 in which some person apparently decided
that they needed to write such a thing themselves.

Warning: the code described in this article appears to be patented,
and thus probably isn't acceptable for an Open Source project such as ours.

http://www-128.ibm.com/developerworks/java/library/j-text-searching.html

thanks,

bryan
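
For comparison, what the JDK offers out of the box is collation-aware comparison
and ordering rather than pattern matching; a minimal sketch (locale and strength
chosen only for illustration):

    import java.text.Collator;
    import java.util.Locale;

    public class CollatorSketch {
        public static void main(String[] args) {
            Collator collator = Collator.getInstance(Locale.US);
            // PRIMARY strength ignores case and accent differences.
            collator.setStrength(Collator.PRIMARY);
            System.out.println(collator.compare("resume", "Résumé") == 0);  // true
            System.out.println(collator.compare("apple", "Banana") < 0);    // true
            // There is no Collator-aware String.matches()/LIKE in the JDK itself;
            // java.text.CollationElementIterator is the usual building block for
            // implementing language-based pattern matching on top of a Collator.
        }
    }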



[jira] Commented: (DERBY-253) Client should throw not implemented exception for depricated setUnicodeStream/getUnicodeStream

2006-07-13 Thread Olav Sandstaa (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-253?page=comments#action_12420864 ] 

Olav Sandstaa commented on DERBY-253:
-

Knut Anders, thanks for the release note proposal. This looks very good.

> Client should throw not implemented exception for depricated 
> setUnicodeStream/getUnicodeStream
> --
>
>  Key: DERBY-253
>  URL: http://issues.apache.org/jira/browse/DERBY-253
>  Project: Derby
> Type: Bug

>   Components: Network Client, JDBC
> Versions: 10.1.1.0
> Reporter: Kathey Marsden
> Assignee: Olav Sandstaa
>  Fix For: 10.2.0.0
>  Attachments: derby253.diff
>
> setUnicodeStream and getUnicodeStream are deprecated API's 
> Network client
> PreparedStatement.setUnicodeStream() and ResultSet.getUnicodeStream() should 
> throw not implemented exceptions rather than trying to handle these calls.
> Note: The current client implementation of setUnicodeStream() and 
> getUnicodeStream() are broken and can cause unexpected errors

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-1274) Network Server does not shutdown the databases it has booted when started and shutdown from the command line

2006-07-13 Thread Knut Anders Hatlen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1274?page=comments#action_12420862 ] 

Knut Anders Hatlen commented on DERBY-1274:
---

I think the patch looks good. If you really want to start a network server from 
the command line inside the test, you could have a look at 
derbynet/testconnection.java. However, I don't think that would improve the 
test very much.

One minor comment: The code that you uncommented says "Cloudscape" many times. 
Maybe it should be changed to Derby?

> Network Server does not shutdown the databases it has booted when started and 
> shutdown from the command line
> 
>
>  Key: DERBY-1274
>  URL: http://issues.apache.org/jira/browse/DERBY-1274
>  Project: Derby
> Type: Bug

>   Components: Network Server
> Versions: 10.2.0.0, 10.1.2.3
> Reporter: Kathey Marsden
> Assignee: Fernanda Pizzorno
>  Attachments: derby-1274.diff, derby-1274.stat, derby-1274v2.diff, 
> derby-1274v3.diff, derby-1274v3.stat, derby-1274v4.diff, derby-1274v4.stat
>
> If the network server is started and shut down from the command line, it does
> not shut down the database. This is evidenced by the fact that the db.lck
> file remains after the following steps.
> java org.apache.derby.drda.NetworkServerControl start &
> 
> java org.apache.derby.drda.NetworkServerControl shutdown
>  There is much discussion about the correct behavior of NetworkServer in this 
> regard related to embedded server scenarios in DERBY-51, but it seems clear 
> in this  case the databases should be shutdown.
>  
>  

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-982) sysinfo api does not provide genus name for client

2006-07-13 Thread Kristian Waagan (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-982?page=comments#action_12420855 ] 

Kristian Waagan commented on DERBY-982:
---

I had a look at the patch, and also downloaded it and tried running the test.
The patch applies cleanly, but the test fails.
Is it supposed to be run only on release/snapshot distributions? I'm asking
because the test seems to depend on a build target in 'tools/release/build.xml'.
I can't see that the test has been enabled in any suites either.

The diff tells me that only the getX() and getX(sysinfo.DBMS) calls are
returning correct results. The rest return "" or -1.
Since I'm not sure whether I'm running the test correctly, can somebody who
knows how to run it report their findings?

If the test is actually working (when run the correct way), I think the patch
could be committed. It has been around for a while.
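
For context, the existing getX()/getX(sysinfo.DBMS) calls mentioned above look
roughly like this; a sketch only, with the signatures assumed from the published
sysinfo javadoc linked in the issue:

    import org.apache.derby.tools.sysinfo;

    public class SysinfoSketch {
        public static void main(String[] args) {
            // No-arg form reports on the engine.
            System.out.println(sysinfo.getMajorVersion() + "." + sysinfo.getMinorVersion());
            // Genus-qualified form, the "getX(sysinfo.DBMS)" style referred to above.
            System.out.println(sysinfo.getMajorVersion(sysinfo.DBMS) + "."
                    + sysinfo.getMinorVersion(sysinfo.DBMS));
            // DERBY-982 adds an analogous genus for the client; no constant is shown
            // here because its name is exactly what the patch defines.
        }
    }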

> sysinfo api does not provide genus name for client
> --
>
>  Key: DERBY-982
>  URL: http://issues.apache.org/jira/browse/DERBY-982
>  Project: Derby
> Type: Bug

>   Components: Tools
> Versions: 10.1.2.1
> Reporter: Kathey Marsden
> Assignee: Andrew McIntyre
>  Fix For: 10.2.0.0
>  Attachments: derby-982.diff, derby-982_v2.diff
>
> The sysinfo api does not provide access to the genus name for client to allow 
> applications to retrieve information from sysinfo about the client 
> information.
> http://db.apache.org/derby/javadoc/publishedapi/org/apache/derby/tools/sysinfo.html
> Note: Currently ProductGenusNames has a genus name for network server but 
> network server is closely tied to  the engine so should always have the same 
> version.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



enabling tracing info while running tests

2006-07-13 Thread Mayuresh Nirhali

Hello,

I am trying to get tracing info for a test run standalone. The test runs fine,
but I do not see the trace file being created.


The command I use is as below,


java -cp $CLASSPATH -Dframework=DerbyNetClient 
-DtestSpecialProps=derby.infolog.append=true^derby.drda.traceFile=./trace.out^derby.drda.traceLevel=org.apache.derby.jdbc.ClientDataSource.TRACE_PROTOCOL_FLOWS 
org.apache.derbyTesting.functionTests.harness.RunTest 
jdbcapi/parameterMapping.java


Is there anything that I am missing?

What is the best way to generate tracing data for tests?


TIA
Mayuresh
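
For client-side protocol tracing outside the harness, the same trace settings
can also be set programmatically on the client DataSource; a minimal sketch
(server name, port and database are placeholders, and the traceFile/traceLevel
bean properties are assumed from the client driver's documented attributes):

    import java.sql.Connection;
    import org.apache.derby.jdbc.ClientDataSource;

    public class ClientTraceSketch {
        public static void main(String[] args) throws Exception {
            ClientDataSource ds = new ClientDataSource();
            ds.setServerName("localhost");
            ds.setPortNumber(1527);
            ds.setDatabaseName("testdb");
            // Route the DRDA protocol flow trace to a file.
            ds.setTraceFile("trace.out");
            ds.setTraceLevel(ClientDataSource.TRACE_PROTOCOL_FLOWS);
            Connection conn = ds.getConnection();
            conn.close();
        }
    }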


[jira] Updated: (DERBY-1274) Network Server does not shutdown the databases it has booted when started and shutdown from the command line

2006-07-13 Thread Fernanda Pizzorno (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1274?page=all ]

Fernanda Pizzorno updated DERBY-1274:
-

Attachment: derby-1274v4.diff
derby-1274v4.stat

The attached patch (derby-1274v4.diff) is the same as v3 with the typos fixed
(see John's comments).

> Network Server does not shutdown the databases it has booted when started and 
> shutdown from the command line
> 
>
>  Key: DERBY-1274
>  URL: http://issues.apache.org/jira/browse/DERBY-1274
>  Project: Derby
> Type: Bug

>   Components: Network Server
> Versions: 10.2.0.0, 10.1.2.3
> Reporter: Kathey Marsden
> Assignee: Fernanda Pizzorno
>  Attachments: derby-1274.diff, derby-1274.stat, derby-1274v2.diff, 
> derby-1274v3.diff, derby-1274v3.stat, derby-1274v4.diff, derby-1274v4.stat
>
> If the network server is started and shut down from the command line, it does
> not shut down the database. This is evidenced by the fact that the db.lck
> file remains after the following steps.
> java org.apache.derby.drda.NetworkServerControl start &
> 
> java org.apache.derby.drda.NetworkServerControl shutdown
>  There is much discussion about the correct behavior of NetworkServer in this 
> regard related to embedded server scenarios in DERBY-51, but it seems clear 
> in this  case the databases should be shutdown.
>  
>  

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-253) Client should throw not implemented exception for depricated setUnicodeStream/getUnicodeStream

2006-07-13 Thread Knut Anders Hatlen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-253?page=all ]

Knut Anders Hatlen updated DERBY-253:
-

Derby Info: [Release Note Needed]

Yes, I think we should mention this in the release notes. Here's my proposal:

PROBLEM

PreparedStatement.setUnicodeStream() and ResultSet.getUnicodeStream()
throw an SQLException when invoked after upgrading to Apache Derby 10.2.

SYMPTOM

Calling either of these methods will result in an exception with
SQLSTATE 0A000 and message: "Feature not implemented: ..."

CAUSE

PreparedStatement.setUnicodeStream() and ResultSet.getUnicodeStream()
have been deprecated since JDBC 2.0. Derby's implementation of these
methods was broken, and it was decided that the methods should throw a
not-implemented exception instead of being fixed.

SOLUTION

This was an intentional change. No Derby product solution is offered.

WORKAROUND

Use setCharacterStream() and getCharacterStream() instead of
setUnicodeStream() and getUnicodeStream().
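
A minimal before/after sketch of the workaround (table and column names are
hypothetical):

    import java.io.Reader;
    import java.io.StringReader;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    class CharacterStreamWorkaround {
        static void sketch(Connection conn) throws Exception {
            String value = "some character data";
            PreparedStatement ps = conn.prepareStatement("INSERT INTO T (C) VALUES (?)");
            // Instead of ps.setUnicodeStream(1, in, length):
            ps.setCharacterStream(1, new StringReader(value), value.length());
            ps.executeUpdate();

            ResultSet rs = conn.createStatement().executeQuery("SELECT C FROM T");
            while (rs.next()) {
                // Instead of rs.getUnicodeStream(1):
                Reader r = rs.getCharacterStream(1);
                // ... consume the reader ...
                r.close();
            }
        }
    }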

> Client should throw not implemented exception for depricated 
> setUnicodeStream/getUnicodeStream
> --
>
>  Key: DERBY-253
>  URL: http://issues.apache.org/jira/browse/DERBY-253
>  Project: Derby
> Type: Bug

>   Components: Network Client, JDBC
> Versions: 10.1.1.0
> Reporter: Kathey Marsden
> Assignee: Olav Sandstaa
>  Fix For: 10.2.0.0
>  Attachments: derby253.diff
>
> setUnicodeStream and getUnicodeStream are deprecated API's 
> Network client
> PreparedStatement.setUnicodeStream() and ResultSet.getUnicodeStream() should 
> throw not implemented exceptions rather than trying to handle these calls.
> Note: The current client implementation of setUnicodeStream() and 
> getUnicodeStream() are broken and can cause unexpected errors

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-1274) Network Server does not shutdown the databases it has booted when started and shutdown from the command line

2006-07-13 Thread John H. Embretsen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-1274?page=comments#action_12420844 ] 

John H. Embretsen commented on DERBY-1274:
--

I have taken a look at the patch derby-1274v3.diff, and I think it looks good. 

I found two minor typos in ShutDownDBWhenNSShutsDownTest.java:

- class level JavaDoc: "when started form the command line" should be "when 
started from the command line".

- first comment inside testDatabasesShutDownWhenNSShutdown() method:
 "Check that the database will be shut down when the server is start shut 
down." I think the word "start" should be removed here.

I do not know of a very clean and easy way of starting the Network Server from 
the command line from inside the test itself, without potentially losing some 
important property or setting that the harness usually sets. The 
harness.NetServer.java class might be helpful. Does anyone else have any ideas?
In the meantime, I think the way it is done in v3.diff is good enough.
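
One way to get a genuine command-line start from inside a test is to fork a
separate JVM for the NetworkServerControl main class; a rough sketch (it
inherits the test JVM's classpath and would still miss any extra properties the
harness normally sets, which is the concern raised above):

    import java.net.InetAddress;
    import org.apache.derby.drda.NetworkServerControl;

    class CommandLineServerSketch {
        // Forks "java org.apache.derby.drda.NetworkServerControl start" so that the
        // command-line startup/shutdown path exercised by DERBY-1274 is used.
        static Process startServerFromCommandLine() throws Exception {
            String[] cmd = {
                "java", "-cp", System.getProperty("java.class.path"),
                "org.apache.derby.drda.NetworkServerControl", "start"
            };
            Process server = Runtime.getRuntime().exec(cmd);
            // Wait until the server answers a ping before returning.
            NetworkServerControl control =
                    new NetworkServerControl(InetAddress.getByName("localhost"), 1527);
            for (int i = 0; i < 60; i++) {
                try { control.ping(); return server; }
                catch (Exception e) { Thread.sleep(500); }
            }
            throw new Exception("server did not come up");
        }
    }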




> Network Server does not shutdown the databases it has booted when started and 
> shutdown from the command line
> 
>
>  Key: DERBY-1274
>  URL: http://issues.apache.org/jira/browse/DERBY-1274
>  Project: Derby
> Type: Bug

>   Components: Network Server
> Versions: 10.2.0.0, 10.1.2.3
> Reporter: Kathey Marsden
> Assignee: Fernanda Pizzorno
>  Attachments: derby-1274.diff, derby-1274.stat, derby-1274v2.diff, 
> derby-1274v3.diff, derby-1274v3.stat
>
> If the network server is started and shut down from the command line, it does
> not shut down the database. This is evidenced by the fact that the db.lck
> file remains after the following steps.
> java org.apache.derby.drda.NetworkServerControl start &
> 
> java org.apache.derby.drda.NetworkServerControl shutdown
>  There is much discussion about the correct behavior of NetworkServer in this 
> regard related to embedded server scenarios in DERBY-51, but it seems clear 
> in this  case the databases should be shutdown.
>  
>  

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Resolved: (DERBY-253) Client should throw not implemented exception for depricated setUnicodeStream/getUnicodeStream

2006-07-13 Thread Olav Sandstaa (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-253?page=all ]
 
Olav Sandstaa resolved DERBY-253:
-

Resolution: Fixed

Verified that the patch is in. Thanks for committing it, Knut Anders!

> Client should throw not implemented exception for depricated 
> setUnicodeStream/getUnicodeStream
> --
>
>  Key: DERBY-253
>  URL: http://issues.apache.org/jira/browse/DERBY-253
>  Project: Derby
> Type: Bug

>   Components: Network Client, JDBC
> Versions: 10.1.1.0
> Reporter: Kathey Marsden
> Assignee: Olav Sandstaa
>  Fix For: 10.2.0.0
>  Attachments: derby253.diff
>
> setUnicodeStream and getUnicodeStream are deprecated API's 
> Network client
> PreparedStatement.setUnicodeStream() and ResultSet.getUnicodeStream() should 
> throw not implemented exceptions rather than trying to handle these calls.
> Note: The current client implementation of setUnicodeStream() and 
> getUnicodeStream() are broken and can cause unexpected errors

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-528) Support for DRDA Strong User ID and Password Substitute Authentication (USRSSBPWD) scheme

2006-07-13 Thread Francois Orsini (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-528?page=all ]

Francois Orsini updated DERBY-528:
--

Attachment: 528_stat_v2.txt
528_diff_v2.txt

Thanks for the comments / feedback.

I have attached some new changes, which include some bug fixes, and merged with
the latest trunk as well.

@Bernt
- I have removed NetConnectionRequest.java.
- USRIDPWD is the default if EUSRIDPWD is not supported by the client. I had
made USRSSBPWD (strong password substitute) the default, since it can be
supported by all clients >= 10.2 and by JVMs from 1.3.1 on, but I have reverted
to a default of USRIDPWD because of DERBY-926: making USRSSBPWD the default
would cause a protocol exception against Derby servers prior to 10.2, at least
until DERBY-926 is fixed and can be handled better on the client, and until the
server does the right thing when returning the supported SECMECs as part of
ACCSECRD.
- Regarding EncryptionManager and DecryptionManager: there are comments in the
code stating that these classes will be refactored to be more modular, as they
share a lot of similar code. That will also make it easier to add support for
other DRDA security mechanisms. I will log a JIRA and would like to implement
this separately, as I had started to do when we were on the topic of shared
code/classes some months ago.

So for now, USRSSBPWD is no longer the default after EUSRIDPWD in the client,
until DERBY-926 is fixed or a temporary handling of the protocol exception
reported in DERBY-926 is doable in Derby's client driver.

@Kathey - Yes, I have tested all the compatibility combos. My main issue is
DERBY-926, which causes the COMPAT test to fail when going from a 10.2 client to
a pre-10.2 server; otherwise all the tests were passing. If I can put a
temporary workaround for the protocol exception (DERBY-926) in the client, I
will put USRSSBPWD back as the default secMec to use on the client _when_
EUSRIDPWD cannot be used. In the meantime, we can leave USRIDPWD as the second
default in ClientBaseDataSource until either a workaround is found or DERBY-926
is fixed (after the commit of this JIRA). I have also traced that the correct
message exchanges are happening.
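
From the application side, the scheme is selected through the client's
securityMechanism property; a rough sketch (the STRONG_PASSWORD_SUBSTITUTE_SECURITY
constant name is an assumption about what this patch exposes, and host, port and
credentials are placeholders):

    import java.sql.Connection;
    import org.apache.derby.jdbc.ClientDataSource;

    class SecurityMechanismSketch {
        static Connection connect() throws Exception {
            ClientDataSource ds = new ClientDataSource();
            ds.setServerName("localhost");
            ds.setPortNumber(1527);
            ds.setDatabaseName("testdb");
            ds.setUser("user");
            ds.setPassword("password");
            // Ask for strong password substitution (USRSSBPWD); constant name assumed.
            ds.setSecurityMechanism(ClientDataSource.STRONG_PASSWORD_SUBSTITUTE_SECURITY);
            return ds.getConnection();
        }
    }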

> Support for DRDA Strong User ID and Password Substitute Authentication 
> (USRSSBPWD) scheme
> -
>
>  Key: DERBY-528
>  URL: http://issues.apache.org/jira/browse/DERBY-528
>  Project: Derby
> Type: New Feature

>   Components: Security
> Versions: 10.1.1.0
> Reporter: Francois Orsini
> Assignee: Francois Orsini
>  Fix For: 10.2.0.0
>  Attachments: 528_SecMec_Testing_Table.txt, 528_diff_v1.txt, 528_diff_v2.txt, 
> 528_stat_v1.txt, 528_stat_v2.txt
>
> This JIRA will add support for (DRDA) Strong User ID and Password Substitute 
> Authentication (USRSSBPWD) scheme in the network client/server driver layers.
> Current Derby DRDA network client  driver supports encrypted userid/password 
> (EUSRIDPWD) via the use of DH key-agreement protocol - however current Open 
> Group DRDA specifications imposes small prime and base generator values (256 
> bits) that prevents other JCE's  to be used as java cryptography providers - 
> typical minimum security requirements is usually of 1024 bits (512-bit 
> absolute minimum) when using DH key-agreement protocol to generate a session 
> key.
> Strong User ID and Password Substitute Authentication (USRSSBPWD) is part of 
> DRDA specifications as another alternative to provide ciphered passwords 
> across the wire.
> Support of USRSSBPWD authentication scheme will enable additional JCE's to  
> be used when encrypted passwords are required across the wire.
> USRSSBPWD authentication scheme will be specified by a Derby network client 
> user via the securityMechanism property on the connection UR - A new property 
> value such as ENCRYPTED_PASSWORD_SECURITY will be defined in order to support 
> this new (DRDA) authentication scheme.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Updated: (DERBY-253) Client should throw not implemented exception for depricated setUnicodeStream/getUnicodeStream

2006-07-13 Thread Olav Sandstaa (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-253?page=all ]

Olav Sandstaa updated DERBY-253:


Derby Info:   (was: [Patch Available])

> Client should throw not implemented exception for depricated 
> setUnicodeStream/getUnicodeStream
> --
>
>  Key: DERBY-253
>  URL: http://issues.apache.org/jira/browse/DERBY-253
>  Project: Derby
> Type: Bug

>   Components: Network Client, JDBC
> Versions: 10.1.1.0
> Reporter: Kathey Marsden
> Assignee: Olav Sandstaa
>  Fix For: 10.2.0.0
>  Attachments: derby253.diff
>
> setUnicodeStream and getUnicodeStream are deprecated API's 
> Network client
> PreparedStatement.setUnicodeStream() and ResultSet.getUnicodeStream() should 
> throw not implemented exceptions rather than trying to handle these calls.
> Note: The current client implementation of setUnicodeStream() and 
> getUnicodeStream() are broken and can cause unexpected errors

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-253) Client should throw not implemented exception for depricated setUnicodeStream/getUnicodeStream

2006-07-13 Thread Olav Sandstaa (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-253?page=comments#action_12420833 ] 

Olav Sandstaa commented on DERBY-253:
-

Thanks for reviewing and committing this patch, Knut Anders!

Since this patch removes functionality from the client driver (although 
probably broken functionality), is this something that needs a comment in the 
release notes?

> Client should throw not implemented exception for depricated 
> setUnicodeStream/getUnicodeStream
> --
>
>  Key: DERBY-253
>  URL: http://issues.apache.org/jira/browse/DERBY-253
>  Project: Derby
> Type: Bug

>   Components: Network Client, JDBC
> Versions: 10.1.1.0
> Reporter: Kathey Marsden
> Assignee: Olav Sandstaa
>  Fix For: 10.2.0.0
>  Attachments: derby253.diff
>
> setUnicodeStream and getUnicodeStream are deprecated API's 
> Network client
> PreparedStatement.setUnicodeStream() and ResultSet.getUnicodeStream() should 
> throw not implemented exceptions rather than trying to handle these calls.
> Note: The current client implementation of setUnicodeStream() and 
> getUnicodeStream() are broken and can cause unexpected errors

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Commented: (DERBY-253) Client should throw not implemented exception for depricated setUnicodeStream/getUnicodeStream

2006-07-13 Thread Knut Anders Hatlen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-253?page=comments#action_12420831 ] 

Knut Anders Hatlen commented on DERBY-253:
--

The tests ran cleanly. Committed revision 421570. Thanks Olav!

> Client should throw not implemented exception for depricated 
> setUnicodeStream/getUnicodeStream
> --
>
>  Key: DERBY-253
>  URL: http://issues.apache.org/jira/browse/DERBY-253
>  Project: Derby
> Type: Bug

>   Components: Network Client, JDBC
> Versions: 10.1.1.0
> Reporter: Kathey Marsden
> Assignee: Olav Sandstaa
>  Fix For: 10.2.0.0
>  Attachments: derby253.diff
>
> setUnicodeStream and getUnicodeStream are deprecated API's 
> Network client
> PreparedStatement.setUnicodeStream() and ResultSet.getUnicodeStream() should 
> throw not implemented exceptions rather than trying to handle these calls.
> Note: The current client implementation of setUnicodeStream() and 
> getUnicodeStream() are broken and can cause unexpected errors

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Closed: (DERBY-1497) assert failure in MessageUtil, because exception thrown with too many parameters when handling OutOfMemoryError

2006-07-13 Thread Andreas Korneliussen (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-1497?page=all ]
 
Andreas Korneliussen closed DERBY-1497:
---


> assert failure in MessageUtil, because exception thrown with too many 
> parameters when handling OutOfMemoryError
> ---
>
>  Key: DERBY-1497
>  URL: http://issues.apache.org/jira/browse/DERBY-1497
>  Project: Derby
> Type: Sub-task

>   Components: Network Client
> Versions: 10.2.0.0
> Reporter: Andreas Korneliussen
> Assignee: Andreas Korneliussen
> Priority: Trivial
>  Fix For: 10.2.0.0
>  Attachments: DERBY-1497.diff, DERBY-1497v2.diff
>
> If the VM throws an OutOfMemoryError, which is caught in
> NetStatementReply.copyEXTDTA:
>
> protected void copyEXTDTA(NetCursor netCursor) throws DisconnectException {
>     try {
>         parseLengthAndMatchCodePoint(CodePoint.EXTDTA);
>         byte[] data = null;
>         if (longValueForDecryption_ == null) {
>             data = (getData(null)).toByteArray();
>         } else {
>             data = longValueForDecryption_;
>             dssLength_ = 0;
>             longValueForDecryption_ = null;
>         }
>         netCursor.extdtaData_.add(data);
>     } catch (java.lang.OutOfMemoryError e) {   // <-- OutOfMemoryError caught here
>         agent_.accumulateChainBreakingReadExceptionAndThrow(new DisconnectException(agent_,
>             new ClientMessageId(SQLState.NET_LOB_DATA_TOO_LARGE_FOR_JVM),
>             e));   // <-- message takes no parameters, causing the assert failure
>     }
> }
> Instead of getting the message: java.sql.SQLException: Attempt to fully 
> materialize lob data that is too large for the JVM.  The connection has been 
> terminated.
> I am getting an assert: 
> Exception in thread "main" 
> org.apache.derby.shared.common.sanity.AssertFailure: ASSERT FAILED Number of 
> parameters expected for message id 58009.C.6 (0) does not match number of 
> arguments received (1)
> at 
> org.apache.derby.shared.common.sanity.SanityManager.ASSERT(SanityManager.java:119)
>  

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira


