Re: Question about setTransactionIsolation in network client driver

2005-10-21 Thread Andreas Korneliussen

Deepa Remesh wrote:

When autocommit is set to false, a call to setTransactionIsolation
using the client driver does not end the transaction when the method
exits. When close() is called on the connection, it throws an
exception.

Running the code below:

conn.setAutoCommit(false);
conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
try {
    conn.close();
} catch (SQLException se) {
    System.out.println("Got exception when closing the connection");
    se.printStackTrace();
}

With the client driver, this gives:
Got exception when closing the connection
org.apache.derby.client.am.SqlException: java.sql.Connection.close()
requested while a transaction is in progress on the connection. The
transaction remains active, and the connection cannot be closed.

With the embedded driver, it works okay and does not throw any exception.

This looks like a bug to me. Can someone please confirm? If I don't
hear otherwise, I'll open a JIRA issue tomorrow.

Thanks Kathey for bringing this up.


Hi,
Yes, this looks like a bug in the client driver. A call to 
setTransactionIsolation() should cause the current transaction to commit.
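
Until that is fixed, a workaround is to end the transaction explicitly
before closing - a minimal sketch, assuming conn is the Connection from
the repro above:

conn.setAutoCommit(false);
conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
conn.commit();   // explicitly end the transaction the client driver left open
conn.close();    // should now succeed with both drivers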


Andreas


Thanks,
Deepa




derby developer access

2005-10-21 Thread Andreas Korneliussen

Hi,
I would like to get developer access to JIRA so that I can assign myself 
to an issue. My Jira user name is: andreask


Thanks
-- Andreas


Re: Question about setTransactionIsolation in network client driver

2005-10-21 Thread Andreas Korneliussen

Roy's email was meant to be sent to me personally to correct my answer.

He says that according to the SQL spec, it is not allowed to set
transaction isolation inside a transaction; however, setting transaction
isolation should not open a transaction either.


My answer was based on the "JDBC API Tutorial and Reference, Third Edition".

-- Andreas

Nope. It is not allowed to set transaction isolation inside a
transaction, but setting transaction isolation should not open a
transaction either...


Roy





Re: [jira] Commented: (DERBY-231) "FOR UPDATE" required for updatable result set to work

2005-10-28 Thread Andreas Korneliussen

How this interacts with the statement cache needs to be considered. The
current statement cache is looked up by the current schema and the text
of the query string. You may be going in a direction where the same text
'SELECT * FROM T' leads to different plans depending on the updatable
state of the result set.



I would need to look further into that; however, I think that by making
the new field for concurrency mode part of the identity of the
GenericStatement, the cache would be correct.
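
Something along these lines (a sketch only - field names hypothetical,
not the actual GenericStatement code):

public boolean equals(Object other) {
    if (!(other instanceof GenericStatement))
        return false;
    GenericStatement os = (GenericStatement) other;
    return statementText.equals(os.statementText)
        && compilationSchema.equals(os.compilationSchema)
        && concurrencyMode == os.concurrencyMode;  // the new field
}

public int hashCode() {
    // concurrency mode need not contribute; equals() keeps the
    // cache entries apart
    return statementText.hashCode();
}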





2. If the concurrency mode for the java.sql.Statement object is
CONCUR_READ_ONLY, the update mode will be set to READ_ONLY. If the query
string contains "for update" an error will be thrown.



That would be incorrect.
  1) Derby still needs to support positioned UPDATE and DELETE, in that
case it is fine to have a FOR UPDATE clause with a read only ResultSet.



You are right, it should not throw an error there.
I will modify the suggestion:

2. If the concurrency mode for the java.sql.Statement object is
CONCUR_READ_ONLY, the update mode will be set to READ_ONLY if the
update clause is unspecified or "for read only".  If the query string
contains "for update", the update mode is UPDATABLE; however, the
java.sql.ResultSet.updateXXX methods throw an exception (if called)
since the resultset is not updatable.




  2) Applications use the FOR UPDATE clause to control locking for
future updates with read only ResultSets.



Note that currently it throws an exception if the statement is not
updatable, i.e. contains a join or an order by.


-- Andreas



Dan.





Re: [jira] Commented: (DERBY-231) "FOR UPDATE" required for updatable result set to work

2005-10-28 Thread Andreas Korneliussen
Just to clarify: my intention is to *not* change the lock mode for "for
update" as part of this specific issue.


-- Andreas


Re: [jira] Updated: (DERBY-231) "FOR UPDATE" required for updatable result set to work

2005-11-03 Thread Andreas Korneliussen

Daniel John Debrunner wrote:

Bernt M. Johnsen wrote:



The patch looks sound. I'll commit when I have run derbyall and
experimented a bit on my own.




Is there an overview of what the patch does, any implementation details?


Here:

The purpose of the patch is to allow a statement to be updatable without 
having to specify "FOR UPDATE".


My first look in the code indicated that I could do the fix by changing 
only one line of code:


org.apache.derby.impl.sql.compile.CursorNode.java, bind()

if (updateMode == UNSPECIFIED) {
    updateMode = determineUpdateMode(dataDictionary);
}

Re: [jira] Updated: (DERBY-231) "FOR UPDATE" required for updatable result set to work

2005-11-03 Thread Andreas Korneliussen

Daniel John Debrunner wrote:

Bernt M. Johnsen wrote:



The patch looks sound. I'll commit when I have run derbyall and
experimented a bit on my own.



So the patch removes this warning

- ** NOTE: THIS IS NOT COMPATIBLE WITH THE ISO/ANSI STANDARD!!!
- **
- ** According to ANSI, cursors are updatable by default, unless
- ** they can't be (e.g. they contain joins).  But this would mean
- ** that we couldn't use an index on any single-table select,
- ** unless it was declared FOR READ ONLY.  This would be pretty
- ** terrible, so we are breaking the ANSI rules and making all
- ** cursors (i.e. select statements) read-only by default.
- ** Users will have to say FOR UPDATE if they want a cursor to
- ** be updatable.  Later, we may have an ANSI compatibility
- ** mode so we can pass the NIST tests.

but I can't see why it is removed. It seems that Derby will still not
support a positioned update/delete on

SELECT * FROM T

whereas the comment implies according to the SQL standard it should.




Hi,
Thanks for reviewing the changes.

The fix changes the default behavior for cursors: cursors are now
updatable unless they can't be (e.g. they contain joins), and unless the
concurrency mode is READ_ONLY. The default behaviour is used if and only
if no update clause is specified (FOR UPDATE / FOR READ ONLY). On a
single table SELECT, we do not have to use "FOR READ ONLY" to use an
index, since the update mode will be READ_ONLY if the concurrency mode is
READ_ONLY. Users do not have to say "FOR UPDATE" if they want a cursor
to be updatable; they can use concurrency mode UPDATABLE.
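
For illustration, after this fix the following works without a FOR
UPDATE clause (a sketch - assumes a table T whose second column is an
int, and an open Connection conn):

Statement s = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                   ResultSet.CONCUR_UPDATABLE);
ResultSet rs = s.executeQuery("SELECT * FROM T");
rs.next();
rs.updateInt(2, 42);
rs.updateRow();   // allowed, since the concurrency mode is CONCUR_UPDATABLE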


As you can see, the comment is therefore no longer valid.


I'm not saying that this change needs to improve Derby to support
positioned updates on such statements, but the code should continue to
document such limitations or variations to the standard.

Though maybe the patch does allow positioned updates on 'SELECT * FROM
T' if the JDBC result set is updateable? If that's the case, then some
additional tests should be added.



The patch does allow positioned updates on 'SELECT * FROM T' if the
concurrency mode is set to CONCUR_UPDATABLE.


This is tested implicitly by the fact that the JDBC driver uses 
positioned updates when doing updateRow(). It produces statements like: 
"update table T set ... where current of SQLCUR0".  Positioned updates 
are therefore tested in the jdbc/updateableResultSet.java test.
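
A positioned update of this form can also be issued directly - a sketch
(column name a assumed, SQLCUR0 standing for whatever cursor name is in
effect for the SELECT):

Statement update = conn.createStatement();
update.executeUpdate("UPDATE T SET a = a + 1 WHERE CURRENT OF SQLCUR0");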


We do also plan to provide more tests as part of providing updatable 
scrollable resultsets.



Then the change itself would seem to possibly make any future change to
support positioned update/delete on such statements harder. This is
because the statement is made read-only by the use of a read-only JDBC
result set. As I've said before, the JDBC result set can be read-only,
and positioned updates must still work. I'm not sure if such a concern
should block the patch or not, maybe it's up to the next person who
addresses this issue to modify the code. The only concern I then have,
is there any user visible impact of this setting to read only, that
would change later?



There is no "setting to read only" as part of this patch; it is rather
the opposite.  Today, the update mode always defaults to read only if the
update clause is unspecified. The change is simply to allow the cursor to
be UPDATABLE if the concurrency mode for the statement is
ResultSet.CONCUR_UPDATABLE.


-- Andreas




Re: (A)symmetry of Update and Shared locks in Derby

2005-11-03 Thread Andreas Korneliussen
I thought that asymmetric behaviour of update locks would reduce the
probability of deadlocks.


Anyway, I also found another related issue w.r.t update locks:

According to the documentation, transactions using the 
TRANSACTION_SERIALIZABLE or TRANSACTION_REPEATABLE_READ isolation level 
should downgrade the update locks to shared locks when the transaction 
steps through to the next row: 
http://db.apache.org/derby/docs/10.1/devguide/cdevconcepts842385.html


This does not seem to happen; instead it seems that the update locks are
not downgraded to shared locks when using repeatable read:


I ran this test code to check this:

/**
 * Test that update locks are downgraded to shared locks
 * after repositioning.
 */
public void testUpdateLockDownGrade1()
    throws SQLException
{
    Statement s = con.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                      ResultSet.CONCUR_UPDATABLE);

    ResultSet rs = s.executeQuery("select * from t1 for update");

    // After navigating through the resultset, presumably all rows
    // are locked with shared locks
    while (rs.next());

    // Now open up a new connection
    Connection con2 = getNewConnection();
    Statement s2 = con2.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                        ResultSet.CONCUR_UPDATABLE);

    ResultSet rs2 = s2.executeQuery("select * from t1 for update");
    try {
        rs2.next(); // We should be able to get an update lock here.
    } finally {
        con2.rollback();
    }
}

(Both Connections con and con2 have isolation level REP.READ and 
autocommit off).


The test fails in rs2.next() with:
ERROR 40XL1: A lock could not be obtained within the time requested


--Andreas


Mike Matrigali wrote:
The current documentation of the expected behavior in derby is wrong in 
this case, it was not changed when the associated code change was made.

Let me know if you want to file the JIRA, or I will.

The current lock table is symmetric.
This behavior was changed as a result of customer input at the time
(pre-Derby) and testing with running the SpecJ test
(http://www.spec.org/osg/jAppServer2001/).

Without this change Derby was seeing deadlocks, where other
databases were not.  Unfortunately I don't remember more details.


Dag H. Wanvik wrote:


Hi,

I did some tests of Derby with Update and shared locks, to check the
compatibility. To avoid deadlocks, these locks should be implemented
asymmetrically, as shown in
http://db.apache.org/derby/docs/10.1/devguide/rdevconcepts2462.html

As I read this matrix, once a transaction has an update lock
(intention to update), a shared lock should not be granted to another
transaction. My test, however, indicated that a shared lock was indeed
granted (connection2 in the repro) after an update lock was taken by
the first transaction. Is this a bug or am I missing something here?

Repro:




/*
 * Main.java
 *
 * Created on October 28, 2005, 2:28 PM
 *
 * Derby seems to allow *both* R + U, and U + R, which can lead to more
 * deadlocks; cf. Gray, Reuter p 408, there should be asymmetry for
 * these locks.
 */

package forupdatelockingtest;

import java.sql.*;

public class Main {
    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {

        Statement updateStatement = null;
        Statement selectStatement = null;
        Statement selectStatement2 = null;
        Statement ddlStatement = null;
        Connection con = null;
        Connection con2 = null;
        ResultSet rs = null;
        ResultSet rs2 = null;
        try {
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            con = DriverManager.getConnection(
                "jdbc:derby:testdb;create=true;territory=en_US");

            con.setAutoCommit(false);
            con.setTransactionIsolation(
                Connection.TRANSACTION_READ_COMMITTED);

            // Create table
            ddlStatement = con.createStatement();
            ddlStatement.execute("CREATE TABLE myTable (id int primary key, " +
                                 "name varchar(50))");
        }
        catch (Exception e) {
            System.out.println(e);
            return;
        }

        try {
            // Insert data
            PreparedStatement ps = con.prepareStatement(
                "INSERT INTO myTable VALUES (?, ?)");

            for (int i = 1; i <= 10; i++) {
                ps.setInt(1, i);
                ps.setString(2, "Testing " + i);
                ps.executeUpdate();
            }
            ps.close();
            con.commit();

            // Get ResultSet
            selectStatement = con.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_UPDATABLE);
            rs = selectStatement.executeQuery(
                "select * from myTable for update");

            // Position on first row

Re: [jira] Updated: (DERBY-231) "FOR UPDATE" required for updatable result set to work

2005-11-04 Thread Andreas Korneliussen

Daniel John Debrunner wrote:

Andreas Korneliussen wrote:



Daniel John Debrunner wrote:



Bernt M. Johnsen wrote:




The patch looks sound. I'll commit when I have run derbyall and
experimented a bit on my own.




So the patch removes this warning

- ** NOTE: THIS IS NOT COMPATIBLE WITH THE ISO/ANSI STANDARD!!!
- **
- ** According to ANSI, cursors are updatable by default, unless
- ** they can't be (e.g. they contain joins).  But this would mean
- ** that we couldn't use an index on any single-table select,
- ** unless it was declared FOR READ ONLY.  This would be pretty
- ** terrible, so we are breaking the ANSI rules and making all
- ** cursors (i.e. select statements) read-only by default.
- ** Users will have to say FOR UPDATE if they want a cursor to
- ** be updatable.  Later, we may have an ANSI compatibility
- ** mode so we can pass the NIST tests.

but I can't see why it is removed. It seems that Derby will still not
support a positioned update/delete on

SELECT * FROM T

whereas the comment implies according to the SQL standard it should.




Hi,
Thanks for reviewing the changes.

The fix changes the default behavior for cursors: cursors are now
updatable unless they can't be (e.g. they contain joins), and unless the
concurrency mode is READ_ONLY. The default behaviour is used if and only
if no update clause is specified (FOR UPDATE / FOR READ ONLY). On a
single table SELECT, we do not have to use "FOR READ ONLY" to use an
index, since the update mode will be READ_ONLY if the concurrency mode is
READ_ONLY. Users do not have to say "FOR UPDATE" if they want a cursor
to be updatable; they can use concurrency mode UPDATABLE.

As you can see, the comment is therefore no longer valid.



I believe the comment still is valid.

According to the SQL standard, a statement like SELECT * FROM T is
updatable; Derby does not make that statement updatable in all cases.
And by updatable, I mean through positioned update/delete.

The SQL standard does not require that the client's JDBC result set is
updatable to make the statement updatable, obviously because the SQL
standard is self-contained and independent of the JDBC spec.

I'm not disagreeing with your changes, just the removal of the comment.
Maybe the comment could be updated. You have improved the situation, but
there are still cases where Derby conflicts with the standard.

Dan.

Based on the comments from you and Bernt, I would suggest a new comment 
like:


NOTE: THIS IS NOT COMPATIBLE WITH THE ISO/ANSI SQL STANDARD.

According to the SQL standard:
If updatability is not specified, a SELECT * FROM T will be implicitly
read only in the context of a cursor which is insensitive, scrollable or
has an order by clause. Otherwise it is implicitly updatable.

In Derby, we make a SELECT * FROM T updatable if the concurrency mode is
ResultSet.CONCUR_UPDATABLE. If we made all SELECT * FROM T statements
updatable by default, we could not use an index on any single-table
select unless it was declared FOR READ ONLY. This would be pretty
terrible, so we are breaking the ANSI rules. Later, we may have an ANSI
compatibility mode so we can pass the NIST tests.



-- Andreas


Re: [jira] Updated: (DERBY-231) "FOR UPDATE" required for updatable result set to work

2005-11-04 Thread Andreas Korneliussen

> Do you plan to make Derby client behave this way too? Currently Derby
> client seems to match Embedded behavior if FOR UPDATE clause is not
> specified and statement is ResultSet.CONCUR_UPDATABLE... With Derby
> client, the cursor is *not* updatable in this case.
>
> Satheesh


Hi,
There is no need to make any changes to the Derby client to get this
feature from the Derby client driver as well. Its behaviour will continue
to match the embedded behaviour for this feature.


After this change, the cursor will also be updatable from the Derby
client. So if the concurrency mode is CONCUR_UPDATABLE, you do not have
to specify "FOR UPDATE".


-- Andreas


Re: [jira] Updated: (DERBY-231) "FOR UPDATE" required for updatable result set to work

2005-11-09 Thread Andreas Korneliussen

Daniel John Debrunner wrote:

Andreas Korneliussen wrote:


Daniel John Debrunner wrote:




Though maybe the patch does allow positioned updates on 'SELECT * FROM
T' if the JDBC result set is updateable? If that's the case, then some
additional tests should be added.



The patch does allow positioned updates on 'SELECT * FROM T' if the
concurrency mode is set to CONCUR_UPDATABLE

This is tested implicitly by the fact that the JDBC driver uses
positioned updates when doing updateRow(). It produces statements like:
"update table T set ... where current of SQLCUR0".  Positioned updates
are therefore tested in the jdbc/updateableResultSet.java test.



Implicit testing is not ideal. This means we are testing a user-visible
feature only through a specific implementation. If someone improved the
JDBC updatable result set implementation to not use positioned updates
or deletes, then unless more testing was added we would have a testing
hole.



I have added extra tests to test positioned updates and positioned 
deletes on "SELECT * FROM T".


--

We do also plan to provide more tests as part of providing updatable scrollable 
resultsets.

--

I am a bit curious about the statement about improving the JDBC 
updatable result set implementation to not use positioned updates.


Did you mean that an improvement could be to not base JDBC updatable
resultsets on positioned updates at all, and instead invent another
mechanism? If so, do you have any specific ideas on this?


Or were you thinking of just minor improvements, still basing it on
positioned updates?


We are thinking of continuing to use positioned updates when doing
scrollable updatable resultsets - that is why I am asking.


-- Andreas


Re: [jira] Updated: (DERBY-231) "FOR UPDATE" required for updatable result set to work

2005-11-09 Thread Andreas Korneliussen

Daniel John Debrunner wrote:

Andreas Korneliussen wrote:



I am a bit curious about the statement about improving the JDBC
updatable result set implementation to not use positioned updates.

Did you mean that an improvement could be to not base JDBC updatable
resultsets on positioned updates at all, and instead invent another
mechanism? If so, do you have any specific ideas on this?



It was just a vague idea. It seems somewhat inefficient for an updatable
ResultSet to create a SQL statement, have that parsed & compiled in
order to perfom an update or delete etc. However,  the use of SQL is a
great example of re-use, especially as the problem looks simple 'update
the current row', but in fact is complicated. The update must determine
which triggers are to be invoked, which constraints to be checked etc.
etc. This of course is handled automatically through the use of SQL.
Maybe, just maybe, the code could be re-factored to allow updateable
ResultSets to avoid the SQL parsing step but that's the limit of what
I've thought about.



We are thinking of continuing to use positioned updates when doing
scrollable updatable resultsets - that is why I am asking.



You should continue on your current path. I'm not working on this at all.

The general point I was trying to make is that testing user-visible
functionality implicitly due to implementation knowledge is not a good
practice. Test as we intend the users to use it, not indirectly through
a different mechanism that might affect things somehow.


I agree with this point -  thanks for the clarifications.

-- Andreas


Re: RowLocation lifetime

2005-11-10 Thread Andreas Korneliussen

Hi,

 We are planning on using RowLocation to position the cursor when doing 
scrollable updatable cursors (i.e when navigating backwards in the 
resultset) - see 
http://permalink.gmane.org/gmane.comp.apache.db.derby.devel/10028 for 
more details.


The fact that the RowLocation could become invalid does worry me a bit.

I did a test on a simple select statement, using the read-uncommitted
isolation level.


T1: select * from testtable

Then (before doing commit) I called on another connection:

T2: call SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'TESTTABLE', 0)
T2: Got exception: The exception 'SQL Exception: A lock could not be
obtained within the time requested' was thrown while evaluating an
expression.


So even in read-uncommitted mode, an intent-level lock on the table
is set (good), and it seems to be held until I close the resultset or
commit the transaction T1.


The problem I then see is for cursors that are held over commit 
(ResultSet.HOLD_CURSORS_OVER_COMMIT). Maybe we should not support it for 
scrollable updatable resultsets.


Anyway, we would really appreciate some comments on the
specification Dag sent out two days ago, to ensure that we are on the
right track.


Thanks

-- Andreas

Mike Matrigali wrote:

Assuming the row is not deleted, the question can only be answered
knowing the isolation level.  Basically the RowLocation can only
be counted on while an intent-level lock is held on the table.
Intent table locks may be released as soon as a statement is
completed, or may be held to end of transaction depending on
the type of statement and type of isolation level.

The thing that may move an existing row in a heap are the compress
table system procedures.

If a row is deleted then there are other factors.

Rick Hillegas wrote:


Hello Store experts,

How long is a RowLocation in a Heap good for? Provided that the row is 
not deleted, can you count on its RowLocation reliably identifying the 
row for the duration of a Statement, a Transaction, a Connection? 
Forever?


Thanks,
-Rick








Re: RowLocation lifetime

2005-11-11 Thread Andreas Korneliussen

Mike Matrigali wrote:

From the store point of view, 3 things can happen to
RowLocations in heap tables:
1) It can be deleted (requires at least TABLE IX, ROW X locking)
   o internal to store it is just marked deleted
   o external to store requests for any operation on
 this row will fail.  Note that your cursor can
 experience this no matter what locking you do, as
 it is always possible for another statement in your
 transaction to do the delete.
2) It can be purged (requires at least TABLE IX, ROW X locking)
   o all physical evidence of the row is removed from table,
 both internal and external operations on this row will
 fail.  Only committed deleted rows are purged.
 Note this will never happen if you have some
 sort of lock on the row as the requested X lock is
 always requested in a system only transaction.
   o the actual RowLocation will not be reused while
 at least some sort of table level intent lock is held.
3) It can be reused (requires a table level X lock)
   o basically as part of a compress all rows can be shuffled
 in any way.  A former RowLocation can now point to
 a completely different row.

So as you point out, your implementation can have serious problems
with cursors held over commit.  This is why in current usage of
cursors over commit the only safe thing to do is to ask for the
next row location and use it.

Please make sure to consider the delete/purge cases also.  One case
that often causes problems is a transaction deleting a row that is
locked by its own cursor from another statement in the same connection.


Yes, we need to consider those cases.

It seems that the store is capable of gracefully handling that the row
gets deleted (i.e. by its own transaction). If the transaction later
tries to update the deleted row using the resultset, the store call will
return false, indicating that the row was not updated. The deleted row
will not be purged as long as the transaction is open.


However in read-committed/read-uncommitted mode, a row read by the 
cursor, can be deleted by another transaction, and then purged.
It seems that the store does not handle an update of a deleted+purged 
record.


On our prototype impl., I get a NullPointerException from the
store in this case.  It comes in GenericConglomerateController.replace(..).


I would think there are multiple ways of addressing this issue:

1. We could make the store gracefully handle the situation where the
RowLocation points to a deleted+purged row, by returning false if the
RowLocation is invalid (and from the caller we can give an exception)

2. Or we could make all scrollable updatable resultsets set read locks or
update locks on every row, for all isolation levels (including
read-uncommitted)

3. Or we could make purging require a table level X lock, instead of row
locks


Below is output from the test:

T1: Read next Tuple:(0,0,17)
T1: Read next Tuple:(1,1,19)
T1: Read next Tuple:(2,2,21)
T1: Read next Tuple:(3,3,23)
T1: Read next Tuple:(4,4,25)
T1: Read next Tuple:(5,5,27)
T1: Read next Tuple:(6,6,29)
T1: Read next Tuple:(7,7,31)
T1: Read next Tuple:(8,8,33)
T1: Read next Tuple:(9,9,35)
T2: Deleted Tuple:(0,0,17)
T2: commit
T3: call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE
T3: purged deleted records
T3: commit
T1: Read first Tuple:(0,0,17)
T1: updateInt(2, 3);
T1: updateRow()
java.lang.NullPointerException
at 
org.apache.derby.impl.store.access.conglomerate.GenericConglomerateController.replace(GenericConglomerateController.java:465)
at 
org.apache.derby.impl.sql.execute.RowChangerImpl.updateRow(RowChangerImpl.java:516)
at 
org.apache.derby.impl.sql.execute.UpdateResultSet.collectAffectedRows(UpdateResultSet.java:577)
at 
org.apache.derby.impl.sql.execute.UpdateResultSet.open(UpdateResultSet.java:276)
at 
org.apache.derby.impl.sql.GenericPreparedStatement.execute(GenericPreparedStatement.java:368)
at 
org.apache.derby.impl.jdbc.EmbedResultSet.updateRow(EmbedResultSet.java:3256)
at 
resultsettests.ConcurrencyTest.testConcurrency7(ConcurrencyTest.java:306)


-- Andreas


Re: RowLocation lifetime

2005-11-14 Thread Andreas Korneliussen

Suresh Thalamati wrote:

Andreas Korneliussen wrote:


Mike Matrigali wrote:







I would think there are multiple ways of addressing this issue:

1. We could make the store gracefully handle the situation where the
RowLocation points to a deleted+purged row, by returning false if the
RowLocation is invalid (and from the caller we can give an exception)




This may not be a good option, because a purged row location can
potentially be reused by another insert from a different
transaction.



Maybe I misunderstood, however I assumed the RowLocation would not be 
reused as long as there is a table intent lock on the table. Therefore 
the insert from a different transaction would need to use another 
RowLocation.


Mike Matrigali wrote:

2) It can be purged (requires at least TABLE IX, ROW X locking)
   o all physical evidence of the row is removed from table,
 both internal and external operations on this row will
 fail.  Only committed deleted rows are purged.
 Note this will never happen if you have some
 sort of lock on the row as the requested X lock is
 always requested in a system only transaction.
   o the actual RowLocation will not be reused while
 at least some sort of table level intent lock is held.


--Andreas


Re: RowLocation lifetime

2005-11-14 Thread Andreas Korneliussen

Mike Matrigali wrote:

I get confused when speaking about isolation level, update/read only
result sets, and the underlying sql query of the result set.  I don't
know if scrollable result sets are dependent on some sort of
isolation level.

With respect to straight embedded server execution of SQL, it is fine to
run with the read-uncommitted level - but any actual update done on a row
will get an X lock held to end of transaction.  At least at this level an
SQL operation is never failed dependent on the isolation level.

I don't remember if U locks are requested in read uncommitted mode,
but definitely X locks are requested when the actual update is done.

Note that all discussions of locking should specify under which 
isolation level the system is running.  I assumed read committed for
the below discussion as it is the default.



The discussion was intended for  read-committed and read-uncommitted, 
since for other isolation levels, the rows for which we use the 
RowLocation would be locked, and cannot be deleted or purged by another 
transaction. Also, if we delete the row in our own transaction, it will 
not be purged, since it is locked with an exclusive lock.


I think you are allowed to do updates in read-uncommitted, however when 
you read, you do not set read-locks, so you can read uncommitted  data.


Andreas



Daniel John Debrunner wrote:


Andreas Korneliussen wrote:





2 Or we could make all scrollable updatable resultsets set read-locks
or  updatelocks on every row, for all isolation levels (including
read-uncommitted)




I think updates are not allowed in read-uncommitted mode, so we should
not be getting locks in read-uncommitted.

Dan.











Re: RowLocation lifetime

2005-11-14 Thread Andreas Korneliussen

Mike Matrigali wrote:

null pointer is a bug, please report as a separate JIRA, not sure
what is going on.  Note that while it is convenient for testing
purposes to use the SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE
to cause the purging, purging of rows can happen at any time
after a delete has been committed.  The timing depends on a
number of factors and the timing may change in the future - so
one should assume that as soon as a row is deleted and committed
it may be purged.

Andreas Korneliussen wrote:


Mike Matrigali wrote:


From the store point of view, 3 things can happen to
RowLocations in heap tables:
1) It can be deleted (requires at least TABLE IX, ROW X locking)
   o internal to store it is just marked deleted
   o external to store requests for any operation on
 this row will fail.  Note that your cursor can
 experience this no matter what locking you do, as
 it is always possible for another statement in your
 transaction to do the delete.
2) It can be purged (requires at least TABLE IX, ROW X locking)
   o all physical evidence of the row is removed from table,
 both internal and external operations on this row will
 fail.  Only committed deleted rows are purged.
 Note this will never happen if you have some
 sort of lock on the row as the requested X lock is
 always requested in a system only transaction.
   o the actual RowLocation will not be reused while
 at least some sort of table level intent lock is held.
3) It can be reused (requires a table level X lock)
   o basically as part of a compress all rows can be shuffled
 in any way.  A former RowLocation can now point to
 a completely different row.

So as you point out, your implementation can have serious problems
with cursors held over commit.  This is why in current usage of
cursors over commit the only safe thing to do is to ask for the
next row location and use it.

Please make sure to consider the delete/purge cases also.  One case
that often causes problems is a transaction deleting a row that is
locked by its own cursor from another statement in the same connection.


Yes, we need to consider those cases.

It seems that the store is capable of gracefully handling that the row
gets deleted (i.e. by its own transaction). If the transaction later
tries to update the deleted row using the resultset, the store call
will return false, indicating that the row was not updated. The deleted
row will not be purged as long as the transaction is open.


However in read-committed/read-uncommitted mode, a row read by the 
cursor, can be deleted by another transaction, and then purged.
It seems that the store does not handle an update of a deleted+purged 
record.


On our prototype impl., I get a NullPointerException from the
store in this case.  It comes in
GenericConglomerateController.replace(..).


I would think there are multiple ways of addressing this issue:

1. We could make the store gracefully handle the situation where the
RowLocation points to a deleted+purged row, by returning false if the
RowLocation is invalid (and from the caller we can give an exception)


It seems like the ConglomerateController.replace() function should throw
an exception (other than null pointer) if it is called with a 
non-existent RowLocation, but I could be convinced returning false
is ok.  The problem I have is that store really has no way to tell the 
difference between a BAD RowLocation input and one which was purged.




Yes, maybe the store should throw an exception if the RowLocation is
invalid.  Seen from the user perspective, I think the update should fail
the same way whether the row has been deleted+purged or only deleted.
Currently the function returns false if the row has been deleted.


I ran a check on all places in the code tree where the
ConglomerateController.replace(..) method is called. I found that the
return value is silently ignored in all places, except in the store
unit tests.


This means that an updateRow() on a deleted record would simply return
silently (using the optimistic concurrency approach). After the
transaction has committed, the row would remain deleted.


I will file a JIRA for the NullPointerException case.

--Andreas






Re: [jira] Commented: (DERBY-707) providing RowLocation for deleted+purged row to GenericConglomerateController causes nullpointerexception

2005-11-16 Thread Andreas Korneliussen

Mike Matrigali wrote:

This looks like a reasonable fix, it would be nice if you added a test
case which showed the problem (to make sure it does not get broken again
in the future).  If it were me I would add a short new test case to the
T_AccessFactory tests in
opensource/java/testing/org/apache/derbyTesting/unitTests/store.  I know
you have a test case you can't check in as it is the feature you are
working on.  If you are not interested in doing this, let me know.



Hi,
I have now updated one of the testcases in T_AccessFactory to test this
problem, and it reproduces the NullPointerException (when running on
unpatched Derby).


I also noticed that after my patch, the method:

boolean fetch(
    RowLocation             loc,
    DataValueDescriptor[]   row,
    FormatableBitSet        validColumns)

is functionally equivalent to:

boolean fetch(
    RowLocation             loc,
    DataValueDescriptor[]   row,
    FormatableBitSet        validColumns,
    boolean                 waitForLock)   // when called with waitForLock=true


Previously, this bug had been fixed in the first method, not the other. 
To improve code-reuse, I would like to implement:


fetch(
    RowLocation             loc,
    DataValueDescriptor[]   row,
    FormatableBitSet        validColumns)

as:

{
    return fetch(loc, row, validColumns, true /* waitForLock */);
}

.. unless there are any good reasons not to do it.

Andreas


question about DatabaseMetaData

2005-11-23 Thread Andreas Korneliussen
When doing scrollable updatable resultsets for Derby, we are considering 
letting


DatabaseMetaData.ownUpdatesAreVisible(ResultSet.TYPE_SCROLL_INSENSITIVE) 
return TRUE


(Retrieves whether for the given type of ResultSet object, the result 
set's own updates are visible.)


and

DatabaseMetaData.othersUpdatesAreVisible(ResultSet.TYPE_SCROLL_INSENSITIVE)
return FALSE

In Derby, updatable resultsets are built on positioned updates.  Would it
be OK to make updates made directly using positioned updates (on the
same cursor as the resultset) also be visible from the resultset, even if
DatabaseMetaData.othersUpdatesAreVisible(..) returns FALSE?


I think it would be correct, since we are updating the same cursor;
however, I am not sure, since the JDBC specification talks about "others"
as other transactions or other resultsets - it does not mention
positioned updates on the same cursor.
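
Concretely, the settings under consideration would be reported like
this (a sketch, assuming an open Connection conn):

DatabaseMetaData dmd = conn.getMetaData();
dmd.ownUpdatesAreVisible(ResultSet.TYPE_SCROLL_INSENSITIVE);     // -> true
dmd.othersUpdatesAreVisible(ResultSet.TYPE_SCROLL_INSENSITIVE);  // -> false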


-- Andreas


Re: Ignored exceptions in DerbyJUnitTest

2006-01-10 Thread Andreas Korneliussen

Daniel John Debrunner wrote:

DerbyJUnitTest has many cases where the SQLException thrown by Derby
(through JDBC) is ignored. To me, this seems very bad practice for test
code.


Isn't that bad practice for any code ?

Anyway, I think there are cases when you are interested in not throwing 
the SQLException to the JUnit framework.


If your test has already failed with a meaningful assertion which
describes the error, the JUnit test runner will call the tearDown()
method on your TestCase object, which may throw an exception.
If tearDown() throws e.g. "ResultSet is closed", that error will
overwrite the original test failure in the JUnit test report.


An alternative to swallowing the exception is then of course to log it,
or to gather them somehow and report them at the end of running the
testsuite.



For example:

//
// Swallow uninteresting exceptions when disposing of jdbc objects.
//
protected static void close( ResultSet rs )
{
    try {
        if ( rs != null ) { rs.close(); }
    }
    catch (SQLException e) {}
}

If ResultSet.close is throwing an exception when being closed then to my
mind that's a bug, but we would never see it through a Junit test.

Unless the resultset is already closed, or you have closed the 
connection for the resultset.


Andreas


Re: Ignored exceptions in DerbyJUnitTest

2006-01-10 Thread Andreas Korneliussen



If ResultSet.close is throwing an exception when being closed then to my
mind that's a bug, but we would never see it through a Junit test.

Unless the resultset is already closed, or you have closed the 
connection for the resultset.


Correction: I guess ResultSet.close() should not throw an exception if 
it is already closed.


My point was anyway that we should be careful about throwing exceptions
to the JUnit framework in tearDown(), since it will overwrite the
"real" error.


Andreas



running JUnit tests

2006-01-10 Thread Andreas Korneliussen
It seems to me that for including a new JUnit test into e.g. derbyall we
need to make a new Java class with a main() method which parses a
command line, sets up the testsuite and runs it, just like any Java
program. Basically we are running the junit tests as test type "java".


Instead of having to do this for every junit test going into a derby 
test suite, I would propose a different strategy.


I propose to introduce a new test type called "junit" (current test 
types are: sql,sql2,unit,java,multi,demo - unit is not junit)


Then you can use:

java org.apache.derbyTesting.functionTests.harness.RunTest <test>.junit


to run a Junit test - instead of:

java org.apache.derbyTesting.functionTests.harness.RunTest <test>.java


When starting a test of type junit, the RunTest class may simply use the
junit.textui.TestRunner class, which has a main method which takes a 
TestCase class name as parameter.  The junit.textui.TestRunner  runs the 
tests defined by the suite() method of the TestCase class.


I think this strategy will make it easier to integrate new JUnit tests
into the current test suites, since it saves you the trouble of creating
a Java class with a main method for every test.
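
All a test class would then need is the standard suite() method that
junit.textui.TestRunner already understands - a sketch, class name
hypothetical:

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class MyDerbyTest extends TestCase {
    public static Test suite() {
        // the tests the harness would run for MyDerbyTest.junit
        return new TestSuite(MyDerbyTest.class);
    }
}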



-- Andreas


Re: running JUnit tests

2006-01-10 Thread Andreas Korneliussen




Is there any write up of the correct way to write new Junit tests?


I am trying to follow the junit cookbook when writing new junit tests.
http://junit.sourceforge.net/doc/cookbook/cookbook.htm

I do have some thoughts on how to write new JUnit Tests.

When writing tests for Derby (using JDBC), I think most tests need to
have access to some configuration data, like jdbc url, jdbc classname
etc.  This could be gathered into an immutable singleton object
(e.g. DerbyTestConfig) which can be accessed by the tests.
Alternatively it could be gathered in a common testcase class, like
DerbyJUnitTestCase; however, then you may need to duplicate this code if
you make e.g. TestDecorators which are not subclasses of
DerbyJUnitTestCase.


Since many tests are using JDBC connections, it could be useful to have
a common TestCase base class which sets up a connection in setUp()
and closes it in tearDown().  E.g.:


class DerbyConnectionTestCase extends TestCase {

    protected Connection con;

    final void setUp() throws Exception
    {
        con = DriverManager.getConnection(config.getJdbcUrl(), ..);
        doSetup();
    }

    final void tearDown() throws Exception
    {
        doTearDown();
        con.close();  // of course with some better exception handling..
    }
    ..
}


I think a big mistake we made with the old harness was no formal way to
add Java tests, which meant multiple ways to start the engine and obtain
the connection and multiple utility methods (almost) performing the same
action.



When it comes to setting up additional fixture before running a set of
testcases, I think a powerful and reusable mechanism is
junit.extensions.TestSetup. E.g. one could make a DerbyNetworkServerSetup
class. In its setUp() method it starts the network server, and in its
tearDown() it stops it.  So if you have a suite of tests which requires
a running network server, they can be wrapped like this:


TestSuite suite = new TestSuite(MyDerbyTestCase.class);
Test t = new DerbyNetworkServerSetup(suite);

When running Test t, it will first call setUp() in 
DerbyNetworkServerSetup, then it will run all testcases in 
MyDerbyTestCase, and finally call tearDown() in DerbyNetworkServerSetup.
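
A sketch of such a decorator (the server start/stop details are left as
comments; they are assumptions, not actual Derby API):

import junit.extensions.TestSetup;
import junit.framework.Test;

class DerbyNetworkServerSetup extends TestSetup {
    public DerbyNetworkServerSetup(Test test) { super(test); }

    protected void setUp() throws Exception {
        // start the network server here (e.g. via NetworkServerControl)
    }

    protected void tearDown() throws Exception {
        // stop the network server here
    }
}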


Of course, one should consider if DerbyNetworkServerSetup should simply 
do nothing if the test is only testing in embedded mode (i.e make it 
dependent on the test configuration).




Would be nice if there were guidelines provided by whoever set up the
initial Junit framework.



I think Rick H /David set it up, so maybe they have some more guidelines 
or ideas.


--Andreas


Re: [VOTE] Rick Hillegas as a committer

2006-01-11 Thread Andreas Korneliussen

+1

Andreas


Re: running JUnit tests

2006-01-11 Thread Andreas Korneliussen

John Embretsen wrote:

Andreas Korneliussen wrote:

I propose to introduce a new test type called "junit" (current test 
types are: sql,sql2,unit,java,multi,demo - unit is not junit)


Then you can use:

java org.apache.derbyTesting.functionTests.harness.RunTest <test>.junit


to run a Junit test - instead of:

java org.apache.derbyTesting.functionTests.harness.RunTest <test>.java



If I understand this proposal correctly, there will be a one-to-one 
mapping between so-called ".junit" tests and ".java" files, right? And 
there will be no actual files ending with ".junit", but the harness will 
 map the test names ending with ".junit" to actual java/class files 
containing the actual JUnit test code?


Yes, the class name.

The harness uses the extension of the file submitted to figure out the 
test type.


E.g. if it is .sql it starts ij, if it is .java it starts java with the
class as argument, if it is unit it starts
org.apache.derbyTesting.unitTests.harness.UnitTestMain, and if it is
multi it starts org.apache.derbyTesting.functionTests.harness.MultiTest.


Which java program it starts is defined in the
buildTestCommand(..) method in 
org.apache.derbyTesting.functionTests.harness.RunTest


The proposal is that if it is junit, it starts "java 
junit.textui.TestRunner" application with the class as argument.
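
So for a hypothetical test class, the harness would effectively run:

java junit.textui.TestRunner org.apache.derbyTesting.functionTests.tests.jdbcapi.MyDerbyTest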


Andreas


Re: Features of the JUnit test execution harness

2006-02-02 Thread Andreas Korneliussen

Myrna van Lunteren wrote:
On 1/31/06, Kristian Waagan <[EMAIL PROTECTED]> wrote:


Differences in output should be irrelevant. Although not what you
mentioned above, the issue of (execution) control is very relevant. The
logic for running the tests multiple times, each time with a different
setup/environment must be located somewhere. I think Andreas' proposal
of introducing a separate JUnit test type (see
http://www.nabble.com/running-JUnit-tests-t887682.html#a2300670) makes
sense, as it gives us more freedom w.r.t. handling of JUnit tests.

 
Yes, that proposal made sense to me. I personally like the approach of 
having a class for various/different configurations. Although that could 
get out of hand.
 
Does this 'throw away' the work that Rick is doing on DERBY-874?
 


I think the work currently done on DERBY-874 was mainly to improve the 
DerbyJUnitTest's JavaDoc, and to log exceptions. So I would not throw 
that away.


However I do propose to change DerbyJUnitTest to move out everything 
about configuration into a separate class.


Following Andreas' approach we'd still be able to run the individual 
tests separately, yes?
 


Yes - definitely.


Andreas


Re: Features of the JUnit test execution harness

2006-02-03 Thread Andreas Korneliussen

Myrna van Lunteren wrote:
On 2/2/06, Andreas Korneliussen <[EMAIL PROTECTED]> wrote:


I think the work currently done on DERBY-874 was mainly to improve the
DerbyJUnitTest's JavaDoc, and to log exceptions. So I would not throw
that away.

However I do propose to change DerbyJUnitTest to move out everything
about configuration into a separate class.

 
cool. thx for the reply.
 
I now noticed that the wiki says all suggestions are to be put on the 
list, so here I go rather than plopping them directly on the wiki:
 

Great feature list.

What I think is important is that we provide a common mechanism to get 
access to the configuration of the test.


If someone then needs to do some framework/version-specific logic, they
have a mechanism to get e.g. the framework enum, and can write logic
based on that.


I think the following could qualify as 'more details' to the jvm, 
framework, version specific logic:
 
1. jvm-specific:

1.1.
not all parameters are consistent for all jvms. Think here of jit 
settings / configurations, memory settings. For j2ME testing, that jvm 
doesn't come with a DriverManager implementation, so already from the 
start you know you have to go with DataSources.


So I guess what you are saying is that if the test framework provides a
common mechanism to give a Connection to a Derby database, it should go
through a DataSource instead of using DriverManager?
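
For instance (a sketch using Derby's EmbeddedDataSource; database name
assumed):

org.apache.derby.jdbc.EmbeddedDataSource ds =
    new org.apache.derby.jdbc.EmbeddedDataSource();
ds.setDatabaseName("testdb");
Connection con = ds.getConnection();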


1.2. Different versions of a vendor's jvm also may have slightly
different implementations resulting in slightly different behavior -
e.g. the order of rows, or rounding of certain numeric values.

1.3. Some behavior will be only supported by later versions...
 
2. version specific.
This really falls back to the discussion on ...(can't find right now, 
raman's working on it, I think)... re mixed version testing. I think the 
conclusion was that the harness needs a way to cope with results from 
newer servers and clients - if they differ from results with same 
versions as the harness.
 
3. framework specific

The tests needs to be able to cope with the following
3.1. different client drivers (e.g. DerbyClient, IBM Universal JDBC Driver)
3.2. server may need to be started by harness, or not
3.3. server may be on the localhost, or be running on a remote machine.
 certain individual tests may not be able to run with this
mechanism...

3.4 should be able to have the harness start the server in a different jvm.
 
4. one thing the current harness has no way of doing is to cope with 
different OSs. For instance, sometimes there are slight differences in 
behaviour of the same jvm version on different OSs. Like slightly 
different error messages (although this may be irrelevant if we're not 
gathering & comparing output).
 
I think the following details would be useful (in addition to the above 
and item 1 on the wiki):
- there must be a way to skip individual tests without causing an error 
but with an informational message for certain configurations. eg. 
absence of optional jars (specifically thinking of db2jcc.jar), 
unsupported functionality with older jvms..., or when there is a problem 
that's being worked on, or that's been referred to some other 
organization ( e.g. in the case of jvm bugs, OS bugs...).
 
- some way to compare runtimestatistics.
   Currently this is done by comparing the output, I have a hard time 
thinking of another mechanism.




I am not sure which runtimestatistics you are thinking of. Which output?
Output from a SQL statement?



Thanks

--Andreas


testScript.xml and DerbyJUnitTest refactoring

2006-02-08 Thread Andreas Korneliussen

I have considered refactoring DerbyJUnitTest because of the following:

The way DerbyJUnitTest supports configuring the testcase may be error
prone, since it requires calling a number of static methods in a
specific order for the static members to initialize.  E.g. it is not
possible to run any of the existing JUnit tests using a standard JUnit
test runner without getting a NullPointerException. The static members
are also non-final, and this allows testcases to call modifiers,
potentially causing side-effects in other testcase objects.


Example: DerbyJUnitTest has a public static method called
setDatabaseName(String dbName). If I do not call it, the _databaseName
variable is null.  If one of my testcases calls the setDatabaseName(..)
method, it has side-effects which will affect all other testcases (they
may start using another database).


The way DerbyJUnitTest supports configuring the correct JDBC client is
by having a String[] of client settings for a framework. This can be
initialized by calling setClient(..) or findClientFromProperties().


private static String[] _defaultClientSettings;

public static void setClient( String[] client )
{ _defaultClientSettings = client; }

public boolean usingDB2Client()
{ return ( _defaultClientSettings == DB2JCC_CLIENT ); }

public static final String[][] LEGAL_CLIENTS =
{
    DB2JCC_CLIENT,
    DERBY_CLIENT,
    EMBEDDED_CLIENT
};

The problem with this approach is that:
1. I may call setClient(..) with any array of strings not being part of 
LEGAL_CLIENTS. This will cause the methods usingDB2Client(), 
using...Client() to all return false, even if my String[] array contains 
a valid set of strings for JDBC client settings.


2. Calling setClient(..) has side-effects on other testcases, and it 
allows "non-standard" initialization of the client settings


3. If I do not call setClient(..) or findClientFromProperties(),
methods like getConnection() fail with a NullPointerException


4. The problem of configuring the client settings has been solved
before, e.g. in TestUtil, where there is an integer enum representing the
framework.
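
A type-safe variant of that idea - a sketch in the JDK 1.4 enum idiom,
names hypothetical:

public final class Framework {
    public static final Framework EMBEDDED     = new Framework("embedded");
    public static final Framework DERBY_CLIENT = new Framework("DerbyNetClient");
    public static final Framework DB2JCC       = new Framework("DerbyNet");

    private final String name;
    private Framework(String name) { this.name = name; }
    public String toString() { return name; }
}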


One thing preventing me from refactoring DerbyJUnitTest is that I 
noticed the testScript.xml in CompatibilitySuite.  This seems to be a 
well-documented test harness for running the CompatibilitySuite against 
different versions of jdbc clients and derby servers. It consists of 
multiple "for loops" implemented in ant, calling 
CompatibilitySuite.main() with different arguments.


CompatibilitySuite.main takes advantage of the public setClient(..) 
method in DerbyJUnitTest, and the client settings are initialized using 
another mechanism than the one in i.e findClientFromProperties(), or any 
other mechanism found in the Derby test harness.


The main() method in the CompatibilitySuite is called from this script
with the driver name, and by string matching against driver names in
LEGAL_CLIENTS[i][DRIVER_NAME], the correct client is loaded and set
using setClient(..).


My goal is to be able to run Derby JUnit tests from the JUnit
test runner, and to allow easy integration of JUnit tests into the
current harness in Derby. Maintaining another test harness written as an
ant.xml script is not really my itch. My question is therefore: could I
leave it to someone else to maintain testScript.xml, or could it
alternatively be removed?


(Script 
org/apache/derbyTesting/functionTests/tests/junitTests/compatibility/testScript.xml)


--Sincerely

Andreas


Re: [jira] Commented: (DERBY-934) create a set of JUnit tests for Scrollable Updatable Resultsets

2006-02-08 Thread Andreas Korneliussen

Daniel John Debrunner (JIRA) wrote:
[ http://issues.apache.org/jira/browse/DERBY-934?page=comments#action_12365574 ] 


Daniel John Debrunner commented on DERBY-934:
-

Is this the plan for junit tests, to have a test directory per function, namely 
the 'sur' directory here?



I did not find that the SUR tests naturally belong in any of the
existing directories in junitTests, so I created a new package which is
descriptive for the type of tests. If someone finds a better package
name, I will gladly update it.


 junitTests/sur/SURTest.junit 


If these tests were added under the old harness the correct location would be



Note: I consider junit tests as a new test type which can be run under 
the (old) harness, and which in the future can be run in another harness.



tests/jdbcapi



Or should it be tests/lang, since updatableResultSet.java (tests 
non-scrollable updatable results in the old harness) is located in 
tests/lang ?



The existing junitTests sub-directories are higher level than a set of
functionality, and two of the three match the existing harness layout,
lang and derbynet



True.

Andreas


Re: [jira] Commented: (DERBY-934) create a set of JUnit tests for Scrollable Updatable Resultsets

2006-02-08 Thread Andreas Korneliussen

Hi all,
Thanks for reviewing the patch.

The junitTests directory was not introduced as part of this patch. It
had been introduced before, and all (3-4) junit tests are in that
subdirectory.


However I agree that it is not necessary to have a separate tree for
JUnit type tests - so what should I do?

a) put the new tests into the tests/jdbcapi directory
b) put them into a new directory: tests/junitTest/jdbcapi

If I go for a), the existing junit tests should then also be moved.

Either way is fine with me.

--Andreas

David W. Van Couvering wrote:

I am also a bit uncomfortable with a separate subdirectory for junit 
tests.  Is there a specific reason for this segregation?  If we have a 
new test type, .junit, why can't that be sufficient to distinguish 
between a JUnit test and an "old-style" test?  I'd like to reuse the 
existing subdirectories like lang and jdbcapi rather than create a 
mirror of subdirectories, or even a completely orthogonal set of 
subdirectories, under junitTests.


David

Daniel John Debrunner (JIRA) wrote:

[ 
http://issues.apache.org/jira/browse/DERBY-934?page=comments#action_12365574 
]

Daniel John Debrunner commented on DERBY-934:
-

Is this the plan for junit tests, to have a test directory per 
function, namely the 'sur' directory here?


 junitTests/sur/SURTest.junit
If these tests were added under the old harness the correct location 
would be


tests/jdbcapi

The existing junitTests sub-directories are higher level than a set
of functionality, and two of the three match the existing harness
layout, lang and derbynet






create a set of JUnit tests for Scrollable Updatable Resultsets
---

Key: DERBY-934
URL: http://issues.apache.org/jira/browse/DERBY-934
Project: Derby
   Type: Sub-task
 Components: Test
   Reporter: Andreas Korneliussen
   Assignee: Andreas Korneliussen
Attachments: DERBY-934.diff, DERBY-934.stat

Add a set of JUnit tests which tests the implementation for 
Scrollable Updatable ResultSets.

The following is a description of how the tests will be implemented:
Data model in test:
We use one table containing three int fields and one varchar(5000)
field. Then we run the tests on a number of variants of this model:
1. None of the fields are indexed (no primary key, no secondary key)
2. One of the fields is indexed as primary key
3. One of the fields is indexed as primary key, another field is
  indexed as secondary key
4. One field is indexed as secondary key
(primary key is unique, secondary key is not unique)
By having these variations in the data model, we cover a number of
variations where the ScrollInsensitiveResultSet implementation uses
different classes of source ResultSets, and the CurrentOfResultSet
uses different classes of target and source ResultSet.
The table can be created with the following fields:
(id int, a int, b int, c varchar(5000))
-
Queries for testing SUR:
Select conditions:
* Full table scan
SQL: SELECT * FROM T1
* Full table scan with criteria on non-indexed field
SQL: .. WHERE c like ?
SQL: .. WHERE b > ?
* Full table scan with criteria on indexed field
SQL: .. WHERE id>a
* SELECT on primary key conditionals:
- Upper and lower bound criteria:
SQL: .. WHERE ID>? and ID<?
SQL: .. WHERE ID in (1,2,3,4)
SQL: .. WHERE a in (1,2,3,4)
(Other nested queries containing a table seem to not permit updates)

* SELECT on secondary key conditionals:
SQL: .. WHERE a>? and aTest that you get an exception when specifying update clause "FOR 
UPDATE"

along with a query which is not updatable.
Cases:
* Query containing order by
* Query containing a join
Test that you get an exception when attempting to update a ResultSet 
which has been downgraded to a read only ResultSet due to the query 
Cases:

* Query contained a join
* Query contained a read only update clause
* Query contained a order by
Test that you get an exception when attempting to update a ResultSet 
which has concurrency mode CONCUR_READ_ONLY


Concurrency tests:
(ConcurrencyTest)
Cases: * Test that update locks are downgraded to shared locks after
 repositioning. (fails with derby)
* Test that we can acquire an update lock even if the row is locked with
 a shared lock.
* Test that we can acquire a shared lock even if the row is locked with
 an update lock.
* Test that we do not get a concurrency problem when opening two
 cursors as readonly.
* Test what happens if you update a deleted and purged record
* Test what happens if you update a deleted and purged record using
 positioned update
* Test what happens if you update a tuple which is deleted and then
 reinserted.
* Test what happens if you update a tuple which is deleted and then
 reinserted with the exact same values
* Test what happens if you update a tuple which has been modified by
 another transaction.
* Test that you cannot compress the table while the ResultSet is open,
 and the transaction is open (for validity of RowLocation)

Re: [VOTE] Knut Anders Hatlen as committer

2006-02-08 Thread Andreas Korneliussen

+1


Re: testScript.xml and DerbyJUnitTest refactoring

2006-02-08 Thread Andreas Korneliussen

Rick Hillegas wrote:


..


When you have a harness that runs JUnit tests, then I can write a new 
script for running the compatibility combinations. But please don't 
throw this suite away. Instead, before you change DerbyJUnitTest, 
maybe you can clone it to some other class just for use by the 
compatiblity suite. Once you have revamped DerbyJUnitTest and built 
out the harness, I can convert the compatibility test and suite.



Great, thanks for the feedback - I think your suggestion will work fine

-- Andreas



Re: [jira] Commented: (DERBY-934) create a set of JUnit tests for Scrollable Updatable Resultsets

2006-02-09 Thread Andreas Korneliussen

David W. Van Couvering wrote:
I would say you could put the new tests into the tests/jdbcapi directory 
and log a JIRA to move over the other tests, rather than try to swallow 
that into your patch; that's too much for one patch.


David


Yes, I agree, it should go in different patches.

Andreas


Re: [patch] cleanup of org.apache.derby.iapi.services.cache.ClassSize

2006-02-09 Thread Andreas Korneliussen

Daniel John Debrunner wrote:

Kev Jackson wrote:


Following on from yesterdays post, here's a patch that tries to get this
class to conform to the coding standards chosen by the Derby developers



Except that we have not chosen a coding standard. :-)


I just read the following on http://db.apache.org/source.html :

-
All Java Language source code in the repository must be written in 
conformance to the " Code Conventions for the Java Programming Language 
as published by Sun, or in conformance with another well-defined 
convention specified by the subproject. See the FAQ page  for links to 
subproject conventions.

-

The link to the FAQ page was dead.

I guess that if the subproject has not defined a coding standard, it 
should use the standard of the "super" project ?


Andreas



http://wiki.apache.org/db-derby/DerbyContributorChecklist

and in another e-mail:


I'm willing to contribute as I'd like to learn a bit about how databases work 
under the hood,
but I can't stand the current code style and I'd like to know if this is an 
agreed upon convention,
or a legacy of the various owners of the codebase up to now.



The history of the code (from its inception over 8 years ago) was that
the only coding standard in force was to be clear, and don't make
changes in the code just for formatting's sake. It was just not worth the
time originally to try and get everyone to agree on a single standard,
some people like braces one way, some the other, some like spaces in
different positions in loops or if statements etc, sometimes a single
rule is not applicable for all situations etc. etc.

The issue is that you say you can't stand the current code style, but
maybe someone else can't stand a code style you like, hard to please
everyone. :-)

Please do get involved in Derby, look at the jira issues marked with a
component of Newcomer, or fix a bug that scratches your itch with Derby.
Many on the list will be willing to guide you, just ask.



- remove unused import
- change static final variables from camelCase to CONST_NAMES (except
where they are public and could be used elsewhere BWC concern)


[just curious, what's BWC?]



- add {} for conditionals
- strip lots of extra white space



As Myrna said, these type of changes can cause problems for merges of
fixes across branches and also to others with existing edits against the
same files.



As for the CLA/ICLA, I've already signed one (for ant), so do I need to
sign another for Derby?



Nope, I believe a single ICLA at the ASF is sufficient.

Thanks,
Dan.





Re: [jira] Commented: (DERBY-934) create a set of JUnit tests for Scrollable Updatable Resultsets

2006-02-09 Thread Andreas Korneliussen

Myrna van Lunteren wrote:
When - I think it was Rick - the first junit test was made, I suggested a 
different directory for it.
 
I was thinking of a future time, when the current test harness would be 
completely replaced by a junit-based framework. In that rosy future, all 
'converted', and any 'new' tests would be junit based, until finally, 
nothing worthwhile remained under the current 
/java/testing/org/apache/derbyTesting/functionTests dir.
In that scenario, I was thinking the junit tests, and presumably some 
sort of framework/harness for gathering totals etc. to show up under 
/java/testing/org/apache/derbyTesting/junitTests/...

With comparable directory structure as is currently under functionTests.
 
This is not where the junit tests ended up. As it is now, we might 
as well put them in their logical place under functionTests/tests.




I think the general feedback I get is that it is OK to put the junit 
SUR-tests into: ../functionTests/tests/jdbcapi, so I have therefore 
updated the patch.


As far as I know, svn is capable of moving repository files 
(svn move), so if someone wants to refactor the test hierarchy for junit 
tests in the future, they could do so while keeping the history of the 
junit tests. Of course, getting it perfect the first time would have 
been preferable.


-- Andreas



Re: [jira] Updated: (DERBY-918) introduce a new test type to run junit tests from the current harness

2006-02-10 Thread Andreas Korneliussen

David W. Van Couvering wrote:
I would have added this as a patch but as I understand it the JIRA 
comment mailer is down...


I applied this patch and everything builds fine, but I did notice that I 
got failures with the existing Junit tests when I ran them as .junit 
instead of .java.


Do the existing JUnit tests need to be modified somehow?  Has this 
thread already been run and I just missed it?


Yes, they need to be modified to run successfully, since they depend on 
running some static methods to initialize some variables - otherwise 
they will throw a NullPointerException.


I think I commented on that while uploading the patch.



I'm trying to find out how to validate that this patch actually works, 
since it doesn't appear to come with any junit tests that actually work 
with it.  Any suggestions?




You could download the tests in 934 and run them with this patch.
--Andreas



Thanks,

David

+ java org.apache.derbyTesting.functionTests.harness.RunTest junitTests/lang/BooleanTest.junit
*** Start: BooleanTest jdk1.4.2_07 2006-02-09 10:54:27 ***
0 add
 > .E
 > There was 1 error:
 > 1) testBoolean(org.apache.derbyTesting.functionTests.tests.junitTests.lang.BooleanTest)java.lang.NullPointerException

*** Start: CompatibilityTest jdk1.4.2_07 2006-02-09 10:55:43 ***
0 add
 > .F
 > There was 1 failure:
 > 1) warning(junit.framework.TestSuite$1)junit.framework.AssertionFailedError: No tests found in org.apache.derbyTesting.functionTests.tests.junitTests.derbyNet.CompatibilityTest
 > FAILURES!!!
 > Tests run: 1,  Failures: 1,  Errors: 0
Test Failed.

$ runtest.sh junitTests/lang/BooleanTest.java
+ java org.apache.derbyTesting.functionTests.harness.RunTest junitTests/lang/BooleanTest.java
*** Start: BooleanTest jdk1.4.2_07 2006-02-09 15:49:15 ***
*** End:   BooleanTest jdk1.4.2_07 2006-02-09 15:49:35 ***

$ runtest.sh junitTests/derbyNet/CompatibilityTest.java
+ java org.apache.derbyTesting.functionTests.harness.RunTest junitTests/derbyNet/CompatibilityTest.java
*** Start: CompatibilityTest jdk1.4.2_07 2006-02-09 15:48:37 ***
0 add
 > java.sql.SQLException: No suitable driver
 > Exception in thread "main"
Test Failed.
*** End:   CompatibilityTest jdk1.4.2_07 2006-02-09 15:48:53 ***

Andreas Korneliussen (JIRA) wrote:


 [ http://issues.apache.org/jira/browse/DERBY-918?page=all ]

Andreas Korneliussen updated DERBY-918:
---

Attachment: (was: DERBY-918.diff)



introduce a new test type to run junit tests from the current harness
-

Key: DERBY-918
URL: http://issues.apache.org/jira/browse/DERBY-918
Project: Derby
   Type: Improvement
 Components: Test
Environment: All
   Reporter: Andreas Korneliussen
   Assignee: Andreas Korneliussen




It seems to me that for including a new JUnit test into e.g. derby-all, 
we need to make a new java class with a main() method which parses a 
command line, sets up the testsuite, and runs it, just like any java 
program. Basically we are running the junit tests as test type "java".
Instead of having to do this for every junit test going into a derby 
test suite, I would propose a different strategy.
I propose to introduce a new test type called "junit" (current test 
types are: sql,sql2,unit,java,multi,demo - unit is not junit)

Then you can use:
java org.apache.derbyTesting.functionTests.harness.RunTest <testname>.junit

to run a Junit test - instead of:

java org.apache.derbyTesting.functionTests.harness.RunTest <testname>.java

When starting a test of type junit, the RunTest class may simply use the
junit.textui.TestRunner class, which has a main method which takes a 
TestCase class name as parameter.  The junit.textui.TestRunner  runs 
the tests defined by the suite() method of the TestCase class.
I think this strategy will make it easier to integrate new JUnit 
tests into the current test suites, since it saves you the trouble of 
creating a java class with a main method for every test.
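
To illustrate (a sketch; the wrapper class name is hypothetical): with the
"java" test type, each JUnit test needs a boilerplate main like

public class SURTestMain {
    public static void main(String[] args) {
        // run the tests defined by the test class's suite() method
        junit.textui.TestRunner.run(SURTest.suite());
    }
}

whereas with a "junit" test type, RunTest could simply call
junit.textui.TestRunner.main(new String[] { testClassName }) for any test
class, so no per-test wrapper class is required.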








Re: [jira] Commented: (DERBY-934) create a set of JUnit tests for Scrollable Updatable Resultsets

2006-02-10 Thread Andreas Korneliussen

David Van Couvering (JIRA) wrote:
[ http://issues.apache.org/jira/browse/DERBY-934?page=comments#action_12365844 ] 


David Van Couvering commented on DERBY-934:
---

One final comment: that was a *lot* of code to review in one go.  In the future it would 
be great if you could submit this in smaller, more manageable chunks.  It will ensure a 
better, closer review and won't cause "review fatigue" on us poor committers.



Hi David,
Thank you very much for the hard work of reviewing all my patches. I am 
going through your comments and will update the patch today.


I did think this patch would be fairly manageable, since it consists of 
new files only. However, I could of course have split it into three 
patches: one for the concurrency tests, one for the SUR query mix, and 
one for the SUR test.  Wouldn't that just be more work for both the 
committer and the developer?



--Andreas


David



create a set of JUnit tests for Scrollable Updatable Resultsets
---

Key: DERBY-934
URL: http://issues.apache.org/jira/browse/DERBY-934
Project: Derby
   Type: Sub-task
 Components: Test
   Reporter: Andreas Korneliussen
   Assignee: Andreas Korneliussen
Attachments: DERBY-934.diff, DERBY-934.stat

Add a set of JUnit tests which tests the implementation for Scrollable 
Updatable ResultSets.
The following is a description of how the tests will be implemented:
Data model in test:
We use one table containing three int fields and one varchar(5000)
field. 
Then we run the tests on a number of variants of this model:

1. None of the fields are indexed (no primary key, no secondary key)
2. One of the fields is indexed as primary key
3. One of the fields is indexed as primary key, another field is
  indexed as secondary key
4. One field is indexed as secondary key
(primary key is unique, secondary key is not unique)
By having these variations in the data model, we cover a number of
variations where the ScrollInsensitiveResultSet implementation uses
different classes of source ResultSets, and the CurrentOfResultSet
uses different classes of target and source ResultSet.
The table can be created with the following fields:
(id int, a int, b int, c varchar(5000))
-
Queries for testing SUR:
Select conditions:
* Full table scan
SQL: SELECT * FROM T1
* Full table scan with criteria on non-indexed field
SQL: .. WHERE c like ?
SQL: .. WHERE b > ?
* Full table scan with criteria on indexed field
SQL: .. WHERE id>a
* SELECT on primary key conditionals:
- Upper and lower bound criteria:
SQL: .. WHERE ID>? and ID<?
SQL: .. WHERE ID in (1,2,3,4)
SQL: .. WHERE a in (1,2,3,4)
(Other nested queries containing a table seem to not permit updates)

* SELECT on secondary key conditionals:
SQL: .. WHERE a>? and a<?

Test that you get an exception when attempting to update a ResultSet 
which has been downgraded to a read only ResultSet due to the query 
Cases:

* Query contained a join
* Query contained a read only update clause
* Query contained a order by
Test that you get an exception when attempting to update a ResultSet 
which has concurrency mode CONCUR_READ_ONLY


Concurrency tests:
(ConcurrencyTest)
Cases: 
* Test that update locks are downgraded to shared locks after

 repositioning. (fails with derby)
* Test that we can acquire an update lock even if the row is locked with
 a shared lock.
* Test that we can acquire a shared lock even if the row is locked with
 an update lock.
* Test that we do not get a concurrency problem when opening two
 cursors as readonly.
* Test what happens if you update a deleted and purged record
* Test what happens if you update a deleted and purged record using
 positioned update
* Test what happens if you update a tuple which is deleted and then
 reinserted.
* Test what happens if you update a tuple which is deleted and then
 reinserted with the exact same values
* Test what happens if you update a tuple which has been modified by
 another transaction.
* Test that you cannot compress the table while the ResultSet is open,
 and the transaction is open (for validity of RowLocation)
* Test that you cannot purge a row if it is locked
* Test that Derby set updatelock on current row when using
 read-uncommitted







Re: [jira] Updated: (DERBY-795) After calling ResultSet.relative(0) the cursor loses its position

2006-02-10 Thread Andreas Korneliussen

David W. Van Couvering wrote:
Hi, Andreas.  Upon further thought, once we work through the comments on 
DERBY-934 (still to come) I am going to go ahead and apply all these 
patches at once, no need for you to do extra work.  But a request for 
next time to please try and keep your patches independent instead of 
interdependent.



That is great.
I think the patches are independent, however they are slightly related.

DERBY-918: a patch for improving the test harness

DERBY-934: a set of tests which can be run independently using 
junit.textui.TestRunner or by using the test harness with patch 918


DERBY-795: a patch for a specific bug in derby. I did not submit extra 
tests for this, since it is covered in 934, however there is a simple 
java program there, which can be run to verify the fix and the bug


There are no compile dependencies between these patches. I think it was 
much better to submit these as independent patches, instead of in one 
huge patch.


Andreas



Thanks!

David

David W. Van Couvering wrote:

Hi, Andreas, your set of patches have a set of dependencies which are 
a little confusing at first, and ultimately somewhat intractable:


DERBY-795 is tested by DERBY-934
DERBY-934 can't be run without, and therefore depends upon, DERBY-918

I really can't just commit one of these patches at a time, it has to 
be all or none.


I really would like each of these patches stand on their own, or at a 
minimum don't submit a dependent patch until the patch it depends upon 
has been committed.


Here's what I would like to see:

DERBY-918 comes with its own sample unit test that verifies that the 
.junit test type works.  Something very simple and easy.


DERBY-795 has its own test that comes with it, rather than being 
tested by DERBY-934


I have some comments on DERBY-934 too, I'll send these in a separate 
email.


Thanks,

David

David W. Van Couvering wrote:

Crud, I missed this comment somehow, I'll look at DERBY-934 again, I 
bet *both* my questions will be answered :)


I'll get back to you if I need anything else, Andreas.

David

Andreas Korneliussen (JIRA) wrote:


 [ http://issues.apache.org/jira/browse/DERBY-795?page=all ]

Andreas Korneliussen updated DERBY-795:
---

Attachment: DERBY-795.diff
DERBY-795.diff

Attached is a fix for this issue.
The problem is detected by the jdbcapi/SURQueryMix.junit test 
provided in DERBY-934, when running in embedded mode.




After calling ResultSet.relative(0) the cursor loses its position
-

Key: DERBY-795
URL: http://issues.apache.org/jira/browse/DERBY-795
Project: Derby
   Type: Bug
 Components: JDBC
   Versions: 10.1.2.1
Environment: Any
   Reporter: Andreas Korneliussen
   Assignee: Andreas Korneliussen
   Priority: Minor
Attachments: DERBY-795.diff, DERBY-795.diff

After calling rs.relative(0) on a scrollable ResultSet, the cursor 
loses its position, and a rs.getXXX(..) fails with:

SQL Exception: Invalid cursor state - no current row.
Probably caused by the following logic in 
ScrollInsensitiveResultSet.getRelativeRow(int row):

// Return the current row for 0
if (row == 0)
{
    if ((beforeFirst || afterLast) ||
        (!beforeFirst && !afterLast)) {
        return null;
    } else {
        return getRowFromHashTable(currentPosition);
    }
}
The if () will always evaluate to true, regardless of the values of 
beforeFirst and afterLast.
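
A minimal sketch of the intended logic (my reading of the surrounding
code, not necessarily the committed fix): return null only when the
cursor is before the first or after the last row:

// hypothetical corrected check
if (row == 0)
{
    if (beforeFirst || afterLast) {
        return null;
    } else {
        return getRowFromHashTable(currentPosition);
    }
}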

Test code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RelativeZeroIssue {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection con =
            DriverManager.getConnection("jdbc:derby:testdb2;create=true");
        con.setAutoCommit(false);
        try {
            Statement statement = con.createStatement();
            // Create and populate the table
            statement.execute("create table t1(id int)");
            statement.execute("insert into t1 values 1,2,3,4,5,6,7,8");
            Statement s =
                con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                                    ResultSet.CONCUR_READ_ONLY);
            ResultSet rs = s.executeQuery("select * from t1");
            rs.next();
            System.out.println(rs.getInt(1));
            System.out.println(rs.relative(0));
            System.out.println(rs.getInt(1));
        } finally {
            con.rollback();
            con.close();
        }
    }
}

Output from test:
1
false
Exception in thread "main" SQL Exception: Invalid cursor state - no 

Re: [jira] Updated: (DERBY-100) Add support for insert functionality using JDBC 2.0 updatable resultset apis

2006-02-10 Thread Andreas Korneliussen

Knut Anders Hatlen wrote:

For rowInserted(), my intention was to make the implementation of
this method as similar as possible to that of the rowDeleted() and
rowUpdated() methods. Since these two methods do not have entry
tracing, I removed it from the rowInserted() method.



Why not add tracing to rowDeleted() and rowUpdated()? :)



I guess because this is for a patch for inserting rows, and someone 
would probably question why she changed unrelated code, arguing that 
such changes should go in a different patch ;-)



Andreas


Re: [jira] Updated: (DERBY-795) After calling ResultSet.relative(0) the cursor loses its position

2006-02-13 Thread Andreas Korneliussen

David W. Van Couvering wrote:
I agree they should be submitted as independent patches.  But they 
should also be committable independently.




They are committable independently, however it is a good idea to commit 
the patches incrementally, as they were submitted:


1. Review and commit DERBY-918. Test with existing junit tests

2. Review and commit DERBY-934. Test using junit.textui.TestRunner or by 
using the feature provided in DERBY-918.


3. Review and commit 795. Test using existing junit tests (934)


I can't verify DERBY-918 without a test that works with the .junit test 
type.  That's provided in DERBY-934.  Why not provide a simple sanity 
Junit test with this patch that I can use to verify?




I agree that it would be great to have a simple sanity junit test.
The feature DERBY-918 is the ability to launch a junit test from the old 
harness, without having to make a new main method for every junit test.


It can be tested with the old junit tests. The fact that the old tests 
fail (they were never designed to run in junit.textui.TestRunner) does 
not make them valueless in terms of verifying the feature.


Then, I can't verify DERBY-795 is fixed without applying and running the 
tests in DERBY-934.  It would be great if DERBY-795 included a test that 
verifies it is fixed as part of the patch.




DERBY-795 comes with a test in the description.  It is not part of the 
patch, since I know this will be tested as part of another patch.


DERBY-934 can be run directly from junit.textui.TestRunner's main, and 
can therefore be verified without DERBY-918.


That would make them *truly* independent, not just in terms of what they 
are about, but in terms of the ability of the committer to review and 
test them independently.




I do not see how it affects the committer's ability to review the patches 
independently.  Huge patches with a lot of soon-to-be obsolete code are 
harder to review than small patches, which can be reviewed and committed 
incrementally.



Andreas


David

Andreas Korneliussen wrote:


David W. Van Couvering wrote:

Hi, Andreas.  Upon further thought, once we work through the comments 
on DERBY-934 (still to come) I am going to go ahead and apply all 
these patches at once, no need for you to do extra work.  But a 
request for next time to please try and keep your patches independent 
instead of interdependent.



That is great.
I think the patches are independent, however they are slightly related.

DERBY-918: a patch for improving the test harness

DERBY-934: a set of tests which can be run independently using 
junit.textui.TestRunner or by using the test harness with patch 918


DERBY-795: a patch for a specific bug in derby. I did not submit extra 
tests for this, since it is covered in 934, however there is a simple 
java program there, which can be run to verify the fix and the bug


There are no compile dependencies between these patches. I think it 
was much better to submit these as independent patches, instead of in 
one huge patch.


Andreas



Thanks!

David

David W. Van Couvering wrote:

Hi, Andreas, your set of patches have a set of dependencies which 
are a little confusing at first, and ultimately somewhat intractable:


DERBY-795 is tested by DERBY-934
DERBY-934 can't be run without, and therefore depends upon, DERBY-918

I really can't just commit one of these patches at a time, it has to 
be all or none.


I really would like each of these patches stand on their own, or at 
a minimum don't submit a dependent patch until the patch it depends 
upon has been committed.


Here's what I would like to see:

DERBY-918 comes with its own sample unit test that verifies that the 
.junit test type works.  Something very simple and easy.


DERBY-795 has its own test that comes with it, rather than being 
tested by DERBY-934


I have some comments on DERBY-934 too, I'll send these in a separate 
email.


Thanks,

David

David W. Van Couvering wrote:

Crud, I missed this comment somehow, I'll look at DERBY-934 again, 
I bet *both* my questions will be answered :)


I'll get back to you if I need anything else, Andreas.

David

Andreas Korneliussen (JIRA) wrote:


 [ http://issues.apache.org/jira/browse/DERBY-795?page=all ]

Andreas Korneliussen updated DERBY-795:
---

Attachment: DERBY-795.diff
DERBY-795.diff

Attached is a fix for this issue.
The problem is detected by the jdbcapi/SURQueryMix.junit test 
provided in DERBY-934, when running in embedded mode.




After calling ResultSet.relative(0) the cursor loses its position
-

Key: DERBY-795
URL: http://issues.apache.org/jira/browse/DERBY-795
Project: Derby
   Type: Bug
 Components: JDBC
   Versions: 10.1.2.1
Environment: Any
   Reporter: Andreas Korneliussen
   Assignee: Andreas Korneliusse

conflict detection strategies

2006-02-14 Thread Andreas Korneliussen
Some context: In scrollable updatable resultsets, we populate an 
internal table with the following data:


[<position> <RowLocation> <updated> <deleted> <row data>]+

Example layout:

  1 <1,10> false false 1,"a",3
  2 <1,11> false false 2,"b",2
  3 <1,12> false false 3,"c",9
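
A rough Java illustration of one entry in this internal table (field
names here are hypothetical; the real implementation uses the store's
own row and RowLocation types):

// Hypothetical sketch of one entry in the SUR's internal table
class ScrollTableEntry {
    int position;        // 1-based position within the result set
    Object rowLocation;  // store RowLocation, e.g. <1,10>; used by updateRow()/deleteRow()
    boolean updated;     // set once updateRow() has modified this row
    boolean deleted;     // set once deleteRow() has removed this row
    Object[] columns;    // the column values, e.g. {1, "a", 3}
}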


When doing updateRow(), or deleteRow(), we use the RowLocation to 
navigate to the row being updated.


Problem:
For holdable cursors, we will release the table intent lock when doing 
commit on the transaction for the cursor.


The table intent lock prevents the system from doing a compress of the 
table, which would cause all RowLocations to become invalid. In addition, 
it prevents reuse of RowLocations for deleted + purged rows.


In order to support holdable scrollable updatable cursors, we consider 
having a service which allows the system to notify subscribers (i.e. 
cursors) that it has executed e.g. a compress.


If the user then calls updateRow() or deleteRow(), we can then give an 
exception like:


"The row could not be updated, because its location has been updated by 
the system"


In addition, we consider having a reclaim of locks, so that immediately 
after a commit, the new transaction with the holdable cursor may 
reclaim the table intent lock.  This will reduce the time period in which 
the system may compress the table, however it will not completely remove 
the possibility of a compress.


Any comments on implementing such strategy ?

An alternative to this strategy, could be to go the other way: cursors 
notify the system that it should not do compress.


I would appreciate feedback on this topic, especially if you find any 
pitfalls with the proposed strategies, or have better alternatives.


Andreas


Re: conflict detection strategies

2006-02-14 Thread Andreas Korneliussen

Hi,

The implementation of SUR just builds on the existing scrollable 
resultsets, which collect all rows into a table. We have extended this 
table to also contain the RowLocation and some metadata.
This means we do not need to change the store module to navigate 
backward etc. - no changes in the store module.


Updatable cursors in Derby use RowLocation, however the row is 
guaranteed to be locked (the current row has an update lock, I think, 
regardless of isolation level).
As for holdable cursors, forward-only updatable cursors require the user 
to navigate to the next row after a commit, thereby getting a new 
RowLocation on a row which is locked.
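
For example, the forward-only holdable case looks roughly like this (a
sketch; assumes an open Connection conn and a table T(id int, a int)):

Statement s = conn.createStatement(
        ResultSet.TYPE_FORWARD_ONLY,
        ResultSet.CONCUR_UPDATABLE,
        ResultSet.HOLD_CURSORS_OVER_COMMIT);
ResultSet rs = s.executeQuery("SELECT id, a FROM T FOR UPDATE");
rs.next();
rs.updateInt(2, 1);
rs.updateRow();
conn.commit();      // the cursor stays open, but the old position is gone
rs.next();          // user must move to the next row before updating again;
rs.updateInt(2, 2); // the new current row is locked and has a fresh RowLocation
rs.updateRow();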


I will propose a more detailed solution tomorrow, so it becomes more 
clear, and less mysterious, what I really propose :-)

Any other suggestions are of course welcome.

--Andreas

-
From: Mike Matrigali <[EMAIL PROTECTED]>
Subject: Re: conflict detection strategies
Newsgroups: gmane.comp.apache.db.derby.devel
Date: 2006-02-14 18:46:25 GMT

I have not been following the scrollable updatable result set work,
I had assumed that the work would be similar to the other resultset
work in Derby with no special requirements from the store.  Is there
a proposed design for this project that I should go look at?  I looked
at the doc associated with DERBY-690, but there are a lot of suggested
approaches - but not clear which choices have been made.

There is a lot of discussion about using the current support for
update where current of.  In the current system how does the system
translate a user request for an update where current of, to the
actual update of the row.  Does it currently use RowLocation?

If possible I would like to see a solution that does not require special
messages sent back and forth between modules about state.



Andreas Korneliussen wrote:

Some context: In scrollable updatable resultsets, we populate an 
internal table with the following data:


[<position> <RowLocation> <updated> <deleted> <row data>]+

Example layout:

  1 <1,10> false false 1,"a",3
  2 <1,11> false false 2,"b",2
  3 <1,12> false false 3,"c",9


When doing updateRow(), or deleteRow(), we use the RowLocation to 
navigate to the row being updated.


Problem:
For holdable cursors, we will release the table intent lock when doing 
commit on the transaction for the cursor.


The table intent lock prevents the system from doing a compress of 
the table, which would cause all RowLocations to become invalid. In 
addition, it prevents reuse of RowLocations for deleted + purged rows.


In order to support holdable scrollable updatable cursors, we consider 
having a service which allows the system to notify subscribers (i.e 
cursors) that it has executed i.e a compress.


If the user then calls updateRow() or deleteRow(), we can then give an 
exception like:


"The row could not be updated, because its location has been updated 
by the system"


In addition, we consider having a reclaim of locks, so that immediately 
after a commit, the new transaction with the holdable cursor may 
reclaim the table intent lock.  This will reduce the time period in which 
the system may compress the table, however it will not completely remove 
the possibility of a compress.


Any comments on implementing such strategy ?

An alternative to this strategy, could be to go the other way: cursors 
notify the system that it should not do compress.


I would appreciate feedback on this topic, especially if you find any 
pitfalls with the proposed strategies, or have better alternatives.


Andreas





Re: Adding SUR tests to derbyall (was Re: [jira] Updated: (DERBY-934) create a set of JUnit tests for Scrollable Updatable Resultsets)

2006-02-15 Thread Andreas Korneliussen

This is great, getting these tests into the suites.

I just ran the SURTest on my sandbox,to compare the output.  The output 
you attached is the same as mine when running with DerbyNetClient.


In network mode, it seems that the JDBC network client is not consistent 
with the embedded client in terms of giving warnings and propagating the 
correct SQL state and error message.  I think some of these issues call 
for new JIRAs.


I have also looked more closely at the output from the SURTest 
running in embedded mode.

I think SURTest in some cases reveals bugs in existing Derby code.

Especially, the output from running

testScrollablePositionedUpdateWithForUpdate1() seems to reveal a serious 
bug:


It seems that the current scrollable read-only resultsets allow 
positioned updates if you use "for update" in the query. However, the 
tuple actually being updated is not necessarily the same as the current 
tuple in the resultset/cursor.  The test therefore fails with an 
assertion when it tries to verify the data in the result set.


I reproduced this error in ij:

ij> get scroll insensitive cursor c1 as 'select * from t for update';
ij> next c1;
ID |C
--
1  |hei
ij> next c1;
ID |C
--
2  |hei igjen
ij> next c1;
ID |C
--
3  |hei hei hei
ij> previous c1;
ID |C
--
2  |hei igjen
ij> previous c1;
ID |C
--
1  |hei
ij> update t set C='hade' where current of c1;
1 row inserted/updated/deleted
ij> select * from t;
ID |C
--
1  |hei
2  |hei igjen
3  |hade

3 rows selected
ij>

So the positioned update updated tuple 3 instead of tuple 1...!
Once we provide the SUR implementation, this will no longer be an issue.
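
For reference, the same scenario in JDBC terms (a sketch; assumes the
table t from the ij session above and an open Connection con):

Statement s = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                                  ResultSet.CONCUR_READ_ONLY);
s.setCursorName("C1");
ResultSet rs = s.executeQuery("select * from t for update");
rs.next(); rs.next(); rs.next();   // rows 1, 2, 3
rs.previous(); rs.previous();      // back on row 1
Statement upd = con.createStatement();
// should update row 1, but updates row 3 due to the bug described above:
upd.executeUpdate("update t set C='hade' where current of C1");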

Andreas


David W. Van Couvering wrote:
So, I pinged Andreas on the backchannel, and he responded that the 
failures with SURTest are expected (as we are still lacking some of the 
Scrollable Updatable ResultSet (SUR) functionality), that the 
SURQueryMixTest should pass when I appy the patch for DERBY-795, and 
that if I remove some printStackTrace debug calls from ConcurrencyTest 
that should run clean.


I made his recommended changes, and confirmed.

So, I would like to see these tests in derbyall.  I re-ran the tests 
with framework=DerbyNetClient and SURQueryMixTest and ConcurrencyTest 
both again pass.  They also pass running with the 1.3 JDK.  I have no 
idea if they will pass or succeed with the IBM JDK as I don't have 
access to that.


Under DerbyNetClient, SURTest has more/different failures and errors. 
Some of these appear to be related to null SQL States.  Andreas, perhaps 
you can look at these failures and tell me if they look OK.  For now I 
won't be adding SURTest to the derbynetclientmats suite.


I also ran with framework=DerbyNet and got hard stop failures saying a 
null username is not supported.


ConcurrencyTest currently has a long timeout so I don't think we should 
have it in derbyall unless we reduce the timeout


So, that all said, I am going to add the following:

- Add master for SURTest for embedded client that has all the expected 
failures.  Andreas, please make sure to update the master as more 
functionality comes in and the results change.


- Add SURTest and SURQueryMixTest to jdbcapi.runall.  NOTE that there 
may be failures with the IBM JDK tests related to this.


- Add SURQueryMixTest to derbynetclientmats.runall.  I'm holding off on 
SURTest.


- Holding off adding ConcurrencyTest to jdbcapi.runall and 
derbynetclientmats.runall until the timeout duration is reduced to a 
reasonable length


Please let me know if you have any comments.

Thanks,

David

David W. Van Couvering wrote:

Hi, Andreas. Are you expecting these tests to pass?  In my environment 
SURTest fails with 8 failures and 52 errors, SURQueryMixTest has 64 
failures and 0 errors, and ConcurrencyTest fails on a lock timeout.


David

Andreas Korneliussen wrote:


David W. Van Couvering wrote:

My JIRA rendering is completely broken, so I am unable to add a 
comment.


I took a look at your patch file, but it appears to be incomplete?  
It ends half way through a method in ConcurrencyTest.




I just downloaded my diff from JIRA, and it was all OK. Maybe it was a 
JIRA problem when you downloaded the patch?


Attached is the diff

Andreas






.F.FF.F.F.F.F...E.E.E.E.E.E.E.E..java.sql.SQLException: invalid 
operation: connection closed
Caused by: org.apache.derby.clie

Re: [jira] Commented: (DERBY-796) jdbc 4.0 specific Blob and Clob method support

2006-02-15 Thread Andreas Korneliussen

  if ((int) pos <= 0) {
      throw new SqlException(agent_.logWriter_,
          new MessageId(SQLState.BLOB_BAD_POSITION), new Long(pos));
  }


Is the cast of pos from long to int safe? Consider the case where pos 
is > Integer.MAX_VALUE. Is it intentional that pos > Integer.MAX_VALUE 
gives this exception?


How about:

>   if (pos <= 0L) {
>       throw new SqlException(agent_.logWriter_,
>           new MessageId(SQLState.BLOB_BAD_POSITION), new Long(pos));
>   }

or, if it is intentional to give an exception if pos is bigger than 
Integer.MAX_VALUE, one could write it more explicitly:

>   if (pos <= 0L || pos >= MAXPOS) {
>       throw new SqlException(agent_.logWriter_,
>           new MessageId(SQLState.BLOB_BAD_POSITION), new Long(pos));
>   }
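
To see why the narrowing cast misbehaves (a standalone illustration, not
Derby code):

long pos = (1L << 32) + 1;     // 4294967297
System.out.println((int) pos); // prints 1: the "(int) pos <= 0" check wrongly passes
pos = 1L << 31;                // 2147483648, a valid positive position
System.out.println((int) pos); // prints -2147483648: wrongly rejected as non-positive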


Andreas


Re: conflict detection strategies

2006-02-15 Thread Andreas Korneliussen

Daniel John Debrunner wrote:

Andreas Korneliussen wrote:



Problem:
For holdable cursors, we will release the table intent lock when doing
commit on the transaction for the cursor.

The table intent lock, prevents the system from doing a compress of the
table, causing all RowLocations to be invalid. In addition, it prevents
reuse of RowLocation for deleted + purged rows.



I think this last paragraph is an incorrect assuption. The table intent
lock prevents other transactions from doing a compress, but not the
transaction holding the lock.



That is a good point.

The main problem would be the system doing a compress; however, we should 
take into account that the user can run compress from the same 
transaction, and we would then maybe have to invalidate the resultset, or 
prevent the compress from running.



I think there are other situations where the RowLocation will become
invalid, such as the transaction deleteing the row.



Yes, however as far as I understood, the RowLocation would not be reused 
as long as at least some sort of table level intent lock is held, and 
the store will simply return false if one tries to do update / delete / 
fetch on a RowLocation which is deleted, or deleted+purged.


Andreas


Re: conflict detection strategies

2006-02-15 Thread Andreas Korneliussen

Mike Matrigali wrote:
..

If possible I would like to see a solution that does not require special
messages sent back and forth between modules about state.



I am not entirely sure what restrictions you want to put on the design, 
it is a bit unclear to me.


I have considered some other solutions:

1. Change the locking behaviour, so that a table intent lock which is 
set by an updatable cursor, is kept as long as the cursor is open - this 
will ensure that the RowLocations are valid.


2. After a commit, we could clear all data in the internal table in the 
SUR. The problem with this approach is that the resultset would not 
necessarily be repopulated with the same data - it would be sensitive 
to changes across its own transaction's commits, and it would be highly 
inefficient.


3. Let the cursors notify the OnlineCompress module that it should fail 
any attempt to compress/defragment or purge the table.


More details on what I suggested yesterday:

The OnlineCompress class could provide an event mechanism, where 
subscribers (OnlineCompressListener) register themselves to listen to 
OnlineCompressEvents. The ScrollInsensitiveResultSet class could then 
implement the OnlineCompressListener interface, and register itself once 
it starts populating the table with RowLocations. The OnlineCompress 
class then simply notifies all listeners once it is doing defragment / 
compress.
The listeners should unregister themselves (e.g. the 
ScrollInsensitiveResultSet class could do it once it closes). The 
OnlineCompress class could use a WeakHashMap to put the listeners into, 
in case they are not well-behaved. I have not checked if Derby already 
has event-manager type modules; if it does, I would attempt to reuse 
them.
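
A minimal sketch of such a mechanism (all names here are hypothetical;
no such interface exists in OnlineCompress today):

import java.util.Iterator;
import java.util.Map;
import java.util.WeakHashMap;

interface OnlineCompressListener {
    void tableCompressed(String schemaName, String tableName);
}

class OnlineCompressNotifier {
    // Weak keys: listeners that forget to unregister can still be collected
    private final Map listeners = new WeakHashMap();

    synchronized void register(OnlineCompressListener l)   { listeners.put(l, null); }
    synchronized void unregister(OnlineCompressListener l) { listeners.remove(l); }

    // OnlineCompress would call this before doing defragment / compress
    synchronized void notifyCompressed(String schema, String table) {
        for (Iterator it = listeners.keySet().iterator(); it.hasNext();) {
            ((OnlineCompressListener) it.next()).tableCompressed(schema, table);
        }
    }
}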


Please also let me know if any of the other alternatives seems better.


Andreas



Andreas Korneliussen wrote:



Some context: In scrollable updatable resultsets, we populate an
internal table with the following data:

[<position> <RowLocation> <updated> <deleted> <row data>]+

Example layout:

 1 <1,10> false false 1,"a",3
 2 <1,11> false false 2,"b",2
 3 <1,12> false false 3,"c",9


When doing updateRow(), or deleteRow(), we use the RowLocation to
navigate to the row being updated.

Problem:
For holdable cursors, we will release the table intent lock when doing
commit on the transaction for the cursor.

The table intent lock prevents the system from doing a compress of 
the table, which would cause all RowLocations to become invalid. In 
addition, it prevents reuse of RowLocations for deleted + purged rows.

In order to support holdable scrollable updatable cursors, we consider
having a service which allows the system to notify subscribers (i.e
cursors) that it has executed i.e a compress.

If the user then calls updateRow() or deleteRow(), we can then give an
exception like:

"The row could not be updated, because its location has been updated by
the system"

In addition, we consider having a reclaim of locks, so that immediately 
after a commit, the new transaction with the holdable cursor may 
reclaim the table intent lock.  This will reduce the time period in which 
the system may compress the table, however it will not completely remove 
the possibility of a compress.

Any comments on implementing such strategy ?

An alternative to this strategy, could be to go the other way: cursors
notify the system that it should not do compress.

I would appreciate feedback on this topic, especially if you find any
pitfalls with the proposed strategies, or have better alternatives.

Andreas





Re: conflict detection strategies

2006-02-15 Thread Andreas Korneliussen



I think there are other situations where the RowLocation will become
invalid, such as the transaction deleteing the row.



Yes, however as far as I understood, the RowLocation would not be reused 
as long as at least some sort of table level intent lock is held, and 
the store will simply return false if one tries to do update / delete / 
fetch on a RowLocation which is deleted, or deleted+purged.




To be clear: we do handle the situation where the RowLocation points to a 
deleted row by giving a WARNING if the user tries to do updateRow() or 
deleteRow(). For positioned updates, we will give an update count of 0. 
Therefore we do not really think of those RowLocations as invalid.
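
In JDBC terms this surfaces roughly as follows (a sketch; assumes rs is
positioned on a row that has since been deleted, and a statement stmt
whose cursor is named C1):

rs.updateInt(2, 42);
rs.updateRow();                  // no error, but...
SQLWarning w = rs.getWarnings(); // ...a warning is raised
if (w != null) {
    System.out.println("Warning: " + w.getMessage());
}
// the positioned-update path instead reports an update count of 0:
int n = stmt.executeUpdate("update T set a = 42 where current of C1");
System.out.println(n);           // prints 0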



Andreas



Andreas




Re: [jira] Updated: (DERBY-934) create a set of JUnit tests for Scrollable Updatable Resultsets

2006-02-16 Thread Andreas Korneliussen

Myrna van Lunteren wrote:
oy, I get better results after svn update than I got with the patches. 
I'll check with various jvms now.
 


Hi,
I do not expect the tests to pass in J2ME, since they use DriverManager. 
I did experiment a bit with using ij.startJBMS() to get the Connection, 
however it just returned null, so I guess it failed somewhere and 
swallowed the exception. Maybe the TestUtil class could be used instead 
(I have already used it for getting the JDBC URL etc.).


Andreas



Re: conflict detection strategies

2006-02-16 Thread Andreas Korneliussen

Mike Matrigali wrote:
There are very few cross thread dependencies not managed by locks 
currently.  These things add extra complications to current and

future code.  Also I want to understand clearly the new restrictions
being imposed on the access methods (both current and possible
future).  In the future we would like to do more automatic space
reclamation as part of the zero-admin goal, the ability to do this
in the future internal to access methods is probably affected by
the proposals here.



I think the complexities are there already. It seems very hard to know 
under exactly which conditions a RowLocation remains valid.  We have 
assumed that the RowLocation will remain valid as long as we hold a 
table intent lock (valid means that it either points to the same row or 
to a deleted row).


That is what we concluded from the previous discussion about the validity 
of RowLocation. If you in the future need to make code, or there already 
is code, which breaks this assumption, we would need to know which other 
mechanisms we should use to either verify that the RowLocation is valid, 
or to block the system from making it invalid.


Locks can be used to manage cross thread dependencies, however they are 
bound to the transaction, and therefore does not help very much for 
cursors held across commits. So if the only mechanism we can have to 
ensure that the RowLocations are valid, is by the means of locks, I 
doubt we can support the feature of scrollable insensitive *holdable* 
updatable resultset.



It is true that the current access methods don't reuse row locations
until a table level lock is granted.  But your project would be the
first dependency on this outside of the access method implementations
themselves.  The contract that the access methods have with their 
clients while locks are held on the data they are looking at is very 
clear; what you are proposing is a contract on unlocked data.



I guess we are the first to use RowLocation without holding a lock on 
the row. This is necessary, unless we simply make SUR cursors set locks 
for all rows in the cursor independently of the isolation level.



Note that the current "in-place" compress will MOVE rows from one
row location to another if one does not have a row lock on the row.
This is done in the 2nd phase and only holds an intent lock, and
exclusive row locks on the rows being moved.
The off-line compress only does work under an X table lock.
So the row that you are updating actually will exist in the table,
but currently you will request the old location and will get back
a delete row indicator.  I think because of this option 1 does not
work.

Are you saing that RowLocations can be invalidated by "in-place" 
compress even if we hold a Table intent lock ?


How do you call "in-place" compress today ? Does the system use it 
automatically, or do the user have to call it manually ?



The state of held cursors across commits is very murky in the standards.
We looked very carefully at forward only held cursors, and the standards
there are carefully worded to basically not promise anything about the 
rows that were viewed that preceded the commit (clearly since the 
standard says the only thing you can do after the commit is a next to 
get a new row or close - never can access rows looked at before the

commit).  What options are legal
implementations of updatable scrollable result sets for held cursors 
across commits?  Do the standards guarantee anything about data in the

cursor looked at before the commit?



I looked at the SQL standard, and for cursors held over commit, it says:

"If the cursor is insensitive, then significant changes are not visible"

Andreas




Andreas Korneliussen wrote:


Mike Matrigali wrote:
..


If possible I would like to see a solution that does not require special
messages sent back and forth between modules about state.



I am not entirely sure what restrictions you want to put on the 
design, it is a bit unclear to me.


I have considered some other solutions:

1. Change the locking behaviour, so that a table intent lock which is 
set by an updatable cursor, is kept as long as the cursor is open - 
this will ensure that the RowLocations are valid.


2. After a commit, we could clear all data in the internal table in 
the SUR. The problem with this approach is that the resultset would 
not necessarily be repopulated with the same data - it would be 
sensitive for changes across its own transactions commits, it would be 
highly ineffecient.


3. Let the cursors notify the OnlineCompress module that it should 
fail any attempt to compress/defragment or purge the table.


More details on what I suggested yesterday:

The OnlineCompress class could provide an event mechanism, where 
subscribers (OnlineCompressListener) register themselves to listen to 
OnlineCompressEvents. The ScrollInsensitiveResultSet class could then 
implement the OnlineCompressListener interface, a

Re: conflict detection strategies

2006-02-17 Thread Andreas Korneliussen

Mike Matrigali wrote:

I posted some questions about how the delete/update is done, those
answers would help me understand better what is needed for a solution.

I am going to start a separate thread concentrating on RowLocation
guarantees from store.


That is great.


Some other answers below.

Andreas Korneliussen wrote:



Mike Matrigali wrote:



There are very few cross thread dependencies not managed by locks
currently.  These things add extra complications to current and
future code.  Also I want to understand clearly the new restrictions
being imposed on the access methods (both current and possible
future).  In the future we would like to do more automatic space
reclamation as part of the zero-admin goal, the ability to do this
in the future internal to access methods is probably affected by
the proposals here.



I think the complexities are there already. It seems very hard to know
under exactly which conditions a RowLocation remains valid.  We have
assumed that the RowLocation will remain valid as long as we hold a
table intent lock (valid means that it either points to the same row or
to a deleted row).

That is what we concluded from the previous discussion about the validity
of RowLocation. If you in the future need to make code, or there already
is code, which breaks this assumption, we would need to know which other
mechanisms we should use to either verify that the RowLocation is valid,
or to block the system to make it invalid.


There is already code that breaks this assumption, the in place compress
table.  It currently is only executed as a call to a system procedure:
http://db.apache.org/derby/docs/dev/ref/rrefproceduresinplacecompress.html

But the hope was in the future to do the same kind of work that this
procedure does, internally in background.

As you have already determined that off-line compress is even more of a
problem, but it does run under an exclusive table lock.  After it runs
the container you were connected to does not even exist any more.



The online compress seems to require table-exclusive locks when passing 
the defragment or truncate_end argument.
There are two testcases in jdbcapi/ConcurrencyTest which test this 
(testDefragmentDuringScan and testTruncateDuringScan).


The tests first delete all rows except the first and the last, then 
commit. Then they open a SUR in read-uncommitted mode, read all records 
into the resultset, and position the cursor after the last row.
Then they run the defragment or truncate; this hangs waiting for a lock. 
Finally, they update the rows correctly.
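
For reference, this is roughly how the tests invoke the procedure (the
five parameters are per the Derby reference manual; schema and table
names here are illustrative):

CallableStatement cs = con.prepareCall(
    "call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(?, ?, ?, ?, ?)");
cs.setString(1, "APP");     // schema name
cs.setString(2, "T1");      // table name
cs.setShort(3, (short) 0);  // purge rows
cs.setShort(4, (short) 1);  // defragment rows
cs.setShort(5, (short) 0);  // truncate end
cs.execute();               // blocks, then times out, while the SUR scan holds its locks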


Output from the tests (modified the test slightly to get more output):
T1: delete records
T1: commit
T2: Read next Tuple:(0,0,17)
T2: Read next Tuple:(9,9,35)
T3: call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE
T3: DEFRAGMENT
ERROR 40XL1: A lock could not be obtained within the time requested
   at org.apache.derby.impl.store.access.RAMTransaction.openScan(RAMTransaction.java:1582)
   at org.apache.derby.iapi.db.OnlineCompress.setup_indexes(OnlineCompress.java:605)
   at org.apache.derby.iapi.db.OnlineCompress.defragmentRows(OnlineCompress.java:359)
   at org.apache.derby.iapi.db.OnlineCompress.compressTable(OnlineCompress.java:227)
   at org.apache.derby.catalog.SystemProcedures.SYSCS_INPLACE_COMPRESS_TABLE(SystemProcedures.java:858)
...
   at org.apache.derbyTesting.functionTests.tests.jdbcapi.ConcurrencyTest.testCompressDuringScan(ConcurrencyTest.java:777)
   at org.apache.derbyTesting.functionTests.tests.jdbcapi.ConcurrencyTest.testDefragmentDuringScan(ConcurrencyTest.java:712)

...

T3: got expected exception
T1: Read first Tuple:(0,0,17)
T1: updateInt(2, 3);
T1: updateRow()
T1: Read last Tuple:(9,9,35)
T1: updateInt(2, 3);
T1: updateRow()
T1: commit
T4: select * from table
T4: Read next Tuple:(0,3,17)
T4: Read next Tuple:(9,3,35)

So to me it seems that our assumptions are correct: only a purge is 
allowed with row-level locking; online compress and online defragment 
seem to be blocked by an open cursor on the table.

Maybe that is not how online compress was intended to be?

Andreas





Locks can be used to manage cross thread dependencies, however they are
bound to the transaction, and therefore does not help very much for
cursors held across commits. So if the only mechanism we can have to
ensure that the RowLocations are valid, is by the means of locks, I
doubt we can support the feature of scrollable insensitive *holdable*
updatable resultset.



I agree, I don't believe derby is currently architected to correctly
implement "holdable" SUR.  I don't think the outside event driven
approach is the right way to go.


It is true that the current access methods don't reuse row locations
until a table level lock is granted.  But your project would be the
first dependency on this outside of the access method implementations
themselves.  It is very clear the contract that the access methods
have with their cli

Re: [jira] Updated: (DERBY-934) create a set of JUnit tests for Scrollable Updatable Resultsets

2006-02-17 Thread Andreas Korneliussen

Kathey Marsden wrote:

Myrna van Lunteren wrote:



I'm not certain it's so bad - the test expects failures.
But I logged DERBY-999. Unfortunately, I can't upload the .tmp file right
now, Jira gives me an error...I'll try that again later.




I am sorry. I did not realize that the test masters had so many
failures checked in, and I read the diff backwards.
Kathey

Actually, ideally it should not be necessary to have master files for 
junit tests. The reason these specific tests now have master files is 
that they are expected to fail, since the feature is not submitted yet.


Andreas


Re: conflict detection strategies

2006-02-17 Thread Andreas Korneliussen

Mike Matrigali wrote:

It looks like I may have been thinking about future directions vs.
current reality.  The question still is what should the contract
be, rather than what a specific implementation currently provides.



I agree. I think the contract could be one of these:

1. A RowLocation is valid as long as the transaction has a table intent lock
2. A RowLocation is valid as long as the transaction has a row lock for 
the row.


Both contracts have different tradeoffs, I guess. For store, alt. 2 
gives more flexibility in future online compress operations, however I 
think it would require SURs to set and hold locks for all isolation levels.


The JavaDoc for RowLocation does say:

  See the conglomerate implementation specification for
  information about the conditions under which a row location
  remains valid.

So currently, it is implementation defined.


Andreas




Andreas Korneliussen wrote:


Mike Matrigali wrote:


I posted some questions about how the delete/update is done, those
answers would help me understand better what is needed for a solution.

I am going to start a separate thread concentrating on RowLocation
guarantees from store.


That is great.


Some other answers below.

Andreas Korneliussen wrote:



Mike Matrigali wrote:



There are very few cross thread dependencies not managed by locks
currently.  These things add extra complications to current and
future code.  Also I want to understand clearly the new restrictions
being imposed on the access methods (both current and possible
future).  In the future we would like to do more automatic space
reclamation as part of the zero-admin goal, the ability to do this
in the future internal to access methods is probably affected by
the proposals here.



I think the complexities are there already. It seems very hard to know
under exactly which conditions a RowLocation remains valid.  We have
assumed that the RowLocation will remain valid as long as we hold a
table intent lock (valid means that it either points to the same row or
to a deleted row).

That is what we concluded from the previous discussion about the validity
of RowLocation. If you in the future need to make code, or there 
already
is code, which breaks this assumption, we would need to know which 
other
mechanisms we should use to either verify that the RowLocation is 
valid,

or to block the system to make it invalid.




There is already code that breaks this assumption, the in place compress
table.  It currently is only executed as a call to a system procedure:
http://db.apache.org/derby/docs/dev/ref/rrefproceduresinplacecompress.html 



But the hope was in the future to do the same kind of work that this
procedure does, internally in background.

As you have already determined that off-line compress is even more of a
problem, but it does run under an exclusive table lock.  After it runs
the container you were connected to does not even exist any more.



The online compress seems to require table-exclusive locks when 
passing the defragment or truncate_end argument.
There are two testcases in jdbcapi/ConcurrencyTest which test this 
(testDefragmentDuringScan and testTruncateDuringScan).


The tests first delete all rows except the first and the last, then 
commit. Then they open a SUR in read-uncommitted mode, read all 
records into the resultset, and position the cursor after the last row.
Then they run the defragment or truncate; this hangs waiting for a 
lock. Finally, they update the rows correctly.


Output from the tests (modified the test slightly to get more output):
T1: delete records
T1: commit
T2: Read next Tuple:(0,0,17)
T2: Read next Tuple:(9,9,35)
T3: call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE
T3: DEFRAGMENT
ERROR 40XL1: A lock could not be obtained within the time requested
    at org.apache.derby.impl.store.access.RAMTransaction.openScan(RAMTransaction.java:1582)
    at org.apache.derby.iapi.db.OnlineCompress.setup_indexes(OnlineCompress.java:605)
    at org.apache.derby.iapi.db.OnlineCompress.defragmentRows(OnlineCompress.java:359)
    at org.apache.derby.iapi.db.OnlineCompress.compressTable(OnlineCompress.java:227)
    at org.apache.derby.catalog.SystemProcedures.SYSCS_INPLACE_COMPRESS_TABLE(SystemProcedures.java:858)
    ...
    at org.apache.derbyTesting.functionTests.tests.jdbcapi.ConcurrencyTest.testCompressDuringScan(ConcurrencyTest.java:777)
    at org.apache.derbyTesting.functionTests.tests.jdbcapi.ConcurrencyTest.testDefragmentDuringScan(ConcurrencyTest.java:712)
    ...

T3: got expected exception
T1: Read first Tuple:(0,0,17)
T1: updateInt(2, 3);
T1: updateRow()
T1: Read last Tuple:(9,9,35)
T1: updateInt(2, 3);
T1: updateRow()
T1: commit
T4: select * from table
T4: Read next Tuple:(0,3,17)
T4: Read next Tuple:(9,3,35)

So to me it seems that our assumptions are correct. Only a purge is 
allowed with row-level locking; online compress and online defragment 
seem to be blocked by an open cursor on the table.
Re: conflict detection strategies

2006-02-17 Thread Andreas Korneliussen

I have attached the writeup to the JIRA issue (DERBY-690)
--Andreas


Dag H. Wanvik wrote:

Hi,



"Daniel" == Daniel John Debrunner <[EMAIL PROTECTED]> wrote:



Daniel> Was this posted, the more detailed solution?
Daniel> 
Daniel> There was a little more detail on a proposed interaction with online

Daniel> compress, but I think that is based upon a whole lot of design thinking
Daniel> and assumptions which has not made it to list.

We will submit a detailed description of the approach with the first
patch for DERBY-690 (embedded SUR); it should be ready for upload any
day now. This should hopefully provide enough context for the
reviewers.

Thanks,
Dag

Daniel> 
Daniel> I'd assumed you meant you were going to describe your proposed full

Daniel> solution to SUR, I'm interested to know your approach works with the
Daniel> various isolation levels, how you handle deleting the rows, how
Daniel> holdability is supported, etc. And if during working on this you had to
Daniel> work out how scrollable read-only cursors are implemented, adding that
Daniel> as background would be excellent. Great knowledge to add to the
Daniel> community. Don't assume reviewers know how things work today, provide as
Daniel> much information as you know.




Re: conflict detection strategies

2006-02-20 Thread Andreas Korneliussen

Mike Matrigali wrote:

What is your definition of a "valid" RowLocation?  Mine is:

1) A non-deleted RowLocation is guaranteed to point at the intended
   record until a delete is executed, by a client of the access method.
   This part requires no locking, and is the current protocol.

2) A row lock is required to prevent a delete of the row by another
   transaction.  There is no way to prevent a delete of the RowLocation
   by the same transaction.  This is the current protocol.

3) I think SUR requires that upon committed delete access to the
   RowLocation always return some sort of failure, and never access
   to a different row.  Truncate currently breaks this, for
   conglomerates that guarantee non-reusable RowLocations.  In current
   access methods this could be enforced easily if the table level
   intent lock requirement is added.
   I would be comfortable adding this to store contract.  It
   seems reasonable and allows coordination through locking.

Yes, I agree with your definition, and I would appreciate it if you could 
add the table intent lock requirement to the contract.



Note this does not address other current client usages of the access
methods.  But I believe that all those clients could easily agree
to the table intent lock protocol.  This would mean that any client
that wanted to break this protocol must get an exclusive table lock
first.  I believe all those clients already do, but for different
reasons.

Does this solve your non-holdable case?  The holdable case is a 
different beast, and should be a different thread.



Yes, as far as I can see, this solves the non-holdable case.

And to be entirely safe, would it be possible to add a requirement that 
truncate/defragment and other compress operations should always run in 
autocommit mode?


I am thinking of the cases where a user could run these in the same 
transaction in which they have an updatable cursor. If they are in 
autocommit mode, I think the (non-holdable) cursor will be closed.


Andreas








Andreas Korneliussen wrote:


Mike Matrigali wrote:


It looks like I may have been thinking about future directions vs.
current reality.  The question still is what should the contract
be, rather than what a specific implementation currently provides.



I agree. I think the contract could be one of these:

1. A RowLocation is valid as long as the transaction has a table 
intent lock
2. A RowLocation is valid as long as the transaction has a row lock 
for the row.


The two contracts have different tradeoffs, I guess. For store, alt. 2 
gives more flexibility in future online compress operations; however, I 
think it would require SURs to set and hold locks for all isolation 
levels.


The JavaDoc for RowLocation does say:

  See the conglomerate implementation specification for
  information about the conditions under which a row location
  remains valid.

So currently, it is implementation defined.


Andreas




Andreas Korneliussen wrote:


Mike Matrigali wrote:


I posted some questions about how the delete/update is done, those
answers would help me understand better what is needed for a solution.

I am going to start a separate thread concentrating on RowLocation
guarantees from store.


That is great.


Some other answers below.

Andreas Korneliussen wrote:



Mike Matrigali wrote:



There are very few cross thread dependencies not managed by locks
currently.  These things add extra complications to current and
future code.  Also I want to understand clearly the new restrictions
being imposed on the access methods (both current and possible
future).  In the future we would like to do more automatic space
reclamation as part of the zero-admin goal, the ability to do this
in the future internal to access methods is probably affected by
the proposals here.



I think the complexities are there already. It seems very hard to know
under exactly which conditions a RowLocation remains valid.  We have
assumed that the RowLocation will remain valid as long as we hold a
table intent lock (valid means that it either points to the same row or
to a deleted row).

That is what we concluded from the previous discussion about validity
of RowLocation. If you in the future need to make code, or there already
is code, which breaks this assumption, we would need to know which other
mechanisms we should use to either verify that the RowLocation is valid,
or to block the system to make it invalid.






There is already code that breaks this assumption, the in place compress
table.  It currently is only executed as a call to a system procedure:
http://db.apache.org/derby/docs/dev/ref/rrefproceduresinplacecompress.html 



But the hope was in the future to do the same kind of work that this
procedure does, internally in background.

As you have already determined that off-line compress is even more of a
problem, but it does run under an exclusive table lock.  After it runs
the container you were connected to does not even exist any more.

RowLocation validation, for holdable SUR

2006-02-20 Thread Andreas Korneliussen


Following is a proposal to ensure that a client of store can verify the 
validity of a RowLocation.  A RowLocation has become invalid if a store 
operation has caused it to point to another row or to a non-existent 
position (deleted row or non-existing page/record-id).
I think we need a mechanism to detect that a RowLocation has become 
invalid in order to implement *holdable* SUR.


To do this, I would propose:

- The RowLocation object should contain a version number for the page.

- A version number should be stored in the header for a Page

- Whenever an operation which may invalidate row-locations is executed, 
the version number for the page is updated. These operations include 
online/offline compress.


- When navigating to a RowLocation which has an invalid version number, 
the store may fail (i.e., return false), as sketched below


The page header for a stored page currently has a number of fields 
which are intended for future use, and it seems that it is possible to 
use these fields without breaking backward compatibility.
I noticed one of the fields in the header is named "generation" (from 
StoredPage.java):


 *  4 bytes integer  generation      generation number of this page (FUTURE USE)
 *  4 bytes integer  prevGeneration  previous generation of page (FUTURE USE)


Could I use the generation field for this, or has it been reserved for 
something else ? Alternatively, I could use one of the other long fields 
reserved for future use.


Any comments ?

Thanks

--Andreas


Re: RowLocation validation, for holdable SUR

2006-02-21 Thread Andreas Korneliussen
I will modify the suggestion somewhat. First, I think that offline 
compress is not a problem, even for the holdable SUR. Since offline 
compress moves the records to another container, the SUR cursors should 
detect that the container they use is no longer valid when renavigating 
to the row.


If a client of store moves a row by deleting and inserting it somewhere 
else, the SUR should not find the row when trying to do renavigate to it 
for update or delete, and can give an error.


Our problem is the case where a row is inserted into the 
container and gets the same RowLocation as a row which we have 
read into the SUR. The row which we had previously read into the SUR 
must have been deleted and purged for this to happen.


In addition, as far as I can see, for a new row to get the same 
RowLocation as a row previously deleted and purged, the page for the 
row must have been truncated and recreated.


So how can we detect that a page has been recreated? We could, for 
instance, use a timestamp recording the create/recreate time of the page. 
This timestamp could be read by the SUR as it reads the RowLocation (so 
we do not need to change the impl. of RowLocation); again, we would 
probably need to change the header for the page so that we can store 
the timestamp.



Andreas




Mike Matrigali wrote:

Some questions:

o row locations are stored in every index row.  Are you proposing a data 
level upgrade of every row in all databases?

o What is your proposal in the case of soft upgrade (note I believe not
  supporting "holdable" SUR in soft upgrade is an option).
o The hard case is the compress case that removes pages from a file, in
  this case there is no place to store the version number that you
  are relying on (the same problem in the current system is why truncate 
can't support non-reusable RowLocations).
o Is it worth the on disk and in memory overhead to every row location 
to support holdable SUR?


I believe one of the operations you are trying to address is when a 
client of store moves a record by deleting and inserting it.  This is

what compress does today.  So if we start with row loc A pointing at
row A, and compress deletes row A and inserts it at row loc B.  In both
the current and new system access to A will return an error, but neither
will "know" that the row has been moved to a new ID.  Is this ok?

If the current system always supported non-reusable row id's, even in
the truncate case do you have what you need?  Again this will not 
prevent clients of store from moving a row by inserting and deleting

it somewhere else.


Andreas Korneliussen wrote:



Following is a proposal to ensure that a client of store can verify 
the validity of a RowLocation.  A RowLocation has become invalid if a 
store operation has caused it to point to another row or to a 
non-existent position (deleted row or non-existing page/record-id).
I think we need a mechanism to detect that a RowLocation has become 
invalid in order to implement *holdable* SUR.


To do this, I would propose:

- The RowLocation object should contain a version number for the page.

- A version number should be stored in the header for a Page

- Whenever an operation which may invalidate row-locations is 
executed, the version number for the page is updated. These operations 
include online/offline compress.


- When navigating to a RowLocation which has invalid version number, 
the store may fail (i.e return false)


The page header for a stored page currently has a number of fields 
which are intended for future use, and it seems that it is possible to 
use these fields without breaking backward compatibility.
I noticed one of the fields in the header is named "generation" (from 
StoredPage.java):


 *  4 bytes integer  generation      generation number of this page (FUTURE USE)
 *  4 bytes integer  prevGeneration  previous generation of page (FUTURE USE)


Could I use the generation field for this, or has it been reserved for 
something else ? Alternatively, I could use one of the other long 
fields reserved for future use.


Any comments ?

Thanks

--Andreas








Re: conflict detection strategies

2006-02-22 Thread Andreas Korneliussen

Andreas Korneliussen wrote:

Daniel John Debrunner wrote:


Andreas Korneliussen wrote:



Problem:
For holdable cursors, we will release the table intent lock when doing
commit on the transaction for the cursor.

The table intent lock prevents the system from doing a compress of the
table, causing all RowLocations to be invalid. In addition, it prevents
reuse of RowLocation for deleted + purged rows.




I think this last paragraph is an incorrect assuption. The table intent
lock prevents other transactions from doing a compress, but not the
transaction holding the lock.





It seems to me that online compress will not use the same transaction:

ij> autocommit off;
ij>  get cursor c1 as 'select * from t1 for update';
ij>  call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP','T1', 1, 1, 1);
ERROR 40XL1: A lock could not be obtained within the time requested
ij> rollback;


Offline compress is rejected if executed from the same connection:
ij>   get cursor c1 as 'select * from t1 for update';
ij> next c1;
ID
---
1
ij>  call SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'T1', 0);
ERROR 38000: The exception 'SQL Exception: Operation 'ALTER TABLE' 
cannot be performed on object 'T1' because there is an open ResultSet 
dependent on that object.' was thrown while evaluating an expression.
ERROR X0X95: Operation 'ALTER TABLE' cannot be performed on object 'T1' 
because there is an open ResultSet dependent on that object.

ij>

Are there other user-visible mechanisms to start online compress ?

If not, I think we could conclude that there are no known issues with 
the use of RowLocation in non-holdable SUR (given the discussions about 
validity of RowLocation in separate threads)


Andreas



That is a good point.

The main problem would be the system doing a compress; however, we should 
take into account the fact that the user can run compress from the same 
transaction, and then maybe invalidate the result set, or prevent the 
compress from running.



I think there are other situations where the RowLocation will become
invalid, such as the transaction deleteing the row.



Yes, however as far as I understood, the RowLocation would not be reused 
as long as at least some sort of table level intent lock is held, and 
the store will simply return false if one tries to do update / delete / 
fetch on a RowLocation which is deleted, or deleted+purged.


Andreas




default holdability

2006-02-22 Thread Andreas Korneliussen
Currently Derby supports a limited set of combinations of ResultSet types 
and ResultSet concurrency modes. Common to all the current combinations is 
that Derby can support HOLD_CURSORS_OVER_COMMIT; however, for future 
combinations this may be a problem.


We have a problem in that Derby may not be architected to correctly 
implement "holdable" scrollable updatable resultsets (SUR), and as a 
fallback, we may consider not supporting holdability for SUR.


I also saw some JIRAs with synopsis:

Derby-1005: "Holdability for a connection should automatically become 
CLOSE_CURSORS_AT_COMMIT for a global transaction."


and

Derby-1006:"Client allows setHoldability to HOLD_CURSORS_OVER_COMMIT  on 
both connection and statement in a global transaction "


I am not sure these are relevant for the discussion, however it seems to 
me that there are other places in the system where Derby cannot support 
HOLD_CURSORS_OVER_COMMIT.


I think Derby is architected to support the holdability mode 
CLOSE_CURSORS_AT_COMMIT for all combinations of ResultSets.


I therefore find it reasonable to consider changing the *default* 
holdability mode from HOLD_CURSORS_OVER_COMMIT to CLOSE_CURSORS_AT_COMMIT.


Clients which depend on HOLD_CURSORS_OVER_COMMIT should, as a consequence, 
explicitly set the holdability. I think that a client should not depend 
on a holdability mode without specifying it from the application, or 
should at least check DatabaseMetaData.getResultSetHoldability() and then 
call setHoldability() on the Connection if it depends on something else.
If Derby cannot support the specified holdability for a specific 
resultset, Derby could downgrade the holdability and give a warning.


An issue with changing the default holdability is that in JDBC 2 (JDK 
1.3) there is no way to specify holdability, and all ResultSets get the 
default.


--Andreas


Re: RowLocation validation, for holdable SUR

2006-02-22 Thread Andreas Korneliussen

Mike Matrigali wrote:

The SUR should not know anything about the underlying implementation
of the access method getting the row, so having it "read a timestamp"
from page does not work. If the timestamp is not in the rowlocation,
we could add a 'get a timestamp for the row at this rowlocation' call, but
forcing two trips to the store for every row is an overhead.  Rather than discuss
implementation it would be nice to understand the minimum necessary
services needed to be provided by the access method.  Do the same 
interfaces need to be provided by VTI's?  At least

for your use I think the timestamp need only guarantee to be different
after a truncate from previous version on page.

Since you are ok with invalidating the SUR in the case of offline 
compress, what about invalidating the SUR in the case of online

compress also?  One way to do this is for the system catalogs to
maintain a table version number, which would be guaranteed to not
change while any sort of table intent lock was present.  Any operation
which either copied rows to another container or truncated the
table would bump the version number.  And holdable cursors would need
to recheck the version number after losing the lock at commit time.



Yes, the intention has been to get a mechanism to invalidate the cursors 
in case of a compress. So if this is better accomplished by setting a 
table version number in the system catalogs, it is fine with me.



The downside is that some SUR's are invalidated that didn't need to be,
but compress kicking in, in a holdable cursor in the time between a 
commit and the next operation in the cursor is going to be a

rare event.  The upside is that there is no extra per row overhead in
the system for the normal case.

There already exists a ddl invalidation scheme for invalidating query
plans, maybe this existing structure could be used to invalidate
SUR's after the commit?



I do not know.  In this case, we are in the execute phase.  So, is there 
anyone who knows if the DDL invalidation scheme for invalidating query 
plans can invalidate queries in the execute phase?


Thanks

-- Andreas




Andreas Korneliussen wrote:

I will modify the suggestion somewhat. First, I think that offline 
compress is not a problem, even for the holdable SUR. Since offline 
compress moves the records to another container, the SUR cursors 
should detect that the container they use is no longer valid when 
renavigating to the row.


If a client of store moves a row by deleting and inserting it 
somewhere else, the SUR should not find the row when trying to do 
renavigate to it for update or delete, and can give an error.


Our problem is the case where a row is inserted into the 
container and gets the same RowLocation as a row which we have 
read into the SUR. The row which we had previously read into the SUR 
must have been deleted and purged for this to happen.


In addition, as far as I can see, for a new row to get the same 
RowLocation as a row previously deleted and purged, the page for the 
row, must have been truncated, and recreated.


So how can we detect that a page has been recreated? We could, for 
instance, use a timestamp recording the create/recreate time of the page. 
This timestamp could be read by the SUR as it reads the RowLocation (so 
we do not need to change the impl. of RowLocation); again, we would 
probably need to change the header for the page so that we can store 
the timestamp.



Andreas




Mike Matrigali wrote:


Some questions:

o row locations are stored in every index row.  Are you proposing a 
data level upgrade of every row in all databases?

o What is your proposal in the case of soft upgrade (note I believe not
  supporting "holdable" SUR in soft upgrade is an option).
o The hard case is the compress case that removes pages from a file, in
  this case there is no place to store the version number that you
  are relying on (the same problem in the current system is why truncate 
can't support non-reusable RowLocations).
o Is it worth the on disk and in memory overhead to every row 
location to support holdable SUR?


I believe one of the operations you are trying to address is when a 
client of store moves a record by deleting and inserting it.  This is

what compress does today.  So if we start with row loc A pointing at
row A, and compress deletes row A and inserts it at row loc B.  In both
the current and new system access to A will return an error, but neither
will "know" that the row has been moved to a new ID.  Is this ok?

If the current system always supported non-reusable row id's, even in
the truncate case do you have what you need?  Again this will not 
prevent clients of store from moving a row by inserting and deleting

it somewhere else.


Andreas Korneliussen wrote:



Following is a proposal to ensure that a client of store can verify 
the validity of a RowLocation.  A RowLocation has become invalid if 
a store 

compiling XML.java in Java 1.5

2006-02-23 Thread Andreas Korneliussen
I noticed that /java/engine/org/apache/derby/iapi/types/XML.java uses 
the org.apache.xalan.processor.TransformerFactoryImpl class, and it had 
the following comments:


// Note that even though the following has a Xalan
// package name, it IS part of the JDK 1.4 API, and
// thus we can compile it without having Xalan in
// our classpath.
import org.apache.xalan.processor.TransformerFactoryImpl;

The JDK 1.5 API and runtime do not have this class (it has been 
renamed), so I have the following questions:


1. Is it necessary to use the impl class, instead of accessing the 
object through the TransformerFactory interface?
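(For comparison, purely interface-based access through standard JAXP 
would look something like this sketch:)

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;

public class FactorySketch {
    public static void main(String[] args) throws Exception {
        // Standard JAXP lookup: no compile-time dependency on an
        // implementation class such as Xalan's TransformerFactoryImpl.
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer identity = factory.newTransformer();  // identity transform
        System.out.println(identity.getClass().getName());
    }
}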


2. At runtime, is there a guarantee that the class is available when 
using the Java 1.5 JDK and XML functionality? I noticed the class is 
also inside xalan.jar, so I guess this functionality requires that 
jar file to be included at runtime.


Thanks

Andreas


Re: compiling XML.java in Java 1.5

2006-02-23 Thread Andreas Korneliussen

Army wrote:

Andreas Korneliussen wrote:

I noticed that /java/engine/org/apache/derby/iapi/types/XML.java uses 
the org.apache.xalan.processor.TransformerFactoryImpl class, and it had 
the following comments:


// Note that even though the following has a Xalan
// package name, it IS part of the JDK 1.4 API, and
// thus we can compile it without having Xalan in
// our classpath.
import org.apache.xalan.processor.TransformerFactoryImpl;



As part of my (currently stalled) work for DERBY-688 I have reorganized 
the XML code in some key ways with the goal of allowing XML.java to 
build without any dependency on Xalan.  Those changes include the 
movement of all Xalan-related dependencies into a separate class that is 
only loaded at runtime _if_ Xalan is in the user's classpath.


I haven't had a chance to complete that work yet because I've been 
focusing on other issues lately--but I do hope to have the changes 
complete for 10.2.  All of that said, let me now answer your two questions:


The JDK 1.5 API and runtime, does not have this class, (it has been 
renamed), so I have the following questions:


1. Is it necessary to use the impl class, instead of accessing the 
object through the TransformerFactory interface ?



The current Derby code uses the impl class because it (the impl class) 
defines two methods that are not part of the TransformerFactory API: 
namely, newTemplatesHandler() and newTransformerHandler().  The current 
code uses those methods to process stylesheets and in turn evaluate XPath 
expressions.


There may be a way to accomplish the same thing indirectly by using 
other methods that are part of the standard API, but I don't know off 
hand if that's the case.  In any event, with my changes for DERBY-688 
I've rewritten the XPath evaluation code to _not_ use stylesheets and to 
instead use the lower-level Xalan XPath API.  This means that 
TransformerFactoryImpl is no longer required--but of course there are other 
Xalan-specific classes that are necessary, so in the end the dependency 
on Xalan is still there.  But it's been moved out of XML.java into 
another class.


Which leads to the next question:

2. At runtime, is there a guarantee that the class is available when 
using Java 1.5 JDK and using XML functionality ? I noticed the class 
is also inside xalan.jar, so I guess that this functionality requires 
this jar file to be included at runtime.



When I originally submitted the XML type and initial operators 
(DERBY-334) I did so with the expectation that the XML type would only 
work if Xalan was in the user's classpath.  That's why the XML tests do 
not currently run as part of derbyall--I didn't want to force anyone 
running the tests to have to download Xalan (or Xerces for that matter) 
just to run the regression tests.


As for runtime checks, the current code doesn't explicitly check for 
Xalan, so the result will be some sort of ClassNotFound or NoClassDef 
exception.  That might seem silly, but the whole tone of DERBY-334 was 
that the XML datatype was a new type that was not ready for production 
use--which is why I haven't documented it yet (and which is why we've 
been able to get by so far without running nightly regression tests).


With DERBY-688, though, I will be adding better run-time checks to throw 
more user-friendly errors if the required Xalan classes aren't found.  I 
will also reorganize the code, as mentioned above, to remove Xalan 
dependencies from XML.java, which is the base type for XML columns, so 
that users can operate on tables with XML columns even if they don't 
have Xalan in their classpath, assuming they don't actually reference 
the XML columns (currently that's not possible).


Additional changes targeted for DERBY-688 are described in that issue 
and the corresponding spec (which, I admit, needs to be updated).  The 
intent is that, by making these changes, we can make XML an "official" 
Derby datatype ready for production use, and thus we can add it to the 
documentation.  Most of the changes are well on their way to completion, 
I just have to tie up some loose ends and make everything "official".  
As I said, I'm hoping to do that for the 10.2 release...


Does that answer your questions?


Yes, thank you very much for your explanations

--Andreas


Army





Re: [jira] Commented: (DERBY-690) Add scrollable, updatable, insensitive result sets

2006-02-23 Thread Andreas Korneliussen

Daniel John Debrunner wrote:


Fernanda Pizzorno (JIRA) wrote:

 


Read-uncommitted isolation level does acquire locks for updatable forward-only 
result sets; the same behavior has been kept for scrollable insensitive updatable 
result sets.
   



Hmmm, this means such ResultSets are not running at read-uncommitted
isolation level, they are running at read-committed isolation level.

I understand this is existing behaviour, I wonder if it arises due to
SELECT's FOR UPDATE causing this locking behaviour.

 

SELECT's FOR UPDATE have this behaviour, even if you do not have 
CONCUR_UPDATABLE.



Customers in the past have complained about any locks obtained in
read-uncommitted mode, it might be suprising for them to have a
read-uncommitted ResultSet blocked by other transactions.
 

I think what the standards say about read-uncommitted mode is that 
phenomena like dirty reads can occur; however, as far as I know, even 
serializable would be a legal implementation of read-uncommitted (not the 
other way around), since the standards do not require these phenomena to 
happen.


It is good to be in line with forward-only (FO), to get a symmetric 
system in terms of behaviour, and also a less complicated 
implementation.  So, what happens is that in read-uncommitted mode a SUR 
sets locks in line with read-committed (and FO); however, there may of 
course be many other queries in the transaction which are not SURs, and 
they will not set locks (so the dirty-read phenomenon can occur even if 
you use a SUR in your transaction). 

Also, I think the JDBC tutorials mention that using updatable 
result sets may affect the locking behaviour compared to read-only. E.g., from

http://java.sun.com/j2se/1.3/docs/guide/jdbc/getstart/resultset.html :

CONCUR_UPDATABLE
   * Indicates a result set that can be updated programmatically
   * Available to drivers that implement the JDBC 2.0 core API
   * Reduces the level on concurrency. Updatable results sets may use 
write-only locks so that only one user at a time has access to a data 
item. This eliminates the possibility that two or more users might 
change the same data, thus ensuring database consistency. However, the 
price for this consistency is a reduced level of concurrency.


(In Derby we use update locks instead of write locks)
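For example, the kind of result set under discussion is requested through 
the standard JDBC API roughly as follows (a sketch only; the table and 
connection URL are placeholders, and SUR support is what DERBY-690 adds):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SurSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:derby:sampleDB");
        conn.setAutoCommit(false);
        // CONCUR_UPDATABLE changes the locking compared to CONCUR_READ_ONLY:
        // update locks are taken on the rows read by the cursor.
        Statement stmt = conn.createStatement(
            ResultSet.TYPE_SCROLL_INSENSITIVE,
            ResultSet.CONCUR_UPDATABLE);
        ResultSet rs = stmt.executeQuery("SELECT * FROM t1");
        if (rs.absolute(2)) {    // scroll to the second row
            rs.updateInt(2, 3);  // update column 2 of the current row
            rs.updateRow();
        }
        conn.commit();
        conn.close();
    }
}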


I wonder if, at least for scrollable updateable ResultSets, we should
disallow read-uncommitted isolation level, it may be a cleaner way
forward than implementing them as read-committed and then have concerns
in the future about backwards compatibility if we ever need to implement
them correctly. Maybe the more precise statement is disallow updateRow
and deleteRow when the isolation level is read un-committed. insertRow
seems fine.

 

I do not think it would be good to disallow updateRow() and deleteRow() 
in an updatable resultset.



Forward-only updateable read-uncommitted ResultSets ideally should be
disallowed as well, but that might cause backward compatibility issues.

 

I would argue that the current behavior is correct. 


Andreas


Dan.

 





Re: conflict detection strategies

2006-02-27 Thread Andreas Korneliussen

Mike Matrigali wrote:

I was not aware of the alter table dependency check for offline
compress, this must be outside of store somehow  - maybe something
tied to all alter table statements.  It may make sense
to change online compress to hook up with whatever offline compress
is doing to make this happen.

Just testing the current system does not mean that future changes
won't break SUR.  Unless we agree to change the contract on unlocked
RowLocations, then it still isn't right for code to depend on
an unlocked RowLocation not ever pointing to the wrong row - because
of the issue with truncate.   Some possible issues with your test
in the online compress case:



I think you previously said:
"
   In current access methods this could be enforced easily if the
   table level intent lock requirement is added.
   I would be comfortable adding this to store contract.  It
   seems reasonable and allows coordination through locking. "

I therefore think it would be good if the contract said:
 * truncate and compress require exclusive table locking
 * the truncate, purge and compress operations do not share any locks 
with user transactions


Are you ok with adding this to the contract ?


1) online compress has 3 separate phases, all of which do different
   types of locking.  Some use internal transactions, which explain
   the conflict lock.  I would try the following test:
   o autocommit off
   o hold cursor as your example, with a next
   o commit transaction
   o execute in place compress now that hold cursor has released all 
it's locks.




This is exactly the problem for the holdable case: truncate. After 
truncate, a Page can be recreated, and RowLocations may be reused on the 
new page. This should not be a problem for the non-holdable case, since 
we will hold the table intent lock.


Andreas


Andreas Korneliussen wrote:


Andreas Korneliussen wrote:


Daniel John Debrunner wrote:


Andreas Korneliussen wrote:



Problem:
For holdable cursors, we will release the table intent lock when doing
commit on the transaction for the cursor.

The table intent lock prevents the system from doing a compress of the
table, causing all RowLocations to be invalid. In addition, it prevents
reuse of RowLocation for deleted + purged rows.






I think this last paragraph is an incorrect assuption. The table intent
lock prevents other transactions from doing a compress, but not the
transaction holding the lock.





It seems to me that online compress will not use the same 
transaction:


ij> autocommit off;
ij>  get cursor c1 as 'select * from t1 for update';
ij>  call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP','T1', 1, 1, 1);
ERROR 40XL1: A lock could not be obtained within the time requested
ij> rollback;


Offline compress is rejected if executed from the same connection:
ij>   get cursor c1 as 'select * from t1 for update';
ij> next c1;
ID
---
1
ij>  call SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'T1', 0);
ERROR 38000: The exception 'SQL Exception: Operation 'ALTER TABLE' 
cannot be performed on object 'T1' because there is an open ResultSet 
dependent on that object.' was thrown while evaluating an expression.
ERROR X0X95: Operation 'ALTER TABLE' cannot be performed on object 
'T1' because there is an open ResultSet dependent on that object.

ij>

Are there other user-visible mechanisms to start online compress ?

If not, I think we could conclude that there are no known issues with 
the use of RowLocation in non-holdable SUR (given the discussions 
about validity of RowLocation in separate threads)


Andreas



That is a good point.

The main problem would be the system doing a compress; however, we 
should take into account the fact that the user can run compress from 
the same transaction, and then maybe invalidate the result set, or 
prevent the compress from running.



I think there are other situations where the RowLocation will become
invalid, such as the transaction deleteing the row.



Yes, however as far as I understood, the RowLocation would not be 
reused as long as at least some sort of table level intent lock is 
held, and the store will simply return false if one tries to do 
update / delete / fetch on a RowLocation which is deleted, or 
deleted+purged.


Andreas












Re: RowLocation validation, for holdable SUR

2006-02-27 Thread Andreas Korneliussen

Mike Matrigali wrote:

The SUR should not know anything about the underlying implementation
of the access method getting the row, so having it "read a timestamp"
from page does not work. If the timestamp is not in the rowlocation,
we could add a 'get a timestamp for the row at this rowlocation' call, but
forcing two trips to the store for every row is an overhead.  Rather than discuss
implementation it would be nice to understand the minimum necessary
services needed to be provided by the access method.  Do the same 
interfaces need to be provided by VTI's?  At least

for your use I think the timestamp need only guarantee to be different
after a truncate from previous version on page.

Since you are ok with invalidating the SUR in the case of offline 
compress, what about invalidating the SUR in the case of online

compress also?  One way to do this is for the system catalogs to
maintain a table version number, which would be guaranteed to not
change while any sort of table intent lock was present.  Any operation
which either copied rows to another container or truncated the
table would bump the version number.  And holdable cursors would need
to recheck the version number after losing the lock at commit time.



I think I could go for the following solution to invalidate the SUR in 
case of online compress:

- A sequence number is associated with each Container
- The sequence number is updated when doing truncate

A holdable cursor will need to reopen the controller after a commit, 
since the controllers get closed at the end of the transaction (in 
closeForEndTransaction(..)).


When reopening a controller, one may check that the sequence number has 
not changed since it was initially opened. If it has changed, one can 
conclude that there has been an online compress, that updates cannot be 
safely executed, and we may reject the reopen.


Any attempt to do an update on a non-reopened controller will fail, and a 
warning will be given (cursor operation conflict).


This solution does not have the downside of requiring any changes to the 
page layout or RowLocation, and it does not have a per-row cost. The 
downside is that an online compress will prevent the cursor from doing 
any update, even for rows which are unaffected by the truncate.


Note: the ScrollInsensitiveResultSet does not need to know anything 
about the sequence number.
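A minimal sketch of the idea (class and method names are illustrative 
and do not match Derby's actual store classes):

// Illustrative only; Derby's scan controllers and containers differ.
interface Container {
    long getSequenceNumber();  // bumped by truncate during online compress
}

class ScanControllerSketch {
    private final Container container;
    private final long sequenceAtOpen;   // recorded when the scan was opened
    private boolean rowLocationsInvalid;

    ScanControllerSketch(Container container) {
        this.container = container;
        this.sequenceAtOpen = container.getSequenceNumber();
    }

    // Called when a holdable cursor re-opens its controller after commit.
    void reopen() {
        if (container.getSequenceNumber() != sequenceAtOpen) {
            // The container was truncated while no locks were held, so old
            // RowLocations may have been reused: reject further updates.
            rowLocationsInvalid = true;
        }
    }

    // Renavigates to a previously read row; false signals a conflict.
    boolean setRowLocation(Object rowLocation) {
        if (rowLocationsInvalid) {
            return false;  // caller raises a "cursor operation conflict"
        }
        // ... position the underlying scan at rowLocation ...
        return true;
    }
}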


Andreas


The downside is that some SUR's are invalidated that didn't need to be,
but compress kicking in, in a holdable cursor in the time between a 
commit and the next operation in the cursor is going to be a

rare event.  The upside is that there is no extra per row overhead in
the system for the normal case.

There already exists a ddl invalidation scheme for invalidating query
plans, maybe this existing structure could be used to invalidate
SUR's after the commit?

Andreas Korneliussen wrote:

I will modify the suggestion somewhat. First, I think that offline 
compress is not a problem, even for the holdable SUR. Since offline 
compress moves the records to another container, the SUR cursors 
should detect that the container they use is no longer valid when 
renavigating to the row.


If a client of store moves a row by deleting and inserting it 
somewhere else, the SUR should not find the row when trying to do 
renavigate to it for update or delete, and can give an error.


Our problem is the case where a row is inserted into the 
container and gets the same RowLocation as a row which we have 
read into the SUR. The row which we had previously read into the SUR 
must have been deleted and purged for this to happen.


In addition, as far as I can see, for a new row to get the same 
RowLocation as a row previously deleted and purged, the page for the 
row, must have been truncated, and recreated.


So how can we detect that a page has been recreated? We could, for 
instance, use a timestamp recording the create/recreate time of the page. 
This timestamp could be read by the SUR as it reads the RowLocation (so 
we do not need to change the impl. of RowLocation); again, we would 
probably need to change the header for the page so that we can store 
the timestamp.



Andreas




Mike Matrigali wrote:


Some questions:

o row locations are stored in every index row.  Are you proposing a 
data level upgrade of every row in all databases?

o What is your proposal in the case of soft upgrade (note I believe not
  supporting "holdable" SUR in soft upgrade is an option).
o The hard case is the compress case that removes pages from a file, in
  this case there is no place to store the version number that you
  are relying on (the same problem in the current system is why truncate 
can't support non-reusable RowLocations).
o Is it worth the on disk and in memory overhead to every row 
location to support holdable SUR?


I believe one of the operations you are trying to address is when a 
client of store moves a record by deleting and inserting it.

Re: conflict detection strategies

2006-02-28 Thread Andreas Korneliussen

Mike Matrigali wrote:



Andreas Korneliussen wrote:


Mike Matrigali wrote:


I was not aware of the alter table dependency check for offline
compress, this must be outside of store somehow  - maybe something
tied to all alter table statements.  It may make sense
to change online compress to hook up with whatever offline compress
is doing to make this happen.

Just testing the current system does not mean that future changes
won't break SUR.  Unless we agree to change the contract on unlocked
RowLocations, then it still isn't right for code to depend on
an unlocked RowLocation not ever pointing to the wrong row - because
of the issue with truncate.   Some possible issues with your test
in the online compress case:





I think you previously said:
"
   In current access methods this could be enforced this while holding a
   easily if the table level intent
   lock requirement is added.
   I would be comfortable adding this to store contract.  It
   seems reasonable and allows coordination through locking. "

I therefore think it would be good if the contract said:
 * truncate and compress require exclusive table locking
 * the truncate, purge and compress operations do not share any locks 
with user transactions



This seems fine, but may require changes to store code and inplace 
compress to actually support such a contract in store.  The previous

changes just documented what was already supported.


Note: I am here discussing the non-holdable case only.

Yes, I guess you are thinking of the part with not sharing any locks 
with user transactions? This comes from the problem of a user running 
online compress from the same connection as the SUR.


Truncate should be changed to run in a separate transaction, in order 
for the store to be consistent with the proposed contract.


A minimal requirement from SUR is that defragment does not share any 
locks with the user transaction.  If the rows cannot be defragmented, 
then none of the pages from which we have read RowLocations can be 
truncated. Defragment currently is in line with what we need, since it 
runs in a separate transaction.


Purge would only affect committed deleted rows (I guess no user 
transaction could lock these?).



I still don't see how this helps the holdable case, I agree this helps
the non-holdable case.



Yes, I know this only helps the non-holdable case.

Andreas


Re: RowLocation validation, for holdable SUR

2006-02-28 Thread Andreas Korneliussen

Mike Matrigali wrote:



Andreas Korneliussen wrote:


Mike Matrigali wrote:


The SUR should not know anything about the underlying implementation
of the access method getting the row, so having it "read a timestamp"
from page does not work. If the timestamp is not in the rowlocation,
we could add a 'get a timestamp for the row at this rowlocation' call, but
forcing two trips to the store for every row is an overhead.  Rather than discuss
implementation it would be nice to understand the minimum necessary
services needed to be provided by the access method.  Do the same 
interfaces need to be provided by VTI's?  At least

for your use I think the timestamp need only guarantee to be different
after a truncate from previous version on page.

Since you are ok with invalidating the SUR in the case of offline 
compress, what about invalidating the SUR in the case of online

compress also?  One way to do this is for the system catalogs to
maintain a table version number, which would be guaranteed to not
change while any sort of table intent lock was present.  Any operation
which either copied rows to another container or truncated the
table would bump the version number.  And holdable cursors would need
to recheck the version number after losing the lock at commit time.



I think I could go for the following solution to invalidate the SUR in 
case of online compress:

- A sequence number is associated with each Container
- The sequence number is updated when doing truncate

A holdable cursor will need to reopen the controller after a commit, 
since the controllers get closed at the end of the transaction (in 
closeForEndTransaction(..)).


When reopening a controller, one may check that the sequence number 
has not changed since it was initially opened. If it has changed, one 
can conclude that there has been an online compress, that updates 
cannot be safely executed, and we may reject the reopen.


Any attempt to do an update on a non-reopened controller will fail, and 
a warning will be given (cursor operation conflict).


This solution does not have the downside of requiring any changes to 
the page layout or RowLocation, and it does not have a per-row cost. 
The downside is that an online compress will prevent the cursor 
from doing any update, even for rows which are unaffected by the 
truncate.


Note: the ScrollInsensitiveResultSet does not need to know anything 
about the sequence number.


Andreas


This sounds like a good direction.

I was suggesting that the sequence number be maintained in the system
catalogs and owned by upper layer of the system.  It seems like you are
proposing the sequence number be owned by store.  If owned by store
I think I would describe the sequence number something like:
An implementation specific long which will be changed to a never
previously used number if the table undergoes a change which
results in the possibility of a RowLocation which was previously
allocated being reused in a container which was built requesting
no RowLocation reuse.

Can you explain at what point, and in which part of the code does the
system check that the sequence number has changed and then fail the
SUR?  If only for SUR then there will be some querying from SUR to
store after every commit.  If only in store then the closing will affect
existing holdable cursors.



When the GenericScanController class (or one of its subclasses) calls 
OpenConglom.reopen(), it can read the timestamp from the container and, 
based on its own scan_state and the previously read timestamp, set a 
flag (oldRowLocationsInvalid).


The SUR uses a method currently called "setRowLocation(..)" which it 
uses to renavigate the controller. This method could check the 
oldRowLocationsInvalid flag and return false if the old row locations 
have become invalidated.


So, the setting of the flag could happen for all holdable cursors; 
however, the call to setRowLocation(..), which is only used by SUR and 
requires the RowLocation parameter to be a valid row location, is the 
only call which needs to check that flag and have logic to fail the 
operation.


If the setRowLocation(..) call fails, the CurrentOfResultSets will get 
a null reference to the RowLocation they are going to update.  This will 
cause a positioned update / delete / updateRow() / deleteRow() to fail 
and give a warning (cursor operation conflict).


Andreas





Re: conflict detection strategies

2006-03-01 Thread Andreas Korneliussen

Mike Matrigali wrote:

ok, I agree now that it is clear this discussion was concentrating
on non-holdable case.  Can you verify that the 3rd phase only of
in place compress already meets the proposed contract (args 0, 0, 1).



This is what I can verify:

Purge:
ij>  get cursor c1 as 'select * from t1';
ij>   call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'T1', 1, 0,0);
0 rows inserted/updated/deleted
ij> rollback;

As for purge, I have also verified that after a purge, the user 
transaction holds the row locks for the purged rows.



Defragment:
ij>  get cursor c1 as 'select * from t1';
ij>   call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'T1', 0, 1,0);
ERROR 40XL1: A lock could not be obtained within the time requested
ij> rollback;


Truncate:
ij>  get cursor c1 as 'select * from t1';
ij>   call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'T1', 0, 0,1);
0 rows inserted/updated/deleted
ij> rollback;

So, defragment is the only operation which meets the proposed contract.

Andreas



Andreas Korneliussen wrote:


Mike Matrigali wrote:




Andreas Korneliussen wrote:


Mike Matrigali wrote:


I was not aware of the alter table dependency check for offline
compress, this must be outside of store somehow  - maybe something
tied to all alter table statements.  It may make sense
to change online compress to hook up with whatever offline compress
is doing to make this happen.

Just testing the current system does not mean that future changes
won't break SUR.  Unless we agree to change the contract on unlocked
RowLocations, then it still isn't right for code to depend on
an unlocked RowLocation not ever pointing to the wrong row - because
of the issue with truncate.   Some possible issues with your test
in the online compress case:







I think you previously said:
"
   In current access methods this could be enforced easily if the
   table level intent lock requirement is added.
   I would be comfortable adding this to store contract.  It
   seems reasonable and allows coordination through locking. "

I therefore think it would be good if the contract said:
 * truncate and compress require exclusive table locking
 * the truncate, purge and compress operations do not share any 
locks with user transactions





This seems fine, but may require changes to store code and inplace 
compress to actually support such a contract in store.  The previous

changes just documented what was already supported.




Note: I am here discussing the non-holdable case only.

Yes, I guess you are thinking of the part with not sharing any locks 
with user transactions? This comes from the problem of a user running 
online compress from the same connection as the SUR.


Truncate should be changed to run in a separate transaction, in order 
for the store to be consistent with the proposed contract.


A minimal requirement from SUR is that defragment does not share any 
locks with the user transaction.  If the rows cannot be defragmented, 
then none of the pages from which we have read RowLocations can be 
truncated. Defragment currently is in line with what we need, since it 
runs in a separate transaction.


Purge would only affect committed deleted rows (I guess no user 
transaction could lock these?).



I still don't see how this helps the holdable case, I agree this helps
the non-holdable case.



Yes, I know this only helps the non-holdable case.

Andreas








Re: [jira] Created: (DERBY-1068) change of store contract: online compress operations should not share any locks with user transactions

2006-03-01 Thread Andreas Korneliussen

Mike Matrigali wrote:

Andreas, are you going to propose a patch for this or would you like
me to take the first take on it?


Hi,

I would appreciate it if you would take the first take on it, 
since I currently do not have any code for this issue.


Thanks

--Andreas


Andreas Korneliussen (JIRA) wrote:

change of store contract: online compress operations should not share 
any locks with user transactions
-- 



         Key: DERBY-1068
         URL: http://issues.apache.org/jira/browse/DERBY-1068
     Project: Derby
        Type: Improvement
  Components: Store
    Reporter: Andreas Korneliussen
    Priority: Minor


Propose to add the following to the store contract:
* truncate and compress require exclusive table locking
* the truncate, purge and compress operations do not share any locks 
with user transactions
Currently the store implementation allows users to share locks in 
truncate and possibly purge.


This request is driven by:
http://www.nabble.com/conflict-detection-strategies-t1120376.html#a2938142 



and:

http://www.nabble.com/RowLocation-contract-from-storage-engine-t1142235.html#a2994156 









Re: [jira] Commented: (DERBY-690) Add scrollable, updatable, insensitive result sets

2006-03-01 Thread Andreas Korneliussen

Daniel John Debrunner wrote:

Bernt M. Johnsen wrote:




We should also strive to make "insensitivity" as close to the SQL
defintion as possible (SQL 2003 p. 96):

  A change to SQL-data is said to be independent of a cursor CR if
  and only if it is not made by an <update statement: positioned> or a
  <delete statement: positioned> that is positioned on CR.

  A change to SQL-data is said to be significant to CR if and only if
  it is independent of CR, and, had it been committed before CR was
  opened, would have caused the table associated with the cursor to
  be different in any respect.

  A change to SQL-data is said to be visible to CR if and only if it
  has an effect on CR by inserting a row in CR, deleting a row from
  CR, changing the value of a column of a row of CR, or reordering
  the rows of CR.

  [...]

  - If the cursor is insensitive, then significant changes are not visible.



Does JDBC's definition of INSENSITIVE line up with SQL's?

JDBC 3.0 (14.1.1) (and JDBC 4.0 16.1.1)


The result set is insensitive to changes made to the underlying data source 
while
it is open. It contains the rows that satisfy the query at either the time the 
query is
executed or as the rows are retrieved.



SQL seems to say that if an update happens while the cursor is open then
an insensitive cursor will not see it.

JDBC says you might see it, due to the "as the rows are retrieved".




I interpreted the JDBC spec a bit differently: it just says which rows 
are in the result set, not whether you see the updates.



 - JDBC 3.0 spec: "The result set is insensitive to changes made to the
 underlying data source while it is open. It contains the rows that
 satisfy the query at either the time the query is executed or as the
 rows are retrieved."

To me, this says more about which rows are in the resultset - if they 
satisfy the query at execute time, or at retrieve time, they may go into 
the resultset.



Andreas


Dan.






Re: [jira] Updated: (DERBY-1067) support holdable Scrollable Updatable Resultsets

2006-03-06 Thread Andreas Korneliussen

Mike Matrigali (JIRA) wrote:

 [ http://issues.apache.org/jira/browse/DERBY-1067?page=all ]

Mike Matrigali updated DERBY-1067:
--



Hi,
Thanks for the review comments. See some inline comments.



Description:   (was: I am reviewing this patch.  (mike matrigali)
2 major concerns with this patch:
1) The timestamp is implemented in runtime, but not persistent.  The container
   can be thrown away as soon as someone is not referencing it, which I believe
   can happen in the holdable cursor case.   If you want to implement the
   timestamp then I think you have to add to the container header
   (see FileContainer line 278 for container header description), and
   follow code that updates estimated row count for how it is updated and
   read.  Note that doing this is an UPGRADE issue, and you should think
   about soft vs. hard upgrade for this feature.



I am addressing this issue by writing the timestamp in the header as 
you suggested.



   For more comments about upgrade I need to know your plan.  On soft upgrade
   will timestamp be bumped or not.  I would prefer that it not be changed.


Right now, it would be bumped whenever someone executes online compress.
Can you run online compress during soft upgrade? I guess the 
alternative would be to prevent holdable SUR during soft upgrade.




   The current assumption for "unused" fields in store is that they are
   guaranteed with a specific value (usually 0) before an upgrade.  So
   on hard upgrade we know the starting value.  Also if you change it in
   soft upgrade then you have to make sure that all previous of 10.1 don't
   have a problem with that field not being 0 - sometimes there are assertions
   about the field being 0, don't know for sure in this particular case.



Yes, I would need to check that. I did not find any assertions on this 
field in the current code, however I have not yet checked with all 
versions of 10.1.



2) I would have expected tests specific to this change associated with the
   patch.



Yes, currently I have provided some black box tests for holdable 
resultsets in HoldabilityTest.  They will check this feature by running 
online compress on a table where we have a holdable SUR. This test of 
course requires the rest of the SUR implementation to actually test this.


Did you also expect some unit tests for store ?


   some testing areas of concern:
   o soft upgrade, make sure 10.1 works correctly on a 10.2 soft upgrade run.


I guess this could be done by extending the phaseTester, by doing a 
online compress in phase 2 (soft upgrade). In phase 3, the old 10.1 
version would be started, and we should then see that it handles that 
the value in the FileContainer header has been changed.



   o what happens on timestamp overflow?



The next timestamp will be Long.MIN_VALUE, the next timestamp after that 
will be Long.MIN_VALUE +1. I think you would need to run very many 
compress operations on the table to actually test overflow, unless I 
inject some state.
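
For reference, Java long arithmetic simply wraps around at the 
boundary, which is easy to verify:

  long t = Long.MAX_VALUE;
  System.out.println(t + 1 == Long.MIN_VALUE);     // true: wraps around
  System.out.println(t + 2 == Long.MIN_VALUE + 1); // true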




minor comments:

general comments:
I would have rather seen the timestamp tied to the reusable rowlocation
concept rather than tied to compress.  While true the only thing in the
current code that breaks this is compress, so this may just be my itch.



Maybe I could do that; right now I have not. Is RowLocation known to 
the Container? The compress concept seems to be.



should timestamp be more "time" related.  A single db may reuse a containerid,
but only after a shutdown/reboot cycle.  A time based timestamp would mean
the new container timestamp would be different from the old one.  Probably
does not matter for held cursors, but what makes sense for the generic new
timestamp feature?



Instead of using a long, do you think it would be good to introduce a 
new interface similar to PageTimeStamp (instead: ContainerTimeStamp) ?



questions:
why do you get the timestamp for the open cursor at close rather than open?


I will change this and initialize it when the cursor opens.



style comments:
don't want to start coding style arg here, and admit not all store code is
perfect.  Most the access code is consistent though, and uses the brace on
separate line standard.



No problem, I will update the style for my changes to put braces on a 
separate line.


--Andreas


Re: [jira] Updated: (DERBY-1067) support holdable Scrollable Updatable Resultsets

2006-03-09 Thread Andreas Korneliussen



Right now, it would be bumped whenever someone executes online compress.
Can you run online compress during soft upgrade? I guess the 
alternative would be to prevent holdable SUR during soft upgrade.



I am not really worried about "during" soft upgrade.  Basically I mean 
that a customer has decided not to upgrade the db, so the version of 
the db will be at 10.1; I refer to this as soft upgrade.  In that mode 
one can run a 10.2 server against the db, but no changes to the state 
of the database are made such that 10.1 cannot subsequently be run.  
Changing the header is obviously a database upgrade to 10.2.  The 
safest is just to not allow the change unless the db is upgraded to 
10.2.  The way I usually think about it is: if the change to the 
database is not something we think should be put in a bug fix point 
release, then it is not something we should put in a soft upgrade.  In 
this case that would mean only updating the timestamp if the version 
of the db is 10.2.


So first I need to understand if your timestamp gets set if version of
db is 10.1.  And will scrollable updatable result set code be executed 
in a version 10.1 db?


I have for now not added any logic to prevent the timestamp to be set if 
the version of the DB is 10.1.  Scrollable updatable result set code may 
be executed in a version 10.1 DB if there is a soft upgrade to 10.2.


I agree that changing the header could be considered as a database 
upgrade, however in this case we do not change the format of the header, 
only make use of fields which are unused.  So if the customer does not 
want to upgrade after all, and starts using 10.1 again, they will not 
get a problem, since 10.1 databases ignore this part of the header.


I have extended the phaseTester, so that it now does a compress (causing 
the field to be bumped), and tested it against 10.1.2.1 and 10.1.1.0 
with no problems when going back to the previous database.


If I disallow changes to the header during soft upgrade, I would need to 
add logic in store to prevent the timestamp from being incremented when 
doing compress. In addition, SUR could not guarantee holdability, so 
there would need to be some logic in SUR to not be holdable.  I am not 
sure store has the information to check the database version; all I 
found was some information in the dictionary (DD_Version). Therefore I 
did not go
in that direction.
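
If store did have that information, the guard would look roughly like 
this (a sketch only; isDatabaseHardUpgradedTo10_2() and 
bumpReuseTimestamp() are invented names, not existing Derby APIs):

  // Sketch: only change the container header in a hard-upgraded db
  if (isDatabaseHardUpgradedTo10_2()) {
      containerHeader.bumpReuseTimestamp();
  }
  // else: leave the header untouched so 10.1 can still open the db;
  // holdable SUR would then not be guaranteed during soft upgrade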


I will upload a new patch tomorrow.

Andreas












   The current assumption for "unused" fields in store is that they are
   guaranteed with a specific value (usually 0) before an upgrade.  So
   on hard upgrade we know the starting value.  Also if you change it in
   soft upgrade then you have to make sure that all previous of 10.1 
don't
   have a problem with that field not being 0 - sometimes there are 
assertions

   about the field being 0, don't know for sure in this particular case.



Yes, I would need to check that. I did not find any assertions on this 
field in the current code, however I have not yet checked with all 
versions of 10.1.


2) I would have expected tests specific to this change associated 
with the

   patch.



Yes, currently I have provided some black box tests for holdable 
resultsets in HoldabilityTest.  They will check this feature by 
running online compress on a table where we have a holdable SUR. This 
test of course requires the rest of the SUR implementation to actually 
test this.


Did you also expect some unit tests for store ?



some test is necessary, i am not sure if we need 2 sets.  Of course the
interesting tests are when row locations are invalidated but that comes
with your other jira item.




   some testing areas of concern:
   o soft upgrade, make sure 10.1 works correctly on a 10.2 soft 
upgrade run.




I guess this could be done by extending the phaseTester, by doing a 
online compress in phase 2 (soft upgrade). In phase 3, the old 10.1 
version would be started, and we should then see that it handles that 
the value in the FileContainer header has been changed.



   o what happens on timestamp overflow?



The next timestamp will be Long.MIN_VALUE, the next timestamp after 
that will be Long.MIN_VALUE +1. I think you would need to run very 
many compress operations on the table to actually test overflow, 
unless I inject some state.



i agree, very rare especially using a long.





minor comments:

general comments:
I would have rather seen the timestamp tied to the reusable rowlocation
concept rather than tied to compress.  While true the only thing in the
current code that breaks this is compress, so this may just be my itch.



Maybe I could do that, right now I have not. Is RowLocation known to 
the  Container ? The compress concept seems to be.



RowLocations are not known by container, but containers support 
non-reusable record id's which is what row locations are built on. These 
are a page level container specific implementation detail.




should timestamp be more "time" related.  A single db 

Re: [jira] Commented: (DERBY-690) Add scrollable, updatable, insensitive result sets

2006-03-14 Thread Andreas Korneliussen

Oystein Grovlen - Sun Norway wrote:

Andreas Korneliussen (JIRA) wrote:

[ 
http://issues.apache.org/jira/browse/DERBY-690?page=comments#action_12370325 
]

Andreas Korneliussen commented on DERBY-690:



11.2 positionAtRowLocation()

 a) Why can not clients use the existing reopenScanByRowLocation
instead?



reopenScanByRowLocation(..) let the user reopen the scan and start 
scanning from the RowLocation specified. After a call to 
reopenScanByRowLocation(..) the rowLocation is not locked, the page is 
not latched, and the user need to call next() to get the next record.
This will actually be the next record after the rowLocation specified 
in reopenScanByRowLocation().


Is this correct? OpenConglomerate.latchPageAndRepositionScan() will 
position on the record before the specified position.




Agree.

positionAtRowLocation(..) positions the scan, and locks the row. 



So does the combination of reopenScanByRowLocation() and next().





12. GenericScanController

12.1 reopenScanByRecordHandleAndSetLocks()

 a) If I have understood things correct, when a scan is initially
opened, the first row is not locked.  Locking happen on the
subsequent next().  Why could not a similar scheme be used
here? That is, reopen positions just before the specified row
and a subsequent call to next is performed to actually lock
it.  Looking at fetchRows() and the methods it calls, there
seems to already exist code to handle a repositioned scan.
(The combination of SCAN_INIT and a set record position).



The combination of SCAN_INIT and a set record position will on the 
next() call move the rowlocation to the next row, not to the set 
record position.


If you position to a rowlocation which points to a previous row, and 
call next you may risk:
* on the next() call you skip the row if it has been deleted and 
return another row



But could not this be detected and handled by the caller?



It could theoretically be detected by calling fetchRowLocation() after 
the next() and comparing it with the RowLocation specified.


Instead of scancontroller.positionAtRowLocation(rowLoc) you would get:


scancontroller.reopenScanByRowLocation(rowLoc);
scancontroller.next();
RowLocation cmp = scancontroller.newRowLocationTemplate();
scancontroller.fetchLocation(cmp);
if (!cmp.equals(rowLoc)) {
    // the row was deleted: reposition so the following next()
    // does not skip a row
    scancontroller.reopenScanByRowLocation(rowLoc);
    return false;
} else {
    return true;
}

I think it is preferable to provide the positionAtRowLocation(..) method.


positionAtRowLocation() returns false if the row has been deleted.
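
So a caller would use it roughly like this (a sketch, using the method 
names from this discussion):

  if (scancontroller.positionAtRowLocation(rowLoc)) {
      // positioned on the row, holding the row lock
  } else {
      // the row at rowLoc has been deleted
  }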


Andreas


Re: [jira] Commented: (DERBY-919) improve pattern for setting up junit tests

2006-03-15 Thread Andreas Korneliussen

Kristian Waagan wrote:
Answering these by mail, not Jira comment, as it is not the best way to 
answer a lot of specific questions. Maybe I'll condense the discussion 
and add a Jira comment later.
Just to be clear, I do not primarily work on this issue. I just wanted 
to bring out comments to get things started, and it does seem people 
have some.


David Van Couvering (JIRA) wrote:

[ 
http://issues.apache.org/jira/browse/DERBY-919?page=comments#action_12370400 
]

David Van Couvering commented on DERBY-919:
---

I think it is great to have base unit test like this, although I agree 
with Andreas that this should be renamed.  This class is almost solely 
about obtaining connections using different frameworks, and is very 
JDBC-specific.  There are plenty of unit tests that have no need for 
this functionality.
  



Yes, indeed. But then we are *almost* back to simply extending TestCase. 
What tests not related to frameworks (and thus JDBC) need some kind of 
common functionality? Including JDBC in the name seems like a solution. 
Do we agree that non-JDBC tests should extend TestCase directly?




I think it would be good to have a BaseTestCase which has access to the 
following:

* a TestConfiguration object
* and possibly some debug methods to log stack trace, print stuff.

BaseJDBCTestCase could extend this with some getConnection() methods.
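
A rough sketch of that split (all names, including 
TestConfiguration.getCurrent() and openDefaultConnection(), are 
illustrative, not an existing API):

  abstract class BaseTestCase extends junit.framework.TestCase {
      // shared, immutable configuration for every test
      protected final TestConfiguration config =
              TestConfiguration.getCurrent();

      // simple debug helper
      protected static void println(String message) {
          System.out.println(message);
      }
  }

  abstract class BaseJDBCTestCase extends BaseTestCase {
      // let the configuration decide between embedded and client driver
      protected java.sql.Connection getConnection()
              throws java.sql.SQLException {
          return config.openDefaultConnection();
      }
  }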

I am not sure how this work integrates with/coincides with the work 
Andreas did to create a junit test type which allows you to run 'raw' 
JUnit tests under the harness.  Can you explain?
  



I think this work integrates well with the work to run tests as .junit.
You can confirm that by trying to run the sample testcase as .junit.



Andreas' work for running a "raw" JUnit test under the harness is not 
affected. This is all about getting a connection and some other basic 
functionality. It was written because the existing DerbyJUnitTest need 
additional methods calls before getConnection() returns a valid 
connection, and because TestUtil does not have a getConnection() (but 
several other getConnection(arguments) methods). We have several choices:

* Use TestUtil, maybe do some additional work on it.
* Adapt/change DerbyJUnitTest (dependencies restrict what we can change 
of existing API/behavior)

* Write a new common class from scratch

So far most of the comments I have received have been regarding 
implementation, which was not my primary goal. Do we all agree what we 
need, but we want to do it in different ways? Or are there still someone 
out there that have more fundamental issues to comment on?




I want:

BaseTestCase: a useful base TestCase which provides a TestConfiguration 
object, and some logging method
BaseJDBCTestCase: extends BaseTestCase, and additionally provides 
getConnection() methods.


How the getConnection() methods are implemented, is not the important 
issue to me. They may be implemented by calling TestUtil or implemented 
in the BaseJDBCTestCase itself. I do however think it is important to 
avoid forcing testcases to call methods to clean up potential 
side-effects from previous testcases.


Also, to run a TestCase as a test of type .junit, the testcase suite 
must be able to run stand-alone in a standard JUnit TestRunner.


<..>
- There are a lot of defaults being set up in a hardcoded fashion in 
resetState().  It would be better to have a section of static finals 
at the top with all the default values so that someone looking at this 
code can tell right away what they are.  Actually, looking at Andreas' 
TestConfiguration, that is a nice way of doing it.  Having it as a 
separate class also seems to be useful and more coherent.
  



One note here, is that it would not be possible to change the framework 
with the current TestConfiguration. This would cause trouble for 
exceptional cases (as the current JDBC4 testsuite) and if we want to run 
useprocess=false and switch framework. Is this switching of framework 
something we don't need?


I do not see that we need switching framework in general. And .junit 
tests cannot be run with useprocess=false.


I think if you additionally supplied a getConnection(..) method which 
takes a JDBCClient parameter, you could easily write special purpose 
testcases which do not use the default framework for getting the 
connection, if that is what you need.



Also, it seems the harness uses all of "hostName", "derbyTesting.serverhost" and 
"derbyTesting.clienthost". Can anyone shed some light on this?
I assume that derbyTesting.serverhost is the hostname for the derby 
server, derbyTesting.clienthost is the hostname for the client.



Andreas


Re: [jira] Commented: (DERBY-919) improve pattern for setting up junit tests

2006-03-16 Thread Andreas Korneliussen
The discussions on this issue have confused me some. Is it the intention 
to have a replacement for the ...functionTests/util/DerbyJUnitTest class?


Yes, the intention is to provide a common base class with can be reused 
when writing .junit tests. Initially the issue was to fix 
DerbyJUnitTest, and all the subclasses to be compatible with running 
tests as .junit. I think that is still a long term goal. However 
developers do want something they can use now, and they want to provide 
reusable Junit components. Therefore providing a new class may be the 
shortest path to that goal.


Could it possibly be used in tests that currently run as .junit from 
within the functionTests/harness ?

Or would it be a third setup?
 


This would be compatible with running tests as .junit.

Andreas


Thx, Myrna
 




Re: sysinfo_api.junit failing in Tinderbox (was: Re: Regression Test Failure! - TinderBox_Derby 422909)

2006-07-18 Thread Andreas Korneliussen

Andrew McIntyre wrote:

The test below that is reported as failing is the new test introduced
with DERBY-982, sysinfo_api.junit. This test passes on both my Mac OS
X and Windows XP system before and after commit. Does anyone have any
insight as to why the test may be failing in the Tinderbox
environment?



It failed for me when using jar-files in the classpath, however it 
passed when using classes directory. The tinderbox uses jar-files.



[EMAIL PROTECTED]:/<5>802/testing> java 
org.apache.derbyTesting.functionTests.harness.RunTest 
tools/sysinfo_api.junit

*** Start: sysinfo_api jdk1.4.2_02 2006-07-18 10:34:35 ***
0 add
> .F.F.F.F.
> There were 4 failures:
> 1) 
testMajorVersion(org.apache.derbyTesting.functionTests.tests.tools.sysinfo_api)junit.framework.AssertionFailedError: 
expected:<10> but was:<-1>
> 2) 
testMinorVersion(org.apache.derbyTesting.functionTests.tests.tools.sysinfo_api)junit.framework.AssertionFailedError: 
expected:<2> but was:<-1>
> 3) 
testProductName(org.apache.derbyTesting.functionTests.tests.tools.sysinfo_api)junit.framework.ComparisonFailure: 
expected: but was:<>
> 4) 
testVersionString(org.apache.derbyTesting.functionTests.tests.tools.sysinfo_api)junit.framework.ComparisonFailure: 
expected:<10.2.0.4 alpha> but was:<>

> FAILURES!!!
> Tests run: 5,  Failures: 4,  Errors: 0
Test Failed.
*** End:   sysinfo_api jdk1.4.2_02 2006-07-18 10:34:41 ***
[EMAIL PROTECTED]:/<5>802/testing> env | grep CLASSPA
CLASSPATH=/export/home/tmp/devel/issue/802/trunk/jars/sane/derbyrun.jar:/export/home/tmp/devel/issue/802/trunk/jars/sane/derby.jar:/export/home/tmp/devel/issue/802/trunk/jars/sane/derbyclient.jar:/export/home/tmp/devel/issue/802/trunk/tools/java/junit.jar:/export/home/tmp/devel/issue/802/trunk/jars/sane/derbytools.jar:/export/home/tmp/devel/issue/802/trunk/jars/sane/derbynet.jar:/export/home/tmp/devel/issue/802/trunk/tools/java/junit.jar:/export/home/tmp/devel/issue/802/trunk/tools/java/jakarta-oro-2.0.8.jar:/export/home/tmp/devel/issue/802/trunk/jars/sane/derbyTesting.jar:/export/home/tmp/devel/issue/802/trunk/jars/sane/derbyLocale_de_DE.jar:/export/home/tmp/devel/issue/802/trunk/jars/sane/derbyLocale_es.jar:/export/home/tmp/devel/issue/802/trunk/jars/sane/derbyLocale_fr.jar:/export/home/tmp/devel/issue/802/trunk/jars/sane/derbyLocale_it.jar:/export/home/tmp/devel/issue/802/trunk/jars/sane/derbyLocale_ja_JP.jar:/export/home/tmp/devel/issue/802/trunk/jars/sane/derbyLocale_ko_
KR.jar:/export/home/tmp/devel/issue/802/trunk/jars/sane/derbyLocale_pt_BR.jar:/export/home/tmp/devel/issue/802/trunk/jars/sane/derbyLocale_zh_CN.jar:/export/home/tmp/devel/issue/802/trunk/jars/sane/derbyLocale_zh_TW.jar:/usr/local/share/java/db2jcc/lib/db2jcc.jar:/usr/local/share/java/db2jcc/lib/db2jcc_license_c.jar
[EMAIL PROTECTED]:/<5>802/testing> setenv CLASSPATH 
/export/home/tmp/devel/issue/802/trunk/classes/:/export/home/tmp/devel/issue/802/trunk/tools/java/junit.jar:/export/home/tmp/devel/issue/802/trunk/tools/java/jakarta-oro-2.0.8.jar
[EMAIL PROTECTED]:/<5>802/testing> java 
org.apache.derbyTesting.functionTests.harness.RunTest 
tools/sysinfo_api.junit *** Start: sysinfo_api jdk1.4.2_02 2006-07-18 
10:36:18 ***

*** End:   sysinfo_api jdk1.4.2_02 2006-07-18 10:36:24 ***


Andreas


Re: Is jdbcapi.ConcurrencyTest Junit test run anywhere

2006-07-21 Thread Andreas Korneliussen

Daniel John Debrunner wrote:

I couldn't see that the JUnit test

org.apache.derbyTesting.functionTests.tests.jdbcapi.ConcurrencyTest

is run anywhere. Seems to work fine for me, is it meant to be in the
jdbcapi.runall suite list as are the other JUnit tests in that package?



Hi,

The ConcurrencyTest tests locking behavior in general, and also some 
specific SUR (scrollable updatable resultsets) related locking issues.


I have filed a JIRA - https://issues.apache.org/jira/browse/DERBY-1558 
to have it included into a suite.


Regards

-- Andreas



Re: svn commit: r423676 - in /db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/tests: derbynet/testProtocol_app.properties jdbc4/TestQueryObject_app.properties

2006-07-26 Thread Andreas Korneliussen

[EMAIL PROTECTED] wrote:

Author: djd
Date: Wed Jul 19 17:20:05 2006
New Revision: 423676

URL: http://svn.apache.org/viewvc?rev=423676&view=rev
Log:
Mark test jdbc4/TestQueryObject as failing with the SecurityManager due to 
DERBY-1540
Remove disabling of SecurityManager for derbynet/testProtocol as it passes with 
the SecurityManager.



FYI: the test only passes with SecurityManager when running with 
classpath from classes directory. The classes directory got the 
permissions to open a socket, derbyTesting.jar does not have permission 
to create a new Socket. This causes nightly regression - DERBY-1545.


I think this is just another example that we should discourage the use 
of the classes directory for testing - there are too many regressions 
caused by doing it, and besides, no user of Derby ever uses the classes 
directory, so what are we really testing?


I would actually propose to remove all permissions set in 
derby_testing.policy to the classes directory, so that tests will fail 
when using it.


Regards

Andreas



Re: [jira] Resolved: (DERBY-1545) derbynet/testProtocol.java fails with security manager enabled

2006-07-26 Thread Andreas Korneliussen

Myrna van Lunteren wrote:

Hi,

I have a question on the commit.
You added permissions for localhost and 127.0.0.1, i.e. also localhost...
I think this means this problem might then still pop up in remote
server testing, no?

I don't understand the nature of the problem...but I think this means
that a similar permission needs to be granted to either
${derbyTesting.serverhost} (which means it can't be done in this test
policy file) or, if these are permissions for the client, the test
specific policy file should also list an entry for
${derbyTesting.clienthost}. If these are client permissions, can you
please add that?

Myrna


It is the client side permission, so I could try to add 
${derbyTesting.clienthost} and see if it still works.


-- Andreas


Re: AccessControlException on log/logmirror.ctrl after bouncing server...

2006-07-27 Thread Andreas Korneliussen

Myrna van Lunteren wrote:

Hi,

..

Now here are my ponderables:
- why does derbynet.jar need to have this permission, isn't it
sufficient for derby.jar to have them (derby.jar has these permissions
already).


I think it should be sufficient if derby.jar has these permissions, and 
that you have encountered a bug, which appears when starting an 
existing DB. I have reproduced the problem.



- why are these permissions only required after bouncing the server?
- is this situation not tested anywhere? i.e. not one networkserver test
 that bounces the server and reconnects to the same database?


Probably not.


- what can I do to get a stack trace?


I added a try / catch block in 
org.apache.derby.impl.drda.Database.makeConnection() and got this stack 
trace:



Apache Derby Network Server - 10.2.0.5 alpha started and ready to accept 
connections on port 1527 at 2006-07-27 09:06:07.044 GMT
java.sql.SQLException: Failed to start database 
'/export/home/tmp/devel/derbydev/testing/testdb', see the next exception 
for details.
at 
org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(SQLExceptionFactory.java:44)
at 
org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Util.java:88)
at 
org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Util.java:94)
at 
org.apache.derby.impl.jdbc.Util.generateCsSQLException(Util.java:173)
at 
org.apache.derby.impl.jdbc.EmbedConnection.newSQLException(EmbedConnection.java:1955)
at 
org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(EmbedConnection.java:1619)
at 
org.apache.derby.impl.jdbc.EmbedConnection.<init>(EmbedConnection.java:216)
at 
org.apache.derby.impl.jdbc.EmbedConnection30.<init>(EmbedConnection30.java:72)
at 
org.apache.derby.jdbc.Driver30.getNewEmbedConnection(Driver30.java:73)
at 
org.apache.derby.jdbc.InternalDriver.connect(InternalDriver.java:209)
at 
org.apache.derby.jdbc.AutoloadedDriver.connect(AutoloadedDriver.java:116)
at 
org.apache.derby.impl.drda.Database.makeConnection(Database.java:232)
at 
org.apache.derby.impl.drda.DRDAConnThread.getConnFromDatabaseName(DRDAConnThread.java:1191)
at 
org.apache.derby.impl.drda.DRDAConnThread.verifyUserIdPassword(DRDAConnThread.java:1169)
at 
org.apache.derby.impl.drda.DRDAConnThread.parseSECCHK(DRDAConnThread.java:2758)
at 
org.apache.derby.impl.drda.DRDAConnThread.parseDRDAConnection(DRDAConnThread.java:1031)
at 
org.apache.derby.impl.drda.DRDAConnThread.processCommands(DRDAConnThread.java:874)
at 
org.apache.derby.impl.drda.DRDAConnThread.run(DRDAConnThread.java:254)

NEXT Exception follows
java.security.AccessControlException: access denied 
(java.io.FilePermission 
/export/home/tmp/devel/derbydev/testing/testdb/log/logmirror.ctrl read)
at 
java.security.AccessControlContext.checkPermission(AccessControlContext.java:269)
at 
java.security.AccessController.checkPermission(AccessController.java:401)
at 
java.lang.SecurityManager.checkPermission(SecurityManager.java:524)

at java.lang.SecurityManager.checkRead(SecurityManager.java:863)
at java.io.File.exists(File.java:678)
at 
org.apache.derby.impl.store.raw.log.LogToFile.boot(LogToFile.java:2987)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java:1996)
at 
org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java:290)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.startModule(BaseMonitor.java:542)
at 
org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Monitor.java:418)
at 
org.apache.derby.impl.store.raw.data.BaseDataFileFactory.bootLogFactory(BaseDataFileFactory.java:1761)
at 
org.apache.derby.impl.store.raw.data.BaseDataFileFactory.setRawStoreFactory(BaseDataFileFactory.java:1217)

at org.apache.derby.impl.store.raw.RawStore.boot(RawStore.java:373)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java:1996)
at 
org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java:290)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.startModule(BaseMonitor.java:542)
at 
org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Monitor.java:418)
at 
org.apache.derby.impl.store.access.RAMAccessManager.boot(RAMAccessManager.java:987)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java:1996)
at 
org.apache.derby.impl.services.monitor.TopService.bootModule(TopService.java:290)
at 
org.apache.derby.impl.services.monitor.BaseMonitor.startModule(BaseMonitor.java:542)
at 
org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Monitor.java:418)
at 
org.apache.derby.impl.db.BasicDatabase.bootStore(BasicDatabase.java:738)
at 
org.apache.derby.impl.db.BasicDatabase.boot(BasicDatabase.java:178)
 

Re: AccessControlException on log/logmirror.ctrl after bouncing server...

2006-07-27 Thread Andreas Korneliussen

I think this is the same problem as reported in

http://issues.apache.org/jira/browse/DERBY-1241


Andreas



Re: [VOTE] Approve coding conventions for the Derby project

2006-08-14 Thread Andreas Korneliussen

+1 Adopt the coding convention described.


Andreas


Re: DERBY-1057 : patch Derby1057_ref4.diff and a "G" svn status

2006-08-15 Thread Andreas Korneliussen

Jean T. Anderson wrote:

Applying derby1057_ref4.diff gets

   Reversed (or previously applied) patch detected!  Assume -R?

so I did this trick:

   svn up -r 431389
   patch -p0 -i derby1057_ref4.diff
   svn up

'svn status' shows everything looks good except for the ditamap file:

   G  src/ref/refderby.ditamap

I'm looking for info about that "G" status and am not finding it at
http://svnbook.red-bean.com/en/1.0/re26.html , but I vaguely recall from
other ASF projects that it means something about local modifications
colliding with what is checked into the server.

The doc build succeeds. Is it "safe" to just go ahead and commit this?



Yes, the G stands for merged.
http://svnbook.red-bean.com/en/1.2/svn.tour.cycle.html#svn.tour.cycle.examine.status

Andreas


thanks,

 -jean




Re: [junit] Move JUnit base/utility classes???

2006-08-16 Thread Andreas Korneliussen

David Van Couvering wrote:

Sounds good to me...

David

Daniel John Debrunner wrote:

Currently the JUnit base and utility classes are in this package:

org.apache.derbyTesting.functionTests.util

(See http://wiki.apache.org/db-derby/DerbyJUnitTesting)

I was wondering if they should be moved, for two reasons:

  1) That package is cluttered up with other stuff, it's more or less a
dumping ground. Utilities, JUnit base classes, "user level" classes for
procedures and vtis, etc.

  2) JUnit tests can be much more than functional tests, e.g. having
system tests as JUnit tests would make them easy for everyone to run.

I was thinking of the following package:

org.apache.derbyTesting.junit

The package would be limited to base classes for JUnit tests and JUnit
related utilities such as the JDBC class. Classes for specific tests, or
those that implement Java procedures for tests etc. would not be allowed.

The functional tests would continue to live in their current location,
just that the super-class BaseJDBCTestCase would be in the new package.

Thoughts?


I support moving the utility classes to a new package.

Below are some thoughts on how Junit tests could be structured, in case 
we would like to also move tests:


Package for derby-specific utility classes:

org.apache.derbyTesting.junit.util
 -> TestConfiguration, JDBCClient, JDBC,..


Package for common testcase/testsetup classes:

org.apache.derbyTesting.junit.common
 -> BaseTestCase, BaseJDBCTestCase, BaseJDBCTestSetup


Packages for tests:
 org.apache.derbyTesting.junit.tests
 org.apache.derbyTesting.junit.tests.jdbcapi
 org.apache.derbyTesting.junit.tests.lang
 org.apache.derbyTesting.junit.tests.store
 org.apache.derbyTesting.junit.tests. . .


Andreas



[junit] _app.properties replacement mechanism for JUnit tests

2006-08-16 Thread Andreas Korneliussen
I am about to enable more testcases in ConcurrencyTest, which relies on 
getting a lock timeout to verify a lock conflict. The problem is that 
default setting for lock timeout is 60 seconds, and the test uses a lot 
of time waiting for the timeout. I can make the test go 10 times faster 
by reducing derby.locks.waitTimeout (and with all testcases enabled it 
used 350 secs in embedded framework).


With the current harness, I could add a ConcurrencyTest_app.properties 
file and set it there. However this test is now part of _Suite (a pure 
junit Suite).


One quick-fix for me would be to create a _Suite_app.properties. This 
would cause all junit tests in the suite to run with the same 
properties. Another quick-fix would be to remove it from the _Suite, and 
put it into the "old-harness" suites.


- However, if the Derby community is interested at completely replacing 
the old test harness in the future, it would be good to have a pure 
junit solution to this.


One solution could be to create a TestSetup, which can configure the 
database for a junit Test, and delete the database once the test is 
complete - i.e


public class DBSetup extends BaseJDBCTestSetup
{
  public DBSetup(Test test) { super(test); }
  protected void setUp() throws Exception { /* create and configure db */ }
  protected void tearDown() throws Exception { /* delete db */ }
}
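
A test could then use it like this (assuming the TestSetup-style 
constructor sketched above):

  public static Test suite() {
      return new DBSetup(ConcurrencyTest.suite());
  }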


Other thoughts?

-- Andreas


Re: [junit] Move JUnit base/utility classes???

2006-08-16 Thread Andreas Korneliussen

The functional tests would continue to live in their current location,
just that the super-class BaseJDBCTestCase would be in the new package.

Thoughts?


I support moving the utility classes to a new package.

Below are some thoughts on how Junit tests could be structured, in case
we would like to also move tests:

Package for derby-specific utility classes:

org.apache.derbyTesting.junit.util
 -> TestConfiguration, JDBCClient, JDBC,..


Package for common testcase/testsetup classes:

org.apache.derbyTesting.junit.common
 -> BaseTestCase, BaseJDBCTestCase, BaseJDBCTestSetup


I'm not sure the split between util and common here is worth it. The
classes in common have a very close relationship with the classes in
util, to my mind it seems they should be together.



Yes, that is correct.


Packages for tests:
 org.apache.derbyTesting.junit.tests
 org.apache.derbyTesting.junit.tests.jdbcapi
 org.apache.derbyTesting.junit.tests.lang
 org.apache.derbyTesting.junit.tests.store
 org.apache.derbyTesting.junit.tests. . .



I don't see any value in this, the existing layout looks fine to me.

org.apache.derbyTesting.junit.functionalTests.tests.jdbcapi
org.apache.derbyTesting.junit.unitTests.lang
org.apache.derbyTesting.junit.systemTests.app1 (future)


(Note: existing layout does not have .junit.)

The value would have been to separate out the junit tests from the other 
tests, however I understand it is not prioritized and possibly not worth 
the effort.


Regards
Andreas


Re: svn commit: r432569 - /db/derby/code/trunk/java/testing/org/apache/derbyTesting/junit/NetworkServerTestSetup.java

2006-08-18 Thread Andreas Korneliussen

Daniel John Debrunner wrote:
> [EMAIL PROTECTED] wrote:
> 
>> Author: andreask
>> Date: Fri Aug 18 06:05:45 2006
>> New Revision: 432569
>>
>> URL: http://svn.apache.org/viewvc?rev=432569&view=rev
>> Log:
>> Fixed bug in setUp causing it to only start server when running in embedded 
>> mode.
> 
>>  final public class NetworkServerTestSetup extends TestSetup {
>> @@ -57,7 +53,7 @@
>>   */
>>  protected void setUp() throws Exception {
>>  
>> -if (config.getJDBCClient().isEmbedded()) {
>> +if (!config.getJDBCClient().isEmbedded()) {
>>  BaseTestCase.println("Starting network server:");
>>  networkServerController = new NetworkServerControl
>>  (InetAddress.getByName(config.getHostName()), 
>> config.getPort());
>>
> 
> Why is the check for isEmbedded even there?
>

As it is now, I need the check of isEmbedded() somewhere. Either in
NetworkServerTestSetup, or in the jdbcapi._Suite.suite() method.

> Wouldn't a test or a suite installing this decorator indicate that the
> network server needs to be started? Not saying it's wrong, I'm just
> trying to understand how it would be used. I was assuming that this
> decorator would only be used outside of the existing harness, or inside
> the harness only for tests that only run in network server mode.
> 
> E.g. I was imagining a top level JUnit suite AllJDBC that would include
> the jdbcapi._Suite and jdbc40._Suite like this.
> 
>suite.add(jdbcapi._Suite.suite());
>suite.add(jdbc40._Suite.suite());
> 
>suite.add(new NetworkServerTestSetup(jdbcapi._Suite.suite()));
>suite.add(new NetworkServerTestSetup(jdbc40._Suite.suite()));
> 

Then NetworkServerTestSetup would also need a mechanism to tell the
underlying test(s) that they should use network driver instead of embedded.

The way I was planning to use NetworkServerTestSetup, right now, was to
have :

jdbcapi._Suite.suite() {
   ..
   suite.addTest(ConcurrencyTest.suite());
   suite.addTest(..);
   ..

   return new NetworkServerTestSetup(suite);
}

Alternatively, I could move the check to the suite() method:

jdbcapi._Suite.suite() {
   ..
   suite.addTest(ConcurrencyTest.suite());
   suite.addTest(..);
   ..
   if (isEmbedded) {
  return suite;
   } else {
  return new NetworkServerTestSetup(suite);
   }
}

I would also need to disable starting of network server from the harness
for the _Suite test.

Andreas

> Dan.
> 
> 



Re: [junit] frameworks in Junit WAS Re: svn commit: r432569

2006-08-18 Thread Andreas Korneliussen

Daniel John Debrunner wrote:

Andreas Korneliussen wrote:



Daniel John Debrunner wrote:



[EMAIL PROTECTED] wrote:




Author: andreask
Date: Fri Aug 18 06:05:45 2006
New Revision: 432569

URL: http://svn.apache.org/viewvc?rev=432569&view=rev
Log:
Fixed bug in setUp causing it to only start server when running in embedded 
mode.



final public class NetworkServerTestSetup extends TestSetup {
@@ -57,7 +53,7 @@
*/
   protected void setUp() throws Exception {
   
-if (config.getJDBCClient().isEmbedded()) {

+if (!config.getJDBCClient().isEmbedded()) {
   BaseTestCase.println("Starting network server:");
   networkServerController = new NetworkServerControl
   (InetAddress.getByName(config.getHostName()), config.getPort());



Why is the check for isEmbedded even there?




As it is now, I need the check of isEmbedded() somewhere. Either in
NetworkServerTestSetup, or in the jdbcapi._Suite.suite() method.




Wouldn't a test or a suite installing this decorator indicate that the
network server needs to be started? Not saying it's wrong, I'm just
trying to understand how it would be used. I was assuming that this
decorator would only be used outside of the existing harness, or inside
the harness only for tests that only run in network server mode.

E.g. I was imagining a top level JUnit suite AllJDBC that would include
the jdbcapi._Suite and jdbc40._Suite like this.

 suite.add(jdbcapi._Suite.suite());
 suite.add(jdbc40._Suite.suite());

 suite.add(new NetworkServerTestSetup(jdbcapi._Suite.suite()));
 suite.add(new NetworkServerTestSetup(jdbc40._Suite.suite()));




Then NetworkServerTestSetup would also need a mechanism to tell the
underlying test(s) that they should use network driver instead of embedded.

The way I was planning to use NetworkServerTestSetup, right now, was to
have :

jdbcapi._Suite.suite() {
  ..
  suite.addTest(ConcurrencyTest.suite());
  suite.addTest(..);
  ..

   return new NetworkServerTestSetup(suite);
}



I'm lost here trying to understand what that achieves, adding the
network server test setup decorator?



I want to run the test ConcurrencyTest with a set of properties for the 
derby database to reduce lock timeout. I add SystemPropertiesSetup(..) 
around the test to set the system properties, however that will only 
work in embedded mode, unless of course I also run the network server in 
the same vm. Therefore additionally having NetworkServerTestSetup helps 
achieving this.
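
Concretely, something like this (a sketch; the decorator constructor 
signatures are assumed, not verified):

  Properties props = new Properties();
  // fail lock conflicts fast instead of waiting 60 seconds
  props.setProperty("derby.locks.waitTimeout", "4");

  Test suite = ConcurrencyTest.suite();
  suite = new SystemPropertiesTestSetup(suite, props);
  // start the server in the same VM so the property takes effect there
  suite = new NetworkServerTestSetup(suite);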



When run in the harness the network server is already started correctly
so won't this just try to start another server and fail?



It won't fail, since the harness won't start it when 
_Suite_app.properties contains a property to not start it.



When run in a Junit test runner the that decorator will do nothing (I
think).



It will start the network server when run in any framework except 
embedded. That way, you may be able to run tests in the DerbyNetClient 
framework in standard JUnit test runners without manually starting the 
network server, or depending on the harness to do it.



I think the decorator you've added is useful but I think how different
frameworks are handled when just running using JUnit test runners needs
some planning. I've been thinking about it, but not sure of the correct
approach.


I think the new decorator simply builds on what is currently present: 
TestConfiguration. Each BaseTestCase has a TestConfiguration (currently 
it is a final static, immutable singleton, and configured from system 
properties). Therefore, within one VM we only support one framework (in 
TestConfiguration).


I think what you wish is to be able to run multiple tests with different 
frameworks (all frameworks) within the same VM. To do that it is 
necessary to look into the way Testconfiguration is being used, and 
possibly how suites are being created.




...Seems to me it should be based upon requirements of
developers, which I think are:

1) Run the whole suite in all frameworks without having to specify any
options. I think this should be something like:

java junit.textui.TestRunner
org.apache.derbyTesting.functionTests.suites.All

or (as well)

java -jar test/derbyTesting.jar

2) Run all the tests relevant to a single area, e.g. store, language,
jdbc in embedded, e.g.

java junit.textui.TestRunner
org.apache.derbyTesting.functionTests.suites.JDBC

java junit.textui.TestRunner
org.apache.derbyTesting.functionTests.suites.Language

java junit.textui.TestRunner
org.apache.derbyTesting.functionTests.suites.Store

3) Run all the tests relevant to a single area, across all frameworks

Does this make sense?



Yes, except I am a bit worried about

java junit.textui.TestRunner
 org.apache.derbyTesting.functionTests.suites.All

I'd rather see it run all suites in one framework specified on the 
command line, rather than trying to figure out which frameworks are 
available on the classpath (i.e. db2client etc.).


Andreas


Re: [junit] frameworks in Junit WAS Re: svn commit: r432569

2006-08-18 Thread Andreas Korneliussen



It won't fail, since the harness won't start it when
_Suite_app.properties contains a property to not start it.



My intention with my current set of work for JUnit is to divorce the
JUnit tests from the existing harness and have them be able to run
standalone. Any time we add dependencies back to the harness, e.g. a
_Suite_app.properties file then that goal gets harder. Maybe it's ok in
this case, because it's really a step towards the divorce by relying
less on the harness (ie the test starts the server itself). I would just
encourage everyone to consider carefully before they add _app.properties
files for JUnit tests.




Yes, that was my intention too: avoid adding _app.properties for 
ConcurrencyTest. Therefore I wanted to use SystemPropertiesTestSetup and 
 also the new NetworkServerTestSetup (to make SystemPropertiesTestSetup 
have relevance in network frameworks).



I think the new decorator simply builds on what is currently present: 
TestConfiguration. Each BaseTestCase has a TestConfiguration (currently it is a 
final static, immutable singleton, and configured from system properties). 
Therefore, within one VM we only support one framework (in TestConfiguration).

I think what you wish is to be able to run multiple tests with different frameworks (all frameworks) within the same VM. To do that it is necessary to look into the way Testconfiguration is being used, and possibly how suites are being created. 



Yes, that's my plan.



Yes, except I am a bit worried about

java junit.textui.TestRunner
org.apache.derbyTesting.functionTests.suites.All

I'd rather see it run all suites in one framework specified on commandline, than in trying to figure out which frameworks are availabe on the classpath (i.e db2client etc). 



I'm not clear on your concern here. Could you be more explicit?
I do believe there should be an option to run all tests in a single
framework, but I also think there should be a suite that matches what
derbyall is today.



Agreed - my concern was that the derbyall replacement test would need to 
analyze the classpath in order to figure out which tests to run. Today, I 
think RunSuite does that when running derbyall. Not really a big concern.


Regards
Andreas



Thanks, I think we are making progress here.
Dan.






Re: [jira] Commented: (DERBY-1387) Add JMX extensions to Derby

2006-08-21 Thread Andreas Korneliussen

Sanket Sharma wrote:
> On 8/21/06, Andreas Korneliussen (JIRA)  wrote:
>>[
>> http://issues.apache.org/jira/browse/DERBY-1387?page=comments#action_12429432
>> ]
>>
>> Andreas Korneliussen commented on DERBY-1387:
>> -
>>
>> Hi, thanks for providing this patch. I have tried compiling it, and I
>> do have the following comments:
>>
>> 1. It seems that most of the classes have multiple entries in the
>> patch file, so when applying the patch, I got the same class multiple
>> times in the same file.
> 
> I'm not able to follow what you are trying to point out. Are you
> referring to the imports? Can you please explain so that I may look
> back at the patch and correct? This is the first time I've submitted
> anything using svn diff. Any help would be appreciated.
> 

The same file seems to be added multiple times in the patch file. I.e. 
on line 13 the class MBeanFactory is added:

Index: java/engine/org/apache/derby/iapi/services/mbeans/MBeanFactory.java
===
--- java/engine/org/apache/derby/iapi/services/mbeans/MBeanFactory.java
(revision 0)
+++ java/engine/org/apache/derby/iapi/services/mbeans/MBeanFactory.java
(revision 0)
@@ -0,0 +1,161 @@


this is repeated in line 372:
Index: java/engine/org/apache/derby/iapi/services/mbeans/MBeanFactory.java
===
--- java/engine/org/apache/derby/iapi/services/mbeans/MBeanFactory.java
(revision 0)
+++ java/engine/org/apache/derby/iapi/services/mbeans/MBeanFactory.java
(revision 0)
@@ -0,0 +1,161 @@


A possible reason could be:
a) you did svn diff >> filename.diff (which appends) instead of svn diff > filename.diff
b) my download is corrupt

Andreas



Re: [jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Andreas Korneliussen
Knut Anders Hatlen wrote:
> "Andreas Korneliussen (JIRA)"  writes:
> 
>> Assuming the Derby embedded JDBC driver is thread-safe, it should be
>> safe for a result set to call its own close() method in its
>> finalizer. If you get a dead-lock in the finalizer, it proves that
>> it is also possible to write a multithreaded program which gets
>> deadlocks when calling ResultSet.close, and derby then is not really
>> MT-safe.
>>
>> If this happens, I think it is better to fix the embedded driver so
>> that it really becomes MT-safe, than avoiding synchronization in the
>> finalizer threads.
> 
> There are calls to System.runFinalization() many places in the
> code. If the thread that invokes System.runFinalization() has obtained
> the same mutex that a finalize method requires, there can indeed be
> deadlocks. (But I guess you will argue that we shouldn't call
> runFinalization() explicitly.)
> 
Yes.

>> As for the suggested change in 1142, I would note that If there is
>> no synchronization in the finalizer, and you set a field in a object
>> from it, there is no guarantee that other threads will see the
>> modification of the field (unless, I think, it is
>> volatile). However, I think Mayuresh has been working on this issue,
>> so maybe he has tried that approach?
> 
> FWIW, I tried that approach in my sandbox (setting a volatile variable
> in GenericLanguageConnectionContext from BaseActivation.markUnused())
> and I didn't see the OutOfMemoryError any more. It's a very simple
> fix, and I don't think the overhead is noticeable, so I'd recommend
> that we go for that solution.
> 
Seems like a good idea.
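
The essence of that fix, as I understand it (class and field names 
illustrative, not the actual Derby code):

  class BaseActivation {
      private final GenericLanguageConnectionContext lcc;

      BaseActivation(GenericLanguageConnectionContext lcc) {
          this.lcc = lcc;
      }

      // called from the finalizer path; must not take any locks
      void markUnused() {
          lcc.unusedActivations = true; // volatile write: always visible
      }
  }

  class GenericLanguageConnectionContext {
      volatile boolean unusedActivations;
  }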

Andreas




Re: [jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Andreas Korneliussen

> Not sure of the guarantee of the unsynchronized field being set. Are you 
> saying that field will never be seen as set, or that the setting may not be 
> seen for some time?
> 

It may be seen, however it may also never be seen.

Andreas


Re: [junit] Move JUnit base/utility classes???

2006-08-22 Thread Andreas Korneliussen

Kristian Waagan wrote:
<>

> Hi,
> 
> I still have a question regarding the placement of JUnit tests. I have
> brought this up before, but got very little response (I did get some,
> see below).
> 
> Do we want to support [unit] testing of package private classes?

Maybe it is sufficient to support testing the public methods of public
classes?

> The easiest solution to achieve this is to keep a mirrored/separate
> source tree, where the tests are put into the corresponding Derby
> production code package (for instance 'org.apache.derby.iapi.types').
> 

I think there are some test classes which are in the same source tree as
the derby source, however the .class files are packaged into
derbyTesting.jar.  E.g.: org/apache/derby/impl/drda/TestProto.class


> Andrew brought up the concern wrt building and distribution. We do not
> want to include the testing classes in the releases. Further, JAR
> sealing is causing headaches (unable to distribute the test classes in a
> separate JAR).
> 

Yes, and that is probably why derbynet's manifest has:

Name: org/apache/derby/impl/drda/
Sealed: false


Andreas


Re: Question on SUR and U lock.

2006-08-23 Thread Andreas Korneliussen
Sunitha Kambhampati wrote:
> I was doing some testing for SUR and had a question about the expected
> behavior.
> 
> The SUR related doc on
> http://db.apache.org/derby/docs/dev/devguide/rdevconceptssur.html  says
> "The row which the cursor is positioned on is locked, however once it
> moves to another row,
> the lock may be released depending on transaction isolation level."
> 
> In my test (isolation level is default - RC),   I have a SUR, resultset
> has 1000 rows,  all rows are materialized by calling absolute(-1). After
> this the cursor is positioned before the first row by calling
> beforeFirst().  Printing the locktable shows
> 1)a IX lock on the table,  which is fine.
> 2)U row lock on the table. Why do we hold the U lock when the cursor is
> not positioned on any row ?.

Hi,
Thanks for testing SUR. This is not as it was intended to be, and I
have already filed a bug report:

DERBY-1696 "transaction may sometimes keep lock on a row after moving
off the resultset in scrollable updatable resultset"

Hopefully it can be fixed for 10.2

Regards
Andreas


Re: [jira] Updated: (DERBY-1696) transaction may sometimes keep lock on a row after moving off the resultset in scrollable updatable resultset

2006-08-28 Thread Andreas Korneliussen
Mike Matrigali wrote:
> 
> 
> Andreas Korneliussen (JIRA) wrote:
>>  [ http://issues.apache.org/jira/browse/DERBY-1696?page=all ]
>>
>> Andreas Korneliussen updated DERBY-1696:
>> 
>>
>> Component/s: Store
>>
>> To fix this issue, I need a mechanism to notify the store
>> (scancontroller) to move off the row (i.e to afterLast() or
>> beforeFirst()), so that it can release the lock on the current row.
>>
>> I do consider the following options:
>>
>> Alternative 1: Use the method
>> ScanController.positionAtRowLocation(RowLocation rl)
>>
>> Here the RowLocation objects could represent the positions beforeFirst
>> and afterLast. I.e one could make use of the RecordHandle.
>> RESERVED4_RECORD_HANDLE and
>> RecordHandle. RESERVED4_RECORD_HANDLE to represent to beforeFirst and
>> afterLast positions.
>>
>> When the method ScanController.positionAtRowLocation(RowLocation rl),
>> is called with a rowlocation with these  positions,
>> the scan implementation may release the U-lock of the current row
>>
>> Alternative 2:
>> Add new methods to ScanController interface: moveToAfterLast() and
>> moveToBeforeFirst()
> 
> Can you just close the scan if you don't need it positioned anymore?

I'll check if that works

Regards
Andreas





Re: [jira] Updated: (DERBY-1696) transaction may sometimes keep lock on a row after moving off the resultset in scrollable updatable resultset

2006-08-28 Thread Andreas Korneliussen

>> Can you just close the scan if you don't need it positioned anymore?
> 
> I'll check if that works
> 

The problem is that we need it re-positioned later, i.e if the user
moves to the afterLast() position and then scrolls to any other row, we
need to lock that row.

If the scan has been closed with close(), we cannot reopen it. If it has
been closed with closeForEndTransaction(.) we may reopen it, however
that would be quite undesirable, since we are not ending the
transaction, and we do not access the scancontroller from ScanManager
(which has closeForEndTransaction(.).

An alternative which I have considered is to make
ScanController.positionAtRowLocation(..) handle this by adding semantics
to the RowLocation being NULL.

I.e. in HeapScan.java, add this:

    public boolean positionAtRowLocation(RowLocation rl)
            throws StandardException {
        if (rl == null) {
            positionAtDoneScan(scan_position);
            return false;
        }

positionAtDoneScan will set the scan state to SCAN_DONE and release lock
as desired.

Andreas





Re: [jira] Updated: (DERBY-1696) transaction may sometimes keep lock on a row after moving off the resultset in scrollable updatable resultset

2006-08-29 Thread Andreas Korneliussen
Mike Matrigali wrote:
> The current interface to reopen a heap scan at the beginning
> is reopenScan(), I think it is confusing to change
> positionAtRowLocation(null) to also do this.
> 
> Does your usage want the table level intent lock released or not?
> 

The usage requires that the table level intent lock is not released.

> Because previously this code was never triggered by user code
> the heap implmentation does the unlocking in the fetchRows
> side which was always called immediately after the reopen.  Now that
> a user can indirectly cause a reopen and a fetch in separate statements
> it is reasonable to move the release of the lock to the reopen rather
> than delay it to the next fetch.
> 
> Note the reopenScan() logic the btree case already does the unlocking
> here (code is completely different than heap case, so can't be copied).
> 
> To reposition a heap scan at the beginning you should just be able to
> call:
> reopenScan() with mostly null args, this is the standard way most heap
> scans are positioned at the beginning.
> 
> Does your code use the reopenScanByRowLocation interfaces, I don't
> think this code is doing unlocking in read committed correctly either.
> 

The code does not use reopenScanByRowLocation(..), it uses
positionAtRowLocation(..) which uses a private method
"reopenScanByRecordHandleAndSetLocks" which does unlocking and locking.

I have prepared a patch which uses reopenScan(). Since reopenScan() sets
the scan state to SCAN_INIT or SCAN_HOLD_INIT, I do also need to update
some of the logic which was added w.r.t holdability:

reopenAfterEndTransaction() assumes that if the scan state is in
SCAN_HOLD_INIT, no RowLocations have been read, and therefore it is not
necessary to set the rowlocationsInvalidated flag. Since we now will use
reopenScan() when moving to beforeFirst / afterLast, this does not hold,
and the flag must also be set in these cases as well.

Andreas

> /mikem
> 
> Andreas Korneliussen wrote:
>>>> Can you just close the scan if you don't need it positioned anymore?
>>>
>>> I'll check if that works
>>>
>>
>>
>> The problem is that we need it re-positioned later, i.e if the user
>> moves to the afterLast() position and then scrolls to any other row, we
>> need to lock that row.
>>
>> If the scan has been closed with close(), we cannot reopen it. If it has
>> been closed with closeForEndTransaction(.) we may reopen it, however
>> that would be quite undesirable, since we are not ending the
>> transaction, and we do not access the scancontroller from ScanManager
>> (which has closeForEndTransaction(.).
>>
>> An alternative which I have considered is to make
>> ScanController.positionAtRowLocation(..) handle this by adding semantics
>> to the RowLocation being NULL.
>>
>> I.e. in HeapScan.java, add this:
>>
>>     public boolean positionAtRowLocation(RowLocation rl)
>>             throws StandardException {
>>         if (rl == null) {
>>             positionAtDoneScan(scan_position);
>>             return false;
>>         }
>>
>> positionAtDoneScan will set the scan state to SCAN_DONE and release lock
>> as desired.
>>
>> Andreas
>>
> 






Re: jdk1.6 regresstion test failures: _Suite.junit and TestQueryObject with IllegalAccessException

2006-09-08 Thread Andreas Korneliussen
..
> * Diff file derbyall/derbynetclientmats/DerbyNetClient/jdbc40/_Suite.diff
> *** Start: _Suite jdk1.6.0-rc DerbyNetClient derbynetclientmats:jdbc40 2006-09-01 21:27:16 ***
> 0 add
> > F.
> > There was 1 failure:
> > 1) testSetBinaryStreamLengthLessOnBlobTooLong(org.apache.derbyTesting.functionTests.tests.jdbc4.PreparedStatementTest)junit.framework.ComparisonFailure: Unexpected SQL state. expected:<22001> but was:<58009>
> > FAILURES!!!
> > Tests run: 2048, Failures: 1, Errors: 0
> Test Failed.
> *** End: _Suite jdk1.6.0-rc DerbyNetClient derbynetclientmats:jdbc40 2006-09-01 21:27:39 ***
> * Diff file derbyall
> 
> Have these issues been fixed in the last few days?  If not, I can open
> jiras for these issues.

The last issue has been fixed on trunk (DERBY-1800).

Regards
Andreas


> Thanks,
> Sunitha.
> 
> 






Re: [jira] Updated: (DERBY-1862) Simple hash improves performance

2006-09-18 Thread Andreas Korneliussen
Øystein Grøvlen wrote:
> Andreas Korneliussen (JIRA) wrote:
> 
>> String.toUpperCase(..) with the English locale should return a string
>> with the same number of characters, so it should be valid to compare
>> the number of characters before doing any conversions.
> 
> Is it correct to always use English locale in this case?  Ref the
> reference guide on SQL identifiers:
> 
>   An ordinary identifier must begin with a letter and contain
>   only letters, underscore characters (_), and digits. The
>   permitted letters and digits include all Unicode letters and
>   digits, but Derby does not attempt to ensure that the
>   characters in identifiers are valid in the database's
>   locale.
> 
> Should it not be possible to match column names in any locale?
> 

Your question is a valid one to ask about this method; however, my
intention was to keep the method's current behavior. The patch
simply preserves the current behaviour (which is to use the English
locale). For any pair of strings s1 and s2, the method should return the
same value as before the patch. If that is not the case, the patch is
not as intended.

Looking deeper into the String class, my understanding is that the only
locale with different toUpperCase(Locale) semantics is Turkish, so maybe
Derby does not work correctly in the Turkish locale.

I also wondered why Derby has its own SQLIgnoreCase method instead of
simply using String.equalsIgnoreCase(). The Derby implementation is very
inefficient compared to String.equalsIgnoreCase(), since it risks
creating two new String objects before the comparison even starts.
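
For illustration, a minimal sketch (the method names are mine, not
Derby's) of the two comparison styles:

  import java.util.Locale;

  final class IgnoreCaseSketch {
      // Upper-case both strings in the English locale, then compare.
      // This may allocate two new String objects per call.
      static boolean viaToUpperCase(String s1, String s2) {
          return s1.toUpperCase(Locale.ENGLISH)
                   .equals(s2.toUpperCase(Locale.ENGLISH));
      }

      // JDK built-in: length check first, then per-character case
      // folding, with no intermediate allocations.
      static boolean viaEqualsIgnoreCase(String s1, String s2) {
          return s1.equalsIgnoreCase(s2);
      }
  }

Note that the two styles are not strictly equivalent: as the ß example
later in this thread shows, viaToUpperCase("ß", "SS") returns true while
viaEqualsIgnoreCase("ß", "SS") returns false.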

Andreas


> -- 
> Øystein






Re: [jira] Updated: (DERBY-1862) Simple hash improves performance

2006-09-19 Thread Andreas Korneliussen
Daniel John Debrunner wrote:
> Andreas Korneliussen wrote:
> 
>> Øystein Grøvlen wrote:
>>
>>> Andreas Korneliussen (JIRA) wrote:
>>>
>>>
>>>> String.toUpperCase(..) with the English locale should return a string
>>>> with the same number of characters, so it should be valid to compare
>>>> the number of characters before doing any conversions.
>>> Is it correct to always use English locale in this case?  Ref the
>>> reference guide on SQL identifiers:
>>>
>>>  An ordinary identifier must begin with a letter and contain
>>>  only letters, underscore characters (_), and digits. The
>>>  permitted letters and digits include all Unicode letters and
>>>  digits, but Derby does not attempt to ensure that the
>>>  characters in identifiers are valid in the database's
>>>  locale.
>>>
>>> Should it not be possible to match column names in any locale?
>>>
> 
> No, see below.
> 
>> Your question is a valid one to ask about this method; however, my
>> intention was to keep the method's current behavior. The patch
>> simply preserves the current behaviour (which is to use the English
>> locale). For any pair of strings s1 and s2, the method should return the
>> same value as before the patch. If that is not the case, the patch is
>> not as intended.
>>
>> Looking deeper into the String class, my understanding is that the only
>> locale with different toUpperCase(Locale) semantics is Turkish, so maybe
>> Derby does not work correctly in the Turkish locale.
> 
> I think the changes were made to use a single locale (English) for the
> SQL language so that Derby would work in Turkish. Having the name
> matching in SQL be dependent on the locale of the client or engine would
> mean that the potential exists for a SQL statement from a single
> application to have different meanings in different locales. That is not
> the expected behaviour when working against a programming language.
> 
> When the SQL parser upper-cased items in the engine's locale, an
> application using 'insert' would fail in Turkish, as it does not upper
> case to "INSERT".
> 
>> I also wondered why Derby has its own SQLIgnoreCase method instead of
>> simply using String.equalsIgnoreCase(). The Derby implementation is very
>> inefficient compared to String.equalsIgnoreCase(), since it risks
>> creating two new String objects before the comparison even starts.
> 
> I think because String.equalsIgnoreCase() is dependent on the current
> locale.
> 

String.toUpperCase() is locale-dependent; however, I am not sure that
String.equalsIgnoreCase() is locale-dependent (it does not seem to be,
judging from the code and the javadoc).
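
This is easy to check; the Turkish dotted capital İ (U+0130) is the
classic counterexample for locale-sensitive upper-casing:

  import java.util.Locale;

  public class LocaleCaseDemo {
      public static void main(String[] args) {
          Locale tr = new Locale("tr");
          // toUpperCase is locale-sensitive: Turkish maps 'i' to 'İ'
          System.out.println("insert".toUpperCase(tr));             // İNSERT
          System.out.println("insert".toUpperCase(Locale.ENGLISH)); // INSERT
          // equalsIgnoreCase folds character by character and does not
          // consult the default locale
          System.out.println("insert".equalsIgnoreCase("INSERT"));  // true
      }
  }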

I did find an issue with the German sharp s: ß.

"ß".toUpperCase() returns "SS".

However "ß".equalsIgnoreCase("SS") returns false.

So basically, "ß".toUpperCase().equalsIgnoreCase("ß") returns false.
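
A small program demonstrating the three statements above:

  public class SharpSDemo {
      public static void main(String[] args) {
          String sharpS = "\u00DF";  // ß
          System.out.println(sharpS.toUpperCase());           // SS
          System.out.println(sharpS.equalsIgnoreCase("SS"));  // false
          System.out.println(
              sharpS.toUpperCase().equalsIgnoreCase(sharpS)); // false
      }
  }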

The Derby method SQLUtil.SQLIgnoreCase("ß", "SS") returns true (however,
the patch which I attached will make it return false, and is therefore
not as intended).

If my column name is "classnames", should it be accessible by using the
string "claßnames"?

Regards
Andreas





Re: [jira] Updated: (DERBY-1862) Simple hash improves performance

2006-09-19 Thread Andreas Korneliussen

> As far as I remember from my high-school German, even if every "ß" may be
> converted to the uppercase "SS", not every uppercase "SS" may be converted
> to the lowercase "ß". If the "SS" appears in a compound word (in German,
> words are combined by concatenation, as in Norwegian) where one word ends
> with "S" and the next word starts with "S", the result when converted to
> lowercase should be "ss" (I am trying to construct an example, but my
> German is very, very rusty... ;-)
> 

There is of course no logic in String.toLowerCase() to convert "SS" to
"ß" based on German grammar rules, since it does a character-by-character
conversion.

So "ICH HEISSE BERNT".toLowerCase() will be "ich heisse bernt", and not
"ich heiße bernt" ;-)

Regards
Andreas






  1   2   3   4   5   6   >