Statement cache deadlock

2009-03-05 Thread Jeff Stuckman
Hello,

I have spent the past three days trying to find the cause of a seemingly
impossible deadlock in my Derby application. This was hard to debug
because the deadlock only occurs under heavy load. I think I have found the
cause -- due to a design flaw in the statement cache, a deadlock can occur
if a prepared statement becomes out-of-date.

I will illustrate this with the following example:

The application is using the embedded Derby driver. The application has two
threads, and each thread uses its own connection.

There is a table named MYTABLE with column MYCOLUMN.

1. A thread prepares and executes the query SELECT MYCOLUMN FROM MYTABLE.
The prepared statement is stored in the statement cache (see
org.apache.derby.impl.sql.GenericStatement for this logic)
2. After some time, the prepared statement becomes invalid or out-of-date
for some reason (see org.apache.derby.impl.sql.GenericPreparedStatement)
3. Thread 1 begins a transaction and executes LOCK TABLE MYTABLE IN
EXCLUSIVE MODE
4. Thread 2 begins a transaction and executes SELECT MYCOLUMN FROM MYTABLE.
The statement is in the statement cache but it is out-of-date. The thread
begins to recompile the statement. To compile the statement, the thread
needs a shared lock on MYTABLE. Thread 1 already has an exclusive lock on
MYTABLE. Thread 2 waits.
5. Thread 1 executes SELECT MYCOLUMN FROM MYTABLE. The statement is in the
statement cache but it is being compiled. Thread 1 waits on the statement's
monitor.
6. We have a deadlock. Derby eventually detects a lock timeout, but the
error message is not descriptive. The stacks at the time of the deadlock
are:

Thread 1:
java.lang.Object.wait(Native Method)
java.lang.Object.wait(Object.java:485)
org.apache.derby.impl.sql.GenericStatement.prepMinion(Unknown Source)
org.apache.derby.impl.sql.GenericStatement.prepare(Unknown Source)
org.apache.derby.impl.sql.GenericPreparedStatement.rePrepare(Unknown Source)
org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(Unknown Source)
org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeQuery(Unknown Source)

Thread 2 (some frames omitted):
org.apache.derby.impl.services.locks.ActiveLock.waitForGrant(Unknown Source)
org.apache.derby.impl.services.locks.ConcurrentLockSet.lockObject(Unknown Source)
org.apache.derby.impl.services.locks.AbstractPool.lockObject(Unknown Source)
...
org.apache.derby.impl.sql.compile.ResultColumnList.generateHolderMethod(Unknown Source)
org.apache.derby.impl.sql.compile.FromBaseTable.getScanArguments(Unknown Source)
...
org.apache.derby.impl.sql.compile.IndexToBaseRowNode.generate(Unknown Source)
org.apache.derby.impl.sql.compile.ProjectRestrictNode.generateMinion(Unknown Source)
org.apache.derby.impl.sql.compile.ScrollInsensitiveResultSetNode.generate(Unknown Source)
...
org.apache.derby.impl.sql.GenericStatement.prepMinion(Unknown Source)
org.apache.derby.impl.sql.GenericStatement.prepare(Unknown Source)
org.apache.derby.impl.sql.GenericPreparedStatement.rePrepare(Unknown Source)
org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(Unknown Source)
org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeQuery(Unknown Source)

I discovered an undocumented property that allows me to disable the
statement cache. Setting derby.language.statementCacheSize=0 will disable
the statement cache and prevent the deadlock, because the
GenericPreparedStatement objects are no longer shared between sessions/JDBC
connections.
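
For reference, one way to set the property (it can also go in the
derby.properties file; this is just a sketch, and the database name is made
up):

import java.sql.Connection;
import java.sql.DriverManager;

public class NoStatementCache {
    public static void main(String[] args) throws Exception {
        // Must be set before the Derby engine boots in this JVM;
        // a value of 0 disables the statement cache entirely.
        System.setProperty("derby.language.statementCacheSize", "0");
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
        // ... use the connection as usual ...
        conn.close();
    }
}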

In my situation, this deadlock is impossible to prevent through careful
database design. You are only safe if all of your transactions are very
simple and cannot be interleaved in a sequence that causes the deadlock.
(For the sake of simplicity, I used LOCK TABLE in my example, but any UPDATE
statement would fit.)

I'm asking any knowledgeable parties to help me with the following:
1. Read my deadlock scenario and see if my analysis of the situation is
correct.
2. Help me write a test case, so I can post this as an issue. (I'm hoping
that we can fix this, so nobody else spends 3 days debugging this like I
just did.) I'm not sure how to invalidate the cache entry so the issue can
be reproduced.
3. If someone is familiar with the statement caching code, maybe we could
find possible solutions. Maybe we could wait until all the locks for
compilation have been acquired before grabbing the
GenericPreparedStatement object (compilingStatement=true). At the very
least, the statement compiler could recognize the significance of the lock
timeout and suggest a solution in the exception text.
4. Document the property derby.language.statementCacheSize

Thanks,
Jeff




How to read blobs?

2009-03-05 Thread epdv


Hello!

I want to read blobs in a loop (more exactly: the lengths of the blobs).
I'm using a connection with autocommit(true) and a prepared statement
with something like "select mydata, myblob where mykey=?".


Then I'm using the following loop:

while (rs.next()) {
    int m = wantedColumnNames.length;
    Blob blob = (Blob) rs.getObject(x);
    if (blob == null) {
        result = 0L;
    } else {
        result = blob.length();
        blob.free();
    }
    ...
}

I'm always getting an error message saying that the blob cannot be
released after closing; probably the transaction had already been
committed. As the connection is in autocommit mode, I assume it has already
been closed after statement execution, but if I try non-autocommit mode,
the result set is not open after execute (at least Derby says that). How
can I get the expected result?


BTW, using derby 10.4.2.0 in client/server mode.

Kind regards

Peter



Re: How to read blobs?

2009-03-05 Thread Kristian Waagan

epdv wrote:


Hello!

I want to read blobs in a loop (more exactly: the lengths of the blobs).
I'm using a connection with autocommit(true) and a prepared statement
with something like "select mydata, myblob where mykey=?".


Then I'm using the following loop:

while (rs.next()) {
    int m = wantedColumnNames.length;
    Blob blob = (Blob) rs.getObject(x);
    if (blob == null) {
        result = 0L;
    } else {
        result = blob.length();
        blob.free();
    }
    ...
}

I'm always getting an error message saying that the blob cannot be
released after closing; probably the transaction had already been
committed. As the connection is in autocommit mode, I assume it has already
been closed after statement execution, but if I try non-autocommit mode,
the result set is not open after execute (at least Derby says that). How
can I get the expected result?


Hello Peter,

Can you post the stack traces from the exceptions you get?
That would be nice, so that we can see where the exception is being thrown.
It shouldn't matter, but have you tried using getBlob() instead of 
getObject()?


If possible, posting the runnable code somewhere may also help.


Regards,
--
Kristian



BTW, using derby 10.4.2.0 in client/server mode.

Kind regards

Peter





Derby 10.1 - 10.2 upgrade issue

2009-03-05 Thread Kalyan Inuganti
Hi,

I am reaching out to you guys for some help with a Derby indexing issue that
we have run into at Monsanto, St. Louis. The issue is pretty much the same
one that Thomas J. Taylor reported in 2007 (the link is provided below).

*Brief Description:*

I have a database that was originally created with Derby 10.1.1.0 and was
recently upgraded to Derby 10.2.2.0. I've performed this upgrade on several
copies of the same database schema (each created on different computers,
but with the same version of Java (1.5.0_07) and Derby (10.1)).

For all but one of the database upgrades, it worked correctly. However, in
one case, it appears that the PRIMARY KEY and FOREIGN KEY constraints have
been lost/corrupted. When I use DBLook to check a 'working' database, I see
the appropriate constraints for keys. However, on the 'defective' database,
these constraints are missing.


We have over 80 tables in the DB and over 1000 users. Even though we have
only 2 reported occurrences of this issue so far, it might be more
widespread. It is a nightmare to manually identify the corrupted indexes for
each occurrence. Any thoughts on how we can tackle this through a
programmatic approach?

Here is the link to the old report:

http://mail-archives.apache.org/mod_mbox/db-derby-user/200704.mbox/%3c462d5df8.80...@gmail.com%3e

Thanks,
Kal


Re: How to read blobs?

2009-03-05 Thread epdv

Sorry, only German language - Win, no lang=C available :(

Kind regards

Peter


org.apache.derby.client.am.SqlException: The locator that was supplied for
this CLOB/BLOB is invalid
Caused: org.apache.derby.client.am.SqlException: The exception
'java.sql.SQLException: The locator that was supplied for this CLOB/BLOB is
invalid' was thrown while evaluating an expression.
	at org.apache.derby.client.am.Statement.completeExecute(Unknown Source)
	at org.apache.derby.client.net.NetStatementReply.parseEXCSQLSTTreply(Unknown Source)
	at org.apache.derby.client.net.NetStatementReply.readExecuteCall(Unknown Source)
	at org.apache.derby.client.net.StatementReply.readExecuteCall(Unknown Source)
	at org.apache.derby.client.net.NetStatement.readExecuteCall_(Unknown Source)
	at org.apache.derby.client.am.Statement.readExecuteCall(Unknown Source)
	at org.apache.derby.client.am.PreparedStatement.flowExecute(Unknown Source)
	at org.apache.derby.client.am.PreparedStatement.executeX(Unknown Source)
Caused: org.apache.derby.client.am.SqlException: No further
java.sql.Clob/java.sql.Blob methods can be invoked after the free() method
has been called or after the Blob/Clob transaction has been committed or
rolled back.
	at org.apache.derby.client.am.CallableLocatorProcedures.handleInvalidLocator(Unknown Source)
	at org.apache.derby.client.am.CallableLocatorProcedures.blobGetLength(Unknown Source)
	at org.apache.derby.client.am.Blob.getLocatorLength(Unknown Source)
	at org.apache.derby.client.am.Lob.sqlLength(Unknown Source)
Caused: java.sql.SQLException: No further java.sql.Clob/java.sql.Blob
methods can be invoked after the free() method has been called or after the
Blob/Clob transaction has been committed or rolled back.
	at org.apache.derby.client.am.SQLExceptionFactory40.getSQLException(Unknown Source)
	at org.apache.derby.client.am.SqlException.getSQLException(Unknown Source)
	at org.apache.derby.client.am.Blob.length(Unknown Source)




Re: How to read blobs?

2009-03-05 Thread Peter Ondruška
Hi, just add -Duser.language=en to java startup options.
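
If you cannot change the startup options, setting the default locale in code
before any Derby classes are loaded should have the same effect (untested):

// Derby looks up its message texts using the JVM's default locale.
java.util.Locale.setDefault(java.util.Locale.ENGLISH);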

2009/3/5, epdv peter.nabbef...@gmx.de:
 Sorry, only German language - Win, no lang=C available :(

 [...]





Re: How to read blobs?

2009-03-05 Thread Rick Hillegas

Hi Peter,

If all you need is the lengths of the blobs, then you could use the 
length() function:


select length( blobColumn ) from myTable

For null blobs, the returned value will be null.
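
For example, from JDBC (a minimal sketch; it assumes an open Connection
conn, and the table and column names are invented):

PreparedStatement ps = conn.prepareStatement(
        "SELECT length(myblob) FROM mytable WHERE mykey = ?");
ps.setInt(1, 42);
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    long len = rs.getLong(1);       // 0 if the value was SQL NULL...
    boolean wasNull = rs.wasNull(); // ...so use wasNull() to distinguish
}
rs.close();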

Hope this helps,
-Rick

epdv wrote:


Hello!

I want to read blobs in a loop (more exactly: the lengths of the blobs). [...]





RE: Derby 10.1 - 10.2 upgrade issue

2009-03-05 Thread Thomas J. Taylor
Hi Kal,

 

I'll check to see if I can find the code/process that I used back then to
solve the issue. Since I only had one (remote) Derby installation causing
problems, once I figured out the way to resolve the problem (drop & recreate
the index), I probably (1) used DBLook to identify the corrupt (missing)
indexes, then used SQuirreL to (2) identify the names of the keys through
the GUI, and (3) wrote the DDL to drop and re-create the indexes.

 

You should be able to use JDBC to get the same index information and
drop/create the index that way; however, the challenge is identifying the
corrupt indices. Perhaps this might work?

 

Connection connection; // existing db connection

Statement statement = connection.createStatement();

try {
    // Test the table to confirm a corrupt index: an SQLException is thrown if corrupt.
    statement.executeQuery("SELECT DeviceID, DeviceName, DeviceType FROM DeviceInfo WHERE DeviceID=1");
} catch (SQLException ex) {
    // Retrieve index information for the corrupt table:
    // http://java.sun.com/j2se/1.5.0/docs/api/java/sql/DatabaseMetaData.html#getIndexInfo(java.lang.String,%20java.lang.String,%20java.lang.String,%20boolean,%20boolean)
    DatabaseMetaData databaseMetaData = connection.getMetaData();
    ResultSet resultSet = databaseMetaData.getIndexInfo(null, null, "DeviceInfo", false, false);

    // For each index, drop & recreate the index.
    while (resultSet.next()) {
        // Get the name of the index.
        String indexNameString = resultSet.getString("INDEX_NAME");
        // Note: Derby's syntax is DROP INDEX index-name (no ON clause).
        statement.executeUpdate("DROP INDEX " + indexNameString);
        // Recreate index (column list hardcoded here):
        // http://db.apache.org/derby/docs/10.2/ref/rrefsqlj20937.html
        statement.executeUpdate("CREATE UNIQUE INDEX " + indexNameString + " ON DeviceInfo (DeviceID)");
    }
}

 

Thomas Taylor

INFOTECH Soft, Inc.

 

From: Kalyan Inuganti [mailto:kinuga...@gmail.com] 
Sent: Thursday, March 05, 2009 11:10 AM
To: derby-user@db.apache.org
Subject: Derby 10.1 - 10.2 upgrade issue

 

Hi,

I am reaching out to you guys for some help with a Derby indexing issue that
we have run into at Monsanto, St. Louis. [...]



Re: Derby 10.1 - 10.2 upgrade issue

2009-03-05 Thread Rick Hillegas

Hi Kalyan,

You may need to join some Derby system tables in order to retrieve the 
information you need. The following email thread may be helpful:


http://www.nabble.com/How-can-I-fetch-constraint-attribute-on-Column-Level-from-SYS-Tables---td19554573.html#a19554573

Hope this helps,
-Rick

Kalyan Inuganti wrote:

Hi,

I am reaching out to you guys for some help with a Derby indexing
issue that we have run into at Monsanto, St. Louis. [...]




generated always as identity

2009-03-05 Thread aponb
I need to replicate data from mysql to derby. I have the init script for
the empty derby db. Some tables have columns which provide generated
keys. These tables are created like this:

create table foo (foo_id bigint not null generated always as identity
primary key, ... );

If I try to replicate my data from the other database to derby, it won't
work because derby wants to generate the key for foo_id. So my idea was
to create all tables with these columns without the "generated always as
identity" clause, then to import all the data, and afterwards to alter
the column again.

The import worked, but what I couldn't do was add "generated always as
identity" back to the column foo_id.

Does anybody know how I can do that?



Re: generated always as identity

2009-03-05 Thread George H
Have you tried setting the start and increment values of the identity
clause, like

GENERATED ALWAYS AS IDENTITY (START WITH x, INCREMENT BY y)

where x is the last auto_increment number from mysql and y is, well, the
step number to increment by?

--
George H
george@gmail.com



On Thu, Mar 5, 2009 at 6:44 PM, ap...@gmx.at wrote:
 I need to replicate data from mysql to derby. [...]




Re: generated always as identity

2009-03-05 Thread Kristian Waagan

George H wrote:

Have you tried setting the start and increment values of the identity
clause, like

GENERATED ALWAYS AS IDENTITY (START WITH x, INCREMENT BY y)

where x is the last auto_increment number from mysql and y is, well, the
step number to increment by?

--
George H
george@gmail.com



On Thu, Mar 5, 2009 at 6:44 PM, ap...@gmx.at wrote:
 I need to replicate data from mysql to derby. [...]


Hello,

As far as I know, you cannot alter the column in such a way.

Can you use GENERATED BY DEFAULT AS IDENTITY instead?
If so, you have to adjust the identity value after the initial data has 
been imported, as George described in his answer, to make sure you don't 
get any duplicate values.
It may also be wise to make sure your insertion code properly handles 
duplicate primary key exceptions.
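
A sketch of that approach (untested; the table layout is invented, and the
restart value must be one greater than the highest imported key):

Statement s = conn.createStatement();
// BY DEFAULT lets the imported rows supply their own key values.
s.execute("CREATE TABLE foo (foo_id BIGINT NOT NULL "
        + "GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY)");
// ... bulk-import the MySQL data here ...
// Then move the identity counter past the highest imported value:
s.execute("ALTER TABLE foo ALTER COLUMN foo_id RESTART WITH 100001");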



HTH,
--
Kristian








RE: Derby 10.1 - 10.2 upgrade issue

2009-03-05 Thread Thomas J. Taylor
Kal,

According to the Java 5 docs, the getIndexInfo ResultSet should contain all
of the required information (INDEX_NAME, COLUMN_NAME, NON_UNIQUE) - if the
index consists of multiple columns, then multiple ResultSet entries will be
returned for a given INDEX_NAME, and you'll have to use ORDINAL_POSITION to
recreate the sequencing of columns within the index.
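
Something along these lines might work for collecting that information
(untested sketch, plain JDBC):

import java.sql.*;
import java.util.*;

public class IndexColumns {
    // Collect the columns of each index on a table, in ORDINAL_POSITION
    // order. getIndexInfo returns one row per (index, column), ordered by
    // NON_UNIQUE, TYPE, INDEX_NAME and ORDINAL_POSITION, so the columns of
    // any one index arrive in sequence.
    public static Map<String, List<String>> indexColumns(Connection conn,
            String table) throws SQLException {
        DatabaseMetaData md = conn.getMetaData();
        ResultSet rs = md.getIndexInfo(null, null, table, false, false);
        Map<String, List<String>> columnsByIndex =
            new LinkedHashMap<String, List<String>>();
        while (rs.next()) {
            String indexName = rs.getString("INDEX_NAME");
            String columnName = rs.getString("COLUMN_NAME");
            if (indexName == null || columnName == null) {
                continue; // tableIndexStatistic rows have no index/column name
            }
            List<String> cols = columnsByIndex.get(indexName);
            if (cols == null) {
                cols = new ArrayList<String>();
                columnsByIndex.put(indexName, cols);
            }
            cols.add(columnName); // rows arrive in ORDINAL_POSITION order
            // rs.getBoolean("NON_UNIQUE") tells you whether to re-create
            // the index with the UNIQUE qualifier.
        }
        rs.close();
        return columnsByIndex;
    }
}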

 

If not, you might try looking at the thread/link that Rick Hillegas sent
(http://www.nabble.com/How-can-I-fetch-constraint-attribute-on-Column-Level-from-SYS-Tables---td19554573.html#a19554573)

 

Thomas

INFOTECH Soft, Inc.

getIndexInfo
(from http://java.sun.com/j2se/1.5.0/docs/api/java/sql/DatabaseMetaData.html)

ResultSet getIndexInfo(String catalog,
                       String schema,
                       String table,
                       boolean unique,
                       boolean approximate)
                throws SQLException

Retrieves a description of the given table's indices and statistics. They
are ordered by NON_UNIQUE, TYPE, INDEX_NAME, and ORDINAL_POSITION.

Each index column description has the following columns:

1.  TABLE_CAT String => table catalog (may be null)

2.  TABLE_SCHEM String => table schema (may be null)

3.  TABLE_NAME String => table name

4.  NON_UNIQUE boolean => Can index values be non-unique. false when
TYPE is tableIndexStatistic

5.  INDEX_QUALIFIER String => index catalog (may be null); null when
TYPE is tableIndexStatistic

6.  INDEX_NAME String => index name; null when TYPE is
tableIndexStatistic

7.  TYPE short => index type:

o  tableIndexStatistic - this identifies table statistics that are
returned in conjunction with a table's index descriptions

o  tableIndexClustered - this is a clustered index

o  tableIndexHashed - this is a hashed index

o  tableIndexOther - this is some other style of index

8.  ORDINAL_POSITION short => column sequence number within index; zero
when TYPE is tableIndexStatistic

9.  COLUMN_NAME String => column name; null when TYPE is
tableIndexStatistic

10.  ASC_OR_DESC String => column sort sequence, "A" = ascending, "D" =
descending, may be null if sort sequence is not supported; null when TYPE is
tableIndexStatistic

11.  CARDINALITY int => When TYPE is tableIndexStatistic, then this is the
number of rows in the table; otherwise, it is the number of unique values in
the index.

12.  PAGES int => When TYPE is tableIndexStatistic then this is the number of
pages used for the table, otherwise it is the number of pages used for the
current index.

13.  FILTER_CONDITION String => Filter condition, if any. (may be null)

Parameters:

catalog - a catalog name; must match the catalog name as it is stored in
this database; "" retrieves those without a catalog; null means that the
catalog name should not be used to narrow the search

schema - a schema name; must match the schema name as it is stored in this
database; "" retrieves those without a schema; null means that the schema
name should not be used to narrow the search

table - a table name; must match the table name as it is stored in this
database

unique - when true, return only indices for unique values; when false,
return indices regardless of whether unique or not

approximate - when true, result is allowed to reflect approximate or out of
date values; when false, results are requested to be accurate

Returns:

ResultSet - each row is an index column description

Throws:

SQLException - if a database access error occurs

 

 

From: Kalyan Inuganti [mailto:kinuga...@gmail.com] 
Sent: Thursday, March 05, 2009 1:44 PM
To: tho...@infotechsoft.com; derby-user@db.apache.org
Subject: Re: Derby 10.1 - 10.2 upgrade issue

 

Hi Thomas,

I have been doing some research on how I can get the column name(s) that an
index corresponds to (see the 2nd bold section of the SQL) and haven't found
anything yet. Any ideas? I would also like to know whether a given index is a
unique index or not. The reason I ask is that I looked at CREATE INDEX...
statements and there are 2 flavors - ones with the UNIQUE qualifier and the
others without.

statement.executeUpdate("CREATE UNIQUE INDEX " + indexNameString + " ON
DeviceInfo (DeviceID)");

Thanks a lot for your help!
Kal

On Thu, Mar 5, 2009 at 11:15 AM, Thomas J. Taylor
thomas.tay...@infotechsoft.com wrote:

Hi Kal,

I'll check to see if I can find the code/process that I used back then to
solve the issue. [...]

Re: Statement cache deadlock

2009-03-05 Thread Dag H. Wanvik

Thank you for the detailed analysis, Jeff! I think your analysis
sounds reasonable. I suggest you file this as a bug in JIRA, even if
we have no repro yet. You could invalidate the prepared statement by
dropping (maybe even adding) an index from/to the table, I think.
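
Untested sketch of what a repro might look like (all table and index names
are made up):

import java.sql.*;

public class CacheDeadlockRepro {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection c1 = DriverManager.getConnection("jdbc:derby:testDB;create=true");
        final Connection c2 = DriverManager.getConnection("jdbc:derby:testDB");
        c1.setAutoCommit(false);
        c2.setAutoCommit(false);

        Statement s = c1.createStatement();
        s.execute("CREATE TABLE MYTABLE (MYCOLUMN INT)");
        s.execute("CREATE INDEX MYINDEX ON MYTABLE (MYCOLUMN)");
        c1.commit();

        // Put the statement into the shared statement cache.
        final PreparedStatement ps2 =
            c2.prepareStatement("SELECT MYCOLUMN FROM MYTABLE");
        ps2.executeQuery().close();
        c2.commit();

        // Invalidate the cached plan by dropping the index.
        s.execute("DROP INDEX MYINDEX");
        c1.commit();

        // Step 3: thread 1 takes an exclusive table lock.
        s.execute("LOCK TABLE MYTABLE IN EXCLUSIVE MODE");

        // Step 4: thread 2 recompiles the invalidated plan and blocks
        // waiting for a shared lock on MYTABLE.
        Thread t2 = new Thread(new Runnable() {
            public void run() {
                try {
                    ps2.executeQuery().close();
                    c2.commit();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
        });
        t2.start();
        Thread.sleep(1000); // give thread 2 time to start compiling

        // Step 5: thread 1 executes the same SQL text and should block
        // on the statement's monitor -> deadlock until the lock timeout.
        c1.prepareStatement("SELECT MYCOLUMN FROM MYTABLE").executeQuery().close();
        t2.join();
    }
}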

Thanks,
Dag

Jeff Stuckman stuck...@umd.edu writes:

 Hello,

 I have spent the past three days trying to find the cause of a seemingly
 impossible deadlock in my Derby application. [...]

Re: Unable to group by certain functions

2009-03-05 Thread Matt Solnit
  ij> SELECT ABS(a) FROM foo GROUP BY ABS(a);
 
  works fine.  But if I use CEIL or FLOOR, I get an error:

 I'm truly speculating here, but I wonder if this behavior is
 because CEIL and FLOOR return floating point values, and therefore
 have a certain amount of inexactness in them, which is unacceptable
 for GROUP BY processing?

Hi Bryan.  Thanks for your response.  This is precisely one of the subjects
that were debated in the 2006 thread
(http://markmail.org/message/pggvaqh6ejg2dtf3).  Here are some snippets:

Daniel Debrunner: "Grouping by DOUBLE should be allowed since they can be
compared. It's probably not advisable, but Derby's SQL should allow it."

Bernt Johnsen: "A double is just a finite set of bits ... so it should
work, even if it is not the most reasonable thing to do."

And of course, it works fine in MySQL and PostgreSQL ;-).

 I'm afraid all I can really do is to confirm that I see the
 same behavior that you see, and it doesn't appear to be random,
 but rather is intentional, and has to do with the fact that CEIL
 and FLOOR are implemented differently than ABS.

What's next?  Should I submit an issue to JIRA?

 bryan



Re: Unable to group by certain functions

2009-03-05 Thread Bryan Pendleton

but rather is intentional, and has to do with the fact that CEIL
and FLOOR are implemented differently than ABS.


What's next?  Should I submit an issue to JIRA?


Yes, please. I think that would be very helpful. Please include your
test programs/scripts, and pointers to the various email threads
that you located, as they are definitely quite helpful.

Here's a few more pointers regarding filing an issue:
http://db.apache.org/derby/DerbyBugGuidelines.html

thanks,

bryan



Order the results, *and* slice them

2009-03-05 Thread Daniel Noll

Hi all.

I'm trying to use ROW_NUMBER() in conjunction with ORDER BY and not 
having a lot of luck.


This is the query I'm using, which I found on the Derby wiki:

  SELECT * FROM (
    SELECT ROW_NUMBER() OVER(ORDER BY startdate ASC) AS row,
           id, startdate, enddate, ...
    FROM HistoryRecord
  ) AS tmp
  WHERE row >= ? AND row < ?

I have also tried what I consider to be the intuitive way:

  SELECT * FROM (
    SELECT ROW_NUMBER() OVER() AS row, id, startdate, enddate, ...
    FROM HistoryRecord
    ORDER BY startdate ASC
  ) AS tmp
  WHERE row >= ? AND row < ?

Both of these give an error at ORDER BY.  Am I doing something wrong, or 
is this a missing part of the ROW_NUMBER() feature?


Daniel


--
Daniel Noll, Senior Developer, Nuix
Forensic and eDiscovery Software - the world's most advanced
email data analysis and eDiscovery software
http://nuix.com/


Re: Order the results, *and* slice them

2009-03-05 Thread Bryan Pendleton
Both of these give an error at ORDER BY.  Am I doing something wrong, or 
is this a missing part of the ROW_NUMBER() feature?


I think you might be seeing DERBY-3634:
http://issues.apache.org/jira/browse/DERBY-3634

I think there is some renewed work on the ROW_NUMBER feature for Derby 10.5,
so hopefully some of these problems will be addressed soon.

bryan



Re: How to read blobs?

2009-03-05 Thread epdv


Thank you for the answer - but I've also got problems with
autocommit(false): "ResultSet not open". How can I get the InputStream
then? And can I access the opened InputStream independently of the
blob, i.e. can I still read the stream after freeing the blob?


Kind regards

Peter Nabbefeld



Donald McLean schrieb:

You cannot use blobs with autocommit(true). It just doesn't work.

Donald

On Thu, Mar 5, 2009 at 8:38 AM, epdvpeter.nabbef...@gmx.de  wrote:

Hello!

I want to read blobs in a loop (more exactly: the lengths of the blobs). I'm
using a connection with autocommit(true) and a prepared statement with
something like "select mydata, myblob where mykey=?".

-
Family photographs are a critical legacy for
ourselves and our descendants. Protect that
legacy with a digital backup and recovery plan.






Re: How to read blobs?

2009-03-05 Thread Kristian Waagan

epdv wrote:


Thank you for the answer - but I've also got problems with
autocommit(false): "ResultSet not open". How can I get the InputStream
then? And can I access the opened InputStream independently of the
blob, i.e. can I still read the stream after freeing the blob?


You can operate on the Blob until you commit or roll back, if you turn off
autocommit.
As soon as you call Blob.free(), the Blob object, and any objects
obtained from it (like an InputStream), cannot be used any more. This
holds regardless of whether you have committed/aborted or not.

The problem you see with the result set must be something else.
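
Roughly like this (sketch only; the open connection, key value and names
are assumed):

conn.setAutoCommit(false);
PreparedStatement ps = conn.prepareStatement(
        "SELECT mydata, myblob FROM mytable WHERE mykey = ?");
ps.setInt(1, 42);
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    Blob blob = rs.getBlob(2);
    long result = (blob == null) ? 0L : blob.length();
    // Don't call blob.free() if you still need streams obtained from it.
}
rs.close();
conn.commit(); // after this, all Blob locators from this txn are invalid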


--
Kristian



Kind regards

Peter Nabbefeld



Donald McLean schrieb:

You cannot use blobs with autocommit(true). It just doesn't work.

Donald

On Thu, Mar 5, 2009 at 8:38 AM, epdvpeter.nabbef...@gmx.de  wrote:

Hello!

I want to read blobs in a loop (more exactly: the lengths of the blobs). I'm
using a connection with autocommit(true) and a prepared statement with
something like "select mydata, myblob where mykey=?".

-
Family photographs are a critical legacy for
ourselves and our descendants. Protect that
legacy with a digital backup and recovery plan.