Re: Performance degradation with Derby Embedded under Tomcat vs Jetty

2015-04-14 Thread Brett Wooldridge
While it may be that PostgreSQL is faster, make sure you are measuring 
accurately.

First, you should be running the Java Server VM, preferably Java 8.

Second, the server VM's JIT compile threshold is 10,000 executions of a given 
method -- so the VM will get faster and faster as it progressively compiles 
code.  I would run 10,000 queries through Derby before taking measurements.
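
For example, a warm-up loop along these lines (a sketch; runQuery() stands
in for whatever query you intend to benchmark):

    for (int i = 0; i < 10000; i++) {
        runQuery(connection);          // let the JIT compile the hot paths
    }
    long start = System.nanoTime();
    runQuery(connection);              // now take the measurement
    long elapsedNanos = System.nanoTime() - start;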

Lastly, PostgreSQL may be running with substantially larger page caches by 
default -- to the point where much of the data may be completely in memory.

Read the Derby Tuning Guide to set the page cache to as close to 
PostgreSQL's size as possible.

Sent from my iPhone

> On Apr 14, 2015, at 15:30, Øyvind Harboe  wrote:
> 
> I compared the performance on PostgreSQL vs. Derby for this report. 
> Unsurprisingly PostgreSQL performs much better.


Re: Performance degradation with Derby Embedded under Tomcat vs Jetty

2015-04-13 Thread Brett Wooldridge
Are you sure that under Tomcat the application is really using the Tomcat
connection pool?  If not, that would certainly account for the differences.
Even if Tomcat DBCP is configured, checking that the pool configurations
are similar would seem like an important check.
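
One quick sanity check (a sketch; the JNDI name is the conventional Tomcat
one and may differ in your setup) is to log which DataSource implementation
actually serves the application:

    // assumes javax.naming.InitialContext and javax.sql.DataSource imports
    DataSource ds = (DataSource) new InitialContext()
            .lookup("java:comp/env/jdbc/MyDB");
    System.out.println("Pool class:  " + ds.getClass().getName());
    System.out.println("Driver name: "
            + ds.getConnection().getMetaData().getDriverName());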

-Brett


On Mon, Apr 13, 2015 at 4:56 PM, Øyvind Harboe 
wrote:

> I know the Tomcat setup isn't using the clientdriver when it slows
> down, because when I first tried to switch to the ClientDriver it
> failed. After I copied the derbyclient.xxx.jar to tomcat/lib/, it
> worked.
>
>
>
>
>
> On Mon, Apr 13, 2015 at 9:51 AM, Dyre Tjeldvoll
>  wrote:
> > On 04/13/2015 12:27 AM, Øyvind Harboe wrote:
> >>
> >> I found another crucial clue:
> >>
> >> if I use ClientDriver instead of EmbeddedDriver and connect to the
> >> Derby database running under Jetty, I get identical performance with
> >> Tomcat and Jetty.
> >>
> >> This is very strong indication that there is something about the
> >> combination of EmbeddedDriver and Tomcat that is gumming up the works.
> >
> >
> > Could it be that your Tomcat setup always uses the ClientDriver for some
> > reason? I don't know much about setting up Databases with AppServers, but
> > based on the number of questions on SO and other places, it isn't
> trivial...
> >
> > --
> > Regards,
> >
> > Dyre
>
>
>
> --
> Øyvind Harboe - Can Zylin Consulting help on your project?
> http://www.zylin.com/
>


Re: Derby in production environment

2014-01-17 Thread Brett Wooldridge
There are definitely other businesses that use Derby.  My company is one,
but because Derby is an embedded database, many companies do not advertise
or promote the fact that they use it.  I can assure you that Derby is
reliable and scales quite well.  Just one data point to consider.



On Fri, Jan 17, 2014 at 7:42 PM, AirDT  wrote:

> Thank you for your reply Dyre
>
> As indicated in the list
> http://wiki.apache.org/db-derby/UsesOfDerby
>
> many products use Derby, but only one business application uses it.
> Why?
>
> We develop an application that uses HSQLDB today. We have about 500 users.
> We fail to understand why users sometimes lose data. We want to replace
> the HSQLDB database with another database ...
>
> AirDT
>
>
>
> --
> View this message in context:
> http://apache-database.10148.n7.nabble.com/Derby-in-production-environment-tp136557p136578.html
> Sent from the Apache Derby Users mailing list archive at Nabble.com.
>


Re: NPE when compressing large table

2012-09-04 Thread Brett Wooldridge
This looks like a bug to me; I recommend you open one.  You might also try
SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE.  If you can copy the DB to your
dev environment, reproduce the issue, and run a Derby 10.8.2.2 build
compiled with line numbers, it would probably help the devs pinpoint the
issue.
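
For reference, the in-place variant takes five arguments; a sketch of the
call (the three SMALLINT flags toggle purge, defragment, and truncate):

    CallableStatement cs = con.prepareCall(
        "call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(?, ?, ?, ?, ?)");
    cs.setString(1, schema);
    cs.setString(2, table);
    cs.setShort(3, (short) 1);   // purge deleted rows
    cs.setShort(4, (short) 1);   // defragment rows
    cs.setShort(5, (short) 1);   // truncate unused space at end of file
    cs.execute();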

Brett

On Wed, Sep 5, 2012 at 5:11 AM, Erick Lichtas wrote:

> Hi Everyone,
>
> I am having an issue with the SYSCS_UTIL.SYSCS_COMPRESS_TABLE operation on
> a large table in Derby 10.8.2.2.
>
> Statement stmt = null;
> CallableStatement cs = null;
>
> try {
>    String sql = "select schemaname, tablename from sys.sysschemas s, "
>          + "sys.systables t where s.schemaid=t.schemaid and t.tabletype='T'";
>
>    stmt = con.createStatement();
>    ResultSet rs = stmt.executeQuery(sql);
>    cs = con.prepareCall("call SYSCS_UTIL.SYSCS_COMPRESS_TABLE(?, ?, ?)");
>
>    while (rs.next()) {
>       String schema = rs.getString(1).trim();
>       String table = rs.getString(2).trim();
>       log("Compressing " + schema + "." + table + "...");
>       cs.setString(1, schema);
>       cs.setString(2, table);
>       cs.setShort(3, (short) 1);
>       cs.execute();
>    }
> }
> finally {
>    if (stmt != null) {
>       stmt.close();
>    }
>    if (cs != null) {
>       cs.close();
>    }
> }
>
> The above code successfully runs over 3 of some 20 tables, then hits the
> 4th table, which has over 2 million records.  After working on this table
> for about 5 minutes, the process terminates with an SQLException wrapping
> an NPE.
>
> This happened in a production instance and I have not yet set up a
> reproduction in a development environment.  I’m wondering if anyone has any
> thoughts based on the stacktrace below?
>
>
> com.linoma.gaservices.upgrader.UpgradeException: Java exception: ':
> java.lang.NullPointerException'.
>
> at
> com.linoma.gaservices.upgrader.DefaultUpgrader.upgrade(DefaultUpgrader.java:227)
> 
>
> at
> com.linoma.gaservices.upgrader.UnixUpgrader.upgrade(UnixUpgrader.java:32)
>
> at
> com.linoma.gaservices.upgrader.UpgradeStarter.startUpgrade(UpgradeStarter.java:25)
> 
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79)
> 
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 
>
> at java.lang.reflect.Method.invoke(Method.java:618)
>
> at my.app.upgrader.Startup.main(Startup.java:72)
>
> Caused by: java.sql.SQLException: Java exception: ':
> java.lang.NullPointerException'.
>
> at
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
> Source)
>
> at
> org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
>
> at org.apache.derby.impl.jdbc.Util.javaException(Unknown
> Source)
>
> at
> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
> Source)
>
> at
> org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
> Source)
>
> at
> org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
> 
>
> at
> org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
>
> at
> org.apache.derby.jdbc.Driver30.getNewEmbedConnection(Unknown Source)
>
> at org.apache.derby.jdbc.InternalDriver.connect(Unknown
> Source)
>
> at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown
> Source)
>
> at
> java.sql.DriverManager.getConnection(DriverManager.java:572)
>
> at
> java.sql.DriverManager.getConnection(DriverManager.java:218)
>
> at
> my.app.upgrader.db.DerbyDatabaseUpgrader.closeDB(DerbyDatabaseUpgrader.java:99)
> 
>
> at
> my.app.upgrader.DefaultUpgrader.upgrade(DefaultUpgrader.java:223)
>
> ... 7 more 
>
> Caused by: java.lang.NullPointerException
>
> at
> org.apache.derby.impl.store.raw.data.BaseDataFileFactory.openContainer(Unknown
> Source)
>
> at
> org.apache.derby.impl.store.raw.data.BaseDataFileFactory.openContainer(Unknown
> Source)
>
> at
> org.apache.derby.impl.store.raw.xact.Xact.openContainer(Unknown Source)
>
> at
> org.apache.derby.impl.store.access.conglomerate.OpenC

Re: Guidance/Help/Book/References?

2012-06-23 Thread Brett Wooldridge
Hi Pavel,

Unfortunately, I'm not writing with any answers.  My company's software
also makes use of Derby in a high-concurrency setting, and we have
encountered similar issues.  For us, deadlocks when simply preparing
statements have been a major pain.  We have indeed had to put code-level
synchronization in various places to try to deal with the issues.

Unfortunately, I think you've run into bug 4279...

https://issues.apache.org/jira/browse/DERBY-4279

"This deadlock is unique because it can still occur in a properly
designed database.
You are only safe if all of your transactions are very simple and
cannot be interleaved
in a sequence that causes the deadlock, or if your particular statements do not
require a table lock to compile. (For the sake of simplicity, I used
LOCK TABLE in my
example, but any UPDATE statement would fit.)"

I had proposed a patch to 4279, but ultimately had to abandon it due
to complex and
non-obvious synchronization issues.  Maybe I'll take another cut at it...

-Brett

On Sat, Jun 23, 2012 at 1:18 AM, Pavel Bortnovskiy
 wrote:
> Hello, all:
>
>
>
> Derby is used heavily in my project and its tables are frequently accessed
> concurrently by multiple threads. Some threads update one or several tables,
> while other threads run select statements against them. I’ve written to
> this group several times whenever errors occurred, but some of those
> messages have either been ignored or answered only with references to
> fairly short explanations on the website. So, the problems continue and it
> is difficult to find the proper solution.
>
>
>
> All problems happen due to the concurrent access to tables. For instance,
> today’s problem had to do with one thread updating one table (the table is
> first truncated and then re-populated to stay in sync with the source of
> data), while the other thread was trying to create a PreparedStatement. So,
> while one thread was truncating a table, another threw this exception
> (probably caused by nested select statements):
>
>
>
> java.sql.SQLException: The conglomerate (1,280) requested does not exist.
>
>     at
> org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
> Source)
>
>     at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
> Source)
>
>     at
> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
> Source)
>
>     at
> org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
> Source)
>
>     at
> org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
>
>     at
> org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
>
>     at org.apache.derby.impl.jdbc.EmbedPreparedStatement.<init>(Unknown
> Source)
>
>     at
> org.apache.derby.impl.jdbc.EmbedPreparedStatement20.<init>(Unknown Source)
>
>     at
> org.apache.derby.impl.jdbc.EmbedPreparedStatement30.<init>(Unknown Source)
>
>     at
> org.apache.derby.impl.jdbc.EmbedPreparedStatement40.<init>(Unknown Source)
>
>     at org.apache.derby.jdbc.Driver40.newEmbedPreparedStatement(Unknown
> Source)
>
>     at
> org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
>
>     at
> org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
>
>     at
> com.jefco.fi.commons.database.SqlSyntaxStatement.<init>(SqlSyntaxStatement.java:76)
>
>     at
> com.jefco.fi.commons.database.SqlSyntaxStatement.<init>(SqlSyntaxStatement.java:44)
>
>    ...
>
> Caused by: java.sql.SQLException: The conglomerate (1,280) requested does
> not exist.
>
>     at
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
> Source)
>
>     at
> org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
> Source)
>
>     ... 19 more
>
> Caused by: ERROR XSAI2: The conglomerate (1,280) requested does not exist.
>
>     at
> org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
>
>     at
> org.apache.derby.impl.store.access.heap.HeapConglomerateFactory.readConglomerate(Unknown
> Source)
>
>     at
> org.apache.derby.impl.store.access.RAMAccessManager.conglomCacheFind(Unknown
> Source)
>
>     at
> org.apache.derby.impl.store.access.RAMTransaction.findExistingConglomerate(Unknown
> Source)
>
>     at
> org.apache.derby.impl.store.access.RAMTransaction.getStaticCompiledConglomInfo(Unknown
> Source)
>
>     at
> org.apache.derby.impl.sql.compile.BaseJoinStrategy.fillInScanArgs1(Unknown
> Source)
>
>     at
> org.apache.derby.impl.sql.compile.HashJoinStrategy.getScanArgs(Unknown
> Source)
>
>     at
> org.apache.derby.impl.sql.compile.FromBaseTable.getScanArguments(Unknown
> Source)
>
>     at
> org.apache.derby.impl.sql.compile.FromBaseTable.generateResultSet(Unknown
> Source)
>
>     at org.apache.derby.impl.sql.compile.FromBaseTable.generate(Unknown
> Source)
>
>     at
> org.apache.derby.impl.sql.c

Re: Suitable test query for Derby?

2012-04-15 Thread Brett Wooldridge
Well, full disclosure, I'm a contributor to Bitronix. :-)

Brett

Sent from my iPhone

On Apr 15, 2012, at 20:52, Martin Lichtin  wrote:

> Thanks Brett and Knut.
> Interesting you guys mention Bitronix TM.
> I was dealing with Atomikos when this question of a test query came up.
> Unlike Bitronix it doesn't support the JDBC4 isValid().
>
>  Original Message 
> Subject: Re: Suitable test query for Derby?
> From: Knut Anders Hatlen 
> To: Derby Discussion 
> Date: 4/13/2012 9:29 PM
>
>> Martin Lichtin  writes:
>>
>>> Hi
>>> Implementations of connection pooling typically use a "test query" to first 
>>> check
>>> whether a connection is still valid before sending the actual query across.
>>> A test query looks like
>>>   SELECT SYSDATE FROM DUAL (eg. PostgreSQL or Oracle)
>>> or
>>>   SELECT 1 (eg. Ingres)
>>>
>>> What is a suitable test query for Derby? I couldn't find one off-hand.
>>
>> VALUES 1 would be Derby's equivalent of the Ingres query. Or you could
>> use isValid(), as Brett suggested. (If I remember correctly, the client
>> driver actually implements isValid() by executing VALUES 1 on the
>> server.)
>>


Re: Suitable test query for Derby?

2012-04-13 Thread Brett Wooldridge
Alternatively, you can use a connection pool that supports the JDBC4
Connection.isValid() method, such as the one included with Bitronix JTA.
Of course, you have to use a transaction manager in the case of Bitronix,
but that's not a bad thing.  In theory, JDBC4 isValid(), if implemented
correctly by the driver, should be much less expensive than a test query
(which still has to be parsed and executed).  Derby does implement the
isValid() method.
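
A typical validation step would then look something like this (a minimal
sketch, assuming a 5-second timeout):

    // JDBC4 Connection.isValid() takes a timeout in seconds
    if (!connection.isValid(5)) {
        // discard this connection and obtain a fresh one from the pool
    }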

Brett

On Fri, Apr 13, 2012 at 11:48 PM, Martin Lichtin  wrote:

> Hi
> Implementations of connection pooling typically use a "test query" to
> first check
> whether a connection is still valid before sending the actual query across.
> A test query looks like
>  SELECT SYSDATE FROM DUAL (eg. PostgreSQL or Oracle)
> or
>  SELECT 1 (eg. Ingres)
>
> What is a suitable test query for Derby? I couldn't find one off-hand.
>
> Thanks
> Martin
>


Re: CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE

2012-02-17 Thread Brett Wooldridge
Is there a bug open related to this that I can watch?  As our
customers' databases grow, I'm concerned we'll run into the same
issue.  We normally run with a thread stack size of 128k, because our
application can have in excess of 200 threads, so requiring a stack
size of 2mb for compression is not desirable.

Sent from my iPhone

On Feb 18, 2012, at 8:11, Mike Matrigali  wrote:

> Matthew McCawley wrote:
>> I've run into the same issue as Adriano when running on a single, large table
>> about 1.4 GB in size. I enable autocommit before the compress statement and
>> disable it after. I have encountered the error when deleting portions of the
>> data as well as all of it. I also found that the compression would succeed
>> if I used a stack size of 2 MB and a maximum heap size of 1 GB (-Xss2048k
>> -Xmx1g). I'll be working with this more next week, so I'll see if anything
>> changes when working with a larger dataset.
> You see the same kind of repeated stack in the error?  This loop looks
> strange to me, and I don't think it should be related to the size of the
> tables:
> at 
> org.apache.derby.iapi.store.raw.xact.RawTransaction.notifyObservers(Unknown 
> Source)
> at org.apache.derby.impl.store.raw.data.DropOnCommit.update(Unknown Source)
> at java.util.Observable.notifyObservers(Observable.java:142)
> at 
> org.apache.derby.iapi.store.raw.xact.RawTransaction.notifyObservers(Unknown 
> Source)
> at org.apache.derby.impl.store.raw.data.DropOnCommit.update(Unknown Source)
> at java.util.Observable.notifyObservers(Observable.java:142)
> at 
> org.apache.derby.iapi.store.raw.xact.RawTransaction.notifyObservers(Unknown 
> Source)
> at org.apache.derby.impl.store.raw.data.DropOnCommit.update(Unknown Source)
> at java.util.Observable.notifyObservers(Observable.java:142)
> at 
> org.apache.derby.iapi.store.raw.xact.RawTransaction.notifyObservers(Unknown 
> Source)
> at org.apache.derby.impl.store.raw.data.DropOnCommit.update(Unknown Source)
> at java.util.Observable.notifyObservers(Observable.java:142)
> at 
> org.apache.derby.iapi.store.raw.xact.RawTransaction.notifyObservers(Unknown 
> Source)
>
>
> There are some reported problems with the amount of memory in general that 
> compress table uses, which are likely to be a different issue.  For
> these memory issues it is helpful to post exactly what jvm you are using, 
> what OS, and what flags you are giving the jvm.  And how much memory is on 
> your machine.
>
> Derby was not originally created with vldb in mind, so multi-gigabyte tables 
> could very well be exercising new code paths.  Derby definitely
> has the ability to perform index creations/sorts on tables bigger than
> memory size, but there are some reported problems in its estimates of
> how much memory it should use to do so.  These estimates can definitely
> be affected by jvm startup flags.


Re: Derby Database stored procedure (SYSCS_UTIL.SYSCS_BACKUP_DATABASE(?)) is failing to take database backup

2011-06-09 Thread Brett Wooldridge
Jayaprakash,

We would all like to help you.  I think the issue is that there is a
lack of information to properly diagnose the problem.  Therefore a
workaround is hard to recommend.  From my perspective, the most
important piece of missing information is a thread dump.

There are many resources on the internet about how to get a thread
dump from your running application.  Here is one:

http://expertodev.wordpress.com/2009/05/30/how-to-take-java-thread-dump/
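
If a JDK is installed on the machine, the jstack tool is another option (a
sketch, assuming the Derby-hosting process has process id 1234):

    jstack 1234 > threads.txt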

Regards,
Brett

On Thu, Jun 9, 2011 at 10:09 PM, Jayaprakash Kumar
 wrote:
> Hi Team
>
> We are facing a problem with the Derby backup procedure
> SYSCS_UTIL.SYSCS_BACKUP_DATABASE(?);
> this function fails to take a backup of the database and throws the
> error message below.
>
> Error Log:-
> com.db.mbc.scheduler.jobs.JobExecutionException: Error in performing backup
> database :A network protocol error was encountered and the connection has
> been terminated: the requested command encountered an unarchitected and
> implementation-specific condition for which there was no architected message
>         at
> com.db.mbc.scheduler.jobs.DBBackupScheduleJob.backUpDB(DBBackupScheduleJob.java:155)
>         at
> com.db.mbc.scheduler.jobs.DBBackupScheduleJob.executeJob(DBBackupScheduleJob.java:88)
>         at
> com.db.mbc.scheduler.jobs.BaseScheduleJob.execute(BaseScheduleJob.java:97)
>         at
> com.db.mbc.base.job.SpringAwareJobActionListener.actionFired(SpringAwareJobActionListener.java:73)
>         at
> com.topgear.executer.impl.StdJobExecutionShell.run(StdJobExecutionShell.java:207)
>         at
> com.topgear.threadpool.impl.StdThreadPool.run(StdThreadPool.java:168)
>         at java.lang.Thread.run(Thread.java:595)
> Job execution COMPLETED with result [true]
>
>
> -- It looks like a bug in the Derby database.
>
> -- Someone noticed this error before, but no one suggested a workaround
> for the issue. This issue is blocking customers from taking database
> backups in production.
>
>
>
> Regards,
> Jayaprakash Kumar,
> -
> Deutsche Bank AG
> Alfred-Herrhausen-Allee 16-24, 65760 Eschborn, Germany
> Direct : +49(69)910-65832
> MBC Hotline : +49 6991061400
> ***
> “This mail is transmitted to you on behalf of HCL Technologies.
> Diese Post wird Ihnen im Namen der HCL Technologies übermittelt”
>


Re: SYSCS_UTIL.SYSCS_BACKUP_DATABASE failing-urgent

2011-06-09 Thread Brett Wooldridge
Sorry I wasn't clearer.  A thread dump is a snapshot of all running
threads in the system, not a stacktrace of failures.

You can generate a threaddump on Windows by running your server from
the command line, and when the hang occurs during backup, pressing
CTRL-Break (CTRL key and Break key at the same time).  This will
produce a threaddump in your console (cmd.exe) window.  Copy/paste the
threaddump somewhere we can take a look at it.

If you are on Linux, you can produce a threaddump by sending the
process a QUIT signal:

kill -QUIT process_id

Regards,
Brett


On Thu, Jun 9, 2011 at 4:01 PM, dvijender  wrote:
>
> Hi Mike, Brett,
>
> Attached thread dump. Here the database size is 20 GB.
>
>
> Brett Wooldridge-2 wrote:
>>
>> Do you have a thread dump during the hang?  What
>> OS are you running on?
>>
>> Brett
>>
>> Sent from my iPhone
>>
>> On Jun 7, 2011, at 16:38, Vijender Devakari 
>> wrote:
>>
>> Hi,
>>
>> Can you respond to this as this is very urgent.
>>
>> Best Regards,
>>
>> Vijender D,
>> Deutsche Bank
>> Ph: +65 6423 8530
>> Mobile: +65 94517994
>>
>> 
>> This mail is transmitted to you on behalf of [HCL].
>> Diese Post wird Ihnen im Namen der [HCL] ubermittelt
>> *
>>
>> Vijender Devakari/ext/dbcom    06/03/2011 04:28 PM
>> To: derby-user@db.apache.org
>> Subject: SYSCS_UTIL.SYSCS_BACKUP_DATABASE failing
>>
>> Hi Team,
>>
>> we are using the proc below for backing up the database, but the call
>> hangs when the database size is more like 3GB, 4GB, ...
>>
>> Can you let me know in which cases we can use this, and also what else we
>> need to use to back up the database in case the size is larger.
>>
>> CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE(?)
>>
>> Best Regards,
>>
>> Vijender D,
>> Deutsche Bank
>> Ph: +65 6423 8530
>> Mobile: +65 94517994
>>
>> 
>> This mail is transmitted to you on behalf of [HCL].
>> Diese Post wird Ihnen im Namen der [HCL] ubermittelt
>> *
>>
>>
>>
>>
> http://old.nabble.com/file/p31806974/Logs.zip Logs.zip
> --
> View this message in context: 
> http://old.nabble.com/Re%3A-SYSCS_UTIL.SYSCS_BACKUP_DATABASE-failing-urgent-tp31789732p31806974.html
> Sent from the Apache Derby Users mailing list archive at Nabble.com.
>
>


LOBs/Replication

2011-06-08 Thread Brett Wooldridge
I have a question about replication as it pertains to LOBs (BLOBs and CLOBs).
According to the documentation...

If the master loses connection with the slave, "transactions are
allowed to continue
processing while the master tries to reconnect with the slave. Log
records generated
while the connection is down are buffered in main memory. If the log
buffer reaches
its size limit before the connection can be reestablished, the master
replication
functionality is stopped."

And the documentation for derby.replication.logBufferSize says the
maximum size of
the buffer is 1048576 (1MB).

This seems to imply that if I have a database in which I store LOBs
which are, for
example, 256K in size, and the connection between master and slave is severed,
I can perform 4 inserts or less before the master gives up.  Is this
so?  If this is in fact
the case, I would like to file a request that this limit be raised
considerably or eliminated
altogether.

I have two servers (master and slave) running 64-bit JVMs, 64GB of memory each,
SSD drives, connected by 10GbE fiber.  I would like to dedicate as
much memory as I
want to deal with a disconnect/resume scenario (to avoid the onerous
failover).  At an
insertion rate of 16 rows per second (~4MB), currently the setup would
tolerate a
connection interruption of a fraction of a second.  A 1GB buffer would
afford a connection
interruption of ~250 seconds (for example, rebooting the fiber switch).
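
For reference, a sketch of how the property is set today -- assuming, as
with most Derby properties, that a system property or a derby.properties
entry works; the value shown is the documented maximum:

    // set before the master database boots
    System.setProperty("derby.replication.logBufferSize", "1048576");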

Last question, aimed at devs, why does Derby even bother to buffer
logs in memory?
Can't it just keep an offset/marker into the transaction log files,
and replay transactions
from there, rather than buffering them in memory?

Regards,
Brett


Re: SYSCS_UTIL.SYSCS_BACKUP_DATABASE failing-urgent

2011-06-07 Thread Brett Wooldridge
Do you have a thread dump during the hang?  What
OS are you running on?

Brett

Sent from my iPhone

On Jun 7, 2011, at 16:38, Vijender Devakari 
wrote:

Hi,

Can you respond to this as this is very urgent.

Best Regards,

Vijender D,
Deutsche Bank
Ph: +65 6423 8530
Mobile: +65 94517994


This mail is transmitted to you on behalf of [HCL].
Diese Post wird Ihnen im Namen der [HCL] ubermittelt
*

Vijender Devakari/ext/dbcom    06/03/2011 04:28 PM

To: derby-user@db.apache.org
Subject: SYSCS_UTIL.SYSCS_BACKUP_DATABASE failing

Hi Team,

we are using the proc below for backing up the database, but the call hangs
when the database size is more like 3GB, 4GB, ...

Can you let me know in which cases we can use this, and also what else we
need to use to back up the database in case the size is larger.

CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE(?)

Best Regards,

Vijender D,
Deutsche Bank
Ph: +65 6423 8530
Mobile: +65 94517994


This mail is transmitted to you on behalf of [HCL].
Diese Post wird Ihnen im Namen der [HCL] ubermittelt
*




Re: Corrupted Derby DB

2011-05-26 Thread Brett Wooldridge
Tim,

Just a question, was this 'crash test' a power-off test?  For example,
turning off the computer while Derby is running?

If so, you should probably read this article:

http://www.jasonbrome.com/blog/archives/2004/04/03/writecache_enabled.html

TL;DR: anyone who expects complete durability (non-corruption even during
power loss) should disable the write cache on their hard disk.  There is no
doubt this will cut write performance quite a bit.  Applications needing
both complete durability and high performance should consider a controller
with a battery-backed write cache.
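
On Linux, for example, the disk write cache can typically be turned off
with hdparm (a sketch; the device name is illustrative):

    hdparm -W 0 /dev/sda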

Brett


Strange collation

2011-04-12 Thread Brett Wooldridge
I have perhaps a misunderstanding/mis-expectation of a database created with
collation TERRITORY_BASED:PRIMARY.

From the manual:

"Territory based with collation strength PRIMARY. Specify this value to
make Derby behave similarly to many other databases, for which PRIMARY is
commonly the default. PRIMARY typically means that only differences in base
letters are considered significant, whereas differences in accents or case
are not considered significant."

So, I was _hoping_ that indeed Derby would behave similarly to many other
databases, but ...

Assume I create a database as follows:

   jdbc:derby:test;create=true;collation=TERRITORY_BASED:PRIMARY

And create a table as follows:

   CREATE TABLE foo (
      id INTEGER GENERATED BY DEFAULT AS IDENTITY,
      column1 VARCHAR(255)
   )

And insert a test row...

   INSERT INTO foo (column1) VALUES ('test')

A "standard" query works as expected ...

   SELECT * FROM foo WHERE column1 LIKE 'TE%'

... does indeed perform a case-insensitive LIKE as desired, and returns
this row:

   ID   COLUMN1
   ------------
   1    test

So far, so good.  Now this query...

   SELECT * FROM foo WHERE id = '1'   -- note the quoted '1' rather than a raw 1

... returns this error:

   Error: Comparisons between 'INTEGER' and 'CHAR (UCS_BASIC)' are not
   supported. Types must be comparable. String types must also have matching
   collation. If collation does not match, a possible solution is to cast
   operands to force them to the default collation (e.g. SELECT tablename FROM
   sys.systables WHERE CAST(tablename AS VARCHAR(128)) = 'T1')

   SQLState:  42818
   ErrorCode: -1


Unfortunately, this is VERY unlike other databases.  This is a trivial query
which I would expect to work.  If Derby knows the collation of the database,
why are casts not performed automatically?  I actually encountered this
error first with this query to list all tables:


   SELECT T2.* FROM SYS.SYSTABLES systabs, TABLE
   (SYSCS_DIAG.SPACE_TABLE(systabs.tablename)) AS T2
   WHERE systabs.tabletype = 'T'
   ORDER BY isindex, conglomeratename


Which returns a similar error whereby the only solution is to
CAST(systabs.tabletype AS CHAR).
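
Likewise, the trivial query above only runs once it is rewritten the way
the error message suggests (a sketch of the workaround, not a fix):

   SELECT * FROM foo WHERE id = CAST('1' AS INTEGER)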


My company is, or rather was, in the process of switching our product from
Derby with the default (UCS_BASIC) collation to TERRITORY_BASED:PRIMARY in
order to solve performance issues related to the inability to perform
case-insensitive queries (because using a function like LCASE() will not
use an index).


Unfortunately, given the size of the code-base (several hundred thousand
lines), the fact that a myriad of these CAST-related errors could be hiding
in nooks and crannies is forcing us to reconsider.  Even if we _could_ find
and correct all queries where something like an INTEGER is being compared
to a string, having to put CAST everywhere we might interact with system
tables seems gross.


Is there any solution to this?  Any connection mode, startup parameter, etc.
that we're missing?  Is there any reason Derby shouldn't perform these CASTs
automatically?  No other database I've used (Oracle, SQL Server, MySQL,
PostgreSQL, ...) seems to have this problem.


Thanks,

Brett Wooldridge


Re: BLOB streaming

2011-02-18 Thread Brett Wooldridge
The question is, is it still fully materialized on the server before
streaming to the client?

Brett

Sent from my iPhone

On Feb 18, 2011, at 22:11, Knut Anders Hatlen  wrote:

> Brett Wooldridge  writes:
>
>> Hi all,
>>
>>
>> I just came across this in the manual, and if accurate raises some
>> concerns for me:
>>
>> For applications using the client driver, if the stream is stored in a
>> column of a type other than LONG VARCHAR or LONG VARCHAR FOR BIT DATA,
>> the entire stream must be able to fit into memory at one time. Streams
>> stored in LONG VARCHAR and LONG VARCHAR FOR BIT DATA columns do not
>> have this limitation.
>>
>> This seems to imply that if I have a BLOB containing 1GB of data, and
>> I'm using the client driver, the result cannot be streamed?
>> Can this possibly be correct?  Given the apparent limit of VARCHAR of
>> ~32K, is there no way to stream large data to a client?
>
> A lot of work went into Derby 10.2 and Derby 10.3 to avoid the need to
> materialize LOBs on the client, so I believe this statement isn't true
> anymore. I've filed https://issues.apache.org/jira/browse/DERBY-5056 to
> update the manual.
>
> Thanks,
>
> --
> Knut Anders


BLOB streaming

2011-02-17 Thread Brett Wooldridge
Hi all,

I just came across this in the manual, and if accurate it raises some
concerns for me:

For applications using the client driver, if the stream is stored in a
column of a type other than LONG VARCHAR or LONG VARCHAR FOR BIT DATA, the
entire stream must be able to fit into memory at one time. Streams stored in
LONG VARCHAR and LONG VARCHAR FOR BIT DATA columns do not have this
limitation.

This seems to imply that if I have a BLOB containing 1GB of data, and I'm
using the client driver, the result cannot be streamed?
Can this possibly be correct?  Given the apparent limit of VARCHAR of ~32K,
is there no way to stream large data to a client?
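
For clarity, the access pattern I'm asking about is the standard JDBC one
(a minimal sketch; the table and column names are illustrative):

    // read the BLOB as a java.io.InputStream instead of materializing it
    ResultSet rs = stmt.executeQuery("SELECT file_blob FROM file_store");
    while (rs.next()) {
        InputStream in = rs.getBinaryStream(1);
        try {
            // copy 'in' to its destination one buffer at a time
        } finally {
            in.close();
        }
    }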

Brett


Re: Updatable ResultSet and boolean

2011-02-14 Thread Brett Wooldridge
Created:

https://issues.apache.org/jira/browse/DERBY-5042

On Tue, Feb 15, 2011 at 11:51 AM, Bryan Pendleton <
bpendleton.de...@gmail.com> wrote:

> java.sql.SQLException: An attempt was made to put a data value of type
>> 'byte' into a data value of type 'BOOLEAN'.
>>
>> Is it a bug, or am I doing something obviously wrong?
>>
>
> Looks likely to be a bug to me. Boolean is a brand new type,
> and undoubtedly still has some rough edges.
>
> Can you log a full JIRA issue, with supporting code, etc?
>
> http://db.apache.org/derby/DerbyBugGuidelines.html
>
> thanks,
>
> bryan
>


Updatable ResultSet and boolean

2011-02-14 Thread Brett Wooldridge
I am encountering an error trying to update a BOOLEAN type through an
updatable ResultSet.  My table looks like this:

   CREATE TABLE file_store (
      path VARCHAR(255) NOT NULL,
      network VARCHAR(32) NOT NULL,
      file_blob BLOB,
      file_md5 VARCHAR(32),
      is_directory BOOLEAN NOT NULL DEFAULT false,
      PRIMARY KEY (path, network)
   );

My SQL looks like this:

   SELECT path, network, is_directory FROM file_store
   WHERE path=? AND network=? AND is_directory=true
   FOR UPDATE OF path, network, is_directory

My Java code looks like this:

   ...
   resultSet.moveToInsertRow();
   resultSet.updateString("path", dirPath);
   resultSet.updateString("network", network);
   resultSet.updateBoolean("is_directory", true);
   resultSet.insertRow();

An exception is thrown in the call to updateBoolean() as follows:

   java.sql.SQLException: An attempt was made to put a data value of type
   'byte' into a data value of type 'BOOLEAN'.


Is it a bug, or am I doing something obviously wrong?


Brett


Update sequence

2011-01-30 Thread Brett Wooldridge
We are currently using Hibernate with a table managed sequence scheme
(javax.persistence.GenerationType.TABLE), but now that Derby supports
Sequences we are considering migrating to real sequences
(javax.persistence.GenerationType.SEQUENCE).

One question we have is, is it possible to ALTER a sequence?  From
what I've seen, the answer is currently no, but I thought I would ask
here just in case.

Thanks,
Brett


Re: can't delete my DB directory

2010-06-10 Thread Brett Wooldridge
On Windows the Process Explorer tool from Microsoft can be used to determine
which file handles are open, and what process is holding them:

http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx

Brett


On Thu, Jun 10, 2010 at 6:24 PM, Kristian Waagan wrote:

> On 09.06.10 03:32, oldmhe wrote:
>
>>
>> The OS is Windows XP.
>>
>> Since I'm using the embedded driver, I don't think it's possible to check
>> what process is hanging on to the file.  I.e., I believe it's the same
>> process as my program.  When my program exits, I'm able to delete the
>> directory manually.
>>
>
> Hi,
>
> It would still be useful to see which files the process has open handles
> to. If there is no tool readily available for doing this, a crude way would
> be to run the partly successful delete and then just post a listing of the
> files that are left in the database directories.
>
>
>
>> Regarding your last question, my program does shutdown explicitly (as
>> shown
>> in my original post).
>>
>
> Yes, you are shutting down the Derby engine, but not the specific database
> explicitly.
> An engine shutdown should shut down all booted databases, but it would be
> nice to rule out a bug in this area.
>
>
> Regards,
> --
> Kristian
>
>
>> Since posting, I found a solution, but I don't understand why it works.
>>
>> Below is a simplification of what the program does:
>>
>> 1. load the embedded driver
>> set AutoCommit to false
>> connect to the DB engine, and create a DB
>> create tables
>> load the tables with data, and commit all work
>>
>> 2. using SELECT, read some data records, and create an output file.
>>
>> 3. shutdown the DB engine
>>
>> 4. try to delete the DB directory (and all files and subdirectories)
>> exit
>>
>> With regard to my initial post, Step 4 fails to delete all the files and
>> directories (it's able to delete most of them).
>>
>> However:
>>
>> 1.  If I omit Step 2 (the reading of the DB), Step 4 succeeds.
>>
>> 2.  Or, if I add "xxx.commit()" between Step 2 and Step 3, then Step 4
>> succeeds.
>>
>> It seems that a commit() is needed even though Step 2 makes no change to
>> the
>> DB.  This is contrary to my expectations for two reasons:
>>
>> a)  Since Step 2 is a read-only operation, I don't see why commit() is
>> needed.
>>
>> b)  Even if a commit() is needed, the shutdown should release all DB
>> resources (and not hang on to any files).
>>
>> Any thoughts?
>>
>>
>> Kristian Waagan-4 wrote:
>>
>>>
>>> Hello,
>>>
>>> What operating system are you using?
>>> Are you able to use the operation system's proper tool to check which
>>> process (if any) is hanging on to the file?
>>> (i.e. pfiles or lsof)
>>>
>>> Also, do you see the same behavior if you in addition shut down the
>>> database explicitly?
>>> (i.e. 'DriverManager.getConnection("jdbc:derby:myDB;shutdown=true");')
>>>
>>>
>>> Regards,
>>> --
>>> Kristian
>>>
>>>


>>>
>>>
>>>
>>
>


Re: can't delete my DB directory

2010-06-10 Thread Brett Wooldridge
AFAIK, when auto-commit is off, transaction demarcation is left (almost)
entirely to the user.  You should commit() the connection, or close() any
Statements/ResultSets.  What a database does to handle "unfinished"
transactions (even read-only ones) at shutdown is likely highly DB-specific
and not part of the spec.  While I think Derby should not hang or leave open
resources after shutdown, I wouldn't count on it.

In the same way that an OS will usually close all open file handles owned by
a process when that process exits, it is better if your program cleans up
after itself and closes resources explicitly rather than relying on some
implicit behavior.

In that vein, given that you have turned auto-commit off, the last thing
your program should do before shutdown is commit() or rollback().  In
auto-commit mode, the end of one transaction implies the beginning of the
next.  So if you've run *any* SQL on a connection, that connection should be
rolled back or committed at some point.
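
A minimal sketch of the pattern (assuming the embedded driver with
auto-commit off; the SQLException on engine shutdown is the expected one):

    Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
    conn.setAutoCommit(false);
    try {
        // ... create tables, load data, run SELECTs ...
        conn.commit();   // or rollback() -- either one ends the transaction
    } finally {
        conn.close();
        try {
            DriverManager.getConnection("jdbc:derby:;shutdown=true");
        } catch (SQLException expected) {
            // a clean engine shutdown always signals via an SQLException
        }
    }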

-Brett

On Thu, Jun 10, 2010 at 3:48 PM, oldmhe  wrote:

>
> Thanks for the info.
>
> Though this is my first Java DB program, I've written many C programs
> using ESQL, and have never needed to issue a commit after a read or query
> operation.
>
> Is this "commit after query" requirement an SQL concept, or is it just
> related to the Java JDBC API?
>
>
> Brett Wooldridge-2 wrote:
> >
> > Even read operations create transactions (and locks).  Because you set
> > autocommit to false, you must manually commit.
> >
> > Having said that, I would expect shutdown to automatically rollback
> > any open transactions at the time of shutdown.
> >
> > Brett
> >
> > Sent from my iPhone
> >
> > On Jun 9, 2010, at 10:32, oldmhe  wrote:
> >
> >>
> >> The OS is Windows XP.
> >>
> >> Since I'm using the embedded driver, I don't think it's possible to
> >> check
> >> what process is hanging on to the file.  I.e., I believe it's the same
> >> process as my program.  When my program exits, I'm able to delete the
> >> directory manually.
> >>
> >> Regarding your last question, my program does shutdown explicitly
> >> (as shown
> >> in my original post).
> >>
> >> Since posting, I found a solution, but I don't understand why it
> >> works.
> >>
> >> Below is a simplification of what the program does:
> >>
> >> 1. load the embedded driver
> >>set AutoCommit to false
> >>connect to the DB engine, and create a DB
> >>create tables
> >>load the tables with data, and commit all work
> >>
> >> 2. using SELECT, read some data records, and create an output
> >> file.
> >>
> >> 3. shutdown the DB engine
> >>
> >> 4. try to delete the DB directory (and all files and
> >> subdirectories)
> >>exit
> >>
> >> With regard to my initial post, Step 4 fails to delete all the files
> >> and
> >> directories (it's able to delete most of them).
> >>
> >> However:
> >>
> >> 1.  If I omit Step 2 (the reading of the DB), Step 4 succeeds.
> >>
> >> 2.  Or, if I add "xxx.commit()" between Step 2 and Step 3, then Step 4
> >> succeeds.
> >>
> >> It seems that a commit() is needed even though Step 2 makes no
> >> change to the
> >> DB.  This is contrary to my expectations for two reasons:
> >>
> >> a)  Since Step 2 is a read-only operation, I don't see why commit() is
> >> needed.
> >>
> >> b)  Even if a commit() is needed, the shutdown should release all DB
> >> resources (and not hang on to any files).
> >>
> >> Any thoughts?
> >>
> >>
> >> Kristian Waagan-4 wrote:
> >>>
> >>> Hello,
> >>>
> >>> What operating system are you using?
> >>> Are you able to use the operation system's proper tool to check which
> >>> process (if any) is hanging on to the file?
> >>> (i.e. pfiles or lsof)
> >>>
> >>> Also, do you see the same behavior if you in addition shut down the
> >>> database explicitly?
> >>> (i.e. 'DriverManager.getConnection
> >>> ("jdbc:derby:myDB;shutdown=true");')
> >>>
> >>>
> >>> Regards,
> >>> --
> >>> Kristian
> >>>
> >>>>
> >>>>
> >>>
> >>>
> >>>
> >>
> >> --
> >> View this message in context:
> >>
> http://old.nabble.com/can%27t-delete-my-DB-directory-tp28782490p28825037.html
> >> Sent from the Apache Derby Users mailing list archive at Nabble.com.
> >>
> >
> >
>
> --
> View this message in context:
> http://old.nabble.com/can%27t-delete-my-DB-directory-tp28782490p28839396.html
> Sent from the Apache Derby Users mailing list archive at Nabble.com.
>
>


Re: can't delete my DB directory

2010-06-08 Thread Brett Wooldridge
Even read operations create transactions (and locks).  Because you set
autocommit to false, you must manually commit.

Having said that, I would expect shutdown to automatically rollback
any open transactions at the time of shutdown.

Brett

Sent from my iPhone

On Jun 9, 2010, at 10:32, oldmhe  wrote:

>
> The OS is Windows XP.
>
> Since I'm using the embedded driver, I don't think it's possible to
> check
> what process is hanging on to the file.  I.e., I believe it's the same
> process as my program.  When my program exits, I'm able to delete the
> directory manually.
>
> Regarding your last question, my program does shutdown explicitly
> (as shown
> in my original post).
>
> Since posting, I found a solution, but I don't understand why it
> works.
>
> Below is a simplification of what the program does:
>
> 1. load the embedded driver
>set AutoCommit to false
>connect to the DB engine, and create a DB
>create tables
>load the tables with data, and commit all work
>
> 2. using SELECT, read some data records, and create an output
> file.
>
> 3. shutdown the DB engine
>
> 4. try to delete the DB directory (and all files and
> subdirectories)
>exit
>
> With regard to my initial post, Step 4 fails to delete all the files
> and
> directories (it's able to delete most of them).
>
> However:
>
> 1.  If I omit Step 2 (the reading of the DB), Step 4 succeeds.
>
> 2.  Or, if I add "xxx.commit()" between Step 2 and Step 3, then Step 4
> succeeds.
>
> It seems that a commit() is needed even though Step 2 makes no
> change to the
> DB.  This is contrary to my expectations for two reasons:
>
> a)  Since Step 2 is a read-only operation, I don't see why commit() is
> needed.
>
> b)  Even if a commit() is needed, the shutdown should release all DB
> resources (and not hang on to any files).
>
> Any thoughts?
>
>
> Kristian Waagan-4 wrote:
>>
>> Hello,
>>
>> What operating system are you using?
>> Are you able to use the operation system's proper tool to check which
>> process (if any) is hanging on to the file?
>> (i.e. pfiles or lsof)
>>
>> Also, do you see the same behavior if you in addition shut down the
>> database explicitly?
>> (i.e. 'DriverManager.getConnection
>> ("jdbc:derby:myDB;shutdown=true");')
>>
>>
>> Regards,
>> --
>> Kristian
>>
>>>
>>>
>>
>>
>>
>
> --
> View this message in context: 
> http://old.nabble.com/can%27t-delete-my-DB-directory-tp28782490p28825037.html
> Sent from the Apache Derby Users mailing list archive at Nabble.com.
>


Re: Problem with Column Names

2010-06-07 Thread Brett Wooldridge
It is not valid in SQL to have a column name (or other identifier) that
begins with a numeric character.  You're going to have to code specifically
for this case.

Brett

On Mon, Jun 7, 2010 at 10:29 PM, pb2208  wrote:

>
> I am defining a DB where I will need to import numerous data files.  These
> data files may come from Excel, CSV files, and tab-delimited files.  All of
> these files use alphanumeric characters (A-Za-z0-9) in their column names.
> Some columns may contain only the (0-9) characters.
>
> I have been trying to create a table using the numeric characters.
>  However,
> I am always getting exception errors pointing to the column name.  The
> command I am using is:
>
> Create Table mytab (393 decimal(3));
>
> I have also tried changing the 393 to encase it in single and double
> quotes.  This has no effect.
>
> How can I define and reference a column using numeric characters?
>
>
> --
> View this message in context:
> http://old.nabble.com/Problem-with-Column-Names-tp28805238p28805238.html
> Sent from the Apache Derby Users mailing list archive at Nabble.com.
>
>


Re: Internal deadlock?

2010-06-03 Thread Brett Wooldridge
Indeed, setting derby.language.statementCacheSize=0 appears to work around
the problem (at a fairly high performance cost).
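
For anyone else hitting this, the work-around needs no change on the query
path; a sketch (a derby.properties entry works as well):

    // must be set before the embedded engine boots
    System.setProperty("derby.language.statementCacheSize", "0");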

It seems this bug (4279) was logged a year ago; any movement on a fix?  The
code I'm running isn't exactly rocket science -- select an object via
Hibernate, then delete it -- and the resulting SQL to the DB is also
trivial.

I've certainly thrown my vote in for the fix.

Anyway, thanks for the heads-up; at least I can eliminate this error in the
field without a code change.

Brett

On Thu, Jun 3, 2010 at 7:02 PM, Knut Anders Hatlen wrote:

> On 06/ 3/10 10:55 AM, Brett Wooldridge wrote:
> > I'm encountering a lock-up in Derby when I put some load (minor) on
> > our server.  However, I'm not sure if it's a DB deadlock or if it's a
> > thread-lock in Derby's internals.
>
> Since the thread dump indicates that there are threads waiting while
> compiling statements, you may have come across this issue:
> https://issues.apache.org/jira/browse/DERBY-4279 (Statement cache
> deadlock)
>
> --
> Knut Anders
>
>


Re: Internal deadlock?

2010-06-03 Thread Brett Wooldridge
A minor update to my previous post.  Turning the PreparedStatement cache off
DOES NOT eliminate the error.  Further testing still results in the failure.
However, this time I have an exception trace:

18:13:09,257 [JDBCExceptionReporter] [Jetty-1  ] WARN  -
SQL Error: 2, SQLState: XJ208
18:13:09,257 [JDBCExceptionReporter] [Jetty-1  ] ERROR -
Non-atomic batch failure.  The batch was submitted, but at least one
exception occurred on an individual member of the batch.
18:13:09,257 [JDBCExceptionReporter] [Jetty-1  ] WARN  -
SQL Error: -1, SQLState: 40XL1
18:13:09,257 [JDBCExceptionReporter] [Jetty-1  ] ERROR -
Error for batch element #0: DERBY SQL error: SQLCODE: -1, SQLSTATE: 40XL1,
SQLERRMC: 40XL1
18:13:09,257 [ractFlushingEventListener] [Jetty-1  ] ERROR -
Could not synchronize database state with session
org.hibernate.exception.GenericJDBCException: Could not execute JDBC batch
update
at
org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:140)
at
org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:128)
at
org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:184)
at
org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at
org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:375)
at
org.hibernate.transaction.CacheSynchronization.beforeCompletion(CacheSynchronization.java:88)
at
bitronix.tm.BitronixTransaction.fireBeforeCompletionEvent(BitronixTransaction.java:398)
at bitronix.tm.BitronixTransaction.commit(BitronixTransaction.java:143)
at
bitronix.tm.BitronixTransactionManager.commit(BitronixTransactionManager.java:103)
at org.ziptie.zap.jta.TransactionElf.commit(TransactionElf.java:68)
at
org.ziptie.server.web.ZTransactionFilter.doFilter(ZTransactionFilter.java:86)
at
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
at
org.ziptie.server.security.ZSecurityFilter.doFilter(ZSecurityFilter.java:60)
at
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
at
org.ziptie.zap.metro.ZThreadContextFilter.doFilter(ZThreadContextFilter.java:49)
at
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
at
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
at org.ziptie.zap.web.internal.ZContext.handle(ZContext.java:148)
at org.ziptie.zap.web.ZSessionHandler.handle(ZSessionHandler.java:108)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
at
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at
org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at
org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:938)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
at
org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:451)
Caused by: org.apache.derby.client.am.BatchUpdateException: Non-atomic batch
failure.  The batch was submitted, but at least one exception occurred on an
individual member of the batch.
at org.apache.derby.client.am.Agent.endBatchedReadChain(Unknown Source)
at org.apache.derby.client.am.PreparedStatement.executeBatchRequestX(Unknown
Source)
at org.apache.derby.client.am.PreparedStatement.executeBatchX(Unknown
Source)
at org.apache.derby.client.am.PreparedStatement.executeBatch(Unknown Source)
at sun.reflect.GeneratedMethodAccessor76.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at
bitronix.tm.resource.jdbc.BaseProxyHandlerClass.invoke(BaseProxyHandlerClass.java:41)
at $Proxy47.executeBatch(Unknown Source)
at
org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.j

Re: Hibernate 3 Outer Join Problem

2010-05-31 Thread Brett Wooldridge
Hi anyz,

It seems to me that it is just luck that the MySQL dialect generated a
usable query.  These are trivial tables (and I presume classes), but the
mapping seems incorrect to me.  Without an explicitly expressed relation
between Project and Program, it seems unlikely that Hibernate can reliably
process HQL (i.e., generate valid SQL).

I think programId in the Project mapping should be of type a.b.model.Program,
not java.lang.Integer, and the relationship expressed as a join in the
mapping.
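
With such an association mapped, the HQL might look something like this (a
sketch; the association name 'program' is illustrative):

    // a sketch using the Hibernate 3 Session API
    List<?> rows = session.createQuery(
            "select proj, prog from Project proj "
            + "left join proj.program prog "
            + "where proj.projectName = :name")
        .setParameter("name", "MyProject")
        .list();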

Anyway, I would suggest reading up on Hibernate mappings, and if you still
have trouble, ask over on the Hibernate forums. I am 99% sure there is
nothing wrong with the Derby dialect in this case.

Good luck.

Brett

Sent from my iPhone

On Jun 1, 2010, at 14:07, anyz  wrote:

Thanks Brett for your time. Yes, it is more of a Hibernate issue; I was just
trying to find out whether other Derby users had encountered this.
Here are tables structure and mapping.


CREATE TABLE PROJECT (
 PROJECT_ID VARCHAR(50) NOT NULL,
 PROJECT_NAME VARCHAR(150) NOT NULL,
 PROJECT_MANAGER VARCHAR(150) NOT NULL,
 START_DATE TIMESTAMP NOT NULL,
 END_DATE TIMESTAMP NOT NULL,
 PROGRAM_ID INTEGER,
 PRIMARY KEY (PROJECT_ID)
);
ALTER TABLE PROJECT
 ADD FOREIGN KEY (PROGRAM_ID)
 REFERENCES PROGRAM (PROGRAM_ID);


CREATE TABLE PROGRAM (
 PROGRAM_ID INTEGER GENERATED BY DEFAULT AS IDENTITY (START WITH 1, INCREMENT BY 1) NOT NULL,
 PROGRAM_NAME VARCHAR(150) NOT NULL,
 PROGRAM_MANAGER VARCHAR(150) NOT NULL,
 PRIMARY KEY (PROGRAM_ID)
);

[The two Hibernate mapping files were stripped by the mail archive; only
their hibernate-mapping-3.0.dtd declarations survived.]
Thanks once again.

On Mon, May 31, 2010 at 7:55 PM, Brett Wooldridge <
brett.wooldri...@gmail.com> wrote:

> When you say "Hibernate 3", what specific version are you talking about?
>  While this is more of a Hibernate question than a Derby question, what is
> the structure of the two tables (for Project and Program) and what does the
> mapping look like?
>
> Brett
>
>
> On Mon, May 31, 2010 at 10:52 PM, anyz  wrote:
>
>> It seems to be a Hibernate issue. The dialect in Hibernate 3 is not up to
>> date, I think.
>>
>> We tried it as below:
>>
>> SELECT proj, prog, FROM Project proj, Program prog
>>
>> but it didn't work.
>>
>>
>>   On Mon, May 31, 2010 at 6:19 PM, Donald McLean wrote:
>>
>>> Have you tried explicitly specifying the table? i.e. Project.proj
>>>
>>> On Mon, May 31, 2010 at 7:00 AM, anyz  wrote:
>>> > i'm trying to use hibernate 3 with Derby 10.6. Normal queries work good
>>> but
>>> > using left outer join creates problem. The query i'm trying loads
>>> project
>>> > program if it exists and if programm not exists the project information
>>> is
>>> > loaded:
>>> >
>>> > SELECT proj, prog FROM Project
>>> > LEFT OUTER JOIN Program ON Program.programId = Project.programId
>>> > WHERE Project.projectName = '"MyProject"
>>> >
>>> > The error on executing query is :
>>> >
>>> > unexpected token: ON near line 1, column xxx [SELECT proj, prog, FROM
>>> > a.b.Project LEFT OUTER JOIN Program ON Program.programId =
>>> Project.programId
>>> > WHERE Project.projectName = 'MyProject' ]
>>> > at org.hibernate.hql.ast.QuerySyntax
>>> >
>>> > Could you please guid if Hibernate dialect support for Derbby is not
>>> that
>>> > good of i'm missing something.
>>> >
>>> > thanks
>>> >
>>>
>>>
>>>
>>> --
>>> Family photographs are a critical legacy for
>>> ourselves and our descendants. Protect that
>>> legacy with a digital backup and recovery plan.
>>>
>>> Join the photo preservation advocacy Facebook group:
>>> http://www.facebook.com/home.php?ref=logo#/group.php?gid=148274709288
>>>
>>
>>
>


Re: Hibernate 3 Outer Join Problem

2010-05-31 Thread Brett Wooldridge
When you say "Hibernate 3", what specific version are you talking about?
 While this is more of a Hibernate question than a Derby question, what is
the structure of the two tables (for Project and Program) and what does the
mapping look like?

Brett


On Mon, May 31, 2010 at 10:52 PM, anyz  wrote:

> It seems to be a Hibernate issue. The dialect in Hibernate 3 is not up to
> date, I think.
>
> We tried it as below:
>
> SELECT proj, prog, FROM Project proj, Program prog
>
> but it didn't work.
>
>
> On Mon, May 31, 2010 at 6:19 PM, Donald McLean wrote:
>
>> Have you tried explicitly specifying the table? i.e. Project.proj
>>
>> On Mon, May 31, 2010 at 7:00 AM, anyz  wrote:
>> > i'm trying to use hibernate 3 with Derby 10.6. Normal queries work good
>> but
>> > using left outer join creates problem. The query i'm trying loads
>> project
>> > program if it exists and if programm not exists the project information
>> is
>> > loaded:
>> >
>> > SELECT proj, prog FROM Project
>> > LEFT OUTER JOIN Program ON Program.programId = Project.programId
>> > WHERE Project.projectName = '"MyProject"
>> >
>> > The error on executing query is :
>> >
>> > unexpected token: ON near line 1, column xxx [SELECT proj, prog, FROM
>> > a.b.Project LEFT OUTER JOIN Program ON Program.programId =
>> Project.programId
>> > WHERE Project.projectName = 'MyProject' ]
>> > at org.hibernate.hql.ast.QuerySyntax
>> >
>> > Could you please advise whether Hibernate's dialect support for Derby is
>> > not that good, or whether I'm missing something.
>> >
>> > thanks
>> >
>>
>>
>>
>> --
>> Family photographs are a critical legacy for
>> ourselves and our descendants. Protect that
>> legacy with a digital backup and recovery plan.
>>
>> Join the photo preservation advocacy Facebook group:
>> http://www.facebook.com/home.php?ref=logo#/group.php?gid=148274709288
>>
>
>


Re: Hibernate 3 Outer Join Problem

2010-05-31 Thread Brett Wooldridge
This looks like a Hibernate error, not a Derby error.  I've been using
Hibernate 3 with Derby for a few years and never had any issues.  Is this
query generated by Hibernate?  It seems like you are using HQL (the error is
in org.hibernate.hql.ast.QuerySyntax), but you are feeding it SQL syntax.
 HQL syntax is similar but not equal to SQL.
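
For what it's worth, a sketch of the HQL form, assuming the Project entity
has a mapped many-to-one association named 'program' (that association name
is my assumption -- adjust it to your mapping):

    SELECT proj, prog
    FROM Project proj
    LEFT JOIN proj.program prog
    WHERE proj.projectName = 'MyProject'

In HQL you join through the mapped association and Hibernate derives the
join condition from the mapping, so there is no ON clause.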

Brett


On Mon, May 31, 2010 at 8:00 PM, anyz  wrote:

> I'm trying to use Hibernate 3 with Derby 10.6. Normal queries work well, but
> using a left outer join creates a problem. The query I'm trying loads the
> project's program if it exists, and if the program does not exist, the
> project information is loaded:
>
> SELECT proj, prog FROM Project
> LEFT OUTER JOIN Program ON Program.programId = Project.programId
> WHERE Project.projectName = '"MyProject"
>
> The error on executing query is :
>
> unexpected token: ON near line 1, column xxx [SELECT proj, prog, FROM
> a.b.Project LEFT OUTER JOIN Program ON Program.programId = Project.programId
> WHERE Project.projectName = 'MyProject' ]
> at org.hibernate.hql.ast.QuerySyntax
> Could you please advise whether Hibernate's dialect support for Derby is not
> that good, or whether I'm missing something.
>
> thanks
>
>


Re: JPA 2.0 and Derby 10.6.1.1 ERROR XCL13

2010-05-25 Thread Brett Wooldridge
I agree.  This is not a Derby error.  Derby is just throwing the error
because Hibernate, in this case, is trying to get a 3rd parameter when there
are only two in the query.  Check that the (Hibernate) dialect is configured
for Derby, and then ask this over on the Hibernate forums.

You can also enable trace logging for Hibernate (via log4j or whatever
logging framework you use) and get a clue about the query generation from
its perspective.
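
For example (property and class names from memory -- double-check them
against your Hibernate version), the dialect setting in hibernate.cfg.xml
looks like:

    <property name="hibernate.dialect">org.hibernate.dialect.DerbyDialect</property>
    <property name="hibernate.show_sql">true</property>

and SQL trace logging in log4j.properties looks like:

    log4j.logger.org.hibernate.SQL=DEBUG
    log4j.logger.org.hibernate.type=TRACE

The org.hibernate.type category also logs the bound parameter values, which
should show what Hibernate thinks the third parameter is.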

Regards,
Brett

On Wed, May 26, 2010 at 1:50 PM, Bryan Pendleton  wrote:

> This part:
>
>
> > where product0_.lid=? and product0_.userVersion=? fetch first 2 rows
>
> seems to match this part:
>
>
> > cq.where(cb.and(cb.equal(lidAttr, lid), cb.equal(userVersionAttr,
> userVersion)));
>
> pretty closely, so it's hard to imagine why your O/R tool thought it
> needed to set 3 parameters in the query.
>
> Unless it somehow thought it was trying to set the "first 2 rows" part?
> That is,
> did it think it had prepared "fetch first ? rows" and was trying to set the
> 2?
>
> You may need to ask in a forum that covers your O/R tool, to get some
> suggestions about how to understand its query generation behaviors.
> Did you say you were using Hibernate?
>
> Regarding whether Derby is processing the statement twice, I think that
> the log is just a bit hard to read: the statement is listed twice,
> but once is a "begin compiling" trace, and once is an "end compiling"
> trace.
>
> So, I'm afraid I don't have any breakthrough answers for you, but hopefully
> you're armed with more information to ask a more specific question to the
> providers of the O/R tool.
>
> thanks,
>
> bryan
>


Re: Drop "GENERATED BY DEFAULT"

2010-05-19 Thread Brett Wooldridge
Thanks, I'll give it a try.

Brett

Sent from my iPhone

On May 19, 2010, at 16:11, Knut Anders Hatlen 
wrote:

> On 05/19/10 02:32 AM, Brett Wooldridge wrote:
>> Hi All,
>>
>> I have a table something like this:
>>
>> CREATE TABLE test (
>>   id BIGINT GENERATED BY DEFAULT AS IDENTITY NOT NULL,
>>   ...
>>   PRIMARY KEY (id)
>> );
>>
>> I wish to DROP the "GENERATED BY DEFAULT AS IDENTITY" from column
>> 'id'.
>> Looking at the various ALTER TABLE forms, I cannot seem to find a way
>> to do it.
>> There are a lot of other tables with foreign key references to this
>> column, so
>> dropping and recreating the column doesn't seem viable.  Is there a
>> supported way
>> to do it, or even an unsupported way by manipulating system tables?
>
> Hi Brett,
>
> I don't think it is documented anywhere, but setting the default value
> of a column clears its auto-increment property. So this should do
> the trick:
>
>ALTER TABLE test ALTER COLUMN id SET DEFAULT NULL
>
> --
> Knut Anders
>


Drop "GENERATED BY DEFAULT"

2010-05-18 Thread Brett Wooldridge
Hi All,

I have a table something like this:

CREATE TABLE test (
   id BIGINT GENERATED BY DEFAULT AS IDENTITY NOT NULL,
   ...
   PRIMARY KEY (id)
);

I wish to DROP the "GENERATED BY DEFAULT AS IDENTITY" from column 'id'.
Looking at the various ALTER TABLE forms, I cannot seem to find a way to do
it.
There are a lot of other tables with foreign key references to this column,
so
dropping and recreating the column doesn't seem viable.  Is there a
supported way
to do it, or even an unsupported way by manipulating system tables?

Thanks,
Brett


Re: Intermittent "Connection closed by unknown interrupt." errors

2010-05-05 Thread Brett Wooldridge
Ibrahim,

Who owns the thread in question?  Is it created by a web-container?  An
application container?  You?

If it is created by a web-container or app-container, there may be some kind
of lifecycle management going on where the container interrupts a thread
that is running too long (from its perspective).

-Brett


On Tue, May 4, 2010 at 4:48 PM, Ibrahim Hasbini  wrote:

> Hello,
>
> There's no calls to Thread.interrupt() in my application code. I'm
> still unable to explain from where the interrupt is being generated.
>
> Thanks,
> Ibrahim
>
> >> I've been experiencing intermittent errors in derby.log that look like
> this:
>
> >> Caused by: java.lang.InterruptedException
> >>  at java.lang.Object.wait(Native Method)
> >>  at java.lang.Object.wait(Object.java:485)
> >>  at org.apache.derby.impl.store.raw.log.LogToFile.flush(Unknown
> Source)
> >>  at org.apache.derby.impl.store.raw.log.LogToFile.flush(Unknown
> Source)
> >
> >Do you call Thread.interrupt() anywhere in your application?
> >http://java.sun.com/j2se/1.4.2/docs/api/java/lang/Thread.html#interrupt()
> >
> >This API is, unfortunately, unsafe to call in a Derby-based application,
> as the
> >Derby code does not protect itself from these interrupt() calls.
> >
> >thanks,
> >
> >bryan
>


Re: performance: Derby vs H2

2010-04-22 Thread Brett Wooldridge
Speed is important, but so is stability.  H2 does not have anywhere
near the track record of Derby. There are hundreds of thousands of
Derby installations running 24/7.  I have used Derby for years, as
have my customers, in mission critical applications and I have never
experienced a corruption or failure.

Unless you are doing something truly advanced Derby is fast enough.
For those other cases, H2 would also be unsuitable.  You would need
something like PostgreSQL or Oracle.

Brett

Sent from my iPhone

On Apr 23, 2010, at 4:44, Rami Ojares 
wrote:

> I think the best authority on this is Thomas Mueller (father of h2)
> who has actually done some performance comparisons between databases.
> I remember reading some debate about this on some discussion groups
> a year or two ago.
> If you want peer reviewed scientific knowledge then you probably
> should run those tests yourself.
>
> I have seen sometimes Sun employees referring on this list to some
> sort of benchmarks for derby.
> But I have not found anything public from google.
>
> I think the countless benchmarks published by different database
> companies have proven that
> performance is such a complex issue that any type of result can be
> manufactured.
> However when done well and extensively they can give some overall
> understanding.
>
> Because H2 has only one developer the code is a bit more
> straightforward and simpler than Derby that has a rich and long
> history as a codebase.
> Therefore I think that H2 is a tad cleaner and faster than Derby in
> basic operation.
> Both databases have queries that are very slow.
> One could call them bugs.
>
> - rami
>
> On 22.4.2010 19:47, Rayson Ho wrote:
>> Is it really true that H2 is faster than Derby??
>>
>> http://en.wikipedia.org/wiki/Apache_Derby
>>
>> A year ago, I tried to remove the section that says that H2 is
>> faster,
>> but someone always added it back into the article. And besides me,
>> seems like no one really care about the "Comparison to other embedded
>> SQL Java databases" section.
>>
>> http://en.wikipedia.org/wiki/Talk:Apache_Derby#Benchmarks
>>
>> Is it a well-known fact that H2 is always faster??
>>
>> And there is also H2's benchmark page:
>>
>> http://www.h2database.com/html/performance.html
>>
>> Is it a fair comparison??
>>
>> Rayson
>>
>


Re: Question about replication

2010-04-19 Thread Brett Wooldridge
Jonathan,

Could you open an enhancement for this feature?  MySQL's driver has such an
option (among many others that would be nice), so I think it makes sense as
a feature.
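
For reference, the Connector/J URL form I had in mind is roughly this (from
memory -- check the Connector/J docs for the exact property names):

    jdbc:mysql://master-host:3306,slave-host:3306/mydb?failOverReadOnly=true

Something similar for the Derby client, trying hosts in order, would cover
the common case.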

Brett

On Tue, Apr 20, 2010 at 8:34 AM, Dag H. Wanvik  wrote:

> "Fisher, Jonathan"  writes:
>
> > Is there something I can specify in my jdbc url to try the master first,
> > and if unsuccessful, try the slave?
>
> I don't believe there is a transparent failover mechanism in Derby
> yet. You would need to let your client try the slave if the master
> does not respond, as far as I know.
>
> Thanks,
> Dag
>
> > Something like:
> > jdbc:derby://localhost:1527,localhost:1088/jmxdb
>


Re: SQL performance with complex query

2010-04-17 Thread Brett Wooldridge
Please file a JIRA issue.  I don't think preparing a query plan should
consume that much memory, or generate that much garbage.

Brett

Sent from my iPhone

On Apr 18, 2010, at 1:41, Chris Wilson  wrote:

> Hi Bryan and all,
>
> On Sat, 17 Apr 2010, Bryan Pendleton wrote:
>
>>> I think this could be a case where the query optimizer could use
>>> some work?
>>> Is it useful for me to file a bug report somewhere, e.g. in JIRA?
>>
>> Yes, it would be particularly helpful if you could provide a complete
>> standalone test case which demonstrates the problem.
>
> I believe I have done so; the link that I provided in my email to the
> sources of the test case from Subversion should run on any Unix system
> with Bash, Sun Java and the Derby libraries available. It has four
> source
> files (a shell script and three SQL scripts) and a copy of the Derby
> database to save you developers the time of reloading the fixture data
> used in the test.
>
>> If that's not possible, then perhaps you could gather query plan
>> output
>> and post that; perhaps the Derby optimizer is choosing a particularly
>> poor query plan.
>
> OK, this is interesting. With the default JVM settings, it crashes
> with
> OOM before outputting the query plan. However, with -Xmx1024m, it
> finishes in reasonable time, and does output the plan, which I've
> attached.
>
> I guess that means that the optimiser is just taking a lot of memory
> to
> optimise the query, and it spends forever in GC before finally
> hitting OOM
> and giving up when using the default settings? Does this bear looking
> into? Should I file a JIRA issue?
>
> Also the recommended page at
> (http://wiki.apache.org/db-derby/PerformanceDiagnosisTips) contains a
> broken link to "Working with Derby properties"
> (http://db.apache.org/derby/docs/dev/tuning/ctunsetprop34818.html)
> which
> is currently 404.
>
> Cheers, Chris.
> --
> Aptivate | http://www.aptivate.org | Phone: +44 1223 760887
> The Humanitarian Centre, Fenner's, Gresham Road, Cambridge CB1 2ES
>
> Aptivate is a not-for-profit company registered in England and Wales
> with company number 04980791.
> 


Re: Order of Rows Changing

2010-04-16 Thread Brett Wooldridge
I think without an order clause, as you know, there is no guarantee of
order. Also without an order clause, there is no guarantee of
consistency.

Queries that perform table scans will tend to be more consistent than
those that perform index scans (but worse performing).  Then again,
once a join comes into play, and if a hash gets involved in the merge,
all bets are off.

There are probably other factors that can affect order, such as the
page cache.

All that is to say that you can't expect consistent results without an
order clause.  You have shown that is the case empirically.  As to
*why*, even for someone relatively familiar with the code it is hard
to say.  The code path is long and the number of opportunities for
branching are too many to give an answer.  The cache is just one
example where from one iteration to the next the code path may change,
and the resulting order may be different.

With respect to paying a penalty for sort, if the order by clause is
on an indexed column you are not likely to pay a measurable penalty
even for millions of rows.

As an aside, I would strongly recommend the offset and limit
capabilities of Derby for paging.
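
For example (this syntax was added in Derby 10.5, so it would mean
upgrading from your 10.2 driver; the table and key names below are
placeholders), a page of 20 rows starting at row 40 looks like:

    SELECT * FROM SalesData
    WHERE Region = 'West'
    ORDER BY SalesRep, pk
    OFFSET 40 ROWS
    FETCH NEXT 20 ROWS ONLY

Note that the ORDER BY should define a total order (e.g. by including the
primary key) for the pages to be stable, per the discussion above.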

Brett

Sent from my iPhone

On Apr 17, 2010, at 8:02, David Tarico  wrote:

> Hi,
>
> I have a situation where I run a query multiple times in succession,
> and
> the order of the rows in the result set is changing, even though I am
> the only user connected to the database and I am not changing the data
> in any way.
>
> I realize that that except for columns that are explicitly sorted, the
> order of rows in the ResultSet is arbitrary, but it's surprising to me
> that the arbitrary ordering has an element of randomness that will
> produce a different order from one moment to the next.  I'm
> wondering if
> someone can explain why this randomness happens and if there is any
> way
> to prevent it.
>
> My full query is below, but it is simply: Select [columns...] from
> table
> where Region = 'West' order by SalesRep
>
> There is an index on both Region and SalesRep.  According to the query
> plan, Derby is using the Region index to filter, and then doing an
> external sort on disk involving merge runs.  (See below for full query
> plan.)
>
> I'm just guessing, but my theory is that when the results are written
> out to files on disk, the order that the files are merged together is
> random, producing differing order of rows within a SalesRep.
>
> Here's an example of results I'm seeing:
>
> SaleRep 1, customer FOO
> SaleRep 1: customer BAR
> SaleRep 1: customer BAZ
> ...
> SaleRep 2, customer FOO
> SaleRep 2: customer BAR
> SaleRep 2: customer BAZ
> ...
>
> Then I run it again and I get:
>
> SaleRep 1, customer FOO
> SaleRep 1: customer BAR
> SaleRep 1: customer BAZ
> ...
> SaleRep 2, customer XXX
> SaleRep 2: customer YYY
> SaleRep 2: customer ZZZ
> ...
>
> Running the query a few more times sometimes produces the first
> results,
> sometimes the second results, and sometimes one of several additional
> orderings of rows within SalesRep 2.  Another very strange thing is
> that
> for SaleRep 1, the order of rows stays the same.  The order is only
> changing for rows within SaleRep 2 (and maybe other SalesReps).
>
> My usecase is that I want users to be able to page through the data,
> and
> for each page, I run the same SQL query and then pull out the rows for
> that page, expecting the ResultSet to be the same.  For this query,
> obviously I could force a consistent order of rows by order by
> SalesRep
> and then by primary key.
>
> But what I'd really like to understand is what the conditions are that
> trigger the random behavior in the ordering.  If my query was just
> "Select [columns...] from table where Region = 'West'", and there
> was no
> external merge sort, would the order of the rows fluctuate? If I don't
> already need to sort the data for other reasons, I'd rather not pay
> the
> performance penalty of adding a sort on the primary key.
>
> It boils down to this.  I want to run arbitrary queries, and I want to
> ensure a consistent order of results, but I don't care what that order
> is, and I don't want to pay the penalty of always doing a sort on
> primary key.  Thoughts?
>
> I'm running Derby embedded using JDBC driver 10.2.1.6, and the derby
> database internal data structures are version 10.1.  My SQL and query
> plan are below. Any tips or explanations are much appreciated!
>
> Thanks
> David.
>
>
>
SELECT RPT_PARENT_1195016960126.EXTRACT_ID,
>RPT_PARENT_1195016960126.MCUST_NUM_1387820363_648703,
>RPT_PARENT_1195016960126.M_AR_TERM_NUM,
>RPT_PARENT_1195016960126.MAR_TERM_DE__1467979434_85937,
>RPT_PARENT_1195016960126.M_CRED_LIMIT,
>RPT_PARENT_1195016960126.MREFERENCE__422035331_13607,
>RPT_PARENT_1195016960126.MCNAME_64264526_571575,
>RPT_PARENT_1195016960126.M_ADDRESS1,
>RPT_PARENT_1195016960126.M_CITY,
>RPT_PARENT_1195016960126.M_STATE,
>RPT_PARENT_1195016960126.M_ZIP,
>RPT_PARENT_1195016

Re: curious problem dropping triggers and tables

2010-04-14 Thread Brett Wooldridge
Are you sure all of the file streams are flushed and closed after the
transfer?  Can you check with some tool like 'lsof' on Linux or some tool
from SysInternals on Windows?  After shutdown of the embedding VM, all
handles would be closed, which might be why ij works after that.  Just a
thought.
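
For example, on Linux something along these lines should list any handles
still open under the database directory (the path is a placeholder):

    lsof +D /path/to/databaseDir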

-Brett

On Wed, Apr 14, 2010 at 6:03 AM, gmbradford
wrote:

>
> I have a java application that gets a copy of a Derby database that is
> transferred to it over a network. (The sending end freezes the DB and then
> sends the files over a socket.) The receiving jvm closes the socket and
> streams when the database transfer is complete, and then connects to it to
> do some sql. The sql includes delete statements, drop trigger statements,
> and drop table statements. The delete statements consistently work fine,
> but
> the drop trigger and drop table statements don't always work (sometimes
> they
> do). When they don't, we see SqlSyntaxErrorException, with the complaint
> that the trigger or table does not exist. We can't look at the DB while
> this
> jvm is running because it's using embedded mode, but when it's stopped, ij
> shows that the triggers and tables exist and from ij the drops work fine.
>
> I'm wondering if anyone has an idea why this might be happening.
> --
> View this message in context:
> http://old.nabble.com/curious-problem-dropping-triggers-and-tables-tp28235584p28235584.html
> Sent from the Apache Derby Users mailing list archive at Nabble.com.
>
>


Re: Opening/Converting a Derby Embedded DB with the Derby Network Server

2010-03-28 Thread Brett Wooldridge
SQuirreL is a tool with copy DB functionality.  I don't know if it
handles LOBs.  If it works, I would appreciate it if you post back here.

Brett

Sent from my iPhone

On Mar 29, 2010, at 7:32, "[SAS] Justin Swall"
 wrote:

> Unfortunately, the upgrade did not migrate the data from the HSQLDB to
> the new Derby DB.  I have posted on their forums for help.  Does anyone
> know if there is a utility available to migrate existing data from
> HSQL
> to Derby?
>
> Regards,
>
> Justin Swall
> Swall's Associated Services
>
>
> -Original Message-
> From: [SAS] Justin Swall [mailto:jsw...@swallservices.com]
> Sent: Sunday, March 28, 2010 2:50 PM
> To: Derby Discussion
> Subject: RE: Opening/Converting a Derby Embedded DB with the Derby
> Network Server
>
> I figured out my first problem.  The version of the LOB we are
> currently
> using is one subversion older than the new version I am doing my
> testing
> on.  The old version of the software used HSQL and the new version
> uses
> Derby so what I actually have is an old HSQL DB.  I'm going to install
> the old version on my test system, attach the HSQL DB, then do an
> in-place upgrade to the latest subversion as it should update the DB
> as
> part of the upgrade.  Hopefully once upgraded I'll be able to figure
> out
> how to attach it to the network server without incident.  I'll keep
> this
> thread updated as I know more.
>
> Justin Swall
> Swall's Associated Services
>
>
> -Original Message-
> From: [SAS] Justin Swall [mailto:jsw...@swallservices.com]
> Sent: Sunday, March 28, 2010 2:22 PM
> To: derby-user@db.apache.org
> Subject: Opening/Converting a Derby Embedded DB with the Derby Network
> Server
>
> I have an LOB application that uses a Derby Embedded DB (Openbravo
> POS @
> sourceforge.net).  Their application does support using the network
> server however so I spent some time over my weekend running some tests
> with a copy of the app, installing the JDK & Derby and seeing if I can
> get it talking to the network server instead of the embedded
> server.  So
> far so good, I was able to get it to connect to the network server and
> create a new DB and it is now successfully working off the new DB.
>
> My question is I currently have a copy of the embedded DB files (the
> .script and .properties files).  How can I get the data out of this
> embedded DB and over into a DB that can be mounted with the Network
> Server?  Is there some way to just directly mount this existing DB in
> the network server or does it need to be converted?
>
> I'm certain that this is a pretty basic question but I swear I've
> googled and searched the mailing list archives and so far can't figure
> out what to do so I'm looking to the experts for help.
>
> Thanks in advance.
>
> Justin Swall
> Swall's Associated Services


Connection failure when client thread name contains Japanese characters

2010-03-15 Thread Brett Wooldridge
Just a note to the list so that users searching for this issue via Google
might find this post. I have opened a bug (#4584), which is probably a
duplicate of bug #728, so that others may find it and save the hours I spent
chasing it down. However, while related, it may not be a true duplicate of
728. The exception is similar to 728:

Exception in thread "main" org.apache.derby.client.am.SqlException: Unicode
string can't convert to Ebcdic string

(Here is the version of the exception I received -- excuse the Japanese
characters):

Caused by: org.apache.derby.client.am.SqlException: Unicode ストリングを
EBCDIC ストリングに変換することはできません。
  at org.apache.derby.client.net.EbcdicCcsidManager.convertFromUCS2(Unknown Source)
  at org.apache.derby.client.net.Request.writeScalarString(Unknown Source)
  at org.apache.derby.client.net.Request.writeScalarString(Unknown Source)
  at org.apache.derby.client.net.NetConnectionRequest.buildEXTNAM(Unknown Source)
  at org.apache.derby.client.net.NetConnectionRequest.buildEXCSAT(Unknown Source)
  at org.apache.derby.client.net.NetConnectionRequest.writeExchangeServerAttributes(Unknown Source)
  at org.apache.derby.client.net.NetConnection.writeServerAttributesAndKeyExchange(Unknown Source)
  at org.apache.derby.client.net.NetConnection.flowServerAttributesAndKeyExchange(Unknown Source)
  at org.apache.derby.client.net.NetConnection.flowUSRIDPWDconnect(Unknown Source)
  at org.apache.derby.client.net.NetConnection.flowConnect(Unknown Source)
  at org.apache.derby.client.net.NetConnection.initialize(Unknown Source)
  at org.apache.derby.client.net.NetConnection.<init>(Unknown Source)
  at org.apache.derby.client.net.NetConnection40.<init>(Unknown Source)
  at org.apache.derby.client.net.ClientJDBCObjectFactoryImpl40.newNetConnection(Unknown Source)
  at org.apache.derby.client.net.NetXAConnection.createNetConnection(Unknown Source)
  at org.apache.derby.client.net.NetXAConnection.<init>(Unknown Source)
  at org.apache.derby.client.ClientPooledConnection.getNetXAConnection(Unknown Source)
  ... 45 more

However, the difference between #728 and this issue is that the database
name (and connection URL) does NOT contain unicode characters. In this case,
the *thread name* of a client thread requesting a connection contains
Japanese characters. If the thread performing
java.sql.DriverManager.getConnection() has characters that cannot be
translated into EBCDIC, the above exception is the result.

If the thread name is changed to contain only standard ASCII characters, the
connection to the DB is successful. Note again, in my case, the connection
URL is a standard connection URL with no i18n characters, something similar
to: jdbc:derby://localhost/database. It is only the client thread name that
contains i18n characters. I don't know why the client feels it necessary to
marshall the client thread name, but that seems to be the problem. The fix
for this issue is likely easier than 728 if the requirement that the client
marshall the thread name can be removed (it seems senseless).

Finally, just for the record, a typical thread name that tickles this bug
is: "Running-2 (MOTDバナーの設定 for 10.0.0...@default)". If the Japanese is
removed from the thread names, there is no problem. No reproduction is
attached; just change the name of a client thread to contain invalid EBCDIC
characters and attempt to call getConnection().
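
For anyone who wants to try it, a minimal sketch (driver class loading, URL
and thread name are illustrative):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ThreadNameRepro {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.derby.jdbc.ClientDriver");
            // Rename the calling thread so its name contains characters
            // with no EBCDIC mapping.
            Thread.currentThread().setName("テスト thread");
            // Fails with the EBCDIC conversion SqlException shown above.
            Connection conn =
                DriverManager.getConnection("jdbc:derby://localhost/database");
            conn.close();
        }
    }
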
Brett


Re: Using IJ to copy data from one DB to an other one

2010-02-15 Thread Brett Wooldridge
Sylvian,

If this is a one-time migration you might checkout the SQuirreL
client.  I think it has a cross-DB copy feature.

Brett


Re: Streaming Results

2010-01-14 Thread Brett Wooldridge
Bernt,

I think the issue is that Derby will materialize the entire ResultSet on the
client-side before returning it to the user.  If the ResultSet is one
million rows, then one million rows will be transferred and materialized on
the client before the executeQuery() call returns to the user.

Some databases and drivers have the capability to return a streaming
ResultSet, such that rows are only transferred as ResultSet.next() is
called.  If the driver is clever, it can keep a bit ahead of the calls to
next() by transferring X number of rows at a time (where X is something much
smaller than a million).

I do not have personal knowledge of whether Derby supports ResultSet
streaming (which is distinct from streaming datatypes like CLOBs or BLOBs).
 I just wanted to make sure the distinction in the question was clear.
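
In plain JDBC terms the pattern looks something like the sketch below (conn
is an open java.sql.Connection; whether the driver truly streams behind the
scenes is exactly the open question):

    // Hint a modest fetch size and iterate; a streaming-capable driver
    // fetches rows in batches as next() is called instead of
    // materializing the whole result up front.
    Statement stmt = conn.createStatement(
            ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
    stmt.setFetchSize(100);  // a hint only; drivers may ignore it
    ResultSet rs = stmt.executeQuery("SELECT id, payload FROM BigTable");
    while (rs.next()) {
        // process each row as it arrives (table/columns are made up)
        long id = rs.getLong("id");
        String payload = rs.getString("payload");
    }
    rs.close();
    stmt.close();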

Brett


On Thu, Jan 14, 2010 at 7:33 PM, Bernt M. Johnsen wrote:

>  Stian Brattland wrote (2010-01-14 11:01:43):
> > Hi,
> >
> > Thank you for your quick reply.
> >
> > I will elaborate a little on my question:
> >
> > I often need to retrieve a large amount of data from a remote MySQL
> > database. However,
> > if my application runs with a default heap size, then i will quickly get
> > some sort of heap space
> > exception. The reason is that the ResultSet containing the retrieved
> > data is too large. What i have
> > done to get around this is to stream the results from the database, and
> > process rows one by one
> > as they are streamed (for instance, storing them in a local database,
> > like Derby). Of course, things
> > are most likely behaving more optimal than only transfering one row at a
> > time from the database
> > (yes, some buffers are most likely involved). However, my key point was
> > that i do not have to wait
> > for the entire ResultSet to become ready before i can start iterating
> > over the rows. Instead, rows (
> > be it one or hundred) are retrieved as i iterate over the ResultSet.
> >
> > So, my question is whether the Derby Driver has this ability too?
>
> Yes, that is exactly what the Derby driver (and all other reasonable
> implemented JDBC drivers) will do. The driver will attempt to fill up
> the communication buffer (32K) as long as at least one row fits into
> it.
>
> >
> > Kind regards,
> > Stian Brattland
> >
> >
> > My intention with the question was not really to suggest that the driver
> > needs to retrieve results in the most inefficient manner possible.
> >
> > Bernt M. Johnsen skrev:
> >> Hi,
> >>
> >> Some general remarks (don't remember the exact details of what Derby
> >> actually does with setFetchSize).
> >>
> >>
> >> Stian Brattland wrote (2010-01-14 08:43:32):
> >>
> >>> Hi,
> >>>
> >>> I've got a question regarding results streaming. The J/Connector for
> >>> MySQL supports results streaming, which means
> >>> that you can stream and process rows in a ResultSet one by one.
> >>> Normally, all rows in a ResultSet will be retrieved
> >>> before you can process the ResultSet. However, I am curious as to
> >>> whether this "feature" also exists in Derby?
> >>>
> >>
> >> Normally, a JDBC driver will retrieve a suitable number of rows, not
> >> necessarily all, depending on various factors such as row size, number
> >> of rows resulting from the query and communication buffer size.
> >>
> >>
> >>> In MySQL, you would do the following to stream results from the
> >>> database  as you iterate through a ResultSet:
> >>>
> >>> stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
> java.sql.ResultSet.CONCUR_READ_ONLY);
> >>> stmt.setFetchSize(Integer.MIN_VALUE);
> >>>
> >>
> >> setFetchSize is just a hint to the driver, See
> >>
> >>
> http://jscstage.sfbay.sun.com/javase/6/docs/api/java/sql/Statement.html#setFetchSize(int)
> >>
> >> A well written driver will still try to do thing's optimal, such as
> >> e.g. fill up the communication buffer with rows to reduce the number
> >> of roundtrips, regardless of how low you set the fetchSize.
> >>
> >> And last, why would you like to force the driver to fetch the rows one
> >> by one? The only thing you will get fromthat, is extra overhead.
> >>
>
> --
> Bernt Marius Johnsen, Staff Engineer
> Database Technology Group, Sun Microsystems, Trondheim, Norway
>
>
>


Re: C3P0 Connection pooling needed?

2009-11-20 Thread Brett Wooldridge
If you're looking for a transaction-aware connection pool, along with an
associated JTA transaction manager, I really like Bitronix (open
source).  C3P0 is not transaction aware.
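
From memory (double-check against the Bitronix docs), wiring Derby into
Bitronix looks roughly like this:

    // bitronix.tm.resource.jdbc.PoolingDataSource
    PoolingDataSource ds = new PoolingDataSource();
    ds.setClassName("org.apache.derby.jdbc.EmbeddedXADataSource");
    ds.setUniqueName("derby-ds");
    ds.setMaxPoolSize(10);
    ds.setAllowLocalTransactions(true);
    ds.getDriverProperties().setProperty("databaseName", "myDB");
    ds.init();  // after this, ds hands out pooled, XA-enlisted connections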

Brett

Sent from my iPhone

On Nov 21, 2009, at 3:46, dag.wan...@sun.com (Dag H. Wanvik) wrote:

>
>> I am using embedded derby 10.3.
>>
>> Is there need for C3P0 connection pool software because i am
>> creating lot of
>> short lived connections to database?
>
> I vaguely remember an issue with C3PO, not sure if the interrupt issue
> described here could be a problem for you, or if it even was due to
> C3PO usage, but thought I'd mention it..
>
> http://old.nabble.com/ERROR-08000:-Connection-closed-by-unknown-interrupt-td16060173.html
>
> Dag


Re: Horrible performance - how can I reclaim table space?

2009-09-23 Thread Brett Wooldridge
Still, the fix is only for a multi-threaded update scenario.  I don't know
the access pattern of your application, so it may or may not help resolve
your issue.  I would have expected offline compression of the table to have
fixed it, though.
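
For reference, by offline compression I mean the system procedure (schema
and table names are placeholders):

    CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'SOMETABLE', 1)

There is also SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE, which works in place
rather than rebuilding the table, but typically reclaims less space.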

On Thu, Sep 24, 2009 at 10:35 AM, T K  wrote:

> Ouch... I have 10.3.3.0! I will consider the upgrade
>
> Thanks Bret.
>
> ------
> *From:* Brett Wooldridge 
> *To:* Derby Discussion 
> *Sent:* Wednesday, September 23, 2009 9:31:51 PM
> *Subject:* Re: Horrible performance - how can I reclaim table space?
>
> If you are on 10.3, you might consider 10.3.3.1, as a space reclamation
> issue for large objects was resolved (
> http://issues.apache.org/jira/browse/DERBY-4050) between 10.3 and
> 10.3.3.1.  According to that defect, the upgraded version (10.3.3.1) will
> still not reclaim space lost prior to the update, so a full offline
> compression is required.
> -Brett
>
>
> On Thu, Sep 24, 2009 at 10:03 AM, T K  wrote:
>
>> We have a horrific performance issue with a table of 13 rows, each one
>> containing a very small blob, because the table is presumably full of dead
>> rows and we are table-scanning; here's part of the explain plan:
>>
>> Source result set:
>> Table Scan ResultSet for SOMETABLE at read
>> committed isolation level using instantaneous share row locking chosen by
>> the optimizer
>> Number of columns fetched=4
>> Number of pages visited=8546
>> Number of rows qualified=13
>> Number of rows visited=85040
>> optimizer estimated cost:
>> 787747.94
>>
>> So I assume I have over 85,000 dead rows in the table, and compressing it
>> does not reclaim the space. In fact, because we keep adding and deleting
>> rows, the performance gets worse by the hour, and according to the above
>> plan, Derby has processed over 32MB of data just to match 4 of the 13 rows.
>> For the time being, I want to optimize this table scan before I resort to
>> indices and/or reusing rows. This is with Derby 10.3
>>
>> Any thoughts?
>>
>> Thanks
>>
>>
>
>


Re: Badly performing WHERE clause

2009-09-23 Thread Brett Wooldridge
That'll teach me to post too early in the morning (I'm in Tokyo) and before
coffee.  :-)


On Thu, Sep 24, 2009 at 10:19 AM, Matt Doran wrote:

> Brett Wooldridge wrote:
>
>> One could argue that in the first case (2>1), the optimizer is free to
>> choose whatever index it wants because the query can provably return no
>> rows.  However, in the second case (1=1), where the DateTimestamp condition
>> must be evaluated, it would appear that the optimizer indeed selected the
>> wrong index.
>>
>>
>>  Ummm, last time I checked, 2 was greater than 1.  :)
>


Re: Horrible performance - how can I reclaim table space?

2009-09-23 Thread Brett Wooldridge
If you are on 10.3, you might consider 10.3.3.1, as a space reclamation
issue for large objects was resolved (
http://issues.apache.org/jira/browse/DERBY-4050) between 10.3 and 10.3.3.1.
 According to that defect, the upgraded version (10.3.3.1) will still not
reclaim space lost prior to the update, so a full offline compression is
required.
-Brett


On Thu, Sep 24, 2009 at 10:03 AM, T K  wrote:

> We have a horrific performance issue with a table of 13 rows, each one
> containing a very small blob, because the table is presumably full of dead
> rows and we are table-scanning; here's part of the explain plan:
>
> Source result set:
> Table Scan ResultSet for SOMETABLE at read
> committed isolation level using instantaneous share row locking chosen by
> the optimizer
> Number of columns fetched=4
> Number of pages visited=8546
> Number of rows qualified=13
> Number of rows visited=85040
> optimizer estimated cost:
> 787747.94
>
> So I assume I have over 85,000 dead rows in the table, and compressing it
> does not reclaim the space. In fact, because we keep adding and deleting
> rows, the performance gets worse by the hour, and according to the above
> plan, Derby has processed over 32MB of data just to match 4 of the 13 rows.
> For the time being, I want to optimize this table scan before I resort to
> indices and/or reusing rows. This is with Derby 10.3
>
> Any thoughts?
>
> Thanks
>
>


Re: Badly performing WHERE clause

2009-09-23 Thread Brett Wooldridge
One could argue that in the first case (2>1), the optimizer is free to
choose whatever index it wants because the query can provably return no
rows.  However, in the second case (1=1), where the DateTimestamp condition
must be evaluated, it would appear that the optimizer indeed selected the
wrong index.
However, I don't think you can point to 2ms vs. 6000ms as proof of a bad
query plan.  In the first case, the optimizer can prove (to itself) that the
query can return no rows (because of the 2>1 condition along with all AND
predicates) -- and therefore it probably doesn't do any index scan at all.
 Therefore, 2ms.

In the second case, because the 1=1 condition is satisfied, Derby must
*actually* do the index scan as per plan.  Therefore, 6000ms.  Whether using
the DESC index actually provides significant improvement over ASC remains to
be seen, as the comparison currently isn't apples-to-apples.

I would be interested in seeing the result of the DESC vs. the ASC index in
the (1=1) scenario.  You can override the optimizer with a hint, like so:

SELECT * FROM table --DERBY-PROPERTIES index=TBLEVENTS_DATETIMESTAMP_DESC
WHERE
1=1 AND
1=1 AND
DateTimestamp >= ? AND
DateTimestamp <= ?
ORDER BY DateTimestamp DESC

Note, the --DERBY-PROPERTIES must come at the end of a literal line (i.e.
there must be a linefeed after that clause) otherwise everything after that
will be taken as a comment.

Let us know the result.  Either way it may be considered a bug in the
optimizer, but depending on the number of rows returned, may not make a
significant difference in performance.

-Brett

On Wed, Sep 23, 2009 at 11:18 PM, Gavin Matthews  wrote:

>
> Hi,
>
> We have a WHERE clause:
>
> WHERE
> 2>1 AND
> 2>1 AND
> DateTimestamp >= ? AND
> DateTimestamp <= ?
> ORDER BY DateTimestamp DESC
>
> which reasonably takes 2ms to execute on a table that has 357254 rows, and
> does so by selecting the most appropriate index:
>
> Index Scan ResultSet for TBLEVENTS using index TBLEVENTS_DATETIMESTAMP_DESC
> at serializable isolation level using share row locking chosen by the
> optimizer
>
> However, when we change the WHERE clause to:
>
> WHERE
> 1=1 AND
> 1=1 AND
> DateTimestamp >= ? AND
> DateTimestamp <= ?
> ORDER BY DateTimestamp DESC
>
> the execution time increases to 6000ms, and appears to do so because the
> query planner hasn't selected the most appropriate index:
>
> Index Scan ResultSet for TBLEVENTS using index TBLEVENTS_DATETIMESTAMP_ASC
> at serializable isolation level using share row locking chosen by the
> optimizer
>
> The indexes are specified as follows:
>
> CREATE INDEX tblEventsDateTimestampASC ON tblEvents (DateTimestamp ASC);
> CREATE INDEX tblEventsDateTimestampDESC ON tblEvents (DateTimestamp DESC);
>
> Do people agree that this appears to be a bug with the query planner, as
> the
> selection of the most appropriate index should not be influenced by static
> conditions such as 1=1 or 2>1?
>
> Regards,
>
> Gavin
>
> --
> View this message in context:
> http://www.nabble.com/Badly-performing-WHERE-caluse-tp25531166p25531166.html
> Sent from the Apache Derby Users mailing list archive at Nabble.com.
>
>


Re: SELECT query takes 5 secs, what can I do?

2009-09-17 Thread Brett Wooldridge
Glad the updated statistics worked for you.  The specificity of messageId
seemed extremely high, and given only 1 million rows should have been
returning sub-second results.  You don't need indexes on either of the other
two fields, they won't improve performance and will slow insertion speed.
Brett


On Fri, Sep 18, 2009 at 9:31 AM, Andrew Bruno wrote:

> Hey all,
>
> Sorry been flat chat for a while, but I gotta say that the query
> improved dramatically when the stats updated.
>
> When I created the index, I didn't wait for the stats to be updated.
>
> I came back the next day, and the query was running in sub seconds.
> Now ExchangeSync can run well again on this 3G database.
>
> I want to thank everyone for the awesome support, including some of
> those left or right field ideas!
>
> Cheers
> Andrew
>
>
> On Thu, Sep 17, 2009 at 7:26 PM, Knut Anders Hatlen 
> wrote:
> > Brett Wooldridge  writes:
> >
> >> You can force Derby to update statistics with this command:
> >>
> >> alter table <table-name> compress [sequential]
> >>
> >> Note this command itself might take a long time -- dozens of minutes --
> but in
> >> a system like yours you could get away with running it once a month or
> so at
> >> some off-peak time.
> >
> > In Derby 10.5 you have a cheaper way of updating the index cardinality
> > statistics. This statement will update the statistics for all the
> > indexes on columns in MYSCHEMA.MYTABLE without doing an expensive
> > compress:
> >
> >  CALL SYSCS_UTIL.SYSCS_UPDATE_STATISTICS('MYSCHEMA', 'MYTABLE', NULL)
> >
> > http://db.apache.org/derby/docs/10.5/ref/rrefupdatestatsproc.html
> >
> > --
> > Knut Anders
> >
>


Re: SELECT query takes 5 secs, what can I do?

2009-09-15 Thread Brett Wooldridge
The original query:

select * from Message m where
   m.messageId='<7997716ed1af3d47a35d74fa2cb6109202553...@somedomain.com>'
   and m.id != 933927 and m.archived=1

and the sub-select query:

SELECT * from Message where ID IN (SELECT ID FROM Message  where
messageId = '<7997716ed1af3d47a35d74fa2cb61092e2a...@somedomaion.local
>')
and archived=1 and id!=987452

are equivalent with respect to the database.  My comment regarding the 'id
!= 987452' clause was only with respect to an index.  Its inclusion in the
query (original) does not impact performance in a negative way.

The optimizer is going to choose to use the index (on messageId), and will
perform that select first, then it will (likely) join that sub-result
against the other two clauses (archived=1 and id != 987452) -- making it
equivalent to your second query.

Do you know how to get a query plan?
http://wiki.apache.org/db-derby/PerformanceDiagnosisTips
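
If not, a quick way from ij is to turn on runtime statistics before running
the query (details are in the wiki page above):

    CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1);
    -- run your query here, then:
    VALUES SYSCS_UTIL.SYSCS_GET_RUNTIMESTATISTICS();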

The portion of the query with the messageId should have high-specificity due
to the relative uniqueness of the message within the context of all
messages.  For example, you say the table has 1 million rows, and a given
message in an organization of 200 users would consume 200 rows, so as you
can see the specificity is very high.  I would not expect that kind of
select to be taking 5 seconds, but rather milliseconds.  It is likely that
your index statistics are seriously out of date (see
http://issues.apache.org/jira/browse/DERBY-269).  One user reported that
updating statistics took a 22 minute query down to less than one second!

You can force Derby to update statistics with this command:

alter table <table-name> compress [sequential]

Note this command itself might take a long time -- dozens of minutes -- but
in a system like yours you could get away with running it once a month or
so at some off-peak time.

If you have a test database in which your query is taking 5 seconds, attempt
to run the above alter table, and then re-run your query.  If your query
times are still slow, post the query plan for the query here and we can take
a look at it.

Brett


Re: SELECT query takes 5 secs, what can I do?

2009-09-15 Thread Brett Wooldridge
By the way, it was Hibernate that led the original developer to be looping
over a large collection and executing a delete of each object.  Hibernate
can be very handy, but even Gavin and the other Hibernate guys will tell you
it is not designed for (and is not a substitute for) bulk operations.  We
had to "go around" Hibernate in that case and several others to execute true
bulk operations against the database.  Use Hibernate where it is useful, but
go to the bare metal when you need performance.
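
As a sketch of what that can look like while still staying inside Hibernate
(Hibernate 3 supports bulk HQL operations; the entity and property names
here are made up for illustration):

    // One bulk statement instead of N entity-by-entity deletes
    int purged = session.createQuery(
            "delete from Message m where m.archived = 1")
            .executeUpdate();

For anything more exotic, dropping to a plain JDBC connection and straight
SQL works too.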

Brett

On Wed, Sep 16, 2009 at 12:16 AM, Brett Wooldridge <
brett.wooldri...@gmail.com> wrote:

> Because the query specified an id != ... (NOT equal), your specificity will
> be extremely low (almost every row meets that criteria), therefore adding id
> to the index will do little but increase the overhead of the index.
> Similarly, it seems the archived flag would offer little in the way of
> narrowing the result set -- unless the number of archived items is extremely
> small compared to the total row count.
>
> But you said something interesting, you said it is "looping through all the
> messages".  That sounds like more of an algorithmic issue than a DB one.
> For example, rather than looping and firing off thousands of select
> statements at 5 seconds each -- processing each message -- is there a way
> to devise one select (or some similarly small number) that returns
> thousands of rows that you can cursor
> through and process?
>
> Anytime I see a pattern like that -- looping over some X programmatically
> and running a query for each -- it is a red flag.  In the past I've
> converted similar onesy-twosy patterns to bulk operations with generally one
> or two orders of magnitude improvement.  I worked on a project where a
> "clean-up" job was deleting data one row at a time based on some algorithm,
> but doing it for thousands of rows.  It was taking hours.  After some
> thinking and creative querying (and sub-querying) we were able to generate a
> bulk delete that took less than two minutes.  It involved creating a
> temporary table (this was mysql) with ids to be deleted (populated by a few
> choice queries) and then doing a bulk delete based on a join with that
> table.
>
> Anyway, the point is, rather than trying to optimize a query that is run
> thousands of times, try to optimize the algorithm so that it doesn't need to
> do that.
>
> Just my thoughts.
>
> -Brett
>
>
>
> On Tue, Sep 15, 2009 at 10:02 PM, Andrew Bruno wrote:
>
>> Thank you all for the responses.
>>
>> The db runs as part of an application that sits on a Windows box, and we
>> have many installed versions around the world, so we don't control
>> hardware, etc.  It only has a few db connections, and at the moment this
>> particular instance is looping through all the messages, and seeing whether
>> they have been archived in the  cloud yet... And as you can imagine, at
>> 5secs a message its taking  a long time.
>>
>> The message table has a bunch of other columns, and since I am using
>> hibernate I need all columns... But I wonder if it would be quicker to
>> select just the id column, and then load/select all columns using where
>> id=.. Sounds crazy.. Kinda like 1+N selects...But any thoughts?
>>
>> As for the compound index, I thought it was not possible to add a pk index,
>> i.e. id, as part of a compound, but either way I will try.
>>
>> Also, if I do a count(messageid) instead, are there any other optimization
>> tricks?
>>
>> Keep all thoughts coming, crazy or sound :)
>>
>> Appreciated
>> Andrew
>>
>>
>>
>> -Original Message-
>> From: Rick Hillegas 
>> Sent: Tuesday, 15 September 2009 10:32 PM
>> To: Derby Discussion 
>> Subject: Re: SELECT query takes 5 secs, what can I do?
>>
>> Hi Andrew,
>>
>> You might try adding more columns to your index so that it covers the
>> whole WHERE clause:
>>
>> CREATE INDEX IDX_Message_MessageId ON ExchangeSync.Message (messageId,
>> archived, id)
>>
>> Hope this helps,
>> -Rick
>>
>>
>>
>> Andrew Bruno wrote:
>> > Hello,
>> >
>> > I have a query that used to take 10secs to run, i.e.
>> >
>> > select * from Message m where
>> > m.messageId='<7997716ed1af3d47a35d74fa2cb6109202553...@somedomain.com>'
>> > and m.id != 933927 and m.archived=1
>> >
>> > The Message table has around one million rows.
>> >
>> > I added the following index
>> >
>> > CREATE INDEX IDX_Message_MessageId ON ExchangeSync.Message (messageId)
>> >
>> > and now it takes 5secs.
>> >
>> > Is there anything else I can do?
>> >
>> > Should I add an index on the boolean "archived" column too?
>> >
>> > Any performance hints appreciated.
>> >
>> > Thanks
>> > Andrew
>> >
>>
>>
>>
>


Re: SELECT query takes 5 secs, what can I do?

2009-09-15 Thread Brett Wooldridge
Because the query specified an id != ... (NOT equal), your specificity will
be extremely low (almost every row meets that criteria), therefore adding id
to the index will do little but increase the overhead of the index.
Similarly, it seems the archived flag would offer little in the way of
narrowing the result set -- unless the number of archived items is extremely
small compared to the total row count.

But you said something interesting, you said it is "looping through all the
messages".  That sounds like more of an algorithmic issue than a DB one.
For example, rather than looping and firing off thousands of select
statements at 5 seconds each -- processing each message -- is there a way to
devise one select (or some similarly small number) that returns thousands of
rows that you can cursor
through and process?

Anytime I see a pattern like that -- looping over some X programmatically and
running a query for each -- it is a red flag.  In the past I've converted
similar onesy-twosy patterns to bulk operations with generally one or two
orders of magnitude improvement.  I worked on a project where a "clean-up"
job was deleting data one row at a time based on some algorithm, but doing
it for thousands of rows.  It was taking hours.  After some thinking and
creative querying (and sub-querying) we were able to generate a bulk delete
that took less than two minutes.  It involved creating a temporary table
(this was mysql) with ids to be deleted (populated by a few choice queries)
and then doing a bulk delete based on a join with that table.

Anyway, the point is, rather than trying to optimize a query that is run
thousands of times, try to optimize the algorithm so that it doesn't need to
do that.

Just my thoughts.

-Brett


On Tue, Sep 15, 2009 at 10:02 PM, Andrew Bruno wrote:

> Thank you all for the responses.
>
> The db runs as part of an application that sits on a Windows box, and we
> have many installed versions around the world, so we don't control
> hardware, etc.  It only has a few db connections, and at the moment this
> particular instance is looping through all the messages, and seeing whether
> they have been archived in the  cloud yet... And as you can imagine, at
> 5secs a message its taking  a long time.
>
> The message table has a bunch of other columns, and since I am using
> hibernate I need all columns... But I wonder if it would be quicker to
> select just the id column, and then load/select all columns using where
> id=.. Sounds crazy.. Kinda like 1+N selects...But any thoughts?
>
> As for the compound index, I thought it was not possible to add a pk index,
> i.e. id, as part of a compound, but either way I will try.
>
> Also, if I do a count(messageid) instead, are there any other optimization
> tricks?
>
> Keep all thoughts coming, crazy or sound :)
>
> Appreciated
> Andrew
>
>
>
> -Original Message-
> From: Rick Hillegas 
> Sent: Tuesday, 15 September 2009 10:32 PM
> To: Derby Discussion 
> Subject: Re: SELECT query takes 5 secs, what can I do?
>
> Hi Andrew,
>
> You might try adding more columns to your index so that it covers the
> whole WHERE clause:
>
> CREATE INDEX IDX_Message_MessageId ON ExchangeSync.Message (messageId,
> archived, id)
>
> Hope this helps,
> -Rick
>
>
>
> Andrew Bruno wrote:
> > Hello,
> >
> > I have a query that used to take 10secs to run, i.e.
> >
> > select * from Message m where
> > m.messageId='<7997716ed1af3d47a35d74fa2cb6109202553...@somedomain.com>'
> > and m.id != 933927 and m.archived=1
> >
> > The Message table has around one million rows.
> >
> > I added the following index
> >
> > CREATE INDEX IDX_Message_MessageId ON ExchangeSync.Message (messageId)
> >
> > and now it takes 5secs.
> >
> > Is there anything else I can do?
> >
> > Should I add an index on the boolean "archived" column too?
> >
> > Any performance hints appreciated.
> >
> > Thanks
> > Andrew
> >
>
>
>


Re: SELECT query takes 5 secs, what can I do?

2009-09-15 Thread Brett Wooldridge
Consider making a separate archive table and live table.  Looking at  
that query, I doubt any other index will help.
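
A rough sketch of the split, assuming an archive table with the same
columns already exists (names are placeholders), run in one transaction:

    INSERT INTO MessageArchive SELECT * FROM Message WHERE archived = 1;
    DELETE FROM Message WHERE archived = 1;

Queries against the live table then never wade through archived rows.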


Sent from my iPhone

On Sep 15, 2009, at 17:07, Andrew Bruno  wrote:


Hello,

I have a query that used to take 10secs to run, i.e.

select * from Message m where
m.messageId='<7997716ed1af3d47a35d74fa2cb6109202553...@somedomain.com>'
and m.id != 933927 and m.archived=1

The Message table has around one million rows.

I added the following index

CREATE INDEX IDX_Message_MessageId ON ExchangeSync.Message (messageId)

and now it takes 5secs.

Is there anything else I can do?

Should I add an index on the boolean "archived" column too?

Any performance hints appreciated.

Thanks
Andrew


Re: Derby 10.5.1.1 regression

2009-08-04 Thread Brett Wooldridge
I have created issue DERBY-4337 to track this.
https://issues.apache.org/jira/browse/DERBY-4337

To summarize briefly what I have found (this info is also in the bug).

1. The problem does not occur with the EmbeddedXADataSource
2. The problem exists in 10.4.2.0, not just 10.5.1.1.  But because I skipped
the 10.4.2.0 release it was not noticed earlier.
3. The problem exists in 10.5.2.0.

Further updates will be made to the issue in JIRA, rather than
here.  I would appreciate all interested parties following the bug
there.

Thanks,
Brett


On Tue, Aug 4, 2009 at 11:57 PM, Bryan Pendleton
wrote:

> Have you opened an issue in the tracking system for this? It seems
> like it's time to do that. You can use the tracking system to attach
> various logs and other information for people to see and discuss.
>
> http://db.apache.org/derby/DerbyBugGuidelines.html
>
> thanks,
>
> bryan
>
>


Re: Derby 10.5.1.1 regression

2009-08-04 Thread Brett Wooldridge
Kristian,
I will try running with the embedded driver (not sure I can easily, but I
think I can).  Because I'm in Tokyo, I'm a bit out of sync time-wise with
North America.  It's midnight here (10:00am in Chicago).  So, I'll try it
when I get into work tomorrow.  It'll probably be "tonight" for you, or the
wee hours of the morning before I post results.

Thanks for bearing with me on this (everyone).

-Brett

On Tue, Aug 4, 2009 at 9:04 PM, Kristian Waagan wrote:

> Brett Wooldridge wrote:
>
>> Anyone have some suggestions for debugging direction?  I hate the thought
>> that I'm stuck on <10.5 forever.  Any other environment details, logging
>> options, etc?
>>
>
> This may be a stupid question, but is it possible to run the application
> using an EmbeddedXADataSource?
> It is still not quite clear to me if there is a real DRDA protocol error,
> or if the protocol error occurs due to a non-DRDA related bug / error on the
> server.
>
> I don't really know where to start looking for problems; in the client
> driver or in the Clob handling on the server side.
> I would eliminate one or more factors;
> - DRDA (by running with the embedded driver only)
> - Clob streaming (insert NULLs, or provide values as strings instead of
> streams for Clob columns?)
>
> Since I don't know the application, I don't know how hard it is to do any
> of this.
> As usual, providing a repro would help a lot, but that may be hard given
> the number of software components involved...
>
> Another thing to try, would be to follow up on Dag's suggestion:
> Do you see the bug when using Derby 10.4 [1]?
>
>
> Regards,
> --
> Kristian
>
> [1] http://db.apache.org/derby/releases/release-10.4.2.0.cgi
>
>>
>> Thanks,
>> Brett
>>
>>
>> On Tue, Aug 4, 2009 at 12:35 AM, Dag H. Wanvik  wrote:
>>
>>
>>Looks like the client sends an SQLSTT (0x2414) code point (starting an
>>SQL statement to prepare) at a point where a new DRDA command is
>>expected (processCommands). The SQLSTT code point is only allowed
>>inside prepare (parsePRPSQLSTTobjects) or direct execution
>>(parseEXECSQLIMMobjects) contexts. I have no idea why.
>>
>>
>>
>>
>


Re: Derby 10.5.1.1 regression

2009-08-04 Thread Brett Wooldridge
If you are referring to the properties:
derby.stream.error.logSeverityLevel=0
derby.language.logStatementText=true

Then yes, in my post yesterday with the attachment (derby.log), those
properties were enabled.  I assume they would output to the derby.log not
somewhere else?

The error actually occurs on 10.5.2.0 as well.  I was seeing it on 10.5.1.1
and so I upgraded to see if it went away, but it is the same.

-Brett

On Tue, Aug 4, 2009 at 11:12 PM, Kathey Marsden  wrote:

> Kristian Waagan wrote:
>
>> Brett Wooldridge wrote:
>>
>>> Anyone have some suggestions for debugging direction?  I hate the thought
>>> that I'm stuck on <10.5 forever.  Any other environment details, logging
>>> options, etc?
>>>
>> Did you try setting the properties outlined here:
> http://www.nabble.com/Derby-10.5.1.1-regression-tt24733315.html#a24733315
>
> to see if we get more information in the derby.log?
> I tend to think the stack trace when you get the XCL30/XJ001 errors will be
> very helpful.
> If you still do not see them in the derby.log, it might be helpful to get a
> fix for DERBY-1191, but it does look to me like this error should be logged
> in derby.log.
>
> You might also try with 10.5.2.0  as there were Clob fixes there and
> another user recently found his Clob issue was resolved there.  Note:
> Although 10.5.2.0 passed the vote and is posted to the website, we just
> found a wrong results regression DERBY-4331 which  we are investigating.
>
>
> Kathey
>
>


Re: Derby 10.5.1.1 regression

2009-08-04 Thread Brett Wooldridge
Anyone have some suggestions for debugging direction?  I hate the thought
that I'm stuck on <10.5 forever.  Any other environment details, logging
options, etc?

Thanks,
Brett


On Tue, Aug 4, 2009 at 12:35 AM, Dag H. Wanvik  wrote:

>
> Looks like the client sends an SQLSTT (0x2414) code point (starting an
> SQL statement to prepare) at a point where a new DRDA command is
> expected (processCommands). The SQLSTT code point is only allowed
> inside prepare (parsePRPSQLSTTobjects) or direct execution
> (parseEXECSQLIMMobjects) contexts. I have no idea why.
>
>
>


Re: Filtering using the sort order on two columns

2009-08-04 Thread Brett Wooldridge
Educated guess, but I think an index scan will not be used because the
predicate provides no "stop condition".  See "What's Optimizable?" here:

http://db.apache.org/derby/docs/10.0/manuals/tuning/perf43.html

You can *possibly* force the use of an index by overriding the optimizer:

http://db.apache.org/derby/docs/10.5/tuning/ctunoptimzoverride.html

From the first document:

"Sometimes a table scan is the most efficient way to access data, even if a
potentially useful index is available. For example, if the statement returns
virtually all the data in the table, it is more efficient to go straight to
the table instead of looking values up in an index, because then Derby is
able to avoid the intermediate step of retrieving the rows from the index
lookup values."

The trouble is, because your query is open-ended (using > with no <), Derby
doesn't know how many rows will be returned ... maybe it's the entire table
... in which case a table-scan is more efficient.  You *might* also try
using ROW_NUMBER() to limit the query and maybe trick the optimizer (search
the FAQ for LIMIT), but somehow I don't think the optimizer is that smart.
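
For reference, the FAQ-style form is roughly this (a hedged sketch;
ROW_NUMBER() needs 10.4 or later, and as far as I know you can't put an
ORDER BY inside the subquery, which is part of why I doubt it helps here):

// Hedged sketch of the ROW_NUMBER() trick from the Derby FAQ.  With no
// ORDER BY in the subquery, the numbering follows whatever order the
// scan produces.
PreparedStatement ps = conn.prepareStatement(
    "SELECT field, text, docfreq FROM (" +
    "  SELECT ROW_NUMBER() OVER () AS rn, field, text, docfreq" +
    "  FROM Terms WHERE (field = ? AND text > ?) OR field > ?" +
    ") AS tmp WHERE rn <= 1");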

Any reason not to cache the result of the first query, and manually step
through the results rather than constantly re-running a query to get the
"next" row?

-Brett

On Tue, Aug 4, 2009 at 3:30 PM, Daniel Noll  wrote:

> Hi all.
>
> I have a table and index like this:
>
>    CREATE TABLE Terms (
>        id        INTEGER PRIMARY KEY GENERATED ALWAYS AS IDENTITY NOT NULL,
>        field     VARCHAR(255) NOT NULL,
>        text      VARCHAR(255) NOT NULL,
>        docfreq   INTEGER NOT NULL
>    )
>
>    CREATE INDEX TermsFieldAndText ON Terms (field, text)
>
> Here's some example data.  Table is shown in sorted order as it would
> be listed in the index above:
>
> | field | text  | docfreq |
> |-------|-------|---------|
> | body  | hello | 10      |
> | tag   | home  | 100     |
> | tag   | work  | 80      |
>
> For a given input term (let's say tag:home) I want to find the _next_
> entry in the table (so it should return tag:work).  My naive
> implementation does this:
>
>SELECT field, text, docfreq
>FROM Terms
>WHERE (field = ? AND text > ?) OR field > ?
>ORDER BY field ASC, text ASC
>
> This does the right thing as far as the output is concerned, but Derby
> does a table scan.  Looking at the query, my impression would be that
> in the worse case scenario, Derby should do two index scans.  Or, if
> it's smart, one index scan.
>
> I'm aware that I might be able to work around this by having instead a
> field with the concatenation of both fields with
> CONCAT(field,'\0',text).  But I wanted to check first whether there is
> a way I can maintain my relatively understandable table structure
> before resorting to hacks.  Basically I just need the data in index
> order, starting from a given point... so I figure there is a simple
> solution.
>
> Daniel
>
>
> --
> Daniel NollForensic and eDiscovery Software
> Senior Developer  The world's most advanced
> Nuixemail data analysis
> http://nuix.com/and eDiscovery software
>


Re: Second thread is blocked until the first thread commit the transaction, I can not understand why...

2009-08-03 Thread Brett Wooldridge
I think you misunderstand the semantics of read_committed.  I suggest reading
this: http://en.wikipedia.org/wiki/Isolation_(database_systems)

Basically, read_committed means this:

"Data records retrieved by a query are not prevented from modification by
some other transaction.  Non-repeatable reads may occur, meaning data
retrieved in a SELECT statement may be modified by some other transaction
when it commits.  In this isolation level, read locks are acquired on
selected data but they are released immediately, whereas write locks are
released at the end of the transaction."

The locking semantic you are looking for does not exist as part of the ANSI
standard.  The isolation levels only define read lock behavior, not write
lock behavior.  Granted they are somewhat intertwined, but still distinct.
Read_committed only means that a WRITE is not blocked by a READ; it does
not mean that a READ is not blocked by a WRITE.

Also from that page:

However, the [isolation level] definitions above have been criticised in the
paper "A Critique of ANSI SQL Isolation Levels" as being ambiguous, and as
not accurately reflecting the isolation provided by many databases:
This paper shows a number of weaknesses in the anomaly approach to defining
isolation levels. The three ANSI phenomena are ambiguous. Even their
broadest interpretations do not exclude anomalous behavior. This leads to
some counter-intuitive results. In particular, lock-based isolation levels
have different characteristics than their ANSI equivalents. This is
disconcerting because commercial database systems typically use locking.
Additionally, the ANSI phenomena do not distinguish among several isolation
levels popular in commercial systems.
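
You can see this for yourself by capping Derby's lock wait timeout so the
blocked reader fails fast instead of hanging for the default 60 seconds.  A
minimal sketch, reusing the st2 statement from your reader code (40XL1 is
Derby's lock-timeout SQLSTATE):

// On the JVM that runs the Derby engine (your network server), cap the
// lock wait before the engine boots, e.g.:
//   System.setProperty("derby.locks.waitTimeout", "5");
try {
    ResultSet rs = st2.executeQuery("select * from APP.TABLE1");
} catch (SQLException e) {
    if ("40XL1".equals(e.getSQLState())) {
        // still write-locked by the uncommitted insert in the other thread
    }
}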


On Mon, Aug 3, 2009 at 4:33 PM, sarah.kho  wrote:

>
> Hi
> I am learning transactions and isolation levels. I tried to use
> read_committed in one thread and then, in another thread, inserted some data
> into a table. The reader thread is blocked and waits until the first thread
> commits the transaction to complete the select statement. What I can not
> understand is: shouldn't the second thread only read what is already
> committed instead of waiting until the inserting thread finishes its job?
>
> Inserting thread code:
>
>  [code]
>
> String userName = "app";
>String password = "app";
>String url = "jdbc:derby://localhost:1527/sample";
>
> Class.forName("org.apache.derby.jdbc.ClientDriver").newInstance();
>conn = DriverManager.getConnection(url, userName, password);
>conn.setAutoCommit(false);
>PreparedStatement st = conn.prepareStatement("insert into
> APP.TABLE1(name, lastname) values('a','a')");
>st.executeUpdate();
>
>    Thread.sleep(20000);
>
>conn.commit();
> [/code]
>
>
> reader thread code:
> [code]
>
>String userName = "app";
>String password = "app";
>String url = "jdbc:derby://localhost:1527/sample";
>
> Class.forName("org.apache.derby.jdbc.ClientDriver").newInstance();
>conn = DriverManager.getConnection(url, userName, password);
>conn.setAutoCommit(false);
>
> conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
>
>
>Statement st2 = conn.createStatement();
>
>ResultSet rs = st2.executeQuery("select * from APP.TABLE1");
>
> [/code]
>
> I also tried executing this code in two different applications and the
> result is the same. The reader application waits for almost 20 seconds
> before it reads the data.
> --
> View this message in context:
> http://www.nabble.com/Second-thread-is-blocked-until-the-first-thread-commit-the-transaction%2C-I-can-not-understand-why...-tp24786526p24786526.html
> Sent from the Apache Derby Users mailing list archive at Nabble.com.
>
>


Re: Derby 10.5.1.1 regression

2009-07-30 Thread Brett Wooldridge
The second batch of output that I posted was from the console.  Here it is
again:

2009-07-30 05:37:47.406 GMT : Execution failed because of a Distributed
Protocol Error:  DRDA_Proto_CMDNSPRM; CODPNT arg  = 2414; Error Code Value =
0
org.apache.derby.impl.drda.DRDAProtocolException: Execution failed because
of a Distributed Protocol Error:  DRDA_Proto_CMDNSPRM; CODPNT arg  = 2414;
Error Code Value = 0
at
org.apache.derby.impl.drda.DRDAConnThread.codePointNotSupported(Unknown
Source)
at org.apache.derby.impl.drda.DRDAConnThread.processCommands(Unknown
Source)
at org.apache.derby.impl.drda.DRDAConnThread.run(Unknown Source)
2009-07-30 05:37:47.406 GMT : Execution failed because of a Distributed
Protocol Error:  DRDA_Proto_CMDNSPRM; CODPNT arg  = 2414; Error Code Value =
0
org.apache.derby.impl.drda.DRDAProtocolException: Execution failed because
of a Distributed Protocol Error:  DRDA_Proto_CMDNSPRM; CODPNT arg  = 2414;
Error Code Value = 0
at
org.apache.derby.impl.drda.DRDAConnThread.codePointNotSupported(Unknown
Source)
at org.apache.derby.impl.drda.DRDAConnThread.processCommands(Unknown
Source)
at org.apache.derby.impl.drda.DRDAConnThread.run(Unknown Source)

The Derby server is embedded in our server, and the client is also
"in-server" but communicates over sockets via the standard (non-embedded)
JDBC client driver.
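
In case it matters, "embedded in our server" means we boot the network
server in-process, roughly like this (a simplified sketch of our startup,
using org.apache.derby.drda.NetworkServerControl):

NetworkServerControl server = new NetworkServerControl(
    InetAddress.getByName("localhost"), 1527);
server.start(new PrintWriter(System.out));  // console for server messages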

-Brett



On Thu, Jul 30, 2009 at 9:49 PM, Kathey Marsden  wrote:

> Brett Wooldridge wrote:
>
>> I never see the XJ001 logged, though it appears to travel over the wire
>> (looking at the DRDA trace) to the client.
>>
> How is the server started? Is the console output recorded anywhere?
>
>


Re: Derby 10.5.1.1 regression

2009-07-30 Thread Brett Wooldridge
I never see the XJ001 logged, though it appears to travel over the wire
(looking at the DRDA trace) to the client.  With respect to the CLOB columns
and how they are set, that's a good question.  I am using Hibernate so the
SQL (at least how the parameters are set) is opaque to me.   The object has
JPA annotations as follows:

@Column(name = "error")
private String error;

@Lob
@Column(name = "grid_data", length = Integer.MAX_VALUE)
private String gridData;

@Lob
@Column(name = "details", length = Integer.MAX_VALUE)
private String details;

A sharp DRDA trace reader may be able to glean how it is set.  In fact,
looking at the DRDA trace with my untrained eye, I would say it is via
setString(), as I would assume there would be decoration/fanfare on the wire
associated with a streamed object.  But as I said, I'm not a DRDA-savvy guy.
Note the
length properties are only taken as hints to Hibernate that the underlying
object is a CLOB and not a VARCHAR, whether it decides to use setString() or
setCharacterStream() based on that I don't know.
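
For what it's worth, in plain JDBC the two styles look like this (a hedged
sketch with a made-up table name, just to illustrate the difference):

// Hedged sketch of the two CLOB parameter styles (java.io.StringReader).
PreparedStatement ps = conn.prepareStatement(
    "insert into report (grid_data) values (?)");

// Style 1 - materialized: the whole value is sent as one string.
ps.setString(1, gridData);

// Style 2 - streamed: the driver pulls from the Reader instead.
// ps.setCharacterStream(1, new StringReader(gridData), gridData.length());

ps.executeUpdate();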

-Brett

On Thu, Jul 30, 2009 at 6:36 PM, Kristian Waagan wrote:

> Brett Wooldridge wrote:
>
>> I have obtained a server-side trace of the failure.
>>
>
> Hi Brett,
>
> I haven't decoded the DRDA trace, but do you happen to have the stack trace
> for the XJ001 SQLException as well?
> Also, how is the value of the CLOB columns being set? (i.e.
> setCharacterStream or setString)
> Is there anything else in derby.log that looks suspicious?
>
>
> Regards,
> --
> Kristian
>
>  I included a statement preceding the failure because it might be relevant.
>>  There is an 'insert' into the table upon which the subsequent 'select'
>> fails.  Both occur on the same connection, but are separated by an XA commit
>> managed by Bitronix JTA.  The 'insert' and subsequent 'select' occur on two
>> separate client threads -- sequentially.  The 'insert' succeeds, signals
>> another thread via event mechanism, which subsequently issues the 'select'
>> -- coincidentally (but typically due to low system activity) on the same
>> connection obtained from a connection pool.
>>
>>   (2009.7.30 5:47:28) Request fill DRDAConnThread_7 5
>>
> [snip - DRDA trace ]
>


Re: Derby 10.5.1.1 regression

2009-07-29 Thread Brett Wooldridge
More information after disabling/enabling other logging:

2009-07-30 05:37:47.406 GMT : Execution failed because of a Distributed
Protocol Error:  DRDA_Proto_CMDNSPRM; CODPNT arg  = 2414; Error Code Value =
0
org.apache.derby.impl.drda.DRDAProtocolException: Execution failed because
of a Distributed Protocol Error:  DRDA_Proto_CMDNSPRM; CODPNT arg  = 2414;
Error Code Value = 0
at
org.apache.derby.impl.drda.DRDAConnThread.codePointNotSupported(Unknown
Source)
at org.apache.derby.impl.drda.DRDAConnThread.processCommands(Unknown
Source)
at org.apache.derby.impl.drda.DRDAConnThread.run(Unknown Source)
2009-07-30 05:37:47.406 GMT : Execution failed because of a Distributed
Protocol Error:  DRDA_Proto_CMDNSPRM; CODPNT arg  = 2414; Error Code Value =
0
org.apache.derby.impl.drda.DRDAProtocolException: Execution failed because
of a Distributed Protocol Error:  DRDA_Proto_CMDNSPRM; CODPNT arg  = 2414;
Error Code Value = 0
at
org.apache.derby.impl.drda.DRDAConnThread.codePointNotSupported(Unknown
Source)
at org.apache.derby.impl.drda.DRDAConnThread.processCommands(Unknown
Source)
at org.apache.derby.impl.drda.DRDAConnThread.run(Unknown Source)

I believe the other errors are occurring as a cascade of this failure.


Derby 10.5.1.1 regression

2009-07-29 Thread Brett Wooldridge
I recently upgraded from 10.3.2.1 to 10.5.1.1 and immediately started to get
failures in my application.

I am using Derby in combination with Hibernate and Bitronix JTA, but because
the error goes away when I roll back to 10.3.2.1, I am assuming the problem
is within Derby.  This error is reproducible every time, but creating a
simple testcase from my server may be difficult.  Here is the log4j log
containing the failure flow.  It includes logging by Bitronix and Hibernate
as well.

09-07-29 20:28:48,875 [SessionImpl  ]
[Jetty-5 ] DEBUG - opened session at timestamp:
12488669288
09-07-29 20:28:48,875 [BitronixTransaction  ]
[Jetty-5 ] DEBUG - registering synchronization
org.hibernate.context.jtasessioncontext$cleanupsy...@27a078
09-07-29 20:28:48,875 [AbstractBatcher  ]
[Jetty-5 ] DEBUG - about to open PreparedStatement (open
PreparedStatements: 0, globally: 0)
09-07-29 20:28:48,875 [ConnectionManager]
[Jetty-5 ] DEBUG - opening JDBC connection
09-07-29 20:28:48,875 [PoolingDataSource]
[Jetty-5 ] DEBUG - acquiring connection from a
PoolingDataSource containing an XAPool of resource ziptie-ds with 20
connection(s) (19 still available)
09-07-29 20:28:48,875 [XAPool   ]
[Jetty-5 ] DEBUG - trying to recycle a NOT_ACCESSIBLE
connection of an XAPool of resource ziptie-ds with 20 connection(s) (19
still available)
09-07-29 20:28:48,875 [XAPool   ]
[Jetty-5 ] DEBUG - current transaction GTRID is
[6E65746C642D310122C64508DB00D5]
09-07-29 20:28:48,875 [XAPool   ]
[Jetty-5 ] DEBUG - no NOT_ACCESSIBLE connection enlisted
in this transaction
09-07-29 20:28:48,875 [XAPool   ]
[Jetty-5 ] DEBUG - getting a IN_POOL connection from an
XAPool of resource ziptie-ds with 20 connection(s) (19 still available)
09-07-29 20:28:48,875 [XAPool   ]
[Jetty-5 ] DEBUG - waiting for IN_POOL connections count
to be > 0, currently is 19
09-07-29 20:28:48,875 [XAPool   ]
[Jetty-5 ] DEBUG - found IN_POOL connection a
JdbcPooledConnection from datasource ziptie-ds in state IN_POOL wrapping
org.apache.derby.client.clientxaconnect...@b2ad57 from an XAPool of resource
ziptie-ds with 20 connection(s) (19 still available)
09-07-29 20:28:48,890 [JdbcPooledConnection ]
[Jetty-5 ] DEBUG - getting connection handle from a
JdbcPooledConnection from datasource ziptie-ds in state IN_POOL wrapping
org.apache.derby.client.clientxaconnect...@b2ad57
09-07-29 20:28:48,890 [AbstractXAStatefulHolder ]
[Jetty-5 ] DEBUG - notifying 2
stateChangeEventListener(s) about state changing from IN_POOL to ACCESSIBLE
in a JdbcPooledConnection from datasource ziptie-ds in state IN_POOL
wrapping org.apache.derby.client.clientxaconnect...@b2ad57
09-07-29 20:28:48,890 [AbstractXAStatefulHolder ]
[Jetty-5 ] DEBUG - state changing from IN_POOL to
ACCESSIBLE in a JdbcPooledConnection from datasource ziptie-ds in state
IN_POOL wrapping org.apache.derby.client.clientxaconnect...@b2ad57
09-07-29 20:28:48,890 [AbstractXAStatefulHolder ]
[Jetty-5 ] DEBUG - notifying 2
stateChangeEventListener(s) about state changed from IN_POOL to ACCESSIBLE
in a JdbcPooledConnection from datasource ziptie-ds in state ACCESSIBLE
wrapping org.apache.derby.client.clientxaconnect...@b2ad57
09-07-29 20:28:48,890 [JdbcPooledConnection ]
[Jetty-5 ] DEBUG - connection
org.apache.derby.client.clientxaconnect...@b2ad57 was in state IN_POOL,
testing it
09-07-29 20:28:48,890 [JdbcPooledConnection ]
[Jetty-5 ] DEBUG - testing with query 'VALUES 1'
connection of a JdbcPooledConnection from datasource ziptie-ds in state
ACCESSIBLE wrapping org.apache.derby.client.clientxaconnect...@b2ad57
09-07-29 20:28:48,890 [JdbcPooledConnection ]
[Jetty-5 ] DEBUG - successfully tested connection of a
JdbcPooledConnection from datasource ziptie-ds in state ACCESSIBLE wrapping
org.apache.derby.client.clientxaconnect...@b2ad57
09-07-29 20:28:48,890 [JdbcPooledConnection ]
[Jetty-5 ] DEBUG - got connection handle from a
JdbcPooledConnection from datasource ziptie-ds in state ACCESSIBLE wrapping
org.apache.derby.client.clientxaconnect...@b2ad57
09-07-29 20:28:48,890 [PoolingDataSource]
[Jetty-5 ] DEBUG - acquired connection from a
PoolingDataSource containing an XAPool of resource ziptie-ds with 20
connection(s) (18 still available)
09-07-29 20:28:48,890 [SQL  ]
[Jetty-5 ] DEBUG - select this_.id as y0_,
this_.execution_id as y1_, this_.device_id as y2_, this_.error as y3_,
this_.grid_data as y4_, this_.start_time as y5_, this_.e

Derby and Bitronix JTA connection failure

2008-03-16 Thread Brett Wooldridge
Hello list,

I am stuck trying to figure out an issue with using the Bitronix JTA 
Transaction Manager with Derby (v10.3.2.1).  Here are the details:

I'm running Derby as a network server on the standard port.  I have a unit test 
which uses the ClientXADataSource to connect to my Derby database, and this 
succeeds.

The client code looks like this:

ClientXADataSource ds = new ClientXADataSource();
ds.setDatabaseName("ziptie");
ds.setPortNumber(1527);
ds.setServerName("localhost");
Connection connection = ds.getConnection();
As I said, this client connects successfully, and the derby.log records this:

Connection number: 2.
2008-03-16 11:45:05.302 GMT Thread[DRDAConnThread_6,5,derby.daemons] (DATABASE 
= ziptie), (DRDAID = {2}), Apache Derby Network Server connected to database 
ziptie

Then I try to start an application which uses the Bitronix TM to make a 
similar connection to Derby.  However, this connection fails, as recorded in 
the derby.log:

Connection number: 3.
2008-03-16 11:45:10.817 GMT Thread[DRDAConnThread_6,5,derby.daemons] (DATABASE 
= ziptie), (DRDAID = {3}), Database not available

The properties for Bitronix look like this:

resource.ds.className=org.apache.derby.jdbc.ClientXADataSource 
resource.ds.uniqueName=ziptie
resource.ds.driverProperties.databaseName=ziptie 
resource.ds.driverProperties.serverName=localhost 
resource.ds.driverProperties.portNumber=1527
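
For comparison, the programmatic equivalent of those properties should be
roughly this (a sketch from memory of the Bitronix API,
bitronix.tm.resource.jdbc.PoolingDataSource; the pool size is a made-up
value, but BTM requires one):

PoolingDataSource pds = new PoolingDataSource();
pds.setClassName("org.apache.derby.jdbc.ClientXADataSource");
pds.setUniqueName("ziptie");
pds.setMaxPoolSize(5);  // arbitrary; BTM insists on a max pool size
pds.getDriverProperties().setProperty("databaseName", "ziptie");
pds.getDriverProperties().setProperty("serverName", "localhost");
pds.getDriverProperties().setProperty("portNumber", "1527");
pds.init();
Connection c = pds.getConnection();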

When the connection fails, I get a network client trace that looks like this:

(snip)
derby] BEGIN TRACE_DIAGNOSTICS
[EMAIL PROTECTED] java.sql.SQLException
[EMAIL PROTECTED]@ecf7fa] DERBY SQLCA from server
[EMAIL PROTECTED]@ecf7fa] SqlCode= -1
[EMAIL PROTECTED]@ecf7fa] SqlErrd= { 0, 0, 0, 0, 0, 0 }
[EMAIL PROTECTED]@ecf7fa] SqlErrmc   = Database not available
[EMAIL PROTECTED]@ecf7fa] SqlErrp= CSS10030
[EMAIL PROTECTED]@ecf7fa] SqlState   = 08006
[EMAIL PROTECTED]@ecf7fa] SqlWarn=
[EMAIL PROTECTED] SQL state  = 08006
[EMAIL PROTECTED] Error code = -1
[EMAIL PROTECTED] Tokens = Database not available
[EMAIL PROTECTED] Stack trace follows
org.apache.derby.client.am.SqlException: DERBY SQL error: SQLCODE: -1, 
SQLSTATE: 08006, SQLERRMC: Database not available
at org.apache.derby.client.am.Connection.completeSqlca(Unknown Source)
(snip)

I have enabled derby.drda.traceAll on the server and here is what I get for the 
SUCCESSFUL connection:

  (2008.3.16 11:58:53) Request fill DRDAConnThread_6 5

   RECEIVE BUFFER: EXCSAT  (ASCII)   (EBCDIC)
   0 1 2 3 4 5 6 7   8 9 A B C D E F   0123456789ABCDEF  0123456789ABCDEF
   0065D0410001005F  10410010115E8485  .e.A..._.A...^..  ..}..;de
0010   9982A88495839481  89950009116DC485  .m..  rbydncmain..._De
0020   9982A80020115AC4  D5C3F1F0F0F3F061   .Za  rby...!DNC10030/
0030   F1F04BF34BF14BF4  4060404DF5F6F1F7  [EMAIL PROTECTED]@M  10.3.1.4 - 
(5617
0040   F9F45D0014140414  0300072407000724  ..]$...$  94).
0050   0F0007144700  0E1147D8C4C5D9C2  [EMAIL PROTECTED]   ..QDERB
0060   E861D1E5D40026D0  0100020020106D00  .a&. .m.  Y/JVM..}.._.
0070   0611A20004001621  10A98997A3898540  ...!...@  ..s..ziptie
0080   4040404040404040  404040@@@


   (2008.3.16 11:58:53) Reply flush DRDAConnThread_6 5

   SEND BUFFER: EXCSATRD   (ASCII)   (EBCDIC)
   0 1 2 3 4 5 6 7   8 9 A B C D E F   0123456789ABCDEF  0123456789ABCDEF
   009BD04200010095  14430035115ED585  ...B.C.5.^..  ..}n.;Ne
0010   A3A6969992E28599  A58599C39695A399    tworkServerContr
0020   969340E2A38199A3  40D385A5859340C5  [EMAIL PROTECTED]@[EMAIL PROTECTED]  
ol Start Level E
0030   A58595A340C489A2  9781A38388859900  [EMAIL PROTECTED]  vent Dispatcher.
0040   1414041403000724  070007240F000714  ...$...$  
0050   4700101147C1  978183888540C485  @[EMAIL PROTECTED]   ..Apache De
0060   9982A80018116DD5  85A3A6969992E285  ..m.  rby..._NetworkSe
0070   99A58599C39695A3  9996930020115AC3   .Z.  rverControl...!C
0080   E2E2F1F0F0F3F061  F1F04BF34BF24BF1  ...a..K.K.K.  SS10030/10.3.2.1
0090   4060404DF5F9F9F1  F1F05D0010D00200  @[EMAIL PROTECTED]   - (599110)..}..
00A0   02000A14AC000611  A20004...   s..


   (2008.3.16 11:58:53) Request fill DRDAConnThread_6 5

   RECEIVE BUFFER: SECCHK  (ASCII)   (EBCDIC)
   0 1 2 3 4 5 6 7   8 9 A B C D E F   0123456789ABCDEF  0123456789ABCDEF
   002DD04100010027  106E000611A20004  .-.A...'.n..  ..}..>...s..
0010   00162110A98997A3  8985404040404040  ..!...@@  ziptie
0020   404040404047  11A0C1D7D700A8D0  @@..APP.y}
0030   01000200A2200100  162110A98997A389  . ...!..  s..zipti
0040   8540404040404040  404040404621  .@