Thanks for your response, Mike.

I did consider this; the current repro takes about 4-5 minutes and 42MB of disk space. I was able to hit this bug just by creating enough records to allocate a 2nd alloc page and then deleting enough records that the 2nd alloc page is empty (to get newHighestPage = -1). I am hoping that this 5-minute increase in the overall derbyall run time will be acceptable. Please let me know.
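For reference, here is a minimal sketch of the repro approach described above. It is not the attached A606Test.java; the table name, padding column, and row counts are illustrative guesses that would need tuning until the 2nd alloc page is actually allocated and then emptied:

<snip>
import java.sql.*;

// Minimal repro sketch (illustrative only; row counts need tuning).
public class Repro606Sketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn =
            DriverManager.getConnection("jdbc:derby:repro606;create=true");
        Statement s = conn.createStatement();
        s.executeUpdate("CREATE TABLE T606 (ID INT, PAD CHAR(200))");

        // Step 1: insert enough rows to force allocation of a 2nd alloc page.
        PreparedStatement ins =
            conn.prepareStatement("INSERT INTO T606 VALUES (?, ?)");
        for (int i = 0; i < 15000; i++) {
            ins.setInt(1, i);
            ins.setString(2, "pad");
            ins.executeUpdate();
        }

        // Step 2: delete enough rows that the 2nd alloc page becomes empty.
        s.executeUpdate("DELETE FROM T606 WHERE ID >= 1000");

        // Step 3: in-place compress; with the 2nd alloc page empty this
        // should drive newHighestPage to -1 and hit the bug.
        CallableStatement cs = conn.prepareCall(
            "CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(?, ?, ?, ?, ?)");
        cs.setString(1, "APP");
        cs.setString(2, "T606");
        cs.setShort(3, (short) 1);  // purge rows
        cs.setShort(4, (short) 1);  // defragment rows
        cs.setShort(5, (short) 1);  // truncate end
        cs.execute();
        conn.close();
    }
}
</snip>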

I believe this is a good case to cover in derbyall. I was planning simply to extend test1 in OnlineCompressTest to take another number_of_rows parameter higher than 4000. As you mentioned in your reply, this extension might cause lock escalation; I will try to find a work-around. At worst, I think I may have to create another testX method (a rough skeleton follows).
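To make the plan concrete, here is a rough skeleton of what I have in mind; the actual test1 signature in OnlineCompressTest may differ, and test8/BIG_TABLE below are placeholder names:

<snip>
// Rough skeleton only; not the current code.
private void test1(Connection conn, String test_name, String table_name,
                   int number_of_rows) throws SQLException {
    // ... existing test1 body, with the hard-coded 4000-row count
    // replaced by the new number_of_rows parameter ...
}

// Possible separate testX method, if extending test1 proves awkward:
private void test8(Connection conn) throws SQLException {
    // enough rows to force a 2nd alloc page; exact count to be tuned
    test1(conn, "test8", "BIG_TABLE", 15000);
}
</snip>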

Thanks
Mayuresh


Mike Matrigali wrote:

Also note that, depending on the amount of disk space and the time needed
to create your "(very) large table", it may not be appropriate to add your
case to this test, which is run as part of everyone's nightly run and
may need to be run by every developer as part of a checkin. I don't
think there is a fixed requirement, but I think, for instance, we decided
that tests dealing with 2 gig blobs were too big to be forced into the
nightly dev run.

How much disk space and time does your case take?

There is another suite of tests, called largeData, which is intended for
tests with large disk requirements. If it saves you time, you should
feel free to extend OnlineCompressTest with your own test class and
reuse as much code as possible.

Mayuresh Nirhali (JIRA) wrote:

[ http://issues.apache.org/jira/browse/DERBY-606?page=comments#action_12450785 ]

Mayuresh Nirhali commented on DERBY-606:
----------------------------------------

I looked at OnlineCompressTest and realized that the simplest way to reproduce this case is to increase the number of rows added to the table in one of the existing testcases. However, I see the following comment in the testcase:

<snip>
     * 4000 rows  - reasonable number of pages to test out, still 1 alloc page
     *
     * note that row numbers greater than 4000 may lead to lock escalation
     * issues, if queries like "delete from x" are used to delete all the
     * rows.
</snip>

This is very relevant to the testcase I would like to add, so I would like to understand the lock escalation issue here. Has anyone seen this kind of issue before? Any pointers?
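For what it's worth, one work-around I am considering is to delete in small, separately committed batches so that no single transaction accumulates enough row locks to be escalated to a table lock; Derby's threshold is controlled by the documented derby.locks.escalationThreshold property (default 5000). Here is a sketch (my own; deleteAllInBatches is not an existing helper, and it assumes an ID column numbered 0..totalRows-1 as in the repro):

<snip>
// Delete by key range in separately committed batches, keeping each
// transaction's row-lock count below the escalation threshold.
static void deleteAllInBatches(Connection conn, int totalRows, int batchSize)
        throws SQLException {
    conn.setAutoCommit(false);
    PreparedStatement del = conn.prepareStatement(
        "DELETE FROM T606 WHERE ID >= ? AND ID < ?");
    for (int lo = 0; lo < totalRows; lo += batchSize) {
        del.setInt(1, lo);
        del.setInt(2, lo + batchSize);
        del.executeUpdate();
        conn.commit();  // release row locks before the count builds up
    }
    del.close();
}
</snip>

With batchSize kept well under 5000 (say 1000), the delete phase should stay at row-level locking.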

The repro attached to the bug has a very similar testcase, and I have not seen any problems with it so far. So it might be that the lock escalation issue has already been fixed (I did not find any related JIRA for it, though). Can someone please confirm? I can update the comment if that problem has been fixed.

Thanks



SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE fails on (very) large tables
--------------------------------------------------------------------

               Key: DERBY-606
               URL: http://issues.apache.org/jira/browse/DERBY-606
           Project: Derby
        Issue Type: Bug
        Components: Store
  Affects Versions: 10.1.1.0
       Environment: Java 1.5.0_04 on Windows Server 2003 Web Edition
          Reporter: Jeffrey Aguilera
       Assigned To: Mayuresh Nirhali
           Fix For: 10.3.0.0

       Attachments: A606Test.java, derby606_v1.diff


SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE fails with one of the following error messages when applied to a very large table (>2GB):

Log operation null encounters error writing itself out to the log stream, this could be caused by an errant log operation or internal log buffer full due to excessively large log operation. SQLSTATE: XJ001: Java exception: ': java.io.IOException'.
or
The exception 'java.lang.ArrayIndexOutOfBoundsException' was thrown while evaluating an expression. SQLSTATE: XJ001: Java exception: ': java.lang.ArrayIndexOutOfBoundsException'.
In either case, no entry is written to the console log or to derby.log.




