Still, the fix is only for a multi-threaded update scenario.  I don't know
the access pattern of your application, so it may or may not help resolve
your issue.  I would have expected offline compression of the table to have
fixed it.
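
For reference, the offline compression I'm referring to is Derby's
SYSCS_UTIL.SYSCS_COMPRESS_TABLE system procedure, run while nothing else is
touching the table.  Here is a rough sketch over JDBC; the connection URL,
the 'APP' schema, and the 'SOMETABLE' name from the quoted plan are
placeholders, so substitute whatever your application actually uses:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class CompressSomeTable {
        public static void main(String[] args) throws Exception {
            // Placeholder URL -- point this at your database, and run it
            // while no other transactions are using the table (truly offline).
            try (Connection conn = DriverManager.getConnection("jdbc:derby:myDB")) {
                try (CallableStatement cs = conn.prepareCall(
                        "CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE(?, ?, ?)")) {
                    cs.setString(1, "APP");        // schema name (catalog form, upper case)
                    cs.setString(2, "SOMETABLE");  // table name from the quoted plan
                    cs.setShort(3, (short) 1);     // non-zero = sequential (slower, less temp space)
                    cs.execute();
                }
            }
        }
    }

There is also a quick way to check how much dead space the table is actually
carrying, before and after compressing; I've sketched that after the quoted
thread below.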

On Thu, Sep 24, 2009 at 10:35 AM, T K <sanokist...@yahoo.com> wrote:

> Ouch... I have 10.3.3.0! I will consider the upgrade
>
> Thanks, Brett.
>
> ------------------------------
> From: Brett Wooldridge <brett.wooldri...@gmail.com>
> To: Derby Discussion <derby-user@db.apache.org>
> Sent: Wednesday, September 23, 2009 9:31:51 PM
> Subject: Re: Horrible performance - how can I reclaim table space?
>
> If you are on 10.3, you might consider 10.3.3.1, as a space reclamation
> issue for large objects was resolved (
> http://issues.apache.org/jira/browse/DERBY-4050) between 10.3 and
> 10.3.3.1.  According to that defect, the upgraded version (10.3.3.1) will
> still not reclaim space lost prior to the update, so a full offline
> compression is required.
> -Brett
>
>
> On Thu, Sep 24, 2009 at 10:03 AM, T K <sanokist...@yahoo.com> wrote:
>
>> We have a horrific performance issue with a table of 13 rows, each one
>> containing a very small blob, because the table is presumably full of dead
>> rows and we are table-scanning; here's part of the explain plan:
>>
>>                         Source result set:
>>                                 Table Scan ResultSet for SOMETABLE at read
>> committed isolation level using instantaneous share row locking chosen by
>> the optimizer
>>                                         Number of columns fetched=4
>>                                         Number of pages visited=8546
>>                                         Number of rows qualified=13
>>                                         Number of rows visited=85040
>>                                         optimizer estimated cost:
>> 787747.94
>>
>> So I assume I have over 85,000 dead rows in the table, and compressing it
>> does not reclaim the space. In fact, because we keep adding and deleting
>> rows, the performance gets worse by the hour, and according to the above
>> plan, Derby has processed over 32MB of data just to match 4 of the 13 rows.
>> For the time being, I want to optimize this table scan before I resort to
>> indices and/or reusing rows. This is with Derby 10.3.
>>
>> Any thoughts?
>>
>> Thanks
>>
>>
>
>
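
One more thought: before and after a compression run, you can check how many
pages the table actually holds versus how many rows it carries, which should
confirm (or rule out) the dead-space theory.  The sketch below uses the
org.apache.derby.diag.SpaceTable diagnostic VTI, which as far as I recall is
the form 10.3 accepts (newer releases also expose the same information
through the SYSCS_DIAG.SPACE_TABLE table function); again, the connection
URL and the 'SOMETABLE' name are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ShowSomeTableSpace {
        public static void main(String[] args) throws Exception {
            String sql =
                "SELECT conglomeratename, isindex, numallocatedpages,"
                + " numfreepages, pagesize"
                + " FROM new org.apache.derby.diag.SpaceTable('SOMETABLE') as t";
            try (Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    // One row per conglomerate: the base table plus each index on it.
                    System.out.printf("%s isIndex=%d allocated=%d free=%d pageSize=%d%n",
                            rs.getString(1), rs.getInt(2), rs.getLong(3),
                            rs.getLong(4), rs.getInt(5));
                }
            }
        }
    }

If the allocated page count for the base table stays in the thousands for a
13-row table, the space really isn't being reclaimed, and the upgrade plus a
full offline compression is the route I'd pursue.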
