Re: Out-of-memory errors

2021-02-18 Thread John English

On 15/02/2021 21:19, Rick Hillegas wrote:
It looks like the subquery isn't doing any good. I would have hoped that 
the optimizer would be smart enough to join the subquery result to the 
primary key on id and then use the resulting row positions to probe into 
the heap. But the plan is just joining the subquery result to a full 
table scan. I don't understand why the subquery plan is so convoluted. 
However, it looks like you eliminated the memory-exhausting sort.


Here for comparison is the result of the original query, which takes 
about the same time despite requiring a sort. I still don't understand 
why the descending index on time is being ignored:


From "Tuning Derby" 
(https://db.apache.org/derby/docs/10.8/tuning/ctunoptimz56859.html):
"Usually, sorting requires an extra step to put the data into the right 
order. This extra step can be avoided for data that are already in the 
right order. *For example, if a single-table query has an ORDER BY on a 
single column, and there is an index on that column*, sorting can be 
avoided if Derby uses the index as the access path."


The query below seems to me to satisfy that requirement.

Statement Name:
null
Statement Text:
SELECT DateTimeFormat(t_time,null) AS t_time,
   facility,event,details,name,username,sector,item
FROM system_log_view
ORDER BY time DESC
NULLS LAST
FETCH NEXT 20 ROWS ONLY
Parse Time: 0
Bind Time: 0
Optimize Time: 0
Generate Time: 0
Compile Time: 0
Execute Time: 0
Begin Compilation Timestamp : null
End Compilation Timestamp : null
Begin Execution Timestamp : null
End Execution Timestamp : null
Statement Execution Plan Text:
Scroll Insensitive ResultSet:
Number of opens = 1
Rows seen = 20
Number of reads from hash table = 20
Number of writes to hash table = 20
constructor time (milliseconds) = 0
open time (milliseconds) = 0
next time (milliseconds) = 0
close time (milliseconds) = 0
optimizer estimated row count: 261301.00
optimizer estimated cost: 269004.12
Source result set:
Row Count (1):
Number of opens = 1
Rows seen = 20
Rows filtered = 0
constructor time (milliseconds) = 0
open time (milliseconds) = 0
next time (milliseconds) = 0
close time (milliseconds) = 0
optimizer estimated row count: 261301.00
optimizer estimated cost: 269004.12
Source result set:
Project-Restrict ResultSet (5):
Number of opens = 1
Rows seen = 20
Rows filtered = 0
restriction = false
projection = true
constructor time (milliseconds) = 0
open time (milliseconds) = 0
next time (milliseconds) = 0
close time (milliseconds) = 0
restriction time (milliseconds) = 0
projection time (milliseconds) = 0
optimizer estimated row count: 261301.00
optimizer estimated cost: 269004.12
Source result set:
Sort ResultSet:
Number of opens = 1
Rows input = 261297
Rows returned = 20
Eliminate duplicates = false
In sorted order = false
Sort information:
Number of merge runs=18
Number of rows input=261297
Number of rows output=261297
Size of merge runs=[14335, 14335, 14335, 14335, 14335, 
14335, 14335, 14335, 14335, 14335, 14335, 14335, 14335, 14335, 14335, 
14335, 14335, 14335]

Sort type=external
constructor time (milliseconds) = 0
open time (milliseconds) = 0
next time (milliseconds) = 0
close time (milliseconds) = 0
optimizer estimated row count: 261301.00
optimizer estimated cost: 269004.12
Source result set:
Project-Restrict ResultSet (3):
Number of opens = 1
Rows seen = 261297
Rows filtered = 0
restriction = false
projection = true
constructor time (milliseconds) = 0
open time (milliseconds) = 0
next time (milliseconds) = 0
close time (milliseconds) = 0
restriction time (milliseconds) = 0
projection time (milliseconds) = 0
optimizer estimated row count: 261301.00
optimizer estimated cost: 269004.12
Source result set:
Table Scan ResultSet for SYSTEM_LOG at read 
uncommitted isolation level using share row locking chosen by the optimizer

Number of opens = 1
Rows seen = 261297
Rows filtered = 0
Fetch Size = 1
constructor time (milliseconds) = 0
open time (milliseconds) = 0
next time (milliseconds) = 0
 

Re: Out-of-memory errors

2021-02-15 Thread Rick Hillegas
It looks like the subquery isn't doing any good. I would have hoped that 
the optimizer would be smart enough to join the subquery result to the 
primary key on id and then use the resulting row positions to probe into 
the heap. But the plan is just joining the subquery result to a full 
table scan. I don't understand why the subquery plan is so convoluted. 
However, it looks like you eliminated the memory-exhausting sort.
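If you want to experiment further, an explicit join against the id list
might express that intent more directly (untested sketch, same tables and
columns as in your query):

SELECT s.time AS t_time,
       s.facility, s.event, s.details, s.name, s.username, s.sector, s.item
FROM system_log s
JOIN
  (
    SELECT id
    FROM system_log
    ORDER BY time DESC
    NULLS LAST
    FETCH NEXT 20 ROWS ONLY
  ) AS latest
  ON latest.id = s.id
ORDER BY t_time;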


On 2/15/21 8:49 AM, John English wrote:

On 09/02/2021 18:05, Rick Hillegas wrote:
As Bryan points out, please consult the Tuning Guide for information 
on how to view your query plan. In any event, your descending index 
is not a covering index. That is, it does not contain all of the 
columns in your query. That may be what forces the optimizer to 
choose a full table scan.


From "Tuning Derby" 
(https://db.apache.org/derby/docs/10.8/tuning/ctunoptimz56859.html):
Usually, sorting requires an extra step to put the data into the right 
order. This extra step can be avoided for data that are already in the 
right order. For example, if a single-table query has an ORDER BY on a 
single column, and there is an index on that column, sorting can be 
avoided if Derby uses the index as the access path.


This seems to be my situation: I am selecting from (a view into) a 
single table with an ORDER BY on a single column, and there is an 
index on that column.


The timing I reported earlier (using your suggested nested query) 
prompted me to investigate further. I executed it with runtime 
statistics enabled, and below is what was reported. I cannot make head 
nor tail of what is happening here, but it looks incredibly complicated.


Statement Name:
null
Statement Text:
SELECT time as t_time,
 facility,event,details,name,username,sector,item
  FROM system_log
  WHERE id IN
  (
    SELECT id
    FROM system_log
    ORDER BY time DESC
    NULLS LAST
    FETCH NEXT 20 ROWS ONLY
  )
  ORDER BY t_time
Parse Time: 0
Bind Time: 0
Optimize Time: 0
Generate Time: 0
Compile Time: 0
Execute Time: 0
Begin Compilation Timestamp : null
End Compilation Timestamp : null
Begin Execution Timestamp : null
End Execution Timestamp : null
Statement Execution Plan Text:
Scroll Insensitive ResultSet:
Number of opens = 1
Rows seen = 20
Number of reads from hash table = 20
Number of writes to hash table = 20
constructor time (milliseconds) = 0
open time (milliseconds) = 0
next time (milliseconds) = 0
close time (milliseconds) = 0
optimizer estimated row count: 261251.00
optimizer estimated cost: 268953.12
Source result set:
Sort ResultSet:
Number of opens = 1
Rows input = 20
Rows returned = 20
Eliminate duplicates = false
In sorted order = false
Sort information:
    Number of merge runs=1
    Number of rows input=20
    Number of rows output=20
    Size of merge runs=[14]
    Sort type=external
    constructor time (milliseconds) = 0
    open time (milliseconds) = 0
    next time (milliseconds) = 0
    close time (milliseconds) = 0
    optimizer estimated row count: 261251.00
    optimizer estimated cost: 268953.12
Source result set:
    Attached subqueries:
    Begin Subquery Number 0
    Any ResultSet  (Attached to 2):
    Number of opens = 261296
    Rows seen = 261296
    constructor time (milliseconds) = 0
    open time (milliseconds) = 0
    next time (milliseconds) = 0
    close time (milliseconds) = 0
    optimizer estimated row count: 68252085001.00
    optimizer estimated cost: 70264271553.12
    Source result set:
    Project-Restrict ResultSet (7):
    Number of opens = 261296
    Rows seen = 5225730
    Rows filtered = 5225710
    restriction = true
    projection = true
    constructor time (milliseconds) = 0
    open time (milliseconds) = 0
    next time (milliseconds) = 0
    close time (milliseconds) = 0
    restriction time (milliseconds) = 0
    projection time (milliseconds) = 0
    optimizer estimated row count: 68252085001.00
    optimizer estimated cost: 70264271553.12
    Source result set:
    Union ResultSet:
    Number of opens = 261296
    Rows seen from the left = 4964453
    Rows seen from the right = 261277
    Rows returned = 5225730
    constructor time (milliseconds) = 0
    open time (milliseconds) = 0
    next time (milliseconds) = 0
    close time (milliseconds) = 0
    optimizer estimated row count: 20.00
    optimizer estimated cost: 0.00
   

Re: Out-of-memory errors

2021-02-15 Thread John English

On 09/02/2021 18:05, Rick Hillegas wrote:
As Bryan points out, please consult the Tuning Guide for information on 
how to view your query plan. In any event, your descending index is not 
a covering index. That is, it does not contain all of the columns in 
your query. That may be what forces the optimizer to choose a full table 
scan.


From "Tuning Derby" 
(https://db.apache.org/derby/docs/10.8/tuning/ctunoptimz56859.html):
Usually, sorting requires an extra step to put the data into the right 
order. This extra step can be avoided for data that are already in the 
right order. For example, if a single-table query has an ORDER BY on a 
single column, and there is an index on that column, sorting can be 
avoided if Derby uses the index as the access path.


This seems to be my situation: I am selecting from (a view into) a 
single table with an ORDER BY on a single column, and there is an index 
on that column.


The timing I reported earlier (using your suggested nested query) 
prompted me to investigate further. I executed it with runtime 
statistics enabled, and below is what was reported. I cannot make head 
nor tail of what is happening here, but it looks incredibly complicated.


Statement Name:
null
Statement Text:
SELECT time as t_time,
 facility,event,details,name,username,sector,item
  FROM system_log
  WHERE id IN
  (
    SELECT id
    FROM system_log
    ORDER BY time DESC
    NULLS LAST
    FETCH NEXT 20 ROWS ONLY
  )
  ORDER BY t_time
Parse Time: 0
Bind Time: 0
Optimize Time: 0
Generate Time: 0
Compile Time: 0
Execute Time: 0
Begin Compilation Timestamp : null
End Compilation Timestamp : null
Begin Execution Timestamp : null
End Execution Timestamp : null
Statement Execution Plan Text:
Scroll Insensitive ResultSet:
Number of opens = 1
Rows seen = 20
Number of reads from hash table = 20
Number of writes to hash table = 20
constructor time (milliseconds) = 0
open time (milliseconds) = 0
next time (milliseconds) = 0
close time (milliseconds) = 0
optimizer estimated row count: 261251.00
optimizer estimated cost: 268953.12
Source result set:
Sort ResultSet:
Number of opens = 1
Rows input = 20
Rows returned = 20
Eliminate duplicates = false
In sorted order = false
Sort information:
Number of merge runs=1
Number of rows input=20
Number of rows output=20
Size of merge runs=[14]
Sort type=external
constructor time (milliseconds) = 0
open time (milliseconds) = 0
next time (milliseconds) = 0
close time (milliseconds) = 0
optimizer estimated row count: 261251.00
optimizer estimated cost: 268953.12
Source result set:
Attached subqueries:
Begin Subquery Number 0
Any ResultSet  (Attached to 2):
Number of opens = 261296
Rows seen = 261296
constructor time (milliseconds) = 0
open time (milliseconds) = 0
next time (milliseconds) = 0
close time (milliseconds) = 0
optimizer estimated row count: 68252085001.00
optimizer estimated cost: 70264271553.12
Source result set:
Project-Restrict ResultSet (7):
Number of opens = 261296
Rows seen = 5225730
Rows filtered = 5225710
restriction = true
projection = true
constructor time (milliseconds) = 0
open time (milliseconds) = 0
next time (milliseconds) = 0
close time (milliseconds) = 0
restriction time (milliseconds) = 0
projection time (milliseconds) = 0
optimizer estimated row count: 
68252085001.00
optimizer estimated cost: 70264271553.12
Source result set:
Union ResultSet:
Number of opens = 261296
Rows seen from the left = 4964453
Rows seen from the right = 261277
Rows returned = 5225730
constructor time (milliseconds) 
= 0
   

Re: Out-of-memory errors

2021-02-12 Thread John English

On 10/02/2021 18:51, Rick Hillegas wrote:
The index key (time desc, username, name, facility, event, sector, item, 
details) could potentially be 32867 bytes long. However, an index key 
must be less than 1/2 the page size, according to the "Page size and key 
size" topic at 
https://db.apache.org/derby/docs/10.15/ref/rrefsqlj20937.html. That is, 
your new index key is too big.


OK.

With the subquery-based approach, your covering index key would be only 
20 bytes long:


CREATE INDEX log_index ON system_log (time DESC, id);


Timing tests on my development rig (table size 294,217 rows):

SELECT *
FROM system_log
WHERE id IN
  (
SELECT id
FROM system_log
ORDER BY time DESC
NULLS LAST
FETCH NEXT 20 ROWS ONLY
  );

4510ms with no index, 3277ms with index as above.

SELECT *
FROM system_log
ORDER BY time DESC
NULLS LAST
FETCH NEXT 20 ROWS ONLY;

3452ms with no index, 1903ms with index as above.

So it seems that the nested select makes things worse rather than 
better... :(


--
John English


Re: Out-of-memory errors

2021-02-10 Thread Rick Hillegas
The index key (time desc, username, name, facility, event, sector, item, 
details) could potentially be 32867 bytes long. However, an index key 
must be less than 1/2 the page size, according to the "Page size and key 
size" topic at 
https://db.apache.org/derby/docs/10.15/ref/rrefsqlj20937.html. That is, 
your new index key is too big.


With the subquery-based approach, your covering index key would be only 
20 bytes long:


CREATE INDEX log_index ON system_log (time DESC, id);





Re: Out-of-memory errors

2021-02-10 Thread John English

On 09/02/2021 19:50, Rick Hillegas wrote:

On 2/9/21 8:21 AM, John English wrote:

On 09/02/2021 18:05, Rick Hillegas wrote:
As Bryan points out, please consult the Tuning Guide for information 
on how to view your query plan. In any event, your descending index 
is not a covering index. That is, it does not contain all of the 
columns in your query. That may be what forces the optimizer to 
choose a full table scan.


OK. The concept of a "covering index" is a new one on me, so I need to 
read up on that. Would it be possible to create a covering index for 
this table just by specifying "time DESC" followed by the other 
columns (i.e. everything except id)?

Yes.

Would it be a reasonable thing to do in terms of resource usage?
It would probably double the on-disk footprint of your table since the 
index would have to contain most of the columns (including the big 
DETAILS column). You have a space vs. time tradeoff here if the 
subquery-based solution won't work for your app.


I've just tried this:

create index log_index on system_log(time desc, username, name, 
facility, event, sector, item, details);


but I then get the following error:

SQL exception: Limitation: Record of a btree secondary index cannot be 
updated or inserted due to lack of space on the page. Use the parameters 
derby.storage.pageSize and/or derby.storage.pageReservedSpace to work 
around this limitation.


In my derby.properties I've added these two lines:

derby.storage.pageSize=32768
derby.storage.pageReservedSpace=0

but I still get the same error. (I also tried 
derby.storage.pageReservedSpace=90 just in case I'd misunderstood which 
direction to set it, but that didn't work either.)
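
As far as I can tell (untested, so treat this as a sketch), the page-size
properties are only consulted when a conglomerate is created, and
derby.properties is only re-read when the engine boots; an alternative is
to set the property on the database itself and then recreate the index:

  -- database-wide property; applies to conglomerates created afterwards
  CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY('derby.storage.pageSize', '32768');

  CREATE INDEX log_index ON system_log
    (time DESC, username, name, facility, event, sector, item, details);

Even then, 32768 bytes is the largest page size Derby allows and an index
key has to fit in half a page, so a key that includes the 32000-byte
DETAILS column probably cannot fit no matter what the settings are.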


So it looks like there isn't any way to create a covering index for this 
table... or is there something else I can do?


--
John English


Re: Out-of-memory errors

2021-02-09 Thread Rick Hillegas

On 2/9/21 8:21 AM, John English wrote:

On 09/02/2021 18:05, Rick Hillegas wrote:
As Bryan points out, please consult the Tuning Guide for information 
on how to view your query plan. In any event, your descending index 
is not a covering index. That is, it does not contain all of the 
columns in your query. That may be what forces the optimizer to 
choose a full table scan.


OK. The concept of a "covering index" is a new one on me, so I need to 
read up on that. Would it be possible to create a covering index for 
this table just by specifying "time DESC" followed by the other 
columns (i.e. everything except id)?

Yes.

Would it be a reasonable thing to do in terms of resource usage?
It would probably double the on-disk footprint of your table since the 
index would have to contain most of the columns (including the big 
DETAILS column). You have a space vs. time tradeoff here if the 
subquery-based solution won't work for your app.


You might be able to trick the optimizer into generating a more 
efficient plan by re-writing your query as follows:


SELECT time as t_time,
  facility,event,details,name,username,sector,item
   FROM system_log
   WHERE id IN
   (
 SELECT id
 FROM system_log
 ORDER BY time DESC
 NULLS LAST
 FETCH NEXT 20 ROWS ONLY
   )
   ORDER BY t_time
;








Re: Out-of-memory errors

2021-02-09 Thread John English

On 09/02/2021 18:05, Rick Hillegas wrote:
As Bryan points out, please consult the Tuning Guide for information on 
how to view your query plan. In any event, your descending index is not 
a covering index. That is, it does not contain all of the columns in 
your query. That may be what forces the optimizer to choose a full table 
scan.


OK. The concept of a "covering index" is a new one on me, so I need to 
read up on that. Would it be possible to create a covering index for 
this table just by specifying "time DESC" followed by the other columns 
(i.e. everything except id)? Would it be a reasonable thing to do in 
terms of resource usage?


You might be able to trick the optimizer into generating a more 
efficient plan by re-writing your query as follows:


SELECT time as t_time,
  facility,event,details,name,username,sector,item
   FROM system_log
   WHERE id IN
   (
     SELECT id
     FROM system_log
     ORDER BY time DESC
     NULLS LAST
     FETCH NEXT 20 ROWS ONLY
   )
   ORDER BY t_time
;


That's not so easy, since all the tables in the webapp are generated by 
a showTable() method, which displays the results of a view as a table 
that allows sorting, filtering, selecting the number of rows per page 
and so on, using a ridiculously long list of parameters to specify all 
the different possibilities. But I'll think about it.


--
John English


Re: Out-of-memory errors

2021-02-09 Thread Bryan Pendleton
I think Rick was suggesting that if you got the right query plan,
using your index as efficiently as possible, then that would be the
best solution? You can find a lot of information about this in
http://db.apache.org/derby/docs/10.15/tuning/

Perhaps your statistics are inaccurate, so Derby is not choosing the
index-based plan? I think there are ways to force Derby to choose that
plan, and there are also ways to ensure that your statistics are
accurate.

Specifically, see:
http://db.apache.org/derby/docs/10.15/tuning/ctunperfstatistics.html
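
For example (untested, and using the table and index names from earlier
in this thread -- 'APP' is just a guess at your schema), you could refresh
the statistics and, if necessary, pin the plan to the index with an
optimizer override:

CALL SYSCS_UTIL.SYSCS_UPDATE_STATISTICS('APP', 'SYSTEM_LOG', null);

SELECT time AS t_time,
       facility,event,details,name,username,sector,item
FROM system_log --DERBY-PROPERTIES index=LOG_INDEX
ORDER BY time DESC
NULLS LAST
FETCH NEXT 20 ROWS ONLY;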

bryan

On Tue, Feb 9, 2021 at 4:04 AM John English  wrote:
>
> On 08/02/2021 18:43, Rick Hillegas wrote:
> > I would recommend throwing more memory at your JVM or adding a
> > descending index to system_log.time.
>
> So, there is already a descending index to system_log.time, and I've
> upped the heap to 2GB. I also added the time taken to service the
> request as an HTML comment at the end of the page.
>
> The disk is an SSD, and the table contains about 261,000 rows. Accessing
> the first 20 rows ordered by time desc, it takes about 13 seconds, which
> seems a bit excessive to me. Ordering by event also takes about the same
> (12690ms as opposed to 13151ms), so the index doesn't seem to have much
> effect.
>
> Not sure what to try next... any ideas?
>
> --
> John English


Re: Out-of-memory errors

2021-02-09 Thread John English

On 08/02/2021 18:43, Rick Hillegas wrote:
I would recommend throwing more memory at your JVM or adding a 
descending index to system_log.time.


So, there is already a descending index to system_log.time, and I've 
upped the heap to 2GB. I also added the time taken to service the 
request as an HTML comment at the end of the page.


The disk is an SSD, and the table contains about 261,000 rows. Accessing 
the first 20 rows ordered by time desc takes about 13 seconds, which 
seems a bit excessive to me. Ordering by event takes about the same time 
(12690ms as opposed to 13151ms), so the index doesn't seem to have much 
effect.


Not sure what to try next... any ideas?

--
John English


Re: Out-of-memory errors

2021-02-08 Thread John English

On 08/02/2021 18:43, Rick Hillegas wrote:
The MergeInserter is doing a merge sort because there is no usable 
descending index on system_log.time. The storage layer's page cache (the 
ConcurrentCache) is filling up because you have to fault-in the entire 
contents of system_log. The logic in MergeInserter.insert() does not 
seem to be smart enough to realize that it is close to exhausting memory 
and needs to spill a merge run to disk.


I would recommend throwing more memory at your JVM or adding a 
descending index to system_log.time.


Thanks for this.

I already have a descending index on time -- here is the full 
declaration of system_log:


CREATE TABLE system_log (
  id        INTEGER        GENERATED ALWAYS AS IDENTITY,
  time      TIMESTAMP      DEFAULT NULL,
  username  VARCHAR(15),
  name      VARCHAR(520),
  facility  VARCHAR(15)    NOT NULL,
  event     VARCHAR(31)    NOT NULL,
  sector    VARCHAR(15),
  item      VARCHAR(255),
  details   VARCHAR(32000),
  CONSTRAINT systemlog_pk  PRIMARY KEY (id)
);

CREATE INDEX log_index ON system_log (time DESC);

Is there a way to check if the index is being used?
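(Presumably something like this from ij, going by the Tuning Guide, would 
show the chosen plan -- I haven't tried it yet:)

  CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1);
  -- run the query in the same connection, then print the plan for the
  -- last statement executed:
  VALUES SYSCS_UTIL.SYSCS_GET_RUNTIMESTATISTICS();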

Alternatively, the id column follows essentially the same ordering as 
time (insertions always use CURRENT_TIMESTAMP), so ordering by id 
descending should return the same rows as ordering by time descending; 
I'm trying to think of a way to special-case this one to order by id 
instead of time, along the lines of the sketch below, to see if that 
makes a difference. But since I allow the data to be sorted on any 
column for display, it isn't going to help much.
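
Something like this is what I have in mind for the default ordering 
(untested):

  -- id is generated in insertion order, so this should match ORDER BY time DESC
  SELECT time AS t_time,
         facility,event,details,name,username,sector,item
  FROM system_log
  ORDER BY id DESC
  FETCH NEXT 20 ROWS ONLY;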


Access to this particular table is slow anyway, which I had thought 
might be due to locking issues (since anything that changes the system 
state gets logged, as well as certain read requests). I'm using the 
default isolation level and autocommit for insertions. Ideas on ways to 
optimise this sort of use case would be welcome!


Meanwhile I'm bumping up the memory. The table is currently about 
200,000 rows, so I assume that at some point when it gets bigger this 
will just start happening again.


--
John English


Re: Out-of-memory errors

2021-02-08 Thread Rick Hillegas
The MergeInserter is doing a merge sort because there is no usable 
descending index on system_log.time. The storage layer's page cache (the 
ConcurrentCache) is filling up because you have to fault-in the entire 
contents of system_log. The logic in MergeInserter.insert() does not 
seem to be smart enough to realize that it is close to exhausting memory 
and needs to spill a merge run to disk.


I would recommend throwing more memory at your JVM or adding a 
descending index to system_log.time.



On 2/8/21 1:41 AM, Peter Ondruška wrote:

OutOfMemoryError: GC overhead limit exceeded happens when the JVM spends too 
much time garbage collecting. Could it be that your heap is too small? One way 
to check whether DateTimeFormat may be leaking memory is to run the query 
without this function.

-Original Message-
From: John English 
Sent: Monday, February 8, 2021 10:28 AM
To: Derby Discussion 
Subject: Out-of-memory errors

In the last few days I've suddenly had a bunch of OOM exceptions. I'm using 
Derby 10.9.1.0, Oracle Java 1.8.0 on Ubuntu 64-bit, and haven't upgraded for a 
while (probably years, looking at those numbers).

The place where they happen is in a call to executeQuery() in a method which 
displays a view as a table. Analysing the heap dump for the latest one with the 
Eclipse memory analyser shows this:

One instance of "org.apache.derby.impl.store.access.sort.MergeInserter"
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0"
occupies 134,841,800 (64.65%) bytes. The memory is accumulated in one instance of 
"org.apache.derby.impl.store.access.sort.SortBuffer", loaded by 
"org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0", which occupies 134,841,496 
(64.65%) bytes.

One instance of "org.apache.derby.impl.services.cache.ConcurrentCache"
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0"
occupies 43,766,832 (20.98%) bytes. The memory is accumulated in one instance of 
"org.apache.derby.impl.services.cache.ConcurrentCache",
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0", which 
occupies 43,766,832 (20.98%) bytes.

The query itself was:

SELECT DateTimeFormat(t_time,null) AS t_time,
   facility,event,details,name,username,sector,item
FROM system_log_view
ORDER BY time DESC
NULLS LAST
FETCH NEXT 20 ROWS ONLY

The view is nothing special except that t_time is a duplicate of the time 
column (the timestamp of the log entry) used to create a separate formatted 
copy for display purposes:

CREATE VIEW system_log_view AS
  SELECT  time AS t_time,
  facility,
  event,
  details,
  name,
  username,
  sector,
  item,
  time
  FROM    system_log;

The stack trace shows the error is occurring inside the call to DateTimeFormat, 
which is again nothing special:

public static final String formatDateTime (Timestamp date, String locale) {
  if (date == null) {
    return null;
  }
  else {
    SimpleDateFormat fmt =
        new SimpleDateFormat(translate("d-MMM-yyyy 'at' HH:mm",locale));
    return translatePhrases(fmt.format(date),locale);
  }
}

Here's the start of the stack trace:

java.sql.SQLException: The exception 'java.lang.OutOfMemoryError: GC overhead 
limit exceeded' was thrown while evaluating an expression.
at java.text.DigitList.clone()Ljava/lang/Object; (DigitList.java:736)
at java.text.DecimalFormat.clone()Ljava/lang/Object;
(DecimalFormat.java:2711)
at java.text.SimpleDateFormat.initialize(Ljava/util/Locale;)V
(SimpleDateFormat.java:645)
at
java.text.SimpleDateFormat.<init>(Ljava/lang/String;Ljava/util/Locale;)V
(SimpleDateFormat.java:605)
at java.text.SimpleDateFormat.<init>(Ljava/lang/String;)V
(SimpleDateFormat.java:580)
at
database.Functions.formatDateTime(Ljava/sql/Timestamp;Ljava/lang/String;)Ljava/lang/String;
(Functions.java:51)

Does anyone have any idea what might be happening, or what I can do to find out 
more?

TIA,
--
John English





Re: Out-of-memory errors

2021-02-08 Thread John English

On 08/02/2021 11:28, John English wrote:
In the last few days I've suddenly had a bunch of OOM exceptions. I'm 
using Derby 10.9.1.0, Oracle Java 1.8.0 on Ubuntu 64-bit, and haven't 
upgraded for a while (probably years, looking at those numbers).


The place where they happen is in a call to executeQuery() in a method 
which displays a view as a table.


Following on from what I wrote earlier, another of the heap dumps shows 
the exception happening at the same executeQuery() call, but the first 
few stack frames look like this:


  at org.apache.derby.iapi.types.SQLChar.<init>(Ljava/lang/String;)V 
(Unknown Source)
  at org.apache.derby.iapi.types.SQLVarchar.<init>(Ljava/lang/String;)V 
(Unknown Source)
  at 
org.apache.derby.iapi.types.SQLVarchar.cloneValue(Z)Lorg/apache/derby/iapi/types/DataValueDescriptor; 
(Unknown Source)
  at 
org.apache.derby.impl.sql.execute.BasicSortObserver.getClone([Lorg/apache/derby/iapi/types/DataValueDescriptor;)[Lorg/apache/derby/iapi/types/DataValueDescriptor; 
(Unknown Source)
  at 
org.apache.derby.impl.sql.execute.BasicSortObserver.insertNonDuplicateKey([Lorg/apache/derby/iapi/types/DataValueDescriptor;)[Lorg/apache/derby/iapi/types/DataValueDescriptor; 
(Unknown Source)
  at 
org.apache.derby.impl.store.access.sort.SortBuffer.insert([Lorg/apache/derby/iapi/types/DataValueDescriptor;)I 
(Unknown Source)
  at 
org.apache.derby.impl.store.access.sort.MergeInserter.insert([Lorg/apache/derby/iapi/types/DataValueDescriptor;)V 
(Unknown Source)
  at 
org.apache.derby.impl.sql.execute.SortResultSet.loadSorter()Lorg/apache/derby/iapi/store/access/ScanController; 
(Unknown Source)


The leak suspects seem to be the same:

One instance of "org.apache.derby.impl.store.access.sort.SortBuffer" 
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf047e888" 
occupies 157,386,240 (74.59%) bytes. The memory is accumulated in one 
instance of "org.apache.derby.impl.store.access.sort.SortBuffer", loaded 
by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf047e888", which 
occupies 157,386,240 (74.59%) bytes.


One instance of "org.apache.derby.impl.services.cache.ConcurrentCache" 
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf047e888" 
occupies 42,742,864 (20.26%) bytes. The memory is accumulated in one 
instance of "org.apache.derby.impl.services.cache.ConcurrentCache", 
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf047e888", 
which occupies 42,742,864 (20.26%) bytes.


--
John English


RE: Out-of-memory errors

2021-02-08 Thread Peter Ondruška
OutOfMemoryError: GC overhead limit exceeded happens when the JVM spends too 
much time garbage collecting. Could it be that your heap is too small? One way 
to check whether DateTimeFormat may be leaking memory is to run the query 
without this function.
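
For example, something like this (untested, just dropping the function 
from the query you posted) would return the raw timestamp and bypass 
DateTimeFormat entirely:

   SELECT t_time,
          facility,event,details,name,username,sector,item
   FROM system_log_view
   ORDER BY time DESC
   NULLS LAST
   FETCH NEXT 20 ROWS ONLY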

-Original Message-
From: John English  
Sent: Monday, February 8, 2021 10:28 AM
To: Derby Discussion 
Subject: Out-of-memory errors

In the last few days I've suddenly had a bunch of OOM exceptions. I'm using 
Derby 10.9.1.0, Oracle Java 1.8.0 on Ubuntu 64-bit, and haven't upgraded for a 
while (probably years, looking at those numbers).

The place where they happen is in a call to executeQuery() in a method which 
displays a view as a table. Analysing the heap dump for the latest one with the 
Eclipse memory analyser shows this:

One instance of "org.apache.derby.impl.store.access.sort.MergeInserter" 
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0" 
occupies 134,841,800 (64.65%) bytes. The memory is accumulated in one instance 
of "org.apache.derby.impl.store.access.sort.SortBuffer", loaded by 
"org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0", which occupies 
134,841,496 (64.65%) bytes.

One instance of "org.apache.derby.impl.services.cache.ConcurrentCache" 
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0" 
occupies 43,766,832 (20.98%) bytes. The memory is accumulated in one instance 
of "org.apache.derby.impl.services.cache.ConcurrentCache",
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0", which 
occupies 43,766,832 (20.98%) bytes.

The query itself was:

   SELECT DateTimeFormat(t_time,null) AS t_time,
  facility,event,details,name,username,sector,item
   FROM system_log_view
   ORDER BY time DESC
   NULLS LAST
   FETCH NEXT 20 ROWS ONLY

The view is nothing special except that t_time is a duplicate of the time 
column (the timestamp of the log entry) used to create a separate formatted 
copy for display purposes:

   CREATE VIEW system_log_view AS
 SELECT  time AS t_time,
 facility,
 event,
 details,
 name,
 username,
 sector,
 item,
 time
 FROM    system_log;

The stack trace shows the error is occurring inside the call to DateTimeFormat, 
which is again nothing special:

   public static final String formatDateTime (Timestamp date, String locale) {
     if (date == null) {
       return null;
     }
     else {
       SimpleDateFormat fmt =
           new SimpleDateFormat(translate("d-MMM-yyyy 'at' HH:mm",locale));
       return translatePhrases(fmt.format(date),locale);
     }
   }

Here's the start of the stack trace:

java.sql.SQLException: The exception 'java.lang.OutOfMemoryError: GC overhead 
limit exceeded' was thrown while evaluating an expression.
   at java.text.DigitList.clone()Ljava/lang/Object; (DigitList.java:736)
   at java.text.DecimalFormat.clone()Ljava/lang/Object;
(DecimalFormat.java:2711)
   at java.text.SimpleDateFormat.initialize(Ljava/util/Locale;)V
(SimpleDateFormat.java:645)
   at
java.text.SimpleDateFormat.<init>(Ljava/lang/String;Ljava/util/Locale;)V
(SimpleDateFormat.java:605)
   at java.text.SimpleDateFormat.<init>(Ljava/lang/String;)V
(SimpleDateFormat.java:580)
   at
database.Functions.formatDateTime(Ljava/sql/Timestamp;Ljava/lang/String;)Ljava/lang/String;
(Functions.java:51)

Does anyone have any idea what might be happening, or what I can do to find out 
more?

TIA,
--
John English


Out-of-memory errors

2021-02-08 Thread John English
In the last few days I've suddenly had a bunch of OOM exceptions. I'm 
using Derby 10.9.1.0, Oracle Java 1.8.0 on Ubuntu 64-bit, and haven't 
upgraded for a while (probably years, looking at those numbers).


The place where they happen is in a call to executeQuery() in a method 
which displays a view as a table. Analysing the heap dump for the latest 
one with the Eclipse memory analyser shows this:


One instance of "org.apache.derby.impl.store.access.sort.MergeInserter" 
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0" 
occupies 134,841,800 (64.65%) bytes. The memory is accumulated in one 
instance of "org.apache.derby.impl.store.access.sort.SortBuffer", loaded 
by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0", which 
occupies 134,841,496 (64.65%) bytes.


One instance of "org.apache.derby.impl.services.cache.ConcurrentCache" 
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0" 
occupies 43,766,832 (20.98%) bytes. The memory is accumulated in one 
instance of "org.apache.derby.impl.services.cache.ConcurrentCache", 
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0", 
which occupies 43,766,832 (20.98%) bytes.


The query itself was:

  SELECT DateTimeFormat(t_time,null) AS t_time,
 facility,event,details,name,username,sector,item
  FROM system_log_view
  ORDER BY time DESC
  NULLS LAST
  FETCH NEXT 20 ROWS ONLY

The view is nothing special except that t_time is a duplicate of the 
time column (the timestamp of the log entry) used to create a separate 
formatted copy for display purposes:


  CREATE VIEW system_log_view AS
SELECT  time AS t_time,
facility,
event,
details,
name,
username,
sector,
item,
time
FROM    system_log;

The stack trace shows the error is occurring inside the call to 
DateTimeFormat, which is again nothing special:


  public static final String formatDateTime (Timestamp date, String locale) {

    if (date == null) {
      return null;
    }
    else {
      SimpleDateFormat fmt =
          new SimpleDateFormat(translate("d-MMM-yyyy 'at' HH:mm",locale));
      return translatePhrases(fmt.format(date),locale);
    }
  }

Here's the start of the stack trace:

java.sql.SQLException: The exception 'java.lang.OutOfMemoryError: GC 
overhead limit exceeded' was thrown while evaluating an expression.

  at java.text.DigitList.clone()Ljava/lang/Object; (DigitList.java:736)
  at java.text.DecimalFormat.clone()Ljava/lang/Object; 
(DecimalFormat.java:2711)
  at java.text.SimpleDateFormat.initialize(Ljava/util/Locale;)V 
(SimpleDateFormat.java:645)
  at 
java.text.SimpleDateFormat.<init>(Ljava/lang/String;Ljava/util/Locale;)V 
(SimpleDateFormat.java:605)
  at java.text.SimpleDateFormat.<init>(Ljava/lang/String;)V 
(SimpleDateFormat.java:580)
  at 
database.Functions.formatDateTime(Ljava/sql/Timestamp;Ljava/lang/String;)Ljava/lang/String; 
(Functions.java:51)


Does anyone have any idea what might be happening, or what I can do to 
find out more?


TIA,
--
John English