This issue is very consistently happening when the process is close to
using its heap limit. Perhaps an OutOfMemoryError is being
suppressed somewhere?
On Thursday, May 22, 2014 11:45:20 AM UTC-4, Thomas Mueller wrote:
>
> Hi,
>
> You could export the database to a script file, and then create a
H2 is not multi-threaded in this version. So a single long running
statement will block everything else. You'll either need to split it up
into smaller inserts (can still be a single transaction, just smaller
queries), or you could try turning on MULTI_THREADED=1.
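(Not from the thread, just a hedged sketch of both suggestions: the
MULTI_THREADED URL setting, and splitting the load into small batched
inserts inside one transaction. The path, table, and batch size are
invented.)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchedInsert {
        public static void main(String[] args) throws Exception {
            // MULTI_THREADED=1 is the experimental 1.3.x setting mentioned
            // above; the path is a placeholder.
            String url = "jdbc:h2:/data/example;MULTI_THREADED=1";
            try (Connection conn = DriverManager.getConnection(url)) {
                conn.setAutoCommit(false); // one transaction, many small statements
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO T(ID, V) VALUES (?, ?)")) {
                    for (int i = 0; i < 100_000; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "row " + i);
                        ps.addBatch();
                        if (i % 1_000 == 0) {
                            ps.executeBatch(); // keep each statement small
                        }
                    }
                    ps.executeBatch();
                }
                conn.commit();
            }
        }
    }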
There is multi-threaded mode,
qtp507284179-81 is the reader, which seems to be blocked indefinitely.
clojure-agent-send-off-pool-5 is the writer, doing "insert into .. select
csvread ...".
db is jdbc:h2:/data/cancer/cavm-testing;CACHE_SIZE=65536;MVCC=TRUE
Ah, it's actually "somefile;CACHE_SIZE=65536;;MVCC=TRUE"
Not sure if the double semicolon is a problem.
b.c.
On Tuesday, May 27, 2014 10:45:29 AM UTC-7, Brian Craft wrote:
>
> Version 1.3.171. The only option I'm setting is MVCC, like
> "somefile;MVCC=TRUE". I'll try to get a thread dump in a f
The overall load characteristics of the app are that there are many
readers, doing queries that are *usually* sub-second, and occasional large
batch inserts/deletes of, say, hundreds of thousands of rows. The main
goals are to not block readers, and minimize exposure of readers to
incompletely
Version 1.3.171. The only option I'm setting is MVCC, like
"somefile;MVCC=TRUE". I'll try to get a thread dump in a few minutes.
Does this mean readers should not be blocked during the insert?
On Tuesday, May 27, 2014 4:18:01 AM UTC-7, Noel Grandin wrote:
>
> Can you post a thread dump, so we c
Hi,
my application uses an embedded H2 database (1.3.176) stored to disk.
In some cases, very large tables are created within the H2 database (e.g.
240 columns, 100k rows, >10 KB of VARCHAR data per row).
When I try to order the records of this table, my application runs out of
memory during fetch (
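(For context, not part of the report above: H2 buffers sorted rows in
memory up to the MAX_MEMORY_ROWS setting before spilling to disk, so
lowering it is one thing to try. A minimal sketch, with an invented path
and table name.)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class BigSort {
        public static void main(String[] args) throws Exception {
            try (Connection conn =
                         DriverManager.getConnection("jdbc:h2:/data/example");
                 Statement stmt = conn.createStatement()) {
                // lower the in-memory row buffer so large sorts spill to disk
                stmt.execute("SET MAX_MEMORY_ROWS 10000");
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT * FROM BIG ORDER BY C1")) {
                    while (rs.next()) {
                        // stream the rows instead of materializing them
                    }
                }
            }
        }
    }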
Hi,
I would like to use the Clob data type for my variable length strings that
average around 300 characters. I see in the doc that I should use "
PreparedStatement.setCharacterStream to store values". Instead I am
simply using PreparedStatement.setString(). This seems to work. Is there
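(A minimal sketch of the two bindings side by side; the table and column
names are invented, and both variants are plain JDBC:)

    import java.io.StringReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class ClobBinding {
        public static void main(String[] args) throws Exception {
            try (Connection conn =
                         DriverManager.getConnection("jdbc:h2:mem:demo")) {
                conn.createStatement().execute(
                        "CREATE TABLE NOTES(ID INT PRIMARY KEY, BODY CLOB)");
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO NOTES VALUES (?, ?)")) {
                    String text = "roughly 300 characters of text ...";
                    // Variant 1: setString, simple and fine for short values
                    ps.setInt(1, 1);
                    ps.setString(2, text);
                    ps.execute();
                    // Variant 2: setCharacterStream, as the docs suggest,
                    // avoids holding the whole value when it gets large
                    ps.setInt(1, 2);
                    ps.setCharacterStream(2, new StringReader(text),
                            text.length());
                    ps.execute();
                }
            }
        }
    }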
I massaged your test case into a unit test for H2, and it seems to be working
for us.
But maybe there is some more transformation that happens to the raw byte array
before it hits the LZF compressor.
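(In case it helps reproduce: a round-trip check along the lines of that
unit test. This assumes H2's org.h2.compress.CompressLZF API as I remember
it from 1.3.x; the exact compress/expand signatures should be checked
against the source.)

    import java.util.Arrays;
    import java.util.Random;
    import org.h2.compress.CompressLZF;

    public class LzfRoundTrip {
        public static void main(String[] args) {
            CompressLZF lzf = new CompressLZF();
            byte[] data = new byte[100_000];
            new Random(42).nextBytes(data);
            // LZF output can be slightly larger than the input in the
            // worst case, so give the output buffer generous headroom
            byte[] compressed = new byte[data.length * 2];
            int len = lzf.compress(data, data.length, compressed, 0);
            byte[] restored = new byte[data.length];
            lzf.expand(compressed, 0, len, restored, 0, restored.length);
            System.out.println("round trip ok: "
                    + Arrays.equals(data, restored));
        }
    }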
On 2014-05-27 13:46, Jan Kotek wrote:
MapDB uses LZF compression from H2 database. One of ou
Hi,
MapDB uses LZF compression from H2 database. One of our users
reported wrongly decompressed data:
https://github.com/jankotek/MapDB/issues/332[1]
I have not checked yet if this bug affects H2 as well.
Will be back in a few days.
All best,
Jan
[1] https://github.com/jankotek/MapDB/i
Even though I can reproduce this with our app, I have failed to create a
separate test case that reproduces the problem (i.e. breaks the DB). What I
can do is send you a database that H2 fails to open. Perhaps it is not
really a bug during close, but rather recovery during startup that is not
robust enough. The zipped DB has
10M
On 2014-05-27 02:35, Brian Craft wrote:
When using SORTED, b-tree pages are split at the insertion point. This can
improve performance and reduce disk usage.
I don't know what the DIRECT option is up to, but the SORTED option allows the insertion process to be more efficient
because it c
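(For reference, a hedged sketch of where the keyword goes in H2's INSERT
syntax; the target table and CSV path are invented:)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SortedInsert {
        public static void main(String[] args) throws Exception {
            try (Connection conn =
                         DriverManager.getConnection("jdbc:h2:/data/example");
                 Statement stmt = conn.createStatement()) {
                // SORTED is meant for rows that already arrive in key order,
                // so the b-tree pages split at the insertion point
                stmt.execute("INSERT INTO TARGET SORTED "
                        + "SELECT * FROM CSVREAD('/data/rows.csv')");
            }
        }
    }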
Can you post a thread dump, so we can see where it is blocked?
Also which version is this, and what does your DB URL look like?
On 2014-05-25 20:24, Brian Craft wrote:
I'm doing a large "insert into .. select * from CSVREAD ...", which is in a
transaction. It seems to be blocking
readers, regar
I would try something like:
SELECT NEXT VALUE FOR xxx, NEXT VALUE FOR xxx, NEXT VALUE FOR xxx, etc
And then profile and see where the bottleneck is.
You can also try modifying the CACHE value for a sequence to limit how often it
gets synched to disk.
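(A sketch of both ideas, batching the NEXT VALUE FOR calls and giving the
sequence a larger CACHE; the sequence name and cache size are invented:)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SequenceBatch {
        public static void main(String[] args) throws Exception {
            try (Connection conn =
                         DriverManager.getConnection("jdbc:h2:mem:demo");
                 Statement stmt = conn.createStatement()) {
                // a larger CACHE means fewer disk syncs of the sequence state
                stmt.execute("CREATE SEQUENCE SEQ_ID CACHE 1000");
                // fetch several values in a single round trip, the pattern
                // suggested above
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT NEXT VALUE FOR SEQ_ID, NEXT VALUE FOR SEQ_ID, "
                        + "NEXT VALUE FOR SEQ_ID")) {
                    rs.next();
                    System.out.println(rs.getLong(1) + ", " + rs.getLong(2)
                            + ", " + rs.getLong(3));
                }
            }
        }
    }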
On 2014-05-25 04:10, Brian Craft wrote:
This was fixed in 1.3.176
On 2014-05-22 23:58, Sudeep Ambekar wrote:
CALL TO_CHAR(sysdate, 'mm/dd/')