[h2] How to fix "The database is read only [90097-193]"?

2017-01-14 Thread Johann Petrak
After some problems following a crash and some attempts to fix them, I now have
a database which
1) works fine if I open it read/write (in embedded mode), initialize it using
  ";MV_STORE=FALSE" appended to the URL, and run
  SET DATABASE TRANSACTION CONTROL MVCC
  SET FILES LOG FALSE
  SET FILES NIO SIZE 4096
  SET AUTOCOMMIT FALSE
  SET CACHE_SIZE 1048576
  then process, commit and shut down;
2) throws an exception with the error message "The database is read only [90097-193]"
  (see the stack trace at the bottom of this email) while establishing the connection
  when I try to open it read-only using
  ";MV_STORE=FALSE;ACCESS_MODE_DATA=r;FILE_LOCK=no" appended to the URL.
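For reference, the two ways of opening described above can be sketched like this. The base path, class name, and helper methods are assumptions for illustration only; the URL options and SET statements are reproduced exactly as given in the post:

```java
// Sketch (not from the original post) of the two open modes described above.
import java.sql.Connection;
import java.sql.Statement;

public class H2OpenModes {

    // Read/write URL with the PageStore engine, as described in step 1).
    static String readWriteUrl(String basePath) {
        return "jdbc:h2:" + basePath + ";MV_STORE=FALSE";
    }

    // Read-only URL, as described in step 2).
    static String readOnlyUrl(String basePath) {
        return "jdbc:h2:" + basePath + ";MV_STORE=FALSE;ACCESS_MODE_DATA=r;FILE_LOCK=no";
    }

    // The initialization statements quoted in the post, run once after connecting.
    static void initForWriting(Connection conn) throws Exception {
        try (Statement st = conn.createStatement()) {
            st.execute("SET DATABASE TRANSACTION CONTROL MVCC");
            st.execute("SET FILES LOG FALSE");
            st.execute("SET FILES NIO SIZE 4096");
            st.execute("SET AUTOCOMMIT FALSE");
            st.execute("SET CACHE_SIZE 1048576");
        }
    }

    // Usage (requires the H2 jar on the classpath):
    //   Connection conn = java.sql.DriverManager.getConnection(readWriteUrl("/data/mydb"));
    //   initForWriting(conn);
}
```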

However, before the crash happened, that database could be used in 
read-only mode without any 
problem. 

What I do not understand at all is why the database works fine when opened, used
and stored in read/write mode, yet requires write access when I open it read-only.
If there were any scheduled operations for fixing data, surely they would already
have run during the latest shutdown?

Is there any way to fix the database so that it can be used in read-only mode
again, apart from exporting all data and importing it into a new database (which
I want to avoid because the database is rather big)?

If there is no way to fix the database in place, what is the fastest and most
efficient way to export and import the data, or to back up and re-create the
database?

How can I make sure that this problem will not occur again in the future?


Stack trace:
Exception in thread "main" org.h2.jdbc.JdbcSQLException: The database is read only [90097-193]
    at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
    at org.h2.message.DbException.get(DbException.java:179)
    at org.h2.message.DbException.get(DbException.java:155)
    at org.h2.message.DbException.get(DbException.java:144)
    at org.h2.engine.Database.checkWritingAllowed(Database.java:1927)
    at org.h2.store.PageStore.logUndo(PageStore.java:1061)
    at org.h2.store.PageFreeList.allocate(PageFreeList.java:138)
    at org.h2.store.PageStore.allocatePage(PageStore.java:1147)
    at org.h2.store.PageInputStream.allocateAllPages(PageInputStream.java:144)
    at org.h2.store.PageLog.recover(PageLog.java:266)
    at org.h2.store.PageStore.recover(PageStore.java:1403)
    at org.h2.store.PageStore.openExisting(PageStore.java:367)
    at org.h2.store.PageStore.open(PageStore.java:288)
    at org.h2.engine.Database.getPageStore(Database.java:2482)
    at org.h2.engine.Database.open(Database.java:695)
    at org.h2.engine.Database.openDatabase(Database.java:273)
    at org.h2.engine.Database.<init>(Database.java:267)
    at org.h2.engine.Engine.openSession(Engine.java:64)
    at org.h2.engine.Engine.openSession(Engine.java:176)
    at org.h2.engine.Engine.createSessionAndValidate(Engine.java:154)
    at org.h2.engine.Engine.createSession(Engine.java:137)
    at org.h2.engine.Engine.createSession(Engine.java:27)
    at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:349)
    at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:115)
    at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:99)
    at org.h2.Driver.connect(Driver.java:69)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:247)




-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to h2-database+unsubscr...@googlegroups.com.
To post to this group, send email to h2-database@googlegroups.com.
Visit this group at https://groups.google.com/group/h2-database.
For more options, visit https://groups.google.com/d/optout.


Re: [h2] Huge memory requirements for simple select order by - why?

2016-11-08 Thread Johann Petrak
I have implemented this for my special case and I keep having problems:
I use a prepared select statement with a LIMIT ? OFFSET ? clause and retrieve
just 10 of the 100 million rows each time.
I initially still got out-of-memory exceptions and ended up giving 80G of heap
space to the process.

This works well initially: when I observe the process in jconsole, each select
statement takes only a few minutes to get processed and uses less than 20G.
However, the more iterations of the prepared select with new offset/limit values
are done, the more memory is consumed and, even worse, much worse, the more time
it takes. The most recent batch of processing 10 rows took all night!
(BTW "processing" means just reading the rows from the result set using next()
and writing them to a file, so the slow-down cannot come from my own code.)
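The paging loop described above can be sketched roughly as follows; the class name, the end-of-data check, and the row-writing placeholder are assumptions for illustration:

```java
// Sketch (assumed names) of the LIMIT ? OFFSET ? paging loop described above.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PagedExport {

    // Append the paging clause to an arbitrary base query.
    static String pagedSql(String baseSql) {
        return baseSql + " LIMIT ? OFFSET ?";
    }

    // Fetch pageSize rows per query and advance the offset until a
    // short page signals that the result is exhausted.
    static void export(Connection conn, String baseSql, long pageSize) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(pagedSql(baseSql))) {
            long offset = 0;
            long rows;
            do {
                ps.setLong(1, pageSize);
                ps.setLong(2, offset);
                rows = 0;
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        rows++;
                        // ... write the row to the output file here ...
                    }
                }
                offset += rows;
            } while (rows == pageSize);
        }
    }
}
```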

Is anything known about this kind of behaviour, or could I still be doing
something that could be improved and would solve my problems?

My feeling is that even though offset and limit are used, the database spends a
large amount of time just getting to the starting offset of the result set, so
the time required grows with increasing offset.

So using this limit/offset approach to get around the problem of huge result
sets (because there is no server-side cursor support) does not seem to be a
usable solution to my problem either -- I guess I have to look for a different
database. Any suggestions for a good replacement just for this specific task,
one that can be used in embedded mode (without complex installation, setup, etc.)?
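One commonly used alternative, not mentioned in this thread, is keyset (seek) pagination: instead of an OFFSET that the engine has to count past on every query, each query resumes after the last key value already read. A sketch against the tab(key, value) table from the original question in this thread:

```sql
-- First page:
SELECT key, value FROM tab ORDER BY key LIMIT 10;

-- Every further page resumes after the last key already read, so the
-- index is entered at the right position instead of being counted
-- through from the start as with OFFSET:
SELECT key, value FROM tab WHERE key > ? ORDER BY key LIMIT 10;

-- Caveat: the index on key is non-unique, so a real implementation
-- would need a unique tie-breaker column in both the ORDER BY and the
-- WHERE clause to avoid skipping or repeating duplicate keys.
```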


On Saturday, 5 November 2016 18:06:35 UTC, Johann Petrak wrote:
>
> Thanks for pointing this out -- I missed the corresponding remark in 
>   http://www.h2database.com/html/advanced.html
> I had only been looking at the documentation of setFetchSize here:
>   http://www.h2database.com/javadoc/org/h2/jdbc/JdbcStatement.html
>
> For me this is rather bad news because while this is one concrete example, 
> the software is written to handle arbitrary select statements (which could
> eg already contain limit/offset clauses, but with rather large limit 
> numbers). 
>
> To work around this with limit/offset also has the problem that there may 
> still 
> be a big overhead of creating too many such queries, depending on the 
> complexity
> of the query while creating too few may run in the out of memory 
> problem again. 
>
> On Saturday, 5 November 2016 17:48:44 UTC, Noel Grandin wrote:
>>
>> we don't do server-side cursors yet. which is what you're asking for.
>>
>> use SELECT..LIMIT..OFFSET
>>
>> to limit the amount of data you pull in one SELECT
>> ​
>>
>



Re: [h2] Huge memory requirements for simple select order by - why?

2016-11-05 Thread Johann Petrak
Thanks for pointing this out -- I missed the corresponding remark in 
  http://www.h2database.com/html/advanced.html
I had only been looking at the documentation of setFetchSize here:
  http://www.h2database.com/javadoc/org/h2/jdbc/JdbcStatement.html

For me this is rather bad news because, while this is one concrete example,
the software is written to handle arbitrary select statements (which could,
e.g., already contain limit/offset clauses, but with rather large limit
numbers).

Working around this with limit/offset also has the problem that there may
still be a big overhead from creating too many such queries, depending on the
complexity of the query, while creating too few may run into the out-of-memory
problem again.

On Saturday, 5 November 2016 17:48:44 UTC, Noel Grandin wrote:
>
> we don't do server-side cursors yet. which is what you're asking for.
>
> use SELECT..LIMIT..OFFSET
>
> to limit the amount of data you pull in one SELECT
> ​
>



[h2] Huge memory requirements for simple select order by - why?

2016-11-05 Thread Johann Petrak
I have a database (db.h2.db, db.trace.db) which contains only one table "tab"
with two fields (the database has been created using MV_STORE=FALSE):
  key VARCHAR(100) not null,
  value VARCHAR not null
and one non-unique index on key.

My h2 lib is h2-1.4.188.jar.

I want to retrieve all rows from the table ordered by key:
  SELECT key,value from tab order by key

My table has about 100 million rows, the average key length is about 10 
characters, the average 
value length is about 300 characters.

I run the query in Java using a prepared statement and then 
iterating over the result of preparedStatement.executeQuery()
after calling connection.setAutoCommit(false) and
preparedStatement.setFetchSize(100)

When I run this (after opening the database read-only, transactions off, etc.)
I get an out-of-memory error even when allowing 20G(!!) of heap space
(-Xmx20G). I can watch in jconsole how the heap space keeps growing and growing,
with CPU usage around 70%, until GC cannot free any memory any more.

This is odd, since it looks as if the executeQuery method tries to put the
complete result into memory instead of simply walking the index and fetching
100 rows, or at least much less than the whole result.

When I run explain on the query it shows:
SELECT
  KEY,
  VALUE
FROM PUBLIC.TAB
  /* PUBLIC.TABINDEXONKEY */
ORDER by 1
/* index sorted */

What is going on here?



Re: [h2] How to make a huge star-like join as fast as possible?

2016-09-12 Thread Johann Petrak
Thanks, the SELECT was done with -Xmx90G
I will definitely try to increase the max memory rows setting!
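For reference, the setting suggested in the quoted reply below is changed per session with a statement like the following; the value here is only an illustrative guess, since the right number depends on row size and available heap:

```sql
-- Raise the number of intermediate result rows H2 keeps in memory
-- before buffering them to disk (value is illustrative only).
SET MAX_MEMORY_ROWS 1000000;
```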



On 12 September 2016 at 14:47, Noel Grandin  wrote:

> Yeah, given those sizes, we are almost certainly buffering to disk.
>
> You can try playing with the
>
>   http://h2database.com/html/grammar.html#set_max_memory_rows
>
> setting, that might help.
>
> Also, explicitly giving the VM lots of RAM via the "-Xmx" setting might
> help.
>
>
>



Re: [h2] How to make a huge star-like join as fast as possible?

2016-09-12 Thread Johann Petrak
Thank you for all your support, and apologies for causing sighs!

Here is the output of EXPLAIN PLAN for the original, flawed SELECT, where I had
  LEFT outer JOIN KEY1KEY22P01s4Pref_en AS lu2p4p ON lu2p.KEY2 = l.KEY2 AND lu2p.KEY1 = l.KEY1
instead of
  LEFT outer JOIN KEY1KEY22P01s4Pref_en AS lu2p4p ON lu2p4p.KEY2 = l.KEY2 AND lu2p4p.KEY1 = l.KEY1

So there was a typo in the join clause: the table alias used for the fields to
join on was not the alias of the table specified as the JOIN target.

SELECT
L.KEY1,
L.KEY2,
IFNULL(C.INTEF01, 'dummy'),
IFNULL(C.SPCF01, 'dummy'),
IFNULL(A.INTEF01, 'dummy'),
IFNULL(A.SPCF01, 'dummy'),
IFNULL(LU2S.F02, ''),
IFNULL(LU2OL.DDKEY1S, ''),
IFNULL(L2OL.DDKEY1S, ''),
IFNULL(U2OL.DDKEY1S, ''),
IFNULL(LU2P.P01S, ''),
IFNULL(LU2P4P.P01S4PREF, ''),
IFNULL(LU2PL.PREFKEY1S, ''),
IFNULL(W.F03, -1),
IFNULL(DTHD.F04, ''),
IFNULL(LUS.FQ, 0),
IFNULL(LS.FQ, 0),
IFNULL(US.FQ, 0),
(CONVERT(IFNULL(LUS.FQ, 0),DOUBLE) / IFNULL(LS.FQ, 1)),
(CONVERT(IFNULL(LUS.FQ, 0),DOUBLE) / IFNULL(US.FQ, 1)),
IFNULL(SPRRELS.SCF05, 0.0),
IFNULL(SPRWL.SCF05, 0.0),
IFNULL(SPRWLR.SCF05, 0.0)
FROM PUBLIC.KEY1KEY2PAIRS_EN L
/* PUBLIC.KEY1KEY2PAIRSONKEY1KEY2_EN */
LEFT OUTER JOIN PUBLIC.F01ES C
/* PUBLIC.PRIMARY_KEY_5: KEY2 = L.KEY2 */
ON L.KEY2 = C.KEY2
LEFT OUTER JOIN PUBLIC.AIRPF01 A
/* PUBLIC.PRIMARY_KEY_C: KEY2 = L.KEY2 */
ON L.KEY2 = A.KEY2
LEFT OUTER JOIN PUBLIC.KEY1KEY22F02_EN LU2S
/* PUBLIC.KEY1KEY22F02ONKEY2KEY1_EN: KEY2 = L.KEY2
AND KEY1 = L.KEY1
 */
ON (LU2S.KEY2 = L.KEY2)
AND (LU2S.KEY1 = L.KEY1)
LEFT OUTER JOIN PUBLIC.KEY1KEY22DDKEY1S_EN LU2OL
/* PUBLIC.KEY1KEY22DDKEY1SONKEY2KEY1_EN: KEY2 = L.KEY2
AND KEY1 = L.KEY1
 */
ON (LU2OL.KEY2 = L.KEY2)
AND (LU2OL.KEY1 = L.KEY1)
LEFT OUTER JOIN PUBLIC.KEY12DDKEY1S_EN L2OL
/* PUBLIC.PRIMARY_KEY_2: KEY1 = L.KEY1 */
ON L2OL.KEY1 = L.KEY1
LEFT OUTER JOIN PUBLIC.KEY22DDKEY1S_EN U2OL
/* PUBLIC.PRIMARY_KEY_8E: KEY2 = L.KEY2 */
ON U2OL.KEY2 = L.KEY2
LEFT OUTER JOIN PUBLIC.KEY1KEY22P01S_EN LU2P
/* PUBLIC.KEY1KEY22P01SONKEY2KEY1_EN: KEY2 = L.KEY2
AND KEY1 = L.KEY1
 */
ON (LU2P.KEY2 = L.KEY2)
AND (LU2P.KEY1 = L.KEY1)
LEFT OUTER JOIN PUBLIC.KEY1KEY22P01S4PREF_EN LU2P4P
/* PUBLIC.KEY1KEY22P01S4PREF_EN.tableScan */
ON (LU2P.KEY2 = L.KEY2)
AND (LU2P.KEY1 = L.KEY1)
LEFT OUTER JOIN PUBLIC.KEY1KEY22PREFKEY1S_EN LU2PL
/* PUBLIC.KEY1KEY22PREFKEY1SONKEY2KEY1_EN: KEY2 = L.KEY2 */
ON LU2PL.KEY2 = L.KEY2
LEFT OUTER JOIN PUBLIC.XXF03S_EN W
/* PUBLIC.PRIMARY_KEY_A: KEY2 = L.KEY2 */
ON W.KEY2 = L.KEY2
LEFT OUTER JOIN PUBLIC.KEY1KEY2EE_EN LUS
/* PUBLIC.KEY1KEY2EEKEY1_EN: KEY1 = L.KEY1 */
ON (L.KEY1 = LUS.KEY1)
AND (L.KEY2 = LUS.KEY2)
LEFT OUTER JOIN PUBLIC.KEY1EE_EN LS
/* PUBLIC.PRIMARY_KEY_7: KEY1 = L.KEY1 */
ON L.KEY1 = LS.KEY1
LEFT OUTER JOIN PUBLIC.KEY2EE_EN US
/* PUBLIC.PRIMARY_KEY_74: KEY2 = L.KEY2 */
ON L.KEY2 = US.KEY2
LEFT OUTER JOIN PUBLIC.F04S DTHD
/* PUBLIC.PRIMARY_KEY_1: KEY2 = L.KEY2 */
ON L.KEY2 = DTHD.KEY2
LEFT OUTER JOIN PUBLIC.F05_ALL SPRRELS
/* PUBLIC.PRIMARY_KEY_F: KEY2 = L.KEY2 */
ON L.KEY2 = SPRRELS.KEY2
LEFT OUTER JOIN PUBLIC.F054XXYY_EN SPRWL
/* PUBLIC.PRIMARY_KEY_11: KEY2 = L.KEY2 */
ON L.KEY2 = SPRWL.KEY2
LEFT OUTER JOIN PUBLIC.F054XXYYCC_EN SPRWLR
/* PUBLIC.PRIMARY_KEY_3D: KEY2 = L.KEY2 */
ON L.KEY2 = SPRWLR.KEY2
ORDER BY 1, 2
/* index sorted */

And here is the output for the corrected SQL:

SELECT
L.KEY1,
L.KEY2,
IFNULL(C.INTEF01, 'dummy'),
IFNULL(C.SPCF01, 'dummy'),
IFNULL(A.INTEF01, 'dummy'),
IFNULL(A.SPCF01, 'dummy'),
IFNULL(LU2S.F02, ''),
IFNULL(LU2OL.DDKEY1S, ''),
IFNULL(L2OL.DDKEY1S, ''),
IFNULL(U2OL.DDKEY1S, ''),
IFNULL(LU2P.P01S, ''),
IFNULL(LU2P4P.P01S4PREF, ''),
IFNULL(LU2PL.PREFKEY1S, ''),
IFNULL(W.F03, -1),
IFNULL(DTHD.F04, ''),
IFNULL(LUS.FQ, 0),
IFNULL(LS.FQ, 0),
IFNULL(US.FQ, 0),
(CONVERT(IFNULL(LUS.FQ, 0),DOUBLE) / IFNULL(LS.FQ, 1)),
(CONVERT(IFNULL(LUS.FQ, 0),DOUBLE) / IFNULL(US.FQ, 1)),
IFNULL(SPRRELS.SCF05, 0.0),
IFNULL(SPRWL.SCF05, 0.0),
IFNULL(SPRWLR.SCF05, 0.0)
FROM PUBLIC.KEY1KEY2PAIRS_EN L
/* PUBLIC.KEY1KEY2PAIRSONKEY1KEY2_EN */
LEFT OUTER JOIN PUBLIC.F01ES C
/* PUBLIC.PRIMARY_KEY_5: KEY2 = L.KEY2 */
ON L.KEY2 = C.KEY2
LEFT OUTER JOIN PUBLIC.AIRPF01 A
/* PUBLIC.PRIMARY_KEY_C: KEY2 = L.KEY2 */
ON L.KEY2 = A.KEY2
LEFT OUTER JOIN PUBLIC.KEY1KEY22F02_EN LU2S
/* PUBLIC.KEY1KEY22F02ONKEY2KEY1_EN: KEY2 = L.KEY2
AND KEY1 = L.KEY1
 */
ON (LU2S.KEY2 = L.KEY2)
AND (LU2S.KEY1 = L.KEY1)
LEFT OUTER JOIN PUBLIC.KEY1KEY22DDKEY1S_EN LU2OL
/* PUBLIC.KEY1KEY22DDKEY1SONKEY2KEY1_EN: KEY2 = L.KEY2
AND KEY1 = L.KEY1
 */
ON (LU2OL.KEY2 = L.KEY2)
AND (LU2OL.KEY1 = L.KEY1)
LEFT OUTER JOIN PUBLIC.KEY12DDKEY1S_EN L2OL

Re: [h2] How to make a huge star-like join as fast as possible?

2016-09-10 Thread Johann Petrak
Thank you, I should have done this in the first place!
It showed that, because of a typo, the index was not actually used for one of
the many joined tables.

After fixing this, strangely, the database size was reduced significantly
-- from 740G to about 400G (although I had run "SHUTDOWN COMPACT" earlier,
which did not reduce the database size) -- and the join no longer eats up all
my SSD disk space.

However, it still takes hours to do the join. Although a lot of data is being
joined here, it still feels as if doing this "manually" by merging all the
tables together should be much faster.
Does the order of keys in a multi-key join have an influence on performance?
That is, if the central table is indexed by key1, then key2, the SELECT includes
ORDER BY key1, key2, and another table gets joined that also has key1 and key2
(which together are unique), does it make a difference to the speed of the join
whether the index in the joined table is (key1,key2) versus (key2,key1)? If all
tables that have key1 and key2 are indexed in the same key order as the final
ORDER BY clause, it should be possible for the database to do the join by
sequentially merging those tables instead of doing individual index look-ups,
so the speedup should be by a factor of about k*log(n) per
index, no?
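The scenario in the question can be made concrete with index definitions like these; the table names are illustrative, not from the actual schema:

```sql
-- Central table indexed by (key1, key2); ORDER BY key1, key2 can then
-- be answered by an index-ordered scan.
CREATE INDEX central_k1k2 ON central(key1, key2);

-- A joined table indexed in the SAME key order could in principle be
-- merged sequentially with the central table ...
CREATE UNIQUE INDEX joined_k1k2 ON joined(key1, key2);

-- ... whereas the reversed order stores the entries differently, so
-- each central-table row needs an individual index look-up.
CREATE UNIQUE INDEX joined_k2k1 ON joined(key2, key1);
```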

On Friday, 9 September 2016 07:43:04 UTC+1, Noel Grandin wrote:
>
> what does EXPLAIN PLAN say for one of the queries ?​
>



[h2] How to make a huge star-like join as fast as possible?

2016-09-08 Thread Johann Petrak
I have a large database with one central table that contains just two key
columns (both varchar(100)): key1 and key2, and a large number of additional
tables which are indexed by key1, by key2, or by key1 and key2, and which each
contain a number of additional columns.
All join conditions are on unique keys: if we join by key2, then key2 is a
unique key in the table that gets joined with the central table; if we join by
key1 and key2, then the combination of the two is again a unique key in the
table that gets joined with the central table.
The central table contains about 20 million rows.

The aim is to select the result of a left outer join of all the additional
tables with the central key table and write it to a file.
This should be a fairly easy operation and could be done by a sequential merge,
since all tables use the same keys, for which there are indexes.

However, this operation takes hours to days (even though everything is done on
a fast multicore machine with SSDs, no transaction log, lock mode 0) and also
sometimes uses up a lot of disk space (not sure if this is related to the exact
tables I join with and which fields they contain).

Could I be doing something wrong, or is there some strategy to speed up an
operation like this significantly? Should this be fairly efficient, or is it
normal for it to take quite a long time?



[h2] How to get rid of the "The database is read only [90097-188] 90097/90097" error?

2016-09-05 Thread Johann Petrak
I got into the dreaded situation where, after a crash of the process, my
database was left with a lock file. After removing the lock file and trying to
open the database in read-only mode (ACCESS_MODE_DATA=r) I kept getting the
error message:
  The database is read only [90097-188] 90097/90097
When I tried to open the database in writable mode, I got out-of-memory
conditions.

After allocating a huge amount of memory (120G), I could open the database in
writable mode and close it; since then, opening it in writable mode works again
and I can create new tables or select data from existing tables.

HOWEVER, when I try to open the database in read-only mode, I still get an
exception with the error message
  The database is read only [90097-188] 90097/90097

I am using h2-1.4.188.jar and all my databases are created using MV_STORE=FALSE
(because many of the operations I need appear to be much slower with
MV_STORE=TRUE).

My database is quite big - over 350G of data in dozens of tables - so I am
really paranoid about not being able to recover from this. I also need to be
able to open the database read-only, because all my programs that only select
data from its tables are written to do that.

Is there any way to recover from that error message and that problem?



Re: [h2] H2 error [90011-187] - Need a way to use relative dir

2015-08-27 Thread Johann Petrak
My point would have been that file URLs (and file paths in Java) have
well-defined semantics, and the JDBC URL here includes the path portion of a
file URL. A large number of these path portions are computed automatically.
Your change (also the one that requires the drive letter on Windows) enforces
something that diverges from the standard way of handling file URLs. In my view
the previous implementation was correct; there was nothing broken that needed
to be fixed - there apparently were just a few people who did not understand
relative file URLs. I think it is odd to deal with this by changing the
semantics of file URLs (by requiring that, of two semantically equivalent ways
to write down a correct file path, only the one that includes a redundant part
is supported) just for this one application, and to require that anyone who
wants to use H2 change their semantics of file URLs too.
Anyone who knows the semantics of URLs will expect "relPath" to work the same
as "./relPath", because in practically every other situation that is exactly
how it works. With H2, it requires looking in the manual or posting a question
here to figure out the special requirement imposed on relative paths.

I am very thankful for H2, which is a brilliant piece of software that has
already made my life incredibly easier many times, and I fully understand that
this is your decision; I will find a way to work around this in my application
(or just use a previous version for now). Please understand that my concerns
are meant to be constructive and helpful in the long run, not just something I
am too lazy to change on my side!

Best,
  Johann

On Thursday, 27 August 2015 06:48:20 UTC+1, Thomas Mueller wrote:
>
> Hi,
>
> > I think that is an unfortunate change
>
> I understand it is unfortunate for you, however there were many people 
> that made a mistake, and run into problems, because they didn't find the 
> database file, or created multiple databases.
>
> > because the DB is embedded in a system which does its own URL-handling 
> (part of which is a normalization that removes one or more "./" so there is 
> very little I can do. 
>
> If you can't change the application to use the prefix "jdbc:h2:./" instead 
> "jdbc:h2:", then I'm sorry about that, but there is nothing I can do about 
> it...
>
> > but maybe a setting could be added (unless it already exists) to switch 
> this on or off.
>
> I think there are already too many switches...
>
> Regards,
> Thomas
>
>
> On Wed, Aug 26, 2015 at 1:44 PM, Christian MICHON <
> christian.mic...@gmail.com> wrote:
>
>> I'll second that, but maybe a setting could be added (unless it already 
>> exists) to switch this on or off. Like this we could use this switch to 
>> ease legacy issues while preserving the directions chosen by the lead devs 
>> of H2.
>>
>> As for moving to 1.4.188, some times you have no choice but to move on. I 
>> recently tried to use linked table on SQlite database from ITIS (taxonomy) 
>> and it works only with 1.4.188. Anything else will trigger a NPE, and if 
>> not fixed on an older branch like 1.3, you have to move to the latest or 
>> use other tricks than linked tables.
>>
>>
>> On Tuesday, August 25, 2015 at 7:29:12 PM UTC+2, Johann Petrak wrote:
>>>
>>> I think that is an unfortunate change - it breaks practically all 
>>> existing relative URL paths in my case because none of them includes that 
>>> "./" part and it violates the specification of what a relative URL can be 
>>> which nowhere excludes URLs without the "./" part. Actually, the "./" part 
>>> is explicitly superfluous and usually a nuisance that one wants to get read 
>>> of when normalizing URLs. 
>>>
>>> In my case it may be a reason not to upgrade to 1.4 because the DB is 
>>> embedded in a system which does its own URL-handling (part of which is a 
>>> normalization that removes one or more "./" so there is very little I can 
>>> do. 
>>>
>>> Is there any chance to make it work with all relative URLs again? 
>>>
>>> I know of no other database or other software which enforces this.
>>>
>>> Johann
>>>
>>> On Tuesday, 28 April 2015 19:36:57 UTC+2, Thomas Mueller wrote:
>>>>
>>>> Hi,
>>>>
>>>> Many users ran into problems because they used something like 
>>>> "jdbc:h2:test" and then either didn't find the database file, or created a 
>>>> second database when running the application

Re: [h2] H2 error [90011-187] - Need a way to use relative dir

2015-08-25 Thread Johann Petrak
I think that is an unfortunate change - it breaks practically all existing
relative URL paths in my case, because none of them includes that "./" part,
and it violates the specification of what a relative URL can be, which nowhere
excludes URLs without the "./" part. Actually, the "./" part is explicitly
superfluous and usually a nuisance that one wants to get rid of when
normalizing URLs.

In my case it may be a reason not to upgrade to 1.4, because the DB is embedded
in a system which does its own URL handling (part of which is a normalization
that removes one or more "./"), so there is very little I can do.

Is there any chance to make it work with all relative URLs again?

I know of no other database or other software which enforces this.

Johann

On Tuesday, 28 April 2015 19:36:57 UTC+2, Thomas Mueller wrote:
>
> Hi,
>
> Many users ran into problems because they used something like 
> "jdbc:h2:test" and then either didn't find the database file, or created a 
> second database when running the application in a different directory. 
> That's why in version 1.4.x, relative paths only work when using ".", as 
> in "jdbc:h2:./test".
>
> The documentation is wrong. I will update it.
>  
>
>> jdbc:h2:file:data/sample 
>>
>
> It should be: jdbc:h2:file:./data/sample
>  
> Regards,
> Thomas
>
>



Re: [h2] NPE/performance degradation with version 1.4.185?

2015-08-21 Thread Johann Petrak
Thank you, that fixed the problem!

h2-1.4.188.jar: 103.34user 16.55system 1:10.39elapsed 170%CPU 
(with MV_STORE=FALSE)

Johann
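The fix referred to above is a URL-level change only; a connection URL with the old PageStore engine looks like this (the path is illustrative):

```
jdbc:h2:./data/mydb;MV_STORE=FALSE
```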

On Friday, 21 August 2015 14:11:14 UTC+2, Noel Grandin wrote:
>
>
>
> On 2015-08-21 02:03 PM, Johann Petrak wrote: 
> > 
> > Any ideas? I will stay with version h2-1.3.176 for now, but is this 
> something you can confirm? 
> > Am I doing it wrong? Is there something that can be done to make the 1.4 
> version work as 
> > fast as the 1.3 version? 
>
> For now, turn off the new MVStore engine by appending MV_STORE=FALSE to 
> your DB URL. 
>
>
>



Re: [h2] NPE/performance degradation with version 1.4.185?

2015-08-21 Thread Johann Petrak
I have been away from this problem for a while, but now I made a new
performance comparison, loading about 9 million rows into a table with just two
columns.
The table looks like this:
CREATE TABLE t1 ( 
  f1 VARCHAR(100) NOT NULL PRIMARY KEY, 
  f2 VARCHAR(100) NOT NULL
);

The code I use prepares the statement 
  INSERT INTO t1 (f1,f2) VALUES(?,?)
and then, for each row, sets
  insertSt.setObject(1,v1)
  insertSt.setObject(2,v2)
and executes it. A single commit is executed at the very end of all inserts.
The connection is opened in embedded mode, and before anything is inserted into
the table the following statements are executed:
  SET AUTOCOMMIT FALSE
  SET UNDO_LOG 0
  SET LOCK_MODE 0
  SET LOG 0
  SET CACHE_SIZE 1048576
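The load described above can be sketched as follows; the class name, the Map-based row source, and the helper method are assumptions for illustration, while the SQL statements are the ones listed in the message:

```java
// Sketch (assumed names) of the benchmark load described above.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Map;

public class BulkLoad {

    // The prepared statement text from the message.
    static String insertSql(String table, String col1, String col2) {
        return "INSERT INTO " + table + " (" + col1 + "," + col2 + ") VALUES(?,?)";
    }

    static void load(Connection conn, Map<String, String> rows) throws SQLException {
        // The tuning statements listed in the message, executed up front.
        try (Statement st = conn.createStatement()) {
            st.execute("SET AUTOCOMMIT FALSE");
            st.execute("SET UNDO_LOG 0");
            st.execute("SET LOCK_MODE 0");
            st.execute("SET LOG 0");
            st.execute("SET CACHE_SIZE 1048576");
        }
        // One executeUpdate per row; a single commit at the very end.
        try (PreparedStatement ps = conn.prepareStatement(insertSql("t1", "f1", "f2"))) {
            for (Map.Entry<String, String> row : rows.entrySet()) {
                ps.setObject(1, row.getKey());
                ps.setObject(2, row.getValue());
                ps.executeUpdate();
            }
        }
        conn.commit();
    }
}
```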

Exactly the same code is run on a newly created database file for each of 
the following jar versions:
h2-1.3.174.jar: 105.28user 18.24system 1:10.56elapsed 175%CPU
h2-1.3.176.jar: 112.24user 18.32system 1:12.28elapsed 180%CPU
h2-1.4.188.jar: 1242.22user 32.65system 17:29.98elapsed 121%CPU

As you can see, the 1.4 jar is more than 10 times slower for this, and the
slowdown seems to come mainly from CPU.

Any ideas? I will stay with h2-1.3.176 for now, but is this something you can
confirm? Am I doing something wrong? Is there something that can be done to
make the 1.4 version as fast as the 1.3 version?

Many thanks,
  Johann

On Wednesday, 4 March 2015 07:43:30 UTC+1, Thomas Mueller wrote:
>
> Hi,
>
> Could you please re-try with the latest version (1.4.186)? A memory leak 
> was fixed there, which could also cause performance problems with a large 
> (well, medium size) database. If it is still a problem, then could you 
> please post a test case (or describe what you do in more details)?
>
> Regards,
> Thomas
>
>
> On Monday, March 2, 2015, Johann Petrak > 
> wrote:
>
>> Has anyone else seen a degradation in performance when moving from 
>> version 1.3.174 to version 1.4.185? 
>> A simple java program which simple does a large number of inserts (in 
>> embedded mode) appears to be at 
>> least 20 times slower using the newer version with my configuration. 
>>
>> Also, I have seen situations with the newer version where the program 
>> appears to hang on shutdown 
>> and eventually I get the following exception:
>> Exception in thread "main" org.h2.jdbc.JdbcSQLException: General error: 
>> "java.lang.NullPointerException" [5-185]
>> at 
>> org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
>> at org.h2.message.DbException.get(DbException.java:168)
>> at org.h2.message.DbException.convert(DbException.java:295)
>> at 
>> org.h2.mvstore.db.MVTableEngine$1.uncaughtException(MVTableEngine.java:93)
>> at org.h2.mvstore.MVStore.writeInBackground(MVStore.java:2384)
>> at 
>> org.h2.mvstore.MVStore$BackgroundWriterThread.run(MVStore.java:2570)
>> Caused by: java.lang.NullPointerException
>> at org.h2.mvstore.db.ValueDataType.compare(ValueDataType.java:102)
>> at 
>> org.h2.mvstore.db.TransactionStore$ArrayType.compare(TransactionStore.java:1744)
>> at org.h2.mvstore.MVMap.areValuesEqual(MVMap.java:601)
>> at org.h2.mvstore.MVMap.replace(MVMap.java:615)
>> at org.h2.mvstore.MVMap.rewrite(MVMap.java:806)
>> at org.h2.mvstore.MVMap.rewrite(MVMap.java:823)
>> at org.h2.mvstore.MVMap.rewrite(MVMap.java:823)
>> at org.h2.mvstore.MVMap.rewrite(MVMap.java:823)
>> at org.h2.mvstore.MVMap.rewrite(MVMap.java:823)
>> at org.h2.mvstore.MVMap.rewrite(MVMap.java:782)
>> at org.h2.mvstore.MVStore.compactRewrite(MVStore.java:1796)
>> at org.h2.mvstore.MVStore.compact(MVStore.java:1688)
>> at org.h2.mvstore.MVStore.writeInBackground(MVStore.java:2380)
>>
>> but I have never seen this with the older version. 
>>
>> This is happening with Java 1.8.0_20-b26 on 64 bit intel server with the 
>> database file residing on an SSD drive. 
>> The program that executes the insert statements first executes the 
>> following
>> SET AUTOCOMMIT FALSE
>> SET UNDO_LOG 0
>> SET LOCK_MODE 0
>> SET LOG 0
>> SET CACHE_SIZE 1048576
>> and it never executes a commit until the very end (before shutdown). 
>> It uses a prepared INSERT statement and sets the value of its fields 
>> using statement.setObject(col,val) where value in 
>> that case is always a String object. 
>>
>> Thanks,
>>   johann
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "H2 Database" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to h2-database+un

[h2] NPE/performance degradation with version 1.4.185?

2015-03-02 Thread Johann Petrak
Has anyone else seen a degradation in performance when moving from 
version 1.3.174 to version 1.4.185? 
A simple Java program which simply does a large number of inserts (in 
embedded mode) appears to be at 
least 20 times slower using the newer version with my configuration. 

Also, I have seen situations with the newer version where the program 
appears to hang on shutdown 
and eventually I get the following exception:
Exception in thread "main" org.h2.jdbc.JdbcSQLException: General error: 
"java.lang.NullPointerException" [5-185]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:168)
at org.h2.message.DbException.convert(DbException.java:295)
at 
org.h2.mvstore.db.MVTableEngine$1.uncaughtException(MVTableEngine.java:93)
at org.h2.mvstore.MVStore.writeInBackground(MVStore.java:2384)
at org.h2.mvstore.MVStore$BackgroundWriterThread.run(MVStore.java:2570)
Caused by: java.lang.NullPointerException
at org.h2.mvstore.db.ValueDataType.compare(ValueDataType.java:102)
at 
org.h2.mvstore.db.TransactionStore$ArrayType.compare(TransactionStore.java:1744)
at org.h2.mvstore.MVMap.areValuesEqual(MVMap.java:601)
at org.h2.mvstore.MVMap.replace(MVMap.java:615)
at org.h2.mvstore.MVMap.rewrite(MVMap.java:806)
at org.h2.mvstore.MVMap.rewrite(MVMap.java:823)
at org.h2.mvstore.MVMap.rewrite(MVMap.java:823)
at org.h2.mvstore.MVMap.rewrite(MVMap.java:823)
at org.h2.mvstore.MVMap.rewrite(MVMap.java:823)
at org.h2.mvstore.MVMap.rewrite(MVMap.java:782)
at org.h2.mvstore.MVStore.compactRewrite(MVStore.java:1796)
at org.h2.mvstore.MVStore.compact(MVStore.java:1688)
at org.h2.mvstore.MVStore.writeInBackground(MVStore.java:2380)

but I have never seen this with the older version. 

This is happening with Java 1.8.0_20-b26 on a 64-bit Intel server with the 
database file residing on an SSD drive. 
The program that executes the insert statements first executes the following:
SET AUTOCOMMIT FALSE
SET UNDO_LOG 0
SET LOCK_MODE 0
SET LOG 0
SET CACHE_SIZE 1048576
and it never executes a commit until the very end (before shutdown). 
It uses a prepared INSERT statement and sets the value of its fields using 
statement.setObject(col, val), where the value in that case is always a 
String object. 

Thanks,
  johann



Re: [h2] Is it safe and reasonable to open a read-only database in embedded mode from two processes without locking?

2015-02-28 Thread Johann Petrak
Thanks -- this is what I had been thinking, but I thought it would be
better to bring it up since
this option is not explicitly mentioned in the documentation.

Another interesting situation is where two processes access the same
database, again in embedded
mode, and one of the processes updates just table A while the second
process just reads from table B.
Would it be safe for the second process to open the database in read-only
mode and without locking, while the first process opens it r/w and creates
a lock file, or would they interfere with each other?


On 28 February 2015 at 17:43, Ryan How wrote:

>  I'll let someone more qualified give you a better answer.
>
> But yes, I can't see any reason why it wouldn't work (the same as if it
> were on a read-only filesystem?)
>
> I find embedded mode has better performance, but the difference is more
> pronounced with lots of smaller queries. With larger, less frequent queries
> there isn't as much difference.
>
> You might have to test your specific case if you are trying to squeeze
> every bit of performance possible.
>
> Not sure if the cache might be better with the server though. Otherwise
> each process will have its own cache.
>
>  On 28/02/2015 8:49 PM, Johann Petrak wrote:
>
> Is it possible, safe and reasonable to add
>
> ;FILE_LOCK=NO;ACCESS_MODE_DATA=r
> to the URL from two processes which use the same database in embedded mode?
> If yes, can I expect better performance from doing it that way than from
> using a local server?
>
>
> thanks
>   johann



[h2] Is it safe and reasonable to open a read-only database in embedded mode from two processes without locking?

2015-02-28 Thread Johann Petrak
Is it possible, safe and reasonable to add

;FILE_LOCK=NO;ACCESS_MODE_DATA=r
to the URL from two processes which use the same database in embedded mode?
If yes, can I expect better performance from doing it that way than from
using a local server?


thanks
  johann



[h2] Very slow / read-only exception when connecting to a database the first time after moving it

2013-12-10 Thread Johann Petrak
I created a database (two files) on system A, then moved the files to 
system B and when I want to connect to that DB on system B, the first time 
just establishing the connection takes many minutes. But after that, every 
subsequent time the connection is established in less than a second. Also, 
if I attempt to connect to the database with ";ACCESS_MODE_DATA=r" appended 
to the URL the first time I get an exception complaining that the DB is 
read-only, but it is no problem the subsequent times.
So it seems as if the DB needs to write and re-organize something on disk 
after I move it ... is there an explanation somewhere of what is going on 
here?
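
My guess at an explanation (not confirmed): after the files are moved, or after 
an unclean shutdown, the first connection has to run recovery and rewrite parts 
of the store, which a read-only connection is not allowed to do and a normal 
connection does slowly once. If so, opening the database once read/write on the 
new system and closing it cleanly should make later read-only opens work, e.g.:

```sql
-- run once over a normal read/write embedded connection after moving the files
SHUTDOWN COMPACT;
```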



[h2] Prevent trimming of spaces with CSVREAD?

2013-12-07 Thread Johann Petrak
I want to use CSVREAD to read in a tab-separated values file, but the way I 
do it, leading spaces seem to get trimmed from the fields that are read. 

I am using something like 
INSERT ... SELECT ...
 CSVREAD('test.tsv', UPPER(STRINGDECODE('field1\tfield2\tfield3')),
 STRINGDECODE('charset=UTF-8 fieldSeparator=\t escape= fieldDelimiter='))

But when the TSV file contains two different lines like this (the second with 
leading spaces in its second field):

val1\tval2\tval3
val1\t  val2\tval3

they will both be read in as rows matching the first of these lines (no 
leading spaces in the second field). 

How does this happen and how can I prevent it?
Is there anything else to be aware of so that the field values are read and 
preserved exactly as they appear in the TSV file?
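
Two things I would try: H2's Csv tool accepts a preserveWhitespace=true option 
in recent versions (worth checking against your version's documentation), and 
failing that, the file can be parsed in plain Java, where nothing is trimmed. 
A minimal sketch of the latter; feeding the fields into an INSERT is left out:

```java
public class TsvFields {
    // Split one TSV line into its fields exactly as they appear in the file:
    // String.split with limit -1 does not trim whitespace and also keeps
    // trailing empty fields.
    static String[] fields(String line) {
        return line.split("\t", -1);
    }

    public static void main(String[] args) {
        String[] row = fields("val1\t  val2\tval3");
        // The leading spaces in the second field survive untouched.
        System.out.println("[" + row[1] + "]"); // prints [  val2]
    }
}
```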
