Re: [h2] 1.4 beta creates much bigger database file

2015-02-12 Thread Damien Coraboeuf
Actually,

I was using the Tomcat JDBC pool with the default settings. The connections
within the pool were never released, and it seems this prevented H2 from doing
its cleanup. After I set the maxAge property on the pool (automatic release
after 10 minutes), the problem disappeared. The database file grows
steadily but comes back down to a normal size every 10 minutes.

I wish the database would not grow as fast, but it's already much better.
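
For reference, a minimal sketch of that pool setting, assuming the plain Tomcat JDBC pool API (the credentials and driver wiring here are illustrative; the URL is the one quoted later in this thread):

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class H2PoolSetup {

    public static DataSource createPool() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:h2:/opt/ontrack/database/data;MODE=MYSQL");
        p.setDriverClassName("org.h2.Driver");
        p.setUsername("sa"); // illustrative credentials
        p.setPassword("");
        // Retire pooled connections after 10 minutes, so that closing
        // them gives H2 a chance to run its cleanup.
        p.setMaxAge(10 * 60 * 1000L);
        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}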

Damien.

On Thu, Feb 12, 2015 at 6:05 PM, Thomas Mueller 
thomas.tom.muel...@gmail.com wrote:

 Hi,

 Disk space is re-used after 45 seconds (the default retention time). Disk
 space should stabilise at some point, unless you add more and more data.

 "checkpoint" should reduce disk space usage, but you may need to call it a
 few times.
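
A minimal sketch of doing that from JDBC, assuming the standard H2 CHECKPOINT command (the URL and the repeat count are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CheckpointExample {

    public static void main(String[] args) throws Exception {
        // Point the URL at the affected database; this path is illustrative.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
             Statement st = conn.createStatement()) {
            // CHECKPOINT flushes pending changes; per the advice above,
            // it may take a few calls before disk usage actually drops.
            for (int i = 0; i < 3; i++) {
                st.execute("CHECKPOINT");
            }
        }
    }
}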

 Regards,
 Thomas




 On Friday, February 6, 2015, Damien Coraboeuf damien.corabo...@gmail.com
 wrote:

 FYI, we were using 1.4.184 together with JDK 1.8.0u11 on CentOS 6,
 and the database kept growing in chunks of ~10 MB. We have upgraded to JDK
 1.8.0u31, and although the file keeps growing, it is now in tiny chunks of
 kilobytes. So much better. I just hope there is some kind of runtime
 cleanup going on, because it would still go over 1.0 GB in 30 days :(

 Is there a way to launch some SQL commands at runtime to make the
 database shrink?

 Damien.

 On Wednesday, 4 February 2015 23:01:07 UTC+1, Max Lord wrote:

 I also had problems with an mvstore db growing out of control (about 3GB
 for 1M rows). I updated to 1.4.184, reimported, and it was much smaller
 (100MB).

 So the recent changes have had a very positive effect.

 Unfortunately, I was using 1.4.178 because that was the current jar
 bundled with the jdbc-h2 ruby gem. That version doesn't seem like such a
 good default.

 On Monday, February 2, 2015 at 2:42:28 PM UTC-5, Damien Coraboeuf wrote:

 I have replaced my BLOB columns by BINARY(32000) ones (more than enough
 in our case). After exporting the database to SQL (the 'script' command),
 recreating a blank database and reimporting the SQL ('runscript'), I went
 from 1.7 Gb to 17 Mb.
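
A sketch of that export/reimport cycle over JDBC, assuming H2's documented SCRIPT TO and RUNSCRIPT FROM commands (all paths and credentials are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ExportReimport {

    public static void main(String[] args) throws Exception {
        // Dump the old database to a SQL script.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:/tmp/olddb", "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("SCRIPT TO '/tmp/backup.sql'");
        }
        // Replay the script into a freshly created, blank database.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:/tmp/newdb", "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("RUNSCRIPT FROM '/tmp/backup.sql'");
        }
    }
}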

 I'll monitor the database over the next few days to see if the inflation
 starts again.

 Damien.

 On Monday, 2 February 2015 17:40:49 UTC+1, Damien Coraboeuf wrote:

 Hi,

 Speaking of real-world examples - we are using H2 1.4.x to hold results
 for a continuous delivery chain. With 1.4.177, our database was > 600 Mb,
 and after a 'shutdown defrag', we went down to... 11 Mb. We switched to
 1.4.184 but now the database has grown to 1.7 Gb. That's a serious
 issue for us :(

 The URL we use is:

 jdbc:h2:/opt/ontrack/database/data;MODE=MYSQL

 Damien.

 On Monday, 5 January 2015 18:15:56 UTC+1, Thomas Mueller wrote:

 Hi,

 OK, that's nice! There is still quite a lot of room for improvement,
 and I don't consider this completely fixed, but I will no longer work on it
 with very high priority.

 Regards,
 Thomas


 On Sunday, December 21, 2014, Steve McLeod steve@gmail.com
 wrote:

 Hi Thomas,

 The database file size in 1.4.184 is much, much better than in
 earlier 1.4.x releases.

 I've done some trials and these are my findings:

 1.3.176: Fully loaded database after shutdown is 317 Mb
 1.4.184: Fully loaded database after shutdown is 380 Mb

 This seems reasonable.


 On Friday, 19 December 2014 17:15:29 UTC+8, Thomas Mueller wrote:

 Hi,

 Version 1.4.184 should produce smaller database files than previous
 versions (1.4.x - 1.4.182), maybe half or a third of the old file size. It
 would be great to get some real-world results!

 Regards,
 Thomas



 On Tue, May 6, 2014 at 6:24 PM, Thomas Mueller 
 thomas.to...@gmail.com wrote:

 Hi,

 Some initial results: you can shrink the database by running
 "shutdown compact" or "shutdown defrag". Each time this is run, it shrinks
 a few MB (up to some point, of course). This works, but it's relatively
 slow. Now the task is to make it faster. There are two ways: shrink it
 fully to the minimum size, and shrink it incrementally (like now) but
 faster. I'm working on that now.
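
As a sketch, issuing one of those commands over JDBC (the URL is illustrative; note that SHUTDOWN also closes the database):

import java.sql.Connection;
import java.sql.DriverManager;

public class ShutdownDefrag {

    public static void main(String[] args) throws Exception {
        // "shutdown defrag" compacts the file fully while closing the
        // database; "shutdown compact" does a quicker, partial compaction.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "")) {
            conn.createStatement().execute("SHUTDOWN DEFRAG");
        }
    }
}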

 Regards,
 Thomas



 On Tue, May 6, 2014 at 11:39 AM, Steve McLeod steve@gmail.com
  wrote:

 Hi Thomas,

 I've sent you a private email with a link to the new database
 file, made with H2 1.4.178

 Regards,

 Steve


 On Monday, 5 May 2014 07:46:16 UTC+2, Thomas Mueller wrote:

 Hi,

 The database file should shrink if you run "shutdown defrag".

 The current compact algorithm is quite inefficient, which means
 the database file is quite big on average. The highest priority is still
 to ensure it always works correctly, and when that's done I will work on
 re-using disk space more efficiently, and especially on compacting the
 file faster when closing the database.

 Could you send me the new database file? It would be nice to
 have a real-world database file to test this. The last file you sent
 helped a lot; thanks to it I found some problems that completely prevented
 the file from shrinking.

 Regards,
 Thomas




Re: [h2] Can't use h2 database in a thread only with Java 8

2015-02-12 Thread Eric Chatellier
On 02/02/2015 11:07, Eric Chatellier wrote:

 Now that Java 8 is becoming mainstream, and I am still experiencing this bug, I
 will investigate harder... 

I can't isolate this bug, but I'm experiencing it 100% of the time in my application.

So, I've no idea how to fix this issue :(

-- 
Éric Chatellier - www.codelutin.com - 02.40.50.29.28



Re: [h2] Can't use h2 database in a thread only with Java 8

2015-02-12 Thread Christoph Läubrich

Try the 'JDBC Probe' of JProfiler; that might give you an idea of:

- how connections are opened/closed
- what JDBC statements get executed
- what threads are involved

On 12.02.2015 12:36, Eric Chatellier wrote:

On 02/02/2015 11:07, Eric Chatellier wrote:

Now that Java 8 is becoming mainstream, and I am still experiencing this bug, I
will investigate harder...

I can't isolate this bug, but I'm experiencing it 100% of the time in my application.

So, I've no idea how to fix this issue :(




Re: [h2] MVStore.cacheChunkRef memory usage

2015-02-12 Thread Trask Stalnaker
The code below dumps a heap with ~86mb of cacheChunkRef. If you bump 
OUTER_LOOP_COUNT to 1,000,000, it dumps a heap with ~469mb of cacheChunkRef.

import java.lang.management.ManagementFactory;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

import javax.management.ObjectName;

public class MVStoreTest {

    private static final int OUTER_LOOP_COUNT = 10;

    public static void main(String[] args) throws Exception {
        Connection connection = DriverManager.getConnection(
                "jdbc:h2:~/test;compress=true", "sa", "");

        Statement statement = connection.createStatement();
        statement.execute("create table xyz (x varchar, y bigint)");
        statement.execute("create index xyz_idx on xyz (x, y)");

        PreparedStatement preparedStatement = connection.prepareStatement(
                "insert into xyz (x, y) values (?, ?)");

        // Insert rows in batches of 100, printing timing every 1000 batches.
        long start = System.currentTimeMillis();
        for (int i = 0; i < OUTER_LOOP_COUNT; i++) {
            for (int j = 0; j < 100; j++) {
                preparedStatement.setString(1, "x" + j);
                preparedStatement.setLong(2, i);
                preparedStatement.addBatch();
            }
            preparedStatement.executeBatch();
            if ((i + 1) % 1000 == 0) {
                long end = System.currentTimeMillis();
                System.out.println((i + 1) + " " + (end - start));
                start = end;
            }
        }

        // Trigger a heap dump via the HotSpotDiagnostic MBean.
        ManagementFactory.getPlatformMBeanServer().invoke(
                ObjectName.getInstance("com.sun.management:type=HotSpotDiagnostic"),
                "dumpHeap",
                new Object[] {"heapdump.hprof", true},
                new String[] {"java.lang.String", "boolean"});

        connection.close();
    }
}


On Wednesday, February 11, 2015 at 10:56:18 PM UTC-8, Thomas Mueller wrote:

 Hi,

 That means the LIRS cache keeps too many non-resident cold entries. I 
 wonder how best to reproduce this problem... Do you have a simple test case 
 (a description of what you do would be enough in this case)?

 Regards,
 Thomas

 On Thu, Feb 12, 2015 at 3:17 AM, Trask Stalnaker trask.s...@gmail.com
 wrote:

 Hi,

 I was looking over a heap dump and was surprised by MVStore.cacheChunkRef 
 consuming 29mb of memory.

 MVStore.cache is consuming 14mb, which makes sense given the 16mb default 
 cache limit.
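
As a side note, a minimal sketch of adjusting that limit, assuming H2's documented CACHE_SIZE setting, which is given in KB (the value below just restates the default):

import java.sql.Connection;
import java.sql.DriverManager;

public class CacheSizeExample {

    public static void main(String[] args) throws Exception {
        // CACHE_SIZE is in KB; 16384 KB matches the 16mb default limit.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:~/test;CACHE_SIZE=16384", "sa", "")) {
            System.out.println("connected with a 16 MB page cache");
        }
    }
}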

 Eclipse MemoryAnalyzer OQL shows 100,000+ 
 org.h2.mvstore.cache.CacheLongKeyLIRS$Entry objects with memory = 0 and 
 value = null:

 select * from org.glowroot.shaded.h2.mvstore.cache.CacheLongKeyLIRS$Entry
 where value = null and memory = 0


 Is this memory consumption expected? Is there a concern that memory may grow 
 unbounded, given enough of these objects with memory = 0, since H2 won't 
 count them against its internal memory limit?

 Using latest 1.4.185.

 Thanks,
 Trask







[h2] [90007-169] The object is already closed - Weird behavior of JdbcResultSet

2015-02-12 Thread Duc Nguyen
Hi,

Some of our customers have reported getting "The object is already 
closed" errors, and the application has not been returning correct 
results since those errors appeared. Debugging with the DEBUG_LEVEL_3 
config param added to the connection URL, we found these traces:

02-12 14:11:56 jdbc[3]: 
/**/Statement stat25629 = conn4.createStatement(1004, 1007);
02-12 14:11:56 jdbc[14]: 
/*SQL */COMMIT;
02-12 14:11:56 jdbc[3]: 
/**/stat25628.execute("Select COUNT(*) AS rowcount From VHost");
02-12 14:11:56 lock: 3 shared read lock requesting for VHOST
02-12 14:11:56 lock: 3 shared read lock ok VHOST
02-12 14:11:56 jdbc[3]: 
/*SQL #:1*/Select COUNT(*) AS rowcount From VHost;
02-12 14:11:56 lock: 3 shared read lock unlock VHOST
02-12 14:11:56 jdbc[3]: 
/**/stat25628.getUpdateCount();
02-12 14:11:56 jdbc[3]: 
/**/stat25629.getResultSet();
02-12 14:11:56 lock: 13 shared read lock requesting for VPORTGROUP
02-12 14:11:56 lock: 13 shared read lock ok VPORTGROUP
02-12 14:11:56 jdbc[14]: 
/**/Statement stat25630 = conn15.createStatement(1004, 1007);
02-12 14:11:56 jdbc[14]: 
/**/stat25630.execute("update LocatorForward set MACAddress='d8c7c8:cc5e18',SwitchIPAddress='172.17.5.45',LinkCount='0',linkIpAddr='',TimeStmp='2015-02-12 14:11:33.241',IfIndex='4018',Slot='1',Port='1',VlanID='3105',IfSpeed='0',IfAdminStatus='1',PortDuplexMode='-1',userid=NULL,domain='0',disposition='1',unp='',classsource='0',serviceId='3105',isid='0',Chassis='0' where MACAddress='d8c7c8:cc5e18' and SwitchIPAddress='172.17.5.45' and VlanID='3105'");
02-12 14:11:56 jdbc[3]: 
/**/stat25629.close();
02-12 14:11:56 jdbc[3]: 
/**/Statement stat25631 = conn4.createStatement(1004, 1007);
02-12 14:11:56 jdbc[3]: 
/**/stat25631.execute("Select COUNT(*) AS rowcount From VDataCenter");
02-12 14:11:56 jdbc[3]: 
/**/PreparedStatement prep14217 = conn4.prepareStatement("select ID, NAME, VMID, HOSTID, MACADDRESS, NETWORKNAME, ADDRESSTYPE, VCENTERID, STATUS, TIMESTAMP, CREATETIMESTAMP, PORTKEY, PORTGROUPKEY from VNETWORK where status='A' and vmid=13451 and vcenterid=1", 1003, 1007);
02-12 14:11:56 jdbc[3]: 
/**/stat25629.execute("update RevDnsInfo set IpAddress='172.31.46.71',IpName='',InfoTime='2015-02-12 14:11:50.858',NeedTime='1970-01-01 10:00:00.000' where IpAddress='172.31.46.71'");
02-12 14:11:56 jdbc[3]: exception
org.h2.jdbc.JdbcSQLException: The object is already closed [90007-169]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:329)
at org.h2.message.DbException.get(DbException.java:169)
at org.h2.message.DbException.get(DbException.java:146)
at org.h2.message.DbException.get(DbException.java:135)
at org.h2.jdbc.JdbcStatement.checkClosed(JdbcStatement.java:928)
at org.h2.jdbc.JdbcStatement.checkClosedForWrite(JdbcStatement.java:915)
at org.h2.jdbc.JdbcStatement.executeInternal(JdbcStatement.java:160)
at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:152)


Please look at stat25629: it was created, then getResultSet() was called on it 
without any query being executed, and close() was called before it was used to 
execute("update RevDnsInfo set IpAddress='172.31.46.71',IpName='',InfoTime='2015-02-12 
14:11:50.858',NeedTime='1970-01-01 10:00:00.000' where IpAddress='172.31.46.71'"). 
Even the executed query is not correct; it should have been a select query instead.

Our application uses an embedded H2 database with autocommit=true, 
LOCK_TIMEOUT=5000ms, and no other configuration. The application allows many 
JDBC connections to run at once and maintains them in code; however, the code 
makes sure that a connection can only be accessed by one thread at a time, 
until it finishes its job and is released.
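
As an illustration of that confinement, a minimal sketch (a hypothetical GuardedConnection helper, not our actual code):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical helper: every use of the underlying connection goes through
// a synchronized method, so only one thread can touch it at a time.
public class GuardedConnection {

    private final Connection connection;

    public GuardedConnection(Connection connection) {
        this.connection = connection;
    }

    public synchronized boolean execute(String sql) throws SQLException {
        try (Statement st = connection.createStatement()) {
            return st.execute(sql);
        }
    }
}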

Is there any possible case in which the above scenario can happen?



Re: [h2] 1.4 beta creates much bigger database file

2015-02-12 Thread Thomas Mueller
Hi,

Disk space is re-used after 45 seconds (the default retention time). Disk
space should stabilise at some point, unless you add more and more data.

"checkpoint" should reduce disk space usage, but you may need to call it a
few times.

Regards,
Thomas

 On Sunday, May 4, 2014, Steve McLeod steve@gmail.com wrote:

 Hi Thomas,

 I tested the same large data import with H2 1.4.178, and there
 is no improvement over H2 1.4.177.

 Here are the file sizes, in both cases after the app has stopped:

 H2 1.3.176: pokercopilot.h2.db  301,669,352  bytes
 H2 1.4.178: pokercopilot.mv.db 1,023,037,440  bytes

 Let me know what I can do to help.

 Regards,

 Steve


 On Saturday, 19 April 2014 11:44:05 UTC+2, Steve McLeod wrote:

 Hi Thomas,

 Great! Glad I could help make your superb product even