Re: Automatically compact databases from time to time (as a background process)

2012-03-05 Thread IntensiveH2
Thanks for the analysis.
In the log file related to the DB with the issue, I found:
INFO   | jvm 1| 2012/02/05 19:12:14 | 2012/02/05 19:12:14 ERROR 
[aseWorkerThread-cockpitDB] mserver.H2Listener - [DATABASE] 
exceptionThrown sql=UPDATE MCTASK SET 
name=?,description=?,idmccommand=?,isdeleted=?,creationdate=?,updatedate=?,timeout=?,issplit=?,idmccodeflowtype=?,idmcworfklow=?,parallelexecution=?,idmccodestatuspropagation=?
 
WHERE id = ?
INFO   | jvm 1| 2012/02/05 19:12:14 | org.h2.jdbc.JdbcSQLException: 
Timeout trying to lock table ; SQL statement:
INFO   | jvm 1| 2012/02/05 19:12:14 | UPDATE MCTASK SET 
name=?,description=?,idmccommand=?,isdeleted=?,creationdate=?,updatedate=?,timeout=?,issplit=?,idmccodeflowtype=?,idmcworfklow=?,parallelexecution=?,idmccodestatuspropagation=?
 
WHERE id = ? [50200-161]
Do you think that this message is related to the problem?
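For what it's worth, the session lock timeout can be raised while investigating; this is only a sketch assuming a plain JDBC connection is at hand (the value is illustrative), and it only treats the symptom, not a transaction that stays open:

    // Hypothetical helper: raise this session's lock timeout (in milliseconds).
    static void raiseLockTimeout(java.sql.Connection conn) throws java.sql.SQLException {
        try (java.sql.Statement stat = conn.createStatement()) {
            stat.execute("SET LOCK_TIMEOUT 10000"); // illustrative value
        }
    }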

Also, have you also analysed the 12 GB database (under the veryhuge folder)?
I'm interested to know whether the issue is the same.

Thierry.
On Friday, 2 March 2012 at 16:33:09 UTC+1, Thomas Mueller wrote:
>
> Hi,
>
> I have downloaded the file now and ran the Recover tool. The database
> file is 3 GB, and the cockpit.h2.sql file is 9 GB. The end (tail) of
> the .h2.sql script is:
>
>  Statistics 
> -- page count: 1552048, free: 24350
> -- page data bytes: head 486669, empty 15198569, rows 8567178 (38% full)
> -- free 7%, 113513 page(s)
> -- data leaf 0%, 11842 page(s)
> -- data node 0%, 843 page(s)
> -- data overflow 1%, 20810 page(s)
> -- btree leaf 0%, 1170 page(s)
> -- btree node 0%, 17 page(s)
> -- free list 0%, 7 page(s)
> -- stream trunk 0%, 2764 page(s)
> -- stream data 90%, 1401079 page(s)
>
> That means 90% of the file size is transaction log. This is what I
> expected. The transaction that was kept open was:
>
>  Transaction log 
>
> -- session 257 table 115 - 728
> -- session 257 table 115 + ( /* key:794 */ 442, CAST('Discover and
> Provision Cisco_Others' AS VARCHAR_IGNORECASE), CAST('' AS
> VARCHAR_IGNORECASE), 95, 0, NULL
> , NULL, 0, 0, 34, 76, 1, 49)
>
> So it was connection #257 that did this: delete from table MCTASK
> (table 115, O_115) where key=728, insert into MCTASK (the values
> above). This connection was never committed or rolled back, so the
> complete transaction log has to be kept around.
>
> The database itself is quite small. I saw you store lots of XML data,
> uncompressed, as VARCHAR. I suggest using CLOB instead, and compressing
> the data. This will save additional space and probably speed things
> up.
>
> You use quite a lot of memory tables. I'm not sure whether this is on purpose.
>
> Regards,
> Thomas
>
>




Re: Automatically compact databases from time to time (as a background process)

2012-03-02 Thread Thomas Mueller
Hi,

I have downloaded the file now and ran the Recover tool. The database
file is 3 GB, and the cockpit.h2.sql file is 9 GB. The end (tail) of
the .h2.sql script is:

 Statistics 
-- page count: 1552048, free: 24350
-- page data bytes: head 486669, empty 15198569, rows 8567178 (38% full)
-- free 7%, 113513 page(s)
-- data leaf 0%, 11842 page(s)
-- data node 0%, 843 page(s)
-- data overflow 1%, 20810 page(s)
-- btree leaf 0%, 1170 page(s)
-- btree node 0%, 17 page(s)
-- free list 0%, 7 page(s)
-- stream trunk 0%, 2764 page(s)
-- stream data 90%, 1401079 page(s)

That means 90% of the file size is transaction log. This is what I
expected. The transaction that was kept open was:

 Transaction log 

-- session 257 table 115 - 728
-- session 257 table 115 + ( /* key:794 */ 442, CAST('Discover and
Provision Cisco_Others' AS VARCHAR_IGNORECASE), CAST('' AS
VARCHAR_IGNORECASE), 95, 0, NULL
, NULL, 0, 0, 34, 76, 1, 49)

So it was connection #257 that did this: delete from table MCTASK
(table 115, O_115) where key=728, insert into MCTASK (the values
above). That transaction was never committed or rolled back, so the
complete transaction log has to be kept around.
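As an aside, a minimal sketch of the pattern that avoids this, assuming plain JDBC with auto-commit off as in your URL (the table, column and helper names are illustrative):

    static void updateTask(javax.sql.DataSource ds, long id, String name) throws java.sql.SQLException {
        try (java.sql.Connection conn = ds.getConnection()) {
            try (java.sql.PreparedStatement ps =
                    conn.prepareStatement("UPDATE MCTASK SET name = ? WHERE id = ?")) {
                ps.setString(1, name);
                ps.setLong(2, id);
                ps.executeUpdate();
                conn.commit();       // end the transaction explicitly
            } catch (java.sql.SQLException e) {
                conn.rollback();     // never leave a transaction open on a pooled connection
                throw e;
            }
        }
    }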

The database itself is quite small. I saw you store lots of XML data,
uncompressed, as VARCHAR. I suggest using CLOB instead, and compressing
the data. This will save additional space and probably speed things
up.
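A rough sketch of that suggestion (the table and column names are illustrative; SET COMPRESS_LOB is the PageStore-era setting, with LZF or DEFLATE):

    static void compressXmlStorage(java.sql.Connection conn) throws java.sql.SQLException {
        try (java.sql.Statement stat = conn.createStatement()) {
            stat.execute("SET COMPRESS_LOB LZF"); // compress LOB data on disk
            // Store the XML as CLOB instead of uncompressed VARCHAR.
            stat.execute("ALTER TABLE MCTASK ALTER COLUMN description CLOB");
        }
    }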

You use quite a lot of memory tables. I'm not sure whether this is on purpose.

Regards,
Thomas




Re: Automatically compact databases from time to time (as a background process)

2012-03-02 Thread Thomas Mueller
Hi,

> I added TRACE_LEVEL_FILE=3 during shutdown compact and added result in 
> attachment.

This will not really help much. It shows that the database shrank, and that's
all; it doesn't say why the database was big.

Could you send me (or upload somewhere) the large database, or run the
Recover tool on the large database, and check whether there is a large
transaction log?
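For reference, a sketch of running the Recover tool programmatically (directory and database name are illustrative; java -cp h2*.jar org.h2.tools.Recover does the same from the command line):

    public class DumpDatabase {
        public static void main(String[] args) throws java.sql.SQLException {
            // Writes cockpit.h2.sql next to the database file; the tail of that script
            // contains the statistics and any still-open transaction log entries.
            org.h2.tools.Recover.execute("/path/to/db/dir", "cockpit");
        }
    }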

Regards,
Thomas




Re: Automatically compact databases from time to time (as a background process)

2012-02-28 Thread Noel Grandin

Hi

OK, so it turns out my original idea was a bad one.
Because of the way the locking works, it is not possible to display that 
information in the sessions table.


However, using JMX works. The attached patch correctly displays ongoing
transactions using the H2 JMX facility.

See the unit test in the patch for how it works.
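A rough sketch of inspecting those beans from client code, assuming the database URL has ;JMX=TRUE appended and that the MBeans live in the org.h2 domain (both are assumptions here, not taken from the patch):

    public class ListH2Beans {
        public static void main(String[] args) throws Exception {
            javax.management.MBeanServer server =
                    java.lang.management.ManagementFactory.getPlatformMBeanServer();
            // List every H2 MBean registered in this JVM; their attributes and
            // operations are where the ongoing-transaction information shows up.
            for (javax.management.ObjectName name
                    : server.queryNames(new javax.management.ObjectName("org.h2:*"), null)) {
                System.out.println(name);
            }
        }
    }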

Regards, Noel.

On 2012-02-27 12:18, IntensiveH2 wrote:

Any news regarding my issue?

The last status was:
- command compact works but closes all current connection
- I tried "TRANSACTION_TIMEOUT" = 33554432 but I have issue.
- I tried code with TRANSACTION_START but WHERE TRANSACTION_START IS
NOT NULL returns nothing.

On 23 Feb, 10:56, IntensiveH2 wrote:

Hi,

I tested the compact command.
I have no issue regarding "corruption" but I need time to perform
additional validations.
But "compact command" closed all opened connection and if you have a
"pool" of connection, it's a problem
The error message is:
org.h2.jdbc.JdbcSQLException: Database is already closed (to disable
automatic closing at VM shutdown, add ";DB_CLOSE_ON_EXIT=FALSE" to the
db URL) [90121-164]

Thierry.

On 15 Feb, 13:13, Noel Grandin wrote:








Hi
I had a quick bash at implementing this by taking an exclusive lock on
the DB and re-using the existing compacting code.
So I added a
   COMPACT
command.
But the unit test I added indicates that I'm corrupting the database
somehow.
Patch attached - Thomas, perhaps you have an idea?
Regards, Noel.
On 2012-02-15 11:51, IntensiveH2 wrote:

Thanks for the quick answer but currently it's not possible to reboot
my application on a regular basis because my application must ensure
scheduling, failover, HA 
and it's not possible to stop it.
I really need to have a solution at runtime without restarting my
application.
Thanks.
On 15 Feb, 10:40, Noel Grandin wrote:

Not sure what Thomas' plans are,
but what I can suggest is that you set this setting to a nice large number
-Dh2.maxCompactTime=6
And reboot your application on a regular basis.
See here: http://www.h2database.com/html/grammar.html#shutdown
which sets the maximum time to compact the database on shutdown (in
milliseconds).
On 2012-02-15 11:05, IntensiveH2 wrote:

Hi,
I use H2 in a commercial product and I have an issue regarding the
size of the DB (with a customer).
Currently the DB size is 12 GB and after a shutdown compact/restart
the new size is 800 MB
When you start connection on DB (12 GB) it took around 70 minutes to
start (on solaris) before to shutdow it with the compact option.
This is not acceptable (from customer comment).
NOTE: current version used of h2 are 1.3.159, 1.3.161 and 1.3.163 (the
result is the same for all version)
Do you have planned (urgently) to defragment/compact at runtime
(similar to SHUTDOWN COMPACT) in a background thread?
Do you have another solution to avoid this issue (long startup and
compact at runtime)?
Best regards.
Thierry.

[attachment: online-compact.diff, 11 KB]



Index: src/main/org/h2/engine/Session.java
===
--- src/main/org/h2/engine/Session.java (revision 4123)
+++ src/main/org/h2/engine/Session.java (working copy)
@@ -488,6 +488,7 @@
 unlinkLobMap = null;
 }
 unlockAll();
+transactionStart = 0;
 }
 
 private void checkCommitRollback() {
@@ -516,6 +517,7 @@
 autoCommit = true;
 autoCommitAtTransactionEnd = false;
 }
+transactionStart = 0;
 }
 
 /**
@@ -834,9 +836,6 @@
  * Wait for some time if this session is throttled (slowed down).
  */
 public void throttle() {
-if (currentCommandStart == 0) {
-currentCommandStart = System.currentTimeMillis();
-}
 if (throttle == 0) {
 return;
 }
@@ -860,10 +859,14 @@
  */
 public void setCurrentCommand(Command command) {
 this.currentCommand = command;
-if (queryTimeout > 0 && command != null) {
+if (command != null) {
 long now = System.currentTimeMillis();
 currentCommandStart = now;
-cancelAt = now + queryTimeout;
+if (queryTimeout > 0) {
+cancelAt = now + queryTimeout;
+}
+} else {
+currentCommandStart = 0;
 }
 }
 
@@ -1054,6 +1057,7 @@
 public void begin() {
 autoCommitAtTransactionEnd = true;
 autoCommit = false;
+transactionStart = System.currentTimeMillis();
 }
 
 public long getSessionStart() {
@@ -1061,9 +1065,6 @@
 }
 
 public long getTransactionStart() {
-

Re: Automatically compact databases from time to time (as a background process)

2012-02-27 Thread IntensiveH2
Any news regarding my issue?

The last status was:
- the COMPACT command works, but it closes all current connections
- I tried "TRANSACTION_TIMEOUT" = 33554432, but I have an issue.
- I tried the code with TRANSACTION_START, but WHERE TRANSACTION_START IS
NOT NULL returns nothing.

On 23 Feb, 10:56, IntensiveH2 wrote:
> Hi,
>
> I tested the compact command.
> I have no issue regarding "corruption" but I need time to perform
> additional validations.
> But "compact command" closed all opened connection and if you have a
> "pool" of connection, it's a problem
> The error message is:
> org.h2.jdbc.JdbcSQLException: Database is already closed (to disable
> automatic closing at VM shutdown, add ";DB_CLOSE_ON_EXIT=FALSE" to the
> db URL) [90121-164]
>
> Thierry.
>
> On 15 Feb, 13:13, Noel Grandin wrote:
>
>
>
>
>
>
>
> > Hi
>
> > I had a quick bash at implementing this by taking an exclusive lock on
> > the DB and re-using the existing compacting code.
> > So I added a
> >   COMPACT
> > command.
>
> > But the unit test I added indicates that I'm corrupting the database
> > somehow.
>
> > Patch attached - Thomas, perhaps you have an idea?
>
> > Regards, Noel.
>
> > On 2012-02-15 11:51, IntensiveH2 wrote:
>
> > > Thanks for the quick answer but currently it's not possible to reboot
> > > my application on a regular basis because my application must ensure
> > > scheduling, failover, HA 
> > > and it's not possible to stop it.
> > > I really need to have a solution at runtime without restarting my
> > > application.
> > > Thanks.
>
> > > On 15 Feb, 10:40, Noel Grandin wrote:
> > >> Not sure what Thomas' plans are,
>
> > >> but what I can suggest is that you set this setting to a nice large 
> > >> number
> > >> -Dh2.maxCompactTime=6
> > >> And reboot your application on a regular basis.
>
> > >> See here: http://www.h2database.com/html/grammar.html#shutdown
> > >> which sets the maximum time to compact the database on shutdown (in
> > >> milliseconds).
>
> > >> On 2012-02-15 11:05, IntensiveH2 wrote:
>
> > >>> Hi,
> > >>> I use H2 in a commercial product and I have an issue regarding the
> > >>> size of the DB (with a customer).
> > >>> Currently the DB size is 12 GB and after a shutdown compact/restart
> > >>> the new size is 800 MB
> > >>> When you start connection on DB (12 GB) it took around 70 minutes to
> > >>> start (on solaris) before to shutdow it with the compact option.
> > >>> This is not acceptable (from customer comment).
> > >>> NOTE: current version used of h2 are 1.3.159, 1.3.161 and 1.3.163 (the
> > >>> result is the same for all version)
> > >>> Do you have planned (urgently) to defragment/compact at runtime
> > >>> (similar to SHUTDOWN COMPACT) in a background thread?
> > >>> Do you have another solution to avoid this issue (long startup and
> > >>> compact at runtime)?
> > >>> Best regards.
> > >>> Thierry.
>
> > [attachment: online-compact.diff, 11 KB]




Re: Automatically compact databases from time to time (as a background process)

2012-02-23 Thread IntensiveH2
Hi,

I tested the COMPACT command.
I have no issue regarding "corruption", but I need time to perform
additional validations.
However, the COMPACT command closed all open connections, and if you have a
connection "pool", that's a problem.
The error message is:
org.h2.jdbc.JdbcSQLException: Database is already closed (to disable
automatic closing at VM shutdown, add ";DB_CLOSE_ON_EXIT=FALSE" to the
db URL) [90121-164]

Thierry.

On 15 Feb, 13:13, Noel Grandin wrote:
> Hi
>
> I had a quick bash at implementing this by taking an exclusive lock on
> the DB and re-using the existing compacting code.
> So I added a
>   COMPACT
> command.
>
> But the unit test I added indicates that I'm corrupting the database
> somehow.
>
> Patch attached - Thomas, perhaps you have an idea?
>
> Regards, Noel.
>
> On 2012-02-15 11:51, IntensiveH2 wrote:
>
>
>
>
>
>
>
> > Thanks for the quick answer but currently it's not possible to reboot
> > my application on a regular basis because my application must ensure
> > scheduling, failover, HA 
> > and it's not possible to stop it.
> > I really need to have a solution at runtime without restarting my
> > application.
> > Thanks.
>
> > > On 15 Feb, 10:40, Noel Grandin wrote:
> >> Not sure what Thomas' plans are,
>
> >> but what I can suggest is that you set this setting to a nice large number
> >> -Dh2.maxCompactTime=6
> >> And reboot your application on a regular basis.
>
> >> See here: http://www.h2database.com/html/grammar.html#shutdown
> >> which sets the maximum time to compact the database on shutdown (in
> >> milliseconds).
>
> >> On 2012-02-15 11:05, IntensiveH2 wrote:
>
> >>> Hi,
> >>> I use H2 in a commercial product and I have an issue regarding the
> >>> size of the DB (with a customer).
> >>> Currently the DB size is 12 GB and after a shutdown compact/restart
> >>> the new size is 800 MB
> >>> When you start connection on DB (12 GB) it took around 70 minutes to
> >>> start (on solaris) before to shutdow it with the compact option.
> >>> This is not acceptable (from customer comment).
> >>> NOTE: current version used of h2 are 1.3.159, 1.3.161 and 1.3.163 (the
> >>> result is the same for all version)
> >>> Do you have planned (urgently) to defragment/compact at runtime
> >>> (similar to SHUTDOWN COMPACT) in a background thread?
> >>> Do you have another solution to avoid this issue (long startup and
> >>> compact at runtime)?
> >>> Best regards.
> >>> Thierry.
>
>
>
> [attachment: online-compact.diff, 11 KB]




Re: Automatically compact databases from time to time (as a background process)

2012-02-23 Thread IntensiveH2
Hi,

I tried your proposal, but I get the issue below (with public long
transactionTimeout = 33554432):

INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: Error
in WrapperListener.start callback.  java.lang.StackOverflowError
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error:
java.lang.StackOverflowError
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
java.util.HashMap$KeyIterator.(HashMap.java:826)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
java.util.HashMap$KeyIterator.(HashMap.java:826)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
java.util.HashMap.newKeyIterator(HashMap.java:840)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
java.util.HashMap$KeySet.iterator(HashMap.java:874)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
java.util.HashSet.iterator(HashSet.java:153)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
java.util.AbstractCollection.toArray(AbstractCollection.java:120)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
java.util.Collections$SynchronizedCollection.toArray(Collections.java:
1566)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
java.util.ArrayList.(ArrayList.java:131)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
org.h2.util.New.arrayList(New.java:71)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
org.h2.engine.Database.getSessions(Database.java:1410)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
org.h2.store.PageStore.getFirstUncommittedSection(PageStore.java:830)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
org.h2.store.PageStore.checkpoint(PageStore.java:427)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
org.h2.store.PageStore.commit(PageStore.java:1432)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
org.h2.engine.Database.commit(Database.java:1767)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
org.h2.engine.Session.rollback(Session.java:511)
INFO   | jvm 1| 2012/02/23 08:54:21 | WrapperManager Error: at
org.h2.store.PageStore.commit(PageStore.java:1457)
On 20 Feb, 19:49, Thomas Mueller wrote:
> Hi,
>
> If you want, I have some code that could be used as a starting point.
> It's work in progress.
>
> Warning: I havent test it; I didn't run it at all. It could be
> completely wrong, and possibly corrupt the database. Also, the term
> "TRANSACTION_TIMEOUT" is misleading and has to be changed (it should
> be something like "really maximum transaction log size; if the
> transaction log is larger than this then the transaction of the
> session with the oldest open transaction is rolled back").
>
> http://h2database.com/p.html#29e32a33bca7541cb70ea3cc9d483814
>
> Regards,
> Thomas




Re: Automatically compact databases from time to time (as a background process)

2012-02-21 Thread IntensiveH2
Also, some more information.
I added DEFRAG_ALWAYS=TRUE, and now when I shutdown-compact the
database, I get the following error:
first time
java.lang.RuntimeException: Undo entry not written
at org.h2.message.DbException.throwInternalError(DbException.java:
228)
at org.h2.store.PageLog.addUndo(PageLog.java:486)
at org.h2.store.PageStore.free(PageStore.java:1211)
at org.h2.store.PageStore.free(PageStore.java:1194)
at org.h2.store.PageStore.swap(PageStore.java:675)
at org.h2.store.PageStore.compact(PageStore.java:586)
at org.h2.engine.Database.closeOpenFilesAndUnlock(Database.java:1198)
at org.h2.engine.Database.close(Database.java:1148)
at org.h2.engine.Database.removeSession(Database.java:1027)
at org.h2.engine.Session.close(Session.java:563)
at
org.h2.command.dml.TransactionCommand.update(TransactionCommand.java:
110)
at org.h2.command.CommandContainer.update(CommandContainer.java:73)
at org.h2.command.Command.executeUpdate(Command.java:226)
at org.h2.jdbc.JdbcStatement.executeInternal(JdbcStatement.java:177)
at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:152)

second time
java.lang.RuntimeException: not free: [399989] stream data key:75832
pos:11 remaining:0
at org.h2.message.DbException.throwInternalError(DbException.java:
228)
at org.h2.store.PageStore.compact(PageStore.java:693)
at org.h2.store.PageStore.compact(PageStore.java:523)
at org.h2.engine.Database.closeOpenFilesAndUnlock(Database.java:1198)
at org.h2.engine.Database.close(Database.java:1148)
at org.h2.engine.Database.removeSession(Database.java:1027)
at org.h2.engine.Session.close(Session.java:563)
at
org.h2.command.dml.TransactionCommand.update(TransactionCommand.java:
110)
at org.h2.command.CommandContainer.update(CommandContainer.java:73)
at org.h2.command.Command.executeUpdate(Command.java:226)
at org.h2.jdbc.JdbcStatement.executeInternal(JdbcStatement.java:177)
at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:152)
at
com.infovista.mserver.actions.OJBLaunchForH2$OJBRun.stopNow(OJBLaunchForH2.java:
116)
at
com.infovista.mserver.actions.OJBLaunch.actionPerformed(OJBLaunch.java:
256)
at com.infovista.mserver.ManagementServer.stop(ManagementServer.java:
2545)
at
com.infovista.mserver.ManagementServerSvc.stop(ManagementServerSvc.java:
79)
at org.tanukisoftware.wrapper.WrapperManager
$13.run(WrapperManager.java:4255)

If I remove the option, there is no error.

On 20 Feb, 19:49, Thomas Mueller wrote:
> Hi,
>
> If you want, I have some code that could be used as a starting point.
> It's work in progress.
>
> Warning: I havent test it; I didn't run it at all. It could be
> completely wrong, and possibly corrupt the database. Also, the term
> "TRANSACTION_TIMEOUT" is misleading and has to be changed (it should
> be something like "really maximum transaction log size; if the
> transaction log is larger than this then the transaction of the
> session with the oldest open transaction is rolled back").
>
> http://h2database.com/p.html#29e32a33bca7541cb70ea3cc9d483814
>
> Regards,
> Thomas




Re: Automatically compact databases from time to time (as a background process)

2012-02-20 Thread Thomas Mueller
Hi,

If you want, I have some code that could be used as a starting point.
It's work in progress.

Warning: I haven't tested it; I didn't run it at all. It could be
completely wrong, and possibly corrupt the database. Also, the term
"TRANSACTION_TIMEOUT" is misleading and has to be changed (it should
be something like "really maximum transaction log size; if the
transaction log is larger than this then the transaction of the
session with the oldest open transaction is rolled back").

http://h2database.com/p.html#29e32a33bca7541cb70ea3cc9d483814

Regards,
Thomas




Re: Automatically compact databases from time to time (as a background process)

2012-02-20 Thread IntensiveH2
Please, can you also send me the jar file directly? I don't have the
build environment yet.


On 20 Feb, 17:25, Noel Grandin wrote:
> Make sure you can build and test the H2 source code locally, and I'll
> make you a patch tomorrow for displaying the transaction start time in
> the SESSIONS table.
>
> Then you can log the output of that table regularly to see who the culprit is.
>
>
>
>
>
>
>
> On Mon, Feb 20, 2012 at 18:16, IntensiveH2  wrote:
> > For information
> > My connection string is:
> > jdbc:h2:file:D:\ProductDev\cockpitconsole\build\projects\mserver\dist
> > \h2\cockpit
> > \cockpit;MVCC=TRUE;AUTOCOMMIT=OFF;LOCK_MODE=3;LOG=2;CACHE_TYPE=SOFT_LRU
> > I change also some settings like:
> > statement.execute("set WRITE_DELAY 0");
> > statement.execute("SET MAX_MEMORY_ROWS 1");
> > statement.execute("SET MAX_LOG_SIZE 32");
>
> > Also, When I shutdow defrag the DB size moves from 12GB to 800 MB but
> > after a Recover tool to extract SQL and a redo in a fresh DB the size
> > is  200MB.
> > After, I shutdown compact again and the DB size is 80 MB.
>
> > I don't really understand  how I can progress on the subject.
> > It's very important for me because the DB grows on customer side
> > (200MB per day for nothing) and I don't know how to fix it.
>
> > On 18 Feb, 08:27, Thomas Mueller wrote:
> >> Hi,
>
> >> > Can I add some debug settings on customer side?
>
> >> There are two options:
>
> >> - Use the trace options to analyze the JDBC method calls (append
> >> ;TRACE_LEVEL_FILE=3 to the database URL)
>
> >> - Use the Recover tool to find out if there are uncommitted
> >> transactions in the (large) database file (or a copy of the file; in
> >> this case you could even copy the database file while it's in use,
> >> because you don't need a 'save' copy of the database just to analyze
> >> the transaction log).
>
> >> > Perhaps we need to implement some kind of default timeout on how long a
> >> > transaction can stay open?
>
> >> Thats a very good idea. I wonder if other databases support something
> >> like this?
>
> >> At the very least there should be a warning in the .trace.db file. I
> >> will have a look at that.
>
> >> Regards,
> >> Thomas
>




Re: Automatically compact databases from time to time (as a background process)

2012-02-20 Thread Noel Grandin
Make sure you can build and test the H2 source code locally, and I'll
make you a patch tomorrow for displaying the transaction start time in
the SESSIONS table.

Then you can log the output of that table regularly to see who the culprit is.
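Something along these lines, for example (a sketch only; the exact column set of INFORMATION_SCHEMA.SESSIONS depends on the H2 version, and TRANSACTION_START is the column the patch would add):

    static void logSessions(java.sql.Connection conn) throws java.sql.SQLException {
        try (java.sql.Statement stat = conn.createStatement();
             java.sql.ResultSet rs = stat.executeQuery(
                     "SELECT * FROM INFORMATION_SCHEMA.SESSIONS")) {
            while (rs.next()) {
                // Log at least the session id, user and current statement; with the
                // patch, TRANSACTION_START would point out long-open transactions.
                System.out.println(rs.getInt("ID") + " " + rs.getString("USER_NAME")
                        + " " + rs.getString("STATEMENT"));
            }
        }
    }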

On Mon, Feb 20, 2012 at 18:16, IntensiveH2  wrote:
> For information
> My connection string is:
> jdbc:h2:file:D:\ProductDev\cockpitconsole\build\projects\mserver\dist
> \h2\cockpit
> \cockpit;MVCC=TRUE;AUTOCOMMIT=OFF;LOCK_MODE=3;LOG=2;CACHE_TYPE=SOFT_LRU
> I change also some settings like:
> statement.execute("set WRITE_DELAY 0");
> statement.execute("SET MAX_MEMORY_ROWS 1");
> statement.execute("SET MAX_LOG_SIZE 32");
>
> Also, When I shutdow defrag the DB size moves from 12GB to 800 MB but
> after a Recover tool to extract SQL and a redo in a fresh DB the size
> is  200MB.
> After, I shutdown compact again and the DB size is 80 MB.
>
> I don't really understand  how I can progress on the subject.
> It's very important for me because the DB grows on customer side
> (200MB per day for nothing) and I don't know how to fix it.
>
> On 18 Feb, 08:27, Thomas Mueller wrote:
>> Hi,
>>
>> > Can I add some debug settings on customer side?
>>
>> There are two options:
>>
>> - Use the trace options to analyze the JDBC method calls (append
>> ;TRACE_LEVEL_FILE=3 to the database URL)
>>
>> - Use the Recover tool to find out if there are uncommitted
>> transactions in the (large) database file (or a copy of the file; in
>> this case you could even copy the database file while it's in use,
>> because you don't need a 'save' copy of the database just to analyze
>> the transaction log).
>>
>> > Perhaps we need to implement some kind of default timeout on how long a
>> > transaction can stay open?
>>
>> Thats a very good idea. I wonder if other databases support something
>> like this?
>>
>> At the very least there should be a warning in the .trace.db file. I
>> will have a look at that.
>>
>> Regards,
>> Thomas
>




Re: Automatically compact databases from time to time (as a background process)

2012-02-20 Thread IntensiveH2
For information
My connection string is:
jdbc:h2:file:D:\ProductDev\cockpitconsole\build\projects\mserver\dist
\h2\cockpit
\cockpit;MVCC=TRUE;AUTOCOMMIT=OFF;LOCK_MODE=3;LOG=2;CACHE_TYPE=SOFT_LRU
I also change some settings, for example:
statement.execute("set WRITE_DELAY 0");
statement.execute("SET MAX_MEMORY_ROWS 1");
statement.execute("SET MAX_LOG_SIZE 32");

Also, when I shutdown-defrag, the DB size goes from 12 GB to 800 MB, but
after running the Recover tool to extract the SQL and replaying it into a
fresh DB, the size is 200 MB.
After that, I shutdown-compact again and the DB size is 80 MB.
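For reference, that redo step is essentially this (illustrative paths and credentials; java org.h2.tools.RunScript does the same from the command line):

    public class RedoIntoFreshDb {
        public static void main(String[] args) throws java.sql.SQLException {
            // Replay the script produced by the Recover tool into a fresh database.
            org.h2.tools.RunScript.main(
                    "-url", "jdbc:h2:file:/path/to/fresh/cockpit",
                    "-user", "sa",
                    "-script", "cockpit.h2.sql");
        }
    }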

I don't really understand how I can make progress on this subject.
It's very important for me, because the DB grows on the customer side
(200 MB per day for nothing) and I don't know how to fix it.

On 18 Feb, 08:27, Thomas Mueller wrote:
> Hi,
>
> > Can I add some debug settings on customer side?
>
> There are two options:
>
> - Use the trace options to analyze the JDBC method calls (append
> ;TRACE_LEVEL_FILE=3 to the database URL)
>
> - Use the Recover tool to find out if there are uncommitted
> transactions in the (large) database file (or a copy of the file; in
> this case you could even copy the database file while it's in use,
> because you don't need a 'save' copy of the database just to analyze
> the transaction log).
>
> > Perhaps we need to implement some kind of default timeout on how long a
> > transaction can stay open?
>
> Thats a very good idea. I wonder if other databases support something
> like this?
>
> At the very least there should be a warning in the .trace.db file. I
> will have a look at that.
>
> Regards,
> Thomas




Re: Automatically compact databases from time to time (as a background process)

2012-02-20 Thread IntensiveH2
It would be useful to expose this kind of information as a JMX operation
(like listing sessions).


On 20 Feb, 10:59, Noel Grandin wrote:
> Hi
>
> In order to monitor for bad transactions, how about we add a
> TRANSACTION_START column to the SESSIONS meta-table?
>
> Regards, Noel Grandin
>
> On 2012-02-18 09:27, Thomas Mueller wrote:
>
>
>
>
>
>
>
> > Hi,
>
> >> Can I add some debug settings on customer side?
> > There are two options:
>
> > - Use the trace options to analyze the JDBC method calls (append
> > ;TRACE_LEVEL_FILE=3 to the database URL)
>
> > - Use the Recover tool to find out if there are uncommitted
> > transactions in the (large) database file (or a copy of the file; in
> > this case you could even copy the database file while it's in use,
> > because you don't need a 'save' copy of the database just to analyze
> > the transaction log).
>
> >> Perhaps we need to implement some kind of default timeout on how long a
> >> transaction can stay open?
> > Thats a very good idea. I wonder if other databases support something
> > like this?
>
> > At the very least there should be a warning in the .trace.db file. I
> > will have a look at that.
>
> > Regards,
> > Thomas




Re: Automatically compact databases from time to time (as a background process)

2012-02-20 Thread Noel Grandin

Hi

In order to monitor for bad transactions, how about we add a 
TRANSACTION_START column to the SESSIONS meta-table?


Regards, Noel Grandin

On 2012-02-18 09:27, Thomas Mueller wrote:

Hi,


Can I add some debug settings on customer side?

There are two options:

- Use the trace options to analyze the JDBC method calls (append
;TRACE_LEVEL_FILE=3 to the database URL)

- Use the Recover tool to find out if there are uncommitted
transactions in the (large) database file (or a copy of the file; in
this case you could even copy the database file while it's in use,
because you don't need a 'save' copy of the database just to analyze
the transaction log).


Perhaps we need to implement some kind of default timeout on how long a
transaction can stay open?

Thats a very good idea. I wonder if other databases support something
like this?

At the very least there should be a warning in the .trace.db file. I
will have a look at that.

Regards,
Thomas






Re: Automatically compact databases from time to time (as a background process)

2012-02-20 Thread IntensiveH2
I used the Recover tool. What should I look for in the SQL file
generated by the recovery?

On 18 Feb, 08:27, Thomas Mueller wrote:
> Hi,
>
> > Can I add some debug settings on customer side?
>
> There are two options:
>
> - Use the trace options to analyze the JDBC method calls (append
> ;TRACE_LEVEL_FILE=3 to the database URL)
>
> - Use the Recover tool to find out if there are uncommitted
> transactions in the (large) database file (or a copy of the file; in
> this case you could even copy the database file while it's in use,
> because you don't need a 'save' copy of the database just to analyze
> the transaction log).
>
> > Perhaps we need to implement some kind of default timeout on how long a
> > transaction can stay open?
>
> Thats a very good idea. I wonder if other databases support something
> like this?
>
> At the very least there should be a warning in the .trace.db file. I
> will have a look at that.
>
> Regards,
> Thomas




Re: Automatically compact databases from time to time (as a background process)

2012-02-17 Thread Thomas Mueller
Hi,

> Can I add some debug settings on customer side?

There are two options:

- Use the trace options to analyze the JDBC method calls (append
;TRACE_LEVEL_FILE=3 to the database URL)

- Use the Recover tool to find out if there are uncommitted
transactions in the (large) database file (or a copy of the file; in
this case you could even copy the database file while it's in use,
because you don't need a 'safe' copy of the database just to analyze
the transaction log).
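For the first option, the change is just in the URL; a sketch with an illustrative path and credentials:

    static java.sql.Connection openWithTrace() throws java.sql.SQLException {
        // TRACE_LEVEL_FILE=3 writes every JDBC call to <database>.trace.db
        // next to the database file (path and credentials are illustrative).
        return java.sql.DriverManager.getConnection(
                "jdbc:h2:file:/path/to/cockpit;TRACE_LEVEL_FILE=3", "sa", "");
    }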

> Perhaps we need to implement some kind of default timeout on how long a
> transaction can stay open?

That's a very good idea. I wonder if other databases support something
like this?

At the very least there should be a warning in the .trace.db file. I
will have a look at that.

Regards,
Thomas




Re: Automatically compact databases from time to time (as a background process)

2012-02-16 Thread IntensiveH2
Hi,

Is it possible, based on this huge DB, to see whether some transactions are
not committed (I have the 12 GB DB)?
Can I add some debug settings on the customer side?

Thanks.

On 16 Feb, 10:44, Thomas Mueller wrote:
> Hi,
>
> I would first analyze why it grows so much. Possibly an uncommitted 
> transaction?
>
> Regards,
> Thomas
>
>
>
>
>
>
>
> On Thu, Feb 16, 2012 at 10:06 AM, Wolfgang Pedot  wrote:
> > I have a related question:
>
> > Triggered by this request I tried what happens if I execute "shutdown
> > compact" but do not close my connection-pool as well and as one
> > could have expected the database was reopened at the first query after
> > the shutdown without a problem. I also tried starting a new
> > transaction while
> > the compact was busy and that worked as well.
>
> > I guess this is not quite so different from what you are trying to
> > implement (DB-Lock), are there any dangers to this aproach?
> > The nice thing is that I dont have to restart my application, I just
> > let it reopen the database. It does block all access to the database
> > during operation but thats fine with me.
>
> > regards
> > Wolfgang Pedot
>




Re: Automatically compact databases from time to time (as a background process)

2012-02-16 Thread Noel Grandin
Perhaps we need to implement some kind of default timeout on how long a 
transaction can stay open?


On 2012-02-16 11:44, Thomas Mueller wrote:

Hi,

I would first analyze why it grows so much. Possibly an uncommitted transaction?

Regards,
Thomas


On Thu, Feb 16, 2012 at 10:06 AM, Wolfgang Pedot  wrote:

I have a related question:

Triggered by this request I tried what happens if I execute "shutdown
compact" but do not close my connection-pool as well and as one
could have expected the database was reopened at the first query after
the shutdown without a problem. I also tried starting a new
transaction while
the compact was busy and that worked as well.

I guess this is not quite so different from what you are trying to
implement (DB-Lock), are there any dangers to this aproach?
The nice thing is that I dont have to restart my application, I just
let it reopen the database. It does block all access to the database
during operation but thats fine with me.

regards
Wolfgang Pedot







Re: Automatically compact databases from time to time (as a background process)

2012-02-16 Thread Thomas Mueller
Hi,

I would first analyze why it grows so much. Possibly an uncommitted transaction?

Regards,
Thomas


On Thu, Feb 16, 2012 at 10:06 AM, Wolfgang Pedot  wrote:
> I have a related question:
>
> Triggered by this request I tried what happens if I execute "shutdown
> compact" but do not close my connection-pool as well and as one
> could have expected the database was reopened at the first query after
> the shutdown without a problem. I also tried starting a new
> transaction while
> the compact was busy and that worked as well.
>
> I guess this is not quite so different from what you are trying to
> implement (DB-Lock), are there any dangers to this aproach?
> The nice thing is that I dont have to restart my application, I just
> let it reopen the database. It does block all access to the database
> during operation but thats fine with me.
>
> regards
> Wolfgang Pedot
>




Re: Automatically compact databases from time to time (as a background process)

2012-02-16 Thread Wolfgang Pedot
I have a related question:

Triggered by this request, I tried what happens if I execute "shutdown
compact" but do not close my connection pool as well. As one could have
expected, the database was reopened at the first query after the shutdown
without a problem. I also tried starting a new transaction while the
compact was busy, and that worked as well.

I guess this is not quite so different from what you are trying to
implement (DB lock); are there any dangers to this approach?
The nice thing is that I don't have to restart my application; I just
let it reopen the database. It does block all access to the database
during the operation, but that's fine with me.
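In code, the experiment was essentially this (a sketch assuming a plain JDBC DataSource pool; names are illustrative):

    static void compactInPlace(javax.sql.DataSource pool) throws java.sql.SQLException {
        try (java.sql.Connection conn = pool.getConnection();
             java.sql.Statement stat = conn.createStatement()) {
            stat.execute("SHUTDOWN COMPACT"); // closes the database (and this connection)
        }
        // The pool itself stays open; as observed above, the next query simply
        // reopens the database.
    }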

regards
Wolfgang Pedot




Re: Automatically compact databases from time to time (as a background process)

2012-02-15 Thread IntensiveH2
Hi

Thanks for your patch, but the customer is in production and I must have
a robust solution.
Could you send me a jar file instead of a patch file? (It's easier and
especially faster for me to validate it.)
When will you have a reliable solution (related to your previous comment
"that I'm corrupting the database somehow")?

Thanks
Thierry.
On 15 Feb, 13:13, Noel Grandin wrote:
> Hi
>
> I had a quick bash at implementing this by taking an exclusive lock on
> the DB and re-using the existing compacting code.
> So I added a
>   COMPACT
> command.
>
> But the unit test I added indicates that I'm corrupting the database
> somehow.
>
> Patch attached - Thomas, perhaps you have an idea?
>
> Regards, Noel.
>
> On 2012-02-15 11:51, IntensiveH2 wrote:
>
>
>
>
>
>
>
> > Thanks for the quick answer but currently it's not possible to reboot
> > my application on a regular basis because my application must ensure
> > scheduling, failover, HA 
> > and it's not possible to stop it.
> > I really need to have a solution at runtime without restarting my
> > application.
> > Thanks.
>
> > >> On 15 Feb, 10:40, Noel Grandin wrote:
> >> Not sure what Thomas' plans are,
>
> >> but what I can suggest is that you set this setting to a nice large number
> >> -Dh2.maxCompactTime=6
> >> And reboot your application on a regular basis.
>
> > >> See here: http://www.h2database.com/html/grammar.html#shutdown
> >> which sets the maximum time to compact the database on shutdown (in
> >> milliseconds).
>
> >> On 2012-02-15 11:05, IntensiveH2 wrote:
>
> >>> Hi,
> >>> I use H2 in a commercial product and I have an issue regarding the
> >>> size of the DB (with a customer).
> >>> Currently the DB size is 12 GB and after a shutdown compact/restart
> >>> the new size is 800 MB
> >>> When you start connection on DB (12 GB) it took around 70 minutes to
> >>> start (on solaris) before to shutdow it with the compact option.
> >>> This is not acceptable (from customer comment).
> >>> NOTE: current version used of h2 are 1.3.159, 1.3.161 and 1.3.163 (the
> >>> result is the same for all version)
> >>> Do you have planned (urgently) to defragment/compact at runtime
> >>> (similar to SHUTDOWN COMPACT) in a background thread?
> >>> Do you have another solution to avoid this issue (long startup and
> >>> compact at runtime)?
> >>> Best regards.
> >>> Thierry.
>
>
>
> [attachment: online-compact.diff, 11 KB]




Re: Automatically compact databases from time to time (as a background process)

2012-02-15 Thread Noel Grandin

Hi

I had a quick bash at implementing this by taking an exclusive lock on 
the DB and re-using the existing compacting code.

So I added a
 COMPACT
command.

But the unit test I added indicates that I'm corrupting the database 
somehow.


Patch attached - Thomas, perhaps you have an idea?

Regards, Noel.

On 2012-02-15 11:51, IntensiveH2 wrote:

Thanks for the quick answer but currently it's not possible to reboot
my application on a regular basis because my application must ensure
scheduling, failover, HA 
and it's not possible to stop it.
I really need to have a solution at runtime without restarting my
application.
Thanks.

On 15 Feb, 10:40, Noel Grandin wrote:

Not sure what Thomas' plans are,

but what I can suggest is that you set this setting to a nice large number
-Dh2.maxCompactTime=6
And reboot your application on a regular basis.

See here: http://www.h2database.com/html/grammar.html#shutdown
which sets the maximum time to compact the database on shutdown (in
milliseconds).

On 2012-02-15 11:05, IntensiveH2 wrote:








Hi,
I use H2 in a commercial product and I have an issue regarding the
size of the DB (with a customer).
Currently the DB size is 12 GB and after a shutdown compact/restart
the new size is 800 MB
When you start connection on DB (12 GB) it took around 70 minutes to
start (on solaris) before to shutdow it with the compact option.
This is not acceptable (from customer comment).
NOTE: current version used of h2 are 1.3.159, 1.3.161 and 1.3.163 (the
result is the same for all version)
Do you have planned (urgently) to defragment/compact at runtime
(similar to SHUTDOWN COMPACT) in a background thread?
Do you have another solution to avoid this issue (long startup and
compact at runtime)?
Best regards.
Thierry.



Index: src/docsrc/help/help.csv
===
--- src/docsrc/help/help.csv(revision 4119)
+++ src/docsrc/help/help.csv(working copy)
@@ -1494,6 +1494,16 @@
 SET WRITE_DELAY 2000
 "
 
+"Commands (Other)","COMPACT","
+COMPACT
+","
+Performs an online compaction and defrag of the database. Note that this will 
obtain an exclusive lock on the database for the duration of the operation.
+
+Admin rights are required to execute this command, as it affects all 
connections.
+","
+COMPACT
+"
+
 "Commands (Other)","SHUTDOWN","
 SHUTDOWN [ IMMEDIATELY | COMPACT | DEFRAG ]
 ","
Index: src/main/org/h2/command/Parser.java
===
--- src/main/org/h2/command/Parser.java (revision 4118)
+++ src/main/org/h2/command/Parser.java (working copy)
@@ -328,6 +328,8 @@
 c = parseCheckpoint();
 } else if (readIf("COMMENT")) {
 c = parseComment();
+} else if (readIf("COMPACT")) {
+c = parseCompact();
 }
 break;
 case 'd':
@@ -541,6 +543,11 @@
 return new TransactionCommand(session, type);
 }
 
+private Prepared parseCompact() {
+Set command = new Set(session, SetTypes.ONLINE_COMPACT);
+return command;
+}
+
 private TransactionCommand parseRollback() {
 TransactionCommand command;
 if (readIf("TRANSACTION")) {
Index: src/main/org/h2/command/dml/Set.java
===
--- src/main/org/h2/command/dml/Set.java(revision 4118)
+++ src/main/org/h2/command/dml/Set.java(working copy)
@@ -346,6 +346,11 @@
 addOrUpdateSetting(name, null, getIntValue());
 break;
 }
+case SetTypes.ONLINE_COMPACT: {
+session.getUser().checkAdmin();
+database.onlineCompact(session);
+break;
+}
 default:
 DbException.throwInternalError("type="+type);
 }
Index: src/main/org/h2/command/dml/SetTypes.java
===
--- src/main/org/h2/command/dml/SetTypes.java   (revision 4118)
+++ src/main/org/h2/command/dml/SetTypes.java   (working copy)
@@ -199,6 +199,11 @@
  */
 public static final int REDO_LOG_BINARY = 37;
 
+/**
+ * The type of a SET ONLINE_COMPACT statement.
+ */
+public static final int ONLINE_COMPACT = 38;
+
 private static final ArrayList TYPES = New.arrayList();
 
 private SetTypes() {
@@ -245,6 +250,7 @@
 list.add(VARIABLE, "@");
 list.add(QUERY_TIMEOUT, "QUERY_TIMEOUT");
 list.add(REDO_LOG_BINARY, "REDO_LOG_BINARY");
+list.add(ONLINE_COMPACT, "ONLINE_COMPACT");
   

Re: Automatically compact databases from time to time (as a background process)

2012-02-15 Thread Noel Grandin
If you want to look at it yourself, the relevant code is in 
org.h2.store.PageStore#compact(int)


On 2012-02-15 11:51, IntensiveH2 wrote:

Thanks for the quick answer but currently it's not possible to reboot
my application on a regular basis because my application must ensure
scheduling, failover, HA 
and it's not possible to stop it.
I really need to have a solution at runtime without restarting my
application.
Thanks.

On 15 Feb, 10:40, Noel Grandin wrote:

Not sure what Thomas' plans are,

but what I can suggest is that you set this setting to a nice large number
-Dh2.maxCompactTime=6
And reboot your application on a regular basis.

See here: http://www.h2database.com/html/grammar.html#shutdown
which sets the maximum time to compact the database on shutdown (in
milliseconds).

On 2012-02-15 11:05, IntensiveH2 wrote:








Hi,
I use H2 in a commercial product and I have an issue regarding the
size of the DB (with a customer).
Currently the DB size is 12 GB and after a shutdown compact/restart
the new size is 800 MB
When you start connection on DB (12 GB) it took around 70 minutes to
start (on solaris) before to shutdow it with the compact option.
This is not acceptable (from customer comment).
NOTE: current version used of h2 are 1.3.159, 1.3.161 and 1.3.163 (the
result is the same for all version)
Do you have planned (urgently) to defragment/compact at runtime
(similar to SHUTDOWN COMPACT) in a background thread?
Do you have another solution to avoid this issue (long startup and
compact at runtime)?
Best regards.
Thierry.





Re: Automatically compact databases from time to time (as a background process)

2012-02-15 Thread IntensiveH2
Thanks for the quick answer, but currently it's not possible to reboot
my application on a regular basis, because my application must ensure
scheduling, failover, HA, ...
and it's not possible to stop it.
I really need to have a solution at runtime without restarting my
application.
Thanks.

On 15 Feb, 10:40, Noel Grandin wrote:
> Not sure what Thomas' plans are,
>
> but what I can suggest is that you set this setting to a nice large number
> -Dh2.maxCompactTime=6
> And reboot your application on a regular basis.
>
> See here: http://www.h2database.com/html/grammar.html#shutdown
> which sets the maximum time to compact the database on shutdown (in
> milliseconds).
>
> On 2012-02-15 11:05, IntensiveH2 wrote:
>
>
>
>
>
>
>
> > Hi,
>
> > I use H2 in a commercial product and I have an issue regarding the
> > size of the DB (with a customer).
>
> > Currently the DB size is 12 GB and after a shutdown compact/restart
> > the new size is 800 MB
> > When you start connection on DB (12 GB) it took around 70 minutes to
> > start (on solaris) before to shutdow it with the compact option.
> > This is not acceptable (from customer comment).
>
> > NOTE: current version used of h2 are 1.3.159, 1.3.161 and 1.3.163 (the
> > result is the same for all version)
>
> > Do you have planned (urgently) to defragment/compact at runtime
> > (similar to SHUTDOWN COMPACT) in a background thread?
> > Do you have another solution to avoid this issue (long startup and
> > compact at runtime)?
>
> > Best regards.
> > Thierry.




Re: Automatically compact databases from time to time (as a background process)

2012-02-15 Thread Noel Grandin

Not sure what Thomas' plans are,

but what I can suggest is that you set this setting to a nice large number:
-Dh2.maxCompactTime=6
and reboot your application on a regular basis.

See here
http://www.h2database.com/html/grammar.html#shutdown
which sets the maximum time to compact the database on shutdown (in 
milliseconds).
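Concretely, that amounts to something like this (a sketch; the property value is illustrative, and SHUTDOWN COMPACT is issued like any other statement before the restart):

    static void shutdownCompact(java.sql.Connection conn) throws java.sql.SQLException {
        // Assumes the JVM was started with e.g. -Dh2.maxCompactTime=600000
        // (illustrative value) so the shutdown has enough time to compact fully.
        try (java.sql.Statement stat = conn.createStatement()) {
            stat.execute("SHUTDOWN COMPACT");
        }
    }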


On 2012-02-15 11:05, IntensiveH2 wrote:

Hi,

I use H2 in a commercial product, and I have an issue regarding the
size of the DB at a customer site.

Currently the DB size is 12 GB, and after a shutdown compact/restart
the new size is 800 MB.
When you open a connection on the 12 GB DB, it takes around 70 minutes
to start (on Solaris) before you can shut it down with the compact option.
This is not acceptable (according to the customer).

NOTE: the H2 versions used are 1.3.159, 1.3.161 and 1.3.163 (the
result is the same for all versions).

Do you plan (urgently) to support defragmenting/compacting at runtime
(similar to SHUTDOWN COMPACT) in a background thread?
Do you have another solution to avoid this issue (the long startup and
compacting at runtime)?

Best regards.
Thierry.


