[h2] Re: unable to write: the database is read only after system crash

2013-10-02 Thread Dieter Cailliau
Yes, I'm using two-phase commit because I use an xa-datasource (an XML 
element in JBoss' standalone-full-ha.xml), but I never really found out how 
to configure the thing (JBoss AS 7.2.0.Final with H2) to perform recovery. 
I've pasted the config XML below.

Note: having recovery is also not a must for me, actually, so would there 
be a way to proceed from this situation, somehow skipping recovery?

Note: I'm using XA because I need to send/rollback JMS messages 
transactionally along with the DB transaction.

<subsystem xmlns="urn:jboss:domain:datasources:1.1">
    <datasources>
        <xa-datasource jndi-name="java:jboss/datasources/H2-bms-DS"
                       pool-name="H2-bms-DS" enabled="true" use-java-context="true">
            <xa-datasource-property name="URL">
                jdbc:h2:${jboss.server.data.dir:/tmp}/bms;DB_CLOSE_DELAY=-1;LOCK_TIMEOUT=${com.x.lock.timeout.ms:5000}
            </xa-datasource-property>
            <driver>h2</driver>
            <security>
                <user-name>bms</user-name>
            </security>
            <recovery>
                <recover-credential>
                    <user-name>bms</user-name>
                </recover-credential>
            </recovery>
        </xa-datasource>
        <drivers>
            <driver name="h2" module="com.h2database.h2">
                <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
            </driver>
        </drivers>
    </datasources>
</subsystem>

-- 
You received this message because you are subscribed to the Google Groups H2 
Database group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to h2-database+unsubscr...@googlegroups.com.
To post to this group, send email to h2-database@googlegroups.com.
Visit this group at http://groups.google.com/group/h2-database.
For more options, visit https://groups.google.com/groups/opt_out.


[h2] DB2 Mode Support WITH UR syntax (uncommitted read)

2013-10-02 Thread Bert Roex
The DB2 compatibility mode in H2 is quite complete, although we use the 
WITH UR syntax in some queries, which does not seem to be supported.

Could you please extend the DB2 mode to allow the use of the DB2 WITH UR 
syntax?
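
For reference, a minimal sketch of the DB2 clause in question (the table 
and column names are made up for illustration):

```sql
-- DB2 isolation clause: evaluate this statement with Uncommitted Read (UR),
-- i.e. read rows without waiting for row locks held by other transactions.
-- (Hypothetical table; shown only to illustrate the syntax being requested.)
SELECT ID, STATUS
FROM ORDERS
WHERE STATUS = 'OPEN'
WITH UR;
```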



[h2] AggregateFunction and DatabaseEventListener#setProgress

2013-10-02 Thread Nicolas Fortin (OrbisGIS)
Hi,

AggregateFunction and table functions can be time-consuming, and H2 
provides a way (DatabaseEventListener#setProgress) to report the current 
task's progress.

I'm wondering: is there a way to acquire the Connection's 
DatabaseEventListener?

Regards,

-Nicolas Fortin
Atelier SIG
IRSTV FR CNRS 2488



Re: [h2] corrupt database 1.3.168

2013-10-02 Thread Dieter Cailliau
I'm sure the corruption was not there using 1.3.161, because it's just an 
empty schema I created once back then (I can send it if it helps).

Whatever happens in that file later on happens via a more recent h2.jar 
(1.3.168 in this report).

I completely understand there's nothing you can do without a .db file or a 
reproducible test.


On Tuesday, October 1, 2013 at 19:36:04 UTC+2, Thomas Mueller wrote:

 Hi,

 Well, the problem is, if the database was created with an old version, 
 then the corruption might have been there already when you switched to a 
 new version. Corruption problems are very hard to analyze, because there is 
 no log file that tells us which change in the database file was created by 
 which version of H2...

 Of course, if the problem is reproducible, yes, that's another case. 
 Reproducible problems are usually quite easy to analyze and fix.

 Regards,
 Thomas



 On Tue, Oct 1, 2013 at 4:12 PM, Dieter Cailliau dieter@gmail.com wrote:

 I assume that the fact that I created the schema with H2 1.3.161 is not 
 relevant for the problem experienced in (and still present in) 1.3.168? 
 It is true that I created a schema once using an older H2, and kept that 
 db file as the file my app starts from, even though I've been upgrading 
 h2.jar along in my app. 
 If this is a problem (using this db file against an up-to-date h2.jar), 
 please tell me. 

 On Wednesday, August 21, 2013 at 07:43:45 UTC+2, Thomas Mueller wrote:

 Hi,

 The empty database was created with version 1.3.161, as the 
 CREATE_BUILD is 161. Since then, a few bugs have been fixed that could 
 cause corruption; see the change log for details. I suggest to try again 
 with a more recent version of H2. If it is still a problem, it would be 
 great if you could provide a simple, reproducible test case.

 Regards,
 Thomas


 On Tuesday, August 13, 2013, Dieter Cailliau wrote:

 jdbc:h2:/tmp/x;DB_CLOSE_DELAY=-1;LOCK_TIMEOUT=5000
 It's a small db (3 MB, on Windows 7) with small tables (about 100 rows).
 I'm using multiple threads.
 I attached the empty schema. If it helps, I can send you a (non-corrupt) 
 populated one. I'm still trying to reproduce the corruption.
 While reading the list, I learned that H2 performs corruption correction 
 at open time: I think that does not help, because the opening succeeds, 
 but the query fails later.

 Thanks!

 On Tuesday, August 13, 2013 at 10:18:19 UTC+2, Noel Grandin wrote:


 On 2013-08-13 10:05, Dieter Cailliau wrote: 
  
  Here's another example of ArrayIndexOutOfBoundsException at 
  org.h2.index.PageDataLeaf.getRowAt: this happened on 1.3.168. The most 
  recent fix I found in the changelog about this kind of error was in 
  1.3.166, so probably there's still something wrong since then. Or have 
  there been fixes related to this since 1.3.168? 

 What does your database URL look like? 







[h2] H2 Clustering with more than 2 nodes

2013-10-02 Thread dcheney
I'd be grateful for your insights into a design idea so that I don't head 
down a dead-end road.  Alternatives appreciated!

Our requirement is for a relatively small DB with a low rate of 
transactions. I'd like to have a db clone on each of 3 nodes. Clients 
would be directed to one node via a virtual IP; the cluster would apply 
all operations at that IP and to the two copies. If either *copy* (not 
the 'master' at the virtual IP) fails, the cluster continues to operate 
normally; after the node is recovered, CreateCluster is re-run as usual, 
albeit with 3 nodes. If the master (the node at the virtual IP) fails, our 
code selects a new master from the 2 copies; that node takes over the 
virtual IP, and the cluster begins operating as though a copy had failed.

As far as I can see, this imposes only 2 requirements on H2 that differ 
from the usual setup: 1) support for a client JDBC URL naming a single TCP 
server while in cluster mode, and 2) support for 3 nodes in a CreateCluster 
-serverList.

Re (1): an attempt to connect to a cluster using a single-node JDBC URL 
(e.g. jdbc:h2:tcp://172.16.35.53:9101/msp) receives the response: "Cannot 
connect to dbname. Check your URL. Clustering error - database currently 
runs in cluster mode: server list server:port,server:port". Is the server 
topology exposed so that client-side code can redirect a failed operation 
to the alternate server? The design I am suggesting hides the topology - 
and requires that a single-node URL be accepted by the server.
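
For contrast, here is a sketch of the two URL forms, per my reading of the 
H2 clustering docs (the addresses are the ones from this setup):

```
# accepted today in cluster mode: the client names every node
jdbc:h2:tcp://172.16.35.55:9101,172.16.35.53:9101/msp

# desired: the client names only the virtual IP, and the topology stays hidden
jdbc:h2:tcp://172.16.35.53:9101/msp
```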

I've begun exploring (2) a bit. I have a working 2-node cluster 
(nodes .55 and .53). I run the server on a third node (.51):
java -cp /usr/lib/java/*:. org.h2.tools.Server -tcp -tcpPort 9101 
-tcpAllowOthers -baseDir base  
Then on my 'first' server (.55) I successfully run CreateCluster, which 
creates the db files on node .51:
java -cp /usr/lib/java/*:. org.h2.tools.CreateCluster \
-urlSource jdbc:h2:tcp://172.16.35.55:9101/msp \
-urlTarget jdbc:h2:tcp://172.16.35.51:9101/msp \
-serverList 172.16.35.55:9101,172.16.35.51:9101,172.16.35.53:9101
Expecting the cluster to use only 2 nodes, I put the new .51 node second in 
the serverList. The H2 log on .51 shows an update to the CLUSTER server 
list that includes all three servers:
10-02 09:14:27 index: SYS_DATA remove ( /* key:77 */ 58, 0, 6, 'SET CLUSTER 
')
10-02 09:14:27 pageStore: log - s: 1 table: 0 row: ( /* key:77 */ 58, 0, 6, 
'SET CLUSTER ')
10-02 09:14:27 index: SYS_DATA add ( /* key:78 */ 58, 0, 6, 'SET CLUSTER 
''172.16.35.55:9101,172.16.35.51:9101,172.16.35.53:9101''')
10-02 09:14:27 pageStore: log + s: 1 table: 0 row: ( /* key:78 */ 58, 0, 6, 
'SET CLUSTER ''172.16.35.55:9101,172.16.35.51:9101,172.16.35.53:9101''')
10-02 09:14:27 pageStore: log commit s: 1
10-02 09:14:27 lock: 1 exclusive write lock unlock SYS
10-02 09:14:27 jdbc[20]: 
/*SQL t:1*/SET CLUSTER 
'172.16.35.55:9101,172.16.35.51:9101,172.16.35.53:9101';
Code in SessionRemote loops through the serverlist applying any SQL 
operation, so this looks very promising.

However, at the start of handling a SQL command, H2 removes that value and 
replaces it with a 2 node list, and the new node is not in that list:
10-02 09:17:06 jdbc[21]: 
/*SQL */SET CLUSTER TRUE;
10-02 09:17:06 lock: 1 exclusive write lock requesting for SYS
10-02 09:17:06 lock: 1 exclusive write lock added for SYS
10-02 09:17:06 index: SYS_DATA remove ( /* key:78 */ 58, 0, 6, 'SET CLUSTER 
''172.16.35.55:9101,172.16.35.51:9101,172.16.35.53:9101''')
10-02 09:17:06 pageStore: log - s: 1 table: 0 row: ( /* key:78 */ 58, 0, 6, 
'SET CLUSTER ''172.16.35.55:9101,172.16.35.51:9101,172.16.35.53:9101''')
10-02 09:17:06 index: SYS_DATA add ( /* key:79 */ 58, 0, 6, 'SET CLUSTER 
''172.16.35.55:9101,172.16.35.53:9101''')
10-02 09:17:06 pageStore: log + s: 1 table: 0 row: ( /* key:79 */ 58, 0, 6, 
'SET CLUSTER ''172.16.35.55:9101,172.16.35.53:9101''') 
I'll be looking into this.

I also see this interesting comment at line 397: 
  // TODO cluster: support more than 2 connections
What would need to be done?

Thanks!



[h2] Small MVMap reopen bug?

2013-10-02 Thread Brian Bray
This test throws a "Map is closed" exception (in 1.3.173), and I don't 
believe it should. It looks like MVMap.openMap(...) is not checking for 
!old.isClosed().

@Test
public void testMVMapClearReOpen() {
    // open the MVStore and an MVMap backed by a temp file
    File f = new File(System.getProperty("java.io.tmpdir"), "cache.data");
    MVStore store = new MVStore.Builder().fileName(f.getAbsolutePath()).open();
    MVMap<String, Object> map = store.openMap("foo");

    // should initially be empty
    assertTrue(map.isEmpty());

    // add one item and close
    map.put("foo", "bar");
    assertEquals(1, map.getSize());
    map.close();

    // re-open: should still have 1 item in it; clear all items and close
    map = store.openMap("foo");
    assertEquals(1, map.getSize());
    map.clear();
    assertEquals(0, map.getSize());
    map.close();

    // re-open: should have 0 items
    map = store.openMap("foo");
    assertTrue(map.isEmpty());
    map.close();
}



[h2] Re: H2 Clustering with more than 2 nodes

2013-10-02 Thread dcheney
Sweet - I am able to cluster 3 nodes, where "cluster" is defined as: writes 
to the first server in a serverList are replicated on the other (two) 
nodes. What I had reported in (2) was due to a misconfiguration - I still 
had only two nodes in a local configuration file read (and repeatedly 
applied) by the MyBatis SessionFactory.

Re (1): I found that on occasion I *am* able to connect to a cluster using 
a JDBC URL naming a single server. Looking into this, I found that the H2 
value of CLUSTER is set prior to every write - and reset following a 
successful connect. I can connect to a cluster naming a single server (the 
first one) prior to a write to the DB. Following a write, I can connect 
using a URL naming any 2 of the 3 servers. Connecting resets the state so 
that I can again connect using a single server until the next write:

10-02 16:15:02 jdbc[14]: /*SQL */SET AUTO_RECONNECT TRUE;
10-02 16:15:02 jdbc[14]: /*SQL */SET CLUSTER TRUE;
10-02 16:15:02 jdbc[14]: /*SQL */SET TRACE_LEVEL_SYSTEM_OUT 3;
10-02 16:15:02 jdbc[14]: /*SQL */SET DB_CLOSE_DELAY -1;
10-02 16:15:04 index: SYS_DATA remove ( /* key:119 */ 58, 0, 6, 'SET 
CLUSTER ''172.16.35.55:9101,172.16.35.53:9101,172.16.35.51:9101''')
10-02 16:15:04 pageStore: log - s: 1 table: 0 row: ( /* key:119 */ 58, 0, 
6, 'SET CLUSTER ''172.16.35.55:9101,172.16.35.53:9101,172.16.35.51:9101''')
10-02 16:15:04 index: SYS_DATA add ( /* key:120 */ 58, 0, 6, 'SET CLUSTER 
')
10-02 16:15:04 pageStore: log + s: 1 table: 0 row: ( /* key:120 */ 58, 0, 
6, 'SET CLUSTER ')

Though this is a cluster, SELECT VALUE FROM INFORMATION_SCHEMA.SETTINGS 
WHERE NAME='CLUSTER'; returns '' from a successful connect until the first 
write. Following a write, it returns the expected list of servers. I 
suspect this is not as intended - though it may prove useful in my efforts :) 
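
The check above, as a ready-to-paste statement (the literal results in the 
comment are from my runs; treat them as illustrative):

```sql
-- Returns the cluster's server list, e.g.
--   '172.16.35.55:9101,172.16.35.53:9101,172.16.35.51:9101'
-- or the empty string '' between a successful connect and the first write.
SELECT VALUE FROM INFORMATION_SCHEMA.SETTINGS WHERE NAME = 'CLUSTER';
```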




Re: [h2] Proposal: Trigger2 Interface

2013-10-02 Thread Gili
Ping.

Can we at least open an issue to track this? I don't want this to fall 
through the cracks.

Gili

On Tuesday, July 30, 2013 8:11:33 PM UTC-4, Gili wrote:

  Hi Thomas,

 Good thing you asked for the updated test, because I now see a huge 
 performance difference:

 Multiple PreparedStatement: 4.623 seconds
 One PreparedStatement: 1.944 seconds

 I've attached the updated testcase for your review.

 On a side-note, it took me a while to track down a deadlock for the 
 second case. H2 was throwing:

 org.h2.jdbc.JdbcSQLException: Timeout trying to lock table DEPARTMENTS

 I tracked it down to the following:

 1. The testcase invokes "delete from departments". 
 2. TriggerCreateOnce.fire() invokes PreparedStatement.execute(), where 
    the PreparedStatement is for "DELETE FROM permissions WHERE id=?". 
 3. The PreparedStatement invokes ConstraintReferential.existsRow() to 
    ensure that no rows reference permissions as a foreign key, but in so 
    doing blocks trying to establish a table lock. 

 I believe this is caused by the fact that DEPARTMENTS has an ON DELETE 
 CASCADE constraint on the permissions table. The CASCADE tries to lock 
 DEPARTMENTS, but the PreparedStatement (running in the system connection) 
 already has it locked. Using the system connection from Trigger.init() 
 leads to very ambiguous and hard-to-debug failures :(
  
 Anyway, I modified the PreparedStatement to use "SELECT * FROM ..." 
 instead of "DELETE FROM ..." to work around this issue.
  
 So, in conclusion:
  
 - I believe I demonstrated there is a noticeable performance benefit 
   for the Trigger2 interface. 
 - I believe there is an ease-of-use benefit to steering users away 
   from the use of the system connection (due to the aforementioned 
   problems).
 
 Action items:
  
 - Can H2 throw a deadlock exception when a system connection and a user 
   connection running under the same thread attempt to lock the same 
   table (instead of a Timeout exception)? 
 - What are the next steps for Trigger2? 
 - If we introduce Trigger2.initTransaction()/closeTransaction(), is 
   there still a legitimate need for system connections, or can we remove 
   them from init()?
 
 Thanks,
 Gili

 On 30/07/2013 4:59 PM, Thomas Mueller wrote:
  
 Hi, 

   In your test case, you didn't actually *execute* the statement. So it 
 was "prepare a statement" versus "do nothing".

  I think a more realistic use case is: (a) prepare, bind the values, and 
 execute a simple statement, versus (b) just bind the values and execute it.

  Regards,
 Thomas

  
  

 On Mon, Jul 29, 2013 at 11:42 PM, cowwoc cow...@bbs.darktech.org wrote:

  On 29/07/2013 4:53 PM, Thomas Mueller wrote:
  
 Hi, 

  In your test case, you didn't actually *execute* the statement. So it 
 was "prepare a statement" versus "do nothing". Well, if the difference in 
 time is so small, then I guess it doesn't make much sense to support this 
 feature.
  

 That was by design. I thought we were trying to measure the 
 difference between preparing a statement once per transaction (as I was 
 proposing) versus preparing it once per trigger fire(). Was that not the 
 case? 


  
  So, instead of continuing to discuss this back and forth, let's just 
 keep the current trigger interface as it is, and whenever we do *have* to 
 change it, then let's keep this discussion in mind. Specifically, instead 
 of passing 6 parameters, pass an object that contains that data (Metadata 
 in your case). This was done in other places already: CreateTableData.
  

  Okay.

 Gili

   
  Regards,
 Thomas

  

 On Sat, Jul 20, 2013 at 8:37 PM, cowwoc cow...@bbs.darktech.org wrote:

  Hi Thomas, 


 On 20/07/2013 1:10 PM, Thomas Mueller wrote:
   
 Hi, 

   but that can lead to deadlocks ...
  e.g. see this previous discussion:

   As part of that discussion, I wrote: "I suggest to use 
 PreparedStatement, and always create a new PreparedStatement (for each 
 invocation of the trigger). Internally, the database caches a low-level 
 part of a PreparedStatement, ..."

   So, I wonder if caching prepared statements is really a problem?
  

  I wrote a quick benchmark against an in-memory database that 
 inserts a million rows, then drops them. I ran this against a trigger that 
 creates a new PreparedStatement in fire() and with a trigger that creates a 
 new PreparedStatement in init().

 Multiple PreparedStatement: 1.746 seconds
 One PreparedStatement: 1.427 seconds

  So we're talking about an overhead of roughly 0.32 µs per invocation 
 (0.319 s over a million rows). Please double-check the attached benchmark 
 to make sure I'm not doing anything wrong. 


  
   I am under the impression that it doesn't matter whether a Trigger's 
 resources get cleaned up due to the Trigger being dropped or the database 
 being closed. Do you have a use-case that counters that?
  
  The use case is: you might want to drop a table when the trigger is 
 removed, but do nothing if the database is