[h2] Re: generated keys are "cleared" inside of Session during INSERT (to the table with AUTOINCREMENT column) when there is a trigger (Java code, obviously) that does another INSERT (to different tab

2019-06-14 Thread Vitali

I was working with a codebase a few months old where the described issue occurred. As we maintain some large projects with H2 as an embedded spatial database, we build customized releases from source code from time to time. I took the latest codebase and tested it this morning. I should acknowledge that our unit tests no longer report any problems with generated keys (the JDBC API in the underlying case seems to return them properly). I haven't yet checked more deeply (by debugging), but it seems the issue is resolved in the latest codebase.




-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to h2-database+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/h2-database/c436f918-208a-424a-871c-07bdaec8da8d%40googlegroups.com.


[h2] generated keys are "cleared" inside of Session during INSERT (to the table with AUTOINCREMENT column) when there is a trigger (Java code, obviously) that does another INSERT (to different table w

2019-06-13 Thread Vitali

An auto-increment primary key column in H2 allocates a new value during execution of an INSERT. Normally these values should be returned from Statement.getGeneratedKeys(). This API works fine with H2 1.4.196. In the latest code some major refactoring was done using a new class in H2, GeneratedKeys. Now there is a possibility that the generated keys collected in the GeneratedKeys data structure, which is held by the Session during INSERT execution, are GONE if during the INSERT there is a trigger that performs another INSERT...



public void setCurrentCommand(Command command, Object generatedKeysRequest) {
    this.currentCommand = command;
    // Preserve generated keys in case of a new query due to possible nested
    // queries in update
    if (command != null && !command.isQuery()) {
        getGeneratedKeys().clear(generatedKeysRequest);
    }

This occurs inside Session.setCurrentCommand...


I suppose this comment

// Preserve generated keys in case of a new query due to possible nested
// queries in update

is a TODO?


I think it could be solved by moving the responsibility for maintaining the GeneratedKeys data structure directly into the Command/CommandContainer objects. Then nesting of queries (in the form of a stack) would not clear this state in the "shared" Session.

Each command (such as an INSERT) would then be able to return its generated keys properly.
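To illustrate the proposed direction (with hypothetical names, not H2's actual classes), here is a minimal sketch of per-command ownership of generated keys, where a nested INSERT fired by a trigger cannot clear the keys collected by the outer INSERT:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: each command owns its own generated-keys list, so a
// nested command (e.g. an INSERT fired by a trigger) cannot clear the
// keys collected by the outer command. All names here are hypothetical.
class SketchCommand {
    private final List<Long> generatedKeys = new ArrayList<>();

    void onRowInserted(long key) {
        generatedKeys.add(key);
    }

    List<Long> getGeneratedKeys() {
        return generatedKeys;
    }
}

public class GeneratedKeysSketch {
    public static void main(String[] args) {
        SketchCommand outerInsert = new SketchCommand();
        outerInsert.onRowInserted(1L);

        // A trigger fires a nested INSERT: it gets its own command object,
        // so the outer command's keys stay intact.
        SketchCommand nestedInsert = new SketchCommand();
        nestedInsert.onRowInserted(100L);

        outerInsert.onRowInserted(2L);
        System.out.println(outerInsert.getGeneratedKeys()); // prints [1, 2]
    }
}
```

With a single per-Session structure, the nested INSERT would have wiped the outer command's keys; per-command ownership avoids that by construction.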





[h2] Re: Unique index or primary key violation: "PRIMARY KEY ON """".PAGE_INDEX" error

2018-03-22 Thread Vitali
I am experiencing a similar problem, with the error:

Unique index or primary key violation: "PRIMARY KEY ON .PAGE_INDEX"; SQL statement: ALTER TABLE XXX ADD CONSTRAINT YYY FOREIGN KEY(LOOKUPID)

A legacy database created by H2 1.4.196 is opened successfully by the 1.4.197 driver the first time. Then, after a disconnect, a second attempt to connect with the 1.4.197 driver fails with this error forever.

I am planning to debug this issue; more details and maybe fixes will follow.

The MV_STORE=false option is used.



[h2] Re: Redhat warning about production use of H2

2018-03-20 Thread Vitali
My story.

We use H2 in production in one large project in one large organization with a few hundred users. It's a desktop GIS tool written in Java (a fork of the known uDig platform) with H2 as an embedded spatial database (using the Hatbox/GeoDB dependencies to handle spatial indexing and functions). The idea is to import all necessary data from the central database (PostgreSQL) into the embedded H2 and work autonomously. Large portions of spatial data are then managed in H2, and using a solution based on SymmetricDS, changes are collected and then synchronized between H2 and PostgreSQL on user request.

H2 is a very critical component; we always build our own version from source code, with some locally managed optimizations (not yet contributed back to the community), and the PageStore mechanism is in use (no MVStore). I would say the performance of H2, for all the cases a typical user of our system encounters, is very, very good. Database corruption is the one BIG problem for us. We know it. A GIS application is heavily multi-threaded: concurrent transactions, sometimes all kinds of OOM errors, etc. Our users also know this problem, and sometimes it happens to them. We have an established workflow for recovering the database and enough experience with it. During 3 years of production we fix roughly 10-15 databases (many GB in size) per year on average. But we have hundreds of users working with it daily without any issues.

I have to say that we build fresher H2 bundles from time to time, and nowadays the number of corruptions has decreased.

I would agree that it's not a production database for many scenarios; on the other hand, we successfully use it in production (as do some other teams, definitely), corruption issues are quickly resolved, and they are not critical to the customer. The overall performance of the solution makes customers happy in the end.




[h2] Re: LocalResult hold by PreparedStatement consumes a lot of Memory

2017-05-13 Thread Vitali
One of the test cases I wrote recently failed with an OOM. It tests some SELECT logic loading "heavy" records with large binary objects (geometries). The investigation I did showed exactly the case described below: prepared statements (every SELECT in my test produces a slightly different SQL query) are cached (the Apache DBCP2 library is used for connection pooling with the option ds.setPoolPreparedStatements(true)), and they hold the last result set (the field "lastResult" of the Query class, which is extended by the Select class).

The chain is:
JdbcPreparedStatement -> CommandInterface (CommandContainer) -> Prepared (Select -> Query) -> LocalResult (Query#lastResult).

So: prepared-statement caching is ON in the connection pool, the OPTIMIZE_REUSE_RESULTS option has its default value, and many unique SQL queries lead to continuous caching of prepared statements, each holding a complete result set in memory until the connection is closed or the prepared statements are released from the connection's cache. We do not know, in the general case, how PS caching is configured.

My conclusion is that cleaning this internally cached result set is a must once we know that the logic around the PS is complete and it goes to the cache. I understand why the result set is cached within the Query object. We need to clearly distinguish the cases when the last result is still needed from those when it's definitely not needed anymore.

I suppose PreparedStatement.clearParameters() is exactly the moment when we can safely get rid of the "last result". This method is also called by DBCP2 before the PS goes to the cache.

The Prepared class gets an empty method that can be overridden by subclasses:

public void clearCachedResults() {
}



The Query class overrides it:

@Override
public void clearCachedResults() {
    closeLastResult();
    lastResult = null;
}

CommandInterface gets a method as well:

public void clearCachedResults();

CommandContainer implements it as:

public void clearCachedResults() {
    prepared.clearCachedResults();
}



and JdbcPreparedStatement has additional logic to call

command.clearCachedResults();

inside its #clearParameters().
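As a hedged, stdlib-only model (the class names below are hypothetical stand-ins, not the real H2 or DBCP2 types), the intent of the proposal is that clearing parameters also drops the cached last result, while the statement object itself remains poolable:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical model of the proposed wiring: the prepared-statement wrapper
// forwards clearParameters() down to the query, which drops its cached
// last result so it becomes garbage-collectable.
class ModelQuery {
    private List<String> lastResult;

    List<String> execute() {
        lastResult = Arrays.asList("row1", "row2"); // stands in for a LocalResult
        return lastResult;
    }

    void clearCachedResults() {
        lastResult = null; // in H2 this would also close the result
    }

    boolean hasCachedResult() {
        return lastResult != null;
    }
}

class ModelPreparedStatement {
    final ModelQuery query = new ModelQuery();

    // Called by the pool (e.g. DBCP2) before the statement is cached.
    void clearParameters() {
        query.clearCachedResults();
    }
}

public class ClearCachedResultsSketch {
    public static void main(String[] args) {
        ModelPreparedStatement ps = new ModelPreparedStatement();
        ps.query.execute();
        System.out.println(ps.query.hasCachedResult()); // prints true
        ps.clearParameters();
        System.out.println(ps.query.hasCachedResult()); // prints false
    }
}
```

The point of choosing clearParameters() as the hook is that the pool already calls it at exactly the moment the statement stops being "in use".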


The last result is cleaned when it's no longer needed, the changes are minimal, the test case now runs successfully, and additional debugging proves the cleaning of the results and GC.


If this approach is accepted I can make a change request in Git. Maybe the naming of the methods could be better; let's think.

Vitali.






Re: [h2] Recovery tool does not handle LOBs correctly (explanation and fix is proposed)

2016-10-16 Thread Vitali

Finally, a pull request was created at GitHub.


On Wednesday, July 20, 2016 at 8:49:29 AM UTC+3, Thomas Mueller Graf wrote:
>
> Hi,
>
> It would be better if the recovery mechanism does not use an internal 
> mechanism of H2 (not use the "hidden" INFORMATION_SCHEMA.LOBS tables, and 
> instead use "real" tables). This is fixed in the MVStore case, where "real" 
> tables are used (actually just INFORMATION_SCHEMA.LOB_BLOCKS).
>
> So there are two ways to fix this for the PageStore: use a "real" table as 
> well (INFORMATION_SCHEMA.LOB_BLOCKS just as for the MVStore case), or your 
> fix. When using your fix, we would need test cases as well.
>
> Regards,
> Thomas

[h2] Re: What version to pick for production

2016-09-28 Thread Vitali
We have been successfully using H2 1.4.x in production for almost 2 years in a big organization with 300 users. It's an embedded database behind a GIS desktop system which is the main tool these 300 users use daily. So, a very business-critical case :) Data (spatial data) is taken from a central database (PostgreSQL) and delivered to H2, where all kinds of CRUD operations are performed. Changes are collected and synchronized back to the central database.

We always compile H2 from source with minor local changes optimizing the performance for certain SQL scenarios (probably some improvements are worth contributing to the master branch). The PageStore mechanism is used. So far the biggest problem has been the performance and stability of the BLOB/CLOB types. Since we replaced them with the VARBINARY/VARCHAR types, the number of corrupted-database cases has dropped to almost 0. Actually, it was a mistake to use BLOB in the data model design from the beginning; once I learned the H2 internals better, the realization that BLOB is not needed in our case solved the majority of the database corruption cases reported by users.

There is still much to do in H2, but it's a great Java product; thanks to the developers!

P.S. Our optimizations concerned nested selects like WHERE FK_COLUMN IN (SELECT ...). Often, when an index exists for FK_COLUMN, the parent table contains 20 records, and the nested SELECT retrieves thousands of values, the implementation is not optimal; I will try to write about this case soon, with a suggested patch.



[h2] Recovery tool does not handle LOBs correctly (explanation and fix is proposed)

2016-07-13 Thread Vitali
Hi.

Recently I observed that the recovery tool strangely breaks LOBs (in particular CLOB objects in my case, but the same applies to BLOB). After recovery, the CLOB values are not what they were in the original H2 file before the recovery process was run.

More than a day of debugging brought some understanding.

It seems other improvements made for LOB type handling affected the recovery workflow. In particular, the functions "READ_CLOB_DB" and "READ_BLOB_DB" do not work as they did long ago.


In the recovery script we see this type of SQL when a table has LOB columns:


CREATE TABLE O_1460(C0 CLOB, C1 CLOB);
INSERT INTO O_1460 VALUES(READ_CLOB_DB(1, 1006), READ_CLOB_DB(2, 3692));

...
CREATE TABLE O_1462(C0 CLOB, C1 CLOB);
INSERT INTO O_1462 VALUES(READ_CLOB_DB(5, 1006), READ_CLOB_DB(6, 4152));


These are temporary tables populated during recovery. In my case the table really looks like (CLOB, CLOB); that does not matter, the main point is that the table has some LOB column.

Let's continue.

READ_CLOB_DB(5, 1006) leads to Recover.readClobDb, which creates a ValueLobDb object, but inserting this object into table O_1462 calls the method LobStorageBackend.copyLob:

long lobId = getNextLobId();
String sql = "INSERT INTO " + LOB_MAP + "(LOB, SEQ, POS, HASH, BLOCK) " +
        "SELECT ?, SEQ, POS, HASH, BLOCK FROM " + LOB_MAP + " WHERE LOB = ?";
PreparedStatement prep = prepare(sql);
prep.setLong(1, lobId);
prep.setLong(2, oldLobId);
prep.executeUpdate();
reuse(sql, prep);

sql = "INSERT INTO " + LOBS + "(ID, BYTE_COUNT, TABLE) " +
        "SELECT ?, BYTE_COUNT, ? FROM " + LOBS + " WHERE ID = ?";
prep = prepare(sql);
prep.setLong(1, lobId);
prep.setLong(2, tableId);
prep.setLong(3, oldLobId);
prep.executeUpdate();
reuse(sql, prep);

v = ValueLobDb.create(type, database, tableId, lobId, null, length);


At this moment the tables INFORMATION_SCHEMA.LOBS and LOB_MAP are completely empty.

This means "getNextLobId()" always returns 1. So, instead of the original LOBID passed in READ_CLOB_DB(6, 4152), we get a LOB reference in the target table O_1462 that has LOBID=1.

If there are 400 CLOB values in the H2 database, all of them get LOBID=1.
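The collision is easy to see with a toy model (hypothetical code, not the actual H2 implementation) where the next LOB id is computed as the maximum of the ids visible to the allocator plus one; since the LOB tables stay empty for the whole recovery run, every call returns 1:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of getNextLobId(): max of the existing ids + 1. During recovery
// the LOB tables the allocator consults are empty, and the copied rows never
// land in them, so every copied LOB is assigned the same id.
public class LobIdCollisionSketch {
    static long nextLobId(Set<Long> idsSeenByAllocator) {
        long max = 0;
        for (long id : idsSeenByAllocator) {
            max = Math.max(max, id);
        }
        return max + 1;
    }

    public static void main(String[] args) {
        Set<Long> visibleIds = new HashSet<>(); // empty, as during recovery
        // Three LOBs copied during the recovery script: all get id 1.
        System.out.println(nextLobId(visibleIds)); // prints 1
        System.out.println(nextLobId(visibleIds)); // prints 1
        System.out.println(nextLobId(visibleIds)); // prints 1
    }
}
```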

When the whole recovery script is run, including also the logic below,

...
DELETE FROM INFORMATION_SCHEMA.LOBS;
INSERT INTO INFORMATION_SCHEMA.LOBS SELECT * FROM O_2;
UPDATE INFORMATION_SCHEMA.LOBS SET TABLE = -2;
DELETE FROM INFORMATION_SCHEMA.LOB_MAP;
INSERT INTO INFORMATION_SCHEMA.LOB_MAP SELECT * FROM O_6;
DELETE FROM INFORMATION_SCHEMA.LOB_DATA;
INSERT INTO INFORMATION_SCHEMA.LOB_DATA SELECT * FROM O_10;
...

INSERT INTO SYMDS.SYM_ON_I_FOR_SYM_TRNSFRM_TBL_FLD_CONFIG SELECT * FROM 
O_1460;
...
DROP TABLE O_1460;
...


we get a local database where all CLOB values, instead of referencing the correct INFORMATION_SCHEMA.LOB_DATA blocks as in the original H2 file, reference the same LOB object, the one that had LOBID=1.


-

The problem is that LobStorageBackend.copyLob should work differently in the recovery case, because the system tables with LOB information are empty.

I did a one-minute fix that solves the problem:


ValueLobDb:

private boolean isRecoveryReference = false;

public void setRecoveryReference(boolean isRecoveryReference) {
    this.isRecoveryReference = isRecoveryReference;
}

public boolean isRecoveryReference() {
    return isRecoveryReference;
}

Recover:

/**
 * INTERNAL
 */
public static Value.ValueClob readClobDb(Connection conn, long lobId,
        long precision) {
    DataHandler h = ((JdbcConnection) conn).getSession().getDataHandler();
    verifyPageStore(h);
    ValueLobDb lob = ValueLobDb.create(Value.CLOB, h,
            LobStorageFrontend.TABLE_TEMP, lobId, null, precision);
    lob.setRecoveryReference(true);
    return lob;
}

/**
 * INTERNAL
 */
public static Value.ValueBlob readBlobDb(Connection conn, long lobId,
        long precision) {
    DataHandler h = ((JdbcConnection) conn).getSession().getDataHandler();
    verifyPageStore(h);
    ValueLobDb lob = ValueLobDb.create(Value.BLOB, h,
            LobStorageFrontend.TABLE_TEMP, lobId, null, precision);
    lob.setRecoveryReference(true);
    return lob;
}


And in LobStorageBackend:

@Override
public ValueLobDb copyLob(ValueLobDb old, int tableId, long length) {
    int type = old.getType();
    long oldLobId = old.getLobId();
    assertNotHolds(conn.getSession());
    // see locking discussion

Re: [h2] NPE in at org.h2.store.LobStorageBackend.copyLob(LobStorageBackend.java:453) becuase of TRIGGER initialization..

2015-05-08 Thread Vitali
I will provide a patch. Just one line of code is moved earlier in the Database class, plus a minor change in a LOB-related class; I tested it and it works smoothly.

If you have ever heard of the SymmetricDS library for database synchronization: it supports H2, and triggers created automatically by SymmetricDS capture any change in a table, log it in a special way, and synchronize it with a target database later. It's an excellent framework we are using to synchronize PostgreSQL and H2. But it uses quite a lot of CLOBs and also keeps its configuration in CLOBs. So when the triggers are initialized they try to read something from a CLOB, and we get the problem above. The old 1.4.181 is not affected.
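The initialization-order problem can be sketched abstractly (hypothetical names, not the real Database class): any component that trigger initialization depends on must be initialized first:

```java
// Toy model of the init-order bug: the trigger reads LOB data during its
// own initialization, so the LOB storage must be ready before triggers run.
public class InitOrderSketch {
    static class LobStorage {
        private boolean initialized;

        void init() {
            initialized = true;
        }

        String readClob() {
            if (!initialized) {
                // models the NPE seen in LobStorageBackend.copyLob
                throw new NullPointerException("LOB storage not initialized");
            }
            return "clob-data";
        }
    }

    static class Trigger {
        // Trigger initialization reads configuration stored in a CLOB.
        void init(LobStorage lobs) {
            lobs.readClob();
        }
    }

    public static void main(String[] args) {
        LobStorage lobs = new LobStorage();
        // The essence of the patch: perform this call before creating triggers.
        lobs.init();
        new Trigger().init(lobs); // succeeds; without init() it would throw
        System.out.println("open ok"); // prints open ok
    }
}
```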

Vitali.



On Friday, May 8, 2015 at 7:12:07 PM UTC+3, Thomas Mueller wrote:

 Hi,

 OK I see, so you are running SQL statements while initializing the 
 trigger. It sounds like we need to initialize the LOB storage before 
 creating the triggers then... Patches are welcome!

 Regards,
 Thomas


Re: [h2] NPE in at org.h2.store.LobStorageBackend.copyLob(LobStorageBackend.java:453) becuase of TRIGGER initialization..

2015-05-06 Thread Vitali


Hi, Thomas.

I think it's not a LOB-storage-related problem. The problem, as I specified in the e-mail, is that triggers during initialization try to read some CLOB data, but LobStorageBackend is not yet initialized with a connection. Look at the classes in the stack trace. It happens because triggers are initialized earlier than the LOB storage in the Database class.



Vitali.

On Wednesday, May 6, 2015 at 9:21:48 AM UTC+3, Thomas Mueller wrote:

 Hi,

 There were some changes in this area, and an important bugfix in version 
 1.4.187. If the LOB data was already removed before that, then I'm afraid 
 the data can not be restored easily. See also the change log for details.

 Regards,
 Thomas


 On Sunday, May 3, 2015, Vitali vit...@gmail.com wrote:

 Hi.


 H2 contains triggers that at initialization time try to read from a LOB column of some table.

 Because it occurs in org.h2.engine.Database.open(Database.java:735):

 org.h2.jdbc.JdbcSQLException: General error: 
 java.lang.NullPointerException; SQL statement:
 select * from SMGIS.SYM_ON_U_FOR_TRG_TSS_HNK_FLD_CONFIG [5-187]
 at 
 org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
 at org.h2.message.DbException.get(DbException.java:168)
 at org.h2.message.DbException.convert(DbException.java:295)
 at org.h2.command.Command.executeQuery(Command.java:209)
 at org.h2.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:79)
 at 
 org.jumpmind.symmetric.db.AbstractEmbeddedTrigger.getTemplates(AbstractEmbeddedTrigger.java:265)
 at 
 org.jumpmind.symmetric.db.AbstractEmbeddedTrigger.init(AbstractEmbeddedTrigger.java:78)
 at org.jumpmind.symmetric.db.h2.H2Trigger.init(H2Trigger.java:51)
 at org.h2.schema.TriggerObject.load(TriggerObject.java:81)
 at 
 org.h2.schema.TriggerObject.setTriggerAction(TriggerObject.java:136)
 at 
 org.h2.schema.TriggerObject.setTriggerClassName(TriggerObject.java:118)
 at org.h2.command.ddl.CreateTrigger.update(CreateTrigger.java:115)
 at org.h2.engine.MetaRecord.execute(MetaRecord.java:58)
 at org.h2.engine.Database.open(Database.java:735)
 at org.h2.engine.Database.openDatabase(Database.java:266)
 at org.h2.engine.Database.init(Database.java:260)
 at org.h2.engine.Engine.openSession(Engine.java:60)
 at org.h2.engine.Engine.openSession(Engine.java:167)
 at org.h2.engine.Engine.createSessionAndValidate(Engine.java:145)
 at org.h2.engine.Engine.createSession(Engine.java:128)
 at org.h2.engine.Engine.createSession(Engine.java:26)
 at 
 org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:347)
 at org.h2.jdbc.JdbcConnection.init(JdbcConnection.java:108)
 at org.h2.jdbc.JdbcConnection.init(JdbcConnection.java:92)
 at org.h2.Driver.connect(Driver.java:72)
 at java.sql.DriverManager.getConnection(DriverManager.java:571)
 at java.sql.DriverManager.getConnection(DriverManager.java:215)

 We get NPE as:

 Caused by: java.lang.NullPointerException
 at org.h2.store.LobStorageBackend.copyLob(LobStorageBackend.java:453)
 at org.h2.value.ValueLobDb.copyToResult(ValueLobDb.java:495)
 at org.h2.value.ValueLobDb.copyToResult(ValueLobDb.java:38)
 at org.h2.result.LocalResult.cloneLobs(LocalResult.java:265)
 at org.h2.result.LocalResult.addRow(LocalResult.java:281)
 at org.h2.command.dml.Select.queryFlat(Select.java:585)
 at org.h2.command.dml.Select.queryWithoutCache(Select.java:685)
 at org.h2.command.dml.Query.query(Query.java:322)
 at org.h2.command.dml.Query.query(Query.java:290)
 at org.h2.command.dml.Query.query(Query.java:36)
 at org.h2.command.CommandContainer.query(CommandContainer.java:90)
 at org.h2.command.Command.executeQuery(Command.java:197)

 because the getLobStorage().init(); line comes a bit later in the Database class.


 It seems there was no problem in 1.4.181, which we have been using. Were there any changes in the initialization logic of Database? Is loading from third-party tables in a trigger a legal approach now?

 Vitali.






[h2] Re: H2 seems to create a huge number of orphan lobs since 1.4.183.

2015-05-05 Thread Vitali


I agree about H2. It's a great database. Only because we found in H2 an embedded database with good-enough, simple spatial capabilities (in our case based on Hatbox and GeoTools) were we able to complete a project for a big customer with 400 users managing 25 GB of business-critical spatial data daily!

After going deep into H2 internals, and from what I have heard from you, guys, we are convinced that H2GIS is the way of the future and should give a performance boost compared to the current technology we are using. I am not a license guru; we do projects based on open-source libraries, applications, and tools for private customers. The core app is uDig-based, and H2 lives in it as an embedded database. All dependencies and components are commercial-friendly, using licenses such as LGPL and others.

If the license is an issue, we can always create something simple like a geodb wrapper for H2, which now has a spatial data type and built-in spatial index support. Anyway, all the profit comes more or less from JTS, and that's a known fact. Whether it's the geodb library or H2GIS, they rely on JTS. You may of course have some advanced computational functionality, but we need only the basics from a spatial database. It's not for analysis; it's for storing and managing spatial features, big amounts of them; all advanced spatial analysis is done in business logic anyway.

Have you ever tried H2 as a spatial database behind some GIS interface like uDig or another desktop Java GIS app? It is definitely a great DB with excellent performance.


Coming back to the original topic of this thread, I would conclude that the BLOB type is absolutely unnecessary for storing geometries. The 2 GB question was more theoretical.

Vitali.






On Tuesday, May 5, 2015 at 10:59:43 AM UTC+3, Nicolas Fortin (OrbisGIS) wrote:

 Hi Vitali,

 The PostGIS maximum geometry field size is 1 GB [1], so I don't think it's a problem for H2 to have a theoretical limit of 2.1 GB (precisely 2^31-5 [2]). It is not very practical to have such a huge geometry, as it is slow to render and process (no stream processing for vector geometries in JTS).

 As Thomas said, you now have access to the native spatial R-tree index with the same syntax as PostGIS (the && operator):
 http://www.h2database.com/html/advanced.html#spatial_features

 There is just one missing piece with the spatial index: support for SQL aliases in H2, in order to link the && operator with spatial predicates.
 I understand that if the GPLv3 license of H2GIS is an issue for you, you want to 
 update geotools and geodb (you don't need hatbox anymore).

 All 744 unit tests of H2GIS are running fine with the H2 MVStore. We 
 greatly appreciate the power of the H2 database :)

 Regards,


  [1]: http://www.postgresql.org/about/
  [2]: http://stackoverflow.com/questions/3038392/do-java-arrays-have-a-maximum-size

  -- 
 Nicolas Fortin
 IRSTV FR CNRS 2488
 GIS        http://orbisgis.org
 Spatial DB http://h2gis.org
 Noise      http://noisemap.orbisgis.org




Re: [h2] Re: H2 seems to create a huge number of orphan lobs since 1.4.183.

2015-05-04 Thread Vitali


On Monday, May 4, 2015 at 3:32:39 PM UTC+3, Thomas Mueller wrote:

 Hi,

  But already for many years  the spatial support was provided by a 
 combination of geodb + hatbox libraries and integration in GeoTools  world

 Yes. However, those don't use the built-in R tree. Do they use an external 
 R tree?


Hatbox provides an R-tree. It is built on H2 infrastructure (an auxiliary 
table is created where the tree nodes are stored). What would a built-in R-tree 
mean here?


  All these was done on BLOB type where a geometry WKB is stored. 

 A small BLOB is stored inline, so it might not be that bad.

   any access of BLOB value makes a copy of it

Yes, that is what I meant. For certain scenarios it has significant 
performance issues anyway, as I have experienced: millions of temporary LOB 
entries when you have just tens of thousands of spatial records and a 
not very optimized spatial query.


 Access makes a copy of the reference of a large BLOB.

  Isn't 2 GB the limit for binary types?

 In reality the problem is the memory usage (heap memory).


That should not be a problem. Typically in a GIS application the biggest 
result sets extracted from the database are not held or cached for a long time; 
they are used to render spatial features, and then all references are 
immediately released in the JVM. Whether it's BLOB or BINARY, the value is 
loaded into memory anyway in order to parse the Geometry from the WKB. Maybe 
with VARBINARY a bit more data is kept in memory for a short period than would 
be with BLOBs. Maybe I would consider an approach in ValueGeometry where the 
bytes are kept only until the geometry is requested; then the Geometry is 
lazily parsed and the bytes are released. So at any point in time either the 
bytes are held or the Geometry object: from bytes to Geometry, and from 
Geometry back to bytes when necessary.

In SELECT scenarios the bytes are needed until the Geometry object is created, 
and then it is used outside of the result set, or locally during command 
execution. I am not sure how relevant this sounds in the scope of the whole 
database infrastructure.
Am I right that a local result set data structure is not returned to the caller 
until it is fully composed? Then, if the result set is huge, all the bytes are 
anyway kept in memory until the result set is delivered and the client starts 
to request Geometry objects, at which point the bytes would be cleaned up...
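The "hold the bytes or the parsed Geometry, but not both" idea could be sketched roughly as below. This is only an illustration of the pattern, not H2's actual ValueGeometry: the class name, the double[] stand-in for a JTS Geometry, and the toy codec are all hypothetical; real code would use a JTS WKBReader/WKBWriter.

```java
// Sketch of a value holder that keeps the raw bytes only until the
// geometry is requested, then parses lazily and releases the bytes.
// The "geometry" here is a placeholder double[] instead of a JTS Geometry.
final class LazyGeometryValue {
    private byte[] wkb;        // raw binary form; dropped once parsed
    private double[] coords;   // parsed form (stand-in for a Geometry)

    LazyGeometryValue(byte[] wkb) {
        this.wkb = wkb.clone();
    }

    // Parse lazily on first access, then release the byte array,
    // so only one representation is held at any point in time.
    double[] getGeometry() {
        if (coords == null) {
            coords = decode(wkb);
            wkb = null;        // from bytes to Geometry: bytes released
        }
        return coords;
    }

    // Re-encode on demand when the binary form is needed again.
    byte[] getWkb() {
        if (wkb != null) {
            return wkb.clone();
        }
        return encode(coords); // from Geometry back to bytes
    }

    // Toy codec: 8 bytes per coordinate, big-endian doubles.
    private static double[] decode(byte[] b) {
        java.nio.ByteBuffer buf = java.nio.ByteBuffer.wrap(b);
        double[] out = new double[b.length / 8];
        for (int i = 0; i < out.length; i++) {
            out[i] = buf.getDouble();
        }
        return out;
    }

    private static byte[] encode(double[] c) {
        java.nio.ByteBuffer buf = java.nio.ByteBuffer.allocate(c.length * 8);
        for (double d : c) {
            buf.putDouble(d);
        }
        return buf.array();
    }
}
```

The point of the design is that memory holds at most one representation at a time; the binary form is re-created on demand instead of being cached alongside the parsed object.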


Vitali.


 Regards,
 Thomas




[h2] Re: Performances of nested queries

2015-05-03 Thread Vitali
I would also add my case here.

It seems that a query like

SELECT * FROM SOMETABLE WHERE SOMETABLE_ID IN (SELECT ID FROM 
SOMEFUNCTION( .. static input parameters.. ))

does not scale very well. SOMETABLE_ID is a primary key with an 
index. SOMEFUNCTION is a deterministic function returning a ResultSet, and its 
parameters are static.
Let's say the main table contains 8 records and SOMEFUNCTION returns 3000 
IDs.

The query above takes 4 seconds.
If I rewrite it as


SELECT * FROM SOMETABLE INNER JOIN (SELECT ID FROM SOMEFUNCTION( .. 
static input parameters.. ))  ids ON ids.ID = SOMETABLE_ID

then it takes 10 ms.

With more items in the set, performance drops exponentially.

I think this case is much more widespread than the one Quentine 
reported :)

A variant of this case is a prepared statement query like:
SELECT * FROM SOMETABLE WHERE SOMETABLE_ID IN (SELECT ID FROM TABLE(ID 
INT=?))
where a Set<Integer>, for example, is passed to the prepared statement; the 
time complexity should be O(n): iterate over the IDs (which were even passed as 
a Java collection) and ask the primary key index to retrieve each record.

Inner join also helps:
SELECT * FROM SOMETABLE INNER JOIN (SELECT   ID FROM TABLE(ID INT=?)) ids 
ON ids.ID = SOMETABLE_ID

There is no problem with using inner joins, but for applications that generate 
SQL dynamically, the SOMETABLE_ID IN (...) approach is preferable. Consider 
UPDATE or DELETE queries, for example, where joins are not directly possible.
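For reference, the two formulations from this thread side by side, plus one possible workaround for statements that cannot use a join directly (SOMETABLE, SOMEFUNCTION, and the IDs are the placeholders from the post; SOME_COLUMN is a hypothetical column, and the EXISTS variant is an untested sketch, not a benchmarked recommendation):

```sql
-- Slow form reported above (~4 s for 3000 IDs): IN with a subquery
SELECT * FROM SOMETABLE
WHERE SOMETABLE_ID IN (SELECT ID FROM SOMEFUNCTION( .. static input parameters.. ));

-- Fast form reported above (~10 ms): the same filter as an inner join
SELECT * FROM SOMETABLE
INNER JOIN (SELECT ID FROM SOMEFUNCTION( .. static input parameters.. )) ids
        ON ids.ID = SOMETABLE_ID;

-- For UPDATE/DELETE, where a join is not directly possible, a correlated
-- EXISTS is one candidate rewrite (a sketch only; not benchmarked here):
UPDATE SOMETABLE t SET SOME_COLUMN = 0
WHERE EXISTS (SELECT 1 FROM TABLE(ID INT=?) ids WHERE ids.ID = t.SOMETABLE_ID);
```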

Vitali



-- 
You received this message because you are subscribed to the Google Groups H2 
Database group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to h2-database+unsubscr...@googlegroups.com.
To post to this group, send email to h2-database@googlegroups.com.
Visit this group at http://groups.google.com/group/h2-database.
For more options, visit https://groups.google.com/d/optout.


[h2] Re: H2 seems to create a huge number of orphan lobs since 1.4.183.

2015-05-03 Thread Vitali
Hello.

I would like to share some observations. Recently H2 got a Geometry 
type, the logic around it seems to be growing, and some extra tiers like H2GIS 
are under development. Altogether this seems to be the future of spatial 
support in H2. But for many years already the spatial support was 
provided by a combination of the geodb and hatbox libraries and integration 
into the GeoTools world (as an H2 data store interface for storing/managing 
spatial features with geometries).
All of this was done on the BLOB type, where the geometry WKB is stored. 

BLOB became completely useless as a type for handling the WKB of geometries, 
because of the change that any access to a BLOB value makes a copy of it. 
The HATBOX and GEODB libs, based on the JTS library, provide functions to work 
with WKB, but any call to these functions reads the BLOB value, which makes 
a copy in memory. Some spatial conflation operations that are not optimized 
(having polynomial complexity, e.g. applying spatial predicates between every 
combination of input geometries from two tables) now have 
catastrophic performance and memory consumption. Cases where the old H2 took 
just 10 seconds for some kind of spatial operation between two layers 
(tables) now run for 2 hours, produce a 3 GB database file (instead of the 
normal 400 MB), and finally end in an out-of-memory error. And after such 
operations there are long cleanups of the temporary LOB storage on app start, 
app close, and transaction commit.

I understand the real reasons for this BLOB copying approach, but the 
conclusion is that BLOB is not the right type for geometries. In a typical GIS 
(like UDIG), thousands of records are extracted every second for multiple 
layers during rendering, and other types of requests need geometries too. Now 
BLOB has become inefficient.

Altogether, very likely I will refactor geodb, hatbox, and GeoTools 
to work with the GEOMETRY type, which is basically a kind of VARBINARY, meaning 
the WKB is just read into memory. But that is what a GIS app usually needs: 
to get a geometry almost every time data is read. Also, because the JTS 
geometry is lazily cached in ValueGeometry, various logic in H2 (like custom 
spatial functions called multiple times) benefits. I think the H2GIS 
toolkit more or less uses this approach already.

The only concern is whether there are any limitations for cases like a lake 
boundary that consists of hundreds of thousands of vertices. Isn't 
2 GB the limit for binary types? Then it's fine. But how do the older 
PageStore and the modern MVStore handle this type? Any performance issues?

Vitali.




[h2] NPE at org.h2.store.LobStorageBackend.copyLob(LobStorageBackend.java:453) because of TRIGGER initialization..

2015-05-02 Thread Vitali
Hi.


The H2 database contains triggers that, at initialization time, try to read a 
LOB column from some table.

Because this happens inside org.h2.engine.Database.open(Database.java:735):

org.h2.jdbc.JdbcSQLException: General error: 
java.lang.NullPointerException; SQL statement:
select * from SMGIS.SYM_ON_U_FOR_TRG_TSS_HNK_FLD_CONFIG [5-187]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:168)
at org.h2.message.DbException.convert(DbException.java:295)
at org.h2.command.Command.executeQuery(Command.java:209)
at org.h2.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:79)
at 
org.jumpmind.symmetric.db.AbstractEmbeddedTrigger.getTemplates(AbstractEmbeddedTrigger.java:265)
at 
org.jumpmind.symmetric.db.AbstractEmbeddedTrigger.init(AbstractEmbeddedTrigger.java:78)
at org.jumpmind.symmetric.db.h2.H2Trigger.init(H2Trigger.java:51)
at org.h2.schema.TriggerObject.load(TriggerObject.java:81)
at org.h2.schema.TriggerObject.setTriggerAction(TriggerObject.java:136)
at 
org.h2.schema.TriggerObject.setTriggerClassName(TriggerObject.java:118)
at org.h2.command.ddl.CreateTrigger.update(CreateTrigger.java:115)
at org.h2.engine.MetaRecord.execute(MetaRecord.java:58)
at org.h2.engine.Database.open(Database.java:735)
at org.h2.engine.Database.openDatabase(Database.java:266)
at org.h2.engine.Database.init(Database.java:260)
at org.h2.engine.Engine.openSession(Engine.java:60)
at org.h2.engine.Engine.openSession(Engine.java:167)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:145)
at org.h2.engine.Engine.createSession(Engine.java:128)
at org.h2.engine.Engine.createSession(Engine.java:26)
at 
org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:347)
at org.h2.jdbc.JdbcConnection.init(JdbcConnection.java:108)
at org.h2.jdbc.JdbcConnection.init(JdbcConnection.java:92)
at org.h2.Driver.connect(Driver.java:72)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)

We get NPE as:

Caused by: java.lang.NullPointerException
at org.h2.store.LobStorageBackend.copyLob(LobStorageBackend.java:453)
at org.h2.value.ValueLobDb.copyToResult(ValueLobDb.java:495)
at org.h2.value.ValueLobDb.copyToResult(ValueLobDb.java:38)
at org.h2.result.LocalResult.cloneLobs(LocalResult.java:265)
at org.h2.result.LocalResult.addRow(LocalResult.java:281)
at org.h2.command.dml.Select.queryFlat(Select.java:585)
at org.h2.command.dml.Select.queryWithoutCache(Select.java:685)
at org.h2.command.dml.Query.query(Query.java:322)
at org.h2.command.dml.Query.query(Query.java:290)
at org.h2.command.dml.Query.query(Query.java:36)
at org.h2.command.CommandContainer.query(CommandContainer.java:90)
at org.h2.command.Command.executeQuery(Command.java:197)

because the getLobStorage().init(); call comes a bit later in the Database 
class.


It seems there was no such problem in 1.4.181, which we had been using. Were 
there any changes in the initialization logic of Database? Is reading from 
other tables during trigger initialization still a legal approach?

Vitali.



Re: [h2] Corruption errors with new 1.4 branch

2014-09-17 Thread Vitali
Hello, Thomas.

I will prepare a test case for you in the coming days.

Vitali.

On Wednesday, September 17, 2014 9:54:21 AM UTC+3, Thomas Mueller wrote:

 Hi,

 I will try to improve the error message / stack trace, but maybe the 
 "object is already closed" error is unrelated to the other problem ("Chunk ... 
 not found").

 If you have a reproducible test case, that would be great. Or a database 
 you can send me, so I can reproduce it.

 Regards,
 Thomas







Re: [h2] Re: Corruption errors with new 1.4 branch

2014-09-16 Thread Vitali
Usually everything starts with a

Caused by: org.h2.jdbc.JdbcSQLException: The object is already closed; SQL 
statement:

exception on an INSERT or UPDATE query where a BLOB column is involved.

UPDATE X.A SET TYYPPI = ?,ITEMID = ?,SHAPE = ?  WHERE (A_ID = 
?) [90007-181]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.message.DbException.get(DbException.java:144)
at org.h2.store.LobStorageMap.createBlob(LobStorageMap.java:128)
at org.h2.value.ValueLobDb.link(ValueLobDb.java:254)
at org.h2.mvstore.db.MVPrimaryIndex.add(MVPrimaryIndex.java:120)
at org.h2.mvstore.db.MVTable.addRow(MVTable.java:620)
at org.h2.table.Table.updateRows(Table.java:466)
at org.h2.command.dml.Update.update(Update.java:145)
at org.h2.command.CommandContainer.update(CommandContainer.java:78)
at org.h2.command.Command.executeUpdate(Command.java:254)
at 
org.h2.jdbc.JdbcPreparedStatement.execute(JdbcPreparedStatement.java:198)
at 
org.apache.commons.dbcp2.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:198)
at 
org.apache.commons.dbcp2.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:198)
at org.geotools.jdbc.JDBCDataStore.update(JDBCDataStore.java:1570)


It looks like something happens to the BLOB's InputStream: it is closed before 
it should be used.

Currently I hit this problem with high probability in 
INSERT ... SELECT queries that involve BLOBs as well.

Vitali.

On Monday, September 15, 2014 10:55:41 AM UTC+3, Thomas Mueller wrote:

 Hi,

 Could you provide a reproducible test case? If not, I would need to have a 
 more detailed description on the operations that were run.

 Regards,
 Thomas



 On Mon, Sep 15, 2014 at 9:38 AM, Vitali Diatchkov vit...@gmail.com 
 wrote:


 I  am observing  similar  issue with lost chunks. See my post about 
 BLOBs  at 13.09.2014.  Randomly in multi-threaded environment and MVStore  
 enabled  the database  becomes corrupted. 







[h2] 1.4.181 BLOB related issue

2014-09-14 Thread Vitali Diatchkov
Hello.

The issue I describe is very critical for us. We are developing a huge GIS 
system based on the UDig platform, where H2 serves as a local embedded database 
with spatial data type support coming from the Hatbox, GeoDB, and GeoTools 
libraries. We use the BLOB data type to store geometries as WKB (well-known 
binary); Hatbox provides spatial indexing, etc. Everything has been 
working smoothly during the recent year of development with 1.3.174 and 
MVCC enabled. Switching to 1.4.181 with MVStore enabled led to issues related 
to BLOB columns. The most typical exception is on INSERT, and it is below:

Caused by: org.h2.jdbc.JdbcSQLException: The object is already closed; SQL 
statement:
insert into  . values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) 
[90007-181]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.message.DbException.get(DbException.java:144)
at org.h2.store.LobStorageMap.createBlob(LobStorageMap.java:128)
at org.h2.value.ValueLobDb.link(ValueLobDb.java:254)
at org.h2.mvstore.db.MVPrimaryIndex.add(MVPrimaryIndex.java:120)
at org.h2.mvstore.db.MVTable.addRow(MVTable.java:620)
at org.h2.command.dml.Insert.insertRows(Insert.java:156)
at org.h2.command.dml.Insert.update(Insert.java:114)
at org.h2.command.CommandContainer.update(CommandContainer.java:78)
at org.h2.command.Command.executeUpdate(Command.java:254)
at 
org.h2.jdbc.JdbcPreparedStatement.execute(JdbcPreparedStatement.java:198)
at 
org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
at 
org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
at 
org.jumpmind.db.sql.JdbcSqlTransaction.addRow(JdbcSqlTransaction.java:394)

One column in the underlying table has type BLOB; some tables where this 
exception occurs have two BLOB columns. The problem occurs randomly on INSERTs 
to different tables that have BLOB columns for geometries. Once this 
exception occurs, what eventually follows is another type of exception on an 
attempt to SELECT, about missing chunks of data:

Caused by: java.lang.IllegalStateException: Chunk 332 no longer exists 
[1.4.181/9]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:762)
at org.h2.mvstore.MVStore.getChunkIfFound(MVStore.java:861)
at org.h2.mvstore.MVStore.getChunk(MVStore.java:844)
at org.h2.mvstore.MVStore.readPage(MVStore.java:1795)
at org.h2.mvstore.MVMap.readPage(MVMap.java:769)
at org.h2.mvstore.Page.getChildPage(Page.java:252)
at org.h2.mvstore.MVMap.binarySearch(MVMap.java:469)
at org.h2.mvstore.MVMap.binarySearch(MVMap.java:470)
at org.h2.mvstore.MVMap.get(MVMap.java:451)
at org.h2.mvstore.MVMapConcurrent.remove(MVMapConcurrent.java:71)
at org.h2.store.LobStorageMap.removeLob(LobStorageMap.java:292)
at org.h2.store.LobStorageMap.removeLob(LobStorageMap.java:278)
at org.h2.value.ValueLobDb.close(ValueLobDb.java:227)
at org.h2.engine.Session.endTransaction(Session.java:552)
at org.h2.engine.Session.commit(Session.java:536)
at org.h2.command.Command.stop(Command.java:152)
at org.h2.command.Command.executeUpdate(Command.java:284)
at org.h2.jdbc.JdbcStatement.executeInternal(JdbcStatement.java:184)
at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:158)


Transactions are manually managed (no autocommit) in the application. 
Usually there is one transaction open, being committed or rolled back 
from time to time, while other connections are used for simultaneous reading.
We have temporarily switched back to the previous version, which was stable, 
but we are looking forward to MVStore.


I am planning to do some debugging by myself and maybe provide more info 
later. For now it seems something is broken / not stable / not complete in 
MVStore for BLOBs. Any comments from the H2 team? You may get whatever help 
you need from us to debug and to provide more info. Anyway, H2 is a 
great piece of software. Thanks, guys :) 

Vitali.
