Re: [h2] Possible bug in LZF compression

2014-06-02 Thread Jan Kotek
Hi,

I confirm the issue is on my side. I made an optimization in MapDB that caused
the bug (I replaced a for-loop copy with System.arraycopy).

Thanks for help and sorry for false alarm.

Jan
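
For readers wondering why that substitution breaks LZF: back-references in LZF
decompression may overlap their destination, and an overlapping copy must move
bytes one at a time so that earlier writes feed later reads. System.arraycopy
is specified to behave as if it copied through a temporary array, so the
pattern does not propagate. A minimal illustration (my own sketch, not MapDB's
actual code):

    byte[] buf = {'a', 'b', 0, 0, 0, 0};
    int src = 0, dst = 2, len = 4;

    // Byte-by-byte copy: each iteration may read a byte written by an
    // earlier iteration, so the two-byte pattern repeats: a b a b a b
    for (int i = 0; i < len; i++) {
        buf[dst + i] = buf[src + i];
    }

    // System.arraycopy(buf, src, buf, dst, len) instead copies as if
    // through a temporary array and yields: a b a b 0 0 - wrong for LZF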

On Wednesday, May 28, 2014 13:41:24 Thomas Mueller wrote:


Hi,


Yes, it looks like this doesn't affect H2. Maybe it is related to the changes
to the LZF compressor in MapDB? My test case:


// requires org.h2.compress.CompressLZF, org.h2.util.StringUtils and
// java.util.Arrays; the hex literal is split with '+' so it compiles
byte[] data = StringUtils.convertHexToBytes(
    "00ef76fa135e7d216e829a53845a983469ac1e"
    + "4edb6120b79667d667e7d4f856010101010022bf4569010023002102"
    + "123eeaa90e2f5786ce028e60ec03702706dadecee373a90b09b88a99cc668f46ac33"
    + "58c8ea6433279c678846fb6e06eeccd82e2fe888f2ac203476d3918cd4057901"
    + "0038ff9e00be438253be4382530100109bf45901002300210"
    + "2123eeaa90e2f5786ce028e60ec03702706dadecee373a90b09b88a99cc668f46ac3"
    + "8bf80f10129594a7e949cc43c3bd6f8670ba5ab59874305f6839406738a9cf901"
    + "0038ff9e0081bd175381bd1753");
CompressLZF lzf = new CompressLZF();
byte[] out = new byte[data.length];
int len = lzf.compress(data, data.length, out, 0);
byte[] test = new byte[data.length];
lzf.expand(out, 0, len, test, 0, data.length);
System.out.println(StringUtils.convertBytesToHex(test));
System.out.println(Arrays.hashCode(data));
System.out.println(Arrays.hashCode(test));


Regards,
Thomas


On Tue, May 27, 2014 at 1:57 PM, Noel Grandin  wrote:


I massaged your test case into a unit test for H2, and it seems to be working 
for us.

But maybe there is some more transformation that happens to the raw byte array 
before it hits the LZF compressor.








[h2] Possible bug in LZF compression

2014-05-27 Thread Jan Kotek
Hi,

MapDB uses LZF compression from H2 database. One of our users 
reported wrongly decompressed data:
https://github.com/jankotek/MapDB/issues/332[1] 

I have not checked yet whether this bug affects H2 as well.
Will be back in a few days.

All best,
Jan


[1] https://github.com/jankotek/MapDB/issues/332



Re: [h2] MVStore with Serializable Transaction Isolation

2014-05-07 Thread Jan Kotek
Hi,

sorry for late reply, but perhaps you find this interesting.

MapDB is a db engine similar to MVStore. On its own it does not support
transactions or snapshots, but it has a separate wrapper which provides those
features (search for TxMaker and TxEngine). I think it could perhaps be used
to provide snapshots and serializable transactions for MVStore as well.

Jan

On Monday, April 21, 2014 02:28:02 Kieron Wilkinson wrote:


Hi Thomas,


I've been thinking about this the last few days, writing various unit tests to
learn how MVStore works, and I think that, in retrospect, hardcore
serialisation isolation is really quite a difficult thing to achieve in a
generic way. This article was quite useful to me to think about the
difference, though no doubt you are more familiar with these things than I am:
http://blogs.msdn.com/b/craigfr/archive/2007/05/16/serializable-vs-snapshot-isolation-level.aspx


So, I wonder whether it actually makes more sense to concentrate on
implementing snapshot isolation instead, particularly as the concept seems to
fit in well with what MVStore already does. I'll be honest and say that I
don't think I would be able to dedicate the time required to get serialisation
isolation correct and working well. I also think it could be even more
difficult to achieve in a general key-value store where the queries could be
basically anything (what would you lock against when somebody uses some
value-based predicate?). But maybe I'm not thinking about the problem clearly
enough...


I do need some sort of serialisation isolation, but I think actually I can do
that much more easily on top, as I only need to do enough in that case to
satisfy my particular needs, which are fairly well defined at an application
level.


I also think that as a starting point, I can do a rather naive implementation
of snapshot isolation, where you don't track what is inserted/deleted, just
that something has changed, and you fail all other concurrent transactions
that don't "win". That does mean the concurrency is massively limited when
there are many inserts/deletes, and user code will get a lot of rollbacks, but
I thought it might be a good starting point to get the basics in there.
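
A minimal sketch of that naive scheme (my illustration only, not MVStore
code): every committed write bumps a global version, and a writing transaction
commits only if the version it started from is still current.

    import java.util.concurrent.atomic.AtomicLong;

    class NaiveSnapshotStore {
        private final AtomicLong version = new AtomicLong();

        long begin() {
            return version.get(); // snapshot the version at start
        }

        // returns false -> caller must roll back and retry
        boolean commit(long startVersion, boolean wrote) {
            if (!wrote) {
                return true; // read-only transactions always succeed
            }
            // succeed only if nobody committed a write since begin()
            return version.compareAndSet(startVersion, startVersion + 1);
        }
    }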


Please let me know what you think.


Thanks,
Kieron






Seems like I have quite a lot of learning to do though, so it might take a
little while.




I assume by what you said that I can change the public API in incompatible
ways? If I start with what you suggested, and I very well might, that would
already potentially break code if you wanted to merge any changes back in.


Anyway I'll let you know if I manage to put together anything interesting.


Thanks,
Kieron


Hi,


> I want to be able to block or force a rollback rather than seeing the old 
> value



It's hard to say what is the best way to implement it. It would be nice if the
TransactionStore could be extended. The first thing to do is probably to
create separate top-level class files, and then possibly split TransactionMap
into 3 classes: TransactionMapBase, TransactionMapReadCommitted,
TransactionMapSerializable. Or something like that. A part of the logic is
already there: TransactionMap.trySet, which returns true if the entry could be
updated. For the database, I ended up implementing some of the logic in
MVSecondaryIndex.add, as there are some complications with what exactly is a
duplicate entry.


>  a serializable-style of isolation requires a lock per entry


It's possible to implement it that way, but I wouldn't, as it doesn't scale
(it would need too much memory and probably get too slow). Instead, I would
use just one, or a fixed number of, lock objects. The risk is that a thread
waiting for one particular row is woken up even if a different row is
unlocked, so the thread would have to wait again. But that way you don't need
much memory. It depends on the use case.
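
A small sketch of that fixed-pool idea (my illustration, not MVStore code):
stripe the locks by key hash, so all rows share a bounded set of monitors.

    final class StripedLocks {
        private final Object[] locks;

        StripedLocks(int stripes) {
            locks = new Object[stripes];
            for (int i = 0; i < stripes; i++) {
                locks[i] = new Object();
            }
        }

        // Keys hashing to the same stripe share one monitor, so a waiter
        // can be woken by an unrelated unlock and must re-check its
        // condition - the trade-off described above.
        Object lockFor(Object key) {
            return locks[(key.hashCode() & 0x7fffffff) % locks.length];
        }
    }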


As for the license: if you write your own class (without copying H2 source
code), then you can use your own license and don't have to publish the code
(but if you want, you can, of course). If you modify an existing H2 class,
then you would have to provide or publish those changes (just the changes, not
the source code of the rest of your application).


Regards,
Thomas




On Fri, Apr 18, 2014 at 1:28 PM, Kieron Wilkinson  wrote:





> http://h2database.com/html/mvstore.html#transactions[1]




> Yes, that's what it's doing. But note there are some differences between
> "serializable" (what you want) and "read committed" (what the
> TransactionStore supports right now) - for details, see
> http://www.postgresql.org/docs/9.3/static/transaction-iso.html[2]
> and http://en.wikipedia.org/wiki/Isolation_(database_systems)[3]


Re: [h2] H2 database and huge data

2013-07-07 Thread Jan Kotek
This is probably spam, but since nobody answered:

Try MapDB; it is an embedded db engine. It only has an embedded mode and does
not have SQL.

It imports 200 million records in a few seconds if Data Pump is used.
 
Jan

On Wednesday 26 June 2013 22:32:34 Shiva wrote:

Hi
We have been using the H2 database with our application and are very happy
with it. We had some tables with up to 8 million records and it worked very
well.

Now we have a new requirement to load a table with 200 million records.
What is the best way to handle this situation with H2? This is reference
data and we will query it millions of times.

These are the current solutions we have been discussing so far,

If we have to load all 200 million records, the initial loading time will be
long; every time we start the application, we need to wait longer.
How about running H2 in server mode in a separate JVM? That will hit
network performance, since we need to query this table millions of times over
the network.
Does H2 support partitioning? Should we break this table into multiple tables
of 20 million records each and then load them on demand?
Has anyone come across a similar situation, or is there any other solution to
recommend? Any help in this is greatly appreciated.
Thanks
Shiva
  




Re: Key-Value interface proposal

2010-11-02 Thread Jan Kotek
Hi,

there does not seem to be enough interest. I also have other stuff to do,
so I will not work on this project.

Regards,
Jan

On Fri, Oct 29, 2010 at 9:13 AM, Jan Kotek  wrote:
> Hi Thomas,
>
> I think I did not explain my intention well. I don't want to do any
> deep refactoring of H2.
> I was very happy with SQL until I hit some performance limits in
> desktop application.
>
> So I would like to create 'java.util.Map' view to SQL table.
> JDBM on top of H2 would not be key value store in usual way.
>
> An example: let's have a table:
>   create table Test (id int primary key, val varchar)
> This would map into:
>   java.util.TreeMap.
>
> This map instance would have JDBC Statement inside and manipulate table 
> values.
> 'map.put(1,"aaa")' would do insert operation into table.
> For serialized object, value would be blob with variable size.
>
> This would be just 'view'. Table would have to be created with SQL first.
>
> All of this can be implemented using pure SQL.  It does not require
> changes in H2 code.
> But SQL can be slow, even when using prepared statement.
>
> And there are other things which would require some level of integration.
> For example, an object instance cache would need to know when a row is
> updated or locked.
> With SQL it is probably possible, but one can end up with 'MANY' prepared
> statements.
>
> Jan
>
>
> On Thu, Oct 28, 2010 at 8:02 PM, Thomas Mueller
>  wrote:
>> Hi,
>>
>> There are advantages and disadvantages. Maybe the requirements are
>> just too different. I know other databases are/were split at that
>> level (Berkeley DB) but others are not (SQLite).
>>
>> It would probably make sense for JDBM2 use case to split H2 into "SQL
>> layer" and "storage layer". So that you only need to use the "storage
>> layer" and can get a smaller jar file (no need to create another
>> project for that, or use Maven or anything like that; maybe just
>> re-organize classes into different packages). However this split might
>> be painful for the H2 use case. One problem of course is that such a
>> split would take quite some time / work.
>>
>> I suggest you try to split H2 as follows: basically all "storage
>> layer" classes start with "Page" now. So that's simple. Additionally
>> you need Index, Cursor, Data, FileStore, the FileSystem abstraction.
>> I'm not sure about Table and Column (probably yes). You probably don't
>> need Expression. I suggest to try to split H2 yourself (removing
>> functionality you think you don't need) and tell us the result (even
>> if the result was that you didn't understand the source code).
>>
>> Regards,
>> Thomas
>>




Re: Key-Value interface proposal

2010-10-29 Thread Jan Kotek
Hi Thomas,

I think I did not explain my intention well. I don't want to do any
deep refactoring of H2.
I was very happy with SQL until I hit some performance limits in a
desktop application.

So I would like to create 'java.util.Map' view to SQL table.
JDBM on top of H2 would not be key value store in usual way.

An example: let's have a table:
   create table Test (id int primary key, val varchar)
This would map into:
   java.util.TreeMap.

This map instance would have a JDBC Statement inside and manipulate the table
values.
'map.put(1,"aaa")' would do an insert operation into the table.
For a serialized object, the value would be a blob of variable size.

This would be just a 'view'. The table would have to be created with SQL first.
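
A rough sketch of such a Map-style view over a table (my own illustration
using plain JDBC, not an actual JDBM API):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    class SqlMapView {
        private final PreparedStatement put;
        private final PreparedStatement get;

        // assumes: create table Test (id int primary key, val varchar)
        SqlMapView(Connection conn) throws SQLException {
            put = conn.prepareStatement(
                    "merge into Test (id, val) values (?, ?)");
            get = conn.prepareStatement(
                    "select val from Test where id = ?");
        }

        // map.put(1, "aaa") -> insert (or update) one row
        void put(int key, String value) throws SQLException {
            put.setInt(1, key);
            put.setString(2, value);
            put.executeUpdate();
        }

        String get(int key) throws SQLException {
            get.setInt(1, key);
            ResultSet rs = get.executeQuery();
            try {
                return rs.next() ? rs.getString(1) : null;
            } finally {
                rs.close();
            }
        }
    }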

All of this can be implemented using pure SQL. It does not require
changes in the H2 code.
But SQL can be slow, even when using prepared statements.

And there are other things which would require some level of integration.
For example, an object instance cache would need to know when a row is
updated or locked.
With SQL it is probably possible, but one can end up with 'MANY' prepared
statements.

Jan


On Thu, Oct 28, 2010 at 8:02 PM, Thomas Mueller
 wrote:
> Hi,
>
> There are advantages and disadvantages. Maybe the requirements are
> just too different. I know other databases are/were split at that
> level (Berkeley DB) but others are not (SQLite).
>
> It would probably make sense for JDBM2 use case to split H2 into "SQL
> layer" and "storage layer". So that you only need to use the "storage
> layer" and can get a smaller jar file (no need to create another
> project for that, or use Maven or anything like that; maybe just
> re-organize classes into different packages). However this split might
> be painful for the H2 use case. One problem of course is that such a
> split would take quite some time / work.
>
> I suggest you try to split H2 as follows: basically all "storage
> layer" classes start with "Page" now. So that's simple. Additionally
> you need Index, Cursor, Data, FileStore, the FileSystem abstraction.
> I'm not sure about Table and Column (probably yes). You probably don't
> need Expression. I suggest to try to split H2 yourself (removing
> functionality you think you don't need) and tell us the result (even
> if the result was that you didn't understand the source code).
>
> Regards,
> Thomas
>




Key-Value interface proposal

2010-10-27 Thread Jan Kotek
Hi,

Some time ago I contributed the NIO and soft cache code to H2.
Now I am the JDBM2 maintainer. It is a key-value database with about 10
years of history. It provides a 'java.util.Map' transparently persisted
to disk.

JDBM has a page storage engine similar to H2's. It is less advanced:
it supports only a single transaction and does not allow concurrent access.

I would like to drop the current engine and rewrite JDBM on top of H2.
Basically it would be a 'java.util.Map' view to an SQL table. This would
happen in two phases:

First, JDBM Maps would be implemented on top of H2 using SQL. This
would not require H2 modifications and would be done as a separate
project.

In the second phase, JDBM would integrate more tightly with H2 to make it
faster, probably by using the engine and indexes directly and extending the
remote protocol. At this stage JDBM would be integrated into H2 as a
subproject.

The JDBM code is very simple. Without the storage engine and indexes it has
only the 'java.util.Map' interface implementation, an object instance cache
and custom serialization. It is about 50 KB of code including unit
tests and comments. H2.jar would grow by about 30 KB. The JDBM code is
currently under the Apache 2 license.

As part of this, H2 would get btree key delta compression. JDBM has it
now; it works great with numbers, strings and other data types.
I would also work on BLOB handling; JDBM depends on this part.
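
To illustrate key delta (shared-prefix) compression as described: each key in
a sorted btree page is stored as the length of the prefix it shares with the
previous key, plus the differing suffix. A tiny sketch (my illustration, not
JDBM's actual code):

    import java.util.Arrays;

    // "apple" -> (0, "apple"); "applesauce" -> (5, "sauce"); "apply" -> (4, "y")
    static int sharedPrefix(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length), i = 0;
        while (i < n && a[i] == b[i]) {
            i++;
        }
        return i;
    }

    static void encode(byte[][] sortedKeys) {
        byte[] prev = new byte[0];
        for (byte[] key : sortedKeys) {
            int p = sharedPrefix(prev, key);
            byte[] suffix = Arrays.copyOfRange(key, p, key.length);
            System.out.println(p + ", " + new String(suffix));
            prev = key;
        }
    }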

So what do you think?

Regards,
Jan Kotek




JDBM2 key value DB [OT]

2010-07-25 Thread Jan Kotek
Hi,

I am happy to announce the first beta version of the JDBM2 database for Java.

JDBM provides a HashMap and TreeMap which are transparently persisted to
disk. It is fast, simple, transactional and scales up to 1e9 records.
JDBM has a minimal memory footprint and is highly embeddable (the standalone
jar is only 145 KB). It can be compared to Berkeley DB Java Edition,
but is under the Apache 2 License.

Under the hood it has a page store, transactions, and btree + htree indexes.
It is optimized for a very small footprint, for Android phones and
Swing applications. The first beta version was released recently; it
has all planned features and a stable file format.

It follows similar goals as the H2 database, but it is an unrelated project.
Tom kindly gave permission to post this announcement here.

Regards,

Jan Kotek




Re: JVM to C in 1.2.132?

2010-04-27 Thread Jan Kotek
Hi,
sorry for the late reply, I was on holiday.

I wrote a few patches for H2 (NIO, soft reference cache, customized
indexing). I don't know the theoretical SQL stuff, but Thomas's code is
really easy to follow and understand. It took me 4 hours on a train to
understand most of the code!!! Finding the right location is just a matter of
10 minutes of debugging.

I consider H2 the best Java code I have ever seen, considering the complexity
of the problems it solves, the number of features and the incredible
compactness of the code.
I really wish all programs were written in the same way.

Jan

On Fri, Apr 9, 2010 at 11:48 AM, Rami Ojares  wrote:
> Also a big thanks from me for sharing your experiences in such a clear
> headed fashion.
> There are people on this list who are interested to hear other people's
> experiences about h2 codebase.
> And to hear comparisons with other databases was most interesting!
>
> - rami
>
> On 9.4.2010 10:15, James Gregurich wrote:
>>
>> The key feature that attracted my attention was MVCC. Based on what I have
>> read, that technique is what all the big boys are using to maximize their
>> performance. I figured that approach would likely work well for my needs.
>>
>> I also spent time studying the source code for Firebird & Postgresql to
>> see if I could adapt either of them to work as an embedded db. Neither
>> project was coded very well. Firebird's code seemed fairly chaotic...
>> though better than SQLite. PostgreSQL is fairly well-written, but has an
>> insufficient level of abstraction. Its code is too married to the original
>> way it was intended to run... namely a collection of processes that
>> communicate via IPC. It would be a major effort to yank out all the IPC and
>> replace it with normal mutexes and memory management. Firebird would be
>> easier to adapt, but it was still not trivial because their code isn't well
>> written. It is hard to trace through Firebird's code and figure out how it
>> works. Firebird needs to be coded from scratch using modern C++ techniques.
>>
>> Finally, I'm doing commercial software with lots of proprietary IP... so
>> anything GPL is poison... can't use it.
>>
>> H2's code is fairly well written and structured. It's easy to follow,
>> understand and trace through. The license doesn't require me to make our
>> core intellectual property public. So, it is the best choice IMO.
>>
>> I'd say the authors of H2 should have been more disciplined when coding,
>> but overall, they did a good job. There tends to be a fair amount of
>> redundant storing of references to objects. For instance, if a subclass
>> needs a reference to an object and its parent already has a reference to
>> that object, instead of reusing the parent's reference, the subclass stores
>> a second reference to it. Also, there needs to be religious use of prefixes
>> to distinguish data members, local variables and static variables from one
>> another. Naming variables like "newConnection" with no indication of scope
>> makes code harder to read and understand. Also, I prefer better
>> encapsulation of data members. I instruct my guys that only constructors,
>> destructors and accessor functions may directly reference data members...
>> all other member functions must go through accessors... it's a little extra
>> work up front to write accessors, but it generally pays for itself when you
>> need to evolve the code later on.
>>
>> BTW: I did discover what I think is a race condition in SET EXCLUSIVE if
>> multithreaded mode is on. It looked to me like you could end up with two
>> sessions thinking they had exclusive access. However, that was in an older
>> version of your code. You may have fixed that by now. I don't know yet. I
>> just refreshed my copy of your repository today after many months.
>>
>>
>> On Apr 8, 2010, at 11:47 PM, Sylvain Pointeau wrote:
>>
>>
>>>
>>> Seems interesting. Could you point out the issues in sqlite3 and compare
>>> them with H2 and how H2 solved them?
>>>
>>> Best regards,
>>> Sylvain
>>>
>>>
>>
>>
>
>




Re: Problem with JaQu + Scala

2010-02-23 Thread Jan Kotek
Hi,

there is a difference, since Scala generates public setters and private
fields (try decompiling that class with JAD).
Try declaring your field with @BeanProperty. Otherwise I don't know.

J
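
For context, roughly what the Scala class compiles to, written out as Java
(shape from memory, not actual generated output): the field itself is private,
so field-based reflection finds nothing to read or write.

    public class Bar {
        private java.lang.Double open;                         // private field
        public java.lang.Double open() { return open; }        // public getter
        public void open_$eq(java.lang.Double v) { open = v; } // public setter
    }

With @BeanProperty, Scala additionally generates the getOpen()/setOpen() pair
that most Java frameworks look for.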

On Tue, Feb 9, 2010 at 5:03 PM, Alex  wrote:
> Hi!
>
> I'm having problems while using JaQu from Scala: the following code
>
> class Bar() {
>  var open: java.lang.Double = 0.
> }
> val b = new Bar
> b.open = 123.45
> db.insert(b)
>
> creates a table "BAR" but inserts no values, whereas if the class is
> defined in Java as
>
> public class Bar {
>  public Double open;
> }
>
> the code works correctly. Am I missing anything?
>
> Thanks!
>
> Cheers,
> Alex
>




Re: Slow query for compound IN condition?

2010-02-23 Thread Jan Kotek
Hi,
I had a similar problem. H2 does not optimize AND conditions very well.

I would recommend using subqueries.

Regards,
j

On Tue, Feb 23, 2010 at 5:06 PM, kensystem  wrote:
> Let me apologize about the formatting; I thought that G Groups would
> have retained the HTML-table formatting I copy-pasted from the info
> schema (it certainly allowed it in the textarea, anyway).
> CREATE table OM (pk BIGINT primary key auto_increment, b INTEGER)
> CREATE INDEX b ON om(b)
>
> Also I typod '500,00' but meant the table has '500,000' rows.
>
> Also to be clearer, the 'pk' column is a primary key and so only about
> 40 items in the set matched the query; even so adding the second AND-
> IN seems to slow it down exponentially as that list grows; its as if
> the entire dataset and not subset is searched for each value in each
> list.
>




Re: Query optimizer patch

2009-12-18 Thread Jan Kotek
Hi,

One more thing. There is a mistake in the patch: it does not speed up
the BETWEEN operator yet.
One must use 'WHERE ipix >= 100 AND ipix <= 200' instead.

Jan

On Fri, Dec 18, 2009 at 9:13 PM, Jan Kotek  wrote:
> Hi Thomas,
>
> I guess I need to provide some performance tests to backup my case.
> I would like to know your opinion on missing parts (getCost() and
> updateAggregate() on ConditionRangeSet)
>
>> With your patch, it would do an index lookup for A 1..100020. That's far 
>> from optimal.
>> Better would be two index lookups: 1..20, and then 10..100020.
>
> Not always. In some cases scan between two boundaries is faster then
> many index lookups. Correct solution would be to implement both and
> let query optimizer decide
>
> Patch to make index lookup for each range would be way more invasive.
> Also this patch does optimal lookup with single BETWEEN (currenly it
> uses full scan)
>
> And it is still way then current situation, now index is completely
> ignored and whole table is scanned!
>
>
>>This is what the MultiDimension tool already does
>
> I will need some time to grasp this. But this patch improves even a
> single BETWEEN select.
> And the SQL query on line 161 can really benefit from this.
>
>>I guess the main problem is that the index conditions can go over 100.
> I have a query with 2000 OR conditions. If the contains operation is very
> fast, a scan is actually faster than index lookups.
>
> Time for query is:
>   N*M where
>     N=number of OR conditions (number of ranges)
>     M=number of rows scanned.
> This patch removes N.
>
> Regards,
> Jan Kotek
>
>
>
>
>
> On Fri, Dec 18, 2009 at 6:17 PM, Thomas Mueller
>  wrote:
>> Hi,
>>
>> Thanks for the patch, but I don't want to apply it for the following reasons:
>>
>> It doesn't help a lot in my view. Specifically, it doesn't use the index
>> efficiently. Let's say you have an index on the column A and the
>> query looks like this: WHERE (A BETWEEN 1 AND 20) OR (A BETWEEN 10
>> and 100020). With your patch, it would do an index lookup for A
>> 1..100020. That's far from optimal. Better would be two index lookups:
>> 1..20, and then 10..100020. This is what the MultiDimension tool
>> already does, see
>> http://code.google.com/p/h2database/source/browse/trunk/h2/src/main/org/h2/tools/MultiDimension.java#110
>> - it's doing that currently using an inner join: SELECT D.* FROM data
>> D, TABLE(_FROM_ BIGINT=?, _TO_ BIGINT=?) WHERE column BETWEEN _FROM_
>> AND _TO_. I know this is non-standard SQL, but it is far more optimal.
>> It's using the index in the most optimal way, and there is no need to
>> build (and parse) very large and different SQL statements for
>> different number of OR conditions.
>>
>>> In some cases (spatial indexing)
>>> number of conditions can go over 100. Table scan can became very slow.
>>
>> I guess the main problem is that the index conditions can go over 100.
>> It doesn't make much sense to try to speed up checking the index
>> conditions for every row - that's the wrong solution. It will still
>> check every row, just checking will be a bit faster. The right
>> solution is to avoid checking at all, avoiding scanning the rows.
>>
>> Regards,
>> Thomas
>>





Re: Query optimizer patch

2009-12-18 Thread Jan Kotek
Hi Thomas,

I guess I need to provide some performance tests to back up my case.
I would like to know your opinion on missing parts (getCost() and
updateAggregate() on ConditionRangeSet)

> With your patch, it would do an index lookup for A 1..100020. That's far from 
> optimal.
> Better would be two index lookups: 1..20, and then 10..100020.

Not always. In some cases a scan between two boundaries is faster than
many index lookups. The correct solution would be to implement both and
let the query optimizer decide.

A patch to make an index lookup for each range would be way more invasive.
Also, this patch does an optimal lookup with a single BETWEEN (currently it
uses a full scan).

And it is still way better than the current situation, where the index is
completely ignored and the whole table is scanned!


>This is what the MultiDimension tool already does

I will need some time to grasp this. But this patch improves even a
single BETWEEN select.
And the SQL query on line 161 can really benefit from this.

>I guess the main problem is that the index conditions can go over 100.
I have a query with 2000 OR conditions. If the contains operation is very
fast, a scan is actually faster than index lookups.

Time for query is:
   N*M where
 N=number of OR conditions (number of ranges)
 M=number of rows scanned.
This patch removes N.

Regards,
Jan Kotek





On Fri, Dec 18, 2009 at 6:17 PM, Thomas Mueller
 wrote:
> Hi,
>
> Thanks for the patch, but I don't want to apply it for the following reasons:
>
> It doesn't help a lot in my view. Specifically, it doesn't use the index
> efficiently. Let's say you have an index on the column A and the
> query looks like this: WHERE (A BETWEEN 1 AND 20) OR (A BETWEEN 10
> and 100020). With your patch, it would do an index lookup for A
> 1..100020. That's far from optimal. Better would be two index lookups:
> 1..20, and then 10..100020. This is what the MultiDimension tool
> already does, see
> http://code.google.com/p/h2database/source/browse/trunk/h2/src/main/org/h2/tools/MultiDimension.java#110
> - it's doing that currently using an inner join: SELECT D.* FROM data
> D, TABLE(_FROM_ BIGINT=?, _TO_ BIGINT=?) WHERE column BETWEEN _FROM_
> AND _TO_. I know this is non-standard SQL, but it is far more optimal.
> It's using the index in the most optimal way, and there is no need to
> build (and parse) very large and different SQL statements for
> different number of OR conditions.
>
>> In some cases (spatial indexing)
>> number of conditions can go over 100. Table scan can became very slow.
>
> I guess the main problem is that the index conditions can go over 100.
> It doesn't make much sense to try to speed up checking the index
> conditions for every row - that's the wrong solution. It will still
> check every row, just checking will be a bit faster. The right
> solution is to avoid checking at all, avoiding scanning the rows.
>
> Regards,
> Thomas
>





Query optimizer patch

2009-12-15 Thread Jan Kotek
Hi all,

I found one case where the H2 query optimizer does not work, and I created a
patch which solves it.

My application is using this type of query:

  create table star (
 starId int primary key AUTO_INCREMENT,
 ipix long );
  create index star_ipix on star(ipix);

  //insert 100 000 records

  //first case, simple query boundaries
  select * from star where
ipix >= 100 and ipix <= 200

  //second case, query multiple boundaries
  select * from star where
   ipix >= 100 and ipix <= 200 or
   ipix >= 300 and ipix <= 400 or
   ipix >= 500 and ipix <= 600

The first problem is that the IndexCondition boundaries are not set by the
optimizer. The select query always triggers a full table scan, even if there
is an index.

The second problem is with multiple 'or' conditions. Conditions are
evaluated one by one until one of them fails. This evaluation is
performed for each row in the table. In some cases (spatial indexing) the
number of conditions can go over 100, and the table scan can become very slow.


My patch (attached) solves this. It replaces the first case with a range and
sets the IndexConditions.
In the second case it merges multiple ranges into a RangeSet, which requires
only one evaluation (constant time versus N).
IndexConditions are correctly set even for the second case; a sketch of the
merge idea follows.
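
A minimal illustration of merging overlapping [lo, hi] conditions into one
sorted set of disjoint ranges (my sketch, not the patch's ConditionRangeSet):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.List;

    static long[][] merge(long[][] ranges) { // each range is {lo, hi}
        Arrays.sort(ranges, new Comparator<long[]>() {
            public int compare(long[] a, long[] b) {
                return Long.compare(a[0], b[0]);
            }
        });
        List<long[]> out = new ArrayList<long[]>();
        for (long[] r : ranges) {
            long[] last = out.isEmpty() ? null : out.get(out.size() - 1);
            if (last != null && r[0] <= last[1] + 1) {
                last[1] = Math.max(last[1], r[1]); // overlaps: extend
            } else {
                out.add(r.clone());
            }
        }
        return out.toArray(new long[out.size()][]);
    }

For the example above, {100,200}, {300,400} and {500,600} stay disjoint, so
each row needs one containment check against the sorted set instead of N
separate condition evaluations.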

It is not the final version yet, but it already works well. It also opens the
door for many more optimizations.
I would like to hear your comments.

TODOs:
 * implement getCost(), updateAggregate() and similar stuff on ConditionRangeSet
 * ConditionRangeSet may createIndexConditions on wrong table (not sure)
 * more unit tests


Regards,
Jan Kotek





h2-rangeset-optimization.patch
Description: Binary data


Re: RangeSet for Spatial Index RFC

2009-12-03 Thread Jan Kotek
The RangeSet is not a spatial index directly. It is a data structure which can
store areas decomposed into pixel numbers (using a quadtree or another
method).

I am not sure I can explain it well, so I will go ahead and make a
prototype with some examples.

Jan.


On Thu, Dec 3, 2009 at 2:35 PM,   wrote:
>
>
> I'm not sure I understand your concept of rangesets.
> Could you elaborate on it?
>
> Do you have one array of long per Axis for an arbitrary
> number of axis? e.g. axis0=latitude axis1=longitude?
>
> Marcus
>> To get more details visit project and source code:
>> http://code.google.com/p/healpix-rangeset/
>>
> http://healpix-rangeset.googlecode.com/svn/trunk/healpix-rangeset/src/org/asterope/healpix/LongRangeSet.java
>>
>> My plan is to make RangeSet native SQL type in H2. It would make
>> possible to pass RangeSet as JDBC param for WHERE cause. RangeSet
>> would be also table column type. Some examples:
>





RangeSet for Spatial Index RFC

2009-12-02 Thread Jan Kotek
Hi,
I was reading the discussion about spatial indexes in H2. I think the basic
problem is that H2 is missing a data structure which can hold spatial
data.

Most solutions use a geometry object stored serialized in the
database. On each query it needs to be deserialized, run through the logic
and thrown away.
This of course hurts performance.

The other solution is to split the area into pixels and store pixel numbers
in a 1:N relation. An SQL WHERE condition is then used to filter the data.
But it needs HUGE SQL queries.

I am working on an astronomical program which uses the H2 database and
Healpix sphere pixelization. I was facing the problem of how to handle sets
with 1e10 pixel numbers. The solution was to compress the data into a
RangeSet.

The RangeSet uses a long[] array to store ranges. An even position is the
first number in a range; an odd position is the last number in the range.
Values are sorted, and binary search is used to find values. There is no
object instance overhead; it is just an array of primitives and consumes very
little memory.
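
A membership test over that layout can be a single binary search (a sketch of
the idea as described, not the actual LongRangeSet code):

    import java.util.Arrays;

    // bounds = {start0, end0, start1, end1, ...}, sorted ascending
    static boolean contains(long[] bounds, long v) {
        int i = Arrays.binarySearch(bounds, v);
        if (i >= 0) {
            return true; // v is exactly a range boundary
        }
        // v is inside a range iff its insertion point is odd, i.e. it
        // falls after a start (even index) and before its end (odd index)
        return ((-i - 1) & 1) == 1;
    }

For bounds = {100, 200, 300, 400}, contains(bounds, 150) and
contains(bounds, 200) are true, while contains(bounds, 250) is false.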

To get more details visit project and source code:
http://code.google.com/p/healpix-rangeset/
http://healpix-rangeset.googlecode.com/svn/trunk/healpix-rangeset/src/org/asterope/healpix/LongRangeSet.java

My plan is to make RangeSet a native SQL type in H2. It would make it
possible to pass a RangeSet as a JDBC param for a WHERE clause. RangeSet
would also be a table column type. Some examples:

This would save thousands of BETWEENs (and query parsing overhead)
   create table shops(name String, position long);
   select name from shops where position in ? (RangeSet JDBC param);

More advanced, RangeSet is in other table
   create table towns (name String, area RangeSet);
   insert into towns(?,?) // String and RangeSet JDBC params
   //now select shops which are inside any city
   select s.name from shop s, town t where s.position inside t.area

Or select the area of countryside (not covered by any town):
   select complement(union(t.area)) from town t


Most important is that the RangeSet would operate directly on top of
PageStore pages. There would be no deserialization overhead. It can
also directly interact with the index and the query optimizer. Performance
should be really good.

I would like to know your opinions.

Jan





Re: 2D spatial data

2009-11-11 Thread Jan Kotek

Hi Marcus,

> A few Gigabyte. 1-2 German states from OpenStreetMap.

I have a similar problem. I have an astronomical application which is using a
spherical spatial index. Areas are mapped into RangeSets using Healpix
< http://healpix.jpl.nasa.gov/ >.

My typical RangeSet has 1e10 pixels and 1e7 ranges. I also have
tables with 1e8 stars (points) which need to be queried using a
RangeSet.

Such data has a really huge overhead on the SQL parsing side. It is not
possible to write SELECT * FROM X WHERE i BETWEEN ? AND ? AND i
BETWEEN ? AND ?; the resulting query would be gigabytes in size. A RangeSet
implemented with pure SQL also does not allow unions, intersections and
other usual spatial stuff.

I started optimizing Healpix by using special data structures which
can hold such data. I already sped up RangeSet generation from
minutes to milliseconds. Memory requirements decreased from GB to MB.
Now I am writing utils to perform memory-efficient unions,
intersections, etc.

My ultimate goal is to port my range collections to H2 SQL. The RangeSet
would be a JDBC parameter and stored serialized on the server. This would
avoid the SQL query parsing overhead.

It is hard, because the RangeSet would operate at the PageStore level. It
also modifies indexes (to prevent a full scan on each query). Finally, there
would be changes to the query parser and optimizer to make them aware of
RangeSets.

If everything goes well, I should have a working prototype for H2 in 7 months.

Jan




Parameters not working on multi expression SQL

2009-09-02 Thread Jan Kotek

Hi,
I need to insert multiple values in one PreparedStatement. But when I
have multiple expressions separated by ';', parameters do not seem
to work. Is it a bug or a 'feature'?

Is there a way to work around it? I am using iBatis.

Thanks, jan

Failing test case (in Scala):
val sql = """
create table test (id int, blabla int );
insert into test (id, blabla)  values (?,?);
  """;

  val conn = SqlFactory.createTestConnection //jdbc:h2:mem:
  val st = conn.prepareStatement(sql);
  st.setInt(1,10)
  st.setInt(2,20)

Stack trace:
org.h2.jdbc.JdbcSQLException: Invalid value 1 for parameter
parameterIndex [90008-117]
at org.h2.message.Message.getSQLException(Message.java:105)
at org.h2.message.Message.getSQLException(Message.java:116)
at org.h2.message.Message.getInvalidValueException(Message.java:172)
at 
org.h2.jdbc.JdbcPreparedStatement.setParameter(JdbcPreparedStatement.java:1227)
at 
org.h2.jdbc.JdbcPreparedStatement.setInt(JdbcPreparedStatement.java:296)
at org.asterope.dao.H2Test.testStarSequenceInsert(H2Test.scala:37)
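
One possible workaround, sketched here in plain JDBC (untested against the H2
version above): run the DDL and the parameterized insert as two separate
statements, so the parameters belong to a single-statement prepare.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    Connection conn = DriverManager.getConnection("jdbc:h2:mem:");
    Statement st = conn.createStatement();
    st.execute("create table test (id int, blabla int)");

    PreparedStatement ps = conn.prepareStatement(
            "insert into test (id, blabla) values (?,?)");
    ps.setInt(1, 10);
    ps.setInt(2, 20);
    ps.executeUpdate();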




Re: H2 Database Engine: New version 1.1.113 available

2009-05-23 Thread Jan Kotek

This feature is not active by default.



On Sat, May 23, 2009 at 3:43 AM, Thotheolh  wrote:
>
> 'A second level soft-references cache is now supported. It speeds up
> large databases, but reduces performance for small databases.' What
> does it mean? I have some small personal databases, so if I use H2, would I
> get a slowed-down database for my small personal db? Do I need
> to activate this feature, or is it a default in H2?
> >
>




Transaction isolation problem

2009-05-17 Thread Jan Kotek

Hi all,

I have a problem: data inserted into the DB is not visible until commit,
even in the same transaction.
I am not sure if it is a bug, but I believe inserted data should be
visible in the same transaction.
I am using 1.1.108.

example:
>insert into CoeliObjectId values (1,'blabla',1);
>select count(*) from CoeliObjectId as id where id.objectId='blabla';
(returns 0)

example2:
>insert into CoeliObjectId values (1,'blabla',1);
>commit;
>select count(*) from CoeliObjectId as id where id.objectId='blabla';
(returns 1)

The workaround is to commit after each change (hellishly slow with 1e6
records). Is there a way to speed up those commits? For example, to
disable file sync or something like that?

Thanks,
Jan




Re: WeakReference page cache

2009-05-17 Thread Jan Kotek

Hi Thomas,
you are right, this implementation is way better.

Jan

On Sun, May 10, 2009 at 6:58 PM, Thomas Mueller
 wrote:
>
> Hi,
>
> I wrote that "soft references doesn't make sense" but I was wrong... I
> always mix up weak and soft references... I believe _weak_ references
> don't make sense.
>
> I have now integrated your patch (it is committed to the trunk). I
> made a few changes:
>
> I combined both settings into the cache type (CACHE_TYPE=LRU, TQ,
> SOFT_LRU, SOFT_TQ). The prefix WEAK_ is also implemented but not
> documented. I read again the WeakReference / SoftReference
> documentation and it looks like it doesn't make sense to support it,
> but it's only a few lines of code, so that's OK. If you find out that
> it helps please tell me.
>
> New system property h2.cacheTypeDefault (default: LRU).
>
> I simplified the code to generate the Cache object and moved it to
> CacheLRU (I want to avoid creating a new class CacheFactory).
>
> I have re-implemented the SoftHashMap. I think it doesn't make sense
> to wrap the (Integer) key in a SoftReference. You want the garbage
> collector to throw away the rows (values), not the keys, because the
> values need much more memory. You wrote your implementation was
> running out of memory, probably that was why. See also:
> http://www.roseindia.net/javatutorials/implementing_softreference_based_hashmap.shtml
>
> Thanks again for your help!
>
> Regards,
> Thomas
>
> >
>
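
To illustrate the "soft values, strong keys" point quoted above, a rough
sketch (my illustration, not H2's actual SoftHashMap):

    import java.lang.ref.SoftReference;
    import java.util.HashMap;
    import java.util.Map;

    class SoftValueMap<K, V> {
        private final Map<K, SoftReference<V>> map =
                new HashMap<K, SoftReference<V>>();

        void put(K key, V value) {
            // the key stays strongly referenced; the large value (the
            // cached page or row data) is what the GC may discard
            map.put(key, new SoftReference<V>(value));
        }

        V get(K key) {
            SoftReference<V> ref = map.get(key);
            return ref == null ? null : ref.get(); // null if GC cleared it
        }
    }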




Re: Java 1.5, but still support Java 1.4 using Retrotranslator, Retroweaver, or using -target jsr14:

2009-05-17 Thread Jan Kotek

Hi Thomas,
I was using Retrotranslator before; it works very nicely. It does not
just 'strip away' everything, but has its own implementation of
enums and other things. It also handles some 1.5-specific classes (for
example, StringBuilder is replaced by StringBuffer).

I would recommend providing two distribution jars (one with native 1.5
bytecode, one backported to 1.3). Native 1.5 bytecode can run a little bit
faster.



Regards,
Jan




On Sat, May 16, 2009 at 6:44 PM, Thomas Kellerer
 wrote:
>
> I don't think support for 1.4 is important. How long is it that this
> has been de-supported by Sun?
>
> Even 1.5 will be de-supported by Sun end of this year (October).
>
> Just my .02€
>
> Thomas
>
>
> On 16 Mai, 12:20, Thomas Mueller  wrote:
>> Hi,
>>
>> So far the H2 source code is compatible to Java 1.4. I like to use
>> generics, extended for loops, and so on. Java 5.0 concurrency
>> utilities are not required and I don't plan to use them in the near
>> future.
>>
>> Java 1.4 could still be supported using Retrotranslator, Retroweaver,
>> or using -target jsr14:
>>
>> http://www.ibm.com/developerworks/java/library/j-jtp02277.htmlhttp://retrotranslator.sourceforge.nethttp://retroweaver.sourceforge.net
>>
>> Would this be a problem for anybody? Does anybody have experience
>> using any of the above technologies?
>>
>> Regards,
>> Thomas
>>
>> P.S. I didn't know about -target jsr14. Here is an example:
>>
>> public class TestJSR14 {
>>     public static void main(String[] args) {
>>         for(String a : args) {
>>             System.out.println(a);
>>         }
>>     }}
>>
>> javac -source 1.5 -target jsr14 TestJSR14.java
> >
>




Re: WeakReference page cache

2009-05-05 Thread Jan Kotek

Hi Thomas,
>
> That's bad. I guess that's not acceptable. Is it only when using the
> SoftReference, or also when using WeakReference? Soft anyway doesn't
> make sense (in my view).

It is only with the soft cache, and only in corner cases. In my test
case I tried to load a huge file into memory. I don't think it is a
problem; it is more like a 'feature'.
Weak references do not have this problem; they are GCed very fast.

>
> Why do you think so? I don't plan to add a dependency to the Google 
> collection.
>

MapMaker has reference disposal in a separate thread, so memory can be
reclaimed faster. The problem is that it depends on the concurrency utilities
from 1.5, so it is probably not an option.

I still think my patch is good, and I am not making any changes. But
maybe I will update the documentation and recommend WEAK with the -server JRE
command-line option (it improves GC).

Regards,
Jan Kotek




Re: WeakReference page cache

2009-04-30 Thread Jan Kotek

Hi Thomas,
the patch is yours; if you would like to modify the parameters, do it. This
way it does not require so much coding in parameter parsing, and it
also better reflects the implementation.

I found one problem with the soft cache. When the heap is full of cached
pages and the program starts a huge and fast allocation (a big collection,
etc.), references are not cleared fast enough and it may fail with an
out-of-memory exception.
Using MapMaker from Google Collections would probably improve this very
much, but it depends on JRE 1.5. In the future I may implement this as an
optional dependency for JRE 1.5 (without breaking compatibility with 1.3).


Jan


On Thu, Apr 30, 2009 at 5:37 AM, Thomas Mueller
 wrote:
>
> Hi,
>
> Thanks a lot! Unfortunately, I did not yet have time to analyze the
> patch. I think the SOFT type is not usefull (as you found out). I'm
> not sure if using a second parameter makes sense, I think it would
> simplify things if there is only one type. What about
>
> CACHE_TYPE=LRU, TQ, WEAK_LRU, WEAK_TQ
>
> Regards,
> Thomas
>
> On Sun, Apr 26, 2009 at 2:47 PM, Jan Kotek  wrote:
>> Hi Thomas,
>> here is the final version of the cache. I changed the design. Now it is
>> implemented as a 2nd level cache which wraps the original cache (proxy
>> pattern), so it does not interfere with the existing caching mechanism. It
>> adds a new connection parameter CACHE2_TYPE with possible values WEAK,
>> SOFT and HARD. It is not used by default; maybe in the future you can make
>> SOFT the default.
>>
>> I also modified the documentation:
>> H2 also supports a 2nd level cache. It uses WEAK, SOFT or HARD
>> references. With a WEAK reference, a page is stored in the cache until it
>> is Garbage Collected (GCed). With a SOFT reference, pages are GCed only
>> when the system is low on memory. With a HARD reference, pages are never
>> GCed, so you need to make sure you have enough memory to fit the entire DB
>> into it. By default H2 does not use a 2nd level cache; in the future SOFT
>> may become the default option. The 2nd level cache can be enabled by a
>> parameter in the database connection URL: (jdbc:h2:~/test;CACHE2_TYPE=SOFT)
>>
>> I ran your unit tests with SOFT as the default; only TestMemoryUsage was
>> failing (probably because of the cache). It should be compatible with JRE
>> 1.3. For my usage it increases performance significantly (JRE 1.6)
>> with -Xmx512m.
>>
>> This is final version of my patch. I will not modify unless bug
>> reports are reported. It also updates documentation and change log.
>>
>> Thomas: feel free to modify licence as you need.
>>
>> Best regards,
>> Jan
>>
>>
>> On Sun, Apr 19, 2009 at 5:42 PM, Thomas Mueller
>>  wrote:
>>>
>>> Hi,
>>>
>>> Thanks for the patch! What about using a new cache type? The easiest
>>> solution is to replace the the 2Q algorithm with the soft reference
>>> cache; better would be to use a new type (for example SOFT).
>>>
>>>> So I implemented cache using SoftReference. My prototype is attached.
>>>> It should be fine for read-only usage.
>>>
>>> What about combining it with LRU? So that written pages are kept in
>>> the LRU, and the read part is kept in the soft reference map?
>>>
>>>> Please try it with lot of memory free memory and read only scenario.
>>>> If it will be better then current cache I will continue and finish
>>>> implementation.
>>>
>>> I'm sorry, but I don't have much time to test it currently... But I'm
>>> sure it will be better than the current cache algorithm, because it
>>> doesn't require any settings and still will use all available memory.
>>>
>>>> I  need motivation :-)
>>>
>>> If you like, we could call it the "Jan Kotek" cache :-)
>>> jdbc:h2:~/test;CACHE=JAN_KOTEK
>>>
>>> If you want to make integrating the patch simpler for me, I suggest
>>> you follow the patch guidelines described at
>>> http://www.h2database.com/html/build.html#providing_patches
>>>
>>> Regards,
>>> Thomas
>>>




Re: New "page store" database file format

2009-04-26 Thread Jan Kotek

Hi Thomas,

I have a question: do you have any plans to increase concurrency
performance with the new storage format?
My suggestions would be (a rough sketch follows below):
 * replace synchronization with less expensive locking (semaphore,
reentrant lock...)
 * more fine-grained locking (maybe per page, not per storage)
 * WRITE_LOCK, READ_LOCK etc.
 * make some classes (Cache, StorageFile etc.) concurrency safe.
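
The sketch referenced in the list above: per-page (striped) read/write
locks against a hypothetical Storage interface of my own. This is an
illustration, not code from H2 or any patch, and it needs JRE 1.5 for
java.util.concurrent (or the backport-jsr166 library that comes up later
in this archive):

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class PageLocks {
    // Hypothetical minimal storage interface, just for this sketch.
    public interface Storage {
        byte[] readPage(int pageId);
        void writePage(int pageId, byte[] data);
    }

    private final ReentrantReadWriteLock[] locks;

    public PageLocks(int stripes) {
        locks = new ReentrantReadWriteLock[stripes];
        for (int i = 0; i < stripes; i++) {
            locks[i] = new ReentrantReadWriteLock();
        }
    }

    // Stripe the locks so readers of different pages do not contend
    // on a single storage-wide synchronized monitor.
    private ReentrantReadWriteLock lockFor(int pageId) {
        return locks[(pageId & 0x7fffffff) % locks.length];
    }

    public byte[] readPage(Storage storage, int pageId) {
        ReentrantReadWriteLock lock = lockFor(pageId);
        lock.readLock().lock();      // many readers may proceed at once
        try {
            return storage.readPage(pageId);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void writePage(Storage storage, int pageId, byte[] data) {
        ReentrantReadWriteLock lock = lockFor(pageId);
        lock.writeLock().lock();     // writers get exclusive access
        try {
            storage.writePage(pageId, data);
        } finally {
            lock.writeLock().unlock();
        }
    }
}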

I also have a question: is it worth starting any work in this direction
at all? No one is complaining, and Apache Derby, which handles concurrency
well, is already here.
Other issues would be more complicated code, JRE 1.3/1.4 compatibility,
testing etc.

Regards,
Jan




Re: NIO storage

2009-04-06 Thread Jan Kotek

Maybe it was not closed properly. Can you try the Channel implementation?
Jan

On Sun, Apr 5, 2009 at 1:03 PM, Thomas Mueller
 wrote:
>
> Hi,
>
>> There is a problem though: I could not connect to a previous database
>> created with jdbc:h2:..  using jdbc:h2:nioMapped:... Is it normal? If
>> so, do you think that a future version will take care to make the
>> database compatible?
>
> No, it's not normal. Is there an exception? Could you post the stack
> trace? If you have time, could you also create a very simple test case
> that deletes the database, creates it with one mode and then opens it
> with the other? Just edit the code below. It works for me (I also use
> Mac OS X):
>
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import org.h2.tools.DeleteDbFiles;
>
> public class Test {
>    public static void main(String[] args) throws Exception {
>        DeleteDbFiles.execute("~", "test", true);
>        Class.forName("org.h2.Driver");
>        Connection conn = DriverManager.getConnection(
>                "jdbc:h2:~/test");
>        conn.createStatement().execute(
>                "create table test(n varchar) as select 'Hello'");
>        conn.close();
>        conn = DriverManager.getConnection(
>                "jdbc:h2:nioMapped:~/test");
>        ResultSet rs = conn.createStatement().executeQuery(
>                "select * from test");
>        rs.next();
>        System.out.println("nioMapped: " + rs.getString(1));
>        conn.close();
>        conn = DriverManager.getConnection(
>                "jdbc:h2:nio:~/test");
>        rs = conn.createStatement().executeQuery(
>                "select * from test");
>        rs.next();
>        System.out.println("nio: " + rs.getString(1));
>        conn.close();
>    }
>
> }
>
> Regards,
> Thomas
>




Re: NIO storage

2009-03-14 Thread Jan Kotek

There is a bug in the build: jdbc:h2:nio is opening a MappedByteBuffer
instead of a file channel, and on Windows XP it fails with this
exception:

Caused by: java.lang.IllegalStateException: Can't overwrite cause
at java.lang.Throwable.initCause(Throwable.java:320)
at 
org.h2.store.fs.FileSystemDiskNio.openFileObject(FileSystemDiskNio.java:73)
at org.h2.store.FileStore.<init>(FileStore.java:84)
at org.h2.store.FileStore.open(FileStore.java:135)
at org.h2.store.FileStore.open(FileStore.java:117)
at org.h2.engine.Database.openFile(Database.java:435)
at org.h2.store.DiskFile.<init>(DiskFile.java:139)
at org.h2.engine.Database.openFileIndex(Database.java:465)
at org.h2.engine.Database.open(Database.java:540)
at org.h2.engine.Database.openDatabase(Database.java:223)
... 118 more

Jan

On Sat, Mar 14, 2009 at 7:43 PM, Jan Kotek  wrote:
> Hi Thomas,
> Wow, that was a quick release :-)
> I am working on new patch for Soft and Weak reference cache.
>
> Jan
>
> Thanks for the credit.
>
> On Thu, Mar 12, 2009 at 9:23 PM, Thomas Mueller
>  wrote:
>>
>> Hi,
>>
>>>> I ran your complete benchmarks and can confirm that both NIO
>>>> implementations are slower. Maybe on another OS (I have WinXP) it will
>>>> be faster.
>>>
>>> I think you misread the benchmarks? Or perhaps I did :-) But Thomas wrote:
>>>
>>>> > Statements per second: 68628 (regular)
>>>> > Statements per second: 76598 (NIO mapped, with mapped.load())
>>>> > Statements per second: 83756 (NIO mapped, without mapped.load())
>>>> > Statements per second: 83031 (NIO channel)
>>
>> This was on Mac OS, so NIO is faster on a Mac. On Windows XP, I get
>> different results: Regular is about 54000, NIO is about 5. So on
>> Windows, NIO is slower. On Windows XP, I couldn't even test NIO
>> mapped, because it runs out of memory.
>>
>> Regards,
>> Thomas
>>




Re: NIO storage

2009-03-14 Thread Jan Kotek

Hi Thomas,
Wow, that was a quick release :-)
I am working on new patch for Soft and Weak reference cache.

Jan

Thanks for the credit.

On Thu, Mar 12, 2009 at 9:23 PM, Thomas Mueller
 wrote:
>
> Hi,
>
>>> I ran your complete benchmarks and can confirm that both NIO
>>> implementations are slower. Maybe on another OS (I have WinXP) it will
>>> be faster.
>>
>> I think you misread the benchmarks? Or perhaps I did :-) But Thomas wrote:
>>
>>> > Statements per second: 68628 (regular)
>>> > Statements per second: 76598 (NIO mapped, with mapped.load())
>>> > Statements per second: 83756 (NIO mapped, without mapped.load())
>>> > Statements per second: 83031 (NIO channel)
>
> This was on Mac OS, so NIO is faster on a Mac. On Windows XP, I get
> different results: Regular is about 54000, NIO is about 5. So on
> Windows, NIO is slower. On Windows XP, I couldn't even test NIO
> mapped, because it runs out of memory.
>
> Regards,
> Thomas
>





Re: NIO storage

2009-03-11 Thread Jan Kotek

Hi Thomas,

The load() position is right.

FileChannel is safe, but I would strongly discourage making
MappedByteBuffer the default implementation in the next 10 years :-)

I ran your complete benchmarks and can confirm that both NIO
implementations are slower. Maybe on another OS (I have WinXP) it will
be faster.

I have a patch which replaces byte[] with ByteBuffer in all layers up
to the StoragePage, but it is failing in one test case. I don't have the
energy to finish this patch, but I would be happy to send it.

I wrote this patch with the intention of rewriting H2 SQL to Kilim Java
microthreads (something like actors in Scala or Erlang). H2 would then
use async IO and would be able to fetch more pages at the same time.
If you want, take a look at Kilim and a sample HTTP server:

http://www.malhar.net/sriram/kilim/
http://www.kotek.net/asynchttpd
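
To illustrate the "fetch more pages at the same time" idea without Kilim,
here is a rough sketch using plain java.util.concurrent (JRE 1.5+); every
name in it is hypothetical:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PrefetchSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        int[] pageIds = {10, 11, 12, 13};
        Future[] results = new Future[pageIds.length];
        for (int i = 0; i < pageIds.length; i++) {
            final int pageId = pageIds[i];
            // Each read is submitted asynchronously, so the I/O of
            // several page fetches can overlap.
            results[i] = pool.submit(new Callable() {
                public Object call() {
                    return readPage(pageId);
                }
            });
        }
        for (int i = 0; i < results.length; i++) {
            byte[] page = (byte[]) results[i].get(); // wait for each result
            System.out.println("page " + pageIds[i] + ": "
                    + page.length + " bytes");
        }
        pool.shutdown();
    }

    // Stand-in for a real page read; returns a dummy 2 KB page.
    static byte[] readPage(int pageId) {
        return new byte[2048];
    }
}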

Jan



On Wed, Mar 11, 2009 at 7:45 AM, Thomas Mueller
 wrote:
>
> Hi,
>
>> feel free to change the license and modify the code
>
> Thanks!
>
> I ran the benchmark 'TestPerformance' against the regular and the NIO
> version. The result is:
>
> Statements per second: 68628 (regular)
> Statements per second: 76598 (NIO mapped, with mapped.load())
> Statements per second: 83756 (NIO mapped, without mapped.load())
> Statements per second: 83031 (NIO channel)
>
>> what are your plans for FileChannel?
>
> I will make two prefixes:
> jdbc:h2:nio:... (FileChannel)
> jdbc:h2:nioMapped:... (memory mapped files)
>
> I might make nio the default later on (not now yet).
>
>> I tried 'cleaner hack'
>
> Your version didn't work for me at first. Then I added
> cleanerMethod.setAccessible(true), and now it works. See my code in
> the previous mail. If it fails (for whatever reason, for example when
> using a JDK where the method doesn't exist) it will fall back to
> using System.gc(). I will also add an option to disable it.
>
>> There is one modification to my patch: try to use
>> MappedByteBuffer.load() right after the buffer is initialized (in the
>> remap() method). This should improve performance.
>
> For my benchmark it actually decreases performance. Did I add it at
> the right place?
>
>        // maps new MappedByteBuffer, old one is disposed during GC
>        mapped = file.getChannel().map(mode, 0, file.length());
>        if (SysProperties.NIO_LOAD_MAPPED) {
>            mapped.load();
>        }
>        mapped.position(oldPos);
>
> In any case, I will add a system property (h2.nioLoadMapped, default false).
>
> Regards,
> Thomas
>




Re: NIO storage

2009-03-10 Thread Jan Kotek

Hi Thomas,

1) Feel free to change the license and modify the code.

2) I implemented two new FileObject implementations (FileChannel and
MappedByteBuffer based); what are your plans for FileChannel?

3) I tried the 'cleaner hack' (it is commented out in my patch) but unit
tests were failing with a security exception.

4) There is one modification to my patch: try to use
MappedByteBuffer.load() right after the buffer is initialized (in the
remap() method). This should improve performance.

http://java.sun.com/j2se/1.4.2/docs/api/java/nio/MappedByteBuffer.html#load()


Regards,
Jan





On Sun, Mar 8, 2009 at 5:40 PM, Thomas Mueller
 wrote:
>
> Hi again,
>
> The "support files over 2 GB" is not required: you can use
> FileSystemSplit (the prefix split:, database URL
> jdbc:h2:split:nio:~/test). If required, I can make this the default
> when using NIO, but let's first see what are the performance
> characteristics.
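
A small sketch of the combined prefixes from the quoted paragraph, with
the URL taken verbatim from the text:

import java.sql.Connection;
import java.sql.DriverManager;

public class SplitNioSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.h2.Driver");
        // split: splits the database file into chunks, nio: uses the
        // FileChannel-based store; together they avoid the 2 GB limit.
        Connection conn = DriverManager.getConnection(
                "jdbc:h2:split:nio:~/test");
        conn.close();
    }
}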
>
> As a workaround for the NIO problem
> http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4724038 , the
> 'cleaner hack' can be made to work like this:
>
> boolean useSystemGc =
> Boolean.valueOf(System.getProperty("h2.nioUseGc")).booleanValue();
> try {
>    Method cleanerMethod = mapped.getClass().getMethod("cleaner", new Class[0]);
>    cleanerMethod.setAccessible(true);
>    Object cleaner = cleanerMethod.invoke(mapped, new Object[0]);
>    Method clearMethod = cleaner.getClass().getMethod("clear", new Class[0]);
>    clearMethod.invoke(cleaner, new Object[0]);
> } catch (Throwable e) {
>    useSystemGc = true;
> }
> if (useSystemGc) {
>    WeakReference bufferWeakRef = new WeakReference(mapped);
>    mapped = null;
>    long start = System.currentTimeMillis();
>    while (bufferWeakRef.get() != null) {
>        if (System.currentTimeMillis() - start > GC_TIMEOUT_MS) {
>            throw new RuntimeException("Timeout (" + GC_TIMEOUT_MS
>                    + " ms) reached while trying to GC mapped buffer");
>        }
>        System.gc();
>        Thread.yield();
>    }
> }
>
> Regards,
> Thomas
>
>
> On Sun, Mar 8, 2009 at 6:23 PM, Thomas Mueller
>  wrote:
>> Hi,
>>
>> Thanks a lot for the patch! I will try to integrate it; however, I will
>> have to change a few things.
>>
>> First, are you OK with the H2 license? I will add you as the 'initial
>> developer'.
>>
>> Then I will change the formatting (spaces instead of tabs, always use
>> {}). I will create a new class FileSystemDiskNio that extends
>> FileSystemDisk, and create a new prefix "nio:". Also, you have used
>> generics; I will remove them (at the moment I like to support Java
>> 1.4).
>>
>> I did run a benchmark, but only a very simple one (using the H2 Console):
>>
>> DROP TABLE IF EXISTS TEST;
>> CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255));
>> @LOOP 10 INSERT INTO TEST VALUES(?, 'Hello');
>> @LOOP 10 SELECT * FROM TEST WHERE ID=?;
>> @LOOP 10 UPDATE TEST SET NAME='Hi' WHERE ID=?;
>> drop all objects delete files;
>>
>> Result:
>>
>> NIO: (jdbc:h2:nio:~/test)
>> 3177 ms
>> 1387 ms
>> 3766 ms
>>
>> Regular: (jdbc:h2:~/test)
>> 1496 ms
>> 2809 ms
>> 4268 ms
>>
>> Regards,
>> Thomas
>>
>>
>>
>> On Fri, Mar 6, 2009 at 10:21 AM, Mikkel Kamstrup Erlandsen
>>  wrote:
>>> 2009/3/6 Jan Kotek 
>>>>
>>>> 1) My patch adds two more classes and does not change the others. It is
>>>> completely __OPTIONAL__.
>>>> 2) 1.3 is __still supported__, but can use only the classical file
>>>> storage. Multiple versions of the jar for each VM are __not needed__.
>>>> 3) NIO does not depend on 1.5 collections; the minimal JRE version is 1.4.
>>>>
>>>> It would be nice to get comments on my code or on NIO performance. Should
>>>> I provide compiled JARs? Is anyone interested in testing?
>>>
>>> Hi Jan,
>>>
>>> Just noting that I am eager to try your patches, as we are running
>>> a performance-critical H2 DB here at the State and University Library of
>>> Denmark, and we have seen significant performance gains porting some other
>>> systems to NIO.
>>>
>>> However, my time is pretty much consumed by some looming deadlines, so my
>>> NIO adventures are put on hold for now. I'll hopefully get a little time
>>> for this some time next week... Unless someone else beats me to it (here's
>>> hoping :-))
>>>
>>> --
>>> Cheers,
>>> Mikkel
>>>




Re: Performance on 1e6 records

2009-03-08 Thread Jan Kotek

My fault; when I add connection pooling, performance rocks.
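
For context, a sketch of the pooling mentioned above, using H2's bundled
org.h2.jdbcx.JdbcConnectionPool (assuming a build that ships it; any
DataSource-based pool is used the same way):

import java.sql.Connection;
import org.h2.jdbcx.JdbcConnectionPool;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        JdbcConnectionPool pool = JdbcConnectionPool.create(
                "jdbc:h2:~/test", "sa", "");
        for (int i = 0; i < 100; i++) {
            Connection conn = pool.getConnection(); // reuses connections
            try {
                conn.createStatement().execute("SELECT 1");
            } finally {
                conn.close(); // returns the connection to the pool
            }
        }
        pool.dispose();
    }
}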

On Mon, Mar 2, 2009 at 7:06 PM, Jan Kotek  wrote:
>
> Thanks,
> increasing the cache size helped.
>
> Jan
>
>
> On Mar 2, 12:13 pm, Ewald  wrote:
>> Hi Jan.
>>
>> Just out of interest... Is there an index on table.s1 and, more
>> importantly, how much memory did you allocate for the database to use
>> as cache? This can have a tremendous impact. On one of my databases,
>> just increasing the memory from 16 MB to 32 MB caused a 3-second query
>> to become a sub-second query.
>>
>> Best regards
>> Ewald
>>
>> On Mar 1, 9:05 pm, Jan Kotek  wrote:
>>
>> > Hi,
>>
>> > I took H2 for a quick spin on a table with 1e6 records on disk. It was
>> > dog slow on queries like
>> > "SELECT * FROM table WHERE table.s1 LIKE 'Peter'"
>>
>> > Is H2 designed to handle this amount of data? Or am I doing something
>> > wrong?
>>
>> > I also took a look into the source code. I found two major problems:
>>
>> > 1) Does one node in the BTree consume one page in storage?! This would
>> > explain the slow performance of BTree indexes.
>>
>> > 2) Each read is synchronized on the Storage, so the DB cannot fetch
>> > multiple pages/columns from disk at one time.
>>
>> > Are there any plans for rewriting the indexes and storage generally?
>>
>> > Thanks,
>> > Jan Kotek




Re: NIO storage

2009-03-06 Thread Jan Kotek

1) My patch adds two more classes and does not change the others. It is
completely __OPTIONAL__.
2) 1.3 is __still supported__, but can use only the classical file
storage. Multiple versions of the jar for each VM are __not needed__.
3) NIO does not depend on 1.5 collections; the minimal JRE version is 1.4.

It would be nice to get comments on my code or on NIO performance. Should
I provide compiled JARs? Is anyone interested in testing?

Jan


On Thu, Mar 5, 2009 at 9:39 PM, Chris Schanck  wrote:
> Yeah, the fact that there are separate, incompatible jars for 1.4, 1.5 and 6
> is a pain. But it might be worth it.
>
> On Thu, Mar 5, 2009 at 1:26 PM, Alex  wrote:
>>
>> I think that this backport uses different package names
>> (edu.emory.mathcs.backport.java.util.concurrent) and so the code using
>> it directly would not be 1.5 compatible.
>> So you still need some adapter glue to use the backport or 1.5
>> concurrent in the same program depending on the environment.
>>
>> On Mar 5, 1:04 pm, Chris Schanck  wrote:
>> > I would agree on NIO, but feel compelled to point out there is a well
>> > maintained backport of the concurrent stuff which is tested back to 1.3
>> > at http://backport-jsr166.sourceforge.net/. And the speedup from
>> > ConcurrentHashMap in some situations is considerable (the multi-reader
>> > case). Likewise for explicit reader/writer locking.
>> >
>> > On Thu, Mar 5, 2009 at 12:26 PM, Alex  wrote:
>> >
>> > > Requiring NIO for H2 will most likely break compatibility with a whole
>> > > bunch of embedded JVMs that either do not support NIO or have a broken/
>> > > unreliable/slow NIO implementation.
>> > > If NIO is introduced, it should be a configurable option so that one
>> > > could still use regular IO instead.
>> >
>> > > The same goes for requiring Java 1.5 classes like
>> > > "java.util.concurrent". This will not play well with embedded JVMs,
>> > > many of which are still at the 1.4 API spec.
>> >
>> > > My .02
>> >
>> > --
>> > C. Schanck
>>
>
>
>
> --
> C. Schanck
>




WeakReference page cache

2009-03-04 Thread Jan Kotek

Hi,
there are two implementations of the disk cache, both of fixed size. But
why is there no cache based on weak references? Are there any catches?

Jan




Re: NIO storage

2009-03-04 Thread Jan Kotek
Hi Thomas,
I spent three evenings with your code; it is very readable and nice. As a
result I wrote two new implementations of FileObject.

FileObjectDiskChannel.java -
  * uses an NIO FileChannel instead of a random access file. There are no
catches, so it should be safe to use.

FileObjectDiskMapped -
  * uses an NIO MappedByteBuffer.
  * This is a 'first shot' implementation and should be used carefully.
  * The buffer is remapped every time the file size changes (so performance
is bad in that case; a rough sketch of the remap step follows below).
  * Hack with System.gc() to force the old buffer to unmap; as a result the
file can be safely closed and deleted (even on Windows).
  * MappedByteBuffer.load() is not tried yet (it can improve or degrade
read performance).
  * Can handle only files smaller than 2 GB (can be fixed).
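
A rough reconstruction (mine, not the patch code) of the remap step
described in the list above:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class RemapSketch {
    private final RandomAccessFile file;
    private MappedByteBuffer mapped;

    RemapSketch(RandomAccessFile file) {
        this.file = file;
    }

    // Called whenever the file size changes: the whole file is mapped
    // again and the old buffer is left for the GC (or the 'cleaner hack'
    // shown earlier in this archive), which is why frequent size changes
    // are expensive.
    void remap() throws Exception {
        int oldPos = mapped == null ? 0 : mapped.position();
        mapped = file.getChannel().map(
                FileChannel.MapMode.READ_WRITE, 0, file.length());
        // Optionally: mapped.load(); // may improve or degrade performance
        mapped.position(oldPos); // int positions: hence the 2 GB limit
    }
}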

Both FileObjects pass all unit tests. I have not validated performance yet.

I would be happy if you would take this patch for a spin, run your
performance tests, and send me your thoughts.

Also, what is the chance of integrating this into the main branch? As
optional engines with a prefix on the connection URL (jdbcChannel, jdbcMapped)?

Regards,
Jan Kotek


On Tue, Mar 3, 2009 at 8:43 PM, Thomas Mueller
 wrote:
>
> Hi,
>
>> is there any progress/plan for NIO-based storage? I will play with H2
>> in this direction, so I would like to know the situation.
>
> No, currently there are no plans (at least not from my side). However
> I plan to write a new storage engine that is based on blocks (2 KB or
> 4 KB each). See PageStore and related classes.
>
> Regards,
> Thomas
>




h2-nio.patch
Description: Binary data


NIO storage

2009-03-02 Thread Jan Kotek

Hi,
is there any progress/plan for NIO-based storage? I will play with H2
in this direction, so I would like to know the situation.

Jan Kotek





Re: Performance on 1e6 records

2009-03-02 Thread Jan Kotek

Thanks,
increasing the cache size helped.

Jan


On Mar 2, 12:13 pm, Ewald  wrote:
> Hi Jan.
>
> Just out of interest... Is there an index on table.s1 and, more
> importantly, how much memory did you allocate for the database to use
> as cache? This can have a tremendous impact. On one of my databases,
> just increasing the memory from 16 MB to 32 MB caused a 3-second query
> to become a sub-second query.
>
> Best regards
> Ewald
>
> On Mar 1, 9:05 pm, Jan Kotek  wrote:
>
> > Hi,
>
> > I took H2 for a quick spin on a table with 1e6 records on disk. It was
> > dog slow on queries like
> > "SELECT * FROM table WHERE table.s1 LIKE 'Peter'"
>
> > Is H2 designed to handle this amount of data? Or am I doing something
> > wrong?
>
> > I also took a look into the source code. I found two major problems:
>
> > 1) Does one node in the BTree consume one page in storage?! This would
> > explain the slow performance of BTree indexes.
>
> > 2) Each read is synchronized on the Storage, so the DB cannot fetch
> > multiple pages/columns from disk at one time.
>
> > Are there any plans for rewriting the indexes and storage generally?
>
> > Thanks,
> > Jan Kotek



Performance on 1e6 records

2009-03-01 Thread Jan Kotek

Hi,

I took H2 for a quick spin on a table with 1e6 records on disk. It was
dog slow on queries like
"SELECT * FROM table WHERE table.s1 LIKE 'Peter'"

Is H2 designed to handle this amount of data? Or am I doing something
wrong?

I also took a look into the source code. I found two major problems:

1) Does one node in the BTree consume one page in storage?! This would
explain the slow performance of BTree indexes.

2) Each read is synchronized on the Storage, so the DB cannot fetch
multiple pages/columns from disk at one time.

Are there any plans for rewriting the indexes and storage generally?

Thanks,
Jan Kotek
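
For reference, a sketch combining the remedies suggested in the replies
(an index on the searched column and a larger cache). Since "table" is a
reserved word, TEST/S1 stand in for the post's names; CACHE_SIZE is given
in KB:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class LikeIndexExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.h2.Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:h2:~/test;CACHE_SIZE=32768"); // 32 MB cache
        conn.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS TEST(ID INT PRIMARY KEY, S1 VARCHAR)");
        // Without an index every LIKE query scans all rows; a pattern
        // without a leading wildcard can use the index instead.
        conn.createStatement().execute(
                "CREATE INDEX IF NOT EXISTS IDX_TEST_S1 ON TEST(S1)");
        ResultSet rs = conn.createStatement().executeQuery(
                "SELECT * FROM TEST WHERE S1 LIKE 'Peter'");
        while (rs.next()) {
            System.out.println(rs.getString("S1"));
        }
        conn.close();
    }
}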




