First, note that this test produces fairly widely varying results, so you need to run it a few times and average the output.
That said, performance here is completely dependent on the CACHE_SIZE constant in FileNioMemData: any value above roughly 80 produces decent performance. The point, however, is that we are now in trade-off territory. If we want more performance, we need to leave enough data uncompressed to hold the working set of your queries. But that working set differs from program to program, and even varies with what the user happens to be doing at the moment. And a larger cache means less compressed data, which means the DB will use more RAM.

So right now I am leaning towards sizing the cache as a percentage of the size of the DB, and adding a setting that lets users override that percentage.
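To make the sizing idea concrete, here is a minimal sketch of what percentage-based sizing with a user override might look like. The class, method, and constant names are all hypothetical, not H2's actual API; the floor of 80 blocks reflects the observation above that values below roughly 80 hurt performance.

```java
// Hypothetical sketch of percentage-based cache sizing with a user override.
// Names (CacheSizing, cacheBlocks, DEFAULT_CACHE_PERCENT) are illustrative only.
public class CacheSizing {

    // Default: size the uncompressed cache at 5% of the DB (assumed value).
    static final int DEFAULT_CACHE_PERCENT = 5;

    // Below roughly 80 cached blocks, performance degrades noticeably.
    static final int MIN_CACHE_BLOCKS = 80;

    /**
     * Compute the cache size in blocks.
     *
     * @param dbSizeBlocks        current database size, in blocks
     * @param userPercentOverride user-supplied percentage, or null to use the default
     */
    static int cacheBlocks(long dbSizeBlocks, Integer userPercentOverride) {
        int percent = (userPercentOverride != null)
                ? userPercentOverride
                : DEFAULT_CACHE_PERCENT;
        long blocks = dbSizeBlocks * percent / 100;
        // Clamp to the minimum useful size, and avoid int overflow on huge DBs.
        return (int) Math.max(MIN_CACHE_BLOCKS, Math.min(blocks, Integer.MAX_VALUE));
    }
}
```

With this approach the cache grows with the database, so the RAM cost stays proportional, while users with unusual working sets can still dial the percentage up or down.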