I haven't tried 1.4 yet. In 1.3 it really looks like a row-size issue to
me. Given the same amount of underlying data, splitting it across rows of
different lengths gives very odd results. I would expect that the overhead
of storing rows would go down as the number of rows goes down (with longer
rows).
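For illustration, a rough sketch of that kind of comparison: the same total
payload stored as many short rows versus fewer long rows. The table names,
row counts, and JDBC URL are placeholders, not the parameters from the
original tests.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class RowSizeTest {

    // Load `rows` rows of `rowLen` bytes each into a new table.
    static void load(Connection conn, String table, int rows, int rowLen)
            throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE " + table
                    + "(id INT PRIMARY KEY, data VARBINARY(" + rowLen + "))");
        }
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO " + table + " VALUES(?, ?)")) {
            for (int i = 0; i < rows; i++) {
                ps.setInt(1, i);
                ps.setBytes(2, new byte[rowLen]);
                ps.execute();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Same 40 MB of payload, different row lengths. In a real test,
        // load each layout into a fresh database file and compare sizes.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:~/rowsize", "sa", "")) {
            load(conn, "wide", 10_000, 4_000);
            load(conn, "narrow", 100_000, 400);
        }
    }
}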
Hi,

This feature is a bit hidden (right now; once we find it's really useful,
it will be better documented). See
http://h2database.com/javadoc/org/h2/engine/DbSettings.html#COMPRESS

> Are there levels of compression that can be specified on the URL or
> simply true/false?

Internally, yes, but you c…
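For anyone looking for the syntax, a minimal sketch of enabling the setting
through the URL, assuming plain JDBC; the database path and credentials are
placeholders.

import java.sql.Connection;
import java.sql.DriverManager;

public class CompressedDb {
    public static void main(String[] args) throws Exception {
        // COMPRESS=TRUE is appended to the JDBC URL like any other
        // database setting; "~/test" and "sa" are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:~/test;COMPRESS=TRUE", "sa", "")) {
            // use the connection as usual
        }
    }
}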
Quick question... I cannot find the ";compress=true" URL option in the
documentation.

Are there levels of compression that can be specified on the URL or simply
true/false?

On Tuesday, June 24, 2014 2:09:20 AM UTC-4, Thomas Mueller wrote:
>
> Hi,
>
> Did you try with the latest version of H2…
Hi,

Did you try with the latest version of H2 (1.4.x)? Please note you will
need to compact the database using "shutdown defrag" after you have
finished. And append ";compress=true" to the database URL, as Noel wrote.

Regards,
Thomas

On Saturday, June 21, 2014, Noel Grandin wrote:
> If you're wo…
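Putting the two suggestions together, a short sketch of the full sequence,
assuming plain JDBC (URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CompactOnShutdown {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:~/test;COMPRESS=TRUE", "sa", "");
             Statement stmt = conn.createStatement()) {
            // ... load or update data here ...
            // SHUTDOWN DEFRAG compacts the database file while closing it.
            stmt.execute("SHUTDOWN DEFRAG");
        }
    }
}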
Following up on the large db files, I ran a few tests, loading into this
table:

CREATE TABLE IF NOT EXISTS `scores` (`id` INT NOT NULL PRIMARY KEY,
`scores` VARBINARY(400) NOT NULL)

I varied the settings of LOG and UNDO_LOG, and used either a single csvread
or a sequence of INSERT statements inside a transaction.
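A sketch of that kind of test harness, assuming plain JDBC; the row count,
CSV path, and URL below are placeholders rather than the original test
parameters.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class LoadTest {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:~/scores", "sa", "")) {
            try (Statement stmt = conn.createStatement()) {
                // These settings were varied between runs; 0 disables
                // the respective log.
                stmt.execute("SET LOG 0");
                stmt.execute("SET UNDO_LOG 0");
                stmt.execute("CREATE TABLE IF NOT EXISTS scores("
                        + "id INT NOT NULL PRIMARY KEY, "
                        + "scores VARBINARY(400) NOT NULL)");
                // Variant A: one bulk load from a CSV file, e.g.
                // stmt.execute("INSERT INTO scores "
                //         + "SELECT * FROM CSVREAD('scores.csv')");
            }
            // Variant B: a sequence of INSERTs inside one transaction.
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO scores VALUES(?, ?)")) {
                for (int i = 0; i < 100_000; i++) {
                    ps.setInt(1, i);
                    ps.setBytes(2, new byte[400]);
                    ps.execute();
                }
            }
            conn.commit();
        }
    }
}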