Re: [h2] 1.4 beta creates much bigger database file
Hi Thomas,

The database file size in 1.4.184 is much, much better than in earlier 1.4.x releases. I've done some trials and these are my findings:

1.3.176: fully loaded database after shutdown is 317 MB
1.4.184: fully loaded database after shutdown is 380 MB

This seems reasonable.

On Friday, 19 December 2014 17:15:29 UTC+8, Thomas Mueller wrote:

Hi,

Version 1.4.184 should produce smaller database files than previous versions (1.4.x - 1.4.182), maybe half or a third of the old file size. It would be great to get some real-world results!

Regards, Thomas

On Tue, May 6, 2014 at 6:24 PM, Thomas Mueller thomas.to...@gmail.com wrote:

Hi,

Some initial results: you can shrink the database by running SHUTDOWN COMPACT or SHUTDOWN DEFRAG. Each time this is run, it shrinks the file by a few MB (up to some point, of course). This works, but it's relatively slow. Now the task is to make it faster. There are two ways: shrink it fully to the minimum size, and shrink it incrementally (like now) but faster. I'm working on that now.

Regards, Thomas

On Tue, May 6, 2014 at 11:39 AM, Steve McLeod steve@gmail.com wrote:

Hi Thomas,

I've sent you a private email with a link to the new database file, made with H2 1.4.178.

Regards, Steve

On Monday, 5 May 2014 07:46:16 UTC+2, Thomas Mueller wrote:

Hi,

The database file should shrink if you run SHUTDOWN DEFRAG. The current compact algorithm is quite inefficient, which means the database file is quite big on average. The highest priority is still to ensure it always works correctly; when that's done, I will work on re-using disk space more efficiently, and especially on compacting the file faster when closing the database.

Could you send me the new database file? It would be nice to have a real-world database file to test this. The last file you sent helped a lot; thanks to it I found some problems that completely prevented the file from shrinking.
Regards, Thomas

On Sunday, May 4, 2014, Steve McLeod steve@gmail.com wrote:

Hi Thomas,

I tested the same large data import with H2 1.4.178, and there is no improvement over H2 1.4.177. Here are the file sizes, in both cases after the app has stopped:

H2 1.3.176: pokercopilot.h2.db 301,669,352 bytes
H2 1.4.178: pokercopilot.mv.db 1,023,037,440 bytes

Let me know what I can do to help.

Regards, Steve

On Saturday, 19 April 2014 11:44:05 UTC+2, Steve McLeod wrote:

Hi Thomas,

Great! Glad I could help make your superb product even better.

On Friday, 18 April 2014 21:38:27 UTC+2, Thomas Mueller wrote:

Hi,

Thanks a lot for the database! I know what the problem is now, but I couldn't fix it yet. The database file (pokercopilot2.mv.db) has about 181 MB of live data; the rest (about 78%) is not used. The mechanism to get rid of the unused space is not working as it should for this case (I think the problem is that b-tree nodes are not processed correctly). This will be fixed in the next release.

Regards, Thomas

On Fri, Apr 18, 2014 at 5:29 PM, Steve McLeod steve@gmail.com wrote:

Hi Thomas,

I've sent a link to the file privately to your email address.

Regards, Steve

On Friday, 18 April 2014 14:04:37 UTC+2, Thomas Mueller wrote:

Hi,

Hm, that didn't help much. Could you send me the (compressed) database files, please? If they're too big, what is the compressed size of the files?

Regards, Thomas

On Fri, Apr 18, 2014 at 1:07 PM, Steve McLeod steve@gmail.com wrote:

Hi Thomas,

Thanks for the suggestion. I tried adding ;RETENTION_TIME=1000 to the URL, and this resulted in a small improvement:

pokercopilot.h2.db: 302,018,560 bytes
pokercopilot.mv.db: 999,120,896 bytes
pokercopilot.mv.db with RETENTION_TIME=1000: 811,728,896 bytes

These numbers all reflect loading data into a newly created database, consisting of roughly 2,400,000 INSERTs and UPDATEs with plenty of SELECTs and almost no DELETEs.
After the loading was complete, I let the application keep running with the database open for a few minutes, then closed the application and therefore the database.

Here is the full JDBC URL I'm using:

jdbc:h2:/Users/steve/Library/Application Support/com.barbarysoftware.pokercopilot/database/pokercopilot;DATABASE_EVENT_LISTENER='com.barbarysoftware.pokercopilot.database.DatabaseListener';COMPRESS_LOB=DEFLATE;CACHE_SIZE=65536;RETENTION_TIME=1000

Let me know if there is anything else I can do to help diagnose this.

Regards, Steve

On Thursday, 17 April 2014 17:15:50 UTC+2, Thomas Muel

--
You received this message because you are subscribed to the Google Groups H2 Database group.
To unsubscribe from this group and stop receiving emails from it, send an email to h2-database+unsubscr...@googlegroups.com.
To post to this group, send email to h2-database@googlegroups.com.
Visit this group at http://groups.google.com/group/h2-database.
Re: [h2] MERGE error caused by NOT NULL columns (that are not set null)
Hi,

There has to be some other problem. Now I see: I intended to update the table, so not providing the defaulted columns was not an issue. But somehow the original data in the database changed, and some of the rows that I wanted to update were deleted. So the merge tried inserting those rows, and of course that fails when no SHEET value is given, because of the NOT NULL constraint. My bad, I did not check that. I have to inspect what happened to the original rows that are missing from the table and prune the merge dataset.

Thanks for the quick answer anyway.

Cheers, David

On Friday, December 19, 2014 9:12:30 PM UTC+1, Rami Ojares wrote:

MERGE does either an insert or an update. If it does an insert, it must always insert a complete row. If the user does not provide a value for some column, then the DBMS uses the default value for that column. If the user did not explicitly define a default value for the column, then the default value is NULL. Ergo the DBMS tries to insert a row where column SHEET has the value NULL, but NULL is not allowed for that column. Does this make sense?
- rami

On 19.12.2014 21:04, David Hájek wrote:

Hi,

I was trying to fix some screwed-up data by running this by hand:

MERGE INTO SheetOrders (id, createdAt, createdBy) KEY(id) VALUES
  (154, '2014-11-20 08:36:07.209', 4),
  (164, '2014-11-24 10:25:06.395', 3),
  (165, '2014-11-24 10:25:46.987', 3),
  (171, '2014-11-27 10:35:09.764', 3),
  (172, '2014-11-27 10:35:41.147', 3),
  (173, '2014-11-27 11:58:11.06', 3),
  (174, '2014-11-27 12:00:32.21', 3),
  (176, '2014-12-01 14:58:41.081', 3),
  (178, '2014-12-03 09:48:16.415', 3),
  (179, '2014-12-03 10:07:49.815', 3),
  (181, '2014-12-05 09:36:35.754', 3),
  (182, '2014-12-08 08:47:11.349', 3),
  (183, '2014-12-09 12:48:11.538', 3),
  (184, '2014-12-09 14:16:26.757', 3),
  (185, '2014-12-11 14:43:40.76', 3),
  (186, '2014-12-11 16:23:17.315', 3),
  (187, '2014-12-12 09:02:29.67', 3),
  (188, '2014-12-12 12:10:01.006', 3);

On a table defined as:

CREATE TABLE IF NOT EXISTS SheetOrders (
  id IDENTITY PRIMARY KEY,
  printOrder BIGINT NULL,
  product BIGINT NULL,
  sheet BIGINT NOT NULL,
  amount INT NOT NULL,
  note VARCHAR(255),
  status INT NOT NULL,
  createdAt TIMESTAMP NOT NULL,
  createdBy BIGINT NOT NULL,
  FOREIGN KEY (printOrder) REFERENCES PrintOrders(id),
  FOREIGN KEY (product) REFERENCES Products(id),
  FOREIGN KEY (sheet) REFERENCES Sheets(id),
  FOREIGN KEY (createdBy) REFERENCES Users(id)
);

What I get is:

NULL not allowed for column SHEET; SQL statement:
MERGE INTO SheetOrders (id, createdAt, createdBy) KEY(id) VALUES (154, '2014-11-20 08:36:07.209', 4), (164, '2014-11-24 10:25:06.395', 3), (165, '2014-11-24 10:25:46.987', 3), ...

I don't understand this behavior. I updated several other tables in the same manner; all had NOT NULL and FOREIGN KEY columns, and none gave me any errors. I would have supposed that when I don't explicitly mention a NOT NULL column in the MERGE, the DB should not even care about that column. What is happening? How do I get around this?
I also tried setting the columns to NULL with ALTER before the MERGE (then the MERGE worked fine) and then back to NOT NULL, which gave me this error afterwards:

Column SHEET contains null values; SQL statement:
ALTER TABLE SheetOrders ALTER COLUMN sheet SET NOT NULL [90081-175] [ERROR:90081, SQLSTATE:90081]

But when I queried the table for NULL values like this:

SELECT * FROM SheetOrders WHERE sheet = NULL;

I got 0 rows back, which means there are no NULL values in that column. So what's the deal? Is the DB corrupt? Is this a bug? An undocumented feature? Why can't I MERGE into the table and set the column to NOT NULL?

Regards, David
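A side note on the query above: in standard SQL, `WHERE sheet = NULL` never matches anything, because a comparison with NULL evaluates to UNKNOWN; the correct predicate is `sheet IS NULL`. So the 0 rows returned do not prove the column is free of NULLs. A minimal demonstration using Python's built-in sqlite3 (whose NULL comparison semantics match H2's here):

```python
import sqlite3

# In-memory table with one NULL in the "sheet" column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SheetOrders (id INTEGER PRIMARY KEY, sheet INTEGER)")
conn.execute("INSERT INTO SheetOrders (id, sheet) VALUES (1, NULL), (2, 42)")

# "= NULL" evaluates to UNKNOWN for every row and matches nothing,
# even though a NULL is present.
eq_null = conn.execute(
    "SELECT COUNT(*) FROM SheetOrders WHERE sheet = NULL").fetchone()[0]

# "IS NULL" is the correct test and finds the row.
is_null = conn.execute(
    "SELECT COUNT(*) FROM SheetOrders WHERE sheet IS NULL").fetchone()[0]

print(eq_null, is_null)  # 0 1
```

Rerunning the check with IS NULL would show whether the ALTER failure is justified, i.e. whether the MERGE's insert branch really did leave NULLs behind while the column was nullable.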
Re: [h2] .mv.db file size grows large
Sorry, the exception only happens when closing a connection after a large number (114,508) of MERGE queries have been executed, so I couldn't post a test case for you. The table only contains three columns:

name VARCHAR PRIMARY KEY, df INT, category_counts VARCHAR

This is reproducible in both 1.4.183 and 1.4.184. In 1.4.184 I got:

org.h2.jdbc.JdbcSQLException: General error: java.lang.IllegalStateException: Negative position -1593 [1.4.184/6] [5-184]
    at org.h2.message.DbException.getJdbcSQLException(DbException.java:345) ~[h2-1.4.184.jar:1.4.184]
    at org.h2.message.DbException.get(DbException.java:168) ~[h2-1.4.184.jar:1.4.184]
    at org.h2.message.DbException.convert(DbException.java:295) ~[h2-1.4.184.jar:1.4.184]
    at org.h2.mvstore.db.MVTableEngine$1.uncaughtException(MVTableEngine.java:93) ~[h2-1.4.184.jar:1.4.184]
    at org.h2.mvstore.MVStore.writeInBackground(MVStore.java:2374) ~[h2-1.4.184.jar:1.4.184]
Caused by: java.lang.IllegalStateException: Negative position -1593 [1.4.184/6]
    at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:773) ~[h2-1.4.184.jar:1.4.184]
    at org.h2.mvstore.MVStore.readPageChunkReferences(MVStore.java:1267) ~[h2-1.4.184.jar:1.4.184]
    at org.h2.mvstore.MVStore.collectReferencedChunks(MVStore.java:1245) ~[h2-1.4.184.jar:1.4.184]
    at org.h2.mvstore.MVStore.collectReferencedChunks(MVStore.java:1249) ~[h2-1.4.184.jar:1.4.184]
    at org.h2.mvstore.MVStore.collectReferencedChunks(MVStore.java:1249) ~[h2-1.4.184.jar:1.4.184]

I'm falling back to 1.4.182 now, and I found that the .mv.db file is no longer that large (around 500 MB); I'm not sure if I changed anything.

Regards, Pishen

2014-12-20 16:11 GMT+08:00 Thomas Mueller thomas.tom.muel...@gmail.com:

Hi,

Is this exception reproducible with version 1.4.184, with a fresh database? If yes, would it be possible to send me or post a reproducible test case?
Regards, Thomas

On Wed, Dec 17, 2014 at 5:36 PM, pishen tsai pishe...@gmail.com wrote:

The .mv.db file is not growing that large (now around 200 MB) even without re-opening this time. But I got an exception that I can't explain:

org.h2.jdbc.JdbcSQLException: General error: java.lang.IllegalStateException: Negative position -1847 [1.4.183/6] [5-183]
    at org.h2.message.DbException.getJdbcSQLException(DbException.java:345) ~[h2-1.4.183.jar:1.4.183]
    at org.h2.message.DbException.get(DbException.java:168) ~[h2-1.4.183.jar:1.4.183]
    at org.h2.message.DbException.convert(DbException.java:295) ~[h2-1.4.183.jar:1.4.183]
    at org.h2.mvstore.db.MVTableEngine$1.uncaughtException(MVTableEngine.java:93) ~[h2-1.4.183.jar:1.4.183]
    at org.h2.mvstore.MVStore.writeInBackground(MVStore.java:2369) ~[h2-1.4.183.jar:1.4.183]
Caused by: java.lang.IllegalStateException: Negative position -1847 [1.4.183/6]
    at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:768) ~[h2-1.4.183.jar:1.4.183]
    at org.h2.mvstore.MVStore.readPageChunkReferences(MVStore.java:1262) ~[h2-1.4.183.jar:1.4.183]
    at org.h2.mvstore.MVStore.collectReferencedChunks(MVStore.java:1240) ~[h2-1.4.183.jar:1.4.183]
    at org.h2.mvstore.MVStore.collectReferencedChunks(MVStore.java:1244) ~[h2-1.4.183.jar:1.4.183]
    at org.h2.mvstore.MVStore.collectReferencedChunks(MVStore.java:1244) ~[h2-1.4.183.jar:1.4.183]

Regards, Pishen

2014-12-16 0:13 GMT+08:00 pishen tsai pishe...@gmail.com:

Thanks, I will try it in a few days, and will report the result after that.

Regards, Pishen

2014-12-16 0:09 GMT+08:00 Thomas Mueller thomas.tom.muel...@gmail.com:

Hi,

Could you try again with the latest version of H2? (Please note there is a bug that prevents index usage if you use an auto-increment column; but I think you don't use this, so it shouldn't affect you.)
Regards, Thomas

On Mon, Dec 1, 2014 at 12:18 PM, pishen tsai pishe...@gmail.com wrote:

Hi,

I have looked into this a bit and found that H2 needs to compact the database when the JDBC connection is closed. Hence I tried to close and reopen the connection every 1000~2000 INSERT/UPDATE statements. Now the .mv.db file grows much more slowly, but it is still larger than 44 MB; maybe I have to enlarge the shutdown delay so the DB can compact more. I may try that later.

Thanks, pishen

On Monday, 1 December 2014 at 14:50:53 UTC+8, Thomas Mueller wrote:

Hi,

It is a known problem that the .mv.db file grows large, and I'm working on reducing the file size. However, it shouldn't grow _that_ large. Could you post your test case, please?

Regards, Thomas

On Sunday, November 30, 2014, pishen tsai pish...@gmail.com wrote:

Hello,

I'm using Anorm (in Play Framework 2.3.6) to store one table with 45 rows in H2 (1.4.182). The table is created by:

CREATE TABLE terms (name VARCHAR(20) PRIMARY KEY, df INT)

After frequent INSERTs and UPDATEs of all the data in the table, the .mv.db file size
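The workaround pishen describes (close and reopen the connection every 1000~2000 statements so the engine gets a chance to compact on close) can be sketched generically. This is an illustrative sketch only: sqlite3 and the temp-file path stand in for the H2 JDBC connection and .mv.db file, and with H2 the close is what would trigger the on-shutdown compaction:

```python
import os
import sqlite3
import tempfile

# Temp file stands in for the H2 database file.
path = os.path.join(tempfile.mkdtemp(), "terms.db")
BATCH = 1000  # reopen the connection every ~1000 statements, as in the post

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE IF NOT EXISTS terms (name TEXT PRIMARY KEY, df INTEGER)")

# Repeatedly insert/update a small set of rows, mimicking the workload
# that made the store file grow.
for i in range(3500):
    conn.execute(
        "INSERT INTO terms (name, df) VALUES (?, 1) "
        "ON CONFLICT(name) DO UPDATE SET df = df + 1",
        (f"term{i % 45}",),
    )
    if (i + 1) % BATCH == 0:
        conn.commit()
        conn.close()  # give the engine a chance to compact on close
        conn = sqlite3.connect(path)

conn.commit()
conn.close()
```

With H2, the same pattern would apply to the JDBC connection, optionally issuing SHUTDOWN COMPACT before each close instead of relying on the default shutdown behavior.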