Re: [sqlite] Vacuum into

2019-02-08 Thread David Raymond
Hmm. So after a normal vacuum, "Non-sequential pages" is basically 1 for 
everything, tables and indexes alike. On a read-only "vacuum into" it's 
anywhere from 22% to 99.5%, usually being more towards 99%.

Numbers for "Table X without any indicies" seem identical apart from the 
Non-sequential pages being close to 99%. So the bloat seems to be all in the 
indexes.

Brain now fried. Can provide any additional requested info next week.


-Original Message-
From: David Raymond 
Sent: Friday, February 08, 2019 2:18 PM
To: SQLite mailing list
Subject: RE: [sqlite] Vacuum into

But now, here's the weird part ladies and gentlemen.

I started this on one drive, vacuum into completed, but then I had to do real 
work on that drive, so copied the file over to an unused drive to do the 
previously reported timing without it being biased by other things going on. 
But I saw something weird, which I just ran again to confirm.

"vacuum into" created a larger file when the original file was read only than 
it did when the original file was writable.

Have run this now twice with the same output file size, so my bad eyes are at 
least being consistent in what they're seeing. Am on Windows 7 here. When the 
original db file was read only (Windows properties, not any flag to the CLI) 
the resulting file from "vacuum into" was 6% bigger. When I undid the read only 
checkbox and ran the same thing again, the resulting file was the same size as 
the original.

Original file size:  20,467,359,744
Vacuum into size when original is writable:  20,467,359,744
Vacuum into size when original is read only: 21,760,516,096
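For anyone trying to reproduce the comparison, here is a minimal Python sketch of the same experiment. File names are illustrative, and the read-only flag is set with os.chmod rather than the Windows properties dialog, so it may not trigger the same Windows 7 behavior:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "original.sqlite")
dst_rw = os.path.join(tmp, "vac_into_rw.sqlite")
dst_ro = os.path.join(tmp, "vac_into_ro.sqlite")

# Build a small database and vacuum it, so it starts out compact.
con = sqlite3.connect(src)
con.execute("CREATE TABLE t(a INTEGER PRIMARY KEY, b TEXT)")
con.executemany("INSERT INTO t(b) VALUES (?)", [("x" * 100,)] * 1000)
con.commit()
con.execute("VACUUM")
con.close()

# VACUUM INTO while the source file is writable.
con = sqlite3.connect(src)
con.execute(f"VACUUM INTO '{dst_rw}'")
con.close()

# VACUUM INTO while the source file is marked read-only on disk.
os.chmod(src, 0o444)
con = sqlite3.connect(f"file:{src}?mode=ro", uri=True)
con.execute(f"VACUUM INTO '{dst_ro}'")
con.close()
os.chmod(src, 0o644)

sizes = {p: os.path.getsize(p) for p in (src, dst_rw, dst_ro)}
print(sizes)
```

On a current SQLite both output sizes should normally match; the 6% growth reported above is what made this interesting.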


Timing was also in between the "vacuum into" and regular vacuum times for the 
writable file. Well, wall clock time anyway; "user" was way up.

sqlite> vacuum into 'vac_into_from_ro_file.sqlite';
Run Time: real 1107.448 user 853.185069 sys 87.048558


Gonna run sqlite3_analyzer.exe on them now to see what's up. But closing in on 
the end of the work day/week here.


-Original Message-
From: David Raymond 
Sent: Friday, February 08, 2019 1:53 PM
To: SQLite mailing list
Subject: RE: [sqlite] Vacuum into

Non-scientific "let's just try it" results

Short version:
Original file had been vacuumed already as the last thing that had happened to 
it.
File size: 20,467,359,744

sqlite> vacuum into 'vac_into.sqlite';
Run Time: real 589.577 user 222.941029 sys 57.829571

sqlite> vacuum;
Run Time: real 1429.063 user 236.325915 sys 199.322478

Synchronous was off; journal mode was the normal rollback journal. I was using 
the computer for normal work while doing it, but nothing else was using the 
physical drive these were on. The resulting files were the exact same size as 
the originals (the file had been previously vacuumed, as mentioned).
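The timing comparison is easy to script; a rough sketch follows (a small throwaway database will not mirror the 20 GB numbers, but it exercises the same two commands):

```python
import os
import sqlite3
import tempfile
import time

tmp = tempfile.mkdtemp()
db = os.path.join(tmp, "test.sqlite")
into_path = os.path.join(tmp, "vac_into.sqlite")

con = sqlite3.connect(db)
con.execute("CREATE TABLE t(a INTEGER PRIMARY KEY, b TEXT)")
con.executemany("INSERT INTO t(b) VALUES (?)", [("y" * 200,)] * 5000)
con.commit()

# VACUUM INTO: one pass, writing the compacted copy to a new file.
t0 = time.perf_counter()
con.execute(f"VACUUM INTO '{into_path}'")
t_into = time.perf_counter() - t0

# Regular VACUUM: rebuilds the database and copies it back over itself.
t0 = time.perf_counter()
con.execute("VACUUM")
t_plain = time.perf_counter() - t0
con.close()

print(f"vacuum into: {t_into:.4f}s  vacuum: {t_plain:.4f}s")
```

Note that VACUUM INTO requires SQLite 3.27 or later.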


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Richard Hipp
Sent: Friday, February 08, 2019 11:29 AM
To: SQLite mailing list
Subject: Re: [sqlite] Vacuum into

On 2/8/19, David Raymond  wrote:
> So to make sure I'm understanding it ok, with the new vacuum into command,
> if I'm the only user of a file, then the sequence...
>
> open file1
> vacuum into file2
> close file1
> delete file1
> rename file2 to file1
>
> ...is going to be potentially more than twice as fast as the old...

I think you will be hard-pressed to get a 2x performance increase.
Faster, yes.  2x faster, I'm doubtful.  But perhaps I'm wrong.  Please
try it and see, and report back.

>
> open file1
> vacuum
>
> ...as it saves the whole re-write of the original file, along with all the
> rollback journal or wal writes that entails. Correct?
>
> (With of course the whole "make sure it actually finished and didn't just
> die" caveats before doing the delete and rename)
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] Vacuum into

2019-02-08 Thread Richard Hipp
On 2/8/19, David Raymond  wrote:
> Non-scientific "let's just try it" results
>
> Short version:
> Original file had been vacuumed already as the last thing that had happened
> to it.
> File size: 20,467,359,744
>
> sqlite> vacuum into 'vac_into.sqlite';
> Run Time: real 589.577 user 222.941029 sys 57.829571
>
> sqlite> vacuum;
> Run Time: real 1429.063 user 236.325915 sys 199.322478
>
> Synchronous was off, journal mode was normal rollback journal.

Fair enough.  I wasn't thinking about the overhead of journaling when
a normal VACUUM copies the new database back over top of itself.  It
is having to do two complete copies of the database, not just one.  So
you may well get more than 2x better performance.

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] Vacuum into

2019-02-08 Thread Chris Locke
If you rename file1.db to file1.bak, open file1.bak, vacuum into file1.db,
and close file1.bak, you have a pre-vacuum backup (just in case...) and
streamline the process somewhat.
Obviously, you'd have to rename the file back again if the vacuum failed
(out of disk space, etc.)

Just a thought


Chris
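A sketch of that rename-first sequence in Python, including the rename-back on failure (names like file1.db are illustrative):

```python
import os
import sqlite3
import tempfile

def compact_in_place(path):
    """Rename the database aside, VACUUM INTO a fresh copy at the original
    name, and restore the old name if anything goes wrong."""
    bak = path + ".bak"
    os.rename(path, bak)              # keep the pre-vacuum file as a backup
    try:
        con = sqlite3.connect(bak)
        con.execute(f"VACUUM INTO '{path}'")
        con.close()
    except Exception:
        if os.path.exists(path):      # discard any partial output
            os.remove(path)
        os.rename(bak, path)          # put the original back
        raise

# Demo: build a tiny database, then compact it in place.
demo = os.path.join(tempfile.mkdtemp(), "file1.db")
c = sqlite3.connect(demo)
c.execute("CREATE TABLE t(x)")
c.execute("INSERT INTO t VALUES (1)")
c.commit()
c.close()
compact_in_place(demo)
```

The .bak file is deliberately left behind on success, matching the "just in case" backup described above; remove it once the new file checks out.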

On Fri, Feb 8, 2019 at 4:22 PM David Raymond 
wrote:

> So to make sure I'm understanding it ok, with the new vacuum into command,
> if I'm the only user of a file, then the sequence...
>
> open file1
> vacuum into file2
> close file1
> delete file1
> rename file2 to file1
>
> ...is going to be potentially more than twice as fast as the old...
>
> open file1
> vacuum
>
> ...as it saves the whole re-write of the original file, along with all the
> rollback journal or wal writes that entails. Correct?
>
> (With of course the whole "make sure it actually finished and didn't just
> die" caveats before doing the delete and rename)
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] Vacuum results in larger database after running pragma integrity_check

2017-04-04 Thread Richard Hipp
On 4/4/17, Richard Hipp  wrote:
> On 4/4/17, Ben Newberg  wrote:
>> I've noticed with 3.18.0 that it's possible to make a database increase
>> in
>> size after running pragma integrity_check (which returns "ok") and then
>> running vacuum.
>
> I can now repro the behavior and have bisected to this check-in:
> https://www.sqlite.org/src/timeline?c=aa02bd
>
> Still do not understand how a (read-only) integrity_check might affect
> a subsequent VACUUM operation, however.

Dan has now checked in a fix for the problem on trunk.

   https://www.sqlite.org/src/timeline?c=e5bb7d

Because this problem never results in an incorrect answer (as far as
we can determine), only a sub-optimal vacuum, we are not calling it a
bug.

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] Vacuum results in larger database after running pragma integrity_check

2017-04-04 Thread Richard Hipp
Can you email me the database that does this?

On 4/4/17, Ben Newberg  wrote:
> I've noticed with 3.18.0 that it's possible to make a database increase in
> size after running pragma integrity_check (which returns "ok") and then
> running vacuum.
>
> Alternatively, vacuuming without running pragma integrity_check first keeps
> the database the same size as before.
>
> The page size on the database in question is 1024. The database starts with
> no pages in the freelist. I can't reproduce this with 3.17.0.
>
> SQLite version 3.18.0 2017-03-28 18:48:43
> Enter ".help" for usage hints.
>
> --1) Before the vacuum. Database page count = 3077, and the freelist page
> count is 0:
> sqlite> .dbinfo
> database page size:  1024
> write format:1
> read format: 1
> reserved bytes:  0
> file change counter: 52
> database page count: 3077
> freelist page count: 0
> schema cookie:   19
> schema format:   4
> default cache size:  0
> autovacuum top root: 0
> incremental vacuum:  0
> text encoding:   1 (utf8)
> user version:0
> application id:  0
> software version:3008010
> number of tables:2
> number of indexes:   1
> number of triggers:  0
> number of views: 0
> schema size: 309
>
> --2) Running a vacuum (without pragma integrity_check) results in the same
> size of database: 3077 page count and 0 freelist page count:
> sqlite> vacuum;
> sqlite> .dbinfo
> database page size:  1024
> write format:1
> read format: 1
> reserved bytes:  0
> file change counter: 53
> database page count: 3077
> freelist page count: 0
> schema cookie:   20
> schema format:   4
> default cache size:  0
> autovacuum top root: 0
> incremental vacuum:  0
> text encoding:   1 (utf8)
> user version:0
> application id:  0
> software version:3018000
> number of tables:2
> number of indexes:   1
> number of triggers:  0
> number of views: 0
> schema size: 309
>
> --3) Now running pragma integrity_check which returns "ok", and then
> vacuuming. This increases the database page count to 3236:
> sqlite> pragma integrity_check;
> ok
> sqlite> vacuum;
> sqlite> .dbinfo
> database page size:  1024
> write format:1
> read format: 1
> reserved bytes:  0
> file change counter: 54
> database page count: 3236
> freelist page count: 0
> schema cookie:   21
> schema format:   4
> default cache size:  0
> autovacuum top root: 0
> incremental vacuum:  0
> text encoding:   1 (utf8)
> user version:0
> application id:  0
> software version:3018000
> number of tables:2
> number of indexes:   1
> number of triggers:  0
> number of views: 0
> schema size: 309
> sqlite>
>
> Here is the full schema:
> CREATE TABLE bids_list (report_id INTEGER, price_id INTEGER, premium_id
> INTEGER, period_inactive INTEGER DEFAULT (0) CHECK (period_inactive IN (0,
> 1)));
> CREATE TABLE bids_dates (report_id integer primary key, date text, current
> integer check (current in (0, 1)));
> CREATE INDEX idx_price_id ON bids_list (price_id);
> /* No STAT tables available */
>
> Is the pragma fixing something in the index perhaps?
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
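The reported sequence is easy to turn into a small harness. The sketch below uses a trimmed stand-in schema (not Ben's actual 3,077-page database); on releases containing Dan's fix, the page count should no longer grow:

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "repro.sqlite")
con = sqlite3.connect(db)
con.execute("CREATE TABLE bids_list (report_id INTEGER, price_id INTEGER)")
con.execute("CREATE INDEX idx_price_id ON bids_list (price_id)")
con.executemany("INSERT INTO bids_list VALUES (?, ?)",
                [(i, i % 100) for i in range(5000)])
con.commit()

def page_count():
    return con.execute("PRAGMA page_count").fetchone()[0]

con.execute("VACUUM")
before = page_count()

# The 3.18.0 misbehavior: integrity_check followed by VACUUM grew the file.
ok = con.execute("PRAGMA integrity_check").fetchone()[0]
con.execute("VACUUM")
after = page_count()
print(ok, before, after)
```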


Re: [sqlite] VACUUM causes Out of memory error

2016-11-03 Thread Richard Hipp
On 11/3/16, Christian Czech  wrote:
>
> This bug in a release version did a lot of damage to us.
>
> Especially frustrating was that nobody really cared when I reported it:

We cared, and we looked into the problem, but we couldn't reproduce it
at that time.

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM causes Out of memory error

2016-11-03 Thread Christian Czech

SQLite is a great tool! And I also use (and bought) the SEE version.

This bug in a release version did a lot of damage to us.

Especially frustrating was that nobody really cared when I reported it:

-

We have a problem with the latest stable release of SQLite. When compiled 
for 32-bit under Windows, vacuuming a database file larger than 2GB drives 
memory usage up to 2GB, and then VACUUM crashes with an out-of-memory error. 
It seems that a temp file is not generated, neither for a standard nor for 
a WAL database.


Anybody else got into this issue?



I only want to thank Stephan for reporting the same problem separately 8 
days later and for getting the attention.


And Dan for fixing it.

In the pre-release snapshot sqlite-snapshot-201611021450.tar.gz the bug is 
really fixed. No issues anymore. Thanks, finally!


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM causes Out of memory error

2016-11-02 Thread Bernardo Sulzbach

On 11/02/2016 05:18 PM, Christian Czech wrote:

It is a fundamental bug. I hope one day it gets fixed. Otherwise 3.15.0
is useless.



It has been fixed: http://www.sqlite.org/src/info/3028845329c9b7ac

--
Bernardo Sulzbach
http://www.mafagafogigante.org/
mafagafogiga...@gmail.com
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM causes Out of memory error

2016-11-02 Thread Christian Czech

I already reported this in this forum a while ago but nobody cared.

It happens only in the 32-bit version, when RAM usage gets over 2GB on 
Windows. On 64-bit this of course does not come up if you have enough RAM. 
It is only with 3.15.0. It is easy to reproduce.


The problem is that the intermediate temp file is never used, regardless of settings.
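For reference, the knob that is supposed to control where that intermediate storage lives is PRAGMA temp_store; the point of this bug report is that 3.15.0 ignored it. A minimal sketch of setting it:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# temp_store: 0 = compile-time default, 1 = file-backed, 2 = in-memory.
# The directory for file-backed temp storage comes from the OS (e.g. the
# SQLITE_TMPDIR environment variable on Unix), not from this pragma.
con.execute("PRAGMA temp_store = 1")  # ask for disk-backed temp objects
mode = con.execute("PRAGMA temp_store").fetchone()[0]
print(mode)
```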

It is a fundamental bug. I hope one day it gets fixed. Otherwise 3.15.0 is 
useless.




Am 2. November 2016 13:14:51 schrieb "Hennekens, Stephan" 
:



Hello,

Since last version of sqlite  (3.15.0) VACUUM causes an 'Out of memory' 
error, at least on my bigger databases (> 1 Gb). I came across this error 
when I installed the latest DISQLite3 version 
(http://www.yunqa.de/delphi/products/sqlite3/index). I was also able to 
reproduce the error when using Sqlite3.dll directly via a Delphi wrapper class.
In case needed you can download a compressed 2 GB file from here: 
http://www.synbiosys.alterra.nl/downloads/lvd.7z


Regards,

Stephan Hennekens
Wageningen Environmental Research (Alterra), The Netherlands
Tel: +31 (0)317 485887 | Mobile: +31 (0)6 20921403 | WUR internal: 85887
Skype: stephan.hennekens






___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM causes Out of memory error

2016-11-02 Thread Richard Hipp
On 11/2/16, Dan Kennedy  wrote:
>
> We can reproduce it here now too. There will likely be a fix soon...
>

Dan's fix has been checked in and is available in the "Pre-release
Snapshot" on the download page.  https://www.sqlite.org/download.html

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM causes Out of memory error

2016-11-02 Thread Dan Kennedy

On 11/02/2016 09:36 PM, Ralf Junker wrote:
I am the author of the DISQLite3 Delphi wrapper. Assurance to all 
DISQLite3 users: The leak is not DISQLite3 specific. DISQLite3 5.15.0 
behaves exactly like the official SQLite 3.15.0 release.


The problem can reliably be reproduced using the official sqlite3.exe 
3.15.0 binary on Win32.


Bisection shows that the error entered the SQLite code base here, 
which is the first version which runs out of memory during VACUUM:


http://www.sqlite.org/src/info/29d63059b4d2bb61


We can reproduce it here now too. There will likely be a fix soon...

Dan.





I am investigating further.

Ralf

On 01.11.2016 14:11, Hennekens, Stephan wrote:


Since last version of sqlite  (3.15.0) VACUUM causes an 'Out of
memory' error, at least on my bigger databases (> 1 Gb). I came
across this error when I installed the latest DISQLite3 version
(http://www.yunqa.de/delphi/products/sqlite3/index). I was also able
to reproduce the error when using Sqlite3.dll directly via a Delphi
wrapper class. In case needed you can download a compressed 2 GB file
from here: http://www.synbiosys.alterra.nl/downloads/lvd.7z

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] VACUUM causes Out of memory error

2016-11-02 Thread Donald Griggs
Stephan,

I don't have the skills to fix your difficulty, but FWIW, I did reproduce
your symptom using the database you provided in case that helps others
diagnose it properly.

I used:
-- SQLite version 3.15.0
-- official command line utility
-- Windows 7
-- 16GB installed ram, plenty of hard drive

I tried:
-- PRAGMA temp_store set to default (zero), and again with it set to 1.
-- Integrity check was OK.

Regards,
   Donald G.


==

C:\Users\c058905\Downloads>sqlite3 lvd.db
SQLite version 3.15.0 2016-10-14 10:20:30
Enter ".help" for usage hints.
sqlite> pragma integrity_check
   ...> ;
ok
sqlite> PRAGMA temp_store=1;
sqlite> PRAGMA temp_store;
1
sqlite> vacuum;
Error: out of memory

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread Roger Binns

On 01/05/2015 10:25 AM, Jim Wilcoxson wrote:
> It's easy to do this in Python using iterdump(),

Be aware that the pysqlite dump implementation is incomplete in many
ways.  Some issues:

- Doesn't dump in a transaction so will have irregularities if things
change during the dump

- Deletes the entire sqlite_sequence table instead of the entry for
the table being dumped/restored

- Runs ANALYZE on all tables during restore if sqlite_stat1 exists
(ignores later versions of stat), for every table restored

- Doesn't handle virtual tables (certainly can't restore them -
probably chokes on backing them up)

- Messes up if any table names or columns have a double quote in their
name (probably single quotes too)

- Doesn't disable foreign key processing during restore which can
result in a huge mess

- Who knows how many other issues

pysqlite's iterdump is 50 lines long.  The APSW dump code (also in
Python) is 230 lines, and the SQLite shell C code is about 200 lines.
 pysqlite is definitely missing many of the finer details.
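Two of those issues (no transaction, live foreign keys during restore) can be papered over when driving iterdump by hand. This is a sketch only, and it does nothing about the deeper problems listed above:

```python
import os
import sqlite3
import tempfile

def dump_restore(src_path, dst_path):
    """Copy a database via iterdump, restoring inside a single transaction
    with foreign-key enforcement off."""
    src = sqlite3.connect(src_path)
    # isolation_level=None gives manual transaction control.
    dst = sqlite3.connect(dst_path, isolation_level=None)
    dst.execute("PRAGMA foreign_keys = OFF")  # don't fire FKs mid-restore
    dst.execute("BEGIN")
    for stmt in src.iterdump():
        # iterdump emits its own BEGIN/COMMIT; skip them to keep one txn.
        if stmt.strip().upper() in ("BEGIN TRANSACTION;", "COMMIT;"):
            continue
        dst.execute(stmt)
    dst.execute("COMMIT")
    src.close()
    dst.close()

# Demo round-trip.
d = tempfile.mkdtemp()
a, b = os.path.join(d, "a.db"), os.path.join(d, "b.db")
c = sqlite3.connect(a)
c.execute("CREATE TABLE t(x)")
c.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])
c.commit()
c.close()
dump_restore(a, b)
n = sqlite3.connect(b).execute("SELECT count(*) FROM t").fetchone()[0]
print(n)
```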

Roger

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread Jay Kreibich

On Jan 5, 2015, at 5:03 PM, Eduardo Morras  wrote:

> On Mon, 5 Jan 2015 14:42:28 -0600
> Jay Kreibich  wrote:
> 
>> 
>> 
>> On Jan 5, 2015, at 12:25 PM, Jim Wilcoxson  wrote:
>> 
>>> Simon - instead of using vacuum, it's much faster to create a new
>>> database from the old one, then rename it.  It's easy to do this in
>>> Python using iterdump(), or you can connect to the new (empty)
>>> database, do your create table statements, attach the old database
>>> as olddb, then do:
>>> 
>>> insert into table1 select * from olddb.table1;
> insert into table2 select * from olddb.table2;
>>> 
>>> This also lets you do the create table stmts w/o indexes, and add
>>> the indexes after the inserts.  Not sure if that is faster or not
>>> for your data.
>> 
>> If you look at code for VACUUM, that’s more or less what it does…
>> only it is very smart about it, properly preserving ROWID values, as
>> well as exact table definitions, sequences, analytics, and all the
>> meta-data in the database.
> 
> Sqlite3 Backup API should work too, and covers your vacuum to file proposal.

No, the backup API does a page-by-page copy.  Its goal is to make a 
bit-perfect copy of the original database file.  It makes no attempt to VACUUM 
the database as it is backed up, and has no understanding of the data it is 
copying.  The backup API does not remove empty space or re-order pages, nor 
does it allow changes to the database page size and other alterations supported 
by VACUUM.  The backup API also operates outside the SQL transaction system.
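Python's sqlite3 wrapper happens to expose both operations (Connection.backup since Python 3.7, and VACUUM INTO via plain SQL on SQLite 3.27+), which makes the difference easy to see: the backup keeps the source's page count, free pages and all, while VACUUM INTO writes a compacted copy. A sketch:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, "src.sqlite")
bak_path = os.path.join(tmp, "backup.sqlite")
vac_path = os.path.join(tmp, "vacuumed.sqlite")

# Build a database with internal free space: insert, then delete half.
src = sqlite3.connect(src_path)
src.execute("CREATE TABLE t(a INTEGER PRIMARY KEY, b TEXT)")
src.executemany("INSERT INTO t(b) VALUES (?)", [("w" * 300,)] * 2000)
src.commit()
src.execute("DELETE FROM t WHERE a % 2 = 0")
src.commit()

def page_count(path):
    c = sqlite3.connect(path)
    n = c.execute("PRAGMA page_count").fetchone()[0]
    c.close()
    return n

# Backup API: page-for-page copy, free pages included.
bak = sqlite3.connect(bak_path)
src.backup(bak)
bak.close()

# VACUUM INTO: a rewritten, compacted copy.
src.execute(f"VACUUM INTO '{vac_path}'")
src.close()

print(page_count(bak_path), page_count(vac_path))
```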

VACUUM TO would be a simplified version of VACUUM that only requires 2x the 
disk space (rather than 3x) and much, much less I/O (possibly 1/3 or so).  It 
would be faster for many applications that control all access to a database, 
such as applications that use SQLite DBs as their document file format.

 -j

--  
Jay A. Kreibich < J A Y @ K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it, but showing 
it to the wrong people has the tendency to make them feel uncomfortable." -- 
Angela Johnson





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread Eduardo Morras
On Mon, 5 Jan 2015 14:42:28 -0600
Jay Kreibich  wrote:

> 
> 
> On Jan 5, 2015, at 12:25 PM, Jim Wilcoxson  wrote:
> 
> > Simon - instead of using vacuum, it's much faster to create a new
> > database from the old one, then rename it.  It's easy to do this in
> > Python using iterdump(), or you can connect to the new (empty)
> > database, do your create table statements, attach the old database
> > as olddb, then do:
> > 
> > insert into table1 select * from olddb.table1;
> > insert into table2 select * from olddb.table2;
> > 
> > This also lets you do the create table stmts w/o indexes, and add
> > the indexes after the inserts.  Not sure if that is faster or not
> > for your data.
> 
> If you look at code for VACUUM, that’s more or less what it does…
> only it is very smart about it, properly preserving ROWID values, as
> well as exact table definitions, sequences, analytics, and all the
> meta-data in the database.

Sqlite3 Backup API should work too, and covers your vacuum to file proposal.

You can, from the sqlite3 shell, .dump the database to a .sql file, or pipe 
the output to gzip or xz and use the compressed file for import again. You 
don't need full compression to minimize the I/O; -6 for gzip and -3 for xz is 
enough.
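That shell pipe can also be reproduced in-process; a sketch that writes a compressed SQL dump with iterdump and gzip level 6 (per the suggestion above) and reloads it:

```python
import gzip
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
db = os.path.join(tmp, "source.sqlite")
dump = os.path.join(tmp, "source.sql.gz")

con = sqlite3.connect(db)
con.execute("CREATE TABLE t(a INTEGER PRIMARY KEY, b TEXT)")
con.executemany("INSERT INTO t(b) VALUES (?)", [("z" * 50,)] * 500)
con.commit()

# Equivalent of `.dump | gzip -6`, done without the shell.
with gzip.open(dump, "wt", compresslevel=6, encoding="utf-8") as f:
    for stmt in con.iterdump():
        f.write(stmt + "\n")
con.close()

# Reload the compressed dump into a fresh database.
restored = sqlite3.connect(os.path.join(tmp, "restored.sqlite"))
with gzip.open(dump, "rt", encoding="utf-8") as f:
    restored.executescript(f.read())
n = restored.execute("SELECT count(*) FROM t").fetchone()[0]
print(n)
```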

> 
>   -j


---   ---
Eduardo Morras 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread Jay Kreibich


On Jan 5, 2015, at 12:25 PM, Jim Wilcoxson  wrote:

> Simon - instead of using vacuum, it's much faster to create a new
> database from the old one, then rename it.  It's easy to do this in
> Python using iterdump(), or you can connect to the new (empty)
> database, do your create table statements, attach the old database as
> olddb, then do:
> 
> insert into table1 select * from olddb.table1;
> insert into table2 select * from olddb.table2;
> 
> This also lets you do the create table stmts w/o indexes, and add the
> indexes after the inserts.  Not sure if that is faster or not for your
> data.

If you look at code for VACUUM, that’s more or less what it does… only it is 
very smart about it, properly preserving ROWID values, as well as exact table 
definitions, sequences, analytics, and all the meta-data in the database.

The “copy to new database” accounts for the 2x size requirement.  In the case 
of VACUUM, the data is then copied from the fresh DB back to the original 
database file in a transaction-safe way; this touches every page in the 
original file, requiring a rollback log of some sort (journal or WAL)— and that 
accounts for the 3x space requirement.

While you can roll your own, the method you propose is not transaction safe and 
is difficult to use in a production environment unless you know you can shut 
down all services using the DB and force them to re-open connections to the new 
file.  That’s true of some applications, but far from all of them.
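
For applications that can tolerate the caveats above, the roll-your-own copy might look like this in Python (a sketch; table and file names are illustrative):

```python
import os
import sqlite3

src_path, dst_path = "old.db", "new.db"   # illustrative file names
for path in (src_path, dst_path):
    if os.path.exists(path):
        os.remove(path)

# Build a source database with one table.
con = sqlite3.connect(src_path)
con.execute("CREATE TABLE table1(id INTEGER PRIMARY KEY, v TEXT)")
con.executemany("INSERT INTO table1(v) VALUES (?)", [("a",), ("b",)])
con.commit()
con.close()

# Connect to the new (empty) database, recreate the schema without
# indexes, attach the old file, and bulk-copy the rows across.
new = sqlite3.connect(dst_path)
new.execute("CREATE TABLE table1(id INTEGER PRIMARY KEY, v TEXT)")
new.execute("ATTACH DATABASE ? AS olddb", (src_path,))
new.execute("INSERT INTO table1 SELECT * FROM olddb.table1")
new.commit()
new.execute("DETACH DATABASE olddb")

# Indexes are cheaper to build after the bulk insert.
new.execute("CREATE INDEX idx_table1_v ON table1(v)")

copied = new.execute("SELECT COUNT(*) FROM table1").fetchone()[0]
print(copied)  # 2
new.close()
```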

About five years ago I proposed a VACUUM TO  variant of VACUUM that would 
more or less do what you’re talking about, without the copy-back operation.  It 
would only require 2x the disk space, and be much faster for applications that 
were able to use it.  I still think it would be a useful, low-cost feature.

http://www.mail-archive.com/sqlite-users@sqlite.org/msg50941.html

  -j


--  
Jay A. Kreibich < J A Y @ K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it, but showing 
it to the wrong people has the tendency to make them feel uncomfortable." -- 
Angela Johnson







Re: [sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread Jay Kreibich



On Jan 5, 2015, at 8:43 AM, Nelson, Erik - 2  
wrote:

> RSmith wrote on Monday, January 05, 2015 7:43 AM
>> 
>> 
>>> On 01/05/2015 06:22 PM, Simon Slavin wrote:
 I have a database file which is 120GB in size.  It consists of two huge 
 tables and an index. //...
>>> Probably running out of space wherever temp files are created.
>> 
> 
> I ran into this a while ago- used pragma temp_store_directory

Which is, for good or bad, deprecated.  Not sure why.

  -j

--  
Jay A. Kreibich < J A Y @ K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it, but showing 
it to the wrong people has the tendency to make them feel uncomfortable." -- 
Angela Johnson







Re: [sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread Nelson, Erik - 2
Simon Slavin wrote on  Monday, January 05, 2015 10:40 AM
> 
> On 5 Jan 2015, at 2:43pm, Nelson, Erik - 2
>  wrote:
> 
> > RSmith wrote on Monday, January 05, 2015 7:43 AM
> >
> >> I haven't done this, but I seem to remember there was a way to tell
> >> SQLite where to make temp files, or override the system default at
> >> any rate - which may help.
> >
> > I ran into this a while ago- used pragma temp_store_directory
> 
> That's what I'm trying now.  Unfortunately the directory I'm trying to
> use has spaces in and is several folders down a hierarchy.  I'm just
> hoping that the string I've used to set the path doesn't need spaces or
> slashes escaped.
> 
> Simon.

If you're using the command line, you could try a relative reference like "."

--
This message, and any attachments, is for the intended recipient(s) only, may 
contain information that is privileged, confidential and/or proprietary and 
subject to important terms and conditions available at 
http://www.bankofamerica.com/emaildisclaimer.   If you are not the intended 
recipient, please delete this message.


Re: [sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread Simon Slavin

On 5 Jan 2015, at 2:43pm, Nelson, Erik - 2  
wrote:

> RSmith wrote on Monday, January 05, 2015 7:43 AM
> 
>> I haven't done this, but I seem to remember there was a way to tell
>> SQLite where to make temp files, or override the system default at any
>> rate - which may help.
> 
> I ran into this a while ago- used pragma temp_store_directory

That's what I'm trying now.  Unfortunately the directory I'm trying to use has 
spaces in and is several folders down a hierarchy.  I'm just hoping that the 
string I've used to set the path doesn't need spaces or slashes escaped.

Simon.


Re: [sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread Nelson, Erik - 2
RSmith wrote on Monday, January 05, 2015 7:43 AM
> 
> On 2015/01/05 13:32, Dan Kennedy wrote:
> > On 01/05/2015 06:22 PM, Simon Slavin wrote:
> >> I have a database file which is 120GB in size.  It consists of two
> huge tables and an index. //...
> > Probably running out of space wherever temp files are created.
> 
> I haven't done this, but I seem to remember there was a way to tell
> SQLite where to make temp files, or override the system default at any
> rate - which may help.
> 

I ran into this a while ago- used pragma temp_store_directory



Re: [sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread RSmith


On 2015/01/05 13:32, Dan Kennedy wrote:

On 01/05/2015 06:22 PM, Simon Slavin wrote:

I have a database file which is 120GB in size.  It consists of two huge tables 
and an index. //...

Probably running out of space wherever temp files are created.


I haven't done this, but I seem to remember there was a way to tell SQLite where to make temp files, or override the system default 
at any rate - which may help.




Re: [sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread Simon Slavin

On 5 Jan 2015, at 11:32am, Dan Kennedy  wrote:

> Probably running out of space wherever temp files are created.

Oh, that makes a lot of sense.  The free space on the boot volume for that 
system is only 37GB.  Okay, I can move the file to another computer.

Thanks for the fast and useful answer and to Igor for another possibility.

Simon.


Re: [sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread Igor Korot
Hi, Simon,

On Mon, Jan 5, 2015 at 6:32 AM, Dan Kennedy  wrote:
> On 01/05/2015 06:22 PM, Simon Slavin wrote:
>>
>> I have a database file which is 120GB in size.  It consists of two huge
>> tables and an index.
>> Its journal_mode is DELETE.
>>
>> It is on a partition with 803GB of free space.  By my calculations I have
>> 6.7 times the amount of free space as the database is taking up.
>>
>> I use the SQLite shell tool version 3.7.9 to run VACUUM on it.  The Shell
>> tool bails out reporting
>>
>> CPU Time: user 2113.596836 sys 437.660032
>> Error: near line 5 : database or disk full.
>>
>> My understanding is that VACUUM can't take more than three times the
>> current size of the database file.  What does the above error mean under
>> these circumstances ?
>
>
> Probably running out of space wherever temp files are created.

Or the disk is failing...

Thank you.



Re: [sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread Dan Kennedy

On 01/05/2015 06:22 PM, Simon Slavin wrote:

I have a database file which is 120GB in size.  It consists of two huge tables 
and an index.
Its journal_mode is DELETE.

It is on a partition with 803GB of free space.  By my calculations I have 6.7 
times the amount of free space as the database is taking up.

I use the SQLite shell tool version 3.7.9 to run VACUUM on it.  The Shell tool 
bails out reporting

CPU Time: user 2113.596836 sys 437.660032
Error: near line 5 : database or disk full.

My understanding is that VACUUM can't take more than three times the current 
size of the database file.  What does the above error mean under these 
circumstances ?


Probably running out of space wherever temp files are created.



Re: [sqlite] Vacuum command fails

2014-07-18 Thread Simon Slavin

On 18 Jul 2014, at 4:45pm, Carlos Ferreira  wrote:

> This is on a standard windows 7 64 bits laptop.
> 
> I am not using a shell command. I am using the win32/win64 precompiled Dlls.

Can you please check to see if using VACUUM in the shell tool has the same 
problem with the same database ?  This is because the SQLite team wrote the 
shell tool themselves and if there's anything wrong with it they have full 
source and can do lots of testing.  Precompiled shell tool can be downloaded 
from the sqlite download page.

If the shell tool handles VACUUM okay and your code doesn't, that indicates 
that the problem is in your code.  That doesn't mean we won't help you figure 
out what's wrong, it just tells us where to look for anything unusual.

> I will check the pragma integrity check...

Please do.  You can use the shell tool for that too, before you try VACUUM in 
it.
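
For completeness, the same check can be run programmatically as well as in the shell; a minimal Python sketch (the database path is hypothetical):

```python
import sqlite3

# Open the database to check (hypothetical path; connect() will
# create an empty file if it does not exist yet).
con = sqlite3.connect("suspect.db")

# integrity_check returns a single row containing 'ok' when the file
# is sound; otherwise it returns one row per problem found.
result = con.execute("PRAGMA integrity_check").fetchall()
if result == [("ok",)]:
    con.execute("VACUUM")          # only worth trying on a sound file
else:
    for (message,) in result:
        print(message)
con.close()
```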

> One thing I noticed before was that the maximum size for a blob inside a
> record field is much smaller than what I imagined.
> 
> If I use blobs with more than 200 Mb, things go wrong when trying to read it
> back using the direct blob functions.
> 
> I am going to check carefully if there is any other limit I am hitting

Limits are here:



I don't think your problem is with limits, it's more likely to be memory 
handling in someone's code, but if the shell tool fails to do VACUUM we'll be 
very interested.

Simon.


Re: [sqlite] Vacuum command fails

2014-07-18 Thread Carlos Ferreira
Hi,

This is on a standard windows 7 64 bits laptop.

I am not using a shell command. I am using the win32/win64 precompiled Dlls.

I will check the pragma integrity check...

One thing I noticed before was that the maximum size for a blob inside a
record field is much smaller than what I imagined.

If I use blobs of more than 200 MB, things go wrong when trying to read them
back using the direct blob functions.

I am going to check carefully if there is any other limit I am hitting, 
because in this case the only difference between the database that fails and
the database that works is that the working database has much smaller
blobs... and approximately the same number of records.
 
Regards.

Carlos


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Simon Slavin
Sent: sexta-feira, 18 de Julho de 2014 16:34
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Vacuum command fails


On 18 Jul 2014, at 4:11pm, Carlos Ferreira <car...@csiberkeley.com> wrote:

> I also have a problem with VACUUM. ( both 32 and 64 bits )
> 
> Whenever I deal with small databases, it works fine, but as soon as 
> the DB is more than 200 MB, the VACUUM command bails out with code 21.
>   or 

Is this on a standard type of computer, or in a small device ?

If you use the VACUUM command from the SQLite shell tool (copy the database
onto a standard computer if necessary), rather than your own application, do
you get the same result ?

Does your database show any corruption when you use "PRAGMA
integrity_check;" ?

Simon.


Re: [sqlite] Vacuum command fails

2014-07-18 Thread Simon Slavin

On 18 Jul 2014, at 4:11pm, Carlos Ferreira  wrote:

> I also have a problem with VACUUM. ( both 32 and 64 bits )
> 
> Whenever I deal with small databases, it works fine, but as soon as the DB
> is more than 200 MB, the VACUUM command bails out with code 21.
>   or 

Is this on a standard type of computer, or in a small device ?

If you use the VACUUM command from the SQLite shell tool (copy the database 
onto a standard computer if necessary), rather than your own application, do 
you get the same result ?

Does your database show any corruption when you use "PRAGMA integrity_check;" ?

Simon.


Re: [sqlite] Vacuum command fails

2014-07-18 Thread Carlos Ferreira
Hi,

I also have a problem with VACUUM. ( both 32 and 64 bits )

Whenever I deal with small databases, it works fine, but as soon as the DB
is more than 200 MB, the VACUUM command bails out with code 21.

  or 

Any idea ?

I can make my own copy and rename it... that is probably what VACUUM does... but
using VACUUM would be very nice if it worked OK...

Thanks

Carlos



-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of RSmith
Sent: sexta-feira, 18 de Julho de 2014 15:41
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Vacuum command fails


On 2014/07/18 16:30, Nelson, Erik - 2 wrote:
> veeresh kumar wrote:
>
>   
>> Now I am running the application on the D: drive (it has 841 GB free
>>   space) and the C: drive has 333 GB free. It looks like VACUUM 
>> uses C: drive space ??? Also the command seems to be working fine...
>>
> Maybe I'm missing something, but I felt like this was pretty clearly
answered before.
>
> PRAGMA temp_store_directory returns and controls the directory that is
used for temp files.  You can query it, and it will  probably resolve to
your C: drive, which is probably where your TEMP or TMP environment
variables point.
>
> If your temp store directory doesn't have enough space, the VACUUM will
fail.

And to add, you can change this by changing the TEMP or TMP variable in
your OS; it is not up to SQLite to decide where your OS intends temp files
to be, but it is changeable.





Re: [sqlite] Vacuum command fails

2014-07-18 Thread RSmith


On 2014/07/18 16:30, Nelson, Erik - 2 wrote:

veeresh kumar wrote:

  

Now I am running the application on the D: drive (it has 841 GB free
  space) and the C: drive has 333 GB free. It looks like VACUUM uses
C: drive space ??? Also the command seems to be working fine...


Maybe I'm missing something, but I felt like this was pretty clearly answered 
before.

PRAGMA temp_store_directory returns and controls the directory that is used for 
temp files.  You can query it, and it will  probably resolve to your C: drive, 
which is probably where your TEMP or TMP environment variables point.

If your temp store directory doesn't have enough space, the VACUUM will fail.


And to add, you can change this by changing the TEMP or TMP variable in your OS; it is not up to SQLite to decide where your OS 
intends temp files to be, but it is changeable.






Re: [sqlite] Vacuum command fails

2014-07-18 Thread Nelson, Erik - 2
veeresh kumar wrote:

 
> Now I am running the application on the D: drive (it has 841 GB free
>  space) and the C: drive has 333 GB free. It looks like VACUUM uses
> C: drive space ??? Also the command seems to be working fine...
> 

Maybe I'm missing something, but I felt like this was pretty clearly answered 
before.  

PRAGMA temp_store_directory returns and controls the directory that is used for 
temp files.  You can query it, and it will  probably resolve to your C: drive, 
which is probably where your TEMP or TMP environment variables point.

If your temp store directory doesn't have enough space, the VACUUM will fail.

Erik



Re: [sqlite] Vacuum command fails

2014-07-18 Thread veeresh kumar
Hi Simon,,

Happy to provide you with answers...Answers Inline...

One thing I would like to know is whether the VACUUM command uses C: drive space 
or the drive where my application is running... It seems to require the C: 
drive to have enough space.




On Thursday, 17 July 2014 2:14 PM, Simon Slavin  wrote:
 



On 17 Jul 2014, at 10:03pm, veeresh kumar  wrote:

> I did google as well :-)... I did set PRAGMA temp_store = 1 (PRAGMA temp_store 
> = FILE). But it did not help. I get the SQLITE_NOMEM error message.

What version of SQLite are you using ?  The simplest way is probably to tell us 
the result of "sqlite_version()" given by your own application.


Veeresh: Sqlite Version 3.8.4.3

Are you using any PRAGMAs at all in your code ?  In other words, can we assume 
that all PRAGMAs are set to their defaults ?


Veeresh: Yes, we are using PRAGMA and
all PRAGMA are set to default except PRAGMA temp_store which is set to 1

If you open your database in the SQLite shell tool and execute "PRAGMA 
integrity_check;" on it, do you get an error ?

If you open your database in the SQLite shell tool and execute "VACUUM;" on it, 
do you still get an error ?

> It's not a space issue, as I have enough space on the hard drive.

Just to make this clear, do you have at least three times as much free space on 
your hard drive as the size of the database ?

Veeresh: The application runs on Windows.

Earlier I was running the application on another machine where the C: drive 
had little free space (about 1 GB) and the D: drive had 20 GB free, and the VACUUM 
failed there.

Now I am running the application on the D: drive (it has 841 GB free
 space) and the C: drive has 333 GB free. It looks like VACUUM uses 
C: drive space ??? Also the command seems to be working fine...

I am sorry for these detailed questions which may seem to be attacking you, but 
we have run out of obvious causes for this problem and are trying to check all 
the unlikely things.

Simon.


Re: [sqlite] Vacuum command fails

2014-07-17 Thread veeresh kumar
My earlier email didn't go through due to the size limit; maybe it will come through 
later. Here is my summary.

What I understood after this exercise, i.e. executing the VACUUM command, is:

- It requires free space on the C: drive, not the drive where your application is 
running. 

Since PRAGMA temp_store_directory is deprecated, we don't have the option to set the 
temp path.

- We have to forcefully set PRAGMA temp_store = FILE in order for the command 
to use disk space and not RAM. The default setting of temp_store isn't working 
in my case.


I ran the application after making sure it met the above criteria, and the command 
was successful. If my understanding above is correct, I guess the help documentation 
needs to be updated. 

Thank you
-Veeresh



On Thursday, 17 July 2014 10:30 AM, Simon Slavin  wrote:
 



On 17 Jul 2014, at 6:24pm, veeresh kumar  wrote:

> I tried to execute Vacuum on the sqlite database of size 14GB and it failed 
> with an error "out of memory". I was under the impression that it copies the 
> database file and then performs vacuum on that file. Here it looks like its 
> using RAM.

What's your

PRAGMA temp_store

set to ?

Simon.


Re: [sqlite] Vacuum command fails

2014-07-17 Thread RSmith


On 2014/07/17 23:03, veeresh kumar wrote:

I did google as well :-)... I did set PRAGMA temp_store = 1 (PRAGMA temp_store = 
FILE). But it did not help. I get the SQLITE_NOMEM error message.


Might I offer one more possible (although probably not very likely) cause. I recently posted an experiment in memory usage where the 
SQLITE_NOMEM error was returned after trying to execute a query for which the cache size limit exceeded the available system 
resources in the 32-bit build of the test software. Could it be that your page size multiplied by the number of cache pages (both 
selectable or viewable via their respective pragmas) exceeds about 2 GB? If so, a 14 GB VACUUM would probably try to use a lot of 
memory, and if the cache limit exceeds the resource limit, that error might happen.


Not very likely, but possible. Good luck!






Re: [sqlite] Vacuum command fails

2014-07-17 Thread Simon Slavin

On 17 Jul 2014, at 10:03pm, veeresh kumar  wrote:

> I did google as well :-)... I did set PRAGMA temp_store = 1 (PRAGMA temp_store 
> = FILE). But it did not help. I get the SQLITE_NOMEM error message.

What version of SQLite are you using ?  The simplest way is probably to tell us 
the result of "sqlite_version()" given by your own application.

Are you using any PRAGMAs at all in your code ?  In other words, can we assume 
that all PRAGMAs are set to their defaults ?

If you open your database in the SQLite shell tool and execute "PRAGMA 
integrity_check;" on it, do you get an error ?

If you open your database in the SQLite shell tool and execute "VACUUM;" on it, 
do you still get an error ?

> It's not a space issue, as I have enough space on the hard drive.

Just to make this clear, do you have at least three times as much free space on 
your hard drive as the size of the database ?

I am sorry for these detailed questions which may seem to be attacking you, but 
we have run out of obvious causes for this problem and are trying to check all 
the unlikely things.

Simon.


Re: [sqlite] Vacuum command fails

2014-07-17 Thread veeresh kumar
I did google as well :-)... I did set PRAGMA temp_store = 1 (PRAGMA temp_store = 
FILE). But it did not help. I get the SQLITE_NOMEM error message.

It's not a space issue, as I have enough space on the hard drive. Whenever the 
VACUUM command is executed, it uses RAM, and that's where it goes out of 
memory. 



On Thursday, 17 July 2014 12:11 PM, Roger Binns  wrote:
 


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 17/07/14 10:42, veeresh kumar wrote:
> When i execute the command PRAGMA temp_store, it returned me 0. What is
> the ideal value that needs to be set?

A quick google search would have found the answer:

  https://sqlite.org/pragma.html#pragma_temp_store

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iEYEARECAAYFAlPIH8wACgkQmOOfHg372QRtLwCfYUzGhB4UKejmTT0qcVRRHNQy
bRgAn2MdaOspER3bgVcwlgKjLq3G8akr
=B7gd
-END PGP SIGNATURE-


Re: [sqlite] Vacuum command fails

2014-07-17 Thread Simon Slavin

On 17 Jul 2014, at 6:24pm, veeresh kumar  wrote:

> I tried to execute Vacuum on the sqlite database of size 14GB and it failed 
> with an error "out of memory".

What kind of error is this ?  Are you getting SQLITE_NOMEM returned from a 
SQLite call or is this being returned from some part of your operating system ?

Simon.


Re: [sqlite] Vacuum command fails

2014-07-17 Thread Nelson, Erik - 2

Luuk wrote:

>so, 0 means temp is written to disk.

>Back to the the question.

>How much free disk space is there?

>I think you need more than 14Gb of free space if your database is 14Gb in size.

>http://lmgtfy.com/?q=sqlite+vacuum+how+much+disk+space+is+needed=1

>"This means that when VACUUMing a database, as much as twice the size of the 
>original database file is required in free disk space. "

>So, you will need 28Gb of free disk space..


Sometimes *where* the free space is makes a difference... for example, if the 
temporary file is generated in /tmp, then /tmp needs to have sufficient free 
space.  In my experience, /tmp is frequently too small to vacuum large 
databases.  The tmp location can be controlled by a (deprecated, IIRC, and 
hopefully never removed) pragma.

Erik



Re: [sqlite] Vacuum command fails

2014-07-17 Thread Luuk

On 17-7-2014 21:11, Roger Binns wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 17/07/14 10:42, veeresh kumar wrote:

When i execute the command PRAGMA temp_store, it returned me 0. What is
the ideal value that needs to be set?


A quick google search would have found the answer:

   https://sqlite.org/pragma.html#pragma_temp_store



so, 0 means temp is written to disk.

Back to the the question.

How much free disk space is there?

I think you need more than 14Gb of free space if your database is 14Gb 
in size.


http://lmgtfy.com/?q=sqlite+vacuum+how+much+disk+space+is+needed=1

"This means that when VACUUMing a database, as much as twice the size of 
the original database file is required in free disk space. "

(source: http://sqlite.org/lang_vacuum.html)


So, you will need 28Gb of free disk space..
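
A rough pre-flight check along these lines (file name illustrative; the twice-the-size rule follows the documentation quoted above):

```python
import os
import shutil
import sqlite3

db_path = "example.db"              # hypothetical database file
sqlite3.connect(db_path).close()    # ensure the file exists

db_size = os.path.getsize(db_path)
free = shutil.disk_usage(os.path.abspath(os.path.dirname(db_path) or ".")).free

# VACUUM needs room for a complete temporary copy of the database plus
# journal overhead, so insist on roughly twice the file size up front.
enough = free >= 2 * db_size
if enough:
    con = sqlite3.connect(db_path)
    con.execute("VACUUM")
    con.close()
else:
    print("not enough free space to VACUUM safely")
```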


Re: [sqlite] Vacuum command fails

2014-07-17 Thread Roger Binns

On 17/07/14 10:42, veeresh kumar wrote:
> When i execute the command PRAGMA temp_store, it returned me 0. What is
> the ideal value that needs to be set?

A quick google search would have found the answer:

  https://sqlite.org/pragma.html#pragma_temp_store
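
For reference, the pragma's values can be inspected and changed from code; a sketch using Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# temp_store: 0 = DEFAULT (compile-time choice, normally file),
#             1 = FILE, 2 = MEMORY.
default_mode = con.execute("PRAGMA temp_store").fetchone()[0]
print(default_mode)                 # usually 0

# Force temporary tables and indices onto disk rather than RAM.
con.execute("PRAGMA temp_store = FILE")
file_mode = con.execute("PRAGMA temp_store").fetchone()[0]
print(file_mode)                    # 1
con.close()
```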

Roger


Re: [sqlite] Vacuum command fails

2014-07-17 Thread veeresh kumar
When I execute the command PRAGMA temp_store, it returns 0. What is the 
ideal value that needs to be set?



On Thursday, 17 July 2014 10:30 AM, Simon Slavin  wrote:
 



On 17 Jul 2014, at 6:24pm, veeresh kumar  wrote:

> I tried to execute Vacuum on the sqlite database of size 14GB and it failed 
> with an error "out of memory". I was under the impression that it copies the 
> database file and then performs vacuum on that file. Here it looks like its 
> using RAM.

What's your

PRAGMA temp_store

set to ?

Simon.


Re: [sqlite] Vacuum command fails

2014-07-17 Thread Simon Slavin

On 17 Jul 2014, at 6:24pm, veeresh kumar  wrote:

> I tried to execute Vacuum on the sqlite database of size 14GB and it failed 
> with an error "out of memory". I was under the impression that it copies the 
> database file and then performs vacuum on that file. Here it looks like its 
> using RAM.

What's your

PRAGMA temp_store

set to ?

Simon.



Re: [sqlite] Vacuum command in a transaction?

2013-12-10 Thread RSmith


On 2013/12/11 01:41, veeresh kumar wrote:

Thanks Igor and Simon for your inputs. I was under the impression that VACUUM 
would also help performance since it does defragmentation.



Hi Veeresh,

VACUUM does de-fragment the database; it also amalgamates any transaction files and so on - so you are not wrong about that. The 
idea that it speeds up the database, or improves performance if you will, is not wrong, but the effect is mostly negligible.


Consider what is meant by "defragmentation" for a database - the pointer position of a "next page" holding data would be a different 
numerical value and/or some transactional data may still reside in a different file altogether, that's all. The things that actually 
require processing time (especially within SQLite) are not really affected by this in a meaningful way.  This may well have a slight 
seek-penalty for the physical hardware even though an un-fragmented DB file might still be very much fragmented on the physical 
layer, but the modern drive caches are adequate in size to make the penalty disappear - and if you are using SSD, the point becomes 
completely moot.


Further to this, as Simon noted, 50GB is easily handled for SQLite - are your queries taking excessive amounts of time to run (as 
opposed to on PostGres/MySQL or such) or is there some other reason you believe it would improve performance? (For instance, does it 
improve performance for you currently? - If so, something else might be wrong).


On the other hand, the process of Vacuuming a database file has to do a lot of data moving and consequently takes quite a while - 
but unless you have an app running 24/7 (such as Web-Page back-end) this should not matter much.


Cheers,
Ryan



Re: [sqlite] Vacuum command in a transaction?

2013-12-10 Thread veeresh kumar
Thanks Igor and Simon for your inputs. I was under the impression that VACUUM 
would also help performance since it does defragmentation.





On Tuesday, 10 December 2013 3:02 PM, Simon Slavin  wrote:
 

On 10 Dec 2013, at 8:04pm, veeresh kumar  wrote:

> -If we put inside a transaction, definitely we would get a SQLITE_BUSY 
> error.Any way to avoid this error? 

It is normal for SQL engines to prevent VACUUM inside a transaction, and to 
lock the database from other threads/processes/users while it operates.  For 
instance, Postgres will issue a consistent error message if you don't close 
your transaction before you do a VACUUM.  Since it is not a 'real' SQL command 
(in that it operates on file structure, not tables, rows or columns) it 
shouldn't be a part of a transaction.

Don't forget what VACUUM does: it doesn't operate just on the data in the 
database, it reorganises the entire database file, and can move everything 
into a different place and delete entire pages.  It can even change _rowid_s. 
There is no way it can operate while allowing other SQL commands to work, so it may as 
well issue an explicit lock.

> The database size that we use in the application can grow upto 50 GB.


A 50GB database in SQLite is no problem, and no reason to run VACUUM.  The time 
to run VACUUM is when

(a) space is at a premium, or you need to take copies and
(b) VACUUM will save significant space

Simon.


Re: [sqlite] Vacuum command in a transaction?

2013-12-10 Thread Simon Slavin

On 10 Dec 2013, at 8:04pm, veeresh kumar  wrote:

> - If we put it inside a transaction, we would definitely get a SQLITE_BUSY 
> error. Any way to avoid this error?

It is normal for SQL engines to prevent VACUUM inside a transaction, and to 
lock the database from other threads/processes/users while it operates.  For 
instance, Postgres will issue a consistent error message if you don't close 
your transaction before you do a VACUUM.  Since it is not a 'real' SQL command 
(in that it operates on file structure, not tables, rows or columns) it 
shouldn't be a part of a transaction.

Don't forget what VACUUM does: it doesn't operate just on data in the 
database, it reorganises the entire database file, and can move everything 
into a different place and delete entire pages.  It can even change _rowid_s.  
There's no way it can operate while allowing other SQL commands to work, so it 
may as well issue an explicit lock.

> The database size that we use in the application can grow upto 50 GB.


A 50GB database in SQLite is no problem, and no reason to run VACUUM.  The time 
to run VACUUM is when

(a) space is at a premium, or you need to take copies and
(b) VACUUM will save significant space

Simon.


Re: [sqlite] Vacuum command in a transaction?

2013-12-10 Thread Igor Tandetnik

On 12/10/2013 3:04 PM, veeresh kumar wrote:

I wanted to know what is the good way to execute Vacuum command.
- Inside a transaction or without transaction?


Doesn't really matter (unless you want to combine it with other 
statements in a single transaction). VACUUM command is no different 
from, say, a large UPDATE statement. Like any other statement in SQLite, 
if it's not part of an explicit transaction, it will create an implicit 
one, just for this statement.



- What would happen if we terminate the VACUUM command in between, when it's not 
executed in a transaction? Does the database get corrupted if we stop/kill the 
operation in between?


Same thing that happens if you interrupt an UPDATE statement: the usual 
rollback mechanism will repair the damage eventually.



- If we put it inside a transaction, we would definitely get a SQLITE_BUSY 
error. Any way to avoid this error?


You would get this error whether or not you start an explicit 
transaction. There is no way to use the database while VACUUM runs.
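As a concrete check of the transaction behaviour discussed here (current SQLite builds reject VACUUM inside an explicit transaction outright), a quick sketch using Python's bundled sqlite3 module; Python stands in for the C API used elsewhere in this thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a file-backed db behaves the same way
conn.isolation_level = None         # autocommit; we manage transactions ourselves
conn.execute("CREATE TABLE t(x)")
conn.execute("INSERT INTO t VALUES (1)")

# Inside an explicit transaction, VACUUM is rejected outright.
conn.execute("BEGIN")
try:
    conn.execute("VACUUM")
except sqlite3.OperationalError as e:
    print(e)  # cannot VACUUM from within a transaction
conn.execute("ROLLBACK")

# Outside a transaction it runs in its own implicit transaction, as Igor says.
conn.execute("VACUUM")
```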

--
Igor Tandetnik



Re: [sqlite] VACUUM and large indices: best practice?

2013-09-02 Thread Dennis Jenkins
On Fri, Aug 30, 2013 at 3:52 PM, Simon Slavin  wrote:

> Great question.  If all you are doing is writing (never any UPDATE or
> DELETE FROM or DROP) then VACUUM won't save any space in your database
> file.  It would defragment your database and might thereby increase speed a
> little, but this is mostly for old-fashioned systems that get most of their
> speed from 'read-ahead buffering'.  I would not expect much of a time
> saving (less than 1%) from any modern setup (test it !).  If it's
> difficult, annoying or time-consuming to do the VACUUM I wouldn't bother.
>

This has not been my experience.

I have a legacy system that exports data by writing it to a sqlite
database.  This file is only ever written to once (not updated or deleted
while being written, either), vacuumed, zipped and sent to a receiving
system.  The file size varies greatly, based on the source data. The
database schema has ~20 tables in it, some with many rows, some with very
few.  There are a few indices (created with the schema, before the data is
populated).

Vacuuming the database file shrinks it between 10% and 20%.  This surprised
me; I was expecting near zero savings. However, I did not research why, I
just accepted it and moved on.

I suggest to the OP to perform some experiments.  That is what I did.


Re: [sqlite] VACUUM and large indices: best practice?

2013-09-02 Thread Markus Schaber
Hi,

On behalf of uncle.f:

> Hello all,
> 
> I have a process that runs 24/7 and permanently inserts data into an SQLite
> database:
> 
> 1) Create new database at midnight, keep it open for writing for the next
> 24 hours
> 2) There are no deletions and not even reads during the database creation
> 3) I use only bulk inserts wrapped in a transaction (for each minute of
> data) that may contain a few thousands of rows in every transaction.
> 4) My journal mode is MEMORY.
> 5) Once the insertion process is done with I build several indices and close
> the database
> 6) After that the database file is moved over the network to a storage device
> 
> The database will only be used again for reading and will remain unmodified
> forever.
> 
> Each database is fairly large (3-5 GB) and considering it will never be
> modified again I would like to take all possible measures to ensure that the
> file size / fragmentation / data access times are all as low as possible.
> 
> So my question is about how to ensure most efficient data allocation for such
> scenario. I thought of several options:
> 
> 1) VACUUM before creating indices, then create indices, then move database
> off to storage
> 2) Create indices, then VACUUM, then move off to storage
> 3) Create indices, move to storage, VACUUM when already on storage (using
> SQLite process running locally on storage device)
> 
> ... or any other sequence of those 3 steps (vacuum, indexing, moving to
> storage)

Did you consider running ANALYZE, too? That will populate the statistics which 
the query planner can use to gain optimal index usage.
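A minimal look at what ANALYZE actually produces; the schema and data below are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples(id INTEGER PRIMARY KEY, tag TEXT)")
conn.execute("CREATE INDEX samples_tag ON samples(tag)")
conn.executemany("INSERT INTO samples(tag) VALUES (?)",
                 [("a",), ("b",), ("c",)] * 100)
conn.commit()

# ANALYZE writes per-index row/selectivity statistics into sqlite_stat1;
# the query planner consults this table when choosing indexes.
conn.execute("ANALYZE")
for tbl, idx, stat in conn.execute("SELECT tbl, idx, stat FROM sqlite_stat1"):
    print(tbl, idx, stat)  # e.g. samples samples_tag 300 100
```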



Best regards

Markus Schaber

CODESYS® is a trademark of 3S-Smart Software Solutions GmbH

Inspiring Automation Solutions

3S-Smart Software Solutions GmbH
Dipl.-Inf. Markus Schaber | Product Development Core Technology
Memminger Str. 151 | 87439 Kempten | Germany
Tel. +49-831-54031-979 | Fax +49-831-54031-50

E-Mail: m.scha...@codesys.com | Web: http://www.codesys.com | CODESYS store: 
http://store.codesys.com
CODESYS forum: http://forum.codesys.com

Managing Directors: Dipl.Inf. Dieter Hess, Dipl.Inf. Manfred Werner | Trade 
register: Kempten HRB 6186 | Tax ID No.: DE 167014915



Re: [sqlite] VACUUM silently fails, database size not changed

2013-08-31 Thread Mario M. Westphal
Unfortunately this does not help :-(
 
It may have still something to do with WAL.
Before I run the VACUUM the WAL file is quite small.
After the VACUUM has completed, it is about 20 MB - about the same size as
the properly compacted database would be.
But when I run a pragma wal_checkpoint; the WAL file does not shrink and
neither the database.
When I close the database, the WAL is deleted but the database remains at
120 MB.
 
Running a vacuum with the sqlite3 command line utility afterwards shrinks
the database to 20 MB. 
 
There must be some kind of condition or lock in my app which 
prevents the VACUUM from working properly. 
It returns success and the WAL file grows, but the database size is not 
reduced. I must be missing a step.


Re: [sqlite] VACUUM silently fails, database size not changed

2013-08-31 Thread Dan Kennedy

On 08/31/2013 08:29 PM, Mario M. Westphal wrote:

I run the VACUUM command at the end of a diagnosis and cleanup operation on
my database.
I use the SQLite 3 API on Windows 7. Latest version of SQLite.
My database uses the WAL mode.
  
The database size is 120 MB when I run the sqlite3_execute("VACUUM",...) 
command.
After about 20 seconds of heavy disk activity, sqlite3_execute returns with
SQLITE_OK.
The database size on disk remains unchanged, although I can see the size of 
the WAL file rise to about 20 MB.
  
I had expected that the database shrinks because a lot of data has been 
removed.
I closed my application and used the command line sqlite3.exe to VACUUM it.
The database shrinks from 120 MB to 20 MB!
  
Question:
  
I checked for open transactions: None.

I checked for pending statements (with sqlite3_next_stmt()): None.
  
Are there other reasons why VACUUM runs successfully (it takes maybe 20 seconds 
to return, and there is high disk activity during the execute) but 
apparently does nothing?


Perhaps you have auto-checkpointing turned off, or set to a very large
value:

  http://www.sqlite.org/pragma.html#pragma_wal_autocheckpoint

If so, the changes related to VACUUM are still in the WAL file. To
flush them through to (and truncate) the db file, run an
explicit checkpoint using "PRAGMA wal_checkpoint" after the VACUUM.

Dan.
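Dan's advice can be sketched end to end; a hypothetical demo with Python's bundled sqlite3 module (file names, row counts, and sizes are illustrative only):

```python
import os, sqlite3, tempfile

# In WAL mode the pages VACUUM rewrites can sit in the -wal file until a
# checkpoint pushes them into (and truncates) the main db file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t(b)")
conn.executemany("INSERT INTO t VALUES (?)", [(b"x" * 1024,)] * 3000)
conn.execute("DELETE FROM t")                    # pages become free, file keeps its size
conn.execute("PRAGMA wal_checkpoint(FULL)")      # make sure the bloat is in the db file
before = os.path.getsize(path)

conn.execute("VACUUM")
conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")  # flush + truncate, per the advice above
after = os.path.getsize(path)
print(before > after)  # True: the db only shrinks once the checkpoint lands
conn.close()
```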





Re: [sqlite] VACUUM and large indices: best practice?

2013-08-30 Thread Simon Slavin

On 30 Aug 2013, at 9:32pm, uncle.f  wrote:

> I thought of several options:
> 
> 1) VACUUM before creating indices, then create indices, then move database
> off to storage
> 2) Create indices, then VACUUM, then move off to storage
> 3) Create indices, move to storage, VACUUM when already on storage (using
> SQLite process running locally on storage device)
> 
> ... or any other sequence of those 3 steps (vacuum, indexing, moving to
> storage)

Thank you very much for your detailed explanation of your procedure which saves 
lots of questions.

> Another question would be, do I even need to VACUUM considering the way my
> database is being produced?

Great question.  If all you are doing is writing (never any UPDATE or DELETE 
FROM or DROP) then VACUUM won't save any space in your database file.  It would 
defragment your database and might thereby increase speed a little, but this is 
mostly for old-fashioned systems that get most of their speed from 'read-ahead 
buffering'.  I would not expect much of a time saving (less than 1%) from any 
modern setup (test it !).  If it's difficult, annoying or time-consuming to do 
the VACUUM I wouldn't bother.

If you do decide to VACUUM then of your three options listed above I would do 
(2).  Theoretically the indices would benefit from being vacuumed, and 
theoretically moving the VACUUMed file to the new storage medium should ensure 
it's defragmented.

> I would appreciate a reply from somebody  who is aware of SQLite internals
> rather than an "educated guess" :-)

Then you asked in the wrong place.  You take what you can get by posting here.  
The idea is that if someone posts a wrong answer it'll be seen and corrected by 
another reader.  It's open source information.

If you want only replies from experts I assume you'll be happy to pay Hwaci's 
consultancy fee.

Simon.


Re: [sqlite] VACUUM and PRAGMA temp_store

2013-06-25 Thread Mario M. Westphal
Great :-)
 
I guess that PRAGMA temp_store=MEMORY then does not add additional
performance on Windows, and I can safely leave it at DEFAULT or FILE.
This will avoid the excessive memory usage during VACUUM for my use case.
 
Thanks.
 
-- Mario


Re: [sqlite] VACUUM and PRAGMA temp_store

2013-06-23 Thread Richard Hipp
On Sun, Jun 23, 2013 at 5:06 AM, Mario M. Westphal  wrote:

>
> When creating/opening a file in Windows, an application can mark this file
> as "temporary" (flag: FILE_ATTRIBUTE_TEMPORARY) and Windows will try to
> keep
> it in memory if sufficient memory is available, avoiding all writes to the
> medium. Does SQLite use this feature on Windows when it creates temporary
> files?
>

Yes.  It sets FILE_ATTRIBUTE_TEMPORARY, FILE_ATTRIBUTE_HIDDEN, and
FILE_ATTRIBUTE_DELETE_ON_CLOSE.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] VACUUM fails once in a while with status=14 (unable to open database file)

2012-08-21 Thread Dominique Pellé
Richard Hipp  wrote:

> On Tue, Aug 21, 2012 at 5:30 AM, Dominique Pellé
> wrote:
>
>> Hi
>>
>> Once in a while, I see an error when doing a "VACUUM" operation.
>>
>> sqlite3_exec(...) returns status=14 (unable to open database file).
>> I suppose that it fails to open a temporary database when doing
>> the VACUUM operation, but I don't see why.
>>
>
> You might have plenty of disk space left, but do you have plenty of *temp*
> disk space left?  I assume you are on Linux?  Is /var/tmp mounted on a
> separate filesystem?  Do you have plenty of disk space in /var/tmp?

It's on Linux x86_84.

/var/tmp is on the / file system, which has 364 GB of available space.
The size of the database that I try to vacuum is "only" 2.5 GB,
so running out of disk space does not look possible.

Shouldn't I get another error message if somehow there was not
enough disk space anyway?  Like SQLITE_FULL = 13?


Simon Slavin wrote:

> On 21 Aug 2012, at 10:30am, Dominique Pellé wrote:
>
> > I wonder whether VACUUM of different databases happening
> > in parallel in different processes could use the same temporary
> > file names, causing conflicts.
>
> I don't think so.  SQLite makes up a random filename for the temporary
> database, then checks to see that the file doesn't already exist before
> it makes it.

It seemed unlikely indeed that such a bug (conflicting tmp files) would be
in SQLite, since SQLite is so widely used, but I had no other explanation
so far.

But... you say that (1) SQLite creates a random file name and THEN (2) checks
to see if it does not already exist (3) before it makes it. It seems that there
is a small window here between (2) and (3): two processes could create the same
file name at the same time if there is no mutual exclusion. I still doubt
that would be the issue in practice.
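The window described above is the classic check-then-create race; platform file APIs close it atomically with O_CREAT|O_EXCL, since the kernel guarantees that exactly one creator of a given name succeeds. A small illustration (not SQLite's actual code; the file name is made up):

```python
import errno, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "tmp_vacuum_demo")

# Racy pattern: another process could create `path` between these two steps.
if not os.path.exists(path):
    pass  # <- the window lives here

# Atomic pattern: the existence check and creation happen in one system call.
fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
os.close(fd)
try:
    os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
except FileExistsError as e:
    print(e.errno == errno.EEXIST)  # True: the second creator is rejected
```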

I asked in case I was missing something obvious. It looks like no.

I'll debug further, look at errno (etc.) or come up with a simple program
that reproduces it.

Regards
-- Dominique


Re: [sqlite] VACUUM fails once in a while with status=14 (unable to open database file)

2012-08-21 Thread Simon Slavin

On 21 Aug 2012, at 10:30am, Dominique Pellé  wrote:

> I wonder whether VACUUM of different databases happening
> in parallel in different processes could use the same temporary
> file names, causing conflicts.

I don't think so.  SQLite makes up a random filename for the temporary 
database, then checks to see that the file doesn't already exist before it 
makes it.

Simon.


Re: [sqlite] VACUUM fails once in a while with status=14 (unable to open database file)

2012-08-21 Thread Richard Hipp
On Tue, Aug 21, 2012 at 5:30 AM, Dominique Pellé
wrote:

> Hi
>
> Once in a while, I see an error when doing a "VACUUM" operation.
>
> sqlite3_exec(...) returns status=14 (unable to open database file).
> I suppose that it fails to open a temporary database when doing
> the VACUUM operation, but I don't see why.
>

You might have plenty of disk space left, but do you have plenty of *temp*
disk space left?  I assume you are on Linux?  Is /var/tmp mounted on a
separate filesystem?  Do you have plenty of disk space in /var/tmp?


>
> sqlite3_extended_errcode(db) also returns extendedErrorCode=14
>
> The code to vacuum looks simple, it does these operations:
>
> ==
> sqlite3* db;
> int status = sqlite3_open(dbName, &db);
> if (status != SQLITE_OK) {
>   ...  no error happening here.
> }
>
> char* errMsg = NULL;
> status = sqlite3_exec(db, "PRAGMA synchronous=OFF;", NULL, NULL, &errMsg);
> if (status != SQLITE_OK) {
>   ... no error happening here.
> }
>
> status = sqlite3_exec(db, "VACUUM;", NULL, NULL, &errMsg);
> if (status != SQLITE_OK) {
>... and here I get status is 14 once in a while?!
> }
>
> status = sqlite3_close(db);
> if (status != SQLITE_OK) {
>... no error happening here, even when above VACUUM failed.
> }
> ==
>
> To be precise:
>
> - It's doing VACUUM of fairly large DBs (1 GB to a few GB in general),
>   but the file system has far more free space.  I realize that
>   VACUUM needs free space (>= 2 or 3 times the size of the DB?).
>   That should not be the problem, and if it were, I would expect
>   a different error code than 14.
>
> - I'm doing several VACUUM in parallel of different DBs
>   in different processes. I'm not using threads.
>   Each process VACUUM a different database which is simple.
>
> - The DB is opened just before doing the VACUUM (as in above code)
>   There is no other opened connection to it.
>
> - It's not 100% reproducible. It happens maybe 1 in 10
>   times, which suggests that it could be a race condition.
>   Yet it's not using threads, and different processes use
>   different databases, so it does not seem possible to have
>   race conditions under those conditions.
>
> - Even though VACUUM fails, the DB looks fine.
>
> - It's using SQLite-3.7.3  (actually, Spatialite-2.4.0) on Linux x86_64.
>   (OK, maybe time to upgrade SQLite)
>
> - I've checked with Valgrind memory checker on tiny
>   databases: it sees no problem.
>
> -> Any idea what could be wrong?
>
> I wonder whether VACUUM of different databases happening
> in parallel in different processes could use the same temporary
> file names, causing conflicts.
>
> Regards
> -- Dominique
>



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] VACUUM command issue

2012-01-04 Thread Tarun
Thank you for your reply.

Running PRAGMA wal_checkpoint reduced the file size.
But I still don't know why the DB file size increased when I deleted all
records.
Can you please clarify this issue?

Thanks in advance.

On 04-Jan-2012 5:42 PM, "Richard Hipp"  wrote:

On Wed, Jan 4, 2012 at 6:52 AM, Tarun  wrote:

> Hi All,
>
> I am running p...
The size of the database file is not representative of the amount of
content in the file while there is an active write-ahead log.  Run "PRAGMA
wal_checkpoint;" after doing your inserts and/or after running VACUUM to
get the true database size.






>
> Then I ran VACUUM command.
>
> SQLite file size remained 113664, it did not get reduced.
>...
>



--
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] VACUUM command issue

2012-01-04 Thread Richard Hipp
On Wed, Jan 4, 2012 at 6:52 AM, Tarun  wrote:

> Hi All,
>
> I am running program in which I have created SQLite DB file using:
> sqlite3_open_v2()
>
> configured it:
>
> PRAGMA journal_mode = WAL
> PRAGMA synchronous=OFF
>
> Then created one table into SQLiteDB by using sqlite3_exec() in which
> I passed SQL command of creating table.
>
> After configuration, SQLite file size =  1024bytes
>
> Then I inserted 500 records into table using INSERT sql command in
> sqlite3_exec()
> SQLite file size =  69632
>
> Then I did deletion of all 500 records from table using DELETE sql query in
> sqlite3_exec()
> SQLite file size =  113664
>
> I don't know why the file size increased on deletion?


The size of the database file is not representative of the amount of
content in the file while there is an active write-ahead log.  Run "PRAGMA
wal_checkpoint;" after doing your inserts and/or after running VACUUM to
get the true database size.
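Richard's distinction between on-disk size and "true" size can be probed directly: PRAGMA page_count reads through the WAL, so page_count * page_size gives the logical size even before a checkpoint. A sketch (paths and row counts are made up):

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t(b)")
conn.executemany("INSERT INTO t VALUES (?)", [(b"x" * 1024,)] * 200)

# Logical size as seen through the WAL, before any checkpoint.
logical = (conn.execute("PRAGMA page_count").fetchone()[0]
           * conn.execute("PRAGMA page_size").fetchone()[0])
print("logical:", logical, "on disk:", os.path.getsize(path))

conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")
print(os.path.getsize(path) == logical)  # True once everything is checkpointed
conn.close()
```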





>
> Then I ran VACUUM command.
>
> SQLite file size remained  113664, it did not get reduced.
>
> But when I closed my application working on SQLite DB file, file size
> is reduced to 3072.
>
> I did not understand this behavior. I think as soon as I run VACUUM
> command, it should reduce file size at that moment.
> Please share correct behavior that should happen in this scenario shared
> above.
>
>
> Waiting for your response, I am stuck in my work.
>
>
> --
> Thanks and Regards,
> - Tarun Thakur
> Module Lead
> NEC HCL System Technologies, Noida
> www.nechclst.in
>



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] VACUUM

2011-11-11 Thread Jan Hudec
On Fri, Nov 11, 2011 at 00:52:23 +0100, Fabian wrote:
> 2011/11/10 Richard Hipp 
> > Because when you are inserting the Nth row, SQLite has no idea of how many
> > more rows will follow or how big the subsequent rows will be, so it has no
> > way to reserve contiguous space sufficient to hold them all.  The result is
> > that parts of the table and parts of the indices become interleaved in the
> > file.
> >
> But nothing is written to disk until I call commit (i'm using
> journal_mode=memory), so when SQLite has to start writing the data, it
> knows the exact total number of rows, and also that no other rows will
> follow.

That's not how journals work. Or rather, it is only how the "wal" journal works.
All the other journal modes, including "memory", work by writing the data
directly to the database and storing the information needed to return the
database to the last consistent state in the journal.

> But then again, maybe the format of the journal in memory, is an exact copy
> of the bytes it will write to disk, and in that case I understand that it
> would be very inefficient to start shuffling things, instead of just
> dumping it. I pictured it like a temporary table, in which case it would be
> fairly easy to restructure things before writing.

No, the journal does not contain the bytes that are going to be written to
disk at all, ever. It contains the bytes that were on the disk before. The
transaction is then committed by simply deleting the journal, and rolled back
by writing the content of the journal back into the file.

The only difference is the write-ahead log ("wal") journal mode, which was
introduced in version 3.7. In that case the journal contains something
like "patches" to be applied to the database. Transactions are committed by
just marking the data valid in the journal, and there is a special
"checkpoint" operation that actually writes the data to the database file
itself. It has the advantage that readers are not blocked by writers and is
often faster for small transactions, but it does not handle huge transactions
well (at $work we use SQLite to process huge datasets where one transaction often
writes a table with several million rows; WAL is not much use in such a context).

-- 
 Jan 'Bulb' Hudec 


Re: [sqlite] VACUUM

2011-11-10 Thread Fabian
2011/11/10 Richard Hipp 

>
> Because when you are inserting the Nth row, SQLite has no idea of how many
> more rows will follow or how big the subsequent rows will be, so it has no
> way to reserve contiguous space sufficient to hold them all.  The result is
> that parts of the table and parts of the indices become interleaved in the
> file.
>
>
But nothing is written to disk until I call commit (i'm using
journal_mode=memory), so when SQLite has to start writing the data, it
knows the exact total number of rows, and also that no other rows will
follow.

But then again, maybe the format of the journal in memory, is an exact copy
of the bytes it will write to disk, and in that case I understand that it
would be very inefficient to start shuffling things, instead of just
dumping it. I pictured it like a temporary table, in which case it would be
fairly easy to restructure things before writing.

Thanks for your explanation!


Re: [sqlite] VACUUM

2011-11-10 Thread Simon Slavin

On 10 Nov 2011, at 10:41pm, Fabian wrote:

> Is there some trick to force VACUUM-like output when building the
> database, and avoid to build it twice?

CREATE your TABLES.
BEGIN TRANSACTION.
INSERT your data.
END TRANSACTION.
CREATE your INDEXes.

This should get you the fastest process and the neatest indexes.  But I think 
that the indexes will still not be sorted in the same way as VACUUM sorts them, 
because each index has to be created in the order that the data is found.

Simon.
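Simon's recipe above can be spelled out; a sketch with Python's bundled sqlite3 module, using an invented schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None            # manage the transaction explicitly

# 1. CREATE your TABLES.
conn.execute("CREATE TABLE readings(ts INTEGER, value REAL)")

# 2-4. BEGIN, bulk-INSERT, END in one transaction.
conn.execute("BEGIN")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [(i, i * 0.5) for i in range(1000)])
conn.execute("COMMIT")

# 5. CREATE your INDEXes last, over data that is already in place.
conn.execute("CREATE INDEX readings_ts ON readings(ts)")
print(conn.execute("SELECT count(*) FROM readings").fetchone()[0])  # 1000
```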


Re: [sqlite] VACUUM

2011-11-10 Thread Richard Hipp
On Thu, Nov 10, 2011 at 5:41 PM, Fabian  wrote:

> I'm trying to understand the following: Why is it that when I create a new
> database, and fill it with rows, why can it not be commited to disk the
> same way as VACUUM would do it?


Because when you are inserting the Nth row, SQLite has no idea of how many
more rows will follow or how big the subsequent rows will be, so it has no
way to reserve contiguous space sufficient to hold them all.  The result is
that parts of the table and parts of the indices become interleaved in the
file.



> Currently I'm trying to manually vacuum the
> database by inserting rows sorted by table and by rowid, but as soon as the
> table contains INDEXes I cannot reproduce anywhere near the same output as
> a real VACUUM, because it writes the indexes first instead of last, for
> example. Is there some trick to force VACUUM-like output when building the
> database, and avoid to build it twice?
>



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Vacuum on sqlite file fills the .wal file and it never shrinks since

2011-11-09 Thread Yuriy Kaminskiy
Richard Hipp wrote:
> On Tue, Nov 8, 2011 at 10:04 AM, Orit Alul  wrote:
> 
>> Hi,
>>
>> I've performed a vacuuming operation (I ran the following command:
>> sqlite3.exe  VACUUM;).
>> It caused the WAL file to be the same size as the db file and it never
>> shrink back.
>>
>> For example I had a db file in the size of 1.8GB, wal file in the size of
>> 1.7MB, shm file in the size of 32KB.
>> I ran the VACUUM; command on the db file, now the db file got down to
>> 1.2GB, wal file got up to 1.2GB and the shm file got up to 9.5MB.
>>
>> Do you have  a clue what is the cause for this strange behavior? Do you
>> know if there's a command that can shrink the files back?
>>
> 
> The WAL file is not truncated (prior to the last database connection
> closing) because overwriting an existing file is faster than appending to a
> file.  Hence reusing an existing WAL is faster than truncating and
> appending.
> 
> But if the file is larger than you like, you can set an upper bound on its
> size using the journal_size_limit pragma.
> http://www.sqlite.org/pragma.html#pragma_journal_size_limit  With this
> pragma setting, the WAL will be truncated down to the requested size after
> each successful checkpoint operation.

IMO, that should be mentioned in the documentation. Currently it says this pragma
affects only journal_mode=persist and locking_mode=exclusive.



Re: [sqlite] Vacuum on sqlite file fills the .wal file and it never shrinks since

2011-11-09 Thread Richard Hipp
On Tue, Nov 8, 2011 at 10:04 AM, Orit Alul  wrote:

> Hi,
>
> I've performed a vacuuming operation (I ran the following command:
> sqlite3.exe  VACUUM;).
> It caused the WAL file to be the same size as the db file and it never
> shrink back.
>
> For example I had a db file in the size of 1.8GB, wal file in the size of
> 1.7MB, shm file in the size of 32KB.
> I ran the VACUUM; command on the db file, now the db file got down to
> 1.2GB, wal file got up to 1.2GB and the shm file got up to 9.5MB.
>
> Do you have  a clue what is the cause for this strange behavior? Do you
> know if there's a command that can shrink the files back?
>

The WAL file is not truncated (prior to the last database connection
closing) because overwriting an existing file is faster than appending to a
file.  Hence reusing an existing WAL is faster than truncating and
appending.

But if the file is larger than you like, you can set an upper bound on its
size using the journal_size_limit pragma.
http://www.sqlite.org/pragma.html#pragma_journal_size_limit  With this
pragma setting, the WAL will be truncated down to the requested size after
each successful checkpoint operation.
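The pragma is straightforward to set per connection; the 1 MiB bound below is an arbitrary illustrative value:

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("PRAGMA journal_mode=WAL")

# After each successful checkpoint that resets the log, SQLite truncates
# the -wal file back to at most this many bytes instead of leaving it
# at its high-water mark. The pragma returns the new limit.
limit = conn.execute("PRAGMA journal_size_limit=1048576").fetchone()[0]
print(limit)  # 1048576
conn.close()
```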



>
> Thanks,
> Orit
>
>
>
> [Creative Zone by MediaMind]
>



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] VACUUM command not working

2011-08-19 Thread Tarun
Thank you so much Richard.

I have tried WAL and then applied VACUUM, it is working fine.

Thanks again,
Tarun Thakur

On 8/19/11, Richard Hipp  wrote:
> On Thu, Aug 18, 2011 at 11:19 PM, Tarun  wrote:
>
>> Hi All,
>>
>> I am running program in which I have created SQLite DB file using:
>>
>> sqlite3async_initialize()
>>
>
> The easiest solution to your problem might be simply to not use the
> test_async.c module.  That module was created long ago to work around issues
> with fsync().  These days, using "PRAGMA journal_mode=WAL" does a better job
> of overcoming the same issue.
>
> So I suggest that you drop the test_async.c module and instead enable
> "PRAGMA journal_mode=WAL".  See if that doesn't work out better for you.
> And if not, we'll take up the problem from that point
>
>
>> sqlite3_open_v2()
>>
>> Then created one table into SQLiteDB by using sqlite3_exec() in which
>> I passed SQL command of creating table.
>>
>> Then I inserted records into table using INSERT sql command in
>> sqlite3_exec()
>>
>> Then I did sqlite3async_run()  to commit all write request from
>> pending queue to SQLite DB file on disk.
>>
>> Then I did deletion of records from table using DELETE sql query in
>> sqlite3_exec()
>>
>> Then I ran VACUUM command this way:
>>
>> sql = "VACUUM;";
>> rc = sqlite3_exec(asyncsql.pdb, sql, NULL, 0, );
>>
>>
>>
>> After successful running of above command I checked size of my
>> SQLiteDB file using system ("ls -lrt");
>>
>> NOTE: No compilation issues. The async I/O code file and sqlite shared
>> library are linked properly. All sqlite3_exec() calls ran successfully with
>> SQLITE_OK; no error code returned.
>>
>> Given below is output shown:
>>
>> [tarun@emu async_prg]$ ./sqlite_async_compKey vacum1.db 1000
>>
>> 1313661267 1313661269 1313661316
>> system1-> Size of file after records insertion
>> -rw-r--r--. 1 tarun tarun 559104 Aug 18 15:25 vacum1.db
>>
>> system2  -> Size of file after records deletion
>> -rw-r--r--. 1 tarun tarun 559104 Aug 18 15:25 vacum1.db
>>
>> VACUUM SQLite API SUCCESS
>> system3   -> Size of file after VACUUM command
>> -rw-r--r--. 1 tarun tarun 559104 Aug 18 15:25 vacum1.db
>>
>> My problem is why SQLiteDB file size is not reduced after VACUUM has
>> been run.  Please help to get VACUUM running in my case.
>> It would be really helpful if anyone can share working demo program
>> using VACUUM.
>> Waiting for your response, I am stuck in my work.
>> --
>> Thanks and Regards,
>> - Tarun Thakur
>> Module Lead
>> NEC HCL System Technologies, Noida
>> www.nechclst.in
>>
>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org
>


-- 
Regards,
- Tarun Thakur
Module Lead
NEC HCL System Technologies, Noida
www.nechclst.in


Re: [sqlite] VACUUM command not working

2011-08-18 Thread Richard Hipp
On Thu, Aug 18, 2011 at 11:19 PM, Tarun  wrote:

> Hi All,
>
> I am running program in which I have created SQLite DB file using:
>
> sqlite3async_initialize()
>

The easiest solution to your problem might be simply to not use the
test_async.c module.  That module was created long ago to work around issues
with fsync().  These days, using "PRAGMA journal_mode=WAL" does a better job
of overcoming the same issue.

So I suggest that you drop the test_async.c module and instead enable
"PRAGMA journal_mode=WAL".  See if that doesn't work out better for you.
And if not, we'll take up the problem from that point.
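A quick way to see the expected effect (deleted pages go to the freelist, and only VACUUM actually shrinks the database) is a few lines of Python using its built-in sqlite3 module; the table, row sizes, and file path below are invented for illustration:

```python
import sqlite3, os, tempfile

# Throwaway file-based database (WAL and VACUUM need a real file).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")   # the suggested journal mode
con.execute("CREATE TABLE t (v TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [("x" * 500,)] * 2000)
con.commit()

before = con.execute("PRAGMA page_count").fetchone()[0]
con.execute("DELETE FROM t")             # frees pages to the freelist...
con.commit()
after_delete = con.execute("PRAGMA page_count").fetchone()[0]
con.execute("VACUUM")                    # ...but only VACUUM shrinks the file
after_vacuum = con.execute("PRAGMA page_count").fetchone()[0]
print(before, after_delete, after_vacuum)
```

Deleting every row leaves page_count unchanged; the drop only shows up after the VACUUM.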


> sqlite3_open_v2()
>
> Then created one table into SQLiteDB by using sqlite3_exec() in which
> I passed SQL command of creating table.
>
> Then I inserted records into table using INSERT sql command in
> sqlite3_exec()
>
> Then I did sqlite3async_run()  to commit all write request from
> pending queue to SQLite DB file on disk.
>
> Then I did deletion of records from table using DELETE sql query in
> sqlite3_exec()
>
> Then I ran VACUUM command this way:
>
> sql = "VACUUM;";
> rc = sqlite3_exec(asyncsql.pdb, sql, NULL, 0, );
>
>
>
> After successful running of above command I checked size of my
> SQLiteDB file using system ("ls -lrt");
>
> NOTE: No compilation issues. The async IO code file and sqlite shared
> library are linked properly. All sqlite3_exec() calls ran successfully with
> SQLITE_OK; no error code returned.
>
> Given below is output shown:
>
> [tarun@emu async_prg]$ ./sqlite_async_compKey vacum1.db 1000
>
> 1313661267 1313661269 1313661316
> system1-> Size of file after records insertion
> -rw-r--r--. 1 tarun tarun 559104 Aug 18 15:25 vacum1.db
>
> system2  -> Size of file after records deletion
> -rw-r--r--. 1 tarun tarun 559104 Aug 18 15:25 vacum1.db
>
> VACUUM SQLite API SUCCESS
> system3   -> Size of file after VACUUM command
> -rw-r--r--. 1 tarun tarun 559104 Aug 18 15:25 vacum1.db
>
> My problem is why SQLiteDB file size is not reduced after VACUUM has
> been run.  Please help to get VACUUM running in my case.
> It would be really helpful if anyone can share working demo program
> using VACUUM.
> Waiting for your response, I am stuck in my work.
> --
> Thanks and Regards,
> - Tarun Thakur
> Module Lead
> NEC HCL System Technologies, Noida
> www.nechclst.in
>



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] vacuum and rowids

2011-04-29 Thread Stephan Beal
On Fri, Apr 29, 2011 at 3:37 AM, Simon Slavin  wrote:

> The thing is, there's no problem with referring to rowid, or ROWID or any
> of the other aliases when you do this.  The only problem is possible
> confusion for the programmer if you define a column with one of these names
> which /isn't/ an alias to SQLite's internal row numbers.  And it's just
> programmer confusion: if you know exactly how SQLite handles this then it's
> fine.
>

Another subtle problem/confusion point is that:

select * ...;

will not return 'rowid' unless it is explicitly declared or you do this:

select rowid, * ...

e.g.:
sqlite> select * from whiki_page_tag;
eek|tag1|
eek2|tag1|
eek|tag2|
sqlite> select rowid,* from whiki_page_tag;
1|eek|tag1|
2|eek2|tag1|
3|eek|tag2|
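The transcript above is easy to reproduce programmatically; here is a small sketch using Python's sqlite3 module (the column names are invented, since the table's schema isn't shown):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical schema standing in for whiki_page_tag's three columns.
con.execute("CREATE TABLE whiki_page_tag (page TEXT, tag TEXT, extra TEXT)")
con.executemany("INSERT INTO whiki_page_tag VALUES (?, ?, ?)",
                [("eek", "tag1", ""), ("eek2", "tag1", ""), ("eek", "tag2", "")])

plain = con.execute("SELECT * FROM whiki_page_tag").fetchall()
explicit = con.execute("SELECT rowid, * FROM whiki_page_tag").fetchall()

print(plain[0])     # ('eek', 'tag1', '') -- the implicit rowid is not returned
print(explicit[0])  # (1, 'eek', 'tag1', '') -- it appears only when asked for
```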


-- 
- stephan beal
http://wanderinghorse.net/home/stephan/


Re: [sqlite] vacuum and rowids

2011-04-29 Thread Nico Williams
On Thu, Apr 28, 2011 at 5:20 PM, Dave Hayden  wrote:
> On Apr 28, 2011, at 12:41 PM, Pavel Ivanov wrote:
>>> After more poking, it appears that rowids might not be changed by a vacuum 
>>> if I have an index on the table. Is this true? If so, is it something I can 
>>> rely on going forward?
>>
>> No, it's not true. The only way to keep your rowids intact is to declare an 
>> INTEGER PRIMARY KEY alias for it. And you better never reference "rowid" 
>> name in your application or your database schema.
>
> Can you explain this in more detail? I've never seen any prohibition on using 
> "rowid" in the SQLite docs before. The page on autoincrement says
>
> "You can access the ROWID of an SQLite table using one the special column 
> names ROWID, _ROWID_, or OID. Except if you declare an ordinary table column 
> to use one of those special names, then the use of that name will refer to 
> the declared column not to the internal ROWID."
>
> which suggests that referring to rowids is fine. If I add a "rowid integer 
> primary key" column on my tables, it seems like everything would work the way 
> I want it to with minimal code changes. Any reason that won't work?

As long as your 'rowid' column is declared as an INTEGER PRIMARY KEY,
then yes, VACUUM will preserve rowids and you'll be able to use the
rowid column name without other issues.

Others have pointed out that using 'rowid' for any column other than
an INTEGER PRIMARY KEY column would be confusing, but that's not what
you're after.  Even so, using 'rowid' is probably not in your best
interest when a more descriptive name is available.
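The difference is easy to demonstrate with a short sketch using Python's sqlite3 module (table and column names invented): delete a row to leave a rowid gap, vacuum, and compare a table with an implicit rowid against one whose rowid is aliased by an INTEGER PRIMARY KEY.

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "rowids.db")
con = sqlite3.connect(path)
# 'plain' relies on the implicit rowid; 'keyed' aliases it via INTEGER PRIMARY KEY.
con.execute("CREATE TABLE plain (v TEXT)")
con.execute("CREATE TABLE keyed (id INTEGER PRIMARY KEY, v TEXT)")
for t in ("plain", "keyed"):
    con.executemany("INSERT INTO %s (v) VALUES (?)" % t, [("a",), ("b",), ("c",)])
    con.execute("DELETE FROM %s WHERE v = 'b'" % t)   # leave a gap: rowids 1, 3
con.commit()
con.execute("VACUUM")

plain_ids = [r[0] for r in con.execute("SELECT rowid FROM plain ORDER BY rowid")]
keyed_ids = [r[0] for r in con.execute("SELECT rowid FROM keyed ORDER BY rowid")]
# keyed_ids is guaranteed to still be [1, 3]; plain_ids may have been
# renumbered by the VACUUM (e.g. to [1, 2]).
print(plain_ids, keyed_ids)
```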

Nico


Re: [sqlite] vacuum and rowids

2011-04-29 Thread Marco Bambini
Dave please take a look at this blog post:
http://www.sqlabs.com/blog/?p=51
--
Marco Bambini
http://www.sqlabs.com






On Apr 28, 2011, at 9:36 PM, Dave Hayden wrote:

> When the VACUUM feature was added I took a look at using it to keep database 
> file sizes down, but discovered that it changed rowids and messed up my 
> references between tables (or what I gather the database people call "foreign 
> keys"). I'm playing around with this again and it looks like rowids aren't 
> affected if I have an INTEGER PRIMARY KEY column, but I don't want to rebuild 
> the existing tables if I don't have to.
> 
> After more poking, it appears that rowids might not be changed by a vacuum if 
> I have an index on the table. Is this true? If so, is it something I can rely 
> on going forward?
> 
> Thanks!
> -Dave
> 



Re: [sqlite] vacuum and rowids

2011-04-28 Thread Pavel Ivanov
>> which suggests that referring to rowids is fine.
>
> It does not suggest referring to ROWIDs is fine, it only says that it
> can be done.  I think Pavel's point is that referencing ROWIDs is bad
> practice, so that is why he says you shouldn't do it.

Yes, that's right. You can refer to rowid, but it's bad practice,
especially if you refer to it in foreign keys. As you have seen, just
vacuum the database and suddenly you see bugs: incorrect foreign key
references, with no idea how they made their way into the database.

You are right that by just adding a column ROWID INTEGER PRIMARY KEY
you can fix things without changing much of the code, but it would be a
big source of confusion for any developer who looks at your code after
you, or who joins the project later. So in the short term, if you want
a quick hack, that would be a fairly decent solution. But in the long
term I would suggest adding a column ID INTEGER PRIMARY KEY and
changing all references from rowid to id.


Pavel


On Thu, Apr 28, 2011 at 9:31 PM, Rich Rattanni  wrote:
>> "You can access the ROWID of an SQLite table using one the special column 
>> names ROWID, _ROWID_, or OID. Except if you declare an ordinary table column 
>> to use one of those special names, then the use of that name will refer to 
>> the declared column not to the internal ROWID."
>>
>> which suggests that referring to rowids is fine.
>
> It does not suggest referring to ROWIDs is fine, it only says that it
> can be done.  I think Pavel's point is that referencing ROWIDs is bad
> practice, so that is why he says you shouldn't do it.
>
> --
> Rich
>


Re: [sqlite] vacuum and rowids

2011-04-28 Thread Simon Slavin

On 29 Apr 2011, at 2:31am, Rich Rattanni wrote:

>> "You can access the ROWID of an SQLite table using one the special column 
>> names ROWID, _ROWID_, or OID. Except if you declare an ordinary table column 
>> to use one of those special names, then the use of that name will refer to 
>> the declared column not to the internal ROWID."
>> 
>> which suggests that referring to rowids is fine.
> 
> It does not suggest referring to ROWIDs is fine, it only says that it
> can be done.  I think Pavel's point is that referencing ROWIDs is bad
> practice, so that is why he says you shouldn't do it.

The thing is, there's no problem with referring to rowid, or ROWID or any of 
the other aliases when you do this.  The only problem is possible confusion for 
the programmer if you define a column with one of these names which /isn't/ an 
alias to SQLite's internal row numbers.  And it's just programmer confusion: if 
you know exactly how SQLite handles this then it's fine.

Simon.


Re: [sqlite] vacuum and rowids

2011-04-28 Thread Rich Rattanni
> "You can access the ROWID of an SQLite table using one the special column 
> names ROWID, _ROWID_, or OID. Except if you declare an ordinary table column 
> to use one of those special names, then the use of that name will refer to 
> the declared column not to the internal ROWID."
>
> which suggests that referring to rowids is fine.

It does not suggest referring to ROWIDs is fine, it only says that it
can be done.  I think Pavel's point is that referencing ROWIDs is bad
practice, so that is why he says you shouldn't do it.

--
Rich


Re: [sqlite] vacuum and rowids

2011-04-28 Thread Dave Hayden
On Apr 28, 2011, at 12:41 PM, Pavel Ivanov wrote:

>> After more poking, it appears that rowids might not be changed by a vacuum 
>> if I have an index on the table. Is this true? If so, is it something I can 
>> rely on going forward?
> 
> No, it's not true. The only way to keep your rowids intact is to declare an 
> INTEGER PRIMARY KEY alias for it. And you better never reference "rowid" name 
> in your application or your database schema.

Can you explain this in more detail? I've never seen any prohibition on using 
"rowid" in the SQLite docs before. The page on autoincrement says

"You can access the ROWID of an SQLite table using one the special column names 
ROWID, _ROWID_, or OID. Except if you declare an ordinary table column to use 
one of those special names, then the use of that name will refer to the 
declared column not to the internal ROWID."

which suggests that referring to rowids is fine. If I add a "rowid integer 
primary key" column on my tables, it seems like everything would work the way I 
want it to with minimal code changes. Any reason that won't work?

-D



Re: [sqlite] vacuum and rowids

2011-04-28 Thread Pavel Ivanov
> After more poking, it appears that rowids might not be changed by a vacuum if 
> I have an index on the table. Is this true? If so, is it something I can rely 
> on going forward?

No, it's not true. The only way to keep your rowids intact is to
declare an INTEGER PRIMARY KEY alias for it. And you better never
reference "rowid" name in your application or your database schema.


Pavel


On Thu, Apr 28, 2011 at 3:36 PM, Dave Hayden  wrote:
> When the VACUUM feature was added I took a look at using it to keep database 
> file sizes down, but discovered that it changed rowids and messed up my 
> references between tables (or what I gather the database people call "foreign 
> keys"). I'm playing around with this again and it looks like rowids aren't 
> affected if I have an INTEGER PRIMARY KEY column, but I don't want to rebuild 
> the existing tables if I don't have to.
>
> After more poking, it appears that rowids might not be changed by a vacuum if 
> I have an index on the table. Is this true? If so, is it something I can rely 
> on going forward?
>
> Thanks!
> -Dave
>
>


Re: [sqlite] VACUUM can actually increase the DB file?

2010-09-21 Thread Oliver Peters
[...]

> Can you please run sqlite3_analyzer [...] on both
> the original database and the database after VACUUM
> and send me the output?

done, send 2 txt-files

greetings
Oliver



Re: [sqlite] VACUUM can actually increase the DB file?

2010-09-19 Thread Florian Weimer
* Oliver Schneider:

> just a few minutes ago I ran a VACUUM on a DB file and the size before
> was 2089610240 and afterwards 2135066624. Is this normal?

This is just typical behavior with B-trees because the fill factor
almost always changes when they are rebuilt.  It seems that SQLite
doesn't try to maximize the fill factor during index creation, so this
behavior is expected for many database files.

Choosing the best fill factor is often difficult.  If additional keys
are inserted into an index with a near-100% fill-factor, many page
splits are required.  But for indices on archive tables, a way to
create the most compact representation could be desirable.  Some
systems (such as PostgreSQL) make the fill factor a per-index
configuration parameter, and they also construct the B-tree from a
sorted representation, which makes it possible to obtain compact
indices if the user requests this (it's also faster than doing
repeated B-tree inserts).


Re: [sqlite] VACUUM can actually increase the DB file?

2010-09-18 Thread Jay A. Kreibich
On Sat, Sep 18, 2010 at 11:36:12AM -0700, Kyle McKay scratched on the wall:
> On Sep 17, 2010, at 16:08:42 PDT, Oliver Schneider wrote:
> > just a few minutes ago I ran a VACUUM on a DB file and the size before
> > was 2089610240 and afterwards 2135066624. Is this normal?

> Admittedly that's only about 0.4% growth in size, but I too was under  
> the impression that vacuum did not grow the database size.  Subsequent  
> vacuum commands do not seem to grow the database any further.

  If you have several indexes, especially on text values (or some other
  variable-length value) it is conceivable that a VACUUM will increase
  the overall file size.  This possibility will be stronger if the rows
  were initially inserted more or less in index order, but using
  explicit, non-sequential ROWID/INTEGER PRIMARY KEYs.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson


Re: [sqlite] VACUUM can actually increase the DB file?

2010-09-18 Thread Richard Hipp
On Sat, Sep 18, 2010 at 2:36 PM, Kyle McKay  wrote:

> On Sep 17, 2010, at 16:08:42 PDT, Oliver Schneider wrote:
> > just a few minutes ago I ran a VACUUM on a DB file and the size before
> > was 2089610240 and afterwards 2135066624. Is this normal?
>
> I've recently been noticing the same thing, for example:
>
> 443182080 newdb.sq3
> $ sqlite3 newdb.sq3
> SQLite version 3.6.22
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> PRAGMA freelist_count;
> 0
> sqlite> PRAGMA integrity_check;
> ok
> sqlite> vacuum;
> sqlite> PRAGMA freelist_count;
> 0
> sqlite> PRAGMA integrity_check;
> ok
> sqlite> .quit
> 444920832 newdb.sq3
>
> Admittedly that's only about 0.4% growth in size, but I too was under
> the impression that vacuum did not grow the database size.  Subsequent
> vacuum commands do not seem to grow the database any further.
>

Can you please run sqlite3_analyzer (available from
http://www.sqlite.org/download.html) on both the original database and the
database after VACUUM and send me the output?


>
> Kyle
>



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] VACUUM can actually increase the DB file?

2010-09-18 Thread Kyle McKay
On Sep 17, 2010, at 16:08:42 PDT, Oliver Schneider wrote:
> just a few minutes ago I ran a VACUUM on a DB file and the size before
> was 2089610240 and afterwards 2135066624. Is this normal?

I've recently been noticing the same thing, for example:

443182080 newdb.sq3
$ sqlite3 newdb.sq3
SQLite version 3.6.22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> PRAGMA freelist_count;
0
sqlite> PRAGMA integrity_check;
ok
sqlite> vacuum;
sqlite> PRAGMA freelist_count;
0
sqlite> PRAGMA integrity_check;
ok
sqlite> .quit
444920832 newdb.sq3

Admittedly that's only about 0.4% growth in size, but I too was under  
the impression that vacuum did not grow the database size.  Subsequent  
vacuum commands do not seem to grow the database any further.

Kyle


Re: [sqlite] VACUUM can actually increase the DB file?

2010-09-18 Thread Max Vlasov
> just a few minutes ago I ran a VACUUM on a DB file and the size before
> was 2089610240 and afterwards 2135066624. Is this normal?
>
>
If you have a copy of the old db and new db, can you check the results on
both databases of the following pragmas:

PRAGMA freelist_count;
PRAGMA integrity_check;

Max


Re: [sqlite] VACUUM and WAL

2010-08-03 Thread Andy Gibbs
On Tuesday, August 03, 2010 2:15 PM, Simon Slavin wrote:

> You can't attach files to messages to this forum.  This is done
> intentionally to stop the posts getting unacceptably long (people
> with problems tend to attach their entire source code).

Cunning!  Ok, in that case, if anyone is interested in the patch they
can email me direct and I'll pass it along that way ;o)

Thanks for the info

Andy



Re: [sqlite] VACUUM and WAL

2010-08-03 Thread Simon Slavin

On 3 Aug 2010, at 1:14pm, Andy Gibbs wrote:

> On Tuesday, August 03, 2010 2:09 PM, Andy Gibbs wrote:
> 
>> I've attached a patch to this email as a follow-up to this suggestion.
> 
> Sorry, I think I forgot to attach the patch file.  Hopefully it's attached 
> this time!

You can't attach files to messages to this forum.  This is done intentionally 
to stop the posts getting unacceptably long (people with problems tend to 
attach their entire source code).

Simon.


Re: [sqlite] VACUUM and WAL

2010-08-03 Thread Andy Gibbs

On Tuesday, August 03, 2010 2:09 PM, Andy Gibbs wrote:


I've attached a patch to this email as a follow-up to this suggestion.


Sorry, I think I forgot to attach the patch file.  Hopefully it's attached 
this time!


Andy


Re: [sqlite] VACUUM and WAL

2010-08-03 Thread Andy Gibbs

On Monday, August 02, 2010 4:25 AM, Nikolaus Rath wrote:


Hi,

Could someone clarify to me how the VACUUM command works if WAL is
enabled?

I would like to compact my database, but I am not sure if I should:

1) Run PRAGMA wal_checkpoint to get all outstanding commits into the
  database file and then VACUUM to compact the database file,

or should I

2) Run VACUUM to create a compact version of the database file (but
  writing into the WAL file) and then run PRAGMA wal_checkpoint to get
  the changes done by VACUUM into the database file itself?


The documentation says that "The VACUUM command cleans the main database
by copying its contents to a temporary database file and reloading the
original database file from the copy". But it seems to me that this
might be a remnant from the days of the old journal, since now the WAL
file could be used instead of the temporary database.



Hi,

This is not a definitive answer, I'm afraid -- I also was waiting for some 
insight on this -- but my understanding is that VACUUM still works 
effectively the same way - i.e. it still copies the contents of the main 
database into a temporary database file and reloads the original database 
file from the copy.  Also it only runs in exclusive mode (i.e. requires no 
open transactions in order to complete).


This has a number of implications:

1. It doesn't store the "results" of the VACUUM in the wal journal file (the 
temporary database file that it uses cannot be wal-enabled).  Whether you do 
a checkpoint before or not, I think shouldn't matter, since the algorithm 
that copies the data will work equally well for all intents and purposes in 
either case.


2. I'm not sure what a checkpoint after will actually achieve - isn't the 
journal file invalidated by the replacement of the original database file 
with the newly created one, and therefore is it not just ignored?  (this is 
a question really to the developers).


3. Because the database file is recreated, the wal journal mode setting is 
actually dropped (I believe this is a bug) but you won't notice until you 
close all the connections to the database file and then reconnect.


Given that this is the case, and now I'm making a suggestion to the sqlite 
developers, I believe the VACUUM command should, when used on a wal-enabled 
database, change the journal setting back to "delete", thereby forcing a 
checkpoint and clean-up of the wal journal file, then do the vacuum, then 
re-enable the wal journal setting on the new database file.
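For what it's worth, on recent SQLite versions the WAL setting does survive a VACUUM (the 3.7.0 behavior described here appears to have been fixed in later releases), which can be checked with a short script; this is a sketch using Python's sqlite3 module, and WAL requires a file-based database:

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "walvac.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t (x INTEGER)")
con.execute("VACUUM")
con.close()

# Reopen on a fresh connection: the persistent WAL flag should still be set.
mode = sqlite3.connect(path).execute("PRAGMA journal_mode").fetchone()[0]
print(mode)
```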


I've attached a patch to this email as a follow-up to this suggestion.  The 
patch very simply inserts OP_JournalMode opcodes before and after the normal 
OP_Vacuum opcode where a wal-enabled database is detected.  This 
necessitated a small enhancement to OP_JournalMode to optionally return a 
boolean on success rather than a string containing the journal mode.  I've 
added/modified some tests inside walbak.test.  Test walbak-1.10 demonstrates 
what I believe is the bug inside 3.7.0; the others had to be modified in 
accordance with how the wal journal file is now handled by this patch.


Anyway, I hope this is helpful.

Andy



Re: [sqlite] VACUUM not reclaiming space

2010-07-27 Thread Jay A. Kreibich
On Mon, Jul 26, 2010 at 12:28:42PM -0700, Taras Glek scratched on the wall:
>   Hi,
> I noticed an interesting disk-space behavior with VACUUM. If I vacuum my 
> places.sqlite(Firefox database), it's 49mb. If then copy my 2 biggest 
> tables with
>   CREATE table2 as select * from orig_table; drop table ; alter table2 
> rename to table;
> 
> Then vacuum, the db becomes 24mb. The same behavior occurs if I use the 
> create statement from sqlite_master to setup the copy-table.
> 
> Is this a bug or expected behavior?

  If the first table had any indexes, it would be expected behavior.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson


Re: [sqlite] VACUUM not reclaiming space

2010-07-27 Thread Richard Hipp
On Mon, Jul 26, 2010 at 3:28 PM, Taras Glek  wrote:

>  Hi,
> I noticed an interesting disk-space behavior with VACUUM. If I vacuum my
> places.sqlite(Firefox database), it's 49mb. If then copy my 2 biggest
> tables with
>  CREATE table2 as select * from orig_table; drop table ; alter table2
> rename to table;
>
> Then vacuum, the db becomes 24mb. The same behavior occurs if I use the
> create statement from sqlite_master to setup the copy-table.
>
> Is this a bug or expected behavior?
>

My guess is that orig_table contained indices.  You copy table does not copy
the indices, but your drop table does delete them.



>
> Thanks,
> Taras
>



-- 
-
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] VACUUM not reclaiming space

2010-07-27 Thread Max Vlasov
On Mon, Jul 26, 2010 at 11:28 PM, Taras Glek  wrote:

>  Hi,
> I noticed an interesting disk-space behavior with VACUUM. If I vacuum my
> places.sqlite(Firefox database), it's 49mb. ...
> ...
> Then vacuum, the db becomes 24mb.
>

Taras,
49 almost equal to 24*2. Can it be related to some encoding conversion, like
UTF16 in first case and UTF-8 in the other?

Max


Re: [sqlite] VACUUM & journal size

2010-03-17 Thread Max Vlasov
>  First, I'm sure it is nearly impossible to do this as a
>  guaranteed, atomic operation on most OSes and filesystems.  ...
>
>  Second, if this is meant to look like a cleanup operation on the
>  original file, the original file (including any filesystem meta-data)
>  should be kept intact.  ...
>
>  Last, file manipulations of this sort aren't supported by the current
>  VFS interface, and (IMHO) with good reason.  ...
> ...
>  However, it might be nice to have a "VACUUM TO " version of the
>  command.
>

Thanks, Jay (and Pavel too), you gave a very detailed description of the
situation. Although it's unlikely that your interesting suggestion (VACUUM TO)
will be implemented, one can always build one's own equivalent if it is
necessary (creating a subset of operations from sqlite3RunVacuum)

Max


Re: [sqlite] VACUUM & journal size

2010-03-16 Thread Jay A. Kreibich
On Tue, Mar 16, 2010 at 06:18:13PM +0300, Max Vlasov scratched on the wall:

> When I read the comments it was obvious that the algorithm uses very simple
> approach:
> Attach blank database, copy all data,  detach,  rename. Sure I might be
> wrong in details, but generally it looks like this.

  With one exception.  The temporary database is not renamed back to
  the original database name.  Rather, the data is copied back, page by
  page, from the temp database file back into the original database
  file.  The original database file is then truncated to the correct size.

> With this actions journal file(s) for the new database should not contain
> much data.

  That is correct.  The comments in the code support this.

> So my guess is one only needs at most the size of the actual data
> from the source base plus a very tiny journal file. But in fact (as everyone
> sees) it uses much more.

  Almost.  In addition to the original file, you need enough free space
  for the newly vacuumed database, as well as the journal files for
  both databases.

  As mentioned, the journal file for the temporary database is quite
  small.  However, the "copy back" operation creates a journal file on
  the original database.  This file is going to be similar in size to
  the original database, since every page in that file will be touched
  as part of the vacuum process.  The journal file is required, as any
  failure or error (including a process or power failure) will require
  the journal file to recreate the original pre-vacuum database file.
  
> So the question is what is so special about sqlite3RunVacuum that it needs
> more space than a simple emulation of its actions?

  The copy-back operation.

  Yes, it would be faster and easier to just copy the temp database
  back over to the original database name, but there are issues with
  that.
  
  First, I'm sure it is nearly impossible to do this as a
  guaranteed, atomic operation on most OSes and filesystems.  That's
  bad for data security.  
  
  Second, if this is meant to look like a cleanup operation on the
  original file, the original file (including any filesystem meta-data)
  should be kept intact.  Replacing the file by copying something on
  top of it won't do this.
  
  Last, file manipulations of this sort aren't supported by the current
  VFS interface, and (IMHO) with good reason.  The current vacuum
  process works regardless of the operating environment, including
  in-memory databases and any custom VFS modules.  Reaching outside
  of that context to things like filesystem and directory manipulations
  complicates this.  This makes modification of the current vacuum
  process unlikely.



  However, it might be nice to have a "VACUUM TO " version of the
  command.  This would allow the user/application to vacuum a database
  to another non-temporary file and skip the copy-back operation.  This
  would save considerable space and some amount of time.  You could
  then do as you want with the new file.
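This is, in fact, essentially what later shipped as the VACUUM INTO command in SQLite 3.27 (the subject of the 2019 thread above). A minimal sketch, assuming a Python build whose sqlite3 module links SQLite 3.27 or newer:

```python
import sqlite3, os, tempfile

tmp = tempfile.mkdtemp()
src, dst = os.path.join(tmp, "src.db"), os.path.join(tmp, "compacted.db")

con = sqlite3.connect(src)
con.execute("CREATE TABLE t (v TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [("row %d" % i,) for i in range(100)])
con.commit()

# Vacuum straight into a new file: no copy-back into src, so the extra
# space needed is just the new file plus a small journal.
con.execute("VACUUM INTO ?", (dst,))

rows = sqlite3.connect(dst).execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(rows)  # 100
```

The original file is left untouched; the caller can then rename or copy the compacted file as needed.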

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM & journal size

2010-03-16 Thread Pavel Ivanov
> So the question is what is so special about sqlite3RunVacuum that it needs
> more space than a simple emulation of its actions?

I believe the rename cannot be made atomic. So in the case of an OS
crash you could be left with no database at all - neither the old one
nor the new one. Deleting the old file can also be problematic when
other processes have open handles to it. Even if a Unix system allows
it, those processes won't know that they should re-open their handles
to work with the new file.
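
[Editor's note: the stale-handle problem is easy to demonstrate. On
POSIX, rename(2) over an existing name is in fact atomic, but the
handle problem Pavel describes remains: a reader that already has the
old file open keeps reading the old inode. The sketch below uses plain
text files as stand-ins for the database; on Windows the os.replace()
call itself typically fails with a sharing violation while the handle
is open:]

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
db = os.path.join(tmp, "db.sqlite")
vacuumed = os.path.join(tmp, "db.sqlite.new")

with open(db, "wb") as f:
    f.write(b"OLD CONTENTS")     # stand-in for the pre-vacuum database
with open(vacuumed, "wb") as f:
    f.write(b"NEW CONTENTS")     # stand-in for the vacuumed copy

reader = open(db, "rb")          # another "process" with an open handle
os.replace(vacuumed, db)         # atomic rename on POSIX

stale = reader.read()            # still the old inode, not the new file
reader.close()
with open(db, "rb") as f:
    fresh = f.read()             # a fresh open sees the new file
```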


Pavel



Re: [sqlite] VACUUM & journal size

2010-03-16 Thread Max Vlasov
> This means that to VACUUM a SQLite database of size X, you need at
> least 2X of _additional_ free disk space available.  That seems rather
> wasteful, just looking at it as a SQLite user.  Although
> programmatically there may be reasons for it that I'm not aware of.
>
>

Hmm, I did some research, and I think VACUUM's free disk space
requirements are too large.

When I read the comments it was obvious that the algorithm uses a very
simple approach: attach a blank database, copy all the data, detach,
rename. I might be wrong on the details, but generally it looks like
this.

With these actions the journal file(s) for the new database should not
contain much data, so my guess is one only needs at most the size of
the actual data from the source database plus a very tiny journal file.
But in fact (as everyone sees) it uses much more.

I just tried to perform VACUUM on a test database and to emulate its
actions with a manual attach/copy on the same database.

We have:
- truecrypt volume, size: 10M in order to see all the "disk full" errors.
- new connection, testdb.db3
- CREATE TABLE [TestTable] ([Id] INTEGER PRIMARY KEY AUTOINCREMENT,[Value]
Text)
- 100,000 times
INSERT INTO TestTable (Value) VALUES
("12345678901234567890123456789012345678901234567890")
Final size: 6M
- After we have completely filled the db, let's remove half of the records.
DELETE FROM TestTable WHERE Id < 5

Now we have our 6M db (ready to be decreased to ~3) and about 4M of free
space.

So two scenarios:

 1. Simple vacuum

Trying to perform VACUUM:
Result: "database or disk is full" error.

 2. Emulation of Vacuum actions.

testdb_new.db3 is a new connection (tiny file 3k in size)

ATTACH 'Q:\testdb_new.db3' AS newdb

BEGIN TRANSACTION
 ; the following operations create two db-journal files so rollback is
possible.

CREATE TABLE newdb.TestTable ([Id] INTEGER PRIMARY KEY AUTOINCREMENT,[Value]
Text)
 ; actually sqlite3RunVacuum does some string formatting with results from
sqlite_master and performs the corresponding queries, so this CREATE TABLE
is probably the same query.

INSERT INTO newdb.TestTable SELECT * FROM main.TestTable
  ; the operation is ok, Windows reports 3M free, but this is probably due
to cached writes.

END TRANSACTION
  ; OK, testdb_new is 3M and 1M is free

=
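
[Editor's note: the manual emulation above translates directly into a
script. A sketch in Python; the table shape follows Max's test, but the
exact DELETE cutoff is my own choice for illustration:]

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "testdb.db3")
dst = os.path.join(tmp, "testdb_new.db3")

# Build the source database, then delete half the rows so it holds
# free pages that a vacuum could reclaim.
conn = sqlite3.connect(src)
conn.execute("CREATE TABLE TestTable "
             "([Id] INTEGER PRIMARY KEY AUTOINCREMENT, [Value] Text)")
conn.executemany("INSERT INTO TestTable (Value) VALUES (?)",
                 (("1234567890" * 5,) for _ in range(100000)))
conn.execute("DELETE FROM TestTable WHERE Id <= 50000")
conn.commit()
conn.close()

# Emulate VACUUM by hand: copy the live rows into a fresh attached db.
conn = sqlite3.connect(src, isolation_level=None)  # manual transactions
conn.execute("ATTACH ? AS newdb", (dst,))
conn.execute("BEGIN")
conn.execute("CREATE TABLE newdb.TestTable "
             "([Id] INTEGER PRIMARY KEY AUTOINCREMENT, [Value] Text)")
conn.execute("INSERT INTO newdb.TestTable SELECT * FROM main.TestTable")
conn.execute("COMMIT")
conn.close()
```

  The new file ends up roughly the size of the live data, and the
  journals involved stay small, which is exactly the contrast with the
  built-in VACUUM that the thread is probing.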

So the question is what is so special about sqlite3RunVacuum that it needs
more space than a simple emulation of its actions?

Thanks

Max


Re: [sqlite] VACUUM & journal size

2010-03-15 Thread Scott Hess
On Mon, Mar 15, 2010 at 11:18 AM, Scott Hess  wrote:
> AFAICT, the operation to copy the pages back _is_ journaled, and the
> journal will get any pages which are overwritten in the front of the
> main database.  If the initial database has half of the pages used, it
> seems like the journal could range from almost nothing (if the used
> pages were mostly at the end) to almost half the database size (if the
> used pages were mostly at the front), so to be safe one would need 2x
> the final database size available.

As an aside, it occurs to me that one could implement an alternate
VACUUM which wrote the new database back to the main database file,
journalling the original versions of those pages.  The main database
would be locked for longer, but that might be a fair tradeoff for some
applications.

-scott


Re: [sqlite] VACUUM & journal size

2010-03-15 Thread Jay A. Kreibich
On Mon, Mar 15, 2010 at 11:18:32AM -0800, Scott Hess scratched on the wall:
> On Sun, Mar 14, 2010 at 5:18 PM, Jay A. Kreibich  wrote:

> >  While I have not tested this, I was under the impression that the
> >  journal file is very very small, as no modifications are made to the
> >  database, other than the final low-level copy (which is not a
> >  journaled operation).

> AFAICT, the operation to copy the pages back _is_ journaled, and the
> journal will get any pages which are overwritten in the front of the
> main database.

  Taking a closer look at the comments in the code, I believe you are
  correct.  That makes sense anyway, as there is still a need to be
  able to back out.
  
  There are some comments in the code about how turning journaling
  off is slower than leaving it on, but I see now that those comments
  are for the temp database, not the original file.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


Re: [sqlite] VACUUM & journal size

2010-03-15 Thread Scott Hess
On Sun, Mar 14, 2010 at 5:18 PM, Jay A. Kreibich  wrote:
> On Sun, Mar 14, 2010 at 07:19:59PM -0400, Matthew L. Creech scratched on the wall:
>> I have a SQLite database with one large table, and I'd like to shrink
>> the size of that table to free up space in the filesystem.  My problem
>> is that the database is (for example) 100 MB, and I have 80 MB of free
>> filesystem space.  I figured that I could DELETE, say, half of the
>> records from the table, then VACUUM, and the VACUUM would
>> [temporarily] need ~50 MB of free space for the journal (since that's
>> how much real data there is).
>>
>> Instead, I'm finding that it needs a full 100 MB for the journal, even
>> though once the VACUUM succeeds the resulting DB is only 50 MB.  As a
>> result, I'm stuck unable to shrink the database, since VACUUM fails
>> with a disk I/O error (out of space), seemingly no matter many entries
>> I remove ahead of time.  I know the space is being freed, since
>> "PRAGMA freelist_count" shows the expected numbers.  So presumably
>> this is just an artifact of the way VACUUM is implemented internally.
>
>  Are you sure it is the journal file that is growing too large?
>
>  VACUUM works by making a logical copy of the database from the
>  original database into a temp database.  This restructures the
>  database and recovers space.  The temp database is then copied back
>  to the original database using low-level page copy.  This low-level
>  copy then truncates the original database file, recovering
>  filesystem space.
>
>  This also means the total space required to VACUUM a database is:
>  [old database size] + [new database size] + [journal file]
>
>  While I have not tested this, I was under the impression that the
>  journal file is very very small, as no modifications are made to the
>  database, other than the final low-level copy (which is not a
>  journaled operation).

AFAICT, the operation to copy the pages back _is_ journaled, and the
journal will get any pages which are overwritten in the front of the
main database.  If the initial database has half of the pages used, it
seems like the journal could range from almost nothing (if the used
pages were mostly at the end) to almost half the database size (if the
used pages were mostly at the front), so to be safe one would need 2x
the final database size available.

I could be mis-reading the code.

-scott


Re: [sqlite] VACUUM & journal size

2010-03-15 Thread D. Richard Hipp

On Mar 14, 2010, at 7:19 PM, Matthew L. Creech wrote:

> Hi,
>
> I have a SQLite database with one large table, and I'd like to shrink
> the size of that table to free up space in the filesystem.
>
> I'm finding that it needs a full 100 MB for the journal, even
> though once the VACUUM succeeds the resulting DB is only 50 MB.
>
> Any tips are appreciated.  Thanks!

PRAGMA journal_mode=OFF;

Be warned, though, that if your session crashes or you lose power, your
database will be corrupted if there is no journal.
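
[Editor's note: a sketch of this journal-off workaround in Python, with
the same caveat - a crash or power loss mid-VACUUM corrupts the file.
The table and sizes here are made up for illustration:]

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
db = os.path.join(tmp, "big.db")

# Build a database and delete half of it, leaving pages to reclaim.
conn = sqlite3.connect(db)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO t (v) VALUES (?)",
                 (("x" * 100,) for _ in range(20000)))
conn.execute("DELETE FROM t WHERE id % 2 = 0")
conn.commit()
conn.close()
before = os.path.getsize(db)

conn = sqlite3.connect(db)
# DANGER: with no journal, a crash or power loss during the VACUUM
# leaves the database file corrupted. Only do this if you can rebuild.
conn.execute("PRAGMA journal_mode=OFF")
conn.execute("VACUUM")
conn.close()
after = os.path.getsize(db)
```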

D. Richard Hipp
d...@hwaci.com





Re: [sqlite] VACUUM & journal size

2010-03-15 Thread Matthew L. Creech
On Mon, Mar 15, 2010 at 2:11 AM, Matthew L. Creech  wrote:
>
> I'll give this a try tomorrow on a real device with journaling off,
> and see how much space it uses in /tmp with journaling turned off.
>

I ran some tests on a real device with a real database, and got the
following results:



Test 1: 250,000 entries, normal VACUUM

Before the test:
- 27.9 MB database
- No journal
- 37.5 MB free disk space

By the time the journal file starts growing:
- 27.9 MB database
- Small journal
- ~15 MB free disk space

Toward the end of the operation (before failure):
- 27.9 MB database
- 14.6 MB journal
- < 2 MB free disk space

The VACUUM operation fails with a disk I/O error, presumably due to
running out of space.


Test 2: 250,000 entries, journal_mode = OFF, VACUUM

Before the test:
- 27.9 MB database
- 37.5 MB free disk space

Toward the end of the operation:
- 28.1 MB database
- 10.6 MB free disk space

The VACUUM operation succeeds.


Test 3: 250,000 entries trimmed to 100,000 entries, normal VACUUM

Before the test:
- 27.9 MB database
- No journal
- 37.5 MB free disk space

By the time the journal file starts growing:
- 27.9 MB database
- Small journal
- ~33 MB free disk space

Toward the end of the operation:
- 27.9 MB database
- ~28 MB journal
- ~5 MB free disk space

Afterward:
- 11.2 MB database
- 54.3 MB free disk space

The VACUUM operation succeeds.


Test 4: 250,000 entries trimmed to 100,000 entries, journal_mode = OFF, VACUUM

Before the test:
- 27.9 MB database
- 37.5 MB free disk space

Toward the end of the operation:
- 28.1 MB database
- 33.3 MB free disk space

The VACUUM operation succeeds.




I never did see any temporary files, but space was obviously being
taken up for a temp database, so I assume that SQLite opens a file
then unlink()s it or something like that.  It looks like in the normal
(journaled) case, the journal file consistently grows about as large
as the original database, but the extra disk space used up by the
temporary database depends on the new database size.

So Jay's estimate of disk usage:

 [old database size] + [new database size] + [journal file]

is correct, but for the normal VACUUM case, [journal file] is
basically equivalent to [old database size].  So it's really just:

 (2 * [old database size]) + [new database size]

This means that to VACUUM a SQLite database of size X, you need at
least 2X of _additional_ free disk space available.  That seems rather
wasteful, just looking at it as a SQLite user.  Although
programmatically there may be reasons for it that I'm not aware of.
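
[Editor's note: that estimate can be wrapped in a quick
back-of-the-envelope function. This is my own reading of Matthew's
measurements, a rough lower bound rather than an official formula:]

```python
def vacuum_free_space_needed(old_size, new_size):
    """Roughly how much *additional* free space a journaled VACUUM
    needs: a journal that grows to about the old database's size,
    plus a temp copy the size of the new database."""
    journal = old_size       # observed: journal ~= old database size
    temp_copy = new_size     # the rebuilt copy before the copy-back
    return journal + temp_copy

# Worst case (nothing reclaimed, new == old) gives the 2X rule of thumb.
worst = vacuum_free_space_needed(100, 100)
# Test 3 above: a 27.9 MB database shrinking to 11.2 MB needs roughly
# 39 MB free, which is why the 37.5 MB run finished with ~5 MB to spare.
test3 = vacuum_free_space_needed(27.9, 11.2)
```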

At any rate, I guess my immediate problem will have to be solved with
a one-off patch that disables journaling and does a VACUUM, and for
long-term usage I'll need to shrink the database capacity even more to
account for the 3x disk space usage if I need to VACUUM again in the
future.

Thanks

-- 
Matthew L. Creech

