apart from the non-sequential pages being close to 99%. So the bloat seems to
be all in the indexes.
Brain now fried. Can provide any additional requested info next week.
-Original Message-
From: David Raymond
Sent: Friday, February 08, 2019 2:18 PM
To: SQLite mailing list
Subject: RE:
k time anyway. "user" was way up.
sqlite> vacuum into 'vac_into_from_ro_file.sqlite';
Run Time: real 1107.448 user 853.185069 sys 87.048558
Gonna run sqlite3_analyzer.exe on them now to see what's up. But closing in on
the end of the work day/week here.
-Original Message-
On 2/8/19, David Raymond wrote:
Non-scientific "let's just try it" results
Short version:
Original file had been vacuumed already as the last thing that had happened to
it.
File size: 20,467,359,744
sqlite> vacuum into 'vac_into.sqlite';
Run Time: real 589.577 user 222.941029 sys 57.829571
sqlite> vacuum
If you rename file1.db to file1.bak, open file1.bak, vacuum into file1.db,
and close file1.bak, you have a pre-vacuum backup (just in case...) and it
streamlines the process somewhat.
Obviously, you'd have to rename the file back again if the vacuum failed
(out of disk space, etc.)
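That rename-then-VACUUM-INTO flow can be sketched in Python (a minimal sketch; the file names are illustrative, and VACUUM INTO requires SQLite 3.27 or later):

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
db = os.path.join(tmp, "file1.db")
bak = os.path.join(tmp, "file1.bak")

# Stand-in for the real database.
con = sqlite3.connect(db)
con.execute("CREATE TABLE t(x)")
with con:
    con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])
con.close()

# Rename the original so it survives as a pre-vacuum backup...
os.rename(db, bak)

# ...then VACUUM INTO a fresh file under the original name. If this
# fails (out of disk space, etc.), rename file1.bak back again.
src = sqlite3.connect(bak)
src.execute("VACUUM INTO ?", (db,))
src.close()

# file1.db is now a complete, compacted copy; file1.bak is the backup.
con = sqlite3.connect(db)
print(con.execute("SELECT count(*) FROM t").fetchone()[0])  # 100
con.close()
```
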
Just a
On 2/8/19, David Raymond wrote:
So to make sure I'm understanding it ok, with the new vacuum into command, if
I'm the only user of a file, then the sequence...
open file1
vacuum into file2
close file1
delete file1
rename file2 to file1
...is going to be potentially more than twice as fast as the old...
open file1
vacuum
On 4/4/17, Ben Newberg wrote:
> I've noticed with 3.18.0 that it's possible to make a database increase in
> size after running pragma integrity_check (which returns "ok") and then
> running vacuum.
I can now repro the behavior and have bisected to this check-in:
ion:0
> application id: 0
> software version: 3008010
> number of tables: 2
> number of indexes: 1
> number of triggers: 0
> number of views: 0
> schema size: 309
>
> --2) Running a vacuum (without pragma integrity_check) results in the
database having a page count of 3077 and a freelist page count of 0:
sqlite> vacuum;
sqlite> .dbinfo
database page size: 1024
write format: 1
read format: 1
reserved bytes: 0
file change counter: 53
database page count: 3077
freelist page count: 0
schema cookie: 20
schema
On 11/3/16, Christian Czech wrote:
>
> This bug in a release version did a lot of damage to us.
>
> Especially frustrating was that nobody really cared when I reported it:
We cared, and we looked into the problem, but we couldn't reproduce it
at that time.
--
D. Richard Hipp
SQLite is a great tool! And I also use (and bought) the SEE version.
This bug in a release version did a lot of damage to us.
Especially frustrating was that nobody really cared when I reported it:
We have a problem with the latest stable release of SQLite. When
compiled for 32 bit
On 11/02/2016 05:18 PM, Christian Czech wrote:
It is a fundamental bug. I hope one day it gets fixed. Otherwise 3.15.0
is useless.
It has been fixed: http://www.sqlite.org/src/info/3028845329c9b7ac
--
Bernardo Sulzbach
http://www.mafagafogigante.org/
mafagafogiga...@gmail.com
I already reported this in this forum a while ago but nobody cared.
It only occurs in the 32-bit version when RAM usage gets over 2 GB on Windows.
On 64-bit, of course, this does not come up if you have enough RAM. It only
happens with 3.15.0. It is easy to reproduce.
The problem is that the intermediate temp file
On 11/2/16, Dan Kennedy wrote:
>
> We can reproduce it here now too. There will likely be a fix soon...
>
Dan's fix has been checked in and is available in the "Pre-release
Snapshot" on the download page. https://www.sqlite.org/download.html
--
D. Richard Hipp
On 11/02/2016 09:36 PM, Ralf Junker wrote:
I am the author of the DISQLite3 Delphi wrapper. Assurance to all
DISQLite3 users: The leak is not DISQLite3 specific. DISQLite3 5.15.0
behaves exactly like the official SQLite 3.15.0 release.
The problem can reliably be reproduced using the official sqlite3.exe
3.15.0 binary on Win32.
r usage hints.
sqlite> pragma integrity_check
...> ;
ok
sqlite> PRAGMA temp_store=1;
sqlite> PRAGMA temp_store;
1
sqlite> vacuum;
Error: out of memory
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http
Hello,
Since the last version of SQLite (3.15.0), VACUUM causes an 'Out of memory'
error, at least on my bigger databases (> 1 GB). I came across this error when
I installed the latest DISQLite3 version
(http://www.yunqa.de/delphi/products/sqlite3/index). I was also able to
reproduce the error
Hello,
I am developing an application for Windows Compact Embedded 7 using .NET
3.5 and SQLite.
Is there a way to run VACUUM, i.e. to compact the database, under WinCE
from a C# program?
Thank you very much.
On 01/05/2015 10:25 AM, Jim Wilcoxson wrote:
> It's easy to do this in Python using iterdump(),
Be aware that the pysqlite dump implementation is incomplete in many
ways. Some issues:
- Doesn't dump in a transaction so will have irregularities if
On Jan 5, 2015, at 5:03 PM, Eduardo Morras wrote:
> On Mon, 5 Jan 2015 14:42:28 -0600
> Jay Kreibich wrote:
>
>>
>>
>> On Jan 5, 2015, at 12:25 PM, Jim Wilcoxson wrote:
>>
>>> Simon - instead of using vacuum, it's much faster to create
On Mon, 5 Jan 2015 14:42:28 -0600
Jay Kreibich wrote:
>
>
> On Jan 5, 2015, at 12:25 PM, Jim Wilcoxson wrote:
>
> > Simon - instead of using vacuum, it's much faster to create a new
> > database from the old one, then rename it. It's easy to do this in
> >
On Jan 5, 2015, at 12:25 PM, Jim Wilcoxson wrote:
> Simon - instead of using vacuum, it's much faster to create a new
> database from the old one, then rename it. It's easy to do this in
> Python using iterdump(), or you can connect to the new (empty)
> database, do your
Simon - instead of using vacuum, it's much faster to create a new
database from the old one, then rename it. It's easy to do this in
Python using iterdump(), or you can connect to the new (empty)
database, do your create table statements, attach the old database as
olddb, then do:
insert into
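The truncated recipe above (recreate the schema in a new database, ATTACH the old one, then copy each table with an INSERT ... SELECT, then rename) might look like this in Python; the table and file names are illustrative:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
old = os.path.join(tmp, "old.db")
new = os.path.join(tmp, "new.db")

# Stand-in for the existing, fragmented database.
con = sqlite3.connect(old)
con.execute("CREATE TABLE t(a, b)")
with con:
    con.executemany("INSERT INTO t VALUES (?, ?)",
                    [(i, i * 2) for i in range(50)])
con.close()

# Connect to the new (empty) database, recreate the schema, attach the
# old file as olddb, and copy each table with one INSERT ... SELECT.
con = sqlite3.connect(new)
con.execute("CREATE TABLE t(a, b)")
con.execute("ATTACH DATABASE ? AS olddb", (old,))
with con:
    con.execute("INSERT INTO t SELECT * FROM olddb.t")
con.execute("DETACH DATABASE olddb")
con.close()

# Swap the rebuilt file into place under the old name.
os.replace(new, old)
```
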
On Jan 5, 2015, at 8:43 AM, Nelson, Erik - 2
wrote:
> RSmith wrote on Monday, January 05, 2015 7:43 AM
>>
>>
>>> On 01/05/2015 06:22 PM, Simon Slavin wrote:
I have a database file which is 120GB in size. It consists of two huge
tables and an
Simon Slavin wrote on Monday, January 05, 2015 10:40 AM
>
> On 5 Jan 2015, at 2:43pm, Nelson, Erik - 2
> wrote:
>
> > RSmith wrote on Monday, January 05, 2015 7:43 AM
> >
> >> I haven't done this, but I seem to remember there was a way to tell
> >> SQLite where
On 5 Jan 2015, at 2:43pm, Nelson, Erik - 2
wrote:
> RSmith wrote on Monday, January 05, 2015 7:43 AM
>
>> I haven't done this, but I seem to remember there was a way to tell
>> SQLite where to make temp files, or override the system default at any
>> rate -
RSmith wrote on Monday, January 05, 2015 7:43 AM
>
> On 2015/01/05 13:32, Dan Kennedy wrote:
> > On 01/05/2015 06:22 PM, Simon Slavin wrote:
> >> I have a database file which is 120GB in size. It consists of two
> huge tables and an index. //...
> > Probably running out of space wherever temp
On 2015/01/05 13:32, Dan Kennedy wrote:
On 01/05/2015 06:22 PM, Simon Slavin wrote:
I have a database file which is 120GB in size. It consists of two huge tables
and an index. //...
Probably running out of space wherever temp files are created.
I haven't done this, but I seem to remember
On 5 Jan 2015, at 11:32am, Dan Kennedy wrote:
> Probably running out of space wherever temp files are created.
Oh, that makes a lot of sense. The free space on the boot volume for that
system is only 37GB. Okay, I can move the file to another computer.
Thanks for the
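Instead of moving the file to another computer, SQLite can also be pointed at a roomier volume for its temporary files. A sketch (the directory is illustrative; SQLITE_TMPDIR is honored on Unix, and PRAGMA temp_store_directory is deprecated, though still present in default builds):

```python
import os
import sqlite3
import tempfile

# A directory on a volume with plenty of free space (illustrative path).
roomy = tempfile.mkdtemp()

# Option 1: the SQLITE_TMPDIR environment variable (honored on Unix).
# It must be set before the library opens any temp files.
os.environ["SQLITE_TMPDIR"] = roomy

con = sqlite3.connect(":memory:")

# Option 2: the deprecated (but still available) per-connection pragma.
con.execute("PRAGMA temp_store_directory = '%s'" % roomy)

# VACUUM's temporary copy of the database will now be created under
# `roomy` instead of the default system temp location.
val = con.execute("PRAGMA temp_store_directory").fetchone()[0]
print(val)
con.close()
```
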
Hi, Simon,
On Mon, Jan 5, 2015 at 6:32 AM, Dan Kennedy wrote:
> On 01/05/2015 06:22 PM, Simon Slavin wrote:
>>
>> I have a database file which is 120GB in size. It consists of two huge
>> tables and an index.
>> Its journal_mode is DELETE.
>>
>> It is on a partition with
On 01/05/2015 06:22 PM, Simon Slavin wrote:
I have a database file which is 120GB in size. It consists of two huge tables
and an index.
Its journal_mode is DELETE.
It is on a partition with 803GB of free space. By my calculations I have 6.7
times the amount of free space as the database is
I have a database file which is 120GB in size. It consists of two huge tables
and an index.
Its journal_mode is DELETE.
It is on a partition with 803GB of free space. By my calculations I have 6.7
times the amount of free space as the database is taking up.
I use the SQLite shell tool
On 18 Jul 2014, at 4:45pm, Carlos Ferreira wrote:
> This is on a standard windows 7 64 bits laptop.
>
> I am not using a shell command. I am using the win32/win64 precompiled Dlls.
Can you please check to see if using VACUUM in the shell tool has the same
problem with
Subject: Re: [sqlite] Vacuum command fails
On 18 Jul 2014, at 4:11pm, Carlos Ferreira <car...@csiberkeley.com> wrote:
> I also have a problem with VACUUM. (both 32 and 64 bits)
>
> Whenever I deal with small databases, it works fine, but as so
On 18 Jul 2014, at 4:11pm, Carlos Ferreira wrote:
> I also have a problem with VACUUM. (both 32 and 64 bits)
>
> Whenever I deal with small databases, it works fine, but as soon as the DB
> is more than 200 MB, the VACUUM command bails out with code 21.
> or
On 2014/07/18 16:30, Nelson, Erik - 2 wrote:
veeresh kumar wrote:
Now I am running the application on D: Drive (it has 841 GB free
space) and C: drive has space (333 GB free). It looks like VACUUM uses
C:drive space ??? Also command seems to be working fine...
Maybe I'm missing
veeresh kumar wrote:
> Now I am running the application on D: Drive (it has 841 GB free
> space) and C: drive has space (333 GB free). It looks like VACUUM uses
> C:drive space ??? Also command seems to be working fine...
>
Maybe I'm missing something, but I felt like this was pretty
Hi Simon,
Happy to provide you with answers... Answers inline...
One thing I would like to know is whether the VACUUM command uses C: drive
space or the drive where my application is running... It seems like it
requires C: drive to have enough space.
On Thursday, 17 July 2014 2:14 PM, Simon
My earlier email didn't go through due to the size limit. Maybe it will come
later. Here is my summary.
What I understood after this exercise, i.e. executing the VACUUM command, is:
- It requires C: drive free space and not the drive where your application is
running.
Since PRAGMA temp_store_directory is
On 2014/07/17 23:03, veeresh kumar wrote:
I did google as well :-)... I did set PRAGMA temp_store = 1 (PRAGMA temp_store
= FILE). But it did not help. I get an SQLITE_NOMEM error message.
Might I offer one more possible (although probably not very likely) problem - I have just recently posted an
On 17 Jul 2014, at 10:03pm, veeresh kumar wrote:
> I did google aswell :-)...I did set PRAGMA temp_store = 1 (PRAGMA temp_store
> = FILE). But it did not help. I get SQLITE_NOMEM error message.
What version of SQLite are you using ? The simplest way is probably to tell us
I did google as well :-)... I did set PRAGMA temp_store = 1 (PRAGMA temp_store
= FILE). But it did not help. I get an SQLITE_NOMEM error message.
It's not a space issue, as I do have enough space on the hard drive. Whenever
the Vacuum command is executed, it's using the RAM, and that's where it goes
out of
On 17 Jul 2014, at 6:24pm, veeresh kumar wrote:
> I tried to execute Vacuum on the sqlite database of size 14GB and it failed
> with an error "out of memory".
What kind of error is this ? Are you getting SQLITE_NOMEM returned from a
SQLite call or is this being returned
Luuk wrote:
>so, 0 means temp is written to disk.
>Back to the question.
>How much free disk space is there?
>I think you need more than 14Gb of free space if your database is 14Gb in size.
>http://lmgtfy.com/?q=sqlite+vacuum+how+much+disk+space+is+needed=1
>&qu
https://sqlite.org/pragma.html#pragma_temp_store
so, 0 means temp is written to disk.
Back to the question.
How much free disk space is there?
I think you need more than 14Gb of free space if your database is 14Gb
in size.
http://lmgtfy.com/?q=sqlite+vacuum+how+much+disk+space
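The temp_store values being discussed, as a quick sketch (connection and schema are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# temp_store: 0 = DEFAULT (use the compile-time setting), 1 = FILE,
# 2 = MEMORY. Most builds default to 0, which in practice means temp
# tables, temp indices, and VACUUM's working copy go to disk files.
v_default = con.execute("PRAGMA temp_store").fetchone()[0]

# Force temp objects to disk -- what the thread recommends for large
# VACUUMs, so the rebuild doesn't have to fit in RAM...
con.execute("PRAGMA temp_store = FILE")
v_file = con.execute("PRAGMA temp_store").fetchone()[0]  # 1

# ...or force them into memory (faster, but a VACUUM of a multi-GB
# database can then need multi-GB of RAM).
con.execute("PRAGMA temp_store = MEMORY")
v_mem = con.execute("PRAGMA temp_store").fetchone()[0]  # 2

print(v_default, v_file, v_mem)
con.close()
```
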
On 17/07/14 10:42, veeresh kumar wrote:
> When I execute the command PRAGMA temp_store, it returns 0. What is
> the ideal value that needs to be set?
A quick google search would have found the answer:
When I execute the command PRAGMA temp_store, it returns 0. What is the
ideal value that needs to be set?
On Thursday, 17 July 2014 10:30 AM, Simon Slavin wrote:
On 17 Jul 2014, at 6:24pm, veeresh kumar wrote:
> I tried to execute Vacuum on
On 17 Jul 2014, at 6:24pm, veeresh kumar wrote:
> I tried to execute Vacuum on the sqlite database of size 14GB and it failed
> with an error "out of memory". I was under the impression that it copies the
> database file and then performs vacuum on that file. Here it looks
I tried to execute Vacuum on the sqlite database of size 14GB and it failed
with an error "out of memory". I was under the impression that it copies the
database file and then performs vacuum on that file. Here it looks like it's
using RAM. I'd appreciate inputs from the experts.
On 2013/12/11 01:41, veeresh kumar wrote:
Thanks Igor and Simon for your inputs. I was under the impression that VACUUM
would also help performance since it does defragmentation.
Hi Veeresh,
Vacuum does de-fragment the database; it also amalgamates any transaction files and so on - so you
Thanks Igor and Simon for your inputs. I was under the impression that VACUUM
would also help performance since it does defragmentation.
On Tuesday, 10 December 2013 3:02 PM, Simon Slavin wrote:
On 10 Dec 2013, at 8:04pm, veeresh kumar wrote:
>
On 10 Dec 2013, at 8:04pm, veeresh kumar wrote:
> -If we put inside a transaction, definitely we would get a SQLITE_BUSY
> error.Any way to avoid this error?
It is normal for SQL engines to prevent VACUUM inside a transaction, and to
lock the database from other
On 12/10/2013 3:04 PM, veeresh kumar wrote:
I wanted to know what is the good way to execute Vacuum command.
- Inside a transaction or without transaction?
Doesn't really matter (unless you want to combine it with other
statements in a single transaction). VACUUM command is no different
The database size that we use in the application can grow up to 50 GB. We have
an option of shrinking the database using the Vacuum command. I understand
that the Vacuum command takes a lot of time to execute, but I don't see any
other way. I wanted to know what is the good way to execute Vacuum
On Fri, Aug 30, 2013 at 3:52 PM, Simon Slavin wrote:
> Great question. If all you are doing is writing (never any UPDATE or
> DELETE FROM or DROP) then VACUUM won't save any space in your database
> file. It would defragment your database and might thereby increase speed
Hi,
Im Auftrag von uncle.f
> Hello all,
>
> I have a process that runs 24/7 and permanently inserts data into an SQLite
> database:
>
> 1) Create new database at midnight, keep it open for writing for the next
> 24 hours
> 2) There are no deletions and not even reads during the database
Unfortunately this does not help :-(
It may still have something to do with WAL.
Before I run the VACUUM the WAL file is quite small.
After the VACUUM has completed, it is about 20 MB - about the same size as
the properly compacted database would be.
But when I run a pragma wal_checkpoint; the
On 08/31/2013 08:29 PM, Mario M. Westphal wrote:
I run the VACUUM command at the end of a diagnosis and cleanup operation on
my database.
I use the SQLite 3 API on Windows 7. Latest version of SQLite.
My database uses the WAL mode.
The database size is 120 MB when I run the
I run the VACUUM command at the end of a diagnosis and cleanup operation on
my database.
I use the SQLite 3 API on Windows 7. Latest version of SQLite.
My database uses the WAL mode.
The database size is 120 MB when I run the sqlite3_execute("VACUUM",...)
command.
After about 20 seconds of heavy
On 30 Aug 2013, at 9:32pm, uncle.f wrote:
> I thought of several options:
>
> 1) VACUUM before creating indices, then create indices, then move database
> off to storage
> 2) Create indices, then VACUUM, then move off to storage
> 3) Create indices, move to storage, VACUUM
Hello all,
I have a process that runs 24/7 and permanently inserts data into an SQLite
database:
1) Create new database at midnight, keep it open for writing for the next
24 hours
2) There are no deletions and not even reads during the database creation
3) I use only bulk inserts wrapped in a
Great :-)
I guess that PRAGMA temp_store=MEMORY then does not add additional
performance on Windows and I can safely let it to DEFAULT or FILE.
This will avoid the excessive memory usage during VACUUM for my use case.
Thanks.
-- Mario
On Sun, Jun 23, 2013 at 5:06 AM, Mario M. Westphal wrote:
>
> When creating/opening a file in Windows, an application can mark this file
> as "temporary" (flag: FILE_ATTRIBUTE_TEMPORARY) and Windows will try to
> keep
> it in memory if sufficient memory is available, avoiding all
Hello List
the SQLite databases I use on Windows can become fairly large (several GB).
I just noticed that running a VACUUM on such a large database (where several
of the tables are also really large) can cause excessive memory usage (more
than 2.5 GB RAM at peak).
I tracked this down to using
Richard Hipp wrote:
> On Tue, Aug 21, 2012 at 5:30 AM, Dominique Pellé
> wrote:
>
>> Hi
>>
>> Once in a while, I see an error when doing a "VACUUM" operation.
>>
>> sqlite3_exec3(...) returns status=14 (unable to open database file).
>> I suppose that
On 21 Aug 2012, at 10:30am, Dominique Pellé wrote:
> I wonder whether VACUUM of different databases happening
> in parallel in different processes could use the same temporary
> file names, causing conflicts.
I don't think so. SQLite makes up a random filename for
On Tue, Aug 21, 2012 at 5:30 AM, Dominique Pellé
wrote:
> Hi
>
> Once in a while, I see an error when doing a "VACUUM" operation.
>
> sqlite3_exec3(...) returns status=14 (unable to open database file).
> I suppose that it fails to open a temporary database when doing
>
Hi
Once in a while, I see an error when doing a "VACUUM" operation.
sqlite3_exec3(...) returns status=14 (unable to open database file).
I suppose that it fails to open a temporary database when doing
the VACUUM operation, but I don't see why.
sqlite3_extended_errcode(db) also returns
Thank you for your reply.
PRAGMA wal_checkpoint gave a reduced file size.
But I still don't know why the DB file size increased when I deleted all
records. Can you please clarify this issue?
Thanks in advance.
On 04-Jan-2012 5:42 PM, "Richard Hipp" wrote:
On Wed, Jan 4, 2012 at
On Wed, Jan 4, 2012 at 6:52 AM, Tarun wrote:
> Hi All,
>
> I am running program in which I have created SQLite DB file using:
> sqlite3_open_v2()
>
> configured it:
>
> PRAGMA journal_mode = WAL
> PRAGMA synchronous=OFF
>
> Then created one table into SQLiteDB by using
Hi All,
I am running a program in which I have created an SQLite DB file using:
sqlite3_open_v2()
configured it:
PRAGMA journal_mode = WAL
PRAGMA synchronous=OFF
Then I created one table in the SQLite DB using sqlite3_exec(), to which
I passed the SQL command for creating the table.
After configuration,
On Fri, Nov 11, 2011 at 00:52:23 +0100, Fabian wrote:
> 2011/11/10 Richard Hipp
> > Because when you are inserting the Nth row, SQLite has no idea of how many
> > more rows will follow or how big the subsequent rows will be, so it has no
> > way to reserve contiguous space
2011/11/10 Richard Hipp
>
> Because when you are inserting the Nth row, SQLite has no idea of how many
> more rows will follow or how big the subsequent rows will be, so it has no
> way to reserve contiguous space sufficient to hold them all. The result is
> that parts of the
On 10 Nov 2011, at 10:41pm, Fabian wrote:
> Is there some trick to force VACUUM-like output when building the
> database, and avoid to build it twice?
CREATE your TABLES.
BEGIN TRANSACTION.
INSERT your data.
END TRANSACTION.
CREATE your INDEXes.
This should get you the fastest process, and
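Simon's five-step recipe above, sketched with an illustrative table and index:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# 1) CREATE your TABLES.
con.execute("CREATE TABLE t(a INTEGER, b TEXT)")

# 2) BEGIN TRANSACTION; 3) INSERT your data; 4) END TRANSACTION.
# One big transaction means one commit instead of one per row, and the
# rows land in rowid order, much as VACUUM would lay them out.
with con:
    con.executemany(
        "INSERT INTO t VALUES (?, ?)",
        ((i, "row %d" % i) for i in range(10000)),
    )

# 5) CREATE your INDEXes last: building an index over a populated table
# is one sorted bulk load instead of thousands of incremental updates.
con.execute("CREATE INDEX t_b ON t(b)")

n = con.execute("SELECT count(*) FROM t").fetchone()[0]
print(n)  # 10000
```
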
On Thu, Nov 10, 2011 at 5:41 PM, Fabian wrote:
> I'm trying to understand the following: Why is it that when I create a new
> database, and fill it with rows, why can it not be commited to disk the
> same way as VACUUM would do it?
Because when you are inserting the Nth
I'm trying to understand the following: when I create a new database and fill
it with rows, why can it not be committed to disk the same way as VACUUM would
do it? Currently I'm trying to manually vacuum the database by inserting rows
sorted by table and by rowid, but as soon as the
Richard Hipp wrote:
> On Tue, Nov 8, 2011 at 10:04 AM, Orit Alul wrote:
>
>> Hi,
>>
>> I've performed a vacuuming operation (I ran the following command:
>> sqlite3.exe VACUUM;).
>> It caused the WAL file to be the same size as the db file and it never
>> shrink back.
On Tue, Nov 8, 2011 at 10:04 AM, Orit Alul wrote:
> Hi,
>
> I've performed a vacuuming operation (I ran the following command:
> sqlite3.exe VACUUM;).
> It caused the WAL file to be the same size as the db file and it never
> shrink back.
>
> For example I had a db file
Hi,
I've performed a vacuuming operation (I ran the following command: sqlite3.exe
VACUUM;).
It caused the WAL file to become the same size as the db file, and it never
shrank back.
For example I had a db file in the size of 1.8GB, wal file in the size of
1.7MB, shm file in the size of 32KB.
I
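What this thread describes can be reproduced in a few lines (a sketch; file name and row count are illustrative). A TRUNCATE checkpoint is one way to shrink the -wal file back:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
db = os.path.join(tmp, "wal_demo.db")

con = sqlite3.connect(db)
con.execute("PRAGMA journal_mode = WAL")
con.execute("CREATE TABLE t(x)")
with con:
    con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5000)])

# VACUUM in WAL mode writes the rebuilt database through the WAL, so the
# -wal file grows to roughly the size of the database itself.
con.execute("VACUUM")
wal = db + "-wal"
grew = os.path.getsize(wal)

# A TRUNCATE checkpoint copies the WAL content back into the main file
# and truncates the -wal file to zero bytes.
con.execute("PRAGMA wal_checkpoint(TRUNCATE)")
shrunk = os.path.getsize(wal)

print(grew, shrunk)
con.close()
```
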
Thank you so much Richard.
I have tried WAL and then applied VACUUM, it is working fine.
Thanks again,
Tarun Thakur
On 8/19/11, Richard Hipp wrote:
> On Thu, Aug 18, 2011 at 11:19 PM, Tarun wrote:
>
>> Hi All,
>>
>> I am running program in which I
On Thu, Aug 18, 2011 at 11:19 PM, Tarun wrote:
> Hi All,
>
> I am running program in which I have created SQLite DB file using:
>
> sqlite3async_initialize()
>
The easiest solution to your problem might be simply to not use the
test_async.c module. That module was
Hi All,
I am running a program in which I have created an SQLite DB file using:
sqlite3async_initialize()
sqlite3_open_v2()
Then I created one table in the SQLite DB using sqlite3_exec(), to which
I passed the SQL command for creating the table.
Then I inserted records into the table using the INSERT SQL command in
On Fri, Apr 29, 2011 at 3:37 AM, Simon Slavin wrote:
> The thing is, there's no problem with referring to rowid, or ROWID or any
> of the other aliases when you do this. The only problem is possible
> confusion for the programmer if you define a column with one of these
On Thu, Apr 28, 2011 at 5:20 PM, Dave Hayden wrote:
> On Apr 28, 2011, at 12:41 PM, Pavel Ivanov wrote:
>>> After more poking, it appears that rowids might not be changed by a vacuum
>>> if I have an index on the table. Is this true? If so, is it something I can
>>> rely on
Dave please take a look at this blog post:
http://www.sqlabs.com/blog/?p=51
--
Marco Bambini
http://www.sqlabs.com
On Apr 28, 2011, at 9:36 PM, Dave Hayden wrote:
> When the VACUUM feature was added I took a look at using it to keep database
> file sizes down, but discovered that it
>> which suggests that referring to rowids is fine.
>
> It does not suggest referring to ROWIDs is fine, it only says that it
> can be done. I think Pavel's point is that referencing ROWIDs is bad
> practice, so that is why he says you shouldn't do it.
Yes, that's right. You can refer to rowid,
On 29 Apr 2011, at 2:31am, Rich Rattanni wrote:
>> "You can access the ROWID of an SQLite table using one of the special column
>> names ROWID, _ROWID_, or OID. Except if you declare an ordinary table column
>> to use one of those special names, then the use of that name will refer to
>> the
> "You can access the ROWID of an SQLite table using one of the special column
> names ROWID, _ROWID_, or OID. Except if you declare an ordinary table column
> to use one of those special names, then the use of that name will refer to
> the declared column not to the internal ROWID."
>
> which
On Apr 28, 2011, at 12:41 PM, Pavel Ivanov wrote:
>> After more poking, it appears that rowids might not be changed by a vacuum
>> if I have an index on the table. Is this true? If so, is it something I can
>> rely on going forward?
>
> No, it's not true. The only way to keep your rowids
> After more poking, it appears that rowids might not be changed by a vacuum if
> I have an index on the table. Is this true? If so, is it something I can rely
> on going forward?
No, it's not true. The only way to keep your rowids intact is to
declare an INTEGER PRIMARY KEY alias for it. And
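A small sketch of Pavel's point (the table is illustrative): declaring an INTEGER PRIMARY KEY makes the rowid an ordinary, stable column value that survives a VACUUM:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Without an INTEGER PRIMARY KEY, rowids are an internal detail that
# VACUUM is free to renumber. Declaring one makes the rowid an ordinary,
# stable column value.
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO t(id, name) VALUES (?, ?)",
                [(10, "a"), (20, "b"), (30, "c")])
con.commit()

con.execute("VACUUM")

# id is an alias for rowid, so the values survive the rebuild intact.
rows = con.execute("SELECT rowid FROM t ORDER BY rowid").fetchall()
print(rows)  # [(10,), (20,), (30,)]
```
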
When the VACUUM feature was added I took a look at using it to keep database
file sizes down, but discovered that it changed rowids and messed up my
references between tables (or what I gather the database people call "foreign
keys"). I'm playing around with this again and it looks like rowids
[...]
> Can you please run sqlite3_analyzer [...] on both
> the original database and the database after VACUUM
> and send me the output?
done, send 2 txt-files
greetings
Oliver
___
sqlite-users mailing list
sqlite-users@sqlite.org
* Oliver Schneider:
> just a few minutes ago I ran a VACUUM on a DB file and the size before
> was 2089610240 and afterwards 2135066624. Is this normal?
This is just typical behavior with B-trees because the fill factor
almost always changes when they are rebuilt. It seems that SQLite
doesn't
On Sat, Sep 18, 2010 at 11:36:12AM -0700, Kyle McKay scratched on the wall:
> On Sep 17, 2010, at 16:08:42 PDT, Oliver Schneider wrote:
> > just a few minutes ago I ran a VACUUM on a DB file and the size before
> > was 2089610240 and afterwards 2135066624. Is this normal?
> Admittedly that's only
>
> I've recently been noticing the same thing, for example:
>
> 443182080 newdb.sq3
> $ sqlite3 newdb.sq3
> SQLite version 3.6.22
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> PRAGMA freelist_count;
> 0
SQLite version 3.6.22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> PRAGMA freelist_count;
0
sqlite> PRAGMA integrity_check;
ok
sqlite> vacuum;
sqlite> PRAGMA freelist_count;
0
sqlite> PRAGMA integrity_check;
ok
sqlite> .quit
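For contrast with the transcript above (where the freelist was already empty), a sketch of the usual case, where DELETE leaves freelist pages that a VACUUM then reclaims (the schema is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")
with con:
    con.executemany("INSERT INTO t VALUES (?)",
                    [("x" * 500,) for _ in range(1000)])

before = con.execute("PRAGMA page_count").fetchone()[0]

# Deleting rows moves their pages onto the freelist; the file keeps its
# size, which is why a database doesn't shrink on DELETE.
with con:
    con.execute("DELETE FROM t")
free = con.execute("PRAGMA freelist_count").fetchone()[0]

# VACUUM rebuilds the database without the free pages.
con.execute("VACUUM")
after = con.execute("PRAGMA page_count").fetchone()[0]

print(before, free, after)
```
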