VACUUM removes empty pages by rewriting the database from the ground up.
https://www.sqlite.org/lang_vacuum.html
"The VACUUM command works by copying the contents of the database into a
temporary database file and then overwriting the original with the contents
of the temporary file. When overwriting
Domingo Alvarez Duarte wrote:
Hello Simon !
I already did it without using "wal" and the result was the same.
To my surprise, in one try I stopped in the middle, performed an
"analyze", and the performance deteriorated a lot, to the point that I needed to
delete the stats tables to get the
What was the size of the original database?
To VACUUM a database, the process is:
1) Read the logical components of the database, write them to a new file. This
will generate at least 1x reads (original size), and 1x writes (final size).
In most cases the reads will be larger and the
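The read/write arithmetic above can be sketched with the bundled Python sqlite3 module; an in-memory database stands in for the real 22 GB file, and the table and row counts are invented for illustration:

```python
import sqlite3

# Create a database, delete every row so its pages land on the freelist,
# then VACUUM and watch the free pages disappear. An in-memory database
# stands in for a real file here.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10000)])
con.execute("DELETE FROM t")        # pages move to the freelist
con.commit()
before = con.execute("PRAGMA freelist_count").fetchone()[0]
con.execute("VACUUM")               # rewrites the file; free pages are dropped
after = con.execute("PRAGMA freelist_count").fetchone()[0]
print(before, after)                # 'after' should be 0
```

On a real on-disk file the rewrite is what generates the large read and write volumes discussed in this thread.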
On 10/1/16, David Goldwich wrote:
> On Sat, Oct 1, 2016 at 4:41 PM, Simon Slavin wrote:
>> According to a 2012 post from DRH, try this compilation option:
>>
>> -DSQLITE_WITHOUT_ZONEMALLOC
>
> Another, probably better way would be
I understand that this project doesn't produce go-sqlite3, but the
sqlite-bindings.c that project incorporates (and which is the origin of the
deprecation warning) does seem to be part of this project. If that's not so,
can you direct me to the true source so I can raise the issue there?
On Sat,
On Sat, Oct 1, 2016 at 4:41 PM, Simon Slavin wrote:
> According to a 2012 post from DRH, try this compilation option:
>
> -DSQLITE_WITHOUT_ZONEMALLOC
Another, probably better way would be -DOSATOMIC_USE_INLINED=1. That
will transparently substitute legacy atomic op
Hello Bob !
I'm using the default sqlite page size, but I also did a try with a 32KB
page size and I got a slightly smaller overall database size but no visible
performance gain in terms of time and I/O.
Also the memory usage skyrocketed, forcing the machine to swap.
The OS was OS X Yosemite. I also
On Sat, 1 Oct 2016, Domingo Alvarez Duarte wrote:
Hello !
I'm using sqlite (trunk) for a database (see below) and for a final database
file of 22GB a "vacuum" was executed, and doing so it did a lot of I/O
(134GB reads and 117GB writes in 2h:30min).
What means are you using to evaluate
Hello Keith !
You have probably seen in the attached code that I also tried that once,
and even mmap, but it didn't give any visible improvement. Also, due
to the data distribution, 80% of the data inserts took 20% of the total
time and the other 20% of the inserts took 80% of the total time.
Did you change the cache size? The default is rather small for a database of
22 GB.
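For reference, the per-connection cache can be raised with a pragma. A minimal sketch follows; the 1 GiB figure is an arbitrary example, and a negative value means KiB rather than pages:

```python
import sqlite3

# The default page cache is small (about 2 MB in recent SQLite versions).
# A negative cache_size value is interpreted as KiB.
con = sqlite3.connect(":memory:")             # a file path in practice
con.execute("PRAGMA cache_size = -1048576")   # request ~1 GiB of cache
cache = con.execute("PRAGMA cache_size").fetchone()[0]
print(cache)   # -1048576
```

The setting is per connection and does not persist, so it must be issued each time the database is opened.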
> -----Original Message-----
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Domingo Alvarez Duarte
> Sent: Saturday, 1 October, 2016 15:19
> To: SQLite mailing list
>
Hello Simon !
I already did it without using "wal" and the result was the same.
To my surprise, in one try I stopped in the middle, performed an
"analyze", and the performance deteriorated a lot, to the point that I
needed to delete the stats tables to get the better performance without
On 1 Oct 2016, at 10:18pm, Domingo Alvarez Duarte wrote:
> About the vacuum I also understand the need to rewrite the whole database but
> I'm not sure if it's really necessary to do almost 5 times the database size
> in both reads and writes (also an equivalent amount of
Hello Simon !
Thanks for reply !
I already know your suggestions, and if you look at the database schema
and the program used to insert data you can see that there are no
unnecessary indices active and everything is inside transactions.
About the vacuum I also understand the need to rewrite the whole
On 1 Oct 2016, at 9:27pm, Domingo Alvarez Duarte wrote:
> I'm using sqlite (trunk) for a database (see below) and for a final database
> file of 22GB a "vacuum" was executed and doing so it made a lot of I/O (
> 134GB reads and 117GB writes in 2h:30min).
>
> Can
Hello !
I'm using sqlite (trunk) for a database (see below) and for a final
database file of 22GB a "vacuum" was executed, and doing so it did a
lot of I/O (134GB reads and 117GB writes in 2h:30min).
Can anything be improved in sqlite to achieve better performance?
The data is public
On 10/1/16, Judson Lester wrote:
> I understand that this project doesn't produce go-sqlite3, but the
> sqlite-bindings.c that project incorporates (and which is the origin of the
> deprecation warning) does seem to be part of this project. If that's not so,
> can you direct me to
I’m experimenting with querying databases of JSON documents. These data-sets
are schemaless and there’s no guarantee that they all have a common set of
properties; in fact it’s common for them to have the equivalent of multiple
‘tables’ in the same data-set, i.e. groups of documents with
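One way to sketch that layout, assuming a SQLite build with the JSON functions available; the table name, documents, and the 'type' discriminator below are invented for illustration:

```python
import sqlite3

# Schemaless documents share one table; json_extract pulls out whatever
# properties a given group of documents happens to have, so each distinct
# "type" behaves like its own table within the data-set.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs(body TEXT)")
con.executemany("INSERT INTO docs VALUES (?)", [
    ('{"type":"person","name":"Ada"}',),
    ('{"type":"person","name":"Bob"}',),
    ('{"type":"city","name":"London","pop":8900000}',),
])
people = con.execute(
    "SELECT json_extract(body, '$.name') FROM docs "
    "WHERE json_extract(body, '$.type') = 'person'").fetchall()
print(people)  # [('Ada',), ('Bob',)]
```

An expression index on `json_extract(body, '$.type')` could speed up the discriminator lookup if the data-set grows large.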
According to a 2012 post from DRH, try this compilation option:
-DSQLITE_WITHOUT_ZONEMALLOC
Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Richard Hipp wrote:
> On 9/30/16, Judson Lester wrote:
>> vendor/github.com/mattn/go-sqlite3/sqlite3-binding.c:18709:17: warning:
>> 'OSAtomicCompareAndSwapPtrBarrier' is deprecated: first deprecated in macOS
>> 10.12 - Use atomic_compare_exchange_strong() from <stdatomic.h> instead
>>
On 1 Oct 2016, at 1:22pm, Heather, James (ELS-LON)
wrote:
> This is odd. I had rather assumed it would be possible to speed up the
> searching using those individual indexes... in fact I know it's possible,
> because MySQL performs the same query very much faster
On 10/1/16, Heather, James (ELS-LON) wrote:
>
> This is odd. I had rather assumed it would be possible to speed up the
> searching using those individual indexes... in fact I know it's possible,
> because MySQL performs the same query very much faster when I add the
>
On 9/30/16, Judson Lester wrote:
> Using go-sqlite3, I get this warning with every build:
>
> # github.com/mattn/go-sqlite3
> vendor/github.com/mattn/go-sqlite3/sqlite3-binding.c:18709:17: warning:
> 'OSAtomicCompareAndSwapPtrBarrier' is deprecated: first deprecated in macOS
>
On 9/30/16, Malte Legenhausen wrote:
> Other database systems
> like postgres can differ between indexes with the same name on different
> tables.
No it can't. Index names must be unique within a schema. Have you
actually tried creating two indexes with the same name
On Sat, 2016-10-01 at 13:57 +0200, Clemens Ladisch wrote:
Heather, James (ELS-LON) wrote:
I have ...
... asked this question elsewhere:
http://dba.stackexchange.com/questions/150858/why-is-this-sqlite-query-much-slower-when-i-index-the-columns
Yes, I didn't mean this to be subterfuge. It
Without any indexes, SQLite is probably creating its own transient
index to use for the join. And when SQLite creates an index, it will
be creating a single index on all three columns being joined.
But if you have your own (inferior, single-column) indexes, then
SQLite figures you know what
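That behaviour can be observed with EXPLAIN QUERY PLAN. In this sketch the table and column names are invented, and the exact plan wording varies between SQLite versions:

```python
import sqlite3

# With no user-created indexes, SQLite builds its own transient index over
# all the joined columns; the query plan reveals it.
con = sqlite3.connect(":memory:")
for t in ("t1", "t2"):
    con.execute(f"CREATE TABLE {t}(givenname, middleinitial, surname)")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT count(*) FROM t1 JOIN t2 "
    "ON t1.givenname = t2.givenname "
    "AND t1.middleinitial = t2.middleinitial "
    "AND t1.surname = t2.surname").fetchall()
text = " ".join(row[-1] for row in plan)
print(text)  # expect an AUTOMATIC ... INDEX covering all three join columns
```

Creating a single three-column index by hand should produce a similar plan without the per-query build cost.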
On 30 Sep 2016, at 2:33pm, Malte Legenhausen wrote:
> sqlite> create table t1(one varchar(10));
> sqlite> CREATE INDEX unique_one ON t1 (one);
> sqlite> create table t2(one varchar(10));
> sqlite> CREATE INDEX unique_one ON t2 (one);
> Error: index unique_one already
Sorry, what is "very hard" about
SELECT * FROM sqlite_master WHERE type = 'index' AND name = 'new_index_name'
to see if the index already exists?
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
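The sqlite_master check described above can be sketched end to end (the index and table names are illustrative); note that CREATE INDEX IF NOT EXISTS is an even simpler alternative when the goal is only to avoid the error:

```python
import sqlite3

# Look an index name up in sqlite_master before (or instead of) creating it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1(one)")
con.execute("CREATE INDEX idx_one ON t1(one)")
exists = con.execute(
    "SELECT count(*) FROM sqlite_master "
    "WHERE type = 'index' AND name = ?", ("idx_one",)).fetchone()[0]
print(exists)  # 1
```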
On 28 Sep 2016, at 8:01pm, Heather, James (ELS-LON)
wrote:
> But if I add indexes to the three columns on each table (six indexes in all):
>
> CREATE INDEX `idx_uk_givenname` ON `fakenames_uk` (`givenname` )
> //etc.
>
> then it runs painfully slowly:
I can't
Heather, James (ELS-LON) wrote:
> I have ...
... asked this question elsewhere:
http://dba.stackexchange.com/questions/150858/why-is-this-sqlite-query-much-slower-when-i-index-the-columns
Regards,
Clemens
On 9/28/16, Heather, James (ELS-LON) wrote:
> I have a sqlite database with two tables, each with 50,000 rows in,
> containing names of (fake) people. I've constructed a simple query to find
> out how many names there are (given name, middle initial, surname) that are
Using go-sqlite3, I get this warning with every build:
# github.com/mattn/go-sqlite3
vendor/github.com/mattn/go-sqlite3/sqlite3-binding.c:18709:17: warning:
'OSAtomicCompareAndSwapPtrBarrier' is deprecated: first deprecated in macOS
10.12 - Use atomic_compare_exchange_strong() from <stdatomic.h> instead
Hi,
Could someone please explain why the following sequence of commands fail in
sqlite?
SQLite version 3.14.0
sqlite> create table t1(one varchar(10));
sqlite> CREATE INDEX unique_one ON t1 (one);
sqlite> create table t2(one varchar(10));
sqlite> CREATE INDEX unique_one ON t2
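The failure can be reproduced directly. As the replies explain, index names share one namespace per schema, so the second CREATE INDEX raises an error even though it targets a different table:

```python
import sqlite3

# Index names are schema-wide in SQLite, unlike in PostgreSQL where they
# are only unique per schema as well but the poster expected per-table.
con = sqlite3.connect(":memory:")
con.execute("create table t1(one varchar(10))")
con.execute("CREATE INDEX unique_one ON t1 (one)")
con.execute("create table t2(one varchar(10))")
try:
    con.execute("CREATE INDEX unique_one ON t2 (one)")
    failed = False
except sqlite3.OperationalError as e:
    failed = True
    print(e)  # index unique_one already exists
```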
Hello,
I'm writing a function to read from an sqlite3 database.
I'd like to use function sqlite3_column_blob() (and similar functions) to
read a column's value in a row. However I'm confused about how to check for
errors when using this function. The documentation
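In the C API the usual approach is to check sqlite3_column_type() for SQLITE_NULL and then sqlite3_errcode() after the call, since sqlite3_column_blob() returns NULL both for a NULL value and on an out-of-memory error. The Python sketch below (table and data invented) only illustrates that a SQL NULL comes back as a distinct value rather than an error:

```python
import sqlite3

# A BLOB column arrives as bytes; a SQL NULL arrives as None, so the two
# cases are trivially distinguishable at this level.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files(data BLOB)")
con.execute("INSERT INTO files VALUES (?)", (b"\x00\x01\x02",))
con.execute("INSERT INTO files VALUES (NULL)")
rows = [r[0] for r in con.execute("SELECT data FROM files")]
print(rows)  # [b'\x00\x01\x02', None]
```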
I have a sqlite database with two tables, each with 50,000 rows in, containing
names of (fake) people. I've constructed a simple query to find out how many
names there are (given name, middle initial, surname) that are common to both
tables:
select count(*) from fakenames_uk inner join
Hi,
I am a passive subscriber to this list at best and I use SQLite on small
projects.
I wrote a tool that I am using to maintain SQLite DB schema. It is very
simple and it is based on user_version pragma, which makes it SQLite
specific. I have no plans to make it more complicated than it is
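A minimal sketch of such a user_version scheme, with invented migrations, could look like this:

```python
import sqlite3

# Each migration bumps user_version; on open, only the pending steps run.
# PRAGMA user_version cannot take bound parameters, hence the f-string.
MIGRATIONS = [
    "CREATE TABLE people(name TEXT)",               # brings schema to version 1
    "ALTER TABLE people ADD COLUMN surname TEXT",   # brings schema to version 2
]

def migrate(con):
    version = con.execute("PRAGMA user_version").fetchone()[0]
    for i, sql in enumerate(MIGRATIONS[version:], start=version + 1):
        con.execute(sql)
        con.execute(f"PRAGMA user_version = {i}")
    return con.execute("PRAGMA user_version").fetchone()[0]

con = sqlite3.connect(":memory:")
print(migrate(con))  # 2
print(migrate(con))  # 2  (idempotent: nothing left to apply)
```

Running each pending step inside a transaction would make a failed migration roll back cleanly.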