[sqlite] MMAP performance with databases over 2GB

2015-08-06 Thread Dan Kennedy
On 08/06/2015 09:53 PM, Roger Binns wrote:
>
> In my testing I am finding that using SQLite's mmap support with
> larger databases (final db size is ~10GB) to create the database (WAL
> mode) is considerably (about three times) slower than no mmap.
>
> The default max mmap limit is 2GB (sadly).  Has anyone else tested
> mmap with >2GB databases and have results?  I don't know if this is a
> peculiarity of my data set, or because of how SQLite is implemented.

Is it using more CPU cycles in mmap mode or just taking longer? If the 
former, does [time] attribute them to "user" or "system"?

How large are you letting the wal file grow between checkpoints?

Dan.




>
> Roger
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



[sqlite] Determine query type

2015-08-06 Thread Stephan Beal
On Thu, Aug 6, 2015 at 6:57 PM, Ben Newberg  wrote:

> Excellent. This is exactly what I was looking for.
>

Great :). Now that i have some code in front of me i can say, more
concretely, that sqlite3_column_count() is what you want.

https://www.sqlite.org/c3ref/column_count.html

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


[sqlite] Determine query type

2015-08-06 Thread Stephan Beal
On Thu, Aug 6, 2015 at 6:46 PM, Ben Newberg  wrote:

> Is there a good way to determine if a sql query is either attempting to
> modify a database, or simply querying it?
>

Check the column count of the prepared statement: it will be >0 for a
select or select-like pragma. Anything else is an update, delete, drop,
create table/view, non-select-like pragma, or similar.
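[Editorial sketch, not part of the original thread] The column-count check can be seen from Python's bundled sqlite3 module, where cursor.description plays the role of sqlite3_column_count(); note the caveat in the comment:

```python
import sqlite3

def returns_rows(conn, sql):
    # cursor.description is non-None only for statements that produce a
    # result set (SELECT and select-like PRAGMAs); this is the Python-level
    # analogue of checking sqlite3_column_count() > 0.
    # Caveat: unlike a bare sqlite3_prepare_v2() in C, execute() actually
    # runs the statement, so wrap calls in a transaction you can roll back
    # if the statement might modify data.
    cur = conn.execute(sql)
    return cur.description is not None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x)")
print(returns_rows(conn, "SELECT * FROM t"))          # True
print(returns_rows(conn, "INSERT INTO t VALUES(1)"))  # False
```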

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


[sqlite] Sorting by greatest count amount

2015-08-06 Thread Reinhard Max
On Thu, 6 Aug 2015 at 17:51, Reinhard Max wrote:

> On Thu, 6 Aug 2015 at 17:41, jose isaias cabrera wrote:
>
>> SELECT pmuk, count(*) FROM LSOpenJobs WHERE pmuk LIKE '% %' GROUP BY pmuk;
>
> For ordered results you need an ORDER BY clause, and if you want to 
> order by a calculated column (count in this case), you have to give 
> it a name:
>
> SELECT pmuk, count(*) AS amount
> FROM LSOpenJobs
> WHERE pmuk LIKE '% %' GROUP BY pmuk
> ORDER BY amount

BTW, an alternative to naming the column is repeating the expression 
that was used to calculate it:

SELECT pmuk, count(*)
FROM LSOpenJobs
WHERE pmuk LIKE '% %' GROUP BY pmuk
ORDER BY count(*)

You can even use the expression for sorting without including the 
value in the result set:

SELECT pmuk
FROM LSOpenJobs
WHERE pmuk LIKE '% %' GROUP BY pmuk
ORDER BY count(*)
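[Editorial sketch] A runnable version of the ordered query against a toy LSOpenJobs table; ASC is the default (lowest count first), so add DESC for highest first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LSOpenJobs(pmuk TEXT)")
conn.executemany("INSERT INTO LSOpenJobs VALUES(?)",
                 [("a b",), ("a b",), ("c d",)])

# ASC (the default) sorts lowest count first; DESC reverses it.
rows = conn.execute("""
    SELECT pmuk, count(*) AS amount
    FROM LSOpenJobs
    WHERE pmuk LIKE '% %'
    GROUP BY pmuk
    ORDER BY amount DESC
""").fetchall()
print(rows)  # [('a b', 2), ('c d', 1)]
```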

cu
Reinhard


[sqlite] Sorting by greatest count amount

2015-08-06 Thread Reinhard Max

On Thu, 6 Aug 2015 at 17:41, jose isaias cabrera wrote:

> SELECT pmuk, count(*) FROM LSOpenJobs WHERE pmuk LIKE '% %' GROUP BY pmuk;

For ordered results you need an ORDER BY clause, and if you want to 
order by a calculated column (count in this case), you have to give it 
a name:

SELECT pmuk, count(*) AS amount
FROM LSOpenJobs
WHERE pmuk LIKE '% %' 
GROUP BY pmuk
ORDER BY amount

Does this do what you want?

cu
Reinhard


[sqlite] Determine query type

2015-08-06 Thread Nelson, Erik - 2
> Scott Hess wrote on Thursday, August 06, 2015 1:11 PM:
> 
> Also consider https://www.sqlite.org/c3ref/stmt_readonly.html
> 
> On Thu, Aug 6, 2015 at 10:08 AM, Stephan Beal 
> wrote:
> 
> > On Thu, Aug 6, 2015 at 6:57 PM, Ben Newberg 
> wrote:
> >
> > > Excellent. This is exactly what I was looking for.
> > >
> >
> > Great :). Now that i have some code in front of me i can say, more
> > concretely, that sqlite3_column_count() is what you want.
> >
> > https://www.sqlite.org/c3ref/column_count.html
> >

And https://www.sqlite.org/c3ref/c_alter_table.html



[sqlite] Read data from database slow the first time but then very fast thereafter. How can I improve the speed for the first time read?

2015-08-06 Thread Stephan Beal
On Thu, Aug 6, 2015 at 4:54 PM, Wade, William  wrote:

> I'm an SQLite novice, so if I'm wrong about this, I hope someone will
> correct me.
>
> The first query will be slower than later queries for the reason Simon
> mentioned.


Pedantically speaking, "will" is not quite right. That will _normally_
apply, but depends on the whims (and memory pressure) of the OS.

When this question/problem has come up before, some people have suggested
"coercing" the raw db file into the filesystem cache with something like:

  cat mydb > /dev/null

(or whatever the equivalent is on Windows)

which simply causes mydb to be read into the filesystem cache,
independently of sqlite (again, subject to the whims of the OS).

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


[sqlite] Read data from database slow the first time but then very fast thereafter. How can I improve the speed for the first time read?

2015-08-06 Thread Wade, William
I'm an SQLite novice, so if I'm wrong about this, I hope someone will correct 
me.

The first query will be slower than later queries for the reason Simon 
mentioned. However, it may still be possible to get some speed-up of the first 
query, if it can avoid reading stuff not in the result set and also avoid lots 
of widespread random seeks. If your 1 million records are the entire database, 
none of these suggestions will help much.

If you "own" the database (can perform writes, or even change the schema) you 
may be able to change it to a form which reads-faster on the first attempt. All 
of these are most likely to help if not a lot of changes have occurred since 
the last VACUUM.

For instance, if your 1 million records are an entire table, but are also only 
1% of the volume of the database as a whole, a VACUUMed database may be faster.

In addition, if your 1 million records are each small, and extracted from a 
table of 100 million records, then having your table's primary key such that 
the result set comes from a contiguous range of the original table may help. 
The documentation says that VACUUM makes each table contiguous. It doesn't 
explicitly say that it makes them contiguous in primary-key order, so I don't 
know for sure that this would help (but I suspect it does).

Finally, if the table has substantially more columns than you are asking for, 
splitting the table into two tables (the stuff you want, and the stuff you 
don't want) may help.

If nothing else, VACUUM is likely to shrink the database, which speeds loading 
it from disk.

Bill

> From: Simon Slavin [mailto:slavins at bigfraud.org]
> Sent: Tuesday, August 04, 2015 10:46 AM

> On 3 Aug 2015, at 1:58pm, Linquan Bai  wrote:

>> I am trying to read large data from the database about 1 million records.
>> It takes around 1min for the first time read. But if I do the same
>> process thereafter, the time is significantly reduced to 3 seconds.
>> How can I get a fast speed for the first time read?

> You can't.  Some part of your computer has pulled that data into cache, and 
> it's still in the cache when you run the process again, so the data 
> doesn't need to be fetched from disk again.

> Simon.



[sqlite] Sqlite ticket f51d9501800de5a0fb69d5048ce6662981b461ec

2015-08-06 Thread Bernhard Schommer
Hi,

the warning mentioned in the ticket still occurs, also with newer
gcc versions. The ticket was closed after a gcc bug report was opened. The
gcc bug was then closed due to a missing testcase.
I had a quick look at the problem, and it seems that the warning is right:
in certain contexts, and with optimizations enabled, gcc can prove that
EXPR_FULLSIZE - nSize == 0, so the triggered warning is correct.
Replacing

  memset(&zAlloc[nSize], 0, EXPR_FULLSIZE-nSize);

by

  if( EXPR_FULLSIZE-nSize > 0 )
    memset(&zAlloc[nSize], 0, EXPR_FULLSIZE-nSize);

would remove the warning.

Cheers,
-Bernhard


[sqlite] Regarding root in transitive closure extension of SQLite

2015-08-06 Thread Sairam Gaddam
I have enabled the transitive closure extension and I followed the steps
regarding querying the virtual table mentioned in closure.c file as follows

CREATE VIRTUAL TABLE ct1 USING transitive_closure(
   tablename='group1',
   idcolumn='groupId',
   parentcolumn='parentId'
   );

CREATE TABLE group1(
 groupId INTEGER PRIMARY KEY,
 parentId INTEGER REFERENCES group1
  );

CREATE INDEX group_idx1 ON group1(parentId);

SELECT group1.* FROM group1, ct1
WHERE group1.groupId=ct1.id
 AND ct1.root=?1
 AND ct1.depth<=2;

In the above query, can the root be assigned a text data type?

I tried with INTEGER data type for both groupId and parentId and it worked
fine, but when they are of TEXT or INT data type and I query with
ct1.root='SQLITE', there is no output.

Can anyone kindly help me with this?


[sqlite] Sorting by greatest count amount

2015-08-06 Thread jose isaias cabrera

"Reinhard Max" wrote...

> On Thu, 6 Aug 2015 at 17:51, Reinhard Max wrote:
>
>> On Thu, 6 Aug 2015 at 17:41, jose isaias cabrera wrote:
>>
>>> SELECT pmuk, count(*) FROM LSOpenJobs WHERE pmuk LIKE '% %' GROUP BY 
>>> pmuk;
>>
>> For ordered results you need an ORDER BY clause, and if you want to order 
>> by a calculated column (count in this case), you have to give it a name:
>>
>> SELECT pmuk, count(*) AS amount
>> FROM LSOpenJobs
>> WHERE pmuk LIKE '% %' GROUP BY pmuk
>> ORDER BY amount
>
> BTW, an alternative to naming the column is repeating the expression that 
> was used to calculate it:
>
> SELECT pmuk, count(*)
> FROM LSOpenJobs
> WHERE pmuk LIKE '% %' GROUP BY pmuk
> ORDER BY count(*)
>
> You can even use the expression for sorting without including the 
> value in the result set:
>
> SELECT pmuk
> FROM LSOpenJobs
> WHERE pmuk LIKE '% %' GROUP BY pmuk
> ORDER BY count(*)

Another unexpected lesson today!  Yay! Muchas gracias, or just simply, 
danke. :-)



[sqlite] Sorting by greatest count amount

2015-08-06 Thread jose isaias cabrera

"Reinhard Max" wrote...
>
> On Thu, 6 Aug 2015 at 17:41, jose isaias cabrera wrote:
>
>> SELECT pmuk, count(*) FROM LSOpenJobs WHERE pmuk LIKE '% %' GROUP BY 
>> pmuk;
>
> For ordered results you need an ORDER BY clause, and if you want to order 
> by a calculated column (count in this case), you have to give it a name:
>
> SELECT pmuk, count(*) AS amount
> FROM LSOpenJobs
> WHERE pmuk LIKE '% %' GROUP BY pmuk
> ORDER BY amount
>
> Does this do what you want?

Darn it!  I keep forgetting about ORDER BY.  Yep, that is exactly what I 
wanted.  Thanks.

josé



[sqlite] Determine query type

2015-08-06 Thread Ben Newberg
Excellent. This is exactly what I was looking for.

Thanks.

On Thu, Aug 6, 2015 at 11:50 AM, Stephan Beal  wrote:

> On Thu, Aug 6, 2015 at 6:46 PM, Ben Newberg  wrote:
>
> > Is there a good way to determine if a sql query is either attempting to
> > modify a database, or simply querying it?
> >
>
> Check the column count of the prepared statement: it will be >0 for a
> select or select-like pragma. Anything else is an update, delete, drop,
> create table/view, non-select-like pragma, or similar.
>


[sqlite] Bugs/improvements to sqlite constraints

2015-08-06 Thread sqlite-mail
Hello !  

After submitting several emails with the subject "Bug in
sqlite3_trace/trigger/delete", and getting only one reply to them so far, I
decided to look a bit more deeply at the problem I'm facing using sqlite3 with
one specific database, and created a simpler database that shows several
problems and opportunities for improvement in sqlite3.

I probably only discovered this problem because I was using sqlite3_trace to
output the SQL from a server application I'm working on.

The bugs/opportunities for improvement found:

1- A missing comma separating table constraints is not reported as an error.

2- Duplicated table constraints are not reported as an error.

3- The sqlite3_trace hook function is called with misleading info from
sqlite3's internal DML operations that satisfy "ON DELETE SET NULL". See below
the output of the C program, with comments. Some applications use the output
of sqlite3_trace to replicate the database, and having internal-only
operations sent to it will create problems.

4- sqlite3 does not perform any optimization by merging "table scans"
searching for the same value in more than one column of the same table. See
below the output of: sqlite3 test-fkbugs.db "explain query plan delete from aa
where id=10";


Based on this experience, I suggest removing the output of internal
operations from sqlite3_trace (see the DML operations that satisfy "ON DELETE
SET NULL") and adding another trace hook, "sqlite3_trace_explain_query", that
would also show the internal sqlite3 operations at a high level, a kind of mix
of sqlite3_trace + "explain", which for this example database would give
output like this:

---  

/test-sqlite-bug  

SQL: INSERT INTO "aa"("id","name") VALUES(10, 'daddad')
SQL: DELETE FROM aa WHERE id=10
0|0|0|SEARCH TABLE aa USING INTEGER PRIMARY KEY (rowid=?)
SQL: -- TRIGGER aa_delete_trigger  

0|0|0|SCAN TABLE tbl for constrained_fkey_aa_id
0|0|0|SCAN TABLE tbl for constrained_fkey_ab_id
0|0|0|SCAN TABLE tbl for constrained_fkey_ac_id
0|0|0|SCAN TABLE tbl for constrained_fkey_ad_id
0|0|0|SCAN TABLE tbl for constrained_fkey_ad_id << duplicated constraint would
not exist if detected on create table
0|0|0|SCAN TABLE tbl for constrained_fkey_ad_id << duplicated constraint would
not exist if detected on create table
0|0|0|SCAN TABLE tbl for constrained_fkey_ad_id << duplicated constraint would
not exist if detected on create table
0|0|0|UPDATE TABLE tbl constrained_fkey_aa_id "ON DELETE SET NULL"
0|0|0|UPDATE TABLE tbl constrained_fkey_aa_id "ON DELETE SET NULL"
---  


- the database "test-fkbugs.db"  

PRAGMA foreign_keys=OFF;

BEGIN TRANSACTION;

CREATE TABLE aa(id INTEGER PRIMARY KEY, name TEXT);
INSERT OR IGNORE INTO aa(id, name) VALUES 
(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd');

CREATE TABLE IF NOT EXISTS tbl(
    id   INTEGER PRIMARY KEY, 
    name varchar,
    a_id INTEGER,
    b_id INTEGER,
    c_id INTEGER,
    d_id INTEGER,
    CONSTRAINT constrained_fkey_aa_id FOREIGN KEY(a_id) REFERENCES aa(id)
ON DELETE SET NULL   -- missing comma separator not detected
    CONSTRAINT constrained_fkey_ab_id FOREIGN KEY(b_id) REFERENCES aa(id)
ON DELETE SET NULL,
    CONSTRAINT constrained_fkey_ac_id FOREIGN KEY(c_id) REFERENCES
aa(id)  -- missing comma separator
    CONSTRAINT constrained_fkey_ad_id FOREIGN KEY(d_id) REFERENCES aa(id)
    CONSTRAINT constrained_fkey_ad_id FOREIGN KEY(d_id) REFERENCES aa(id)
-- duplicated constraint not detected
    CONSTRAINT constrained_fkey_ad_id FOREIGN KEY(d_id) REFERENCES aa(id)
-- duplicated constraint not detected
    CONSTRAINT constrained_fkey_ad_id FOREIGN KEY(d_id) REFERENCES aa(id)
-- duplicated constraint not detected
);

INSERT OR IGNORE INTO tbl(id, name) VALUES 
(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd');

CREATE TRIGGER IF NOT EXISTS aa_delete_trigger BEFORE DELETE ON aa 
BEGIN
    SELECT RAISE(ABORT, 'Can not remove root/admin user!') WHERE OLD.id = 1;
END;

COMMIT;  

-  

- the C program to see the sqlite3_trace/constraint problem
"test-sqlite-bug.c"  

#include <stdio.h>
#include "sqlite3.h"

static void db_trace_callback(void *user, const char *sql)
{
    printf("SQL: %s\n", sql ? sql : "??");
}

int main(int argc, char *argv[])
{
    sqlite3 *db;
    int rc = sqlite3_open("test-fkbugs.db", &db);
    if(rc == SQLITE_OK)
    {
        char *errmsg;
        const char insert_szSQL[] = "INSERT INTO aa(id,name) VALUES(10, 'daddad')";
        const char delete_szSQL[] = "DELETE FROM aa WHERE id=10";
        sqlite3_trace(db, db_trace_callback, NULL);
        rc = sqlite3_exec(db, insert_szSQL, NULL, NULL, &errmsg);
        rc = sqlite3_exec(db, delete_szSQL, NULL, NULL, &errmsg);
        sqlite3_close(db);
    }
    return 0;
}

-  

- the shell script to compile the C program  

MYINC=.

gcc \
    -DTHREADSAFE=1 \
    -DSQLITE_DEFAULT_FILE_FORMAT=4 \
    -DSQLITE_DEFAULT_AUTOVACUUM=1 \
    -DSQLITE_DEFAULT_FOREIGN_KEYS=1 \
    -DSQLITE_ENABLE_COLUMN_METADATA=1 \
    -DSQLITE_ENABLE_FTS4=1 \
    -DSQLITE_ENABLE_FTS3_PARENTH

[sqlite] Determine query type

2015-08-06 Thread Ben Newberg
Is there a good way to determine if a sql query is either attempting to
modify a database, or simply querying it?

In the past, I have used sqlite3_update_hook with good results, but in this
current project, I am looking for a way to determine the type of query even
before the prepared statement is stepped through at all. I.e., based on the
query text alone, I want to try to make this determination.

Would Explain / [Query Plan] be useful in determining the type of query?

Thanks all.


[sqlite] Sorting by greatest count amount

2015-08-06 Thread jose isaias cabrera

Greetings.

I have this query,

SELECT pmuk, count(*) FROM LSOpenJobs WHERE pmuk LIKE '% %' GROUP BY pmuk;

this works but it is not sorted.  I tried things like

SELECT pmuk, count(*) FROM LSOpenJobs WHERE pmuk LIKE '% %' GROUP BY count(*) 
|| '-' || pmuk;

and

SELECT pmuk, count(*) FROM LSOpenJobs WHERE pmuk LIKE '% %' GROUP BY '' || 
count(*) || '' || '-' || pmuk;

I would like to have the pmuk sorted by the lowest to highest, if possible.  
Otherwise, the opposite.  Thanks.

josé


[sqlite] Determine query type

2015-08-06 Thread Roger Binns

On 08/06/2015 09:46 AM, Ben Newberg wrote:
> Is there a good way to determine if a sql query is either
> attempting to modify a database, or simply querying it?

You most likely want to use the authorizer.  It tells you exactly what
the query is doing, and you can block/render harmless/approve the
various actions making it up:

  https://sqlite.org/c3ref/set_authorizer.html

A very brief example of what you get called back with is here:

  http://rogerbinns.github.io/apsw/example.html#authorizer-example

Roger


[sqlite] MMAP performance with databases over 2GB

2015-08-06 Thread Roger Binns

On 08/06/2015 09:27 AM, Dan Kennedy wrote:
>> Is it using more CPU cycles in mmap mode or just taking longer?
>> If the former, does [time] attribute them to "user" or "system"?

It is taking longer.  I have 3 XML dumps which I turn into JSON
(incrementally) and then denormalise and insert into SQLite across
several tables.  While all this work is going on, the code prints out
statistics about how long it is running and about how many records per
second are being processed.  The final database size after commit and
vacuum is ~8GB.  There are a lot of foreign keys too, although all
simple (referring to an INTEGER PRIMARY KEY column).

I gave mmap a try - ie the *only* change was to add an extra pragma
before the beginning of the transaction:

   "pragma mmap_size="+str(2*1024*1024*1024)

In hard numbers, without that line I was doing ~1,118 records per
second and with it it does ~300.  A normal run takes about 1h20m but
the mmap one was still running 3 hours later when I aborted it.

(BTW this is all on a tmpfs filesystem on 64 bit Linux with swap
spread across two ssds, and 32GB of ram.  ie the actual storage
hardware isn't a factor.  Also single threaded because XML.)

>> How large are you letting the wal file grow between checkpoints?

Pretty much the entire database size.  For the tests I was starting
with a deleted database directory (ie no pre-existing files), and then
doing these pragmas:

  "pragma page_size=4096",
# "pragma mmap_size="+str(2*1024*1024*1024),
  "pragma journal_mode=wal",
  "pragma wal_autocheckpoint=1",
  "pragma foreign_keys=on",

Then I start a transaction, and do the importing within that
transaction.  The database file is 4kb during that process, the wal
file gets to be about 10GB.

If I use gdb to periodically break into the running process in the
mmap case, then it was always in sqlite3WalFindFrame.

I don't need any help fixing my importing process (eg don't need a
journal on an empty database anyway).  But it is frustrating that mmap
only goes up to a few kb shy of 2GB even for 64 bit, and I have one
example (ie anecdote not data) showing that mmap hurts for inserts on
>2GB databases.  Perhaps it is worth others testing to see if this is
a systemic problem, or just bad luck for me :-)

It may also be relevant that tables add columns over time.  I
dynamically add them after encountering previously unseen fields in
the JSON.  However I'd expect the schema to be final a few thousand
records in.  Most tables have 3 to 6 million records.

Roger


[sqlite] Read data from database slow the first time but then very fast thereafter. How can I improve the speed for the first time read?

2015-08-06 Thread John McKown
If the OP shows us the data base schema, and the select, then perhaps its
performance could be enhanced by changing something. One thing which I
sometimes do is "denormalize" a table by splitting it into two, or more,
tables which are related via a shared UUID value. I put the often used
columns in one and the lesser used columns in the other. That way, when I
do a SELECT which only needs the columns in the first, I don't need to read
the other table at all. In my case, this table was rarely updated. But
updating it (changing column values and adding new rows) is a bit more
difficult. Too bad that SQLite does not implement updatable views.
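[Editorial sketch] The hot/cold split described above, using an INTEGER PRIMARY KEY as the shared key for brevity (the message suggests a shared UUID; any unique key works the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Often-read columns in one small table...
    CREATE TABLE item_hot(id INTEGER PRIMARY KEY, name TEXT, price REAL);
    -- ...rarely-read bulky columns in another, sharing the same id.
    CREATE TABLE item_cold(id INTEGER PRIMARY KEY REFERENCES item_hot(id),
                           description TEXT, blob_data BLOB);
""")
conn.execute("INSERT INTO item_hot VALUES(1, 'widget', 9.99)")
conn.execute("INSERT INTO item_cold VALUES(1, 'long description...', x'00')")

# The common query touches only the small table:
print(conn.execute("SELECT name, price FROM item_hot").fetchall())

# The full row is recovered with a join only when needed:
print(conn.execute("""
    SELECT h.name, c.description
    FROM item_hot h JOIN item_cold c ON c.id = h.id
""").fetchall())
```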

On Thu, Aug 6, 2015 at 10:08 AM, Stephan Beal  wrote:

> On Thu, Aug 6, 2015 at 4:54 PM, Wade, William  wrote:
>
> > I'm an SQLite novice, so if I'm wrong about this, I hope someone will
> > correct me.
> >
> > The first query will be slower than later queries for the reason Simon
> > mentioned.
>
>
> Pedantically speaking, "will" is not quite right. That will _normally_
> apply, but depends on the whims (and memory pressure) of the OS.
>
> When this question/problem has come up before, some people have suggested
> "coercing" the raw db file into the filesystem cache with something like:
>
>   cat mydb > /dev/null
>
> (or whatever the equivalent is on Windows)
>
> which simply causes mydb to be read into the filesystem cache,
> independently of sqlite (again, subject to the whims of the OS).
>



-- 

Schrodinger's backup: The condition of any backup is unknown until a
restore is attempted.

Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will be.

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! <><
John McKown


[sqlite] System.Data.SQLite encryption

2015-08-06 Thread Marco Bambini
Anyone know which encryption algorithm is used in System.Data.SQLite?
Seems like it doesn't use the standard SEE sqlite extension.

Thanks a lot.
--
Marco Bambini
http://www.sqlabs.com
http://twitter.com/sqlabs
http://instagram.com/sqlabs




[sqlite] Determine query type

2015-08-06 Thread Scott Hess
Also consider https://www.sqlite.org/c3ref/stmt_readonly.html

-scott

On Thu, Aug 6, 2015 at 10:08 AM, Stephan Beal  wrote:

> On Thu, Aug 6, 2015 at 6:57 PM, Ben Newberg  wrote:
>
> > Excellent. This is exactly what I was looking for.
> >
>
> Great :). Now that i have some code in front of me i can say, more
> concretely, that sqlite3_column_count() is what you want.
>
> https://www.sqlite.org/c3ref/column_count.html
>


[sqlite] Bug in sqlite3_trace/trigger/delete

2015-08-06 Thread sqlite-mail
Hello !  

After waiting for any help on this issue, I tried to see what "explain
query plan delete from res_users where id=7" shows, and found that on this
particular database, because res_users.id is a foreign key in almost all
other tables, and sometimes in more than one field (most of them without an
index), deleting anything from res_users becomes a very costly operation.

It also shows that sqlite is not capable of merging table scans for the same
value on more than one column, and executes several table scans (an
optimization opportunity). But I'm still not sure why it's calling
sqlite3_trace with empty values. Would it be better if it showed "INTERNAL
QUERY SCAN TABLE .."?

Cheers !  

-  

0|0|0|SEARCH TABLE res_users USING INTEGER PRIMARY KEY (rowid=?)
0|0|0|SCAN TABLE ir_property
0|0|0|SCAN TABLE ir_property
0|0|0|SCAN TABLE res_lang
0|0|0|SCAN TABLE res_lang
0|0|0|SCAN TABLE ir_rule
0|0|0|SCAN TABLE ir_rule
0|0|0|SCAN TABLE sale_order
0|0|0|SEARCH TABLE sale_order USING COVERING INDEX sale_order_user_id_index
(user_id=?)
0|0|0|SCAN TABLE sale_order
0|0|0|SCAN TABLE stock_picking_type
0|0|0|SCAN TABLE stock_picking_type
0|0|0|SEARCH TABLE analytic_profit_emp_rel USING COVERING INDEX
analytic_profit_emp_rel_emp_id_index (emp_id=?)
0|0|0|SCAN TABLE calendar_event
0|0|0|SCAN TABLE calendar_event
0|0|0|SCAN TABLE calendar_event
0|0|0|SCAN TABLE fleet_vehicle_log_contract
0|0|0|SCAN TABLE fleet_vehicle_log_contract
0|0|0|SCAN TABLE fleet_vehicle_contract_state
0|0|0|SCAN TABLE fleet_vehicle_contract_state
0|0|0|SCAN TABLE fleet_vehicle_cost_frequency
0|0|0|SCAN TABLE fleet_vehicle_cost_frequency
0|0|0|SCAN TABLE fleet_vehicle_cost
0|0|0|SCAN TABLE fleet_vehicle_cost
0|0|0|SCAN TABLE purchase_order_line
0|0|0|SCAN TABLE purchase_order_line
0|0|0|SCAN TABLE purchase_order
0|0|0|SCAN TABLE purchase_order
0|0|0|SCAN TABLE purchase_order
0|0|0|SCAN TABLE purchase_order_method
0|0|0|SCAN TABLE purchase_order_method
0|0|0|SCAN TABLE purchase_order_state
0|0|0|SCAN TABLE purchase_order_state
0|0|0|SCAN TABLE sale_order_state
0|0|0|SCAN TABLE sale_order_state
0|0|0|SCAN TABLE crm_lead
0|0|0|SCAN TABLE crm_lead
0|0|0|SEARCH TABLE crm_lead USING COVERING INDEX crm_lead_user_id_index
(user_id=?)
0|0|0|SCAN TABLE hr_applicant
0|0|0|SCAN TABLE hr_applicant
0|0|0|SCAN TABLE hr_applicant
0|0|0|SCAN TABLE hr_applicant_appreciation
0|0|0|SCAN TABLE hr_applicant_appreciation
0|0|0|SCAN TABLE account_invoice
0|0|0|SCAN TABLE account_invoice
0|0|0|SCAN TABLE account_invoice
0|0|0|SCAN TABLE account_invoice_state
0|0|0|SCAN TABLE account_invoice_state
0|0|0|SCAN TABLE account_invoice_type
0|0|0|SCAN TABLE account_invoice_type
0|0|0|SCAN TABLE fleet_vehicle_cost_type
0|0|0|SCAN TABLE fleet_vehicle_cost_type
0|0|0|SCAN TABLE stock_location
0|0|0|SCAN TABLE stock_location
0|0|0|SCAN TABLE stock_location_usage
0|0|0|SCAN TABLE stock_location_usage
0|0|0|SCAN TABLE stock_picking_type_code
0|0|0|SCAN TABLE stock_picking_type_code
0|0|0|SEARCH TABLE share_wizard_res_user_rel USING COVERING INDEX
share_wizard_res_user_rel_user_id_index (user_id=?)
0|0|0|SEARCH TABLE sale_member_rel USING COVERING INDEX
sale_member_rel_member_id_index (member_id=?)
0|0|0|SEARCH TABLE res_groups_users_rel USING COVERING INDEX
res_groups_users_rel_uid_index (uid=?)
0|0|0|SEARCH TABLE res_company_users_rel USING COVERING INDEX
res_company_users_rel_user_id_index (user_id=?)
0|0|0|SEARCH TABLE project_user_rel USING COVERING INDEX
project_user_rel_uid_index (uid=?)
0|0|0|SEARCH TABLE pos_details_report_user_rel USING COVERING INDEX
pos_details_report_user_rel_wizard_id_index (wizard_id=?)
0|0|0|SEARCH TABLE gamification_invited_user_ids_rel USING COVERING INDEX
gamification_invited_user_ids_rel_res_users_id_index (res_users_id=?)
0|0|0|SEARCH TABLE gamification_challenge_users_rel USING COVERING INDEX
gamification_challenge_users_rel_res_users_id_index (res_users_id=?)
0|0|0|SEARCH TABLE forum_post_res_users_rel USING COVERING INDEX
forum_post_res_users_rel_res_users_id_index (res_users_id=?)
0|0|0|SEARCH TABLE crm_lead2opportunity_partner_mass_res_users_rel USING
COVERING INDEX
crm_lead2opportunity_partner_mass_res_users_rel_res_users_id_in
(res_users_id=?)
0|0|0|SCAN TABLE fleet_vehicle_transmission_type
0|0|0|SCAN TABLE fleet_vehicle_transmission_type
0|0|0|SCAN TABLE fleet_vehicle_fuel_type
0|0|0|SCAN TABLE fleet_vehicle_fuel_type
0|0|0|SCAN TABLE wkf_transition
0|0|0|SCAN TABLE wkf_transition
0|0|0|SCAN TABLE wkf_activity
0|0|0|SCAN TABLE wkf_activity
0|0|0|SCAN TABLE wkf
0|0|0|SCAN TABLE wkf
0|0|0|SCAN TABLE wizard_valuation_history
0|0|0|SCAN TABLE wizard_valuation_history
0|0|0|SCAN TABLE wizard_multi_charts_accounts
0|0|0|SCAN TABLE wizard_multi_charts_accounts
0|0|0|SCAN TABLE wizard_ir_model_menu_create
0|0|0|SCAN TABLE wizard_ir_model_menu_create
0|0|0|SCAN TABLE website_seo_metadata
0|0|0|SCAN TABLE website_seo_metadata
0|0|0|SCAN TABLE website_menu
0|0|0|SCAN TABLE website_menu
0|0|0|SCAN TABLE website_converter_test_sub
0|0|0|SCAN TABL

[sqlite] Regarding root in transitive closure extension of SQLite

2015-08-06 Thread Dominique Devienne
On Thu, Aug 6, 2015 at 8:55 AM, Sairam Gaddam 
wrote:

> I have enabled the transitive closure extension and I followed the steps
> regarding querying the virtual table mentioned in closure.c file as follows
>

Given that SQLite supports CTEs now, why use that extension?

AFAIK, it was there to add hierarchical query capability to SQLite, which
CTEs now provide built-in. --DD
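[Editorial sketch] The closure-style query rewritten as a recursive CTE against the group1 table from this thread; unlike the extension, nothing here requires the id columns to be INTEGER, so a TEXT root works the same way:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE group1(
        groupId  INTEGER PRIMARY KEY,
        parentId INTEGER REFERENCES group1
    );
    INSERT INTO group1 VALUES (1, NULL), (2, 1), (3, 1), (4, 2);
""")

# Root 1 and all descendants down to depth 2, closure-extension style:
rows = conn.execute("""
    WITH RECURSIVE under(id, depth) AS (
        SELECT 1, 0
        UNION ALL
        SELECT g.groupId, u.depth + 1
        FROM group1 g JOIN under u ON g.parentId = u.id
        WHERE u.depth < 2
    )
    SELECT id, depth FROM under ORDER BY id
""").fetchall()
print(rows)  # [(1, 0), (2, 1), (3, 1), (4, 2)]
```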


[sqlite] MMAP performance with databases over 2GB

2015-08-06 Thread Roger Binns

In my testing I am finding that using SQLite's mmap support with
larger databases (final db size is ~10GB) to create the database (WAL
mode) is considerably (about three times) slower than no mmap.

The default max mmap limit is 2GB (sadly).  Has anyone else tested
mmap with >2GB databases and have results?  I don't know if this is a
peculiarity of my data set, or because of how SQLite is implemented.

Roger


[sqlite] System.Data.SQLite encryption

2015-08-06 Thread Kevin Benson
On Thu, Aug 6, 2015 at 4:16 AM, Marco Bambini  wrote:

> Anyone know which encryption algorithm is used in System.Data.SQLite?
> Seems like it doesn't use the standard SEE sqlite extension.
>

https://www.mail-archive.com/sqlite-users at sqlite.org/msg87770.html

--
K e V i N