Re: [sqlite] Question about aggregate returning empty row

2013-02-23 Thread Pierre Chatelier
Ok, thanks!

There are certainly good reasons for that, but I find it surprising at first 
sight.

Pierre


> That's the SQL standard -- a query with aggregate functions (and no
> GROUP BY) always returns at least one row.
> 
>> 
>> [tested under 3.6.12 and 3.7.15.2]
>> 
>> I have a question regarding the use of aggregate functions.
>> 
>> Let's imagine the following db :
>>> create table A (id1 INTEGER UNIQUE PRIMARY KEY, id2 INTEGER);
>> It is empty.
>> 
>> The following query :
>>> select id1,id2 from A;
>> returns nothing: there is no row.
>> 
>> However, the following query, using the aggregate min():
>>> select min(id1),id2 from A;
>> returns one row of NULL values (displayed as '|' in the shell).
>> 
>> Using avg(), max()... will do the same.
>> 
>> With the C interface, SQLITE_ROW is returned, and I must test 
>> sqlite3_column_type() against SQLITE_NULL to check that, in fact, there is no 
>> result.
>> 
>> Is this expected behaviour?
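
The behaviour is easy to reproduce from any binding; here is a minimal sketch using Python's sqlite3 module (table and column names taken from the question above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE A (id1 INTEGER UNIQUE PRIMARY KEY, id2 INTEGER)")

# Without an aggregate, an empty table yields no rows at all.
assert conn.execute("SELECT id1, id2 FROM A").fetchall() == []

# With an aggregate (and no GROUP BY), exactly one row comes back, with
# NULL values -- so the caller must test for NULL, not for "no row".
assert conn.execute("SELECT min(id1), id2 FROM A").fetchall() == [(None, None)]
```

This is the same check the C code must do with sqlite3_column_type() against SQLITE_NULL.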

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite aggregate cold/hot boot performance

2013-01-23 Thread Pierre Chatelier

Hi,


>> Is there some global cache retained by sqlite3.dll that makes my query
>> faster? Could it be rather related to some hard disk cache?
>
> SQLite uses normal file accesses, so the operating system will try to cache
> the data.

OK.

> And how many GB is the system using for file caching?  :)

I don't know! There are so many levels of caching that I do not know how much
can be expected at that level.

Regards,

Pierre Chatelier




[sqlite] Sqlite aggregate cold/hot boot performance

2013-01-23 Thread Pierre Chatelier

Hello,

I can observe a behaviour that I do not understand.

[context]
I have an application that links with sqlite3.dll. I can open a
document and perform a query on it. That query uses an aggregate
function that I created myself, registering callbacks such as
sqlite_aggr_indexset_step...
It works very well.

[what I do not understand]
I perform a computer cold boot, launch my app, open a doc, and perform the
query. The sqlite3_step() takes some time (a few seconds). That's ok, the
query is rather complex.
Now, I close my doc and reopen it. The same query performs very fast.
I close my app, restart it, open the doc, perform the query, and once
again it goes very fast.
I stop the computer, restart it, redo the above, and that time the
query is slow again.

Is there some global cache retained by sqlite3.dll that makes my query
faster? Could it be rather related to some hard disk cache? (Please
note that the database files the query runs on are about 150 MB.)
Any hint?

Regards,

Pierre Chatelier




[sqlite] Boost copy from one table to another?

2011-01-28 Thread Pierre Chatelier
Hello,

I have some data to transfer from table A to table B (same structure).
"INSERT INTO B SELECT * FROM A" works ok.
However, the performance is comparable to a loop inserting rows one by one. I 
thought that such a bulk copy could perform better thanks to internal 
optimizations. Is there a way to do better, or should I just be happy with that?

The context is a little more subtle.
Actually, my table structure is very simple (two columns: an INTEGER PRIMARY KEY 
and a BLOB of a few KB). I am generating data and storing it in the database. I am 
working with synchronous=OFF and journal_mode=OFF.
The simplest workflow works very well: I continuously read/filter my 
input data and store the results progressively, row by row, in the DB. I 
suppose that the page_size and cache_size settings are already doing a good memory 
cache job.

But I wondered if I could improve the performance with the following scheme:
- my disk DB is B
- I create a memory DB with the same structure, named A
- I attach B to A
- then, in a loop:
  - I insert the rows in A
  - when A is big enough, I flush A into B with "INSERT INTO B SELECT * FROM A"
  - I make A empty
- and so on, until the input data is exhausted

But the overall performance is comparable to not using A at all. Is it a stupid 
idea (given the way SQLite is already optimized), or can I do something 
clever with it?
A few more details:
- the whole DB cannot fit in memory
- the DB on disk may already have content: I must append data, and cannot use 
  the backup API.

Regards,

Pierre Chatelier


Re: [sqlite] Advices to get max performance with SQLITE and BLOBS

2009-03-16 Thread Pierre Chatelier
> A few hundred blocks of raw data? Blocksize approx 300K bytes?  
> Database
> created and dropped by the same process? 500 blocks is approx 150M
> bytes; why not keep it in a hash table in memory? If you keep it in a
> database or the file system, it's going to be washed through your real
> memory and pagefile-aka-swap-partition anyway, so just cut out the
> middlemen :-)

You're right, but who said I have only 1 DB at a time :-)?
In fact, I have several DBs and I do not know in advance what size they  
will be. Perhaps 500 MB. And I need RAM for other stuff, so the  
simplest thing is to use "normal" DBs. Using memory DBs and swapping  
them out afterwards would not be smooth.

But we are not answering my initial question!

Can I expect some gain from:
- recompiling SQLite (which options/DEFINEs would help?)
- using custom memory allocators (I am on Win32, in a multi-threaded  
environment, and yes, "it's bad")
- using compression

Regards,

Pierre Chatelier



[sqlite] Advices to get max performance with SQLITE and BLOBS

2009-03-13 Thread Pierre Chatelier
Hello,

I am using SQLite to store and retrieve raw data blocks that are  
basically ~300 KB each. Each block has an int identifier, so that insert/ 
select are easy. This is a very basic use: I do not use complex  
queries. Only "INSERT/SELECT where index=..."

Now, I am thinking about performance for writing a sequence of a few  
hundred 300 KB blocks, as fast as possible.
Obviously, I use the sqlite3_bind_blob(), sqlite3_blob_read() and  
sqlite3_blob_write() functions.
I have already tuned the PRAGMAs for journal/synchronous/page_size/ 
cache, so that it's rather efficient.
I do not DELETE any content and the whole database is dropped after  
use: VACUUM is not important.
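
The insert path described above (PRAGMA tuning plus one prepared INSERT with a bound blob, all inside a single transaction) might look like this in outline. The sketch uses Python's sqlite3 module; table name, page size, and cache size are illustrative, and in C the bind step is sqlite3_bind_blob():

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a throwaway file DB in the real setup

# Durability is irrelevant for a database that is dropped after use.
conn.execute("PRAGMA journal_mode=OFF")
conn.execute("PRAGMA synchronous=OFF")
conn.execute("PRAGMA page_size=8192")     # must be set before the schema is created
conn.execute("PRAGMA cache_size=-65536")  # negative value: cache size in KiB

conn.execute("CREATE TABLE blocks (id INTEGER PRIMARY KEY, data BLOB)")

# One prepared INSERT reused for every block, wrapped in a single
# transaction to avoid per-row commit overhead.
block = bytes(300 * 1024)  # stand-in ~300 KB block
with conn:
    conn.executemany(
        "INSERT INTO blocks (id, data) VALUES (?, ?)",
        ((i, block) for i in range(500)),
    )
```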

There are other ways to optimize, but I wonder if they are worth it, or if  
the gain would be only marginal for what I am doing.
1) recompile SQLite? Which compile options would help in this case?
2) use other memory allocators? I am not sure that writing big data  
blocks triggers many calls to malloc().
3) use compression? zlib could help, but since my data does not  
compress very well (let's say an average of 20% space can be saved per  
block), I am not sure that the compression time will be balanced by the  
reduced writing time.

Of course, I am only asking for advice based on your experience;  
there is certainly no exact answer, and it will always depend on my  
data.
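
On point 3, the trade-off can at least be measured directly before committing to it; a rough sketch with Python's zlib (the buffer below is a made-up stand-in, and repetitive data like it compresses far better than the 20% figure quoted, so the measurement only means something on real blocks):

```python
import zlib

block = bytes(range(256)) * 1200  # stand-in ~300 KB block

compressed = zlib.compress(block, level=1)  # level 1: fastest, least compression
ratio = len(compressed) / len(block)

# Compression pays off only if the time saved writing
# len(block) - len(compressed) bytes exceeds the compress/decompress cost.
restored = zlib.decompress(compressed)
assert restored == block
```

Timing zlib.compress at levels 1 and 6 against the measured disk write rate on actual blocks would settle the question for this data.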

Regards,

Pierre Chatelier



[sqlite] Transfer data between databases

2009-01-09 Thread Pierre Chatelier
Hello,

This may be a question with a very short answer...
I have two separate SQLite database files containing the same  
kind of tables. Is there a quick way to copy rows from one table of a  
file to the same table of the other file?
I suppose "no", and that I will have to perform a SELECT on one side and an  
INSERT on the other. But since one of my columns is a blob type, it is  
a little more painful than a simple string copy, because for performance  
I would have to handle the blob with the read/write functions. Right?

Anyway, are there tricks to know to make such a row transfer simple  
and efficient?
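
For what it's worth, SQLite's ATTACH DATABASE does allow exactly this: both files become visible to one connection, and a single INSERT...SELECT copies the rows, blobs included, entirely inside the engine with no round-trip through the application. A sketch in Python's sqlite3 module (file paths, table name, and sample data are made up):

```python
import os
import sqlite3
import tempfile

# Two hypothetical database files with the same table layout.
src_path = os.path.join(tempfile.mkdtemp(), "source.db")
dst_path = os.path.join(tempfile.mkdtemp(), "dest.db")

src = sqlite3.connect(src_path)
src.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, data BLOB)")
src.executemany("INSERT INTO items VALUES (?, ?)",
                [(i, bytes([i]) * 16) for i in range(10)])
src.commit()
src.close()

dst = sqlite3.connect(dst_path)
dst.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, data BLOB)")
# ATTACH makes the other file visible under a schema name; the copy
# then happens with one statement, blobs and all.
dst.execute("ATTACH DATABASE ? AS src", (src_path,))
with dst:
    dst.execute("INSERT INTO items SELECT * FROM src.items")
dst.execute("DETACH DATABASE src")
```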

Regards,

Pierre Chatelier