[sqlite] Performance tools?

2009-01-20 Thread Christopher Mason
I have some complex queries that take a long time.  Is there a tool that 
can tell me which steps in the query are taking the longest?  I'm 
thinking of output similar to the EXPLAIN or EXPLAIN QUERY PLAN that 
would annotate the tables/indices/VM instructions with how long that 
portion of the query took.

I've tried using ANALYZE and it makes some of the queries faster but 
some slower.  I've also noticed that SQLite really prefers subselects to 
joins.  I've read the material on the web that describes the query 
optimizer but I still have a hard time knowing how to decide whether to 
force an order on operations and what that order should be.  Does anyone 
have any hints?
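A side note for anyone reading along: SQLite of this era offers no per-opcode timings, but EXPLAIN QUERY PLAN is easy to experiment with from any binding. A minimal sketch using Python's bundled sqlite3 module, with a made-up table and index:

```python
import sqlite3

# Hypothetical table and index, just to show what EXPLAIN QUERY PLAN reports.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
con.execute("CREATE INDEX t_a ON t(a)")

# Each row describes one step of the chosen plan (e.g. which index is used).
for row in con.execute("EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 1"):
    print(row)
```

The exact row format varies between SQLite versions, but the detail column names the table and index the planner picked, which is what you compare when forcing different join orders.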

Thanks so much,

-c


-- 
Christopher Mason   Proteome Software(503) 244-6027
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] How to port sqlite to uc/os

2009-01-20 Thread YIN HUI-XIAN-BHMJ64
Hi all:
   I want to port SQLite to uc/os without file system support.
   Is it extremely difficult?
   Can someone do me a favor?
   Thanks.


Re: [sqlite] Recover deleted records

2009-01-20 Thread John Machin
On 21/01/2009 7:03 AM, Alex Mandel wrote:
> Griggs, Donald wrote:
>> Hi Alex
>>
>> I can't answer your specific questions, but I presume you've already
>> studied the file format document (most recently mentioned by Roger
>> Binn's recent post):
>>
>> http://www.sqlite.org/fileformat.html 
>>
> Yes I did read through that, but I didn't see what I was looking for.

Have a look at section 3.3.2 
(http://www.sqlite.org/fileformat.html#record_format) and when you get 
to mentions of "variable length integer" and start to wonder what that 
is, read section 3.3.1.
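That varint format is small enough to sketch. A hypothetical decoder in Python (the byte layout follows section 3.3.1: up to eight 7-bit bytes, high bit flagging continuation, then one full byte):

```python
def get_varint(buf, pos):
    """Decode one SQLite variable-length integer starting at buf[pos].

    The first eight bytes each contribute 7 bits (high bit set means
    'more bytes follow'); a ninth byte, if reached, contributes all 8 bits.
    Returns (value, position_after_varint).
    """
    value = 0
    for i in range(8):
        byte = buf[pos + i]
        value = (value << 7) | (byte & 0x7F)
        if byte < 0x80:          # high bit clear: this was the last byte
            return value, pos + i + 1
    value = (value << 8) | buf[pos + 8]
    return value, pos + 9

print(get_varint(bytes([0x81, 0x00]), 0))  # (128, 2)
```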


> Is there a list of binary codes or other delimiters likely to be
> encountered between records or rows, or is the magic of length of a
> given record only stored in some coded journal section or index.
> 
> So far things are looking good using the unix tr command and trying out
> various filters to remove unneeded characters. I also vacuumed a copy of
> the database and did a diff against the tr results from that to narrow
> down where my missing data is in the file.
> 
> Now I'm just trying to figure out how to slice it into records, which
> may depend on me knowing the data really well. The problem I'm
> running into is that there's no consistent character between two integer
> fields, so it's hard to split them apart.

Two levels of problem:
(1) Finding where each record starts
(2) Unpacking the record into columns
You would certainly need to write code to do the record unpacking, and 
probably for the record finding as well.

HTH,
John


Re: [sqlite] Recover deleted records

2009-01-20 Thread Alex Mandel
Griggs, Donald wrote:
> Hi Alex
> 
> I can't answer your specific questions, but I presume you've already
> studied the file format document (most recently mentioned by Roger
> Binn's recent post):
> 
> http://www.sqlite.org/fileformat.html 
> 
Yes I did read through that, but I didn't see what I was looking for.
Is there a list of binary codes or other delimiters likely to be
encountered between records or rows, or is the length of a given
record only stored in some coded journal section or index?

So far things are looking good using the unix tr command and trying out
various filters to remove unneeded characters. I also vacuumed a copy of
the database and did a diff against the tr results from that to narrow
down where my missing data is in the file.

Now I'm just trying to figure out how to slice it into records, which
may depend on me knowing the data really well. The problem I'm
running into is that there's no consistent character between two integer
fields, so it's hard to split them apart.

Thanks,
Alex



Re: [sqlite] 'UPDATE shop_orders SET comments=comments||? WHERE oid=?', ('test', '1')

2009-01-20 Thread Hoover, Jeffrey
If comments is NULL, then comments||? is also NULL. Try this:

comments=coalesce(comments,'')||?
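The difference is easy to reproduce. A minimal sketch using Python's sqlite3 module, with the table recreated from the names in the subject line:

```python
import sqlite3

# Minimal reproduction of the thread's schema (names taken from the subject line).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE shop_orders (oid INTEGER PRIMARY KEY, comments TEXT)")
con.execute("INSERT INTO shop_orders (oid, comments) VALUES (1, NULL)")

# NULL || 'test' evaluates to NULL, so this update silently leaves comments NULL:
con.execute("UPDATE shop_orders SET comments=comments||? WHERE oid=?", ("test", 1))
print(con.execute("SELECT comments FROM shop_orders WHERE oid=1").fetchone())  # (None,)

# coalesce() substitutes '' for NULL, so the concatenation takes effect:
con.execute("UPDATE shop_orders SET comments=coalesce(comments,'')||? WHERE oid=?", ("test", 1))
print(con.execute("SELECT comments FROM shop_orders WHERE oid=1").fetchone())  # ('test',)
```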



-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Gert Cuykens
Sent: Sunday, January 18, 2009 10:15 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] 'UPDATE shop_orders SET comments=comments||? WHERE
oid=?',('test', '1')

How do I do the following?

comments=comments||?

When I add a comment, nothing happens.

(Please reply to my email address; I have no subscription.)


Re: [sqlite] segmentation violation in fulltest on Mac OS X

2009-01-20 Thread Jens Miltner

Am 19.01.2009 um 18:42 schrieb D. Richard Hipp:

>
> On Jan 19, 2009, at 3:50 AM, Jens Miltner wrote:
>
>> Hello,
>>
>> I just upgraded to sqlite 3.6.10 and keep getting a segmentation
>> violation when running the full tests on Mac OS X:
>> The last test completed is consistently thread001.1.3.
>>
>
> This was a problem in the testing logic, not in the SQLite library
> itself.  The test logic was trying to run a threading test with some
> of the mutexes disabled.  Everything works correctly once mutexes are
> enabled properly.


Thanks for the info - I applied the changes from checkin #6193 and the  
seg violations disappeared.
However, I now get another bus error in perm-no_mutex_try.thread003.2

Do you need additional information?

-jens




Re: [sqlite] Compressed dump SQLite3 database

2009-01-20 Thread John Stanton
A compression program like gzip is not a "library"; it is a free-standing,
open-source program.  It has no secrets.

vlema...@ausy.org wrote:
> Hello, thank you and others for your answers.
>
> We are not allowed to use external libraries because of industrial
> certification constraints. We can use SQLite because we cannot do without
> a database, but validating it against those constraints is a big effort,
> so we have to reduce the use of such third-party libraries as much as
> possible.
> 
> The idea from Eric Minbiole to drop indexes in the copy file and to
> investigate with sqlite_analyzer is the best one so far!
> 
> For now we are just wondering how to use SQLite facilities, and if that's
> not sufficient, maybe we would think about developing a tiny compression
> algorithm ourselves, or not... There is no requirement for on-the-fly
> compression/decompression because it's for archiving only (fortunately!).
>
> ++
>
> Vincent
>
>   
>> Hello!
>>
>> In a message on Monday 19 January 2009 20:22:33, vlema...@ausy.org
>> wrote:
>> 
>>> It is a requirement that the size of those copies being as small as
>>> possible, without having to perform an external compression.
>>>   
>> You can use internal data compression. By compressing a few big fields
>> you can greatly reduce the size of your database. zlib on-the-fly
>> compression works well.
>>
>> Best regards, Alexey.
>> 
>
>
>



[sqlite] About SQLite cache match times and read() call times

2009-01-20 Thread yoky
Hi all,
  I added some debug information in sqlite3.c and measured SQLite cache
hit counts, miss counts, and read() call counts on Linux. I found that the
number of cache misses is larger than the number of read() calls.

Then I found that in btree.c, getAndInitPage() calls sqlite3PagerLookup(),
and sqlite3PagerLookup() calls sqlite3PcacheFetch(), which does the cache
search. If the page is not in the cache, getAndInitPage() calls
sqlite3BtreeGetPage() to get the page, and sqlite3BtreeGetPage() calls
sqlite3PagerAcquire(), which also calls sqlite3PcacheFetch() to do a cache
search.

My English is not very good; in a word: in getAndInitPage(), if the page
it wants is not in the SQLite cache, it searches the cache twice.
Is that reasonable, or did I do something wrong?
  Thanks very much!
  yang


Re: [sqlite] Recover deleted records

2009-01-20 Thread Griggs, Donald
Hi Alex

I can't answer your specific questions, but I presume you've already
studied the file format document (most recently mentioned by Roger
Binn's recent post):

http://www.sqlite.org/fileformat.html 



Re: [sqlite] .dump and transactions

2009-01-20 Thread Thomas Jarosch
On Monday, 12. January 2009 14:20:40 Thomas Jarosch wrote:
> If an open transaction would block the dump of the database,
> then the second command line tool should busy-wait until a timeout occurs
> or at least return an error message.

I finally tracked this down by testing sqlite 3.6.10 on my workstation
and on the target system. The target system is Red Hat 6.2 based (glibc
2.1.3) and running kernel 2.6.27.11. The glibc will be upgraded soon.

A strace run of the ".dump" command showed strange behavior:

write(1, "BEGIN TRANSACTION;\n", 19BEGIN TRANSACTION;
)= 19
brk(0x8055000)  = 0x8055000
fcntl(3, F_SETLK, {type=F_RDLCK, whence=SEEK_SET, start=1073741824, len=0}) = 
-1 EAGAIN (Resource temporarily unavailable)
fcntl(3, F_SETLK, {type=F_RDLCK, whence=SEEK_SET, start=1073741824, len=0}) = 
-1 EAGAIN (Resource temporarily unavailable)
write(1, "COMMIT;\n", 8COMMIT;


It turned out that the "configure" script for 3.6.10 generated by
autoconf 2.59 incorrectly detected large file support on this system.

config.h looked like this:
#define _FILE_OFFSET_BITS 64
/* #undef _LARGE_FILES */

I've rebuilt "configure" with autoconf 2.61 and now it detects this:
/* #undef _FILE_OFFSET_BITS */
/* #undef _LARGE_FILES */

I've now manually disabled large file support and everything works fine.

Cheers,
Thomas



[sqlite] SQLite JDBC driver performance

2009-01-20 Thread Christopher Mason
[Apologies for posting this here, but the sqlitejdbc list appears to be 
defunct.]

Hello.

I'm noticing a fairly severe performance difference between running a 
complex query from the sqlite 3.6.7 command line and from within the 
sqlite jdbc driver version 0.54 available at [1].

Sqlite command line:
real0m44.708s
user0m5.211s
sys 0m1.994s

java:
2.7min

I've tried a number of things:

  * I've tried both sqlite 3.6.1 and 3.6.7 (I upgraded the jdbc driver 
to  .7);
  * I've tried with and without the shared_cache option; this has no 
effect for me.
  * I've tried changing the optimization settings for sqlitejdbc to 
match those of the command line tool I built and installed (-O2).
  * Explicitly set the cache size of the


I've done some profiling [2] and the prevalence of 
'sqlite3_enable_shared_cache' in the java version is surprising.  The 
java version also uses significantly less CPU than the command line version.

I've copied and pasted the SQL out of the prepared statement, so I'm 
confident that I'm running the same SQL.  It's a single query, so I 
doubt that JDBC/JNI overhead is to blame.

Any help or ideas would be most appreciated!

Thanks so much,

-c


[1] http://zentus.com/sqlitejdbc/


[2]

Profiling with shark on Mac OS X 10.5.6:

java:
self  total
   0.0%  50.2%   sqlite3_step
   5.5%  48.4%sqlite3_blob_open
   9.2%  40.0% sqlite3_randomness
   8.4%  25.1%  sqlite3_enable_shared_cache
   2.1%  16.0%   sqlite3_config
   5.4%  7.6% sqlite3_malloc
   1.1%  1.2%  sqlite3_config
   0.0%  0.1%   sqlite3_free
   0.1%  0.1%sqlite3_vfs_unregister
   0.8%  1.0%  sqlite3_mutex_try
   0.1%  0.1%  sqlite3_mutex_leave
   0.2%  3.3% sqlite3_value_type
   0.9%  2.7% sqlite3_os_end
   0.2%  0.2% sqlite3_mutex_try
   0.1%  0.1% sqlite3_snprintf
   0.3%  0.3%sqlite3_free
   0.2%  0.2%sqlite3_malloc
   0.1%  0.1%sqlite3_snprintf

command line:
   self total
   0.0%  64.1% sqlite3_exec
   0.0%  64.0%  sqlite3_step
   8.6%  63.3%   sqlite3Step
   0.3%  21.0%sqlite3VdbeCursorMoveto
   6.0%  20.6% sqlite3BtreeMovetoUnpacked
   0.3%  10.6%  moveToChild
   1.0%  10.2%   getAndInitPage
   0.5%  5.1%  sqlite3PcacheFetch
   3.0%  4.5%   pcache1Fetch
   0.6%  0.9%pthreadMutexEnter
   0.2%  0.2% dyld_stub_pthread_self
   0.1%  0.1% dyld_stub_pthread_mutex_lock
   0.2%  0.2%pcache1PinPage
   0.2%  0.2%sqlite3_mutex_leave
   0.1%  0.1%sqlite3_mutex_enter
   0.1%  0.1%pthreadMutexLeave
   0.1%  0.1%dyld_stub_pthread_mutex_unlock
   0.1%  0.1%   pthreadMutexLeave
   0.0%  3.3%  sqlite3BtreeGetPage
   0.3%  0.3%  btreePageFromDbPage
   0.0%  0.2%  pagerPagecount
   0.1%  0.1%  sqlite3BtreeInitPage
   0.1%  0.1%  sqlite3PagerGetData
   0.1%  0.1%  sqlite3PagerGetExtra
   0.1%  0.1% btreePageFromDbPage
   0.1%  0.1% sqlite3PcacheFetch
   0.3%  2.0%moveToRoot
   1.9%  1.9%sqlite3GetVarint
   0.1%  0.1%sqlite3Get4byte
   0.1%  0.1%sqlite3PagerUnref
   0.1%  0.1%   sqlite3GetVarint
   0.1%  0.1%   sqlite3Get4byte
   1.5%  15.1%sqlite3BtreeMovetoUnpacked



-- 
Christopher Mason   Proteome Software(503) 244-6027


Re: [sqlite] Compressed dump SQLite3 database

2009-01-20 Thread Alexey Pechnikov
Hello!

In a message on Tuesday 20 January 2009 12:24:41, vlema...@ausy.org wrote:
> For now we are just wondering how to use SQlite facilities, and if it's
> not sufficient, maybe we would think of the opportunity to developp a tiny
> compression algorithm by ourselves, or not... There is no requirement of
> on-the-fly compression / decompression because it's for archive only
> (fortunately !).

You can build SQLite with zlib and use compress/uncompress SQL functions:

sqlite> select uncompress(compress('A same very big string'));
A same very big string

I reduced my databases by more than 2x by using compression on one or two
fields. Of course, the result depends on your dataset.
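Stock SQLite does not ship compress()/uncompress() SQL functions; Alexey's build presumably adds them via a zlib extension. The same effect can be sketched with application-defined functions in Python, where the function names simply mirror the thread:

```python
import sqlite3
import zlib

con = sqlite3.connect(":memory:")
# Register zlib-backed SQL functions under the names used in the thread.
con.create_function("compress", 1, lambda s: zlib.compress(s.encode()))
con.create_function("uncompress", 1, lambda b: zlib.decompress(b).decode())

print(con.execute("SELECT uncompress(compress('A same very big string'))").fetchone()[0])
# A same very big string

# Compression only pays off on big or repetitive fields:
big = "A same very big string " * 1000
print(len(big), "->", len(zlib.compress(big.encode())))
```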

Best regards, Alexey.


Re: [sqlite] Compressed dump SQLite3 database

2009-01-20 Thread vlemaire
Hello, thank you and others for your answers.

We are not allowed to use external libraries because of industrial
certification constraints. We can use SQLite because we cannot do without
a database, but validating it against those constraints is a big effort,
so we have to reduce the use of such third-party libraries as much as
possible.

The idea from Eric Minbiole to drop indexes in the copy file and to
investigate with sqlite_analyzer is the best one so far!

For now we are just wondering how to use SQLite facilities, and if that's
not sufficient, maybe we would think about developing a tiny compression
algorithm ourselves, or not... There is no requirement for on-the-fly
compression/decompression because it's for archiving only (fortunately!).

++

Vincent

> Hello!
>
> In a message on Monday 19 January 2009 20:22:33, vlema...@ausy.org
> wrote:
>> It is a requirement that the size of those copies being as small as
>> possible, without having to perform an external compression.
>
> You can use internal data compression. By compressing a few big fields
> you can greatly reduce the size of your database. zlib on-the-fly
> compression works well.
>
> Best regards, Alexey.





Re: [sqlite] Compressed dump SQLite3 database

2009-01-20 Thread Alexey Pechnikov
Hello!

In a message on Monday 19 January 2009 20:22:33, vlema...@ausy.org wrote:
> It is a requirement that the size of those copies be as small as
> possible, without having to perform an external compression.

You can use internal data compression. By compressing a few big fields you
can greatly reduce the size of your database. zlib on-the-fly compression
works well.

Best regards, Alexey.