Re: [sqlite] Sqlite insertion performance

2007-09-16 Thread Joe Wilson
make that:

pragma default_cache_size=[half your machine RAM in bytes / page_size];

--- Joe Wilson <[EMAIL PROTECTED]> wrote:
> You don't have to recompile. 
> Just make a new database file with:
> 
> pragma page_size=32768;
> pragma default_cache_size=[your machine RAM in bytes / 32768];
> 
> --- RaghavendraK 70574 <[EMAIL PROTECTED]> wrote:
> > But there is a limit (3.4.0) which stops at a 32 KB page size.
> > Would a compile-time macro do it?



   


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] 3.4.2 (or 3.5.0) on Solaris 10?

2007-09-16 Thread Halton Huo
I haven't built SQLite on Solaris 10, but I have built it successfully on
Solaris Express Community Edition.

On Fri, 2007-09-14 at 16:20 +0100, Tim Bradshaw wrote:

> Has anyone successfully got either of these to build on Solaris 10,  
> using the gcc that ships with it?  I've tried on 10u4 on x86 and  
> (after fixing the known problem with B_FALSE/B_TRUE for 3.4.2) they  
> both failed sometime while linking.  I just did a
> 
> ./configure --prefix=/what/ever
> 
> with no special options.  The eventual target would be SPARC boxes  
> but I don't think the problem looks architecture-related.
> 
> Thanks
> 
> --tim
> 

-- 
Halton Huo
Solaris Desktop Team, Sun Microsystems
Tel: +86-10-82618200 ext. 82113/ +86-10-626732113
Fax: +86-10-62780969
eMail: [EMAIL PROTECTED]


Re: [sqlite] Sqlite insertion performance

2007-09-16 Thread Joe Wilson
You don't have to recompile. 
Just make a new database file with:

pragma page_size=32768;
pragma default_cache_size=[your machine RAM in bytes / 32768];
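
As a rough sketch, Joe's recipe can be scripted with Python's sqlite3 module. The RAM figure below is a made-up example; note also the follow-up correction that default_cache_size is counted in pages, so the value is (RAM to dedicate) divided by the page size:

```python
import os
import sqlite3

PAGE_SIZE = 32768                 # SQLite's maximum page size, in bytes
RAM_BYTES = 2 * 1024**3           # example: a machine with 2 GB of RAM
cache_pages = (RAM_BYTES // 2) // PAGE_SIZE   # default_cache_size counts pages

if os.path.exists("new.db"):
    os.remove("new.db")
conn = sqlite3.connect("new.db")
# page_size must be set before anything is written to the new file
conn.execute(f"PRAGMA page_size = {PAGE_SIZE};")
conn.execute(f"PRAGMA default_cache_size = {cache_pages};")
conn.execute("CREATE TABLE my (id INTEGER PRIMARY KEY, url TEXT UNIQUE);")
conn.commit()

print(conn.execute("PRAGMA page_size;").fetchone()[0])  # 32768
```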

--- RaghavendraK 70574 <[EMAIL PROTECTED]> wrote:
> But there is a limit (3.4.0) which stops at a 32 KB page size.
> Would a compile-time macro do it?


  






Re: [sqlite] Sqlite insertion performance

2007-09-16 Thread RaghavendraK 70574
But there is a limit (3.4.0) which stops at a 32 KB page size.
Would a compile-time macro do it?

regards
ragha

**
 This email and its attachments contain confidential information from HUAWEI, 
which is intended only for the person or entity whose address is listed above. 
Any use of the information contained herein in any way (including, but not 
limited to, total or partial disclosure, reproduction, or dissemination) by 
persons other than the intended recipient(s) is prohibited. If you receive this 
e-mail in error, please notify the sender by phone or email immediately and 
delete it!
 
*

- Original Message -
From: Joe Wilson <[EMAIL PROTECTED]>
Date: Monday, September 17, 2007 12:11 pm
Subject: Re: [sqlite] Sqlite insertion performance

> --- RaghavendraK 70574 <[EMAIL PROTECTED]> wrote:
> > But with large cache there is too much fragmentation observed (pgin/pgout)
> > over a period of time. Do you see this behaviour?
> 
> Fragmentation is not a function of page cache size, but yes,
> I also see this fragmentation if the secondary index exists before 
> the inserts. If you create the index after your inserts, you will 
> have far less (or no) fragmentation.
> 
> See the merge-sort point in:
> 
>  http://www.sqlite.org/cvstrac/wiki?p=ToDo
> 
> > My CacheSize: 32KB, PgSize: 8KB (to the limits)
> 
> Try a page size of 32768 and a value of half your RAM for cache size.




Re: [sqlite] Sqlite insertion performance

2007-09-16 Thread Joe Wilson
--- RaghavendraK 70574 <[EMAIL PROTECTED]> wrote:
> But with large cache there is too much fragmentation observed (pgin/pgout)
> over a period of time. Do you see this behaviour?

Fragmentation is not a function of page cache size, but yes,
I also see this fragmentation if the secondary index exists before 
the inserts. If you create the index after your inserts, you will 
have far less (or no) fragmentation.

See the merge-sort point in:

  http://www.sqlite.org/cvstrac/wiki?p=ToDo

> My CacheSize: 32KB, PgSize: 8KB (to the limits)

Try a page size of 32768 and a value of half your RAM for cache size.
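
The create-the-index-after-the-inserts point can be sketched like this (schema and row count are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my (id INTEGER PRIMARY KEY, url TEXT)")

rows = [(f"http://example.com/{i}",) for i in range(10000)]
with conn:  # one transaction for the whole batch
    conn.executemany("INSERT INTO my(url) VALUES (?)", rows)

# Build the secondary index only after the bulk load; SQLite can then
# fill the index b-tree once instead of rebalancing it on every insert.
conn.execute("CREATE UNIQUE INDEX myurl ON my(url)")

print(conn.execute("SELECT count(*) FROM my").fetchone()[0])  # 10000
```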


   





Re: [sqlite] Sqlite insertion performance

2007-09-16 Thread RaghavendraK 70574
But with large cache there is too much fragmentation observed (pgin/pgout)
over a period of time. Do you see this behaviour?

My CacheSize: 32KB, PgSize: 8KB (to the limits)

regards
ragha


- Original Message -
From: Joe Wilson <[EMAIL PROTECTED]>
Date: Monday, September 17, 2007 11:07 am
Subject: Re: [sqlite] Sqlite insertion performance

> --- "D. Richard Hipp" <[EMAIL PROTECTED]> wrote:
> > >>> I have been struggling with the performance of insertion in sqlite.
> > >>>
> > >>> Here we have a very simple case :
> > >>>
> > >>> A table with an integer autoincrement primary key and a text
> > >>> field that is unique.
> > >>>
> > >>> CREATE TABLE my (id PRIMARY KEY, url);
> > >>>
> > >>> CREATE UNIQUE INDEX myurl ON my(url);
> > >>>
> > >>> My application requires inserting up to 10 million records in
> > >>> batches of 20 thousand records.
> > 
> > For each group of 20,000 records, first insert them into a TEMP table.
> > Call the temp table t1.  Then transfer the records to the main table
> > as follows:
> > 
> >   INSERT OR IGNORE INTO my(url) SELECT url FROM t1 ORDER BY url;
> 
> I had no performance improvement with that temp store staging table
> technique in my testing - actually it was slower.
> 
>   http://www.mail-archive.com/sqlite-users@sqlite.org/msg22143.html
> 
> Mind you, the table I was testing against had 4 indexes, whereas the above
> table has 2. I also wasn't using "OR IGNORE". There might be a difference.
> 
> Just setting pragma cache_size to a huge value and inserting into
> the table normally in large batches resulted in better performance in
> my case. It may have already been mentioned, but having a big
> database page_size value helps minimize the disk writes as well.
> 




Re: [sqlite] Multi-User confusion

2007-09-16 Thread Joe Wilson
--- Zbigniew Baniewski <[EMAIL PROTECTED]> wrote:
> On Sun, Sep 16, 2007 at 05:47:07PM -0700, Trevor Talbot wrote:
> 
> > The default locking mechanism relies on the underlying filesystem to
> > provide the needed locking guarantees.  In this case, the OP is
> > needing to access a database on a networked filesystem, and many
> > networked filesystems are unable to provide proper locking.  So no, if
> > the underlying filesystem is "broken", the database is not protected.
> 
> And what about sharing an SQLite database file among WinXP machines in the
> "network neighborhood"? Does the Windows filesystem assure enough protection?

You can do a crude check with the sqlite3 command-line shell and three
networked computers, A, B, and C (any OS).
Host a shared database file on computer A, say shared.db.
From computer B, open shared.db remotely and execute "BEGIN EXCLUSIVE;".
From computer C, open shared.db remotely and execute "BEGIN EXCLUSIVE;".
If computer C gets the error "SQL error: database is locked", then
the locking probably works.

Repeat the test with just 2 computers to test local locking versus remote 
locking.
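
The same idea can be sketched on one machine with two local connections standing in for computers B and C (this only exercises local locking, of course; the file name is illustrative):

```python
import sqlite3

db = "shared.db"
b = sqlite3.connect(db, isolation_level=None)             # autocommit mode
c = sqlite3.connect(db, isolation_level=None, timeout=0)  # fail fast, don't wait

b.execute("BEGIN EXCLUSIVE;")        # "computer B" takes the exclusive lock
try:
    c.execute("BEGIN EXCLUSIVE;")    # "computer C" should be refused
    print("locking is broken: both connections got the lock")
except sqlite3.OperationalError as e:
    print(e)                         # database is locked
b.execute("ROLLBACK;")
```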



  







Re: [sqlite] Sqlite insertion performance

2007-09-16 Thread Joe Wilson
--- "D. Richard Hipp" <[EMAIL PROTECTED]> wrote:
> >>>
> >>> I have been struggling with the performance of insertion in sqlite.
> >>>
> >>> Here we have a very simple case :
> >>>
> >>> A table with an integer autoincrement primary key and a text  
> >>> field that is
> >>> unique.
> >>>
> >>> CREATE TABLE my (id PRIMARY KEY, url);
> >>>
> >>> CREATE UNIQUE INDEX myurl ON my(url);
> >>>
> >>>
> >>> My application requires inserting up to 10 million records in  
> >>> batches of
> >>> 20 thousand records.
> 
> For each group of 20,000 records, first insert them into a TEMP table.
> Call the temp table t1.  Then transfer the records to the main table
> as follows:
> 
>   INSERT OR IGNORE INTO my(url) SELECT url FROM t1 ORDER BY url;

I had no performance improvement with that temp store staging table
technique in my testing - actually it was slower.

  http://www.mail-archive.com/sqlite-users@sqlite.org/msg22143.html

Mind you, the table I was testing against had 4 indexes, whereas the above 
table has 2. I also wasn't using "OR IGNORE". There might be a difference.

Just setting pragma cache_size to a huge value and inserting into
the table normally in large batches resulted in better performance in 
my case. It may have already been mentioned, but having a big 
database page_size value helps minimize the disk writes as well.
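
DRH's staging-table suggestion can be sketched roughly like this (batch size and URLs are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my (id INTEGER PRIMARY KEY, url TEXT)")
conn.execute("CREATE UNIQUE INDEX myurl ON my(url)")

def insert_batch(conn, urls):
    # Stage the batch in a TEMP table, then transfer it in sorted order
    # so the unique index is fed mostly-sequential keys.
    conn.execute("CREATE TEMP TABLE t1 (url TEXT)")
    conn.executemany("INSERT INTO t1(url) VALUES (?)", ((u,) for u in urls))
    conn.execute("INSERT OR IGNORE INTO my(url) SELECT url FROM t1 ORDER BY url")
    conn.execute("DROP TABLE t1")
    conn.commit()

insert_batch(conn, (f"http://example.com/{i}" for i in range(20000)))
print(conn.execute("SELECT count(*) FROM my").fetchone()[0])  # 20000
```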



   





Re: [sqlite] Multi-User confusion

2007-09-16 Thread Zbigniew Baniewski
On Sun, Sep 16, 2007 at 05:47:07PM -0700, Trevor Talbot wrote:

> The default locking mechanism relies on the underlying filesystem to
> provide the needed locking guarantees.  In this case, the OP is
> needing to access a database on a networked filesystem, and many
> networked filesystems are unable to provide proper locking.  So no, if
> the underlying filesystem is "broken", the database is not protected.

And what about sharing an SQLite database file among WinXP machines in the
"network neighborhood"? Does the Windows filesystem assure enough protection?
-- 
pozdrawiam / regards

Zbigniew Baniewski




Re: [sqlite] Error in SQLITE on applyNumericAffinity

2007-09-16 Thread Trevor Talbot
On 9/12/07, Stéphane Thiers <[EMAIL PROTECTED]> wrote:

Somehow I missed replying earlier, sorry about that.

> The column which contains these numbers is
> declared as REAL. This raises another question to
> me: I thought that the numbers were stored as
> strings, so why sqlite would try to transform these strings into double?

By declaring the column as REAL, you provided a type affinity.  That
means sqlite will try to convert any provided input into REAL form
(which is double), and only store the original type if that fails.

If you don't declare a type for the column, then the value you input
will stay in the same type you provided it in (i.e. string).
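
The affinity behaviour can be observed directly with typeof(); a small sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (r REAL, anything)")    # second column: no affinity
conn.execute("INSERT INTO t VALUES ('1.5', '1.5')")  # both supplied as strings
conn.execute("INSERT INTO t VALUES ('abc', 'abc')")  # not convertible to REAL

for row in conn.execute("SELECT typeof(r), typeof(anything) FROM t"):
    print(row)
# ('real', 'text') -- '1.5' was converted by the REAL affinity
# ('text', 'text') -- 'abc' could not be converted, stored as-is
```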




Re: [sqlite] Multi-User confusion

2007-09-16 Thread Trevor Talbot
On 9/16/07, Zbigniew Baniewski <[EMAIL PROTECTED]> wrote:

> I've found a remark regarding write concurrency in SQLite:
>
> "All SQLite write operations obtain an exclusive lock on the whole database"
> ( http://www.mail-archive.com/sqlite-users@sqlite.org/msg18342.html )
>
> Doesn't it mean that the database is already protected enough by its internal
> design, and there's no need to bother with those "dot locks"?

The default locking mechanism relies on the underlying filesystem to
provide the needed locking guarantees.  In this case, the OP is
needing to access a database on a networked filesystem, and many
networked filesystems are unable to provide proper locking.  So no, if
the underlying filesystem is "broken", the database is not protected.

Dot files replace the locking mechanism with a convention that skips
the filesystem, but the consequence is that if anything else accesses
the database file without respecting that convention, corruption may
occur.




Re: [sqlite] Re: "attach database"

2007-09-16 Thread Zbigniew Baniewski
On Sun, Sep 16, 2007 at 07:09:58PM -0400, Igor Tandetnik wrote:

> No. You will have to do the same replication process as in #1, only in 
> the other direction.

Tough luck.

Yes, found an example - maybe someone will be interested too:

http://www.tcl.tk/community/tcl2004/Papers/D.RichardHipp/drh.html
-- 
pozdrawiam / regards

Zbigniew Baniewski




Re: [sqlite] Multi-User confusion

2007-09-16 Thread Zbigniew Baniewski
On Thu, Sep 06, 2007 at 10:33:56AM -0700, Joe Wilson wrote:

> Without some code modification, I doubt it.
> I don't see any mention of "dot" in os_win.c.
> 
> However, in SQLite 3.5 you can define your own OS Interface File 
> Virtual Methods Object and create your own file lock/unlock routines 
> based on the dot locks in os_unix.c:
> 
>   http://sqlite.org/capi3ref.html#sqlite3_io_methods

I've found a remark regarding write concurrency in SQLite:

"All SQLite write operations obtain an exclusive lock on the whole database"
( http://www.mail-archive.com/sqlite-users@sqlite.org/msg18342.html )

Doesn't it mean that the database is already protected enough by its internal
design, and there's no need to bother with those "dot locks"?
-- 
pozdrawiam / regards

Zbigniew Baniewski




[sqlite] Re: "attach database"

2007-09-16 Thread Igor Tandetnik

Zbigniew Baniewski  wrote:

found this interesting feature - but I'm not quite sure how it works,
and the manual doesn't contain a good example.


All the ATTACH DATABASE command gives you is the ability to run 
statements across database files. There is no magic replication going 
on, or anything like that.



1. Copy the entire contents of existing database from file into
:memory:.

  Can I "just copy database" (both structure & data) in one shot - or
  I've got to recreate a "twin database" structure in :memory: first,
  and only then copy data from the "main" database?


You will have to recreate the schema and copy data over, one table at a 
time.



3. When finishing: save changed :memory: contents into "main
  database" file. Will it be automatically saved when DETACH-ing
:memory: ? I'm afraid, no?


No. You will have to do the same replication process as in #1, only in 
the other direction.
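
The replication Igor describes can be sketched with ATTACH like this (file and table names are illustrative; each table is recreated and copied by hand, as he says):

```python
import os
import sqlite3

# Build a small on-disk "main database" (illustrative schema)
if os.path.exists("main.db"):
    os.remove("main.db")
disk = sqlite3.connect("main.db")
disk.execute("CREATE TABLE my (id INTEGER PRIMARY KEY, url TEXT)")
disk.executemany("INSERT INTO my(url) VALUES (?)", [("a",), ("b",)])
disk.commit()
disk.close()

# 1. Copy it into :memory:, one table at a time
mem = sqlite3.connect(":memory:", isolation_level=None)  # autocommit
mem.execute("ATTACH DATABASE 'main.db' AS src")
mem.execute("CREATE TABLE my (id INTEGER PRIMARY KEY, url TEXT)")  # recreate schema
mem.execute("INSERT INTO my SELECT * FROM src.my")                 # copy data
mem.execute("DETACH DATABASE src")

# 2. Do the current work on the :memory: copy...
mem.execute("INSERT INTO my(url) VALUES ('c')")

# 3. Save back: the same process in the other direction
mem.execute("ATTACH DATABASE 'main.db' AS dst")
mem.execute("DELETE FROM dst.my")
mem.execute("INSERT INTO dst.my SELECT * FROM my")
mem.execute("DETACH DATABASE dst")

print(sqlite3.connect("main.db").execute("SELECT count(*) FROM my").fetchone()[0])  # 3
```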


Igor Tandetnik 






[sqlite] "attach database"

2007-09-16 Thread Zbigniew Baniewski
Hallo,

found this interesting feature - but I'm not quite sure how it works,
and the manual doesn't contain a good example.

Perhaps someone using this feature could give me an example, how should it
look like in the case, when I want to:

1. Copy the entire contents of existing database from file into :memory:.

   Can I "just copy database" (both structure & data) in one shot - or I've
   got to recreate a "twin database" structure in :memory: first, and only
   then copy data from the "main" database?

2. Doing the current work just on the :memory:.

3. When finishing: save changed :memory: contents into "main database" file.
   Will it be automatically saved when DETACH-ing :memory: ? I'm afraid, no?
-- 
pozdrawiam / regards

Zbigniew Baniewski




Re: [sqlite] LINKING AND LOADING FTS - on Linux/Ubuntu

2007-09-16 Thread RaghavendraK 70574
 
>env | grep LD_LIBRARY_PATH;
This will display the path list. Check whether your library is present in
one of those directories. If not, add the path where fts2 exists to
LD_LIBRARY_PATH.

If you are new to unix/linux, I suggest going through the basics first.

regards
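
A concrete sketch of the check and the fix (/usr/local/lib is only an example location; use wherever your fts2 build actually put the library):

```shell
# Show the current search path for shared libraries
echo "$LD_LIBRARY_PATH"

# Check whether the fts2 shared object is in one of those directories
ls /usr/local/lib/libfts2.so 2>/dev/null

# If not, prepend its directory (example location) and retry the load
export LD_LIBRARY_PATH="/usr/local/lib:$LD_LIBRARY_PATH"
```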

- Original Message -
From: Uma Krishnan <[EMAIL PROTECTED]>
Date: Sunday, September 16, 2007 10:38 am
Subject: [sqlite] LINKING AND LOADING FTS - on Linux/Ubuntu

> Hello,
> 
> I'm having trouble loading fts2. I modified the makefile to create the
> fts2 library on Linux/Ubuntu.
> 
> When I attempt to load fts2 using the command select
> load_extension('fts2'), I get the error: shared library not found.
> (I noticed that it had not created the .so file, only a .la file.)
> 
> What am I doing wrong?
> 
> Thanks in advance
> 
> Uma
> 
> Igor Tandetnik <[EMAIL PROTECTED]> wrote:
> Kefah T. Issa wrote:
> >> I tried the ordered-urls-insert, the results were better, but it is
> >> still taking progressively longer time as the number of records
> >> increases.
> >>
> >> A fundamental question to be asked here :
> >>
> >> Shouldn't the time complexity (Big-O) of the insert operation be
> >> constant?
> 
> Of course not. It takes O(log N) to find an appropriate place in the
> index for every new record (where N is the number of records already
> inserted). Also, it generates a lot of disk activity once the index
> grows too large to fit in memory cache.
> 
> >> I even did a third test where the integer primary key is not auto
> >> increment; the same problem is observed.
> 
> The id is not a problem: O(log N) is caused by the index on url.
> 
> Igor Tandetnik
