Re: [sqlite] the speed of embedded database engines, sqlite3 vs berkeley db, I'm confused

2009-04-23 Thread RaghavendraK 70574
SQLite has the edge over BDB:
1) BDB's code is not as clean and readable as SQLite's.
2) The BDB forums are not as responsive as the SQLite list.
3) SQLite is stable and does not crash on restarts; BDB makes use of
   mmap, and its behaviour is undefined under disk-full conditions:
   it sometimes gets SIGBUS, sometimes throws an error.
4) SQLite supports a VFS, hence you can write your own layer to meet performance
   needs (see the sketch below).
5) BDB has too many parameters to configure; SQLite is a zero-configuration system.
6) SQLite is easy to extend for later additions such as ALTER TABLE; BDB is a key-value system.
7) SQLite's footprint is very small compared with BDB's.
8) SQLite does not support row- or table-level concurrency, but BDB supports it 
well.
9) The DB file is OS independent.

My opinion is that SQLite is a pain-free system.

regards
ragha
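
As a rough illustration of point 4 above (not part of the original mail), the sketch below registers a copy of the platform's default VFS under a second name and opens a database through it explicitly; a real performance layer would go on to override methods such as xOpen, xRead and xWrite on the copied sqlite3_vfs. The VFS name "myvfs" is illustrative.

    #include <sqlite3.h>

    /* Register a copy of the default VFS under a second name and
       open a database through it. */
    static int open_with_custom_vfs(const char *path, sqlite3 **pdb)
    {
        static sqlite3_vfs myvfs;
        sqlite3_vfs *def = sqlite3_vfs_find(0);   /* the default VFS */

        if (def == NULL)
            return SQLITE_ERROR;
        myvfs = *def;              /* start from the default behaviour */
        myvfs.zName = "myvfs";     /* a real layer would override xOpen etc. */
        sqlite3_vfs_register(&myvfs, 0);          /* do not make it the default */

        return sqlite3_open_v2(path, pdb,
                               SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                               "myvfs");
    }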


- Original Message -
From: liubin liu 7101...@sina.com
Date: Friday, April 24, 2009 9:05 am
Subject: [sqlite]  the speed of embedded database engines, sqlite3 vs berkeley 
db, I'm confused
To: sqlite-users@sqlite.org

 
 our project is using sqlite3, but found that the speed is too slow.
 I just tested the BerkeleyDB, and found that the speed is very 
 fast. But I
 knew the sqlite3 is fast enough. And so I'm confused.
 I may be using sqlite3 in wrong way?
 
 anyway, next is my test code. I'm glad to receive your message.
 
 __
 
 
 // http://www.ibm.com/developerworks/cn/linux/l-embdb/
 
 //head
 ///
 #include <stdio.h>
 #include <stdlib.h> // for system
 #include <string.h> // for memset strcpy
 #include <time.h>   // for time
 
 #include <sqlite3.h> // for Sqlite3
 #include <db.h>      // for Berkeley DB
 
 
 
 //   macro and struct   
 /
 #define DB_FILE_SQLITE "test_sqlite_0.1.db"
 #define DB_FILE_BDB    "test_bdb_0.1.db"
 
 struct customer
 {
   int c_id;
   char name[10];
   char address[20];
   int age;
 };
 
 
 
 //   global variable  
 ///
 
 sqlite3 *db = NULL;
 
 int ret = -1;
 
 
 
 //   func proto  
 ///
 
 void way01(); // read-speed comparison: Berkeley DB vs Sqlite3
 
 
 / sqlite3 //
 
 int sqlite_createtb(sqlite3 *db);
 int sqlite_insertdb(sqlite3 *db);
 
 int getdata_sqlite ( sqlite3 *db, struct customer *tb1 );
 
 
 /// berkeley db 
 
 int bdb_createdb();
 
 void print_error(int r);
 void init_dbt( DBT *key, DBT *data );
 
 
 
 /// code
 ///
 int main ( void )
 {
   int c = 0;
 
   system ( "rm -rf test_0.1.db" );
   ret = sqlite3_open ( DB_FILE_SQLITE, &db );
   ret = sqlite_createtb(db);
   ret = sqlite_insertdb(db);
   sqlite3_close (db);
 
   printf ( "Sqlite3 / Berkeley DB read-speed test, press 1 to run, q to quit\n" );
 
   printf ( "/\n" );
   printf ( "1 : compare reads - Berkeley DB vs Sqlite3 ///\n" );
 
   while ( (c=getchar()) != 'q' )
   {
     switch (c)
     {
     case '1':
       way01();
       break;
     default:
       break;
     }
   }
 
   system ( "rm -rf test_sqlite_0.1.db" );
   system ( "rm -rf test_bdb_0.1.db" );
 
   return 0;
 }
 
 ///
 // read comparison - Berkeley DB vs Sqlite3
 void way01()
 {
   time_t tick1, tick2;
 
   int i = 0;
   int num = 1000*100;
 
   struct customer tb_data;
 
   ///
   time ( &tick1 );
   for ( i=0; i<num; i++ )
   {
     ret = sqlite3_open ( DB_FILE_SQLITE, &db );
     ret = getdata_sqlite ( db, &tb_data );
     sqlite3_close (db);
   }
   time ( &tick2 );
   printf( "Sqlite3 : read %d records, elapsed: %4ld s\n", num,
           tick2 - tick1 );
 
   ///
   bdb_createdb();
 }
 
 
 
 
 
 
 
 ///
 void *callback(void 
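
One detail worth noting in way01() above: the SQLite connection is opened and closed inside the read loop, so each iteration pays the full open/parse cost. A minimal sketch of the more usual pattern (an editorial illustration, not code from this mail) opens once and reuses a prepared statement; the table and column names are assumptions about the test schema.

    #include <sqlite3.h>

    /* Open the database once, prepare the query once, and reuse the
       statement for every lookup instead of reopening per read. */
    static void read_many(const char *dbfile, int num)
    {
        sqlite3 *db;
        sqlite3_stmt *stmt;
        int i;

        if (sqlite3_open(dbfile, &db) != SQLITE_OK)
            return;
        if (sqlite3_prepare_v2(db,
                "SELECT name, address, age FROM customer WHERE c_id = ?1",
                -1, &stmt, 0) != SQLITE_OK) {
            sqlite3_close(db);
            return;
        }
        for (i = 0; i < num; i++) {
            sqlite3_bind_int(stmt, 1, i % 100);   /* illustrative key */
            while (sqlite3_step(stmt) == SQLITE_ROW)
                ;                                 /* consume the row */
            sqlite3_reset(stmt);
        }
        sqlite3_finalize(stmt);
        sqlite3_close(db);
    }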

Re: [sqlite] Using Sqlite in place of BDB

2008-09-10 Thread RaghavendraK 70574

BDB 4.3.28 has other facets, such as:
a) It crashes when disk space reaches zero.
b) After an abrupt crash, recovery can fail;
   the time taken to recover is also very long.
c) The footprint is too big.
d) Performance is very good on most platforms.
e) Support is not very responsive.
f) The database file cannot be copied directly; you have to use the dump and load facility, which is 
again time consuming.
g) If it is used in a multi-process environment, an abrupt crash
will render the application behaviour unpredictable due to locks held in the shared 
area.

SQLite is stable. In comparison, using a SQLite in-memory DB is hassle free. Support 
is excellent. 
SQLite has an R-Tree module which is similar to your requirement; you can find more 
in the mail archives.

Finally it depends on the usage context, and in our case we found SQLite more 
suitable.

regard
ragha

- Original Message -
From: Lawrence Gold [EMAIL PROTECTED]
Date: Wednesday, September 10, 2008 3:29 am
Subject: [sqlite] Using Sqlite in place of BDB

 Hello,
 
 I work for a company which is seeking to replace its homegrown  
 database engine with a more robust, modern engine.  I've looked at  
 BerkeleyDB, which would be ideal since all we really need are key/ 
 value pairs for our records and indexes, but its cost is rather  
 prohibitive.
 
 At this point I'm considering targeting SQLite's internal B-tree 
 API  
 directly, realizing that I'm assuming some risk as the API isn't  
 guaranteed to remain stable between releases.  Before I delve into  
 this, can anyone suggest a better approach to using SQLite in a 
 lower- 
 level manner for storing key/value pairs?
 
 drh mentioned the SQLite Stored Statement Extension in a post last 
 year:
   http://www.mail-archive.com/sqlite-users@sqlite.org/msg23591.html
 
 Does it in fact have an API that would lend itself well to 
 key/value  
 pairs?
 
 Thanks!
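 
 Not part of the original exchange, but a common alternative to targeting the internal B-tree API is to keep the key/value pairs in an ordinary table behind the public API; a minimal sketch, with illustrative table and function names:
 
     #include <sqlite3.h>
 
     /* Store one key/value pair through plain SQL rather than the
        internal B-tree interface. */
     static int kv_put(sqlite3 *db, const char *key, const char *val)
     {
         sqlite3_stmt *stmt;
         int rc;
 
         sqlite3_exec(db,
             "CREATE TABLE IF NOT EXISTS kv(k TEXT PRIMARY KEY, v BLOB)",
             0, 0, 0);
 
         rc = sqlite3_prepare_v2(db,
             "INSERT OR REPLACE INTO kv(k, v) VALUES (?1, ?2)", -1, &stmt, 0);
         if (rc != SQLITE_OK)
             return rc;
         sqlite3_bind_text(stmt, 1, key, -1, SQLITE_STATIC);
         sqlite3_bind_text(stmt, 2, val, -1, SQLITE_STATIC);
         rc = sqlite3_step(stmt);
         sqlite3_finalize(stmt);
         return rc == SQLITE_DONE ? SQLITE_OK : rc;
     }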
 
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Loading a existing database 100% into memory

2008-08-06 Thread RaghavendraK 70574
Hi  Stephen Woodbridge,

 (although it might
  be an interesting academic exercise to make a VFS port of SQLite 
 that uses
  memory arrays for read/write ops.)

Do you have any reference implementation of this kind?

regards
ragha



- Original Message -
From: Stephen Woodbridge [EMAIL PROTECTED]
Date: Thursday, August 7, 2008 7:24 am
Subject: Re: [sqlite] Loading a existing database 100% into memory

 Stephen Oberholtzer wrote:
  On Wed, Aug 6, 2008 at 6:26 PM, Brown, Daniel [EMAIL PROTECTED] 
 wrote: 
  Good afternoon list,
 
  I would like to load my current database file completely into 
 memory, mostly as an experiment to check SQLite's maximum memory 
 footprint, however searching through the documentation I can 
 only find references
  about how to create new databases that are completely memory 
 resident. Is there a way to do this?  I'm currently using the 
 SQlite console
  application for my testing if that makes a difference.
  
  
  What, exactly, is it you're after?  I can load a SQLite database 
 100% into
  memory quite quickly:
  
  int fd = open("sqlitedb.dat", O_RDONLY);
  struct stat info;
  fstat(fd, &info);
  char *buf = malloc(info.st_size);
  read(fd, buf, info.st_size);
  
  I find it extremely unlikely that this is what you want 
 (although it might
  be an interesting academic exercise to make a VFS port of SQLite 
 that uses
  memory arrays for read/write ops.)
  
  At the other end of the spectrum, you could just dump the entire 
 database on
  disk and then insert all the data into a :memory: database.  
 However, this
  doesn't seem like it would be very useful, either.
  
  This sounds like an XY problem.  What are you really trying to 
 accomplish? What constraints are preventing you from simply using 
 an on-disk database?
  
 
 Another interesting option might be to mmap the DB file so you use 
 the 
 OS virtual memory paging to map the file to memory as you need 
 access to 
 it. But this probably has the downside that writes are not sync'd 
 to 
 disk so in the event of a crash you're out of luck, but that is the 
 case 
 with any memory DB. The upside is that when you shutdown your DB 
 is 
 sync'd to disk and the OS paging is pretty efficient.
 
 -Steve W
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Thank you

2008-06-06 Thread RaghavendraK 70574

Kudos!!! to Dan/DRH. 
We always get responses from Dan/DRH which are logical, meaningful and on the dot.

regards
ragha


- Original Message -
From: D. Richard Hipp [EMAIL PROTECTED]
Date: Friday, June 6, 2008 6:02 pm
Subject: Re: [sqlite] Thank you

 I'd like to take credit for the new r-tree module because it is a 
 fine  
 piece of work.  But in truth the new r-tree module was written  
 entirely by Dan Kennedy.  http://www.sqlite.org/crew.html  Good 
 job,  
 Dan!
 
 D. Richard Hipp
 [EMAIL PROTECTED]
 
 
 
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Berkeley DB vs. SQLite for threaded application

2008-01-16 Thread RaghavendraK 70574
I would suggest you go for thread-specific SQLite handles.
This makes each thread operate on its own DB connection handle.

threadID = getthreadID();
if (threadIDHashList.find(threadID) == false)
  {
    // new thread: just add this threadID to the list,
    // create a new DB connection and save it as part of the TSD (thread-specific data).
  }

regards
ragha
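
A minimal C sketch of the thread-specific-handle idea above, using POSIX thread-specific data; the function names and database path handling are illustrative assumptions:

    #include <pthread.h>
    #include <sqlite3.h>

    static pthread_key_t  db_key;
    static pthread_once_t db_key_once = PTHREAD_ONCE_INIT;

    /* TSD destructor: close the per-thread connection when the thread exits. */
    static void close_db(void *p) { sqlite3_close((sqlite3 *)p); }

    static void make_key(void) { pthread_key_create(&db_key, close_db); }

    /* Return this thread's private connection, opening it on first use. */
    static sqlite3 *thread_db(const char *path)
    {
        sqlite3 *db;
        pthread_once(&db_key_once, make_key);
        db = (sqlite3 *)pthread_getspecific(db_key);
        if (db == NULL) {
            if (sqlite3_open(path, &db) != SQLITE_OK) {
                sqlite3_close(db);
                return NULL;
            }
            pthread_setspecific(db_key, db);
        }
        return db;
    }

Each thread then calls thread_db() instead of sharing one sqlite3* across threads.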

- Original Message -
From: Tomas Lee [EMAIL PROTECTED]
Date: Wednesday, January 16, 2008 12:54 pm
Subject: [sqlite] Berkeley DB vs. SQLite for threaded application

 I've got an application that has Berkeley DB embedded in it.  I want
 to replace Berkeley DB with SQLite.  (I plan to use the 3.5.4
 amalgamation, which is the latest I could find.)  The thing is, this
 application uses threads.  I know threads are evil, but this
 application uses them, and there it is.  So, I wanted to understand
 what I had to do to be safe.
 
 As I understand it, Berkeley DB has free-threaded database 
 handles, so
 my application can open a Berkeley DB database connection and have all
 of its thread use that same connection.  But SQLite doesn't allow
 this, so I'm going to have to change things.  In theory, I could just
 open and close a new connection whenever I want to access the
 database.  How much a performance hit is that?
 
 http://sqlite.org/faq.html tells me that I can move a connection
 handle across threads as long as that connection is holding no fcntl()
 locks, and that this is due to fcntl() bugs on some OSes, like 
 RedHat9.But what if I'm running on an OS that doesn't have these 
 bugs -- do
 I have to worry about moving a connection handle across threads?  And
 how can I tell if my OS has these bugs?
 
 ---
 --
 To unsubscribe, send email to [EMAIL PROTECTED]
 ---
 --
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Improving performance of SQLite. Anyone heard of Devic eSQL?

2007-12-16 Thread RaghavendraK 70574


SQLite has big names behind it. Maybe this should be
showcased in the sidebar on the front page.

Does it need any other brand-building activity?
At least we are better off with those names.

regards
ragha

- Original Message -
From: John Elrick [EMAIL PROTECTED]
Date: Monday, December 17, 2007 7:06 am
Subject: Re: [sqlite] Improving performance of SQLite. Anyone heard of Devic 
eSQL?

 John Stanton wrote:
  This also is an anecdote from some time back.  As we were signing a 
  fairly significant software contract with a large organization their 
  manager told us, "You guys know nothing about marketing.  Your 
  presentation was unprofessional, no glossy brochures, no audio visuals 
  and we would not have bought except that you were the only ones who 
  convinced us you could do the job."  We just smiled and watched the 
  ink dry while we pondered "where did we go right?".
 
  The simple truth is that if you hype a product and sell it into 
 an 
  area where it is inadequate your triumph is short lived and the 
 scorn 
  and litigation enduring.  On the other hand if you deliver a 
 solution 
  which works as well, or preferably better, than proposed you have 
  generated raving fans who will buy again and endorse your product 
 to 
  all and sundry.  Which is the better model?
 
 To quote a former programs manager for Bank of America: "the first 
 solution which meets my business needs and performs the job 
 adequately."  In this case, adequately can be defined as loosely as 
 "doesn't crash too often" or as stringently as "positively no errors", 
 depending on the business use.
 
 Keeping the discussion academic, hype a product... is a business 
 model 
 that apparently has been used to at least some degree by a company 
 called Microsoft.  It tends to work because the model permits them 
 such 
 an early lead that even better products have difficulty catching up.
 
 I do most of my programming in Delphi, a Borland product which 
 remains 
 in my opinion, even in its shadow of former glory state, a far more 
 straightforward and powerful product than Visual Studio.  Borland 
 has 
 always been a technical company, not a market driven one and its 
 flagship product is surviving only because it remains a more well 
 rounded Windows solution than its competition.  However, it is only 
 surviving and is unlikely to actually thrive ever again.
 
 So my suggested answer is, the proven model is dominate the market 
 early with an adequate product.  If your product is very good and 
 even 
 better than proposed, all the better.  But if you are Johnny come 
 lately, you will likely lose unless your product is very, very 
 good.  
 And, whether we like it or not, a big part of market domination is 
 to 
 convince all the decision makers (management) and decision breakers 
 (engineers with influence) that yours is the safest choice to make.
 
 FWIW
 
 
 John Elrick
 
 
 -
 To unsubscribe, send email to [EMAIL PROTECTED]
 
 -
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] --enable-cross-thread-connections

2007-11-28 Thread RaghavendraK 70574

There is no association between this flag and the code in sqlite 3.5.2;
does it really have any impact when building SQLite?

regards
ragha




-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] SQLITE_BUSY retry

2007-11-28 Thread RaghavendraK 70574
Overall, SQLite is losing credit here.
When I run SQLite in multithread and multi-process
mode, the system experiences an infinite loop in the
busy handler. There is also no log which tells
which process/thread has acquired the lock, hence requiring a complete system halt 
and restart. This was observed in 3.4.0, 3.4.2, 3.5.1 and 3.5.2.
Also, the support stated elsewhere for passing one connection
across threads in SQLite 3.5.2 does not appear to be valid.

regards
ragha


- Original Message -
From: John Stanton [EMAIL PROTECTED]
Date: Thursday, November 29, 2007 1:34 am
Subject: Re: [sqlite] SQLITE_BUSY  retry

 You could use a BEGIN IMMEDIATE to lock the DB before you launch 
 the 
 transaction and loop on SQLITE_BUSY or use the plain BEGIN which 
 will 
 allow reads during the transaction and not lock the DB until you 
 issue a 
 COMMIT (the END).  Just loop on the BUSY on the END SQL statement 
 until 
 the user who has the DB locked releases it.
 
 A technique we use to get a minimum latency but reasonably 
 efficient 
 busy wait is to issue a yield call each time an SQLITE_BUSY is 
 encountered so that the time slice is dropped and other processes 
 can 
 run.  An alternative is to issue a short delay or sleep.
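
A minimal C sketch of the retry-on-COMMIT approach described above, using the yield-based wait John mentions; the function name is illustrative:

    #include <sched.h>
    #include <sqlite3.h>

    /* Keep retrying COMMIT until the lock holder releases the database,
       yielding the CPU on each SQLITE_BUSY so other processes can run. */
    static int commit_with_retry(sqlite3 *db)
    {
        int rc;
        for (;;) {
            rc = sqlite3_exec(db, "COMMIT", 0, 0, 0);
            if (rc != SQLITE_BUSY)
                return rc;        /* SQLITE_OK or a real error */
            sched_yield();        /* drop the time slice and try again */
        }
    }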
 
 Joanne Pham wrote:
  Hi All,
  Here my statements to insert rows into the database
Open the database connection
BEGIN
  insert ...using sqlite3_step
  insert ...using sqlite3_step
END
  So at the time I issued END transaction I got the error message 
 SQLITE_BUSY so I need to issue the END transaction again or What 
 should I do in this case to handle SQLITE_BUSY.
  Thanks a lot in advance for the help or advice.
  JP
  
  
  
  - Original Message 
  From: Joanne Pham [EMAIL PROTECTED]
  To: sqlite-users@sqlite.org
  Sent: Wednesday, November 28, 2007 11:27:52 AM
  Subject: [sqlite] SQLITE_BUSY retry
  
  Hi All,
  I have used BEGIN and END Transaction to insert the data to 
 SQLite database.
  BEGIN
  insert ...
  insert ...
END
  
  When I issued the END operation the error message return back 
 is SQLITE_BUSY. 
  What should I do if I want to handle SQLITE_BUSY /retry the 
 transaction. Should I execute END transaction again.
  How to handle the SQLITE_BUSY?
  Thanks,
  JP
  
  

 
  Get easy, one-click access to your favorites. 
  Make Yahoo! your homepage.
  http://www.yahoo.com/r/hs
  
  

 
  Be a better sports nut!  Let your teams follow you 
  with Yahoo Mobile. Try it now.  
 http://mobile.yahoo.com/sports;_ylt=At9_qDKvtAbMuh1G1SQtBI7ntAcJ
 
 
 -
 To unsubscribe, send email to [EMAIL PROTECTED]
 
 -
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Disk caching impacts performance.

2007-11-09 Thread RaghavendraK 70574

A RAM drive involves a context switch (from user to kernel mode), hence there is some loss 
of performance.
Check this factor as well.

regards
ragha

- Original Message -
From: John Stanton [EMAIL PROTECTED]
Date: Friday, November 9, 2007 7:02 pm
Subject: Re: [sqlite] Disk caching impacts performance.

 Michael Scharf wrote:
  [EMAIL PROTECTED] wrote:
  Trevor Talbot [EMAIL PROTECTED] wrote:
  Beyond that, I'm not aware of anything that would help.
 
 
  All good advice.  But you left off the obvious:  Get a
  faster disk drive.  ;-)
  
  ...which does not really help unless you buy a very expensive
  flash disk drive. How much faster is a *really* fast spinning disk?
  
  Trevor, have you tried to put your database on a (fast!) USB stick?
  It should be much faster in 'seeking' but is slower in the
  data transfer. This would give some indication if the access
  is limited by seek or the disk reading speed.
  
  
  Michael
  
 A USB flash drive is not particularly fast due to the limited write 
 speed of flash memory and bus speed.  A fast disk spins at 15,000 
 rpm, 
 double the speed of the higher end 7,500 rpm disks and almost 3 
 times 
 the speed of the regular 5,400 rpm devices.
 
 If you want to simulate a disk with no latency set up a RAM drive.
 
 There is a physical constraint here.  If you want to verify that 
 your 
 data is safely written to non-volatile storage you have to live 
 with the 
 latency.  If that is unimportant to you you can relax the ACID 
 requirements and get faster writes, but when you do that there is 
 no 
 crying over lost data after a crash.
 
 
 -
 To unsubscribe, send email to [EMAIL PROTECTED]
 
 -
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Disk caching impacts performance.

2007-11-08 Thread RaghavendraK 70574
Try this: use mmap
(I assume your DB is read-only); this is much faster and
better than an ifstream read. Also, an ifstream read only keeps the data in the cache as 
long as no other serious
I/O occurs.

You need to either accept this, since we work with virtual memory, or
write your own file system which is memory based and short-circuits OS calls. 
SQLite 3.5.x has good support for such extensions.

regards
ragha
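
A minimal POSIX sketch (not from the original mail) of mapping a read-only database file into memory, as suggested above; the file name is passed in and the page-touch stride is illustrative:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Map a read-only file and touch each page to pull it into the
       OS page cache. */
    static int map_and_warm(const char *path)
    {
        struct stat st;
        int fd = open(path, O_RDONLY);
        if (fd < 0 || fstat(fd, &st) != 0)
            return -1;

        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) {
            close(fd);
            return -1;
        }

        volatile char sum = 0;
        for (off_t i = 0; i < st.st_size; i += 4096)
            sum += ((const char *)p)[i];   /* fault each page in */

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }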


- Original Message -
From: Julien Renggli [EMAIL PROTECTED]
Date: Thursday, November 8, 2007 4:15 pm
Subject: [sqlite] Disk caching impacts performance.

 Hello,
 
 
 
 I'm currently working on a project which is very likely to use 
 SQLite as
 data storage. Since performance is important for us, I already 
 found on
 the SQLite website many ways to optimise the code (always working in
 transactions where possible, using a page size of 4096 since it's
 running on the Windows platform, using integers primary keys, ...). 
 ButI have one problem that I solved in an unorthodox way; it 
 works, but
 maybe you have a better solution than mine?
 
 
 
 I've been doing some test with a ~100 MB database, in which I have 
 three
 tables: one for structured data (Objects, 2000 entries), one for the
 blobs we have to store (ObjectBlobs ID INTEGER PRIMARY KEY, Data BLOB)
 (8000 entries), and one which binds the structured data and the blobs
 (8000 entries). As you can imagine, each Object has 4 blobs linked to
 it; the blobs can be quite large (let's say up to 1 MB).
 
 My (C++) application just has to read the table Objects, and one of
 the Blobs for each Object.
 
 
 
 Now the first time I run my application, it takes quite a long time
 
 (30s) to load the blobs. But if I re-run the app, it only takes 1s to
 load them. It's clearly a disk caching issue: if I copy huge files to
 the disk between two runs, it takes again 30s to load the blobs (i.e.
 
 the DB is no longer in the disk cache). Profiling the application
 indicates sqlite::winRead() is the bottleneck.
 
 
 
 I then had the following idea: SQLite is probably reading the file
 randomly, depending on where the data lies. If I can force the DB 
 to be
 cached, everything should be fine. So before connecting the 
 database, I
 first read it sequentially (using a C++ ifstream) until the end of 
 file.
 It perfectly solves the database problem, even though I still 
 notice a
 difference (3s to read the file on 1st run, 0.2s later). But 3s is OK
 where 30s was worrying me.
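 
 A small C rendition (not part of Julien's mail) of the sequential pre-read trick he describes, reading the file once so the OS caches it before SQLite opens it; the buffer size is arbitrary:
 
     #include <stdio.h>
 
     /* Read the whole file sequentially and discard the data so the
        OS page cache is warm before sqlite3_open(). */
     static void warm_file_cache(const char *path)
     {
         char buf[1 << 16];
         FILE *f = fopen(path, "rb");
         if (f == NULL)
             return;
         while (fread(buf, 1, sizeof buf, f) == sizeof buf)
             ;   /* a short read means EOF or error */
         fclose(f);
     }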
 
 
 
 I hope I explained the situation clearly enough, and ask you now: is it 
 the only way to do it? I find the trick a bit nasty and don't like 
 it; maybe
 I missed something? Before you ask: I tried to VACUUM the DB, it only
 reduced the timings to 16s, which was still bad for our requirements.
 
 Tests with a larger DB (it can get much bigger than my example) and on
 different machines tend to confirm my theory.
 
 
 
 Thanks in advance (and a big thank for SQLite which is really nice and
 easy to use !),
 
 
 
 Julien Renggli
 
 
 
 P.S.: Some technical informations:
 
 - sqlite v.3.3.16 (we will upgrade to the latest version later), C API
 
 - Windows XP SP2
 
 - Timings on Pentium 4 3.4GHz, 2GB RAM
 
 
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: RE: [sqlite] Impact of no.of tables and records on Startup time

2007-10-25 Thread RaghavendraK 70574

Can cache and page size configuration improve or worsen the situation?

regards
ragha
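
For reference, a minimal C sketch (illustrative values, not a recommendation from this thread) of setting the pragmas in question right after opening a connection; note that page_size only takes effect on a fresh database or after a VACUUM, while cache_size is per connection and counted in pages:

    #include <sqlite3.h>

    /* Open a database and apply page/cache size settings to the connection. */
    static int open_tuned(const char *path, sqlite3 **pdb)
    {
        int rc = sqlite3_open(path, pdb);
        if (rc != SQLITE_OK)
            return rc;
        sqlite3_exec(*pdb, "PRAGMA page_size=8192", 0, 0, 0);   /* new DBs only */
        sqlite3_exec(*pdb, "PRAGMA cache_size=4000", 0, 0, 0);  /* pages, not bytes */
        return SQLITE_OK;
    }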

- Original Message -
From: Samuel R. Neff [EMAIL PROTECTED]
Date: Thursday, October 25, 2007 6:16 pm
Subject: RE: [sqlite] Impact of no.of tables and records on Startup time

 
 Complexity of the schema affects time required to open a connection 
 since the schema has to be read and processed.  With about 70 tables 
 each with
 lots of indexes and triggers, it takes us 17ms to open the connection.
 
 HTH,
 
 Sam 
 
 
 ---
 We're Hiring! Seeking a passionate developer to join our team building
 products. Position is in the Washington D.C. metro area. If interested
 contact [EMAIL PROTECTED]
 
 -Original Message-
 From: Kiran Kumar.M.R [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, October 25, 2007 5:54 AM
 To: sqlite-users@sqlite.org
 Subject: [sqlite] Impact of no.of tables and records on Startup time
 
 Hi,
 
 When SQLite is used in on disk mode,
 
 What is the impact of no. of tables and records on the startup time.
 
 For example,
 
 1. number of tables (10, 100, 1000 tables,  each having 100 
 records, each
 record 256 bytes)
 
 2. number of records in a single table (1 table, having 
 10, 100, 1000, 10000 records, each record 256 bytes)
 
 Will the startup time get affected? If yes in what order?
 
 Thanks,
 
 Kiran
 
 
 
 -
 To unsubscribe, send email to [EMAIL PROTECTED]
 
 -
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] End of Search Notification to Collation function

2007-09-30 Thread RaghavendraK 70574
Hi,

Can anyone suggest a way to notify a collation function of the end of a search from 
the controlling algorithm?

regards
ragha




-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Re: End of Search Notification to Collation function

2007-09-30 Thread RaghavendraK 70574

Hi,

It's like this:

I have one new collation function, TestEq.
I register this with SQLite, and also provide a context 
structure.

Now I write a query as below:

select * from table where col = 'xxx' collate TestEq 
and  col2='xxx' collate TestEq and col3='xxx' collate TestEq;

Now in the TestEq implementation I need to know the column for
which I am doing the comparison.

Most would suggest having different functions, but that is not possible, because 
table creation and the people writing the queries are dynamic (not controlled by me). I 
can only bundle my TestEq function and provide an API description of what it does. End 
users would combine it as above in the SQL statement and use it. So is there a way 
to get the column information for which the collate function is running?
It would be logical to have the column names as part
of the collate function.

Second part: the collate function needs an end-of-search notification, since the collate 
function can do a re-drill down or up based on the saved context.

All of this increases performance, as it happens within the SQLite boundary. 

regards
ragha
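
A minimal C sketch (not from the original mail) of registering a collation with a user-supplied context pointer via sqlite3_create_collation; SQLite passes the callback only that pointer plus the two values to compare, which is exactly the limitation discussed above. The context structure and names are illustrative.

    #include <string.h>
    #include <sqlite3.h>

    struct testeq_ctx {
        int calls;    /* example of state carried across comparisons */
    };

    /* Comparison callback: no column or end-of-search information arrives. */
    static int testeq_cmp(void *pCtx, int n1, const void *p1,
                          int n2, const void *p2)
    {
        struct testeq_ctx *ctx = (struct testeq_ctx *)pCtx;
        int rc = memcmp(p1, p2, n1 < n2 ? n1 : n2);
        ctx->calls++;
        return rc != 0 ? rc : n1 - n2;
    }

    static struct testeq_ctx g_ctx;

    static int register_testeq(sqlite3 *db)
    {
        return sqlite3_create_collation(db, "TestEq", SQLITE_UTF8,
                                        &g_ctx, testeq_cmp);
    }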

- Original Message -
From: Igor Tandetnik [EMAIL PROTECTED]
Date: Sunday, September 30, 2007 6:41 pm
Subject: [sqlite] Re: End of Search Notification to Collation function

 RaghavendraK 70574
 [EMAIL PROTECTED] wrote: 
  Can any suggest a way to notify collation function end of search
  from 
  control algorthim?
 
 What for? What exactly are you trying to achieve?
 
 Igor Tandetnik
 
 
 -
 To unsubscribe, send email to [EMAIL PROTECTED]
 
 -
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] sqlite3_open_v2 and SQLITE_OPEN_READONLY

2007-09-20 Thread RaghavendraK 70574
Hi,

Is the performance the same, better, or worse?

regards
ragha


- Original Message -
From: [EMAIL PROTECTED]
Date: Thursday, September 20, 2007 10:40 pm
Subject: Re: [sqlite] sqlite3_open_v2 and SQLITE_OPEN_READONLY

 Liam Healy [EMAIL PROTECTED] wrote:
  I tried to use sqlite3_open_v2 and SQLITE_OPEN_READONLY as 
 described in
  http://sqlite.org/capi3ref.html#sqlite3_open, but the symbol
  SQLITE_OPEN_READONLY is unknown, even though I have included
  sqlite3.h.  I am using version 3.4.2.  Was this symbol not added 
 until a later version, or am I supposed to use the definition 
 given on the
  web page?
  
 
 Sqlite3_open_v2() and SQLITE_OPEN_READONLY and a whole bunch
 of other stuff is all new to 3.5.0.  Version 3.5.0 is stable.
 It has lots of cool stuff.  Older versions are not supported
 (except for paying customers) - by which we mean that if any
 bugs are discovered they will be fixed in 3.5.0 only, not
 in branches.  You (and a lot of other people) really need
 to upgrade.
 --
 D. Richard Hipp [EMAIL PROTECTED]
 
 
 
 -
 To unsubscribe, send email to [EMAIL PROTECTED]
 
 -
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] large bulk insert speed observations when page_size and cache_size are varied

2007-09-18 Thread RaghavendraK 70574

Why has the CPU usage shot up by 100% (relative) when the full DB is in memory? This can 
be a concern. I understand there is not much I/O, but 100% is too high.

regards
ragha

- Original Message -
From: Joe Wilson [EMAIL PROTECTED]
Date: Tuesday, September 18, 2007 8:28 pm
Subject: [sqlite] large bulk insert speed observations when page_size and 
cache_size are varied

 The bash script below attempts to quantify the relationship between 
 SQLite (3.5.0 alpha) page_size and cache_size towards the speed of 
 populating a large table with many indexes.
 
 First some rough metrics on the final populated database:
 The database is approximately 208M in size, consisting of a single
 table, foo, and its associated indexes. The table 'foo' ends up 
 having 
 177,147 randomly populated rows of a few hundred bytes each. 
 Its schema can be gleaned from the script. The script builds a brand
 new database from scratch for each page_size/cache_size combination 
 for each run to largely eliminate operating system cache effects.
 
 #!/bin/bash
 SQLITE=./sqlite3
 for S in 4096 8192 16384 32768; do
 for C in 1000 2000 4000 8000 16000; do
 rm -f foo.db foo.csv
 echo "
  pragma page_size=$S;
  pragma default_cache_size=$C;
  CREATE TABLE foo(
    a text unique,
    b text unique not null,
    c text unique not null,
    primary key(c,a,b)
  );
  CREATE INDEX foo_ba on foo(b,a);
 " | $SQLITE foo.db
 $SQLITE > foo.csv <<EOF
  create view v1 as select 1 union all select 2 union all select 3;
  select hex(randomblob(17)),
 hex(randomblob(13)),
 hex(randomblob(131))
  from v1,v1,v1,v1,v1,v1,v1,v1,v1,v1,v1;
 EOF
 echo -n "page_size $S, cache_size $C : "
 time $SQLITE foo.db ".import foo.csv foo" 2>&1 | head -1
 done
 done
 
 The raw output of the script on a 512M Linux box:
 
 page_size 4096, cache_size 1000 : 39.72user 30.88system 
 4:32.46elapsed 25%CPU
 page_size 4096, cache_size 2000 : 40.67user 26.34system 
 4:46.75elapsed 23%CPU
 page_size 4096, cache_size 4000 : 40.47user 21.61system 
 4:34.05elapsed 22%CPU
 page_size 4096, cache_size 8000 : 41.80user 13.37system 
 4:55.15elapsed 18%CPU
 page_size 4096, cache_size 16000 : 42.23user 7.58system 
 4:10.79elapsed 19%CPU
 page_size 8192, cache_size 1000 : 40.62user 37.50system 
 3:11.05elapsed 40%CPU
 page_size 8192, cache_size 2000 : 43.01user 26.60system 
 3:04.52elapsed 37%CPU
 page_size 8192, cache_size 4000 : 42.85user 16.55system 
 2:57.13elapsed 33%CPU
 page_size 8192, cache_size 8000 : 43.62user 8.08system 
 2:34.28elapsed 33%CPU
 page_size 8192, cache_size 16000 : 43.11user 2.75system 
 1:48.53elapsed 42%CPU
 page_size 16384, cache_size 1000 : 43.07user 47.92system 
 2:19.82elapsed 65%CPU
 page_size 16384, cache_size 2000 : 42.41user 31.77system 
 1:59.79elapsed 61%CPU
 page_size 16384, cache_size 4000 : 42.38user 18.70system 
 1:47.69elapsed 56%CPU
 page_size 16384, cache_size 8000 : 41.83user 9.06system 
 1:18.35elapsed 64%CPU
 page_size 16384, cache_size 16000 : 41.28user 8.36system 
 1:00.16elapsed 82%CPU
 page_size 32768, cache_size 1000 : 44.19user 52.55system 
 2:03.40elapsed 78%CPU
 page_size 32768, cache_size 2000 : 43.15user 27.36system 
 1:35.95elapsed 73%CPU
 page_size 32768, cache_size 4000 : 43.18user 11.14system 
 1:10.48elapsed 77%CPU
 page_size 32768, cache_size 8000 : 42.91user 10.34system 
 1:04.69elapsed 82%CPU
 page_size 32768, cache_size 16000 : 42.87user 10.28system 
 1:02.35elapsed 85%CPU
 
 The output regrouped by equal sized page_size x cache_size buckets, 
 for 
 equivalent total sqlite cache size per run:
 
 -- ~16M total cache (1/16th of database pages in cache)
 page_size 4096, cache_size 4000 : 40.47user 21.61system 
 4:34.05elapsed 22%CPU
 page_size 8192, cache_size 2000 : 43.01user 26.60system 
 3:04.52elapsed 37%CPU
 page_size 16384, cache_size 1000 : 43.07user 47.92system 
 2:19.82elapsed 65%CPU
 
 -- ~32M total cache (1/8th of database pages in cache)
 page_size 4096, cache_size 8000 : 41.80user 13.37system 
 4:55.15elapsed 18%CPU
 page_size 8192, cache_size 4000 : 42.85user 16.55system 
 2:57.13elapsed 33%CPU
 page_size 16384, cache_size 2000 : 42.41user 31.77system 
 1:59.79elapsed 61%CPU
 page_size 32768, cache_size 1000 : 44.19user 52.55system 
 2:03.40elapsed 78%CPU
 
 -- ~64M total cache (a quarter of database pages in cache)
 page_size 4096, cache_size 16000 : 42.23user 

Re: [sqlite] LINKING AND LOADING FTS - on Linux/Ubuntu

2007-09-16 Thread RaghavendraK 70574
 
env | grep LD_LIBRARY_PATH;
will display the path list. Now check whether your library is present 
in one of those paths.
If not, add the path (where fts2 exists) to LD_LIBRARY_PATH.

If you are new to unix/linux, I suggest going through 
the basics first.

regards

- Original Message -
From: Uma Krishnan [EMAIL PROTECTED]
Date: Sunday, September 16, 2007 10:38 am
Subject: [sqlite] LINKING AND LOADING FTS - on Linux/Ubuntu

 Hello,
 
 I'm having trouble loading fts2. I modified the makefile to create the fts2 
 library on Linux/Ubuntu.
 
 When I attempt to load fts2 using the command select 
 load_extension('fts2'), I get the error "shared library not found".
 (I noticed that it had not created the .so file, only the .la file.)
 
 What am I doing wrong?
 
 Thanks in advance
 
 Uma
 
 Igor Tandetnik [EMAIL PROTECTED] wrote: Kefah T. Issa  wrote:
  I tried the ordered-urls-insert the results were better, but it is
  still
  taking progressively longer time as the number of records 
 increases.
  A fundamental question to be asked here :
 
  Shouldn't the time complexity (Big-O) of the insert operation be
  constant?
 
 Of  course not. It takes O(log N) to find an appropriate place in 
 the 
 index for every new record (where N is the number of records 
 already 
 inserted). Also, it generates a lot of disk activity once the index 
 grows too large to fit in memory cache.
 
  I even did a third test where the integer primary key is not auto
  increment;
  the same problem is observed.
 
 The id is not a problem: O(log N) is caused by the index on url.
 
 Igor Tandetnik 
 
 
 
 -
 To unsubscribe, send email to [EMAIL PROTECTED]
 
 -
 
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Sqlite insertion performance

2007-09-16 Thread RaghavendraK 70574
But with a large cache there is too much fragmentation observed (pgin/pgout)
over a period of time. Do you see this behaviour?

My cache size: 32KB, page size: 8KB (at the limits)

regards
ragha


- Original Message -
From: Joe Wilson [EMAIL PROTECTED]
Date: Monday, September 17, 2007 11:07 am
Subject: Re: [sqlite] Sqlite insertion performance

 --- D. Richard Hipp [EMAIL PROTECTED] wrote:
  
   I have been struggling with the performance of insertion in 
 sqlite. 
   Here we have a very simple case :
  
   A table with an integer autoincrement primary key and a text 
 
   field that is
   unique.
  
   CREATE TABLE my (id PRIMARY KEY, url);
  
   CREATE UNIQUE INDEX myurl ON my(url);
  
  
   My application requires inserting up to 10 million records 
 in  
   batches of
   20 thousand records.
  
  For each group of 20000 records, first insert them into a TEMP 
 table. Call the temp table t1.  Then transfer the records to the 
 main table
  as follows:
  
INSERT OR IGNORE INTO my(url) SELECT url FROM t1 ORDER BY url;
 
 I had no performance improvement with that temp store staging table
 technique in my testing - actually it was slower.
 
  http://www.mail-archive.com/sqlite-users@sqlite.org/msg22143.html
 
 Mind you, the table I was testing against had 4 indexes, whereas 
 the above 
 table has 2. I also wasn't using OR IGNORE. There might be a 
 difference.
 Just setting pragma cache_size to a huge value and inserting into
 the table normally in large batches resulted in better performance 
 in 
 my case. It may have already been mentioned, but having a big 
 database page_size value helps minimize the disk writes as well.
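 
 As an aside, a minimal C sketch (an illustration, not code from this thread) of the batched-transaction insert pattern being discussed, reusing one prepared statement per batch; the table follows the example schema quoted above:
 
     #include <sqlite3.h>
 
     /* Insert a batch of URLs inside one transaction with one statement. */
     static int insert_batch(sqlite3 *db, const char **urls, int n)
     {
         sqlite3_stmt *stmt;
         int i, rc;
 
         rc = sqlite3_prepare_v2(db,
                 "INSERT OR IGNORE INTO my(url) VALUES (?1)", -1, &stmt, 0);
         if (rc != SQLITE_OK)
             return rc;
 
         sqlite3_exec(db, "BEGIN", 0, 0, 0);
         for (i = 0; i < n; i++) {
             sqlite3_bind_text(stmt, 1, urls[i], -1, SQLITE_STATIC);
             sqlite3_step(stmt);
             sqlite3_reset(stmt);
         }
         sqlite3_exec(db, "COMMIT", 0, 0, 0);
 
         sqlite3_finalize(stmt);
         return SQLITE_OK;
     }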
 
 
 
   
 
 Take the Internet to Go: Yahoo!Go puts the Internet in your 
 pocket: mail, news, photos  more. 
 http://mobile.yahoo.com/go?refer=1GNXIC
 
 ---
 --
 To unsubscribe, send email to [EMAIL PROTECTED]
 ---
 --
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Sqlite insertion performance

2007-09-16 Thread RaghavendraK 70574
But there is a limit (in 3.4.0) which stops at 32KB.
Would the compile-time macro do?

regards
ragha


- Original Message -
From: Joe Wilson [EMAIL PROTECTED]
Date: Monday, September 17, 2007 12:11 pm
Subject: Re: [sqlite] Sqlite insertion performance

 --- RaghavendraK 70574 [EMAIL PROTECTED] wrote:
  But with large cache there is too much fragmentation observered 
 (pgin/pgout) over a period of time. Do u see this behaviour?
 
 Fragmentation is not a function of page cache size, but yes,
 I also see this fragmentation if the secondary index exists before 
 the inserts. If you create the index after your inserts, you will 
 have far less (or no) fragmentation.
 
 See the merge-sort point in:
 
  http://www.sqlite.org/cvstrac/wiki?p=ToDo
 
  My CacheSize: 32KB, PgSize: 8KB (to the limits)
 
 Try a page size of 32768 and a value of half your RAM for cache size.
 
 
   
 
 Need a vacation? Get great deals
 to amazing places on Yahoo! Travel.
 http://travel.yahoo.com/
 
 ---
 --
 To unsubscribe, send email to [EMAIL PROTECTED]
 ---
 --
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] multi column select

2007-09-12 Thread RaghavendraK 70574
Hi,

Can anyone explain how the SQL statement below works:

select * from tbl1 where a=xx and b=yy;

when a is indexed and b is indexed separately? I tried EXPLAIN but could not make it out.
Does it select all records with t=111 and then search for b=222 within
that set, or the other way around?

From the trace below I don't see idx1 being used.

sqlite> create table test( t text,b text);
sqlite> create index idx1 on test(t);
sqlite> create index idx2 on test(b);
sqlite> insert into test values ('111','111');
sqlite> insert into test values ('111','222');
sqlite> insert into test values ('111','333');
sqlite> insert into test values ('222','111');
sqlite> insert into test values ('222','222');
sqlite> insert into test values ('222','333');
sqlite> pragma vdbe_trace=ON;
VDBE Execution Trace:
SQL: [pragma vdbe_trace=ON;]
   0 Expire   10
   1 Halt 00
sqlite> explain select * from test where t='111' and b='222';
0|Goto|0|25|
1|Integer|0|0|# test
2|OpenRead|0|2|
3|SetNumColumns|0|2|
4|Integer|0|0|# idx2
5|OpenRead|1|4|keyinfo(1,BINARY)
6|String8|0|0|222
7|IsNull|-1|22|
8|MakeRecord|1|0|a
9|MemStore|0|0|
10|MoveGe|1|22|
11|MemLoad|0|0|
12|IdxGE|1|22|+
13|IdxRowid|1|0|
14|MoveGe|0|0|
15|Column|0|0|# test.t
16|String8|0|0|111
17|Ne|353|21|collseq(BINARY)
18|Column|0|0|# test.t
19|Column|0|1|# test.b
20|Callback|2|0|
21|Next|1|11|
22|Close|0|0|
23|Close|1|0|
24|Halt|0|0|
25|Transaction|0|0|
26|VerifyCookie|0|3|
27|Goto|0|1|
28|Noop|0|0|


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Re: multi column select

2007-09-12 Thread RaghavendraK 70574

Thank you.

- Original Message -
From: Igor Tandetnik [EMAIL PROTECTED]
Date: Wednesday, September 12, 2007 8:10 pm
Subject: [sqlite] Re: multi column select

 RaghavendraK 70574
 [EMAIL PROTECTED] wrote:
  Can anyone explain how does the below sql statmenent work,
 
  select * from tbl1 where a=xx and b=yy;
 
  when a is indexed and b is
  indexed seperately?
 
 The query can only use an index on one of the fields (the 
 optimizer will 
 try to guess which one).
 
  I tried explain could not make out. Does it
  select all records with t=111 and then do a search for b=222 with
  in that set
 
 Either that, or the other way round - use the index to find 
 records with 
 b=222, then scan them linearly to find those with t=111
 
  From the trace below i don;t see idx1 being used.
 
  2|OpenRead|0|2|
 
 This opens the table itself.
 
  5|OpenRead|1|4|keyinfo(1,BINARY)
 
 This opens one of the indexes (look in sqlite_master table to 
 figure out 
 which one). Again, only one index is used. You can also run this 
 query:
 explain query plan select * from test where t='111' and b='222';
 
 The output summarizes which tables and which indexes are used to 
 satisfy 
 the request.
 
 Igor Tandetnik 
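 
 A small C sketch (illustrative, not from the thread) of running the EXPLAIN QUERY PLAN statement Igor mentions and printing each row, which shows which index the optimizer picked:
 
     #include <stdio.h>
     #include <sqlite3.h>
 
     /* Print every column of every row returned by EXPLAIN QUERY PLAN. */
     static void show_plan(sqlite3 *db)
     {
         const char *sql =
             "EXPLAIN QUERY PLAN "
             "SELECT * FROM test WHERE t='111' AND b='222'";
         sqlite3_stmt *stmt;
 
         if (sqlite3_prepare_v2(db, sql, -1, &stmt, 0) != SQLITE_OK)
             return;
         while (sqlite3_step(stmt) == SQLITE_ROW) {
             int i, n = sqlite3_column_count(stmt);
             for (i = 0; i < n; i++)
                 printf("%s%s", (const char *)sqlite3_column_text(stmt, i),
                        i + 1 < n ? " | " : "\n");
         }
         sqlite3_finalize(stmt);
     }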
 
 
 ---
 --
 To unsubscribe, send email to [EMAIL PROTECTED]
 ---
 --
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Re: Re: Merging two tables

2007-09-10 Thread RaghavendraK 70574

Thanks, I will go through it.

regards
ragha

- Original Message -
From: Dennis Cote [EMAIL PROTECTED]
Date: Tuesday, September 11, 2007 4:39 am
Subject: Re: [sqlite] Re: Re: Merging two tables

 RaghavendraK 70574 wrote:
  In sqlite3_create_function, xFunc func pointer signature does not
  allow to have a return value.Any other possible way achieving it
  apart from collation?
 
 

 The result of your custom function is not returned directly by the 
 xFunc 
 function. Your xFunc function must call one of the sqlite3_result_* 
 functions (see 
 http://www.sqlite.org/capi3ref.html#sqlite3_result_blob). 
 For a boolean result you would use sqlite3_result_int and 
 return a 
 value of 1 for true or 0 for false.
 
 For more information see section 2.3 of 
 http://www.sqlite.org/capi3.html 
 and the func.c source file at 
 http://www.sqlite.org/cvstrac/fileview?f=sqlite/src/func.cv=1.174 
 which 
 uses these APIs to implement all the built in SQL functions.
 
 HTH
 Dennis Cote
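 
 Tying Dennis's points together, a minimal C sketch (illustrative) of a two-argument boolean function registered with sqlite3_create_function that reports its result through sqlite3_result_int; the comparison body is a placeholder:
 
     #include <string.h>
     #include <sqlite3.h>
 
     /* xFunc callback: compare two text arguments and report 1 or 0. */
     static void my_comparison(sqlite3_context *ctx, int argc, sqlite3_value **argv)
     {
         const unsigned char *a = sqlite3_value_text(argv[0]);
         const unsigned char *b = sqlite3_value_text(argv[1]);
         int equal = (a && b && strcmp((const char *)a, (const char *)b) == 0);
         (void)argc;
         sqlite3_result_int(ctx, equal);   /* 1 = true, 0 = false */
     }
 
     static int register_my_comparison(sqlite3 *db)
     {
         /* 2 args, UTF-8 text, no user data, scalar function only. */
         return sqlite3_create_function(db, "MyComparison", 2, SQLITE_UTF8,
                                        0, my_comparison, 0, 0);
     }
 
 Once registered, it can be used as Igor suggested earlier in this thread: select * from table where MyComparison(a, 'xxx');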
 
 
 
 -
 To unsubscribe, send email to [EMAIL PROTECTED]
 
 -
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Re: Re: Merging two tables

2007-09-08 Thread RaghavendraK 70574

Hi,

In sqlite3_create_function, the xFunc function pointer signature does not
allow a return value. Is there any other possible way of achieving it,
apart from a collation?

regards
ragha

- Original Message -
From: Igor Tandetnik [EMAIL PROTECTED]
Date: Wednesday, September 5, 2007 8:50 pm
Subject: [sqlite] Re: Re: Merging two tables

 RaghavendraK 70574
 [EMAIL PROTECTED] wrote:
  How to plug custom search algorthim within Sqlite?
  For example,
 
  select * from table where a = 'xxx';
 
  Instead of using normal inbuilt search can it be userDefined
  function?
 
 Yes. You can use sqlite3_create_function to create a function 
 taking two 
 parameters and returning a boolean (actually, an integer with the 
 values 
 0 or 1). Then you can say
 
 select * from table where MyComparison(a, 'xxx');
 
 Igor Tandetnik 
 
 
 ---
 --
 To unsubscribe, send email to [EMAIL PROTECTED]
 ---
 --
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] New Operator Support

2007-09-07 Thread RaghavendraK 70574
Maybe;
first I want to try it with the same implementation as any other operator like '='.
If successful, then try providing some mechanism which makes use of indexes
where they are normally not allowed.

If you see one of my earlier posts, related to pread, I made use of a custom pread
to avoid disk access; now in 3.5 it is a default feature. 

I just want to experiment.

regards
ragha


- Original Message -
From: [EMAIL PROTECTED]
Date: Friday, September 7, 2007 5:56 pm
Subject: Re: [sqlite] New Operator Support

 It still might be useful to explain what the ~ should do.
 
 RBS
 
  Hi,
 
  Its to get involved in the development of sqlite.If possible
  move to contrib section.a dream.
 
  regards
  ragha
 
  
  - Original Message -
  From: [EMAIL PROTECTED]
  Date: Friday, September 7, 2007 5:26 pm
  Subject: Re: [sqlite] New Operator Support
 
  Couldn't tell you as I don't know C, but what
  will the ~ do? Can't the same be done with the
  available operators?
 
  RBS
 
  
   Hi,
  
   Just to get more hands on Sqlite i want to
   write a custom operator. Pls suggest how i can do it.
  
   For example
   select * from tablex where column1 ~ '123';
  
   I want implement it similar to '='. Can anyone help me
   what all steps,files i need to change?
  
   regards
   ragha
  
  
  
  
   -
  
   To unsubscribe, send email to [EMAIL PROTECTED]
   --
 ---
  
  
  
  
 
 
 
 
  
 ---
  --
  To unsubscribe, send email to [EMAIL PROTECTED]
  
 ---
  --
 
 
 
  -
 
  To unsubscribe, send email to [EMAIL PROTECTED]
  -
 
 
 
 
 
 
 
 
 ---
 --
 To unsubscribe, send email to [EMAIL PROTECTED]
 ---
 --
 
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] New Operator Support

2007-09-07 Thread RaghavendraK 70574

It's a newbie exploration, nothing more. For example, if someone wants to 
write a Linux kernel module, there are some fixed guidelines; once a person is 
accustomed to them, he/she has a better understanding. So is there any guideline 
other than
tracing every line of code and finding it out?

regards
ragha

- Original Message -
From: Dwight Ingersoll [EMAIL PROTECTED]
Date: Friday, September 7, 2007 10:48 pm
Subject: Re: [sqlite] New Operator Support

 On 9/7/07, RaghavendraK 70574 [EMAIL PROTECTED] wrote:
 
  Hi,
 
  Its to get involved in the development of sqlite.If possible
  move to contrib section.a dream.
 
 
 If that's the case, I think the first step is to post your 
 proposal, and get
 input from the SQLite community as to whether the modification 
 would be
 useful enough to be included in the base distribution.
 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] New Operator Support

2007-09-07 Thread RaghavendraK 70574
Hi,

It's to get involved in the development of SQLite. If possible,
to move into the contrib section... a dream.

regards
ragha


- Original Message -
From: [EMAIL PROTECTED]
Date: Friday, September 7, 2007 5:26 pm
Subject: Re: [sqlite] New Operator Support

 Couldn't tell you as I don't know C, but what
 will the ~ do? Can't the same be done with the
 available operators?
 
 RBS
 
 
  Hi,
 
  Just to get more hands on Sqlite i want to
  write a custom operator. Pls suggest how i can do it.
 
  For example
  select * from tablex where column1 ~ '123';
 
  I want implement it similar to '='. Can anyone help me
  what all steps,files i need to change?
 
  regards
  ragha
 
 
  



[sqlite] New Operator Support

2007-09-07 Thread RaghavendraK 70574

Hi,

Just to get more hands-on experience with SQLite, I want to
write a custom operator. Please suggest how I can do it.

For example:
select * from tablex where column1 ~ '123';

I want to implement it similarly to '='. Can anyone tell me
which steps to follow and which files I need to change?

regards
ragha





Re: [sqlite] Re: Merging two tables

2007-09-05 Thread RaghavendraK 70574
How to plug a custom search algorithm into SQLite?
For example,

select * from table where a = 'xxx';

Instead of using the normal built-in search, can it be a user-defined function?

regards
ragha






Re: [sqlite] Re: Re: Merging two tables

2007-09-05 Thread RaghavendraK 70574

That's great.
If possible, can you please provide a sample?

Will this function receive the database column values or indexes?

regards
ragha


- Original Message -
From: Igor Tandetnik [EMAIL PROTECTED]
Date: Wednesday, September 5, 2007 6:20 pm
Subject: [sqlite] Re: Re: Merging two tables

 RaghavendraK 70574
 [EMAIL PROTECTED] wrote:
  How to plug custom search algorthim within Sqlite?
  For example,
 
  select * from table where a = 'xxx';
 
  Instead of using normal inbuilt search can it be userDefined
  function?
 
 Yes. You can use sqlite3_create_function to create a function 
 taking two 
 parameters and returning a boolean (actually, an integer with the 
 values 
 0 or 1). Then you can say
 
 select * from table where MyComparison(a, 'xxx');
 
 Igor Tandetnik 
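
A minimal sketch of the sqlite3_create_function() route Igor describes, assuming an
open sqlite3 *db; the name MyComparison and the prefix-match rule it implements here
are only illustrative:

#include <sqlite3.h>
#include <string.h>

/* Returns 1 when the first argument starts with the second, 0 otherwise.
** Any two-argument comparison can be plugged in the same way. */
static void my_comparison(sqlite3_context *ctx, int argc, sqlite3_value **argv){
  const unsigned char *a = sqlite3_value_text(argv[0]);
  const unsigned char *b = sqlite3_value_text(argv[1]);
  (void)argc;
  if( a==0 || b==0 ){
    sqlite3_result_int(ctx, 0);
    return;
  }
  sqlite3_result_int(ctx,
      strncmp((const char*)a, (const char*)b, strlen((const char*)b))==0);
}

/* Registration, typically right after sqlite3_open(): */
int register_my_comparison(sqlite3 *db){
  return sqlite3_create_function(db, "MyComparison", 2, SQLITE_UTF8, 0,
                                 my_comparison, 0, 0);
}

A query such as "select * from tablex where MyComparison(column1, '123');" then gives
the effect of the custom '~' operator asked about, without touching the parser.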
 
 
 



Fwd: Re: [sqlite] Difference between OR and IN [http://www.sqlite.org/php2004/page-052.html]

2007-09-02 Thread RaghavendraK 70574

Hi,

Ok.

Is there any way to tell the VDBE to stop execution the moment it gets a record
from the IN list, rather than continuing to query for all the parameters?

I mean, can it work like the C if clause:
a = 1;
b = 100;
if( a == 1 || b == 10 )
{
}

In the above case a is evaluated but not b. Is this possible in SQL or SQLite?
Please suggest.

regards
ragha


- Original Message -
From: Trevor Talbot [EMAIL PROTECTED]
Date: Sunday, September 2, 2007 1:03 am
Subject: Re: [sqlite] Difference between OR and IN 
[http://www.sqlite.org/php2004/page-052.html]

 On 9/1/07, RaghavendraK 70574 [EMAIL PROTECTED] wrote:
 
  I want to know why
  prepareStatement: select * from xxx where IN (?);
  stmt.bind(abc,xyz,123); is not supported for multiple
  values.
 
 It's not supported because it doesn't make sense.  The parametric
 binding mechanism is for single values; it's not a macro-like text
 replacement system.  With your syntax, how do I bind a set of
 integers?  Strings?  Blobs?
 
 One common use for parametric binding (besides convenience) is to
 avoid SQL injection attacks.  The example you posted doesn't do that;
 you have to manually escape each individual value to make sure it's
 valid syntax for the IN() group in text form.  Why even use parameters
 in that case?  It's the same amount of work whether you build the
 entire SQL statement or not.
 
 All common databases I'm aware of work exactly the same way.
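
One common workaround, sketched here under the assumption of an open sqlite3 *db
(the table name xxx, the column name col and the values are illustrative): build the
SQL with one '?' per value, then bind each value separately so the values never appear
in the SQL text.

const char *vals[] = { "abc", "xyz", "123" };
int nVal = 3, i;

/* Builds "select * from xxx where col IN (?,?,?)" for nVal values.
** %z in sqlite3_mprintf() consumes and frees the previous string. */
char *sql = sqlite3_mprintf("select * from xxx where col IN (");
for(i=0; i<nVal; i++){
  sql = sqlite3_mprintf("%z%s?", sql, i ? "," : "");
}
sql = sqlite3_mprintf("%z);", sql);

sqlite3_stmt *stmt = 0;
if( sqlite3_prepare_v2(db, sql, -1, &stmt, 0)==SQLITE_OK ){
  for(i=0; i<nVal; i++){
    sqlite3_bind_text(stmt, i+1, vals[i], -1, SQLITE_TRANSIENT);
  }
  while( sqlite3_step(stmt)==SQLITE_ROW ){
    /* consume rows */
  }
  sqlite3_finalize(stmt);
}
sqlite3_free(sql);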
 

Re: [sqlite] Difference between OR and IN [http://www.sqlite.org/php2004/page-052.html]

2007-09-02 Thread RaghavendraK 70574
Hi,

create table test (column text);
create index idx on test(column); [IN uses the index]

insert into test values ('9');
insert into test values ('98');
insert into test values ('985');

My query: see if you have any record that matches a prefix of
98451234.

select * from test where column in 
('98451234','9845123','984512','98451',
 '9845','984','98','9','-1') limit 1;

Even when limit 1 is provided it continues to search.

Please suggest a better way.

regards
ragha

- Original Message -
From: Christian Smith [EMAIL PROTECTED]
Date: Sunday, September 2, 2007 8:15 pm
Subject: Re: [sqlite] Difference between OR and IN 
[http://www.sqlite.org/php2004/page-052.html]

 Once you get your first row back (corresponding to a==1), simply halt 
 there and sqlite3_finalize() or sqlite3_reset() the statement. You control 
 the execution and how many rows you want back.
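
A small sketch of what Christian describes, reusing the abc example from elsewhere in
this thread and assuming an open sqlite3 *db (error handling trimmed):

sqlite3_stmt *stmt = 0;
sqlite3_prepare_v2(db,
    "SELECT * FROM abc WHERE a IN ('hello','world');", -1, &stmt, 0);
if( sqlite3_step(stmt)==SQLITE_ROW ){
  /* first matching row: use it and simply stop stepping */
  printf("a = %s\n", (const char*)sqlite3_column_text(stmt, 0));
}
sqlite3_finalize(stmt);   /* or sqlite3_reset(stmt) to run the query again later */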
 
 
 RaghavendraK 70574 uttered:
 
  Hi,
 
  Ok.
 
  Is there any way to tell the VDBE to stop execution moment it 
 gets a record
  from the IN list rather than continue to query for all the 
 parameters?
  I mean can it work like the C if clause
  a = 1;
  b = 100;
  if( a == 1 or b == 10)
  {
  }
 
  in the above case a is evauated but not b. Is this possible in 
 SQL or SQLite?
  Pls suggest.
 
  regards
  ragha
 
 
  
  - Original Message -
  From: Trevor Talbot [EMAIL PROTECTED]
  Date: Sunday, September 2, 2007 1:03 am
  Subject: Re: [sqlite] Difference between OR and IN 
 [http://www.sqlite.org/php2004/page-052.html]
 
  On 9/1/07, RaghavendraK 70574 [EMAIL PROTECTED] wrote:
 
  I want to know why
  prepareStatement: select * from xxx where IN (?);
  stmt.bind(abc,xyz,123); is not supported for multiple
  values.
 
  It's not supported because it doesn't make sense.  The parametric
  binding mechanism is for single values; it's not a macro-like text
  replacement system.  With your syntax, how do I bind a set of
  integers?  Strings?  Blobs?
 
  One common use for parametric binding (besides convenience) is to
  avoid SQL injection attacks.  The example you posted doesn't do 
 that; you have to manually escape each individual value to make 
 sure it's
  valid syntax for the IN() group in text form.  Why even use 
 parameters in that case?  It's the same amount of work whether 
 you build the
  entire SQL statement or not.
 
  All common databases I'm aware of work exactly the same way.
 
 
 
 --
 /\
 \ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
  X   - AGAINST MS ATTACHMENTS
 / \
 
 



Re: [sqlite] Difference between OR and IN [http://www.sqlite.org/php2004/page-052.html]

2007-09-01 Thread RaghavendraK 70574
Thx.

I want to know why 
prepareStatement: select * from xxx where IN (?);
stmt.bind(abc,xyz,123); is not supported for multiple
values.

Instead, SQLite expects the user to declare the statement variables beforehand;
is this due to a limitation of the virtual machine implementation?

regards
ragha



- Original Message -
From: Dan Kennedy [EMAIL PROTECTED]
Date: Saturday, September 1, 2007 11:59 am
Subject: Re: [sqlite] Difference between OR and IN 
[http://www.sqlite.org/php2004/page-052.html]

 On Sat, 2007-09-01 at 11:58 +0800, RaghavendraK 70574 wrote:
  Hi,
  
  In one of the slides, http://www.sqlite.org/php2004/page-052.html,
  it is stated that OR will make the DB engine not use indexes and IN will make
  the DB engine use indexes.
  
  I could not understand the rationale abt this. Can u explain?
 
 The point is (was) that if you have a table like this:
 
  CREATE TABLE abc(a PRIMARY KEY, b, c);
 
 Then the first query uses the index on a, the second does (did) not:
 
  SELECT * FROM abc WHERE a IN ('hello', 'world');
  SELECT * FROM abc WHERE a = 'hello' OR a = 'world';
 
 However, since those slides were written SQLite has been updated 
 so that
 queries of the second form are internally transformed to the 
 first, and
 therefore may use an index.
 
 Dan.
 
  
  regards
  ragha
  
  
  



[sqlite] Merge different Sqlite DB's

2007-09-01 Thread RaghavendraK 70574
Hi,

If I have 2 SQLite files, is there a way to merge
the files without corruption, assuming the schema
of both files is the same?
I don't want to use SQL statements; rather, I want to do it
using file APIs.

Similar to diffdb.c.
Please help.

regards
ragha






Re: [sqlite] Difference between OR and IN [http://www.sqlite.org/php2004/page-052.html]

2007-09-01 Thread RaghavendraK 70574
Hi,

Ok.

Is there any way to tell the VDBE to stop execution the moment it gets a record
from the IN list, rather than continuing to query for all the parameters?

I mean, can it work like the C if clause:
a = 1;
b = 100;
if( a == 1 || b == 10 )
{
}

In the above case a is evaluated but not b. Is this possible in SQL or SQLite?
Please suggest.

regards
ragha



- Original Message -
From: Trevor Talbot [EMAIL PROTECTED]
Date: Sunday, September 2, 2007 1:03 am
Subject: Re: [sqlite] Difference between OR and IN 
[http://www.sqlite.org/php2004/page-052.html]

 On 9/1/07, RaghavendraK 70574 [EMAIL PROTECTED] wrote:
 
  I want to know why
  prepareStatement: select * from xxx where IN (?);
  stmt.bind(abc,xyz,123); is not supported for multiple
  values.
 
 It's not supported because it doesn't make sense.  The parametric
 binding mechanism is for single values; it's not a macro-like text
 replacement system.  With your syntax, how do I bind a set of
 integers?  Strings?  Blobs?
 
 One common use for parametric binding (besides convenience) is to
 avoid SQL injection attacks.  The example you posted doesn't do that;
 you have to manually escape each individual value to make sure it's
 valid syntax for the IN() group in text form.  Why even use parameters
 in that case?  It's the same amount of work whether you build the
 entire SQL statement or not.
 
 All common databases I'm aware of work exactly the same way.
 



[sqlite] Difference between OR and IN [http://www.sqlite.org/php2004/page-052.html]

2007-08-31 Thread RaghavendraK 70574
Hi,

In one of the slides, http://www.sqlite.org/php2004/page-052.html,
it is stated that OR will make the DB engine not use indexes and IN will
make the DB engine use indexes.

I could not understand the rationale behind this. Can you explain?

regards
ragha





Re: [sqlite] Table locked - why?

2007-08-30 Thread RaghavendraK 70574

Please check whether you have an open sqlite3 terminal; sometimes this can also cause a
problem, e.g. a transaction that was begun and then just kept open.

regards
ragha

- Original Message -
From: Yves Goergen [EMAIL PROTECTED]
Date: Friday, August 31, 2007 3:41 am
Subject: Re: [sqlite] Table locked - why?

 On 30.08.2007 23:00 CE(S)T, [EMAIL PROTECTED] wrote:
  Probably another thread or process is reading from the database
  and does not want you to drop the table out from under it.
 
 Surely not. This database is there for testing and development purposes 
 only and my application is the only one that opens it. It doesn't use 
 multiple threads, so there also can't be another thread locking it.
 
 -- 
 Yves Goergen LonelyPixel [EMAIL PROTECTED]
 Visit my web laboratory at http://beta.unclassified.de
 
 



[sqlite] Sqlite Prepare Statement

2007-08-30 Thread RaghavendraK 70574
Hi,

Do we have a tool (similar to the sqlite3 shell) with which prepared statements can be
tested as easily as normal SQL?
Most of the time we use prepared statements, so we need a RAD tool
to verify them. This would be useful to the SQLite group; share your opinion.

Or, if anyone has an idea of how to do it, let me know; I will make the changes
and submit them to the group owner.

regards
ragha

Dan, source code is attached. I didn't write this; someone else from the forum 
did. Their name is not on it, nor is it copyrighted. I thought it was a clean way to 
test threading.

Interestingly, if you remove the shared cache everything runs to completion.



Dan Kennedy [EMAIL PROTECTED] wrote: 
Hi Ken,

Probably a bug in the new threading stuff. Can you share
source code for this test or is it part of some large app?

Either way, thanks for the report.

Dan.

On Wed, 2007-08-29 at 22:15 -0700, Ken wrote:
 Also errors out here, sporadically.
 int sqlite3OsWrite(sqlite3_file *id, const void *pBuf, int amt, i64 offset){
   return id->pMethods->xWrite(id, pBuf, amt, offset);
 }
 
 Program received signal SIGSEGV, Segmentation fault.
 [Switching to Thread 1075841376 (LWP 15747)]
 0x0040c413 in sqlite3OsWrite (id=0x55aaa0, pBuf=0x401ffc30, amt=24, 
 offset=0) at os.c:38
 (gdb) Quit
 (gdb) 
 
 Ken  wrote: 4 threads, shared_Cache enabled
 LOOP 100 
 BEGIN
  LOOP 50 times
   INSERT
  end LOOP
  COMMIT
 
   SELECT COUNT(*) ...
 end LOOP
 
 
 program received signal SIGSEGV, Segmentation fault.
 [Switching to Thread 1080043872 (LWP 15448)]
 moveToChild (pCur=0x569058, newPgno=) at btree.c:3304
 (gdb) 
 
 
 if( rc ) return rc;
   pNewPage->idxParent = pCur->idx;
   pOldPage = pCur->pPage;
   pOldPage->idxShift = 0;   <-- Error Here
   releasePage(pOldPage);
   pCur->pPage = pNewPage;
   pCur->idx = 0;
   pCur->info.nSize = 0;
 
 
 Ken
 
 
 
 
 
 



Re: [sqlite] Performance tuning for Insert and select operations

2007-08-30 Thread RaghavendraK 70574

Convert your SQL to prepared statements.
Run the SQLite analyzer and see which columns should be indexed.

regards
ragha
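
For the insert half of the question quoted below, a rough sketch of what prepared
statements plus a single transaction look like in C, assuming an open sqlite3 *db;
the table t(a, b) is made up for illustration:

sqlite3_exec(db, "BEGIN;", 0, 0, 0);

sqlite3_stmt *ins = 0;
sqlite3_prepare_v2(db, "insert into t(a, b) values(?, ?);", -1, &ins, 0);
for(int i = 0; i < 50000; i++){
  sqlite3_bind_int(ins, 1, i);
  sqlite3_bind_text(ins, 2, "some value", -1, SQLITE_STATIC);
  sqlite3_step(ins);
  sqlite3_reset(ins);        /* reuse the compiled statement; no re-parsing per row */
}
sqlite3_finalize(ins);

sqlite3_exec(db, "COMMIT;", 0, 0, 0);

Compiling the INSERT once and committing once is usually what makes the difference
for bulk loads; choosing the right indexes is the other half.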

- Original Message -
From: venkata ramana [EMAIL PROTECTED]
Date: Friday, August 31, 2007 9:34 am
Subject: [sqlite] Performance tuning for Insert and select operations

 Hi,
 
 I am using SQLite 3_3_17with the default configuration of SQLite.
 Using Windows XP C++ API.
 50,000 inserts into a table (with primary key and no other index, as a 
 single transaction commit) is taking 4.609000 sec.
 50,000 selects from Table ( having 1,00,000 rec with primary key 
 and no
 other index) , query using primary key is taking 8.751000 sec.
 I have to achieve insertion bench mark of nearly 50,000 insertions 
 in 1.5 to
 2 sec.
 I have to achieve selection bench mark of nearly 50,000 selections 
 in 1.5 to
 2 sec.
 Please tell me if it is possible to tune Sqlite to achieve this 
 performance.
 Regards,
 Ramana
 
 
 
 
 
 



[sqlite] BestMatch and SqliteStatment Clash

2007-08-29 Thread RaghavendraK 70574
Hi,

create table test (t text);

insert into test values ('9');
insert into test values ('98');
insert into test values ('986');
insert into test values ('9867');

select * from test where '98555'  like t || '%' order by t desc limit 1;

When we try to compile the above SQL as a statement we get success, but
when we bind it we get the error SQLITE_RANGE.
After inspection we find
sParse.nVar = 0 [which represents the number of '?' parameters]

Can you please help to correct this error?


regards
ragha



- Original Message -
From: Daniel Önnerby [EMAIL PROTECTED]
Date: Wednesday, August 29, 2007 4:20 pm
Subject: Re: [sqlite] Towards SQLite version 3.5.0

 Hi!
 
 The new multithread-features will be great.
 Do you think that it will be better to share one connection between all 
 threads in an application, or is it better to have each thread open a new 
 connection and use sqlite3_enable_shared_cache?
 
 Best regards
 Daniel
 
 [EMAIL PROTECTED] wrote:
  The transition from 3.4.2 to 3.5.0 will perhaps be the
  largest single change to SQLite since 2.8-3.0.  There 
  will not be that many visible changes, but a lot is 
  changing behind the scenes.  Some less frequently used
  interfaces will be changing in slightly incompatible
  ways.  Users who have built customized OS interface layers
  or backends for SQLite will find that they are going to
  need to do some rework.
 
  SQLite version 3.5.0 is not close to being ready yet.
  But it is to the point where the source code will
  compile and pass many tests.  And so I would like to
  take this opportunity to encourage people in the 
  community to download the CVS HEAD and give it
  a whirl in their applications.  Please let me know
  about any serious issues you run across.
 
  I have *started* to prepare documentation describing
  the changes in 3.5.0.  This is draft documentation.
  But for those who are interested, please visit
 
 http://www.sqlite.org/34to35.html
 http://www.sqlite.org/capi350ref.html
 
  In particular, if your application uses a customized
  OS interface for SQLite, you should read the 34to35.html
  document to see exactly what will be involved in porting
  your application to run with version 3.5.0.
 
  The SQLite code currently in CVS HEAD is not ready for
  production use.  We know that.  We know what many of the
  problems are and Dan and I are working long hours to fix
  them.  It's the problems that we *do not* know about that
  are scary.  So that is why I am inviting the larger
  community to have an early look and perhaps bring our
  attention to issues sooner rather than later.
 
  --
  D. Richard Hipp [EMAIL PROTECTED]
 
 



Re: [sqlite] BestMatch and SqliteStatment Clash

2007-08-29 Thread RaghavendraK 70574
Thanks. I have modified it to ?, but
SQLite fails to get records for the below query. When debugging, it returns
SQLITE_DONE. Please help.

select * from 'tbl.7' where ? like column1 || '%' order by column1 desc limit 1;

Data is as below:
Version: 3.4.0
To re-confirm that the problem is in SQLite and not in my code,
I tried using the sqlite3 command-line tool and found the same problem.

create table 'tbl.7'(ver integer,
 column1 text not NULL,
 column2 text not NULL,
 column3 text not NULL,
 column4 text not NULL,
 column5 text not NULL,
 column6 text not NULL,
 column7 text not NULL,
 column8 text not NULL,
 column9 text not NULL,
 column10 text not NULL,
 primary key(ver,column1,column2,column3,column4,column5));


insert into 'tbl.7'
values
(7,
'9845002655',
'9845002655',
'9845002655',
'9845002655',
'9845002655',
'CO',
'CO',
'CO',
'CO',
'CO');

insert into 'tbl.7'
values
(7,
'9854002656',
'9845002655',
'9845002655',
'9845002655',
'9845002655',
'CO',
'CO',
'CO',
'CO',
'CO');

--Best match for 985
select * from 'tbl.7' where '985' like column1 || '%' order by column1 desc 
limit 1;


regards
ragha


- Original Message -
From: Dan Kennedy [EMAIL PROTECTED]
Date: Wednesday, August 29, 2007 7:07 pm
Subject: Re: [sqlite] BestMatch and SqliteStatment Clash

 On Wed, 2007-08-29 at 18:37 +0800, RaghavendraK 70574 wrote:
  Hi,
  
  Am using sqlite 3.4.0
  
  stmt= sqlite_prepareV2(select * from test where '?'  like t || 
 '%' order by t desc);
 
 You need to remove the ' quotes around the question mark.
 At the moment the expression is a literal string value, 
 not an sql variable. 
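
Dan's point in code, using the test(t) table from earlier in the thread and assuming an
open sqlite3 *db (error handling trimmed); the unquoted ? is parameter number 1:

sqlite3_stmt *stmt = 0;
const char *sql =
    "select * from test where ? like t || '%' order by t desc limit 1;";
if( sqlite3_prepare_v2(db, sql, -1, &stmt, 0)==SQLITE_OK ){
  sqlite3_bind_text(stmt, 1, "982", -1, SQLITE_TRANSIENT);
  while( sqlite3_step(stmt)==SQLITE_ROW ){
    printf("%s\n", (const char*)sqlite3_column_text(stmt, 0));
  }
  sqlite3_finalize(stmt);
}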
 
 
 
 



Re: [sqlite] BestMatch and SqliteStatment Clash

2007-08-29 Thread RaghavendraK 70574
Hi,

Best match is 9854002656 among the 2 records.

Please try this simple one; it will make it clear:

create table test(t text);

insert into test values ('9');
insert into test values('98');
insert into test values('983');
insert into test values('9854');

select * from test where '982' like t || '%' order by t desc limit 1;

The above SQL tries to model a DST (digit search tree).
Expected output: 98

This works, but it will not work for the earlier data (9854002656), so it seems to be a bug.

regards
ragha

- Original Message -
From: John Machin [EMAIL PROTECTED]
Date: Wednesday, August 29, 2007 6:22 pm
Subject: Re: [sqlite] BestMatch and SqliteStatment Clash

 On 29/08/2007 10:37 PM, RaghavendraK 70574 wrote:
  Thx. I have modifed it to ?, but 
  Sqlite fails to get records for the below query. When debug it 
 retuns 
  SQLITE_DONE. Pls help.
  
  select * from 'tbl.7' where ? like column1 || '%' order by 
 column1 desc limit 1;
  
  Data is as below:
  Version: 3.4.0
  Re-confirm the problem in sqlite and not in my code,
  I tried using sqlite3 commandLine tool and found the same problem.
  
  create table 'tbl.7'(ver integer,
   column1 text not NULL,
   column2 text not NULL,
   column3 text not NULL,
   column4 text not NULL,
   column5 text not NULL,
column6 text not NULL,
   column7 text not NULL,
   column8 text not NULL,
   column9 text not NULL,
   column10 text not NULL,
   primary 
 key(ver,column1,column2,column3,column4,column5)); 
  
  insert into 'tbl.7'
  values
  (7,
  '9845002655',
  '9845002655',
  '9845002655',
  '9845002655',
  '9845002655',
  
 'CO',
  
 'CO',
  
 'CO',
  
 'CO',
  
 'CO');
  
  insert into 'tbl.7'
  values
  (7,
  '9854002656',
  '9845002655',
  '9845002655',
  '9845002655',
  '9845002655',
  
 'CO',
  
 'CO',
  
 'CO',
  
 'CO',
  
 'CO');
  
  --Best match for 985
  select * from 'tbl.7' where '985' like column1 || '%' order by 
 column1 desc limit 1;
 
 Can you leave out the incredibly annoying 'COL...' stuff? It's 
 nothing to do with your problem!
 
 I don't see how *ANY* of your rows will match. Which rows do you 
 expect 
 to match? Why?
 
 If column1 was '98' (for example), then you would have
 '985' like '98' || '%'
 which is
 '985' like '98%'
 which is true.
 
 
 



Re: [sqlite] BestMatch and SqliteStatment Clash

2007-08-29 Thread RaghavendraK 70574
Hi,

There are 2 test cases; one works and the other fails.
Hope I am clear.

SqliteVersion: 3.4.0

TestCase 1: works

create table test(t text);

insert into test values ('9');
insert into test values('98');
insert into test values('983');
insert into test values('9854');

select * from test where '982' like t || '%' order by t desc limit 1;

output: 98 [correct]

TestCase 2: does not work
create table 'tbl.7'(ver integer,
  column1 text not NULL,
  column2 text not NULL,
  column3 text not NULL,
  column4 text not NULL,
  column5 text not NULL,
   column6 text not NULL,
  column7 text not NULL,
  column8 text not NULL,
  column9 text not NULL,
  column10 text not NULL,
  primary key(ver,column1,column2,column3,column4,column5));

 insert into 'tbl.7'
 values
 (7, '9845002655', '1', '1', '1', '1','x','x','x',
  'x','x');

 insert into 'tbl.7'
 values
 (7, '9855002655', '1', '1', '1', '1','x','x','x',
  'x','x');

--Best match for 985 should be 9855002655
select * from 'tbl.7' where '985' like column1 || '%' order by column1 desc 
limit 1;

output: none //this is a bug.


- Original Message -
From: Simon Davies [EMAIL PROTECTED]
Date: Wednesday, August 29, 2007 7:39 pm
Subject: Re: [sqlite] BestMatch and SqliteStatment Clash

 On 29/08/2007, RaghavendraK 70574 [EMAIL PROTECTED] wrote:
 
  select * from test where '982' like t || '%' order by t desc 
 limit 1;
 
  This works but will not work for earlier data(9854002656).So 
 seems to be bug.
 
  regards
  ragha
 
 '9854002656%' is not a match for '982', so seems not to be a bug
 
 Rgds,
 Simon
 
 



Re: [sqlite] Re: BestMatch and SqliteStatment Clash

2007-08-29 Thread RaghavendraK 70574

Hi,

Please see my last post; I hope it is clear.

regards
ragha

- Original Message -
From: Igor Tandetnik [EMAIL PROTECTED]
Date: Wednesday, August 29, 2007 7:57 pm
Subject: [sqlite] Re: BestMatch and SqliteStatment Clash

 RaghavendraK 70574
 [EMAIL PROTECTED] wrote:
  Best match is 9854002656 among the 2 records.
 
  Pls try this simple one.It will make it clear,
 
  create table test(t text);
 
  insert into test values ('9');
  insert into test values('98');
  insert into test values('983');
  insert into test values('9854');
 
  select * from test where '982' like t || '%' order by t desc 
 limit 1;
 
  above sql tries to model a DST(digit search tree).
  Expected output: 98
 
  This works but will not work for earlier data(9854002656).So 
 seems to
  be bug.
 
 The expression ('982' like '98%') evaluates to true. The expression 
 ('982' like '9854002656%') evaluates to false. LIKE operator 
 behaves 
 correctly in both cases. There is no bug, just a case of 
 unrealistic 
 expectations and/or wishful thinking on your part.
 
 It's not clear what your definition of a best match is, but it's 
 obvious that the test you put into the SELECT statement is not it. 
 Computers have this nasty annoying habit of doing what you tell 
 them to 
 do, not what you want them to do. If you describe the metric you 
 want to 
 use to determine the best match, perhaps someone would help you 
 design 
 a statement that would implement it.
 
 Igor Tandetnik 
 
 
 



Re: [sqlite] Re: BestMatch and SqliteStatment Clash

2007-08-29 Thread RaghavendraK 70574

Hi,

What you and Igor say is correct. Thanks.
Actually I was trying to model this:
http://algo.inria.fr/flajolet/Publications/Flajolet06.pdf

regards
ragha

- Original Message -
From: Simon Davies [EMAIL PROTECTED]
Date: Wednesday, August 29, 2007 8:18 pm
Subject: Re: [sqlite] Re: BestMatch and SqliteStatment Clash

 ragha,
 
 you want something to give you a 'best match'. The 'like' operator in
 the way you are using it does not do that, but it IS working as it
 should.
 
 I am not sure how to make it any clearer.
 
 Rgds,
 Simon
 
 On 29/08/2007, RaghavendraK 70574 [EMAIL PROTECTED] wrote:
 
  Hi,
 
  Pls see my last post, hope it is clear.
 
  regards
  ragha
  
  - Original Message -
  From: Igor Tandetnik [EMAIL PROTECTED]
  Date: Wednesday, August 29, 2007 7:57 pm
  Subject: [sqlite] Re: BestMatch and SqliteStatment Clash
 
   RaghavendraK 70574
   [EMAIL PROTECTED] wrote:
Best match is 9854002656 among the 2 records.
   
Pls try this simple one.It will make it clear,
   
create table test(t text);
   
insert into test values ('9');
insert into test values('98');
insert into test values('983');
insert into test values('9854');
   
select * from test where '982' like t || '%' order by t desc
   limit 1;
   
above sql tries to model a DST(digit search tree).
Expected output: 98
   
This works but will not work for earlier data(9854002656).So
   seems to
be bug.
  
   The expression ('982' like '98%') evaluates to true. The 
 expression  ('982' like '9854002656%') evaluates to false. LIKE 
 operator  behaves
   correctly in both cases. There is no bug, just a case of
   unrealistic
   expectations and/or wishful thinking on your part.
  
   It's not clear what your definition of a best match is, but it's
   obvious that the test you put into the SELECT statement is not it.
   Computers have this nasty annoying habit of doing what you tell
   them to
   do, not what you want them to do. If you describe the metric you
   want to
   use to determine the best match, perhaps someone would help you
   design
   a statement that would implement it.
  
   Igor Tandetnik
  
  
   
 



Re: [sqlite] BestMatch and SqliteStatment Clash

2007-08-29 Thread RaghavendraK 70574

Yes, I understand it is not a problem; my mistake.

One other thing: how does reversing the value and the column work?
I mean the internals of it. Does it create any temp table ...

And I see the SQLite documentation of LIKE, which says that if the first character
is not a wildcard then an index will be helpful.

create table test(t text);

insert into test values ('9');
insert into test values('98');
insert into test values('983');
insert into test values('9854');

select * from test where '982' like t || '%' order by t desc limit 1;

output: 98 [correct]


regards
ragha

- Original Message -
From: Dennis Cote [EMAIL PROTECTED]
Date: Wednesday, August 29, 2007 8:45 pm
Subject: Re: [sqlite] BestMatch and SqliteStatment Clash

 RaghavendraK 70574 wrote:
  Hi,
 
  There are 2 testcases one works and other fails
  Hope am clear.
 
  SqliteVersion: 3.4.0
 
  TestCase 1: works
 
  create table test(t text);
 
  insert into test values ('9');
  insert into test values('98');
  insert into test values('983');
  insert into test values('9854');
 
  select * from test where '982' like t || '%' order by t desc 
 limit 1;
 
  output: 98 [correct]
 
  TestCase 2: does not work
  create table 'tbl.7'(ver integer,
column1 text not NULL,
column2 text not NULL,
column3 text not NULL,
column4 text not NULL,
column5 text not NULL,
 column6 text not NULL,
column7 text not NULL,
column8 text not NULL,
column9 text not NULL,
column10 text not NULL,
primary 
 key(ver,column1,column2,column3,column4,column5));
   insert into 'tbl.7'
   values
   (7, '9845002655', '1', '1', '1', '1','x','x','x',
'x','x');
 
   insert into 'tbl.7'
   values
   (7, '9855002655', '1', '1', '1', '1','x','x','x',
'x','x');
 
  --Best match for 985 shd be 9855002655
  select * from 'tbl.7' where '985' like column1 || '%' order by 
 column1 desc limit 1;
 
  output: none //this is a bug.
 
 
  '9854002656%' is not a match for '982', so seems not to be a bug
 
 
 
 As has been pointed out several times already your expectations are 
 wrong.
 You need additional code to implement your best match criterion. 
 What 
 you might want to match is the substring of the column up to the 
 length 
 of the target string, not the entire column.
 
   select * from tab 
   where :target like substr(column1,1,length(:target)) || '%' 
   order by column1 desc limit 1;
 
 I'm not sure why you think the first such match is the best match, 
 but 
 that is another issue for you to look at.
 
 Another point, you should be using double quotes around your table 
 name, 
 not single quotes.
 
 HTH
 Dennis Cote
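
A small sketch of how the :target parameter in Dennis's query could be bound from C,
assuming an open sqlite3 *db and the tbl.7 schema from this thread (error handling
trimmed); the same named parameter may appear twice in the SQL but is bound once:

sqlite3_stmt *stmt = 0;
const char *sql =
    "select * from \"tbl.7\" "
    "where :target like substr(column1,1,length(:target)) || '%' "
    "order by column1 desc limit 1;";
if( sqlite3_prepare_v2(db, sql, -1, &stmt, 0)==SQLITE_OK ){
  int idx = sqlite3_bind_parameter_index(stmt, ":target");
  sqlite3_bind_text(stmt, idx, "985", -1, SQLITE_TRANSIENT);
  if( sqlite3_step(stmt)==SQLITE_ROW ){
    printf("best match: %s\n", (const char*)sqlite3_column_text(stmt, 1));
  }
  sqlite3_finalize(stmt);
}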
 
 



Re: [sqlite] In memory database question

2007-08-28 Thread RaghavendraK 70574

Hi,

I did try something crazy like this (it worked, but for a read-only DB only).
I changed pread to a preadCustom API. In preadCustom I maintained a static
fd array; if the fd is not listed, I mmap the whole file and then use memcpy
to return the data. There was a significant performance gain, as there was no
context switch.
You can overload all the OS wrappers with your custom implementation and then
share it with multiple threads.

I had to do this because even with the file mounted on /dev/shm there was no
significant performance gain.

Just a possible direction,

regards
ragha
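
A rough sketch of the idea described above, for a read-only file only; the names
map_file and pread_custom are hypothetical and this is only the direction, not
SQLite's own code:

#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Map the whole (read-only) file once, then serve reads with memcpy
** instead of one pread() system call per access. */
static void  *g_map  = 0;
static size_t g_size = 0;

static int map_file(const char *path){
  struct stat st;
  int fd = open(path, O_RDONLY);
  if( fd<0 ) return -1;
  if( fstat(fd, &st)<0 ){ close(fd); return -1; }
  g_size = (size_t)st.st_size;
  g_map  = mmap(0, g_size, PROT_READ, MAP_SHARED, fd, 0);
  close(fd);                        /* the mapping stays valid after close */
  return g_map==MAP_FAILED ? -1 : 0;
}

static ssize_t pread_custom(void *buf, size_t n, off_t off){
  if( g_map==0 || (size_t)off>=g_size ) return 0;
  if( (size_t)off+n>g_size ) n = g_size-(size_t)off;
  memcpy(buf, (const char*)g_map+off, n);
  return (ssize_t)n;
}

As noted above this only works for a read-only database; writes and locking still
need the normal OS layer.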


- Original Message -
From: Virgilio Alexandre Fornazin [EMAIL PROTECTED]
Date: Tuesday, August 28, 2007 8:42 pm
Subject: [sqlite] In memory database question

 Hi
 
 
 
 Is it possible to share a sqlite3 handle to a memory database across all
 threads of an application?
 
 Or is there a way to "duplicate" the handle (sqlite_open() or 
 something like that)?
 
 
 
 
 
 





Re: [sqlite] Re: like operator

2007-08-26 Thread RaghavendraK 70574

Hi,

Will SQLite use indexes when
using the LIKE operator
as below?

select * from tbl where '9845' like column || '%'
order by des limit 1;

regards
ragha

- Original Message -
From: Igor Tandetnik [EMAIL PROTECTED]
Date: Friday, August 17, 2007 7:25 am
Subject: [sqlite] Re: like operator

 RaghavendraK 70574
 [EMAIL PROTECTED] wrote:
  we have given a web interface which receive delete request.
  Now in the req we get % and in the delete impl we do this
  delete from table where itemName like xxx.%;
 
  since the key is % the above statement becomes,
  delete from table where itemName like %.%;And result in fatal
  problem of erasing all records.
 
 Try
 
 delete from table
 where itemName >= 'xxx.' and itemName < 'xxx/';
 
 (a slash '/' character happens to come after period '.' in ASCII). Or
 
 delete from table
 where substr(itemName, 1, length('xxx.')) = 'xxx.';
 
 The first query would run much faster than the second if you have 
 an 
 index on itemName.
 
 Igor Tandetnik 
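
A hedged sketch of Igor's first (index-friendly) form with bound parameters, so that a
stray '%' in the input can never widen the delete; the table name items and the prefix
value are illustrative, and an open sqlite3 *db plus <string.h> are assumed:

const char *prefix = "xxx.";               /* illustrative, non-empty prefix */
size_t n = strlen(prefix);                 /* assumed shorter than the buffer */
char upper[64];
memcpy(upper, prefix, n+1);
upper[n-1] = (char)(prefix[n-1] + 1);      /* '.' -> '/': the next ASCII character */

sqlite3_stmt *stmt = 0;
if( sqlite3_prepare_v2(db,
      "delete from items where itemName >= ?1 and itemName < ?2;",
      -1, &stmt, 0)==SQLITE_OK ){
  sqlite3_bind_text(stmt, 1, prefix, -1, SQLITE_TRANSIENT);
  sqlite3_bind_text(stmt, 2, upper, -1, SQLITE_TRANSIENT);
  sqlite3_step(stmt);
  sqlite3_finalize(stmt);
}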
 
 
 



Re: [sqlite] Re: Re: like operator

2007-08-26 Thread RaghavendraK 70574
sorry.

regards
ragha



- Original Message -
From: Igor Tandetnik [EMAIL PROTECTED]
Date: Sunday, August 26, 2007 7:00 pm
Subject: [sqlite] Re: Re: like operator

 RaghavendraK 70574
 [EMAIL PROTECTED] wrote:
  Will Sqlite uses indexes when
  using the like operator
  as below,
 
  select * from tbl where '9845' like column || '%'
  order by des limit 1;
 
 No. Next time you have a similar question, be aware you can find 
 out 
 yourself by prepending your query with EXPLAIN QUERY PLAN.
 
 Igor Tandetnik 
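
Igor's tip is just as usable from C, assuming an open sqlite3 *db; the query text is the
one from the question, and the plan columns are printed generically because their layout
differs between SQLite versions:

sqlite3_stmt *stmt = 0;
const char *sql =
    "EXPLAIN QUERY PLAN "
    "select * from tbl where '9845' like column || '%' order by des limit 1;";
if( sqlite3_prepare_v2(db, sql, -1, &stmt, 0)==SQLITE_OK ){
  while( sqlite3_step(stmt)==SQLITE_ROW ){
    int i, n = sqlite3_column_count(stmt);
    for(i=0; i<n; i++){
      printf("%s ", (const char*)sqlite3_column_text(stmt, i));
    }
    printf("\n");
  }
  sqlite3_finalize(stmt);
}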
 
 
 



Re: [sqlite] A Question About Creating and Accessing a SQLite Database in a RAM Drive

2007-08-21 Thread RaghavendraK 70574

Hi,

But the performance is very bad in comparison with the file-based DB;
any suggestions for improving it?

regards
ragha

- Original Message -
From: Rich Shepard [EMAIL PROTECTED]
Date: Tuesday, August 21, 2007 7:35 am
Subject: Re: [sqlite] A Question About Creating and Accessing a SQLite Database 
in a RAM Drive

 On Mon, 20 Aug 2007, Lee Crain wrote:
 
  Is there any reason why this would not be possible?
 
 Lee,
 
   Not at all. You can create the database in memory as long as you 
 have sufficient storage for it. See the web pages for syntax. If you do not 
 specify a filename when you invoke sqlite3, it's an in-memory database on 
 which you can perform all SQL operations.
 
 Rich
 
 -- 
 Richard B. Shepard, Ph.D.          | The Environmental Permitting
 Applied Ecosystem Services, Inc.   | Accelerator(TM)
 http://www.appl-ecosys.com  Voice: 503-667-4517  Fax: 503-667-8863
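
The C-API counterpart of the shell behaviour Rich describes is the special filename
":memory:"; a tiny sketch, assuming <sqlite3.h>:

sqlite3 *db = 0;
/* ":memory:" opens a private, temporary in-memory database that
** disappears when this connection is closed. */
if( sqlite3_open(":memory:", &db)==SQLITE_OK ){
  sqlite3_exec(db, "create table t(x); insert into t values(1);", 0, 0, 0);
}
sqlite3_close(db);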
 



Re: RE: [sqlite] A Question About Creating and Accessing a SQLite Database in a RAM Drive

2007-08-21 Thread RaghavendraK 70574
Hi,

I have tried with a RAM disk and ramfs. Read performance is the same as that of
the disk-based system.
How did I evaluate it?
I wrote a simple program:
  create a file on disk and on ramfs
  do pread(...)
In fact, sometimes the performance is lower when using a ramdisk, ramfs, or /dev/shm.

File size: 80 MB
Read at once: 20 bytes
Type of read: sequential, meaning in the loop I just increment the offset by
20 bytes.

Please guide me on how pread can be replaced for ramfs by something more efficient.
Is it possible to avoid the pread system call, as it consumes significant CPU
cycles when switching from user to kernel mode?

regards
ragha



- Original Message -
From: Rich Shepard [EMAIL PROTECTED]
Date: Wednesday, August 22, 2007 1:14 am
Subject: RE: [sqlite] A Question About Creating and Accessing a SQLite Database 
in a RAM Drive

 On Tue, 21 Aug 2007, Lee Crain wrote:
 
  The approach I planned was a little different than what you 
 proposed.
   That's fine, Lee.
 
  This technique for performing database updates offline and then 
 updating the original database via a file copy operation has 
 worked very well on
  hard drives. I am only considering using the RAM drive to 
 improve the
  speed of the database updates.
 
   This was common in the early 1980s when drives and other 
 hardware were
 slow. I've not seen a situation any time recently when this was necessary 
 with modern hardware and fast memory. When I was capturing real-time data
 (lat/lon from the GPS receiver and depth from the sonar), I'd 
 write both to
 memory buffers, then write to disk on a regular basis. This let me use
 slower hardware (compared to the data flow) while writing to disk 
 in chunks
 and ensuring that no data were lost.
 
   I'm confident that you can tune your database for speed in 
 other ways, but
 -- of course -- it's your choice.
 
 Good luck with it,
 
 Rich
 
 -- 
 Richard B. Shepard, Ph.D.          | The Environmental Permitting
 Applied Ecosystem Services, Inc.   | Accelerator(TM)
 http://www.appl-ecosys.com  Voice: 503-667-4517  Fax: 503-667-8863
 



Re: RE: [sqlite] A Question About Creating and Accessing a SQLite Database in a RAM Drive

2007-08-21 Thread RaghavendraK 70574
Hi,

Sqlite uses pread and the performance with ramdisk and disk is the same
on Linux SuSE9.

regards
ragha


- Original Message -
From: Lee Crain [EMAIL PROTECTED]
Date: Wednesday, August 22, 2007 3:31 am
Subject: RE: [sqlite] A Question About Creating and Accessing a SQLite Database 
in a RAM Drive

 Rich,
 
 We're going to delete and rewrite ~109,369 records in 5 tables 
 every week.
 
 
 Hard drives are a minimum of 10,000 times slower than RAM. I'll 
 let you
 know if this process is not a lot faster than writing the records,
 individually, to a hard drive. 
 
 Lee Crain
 
 _
 
 
 -Original Message-
 From: Rich Shepard [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, August 21, 2007 11:15 AM
 To: sqlite-users@sqlite.org
 Subject: RE: [sqlite] A Question About Creating and Accessing a SQLite
 Database in a RAM Drive
 
 On Tue, 21 Aug 2007, Lee Crain wrote:
 
  The approach I planned was a little different than what you 
 proposed.
   That's fine, Lee.
 
  This technique for performing database updates offline and then 
 updating the original database via a file copy operation has 
 worked very well on
  hard drives. I am only considering using the RAM drive to 
 improve the
  speed of the database updates.
 
   This was common in the early 1980s when drives and other 
 hardware were
 slow. I've not seen a situation any time recently when this was necessary 
 with modern hardware and fast memory. When I was capturing real-time data
 (lat/lon from the GPS receiver and depth from the sonar), I'd 
 write both
 to
 memory buffers, then write to disk on a regular basis. This let me use
 slower hardware (compared to the data flow) while writing to disk in
 chunks
 and ensuring that no data were lost.
 
   I'm confident that you can tune your database for speed in 
 other ways,
 but
 -- of course -- it's your choice.
 
 Good luck with it,
 
 Rich
 
 -- 
 Richard B. Shepard, Ph.D.          | The Environmental Permitting
 Applied Ecosystem Services, Inc.   | Accelerator(TM)
 http://www.appl-ecosys.com  Voice: 503-667-4517  Fax: 503-667-8863
 



Re: [sqlite] To increase search speed

2007-08-18 Thread RaghavendraK 70574

Use SQLite prepared statements.

regards
ragha

- Original Message -
From: Sreedhar.a [EMAIL PROTECTED]
Date: Saturday, August 18, 2007 10:52 am
Subject: [sqlite] To increase search speed

 Hi
 I am working in 3.3.6.
 I have created a table with 4 records with 12 fields as follows.
 
 create table MUSIC (Id integer primary key, AlbumName text not null collate nocase,
   Track text not null collate nocase, ArtistName text not null collate nocase,
   URL text not null collate nocase, Duration integer, TrackFormat text not null collate nocase,
   BitRate integer, sampleRate integer, Channels integer, Filesize integer,
   GenreName text not null collate nocase);
 
 I will often search for the following fields only.
 
   Select distinct AlbumName from MUSIC;
   Select distinct ArtistName from MUSIC;
   Select distinct GenreName from MUSIC;
   Select distinct AlbumName for a particular ArtistName
   Select Track for a particular AlbumName
   Select distinct ArtistName for a particular GenreName
 
 To obtain nice search speed which method will work fine.
 I have tried wilth the following methods.
 
 Method 1:
 
 It's the one described above
 
 Method 2:
 
 By doing indexing.I tried with the following.
 
   create index Musicidx1 on MUSIC(ArtistName collate nocase, AlbumName collate nocase);
   create index Musicidx2 on MUSIC(AlbumName collate nocase, ArtistName collate nocase, URL collate nocase);
 
 This gives better performance than method 1 for the following searches:
 
 Select distinct AlbumName for a particular ArtistName
 Select Track for a particular AlbumName
 Select distinct ArtistName for a particular GenreName
 
 Method 3:
 
 Joining Multiple tables.
 I have created 4 tables as follows;
 
 create table ALBUMS  (id integer primary key, Album text, unique(Album));
 create table ARTISTS (id integer primary key, Artist text, unique(Artist));
 create table GENRES  (id integer primary key, Genre text, unique(Genre));
 
 create table MUSIC (
   Id integer primary key,
   AlbumName text not null collate nocase,
   Track text not null collate nocase,
   ArtistName text not null collate nocase,
   URL text not null collate nocase,
   Duration integer,
   TrackFormat text not null collate nocase,
   BitRate integer,
   sampleRate integer,
   Channels integer,
   Filesize integer,
   album_id integer,
   artist_id integer,
   GenreName text not null collate nocase,
   Genre_Id integer);
 
 Here album_id , artist_id and Genre_Id are the id values of
 ALBUMS,ARTISTS,GENRES Tables.
 This shows better performance than indexing for the following 
 searches.
 
 Select distinct AlbumName from MUSIC;
 
 Select  distinct ArtistName from MUSIC;
 
 Select  distinct  GenreName from MUSIC;
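 
 For completeness, the lookups that previously filtered on the text columns can
 be written as joins through the id columns of Method 3; a hedged sketch (names
 taken from the schema above, the search values are placeholders):
 
     select distinct a.Album
       from MUSIC m
       join ALBUMS a  on a.id = m.album_id
       join ARTISTS r on r.id = m.artist_id
      where r.Artist = 'some artist';
 
     select m.Track
       from MUSIC m
       join ALBUMS a on a.id = m.album_id
      where a.Album = 'some album';
 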
 Method 4:
 
 In-memory method. I copy all the content from the database into an
 in-memory database and then perform the search there.
 With this method, newly inserted records go into the in-memory copy only,
 but I want them to be inserted into the original database as well. Is there
 any other way to do this?
 
 Can anyone help me increase my search speed?
 Thanks in Advance.
 
 Regards,
 Sreedhar
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
  
 




Re: RE: [sqlite] SQLITE_BUSY error in multi-threaded environment

2007-08-16 Thread RaghavendraK 70574
hi,

I am not clear on this.
Suppose I have 2 databases and a process spawns 2 threads, and each thread
opens its own db - will it result in any problem? They are independent files.


thx
ragha


- Original Message -
From: Andre du Plessis [EMAIL PROTECTED]
Date: Thursday, August 16, 2007 4:36 pm
Subject: RE: [sqlite] SQLITE_BUSY error in multi-threaded environment

 Ok, well I guess I forgot to mention this is what has made me want to
 pull my hair out a few times :) - the fact that you have to worry about
 both scenarios for two different reasons. If multiple threads are working
 with the same connection handle, then SQLite has a better understanding of
 the state of your connection and informs you of busy errors sooner. If you
 are using different DB handles, SQLite may not care that some other thread
 is busy with another handle, and all will work fine until the point where a
 connection needs to commit data and enter exclusive mode: it has to get an
 exclusive lock on the DB file, and no matter that other connections have
 their own handles, if they hold any locks on the db, SQLite will go into
 busy-handler mode and eventually time out, depending on how long you wait.
 If a query keeps a read cursor open for some reason, this will inevitably
 result in a "database is locked" error.
 The problem to watch out for is a deadlock, for example:
 
 THREAD1                  THREAD2
 BEGIN                    BEGIN
 INSERT SOME              INSERT SOME
 COMMIT (busy handler)    COMMIT (busy handler)
 
 As you can see, thread1 waits for thread2 and they will deadlock; unless
 you have a limit in your busy handler you will wait forever.
 
 As recommended, BEGIN IMMEDIATE should prevent thread2 from even starting
 a transaction if thread1 did so first; however, I think this will only work
 correctly if the same connection handle is used in both, else they still
 may not know about each other.
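 
 A minimal, hedged sketch of that pattern with a busy timeout (the table name
 t and the 5-second timeout are illustrative, db is an open connection):
 
     sqlite3_busy_timeout(db, 5000);        /* retry on SQLITE_BUSY for up to 5 s */
     int rc = sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0);
     if (rc == SQLITE_OK) {
         rc = sqlite3_exec(db, "INSERT INTO t VALUES (1)", 0, 0, 0);
         /* commit on success, roll back on any failure */
         sqlite3_exec(db, rc == SQLITE_OK ? "COMMIT" : "ROLLBACK", 0, 0, 0);
     } else if (rc == SQLITE_BUSY) {
         /* another connection already holds a write lock; try again later */
     }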
 
 So yes, there are a few ways to do this. One: make sure your busy handler
 works properly, let your application just try, fail on busy, throw the
 exception, and let the application try again until all locks are gone.
 Two: use a global mutex (if your application runs in more than one process
 space). Three: use a global critical section (this will be faster) if your
 application is just in one process space. Make sure that inserts/queries
 finish their business, including begin and commit transaction, inside the
 critical section.
 
 If your application ONLY does queries, for example, you should have NO
 problem.
 
 Additionally, if you are using the same DB handle across threads, EVERY
 CALL to the library, no matter what, should be serialized (locked) in a
 critical section.
 
 I've used these principles and that is running fine now, so I will stick
 to this design.
 
 Hope this helps
 
 -Original Message-
 From: Mark Brown [mailto:[EMAIL PROTECTED] 
 Sent: 15 August 2007 04:34 PM
 To: sqlite-users@sqlite.org
 Subject: RE: [sqlite] SQLITE_BUSY error in multi-threaded environment
 
 Hi Andre-
 
 After rereading your post, I wanted to confirm something.  In your
 example
 below, are thread1 and thread2 connected to the same database, or
 different
 databases?  In my scenario, the threads are connected to different
 databases, so I'm not sure if it is the same situation.
 
 Thanks,
 Mark
 
 
  -Original Message-
  From: Andre du Plessis [mailto:[EMAIL PROTECTED] 
  Sent: Wednesday, August 15, 2007 5:05 AM
  To: sqlite-users@sqlite.org
  Subject: RE: [sqlite] SQLITE_BUSY error in multi-threaded 
 environment 
  
  Being a newbie to SQLite I've had the same problems working with SQLite,
  so maybe I can help.
  It does not matter how well your database is synchronized; a common
  pitfall I had was that I would have a query object with an open cursor,
  which prevents any other statement from committing to the database.
  So for example:
 
  THREAD1                  THREAD2
  LOCK
  QUERY
  UNLOCK                   LOCK
  (step through query)     BEGIN TRANSACTION
                           INSERTS
                           COMMIT  <- SQLite busy error here
                           UNLOCK
   
 
 
 

[sqlite] like operator

2007-08-16 Thread RaghavendraK 70574
Hi,

We provide a web interface which receives delete requests.
In the request we get a key, and in the delete implementation we do this:
delete from table where itemName like 'xxx.%';

When the key itself is '%', the above statement becomes
delete from table where itemName like '%.%';
and results in the fatal problem of erasing all records.

Is there any API to deal with the like operator in these conditions? Pls help.
Hopefully the fix will not degrade performance.

regards
ragha






Re: [sqlite] Re: like operator

2007-08-16 Thread RaghavendraK 70574
Hi,

Thanks for the reply.
I could not understand the '/' - what is the purpose of it?

regards
ragha




- Original Message -
From: Igor Tandetnik [EMAIL PROTECTED]
Date: Friday, August 17, 2007 9:55 am
Subject: [sqlite] Re: like operator

 RaghavendraK 70574
 [EMAIL PROTECTED] wrote:
  we have given a web interface which receive delete request.
  Now in the req we get % and in the delete impl we do this
  delete from table where itemName like xxx.%;
 
  since the key is % the above statement becomes,
  delete from table where itemName like %.%;And result in fatal
  problem of erasing all records.
 
 Try
 
 delete from table
 where itemName >= 'xxx.' and itemName < 'xxx/';
 
 (a slash '/' character happens to come after period '.' in ASCII). Or
 
 delete from table
 where substr(itemName, 1, length('xxx.')) = 'xxx.';
 
 The first query would run much faster than the second if you have 
 an 
 index on itemName.
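 
 A hedged variant of the first form with the prefix passed as a bound
 parameter, so a stray '%' coming from the web form is treated as plain text
 (table name and the prefix variable are illustrative):
 
     sqlite3_stmt *st = 0;
     const char *sql =
         "DELETE FROM items WHERE itemName >= ?1 || '.' AND itemName < ?1 || '/'";
     if (sqlite3_prepare_v2(db, sql, -1, &st, 0) == SQLITE_OK) {
         sqlite3_bind_text(st, 1, prefix, -1, SQLITE_TRANSIENT);  /* e.g. "xxx" */
         sqlite3_step(st);
         sqlite3_finalize(st);
     }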
 
 Igor Tandetnik 
 
 
 
 




[sqlite] Best Match Query

2007-08-06 Thread RaghavendraK 70574
Hi,

How do I form the SQL query (in SQLite) for the following problem?

table:
44
442
4454

Input String: 4429845

Expected output from SQL query: 442
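
One hedged way to express this longest-matching-prefix lookup in plain SQL
(the table and column names are made up, since the post does not give them):

    select num
      from prefixes
     where '4429845' like num || '%'
     order by length(num) desc
     limit 1;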

regards
ragha



- Original Message -
From: Trevor Talbot [EMAIL PROTECTED]
Date: Monday, August 6, 2007 2:39 pm
Subject: Re: [sqlite] a c++ newbie question

 On 8/5/07, Stephen Sutherland [EMAIL PROTECTED] wrote:
 
I am trying to treat a string before passing it through my SQL 
 statement into the database.
 
I know that a  single apostrophe will break the SQL statement.
So I have to replace them all to double apostrophes.
 
But are there any other characters that will break the SQL 
 statement ?
 
  I actually have a situation where the user creates an XML file 
 and the contents of the XML file gets dumped in the database. So 
 there is opportunity for a hacker to create an XML file which has 
 some SQL statements in it like ' DELETE TABLE X ;
 
So any thoughts or existing code would be great.
 
 Don't attempt to treat strings at all.  Instead, always use the
 parametric binding API for whatever database you're using.  You
 prepare statements like INSERT INTO table VALUES (?), and then pass
 in the input string as a separate argument for the database engine to
 put in place of the ?.  This avoids the entire problem of escaping
 special characters, and you don't need to treat your input data
 specially.
 
 For sqlite, use sqlite3_prepare_v2() and sqlite3_bind_text().
 http://sqlite.org/capi3.html should get you up to speed on the
 process, and browse through the other documents on the site for more
 information.
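 
 A minimal sketch of that binding pattern (the table, column, and xml_text
 variable are placeholders):
 
     sqlite3_stmt *stmt = 0;
     int rc = sqlite3_prepare_v2(db, "INSERT INTO docs(body) VALUES (?)", -1, &stmt, 0);
     if (rc == SQLITE_OK) {
         /* xml_text may contain apostrophes or SQL fragments; it is stored verbatim */
         sqlite3_bind_text(stmt, 1, xml_text, -1, SQLITE_TRANSIENT);
         rc = sqlite3_step(stmt);           /* SQLITE_DONE on success */
         sqlite3_finalize(stmt);
     }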
 
 
 




Re: [sqlite] Re: Transaction Check

2007-07-29 Thread RaghavendraK 70574
Hi,

That would be a fatal risk. Is there a way to notify the application that such a
thing happened, so that the application doesn't add further data to the db?

regards
ragha


- Original Message -
From: Igor Tandetnik [EMAIL PROTECTED]
Date: Sunday, July 29, 2007 8:06 pm
Subject: [sqlite] Re: Transaction Check

 RaghavendraK 70574
 [EMAIL PROTECTED] wrote:
  If the sqlite file is deleted at runtime and if there are any open
  connections,can these connections detect and return error?
 
 On Windows, you simply cannot delete a file that is open by 
 somebody 
 else. DeleteFile call returns an error.
 
 On Linux, you can delete (technically, unlink) a file in this 
 situation, but it won't be really deleted until all handles on it 
 are 
 closed. So any already open connections will continue to work with 
 the 
 file.
 
 Igor Tandetnik 
 
 
 
 
 




[sqlite] Sqlite file deleted but no error when insert new records

2007-07-29 Thread RaghavendraK 70574
Hi,

Is there a way to notify the application that such a thing happened, so that
the application doesn't add further data to the db?

regards
ragha



- Original Message -
From: Gilles Ganault [EMAIL PROTECTED]
Date: Monday, July 30, 2007 9:41 am
Subject: Re: [sqlite] Re: How to know which rows are affected by  UPDATE/DELETE?

 At 14:06 29/07/2007 -0400, you wrote:
 select rowid from suppliers where name='IBM';
 
 Simple enough :-) Thanks.
 
 
 
 




[sqlite] Transaction Check

2007-07-28 Thread RaghavendraK 70574
Hi,

Is there any API which tells us whether a transaction has already been started or not?

regards
ragha





Re: [sqlite] Transaction Check

2007-07-28 Thread RaghavendraK 70574
Thank you.


- Original Message -
From: [EMAIL PROTECTED]
Date: Saturday, July 28, 2007 6:20 pm
Subject: Re: [sqlite] Transaction Check

 RaghavendraK 70574 [EMAIL PROTECTED] wrote:
  Hi,
  
  Is there any api which tell us if a transaction is already 
 started or not?
  
 
 sqlite3_get_autocommit()
 
 --
 D. Richard Hipp [EMAIL PROTECTED]
 
 
 
 
 




Re: [sqlite] Transaction Check

2007-07-28 Thread RaghavendraK 70574
I am not sure of a SELECT-based way, but
probing with sqlite3_exec(db, "BEGIN", ...) and checking the result may be costly.

regards
ragha
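
For reference, a hedged sketch of checking it directly from code (db is an
open connection handle):

    /* sqlite3_get_autocommit() returns non-zero while the connection is in
       autocommit mode, i.e. no explicit transaction is currently open */
    if (sqlite3_get_autocommit(db)) {
        sqlite3_exec(db, "BEGIN", 0, 0, 0);   /* safe to start a transaction */
    } else {
        /* a transaction is already in progress on this connection */
    }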


- Original Message -
From: Dr Gerard Hammond [EMAIL PROTECTED]
Date: Sunday, July 29, 2007 8:10 am
Subject: Re: [sqlite] Transaction Check

 Cool. Is there a way to ask the engine this?
 Some SELECT call?
 
 At 12:50 PM + 28/7/07, [EMAIL PROTECTED] wrote:
 RaghavendraK 70574 [EMAIL PROTECTED] wrote:
   Hi,
 
   Is there any api which tell us if a transaction is already 
 started or not?
 
 
 sqlite3_get_autocommit()
 
 
 
 -- 
 
 Cheers,
 
 Dr Gerard Hammond
 MacSOS Solutions Pty Ltd
 [EMAIL PROTECTED]  http://www.macsos.com.au
 
 Proofread carefully to see if you any words out.
 
 
 
 




Re: [sqlite] Transaction Check

2007-07-28 Thread RaghavendraK 70574

Hi,

If the sqlite file is deleted at runtime and if there are any open 
connections,can these connections detect and return error?

Pls suggest.

regards
ragha

- Original Message -
From: [EMAIL PROTECTED]
Date: Saturday, July 28, 2007 6:20 pm
Subject: Re: [sqlite] Transaction Check

 RaghavendraK 70574 [EMAIL PROTECTED] wrote:
  Hi,
  
  Is there any api which tell us if a transaction is already 
 started or not?
  
 
 sqlite3_get_autocommit()
 
 --
 D. Richard Hipp [EMAIL PROTECTED]
 
 
 
 
 




[sqlite] Duplicate Row check

2007-07-19 Thread RaghavendraK 70574
Hi,

How can I check whether a row exists in the db or not without querying for it?

regards
ragha






Re: [sqlite] Duplicate Row check

2007-07-19 Thread RaghavendraK 70574
My question was incomplete.
When I do an insert, is there a way to know that the row already exists?

regards
ragha
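
A couple of hedged options, assuming the table has a PRIMARY KEY or UNIQUE
constraint on the columns in question (table and values are illustrative):

    -- a plain INSERT reports the duplicate: sqlite3_step()/sqlite3_exec()
    -- return SQLITE_CONSTRAINT when the row already exists

    -- or skip the duplicate silently and check how many rows were really added:
    INSERT OR IGNORE INTO test (serName, doid) VALUES ('a', 'b');
    -- in C: if (sqlite3_changes(db) == 0) the row was already there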


- Original Message -
From: RaghavendraK 70574 [EMAIL PROTECTED]
Date: Thursday, July 19, 2007 5:17 pm
Subject: [sqlite] Duplicate Row check

 Hi,
 
 How can check if a row exists in the db or not without querying for 
 it?
 regards
 ragha
 
 
 
 
 




Re: [sqlite] performance issue

2007-07-18 Thread RaghavendraK 70574

Turn off sync using a pragma.
Build sqlite with a cache size and page size suitable to your system.
Use sqlite3_analyzer to work out a suitable page size.
Always wrap inserts and updates in begin and commit/rollback.

Joe and others had provided us with good direction.
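
For example (hedged; the values are illustrative and should be measured on the
target hardware):

    PRAGMA synchronous = OFF;   -- skip fsync: faster, but risks corruption on power loss
    PRAGMA cache_size = 32768;  -- pages cached in memory
    PRAGMA page_size = 8192;    -- only takes effect before the database is populated
    BEGIN;
      -- batched INSERT / UPDATE statements here
    COMMIT;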

regards
ragha

- Original Message -
From: [EMAIL PROTECTED]
Date: Wednesday, July 18, 2007 1:36 pm
Subject: [sqlite] performance issue

 Hi
 
 I am using SQLite on MVL OS for ARM processor based embedded platform.
 I am using SQLite version 3.3.13. We use SQLite APIs for DB operation.
 I am facing following issue. 
 
 While testing I observed INSERT and UPDATE command is taking more 
 time 
 than SELECT queries.
 For example one select query is taking 1 to 2 mili sec where as 
 one INSERT 
 or UPDATE takes 40 to 100 mili secs.
 We are seeing very high latency for write queries.
 We tried some performance enhancement flags and PRAGMA settings. 
 
 Is there any performance enhancement settings in SQLite? Or any 
 known 
 issue?
 
 Thanks  Regards
 Suresh
 




[sqlite] Prepared Statement (select * from x where y in ());

2007-07-18 Thread RaghavendraK 70574
HI,

1) How can we prepare a SQLite statement for the following type of select?
  select * from tablename where column in (?);
  ?: we don't know the length of this list in advance.

2) Can we mix prepared statements and sqlite3_exec functions?

regards
ragha







Re: [sqlite] Prepared Statement (select * from x where y in ());

2007-07-18 Thread RaghavendraK 70574

Maybe my question is not clear.

The problem is as shown below:

SQL = select * from table where xyz in (:abc);

bind_text('ab','xy','zx','123')

bind_text adds quotes at the beginning and at the end, making it one single
string rather than the list ab,xy,zx.

regards
ragha
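
One hedged workaround is to generate the right number of '?' placeholders for
the list and bind each value separately (sketch; table/column names are
illustrative):

    #include <sqlite3.h>
    #include <string>
    #include <vector>

    // builds "SELECT * FROM test WHERE serName IN (?,?,...,?)" for vals.size() values
    int select_in(sqlite3 *db, const std::vector<std::string> &vals) {
        std::string sql = "SELECT * FROM test WHERE serName IN (";
        for (size_t i = 0; i < vals.size(); ++i)
            sql += (i ? ",?" : "?");
        sql += ")";
        sqlite3_stmt *st = 0;
        int rc = sqlite3_prepare_v2(db, sql.c_str(), -1, &st, 0);
        if (rc != SQLITE_OK) return rc;
        for (size_t i = 0; i < vals.size(); ++i)
            sqlite3_bind_text(st, (int)i + 1, vals[i].c_str(), -1, SQLITE_TRANSIENT);
        while ((rc = sqlite3_step(st)) == SQLITE_ROW) { /* read columns here */ }
        sqlite3_finalize(st);
        return rc == SQLITE_DONE ? SQLITE_OK : rc;
    }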



- Original Message -
From: Bharath Booshan L [EMAIL PROTECTED]
Date: Thursday, July 19, 2007 11:56 am
Subject: Re: [sqlite] Prepared Statement (select * from x where y in ());

 
 
 
  1)How can we prepare a SQliteStatement for the following type of 
 select,   select * from tablename where column in (?);
?: we don;t know the length of this string.
 
 
 Pass any value less than zero as the 3rd parameter to the sqlite3_prepare
 function, in which case characters up to the first null terminator will be
 considered.
 
 Cheers,
 
 Bharath Booshan L.
 
 
 
 
 
 
 
 




[sqlite] Sqlite in Shared Memory

2007-07-12 Thread RaghavendraK 70574
Hi,

I want to use SQLite as an in-memory db and want this memory to be allocated in shared memory (SHM).
Has anyone tried it?

regards
ragha





Re: [sqlite] Database File size not exceeding 2GB

2007-07-05 Thread RaghavendraK 70574
Pls check if you can create a normal file larger than 2GB from your program;
if not, then some large-file build options must be missing.

regards
ragha
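
A crude, hedged way to test that outside SQLite: write a few GB to a plain file
in chunks and see where it stops (the file name is arbitrary):

    #include <stdio.h>
    int main(void) {
        FILE *f = fopen("bigfile.bin", "wb");
        if (!f) return 1;
        static char buf[1 << 20];                /* 1 MB of zeros */
        for (int i = 0; i < 3 * 1024; i++) {     /* ~3 GB total */
            if (fwrite(buf, 1, sizeof buf, f) != sizeof buf) {
                printf("write failed after %d MB\n", i);
                break;
            }
        }
        fclose(f);
        return 0;
    }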


- Original Message -
From: Krishnamoorthy, Priya (IE10) [EMAIL PROTECTED]
Date: Thursday, July 5, 2007 7:37 pm
Subject: [sqlite] Database File size not exceeding 2GB

 Hi all,
 
 
 
 This is with reference to the problem mentioned below:
 
 
 
 1)   I am using NTFS file system
 
 2)   I tried in windows 2000 and Windows XP
 
 
 
 But still I am able to log only 2 GB of data. SQLite stops logging data
 beyond that, but it is not throwing any exception either.
 
 
 
 Is there any setting that needs to be done in Windows XP? Enable 
 LFS etc
 so as to log data more than 2GB?
 
 
 
 Please help me in this regard.
 
 
 
 Regards,
 
 Priya
 
 
 
 ---
 -
 ---
 -
 
 
 
 
 I am using SQLite3 database in my application.
 
 
 
 My application runs on Windows XP (32 bit) platform. I am not able to
 store more than 2GB of data in my database. Is it not possible to 
 storemore than 2gb data in windows XP? 
 
 
 
 I used SQlite3 in Linux and could store more than 2GB.
 
 
 
 Please help me in this regard
 
 
 
 Best regards,
 
 Priya 
 
 
 
 
 
 




Re: [sqlite] Re: In Mem Query Performance

2007-07-03 Thread RaghavendraK 70574
Hi,

As said, I used a separate ID (hash) but it made the performance very bad.
Now it takes minutes. [Version: 3.4.0]

regards
ragha

unsigned long idGen(const string& id)
{
    const char* __s = id.c_str();
    unsigned long __h = 0;
    for ( ; *__s; ++__s)
    {
        __h = 5*__h + *__s;
    }
    return size_t(__h);
}

gen: program
int main()
{
    fstream fout("data.sql", ios::out);
    fstream fout1("query.sql", ios::out);
    fout << "Begin;" << endl;
    fout << "create table test (serName text,doid text,id integer,"
            "primary key (serName,doid,id));" << endl;
    fout << "create index serNameIdx1 on test(id);" << endl;
    for (int j = 0; j < 10; j++)
    {
        char* ptr = tmpnam(0);
        string key = "11.";
        key += ptr;
        fout1 << key << endl;
        unsigned long idKey = idGen(key);
        fout << "insert into test values ('" << key << "','2'," << idKey << ");" << endl;
    }
    fout << "commit;" << endl;

    return 0;
}

Perf Prg:
int main()
{
    sqlite3* db;
    int ret = sqlite3_open("abc", &db);

    char* errmsg;
    char buf[1024];

    //read all the srvName from file and store it in arr and query.
    fstream fin("query.sql", ios::in);
    string data[10];
    long long idKeyArr[10];

    for (int j = 0; !fin.eof(); j++)
    {
        fin >> data[j];
        //cout << data[j] << endl;
        idKeyArr[j] = idGen(data[j]);
    }
    fin.close();

    sprintf(buf, "create temporary table ttest(column1 text,column2 text,id integer,"
                 "primary key (column1,column2,id));");
    ret = sqlite3_exec(db, buf, NULL, NULL, &errmsg);

    sprintf(buf, "create index idx1 on ttest(id);");
    ret = sqlite3_exec(db, buf, NULL, NULL, &errmsg);

    sprintf(buf, "insert into ttest select * from test;");
    ret = sqlite3_exec(db, buf, NULL, NULL, &errmsg);

    sleep(5);

    cout << "Calling Test Now" << endl;

    list<string> lst;
    char* szError = 0;
    const char* szTail = 0;
    sqlite3_stmt* pVM;
    string clauseSt;

    //memset(buf,0,1024);
    //sprintf(buf, "select doid from test where id = :xyz");
    sprintf(buf, "select column2 from ttest where id = :xyz");

    ret = sqlite3_prepare(db, buf, strlen(buf), &pVM, &szTail);
    cout << __LINE__ << " ret=" << ret << endl;

    long long idKey = 0;

    //Now Query Data.
    time_t start = time(0);

    for (int k = 0; k < 10; k++)
    {
        //idKey = idGen(data[k]);

        ret = sqlite3_bind_int64(pVM,
                                 sqlite3_bind_parameter_index(pVM, ":xyz"),
                                 idKeyArr[k]);

        ret = sqlite3_step(pVM);
        //cout << "data=" << sqlite3_column_text(pVM,0) << endl;
        sqlite3_reset(pVM);
    }
    time_t end = time(0);
    cout << "start=" << start << endl;
    cout << "end="   << end   << endl;
    cout << "diff="  << end - start << endl;

    return 0;
}




- Original Message -
From: Ken [EMAIL PROTECTED]
Date: Tuesday, July 3, 2007 10:23 pm
Subject: Re: [sqlite] Re: In Mem Query Performance

 Just a thought regarding this key issue.
 
 enhancement for sqlite, enable a reverse Key indexing method. 
 Would this be a simple change?
 
 Ken
 
 [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: 
 Hi
 
 Hi Ken,
 
 Thanks a lot.
 But this would require the key to be short. Will check if this is
 acceptable to all, as we may not be able to port old db data if the key
 format is changed.
 
 
 Perhaps the key can be modified only for comparison. You store the
 key as you want, but before comparing it do an RLE compression. You can
 store the RLE-compressed key in the database too. Note that RLE is a
 one-to-one transform, that is, one key has only one compressed key and one
 compressed key has only one key. Working that way you can compare
 200,1,1,2 with 199,1,2,2 (four bytes) instead of ..(196 '1')..2
 with ..(195 '1')..22.
 
 
 HTH
 
 regards
 ragha
 
 
 

Re: [sqlite] Re: In Mem Query Performance

2007-07-02 Thread RaghavendraK 70574
Hi,

Pls notify me if it can be brought down to 1 sec.
There is no mem constraint.

regards
ragha


- Original Message -
From: RaghavendraK 70574 [EMAIL PROTECTED]
Date: Sunday, July 1, 2007 11:58 pm
Subject: Re: [sqlite] Re: In Mem Query Performance

 Hi Joe/John
 
 Appreciate your reply.
 Result: with your suggestions it is to down to 3 secs.
 But i need to get it to 1sec. Possible?
 
 Here are the findings,
 1) With Mem db default compile options and with suggestions(bind+ 
 index) you have
 provided to use bind performance stands at 10-11secs.
 2) With File db default compile options it stands at 13-14secs.
 3) begin and commit seems to have no affect on the query.
   Seperate index on mem table seems to have no affect.
 4) Earlier Joe had suggetsed to make changes to page size,with that
 it is down to 3 secs.[Along with your suggestions]
  Compile Option changes:
  ./configure --enable-threadsafe
 
  BCC = gcc -g -O2
 
  # C Compile and options for use in building executables that
  # will run on the target platform.  (BCC and TCC are usually the
  # same unless your are cross-compiling.)
  #
  TCC = gcc -g -O2 -I. -I${TOP}/src
 
  # Define -DNDEBUG to compile without debugging (i.e., for 
 production usage)
  # Omitting the define will cause extra debugging code to be 
 inserted and
  # includes extra comments when EXPLAIN stmt is used.
  #
  TCC += -DNDEBUG=1 -DSQLITE_DEFAULT_CACHE_SIZE=32768 -
 DSQLITE_DEFAULT_PAGE_SIZE=8192 -
 DSQLITE_DEFAULT_TEMP_CACHE_SIZE=32768 -DTEMP_STORE=3
 
 regards
 ragha
 
 - Original Message -
 From: Joe Wilson [EMAIL PROTECTED]
 Date: Sunday, July 1, 2007 12:47 pm
 Subject: Re: [sqlite] Re: In Mem Query Performance
 
  In addition,
  
  - make a new index only on column1
  - move the prepare before the loop to avoid reparsing the SELECT 
  each time
  - use sqlite3_bind_* and sqlite3_reset in the loop.
  - move finalize after the loop.
  - query the test table directly - not the temporary ttest table.
  - don't SELECT * if you only need a column or two
  
  --- RaghavendraK 70574 [EMAIL PROTECTED] wrote:
   int main()
   {
 sqlite3* db;
 int ret = sqlite3_open(abc,db);
   
 char* errmsg;
 char buf[1024];
 sprintf(buf,create temporary table ttest(column1 
 text,column2 
  text,primary key
   (column1,column2));); 
 ret = sqlite3_exec(db , buf, NULL ,NULL,errmsg);
   
   
 sprintf(buf,insert into ttest select * from test;);
 ret = sqlite3_exec(db , buf, NULL ,NULL,errmsg);
   
   
 //read all the srvName from file and store it in arr and query.
 fstream fin(query.sql,ios::in);
 string data[10];
  
  for(int j=0;!fin.eof();j++)
  {
 findata[j];
 //coutdata[j]endl;
  } 
  fin.close();
   
  coutCalling Test Nowendl;
  sleep(1);
   
  //Now Query Data.
  time_t start = time(0);
  liststring lst;
   
  char* szError=0;
  const char* szTail=0;
  sqlite3_stmt* pVM;
   
  for(int k=0;k10;k++)
  {
 sprintf(buf,select * from ttest where column1 = 
  '%s',data[k].c_str());  
 
   
 ret = sqlite3_prepare(db, buf, -1, pVM, szTail);  
 ret = sqlite3_step(pVM); 
 //lst.push_back();
 ret = sqlite3_finalize(pVM);
  }
  //
  time_t end = time(0);
  coutstart=startendl;
  coutend=endendl;
   
 return 0;
   }
   
   //Result: It takes 17 sec to read 100,000 records. Can it be 
  reduced to 1 sec

Re: [sqlite] iPhone

2007-07-02 Thread RaghavendraK 70574

Congratulations to DRH and all.

I just love this beast, incredibly small but with monster features.

regards
ragha

- Original Message -
From: Rich Shepard [EMAIL PROTECTED]
Date: Tuesday, July 3, 2007 3:01 am
Subject: Re: [sqlite] iPhone

 On Mon, 2 Jul 2007, Peter van Dijk wrote:
 
  Apple's new iPhone uses SQLite 3.1.3 to manage its database of 
 certificates
   Well, then: imagine how much more folks would have had to pay 
 to cover the
 costs of developing their own dbms, or licensing a commercial one.
 
 Rich
 
 -- 
 Richard B. Shepard, Ph.D.   |The Environmental 
 PermittingApplied Ecosystem Services, Inc.|  
 Accelerator(TM)http://www.appl-ecosys.com Voice: 503-667-
 4517  Fax: 503-667-8863
 
 
 




Re: [sqlite] iPhone

2007-07-02 Thread RaghavendraK 70574
Definitely not iPhone.

regards
ragha


- Original Message -
From: P Kishor [EMAIL PROTECTED]
Date: Tuesday, July 3, 2007 10:32 am
Subject: Re: [sqlite] iPhone

 On 7/2/07, RaghavendraK 70574 [EMAIL PROTECTED] wrote:
 
  Congratulations to DRH and all.
 
  I Just love this beast,incredibly small but with monster features.
 
 
 
 hmmm now, is that a phone in your pocket, or is it just that your
 database is happy to see me? Which beast are we talking about here?
 
 ;-)
 
 
 
 
  - Original Message -
  From: Rich Shepard [EMAIL PROTECTED]
  Date: Tuesday, July 3, 2007 3:01 am
  Subject: Re: [sqlite] iPhone
 
   On Mon, 2 Jul 2007, Peter van Dijk wrote:
  
Apple's new iPhone uses SQLite 3.1.3 to manage its database of
   certificates
 Well, then: imagine how much more folks would have had to pay
   to cover the
   costs of developing their own dbms, or licensing a commercial one.
  
   Rich
  
   --
   Richard B. Shepard, Ph.D.   |The Environmental
   PermittingApplied Ecosystem Services, Inc.|
   Accelerator(TM)http://www.appl-ecosys.com Voice: 503-667-
   4517  Fax: 503-667-8863
  
  
  
 
 
 
 
 
 
 -- 
 Puneet Kishor http://punkish.eidesis.org/
 Nelson Inst. for Env. Studies, UW-Madison http://www.nelson.wisc.edu/
 Open Source Geospatial Foundation http://www.osgeo.org/education/
 ST Policy Fellow, National Academy of Sciences http://www.nas.edu/
 ---
 --
 collaborate, communicate, compete
 =
 
 
 




Re: [sqlite] Re: In Mem Query Performance

2007-07-01 Thread RaghavendraK 70574
Hi Joe/John

Appreciate your reply.
Result: with your suggestions it is down to 3 secs.
But I need to get it to 1 sec. Possible?

Here are the findings:
1) With the mem db, default compile options and your suggestions (bind + index),
performance stands at 10-11 secs.
2) With a file db and default compile options it stands at 13-14 secs.
3) begin and commit seem to have no effect on the query.
   A separate index on the mem table seems to have no effect.
4) Earlier Joe had suggested making changes to the page size; with that
it is down to 3 secs. [Along with your suggestions]
  Compile Option changes:
  ./configure --enable-threadsafe

  BCC = gcc -g -O2

  # C Compile and options for use in building executables that
  # will run on the target platform.  (BCC and TCC are usually the
  # same unless your are cross-compiling.)
  #
  TCC = gcc -g -O2 -I. -I${TOP}/src

  # Define -DNDEBUG to compile without debugging (i.e., for production 
usage)
  # Omitting the define will cause extra debugging code to be inserted and
  # includes extra comments when EXPLAIN stmt is used.
  #
  TCC += -DNDEBUG=1 -DSQLITE_DEFAULT_CACHE_SIZE=32768 
-DSQLITE_DEFAULT_PAGE_SIZE=8192 -DSQLITE_DEFAULT_TEMP_CACHE_SIZE=32768 
-DTEMP_STORE=3

regards
ragha


- Original Message -
From: Joe Wilson [EMAIL PROTECTED]
Date: Sunday, July 1, 2007 12:47 pm
Subject: Re: [sqlite] Re: In Mem Query Performance

 In addition,
 
 - make a new index only on column1
 - move the prepare before the loop to avoid reparsing the SELECT 
 each time
 - use sqlite3_bind_* and sqlite3_reset in the loop.
 - move finalize after the loop.
 - query the test table directly - not the temporary ttest table.
 - don't SELECT * if you only need a column or two
 
 --- RaghavendraK 70574 [EMAIL PROTECTED] wrote:
  int main()
  {
sqlite3* db;
int ret = sqlite3_open(abc,db);
  
char* errmsg;
char buf[1024];
sprintf(buf,create temporary table ttest(column1 text,column2 
 text,primary key
  (column1,column2));); 
ret = sqlite3_exec(db , buf, NULL ,NULL,errmsg);
  
  
sprintf(buf,insert into ttest select * from test;);
ret = sqlite3_exec(db , buf, NULL ,NULL,errmsg);
  
  
//read all the srvName from file and store it in arr and query.
fstream fin(query.sql,ios::in);
string data[10];
 
 for(int j=0;!fin.eof();j++)
 {
findata[j];
//coutdata[j]endl;
 } 
 fin.close();
  
 coutCalling Test Nowendl;
 sleep(1);
  
 //Now Query Data.
 time_t start = time(0);
 liststring lst;
  
 char* szError=0;
 const char* szTail=0;
 sqlite3_stmt* pVM;
  
 for(int k=0;k10;k++)
 {
sprintf(buf,select * from ttest where column1 = 
 '%s',data[k].c_str());  

  
ret = sqlite3_prepare(db, buf, -1, pVM, szTail);  
ret = sqlite3_step(pVM); 
//lst.push_back();
ret = sqlite3_finalize(pVM);
 }
 //
 time_t end = time(0);
 coutstart=startendl;
 coutend=endendl;
  
return 0;
  }
  
  //Result: It takes 17 sec to read 100,000 records. Can it be 
 reduced to 1 sec.???
 
 
 
   
 
 
 
 




Re: [sqlite] Re: In Mem Query Performance

2007-06-30 Thread RaghavendraK 70574
Hi All,

Pls find the sample test below,
A) Using SQLite 3.3.17, SuSE9, 8GB RAM, 4-CPU machine:
1) Compile the test-data generator program and generate the data
2) Import the data into the DB
3) Compile the perf program and then generate the perf result
4) Result: for 100,000 records it takes 17 sec, finding one record at a time.

regards
ragha
@@1) Program to generate the test data:

//Compile: g++ -g gen.cpp -o gen

#include <unistd.h>
#include <iostream>
#include <stdio.h>
#include <fstream>
#include <string>

using namespace std;


int main()
{
    fstream fout("data.sql", ios::out);
    fstream fout1("query.sql", ios::out);
    //Schema
    fout << "create table test (serName text,doid text,primary key (serName,doid));" << endl;

    for (int j = 0; j < 10; j++)
    {
        char* ptr = tmpnam(0);
        string key = ".";
        key += ptr;
        fout1 << key << endl;
        fout << "insert into test values ('" << key << "','2');" << endl;
    }

    return 0;
}

@@2) Use the sqlite3 shell to import the data into the db file "abc"; wrap it in
begin and commit to make it faster.

@@3) Program to check the performance
//VersionUsed:3.3.17
//Compile:g++ -g main.cpp -I. -L. -lsqlite3 -lpthread

#include <iostream>
#include <fstream>
#include <time.h>
#include <list>

#include <string>
#include <unistd.h>

#include <sqlite3.h>

using namespace std;

int main()
{
    sqlite3* db;
    int ret = sqlite3_open("abc", &db);

    char* errmsg;
    char buf[1024];
    sprintf(buf, "create temporary table ttest(column1 text,column2 text,"
                 "primary key (column1,column2));");
    ret = sqlite3_exec(db, buf, NULL, NULL, &errmsg);

    sprintf(buf, "insert into ttest select * from test;");
    ret = sqlite3_exec(db, buf, NULL, NULL, &errmsg);

    //read all the srvName from file and store it in arr and query.
    fstream fin("query.sql", ios::in);
    string data[10];

    for (int j = 0; !fin.eof(); j++)
    {
        fin >> data[j];
        //cout << data[j] << endl;
    }
    fin.close();

    cout << "Calling Test Now" << endl;
    sleep(1);

    //Now Query Data.
    time_t start = time(0);
    list<string> lst;

    char* szError = 0;
    const char* szTail = 0;
    sqlite3_stmt* pVM;

    for (int k = 0; k < 10; k++)
    {
        sprintf(buf, "select * from ttest where column1 = '%s'", data[k].c_str());

        ret = sqlite3_prepare(db, buf, -1, &pVM, &szTail);
        ret = sqlite3_step(pVM);
        //lst.push_back();
        ret = sqlite3_finalize(pVM);
    }
    //
    time_t end = time(0);
    cout << "start=" << start << endl;
    cout << "end="   << end   << endl;

    return 0;
}

//Result: It takes 17 sec to read 100,000 records. Can it be reduced to 1 
sec.???
//All help appreciated.

//*

regards
ragha




- Original Message -
From: A. Pagaltzis [EMAIL PROTECTED]
Date: Saturday, June 30, 2007 9:48 pm
Subject: [sqlite] Re: In Mem Query Performance

 Hi RaghavendraK,
 
 * RaghavendraK 70574 [EMAIL PROTECTED] [2007-06-25 08:45]:
  When the DB is opened in in Memory mode,performance of query
  does not improve. For table which has 10 columns of type Text
  and each column having 128bytes data and having a total of
  1 records.
 
 that is small enough to fit into memory, and therefore small
 enough to fit into the OS disk cache. In such a case, the
 performance of SQLite does not differ significantly between
 in-memory and on-disk databases.
 
 Your problem is elsewhere. If you provide your schema and
 queries, someone might be able to tell you what about them makes
 SQLite go so slow and how to make it faster.
 
 Regards,
 -- 
 Aristotle Pagaltzis // http://plasmasturm.org/
 
 
 




Re: [sqlite] In Mem Query Performance

2007-06-29 Thread RaghavendraK 70574

Hi Joe,

I have configured the sqlite in the following mode (see below), and performance
marginally improved from 2500 records/sec to 8000/sec.

Pls help, if anything else needs to be done, to get at least 3records/sec
with each column being 128 bytes, a 10-column table, and 1 records.

regards
ragha

Compile Options:

./configure --enable-threadsafe

Makefile:

#BCC = gcc -g -O2
BCC = gcc  -O2

# C Compile and options for use in building executables that
# will run on the target platform.  (BCC and TCC are usually the
# same unless your are cross-compiling.)
#
#TCC = gcc -g -O2 -I. -I${TOP}/src
TCC = gcc  -O2 -I. -I${TOP}/src

# Define -DNDEBUG to compile without debugging (i.e., for production usage)
# Omitting the define will cause extra debugging code to be inserted and
# includes extra comments when EXPLAIN stmt is used.
#
TCC += -DNDEBUG=1 -DSQLITE_DEFAULT_CACHE_SIZE=32768 
-DSQLITE_DEFAULT_PAGE_SIZE=8192 -DSQLITE_DEFAULT_TEMP_CACHE_SIZE=32768 
-DTEMP_STORE=3


- Original Message -
From: RaghavendraK 70574 [EMAIL PROTECTED]
Date: Tuesday, June 26, 2007 10:30 pm
Subject: Re: [sqlite] In Mem Query Performance

 Ok.
 Will notify u once i complete the test.
 
 regards
 ragha
 
 - Original Message -
 From: Joe Wilson [EMAIL PROTECTED]
 Date: Wednesday, June 27, 2007 0:48 am
 Subject: Re: [sqlite] In Mem Query Performance
 
  :memory: databases only have a page size of 1024.
  
  Try various page_size settings for a file based database file and
  see what happens.
  
  I have no other suggestions.
  
  --- RaghavendraK 70574 [EMAIL PROTECTED] wrote:
   It is a server platform,Linux SuSE9 enterpraise edition.
   4 CPU machine,8GB ram.
   We want load all the tables in to mem db of Sqlite.Achieve
   read performance of upto 5records/sec for the table data i 
  had mentioned earlier.
   
   so it would have to be file based.
   I could not get it. Does it mean even increasing the page size 
  there would be
   no effect on the performance?
  
  
  
  
  
 
  
  
  
 
 
 
 




Re: [sqlite] FW: BLOB data retrieval

2007-06-27 Thread RaghavendraK 70574
GetTable will retrieve an entire snapshot of your table. I guess your app
is a 32-bit application, hence it can access at most 2GB of user address space.
Upgrade to 64-bit to access beyond this limit.


regards
ragha
PS: SQLite is designed for small data sets aimed at embedded apps
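
One hedged alternative to loading the whole table with getTable is to step
through it row by row with the underlying C API, so only one BLOB is held in
memory at a time (column names as in the quoted code; the table name is a
placeholder):

    sqlite3_stmt *st = 0;
    const char *sql = "SELECT Row_Num, Data FROM blobs ORDER BY Row_Num";
    if (sqlite3_prepare_v2(db, sql, -1, &st, 0) == SQLITE_OK) {
        while (sqlite3_step(st) == SQLITE_ROW) {
            sqlite3_int64 row = sqlite3_column_int64(st, 0);  /* 64-bit, no 2 GB wrap */
            const void *blob  = sqlite3_column_blob(st, 1);
            int         len   = sqlite3_column_bytes(st, 1);
            /* process blob/len for this row here */
            (void)row; (void)blob; (void)len;
        }
        sqlite3_finalize(st);
    }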


- Original Message -
From: Krishnamoorthy, Priya (IE10) [EMAIL PROTECTED]
Date: Thursday, June 28, 2007 1:21 pm
Subject: [sqlite] FW: BLOB data retrieval

 Hi,
 
 
 
 I have a database which has a table that contains BLOB data. The table
 has two columns - one is Row_Num which is of type AUTO_INCREMENT
 (INTERGER_PRIMARY_KEY) and the other column Data contains BLOB data.
 
 
 
 I am writing a program (in MS VC++) which has to read the blob 
 data in
 all the rows of the above mentioned table one after another. I am 
 usingcppSQLite3 which is C++ wrapper function for accessing 
 sqlite3 db. Mine
 is an offline program that reads the data in the database which is
 updated by another program.
 
 
 
 I wrote the following code for reading from the database.
 
 
 
 CppSQLite3DB     db;
 CppSQLite3Buffer bufSQL;

 // Open the database. The name is provided by the user
 db.open(database_name);

 // Read the entire binary table
 bufSQL.format("select * from %s order by 1;", table_name);
 CppSQLite3Table b = db.getTable(bufSQL);
 CppSQLite3Binary blobz;

 // Read binary records one at a time from the database until all
 // the records are read
 for (int i = 1; i <= b.numRows(); i++)
 {
     CppSQLite3Query q;
     CppSQLite3Buffer sql_command;
     long length = 0;

     // Read binary record from row number i
     sql_command.format("select Data from %s where Row_Num = %d;", table_name, i);
     q = db.execQuery(sql_command);

     if (!q.eof())
     {
         blobz.setEncoded((unsigned char*)q.fieldValue("Data"));
         cout << "Retrieved binary Length: " << blobz.getBinaryLength() << endl;
     }

     const unsigned char* pbin = blobz.getBinary();
 }
 
 
 
 This method works fine only when the table size is small. For example, I
 have a database of size over 2GB, in which case I get an error
 SQLITE_NOMEM when I try to do db.getTable(). But I need to know the
 number of rows in the table for reading the BLOB data in each of the rows
 one after another.
 
 
 
 Please let me know how I can go about doing this?
 
 Secondly, the column Row_Num is of type INTEGER_PRIMARY_KEY. So, it
 will take negative value after 2GB as mine is a 32 bit machine.
 
 So, how do I access data beyond 2 GB?
 
 
 
 Please help me in this regard.
 
 
 
 Regards,
 
 Priya
 
 
 
 




[sqlite] In Mem Query Performance

2007-06-26 Thread RaghavendraK 70574

Hi,

Thanks for the suggestion, but with that, performance went down by a further 25%.
Pls suggest an alternative. Mr DRH says it is possible to reach up to a
million; if there is a way, pls notify.

regards
ragha
---BeginMessage---
Don't use sqlite_get_table. Use sqlite3_prepare(_v2), sqlite3_step and
sqlite3_finalize/sqlite3_reset.

Mike 

-Original Message-
From: RaghavendraK 70574 [mailto:[EMAIL PROTECTED] 
Sent: Monday, 25 June 2007 13:48
To: sqlite-users@sqlite.org
Subject: [sqlite] In Mem Query Performance


Hi,

We are using SQLite in in-memory mode and we have around 200 tables.
Each table has 10 columns of type text.
Each table has around 1 records; each column has around 128 bytes of data.
Select performance is around 2000records/sec. Pls suggest if there is a way
to improve further.

Table structure,Query style is as below,

create table test1 ...200
(
  key0 text,
  key1 text,
  key2 text,
  key3 text,
  key4 text,
  nonKey0 text,
  nonKey1 text,
  nonKey2 text,
  nonKey3 text,
  nonKey4 text,
  primary key(key0,key1,key2,key3,key4,key5)
);

  Query Used..

 SELECT * FROM TABLE136 WHERE
key0='kk
kk490' AND
key1='kk
kk491' AND
key2='kk
kk492' AND
key3='kk
kk493' AND
key4='kk
kk494'

API used sqlite_get_table...

regards
ragha









---End Message---

Re: [sqlite] In Mem Query Performance

2007-06-26 Thread RaghavendraK 70574
Hi Joe,

Your input is valuable. I will increase the page size to 4 KB and check.
We just have a set of tables that is read in at startup; no complex query is
involved.
I find SQLite to be very powerful given the size and complexity it handles.

I use the following APIs to create the in-memory database:

int ret = sqlite3_open(":memory:", &m_sqliteDb1);

and for all temp tables I also use:

PRAGMA temp_store = MEMORY;

Please let me know if this is correct.


regards
ragha
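
For reference, a minimal sketch of those two calls in the plain C API (error
handling trimmed; the handle name follows the post):

#include <sqlite3.h>

/* Sketch only: open an in-memory database and keep temporary tables in RAM. */
static int open_memory_db(sqlite3 **pm_sqliteDb1)
{
    int ret = sqlite3_open(":memory:", pm_sqliteDb1);
    if (ret != SQLITE_OK)
        return ret;

    /* Temporary tables and indices are kept in memory as well. */
    return sqlite3_exec(*pm_sqliteDb1, "PRAGMA temp_store = MEMORY;",
                        NULL, NULL, NULL);
}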

- Original Message -
From: Joe Wilson [EMAIL PROTECTED]
Date: Tuesday, June 26, 2007 10:58 pm
Subject: Re: [sqlite] In Mem Query Performance

 --- RaghavendraK 70574 [EMAIL PROTECTED] wrote:
  Thanks for the suggestion. But with that performance went down 
 by 25% further.
  Pls suggest an alternative. Mr DRH says it is possible we can 
 reach up to a million,if there is
  a way pls notify.
 ...
  We are using Sqlite in in Memory Mode and we have around 200 
 tables. Each table has 10 columns of type text.
  Each table has around 1 records each column has around 
 128bytes data.
  Select performance is around 2000records/sec. Pls suggest if 
 there is a way
  to improve further.
  
  Table structure,Query style is as below,
  
  create table test1 ...200
  (
key0 text,
key1 text,
key2 text,
key3 text,
key4 text,
nonKey0 text,
nonKey1 text,
nonKey2 text,
nonKey3 text,
nonKey4 text,
    primary key(key0,key1,key2,key3,key4)
  );
  
Query Used..
  
   SELECT * FROM TABLE136 WHERE
     key0='kkkk490' AND
     key1='kkkk491' AND
     key2='kkkk492' AND
     key3='kkkk493' AND
     key4='kkkk494'
 
 If your tables have 10 columns of 128 bytes each, then each table row is
 over 1280 bytes, which exceeds a memory page size (1024), so overflow
 pages are used. You might try a file-based database with a bigger
 page_size, say 8192.
 
 Judging by your example queries, your keys vary only after the 120th byte
 or so. That may play a role in the lack of speed. Try putting the
 differentiating characters first in the key strings.
 
 Are your slow queries really only looking at a single table, or do they
 do multiple-table joins?
 
 How do you create your memory database?
 Maybe you're not making a memory database as you think you are.
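
A minimal sketch of the page_size experiment suggested above, assuming a
brand-new database file; the filename is a placeholder, and PRAGMA page_size
only takes effect before the database is populated.

#include <sqlite3.h>

/* Sketch only: open a fresh file-based database and set a larger page size
 * before any table is created. */
static int open_with_big_pages(sqlite3 **pdb)
{
    int rc = sqlite3_open("bigpage_test.db", pdb);            /* new, empty file */
    if (rc != SQLITE_OK) return rc;

    rc = sqlite3_exec(*pdb, "PRAGMA page_size = 8192;", NULL, NULL, NULL);
    if (rc != SQLITE_OK) return rc;

    /* The first CREATE TABLE fixes the page size for this file. */
    return sqlite3_exec(*pdb,
        "CREATE TABLE test1 ("
        " key0 TEXT, key1 TEXT, key2 TEXT, key3 TEXT, key4 TEXT,"
        " nonKey0 TEXT, nonKey1 TEXT, nonKey2 TEXT, nonKey3 TEXT, nonKey4 TEXT,"
        " PRIMARY KEY (key0, key1, key2, key3, key4));",
        NULL, NULL, NULL);
}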
 
 
 
   
 
 
 
 




Re: [sqlite] In Mem Query Performance

2007-06-26 Thread RaghavendraK 70574
Hello Joe,

It is a server platform: Linux SuSE 9 Enterprise Edition, a 4-CPU machine
with 8 GB RAM.
We want to load all the tables into an in-memory SQLite database and achieve
read performance of up to 5records/sec for the table data I had mentioned
earlier.

 so it would have to be file based.
I did not quite follow: does it mean that even with a larger page size there
would be no effect on performance?


regards
ragha


- Original Message -
From: Joe Wilson [EMAIL PROTECTED]
Date: Tuesday, June 26, 2007 11:57 pm
Subject: Re: [sqlite] In Mem Query Performance

 --- RaghavendraK 70574 [EMAIL PROTECTED] wrote:
  Your input is valuable. I will increase the pg size to 4KB and 
 check.
 :memory: databases only use 1024 byte pages if I remember 
 correctly, 
 so it would have to be file based.
 
  We just have a set of tables which is to be read on startup.No 
 complex Query is involved.
  I find Sqlite to be most powerful given the size and complexity 
 it handles.
  
  I use the following APIs to create the in-memory database:
  
  int ret = sqlite3_open(":memory:", &m_sqliteDb1);
  also use for all temp tables,
  PRAGMA temp_store = MEMORY;
 
 That looks fine. It ought to be fast. Although fast is a 
 relative term.
 
 Is this an embedded platform or a PC that you're using?
 
 
   
 
 
 
 




Re: [sqlite] In Mem Query Performance

2007-06-26 Thread RaghavendraK 70574
OK.
I will notify you once I complete the test.

regards
ragha


- Original Message -
From: Joe Wilson [EMAIL PROTECTED]
Date: Wednesday, June 27, 2007 0:48 am
Subject: Re: [sqlite] In Mem Query Performance

 :memory: databases only have a page size of 1024.
 
 Try various page_size settings for a file-based database and see what
 happens.
 
 I have no other suggestions.
 
 --- RaghavendraK 70574 [EMAIL PROTECTED] wrote:
  It is a server platform,Linux SuSE9 enterpraise edition.
  4 CPU machine,8GB ram.
  We want load all the tables in to mem db of Sqlite.Achieve
  read performance of upto 5records/sec for the table data i 
 had mentioned earlier.
  
  so it would have to be file based.
  I could not get it. Does it mean even increasing the page size 
 there would be
  no effect on the performance?
 
 
 
 
 
 
 
 




[sqlite] In Mem Query Performance

2007-06-25 Thread RaghavendraK 70574
Hi,

When the DB is opened in in-memory mode, query performance does not improve.
For a table which has 10 columns of type text, each column holding around
128 bytes of data, and a total of 1 records, performance is around 2400
records/sec. Are there any ways to improve it?

Thanks in advance.

regards 
ragha





[sqlite] Fwd: In Mem Query Performance

2007-06-25 Thread RaghavendraK 70574
Hi,

We are using SQLite in in-memory mode and we have around 200 tables.
Each table has 10 columns of type text.
Each table has around 1 records, and each column holds around 128 bytes of data.
Select performance is around 2000 records/sec. Please suggest if there is a way
to improve it further.

Table structure,Query style is as below,

create table test1 ...200
(
  key0 text,
  key1 text,
  key2 text,
  key3 text,
  key4 text,
  nonKey0 text,
  nonKey1 text,
  nonKey2 text,
  nonKey3 text,
  nonKey4 text,
  primary key(key0,key1,key2,key3,key4)
);

  Query Used..

 SELECT * FROM TABLE136 WHERE
   key0='490' AND
   key1='491' AND
   key2='492' AND
   key3='493' AND
   key4='494'

API used sqlite_get_table...

regards
ragha

[sqlite] In Mem Query Performance

2007-06-25 Thread RaghavendraK 70574

Hi,

We are using SQLite in in-memory mode and we have around 200 tables.
Each table has 10 columns of type text.
Each table has around 1 records, and each column holds around 128 bytes of data.
Select performance is around 2000 records/sec. Please suggest if there is a way
to improve it further.

Table structure,Query style is as below,

create table test1 ...200
(
  key0 text,
  key1 text,
  key2 text,
  key3 text,
  key4 text,
  nonKey0 text,
  nonKey1 text,
  nonKey2 text,
  nonKey3 text,
  nonKey4 text,
  primary key(key0,key1,key2,key3,key4)
);

  Query Used..

 SELECT * FROM TABLE136 WHERE
   key0='490' AND
   key1='491' AND
   key2='492' AND
   key3='493' AND
   key4='494'

API used sqlite_get_table...

regards
ragha

