Re: [sqlite] Can an online backup happen during a nested transaction?

2010-05-11 Thread Dan Kennedy

On May 12, 2010, at 2:53 AM, Shaun Seckman (Firaxis) wrote:

> Hello,
>
>I'm attempting to save a backup of my in-memory  
> database
> using the online backup routines.  I noticed that I cannot seem to  
> make
> backups of the database when there is a pending save point.  The error
> code is SQLITE_BUSY.  Is this the expected behavior?  Are there any  
> ways
> to make this work without committing the transaction?

sqlite3_backup_step() always returns SQLITE_BUSY if there is a
write transaction open on the source database handle. No way
around this at present.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
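The constraint Dan describes can be seen through Python's stdlib `sqlite3` module, which wraps the online backup API (`sqlite3_backup_init()`/`_step()`/`_finish()`) as `Connection.backup()`. This is a minimal sketch, not the poster's code: the backup only proceeds once no write transaction (including a savepoint) is open on the source handle.

```python
import sqlite3

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")

with src:  # commit any pending write transaction first; while one is open
    # on the source handle, sqlite3_backup_step() keeps returning SQLITE_BUSY
    src.execute("CREATE TABLE t (x)")
    src.execute("INSERT INTO t VALUES (1)")

src.backup(dst)  # wraps sqlite3_backup_init() / _step() / _finish()
copied = dst.execute("SELECT x FROM t").fetchall()
```

So the practical answer to the original question is: release or commit the savepoint before starting the backup.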


Re: [sqlite] Insert large data question ??

2010-05-11 Thread Marcus Grimm
> Thanks Kishor, I will note it!
>
> I already used a transaction to do this job.
> I tried removing all of the indexes; this time the job took about 31600 seconds.
>
> ps. I had already used "PRAGMA synchronous=OFF" before my transaction.
>
> Can someone help me do this job faster?

Have you tried increasing the page cache?

eg. --> PRAGMA cache_size = 40;

Occasionally it helps when dealing with large tables.

Marcus

>
> Thanks, everybody.
>
>
>
> 2010/5/11 P Kishor :
>> On Tue, May 11, 2010 at 12:47 AM, 風箏  wrote:
>>> Dear
>>>
>>> I have about 9 million insert statements that need to be inserted into a
>>> table; each row is unique.
>>>
>>> this is a sample:
>>> insert into mydata
>>> VALUES(38824801,56888,'AABBCC',4.999,157,'2009/9/10
>>> 19:55:50');
>>>
>>> this is my schema:
>>> table|mydata|mydata|2|CREATE TABLE mydata
>>> (
>>>itno VARCHAR(20),
>>>lcno VARCHAR(20),
>>>srno VARCHAR(10),
>>>ran  VARCHAR(20),
>>>pp INTEGER,
>>>cdate VARCHAR(20),
>>>PRIMARY KEY (itno DESC, lcno ASC)
>>> )
>> ..
>>> but I have a question about performance: every time, the job takes
>>> about 63000 seconds.
>>
>> Use transactions. But, do you also realize that most of your columns
>> are defined as VARCHAR, but you are inserting stuff that doesn't look
>> like TEXT. You will be/should be surprised by the results. From the
>> example above,
>>
>>>itno VARCHAR(20),
>>>lcno VARCHAR(20),
>>>srno VARCHAR(10),
>>>ran  VARCHAR(20),
>>>pp INTEGER,
>>>cdate VARCHAR(20),
>>
>> itno VARCHAR(20): 38824801 <-- if it had leading zeroes, they would
>> vanish
>> lcno VARCHAR(20): 56888 <-- will become the text '56888'
>> srno VARCHAR(10): 'AABBCC' <-- inserted correctly
>> ran  VARCHAR(20): 4.999 <-- are you expecting this to remain a REAL?
>> pp INTEGER: 157 <-- inserted correctly
>> cdate VARCHAR: '2009/9/10 19:55:50' <-- inserted correctly
>>
>> enclose your VARCHARs in single quotes.
>>
>> --
>> Puneet Kishor http://www.punkish.org
>> Carbon Model http://carbonmodel.org
>> Charter Member, Open Source Geospatial Foundation http://www.osgeo.org
>> Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor
>> Nelson Institute, UW-Madison http://www.nelson.wisc.edu
>> ---
>> Assertions are politics; backing up assertions with evidence is science
>> ===
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
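The combined advice in this thread (one big transaction instead of per-row autocommits, `synchronous=OFF`, a larger page cache) can be sketched with Python's stdlib `sqlite3` against the poster's schema. The cache_size value and the 10,000-row sample are illustrative assumptions, not from the thread.

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manage txns by hand
conn.execute("PRAGMA synchronous=OFF")     # as the poster already does
conn.execute("PRAGMA cache_size=400000")   # illustrative larger page cache
conn.execute("""CREATE TABLE mydata (
    itno VARCHAR(20), lcno VARCHAR(20), srno VARCHAR(10),
    ran  VARCHAR(20), pp INTEGER, cdate VARCHAR(20),
    PRIMARY KEY (itno DESC, lcno ASC))""")

rows = [(str(i), str(i), 'AABBCC', '4.999', 157, '2009/9/10 19:55:50')
        for i in range(10000)]

conn.execute("BEGIN")  # one big transaction, not millions of autocommits
conn.executemany("INSERT INTO mydata VALUES (?,?,?,?,?,?)", rows)
conn.execute("COMMIT")
count = conn.execute("SELECT count(*) FROM mydata").fetchone()[0]
```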


Re: [sqlite] Insert large data question ??

2010-05-11 Thread 風箏
Thanks Kishor, I will note it!

I already used a transaction to do this job.
I tried removing all of the indexes; this time the job took about 31600 seconds.

ps. I had already used "PRAGMA synchronous=OFF" before my transaction.

Can someone help me do this job faster?

Thanks, everybody.



2010/5/11 P Kishor :
> On Tue, May 11, 2010 at 12:47 AM, 風箏  wrote:
>> Dear
>>
>> I have about 9 million insert statements that need to be inserted into a
>> table; each row is unique.
>>
>> this is a sample:
>> insert into mydata
>> VALUES(38824801,56888,'AABBCC',4.999,157,'2009/9/10
>> 19:55:50');
>>
>> this is my schema:
>> table|mydata|mydata|2|CREATE TABLE mydata
>> (
>>itno VARCHAR(20),
>>lcno VARCHAR(20),
>>srno VARCHAR(10),
>>ran  VARCHAR(20),
>>pp INTEGER,
>>cdate VARCHAR(20),
>>PRIMARY KEY (itno DESC, lcno ASC)
>> )
> ..
>> but I have a question about performance: every time, the job takes about
>> 63000 seconds.
>
> Use transactions. But, do you also realize that most of your columns
> are defined as VARCHAR, but you are inserting stuff that doesn't look
> like TEXT. You will be/should be surprised by the results. From the
> example above,
>
>>itno VARCHAR(20),
>>lcno VARCHAR(20),
>>srno VARCHAR(10),
>>ran  VARCHAR(20),
>>pp INTEGER,
>>cdate VARCHAR(20),
>
> itno VARCHAR(20): 38824801 <-- if it had leading zeroes, they would vanish
> lcno VARCHAR(20): 56888 <-- will become the text '56888'
> srno VARCHAR(10): 'AABBCC' <-- inserted correctly
> ran  VARCHAR(20): 4.999 <-- are you expecting this to remain a REAL?
> pp INTEGER: 157 <-- inserted correctly
> cdate VARCHAR: '2009/9/10 19:55:50' <-- inserted correctly
>
> enclose your VARCHARs in single quotes.
>
> --
> Puneet Kishor http://www.punkish.org
> Carbon Model http://carbonmodel.org
> Charter Member, Open Source Geospatial Foundation http://www.osgeo.org
> Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor
> Nelson Institute, UW-Madison http://www.nelson.wisc.edu
> ---
> Assertions are politics; backing up assertions with evidence is science
> ===
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
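Puneet's warning about TEXT affinity can be demonstrated directly with `typeof()`. A minimal sketch using Python's stdlib `sqlite3` and two of the thread's column names: unquoted literals inserted into VARCHAR columns are converted to text, and leading zeroes are lost because the literal is first parsed as a number.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (itno VARCHAR(20), ran VARCHAR(20))")

# Unquoted literals are parsed as numbers, then converted by TEXT affinity:
conn.execute("INSERT INTO t VALUES (00123, 4.999)")
itno, ran = conn.execute("SELECT itno, ran FROM t").fetchone()
types = conn.execute("SELECT typeof(itno), typeof(ran) FROM t").fetchone()
# itno lost its leading zeroes ('123'); ran became the text '4.999'
```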


Re: [sqlite] DB files are different between PC sideandinstrumentside.

2010-05-11 Thread Lei, Rick (GE EntSol, SensInsp)
OK, got it. Thanks!

BR
Rick 

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Simon Slavin
Sent: Wednesday, May 12, 2010 10:01 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] DB files are different between PC
sideandinstrumentside.


On 12 May 2010, at 2:39am, Lei, Rick (GE EntSol, SensInsp) wrote:

> Another question: if SQLite uses at least 1 page for each table and
> index, does it mean that if the contents of a table don't fill 1 page,
> SQLite will not request a new page when operating on this table? Is that
> right?

Any extra data -- updating a field or adding a row -- might request a
new page until SQLite can successfully delete the old data.  And don't
forget that SQLite maintains a journal file.  Almost any change to the
database may temporarily ask for more journal space even if the new data
is smaller than the old data.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multi processes, so many errores of SQLITE_BUSY and SQLITE_MISUSE

2010-05-11 Thread liubin liu

Thank you very much!

It may be because my system's resources are limited. It's an embedded system
with 32MB of RAM and an ARM9 CPU.

My "reiterating 20 times" loop is already using usleep().

After I added the retry loop around the prepare statements, the system
performance is still very bad... And there are still many SQLITE_BUSY errors.

The only improvement is that the SQLITE_MISUSE error has disappeared.

And I tried using "BEGIN EXCLUSIVE TRANSACTION". Things are the same as
without it.







Black, Michael (IS) wrote:
> 
> Your "reiterating 20 times" is not using a usleep so you'll blow by this
> most every time it's busy.
>  
> Do this instead in all your proc's
>  
> ret = sqlite3_step(p_stmt);
> if (SQLITE_BUSY == ret)
> {
>     int n = 0;
>     usleep(100000); /* wait 1/10th of a second, then retry */
>     while ((ret = sqlite3_step(p_stmt)) == SQLITE_BUSY) {
>         printf("proc1 ret==BUSY %d\n", ++n);
>         usleep(100000);
>     }
> }
> 
> And you'll also need to handle "database is locked" coming from your
> prepare statements.  I saw that error too.
> You'll need to loop there too.
> 
> The more you drop the usleep time, the more times it will show as busy. 
> 1/10th or 1/100th of a second is about all you want, I would think.
>  
> And get rid of the usleep at the bottom of each proc -- it's pretty
> useless at 100 microseconds.  You don't need to sleep unless you're busy.
>  
> I tested your code with this and got no errors at all -- just a bunch of
> BUSY messages.
>  
>  
> Not sure what your purpose is in sqlrun.c with looping and killing.  Looks
> pretty squirrely to me.  You're not waiting for the forks to finish so
> what is your logic here?
>  
> Michael D. Black
> Senior Scientist
> Northrop Grumman Mission Systems
>  
> 
> 
> 
> From: sqlite-users-boun...@sqlite.org on behalf of liubin liu
> Sent: Tue 5/11/2010 4:57 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] multi processes, so many errores of SQLITE_BUSY and
> SQLITE_MISUSE
> 
> ...
> 
> --
> View this message in context:
> http://old.nabble.com/multi-processes%2C-so-many-errores-of-SQLITE_BUSY-and-SQLITE_MISUSE-tp28522127p28522127.html
> Sent from the SQLite mailing list archive at Nabble.com.
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 

-- 
View this message in context: 
http://old.nabble.com/multi-processes%2C-so-many-errores-of-SQLITE_BUSY-and-SQLITE_MISUSE-tp28522127p28531394.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
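The contention pattern in this thread can be reproduced with two connections to the same file database. This sketch uses Python's stdlib `sqlite3`; note that a busy timeout (the built-in equivalent of the hand-rolled usleep loop above) is the usual fix, set here to zero so the SQLITE_BUSY case fails fast instead of waiting.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "busy.db")
w = sqlite3.connect(path, isolation_level=None)
r = sqlite3.connect(path, isolation_level=None, timeout=0)  # fail fast

w.execute("CREATE TABLE t (x)")
w.execute("BEGIN IMMEDIATE")          # take the write lock up front
w.execute("INSERT INTO t VALUES (1)")

locked = False
try:
    r.execute("INSERT INTO t VALUES (2)")   # second writer: SQLITE_BUSY
except sqlite3.OperationalError:
    locked = True

w.execute("COMMIT")
r.execute("INSERT INTO t VALUES (2)")       # succeeds once the lock is gone
n = r.execute("SELECT count(*) FROM t").fetchone()[0]
```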


Re: [sqlite] DB files are different between PC side andinstrumentside.

2010-05-11 Thread Simon Slavin

On 12 May 2010, at 2:39am, Lei, Rick (GE EntSol, SensInsp) wrote:

> Another question: if SQLite uses at least 1 page for each table and index, 
> does it mean that if the contents of a table don't fill 1 page, SQLite will not 
> request a new page when operating on this table? Is that right?

Any extra data -- updating a field or adding a row -- might request a new page 
until SQLite can successfully delete the old data.  And don't forget that 
SQLite maintains a journal file.  Almost any change to the database may 
temporarily ask for more journal space even if the new data is smaller than the 
old data.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DB files are different between PC side andinstrumentside.

2010-05-11 Thread Lei, Rick (GE EntSol, SensInsp)
Hi, Pavel,

Thanks for your comments.

Another question: if SQLite uses at least 1 page for each table and index, 
does it mean that if the contents of a table don't fill 1 page, SQLite will not 
request a new page when operating on this table? Is that right?


BR
Rick 

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Pavel Ivanov
Sent: Tuesday, May 11, 2010 9:33 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] DB files are different between PC side andinstrumentside.

> But I think at least
> Sqlite3 should have used most of the space in the sector when it requests a 
> new sector.

Due to internal SQLite specifics, it uses at least 1 page for each table and 
index. So even if you don't store anything there, a database with a big schema 
will still consume a significant amount of disk space.
Probably from your point of view it's a waste of space, but it's a necessary 
trade-off to keep SQLite light.

> Maybe I need to return a smaller number when querying the sector size.

Read the link Dan gave you. Issuing 'pragma page_size' before creation of any 
tables in the database is enough.


Pavel

On Tue, May 11, 2010 at 4:29 AM, Lei, Rick (GE EntSol, SensInsp) 
 wrote:
> Hi, Dan,
>
> Yes, I noticed this setting. The sector size on the SDHC card is 4 Kbyte, 
> which is different from that of a hard drive. But I think at least
> Sqlite3 should have used most of the space in the sector when it requests a 
> new sector. It looks like SQLite wastes a lot of space on the 
> SDHC card. Of course, space is not a problem now, because we can 
> easily find an 8G or 16G SDHC card. However, I think we still need to 
> take care of it. Maybe I need to return a smaller number when querying 
> the sector size.
>
> BR
> Rick
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
> Sent: Tuesday, May 11, 2010 4:18 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] DB files are different between PC side and 
> instrumentside.
>
>
> On May 11, 2010, at 2:54 PM, Lei, Rick (GE EntSol, SensInsp) wrote:
>
>>
>> Hi,
>>
>> I ported Sqlite3 to my instrument. The database file is stored on an 
>> SDHC card. Sqlite3 runs ok. However, I found the database file 
>> generated on the instrument side is much bigger than the file on the PC side.
>> I checked the files generated on the instrument with UltraEdit. I found a 
>> lot of space in the file that is filled with zeroes, and I can't 
>> find
>
>> these parts in the files generated on the PC side. I don't know why the 
>> files are different between PC side and instrument side. I think they 
>> should be the same. Is there any advice?
>
> Different page sizes.
>
>   http://www.sqlite.org/pragma.html#pragma_page_size
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
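Both of Pavel's points (set `page_size` before any table exists; each table occupies at least one page) can be checked in a few lines. A sketch with Python's stdlib `sqlite3`; the 512-byte page size is an arbitrary illustrative value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# page_size only takes effect if issued before the first table is created
conn.execute("PRAGMA page_size = 512")
conn.execute("CREATE TABLE t (x)")
conn.execute("CREATE TABLE u (y)")

page_size = conn.execute("PRAGMA page_size").fetchone()[0]
# 1 schema page plus at least 1 root page per table:
pages = conn.execute("PRAGMA page_count").fetchone()[0]
```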


Re: [sqlite] Computed columns in VIEWs return NULL but should be able to be typed! Any ideas?

2010-05-11 Thread BareFeetWare
On 11/05/2010, at 6:12 PM, Ben Harper wrote:

> To determine the type of columns in a view I use
> SELECT typeof(column) FROM viewname LIMIT something;
> 
> Unfortunately if most of the column data is NULL then you can end up having 
> to scan the entire table.

Yes, I also do that as a last resort:

1. Check pragma table_info(MyView) to see if SQLite knows the column type. It 
seems to provide it only if the view's column directly references a table 
column.

2. If 1 returns null (unknown), then parse the schema of the view to look for 
cast(expression AS type) or other functions that return a known type.

3. If 2 fails to provide type through a known result type of a function, then 
go through the actual result data to see what types are returned. If all rows 
return the same type (ignoring null results) then use that type. Otherwise type 
is unknown (varies or all null). Of course this doesn't work if there are 
currently no rows in the view, which makes it impossible to determine the input 
type for the user if they choose to insert a new row.

By the way, your select statement will scan the whole view. Limit just 
truncates the result, but SQLite is first scanning the whole thing. You'd be 
better served by something like:

select typeof(ColumnName) as "Type" from ViewName where ColumnName is not null 
group by "Type";

If the result gives one row, then you have your type. If zero or multiple rows, 
then type could not be determined.

As already mentioned, though, this won't help if the view currently has no 
rows, or if the value of MyColumn is null for the present rows, and it is of 
limited value if there are just a few existing rows with data.

Tom
BareFeetWare

 --
Comparison of SQLite GUI tools:
http://www.tandb.com.au/sqlite/compare/?ml

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
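Tom's fallback (step 3, inferring a view column's type from its data) can be sketched with Python's stdlib `sqlite3`. The table, view, and column names here are illustrative, not from the thread; one distinct non-null `typeof()` result means the type can be inferred.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER)")
conn.execute("CREATE VIEW v AS SELECT a * 2 AS doubled FROM t")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (None,)])

# Group the non-null typeof() results; a single row means a consistent type.
types = conn.execute("""SELECT typeof(doubled) AS ty FROM v
                        WHERE doubled IS NOT NULL GROUP BY ty""").fetchall()
```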


Re: [sqlite] feature request: specify database name in table-name part of CREATE INDEX

2010-05-11 Thread David Nicol
Sorry, big duh. Of course a SQLite index needs to be in the same file
as the indexed table; and that is specified. Having an index for a
table in one database stored in a different database would not be
consistent with SQLite, as both would need to be opened together.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] feature request: specify database name in table-name part of CREATE INDEX

2010-05-11 Thread David Nicol
http://www.sqlite.org/lang_createindex.html shows the optional
[DATABASE] DOT for where the index lives, but not for where the
indexed table lives. Just in case anyone is keeping track of weird
edge cases that are actually being used somewhere and why, I'm
modifying a DDL dump from a different database and creating an
equivalent structure, my input DDL has two-part tablenames in the
CREATE INDEX statements, and working around this syntax limitation
isn't going to do anything but slow me down a little.

Cheers!



-- 
"To avoid both financial loss and jail time, we provide security
mechanisms, including access control." -- Mike Burrows, 2006
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
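David's follow-up (an index must live in the same database file as its table) is easy to confirm. A sketch with Python's stdlib `sqlite3`: the database-qualified `CREATE INDEX` resolves the table inside that same database, and naming a database that does not contain the table fails.

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
conn = sqlite3.connect(os.path.join(d, "main.db"))
conn.execute("ATTACH ? AS aux", (os.path.join(d, "aux.db"),))
conn.execute("CREATE TABLE aux.t (x)")

# The index is created in the same database as the table it indexes:
conn.execute("CREATE INDEX aux.i ON t (x)")
in_aux = conn.execute(
    "SELECT name FROM aux.sqlite_master WHERE type='index'").fetchall()

# Qualifying the index with a database that lacks the table is an error:
cross_db_fails = False
try:
    conn.execute("CREATE INDEX main.i2 ON t (x)")  # no 't' in main
except sqlite3.OperationalError:
    cross_db_fails = True
```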


[sqlite] SQLITE_CORRUPT: database disk image is malformed

2010-05-11 Thread kundan bharti
Hello Sir/Madam,

I am getting the error "SQLITE_CORRUPT: database disk image is malformed" while 
opening the attached database in SQLite Expert. Can you please tell me the 
reason, as well as a solution to rectify this issue?

Thanks,
Kundan Bharti


  ___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
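A first diagnostic step for this error is `PRAGMA integrity_check`, sketched here with Python's stdlib `sqlite3` on a healthy database; a damaged file would list problems instead of 'ok'. (Recovery is commonly attempted by dumping the readable portion from the sqlite3 shell into a fresh database.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
# Returns the single row ('ok',) when the database is intact;
# otherwise it returns a list of detected problems.
result = conn.execute("PRAGMA integrity_check").fetchone()[0]
```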


Re: [sqlite] SQLITE_CORRUPT error

2010-05-11 Thread daksh jasra
Thank you, Roger.
TH3 is proprietary and requires a license. Do you know how I can obtain a 
license to access and use TH3?

Thanks,
Daksh




From: Roger Binns 
To: General Discussion of SQLite Database 
Sent: Tue, May 11, 2010 11:11:29 AM
Subject: Re: [sqlite] SQLITE_CORRUPT error

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 05/10/2010 04:07 PM, daksh jasra wrote:
> I have ported SQLITE over VRTX based embedded platform,

I'd suggest licensing the TH3 test suite in order to verify your port.  You
could be doing something like getting an operation wrong once the database
reaches a certain size, is an exact multiple of some interesting number, etc.

  http://www.sqlite.org/th3.html

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkvpgbAACgkQmOOfHg372QTPeQCgo2W6ytS8FH360YoBFkQg60YQ
Q+UAoNS5JUdy7/Vtr85mlXNfrMEk2Wyt
=32se
-END PGP SIGNATURE-
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Can an online backup happen during a nested transaction?

2010-05-11 Thread Shaun Seckman (Firaxis)
Hello,

I'm attempting to save a backup of my in-memory database
using the online backup routines.  I noticed that I cannot seem to make
backups of the database when there is a pending save point.  The error
code is SQLITE_BUSY.  Is this the expected behavior?  Are there any ways
to make this work without committing the transaction?

 

-Shaun

 

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite Database in Shared Memory

2010-05-11 Thread Manuj Bhatia
I need to exchange messages across processes, not threads. And one of the
reasons that I am inclined towards SQLite is that I do not want a separate
Queue-manager process.
I'll just write wrapper APIs around SQLite and embed them into each
application, so I have a manager-less implementation.

I found a Queue implementation using SQLite at XMLBlaster:
http://www.xmlblaster.org/xmlBlaster/doc/requirements/client.c.queue.html

I'll see how they have implemented it and might adapt it to my needs.

Michael,
Thanks for the link! I didn't know about this feature of AIX. I'll see if I
can get my Unix Admins to create a Ram Disk for me to play around with.

Thanks,
Manuj


On Tue, May 11, 2010 at 2:29 PM, Alexey Pechnikov wrote:

> Hm... You can use a dedicated thread in your application for an SQLite
> in-memory database. Why do you want to build an external application for
> this? And I think SQL is not needed for your task - you could use a
> simple hash table or any other simple structure. If you have any
> additional needs or ideas, speak up!
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
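The manager-less queue Manuj describes can be sketched in a few lines on top of Python's stdlib `sqlite3`. This is a hypothetical illustration, not the XMLBlaster implementation: the select-then-delete is wrapped in one `BEGIN IMMEDIATE` transaction so two consumers cannot dequeue the same message.

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute(
    "CREATE TABLE queue (id INTEGER PRIMARY KEY AUTOINCREMENT, body TEXT)")

def push(body):
    conn.execute("INSERT INTO queue (body) VALUES (?)", (body,))

def pop():
    # One write transaction around select+delete makes the dequeue atomic.
    conn.execute("BEGIN IMMEDIATE")
    row = conn.execute(
        "SELECT id, body FROM queue ORDER BY id LIMIT 1").fetchone()
    if row:
        conn.execute("DELETE FROM queue WHERE id = ?", (row[0],))
    conn.execute("COMMIT")
    return row[1] if row else None

push("a")
push("b")
first, second, empty = pop(), pop(), pop()  # FIFO order; None when drained
```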


Re: [sqlite] SQLite Database in Shared Memory

2010-05-11 Thread Alexey Pechnikov
Hm... You can use a dedicated thread in your application for an SQLite
in-memory database. Why do you want to build an external application for
this? And I think SQL is not needed for your task - you could use a
simple hash table or any other simple structure. If you have any
additional needs or ideas, speak up!

-- 
Best regards, Alexey Pechnikov.
http://pechnikov.tel/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Side effects of commit

2010-05-11 Thread Jay A. Kreibich
On Tue, May 11, 2010 at 12:12:09PM -0700, Jim Terman scratched on the wall:
> We have sqlite databases in a memory shared cache environment where 
> individual tables may be locked out by other processes.

  If other processes are doing the locking, the whole database is
  locked.

> This means that we have to worry about SQLITE_BUSY errors and make
> sure our code can handle this.

  You have to do this regardless, but yes.

> There is some internal debate about whether we have to worry about table 
> locks only at the beginning of a transaction, or whether a commit itself 
> can generate them.

  Either.

  If you open the transaction with just BEGIN, the commit itself can
  generate an SQLITE_BUSY and fail.

  If you open with a BEGIN EXCLUSIVE, the commit should not generate an
  SQLITE_BUSY error, but the BEGIN can.

  http://sqlite.org/lang_transaction.html

> The question is, if we have a transaction with several sql statements 
> that execute without any locks, is there any way that the commit itself 
> would encounter a lock?

  You can't execute a statement without *any* locks.

> We know that the commit involves writing of the 
> results from memory to disk, but would any potential locks actually be 
> deferred until this point?

  Yes.  Some locks.  Some times.  Depending on the transaction, the
  commands, and the size of the update.


  But the core of your question is that you need to make all your
  transactions exclusive =OR= you need to deal with COMMIT returning
  SQLITE_BUSY *and* possibly rolling back your transaction.  Actually, it
  is recommended you always roll back any transaction after receiving an
  SQLITE_BUSY to avoid some types of deadlock situations.

  http://sqlite.org/lockingv3.html
  http://sqlite.org/c3ref/busy_handler.html

-j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
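Jay's two recommendations (take the locks up front with `BEGIN EXCLUSIVE`, and roll back whenever SQLITE_BUSY is hit) combine into a retry wrapper. A sketch with Python's stdlib `sqlite3`; the function name and retry counts are assumptions for illustration.

```python
import sqlite3
import time

def run_exclusive(conn, statements, retries=5):
    """BEGIN EXCLUSIVE takes all locks up front, so the COMMIT itself
    should not return SQLITE_BUSY; the BEGIN is what gets retried."""
    for _ in range(retries):
        try:
            conn.execute("BEGIN EXCLUSIVE")
        except sqlite3.OperationalError:
            time.sleep(0.01)          # busy: wait, then retry the BEGIN
            continue
        try:
            for sql in statements:
                conn.execute(sql)
            conn.execute("COMMIT")
            return True
        except sqlite3.OperationalError:
            conn.execute("ROLLBACK")  # roll back after SQLITE_BUSY
            time.sleep(0.01)
    return False

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (x)")
ok = run_exclusive(conn, ["INSERT INTO t VALUES (1)"])
n = conn.execute("SELECT count(*) FROM t").fetchone()[0]
```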


[sqlite] Side effects of commit

2010-05-11 Thread Jim Terman
We have sqlite databases in a memory shared cache environment where 
individual tables may be locked out by other processes. This means that 
we have to worry about SQLITE_BUSY errors and make sure our code can 
handle this.

There is some internal debate about whether we have to worry about table 
locks only at the beginning of a transaction, or whether a commit itself 
can generate them.

The question is, if we have a transaction with several sql statements 
that execute without any locks, is there any way that the commit itself 
would encounter a lock? We know that the commit involves writing of the 
results from memory to disk, but would any potential locks actually be 
deferred until this point?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] create virtual table if not exists table_id???

2010-05-11 Thread P Kishor
On Tue, May 11, 2010 at 1:50 PM, Matt Young  wrote:
> sqlite> create virtual table if not exists words using fts3  (f1 );
> Error: near "not": syntax error
> sqlite> create  table if not exists U (w1 );
> sqlite>
>
> Different syntax?

Yes.

> virtual tables don't persist?

How did you reach that conclusion? Did you create a virtual table
successfully, quit sqlite, then reopen the db and found the virtual
table missing?



-- 
Puneet Kishor http://www.punkish.org
Carbon Model http://carbonmodel.org
Charter Member, Open Source Geospatial Foundation http://www.osgeo.org
Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor
Nelson Institute, UW-Madison http://www.nelson.wisc.edu
---
Assertions are politics; backing up assertions with evidence is science
===
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] create virtual table if not exists table_id???

2010-05-11 Thread Black, Michael (IS)
Syntax says they are different...virtual tables don't have the same flexibility 
apparently...I suppose you're looking for "why" though?
http://www.sqlite.org/lang_createvtab.html
 
 
Michael D. Black
Senior Scientist
Northrop Grumman Mission Systems
 



From: sqlite-users-boun...@sqlite.org on behalf of Matt Young
Sent: Tue 5/11/2010 1:50 PM
To: General Discussion of SQLite Database
Subject: [sqlite] create virtual table if not exists table_id???



sqlite> create virtual table if not exists words using fts3  (f1 );
Error: near "not": syntax error
sqlite> create  table if not exists U (w1 );
sqlite>

Different syntax?  virtual tables don't persist?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] PRAGMA auto_vacuum

2010-05-11 Thread Pavel Ivanov
I'd recommend NONE (or 0).

If you have fairly balanced insertions and deletions then there will
be no excessive disk space consumption; vacuuming won't help much. But
with auto-vacuuming turned on you won't get the same performance,
because additional code will be executed after each transaction.


Pavel

On Tue, May 11, 2010 at 2:50 PM, Joanne Pham  wrote:
> Hi All,
> I have a database which has a lot of insertions and deletions. Do you have 
> any recommendation about what value needs to be set for auto_vacuum
> in this case to improve the performance of deletions as well as insertions of 
> new rows to the database? (0 | NONE | 1 | FULL | 2 | INCREMENTAL)
> Thanks,
> JP
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
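One detail worth knowing alongside Pavel's advice: `auto_vacuum` can only be changed while the database is still empty (or later via `VACUUM`). A sketch with Python's stdlib `sqlite3`, using FULL merely to make the change visible; Pavel's recommendation for this workload remains NONE.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "av.db")
conn = sqlite3.connect(path)

# Must be issued before the first table exists (or followed by VACUUM):
conn.execute("PRAGMA auto_vacuum = FULL")
conn.execute("CREATE TABLE t (x)")

# 0 = NONE, 1 = FULL, 2 = INCREMENTAL
mode = conn.execute("PRAGMA auto_vacuum").fetchone()[0]
```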


[sqlite] PRAGMA auto_vacuum

2010-05-11 Thread Joanne Pham
Hi All,
I have the database which has a lot of insertion and deletion. Do you have any 
recomendation about what value that need to be set for auto_vacuum
in this case to improve the performance for deletion as well as insertion the 
new row to the database. (0 | NONE | 1 | FULL | 2 | INCREMENTAL;)
Thanks,
JP


  
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] create virtual table if not exists table_id???

2010-05-11 Thread Matt Young
sqlite> create virtual table if not exists words using fts3  (f1 );
Error: near "not": syntax error
sqlite> create  table if not exists U (w1 );
sqlite>

Different syntax?  virtual tables don't persist?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite connection?

2010-05-11 Thread Jay A. Kreibich
On Tue, May 11, 2010 at 01:45:03PM -0400, john cummings scratched on the wall:
> hi all,
> 
> i'm new to this forum and sqlite.
> 
> is it possible to have an executable (i.e. .exe) with connections to 2
> sqlite databases?
> 
> i've read doc and it doesn't speak to this one way or the other.

  Yes.  Just call sqlite3_open_xxx() more than once.


  You can also have one database connection manage more than one
  database file.  Among other things, this allows you to formulate
  queries that gather data from both databases.  See the SQLite SQL
  command "ATTACH".

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
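Both options Jay describes can be sketched with Python's stdlib `sqlite3`; the file names, tables, and join are illustrative assumptions.

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
a_path, b_path = os.path.join(d, "a.db"), os.path.join(d, "b.db")

# Option 1: two independent connections, one per database file.
a = sqlite3.connect(a_path)
b = sqlite3.connect(b_path)
a.execute("CREATE TABLE users (id INTEGER, name TEXT)")
a.execute("INSERT INTO users VALUES (1, 'ada')")
a.commit()
b.execute("CREATE TABLE scores (id INTEGER, score INTEGER)")
b.execute("INSERT INTO scores VALUES (1, 97)")
b.commit()

# Option 2: one connection plus ATTACH, enabling cross-database joins.
a.execute("ATTACH ? AS other", (b_path,))
row = a.execute("""SELECT name, score FROM users
                   JOIN other.scores USING (id)""").fetchone()
```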


Re: [sqlite] sqlite connection?

2010-05-11 Thread Simon Slavin

On 11 May 2010, at 6:45pm, john cummings wrote:

> is it possible to have an executable (i.e. .exe) with connections to 2
> sqlite databases?
> 
> i've read doc and it doesn't speak to this one way or the other.

Sure.  Use sqlite_open() two times, and keep the returned values in two 
separate variables.  Make sure you use the right one for each subsequent 
operation.  When you're finished with each database, call sqlite_close() on 
that one, not the other one.

http://www.sqlite.org/c_interface.html

Note to the SQLite docos: section 3.10 of that page is entirely in bold because 
of a tag problem.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite connection?

2010-05-11 Thread Adam DeVita
Yes.

One can also
attach 'somedatabase path' as anothername;
and you can run a query accessing both at the same time.

regards,

Adam

On Tue, May 11, 2010 at 1:45 PM, john cummings wrote:

> hi all,
>
> i'm new to this forum and sqlite.
>
> is it possible to have an executable (i.e. .exe) with connections to 2
> sqlite databases?
>
> i've read doc and it doesn't speak to this one way or the other.
>
> thanks,
>
> john
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
VerifEye Technologies Inc.
905-948-0015x245
7100 Warden Ave, Unit 3
Markham ON, L3R 8B5
Canada
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite connection?

2010-05-11 Thread P Kishor
On Tue, May 11, 2010 at 12:45 PM, john cummings  wrote:
> hi all,
>
> i'm new to this forum and sqlite.
>
> is it possible to have an executable (i.e. .exe) with connections to 2
> sqlite databases?

I've never made an executable, but given that I can do so with Perl, I
don't see why not. A connection is just a handle. Just give different
connections different names. Heck, you could make multiple connections
to multiple different databases... one to SQLite, one to Pg, another
one to MySQL, another one to SQLite. Knock yourself out. Go crazy.


>
> i've read doc and it doesn't speak to this one way or the other.
>
> thanks,
>






-- 
Puneet Kishor http://www.punkish.org
Carbon Model http://carbonmodel.org
Charter Member, Open Source Geospatial Foundation http://www.osgeo.org
Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor
Nelson Institute, UW-Madison http://www.nelson.wisc.edu
---
Assertions are politics; backing up assertions with evidence is science
===


[sqlite] sqlite connection?

2010-05-11 Thread john cummings
hi all,

i'm new to this forum and sqlite.

is it possible to have an executable (i.e. .exe) with connections to 2
sqlite databases?

i've read doc and it doesn't speak to this one way or the other.

thanks,

john


Re: [sqlite] join performance query

2010-05-11 Thread Tim Romano
Let's try that again: expose the [number] column to the outer selection (the
** markers are for emphasis):


( select  id_song, **number** from


 (
 select  id_song, **number**
 from PLAYLIST_SONG
 where id_playlist=2
{and|or }  number > 258
) as MYPLAYLISTSONGS

Regards
Tim Romano

On Tue, May 11, 2010 at 12:46 PM, Tim Romano wrote:

> And you would move your title-condition to the outer query:
>
> .
> .
> .
>  ) as SONGIDLIST
>  on SONG.id_song = SONGIDLIST.id_song
>
> where
> your title-condition and|or your  title-number condition
>
>
> Regards
> Tim Romano
>
>
>


Re: [sqlite] join performance query

2010-05-11 Thread Tim Romano
And you would move your title-condition to the outer query:

.
.
.
 ) as SONGIDLIST
 on SONG.id_song = SONGIDLIST.id_song

where
your title-condition and|or your  title-number condition


Regards
Tim Romano


Re: [sqlite] join performance query

2010-05-11 Thread Tim Romano
Arrrgh, Google Chrome ate the top half of my reply.

You must also expose the number column in the inner query against
PLAYLIST_SONG; include your number-condition there and also specify the
number column in the select-list:

( select  id_song, number from

 (
 select  id_song from PLAYLIST_SONG
 where id_playlist=2
{AND|OR }  number > 258
) as MYPLAYLISTSONGS

Regards
Tim Romano



On Tue, May 11, 2010 at 12:39 PM, Tim Romano wrote:

> You could remove the title condition from the inner SONGS select, limiting
> your conditions to artist and genre; an index on column [artist] would make
> this subquery run quickly:
>
>
>  (
>  select id_song from
>  SONG
>  where genre_id = 0 AND artist = 'Las ketchup'
> //  AND title >= 'Asereje(karaoke version)'// --> moved to outer select
> > ) as MYSONGS
>
>
>
> The goal is to produce small inner subsets using indexes, and then to join
> these with each other, and to let the inner subsets expose the necessary
> columns to the outer query.
>
> Regards
> Tim Romano
>
>
>
> On Tue, May 11, 2010 at 11:13 AM, Andrea Galeazzi wrote:
>
>> Sorry but in your solution, how can I solve the condition
>> AND title >= 'Asereje(karaoke version)' AND (title > 'Asereje(karaoke
>> >> version)' OR number > 258)
>> ?
>> title is on song and number is song_number on Playlist_Song AS PS.
>> Furthermore I also need title and number in place of your  select * from
>> SONG
>> Could you write it again please?
>> Thanks
>>
>> Citando Tim Romano :
>>
>> > 1. Try discrete single-column indexes rather than multi-column composite
>> > indexes.
>> > 2. Try  breaking the query down into subsets expressed as parenthetical
>> > queries; you can treat these parenthetical queries as if they were
>> tables by
>> > assigning them an alias, and then you can join against the aliases.   I
>> have
>> > sped queries up in SQLite using this approach and, with a little
>> tinkering,
>> > the time can drop from over a minute to sub-second.   Performance will
>> > depend on the indexes and criteria used, of course. But this approach
>> lets
>> > you see how SQLite is optimizing the creation of the component sets from
>> > which you can build up your ultimate query.
>> > .
>> > select * from SONG
>> > JOIN
>> >
>> > ( select  id_song from
>> >
>> > (
>> > select  id_song from PLAYLIST_SONG
>> > where id_playlist=2
>> > ) as MYPLAYLISTSONGS
>> >
>> > JOIN
>> >
>> > (
>> > select id_song from
>> > SONG
>> > where genre_id = 0 AND artist = 'Las ketchup'
>> > AND title >= 'Asereje(karaoke version)'
>> > ) as MYSONGS
>> >
>> > on MYSONGS.id_song = MYPLAYLISTSONGS.id_song
>> >
>> >
>> > ) as SONGIDLIST
>> >
>> > on SONG.id_song = SONGIDLIST.id_song
>> >
>> >
>> > Regards
>> > Tim Romano
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Tue, May 11, 2010 at 6:07 AM, Andrea Galeazzi 
>> wrote:
>> >
>> >> Hi guys,
>> >> I'm in a bind for a huge time consuming query!
>> >> I made the following database schema:
>> >>
>> >> CREATE TABLE Song (
>> >>    id        INTEGER NOT NULL UNIQUE,
>> >>    title     VARCHAR(40) NOT NULL DEFAULT '' COLLATE NOCASE,
>> >>    artist    VARCHAR(40) NOT NULL DEFAULT '' COLLATE NOCASE,
>> >>    genre_id  INT NOT NULL DEFAULT 0,
>> >> PRIMARY KEY (id),
>> >>
>> >> CONSTRAINT fk_Genre FOREIGN KEY (genre_id)
>> >>REFERENCES Genre (id)
>> >>ON DELETE SET DEFAULT
>> >>ON UPDATE CASCADE);
>> >>
>> >> CREATE INDEX Song_Browse_View_idx ON Song(genre_id,artist,title);
>> >>
>> >> CREATE TABLE PlayList (
>> >>id INTEGER NOT NULL UNIQUE,
>> >>name   VARCHAR(15) NOT NULL COLLATE NOCASE, --KORGCOLLATE,
>> >>length INT NOT NULL DEFAULT 0,
>> >>created_date   TEXT,
>> >> PRIMARY KEY (id));
>> >>
>> >> CREATE TABLE PlayList_Song (
>> >>id_song INT NOT NULL,
>> >>id_playlist INT NOT NULL,
>> >>song_number INTEGER NOT NULL,
>> >> PRIMARY KEY (id_playlist, song_number),
>> >> CONSTRAINT fk_PlayList_Song1 FOREIGN KEY (id_song)
>> >>REFERENCES Song (id)
>> >>ON DELETE CASCADE
>> >>ON UPDATE CASCADE,
>> >> CONSTRAINT fk_PlayList_Song2 FOREIGN KEY (id_playlist)
>> >>REFERENCES PlayList (id)
>> >>ON DELETE CASCADE
>> >>ON UPDATE CASCADE);
>> >>
>> >> CREATE INDEX PlayList_Song_song_number_idx ON
>> PlayList_Song(song_number);
>> >>
>> >> Now I need to scroll title filtered by genre_id and artist both in Song
>> >> table and Playlist.
>> >> The query for the first case is very fast:
>> >> SELECT id AS number,title  FROM Song WHERE genre_id = 0 AND artist =
>> >> 'Las ketchup'
>> >> AND title >= 'Asereje(karaoke version)' AND (title > 'Asereje(karaoke
>> >> version)' OR number > 258)
>> >> ORDER BY title ASC , number ASC LIMIT 4;
>> >>
>> >> The second case is about 35 times slower... so the scrolling is quite
>> >> impossible (or useless)!
>> >> SELECT song_number AS number,title  FROM Song 

Re: [sqlite] join performance query

2010-05-11 Thread Tim Romano
You could remove the title condition from the inner SONGS select, limiting
your conditions to artist and genre; an index on column [artist] would make
this subquery run quickly:


 (
 select id_song from
 SONG
 where genre_id = 0 AND artist = 'Las ketchup'
//  AND title >= 'Asereje(karaoke version)'// --> moved to outer select
> ) as MYSONGS



The goal is to produce small inner subsets using indexes, and then to join
these with each other, and to let the inner subsets expose the necessary
columns to the outer query.
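
The approach above, run end-to-end against toy data (a sketch using Python's sqlite3 — the table contents, and the simplified two-column schema, are invented for the demo; the subquery aliases follow the thread):

```python
import sqlite3

# Toy versions of the Song / PlayList_Song tables from the thread.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Song(id INTEGER PRIMARY KEY, title TEXT,
                      artist TEXT, genre_id INT);
    CREATE TABLE PlayList_Song(id_song INT, id_playlist INT,
                               song_number INT);
    CREATE INDEX Song_artist_idx ON Song(artist);
    INSERT INTO Song VALUES (1, 'Asereje', 'Las ketchup', 0),
                            (2, 'Other song', 'Someone else', 0);
    INSERT INTO PlayList_Song VALUES (1, 2, 259), (2, 2, 260);
""")

# Parenthetical subqueries get aliases and are joined like tables; each
# inner set stays small, and the exposed columns feed the outer query.
rows = con.execute("""
    SELECT S.title, MYPLAYLISTSONGS.song_number AS number
    FROM Song AS S
    JOIN (SELECT id_song, song_number FROM PlayList_Song
          WHERE id_playlist = 2) AS MYPLAYLISTSONGS
      ON S.id = MYPLAYLISTSONGS.id_song
    JOIN (SELECT id FROM Song
          WHERE genre_id = 0 AND artist = 'Las ketchup') AS MYSONGS
      ON S.id = MYSONGS.id
    WHERE MYPLAYLISTSONGS.song_number > 258
    ORDER BY S.title, number LIMIT 4
""").fetchall()
con.close()
```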

Regards
Tim Romano


On Tue, May 11, 2010 at 11:13 AM, Andrea Galeazzi  wrote:

> Sorry but in your solution, how can I solve the condition
> AND title >= 'Asereje(karaoke version)' AND (title > 'Asereje(karaoke
> >> version)' OR number > 258)
> ?
> title is on song and number is song_number on Playlist_Song AS PS.
> Furthermore I also need title and number in place of your  select * from
> SONG
> Could you write it again please?
> Thanks
>
> Citando Tim Romano :
>
> > 1. Try discrete single-column indexes rather than multi-column composite
> > indexes.
> > 2. Try  breaking the query down into subsets expressed as parenthetical
> > queries; you can treat these parenthetical queries as if they were tables
> by
> > assigning them an alias, and then you can join against the aliases.   I
> have
> > sped queries up in SQLite using this approach and, with a little
> tinkering,
> > the time can drop from over a minute to sub-second.   Performance will
> > depend on the indexes and criteria used, of course. But this approach
> lets
> > you see how SQLite is optimizing the creation of the component sets from
> > which you can build up your ultimate query.
> > .
> > select * from SONG
> > JOIN
> >
> > ( select  id_song from
> >
> > (
> > select  id_song from PLAYLIST_SONG
> > where id_playlist=2
> > ) as MYPLAYLISTSONGS
> >
> > JOIN
> >
> > (
> > select id_song from
> > SONG
> > where genre_id = 0 AND artist = 'Las ketchup'
> > AND title >= 'Asereje(karaoke version)'
> > ) as MYSONGS
> >
> > on MYSONGS.id_song = MYPLAYLISTSONGS.id_song
> >
> >
> > ) as SONGIDLIST
> >
> > on SONG.id_song = SONGIDLIST.id_song
> >
> >
> > Regards
> > Tim Romano
> >
> >
> >
> >
> >
> >
> > On Tue, May 11, 2010 at 6:07 AM, Andrea Galeazzi 
> wrote:
> >
> >> Hi guys,
> >> I'm in a bind for a huge time consuming query!
> >> I made the following database schema:
> >>
> >> CREATE TABLE Song (
> >>    id        INTEGER NOT NULL UNIQUE,
> >>    title     VARCHAR(40) NOT NULL DEFAULT '' COLLATE NOCASE,
> >>    artist    VARCHAR(40) NOT NULL DEFAULT '' COLLATE NOCASE,
> >>    genre_id  INT NOT NULL DEFAULT 0,
> >> PRIMARY KEY (id),
> >>
> >> CONSTRAINT fk_Genre FOREIGN KEY (genre_id)
> >>REFERENCES Genre (id)
> >>ON DELETE SET DEFAULT
> >>ON UPDATE CASCADE);
> >>
> >> CREATE INDEX Song_Browse_View_idx ON Song(genre_id,artist,title);
> >>
> >> CREATE TABLE PlayList (
> >>id INTEGER NOT NULL UNIQUE,
> >>name   VARCHAR(15) NOT NULL COLLATE NOCASE, --KORGCOLLATE,
> >>length INT NOT NULL DEFAULT 0,
> >>created_date   TEXT,
> >> PRIMARY KEY (id));
> >>
> >> CREATE TABLE PlayList_Song (
> >>id_song INT NOT NULL,
> >>id_playlist INT NOT NULL,
> >>song_number INTEGER NOT NULL,
> >> PRIMARY KEY (id_playlist, song_number),
> >> CONSTRAINT fk_PlayList_Song1 FOREIGN KEY (id_song)
> >>REFERENCES Song (id)
> >>ON DELETE CASCADE
> >>ON UPDATE CASCADE,
> >> CONSTRAINT fk_PlayList_Song2 FOREIGN KEY (id_playlist)
> >>REFERENCES PlayList (id)
> >>ON DELETE CASCADE
> >>ON UPDATE CASCADE);
> >>
> >> CREATE INDEX PlayList_Song_song_number_idx ON
> PlayList_Song(song_number);
> >>
> >> Now I need to scroll title filtered by genre_id and artist both in Song
> >> table and Playlist.
> >> The query for the first case is very fast:
> >> SELECT id AS number,title  FROM Song WHERE genre_id = 0 AND artist =
> >> 'Las ketchup'
> >> AND title >= 'Asereje(karaoke version)' AND (title > 'Asereje(karaoke
> >> version)' OR number > 258)
> >> ORDER BY title ASC , number ASC LIMIT 4;
> >>
> >> The second case is about 35 times slower... so the scrolling is quite
> >> impossible (or useless)!
> >> SELECT song_number AS number,title  FROM Song AS S, Playlist_Song AS PS
> >> WHERE S.id = PS.id_song AND
> >> PS.id_playlist = 2 AND genre_id = 0 AND artist = 'Las ketchup'
> >> AND title >= 'Asereje(karaoke version)' AND (title > 'Asereje(karaoke
> >> version)' OR number > 959)
> >> ORDER BY title ASC , number ASC LIMIT 4;
> >>
> >> I also execute the EXPLAIN QUERY PLAN:
> >> 1st query:  0 0 TABLE Song WITH INDEX Song_Browse_View_idx ORDER BY
> >>
> >> 2nd query:  0 0 TABLE Song WITH INDEX Song_Browse_View_idx ORDER BY
> >>  1 1  TABLE Playlist_Song AS PS
> >> So it seems that the second plan (1,1) requires very long time!
> >> How can I 

Re: [sqlite] SQLite Database in Shared Memory

2010-05-11 Thread Black, Michael (IS)
I think you may be worrying too much about file speed as it's already pretty 
fast.
 
But if you want AIX ramdisk check here:
http://www.ee.pw.edu.pl/~pileckip/aix/mkramdisk.htm
 
 
Michael D. Black
Senior Scientist
Northrop Grumman Mission Systems
 



From: sqlite-users-boun...@sqlite.org on behalf of Manuj Bhatia
Sent: Tue 5/11/2010 11:15 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite Database in Shared Memory



I am developing this solution for an AIX machine. I am not sure if it does
any such optimization for the temp file system.
As someone recommended, I can probably implement a VFS for Shared-memory,
but that seems to be too much work :)

I am inclined toward a file-based DB with syncs turned off. If the
performance is not satisfactory, then I might look at implementing a VFS.
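
Turning syncs off is a per-connection pragma; a hedged sketch (the file name is a placeholder, and with these settings a crash or power loss can corrupt the database file):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "queue.db")  # hypothetical path
con = sqlite3.connect(path)

# Trade durability for speed: no fsync, journal kept in memory.
con.execute("PRAGMA synchronous = OFF")
con.execute("PRAGMA journal_mode = MEMORY")

# PRAGMA synchronous reports 0 when syncing is off.
sync = con.execute("PRAGMA synchronous").fetchone()[0]
con.execute("CREATE TABLE q(x)")
con.close()
```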

Thanks,
~Manuj


On Tue, May 11, 2010 at 8:39 AM, Eric Smith  wrote:

> Manuj Bhatia wrote:
>
> > I do not have a requirement of persistence in my current design, but I
> > expect that we might extend this shared-queue solution to more areas of
> > the server and will require some sort of persistence then.
> > That is one of the main reasons I do not want to use IPC queues (there
> are
> > other reasons like fixed message sizes, minimal support for queue/message
> > level metadata).
>
> OP might consider creating a database file on a tmpfs filesystem.
> The OS tricks SQLite (and everything in user space) into thinking the
> file is a normal file with all the usual properties thereof -- but
> it's backed by RAM and not any persistent medium.  You'll get the perf
> benefits you wanted, along with the relatively easy ability to make the
> DB persistent later.
>
> Fedora 12 has one of these mounted at /dev/shm by default, though I
> presume any modern Linux will support this.
>
> Caveat (1).  I ran 'make test' on SQLite 3.6.23.1 on my box (Linux ___
> 2.6.32.11-99.fc12.i686 #1 SMP Mon Apr 5 16:32:08 EDT 2010 i686 athlon
> i386 GNU/Linux) from within a tmpfs filesystem and 23 tests failed:
>
> shared-1.1.1 shared-1.2.1 shared-1.4.1.1 shared-1.4.1.2 shared-1.4.1.3
> shared-2.1.1 shared-2.2.1 shared-2.4.1.1 shared-2.4.1.2 shared-2.4.1.3
> stmt-1.2 stmt-1.3 stmt-1.4 stmt-1.5 stmt-1.6 stmt-1.7 stmt-2.1 stmt-2.2
> stmt-2.3 stmt-2.5 tempdb-2.2 tempdb-2.3 tkt2565-1.X
>
> I wanted to investigate to see why but haven't had the time -- it has
> to do with the global Tcl variable sqlite_open_file_count.  Running the
> fixture on just those test files yields passes (every time), but running
> the whole 'veryquick' suite yields failures (every time).  I see there's
> machinery to try to clear all state between test runs -- obviously this
> is not successful in my test.
>
> The testfixture is dynamically linked against these libraries:
>linux-gate.so.1 =>  (0x00511000)
>libtcl8.5.so => /usr/lib/libtcl8.5.so (0x005cb000)
>libdl.so.2 => /lib/libdl.so.2 (0x00d1f000)
>libm.so.6 => /lib/libm.so.6 (0x00d42000)
>libpthread.so.0 => /lib/libpthread.so.0 (0x00d26000)
>libc.so.6 => /lib/libc.so.6 (0x00ba9000)
>/lib/ld-linux.so.2 (0x00b87000)
>
> Caveat (2).  I don't claim this is the best solution for the OP -- just
> a possibility.
>
> Eric
>
> --
> Eric A. Smith
>
> I think there's a world market for about five computers.
>-- attr. Thomas J. Watson (Chairman of the Board, IBM), 1943
>


Re: [sqlite] SQLite Database in Shared Memory

2010-05-11 Thread Manuj Bhatia
I am developing this solution for an AIX machine. I am not sure if it does
any such optimization for the temp file system.
As someone recommended, I can probably implement a VFS for Shared-memory,
but that seems to be too much work :)

I am inclined toward a file-based DB with syncs turned off. If the
performance is not satisfactory, then I might look at implementing a VFS.

Thanks,
~Manuj


On Tue, May 11, 2010 at 8:39 AM, Eric Smith  wrote:

> Manuj Bhatia wrote:
>
> > I do not have a requirement of persistence in my current design, but I
> > expect that we might extend this shared-queue solution to more areas of
> > the server and will require some sort of persistence then.
> > That is one of the main reasons I do not want to use IPC queues (there
> are
> > other reasons like fixed message sizes, minimal support for queue/message
> > level metadata).
>
> OP might consider creating a database file on a tmpfs filesystem.
> The OS tricks SQLite (and everything in user space) into thinking the
> file is a normal file with all the usual properties thereof -- but
> it's backed by RAM and not any persistent medium.  You'll get the perf
> benefits you wanted, along with the relatively easy ability to make the
> DB persistent later.
>
> Fedora 12 has one of these mounted at /dev/shm by default, though I
> presume any modern Linux will support this.
>
> Caveat (1).  I ran 'make test' on SQLite 3.6.23.1 on my box (Linux ___
> 2.6.32.11-99.fc12.i686 #1 SMP Mon Apr 5 16:32:08 EDT 2010 i686 athlon
> i386 GNU/Linux) from within a tmpfs filesystem and 23 tests failed:
>
> shared-1.1.1 shared-1.2.1 shared-1.4.1.1 shared-1.4.1.2 shared-1.4.1.3
> shared-2.1.1 shared-2.2.1 shared-2.4.1.1 shared-2.4.1.2 shared-2.4.1.3
> stmt-1.2 stmt-1.3 stmt-1.4 stmt-1.5 stmt-1.6 stmt-1.7 stmt-2.1 stmt-2.2
> stmt-2.3 stmt-2.5 tempdb-2.2 tempdb-2.3 tkt2565-1.X
>
> I wanted to investigate to see why but haven't had the time -- it has
> to do with the global Tcl variable sqlite_open_file_count.  Running the
> fixture on just those test files yields passes (every time), but running
> the whole 'veryquick' suite yields failures (every time).  I see there's
> machinery to try to clear all state between test runs -- obviously this
> is not successful in my test.
>
> The testfixture is dynamically linked against these libraries:
>linux-gate.so.1 =>  (0x00511000)
>libtcl8.5.so => /usr/lib/libtcl8.5.so (0x005cb000)
>libdl.so.2 => /lib/libdl.so.2 (0x00d1f000)
>libm.so.6 => /lib/libm.so.6 (0x00d42000)
>libpthread.so.0 => /lib/libpthread.so.0 (0x00d26000)
>libc.so.6 => /lib/libc.so.6 (0x00ba9000)
>/lib/ld-linux.so.2 (0x00b87000)
>
> Caveat (2).  I don't claim this is the best solution for the OP -- just
> a possibility.
>
> Eric
>
> --
> Eric A. Smith
>
> I think there's a world market for about five computers.
>-- attr. Thomas J. Watson (Chairman of the Board, IBM), 1943
>


Re: [sqlite] SQLITE_CORRUPT error

2010-05-11 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 05/10/2010 04:07 PM, daksh jasra wrote:
> I have ported SQLITE over VRTX based embedded platform,

I'd suggest licensing the TH3 test suite in order to verify your port.  You
could be doing something like getting an operation wrong once the database
reaches a certain size, is an exact multiple of some interesting number etc.

  http://www.sqlite.org/th3.html

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkvpgbAACgkQmOOfHg372QTPeQCgo2W6ytS8FH360YoBFkQg60YQ
Q+UAoNS5JUdy7/Vtr85mlXNfrMEk2Wyt
=32se
-END PGP SIGNATURE-


Re: [sqlite] SQLite Database in Shared Memory

2010-05-11 Thread Manuj Bhatia
I am sorry... when I said IPC-based I really meant an IPC-Queue-based (the
standard msgget()/mq_open() stuff).
I have nothing against IPC :). It's just that an IPC-queue solution will not
satisfy all my requirements.
I will definitely be using IPC semaphore/mutex facility to avoid having to
poll the database every time (and of course for SQLite to perform the
serialization).

Here is a high-level flow of what I have in mind:
1. Writer writes a message to the db.
2. Then, it increments a semaphore (indicating a new message is available).
3. All the readers are waiting to lock the semaphore (so there will be no
sleep-polling pattern).
4. One of the readers obtains a lock on the semaphore.
5. At that time, it queries the database and retrieves a message to be
processed.
6. Then it again goes into a wait state to lock the semaphore.
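
The flow above can be modeled in miniature, with threads standing in for the writer/reader processes and a `threading.Semaphore` standing in for the POSIX semaphore (a sketch of the idea only, not the real cross-process IPC setup):

```python
import sqlite3
import threading

con = sqlite3.connect(":memory:", check_same_thread=False)
con.execute("CREATE TABLE queue(id INTEGER PRIMARY KEY,"
            " body TEXT, used INT DEFAULT 0)")
sem = threading.Semaphore(0)   # counts available messages
lock = threading.Lock()        # serializes access to the shared connection
got = []

def reader():
    for _ in range(2):
        sem.acquire()                      # steps 3-4: wait for a message
        with lock:                         # step 5: fetch one unused message
            row = con.execute("SELECT id, body FROM queue WHERE used = 0"
                              " ORDER BY id LIMIT 1").fetchone()
            con.execute("UPDATE queue SET used = 1 WHERE id = ?", (row[0],))
        got.append(row[1])                 # step 6: loop back and wait again

t = threading.Thread(target=reader)
t.start()
for msg in ("hello", "world"):
    with lock:
        con.execute("INSERT INTO queue(body) VALUES (?)", (msg,))  # step 1
    sem.release()                          # step 2: signal a new message
t.join()
con.close()
```

Note the messages are flagged `used = 1` rather than deleted, which keeps them available for the end-of-day statistics queries mentioned earlier in the thread.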

Considering that there is no straight-forward solution to get SQLite into
the shared memory, I'll probably go with a disk-based database and use other
optimization methods (like no syncing, caching etc.) to help with the
performance.

Thanks!
~Manuj


On Tue, May 11, 2010 at 7:34 AM, Pavel Ivanov  wrote:

> > In short, using a SQLite-backed queue solution gives me a lot of options
> > that a simple IPC based (and, for that matter, even a professional
> Messaging
> > Product) does not give.
>
> Also SQLite-backed solution gives you a big restriction that IPC
> doesn't: you have to poll the queue instead of pushing to it. I.e. the
> process reading the queue will have to execute some query periodically
> to see if there's anything in the queue. You don't want to execute
> this query without delay because it will eat 100% of your CPU at any
> time even when there's nothing in the queue. Besides it can introduce
> writer starvation. But when you execute query with any delay you lose
> immediate reaction of the queue. It's your choice of course.
>
> BTW, look closely at your requirements - you have some contradiction
> in them. You don't want to mess with file system because *you think*
> it will have performance penalty (as was already said it's not always
> true because OS cache your file in memory anyway). You don't want to
> use IPC because it's "bad". You want SQLite to work completely in
> memory and you want it to work inside several processes with the same
> memory. But how do you think SQLite should interact with itself to
> avoid reader in one process reading corrupted data while writer in
> another process is writing something new? The only way to do it is to
> use IPC. And SQLite does use one (probably the easiest) method of IPC
> - file-system locks. No other IPC mechanism is implemented in SQLite.
> So you have to allow SQLite to do its job - you need to have your
> database in the file system even if you won't ever read it once your
> application is closed.
>
>
> Pavel
>
> On Mon, May 10, 2010 at 3:59 PM, Manuj Bhatia 
> wrote:
> > Pavel,
> >
> > I do not have a requirement of persistence in my current design, but I
> > expect that we might extend this shared-queue solution to more areas of
> the
> > server and will require some sort of persistence then.
> > That is one of the main reasons I do not want to use IPC queues (there
> are
> > other reasons like fixed message sizes, minimal support for queue/message
> > level metadata).
> >
> > One of the main attractions of SQLite-based solution is to be able to
> > perform all kind of queries on the queue itself (from the point of view
> of
> > maintenance scripts/production support).
> > In my experience, if there are lots of services sharing different types
> of
> > messages over an IPC shared queue, sometimes you run into a situation
> where
> > the queue starts backing up and there is no way for production support
> folks
> > to determine which particular service is causing the backup (by sending
> > messages too fast, or consuming them really slow). And, in the end the
> only
> > solution is to bounce all the services (instead of just bouncing the
> > culprit) and we never discover the root cause of the backup.
> >
> > If I use a SQLite-backed queue, I can simply use the command line shell
> and
> > run queries like:
> >
> > select sender, receiver, count(*)
> > from queue
> > group by sender, receiver;
> >
> > Or any combination of message metadata to analyze the current state of
> the
> > queue.
> >
> > Also, I can easily modify my queue APIs to just update a used flag,
> instead
> > of deleting the message from the db. This way, I can analyze all the
> > messages at the end of day and determine all kinds of statistics (like
> how
> > long does a particular type of message sits in the queue).
> >
> > In short, using a SQLite-backed queue solution gives me a lot of options
> > that a simple IPC based (and, for that matter, even a professional
> Messaging
> > Product) does not give.
> >
> > Jay,
> > I did think of implementing a VFS for the shared-memory, but as you
> > 

Re: [sqlite] join performance query

2010-05-11 Thread Andrea Galeazzi
Sorry but in your solution, how can I solve the condition
AND title >= 'Asereje(karaoke version)' AND (title > 'Asereje(karaoke
>> version)' OR number > 258)
?
title is on song and number is song_number on Playlist_Song AS PS.
Furthermore I also need title and number in place of your  select * from SONG
Could you write it again please?
Thanks

Citando Tim Romano :

> 1. Try discrete single-column indexes rather than multi-column composite
> indexes.
> 2. Try  breaking the query down into subsets expressed as parenthetical
> queries; you can treat these parenthetical queries as if they were tables by
> assigning them an alias, and then you can join against the aliases.   I have
> sped queries up in SQLite using this approach and, with a little tinkering,
> the time can drop from over a minute to sub-second.   Performance will
> depend on the indexes and criteria used, of course. But this approach lets
> you see how SQLite is optimizing the creation of the component sets from
> which you can build up your ultimate query.
> .
> select * from SONG
> JOIN
>
> ( select  id_song from
>
> (
> select  id_song from PLAYLIST_SONG
> where id_playlist=2
> ) as MYPLAYLISTSONGS
>
> JOIN
>
> (
> select id_song from
> SONG
> where genre_id = 0 AND artist = 'Las ketchup'
> AND title >= 'Asereje(karaoke version)'
> ) as MYSONGS
>
> on MYSONGS.id_song = MYPLAYLISTSONGS.id_song
>
>
> ) as SONGIDLIST
>
> on SONG.id_song = SONGIDLIST.id_song
>
>
> Regards
> Tim Romano
>
>
>
>
>
>
> On Tue, May 11, 2010 at 6:07 AM, Andrea Galeazzi  wrote:
>
>> Hi guys,
>> I'm in a bind for a huge time consuming query!
>> I made the following database schema:
>>
>> CREATE TABLE Song (
>>    id        INTEGER NOT NULL UNIQUE,
>>    title     VARCHAR(40) NOT NULL DEFAULT '' COLLATE NOCASE,
>>    artist    VARCHAR(40) NOT NULL DEFAULT '' COLLATE NOCASE,
>>    genre_id  INT NOT NULL DEFAULT 0,
>> PRIMARY KEY (id),
>>
>> CONSTRAINT fk_Genre FOREIGN KEY (genre_id)
>>REFERENCES Genre (id)
>>ON DELETE SET DEFAULT
>>ON UPDATE CASCADE);
>>
>> CREATE INDEX Song_Browse_View_idx ON Song(genre_id,artist,title);
>>
>> CREATE TABLE PlayList (
>>id INTEGER NOT NULL UNIQUE,
>>name   VARCHAR(15) NOT NULL COLLATE NOCASE, --KORGCOLLATE,
>>length INT NOT NULL DEFAULT 0,
>>created_date   TEXT,
>> PRIMARY KEY (id));
>>
>> CREATE TABLE PlayList_Song (
>>id_song INT NOT NULL,
>>id_playlist INT NOT NULL,
>>song_number INTEGER NOT NULL,
>> PRIMARY KEY (id_playlist, song_number),
>> CONSTRAINT fk_PlayList_Song1 FOREIGN KEY (id_song)
>>REFERENCES Song (id)
>>ON DELETE CASCADE
>>ON UPDATE CASCADE,
>> CONSTRAINT fk_PlayList_Song2 FOREIGN KEY (id_playlist)
>>REFERENCES PlayList (id)
>>ON DELETE CASCADE
>>ON UPDATE CASCADE);
>>
>> CREATE INDEX PlayList_Song_song_number_idx ON PlayList_Song(song_number);
>>
>> Now I need to scroll title filtered by genre_id and artist both in Song
>> table and Playlist.
>> The query for the first case is very fast:
>> SELECT id AS number,title  FROM Song WHERE genre_id = 0 AND artist =
>> 'Las ketchup'
>> AND title >= 'Asereje(karaoke version)' AND (title > 'Asereje(karaoke
>> version)' OR number > 258)
>> ORDER BY title ASC , number ASC LIMIT 4;
>>
>> The second case is about 35 times slower... so the scrolling is quite
>> impossible (or useless)!
>> SELECT song_number AS number,title  FROM Song AS S, Playlist_Song AS PS
>> WHERE S.id = PS.id_song AND
>> PS.id_playlist = 2 AND genre_id = 0 AND artist = 'Las ketchup'
>> AND title >= 'Asereje(karaoke version)' AND (title > 'Asereje(karaoke
>> version)' OR number > 959)
>> ORDER BY title ASC , number ASC LIMIT 4;
>>
>> I also execute the EXPLAIN QUERY PLAN:
>> 1st query:  0 0 TABLE Song WITH INDEX Song_Browse_View_idx ORDER BY
>>
>> 2nd query:  0 0 TABLE Song WITH INDEX Song_Browse_View_idx ORDER BY
>>  1 1  TABLE Playlist_Song AS PS
>> So it seems that the second plan (1,1) requires very long time!
>> How can I optimized a such kind of query?
>> Cheers
>>
>



Re: [sqlite] SQLite Database in Shared Memory

2010-05-11 Thread Eric Smith
Manuj Bhatia wrote: 

> I do not have a requirement of persistence in my current design, but I 
> expect that we might extend this shared-queue solution to more areas of 
> the server and will require some sort of persistence then.  
> That is one of the main reasons I do not want to use IPC queues (there are 
> other reasons like fixed message sizes, minimal support for queue/message 
> level metadata).  

OP might consider creating a database file on a tmpfs filesystem.  
The OS tricks SQLite (and everything in user space) into thinking the 
file is a normal file with all the usual properties thereof -- but 
it's backed by RAM and not any persistent medium.  You'll get the perf 
benefits you wanted, along with the relatively easy ability to make the 
DB persistent later.  

Fedora 12 has one of these mounted at /dev/shm by default, though I 
presume any modern Linux will support this.  
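
A sketch of that tmpfs approach (`/dev/shm` is Linux-specific, so the snippet falls back to an ordinary temp dir where it does not exist; the file name is made up):

```python
import os
import sqlite3
import tempfile

# A database file on a tmpfs mount looks like an ordinary file to SQLite
# but is backed by RAM, not a persistent medium.
base = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
path = os.path.join(base, "ramdb-demo.sqlite")
if os.path.exists(path):
    os.remove(path)

con = sqlite3.connect(path)
con.execute("CREATE TABLE t(x)")
con.execute("INSERT INTO t VALUES (1)")
con.commit()
n = con.execute("SELECT count(*) FROM t").fetchone()[0]
con.close()
os.remove(path)
```

Making the database persistent later is then just a matter of pointing the same code at a path on a real filesystem.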

Caveat (1).  I ran 'make test' on SQLite 3.6.23.1 on my box (Linux ___ 
2.6.32.11-99.fc12.i686 #1 SMP Mon Apr 5 16:32:08 EDT 2010 i686 athlon 
i386 GNU/Linux) from within a tmpfs filesystem and 23 tests failed: 

shared-1.1.1 shared-1.2.1 shared-1.4.1.1 shared-1.4.1.2 shared-1.4.1.3 
shared-2.1.1 shared-2.2.1 shared-2.4.1.1 shared-2.4.1.2 shared-2.4.1.3 
stmt-1.2 stmt-1.3 stmt-1.4 stmt-1.5 stmt-1.6 stmt-1.7 stmt-2.1 stmt-2.2 
stmt-2.3 stmt-2.5 tempdb-2.2 tempdb-2.3 tkt2565-1.X 

I wanted to investigate to see why but haven't had the time -- it has 
to do with the global Tcl variable sqlite_open_file_count.  Running the 
fixture on just those test files yields passes (every time), but running 
the whole 'veryquick' suite yields failures (every time).  I see there's 
machinery to try to clear all state between test runs -- obviously this 
is not successful in my test.  

The testfixture is dynamically linked against these libraries:
linux-gate.so.1 =>  (0x00511000)
libtcl8.5.so => /usr/lib/libtcl8.5.so (0x005cb000)
libdl.so.2 => /lib/libdl.so.2 (0x00d1f000)
libm.so.6 => /lib/libm.so.6 (0x00d42000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00d26000)
libc.so.6 => /lib/libc.so.6 (0x00ba9000)
/lib/ld-linux.so.2 (0x00b87000)

Caveat (2).  I don't claim this is the best solution for the OP -- just 
a possibility.  

Eric 

-- 
Eric A. Smith

I think there's a world market for about five computers.
-- attr. Thomas J. Watson (Chairman of the Board, IBM), 1943


Re: [sqlite] DB files are different between PC side and instrument side.

2010-05-11 Thread Pavel Ivanov
> But I think at least
> Sqlite3 should use most of the space on a sector when it requests a new
> sector.

Due to internal SQLite specifics it uses at least 1 page for each
table and index. So even if you don't store there anything with a big
schema database will still consume significant amount of disk space.
Probably from your point of view it's a waste of space, but it's a
necessary trade-off to keep SQLite light.
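
That per-table overhead is easy to observe with `PRAGMA page_count` (a sketch using Python's sqlite3; any binding would show the same numbers):

```python
import sqlite3

# Each table (and index) gets at least one page of its own, so even an
# empty schema consumes pages.
con = sqlite3.connect(":memory:")
before = con.execute("PRAGMA page_count").fetchone()[0]
for i in range(10):
    con.execute("CREATE TABLE t%d(x)" % i)
after = con.execute("PRAGMA page_count").fetchone()[0]
con.close()

# Ten empty tables grow the database by at least ten pages.
grew_by = after - before
```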

> Maybe I need to return a smaller number when querying the sector
> size.

Read the link Dan gave you. Issuing 'PRAGMA page_size = N' before
creating any tables in the database is enough.
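A quick way to see both points (a sketch using Python's built-in sqlite3 module; the file name is arbitrary): set the page size before any tables exist, and note that each table costs at least one page.

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "small_pages.db")
conn = sqlite3.connect(path)

# Must be issued BEFORE the first table is created; afterwards the
# page size of the file is fixed.
conn.execute("PRAGMA page_size = 1024")
conn.execute("CREATE TABLE t1 (x)")
conn.execute("CREATE TABLE t2 (y)")
conn.commit()

page_size = conn.execute("PRAGMA page_size").fetchone()[0]
page_count = conn.execute("PRAGMA page_count").fetchone()[0]
print(page_size)    # 1024
print(page_count)   # >= 3: the schema page plus one root page per table
conn.close()
```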


Pavel

On Tue, May 11, 2010 at 4:29 AM, Lei, Rick (GE EntSol, SensInsp)
 wrote:
> Hi, Dan,
>
> Yes, I noticed this setting. The sector size in SDHC card is 4Kbyte
> which is different from the size of harddriver. But I think at least
> Sqlite3 should have used most space on the sector when it request a new
> sector. It looks like that the Sqlite wastes a lot of space on SDHC
> card. Of course the space is not a problem now. Because we can easily
> find a 8G or 16G SDHC card. However I think we still need to take care
> of it. Maybe I need to return a smaller number when querying the sector
> size.
>
> BR
> Rick
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
> Sent: Tuesday, May 11, 2010 4:18 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] DB files are different between PC side and
> instrumentside.
>
>
> On May 11, 2010, at 2:54 PM, Lei, Rick (GE EntSol, SensInsp) wrote:
>
>>
>> Hi,
>>
>> I ported Sqlite3 to my instrument. The database file is stored in a
>> SDHC card. Sqlite3 runs ok. However I found the database file
>> generated on instrument side is much bigger than the file on PC side.
>> I checked the files generated on instrument by UltraEdit. I found a
>> lot of space which is full filled with 0 in the file. And I can't find
>
>> these parts in the files generated on PC side. I don't know why the
>> files are different between PC side and instrument side. I think they
>> should be same. Is there any advice?
>
> Different page sizes.
>
>   http://www.sqlite.org/pragma.html#pragma_page_size
>


Re: [sqlite] Update: set multiple values

2010-05-11 Thread Adam DeVita
but...

...but I LOVE my hammer!   How dare every problem not be a nail?

;)



Good point.   Likely all the updates can fit nicely into a transaction.


On Mon, May 10, 2010 at 5:11 PM, Simon Slavin  wrote:

>
> On 10 May 2010, at 9:25pm, Adam DeVita wrote:
>
> > Simon, can you expand your syntax, or are you just saying, "get x,y,z
>  store
> > them in a set of variables, then run update with appropriate bindings"?
>
> Just that.  You have a programming language with variables, so use it.
>  That's what your programming language is for.
>
> You might be able to get extremely clever and work out some contorted
> SQLite syntax which will do the whole thing in one SQL command, but why
> bother ?  It'll be hell to work out what's wrong if you get an error
> message.  And it'll be difficult to document because you have to explain
> your perverse syntax.  Better to use two extremely simple SQL commands and
> say "We get three values here ... then we use them in this UPDATE.".  Faster
> and simpler to write, debug and document.
>
> Simon.
>
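A sketch of the two-step approach Simon describes, using Python's built-in sqlite3 module (the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER PRIMARY KEY, x, y, z)")
conn.execute("CREATE TABLE dst (id INTEGER PRIMARY KEY, x, y, z)")
conn.execute("INSERT INTO src VALUES (1, 10, 20, 30)")
conn.execute("INSERT INTO dst VALUES (7, 0, 0, 0)")

# Step 1: read the values into ordinary program variables.
x, y, z = conn.execute("SELECT x, y, z FROM src WHERE id = ?", (1,)).fetchone()

# Step 2: use them as bindings in a plain UPDATE.
conn.execute("UPDATE dst SET x = ?, y = ?, z = ? WHERE id = ?", (x, y, z, 7))
conn.commit()

print(conn.execute("SELECT x, y, z FROM dst WHERE id = 7").fetchone())  # (10, 20, 30)
```

Two trivial statements, each easy to debug on its own, which is exactly the point being made.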



-- 
VerifEye Technologies Inc.
905-948-0015x245
7100 Warden Ave, Unit 3
Markham ON, L3R 8B5
Canada


Re: [sqlite] SQLite Database in Shared Memory

2010-05-11 Thread Pavel Ivanov
> In short, using a SQLite-backed queue solution gives me a lot of options
> that a simple IPC based (and, for that matter, even a professional Messaging
> Product) does not give.

Also a SQLite-backed solution gives you a big restriction that IPC
doesn't: you have to poll the queue instead of having messages pushed
to you. I.e. the process reading the queue will have to execute some
query periodically to see if there's anything in the queue. You don't
want to execute this query without delay because it will eat 100% of
your CPU even when there's nothing in the queue. Besides, it can
introduce writer starvation. But when you execute the query with a
delay you lose the queue's immediate reaction. It's your choice of course.
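A minimal sketch of such a polling consumer (Python's built-in sqlite3 module; the queue schema is invented for illustration). The sleep in the idle branch is exactly the CPU-vs-latency trade-off described above:

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE queue (id INTEGER PRIMARY KEY, "
             "used INTEGER DEFAULT 0, body TEXT)")
conn.execute("INSERT INTO queue (body) VALUES ('hello')")
conn.commit()

def poll_once(conn):
    """Fetch and mark one unconsumed message; None if the queue is empty."""
    row = conn.execute(
        "SELECT id, body FROM queue WHERE used = 0 ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return None
    conn.execute("UPDATE queue SET used = 1 WHERE id = ?", (row[0],))
    conn.commit()
    return row[1]

got = []
for _ in range(3):
    msg = poll_once(conn)
    if msg is None:
        time.sleep(0.05)   # idle back-off; tune for your latency needs
    else:
        got.append(msg)
print(got)  # ['hello']
```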

BTW, look closely at your requirements - there's a contradiction in
them. You don't want to mess with the file system because *you think*
it will have a performance penalty (as was already said, that's not
always true because the OS caches your file in memory anyway). You don't want to
use IPC because it's "bad". You want SQLite to work completely in
memory and you want it to work inside several processes with the same
memory. But how do you think SQLite should interact with itself to
avoid reader in one process reading corrupted data while writer in
another process is writing something new? The only way to do it is to
use IPC. And SQLite does use one (probably the easiest) method of IPC
- file system locks. No other IPC mechanism is implemented in SQLite.
So you have to allow SQLite to do its job - you need to have your
database in the file system even if you won't ever read it once your
application is closed.


Pavel

On Mon, May 10, 2010 at 3:59 PM, Manuj Bhatia  wrote:
> Pavel,
>
> I do not have a requirement of persistence in my current design, but I
> expect that we might extend this shared-queue solution to more areas of the
> server and will require some sort of persistence then.
> That is one of the main reasons I do not want to use IPC queues (there are
> other reasons like fixed message sizes, minimal support for queue/message
> level metadata).
>
> One of the main attractions of SQLite-based solution is to be able to
> perform all kind of queries on the queue itself (from the point of view of
> maintenance scripts/production support).
> In my experience, if there are lots of services sharing different types of
> messages over an IPC shared queue, sometimes you run into a situation where
> the queue starts backing up and there is no way for production support folks
> to determine which particular service is causing the backup (by sending
> messages too fast, or consuming them really slow). And, in the end the only
> solution is to bounce all the services (instead of just bouncing the
> culprit) and we never discover the root cause of the backup.
>
> If I use a SQLite-backed queue, I can simply use the command line shell and
> run queries like:
>
> select sender, receiver, count(*)
> from queue
> group by sender, receiver;
>
> Or any combination of message metadata to analyze the current state of the
> queue.
>
> Also, I can easily modify my queue APIs to just update a used flag, instead
> of deleting the message from the db. This way, I can analyze all the
> messages at the end of day and determine all kinds of statistics (like how
> long does a particular type of message sits in the queue).
>
> In short, using a SQLite-backed queue solution gives me a lot of options
> that a simple IPC based (and, for that matter, even a professional Messaging
> Product) does not give.
>
> Jay,
> I did think of implementing a VFS for the shared-memory, but as you
> mentioned a file-based DB with all syncs off might be a simpler trade-off.
>
> Alexey,
> As Simon said, having a socket based daemon solution is something I want to
> avoid because it adds another layer to the architecture.
>
> Thanks,
> Manuj
>
>
>
> On Mon, May 10, 2010 at 10:56 AM, Simon Slavin  wrote:
>
>>
>> On 10 May 2010, at 4:47pm, Alexey Pechnikov wrote:
>>
>> > TCP-socket listening daemon + SQLite in-memory database may be helpful.
>>
>> Yes.  You can make one process, which handles all your SQLite transactions,
>> and receives its orders from other processes via inter-process calls or
>> TCP/IP.  I've seen a few solutions which do this and they work fine.  But
>> that process will itself become some sort of bottleneck if you have many
>> processes calling it.  And I think that the original post in this thread
>> described a situation where that was not a good solution.
>>
>> Simon.

Re: [sqlite] multi processes, so many errores of SQLITE_BUSY and SQLITE_MISUSE

2010-05-11 Thread Black, Michael (IS)
Your "reiterating 20 times" is not using a usleep so you'll blow by this most 
every time it's busy.
 
Do this instead in all your proc's
 
ret = sqlite3_step(p_stmt);
if (SQLITE_BUSY == ret)
{
    int n = 0;
    usleep(10); // try one more time before error
    while ((ret = sqlite3_step(p_stmt)) == SQLITE_BUSY) {
        printf("proc1 ret==BUSY %d\n", ++n);
        usleep(10);
    }
}
}

And you'll also need to handle "database is locked" coming from your prepare 
statements.  I saw that error too.
You'll need to loop there too.

The more you drop the usleep time, the more times it will show as busy.  1/10th 
or 1/100th of a second is about all you want, I would think.
 
And get rid of the usleep at the bottom of each proc -- it's pretty useless at 
100 microseconds.  You don't need to sleep unless you're busy.
 
I tested your code with this and got no errors at all -- just a bunch of BUSY 
messages.
 
 
Not sure what your purpose is in sqlrun.c with looping and killing.  Looks 
pretty squirrely to me.  You're not waiting for the forks to finish so what is 
your logic here?
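As an alternative to hand-rolled usleep() retry loops, sqlite3_busy_timeout() installs the same retry-and-sleep logic inside SQLite itself. A sketch using Python's built-in sqlite3 module, where that C call surfaces as the timeout connection parameter (file name is arbitrary):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "busy.db")

# autocommit mode so the BEGIN/COMMIT below are under explicit control
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t (x)")

# timeout=5 makes SQLite retry internally for up to 5 seconds before
# surfacing "database is locked" -- the C API equivalent is
# sqlite3_busy_timeout(db, 5000).
reader = sqlite3.connect(path, timeout=5)

writer.execute("BEGIN IMMEDIATE")           # take the write lock
writer.execute("INSERT INTO t VALUES (1)")
writer.execute("COMMIT")                    # release it again

# With the lock released the reader proceeds; had the writer still held
# it, the reader would have waited up to 5 s instead of failing at once.
n = reader.execute("SELECT count(*) FROM t").fetchone()[0]
print(n)  # 1
```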
 
Michael D. Black
Senior Scientist
Northrop Grumman Mission Systems
 



From: sqlite-users-boun...@sqlite.org on behalf of liubin liu
Sent: Tue 5/11/2010 4:57 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] multi processes, so many errores of SQLITE_BUSY and 
SQLITE_MISUSE




Multi processes, getting so many errors of SQLITE_BUSY and SQLITE_MISUSE...

And system performance is very bad because of the three processes doing
insert/select/update on the database.

How can these sqlite3 operations be made faster?


_my codes___


___proc_main.c_


// create three processes of insert/read/update database.

// proc_1 : insert;
// proc_2 : select;
// proc_3 : update;


#include <stdio.h>
#include <sys/types.h>  // for fork();
#include <unistd.h>     // for fork(); execv(); usleep();
#include <signal.h>     // for kill();

#include <sqlite3.h>


static int createdb (void);

int main (void)
{
int ret = -1;


ret = createdb ();
usleep (5);


int i=0;
while (1)
{
pid_t p1, p2, p3;

p1 = fork();
if (0 == p1)
{
char *argv1[] = {"proc_1"};
ret = execv ("./proc_1", argv1);
}

p2 = fork();
if (0 == p2)
{
char *argv2[] = {"proc_2"};
ret = execv ("./proc_2", argv2);
}

p3 = fork();
if (0 == p3)
{
char *argv3[] = {"proc_3"};
ret = execv ("./proc_3", argv3);
}


usleep (100 * 100);

while (1)
{
ret = kill (p1, SIGKILL);
ret = kill (p2, SIGKILL);
ret = kill (p3, SIGKILL);


usleep (10 * 100);
}
}

return 0;
}

static int createdb (void)
{
int ret = -1;

sqlite3 *db = NULL;
ret = sqlite3_open ("test.db", &db);


ret = sqlite3_exec (db, "CREATE TABLE t1 (mp_no INTEGER, di INTEGER,
data INT64, rec_time INTEGER, data_type CHAR(1) )", NULL,NULL,NULL);
ret = sqlite3_exec (db, "CREATE UNIQUE INDEX i_t1 ON t1 (mp_no, di)",
NULL,NULL,NULL);


ret = sqlite3_close (db);
return ret;
}




___proc_1.c_


#include <stdio.h>
#include <time.h>    // for time();
#include <stdlib.h>  // for srand(); rand();
#include <unistd.h>  // for usleep();
#include <sqlite3.h>

#define DELAY_TIME  2   // 20ms
#define REDO_TIMES  60

int main (void)
{
int ret = -1;

struct tm *tm = NULL;
time_t t;
char datee[30];

FILE *fp1;
fp1 = fopen ("proc_1.log", "a+");

srand ((int) time(0));


sqlite3 *db = NULL;
ret = sqlite3_open ("test.db", &db);


char *sql_f = "INSERT OR REPLACE INTO t1 VALUES (%d, %d, %llu, %d, %d)";
char *sql = NULL;

sqlite3_stmt *p_stmt = NULL;

int i=0, n=5000;
for (i=0; i

Re: [sqlite] Insert large data question ??

2010-05-11 Thread P Kishor
On Tue, May 11, 2010 at 12:47 AM, 風箏  wrote:
> Dear
>
> I have about 9 million data insert string need to insert into an table ,each
> row data is unique
>
> this is a sample:
> insert into mydata
> VALUES(38824801,56888,'AABBCC',4.999,157,'2009/9/10
> 19:55:50');
>
> this is my schema:
> table|mydata|mydata|2|CREATE TABLE mydata
> (
>    itno VARCHAR(20),
>    lcno VARCHAR(20),
>    srno VARCHAR(10),
>    ran  VARCHAR(20),
>    pp INTEGER,
>    cdate VARCHAR(20),
>    PRIMARY KEY (itno DESC, lcno ASC)
> )
..
> but i have question about performance,everytime doing the job takes about
> 63000 seconds

Use transactions. But, do you also realize that most of your columns
are defined as VARCHAR, but you are inserting stuff that doesn't look
like TEXT. You will be/should be surprised by the results. From the
example above,

>itno VARCHAR(20),
>lcno VARCHAR(20),
>srno VARCHAR(10),
>ran  VARCHAR(20),
>pp INTEGER,
>cdate VARCHAR(20),

itno VARCHAR(20): 38824801 <-- if it had leading zeroes, they would vanish
lcno VARCHAR(20): 56888 <-- will be stored as the text '56888'
srno VARCHAR(10): 'AABBCC' <-- inserted correctly
ran  VARCHAR(20): 4.999 <-- are you expecting this to remain a REAL?
pp INTEGER: 157 <-- inserted correctly
cdate VARCHAR: '2009/9/10 19:55:50' <-- inserted correctly

enclose your VARCHARs in single quotes.
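The effect can be demonstrated with a small sketch (Python's built-in sqlite3 module; the table is a one-column stand-in for the poster's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (itno VARCHAR(20))")  # VARCHAR => TEXT affinity

# Unquoted: SQLite parses 00042 as the integer 42, then the column's
# TEXT affinity stores it as the string '42' -- the leading zeroes vanish.
conn.execute("INSERT INTO t VALUES (00042)")

# Quoted: stored exactly as written.
conn.execute("INSERT INTO t VALUES ('00042')")

rows = list(conn.execute("SELECT itno, typeof(itno) FROM t ORDER BY rowid"))
for row in rows:
    print(row)
# ('42', 'text')
# ('00042', 'text')
```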

-- 
Puneet Kishor http://www.punkish.org
Carbon Model http://carbonmodel.org
Charter Member, Open Source Geospatial Foundation http://www.osgeo.org
Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor
Nelson Institute, UW-Madison http://www.nelson.wisc.edu
---
Assertions are politics; backing up assertions with evidence is science
===


Re: [sqlite] SQLite Database in Shared Memory

2010-05-11 Thread Black, Michael (IS)
Just 'cuz you don't need persistence now of course doesn't mean you can't use it.
 
That solves your "shared memory" problem even though it's not as elegant.
 
You can even access it via file shares that way too, which sounds a bit like what 
you may want to do anyway.
 
Michael D. Black
Senior Scientist
Northrop Grumman Mission Systems
 



From: sqlite-users-boun...@sqlite.org on behalf of Manuj Bhatia
Sent: Mon 5/10/2010 2:59 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite Database in Shared Memory



Pavel,

I do not have a requirement of persistence in my current design, but I
expect that we might extend this shared-queue solution to more areas of the
server and will require some sort of persistence then.
That is one of the main reasons I do not want to use IPC queues (there are
other reasons like fixed message sizes, minimal support for queue/message
level metadata).

One of the main attractions of SQLite-based solution is to be able to
perform all kind of queries on the queue itself (from the point of view of
maintenance scripts/production support).
In my experience, if there are lots of services sharing different types of
messages over an IPC shared queue, sometimes you run into a situation where
the queue starts backing up and there is no way for production support folks
to determine which particular service is causing the backup (by sending
messages too fast, or consuming them really slow). And, in the end the only
solution is to bounce all the services (instead of just bouncing the
culprit) and we never discover the root cause of the backup.

If I use a SQLite-backed queue, I can simply use the command line shell and
run queries like:

select sender, receiver, count(*)
from queue
group by sender, receiver;

Or any combination of message metadata to analyze the current state of the
queue.

Also, I can easily modify my queue APIs to just update a used flag, instead
of deleting the message from the db. This way, I can analyze all the
messages at the end of day and determine all kinds of statistics (like how
long does a particular type of message sits in the queue).

In short, using a SQLite-backed queue solution gives me a lot of options
that a simple IPC based (and, for that matter, even a professional Messaging
Product) does not give.

Jay,
I did think of implementing a VFS for the shared-memory, but as you
mentioned a file-based DB with all syncs off might be a simpler trade-off.

Alexey,
As Simon said, having a socket based daemon solution is something I want to
avoid because it adds another layer to the architecture.

Thanks,
Manuj



On Mon, May 10, 2010 at 10:56 AM, Simon Slavin  wrote:

>
> On 10 May 2010, at 4:47pm, Alexey Pechnikov wrote:
>
> > TCP-socket listening daemon + SQLite in-memory database may be helpful.
>
> Yes.  You can make one process, which handles all your SQLite transactions,
> and receives its orders from other processes via inter-process calls or
> TCP/IP.  I've seen a few solutions which do this and they work fine.  But
> that process will itself become some sort of bottleneck if you have many
> processes calling it.  And I think that the original post in this thread
> described a situation where that was not a good solution.
>
> Simon.


Re: [sqlite] Documentation typo

2010-05-11 Thread Richard Hipp
2010/5/10 "Carlos Andrés Ramírez C." 

>
> Hello guys,
> I was breaking my head trying to figure out how to obtain the last
> inserted row's ID --- using SQLite from Ruby.
>
> I found 'last_insert_rowid()' in your documentation at
> http://www.sqlite.org/lang_corefunc.html   and still did not do it.
>
> After spending a lot of time searching, I found that it was not
> 'last_insert_rowid()' as documented, but instead 'last_insert_row_id()'
>  with an extra underscore character before the 'id'.
>

SQLite uses last_insert_rowid() (with no _ between row and id), as the
documentation states.  Perhaps Ruby has an equivalent method named
last_insert_row_id().  Or perhaps the version of SQLite built into Ruby has
a modified version of SQLite that adds the extra _ in the name.
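For comparison, both spellings can be checked from Python's built-in sqlite3 module, where the wrapper-level accessor is cursor.lastrowid (Ruby's wrapper presumably named its own accessor similarly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")

cur = conn.execute("INSERT INTO t (name) VALUES ('first')")

# The SQL function, exactly as documented: no underscore between row and id.
rowid_sql = conn.execute("SELECT last_insert_rowid()").fetchone()[0]

# Wrappers typically expose the same value through their own API;
# in Python's sqlite3 module it is cursor.lastrowid.
print(rowid_sql, cur.lastrowid)  # 1 1
```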



>
> is this a Typo? --- I almost went crazy
> It was like that also in the downloadable documentation, so you wanna
> check that out,
>
> Best wishes,
>
> Carlos Ramirez,
> www.antai-group.com
>
>



-- 
-
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Documentation typo

2010-05-11 Thread P Kishor
2010/5/10 "Carlos Andrés Ramírez C." :
>
> Hello guys,
> I was breaking my head trying to figure out how to obtain the last
> inserted row's ID --- using SQLite from Ruby.
>
> I found 'last_insert_rowid()' in your documentation at
> http://www.sqlite.org/lang_corefunc.html   and still did not do it.
>
> After spending a lot of time searching, I found that it was not
> 'last_insert_rowid()' as documented, but instead 'last_insert_row_id()'
>  with an extra underscore character before the 'id'.
>

Nope. last_insert_rowid() is correct. Note that just doing a 'SELECT
row_id FROM table' will croak with an error, while 'SELECT rowid FROM
table' works fine. There is no underscore. I have
$dbh->sqlite_last_insert_rowid() in Perl.

Perhaps your Ruby SQLite package has changed the syntax and introduced
the underscore. Get it corrected there.


> is this a Typo? --- I almost went crazy
> It was like that also in the downloadable documentation, so you wanna
> check that out,
>
> Best wishes,
>
> Carlos Ramirez,
> www.antai-group.com
>
>



-- 
Puneet Kishor http://www.punkish.org
Carbon Model http://carbonmodel.org
Charter Member, Open Source Geospatial Foundation http://www.osgeo.org
Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor
Nelson Institute, UW-Madison http://www.nelson.wisc.edu
---
Assertions are politics; backing up assertions with evidence is science
===


Re: [sqlite] Insert large data question ??

2010-05-11 Thread Lei, Rick (GE EntSol, SensInsp)
Try to use transaction syntax. 

Rick

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of 風箏
Sent: Tuesday, May 11, 2010 1:48 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] Insert large data question ??

Dear

I have about 9 million data insert string need to insert into an table ,each 
row data is unique

this is a sample:
insert into mydata
VALUES(38824801,56888,'AABBCC',4.999,157,'2009/9/10
19:55:50');

this is my schema:
table|mydata|mydata|2|CREATE TABLE mydata
(
itno VARCHAR(20),
lcno VARCHAR(20),
srno VARCHAR(10),
ran  VARCHAR(20),
pp INTEGER,
cdate VARCHAR(20),
PRIMARY KEY (itno DESC, lcno ASC)
)
index|sqlite_autoindex_mydata_1|mydata|3|
index|IDX_CDATE|mydata|4|CREATE INDEX IDX_CDATE  on mydata (cdate  DESC) 
index|IDX_PRICE|mydata|5|CREATE INDEX IDX_PRICE  on mydata (pp  ASC)
index|IDX_RANK|mydata|6|CREATE INDEX IDX_RANK   on mydata (ran   DESC)
index|IDX_LC_NO|mydata|7|CREATE INDEX IDX_LC_NO  on mydata (lcno ASC) 
index|IDX_SR_NO|mydata|8|CREATE INDEX IDX_SR_NO  on mydata (srno ASC)

but i have question about performance,everytime doing the job takes about 63000 
seconds is there any suggestion can make more faster with my job ??


Re: [sqlite] Documentation typo

2010-05-11 Thread Simon Slavin

On 10 May 2010, at 4:11pm, Carlos Andrés Ramírez C. wrote:

> I was breaking my head trying to figure out how to obtain the last 
> inserted row's ID --- using SQLite from Ruby.
> 
> I found 'last_insert_rowid()' in your documentation at   
> http://www.sqlite.org/lang_corefunc.html   and still did not do it.
> 
> After spending a lot of time searching, I found that it was not 
> 'last_insert_rowid()' as documented, but instead 'last_insert_row_id()' 
>  with an extra underscore character before the 'id'.

The amalgamation source for 3.6.23 has

SQLITE_API sqlite3_int64 sqlite3_last_insert_rowid(sqlite3*);

and

static void last_insert_rowid

I don't know where you got your function from, but I don't think it's SQLite.  
It might be a Ruby library that /calls/ SQLite, in which case the error is with 
whoever wrote the Ruby library.

Simon.


Re: [sqlite] Insert large data question ??

2010-05-11 Thread Richard Hipp
Does the following document help?

http://www.sqlite.org/faq.html#q19
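A minimal sketch of the transaction advice in that FAQ entry, using Python's built-in sqlite3 module (row values are modeled on the sample in the post):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode so the
# BEGIN/COMMIT below are under our explicit control.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE mydata (itno TEXT, lcno TEXT, srno TEXT, "
             "ran TEXT, pp INTEGER, cdate TEXT)")

rows = [(str(38824801 + i), '56888', 'AABBCC', '4.999', 157,
         '2009/9/10 19:55:50') for i in range(10000)]

# One transaction around the whole batch: without it, every INSERT is
# its own transaction and pays a sync to disk.
conn.execute("BEGIN")
conn.executemany("INSERT INTO mydata VALUES (?, ?, ?, ?, ?, ?)", rows)
conn.execute("COMMIT")

n = conn.execute("SELECT count(*) FROM mydata").fetchone()[0]
print(n)  # 10000
```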

On Tue, May 11, 2010 at 1:47 AM, 風箏  wrote:

> Dear
>
> I have about 9 million data insert string need to insert into an table
> ,each
> row data is unique
>
> this is a sample:
> insert into mydata
> VALUES(38824801,56888,'AABBCC',4.999,157,'2009/9/10
> 19:55:50');
>
> this is my schema:
> table|mydata|mydata|2|CREATE TABLE mydata
> (
>itno VARCHAR(20),
>lcno VARCHAR(20),
>srno VARCHAR(10),
>ran  VARCHAR(20),
>pp INTEGER,
>cdate VARCHAR(20),
>PRIMARY KEY (itno DESC, lcno ASC)
> )
> index|sqlite_autoindex_mydata_1|mydata|3|
> index|IDX_CDATE|mydata|4|CREATE INDEX IDX_CDATE  on mydata (cdate  DESC)
> index|IDX_PRICE|mydata|5|CREATE INDEX IDX_PRICE  on mydata (pp  ASC)
> index|IDX_RANK|mydata|6|CREATE INDEX IDX_RANK   on mydata (ran   DESC)
> index|IDX_LC_NO|mydata|7|CREATE INDEX IDX_LC_NO  on mydata (lcno ASC)
> index|IDX_SR_NO|mydata|8|CREATE INDEX IDX_SR_NO  on mydata (srno ASC)
>
> but i have question about performance,everytime doing the job takes about
> 63000 seconds
> is there any suggestion can make more faster with my job ??



-- 
-
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] SQLite Database in Shared Memory

2010-05-11 Thread Manuj Bhatia
Pavel,

I do not have a requirement of persistence in my current design, but I
expect that we might extend this shared-queue solution to more areas of the
server and will require some sort of persistence then.
That is one of the main reasons I do not want to use IPC queues (there are
other reasons like fixed message sizes, minimal support for queue/message
level metadata).

One of the main attractions of SQLite-based solution is to be able to
perform all kind of queries on the queue itself (from the point of view of
maintenance scripts/production support).
In my experience, if there are lots of services sharing different types of
messages over an IPC shared queue, sometimes you run into a situation where
the queue starts backing up and there is no way for production support folks
to determine which particular service is causing the backup (by sending
messages too fast, or consuming them really slow). And, in the end the only
solution is to bounce all the services (instead of just bouncing the
culprit) and we never discover the root cause of the backup.

If I use a SQLite-backed queue, I can simply use the command line shell and
run queries like:

select sender, receiver, count(*)
from queue
group by sender, receiver;

Or any combination of message metadata to analyze the current state of the
queue.

Also, I can easily modify my queue APIs to just update a used flag, instead
of deleting the message from the db. This way, I can analyze all the
messages at the end of day and determine all kinds of statistics (like how
long does a particular type of message sits in the queue).

In short, using a SQLite-backed queue solution gives me a lot of options
that a simple IPC based (and, for that matter, even a professional Messaging
Product) does not give.

Jay,
I did think of implementing a VFS for the shared-memory, but as you
mentioned a file-based DB with all syncs off might be a simpler trade-off.

Alexey,
As Simon said, having a socket based daemon solution is something I want to
avoid because it adds another layer to the architecture.

Thanks,
Manuj



On Mon, May 10, 2010 at 10:56 AM, Simon Slavin  wrote:

>
> On 10 May 2010, at 4:47pm, Alexey Pechnikov wrote:
>
> > TCP-socket listening daemon + SQLite in-memory database may be helpful.
>
> Yes.  You can make one process, which handles all your SQLite transactions,
> and receives its orders from other processes via inter-process calls or
> TCP/IP.  I've seen a few solutions which do this and they work fine.  But
> that process will itself become some sort of bottleneck if you have many
> processes calling it.  And I think that the original post in this thread
> described a situation where that was not a good solution.
>
> Simon.


[sqlite] Insert large data question ??

2010-05-11 Thread 風箏
Dear

I have about 9 million data insert string need to insert into an table ,each
row data is unique

this is a sample:
insert into mydata
VALUES(38824801,56888,'AABBCC',4.999,157,'2009/9/10
19:55:50');

this is my schema:
table|mydata|mydata|2|CREATE TABLE mydata
(
itno VARCHAR(20),
lcno VARCHAR(20),
srno VARCHAR(10),
ran  VARCHAR(20),
pp INTEGER,
cdate VARCHAR(20),
PRIMARY KEY (itno DESC, lcno ASC)
)
index|sqlite_autoindex_mydata_1|mydata|3|
index|IDX_CDATE|mydata|4|CREATE INDEX IDX_CDATE  on mydata (cdate  DESC)
index|IDX_PRICE|mydata|5|CREATE INDEX IDX_PRICE  on mydata (pp  ASC)
index|IDX_RANK|mydata|6|CREATE INDEX IDX_RANK   on mydata (ran   DESC)
index|IDX_LC_NO|mydata|7|CREATE INDEX IDX_LC_NO  on mydata (lcno ASC)
index|IDX_SR_NO|mydata|8|CREATE INDEX IDX_SR_NO  on mydata (srno ASC)

but i have question about performance,everytime doing the job takes about
63000 seconds
is there any suggestion can make more faster with my job ??


[sqlite] Documentation typo

2010-05-11 Thread Carlos Andrés Ramírez C.

Hello guys,
I was breaking my head trying to figure out how to obtain the last 
inserted row's ID --- using SQLite from Ruby.

I found 'last_insert_rowid()' in your documentation at   
http://www.sqlite.org/lang_corefunc.html   and still did not do it.

After spending a lot of time searching, I found that it was not 
'last_insert_rowid()' as documented, but instead 'last_insert_row_id()' 
 with an extra underscore character before the 'id'.

is this a Typo? --- I almost went crazy
It was like that also in the downloadable documentation, so you wanna 
check that out,

Best wishes,

Carlos Ramirez,
www.antai-group.com




Re: [sqlite] Returning column to default

2010-05-11 Thread Andy Gibbs
- Original Message - 
From: "Alexey Pechnikov"

Newsgroups: gmane.comp.db.sqlite.general
Sent: Saturday, May 08, 2010 2:27 PM
Subject: Re: Returning column to default



Please send to me this patch. I think it may be added to unofficial
http://sqlite.mobigroup.ru repository.


No problems.  It should be attached here.  If not, send me your email and 
I'll forward it direct to you.  It's created against the latest released 
version (3.6.23.1)


So that you know, this patch exploits an undocumented/unexpected state in the 
update column expression.  If this expression is set to NULL, then sqlite 
will crash by default (this is not a problem generally since you can't/don't 
ordinarily set the expression to NULL).  I use this as the indicator to set 
to default value.  It then extracts the default expression from the table 
column definition and applies it.  Note that it must duplicate it since it 
will later be cleaned up automatically.


If there is no column default then an error occurs ("no default for column: 
x").  This could easily be changed, if desired, to setting the column as 
NULL, for example, but to me it makes more sense to have the error.


The attached test file can be placed in the 'test' folder along with all the 
others and will then run along with them when doing a "make test".


As stated before, I'd be very interested to have feedback on how well it 
works for you.  I can't claim to have tested it exhaustively (e.g. as the 
test file shows!), but would be happy to augment it and fix any bugs that 
come up, assuming they're not too complex!!!


Cheers
Andy



Re: [sqlite] join performance query

2010-05-11 Thread Tim Romano
1. Try discrete single-column indexes rather than multi-column composite
indexes.
2. Try  breaking the query down into subsets expressed as parenthetical
queries; you can treat these parenthetical queries as if they were tables by
assigning them an alias, and then you can join against the aliases.   I have
sped queries up in SQLite using this approach and, with a little tinkering,
the time can drop from over a minute to sub-second.   Performance will
depend on the indexes and criteria used, of course. But this approach lets
you see how SQLite is optimizing the creation of the component sets from
which you can build up your ultimate query.
.
select * from SONG
JOIN

( select  id_song from

(
select  id_song from PLAYLIST_SONG
where id_playlist=2
) as MYPLAYLISTSONGS

JOIN

(
select id_song from
SONG
where genre_id = 0 AND artist = 'Las ketchup'
AND title >= 'Asereje(karaoke version)'
) as MYSONGS

on MYSONGS.id_song = MYPLAYLISTSONGS.id_song


) as SONGIDLIST

on SONG.id_song = SONGIDLIST.id_song
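Tim's rewrite can be sanity-checked on a toy schema: the aliased-subquery form returns the same rows as a flat join (the schema and data below are illustrative, not the OP's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SONG (id_song INTEGER PRIMARY KEY, title TEXT, artist TEXT, genre_id INT);
CREATE TABLE PLAYLIST_SONG (id_playlist INT, id_song INT);
INSERT INTO SONG VALUES (1, 'A', 'X', 0), (2, 'B', 'X', 0), (3, 'C', 'Y', 1);
INSERT INTO PLAYLIST_SONG VALUES (2, 1), (2, 3), (5, 2);
""")

# Flat join form.
flat = conn.execute("""
    SELECT S.id_song FROM SONG S
    JOIN PLAYLIST_SONG PS ON S.id_song = PS.id_song
    WHERE PS.id_playlist = 2 AND S.genre_id = 0
    ORDER BY S.id_song""").fetchall()

# Same result, but each component set is built as an aliased subquery,
# which lets you inspect how SQLite handles each piece separately.
nested = conn.execute("""
    SELECT S.id_song FROM SONG S
    JOIN (SELECT id_song FROM PLAYLIST_SONG WHERE id_playlist = 2) AS MYPL
      ON S.id_song = MYPL.id_song
    WHERE S.genre_id = 0
    ORDER BY S.id_song""").fetchall()
```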


Regards
Tim Romano






On Tue, May 11, 2010 at 6:07 AM, Andrea Galeazzi  wrote:

> Hi guys,
> I'm in a bind for a huge time consuming query!
> I made the following database schema:
>
> CREATE TABLE Song (
>    id        INTEGER NOT NULL UNIQUE,
>    title     VARCHAR(40) NOT NULL DEFAULT '' COLLATE NOCASE,
>    artist    VARCHAR(40) NOT NULL DEFAULT '' COLLATE NOCASE,
>    genre_id  INT NOT NULL DEFAULT 0,
> PRIMARY KEY (id),
>
> CONSTRAINT fk_Genre FOREIGN KEY (genre_id)
>REFERENCES Genre (id)
>ON DELETE SET DEFAULT
>ON UPDATE CASCADE);
>
> CREATE INDEX Song_Browse_View_idx ON Song(genre_id,artist,title);
>
> CREATE TABLE PlayList (
>id INTEGER NOT NULL UNIQUE,
>name   VARCHAR(15) NOT NULL COLLATE NOCASE, --KORGCOLLATE,
>length INT NOT NULL DEFAULT 0,
>created_date   TEXT,
> PRIMARY KEY (id));
>
> CREATE TABLE PlayList_Song (
>id_song INT NOT NULL,
>id_playlist INT NOT NULL,
>song_number INTEGER NOT NULL,
> PRIMARY KEY (id_playlist, song_number),
> CONSTRAINT fk_PlayList_Song1 FOREIGN KEY (id_song)
>REFERENCES Song (id)
>ON DELETE CASCADE
>ON UPDATE CASCADE,
> CONSTRAINT fk_PlayList_Song2 FOREIGN KEY (id_playlist)
>REFERENCES PlayList (id)
>ON DELETE CASCADE
>ON UPDATE CASCADE);
>
> CREATE INDEX PlayList_Song_song_number_idx ON PlayList_Song(song_number);
>
> Now I need to scroll title filtered by genre_id and artist both in Song
> table and Playlist.
> The query for the first case is very fast:
> SELECT id AS number,title  FROM Song WHERE genre_id = 0 AND artist =
> 'Las ketchup'
> AND title >= 'Asereje(karaoke version)' AND (title > 'Asereje(karaoke
> version)' OR number > 258)
> ORDER BY title ASC , number ASC LIMIT 4;
>
> The second case is about 35 times slower... so the scrolling is quite
> impossible (or useless)!
> SELECT song_number AS number,title  FROM Song AS S, Playlist_Song AS PS
> WHERE S.id = PS.id_song AND
> PS.id_playlist = 2 AND genre_id = 0 AND artist = 'Las ketchup'
> AND title >= 'Asereje(karaoke version)' AND (title > 'Asereje(karaoke
> version)' OR number > 959)
> ORDER BY title ASC , number ASC LIMIT 4;
>
> I also execute the EXPLAIN QUERY PLAN:
> 1st query:  0 0 TABLE Song WITH INDEX Song_Browse_View_idx ORDER BY
>
> 2nd query:  0 0 TABLE Song WITH INDEX Song_Browse_View_idx ORDER BY
>  1 1  TABLE Playlist_Song AS PS
> So it seems that the second plan (1,1) requires very long time!
> How can I optimize such a query?
> Cheers
>


Re: [sqlite] join performance query

2010-05-11 Thread Simon Davies
On 11 May 2010 11:07, Andrea Galeazzi  wrote:
> Hi guys,
> I'm in a bind for a huge time consuming query!
...
> The second case is about 35 times slower... so the scrolling is quite
> impossible (or useless)!
> SELECT song_number AS number,title  FROM Song AS S, Playlist_Song AS PS
> WHERE S.id = PS.id_song AND
> PS.id_playlist = 2 AND genre_id = 0 AND artist = 'Las ketchup'
> AND title >= 'Asereje(karaoke version)' AND (title > 'Asereje(karaoke
> version)' OR number > 959)
> ORDER BY title ASC , number ASC LIMIT 4;
>
> I also execute the EXPLAIN QUERY PLAN:
> 1st query:  0 0 TABLE Song WITH INDEX Song_Browse_View_idx ORDER BY
>
> 2nd query:  0 0 TABLE Song WITH INDEX Song_Browse_View_idx ORDER BY
>                  1 1  TABLE Playlist_Song AS PS
> So it seems that the second plan (1,1) requires very long time!
> How can I optimize such a query?

You can see that there is no index being used for looking up data on
table Playlist_Song. A good first step to improve performance is to
add an index that will be used for this query:
  create index playlistSong_id_song on Playlist_Song( id_song );
or
  create index playlistSong_id_playlist on Playlist_Song( id_playlist );
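Whether the new index is actually picked up is easy to confirm from the query plan; a small Python sketch using one of the suggested indexes (EXPLAIN QUERY PLAN text varies between SQLite versions, so only the index name is checked):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Playlist_Song (id_song INT, id_playlist INT, song_number INT)")

def plan(sql):
    # EXPLAIN QUERY PLAN output format varies between SQLite versions;
    # the human-readable detail is always the last column of each row.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r[-1]) for r in rows)

q = "SELECT * FROM Playlist_Song WHERE id_playlist = 2"
before = plan(q)                       # full table scan
conn.execute(
    "CREATE INDEX playlistSong_id_playlist ON Playlist_Song(id_playlist)")
after = plan(q)                        # indexed search
```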

> Cheers
>

Regards,
Simon


Re: [sqlite] Select via Wi-fi very slow

2010-05-11 Thread Tim Romano
N.B. Queries with LIKE will not use an index if the particular
implementation of SQLite overrides LIKE. The .NET implementation I'm
familiar with has done so; the OP's may have done so too. However, GLOB was
left intact and does make use of an index on "starts with" and "equals"
substring searches.

GLOB is case-sensitive.


select * from products  where description GLOB 'shirt*'


Note the asterisk wildcard instead of the percent-symbol.
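The difference is visible in the query plan: on a default (BINARY-collated) index, a literal-prefix GLOB becomes an index range search, while the equivalent LIKE, with case_sensitive_like at its default of off, falls back to a scan. A minimal sketch (table name taken from the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (description TEXT)")
conn.execute("CREATE INDEX products_desc ON products(description)")

def plan(sql):
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r[-1]) for r in rows)

# GLOB is case-sensitive, so a "starts with" GLOB can range-search
# the BINARY-collated index on description.
glob_plan = plan("SELECT * FROM products WHERE description GLOB 'shirt*'")

# LIKE is case-insensitive by default, so the same prefix match
# cannot use this index and is answered by a scan.
like_plan = plan("SELECT * FROM products WHERE description LIKE 'shirt%'")
```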


Regards
Tim Romano
Swarthmore PA





On Tue, May 11, 2010 at 5:50 AM, Pavel Ivanov  wrote:

> > Sometimes search found 200 records.  When I do a query via wi-fi takes 1
> > minute.
> > How can I decrease this time?
>
> Time taken to search for the records does not depend on how many
> records found. It depends on how many records were searched through.
> Most probably for your query no indexes are used, so the whole table
> is scanned through. And that means that the whole database is copied
> to your device via WiFi, which apparently is slow.
>
> To decrease the amount of data transfered to the device you can use
> indexes. For this particular query you can create index like this:
>
> CREATE INDEX Product_Ind on Product
> (description COLLATE NOCASE);
>
>
> Pavel
>
> On Mon, May 10, 2010 at 6:31 PM, Ernany  wrote:
> > Hello,
> >
> > I'll try to explain my problem:
> >
> > I have a Symbol MC3090 Data Collector with VB.Net 2005. I have a database
> > with 80,000 records on the computer.
> >
> > For example: I search all words that begin with "shirt" and show in the
> Grid
> > Collector.
> > Sometimes search found 200 records.  When I do a query via wi-fi takes 1
> > minute.
> > How can I decrease this time?
> >
> > On the computer the same search takes a few seconds ...
> >
> >
> >  Public ConnStringDados As String = "Data Source=" & Address & "\" &
> NameDB
> > & ";Version=3;Compress=True;Synchronous=Off;Cache Size=8000;"
> >
> >
> > My select:
> >
> > "SELECT codigo, description FROM Product WHERE description Like '" & Word_Search & "%'"
> >
> >
> > Thanks,
> >
> > Ernany


[sqlite] join performance query

2010-05-11 Thread Andrea Galeazzi
Hi guys,
I'm in a bind for a huge time consuming query!
I made the following database schema:

CREATE TABLE Song (
    id        INTEGER NOT NULL UNIQUE,
    title     VARCHAR(40) NOT NULL DEFAULT '' COLLATE NOCASE,
    artist    VARCHAR(40) NOT NULL DEFAULT '' COLLATE NOCASE,
    genre_id  INT NOT NULL DEFAULT 0,
PRIMARY KEY (id),

CONSTRAINT fk_Genre FOREIGN KEY (genre_id)
REFERENCES Genre (id)
ON DELETE SET DEFAULT
ON UPDATE CASCADE);

CREATE INDEX Song_Browse_View_idx ON Song(genre_id,artist,title);

CREATE TABLE PlayList (
id INTEGER NOT NULL UNIQUE,
name   VARCHAR(15) NOT NULL COLLATE NOCASE, --KORGCOLLATE,
length INT NOT NULL DEFAULT 0,
created_date   TEXT,
PRIMARY KEY (id));

CREATE TABLE PlayList_Song (
id_song INT NOT NULL,
id_playlist INT NOT NULL,
song_number INTEGER NOT NULL,
PRIMARY KEY (id_playlist, song_number),
CONSTRAINT fk_PlayList_Song1 FOREIGN KEY (id_song)
REFERENCES Song (id)
ON DELETE CASCADE
ON UPDATE CASCADE,
CONSTRAINT fk_PlayList_Song2 FOREIGN KEY (id_playlist)
REFERENCES PlayList (id)
ON DELETE CASCADE
ON UPDATE CASCADE);

CREATE INDEX PlayList_Song_song_number_idx ON PlayList_Song(song_number);

Now I need to scroll titles filtered by genre_id and artist, both in the
Song table and in a Playlist.
The query for the first case is very fast:
SELECT id AS number,title  FROM Song WHERE genre_id = 0 AND artist = 
'Las ketchup'
AND title >= 'Asereje(karaoke version)' AND (title > 'Asereje(karaoke 
version)' OR number > 258)
ORDER BY title ASC , number ASC LIMIT 4;

The second case is about 35 times slower... so the scrolling is quite 
impossible (or useless)!
SELECT song_number AS number,title  FROM Song AS S, Playlist_Song AS PS 
WHERE S.id = PS.id_song AND
PS.id_playlist = 2 AND genre_id = 0 AND artist = 'Las ketchup'
AND title >= 'Asereje(karaoke version)' AND (title > 'Asereje(karaoke 
version)' OR number > 959)
ORDER BY title ASC , number ASC LIMIT 4;

I also execute the EXPLAIN QUERY PLAN:
1st query:  0 0 TABLE Song WITH INDEX Song_Browse_View_idx ORDER BY

2nd query:  0 0 TABLE Song WITH INDEX Song_Browse_View_idx ORDER BY
  1 1  TABLE Playlist_Song AS PS
So it seems that the second plan (1,1) requires very long time!
How can I optimize such a query?
Cheers



[sqlite] multi processes, so many errores of SQLITE_BUSY and SQLITE_MISUSE

2010-05-11 Thread liubin liu

Multiple processes are getting many SQLITE_BUSY and SQLITE_MISUSE errors...

And system performance is very bad because of the three processes doing
insert/read/update on the database.

How can I improve the sqlite3 operations?
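One common first step, independent of any schema changes, is to give every connection a busy timeout so SQLITE_BUSY is retried internally rather than surfaced immediately. In the C API that is sqlite3_busy_timeout(); a minimal sketch of the same idea in Python, where it is the `timeout` connect parameter (table definition taken from the code below):

```python
import sqlite3

# timeout=2.0 means: when the database is locked by another
# connection, retry for up to 2 seconds before raising
# sqlite3.OperationalError.  C equivalent: sqlite3_busy_timeout(db, 2000);
conn = sqlite3.connect(":memory:", timeout=2.0)
conn.execute("CREATE TABLE t1 (mp_no INTEGER, di INTEGER, data INT64,"
             " rec_time INTEGER, data_type CHAR(1))")
conn.execute("CREATE UNIQUE INDEX i_t1 ON t1 (mp_no, di)")
conn.execute("INSERT OR REPLACE INTO t1 VALUES (1, 2, 3, 4, 5)")
count = conn.execute("SELECT count(*) FROM t1").fetchone()[0]
```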


_my codes___


___proc_main.c_


// create three processes of insert/read/update database.

// proc_1 : insert;
// proc_2 : select;
// proc_3 : update;


#include <stdio.h>
#include <sys/types.h>  // for fork();
#include <unistd.h>     // for fork(); execv(); usleep();
#include <signal.h>     // for kill();

#include <sqlite3.h>


static int createdb (void);

int main (void)
{
int ret = -1;


ret = createdb ();
usleep (5);


int i=0;
while (1)
{
pid_t p1, p2, p3;

p1 = fork();
if (0 == p1)
{
char *argv1[] = {"proc_1"};
ret = execv ("./proc_1", argv1);
}

p2 = fork();
if (0 == p2)
{
char *argv2[] = {"proc_2"};
ret = execv ("./proc_2", argv2);
}

p3 = fork();
if (0 == p3)
{
char *argv3[] = {"proc_3"};
ret = execv ("./proc_3", argv3);
}


usleep (100 * 100);

while (1)
{
ret = kill (p1, SIGKILL);
ret = kill (p2, SIGKILL);
ret = kill (p3, SIGKILL);


usleep (10 * 100);
}
}

return 0;
}

static int createdb (void)
{
int ret = -1;

sqlite3 *db = NULL;
ret = sqlite3_open ("test.db", &db);


ret = sqlite3_exec (db, "CREATE TABLE t1 (mp_no INTEGER, di INTEGER,
data INT64, rec_time INTEGER, data_type CHAR(1) )", NULL,NULL,NULL);
ret = sqlite3_exec (db, "CREATE UNIQUE INDEX i_t1 ON t1 (mp_no, di)",
NULL,NULL,NULL);


ret = sqlite3_close (db);
return ret;
}




___proc_1.c_


#include <stdio.h>
#include <time.h>    // for time();
#include <stdlib.h>  // for srand(); rand();
#include <unistd.h>  // for usleep();
#include <sqlite3.h>

#define DELAY_TIME  2   // 20ms
#define REDO_TIMES  60

int main (void)
{
int ret = -1;

struct tm *tm = NULL;
time_t t;
char datee[30];

FILE *fp1;
fp1 = fopen ("proc_1.log", "a+");

srand ((int) time(0));


sqlite3 *db = NULL;
ret = sqlite3_open ("test.db", &db);


char *sql_f = "INSERT OR REPLACE INTO t1 VALUES (%d, %d, %llu, %d, %d)";
char *sql = NULL;

sqlite3_stmt *p_stmt = NULL;

int i=0, n=5000;
for (i=0; i

Re: [sqlite] Foreign constraints and table recreation

2010-05-11 Thread Simon Slavin

On 11 May 2010, at 8:09am, Patrick Earl wrote:

> sqlite> begin transaction;
> sqlite>
> sqlite> DROP TABLE "ParkingLotLevel";
> sqlite> DROP TABLE "Car";
> sqlite> DROP TABLE "ParkingLot";
> sqlite>
> sqlite> Commit transaction;
> Error: foreign key constraint failed
> 
> And now, we switch Car and ParkingLotLevel...
> 
> sqlite> begin transaction;
> sqlite>
> sqlite> DROP TABLE "Car";
> sqlite> DROP TABLE "ParkingLotLevel";
> sqlite> DROP TABLE "ParkingLot";
> sqlite>
> sqlite> Commit transaction;
> 
> No error!  Since the constraints are deferred, the order of the table
> drops shouldn't matter, but it clearly does.

Your database schema always has to be valid, even in the middle of a 
transaction.  Keep the above sequence but instead of using DROP TABLE, use 
DELETE FROM to delete all the rows from the table.  I'm guessing you'll find 
that it performs the way you expected.  That's what things like DEFERRED are 
for.

You are doing something weird by renaming, creating and destroying tables when 
you have already set up your schema.  Generally people just mess with records 
inside the tables, they don't move the tables themselves around.

Simon.


Re: [sqlite] Select via Wi-fi very slow

2010-05-11 Thread Pavel Ivanov
> Sometimes search found 200 records.  When I do a query via wi-fi takes 1
> minute.
> How can I decrease this time?

Time taken to search for the records does not depend on how many
records found. It depends on how many records were searched through.
Most probably for your query no indexes are used, so the whole table
is scanned through. And that means that the whole database is copied
to your device via WiFi, which apparently is slow.

To decrease the amount of data transfered to the device you can use
indexes. For this particular query you can create index like this:

CREATE INDEX Product_Ind on Product
(description COLLATE NOCASE);
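The effect of Pavel's index is visible in the query plan: with LIKE at its default case-insensitive setting, the prefix optimization only kicks in once a NOCASE-collated index on the column exists. A minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Product (codigo INT, description TEXT)")

def plan(sql):
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r[-1]) for r in rows)

q = ("SELECT codigo, description FROM Product "
     "WHERE description LIKE 'shirt%'")
before = plan(q)   # no usable index: full table scan

# A NOCASE-collated index matches the (case-insensitive) LIKE,
# so the prefix match becomes an index range search.
conn.execute("CREATE INDEX Product_Ind ON Product (description COLLATE NOCASE)")
after = plan(q)
```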


Pavel

On Mon, May 10, 2010 at 6:31 PM, Ernany  wrote:
> Hello,
>
> I'll try to explain my problem:
>
> I have a Symbol MC3090 Data Collector with VB.Net 2005. I have a database
> with 80,000 records on the computer.
>
> For example: I search all words that begin with "shirt" and show in the Grid
> Collector.
> Sometimes search found 200 records.  When I do a query via wi-fi takes 1
> minute.
> How can I decrease this time?
>
> On the computer the same search takes a few seconds ...
>
>
>  Public ConnStringDados As String = "Data Source=" & Address & "\" & NameDB
> & ";Version=3;Compress=True;Synchronous=Off;Cache Size=8000;"
>
>
> My select:
>
> "SELECT codigo, description FROM Product WHERE description Like '" & Word_Search & "%'"
>
>
> Thanks,
>
> Ernany


Re: [sqlite] DB files are different between PC side and instrumentside.

2010-05-11 Thread Lei, Rick (GE EntSol, SensInsp)
Hi, Dan,

Yes, I noticed this setting. The sector size on the SDHC card is 4 Kbytes,
which is different from that of a hard drive. But I think SQLite3 should at
least use most of the space on a sector when it requests a new one. It looks
like SQLite wastes a lot of space on the SDHC card. Of course, space is not
really a problem now, because we can easily find an 8G or 16G SDHC card.
However, I think we still need to take care of it. Maybe I need to return a
smaller number when querying the sector size.

BR
Rick

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
Sent: Tuesday, May 11, 2010 4:18 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] DB files are different between PC side and
instrumentside.


On May 11, 2010, at 2:54 PM, Lei, Rick (GE EntSol, SensInsp) wrote:

>
> Hi,
>
> I ported Sqlite3 to my instrument. The database file is stored in a 
> SDHC card. Sqlite3 runs ok. However I found the database file 
> generated on instrument side is much bigger than the file on PC side. 
> I checked the files generated on instrument by UltraEdit. I found a 
> lot of space which is full filled with 0 in the file. And I can't find

> these parts in the files generated on PC side. I don't know why the 
> files are different between PC side and instrument side. I think they 
> should be same. Is there any advice?

Different page sizes.

   http://www.sqlite.org/pragma.html#pragma_page_size



Re: [sqlite] DB files are different between PC side and instrument side.

2010-05-11 Thread Dan Kennedy

On May 11, 2010, at 2:54 PM, Lei, Rick (GE EntSol, SensInsp) wrote:

>
> Hi,
>
> I ported Sqlite3 to my instrument. The database file is stored in a  
> SDHC
> card. Sqlite3 runs ok. However I found the database file generated on
> instrument side is much bigger than the file on PC side. I checked the
> files generated on instrument by UltraEdit. I found a lot of space  
> which
> is full filled with 0 in the file. And I can't find these parts in the
> files generated on PC side. I don't know why the files are different
> between PC side and instrument side. I think they should be same. Is
> there any advice?

Different page sizes.

   http://www.sqlite.org/pragma.html#pragma_page_size
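The page size difference is easy to reproduce: the same one-table database grows in proportion to the page size chosen before the first table is created. A minimal sketch (paths are temporary; exact byte counts vary by SQLite version, but the ordering holds):

```python
import os
import sqlite3
import tempfile

def db_size(page_size):
    # The page size must be set before the first table is created;
    # after that it is fixed for the life of the database file.
    path = os.path.join(tempfile.mkdtemp(), "t.db")
    conn = sqlite3.connect(path, isolation_level=None)
    conn.execute("PRAGMA page_size = %d" % page_size)
    conn.execute("CREATE TABLE t (x)")
    actual = conn.execute("PRAGMA page_size").fetchone()[0]
    conn.close()
    return actual, os.path.getsize(path)

small_ps, small_file = db_size(512)
large_ps, large_file = db_size(4096)
```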



Re: [sqlite] Computed columns in VIEWs return NULL but should be able to be typed! Any ideas?

2010-05-11 Thread Ben Harper
To determine the type of columns in a view I use
SELECT typeof(column) FROM viewname LIMIT something;

Unfortunately, if most of the column data is NULL then you can end up having to 
scan the entire table.
I'm not sure how SQLite calculates these types, but this simple workaround has 
been OK for me so far.
On my simple views I always see homogeneous column types.

Ben
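Ben's caveat about NULLs is easy to demonstrate: typeof() reports the type of each row's value, so a computed view column yields 'null' for NULL rows rather than the expression's nominal type. A minimal sketch (view and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (birth_year INT);
INSERT INTO t VALUES (1970), (NULL);
CREATE VIEW v AS SELECT 2010 - birth_year AS age FROM t;
""")

# typeof() is evaluated per row: the computed column is 'integer'
# where data is present, but 'null' where birth_year is NULL, which
# is why sampling with LIMIT can miss the real type.
types = [row[0] for row in conn.execute("SELECT typeof(age) FROM v")]
```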

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Stefan Keller
Sent: 08 May 2010 04:29 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Computed columns in VIEWs return NULL but should be able 
to be typed! Any ideas?

Right, I don't want to lessen the scalability of my application. But I
also don't want to introduce redundancy just because some columns of
the view lack a type.

I assume SQLite wants to adhere to the relational model which states:
The result of a select statement is another relation. And "A view is
just a relation (a table), but stores a definition, rather than a set
of tuples." (from chapter fundamentals of "Database Management
Systems" by Ramakrishnan & Gehrke, 2002).

So, for the "consumer" a view should behave like a table. This has the
following advantages:
* Decoupling: rename physical column names without breaking code that
reads views.
* Security: one can grant read permission on a view without granting
any permission on the underlying table.
* Simplicity: it's easier to write queries.
* Helps to avoid redundancy: views can have calculated columns, like
age (from birthdate) or tax or rebate.

So it's pretty clear to me that views should have the option to return
types. CAST could be a solution.
Determination of the return type of a calculation could be another
step. That's from the basics of computer languages.

-S.

2010/5/7 Pavel Ivanov :
>> To Pavel: My application reads the column types out in order to pretty
>> print the values - as mentioned by Tom - but also to generate a dialog
>> for entering new data (of course combined with INSTEAD OF TRIGGERs).
>
> So as I see it: you have some universal code for displaying and
> inputing data. And you don't want to lessen scalability of your
> application by hard-coding the relation between column names and their
> data types. So you can create additional table that will contain this
> information. And I think this solution is better than just relying on
> declared type of columns - more straightforward and more
> understandable by somebody coming to your project in the future.
>
>
> Pavel
>
> On Thu, May 6, 2010 at 7:11 PM, Stefan Keller  wrote:
>> Thank you, Tom and Dan, for your constructive answers.
>>
>> To Pavel: My application reads the column types out in order to pretty
>> print the values - as mentioned by Tom - but also to generate a dialog
>> for entering new data (of course combined with INSTEAD OF TRIGGERs).
>>
>> I understand that it's difficult to implement result-types for
>> expressions in general. To me it would be enough if there would be at
>> least one way to assign result-types with a CAST "wrapper" as I have
>> mentioned before.
>>
>> Does anybody know whom to present this proposal in order remedy this
>> current inconsistency in SQLite? sqlite-dev?
>>
>> In the meantime I thought of a hack and to assign the result-type by
>> hand in the data dictionary after having created the VIEW.
>> Would this be a feasible work around?
>>
>> Yours, S.
>>
>> 2010/5/6 BareFeetWare :
>>> On 06/05/2010, at 2:51 PM, Dan Bishop wrote:
>>>
 BareFeetWare wrote:
>
>>>
> I've had the same issue. In the end I had to parse my view functions in 
> my own code and look for functions that give a particular type of result. 
> So, for instance, round() gives an integer, round(..., 2) gives a real, 
> concat() gives text. I also look for a cast(... as type) to use that 
> declared type. It's fiddly and I would have hoped SQLite would have at 
> least declared the type if a cast was present, but it seems not.
>
>>>
 A CAST expression could be assigned a declared type
>>>
>>> Yes, that should be fairly easy to do in SQLite, simpler than my adding it 
>>> in my own code (which then requires my own parser - not a trivial exercise).
>>>
 but it would be difficult to implement it for expressions in general.  Not 
 only would you have to declare a type for every function
>>>
>>> Yes, probably half the functions don't have a consistently returned type 
>>> (eg length, concat). The ones that don't (eg coalesce) should just be left 
>>> to return a null (unknown) type. The schema developer can simply wrap those 
>>> expressions in a cast if they want a predetermined output type.
>>>
 but you'd have to do it for operators as well, and what type should A * B 
 be if A is "SMALLINT" and B is "UINT"?
>>>
>>> I hadn't thought of operators. As far as I know, mathematical operators (* 
>>> 

Re: [sqlite] find same type

2010-05-11 Thread Simon Davies
On 11 May 2010 08:49, Andrea Galeazzi  wrote:
> I've got this table
> TABLE T (
>    id                    INTEGER NOT NULL UNIQUE,
>   file_type            VARCHAR(10) NOT NULL)
> My goal is to check if a certain selection has all the same values. I
> thought that the following statement should be enough for my aim:
> SELECT (SELECT file_type FROM T T1, T T2 WHERE T1.id IN (1,4,5) AND
> T2.id IN (1,4,5) AND T1.file_type <> T2.file_type LIMIT 1) IS NULL
> Does anyone know a faster query to achieve that?

I've not checked performance, but you can eliminate the join with:

select (select count( distinct file_type ) from t where id in( 1,4,5 ) )=1;
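Simon's count(distinct) form can be checked quickly (data below is illustrative): the subselect compares to 1, yielding 1 when every selected row shares one file_type and 0 otherwise:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER NOT NULL UNIQUE,"
             " file_type VARCHAR(10) NOT NULL)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 'mp3'), (4, 'mp3'), (5, 'mp3'), (7, 'wav')])

# All of ids 1,4,5 are 'mp3' -> exactly one distinct type -> 1 (true).
all_same = conn.execute(
    "SELECT (SELECT count(DISTINCT file_type) FROM t"
    " WHERE id IN (1,4,5)) = 1").fetchone()[0]

# ids 1,4,7 mix 'mp3' and 'wav' -> two distinct types -> 0 (false).
mixed = conn.execute(
    "SELECT (SELECT count(DISTINCT file_type) FROM t"
    " WHERE id IN (1,4,7)) = 1").fetchone()[0]
```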

> Cheers.

Regards,
Simon


[sqlite] DB files are different between PC side and instrument side.

2010-05-11 Thread Lei, Rick (GE EntSol, SensInsp)
 
Hi,

I ported SQLite3 to my instrument. The database file is stored on an SDHC
card. SQLite3 runs OK. However, I found the database file generated on the
instrument side is much bigger than the file on the PC side. I checked the
files generated on the instrument with UltraEdit and found a lot of space
that is filled with zeros, and I can't find these parts in the files
generated on the PC side. I don't know why the files are different
between the PC side and the instrument side. I think they should be the
same. Is there any advice?

Thanks!

Rick


[sqlite] find same type

2010-05-11 Thread Andrea Galeazzi
I've got this table
TABLE T (
    id         INTEGER NOT NULL UNIQUE,
    file_type  VARCHAR(10) NOT NULL)
My goal is to check if a certain selection has all the same values. I 
thought that the following statement should be enough for my aim:
SELECT (SELECT file_type FROM T T1, T T2 WHERE T1.id IN (1,4,5) AND 
T2.id IN (1,4,5) AND T1.file_type <> T2.file_type LIMIT 1) IS NULL
Does anyone know a faster query to achieve that?
Cheers.


Re: [sqlite] Foreign constraints and table recreation

2010-05-11 Thread Patrick Earl
Okay, I tried the strategy discussed previously but I'm still having
problems.  Either I'm not seeing something, or there's a bug in the
foreign constraint support.  Take a look at the following two
execution snippets:

sqlite>
sqlite> commit transaction;
sqlite>
sqlite> begin transaction;
sqlite>
sqlite> DROP TABLE "ParkingLotLevel";
sqlite> DROP TABLE "Car";
sqlite> DROP TABLE "ParkingLot";
sqlite>
sqlite> Commit transaction;
Error: foreign key constraint failed

And now, we switch Car and ParkingLotLevel...

sqlite>
sqlite> commit transaction;
sqlite>
sqlite> begin transaction;
sqlite>
sqlite> DROP TABLE "Car";
sqlite> DROP TABLE "ParkingLotLevel";
sqlite> DROP TABLE "ParkingLot";
sqlite>
sqlite> Commit transaction;
sqlite>

No error!  Since the constraints are deferred, the order of the table
drops shouldn't matter, but it clearly does.

I've included a full failing example at the following link.  Beware
that it's quite long and full of boring automated code.  The
interesting part is right at the end and corresponds with what I wrote
above.

http://patearl.net/files/broken4.txt

This problem (or some variation thereof) has been making me crazy for
the past three days.  So nice to finally have a reasonable looking
test case. :)

Patrick Earl