[sqlite] sqlite Exception System.InvalidCastException: Specified cast is not valid.

2009-06-16 Thread Seysan
This is the Code in C#:

SQLiteDataReader rdr;
SQLiteCommand contentCommand;
String cmd = String.Format(
    "SELECT filename, content, filesize FROM documents WHERE did={0}", did);

contentCommand = sqlconn.CreateCommand();
contentCommand.CommandText = cmd;
rdr = contentCommand.ExecuteReader();
rdr.Read();
MessageBox.Show(rdr.GetFieldType(1).ToString());
MessageBox.Show(rdr.FieldCount.ToString());

long len = rdr.GetBytes(1, 0, null, 0, 0);   // <-- the exception is thrown here
Byte[] buf = new Byte[len];
rdr.GetBytes(1, 0, buf, 0, (int)len);

I get the exception there: System.InvalidCastException: Specified cast is
not valid.

GetFieldType shows: System.Byte[]

What is it that I'm doing wrong?
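For what it's worth, here is a minimal, untested sketch of an alternative that
sidesteps GetBytes entirely. It assumes the same reader and query as above, and
that the provider really materializes the column as a byte[] (which the
GetFieldType output suggests):

    rdr.Read();
    if (!rdr.IsDBNull(1))
    {
        // Read the whole value and cast; works when the stored value is a real BLOB.
        byte[] content = (byte[])rdr.GetValue(1);   // or: (byte[])rdr["content"]
        MessageBox.Show(content.Length.ToString()); // sanity check on the size
    }

If the cast also fails, the row probably does not hold a real BLOB in that
column (for example, the data was inserted as text), which would also explain
why GetBytes refuses it.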
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Reading Blob/Writing to File in C# 2008

2009-06-16 Thread Seysan
I still have that problem.

Can anyone point me in the right direction?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about searches

2009-06-16 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

John Machin wrote:
> I have developed a table which maps most latin-decorated Unicode 
> characters into the non-decorated basic form. 

This is a fascinating article by Sean Burke (a linguist) about
converting all Unicode characters into US-ASCII.  The conversion is
primarily based on sound, so in theory running soundex on the result
could be somewhat useful.

  http://interglacial.com/~sburke/tpj/as_html/tpj22.html

You can find his tables, encoded as Perl data structures, at this link:


http://cpansearch.perl.org/src/SBURKE/Text-Unidecode-0.04/lib/Text/Unidecode/

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)

iEYEARECAAYFAko4c4wACgkQmOOfHg372QQwUwCglqxQzZSGjHHoL13/L8Kw6NrX
46wAn3q12ugcrBryawTwpV8bjs/nYlZe
=XPU9
-END PGP SIGNATURE-
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread John Machin
On 17/06/2009 11:52 AM, Dennis Cote wrote:
> Jens Páll Hafsteinsson wrote:
>> Closing and opening again did not speed up steps 1-4, it actually slowed 
>> things down even more. The curve from the beginning is a bit similar to a 
>> slightly flattened log curve. When I closed the database and started the 
>> test again, a similar curve appeared again, but now starting from where the 
>> first run left off.
>>
>> I've been running the same 3.6.15 since this afternoon (the previous test 
>> was using 3.6.14) and it seems to flatten out fairly quickly but it is 
>> significantly slower (2.3 to 1.3 times slower, depending on where you 
>> measure it using the data I have). I'm not that worried about that for the 
>> time being; I'm just hoping it will stay flat.
>>
>> JP
>> On 16 Jun 2009, at 1:06pm, Jens Páll Hafsteinsson wrote:
>>> 1.   start a transaction
>>> 2.   insert 1000 records
>>> 3.   commit
>>> 4.   repeat steps 1-3 100 times
>>> 5.   delete everything from the table
>>> 6.   Start again at step 1
> 
> I suspect that you may be using an autoincrement id field and then 
> running into the extra work (both CPU load and increased disk space) 
> needed to handle the variable sized integer storage method used by 
> SQLite. This would lead to the type of logarithmic growth you are 
> seeing. The first few iterations used short single byte integer values, 
> the next bunch use 2 byte integer values, etc. The autoincrement field 
> would cause SQLite to continue at the same speed after restarting the 
> application as you have described, since the next field values used 
> would continue from where it left off at the end of the previous run.

Even after "delete everything from the table"; see below.

> 
> I would have expected the time to stabilize on 3 byte values fairly 
> quickly, and then only change again when switching to values that 
> required 4 bytes.
> 
> This may be a part of the answer even if it is not the complete answer.

 From Jens's description, he is writing only 1000 * (1 + 100) = 101,000 
records before "delete everything from the table". A 3-byte variable 
integer will hold a number 16K to 2M approx. 4-byte: 2M to 268M approx.

On the surface, should be OK. UNDER the surface:

sqlite> create table t (p integer primary key autoincrement, x text);
sqlite> select * from sqlite_master;
table|t|t|2|CREATE TABLE t (p integer primary key autoincrement, x text)
table|sqlite_sequence|sqlite_sequence|3|CREATE TABLE sqlite_sequence(name,seq)
sqlite> insert into t(x) values('blah');
sqlite> insert into t(x) values('yadda');
sqlite> select * from sqlite_sequence;
t|2
sqlite> delete from t;
sqlite> select * from sqlite_sequence;
t|2 == whoops

Documented (and good behaviour) -- never re-issue a key while the table 
exists.

Possible solution (apart from DROP TABLE):

sqlite> delete from sqlite_sequence where name = 't';
sqlite> select * from sqlite_sequence;
sqlite>
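If the test harness drives SQLite from code rather than the shell, the same
reset can be issued there too. A rough C# sketch, assuming System.Data.SQLite
and an already-open connection called conn (both names are just for
illustration):

    using (SQLiteCommand cmd = conn.CreateCommand())
    {
        // Clear the rows and the stored AUTOINCREMENT high-water mark in one go.
        cmd.CommandText = "DELETE FROM t; DELETE FROM sqlite_sequence WHERE name = 't';";
        cmd.ExecuteNonQuery();
    }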


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Jay A. Kreibich
On Wed, Jun 17, 2009 at 11:52:45AM +1000, John Machin scratched on the wall:
> On 17/06/2009 6:17 AM, Hoover, Jeffrey wrote:
> 
> > One other note, if you have a primary key whose value is continually
> > increasing your pk index can become imbalanced and therefore
> > inefficient.
> 
> A B-tree becomes imbalanced? How so?
> 
> http://www.sqlite.org/fileformat.html#btree_structures says: "The tree 
> is always of uniform height, meaning the number of intermediate levels 
> between each leaf node page and the root page is the same."
> 
> Do you have any evidence to the contrary?

  It won't become imbalanced, but if you're inserting rows with an
  explicit INTEGER PRIMARY KEY value in a non-increasing order, the
  tree will require sorting and re-balancing.  That takes time and
  requires additional disk writes (and, as others have pointed out,
  disk writes are VERY expensive due to their transactional nature).

  Also, depending on just how mixed up the pattern is, you can get into
  situations where a very large index will overflow the default 1500
  page cache-size.  It is well known that if you want to build an index
  on a large table, increasing the cache size will help make that
  process faster.  It might be true here as well.  Try setting the page
  cache to something nice and huge, like 10x or 100x the default, and
  see if that helps.
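  For example, sketched here with System.Data.SQLite in C# purely as an
  illustration (the connection name is made up; the same PRAGMA works from
  the sqlite3 shell):

      using (SQLiteCommand cmd = conn.CreateCommand())
      {
          // The default is on the order of a couple of thousand pages; try
          // 10x-100x that before heavy index work, then drop it back if
          // memory is tight.  cache_size is per connection.
          cmd.CommandText = "PRAGMA cache_size = 20000;";
          cmd.ExecuteNonQuery();
      }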

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about searches

2009-06-16 Thread John Machin
On 17/06/2009 1:19 AM, Christophe Leske wrote:

>>> So far ,  so good, but my client also expects ANY simplification of a
>>> character to be recognized:
>>> Cote d'azur for instance  should return "Côte d'azur"
>>> or the Sao Paulo issue - how can a search for "Sao Paulo" return "Sào
>>> Paulo" in the result set?
>>> 
>> How are these examples different from previous ones?
>>   
> I am sorry, but I find this to be quite obvious?
> Here, the problematic char is to be found in the *result set*, not in 
> the query itself.

It's NOT different. You need to map BOTH your database values and your 
query values into the same space and then compare them. Don't fall into 
the trap of assuming that your database is correctly accented.

> 
> How do you educate SQlite to return me "Sào Paulo" if only "Sao Paulo" 
> is being entered?
> How do I know which character to substitute with a placeholder?
> 
> Is it
> S%o Paulo to look for?
> Or Sa% Paulo?
> Or Sao P%ulo?
> 
> I can't know this beforehand. These are just examples, I need a generic 
> solution if possible.
> 
> All i can see so far is to build a table of all special characters ever 
> used in the 24000 names of cities which make problems and remap them 
> accordingly.

That's exactly what you need. And you're not the first person with this 
problem. See for example 
http://mail.python.org/pipermail/python-list/2008-July/669592.html

The technique discussed there starts off with using the unicodedata 
database and finding dynamically (and caching) Unicode characters that 
can be decomposed into a basic latin letter plus one or more accents, 
backed up by a table of cases not found by that technique. Great for 
likers of clever code who have lots of CPU and disk space (unicodedata 
is huge!) to spare.
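For what it's worth, a rough equivalent of that decomposition idea in C# (only
because that's the language in the other thread on this list; it leans on the
framework's own Unicode tables rather than the Python code at the link above):

    using System.Globalization;
    using System.Text;

    static class Latinize
    {
        public static string StripAccents(string s)
        {
            // Decompose (e.g. "ô" -> "o" + combining circumflex), then drop the marks.
            string decomposed = s.Normalize(NormalizationForm.FormD);
            var sb = new StringBuilder(decomposed.Length);
            foreach (char c in decomposed)
            {
                if (CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
                    sb.Append(c);
            }
            return sb.ToString().Normalize(NormalizationForm.FormC);
        }
    }

    // Latinize.StripAccents("Côte d'azur") -> "Cote d'azur"; thorn, eszett and
    // friends still need an explicit lookup table, as described below.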

I have developed a table which maps most latin-decorated Unicode 
characters into the non-decorated basic form. Sometimes 2 ASCII 
characters will be produced (e.g. latin capital letter thorn -> "Th") 
but latin small letter u with diaeresis -> "u" -- not "ue" which is 
German-specific.

I can let you have a copy if you are interested. What is your 
implementation language? C/C++?

BTW someone mentioned smashing everything into lowercase for comparison 
purposes at some stage -- I'd suggest uppercase especially if you have a 
few of the good old eszett in your data :-)

BTW2: The only sane usage of soundex IMHO is as a strawman when 
proposing phonetic matching algorithms like NYSIIS and [Double 
]Metaphone :-)

HTH,
John
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Dennis Cote
Jens Páll Hafsteinsson wrote:
> Closing and opening again did not speed up steps 1-4, it actually slowed 
> things down even more. The curve from the beginning is a bit similar to a 
> slightly flattened log curve. When I closed the database and started the test 
> again, a similar curve appeared again, but now starting from where the first 
> run left off.
>
> I've been running the same 3.6.15 since this afternoon (the previous test was 
> using 3.6.14) and it seems to flatten out fairly quickly but it is 
> significantly slower (2.3 to 1.3 times slower, depending on where you measure 
> it using the data I have). I'm not that worried about that for the time 
> being; I'm just hoping it will stay flat.
>
> JP
>
>
>
> On 16 Jun 2009, at 1:06pm, Jens Páll Hafsteinsson wrote:
>
>   
>> 1.   start a transaction
>>
>> 2.   insert 1000 records
>>
>> 3.   commit
>>
>> 4.   repeat steps 1-3 100 times
>>
>> 5.   delete everything from the table
>>
>> 6.   Start again at step 1
>> 
>
>   
Jens,

Can you post the schema for your table and the index (i.e. the actual 
schema you are using for the test), and also the code that shows how you 
are assigning values to the columns in your table when you do the inserts?

I suspect that you may be using an autoincrement id field and then 
running into the extra work (both CPU load and increased disk space) 
needed to handle the variable sized integer storage method used by 
SQLite. This would lead to the type of logarithmic growth you are 
seeing. The first few iterations used short single byte integer values, 
the next bunch use 2 byte integer values, etc. The autoincrement field 
would cause SQLite to continue at the same speed after restarting the 
application as you have described, since the next field values used 
would continue from where it left off at the end of the previous run.

I would have expected the time to stabilize on 3 byte values fairly 
quickly, and then only change again when switching to values that 
required 4 bytes.

This may be a part of the answer even if it is not the complete answer.

Dennis Cote
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread John Machin
On 17/06/2009 6:17 AM, Hoover, Jeffrey wrote:

> One other note, if you have a primary key whose value is continually
> increasing your pk index can become imbalanced and therefore
> inefficient.

A B-tree becomes imbalanced? How so?

http://www.sqlite.org/fileformat.html#btree_structures says: "The tree 
is always of uniform height, meaning the number of intermediate levels 
between each leaf node page and the root page is the same."

Do you have any evidence to the contrary?

> You may be able to get around this by using INTEGER
> PRIMARY KEY (or INTEGER PRIMARY KEY AUTOINCREMENT).

So with autoincrement SQLite3 generates the "continually increasing" pk 
values instead of the app doing it, and that solves the alleged problem?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] minimum cache_size for a database connection?

2009-06-16 Thread D. Richard Hipp

On Jun 16, 2009, at 8:28 PM, Dave Toll wrote:

> Hello list
>
>
>
> I've noticed that if I call "PRAGMA cache_size=0", my database rows  
> are
> still cached (very little disk access observed). If I call "PRAGMA
> cache_size=1" I can see that there is very little caching (disk access
> observed). Is there a minimum allowed cache_size setting? Does
> cache_size=0 mean "cache everything"? I am using SQLite 3.6.13 with  
> the
> default page cache, and shared-cache enabled.


I think any  cache size suggestion less than 10 is ignored.  That used  
to be the case.  I don't know if it still is.

Note that this does *not* control your operating system's file cache.   
This is SQLite's user-space cache only.

If more pages than the cache allotment are required, the cache  
size limit is automatically increased.


D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] minimum cache_size for a database connection?

2009-06-16 Thread Dave Toll
Hello list

 

I've noticed that if I call "PRAGMA cache_size=0", my database rows are
still cached (very little disk access observed). If I call "PRAGMA
cache_size=1" I can see that there is very little caching (disk access
observed). Is there a minimum allowed cache_size setting? Does
cache_size=0 mean "cache everything"? I am using SQLite 3.6.13 with the
default page cache, and shared-cache enabled.

 

Cheers,

Dave.

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite3 database upgrade from 3.5.4 to 3.6.15 issue

2009-06-16 Thread D. Richard Hipp

On Jun 16, 2009, at 6:03 PM, Kalyani Phadke wrote:

>
> if I create dump using sqlite3.5.4 and recreate DB using that dump  
> data
> using sqlite3.5.4,everything works fine. if I create dump using
> sqlite3.5.4 and recreate DB using that dump data using  
> sqlite3.6.15,gets
> error.
>
> The text generated from .dump command of sqlite3 version 3.5.4.
> CREATE TABLE Users
> (
>ID INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT ,
>name VARCHAR(50) NOT NULL COLLATE NOCASE,
>Page VARCHAR(255) NULL DEFAULT ("../xyz/main.asp")
>
> );
>
> Changed text to
>
> CREATE TABLE Users
> (
>ID INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT ,
>name VARCHAR(50) NOT NULL COLLATE NOCASE,
> Page VARCHAR(255) NULL DEFAULT ('../xyz/main.asp')
> );
>
> After changing everything seems to be fine.


The bug was in 3.5.4.  It should never have accepted the double-quoted  
string as a default value.  SQL always uses single quotes for string  
literals.


D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Reading Blob/Writing to File in C# 2008

2009-06-16 Thread Joel Lucsy
Hmmm, I know I've done this before. I'll have to dig deeper and find an example.

--
Joel Lucsy

Seysan wrote:

When I used your code I get:  No current row

If I use: rdr.Read();
before your code, I get the same exception as before.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Datatypes (D. Richard Hipp)

2009-06-16 Thread Fred Williams
I'm aware of the limitations of the 92 Standard, as I too have suffered the
same fate.  But one thing for sure, conforming to the standard certainly
makes migration between "Standard" implementations somewhat simpler.

I'm not speaking standardally but speaking logically. (i.e. Mr. Spock,
logical) I still feel an error message rather than an unrequested result to
be much more palatable and much less potentially destructive.

Fred

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org]on Behalf Of Simon Slavin
Sent: Tuesday, June 16, 2009 12:23 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Datatypes (D. Richard Hipp)



On 16 Jun 2009, at 5:32pm, Fred Williams wrote:

> Still think, logically speaking, the construct should throw an error
> message, rather than make stealth changes to the expected result.

I suppose it might be possible to provide SQLite with a 'conform to
standard' mode you could select using a PRAGMA.  However, I used to
have to work entirely within SQL92 (working for a big client which
insisted on standard conformance) and it was a pain: the standard
lacks many features we find very useful.

Simon.
--
  http://www.hearsay.demon.co.uk | I'd expect if a computer was involved
 | it all would have been much worse.
No Buffy for you.|-- John "West" McKenna
Leave quickly now. -- Anya   |  THE FRENCH WAS THERE

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Reading Blob/Writing to File in C# 2008

2009-06-16 Thread Seysan
When I used your code I get:  No current row

If I use: rdr.Read();
before your code, I get the same exception as before.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Reading Blob/Writing to File in C# 2008

2009-06-16 Thread AFShin Abdollahi
The rdr is:
SQLiteDataReader rdr;
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Reading Blob/Writing to File in C# 2008

2009-06-16 Thread Joel Lucsy
On Tue, Jun 16, 2009 at 6:44 PM, Seysan wrote:
> String cmd = String.Format("SELECT filename,content FROM documents
> WHERE did={0}",did);
>           contentCommand = sqlconn.CreateCommand();
>           contentCommand.CommandText = cmd;
>           rdr = contentCommand.ExecuteReader();

If rdr is an IDataReader, then it should work like:

// GetBytes returns a long; with a null buffer it just reports the length.
// In the SELECT above the BLOB ("content") is column 1, not column 0.
long len = rdr.GetBytes( 1, 0, null, 0, 0 );
byte[] buf = new byte[ len ];
rdr.GetBytes( 1, 0, buf, 0, (int) len );

-- 
Joel Lucsy
"The dinosaurs became extinct because they didn't have a space
program." -- Larry Niven
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Reading Blob/Writing to File in C# 2008

2009-06-16 Thread Joel Lucsy
On Tue, Jun 16, 2009 at 6:44 PM, Seysan wrote:
> String cmd = String.Format("SELECT filename,content FROM documents
> WHERE did={0}",did);
>           rdr = contentCommand.ExecuteReader();

What type is rdr? I believe it should be byte[]

-- 
Joel Lucsy
"The dinosaurs became extinct because they didn't have a space
program." -- Larry Niven
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Reading Blob/Writing to File in C# 2008

2009-06-16 Thread Seysan
Hello,
I'm using "System.Data.SQLite" in VS 2008 C#.

I'm using a DataReader to read the data.
Everything works fine except the BLOB.
I want to get that BLOB and write it to a file.

The BLOB data is mostly pictures and music.

Here is a bit of code (did is the document id):

String cmd = String.Format(
    "SELECT filename, content FROM documents WHERE did={0}", did);

contentCommand = sqlconn.CreateCommand();
contentCommand.CommandText = cmd;
rdr = contentCommand.ExecuteReader();

I tried rdr.GetBytes but it gives me this error:

System.InvalidCastException was unhandled
 Message="Specified cast is not valid."
 Source="System.Data.SQLite"
 StackTrace:
  at System.Data.SQLite.SQLiteDataReader.VerifyType(Int32 i, DbType typ)
  at System.Data.SQLite.SQLiteDataReader.GetBytes(Int32 i, Int64
fieldOffset, Byte[] buffer, Int32 bufferoffset, Int32 length)
  at SDA_Viewer.Form1.btnShow_Click(Object sender, EventArgs e)
in Form1.cs:line 139
  at System.Windows.Forms.Control.OnClick(EventArgs e)
  at System.Windows.Forms.Button.OnClick(EventArgs e)
  at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
  at System.Windows.Forms.Control.WmMouseUp(Message& m,
MouseButtons button, Int32 clicks)
  at System.Windows.Forms.Control.WndProc(Message& m)
  at System.Windows.Forms.ButtonBase.WndProc(Message& m)
  at System.Windows.Forms.Button.WndProc(Message& m)
  at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message&
m)
  at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message&
m)
  at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr
hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
  at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
  at
System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32
dwComponentID, Int32 reason, Int32 pvLoopData)
  at
System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32
reason, ApplicationContext context)
  at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32
reason, ApplicationContext context)


I've read most of the mailing list archive and other resources on the
Web; so far, nothing!

Please Help,
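For reference, a minimal sketch of the whole read-and-save path. It is untested
against this schema; the parameter binding is only there to avoid building the
SQL with String.Format, and the variable names follow the snippet above:

    // Requires: using System.Data.SQLite; using System.IO;
    using (SQLiteCommand cmd = sqlconn.CreateCommand())
    {
        cmd.CommandText = "SELECT filename, content FROM documents WHERE did = @did";
        cmd.Parameters.AddWithValue("@did", did);

        using (SQLiteDataReader reader = cmd.ExecuteReader())
        {
            if (reader.Read() && !reader.IsDBNull(1))
            {
                string filename = reader.GetString(0);

                // Column 1 is the BLOB; passing a null buffer returns its length.
                long len = reader.GetBytes(1, 0, null, 0, 0);
                byte[] buf = new byte[len];
                reader.GetBytes(1, 0, buf, 0, (int)len);

                File.WriteAllBytes(filename, buf);
            }
        }
    }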
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] NULL data .dump command

2009-06-16 Thread Joanne Pham
Hi All,
I ran two queries:
    1) select * from signature;
        I didn't see the "SQL error: database disk image is malformed"

    2) But if I ran the "select * from sig order by peerid;" then I did see 
the "malformed" error:
        ...
        11020876449360377856|345049073990|1276|368|230383|1857|1245183730|2|0
        SQL error: database disk image is malformed
   
Is the index corrupted somewhere?
Your help is greatly appreciated.
Thanks,
JP





From: Joanne Pham 
To: General Discussion of SQLite Database 
Sent: Tuesday, June 16, 2009 3:26:37 PM
Subject: [sqlite] NULL data .dump command

Hi All,
I have the table is defined as below:
CREATE TABLE `signature` (
  `sig` char(50) NOT NULL, 
  `id` bigint(20) default '0',

But when I ran the following command:
    .output mySelect
    select * from signature;
    I didn't see any NULL values in the mySelect file at all.
But when I ran the following command:
    .output myDump
    .dump signature
    then I viewed the file it has the following NULL values
       INSERT INTO "signature" 
VALUES('573535428650752000',345049073990,1294,365,230296,414,1245183707,2,'0');
            INSERT INTO "signature" 
VALUES(NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
           sig is defined as NOT NULL, so why do I have NULL in some of these 
INSERT statements in my dump but not in the select output?
Thanks,
JP


      
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



  
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] NULL data .dump command

2009-06-16 Thread Joanne Pham
Hi All,
I have the table is defined as below:
CREATE TABLE `signature` (
  `sig` char(50) NOT NULL, 
  `id` bigint(20) default '0',

But when I ran the following command:
    .output mySelect
    select * from signature;
    I didn't see any NULL values in the mySelect file at all.
But when I ran the following command:
    .output myDump
    .dump signature
    then I viewed the file it has the following NULL values
       INSERT INTO "signature" 
VALUES('573535428650752000',345049073990,1294,365,230296,414,1245183707,2,'0');
            INSERT INTO "signature" 
VALUES(NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
           sig is defined as NOT NULL, so why do I have NULL in some of these 
INSERT statements in my dump but not in the select output?
Thanks,
JP


  
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite3 database upgrade from 3.5.4 to 3.6.15 issue

2009-06-16 Thread Kalyani Phadke
 
If I create a dump using sqlite 3.5.4 and recreate the DB from that dump
using sqlite 3.5.4, everything works fine. If I create the dump using
sqlite 3.5.4 and recreate the DB from that dump using sqlite 3.6.15, I get
an error.
 
The text generated from .dump command of sqlite3 version 3.5.4.
CREATE TABLE Users
(
ID INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT ,
name VARCHAR(50) NOT NULL COLLATE NOCASE,   
Page VARCHAR(255) NULL DEFAULT ("../xyz/main.asp")

);

Changed text to

CREATE TABLE Users
(
ID INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT ,
name VARCHAR(50) NOT NULL COLLATE NOCASE,   
 Page VARCHAR(255) NULL DEFAULT ('../xyz/main.asp')
);

After changing everything seems to be fine. 

Thanks,
-Kalyani
-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of D. Richard Hipp
Sent: Tuesday, June 16, 2009 12:23 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Sqlite3 database upgrade from 3.5.4 to 3.6.15
issue


On Jun 16, 2009, at 3:10 PM, Kalyani Phadke wrote:

> But before entering the bug , I would like to find out how this 
> happened. Whats the cause of it. Because I am not able to duplicate it

> all the time. IS there any way to find out what went wrong?
> Thanks,
>> I have to go manually and edit sql ,make changes so that everything 
>> works fine.
>>

What do you have to manually edit to make it work?  Please show us the
text before and after you edit.


D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Datatypes (D. Richard Hipp)

2009-06-16 Thread Fred Williams
I'll settle for wishful thinking if that is what it takes to clone a table
schema with a Create...Select...statement.

Fred

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org]on Behalf Of Igor Tandetnik
Sent: Tuesday, June 16, 2009 12:23 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Datatypes (D. Richard Hipp)


Fred Williams 
wrote:
> What!  The standard is lacking?  Will wonders never cease?
>
> Thanks, you saved me from some boring reference reading.
>
> Still think, logically speaking, the construct should throw an error
> message, rather than make stealth changes to the expected result.

But why do you expect a particular result different from the one you
actually get? What basis do you have for your expectations, other than
wishful thinking?

Igor Tandetnik



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Douglas E. Fajardo
This may be a red herring, but how do you generate the 'random' characters for 
the test? Could entropy exhaustion be affecting the results?

Just a thought...
*** Doug Fajardo

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of P Kishor
Sent: Tuesday, June 16, 2009 12:29 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database inserts gradually slowing down

On Tue, Jun 16, 2009 at 1:59 PM, Jens Páll
Hafsteinsson wrote:
> Ok, but you might have to run this a bit longer (or more iterations).

I ran the code a bit longer, and yes, I do notice an increase. Here
are my results after 610 iterations... I have provided the numbers for
10 iterations every hundred (my numbers are lower this time than the
earlier posted 8 seconds or so per iteration because of a different
way of generating random strings).

1.  3 wallclock secs ( 0.87 usr +  0.24 sys =  1.11 CPU)
2.  2 wallclock secs ( 0.88 usr +  0.26 sys =  1.14 CPU)
3.  3 wallclock secs ( 0.89 usr +  0.26 sys =  1.15 CPU)
4.  2 wallclock secs ( 0.88 usr +  0.27 sys =  1.15 CPU)
5.  3 wallclock secs ( 0.88 usr +  0.26 sys =  1.14 CPU)
6.  2 wallclock secs ( 0.88 usr +  0.27 sys =  1.15 CPU)
7.  3 wallclock secs ( 0.89 usr +  0.27 sys =  1.16 CPU)
8.  2 wallclock secs ( 0.88 usr +  0.26 sys =  1.14 CPU)
9.  3 wallclock secs ( 0.89 usr +  0.27 sys =  1.16 CPU)
10.  2 wallclock secs ( 0.88 usr +  0.27 sys =  1.15 CPU)
..
200.  4 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
201.  4 wallclock secs ( 0.89 usr +  0.29 sys =  1.18 CPU)
202.  4 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
203.  3 wallclock secs ( 0.88 usr +  0.31 sys =  1.19 CPU)
204.  4 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
205.  4 wallclock secs ( 0.88 usr +  0.31 sys =  1.19 CPU)
206.  4 wallclock secs ( 0.90 usr +  0.30 sys =  1.20 CPU)
207.  4 wallclock secs ( 0.89 usr +  0.29 sys =  1.18 CPU)
208.  4 wallclock secs ( 0.88 usr +  0.30 sys =  1.18 CPU)
209.  4 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
210.  3 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
..
300.  5 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
301.  4 wallclock secs ( 0.90 usr +  0.33 sys =  1.23 CPU)
302.  5 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
303.  6 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
304.  5 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
305.  6 wallclock secs ( 0.90 usr +  0.33 sys =  1.23 CPU)
306.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
307.  5 wallclock secs ( 0.89 usr +  0.31 sys =  1.20 CPU)
308.  8 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
309. 13 wallclock secs ( 0.90 usr +  0.35 sys =  1.25 CPU)
310.  6 wallclock secs ( 0.90 usr +  0.33 sys =  1.23 CPU)
..
400.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
401.  5 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
402.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
403.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
404.  5 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
405.  6 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
406.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
407.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
408.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
409.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
410.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
..
500.  5 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
501.  6 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
502.  6 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
503.  6 wallclock secs ( 0.90 usr +  0.36 sys =  1.26 CPU)
504.  6 wallclock secs ( 0.88 usr +  0.37 sys =  1.25 CPU)
505. 11 wallclock secs ( 0.89 usr +  0.36 sys =  1.25 CPU)
506.  9 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
507.  5 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
508.  6 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
509.  6 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
510.  6 wallclock secs ( 0.88 usr +  0.34 sys =  1.22 CPU)
..
600.  6 wallclock secs ( 0.89 usr +  0.36 sys =  1.25 CPU)
601.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
602.  6 wallclock secs ( 0.88 usr +  0.35 sys =  1.23 CPU)
603.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
604.  6 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
605.  6 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
606.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
607.  6 wallclock secs ( 0.88 usr +  0.35 sys =  1.23 CPU)
608.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
609.  6 wallclock secs ( 0.89 usr +  0.36 sys =  1.25 CPU)
610.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)


> My first 10 runs give the following results (in seconds):
>
> 1       15,681
> 2       16,010
> 3       16,093
> 4       16,168
> 5       16,057
> 6       16,585
> 7       16,114
> 8       16,596
> 9       16,115
> 10      16,270
>
> Jumping around a bit but it seems linear (but it's not).

Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Hoover, Jeffrey
I vacuum frequently, particularly after large updates or deletes.

Two other potential optimizations of inserts (you may already be doing
this):
- use bulk insert
- encapsulate the inserts within a transaction

One other note: if you have a primary key whose value is continually
increasing, your pk index can become imbalanced and therefore
inefficient.  You may be able to get around this by using INTEGER
PRIMARY KEY (or INTEGER PRIMARY KEY AUTOINCREMENT).  As a side benefit,
this kind of column queries very efficiently since the column value is
the row's address (the ROWID).
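To make the transaction advice above concrete, here is a rough C# sketch
(assuming System.Data.SQLite; the table and column names are invented for
illustration) of a batch of parameterized inserts wrapped in one transaction:

    // Requires: using System.Data; using System.Data.SQLite;
    using (SQLiteTransaction tx = conn.BeginTransaction())
    using (SQLiteCommand cmd = conn.CreateCommand())
    {
        cmd.Transaction = tx;
        cmd.CommandText = "INSERT INTO foo (num, txt) VALUES (@num, @txt)";
        SQLiteParameter pNum = cmd.Parameters.Add("@num", DbType.Int32);
        SQLiteParameter pTxt = cmd.Parameters.Add("@txt", DbType.String);

        for (int i = 0; i < 1000; i++)
        {
            pNum.Value = i;
            pTxt.Value = "some payload";
            cmd.ExecuteNonQuery();   // reuse the same command; only values change
        }

        tx.Commit();                 // one commit for the whole batch instead of 1000
    }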

-Jeff

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Wes Freeman
Sent: Tuesday, June 16, 2009 4:03 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database inserts gradually slowing down

VACUUM cleaned up the file in my current test, after 1200 iterations,
making it run at ~4.6seconds again, rather than ~5.1. It seemed to get
it almost back to the performance of a clean file.

Didn't know about the vacuum command--Cool. By the way, the vacuum
operation takes ~1.6 seconds for my test database. Seems like a
reasonable thing to run before a backup or something, if your
application would be prone to this sort of fragmentation.

Wes

On Tue, Jun 16, 2009 at 1:44 PM, Hoover, Jeffrey
wrote:
> Wouldn't a periodic VACUUMing of the database alleviate fragmentation?
>
> - Jeff
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Wes Freeman
VACUUM cleaned up the file in my current test, after 1200 iterations,
making it run at ~4.6seconds again, rather than ~5.1. It seemed to get
it almost back to the performance of a clean file.

Didn't know about the vacuum command--Cool. By the way, the vacuum
operation takes ~1.6 seconds for my test database. Seems like a
reasonable thing to run before a backup or something, if your
application would be prone to this sort of fragmentation.
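In case it helps anyone scripting this: VACUUM is just an ordinary statement,
so the test harness can issue it too. A tiny C# sketch (System.Data.SQLite;
the connection name is made up):

    using (SQLiteCommand cmd = conn.CreateCommand())
    {
        // Rebuilds the database file; must be run outside any open transaction.
        cmd.CommandText = "VACUUM;";
        cmd.ExecuteNonQuery();
    }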

Wes

On Tue, Jun 16, 2009 at 1:44 PM, Hoover, Jeffrey wrote:
> Wouldn't a periodic VACUUMing of the database alleviate fragmentation?
>
> - Jeff
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Wes Freeman
On Tue, Jun 16, 2009 at 2:51 PM, Jens Páll
Hafsteinsson wrote:
> Closing and opening again did not speed up steps 1-4, it actually slowed 
> things down even more. The curve from the beginning is a bit similar to a 
> slightly flattened log curve. When I closed the database and started the test 
> again, a similar curve appeared again, but now starting from where the first 
> run left off.

I noticed the same thing: if I start the test over with the same file,
the time is roughly where it left off.

I'm to 800 iterations now, and it's up to ~4.9s. I'm not really
alarmed by this increase, yet.

On Tue, Jun 16, 2009 at 3:11 PM, Jens Páll
Hafsteinsson wrote:
> Wes, I gather you are using version 3.6.14 in this test (?)

It looks like the Sqlite version included with Qt 4.5 is 3.5.9. I may
have to compile the latest. I think it's strange that you're seeing
such a big difference between minor version numbers.

Wes
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread P Kishor
On Tue, Jun 16, 2009 at 1:59 PM, Jens Páll
Hafsteinsson wrote:
> Ok, but you might have to run this a bit longer (or more iterations).

I ran the code a bit longer, and yes, I do notice an increase. Here
are my results after 610 iterations... I have provided the numbers for
10 iterations every hundred (my numbers are lower this time than the
earlier posted 8 seconds or so per iteration because of a different
way of generating random strings).

1.  3 wallclock secs ( 0.87 usr +  0.24 sys =  1.11 CPU)
2.  2 wallclock secs ( 0.88 usr +  0.26 sys =  1.14 CPU)
3.  3 wallclock secs ( 0.89 usr +  0.26 sys =  1.15 CPU)
4.  2 wallclock secs ( 0.88 usr +  0.27 sys =  1.15 CPU)
5.  3 wallclock secs ( 0.88 usr +  0.26 sys =  1.14 CPU)
6.  2 wallclock secs ( 0.88 usr +  0.27 sys =  1.15 CPU)
7.  3 wallclock secs ( 0.89 usr +  0.27 sys =  1.16 CPU)
8.  2 wallclock secs ( 0.88 usr +  0.26 sys =  1.14 CPU)
9.  3 wallclock secs ( 0.89 usr +  0.27 sys =  1.16 CPU)
10.  2 wallclock secs ( 0.88 usr +  0.27 sys =  1.15 CPU)
..
200.  4 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
201.  4 wallclock secs ( 0.89 usr +  0.29 sys =  1.18 CPU)
202.  4 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
203.  3 wallclock secs ( 0.88 usr +  0.31 sys =  1.19 CPU)
204.  4 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
205.  4 wallclock secs ( 0.88 usr +  0.31 sys =  1.19 CPU)
206.  4 wallclock secs ( 0.90 usr +  0.30 sys =  1.20 CPU)
207.  4 wallclock secs ( 0.89 usr +  0.29 sys =  1.18 CPU)
208.  4 wallclock secs ( 0.88 usr +  0.30 sys =  1.18 CPU)
209.  4 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
210.  3 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
..
300.  5 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
301.  4 wallclock secs ( 0.90 usr +  0.33 sys =  1.23 CPU)
302.  5 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
303.  6 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
304.  5 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
305.  6 wallclock secs ( 0.90 usr +  0.33 sys =  1.23 CPU)
306.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
307.  5 wallclock secs ( 0.89 usr +  0.31 sys =  1.20 CPU)
308.  8 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
309. 13 wallclock secs ( 0.90 usr +  0.35 sys =  1.25 CPU)
310.  6 wallclock secs ( 0.90 usr +  0.33 sys =  1.23 CPU)
..
400.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
401.  5 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
402.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
403.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
404.  5 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
405.  6 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
406.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
407.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
408.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
409.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
410.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
..
500.  5 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
501.  6 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
502.  6 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
503.  6 wallclock secs ( 0.90 usr +  0.36 sys =  1.26 CPU)
504.  6 wallclock secs ( 0.88 usr +  0.37 sys =  1.25 CPU)
505. 11 wallclock secs ( 0.89 usr +  0.36 sys =  1.25 CPU)
506.  9 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
507.  5 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
508.  6 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
509.  6 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
510.  6 wallclock secs ( 0.88 usr +  0.34 sys =  1.22 CPU)
..
600.  6 wallclock secs ( 0.89 usr +  0.36 sys =  1.25 CPU)
601.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
602.  6 wallclock secs ( 0.88 usr +  0.35 sys =  1.23 CPU)
603.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
604.  6 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
605.  6 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
606.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
607.  6 wallclock secs ( 0.88 usr +  0.35 sys =  1.23 CPU)
608.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
609.  6 wallclock secs ( 0.89 usr +  0.36 sys =  1.25 CPU)
610.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)


> My first 10 runs give the following results (in seconds):
>
> 1       15,681
> 2       16,010
> 3       16,093
> 4       16,168
> 5       16,057
> 6       16,585
> 7       16,114
> 8       16,596
> 9       16,115
> 10      16,270
>
> Jumping around a bit but it seems linear (but it's not). I'm not saying that 
> 3.6.13 behaves the stame as 3.6.14 but this only becomes really apparent when 
> run over an extended period of time. 3.6.15 looks promising though in this 
> respect but looking at the first 10 runs of that version does definitely not 
> say the whole story (at least in my case):
>
> 1       17,874
> 2       21,753
> 3       23,5
> 4       26,591
> 5       27,925
> 6       29,682
> 7       29,807
>

Re: [sqlite] Datatypes (D. Richard Hipp)

2009-06-16 Thread Nicolas Williams
D. Richard Hipp wrote:
> It was done to fix a bug.
> 
> SQLite allows arbitrary text as the "datatype" of a column.  So you
> could say (for example):
> 
> CREATE TABLE t1(a "duh! ++ x hi, yall!(+123)" unique);
> 
> And the datatype for t1.a would be "duh! ++ x hi, yall!(+123)".  It
> used to be that this datatype would be copied into the synthesized
> CREATE TABLE statements for tables generated using CREATE TABLE AS.
> But that required that the datatype name be properly quoted so that it
> could be parsed again.  The logic that did this quoting was complex
> and contained bugs.  It was much easier to strip out the whole thing
> and replace it with primitive datatype generator (that inserts "NUM"
> in place of the complex string show above) than to try to fix the
> quoting for every possible case.
> 

I agree that quoting a type like "duh! ++ x hi, yall!(+123)" would be a
pain, and that this bug fix is mostly a good one.

I worry though that in most cases no such quoting would be needed and
that in some cases this bug fix will be represent a backwards-
incompatible change.

Would it be possible to pass through those type names which require no
quoting (e.g., all alphanumeric type names, possibly alphanumeric type
names with whitespace but no other special characters)?

Nico
-- 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite3 database upgrade from 3.5.4 to 3.6.15 issue

2009-06-16 Thread D. Richard Hipp

On Jun 16, 2009, at 3:10 PM, Kalyani Phadke wrote:

> But before entering the bug , I would like to find out how this
> happened. Whats the cause of it. Because I am not able to duplicate it
> all the time. IS there any way to find out what went wrong?
> Thanks,
>> I have to go manually and edit sql ,make changes so that everything
>> works fine.
>>

What do you have to manually edit to make it work?  Please show us the  
text before and after you edit.


D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Jens Páll Hafsteinsson
Wes, I gather you are using version 3.6.14 in this test (?)

Please also try version 3.6.15 since I'm seeing drastically different behavior 
using that version (starting similarly fast as .14 but quickly slowing down 
after about 15 iterations and then leveling off after 22 iterations).

JP

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Wes Freeman
Sent: 16. júní 2009 18:01
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database inserts gradually slowing down

I also wrote my own test in a small Qt/C++ app.

First I tried on my 10k rpm U160 SCSI drive, and it stayed roughly
between ~23 seconds per set (one set is 100 of: start
transaction->insert 1000->commit, as described). I let it run for
about 10 minutes before stopping it. It's hard drive bound on this
drive, taking only about 20% of one of the CPU cores, and creating a
horrible-sounding disk churning noise.

Second, I tried on my two raid 0 (striped for performance) 7200rpm
drives, and it was much faster--~4.5 seconds per set. It does seem to
be increasing, but I'm only up to 200 iterations, so it's difficult to
tell if it's significant. It seems to be CPU bound for this test, as
it's maxing one of my cores at 95-100% (Intel Core 2 Quad @2.66GHz w/
12MB cache).

I'm disappointed with the performance of my "faster" drive, but I
guess this is a good showing for the benefits of raid 0. I'm wondering
if my SCSI drive has some issues at this point.

I'll continue testing over longer periods to see how much worse it
gets on my system. Konrad's comment on fragmentation sounds like a
reasonable explanation. I'm curious about a non-transactional test, to
see if this occurs at a single-insert level. This may be something to
keep in mind when writing an application that will use a database file
transactionally for months at a time.

Wes
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite3 database upgrade from 3.5.4 to 3.6.15 issue

2009-06-16 Thread Kalyani Phadke
 But before entering the bug , I would like to find out how this
happened. Whats the cause of it. Because I am not able to duplicate it
all the time. IS there any way to find out what went wrong?
Thanks,
-Kalyani

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of D. Richard Hipp
Sent: Tuesday, June 16, 2009 12:02 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Sqlite3 database upgrade from 3.5.4 to 3.6.15
issue


On Jun 16, 2009, at 2:53 PM, Kalyani Phadke wrote:

> I am using sqlite version 3.6.15.I upgraded my database from sqlite
> 3.5.4 to 3.6.15. I am not able to open database from sqlite3.exe cmd 
> utility .
> I can do sqlite3.exe test.DB but if type .database ,it gives me error 
> malformed database schema.
> pragma integrity_check , gives error malformed database schema So I 
> took the dump of db in test.sql using .dump command. If try to create 
> database using test.sql I get bunch of errors.
> default value of column is not constant, no such table etc.
> I have to go manually and edit sql ,make changes so that everything 
> works fine.
>
> I am just curious why my database structure get corrupted / changed 
> .Will changing database engine cause schema to change or is it dump 
> command changing my database schema.


SQLite version 3.6.15 should be able to read and write any SQLite  
database back through version 3.0.0.  If you find an instance where  
this is not the case, then that is a bug.  Please report it.

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Jens Páll Hafsteinsson
Thanks for the info Konrad. This is very similar to what I'm doing so that 
might be the case. Just for curiosity's sake, I might try to drop the table 
in-between the runs to see what happens.

JP

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Konrad J Hambrick
Sent: 16. júní 2009 17:43
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database inserts gradually slowing down


Jens --

I have seen Index and Record Fragmentation cause
the kind of slowdowns you are describing.

Especially when there are a lot of deletes followed
by a 'largish' (100,000-record) batch of inserts.

I have found on occasion that the speed is more
consistent if I can find a way to drop and create
a 'fresh' table from time to time.

The fresh table eliminates the fragmentation.

Sounds like that won't work without a lot of effort
since you have records remaining in the table.

-- kjh


On 06/16/2009 10:57 AM, Jens Páll Hafsteinsson wrote:
> The key factor here is not the total time taken to perform these operations 
> but the fact that the time is *increasing* for each run. I am looking for 
> consistency in that I need to be able to let the application perform these 
> steps in constant time over a long period of time (months).
> 
> Dropping the table is not an option since I always will have some data in the 
> table (even though I delete everything in my test). Maybe the culprit is that 
> I delete everything from the table?
> 
> JP
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Matthew L. Creech
> Sent: 16. júní 2009 15:51
> To: kon...@payplus.com; General Discussion of SQLite Database
> Subject: Re: [sqlite] Database inserts gradually slowing down
> 
> On Tue, Jun 16, 2009 at 11:31 AM, Konrad J Hambrick wrote:
>> Hmmm ...
>>
>> Is there a way to replace step 5 (delete everything from the table)
>> with a System-Level rm / unlink and a sqlite CREATE TABLE ?
>>
> 
> This shouldn't be necessary unless the table has triggers.  From
> http://sqlite.org/lang_delete.html :
> 
> "When the WHERE is omitted from a DELETE statement and the table being
> deleted has no triggers, SQLite uses an optimization to erase the
> entire table content without having to visit each row of the table
> individual. This "truncate" optimization makes the delete run much
> faster."
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite3 database upgrade from 3.5.4 to 3.6.15 issue

2009-06-16 Thread D. Richard Hipp

On Jun 16, 2009, at 2:53 PM, Kalyani Phadke wrote:

> I am using sqlite version 3.6.15.I upgraded my database from sqlite
> 3.5.4 to 3.6.15. I am not able to open database from sqlite3.exe cmd
> utility .
> I can do sqlite3.exe test.DB but if type
> .database ,it gives me error malformed database schema.
> pragma integrity_check , gives error malformed database schema
> So I took the dump of db in test.sql using .dump command. If try to
> create database using test.sql I get bunch of errors.
> default value of column is not constant, no such table etc.
> I have to go manually and edit sql ,make changes so that everything
> works fine.
>
> I am just curious why my database structure get corrupted / 
> changed .Will
> changing database engine cause schema to change or is it dump command
> changing my database schema.


SQLite version 3.6.15 should be able to read and write any SQLite  
database back through version 3.0.0.  If you find an instance where  
this is not the case, then that is a bug.  Please report it.

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Jens Páll Hafsteinsson
Ok, but you might have to run this a bit longer (or more iterations). My first 
10 runs give the following results (in seconds):

1   15,681
2   16,010
3   16,093
4   16,168
5   16,057
6   16,585
7   16,114
8   16,596
9   16,115
10  16,270

Jumping around a bit but it seems linear (but it's not). I'm not saying that 
3.6.13 behaves the same as 3.6.14 but this only becomes really apparent when 
run over an extended period of time. 3.6.15 looks promising though in this 
respect, but looking at the first 10 runs of that version definitely does not 
tell the whole story (at least in my case):

1   17,874
2   21,753
3   23,5
4   26,591
5   27,925
6   29,682
7   29,807
8   31,944
9   32,422
10  34,144

Increasing like crazy (but seems to level off later at 40 seconds).

JP

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of P Kishor
Sent: 16. júní 2009 16:55
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database inserts gradually slowing down

On Tue, Jun 16, 2009 at 10:57 AM, Jens Páll
Hafsteinsson wrote:
> The key factor here is not the total time taken to perform these operations 
> but the fact that the time is *increasing* for each run. I am looking for 
> consistency in that I need to be able to let the application perform these 
> steps in constant time over a long period of time (months).
>


Using SQLite 3.6.13, Perl 5.8.8, on a Mac OS X Leopard Macbook with 4
GB RAM and 320 GB 7200 RPM Seagate drive, I get

[11:27 AM] ~/Projects/sqlite_test$perl db_slows.pl
1. the code took: 8 wallclock secs ( 6.83 usr +  0.30 sys =  7.13 CPU)
2. the code took: 9 wallclock secs ( 6.78 usr +  0.34 sys =  7.12 CPU)
3. the code took: 8 wallclock secs ( 6.80 usr +  0.34 sys =  7.14 CPU)
4. the code took: 8 wallclock secs ( 6.78 usr +  0.34 sys =  7.12 CPU)
5. the code took: 8 wallclock secs ( 6.78 usr +  0.33 sys =  7.11 CPU)
6. the code took: 8 wallclock secs ( 6.81 usr +  0.33 sys =  7.14 CPU)
7. the code took: 8 wallclock secs ( 6.80 usr +  0.34 sys =  7.14 CPU)
8. the code took: 9 wallclock secs ( 6.80 usr +  0.35 sys =  7.15 CPU)
9. the code took: 8 wallclock secs ( 6.79 usr +  0.34 sys =  7.13 CPU)
10. the code took: 8 wallclock secs ( 6.90 usr +  0.34 sys =  7.24 CPU)

Pretty linear performance. As stated in the OP, I
1. Insert 1000 records (an integer and a 100 random char string;
2. Commit;
3. Repeat 1 one hundred times;
4. DELETE all records;
5. Repeat 1-4 x 10.

The table is

CREATE TABLE foo (id INTEGER PRIMARY KEY, num INTEGER, desc TEXT);

There are no indexes.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Sqlite3 database upgrade from 3.5.4 to 3.6.15 issue

2009-06-16 Thread Kalyani Phadke
I am using sqlite version 3.6.15. I upgraded my database from sqlite
3.5.4 to 3.6.15. I am not able to open the database from the sqlite3.exe
command-line utility. 
I can do sqlite3.exe test.DB, but if I type 
.database, it gives me the error "malformed database schema".
pragma integrity_check also gives the error "malformed database schema".
So I took a dump of the db into test.sql using the .dump command. If I try to
recreate the database using test.sql I get a bunch of errors: 
"default value of column is not constant", "no such table", etc.
I have to go in manually and edit the sql, making changes so that everything
works fine.
 
I am just curious why my database structure got corrupted/changed. Will
changing the database engine cause the schema to change, or is it the dump
command changing my database schema?
 
Thanks,
-Kalyani
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Jens Páll Hafsteinsson
Closing and opening again did not speed up steps 1-4, it actually slowed things 
down even more. The curve from the beginning is a bit similar to a slightly 
flattened log curve. When I closed the database and started the test again, a 
similar curve appeared again, but now starting from where the first run left 
off.

I've been running the same 3.6.15 since this afternoon (the previous test was 
using 3.6.14) and it seems to flatten out fairly quickly but it is 
significantly slower (2.3 to 1.3 times slower, depending on where you measure 
it using the data I have). I'm not that worried about that for the time being; 
I'm just hoping it will stay flat.

JP

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Simon Slavin
Sent: 16. júní 2009 13:30
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database inserts gradually slowing down


On 16 Jun 2009, at 1:06pm, Jens Páll Hafsteinsson wrote:

> 1.   start a transaction
>
> 2.   insert 1000 records
>
> 3.   commit
>
> 4.   repeat steps 1-3 100 times
>
> 5.   delete everything from the table
>
> 6.   Start again at step 1

Okay, so do that until it's slow, then close the database and open it  
again, and do steps 1-4 once more.  Is it still slow, or did closing  
and opening speed it up ?

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Wes Freeman
I also wrote my own test in a small Qt/C++ app.

First I tried on my 10k RPM U160 SCSI drive, and it stayed at roughly
~23 seconds per set (one set is 100 of: start
transaction->insert 1000->commit, as described). I let it run for
about 10 minutes before stopping it. It's hard drive bound on this
drive, taking only about 20% of one of the CPU cores, and creating a
horrible-sounding disk churning noise.

Second, I tried on my two raid 0 (striped for performance) 7200rpm
drives, and it was much faster--~4.5 seconds per set. It does seem to
be increasing, but I'm only up to 200 iterations, so it's difficult to
tell if it's significant. It seems to be CPU bound for this test, as
it's maxing one of my cores at 95-100% (Intel Core 2 Quad @2.66GHz w/
12MB cache).

I'm disappointed with the performance of my "faster" drive, but I
guess this is a good showing for the benefits of raid 0. I'm wondering
if my SCSI drive has some issues at this point.

I'll continue testing over longer periods to see how much worse it
gets on my system. Konrad's comment on fragmentation sounds like a
reasonable explanation. I'm curious about a non-transactional test, to
see if this occurs at a single-insert level. This may be something to
keep in mind when writing an application that will use a database file
transactionally for months at a time.

Wes
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Konrad J Hambrick


Jeffrey --

Maybe so, but I should have looked further in Jens's
reply at the built-in optimization for a DELETE with no
WHERE clause on a table that has no triggers ...

sorry :(

-- kjh


On 06/16/2009 12:44 PM, Hoover, Jeffrey wrote:
> Wouldn't a period VACUUMing of the database alleviate fragmentation?
> 
> - Jeff
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Konrad J Hambrick
> Sent: Tuesday, June 16, 2009 1:43 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Database inserts gradually slowing down
> 
> 
> Jens --
> 
> I have seen Index and Record Fragmentation cause
> the kind of slowdowns you are describing.
> 
> Especially when there are a lot deletes followed
> by a 'largish' (100,000-record) batch of inserts.
> 
> I have found on occasion that the speed is more
> consistent if I can find a way to drop and create
> a 'fresh' table from time to time.
> 
> The fresh table eliminates the fragmentation.
> 
> Sounds like that won't work without a lot of effort
> since you have records remaining in the table.
> 
> -- kjh
> 
> 
> On 06/16/2009 10:57 AM, Jens Páll Hafsteinsson wrote:
>> The key factor here is not the total time taken to perform these operations 
>> but the fact that the time is *increasing* for each run. I am looking for 
>> consistency in that I need to be able to let the application perform these 
>> steps in constant time over a long period of time (months).
>>
>> Dropping the table is not an option since I always will have some data in 
>> the table (even though I delete everything in my test). Maybe the culprit is 
>> that I delete everything from the table?
>>
>> JP
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org 
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Matthew L. Creech
>> Sent: 16. júní 2009 15:51
>> To: kon...@payplus.com; General Discussion of SQLite Database
>> Subject: Re: [sqlite] Database inserts gradually slowing down
>>
>> On Tue, Jun 16, 2009 at 11:31 AM, Konrad J Hambrick 
>> wrote:
>>> Hmmm ...
>>>
>>> Is there a way to replace step 5 (delete everything from the table)
>>> with a System-Level rm / unlink and a sqlite CREATE TABLE ?
>>>
>> This shouldn't be necessary unless the table has triggers.  From
>> http://sqlite.org/lang_delete.html :
>>
>> "When the WHERE is omitted from a DELETE statement and the table being
>> deleted has no triggers, SQLite uses an optimization to erase the
>> entire table content without having to visit each row of the table
>> individual. This "truncate" optimization makes the delete run much
>> faster."
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Hoover, Jeffrey
Wouldn't periodic VACUUMing of the database alleviate fragmentation?

- Jeff

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Konrad J Hambrick
Sent: Tuesday, June 16, 2009 1:43 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database inserts gradually slowing down


Jens --

I have seen Index and Record Fragmentation cause
the kind of slowdowns you are describing.

Especially when there are a lot deletes followed
by a 'largish' (100,000-record) batch of inserts.

I have found on occasion that the speed is more
consistent if I can find a way to drop and create
a 'fresh' table from time to time.

The fresh table eliminates the fragmentation.

Sounds like that won't work without a lot of effort
since you have records remaining in the table.

-- kjh


On 06/16/2009 10:57 AM, Jens Páll Hafsteinsson wrote:
> The key factor here is not the total time taken to perform these operations 
> but the fact that the time is *increasing* for each run. I am looking for 
> consistency in that I need to be able to let the application perform these 
> steps in constant time over a long period of time (months).
> 
> Dropping the table is not an option since I always will have some data in the 
> table (even though I delete everything in my test). Maybe the culprit is that 
> I delete everything from the table?
> 
> JP
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Matthew L. Creech
> Sent: 16. júní 2009 15:51
> To: kon...@payplus.com; General Discussion of SQLite Database
> Subject: Re: [sqlite] Database inserts gradually slowing down
> 
> On Tue, Jun 16, 2009 at 11:31 AM, Konrad J Hambrick wrote:
>> Hmmm ...
>>
>> Is there a way to replace step 5 (delete everything from the table)
>> with a System-Level rm / unlink and a sqlite CREATE TABLE ?
>>
> 
> This shouldn't be necessary unless the table has triggers.  From
> http://sqlite.org/lang_delete.html :
> 
> "When the WHERE is omitted from a DELETE statement and the table being
> deleted has no triggers, SQLite uses an optimization to erase the
> entire table content without having to visit each row of the table
> individual. This "truncate" optimization makes the delete run much
> faster."
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Konrad J Hambrick

Jens --

I have seen Index and Record Fragmentation cause
the kind of slowdowns you are describing.

Especially when there are a lot of deletes followed
by a 'largish' (100,000-record) batch of inserts.

I have found on occasion that the speed is more
consistent if I can find a way to drop and create
a 'fresh' table from time to time.

The fresh table eliminates the fragmentation.
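
A rough sketch of what I mean, using the schema described in the
original post (the column and index names are made up):

DROP TABLE IF EXISTS t1;
CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100));
CREATE INDEX t1_a ON t1(a);   -- the non-unique index on the first field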

Sounds like that won't work without a lot of effort
since you have records remaining in the table.

-- kjh


On 06/16/2009 10:57 AM, Jens Páll Hafsteinsson wrote:
> The key factor here is not the total time taken to perform these operations 
> but the fact that the time is *increasing* for each run. I am looking for 
> consistency in that I need to be able to let the application perform these 
> steps in constant time over a long period of time (months).
> 
> Dropping the table is not an option since I always will have some data in the 
> table (even though I delete everything in my test). Maybe the culprit is that 
> I delete everything from the table?
> 
> JP
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Matthew L. Creech
> Sent: 16. júní 2009 15:51
> To: kon...@payplus.com; General Discussion of SQLite Database
> Subject: Re: [sqlite] Database inserts gradually slowing down
> 
> On Tue, Jun 16, 2009 at 11:31 AM, Konrad J Hambrick wrote:
>> Hmmm ...
>>
>> Is there a way to replace step 5 (delete everything from the table)
>> with a System-Level rm / unlink and a sqlite CREATE TABLE ?
>>
> 
> This shouldn't be necessary unless the table has triggers.  From
> http://sqlite.org/lang_delete.html :
> 
> "When the WHERE is omitted from a DELETE statement and the table being
> deleted has no triggers, SQLite uses an optimization to erase the
> entire table content without having to visit each row of the table
> individual. This "truncate" optimization makes the delete run much
> faster."
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about searches

2009-06-16 Thread Simon Slavin

On 16 Jun 2009, at 4:46pm, Swithun Crowe wrote:

> How about having an extra column for each column that you want to  
> search
> in? In the extra column, have a plain lowercase ASCII version of the  
> word.
> So, for 'Sào Paulo', have 'sao paulo'. You would need to write a small
> program to convert the characters. When you want to search for  
> something,
> convert your search query into something without accents, and search  
> in
> the extra column.


That would be a good solution, but it would require some intelligence  
on the part of the users.  Instead of writing a small conversion program  
you could write your own encoding function as an SQLite extension.  More  
puzzling to start with but far more convenient to use in the long run.



Might I suggest that if you do either of those you look into doing  
soundex as part of your handling of accented characters ?  Even  
countries that do not use accented characters find soundex encoding  
very useful.



Including soundex as part of your hashing function gets rid of the  
'Zürich' problem: 'Zürich', 'Zuerich', and 'Zurich' all render the  
same value using soundex, so searching for any one of them would  
return all records which contained any of the three versions.

I do not know if the SOUNDEX() function handles accented characters in  
this way.  SQLite apparently supports its built-in SOUNDEX() function  
only if built with a particular switch.
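
As a very rough sketch of the extra-column idea combined with soundex  
(the table and column names are made up, and this assumes a build where  
the SOUNDEX() function is enabled, i.e. compiled with SQLITE_SOUNDEX):

CREATE TABLE cities(name TEXT, name_key TEXT);
-- populate the key from an ASCII rendering of the name
INSERT INTO cities VALUES('Zürich', soundex('Zuerich'));
-- hash the user's input the same way at query time
SELECT name FROM cities WHERE name_key = soundex('Zurich');

soundex('Zuerich') and soundex('Zurich') should both come out as 'Z620',  
so the query above finds the row for 'Zürich'.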

Simon.
-- 
  http://www.hearsay.demon.co.uk | I'd expect if a computer was involved
 | it all would have been much worse.
No Buffy for you.|-- John "West" McKenna
Leave quickly now. -- Anya   |  THE FRENCH WAS THERE

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Datatypes (D. Richard Hipp)

2009-06-16 Thread Igor Tandetnik
Fred Williams 
wrote:
> What!  The standard is lacking?  Will wonders never cease?
>
> Thanks, you saved me from some boring reference reading.
>
> Still think, logically speaking, the construct should throw an error
> message, rather than make stealth changes to the expected result.

But why do you expect a particular result different from the one you 
actually get? What basis do you have for your expectations, other than 
wishful thinking?

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Datatypes (D. Richard Hipp)

2009-06-16 Thread Simon Slavin

On 16 Jun 2009, at 5:32pm, Fred Williams wrote:

> Still think, logically speaking, the construct should throw an error
> message, rather than make stealth changes to the expected result.

I suppose it might be possible to provide SQLite with a 'conform to  
standard' mode you could select using a PRAGMA.  However, I used to  
have to work entirely within SQL92 (working for a big client which  
insisted on standard conformance) and it was a pain: the standard  
lacks many features we find very useful.

Simon.
-- 
  http://www.hearsay.demon.co.uk | I'd expect if a computer was involved
 | it all would have been much worse.
No Buffy for you.|-- John "West" McKenna
Leave quickly now. -- Anya   |  THE FRENCH WAS THERE

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Simon Slavin

On 16 Jun 2009, at 4:31pm, Konrad J Hambrick wrote:

> Is there a way to replace step 5 (delete everything from the table)
> with a System-Level rm / unlink and a sqlite CREATE TABLE ?

JP can totally do that, but his original question was why the system  
gradually gets slower and slower, and that does indicate a bug  
somewhere, which should probably be identified.


On 16 Jun 2009, at 2:52pm, Jens Páll Hafsteinsson wrote:

> does sqlite use temporary files beside the database file?



Might be interesting to have a look at those, with the application in  
a 'paused' state while it's taking a long time to do things, and see  
if any of them have got to be particularly big.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread P Kishor
On Tue, Jun 16, 2009 at 10:57 AM, Jens Páll
Hafsteinsson wrote:
> The key factor here is not the total time taken to perform these operations 
> but the fact that the time is *increasing* for each run. I am looking for 
> consistency in that I need to be able to let the application perform these 
> steps in constant time over a long period of time (months).
>


Using SQLite 3.6.13, Perl 5.8.8, on a Mac OS X Leopard Macbook with 4
GB RAM and 320 GB 7200 RPM Seagate drive, I get

[11:27 AM] ~/Projects/sqlite_test$perl db_slows.pl
1. the code took: 8 wallclock secs ( 6.83 usr +  0.30 sys =  7.13 CPU)
2. the code took: 9 wallclock secs ( 6.78 usr +  0.34 sys =  7.12 CPU)
3. the code took: 8 wallclock secs ( 6.80 usr +  0.34 sys =  7.14 CPU)
4. the code took: 8 wallclock secs ( 6.78 usr +  0.34 sys =  7.12 CPU)
5. the code took: 8 wallclock secs ( 6.78 usr +  0.33 sys =  7.11 CPU)
6. the code took: 8 wallclock secs ( 6.81 usr +  0.33 sys =  7.14 CPU)
7. the code took: 8 wallclock secs ( 6.80 usr +  0.34 sys =  7.14 CPU)
8. the code took: 9 wallclock secs ( 6.80 usr +  0.35 sys =  7.15 CPU)
9. the code took: 8 wallclock secs ( 6.79 usr +  0.34 sys =  7.13 CPU)
10. the code took: 8 wallclock secs ( 6.90 usr +  0.34 sys =  7.24 CPU)

Pretty flat, consistent performance. As stated in the OP, I
1. Insert 1000 records (an integer and a 100-character random string);
2. Commit;
3. Repeat 1-2 one hundred times;
4. DELETE all records;
5. Repeat 1-4 x 10.

The table is

CREATE TABLE foo (id INTEGER PRIMARY KEY, num INTEGER, desc TEXT);

There are no indexes.


> Dropping the table is not an option since I always will have some data in the 
> table (even though I delete everything in my test). Maybe the culprit is that 
> I delete everything from the table?
>
> JP
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Matthew L. Creech
> Sent: 16. júní 2009 15:51
> To: kon...@payplus.com; General Discussion of SQLite Database
> Subject: Re: [sqlite] Database inserts gradually slowing down
>
> On Tue, Jun 16, 2009 at 11:31 AM, Konrad J Hambrick wrote:
>>
>> Hmmm ...
>>
>> Is there a way to replace step 5 (delete everything from the table)
>> with a System-Level rm / unlink and a sqlite CREATE TABLE ?
>>
>
> This shouldn't be necessary unless the table has triggers.  From
> http://sqlite.org/lang_delete.html :
>
> "When the WHERE is omitted from a DELETE statement and the table being
> deleted has no triggers, SQLite uses an optimization to erase the
> entire table content without having to visit each row of the table
> individual. This "truncate" optimization makes the delete run much
> faster."
>
> --
> Matthew L. Creech
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Puneet Kishor http://www.punkish.org/
Carbon Model http://carbonmodel.org/
Charter Member, Open Source Geospatial Foundation http://www.osgeo.org/
Science Commons Fellow, Geospatial Data http://sciencecommons.org
Nelson Institute, UW-Madison http://www.nelson.wisc.edu/
---
collaborate, communicate, compete
===
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Simon Slavin

On 16 Jun 2009, at 1:06pm, Jens Páll Hafsteinsson wrote:

> 1.   start a transaction
>
> 2.   insert 1000 records
>
> 3.   commit
>
> 4.   repeat steps 1-3 100 times
>
> 5.   delete everything from the table
>
> 6.   Start again at step 1

Okay, so do that until it's slow, then close the database and open it  
again, and do steps 1-4 once more.  Is it still slow, or did closing  
and opening speed it up ?

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Datatypes (D. Richard Hipp)

2009-06-16 Thread Fred Williams
What!  The standard is lacking?  Will wonders never cease?

Thanks, you saved me from some boring reference reading.

Still think, logically speaking, the construct should throw an error
message, rather than make stealth changes to the expected result.

Fred

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org]on Behalf Of Igor Tandetnik
Sent: Tuesday, June 16, 2009 10:57 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Datatypes (D. Richard Hipp)


Fred Williams 
wrote:
> Is this a violation of SQL Standards?  Seems like it should be.

As far as I can tell, CREATE TABLE AS SELECT is not part of SQL92
standard at all. If you want to work within the standard, don't use this
construct in the first place.

Igor Tandetnik



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Datatypes (D. Richard Hipp)

2009-06-16 Thread Fred Williams

Is this a violation of SQL Standards?  Seems like it should be.

I'm not for dumbing down the system to compensate for the occasional idiot
programmer.  After all, we are not the government, nor should we strive to
act like it (IMHO).

If someone chooses to attempt to create a column/table/database name that is
incorrect perhaps they should receive a refreshing error message pointing
out their verbal shortcomings?  Otherwise I think this could break a lot of
good code for the sake of "maybe"...

Fred

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org]on Behalf Of D. Richard Hipp
Sent: Tuesday, June 16, 2009 9:36 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Datatypes (D. Richard Hipp)



>
>
> Ok, I can see this. But I cannot forsee the real implications
> though. If I
> have a statement like
>
> create table mytable(pnumber integer, name varchar(20), primary
> key(pnumber))
>
> what are the reported datatypes? still integer and varchar(20)?


Yes.  But if you do

CREATE TABLE t2 AS SELECT * FROM mytable;

Then the reported datatypes for t2 will be INT and TEXT.

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Jens Páll Hafsteinsson
The key factor here is not the total time taken to perform these operations but 
the fact that the time is *increasing* for each run. I am looking for 
consistency in that I need to be able to let the application perform these 
steps in constant time over a long period of time (months).

Dropping the table is not an option since I always will have some data in the 
table (even though I delete everything in my test). Maybe the culprit is that I 
delete everything from the table?

JP

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Matthew L. Creech
Sent: 16. júní 2009 15:51
To: kon...@payplus.com; General Discussion of SQLite Database
Subject: Re: [sqlite] Database inserts gradually slowing down

On Tue, Jun 16, 2009 at 11:31 AM, Konrad J Hambrick wrote:
>
> Hmmm ...
>
> Is there a way to replace step 5 (delete everything from the table)
> with a System-Level rm / unlink and a sqlite CREATE TABLE ?
>

This shouldn't be necessary unless the table has triggers.  From
http://sqlite.org/lang_delete.html :

"When the WHERE is omitted from a DELETE statement and the table being
deleted has no triggers, SQLite uses an optimization to erase the
entire table content without having to visit each row of the table
individual. This "truncate" optimization makes the delete run much
faster."

-- 
Matthew L. Creech
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Datatypes (D. Richard Hipp)

2009-06-16 Thread Igor Tandetnik
Fred Williams 
wrote:
> Is this a violation of SQL Standards?  Seems like it should be.

As far as I can tell, CREATE TABLE AS SELECT is not part of SQL92 
standard at all. If you want to work within the standard, don't use this 
construct in the first place.

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Rather SQL quiestion

2009-06-16 Thread Boris Ioffe

Thank you Pavel for the lesson. I learned to never underestimate the power of 
self join. 

Works :)


--- On Tue, 6/16/09, Pavel Ivanov  wrote:

> From: Pavel Ivanov 
> Subject: Re: [sqlite] Rather SQL quiestion
> To: "General Discussion of SQLite Database" 
> Date: Tuesday, June 16, 2009, 11:34 AM
> Try this:
> 
> select c.*
>    from players a, registrations b, players
> c
>  where a.device_id = b.device_id
>     and b.mesg_token='123456'
>     and a.table_group_id = c.table_group_id
>     and a.table_id = c.table_id
> 
> 
> Pavel
> 
> On Tue, Jun 16, 2009 at 11:25 AM, Boris Ioffe
> wrote:
> >
> > Hello folks,
> > I am having trouble understanding one very important
> concept about IN operator. Let me show you my situation
> below.
> >
> > sqlite> select sqlite_version();
> > sqlite_version()
> > 
> > 3.3.6
> >
> *
> > Here is my table schema
> >
> > CREATE TABLE players
> >        (device_id varchar(40) NOT NULL,
> table_group_id integer, table_id integer,
> >         role integer, create_ts TIMESTAMP DEFAULT
> CURRENT_TIMESTAMP,
> >
> >         FOREIGN KEY (table_id,table_group_id)
> REFERENCES tables(id, group_id),
> >         FOREIGN KEY (device_id) REFERENCES
> users(device_id),
> >         PRIMARY KEY (device_id, table_group_id,
> table_id));
> >
> >
> >
> >
> *
> > select  table_group_id , table_id  from players join
> registrations on players.device_id = registrations.device_id
>  where mesg_token='123456';
> >
> > table_group_id  table_id
> > --  --
> > 1               1
> > 1               2
> >
> > Now I need to select all players who happened to be
> playing at the same table  (table_id,table_group_id)
> REFERENCES tables(id, group_id).
> >
> >
> > Obviously something like that doesn't work:
> >
> > select * from players where (table_group_id,table_id)
> in (select  table_group_id , table_id  from players join
> registrations on players.device_id = registrations.device_id
>  where mesg_token='123456')
> >
> > -It would work only with ONE field.
> >
> > select * from players where table_group_id in (select
>  table_group_id from players join registrations on
> players.device_id = registrations.device_id  where
> mesg_token='123456') and table_group_id in (select table_id
> ...)
> >
> > it is not quite the same as it will not match the
> pair.
> > Same puzzles me with self join route. I hate to
> mindlessly  traverse all pairs using code, rather having
> one transaction.
> >
> > I appreciate any help and suggestions.
> > Thanks,
> > -B
> >
> >
> >
> >
> >
> >
> >
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


  
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Matthew L. Creech
On Tue, Jun 16, 2009 at 11:31 AM, Konrad J Hambrick wrote:
>
> Hmmm ...
>
> Is there a way to replace step 5 (delete everything from the table)
> with a System-Level rm / unlink and a sqlite CREATE TABLE ?
>

This shouldn't be necessary unless the table has triggers.  From
http://sqlite.org/lang_delete.html :

"When the WHERE is omitted from a DELETE statement and the table being
deleted has no triggers, SQLite uses an optimization to erase the
entire table content without having to visit each row of the table
individual. This "truncate" optimization makes the delete run much
faster."

-- 
Matthew L. Creech
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about searches

2009-06-16 Thread Igor Tandetnik
Christophe Leske  wrote:
>> You write your own comparison function that would consider these two
>> strings equal. See sqlite3_create_function, sqlite3_create_collation.
>>
> this problem pertains not only to Zürich, but to 24000 other entries,
> so
> I guess that this is no option for me.
> And again, I am using the sqlite3 command line exe and can't compile a
> custom version.

Well, I guess you could store two versions of every city name - the 
"correct" version with accents and all, and the "for search" version in 
ASCII characters only. Perhaps you would need more than one - say both 
Zurich and Zuerich for Zürich - then create a separate table with common 
spellings that would be in one-to-many relationship with the table of 
cities.
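
A minimal sketch of that one-to-many layout (all of the names here are 
made up):

CREATE TABLE cities(id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE spellings(city_id INTEGER, spelling TEXT);

INSERT INTO cities VALUES(1, 'Zürich');
INSERT INTO spellings VALUES(1, 'zurich');
INSERT INTO spellings VALUES(1, 'zuerich');

SELECT c.name FROM cities c JOIN spellings s ON s.city_id = c.id
 WHERE s.spelling = lower('Zuerich');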

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Jens Páll Hafsteinsson
Thanks for the pointers Kees. I'll keep those in mind when I go into trying to 
optimize the database.

I was clearly way off track when I said I was perfectly aware of the issues 
concerning the database. I completely forgot about taking the spin factor into 
account as you and Jim have pointed out to me. Thanks again.

JP

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Kees Nuyt
Sent: 16. júní 2009 15:17
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Database inserts gradually slowing down

On Tue, 16 Jun 2009 14:23:47 +, Jens Páll Hafsteinsson
 wrote:

> Yes, I'm perfectly aware of this and hence I
> would expect the disk to be sweating like hell
> running this test while the CPU is relatively
> relaxed (given that sqlite is disk bound in
> this case and not CPU bound).
>
> But this is not happening; neither the disk nor
> the CPU are practically doing anything, which
> is a bit strange. It's as if both the disk and
> the CPU are waiting for each other or that
> sqlite is playing 'nice' behind my back and
> giving up the CPU when it shouldn't.

Apart from seeks, the disk has to spin until the correct
start sector is under the head. Then it can write a database
page, perhaps a few database pages.

There are a few parameters you can use to optimize this:
- PRAGMA page_size
- PRAGMA [default_]cache_size
- the number of INSERTs per transaction
- The schema: INDEX PRIMARY KEY on the first column
  instead of a non-unique index
  (if the application allows it)
- load the database in order of index(es)

Especially a non-unique index with low cardinality has a lot
of overhead.

>JP
-- 
  (  Kees Nuyt
  )
c[_]
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about searches

2009-06-16 Thread Swithun Crowe
Hello

CL I can't know this beforehand. These are just examples, i need a generic 
CL solution if possivble.

CL All i can see so far is to build a table of all special characters ever 
CL used in the 24000 names of cities which make problems and remap them 
CL accordingly.

How about having an extra column for each column that you want to search 
in? In the extra column, have a plain lowercase ASCII version of the word. 
So, for 'Sào Paulo', have 'sao paulo'. You would need to write a small 
program to convert the characters. When you want to search for something, 
convert your search query into something without accents, and search in 
the extra column.

This way, you preserve the accurate spellings of places, don't require all 
users to know the spelling exactly, and can find places whatever the user 
put in. You would just need to know that 'à' converts to 'a'.

Swithun.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Jens Páll Hafsteinsson
Ah, of course. Thanks for the explanation Jim.

JP

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Jim Wilcoxson
Sent: 16. júní 2009 15:07
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database inserts gradually slowing down

You are doing transactions here, which is a very different thing from
normal disk I/O.  Your CPU is idle because it is waiting for the disk.
 Your disk is idle because it is waiting for the platters to rotate
around again.  The best case you can achieve on a 7200RPM disk is 120
transactions (commits) per second.

In practice, you will see much lower TPS rates, and they will
typically be 50% fractions.  For 1 sync per transaction, the max rate
is 120 TPS.  For 2 syncs per transaction, the max rate is 60 TPS.  For
3 syncs per transaction, the max rate is 30 TPS.  SQLite always does
at least 2 syncs per transaction, and sometimes 3, depending on the
options you use, so a transaction rate of 30 TPS is reasonable.

The only way you can speed this up is to get a disk that rotates
faster, ie, a 10K or 15K rpm drive will do faster transactions, but
even so, you are still limited to 250 TPS with a 15K drive.  Or, get a
battery-backup caching controller that will lie to your OS and tell it
that the data is on the media, when really it is only stored in the
controller's memory.  This allows the controller to combine write
requests to increase the transaction rate.

Jim

On 6/16/09, Jens Páll Hafsteinsson  wrote:
> Yes, I'm perfectly aware of this and hence I would expect the disk to be
> sweating like hell running this test while the CPU is relatively relaxed
> (given that sqlite is disk bound in this case and not CPU bound).
>
> But this is not happening; neither the disk nor the CPU are practically
> doing anything, which is a bit strange. It's as if both the disk and the CPU
> are waiting for each other or that sqlite is playing 'nice' behind my back
> and giving up the CPU when it shouldn't.
>
> JP
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Adam DeVita
> Sent: 16. júní 2009 14:06
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Database inserts gradually slowing down
>
> Remember the implications of Moore's law and how much time has passed.
>
> CPU speed is much faster than memory speed.
> Memory speed is much faster than disk access.
>
> This is why hardware folks play all sorts of tricks with pipelines, caches,
> interleaving, and parallelism.
>
> For a single process that interacts with the HDD, the HDD will be the bottle
> neck and the CPU will spend lots of time waiting for the rest of the machine
> to catch up.  Even if you have a RAID system, the CPU is still much faster
> than the bus the hard drives are on.
>
>
> On Tue, Jun 16, 2009 at 9:52 AM, Jens Páll Hafsteinsson
> wrote:
>
>> In step 5 I execute "delete from t1" without any where clause.
>>
>> I haven't monitored the disk space used (does sqlite use temporary files
>> beside the database file?) but the database file itself has been fixed in
>> size at around 12MB (12.461.056 bytes) the whole time.
>>
>> The load on the disk is minimal (sustained 1MB/s) and the CPU load is
>> about
>> 6%, which is a bit surprising since I thought I would be putting a huge
>> load
>> on the computer running a loop like this. I'm not at all happy to see
>> these
>> low load numbers given how the test is programmed (it should practically
>> own
>> the machine). The database should utilize the computer much better than
>> this.
>>
>> I've been running the test now for about 10 minutes using 3.6.15 and it
>> 'seems' to be behaving as before, slowly increasing in execution time. I
>> want to run this a bit longer to be absolutely sure and will let you know.
>>
>> JP
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org [mailto:
>> sqlite-users-boun...@sqlite.org] On Behalf Of Pavel Ivanov
>> Sent: 16. júní 2009 12:15
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] Database inserts gradually slowing down
>>
>> How do you do step 5? Like "delete from table" or "delete from table
>> where ..."? Do you see any degrade in disk space used by database
>> along with slowness?
>>
>> Pavel
>>
>> On Tue, Jun 16, 2009 at 8:06 AM, Jens Páll
>> Hafsteinsson wrote:
>> > Hi
>> >
>> > I've been running some tests against sqlite and have found that inserts
>> are gradually slowing down. Since I'm new to sqlite I might be doing
>> something wrong and thought I'd ask here if anyone has seen this before or
>> know what might be causing this.
>> >
>> > The test I'm running creates a database containing a single table (int,
>> int, varchar(100)) along with an index (not unique) on the first field. I
>> then perform the following operations (all records have unique data in the
>> first field):
>> >
>> >
>> > 1.   start a transaction
>> >

Re: [sqlite] Rather SQL quiestion

2009-06-16 Thread Pavel Ivanov
Try this:

select c.*
  from players a, registrations b, players c
 where a.device_id = b.device_id
   and b.mesg_token = '123456'
   and a.table_group_id = c.table_group_id
   and a.table_id = c.table_id


Pavel

On Tue, Jun 16, 2009 at 11:25 AM, Boris Ioffe wrote:
>
> Hello folks,
> I am having trouble understanding one very important concept about IN 
> operator. Let me show you my situation below.
>
> sqlite> select sqlite_version();
> sqlite_version()
> 
> 3.3.6
> *
> Here is my table schema
>
> CREATE TABLE players
>        (device_id varchar(40) NOT NULL, table_group_id integer, table_id 
> integer,
>         role integer, create_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
>
>         FOREIGN KEY (table_id,table_group_id) REFERENCES tables(id, group_id),
>         FOREIGN KEY (device_id) REFERENCES users(device_id),
>         PRIMARY KEY (device_id, table_group_id, table_id));
>
>
>
> *
> select  table_group_id , table_id  from players join registrations on 
> players.device_id = registrations.device_id  where mesg_token='123456';
>
> table_group_id  table_id
> --  --
> 1               1
> 1               2
>
> Now I need to select all players who happened to be playing at the same table 
>  (table_id,table_group_id) REFERENCES tables(id, group_id).
>
>
> Obviously something like that doesn't work:
>
> select * from players where (table_group_id,table_id) in (select  
> table_group_id , table_id  from players join registrations on 
> players.device_id = registrations.device_id  where mesg_token='123456')
>
> -It would work only with ONE field.
>
> select * from players where table_group_id in (select  table_group_id from 
> players join registrations on players.device_id = registrations.device_id  
> where mesg_token='123456') and table_group_id in (select table_id ...)
>
> it is not quite the same as it will not match the pair.
> Same puzzles me with self join route. I hate to mindlessly  traverse all 
> pairs using code, rather having one transaction.
>
> I appreciate any help and suggestions.
> Thanks,
> -B
>
>
>
>
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Konrad J Hambrick

Hmmm ...

Is there a way to replace step 5 (delete everything from the table)
with a System-Level rm / unlink and a sqlite CREATE TABLE ?

-- kjh


On 06/16/2009 10:06 AM, Jim Wilcoxson wrote:
> You are doing transactions here, which is a very different thing from
> normal disk I/O.  Your CPU is idle because it is waiting for the disk.
>  Your disk is idle because it is waiting for the platters to rotate
> around again.  The best case you can achieve on a 7200RPM disk is 120
> transactions (commits) per second.
> 
> In practice, you will see much lower TPS rates, and they will
> typically be 50% fractions.  For 1 sync per transaction, the max rate
> is 120 TPS.  For 2 syncs per transaction, the max rate is 60 TPS.  For
> 3 syncs per transaction, the max rate is 30 TPS.  SQLite always does
> at least 2 syncs per transaction, and sometimes 3, depending on the
> options you use, so a transaction rate of 30 TPS is reasonable.
> 
> The only way you can speed this up is to get a disk that rotates
> faster, ie, a 10K or 15K rpm drive will do faster transactions, but
> even so, you are still limited to 250 TPS with a 15K drive.  Or, get a
> battery-backup caching controller that will lie to your OS and tell it
> that the data is on the media, when really it is only stored in the
> controller's memory.  This allows the controller to combine write
> requests to increase the transaction rate.
> 
> Jim
> 
> On 6/16/09, Jens Páll Hafsteinsson  wrote:
>> Yes, I'm perfectly aware of this and hence I would expect the disk to be
>> sweating like hell running this test while the CPU is relatively relaxed
>> (given that sqlite is disk bound in this case and not CPU bound).
>>
>> But this is not happening; neither the disk nor the CPU are practically
>> doing anything, which is a bit strange. It's as if both the disk and the CPU
>> are waiting for each other or that sqlite is playing 'nice' behind my back
>> and giving up the CPU when it shouldn't.
>>
>> JP
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Adam DeVita
>> Sent: 16. júní 2009 14:06
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] Database inserts gradually slowing down
>>
>> Remember the implications of Moore's law and how much time has passed.
>>
>> CPU speed is much faster than memory speed.
>> Memory speed is much faster than disk access.
>>
>> This is why hardware folks play all sorts of tricks with pipelines, caches,
>> interleaving, and parallelism.
>>
>> For a single process that interacts with the HDD, the HDD will be the bottle
>> neck and the CPU will spend lots of time waiting for the rest of the machine
>> to catch up.  Even if you have a RAID system, the CPU is still much faster
>> than the bus the hard drives are on.
>>
>>
>> On Tue, Jun 16, 2009 at 9:52 AM, Jens Páll Hafsteinsson
>> wrote:
>>
>>> In step 5 I execute "delete from t1" without any where clause.
>>>
>>> I haven't monitored the disk space used (does sqlite use temporary files
>>> beside the database file?) but the database file itself has been fixed in
>>> size at around 12MB (12.461.056 bytes) the whole time.
>>>
>>> The load on the disk is minimal (sustained 1MB/s) and the CPU load is
>>> about
>>> 6%, which is a bit surprising since I thought I would be putting a huge
>>> load
>>> on the computer running a loop like this. I'm not at all happy to see
>>> these
>>> low load numbers given how the test is programmed (it should practically
>>> own
>>> the machine). The database should utilize the computer much better than
>>> this.
>>>
>>> I've been running the test now for about 10 minutes using 3.6.15 and it
>>> 'seems' to be behaving as before, slowly increasing in execution time. I
>>> want to run this a bit longer to be absolutely sure and will let you know.
>>>
>>> JP
>>>
>>> -Original Message-
>>> From: sqlite-users-boun...@sqlite.org [mailto:
>>> sqlite-users-boun...@sqlite.org] On Behalf Of Pavel Ivanov
>>> Sent: 16. júní 2009 12:15
>>> To: General Discussion of SQLite Database
>>> Subject: Re: [sqlite] Database inserts gradually slowing down
>>>
>>> How do you do step 5? Like "delete from table" or "delete from table
>>> where ..."? Do you see any degrade in disk space used by database
>>> along with slowness?
>>>
>>> Pavel
>>>
>>> On Tue, Jun 16, 2009 at 8:06 AM, Jens Páll
>>> Hafsteinsson wrote:
 Hi

 I've been running some tests against sqlite and have found that inserts
>>> are gradually slowing down. Since I'm new to sqlite I might be doing
>>> something wrong and thought I'd ask here if anyone has seen this before or
>>> know what might be causing this.
 The test I'm running creates a database containing a single table (int,
>>> int, varchar(100)) along with an index (not unique) on the first field. I
>>> then perform the following operations (all records have unique data in the
>>> first field):

 1.   start a transaction

 2

[sqlite] Rather SQL quiestion

2009-06-16 Thread Boris Ioffe

Hello folks, 
I am having trouble understanding one very important concept about the IN 
operator. Let me show you my situation below. 

sqlite> select sqlite_version();
sqlite_version()

3.3.6
*
Here is my table schema

CREATE TABLE players 
(device_id varchar(40) NOT NULL, table_group_id integer, table_id 
integer, 
 role integer, create_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP, 
 
 FOREIGN KEY (table_id,table_group_id) REFERENCES tables(id, group_id), 
 FOREIGN KEY (device_id) REFERENCES users(device_id),  
 PRIMARY KEY (device_id, table_group_id, table_id)); 



*
select  table_group_id , table_id  from players join registrations on 
players.device_id = registrations.device_id  where mesg_token='123456';

table_group_id  table_id
--  --
1   1
1   2

Now I need to select all players who happened to be playing at the same table  
(table_id,table_group_id) REFERENCES tables(id, group_id). 


Obviously something like that doesn't work: 

select * from players where (table_group_id,table_id) in (select  
table_group_id , table_id  from players join registrations on players.device_id 
= registrations.device_id  where mesg_token='123456')

-It would work only with ONE field.

select * from players where table_group_id in (select  table_group_id from 
players join registrations on players.device_id = registrations.device_id  
where mesg_token='123456') and table_group_id in (select table_id ...) 

it is not quite the same, as it will not match the pair. 
The same thing puzzles me about the self-join route. I hate to mindlessly 
traverse all the pairs in code rather than doing it in one transaction.

I appreciate any help and suggestions. 
Thanks, 
-B 

 




  
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about searches

2009-06-16 Thread Christophe Leske

> You write your own comparison function that would consider these two 
> strings equal. See sqlite3_create_function, sqlite3_create_collation.
>   
Well,

this problem pertains not only to Zürich, but to 24000 other entries, so 
I guess this is not an option for me.
And again, I am using the sqlite3 command line exe and can't compile a 
custom version.


> Why would you ever want two % in a row? A % matches zero or more of 
> arbitrary characters. You might be thinking of an underscore _.
>   
OK. Thanks for the hint, I was under the wrong assumption that % matches 
one character exactly, whereas this seems to be "_".

>> So far ,  so good, but my client also expects ANY simplification of a
>> character to be recognized:
>> Cote d'azur for instance  should return "Côte d'azur"
>> or the Sao Paulo issue - how can a search for "Sao Paulo" return "Sào
>> Paulo" in the result set?
>> 
> How are these examples different from previous ones?
>   
I am sorry, but I find this to be quite obvious?
Here, the problematic char is to be found in the *result set*, not in 
the query itself.

How do you educate SQlite to return me "Sào Paulo" if only "Sao Paulo" 
is being entered?
How do I know which character to substitute with a placeholder?

Is it
S%o Paulo to look for?
Or Sa% Paulo?
Or Sao P%ulo?

I can't know this beforehand. These are just examples; I need a generic 
solution if possible.

All I can see so far is to build a table of all the special characters ever 
used in the 24000 city names which cause problems, and remap them 
accordingly.



-- 
Christophe Leske

www.multimedial.de - i...@multimedial.de
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
+49(0)180102 - 06 60 02 96 // +49(0)177 249 70 31

This e-mail may contain confidential information. If you are not the intended 
recipient, 
it is appreciated that you notify the sender and delete your copy. Thank you.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Kees Nuyt
On Tue, 16 Jun 2009 14:23:47 +, Jens Páll Hafsteinsson
 wrote:

> Yes, I'm perfectly aware of this and hence I
> would expect the disk to be sweating like hell
> running this test while the CPU is relatively
> relaxed (given that sqlite is disk bound in
> this case and not CPU bound).
>
> But this is not happening; neither the disk nor
> the CPU are practically doing anything, which
> is a bit strange. It's as if both the disk and
> the CPU are waiting for each other or that
> sqlite is playing 'nice' behind my back and
> giving up the CPU when it shouldn't.

Apart from seeks, the disk has to spin until the correct
start sector is under the head. Then it can write a database
page, perhaps a few database pages.

There are a few parameters you can use to optimize this:
- PRAGMA page_size
- PRAGMA [default_]cache_size
- the number of INSERTs per transaction
- The schema: INDEX PRIMARY KEY on the first column
  instead of a non-unique index
  (if the application allows it)
- load the database in order of index(es)

Especially a non-unique index with low cardinality has a lot
of overhead.
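
For example, a quick sketch of the first three knobs, assuming the t1
table from the original post (the values are only illustrative, not
recommendations):

PRAGMA page_size = 4096;     -- must be set before the database has any content
PRAGMA cache_size = 10000;   -- pages kept in the in-memory page cache
BEGIN;
INSERT INTO t1 VALUES(1, 1, 'first row');   -- batch many INSERTs per transaction
INSERT INTO t1 VALUES(2, 2, 'second row');
COMMIT;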

>JP
-- 
  (  Kees Nuyt
  )
c[_]
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about searches

2009-06-16 Thread Igor Tandetnik
Christophe Leske  wrote:
> - how can SQlite be instructed to return search results which include
> a
> special character in it?
> E.g. you search literally for "Zurich" on an englisch system and
> expect
> "Zürich" to be in the result set.

You write your own comparison function that would consider these two 
strings equal. See sqlite3_create_function, sqlite3_create_collation.

> The next problem is that educated users might know that german Umlaute
> can be written out. The rules are simple:
>
> ä becomes "ae"
> ö becomes "oe"
> ü becomes "ue"
>
> So how would I go about filtering an educated user which looks for
> "Zuerich" and expects "Zürich" in the result set?

The same way.

> Best find on my behalf so far is to build a filter which replaces any
> occurence of "ae", "oe", "ue" with two placeholders ("%")

Why would you ever want two % in a row? A % matches zero or more of 
arbitrary characters. You might be thinking of an underscore _.

> So far ,  so good, but my client also expects ANY simplification of a
> character to be recognized:
> Cote d'azur for instance  should return "Côte d'azur"
> or the Sao Paulo issue - how can a search for "Sao Paulo" return "Sào
> Paulo" in the result set?

How are these examples different from previous ones?

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Jim Wilcoxson
You are doing transactions here, which is a very different thing from
normal disk I/O.  Your CPU is idle because it is waiting for the disk.
Your disk is idle because it is waiting for the platters to rotate
around again.  The best case you can achieve on a 7200 RPM disk is 120
transactions (commits) per second, since 7200 RPM is 120 revolutions
per second and each sync has to wait for at least one full revolution.

In practice, you will see much lower TPS rates, and they will
typically be 50% fractions.  For 1 sync per transaction, the max rate
is 120 TPS.  For 2 syncs per transaction, the max rate is 60 TPS.  For
3 syncs per transaction, the max rate is 30 TPS.  SQLite always does
at least 2 syncs per transaction, and sometimes 3, depending on the
options you use, so a transaction rate of 30 TPS is reasonable.

The only way you can speed this up is to get a disk that rotates
faster, ie, a 10K or 15K rpm drive will do faster transactions, but
even so, you are still limited to 250 TPS with a 15K drive.  Or, get a
battery-backup caching controller that will lie to your OS and tell it
that the data is on the media, when really it is only stored in the
controller's memory.  This allows the controller to combine write
requests to increase the transaction rate.

Jim

On 6/16/09, Jens Páll Hafsteinsson  wrote:
> Yes, I'm perfectly aware of this and hence I would expect the disk to be
> sweating like hell running this test while the CPU is relatively relaxed
> (given that sqlite is disk bound in this case and not CPU bound).
>
> But this is not happening; neither the disk nor the CPU are practically
> doing anything, which is a bit strange. It's as if both the disk and the CPU
> are waiting for each other or that sqlite is playing 'nice' behind my back
> and giving up the CPU when it shouldn't.
>
> JP
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Adam DeVita
> Sent: 16. júní 2009 14:06
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Database inserts gradually slowing down
>
> Remember the implications of Moore's law and how much time has passed.
>
> CPU speed is much faster than memory speed.
> Memory speed is much faster than disk access.
>
> This is why hardware folks play all sorts of tricks with pipelines, caches,
> interleaving, and parallelism.
>
> For a single process that interacts with the HDD, the HDD will be the bottle
> neck and the CPU will spend lots of time waiting for the rest of the machine
> to catch up.  Even if you have a RAID system, the CPU is still much faster
> than the bus the hard drives are on.
>
>
> On Tue, Jun 16, 2009 at 9:52 AM, Jens Páll Hafsteinsson
> wrote:
>
>> In step 5 I execute "delete from t1" without any where clause.
>>
>> I haven't monitored the disk space used (does sqlite use temporary files
>> beside the database file?) but the database file itself has been fixed in
>> size at around 12MB (12.461.056 bytes) the whole time.
>>
>> The load on the disk is minimal (sustained 1MB/s) and the CPU load is
>> about
>> 6%, which is a bit surprising since I thought I would be putting a huge
>> load
>> on the computer running a loop like this. I'm not at all happy to see
>> these
>> low load numbers given how the test is programmed (it should practically
>> own
>> the machine). The database should utilize the computer much better than
>> this.
>>
>> I've been running the test now for about 10 minutes using 3.6.15 and it
>> 'seems' to be behaving as before, slowly increasing in execution time. I
>> want to run this a bit longer to be absolutely sure and will let you know.
>>
>> JP
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org [mailto:
>> sqlite-users-boun...@sqlite.org] On Behalf Of Pavel Ivanov
>> Sent: 16. júní 2009 12:15
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] Database inserts gradually slowing down
>>
>> How do you do step 5? Like "delete from table" or "delete from table
>> where ..."? Do you see any degrade in disk space used by database
>> along with slowness?
>>
>> Pavel
>>
>> On Tue, Jun 16, 2009 at 8:06 AM, Jens Páll
>> Hafsteinsson wrote:
>> > Hi
>> >
>> > I've been running some tests against sqlite and have found that inserts
>> are gradually slowing down. Since I'm new to sqlite I might be doing
>> something wrong and thought I'd ask here if anyone has seen this before or
>> know what might be causing this.
>> >
>> > The test I'm running creates a database containing a single table (int,
>> int, varchar(100)) along with an index (not unique) on the first field. I
>> then perform the following operations (all records have unique data in the
>> first field):
>> >
>> >
>> > 1.   start a transaction
>> >
>> > 2.   insert 1000 records
>> >
>> > 3.   commit
>> >
>> > 4.   repeat steps 1-3 100 times
>> >
>> > 5.   delete everything from the table
>> >
>> > 6.   Start again at step 1
>> >
>> > The time taken to execute steps 1-4 increases gradually from taking
>> around 16 seconds the first time to j

Re: [sqlite] Datatypes (D. Richard Hipp)

2009-06-16 Thread D. Richard Hipp

>
>
> Ok, I can see this. But I cannot forsee the real implications  
> though. If I
> have a statement like
>
> create table mytable(pnumber integer, name varchar(20), primary
> key(pnumber))
>
> what are the reported datatypes? still integer and varchar(20)?


Yes.  But if you do

CREATE TABLE t2 AS SELECT * FROM mytable;

Then the reported datatypes for t2 will be INT and TEXT.
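
For anyone who wants to see this on their own build, a quick sketch using only
the tables from this thread (nothing new assumed):

CREATE TABLE mytable(pnumber integer, name varchar(20), primary key(pnumber));
CREATE TABLE t2 AS SELECT * FROM mytable;
PRAGMA table_info(mytable);  -- "type" column reports integer and varchar(20)
PRAGMA table_info(t2);       -- "type" column reports the simplified INT and TEXT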

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Datatypes (D. Richard Hipp)

2009-06-16 Thread A Drent
On Jun 16, 2009, at 8:47 AM, A Drent wrote:
>
> From the docs I read that for the new version:
>
> a.. When new tables are created using CREATE TABLE ... AS SELECT ...
> the
> datatype of the columns is the simplified SQLite datatype (TEXT,
> INT, REAL,
> NUMERIC, or BLOB) instead of a copy of the original datatype from
> the source
> table.
>
> I don't know why this has been done,


It was done to fix a bug.

SQLite allows arbitrary text as the "datatype" of a column.  So you
could say (for example):

CREATE TABLE t1(a "duh! ++ x hi, yall!(+123)" unique);

And the datatype for t1.a would be "duh! ++ x hi, yall!(+123)".  It
used to be that this datatype would be copied into the synthesized
CREATE TABLE statements for tables generated using CREATE TABLE AS.
But that required that the datatype name be properly quoted so that it
could be parsed again.  The logic that did this quoting was complex
and contained bugs.  It was much easier to strip out the whole thing
and replace it with a primitive datatype generator (that inserts "NUM"
in place of the complex string shown above) than to try to fix the
quoting for every possible case.


D. Richard Hipp
d...@hwaci.com

Ok, I can see this. But I cannot foresee the real implications though. If I 
have a statement like

create table mytable(pnumber integer, name varchar(20), primary 
key(pnumber))

what are the reported datatypes? still integer and varchar(20)?

regards,
albert

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Jens Páll Hafsteinsson
Yes, I'm perfectly aware of this and hence I would expect the disk to be 
sweating like hell running this test while the CPU is relatively relaxed (given 
that sqlite is disk bound in this case and not CPU bound).

But this is not happening; neither the disk nor the CPU is practically doing 
anything, which is a bit strange. It's as if the disk and the CPU are waiting 
for each other, or as if sqlite is playing 'nice' behind my back and giving up 
the CPU when it shouldn't.

JP

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Adam DeVita
Sent: 16. júní 2009 14:06
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database inserts gradually slowing down

Remember the implications of Moore's law and how much time has passed.

CPU speed is much faster than memory speed.
Memory speed is much faster than disk access.

This is why hardware folks play all sorts of tricks with pipelines, caches,
interleaving, and parallelism.

For a single process that interacts with the HDD, the HDD will be the bottle
neck and the CPU will spend lots of time waiting for the rest of the machine
to catch up.  Even if you have a RAID system, the CPU is still much faster
than the bus the hard drives are on.


On Tue, Jun 16, 2009 at 9:52 AM, Jens Páll Hafsteinsson
wrote:

> In step 5 I execute "delete from t1" without any where clause.
>
> I haven't monitored the disk space used (does sqlite use temporary files
> beside the database file?) but the database file itself has been fixed in
> size at around 12MB (12.461.056 bytes) the whole time.
>
> The load on the disk is minimal (sustained 1MB/s) and the CPU load is about
> 6%, which is a bit surprising since I thought I would be putting a huge load
> on the computer running a loop like this. I'm not at all happy to see these
> low load numbers given how the test is programmed (it should practically own
> the machine). The database should utilize the computer much better than
> this.
>
> I've been running the test now for about 10 minutes using 3.6.15 and it
> 'seems' to be behaving as before, slowly increasing in execution time. I
> want to run this a bit longer to be absolutely sure and will let you know.
>
> JP
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:
> sqlite-users-boun...@sqlite.org] On Behalf Of Pavel Ivanov
> Sent: 16. júní 2009 12:15
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Database inserts gradually slowing down
>
> How do you do step 5? Like "delete from table" or "delete from table
> where ..."? Do you see any degrade in disk space used by database
> along with slowness?
>
> Pavel
>
> On Tue, Jun 16, 2009 at 8:06 AM, Jens Páll
> Hafsteinsson wrote:
> > Hi
> >
> > I've been running some tests against sqlite and have found that inserts
> are gradually slowing down. Since I'm new to sqlite I might be doing
> something wrong and thought I'd ask here if anyone has seen this before or
> know what might be causing this.
> >
> > The test I'm running creates a database containing a single table (int,
> int, varchar(100)) along with an index (not unique) on the first field. I
> then perform the following operations (all records have unique data in the
> first field):
> >
> >
> > 1.   start a transaction
> >
> > 2.   insert 1000 records
> >
> > 3.   commit
> >
> > 4.   repeat steps 1-3 100 times
> >
> > 5.   delete everything from the table
> >
> > 6.   Start again at step 1
> >
> > The time taken to execute steps 1-4 increases gradually from taking
> around 16 seconds the first time to just over 28 seconds after about 2400
> iterations. To me, this is alarming since this time increase seems not to be
> asymptotic (not sure though, there is a slight curve on the graph and I
> haven't done any curve fitting) and I fear that I will end up with an
> unusable slow database after some time of use. Even if I shut down the test
> application and start again (on the same database and without re-creating
> the table), it just continues running as before it was stopped, that is,
> taking 28 seconds (and increasing) if I stop it after 2400 iterations.
> >
> > At first I ran this for shorter periods without an index and think I saw
> the same behavior, but I have to verify that to be sure.
> >
> > I'm using sqlite version 3.6.14 on Windows XP, compiled using Visual
> Studio 2008.
> >
> > If anyone is interested I can e-mail the collected data and the test
> program I'm using.
> >
> > Any pointers appreciated.
> >
> > Thanks,
> > JP
> >
> > -
> > "What you take away is more important than what you add." Chuck Moore
> >
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >
> ___
> sqlite-use

[sqlite] Question about searches

2009-06-16 Thread Christophe Leske
Hi there,

I have written an application which runs under German and English 
versions of Windows.
It includes a city database which ought to be searchable, yet there 
are a couple of issues which are of a more logical nature...

My shell application surrounding the sqlite database only supports the 
ANSI charset (no Unicode), yet the underlying SQlite database has been 
created with UTF-8 support.

So far so good.

I got complaints from my client here that eccentric city names like "Sào 
Paulo" cannot be found.
Further digging revealed two potential problems:

- how can SQlite be instructed to return search results which include a 
special character in it?
E.g. you search literally for "Zurich" on an English system and expect 
"Zürich" to be in the result set.

The next problem is that educated users might know that German umlauts 
can be written out. The rules are simple:

ä becomes "ae"
ö becomes "oe"
ü becomes "ue"

So how would I go about handling an educated user who searches for 
"Zuerich" and expects "Zürich" in the result set?

The best I have come up with so far is to build a filter which replaces any 
occurrence of "ae", "oe", "ue" with two placeholders ("%"), which would 
effectively lead to a search of the type

select * from cities where name like "Z%%rich"


So far, so good, but my client also expects ANY simplification of a 
character to be recognized:
Cote d'azur, for instance, should return "Côte d'azur"
or the Sao Paulo issue - how can a search for "Sao Paulo" return "Sào 
Paulo" in the result set?

Please note that I am using the normal command line sqlite3.exe application. I 
already started looking into soundex(), but unfortunately it does not seem to 
be compiled into the normal command line executable. I also doubt that it would 
help.
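
(For completeness: as far as I can tell soundex() is only compiled in when
SQLite is built with -DSQLITE_SOUNDEX, which would explain why the stock
sqlite3.exe does not have it. On such a build the idea would be something like:

SELECT soundex('Zurich'), soundex('Zuerich');
-- both should come out as Z620 with the standard algorithm, so it could
-- catch the "ue" spelling, but it is tuned for English-style names.)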

Any help much appreciated, 

Christophe Leske

www.multimedial.de - i...@multimedial.de
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
+49(0)180102 - 06 60 02 96 // +49(0)177 249 70 31

This email may contain confidential information. If you are not the intended 
recipient, it would be appreciated that you delete it and notify the sender. 
Thank you.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Adam DeVita
Remember the implications of Moore's law and how much time has passed.

CPU speed is much faster than memory speed.
Memory speed is much faster than disk access.

This is why hardware folks play all sorts of tricks with pipelines, caches,
interleaving, and parallelism.

For a single process that interacts with the HDD, the HDD will be the bottleneck
and the CPU will spend lots of time waiting for the rest of the machine
to catch up.  Even if you have a RAID system, the CPU is still much faster
than the bus the hard drives are on.


On Tue, Jun 16, 2009 at 9:52 AM, Jens Páll Hafsteinsson
wrote:

> In step 5 I execute "delete from t1" without any where clause.
>
> I haven't monitored the disk space used (does sqlite use temporary files
> beside the database file?) but the database file itself has been fixed in
> size at around 12MB (12.461.056 bytes) the whole time.
>
> The load on the disk is minimal (sustained 1MB/s) and the CPU load is about
> 6%, which is a bit surprising since I thought I would be putting a huge load
> on the computer running a loop like this. I'm not at all happy to see these
> low load numbers given how the test is programmed (it should practically own
> the machine). The database should utilize the computer much better than
> this.
>
> I've been running the test now for about 10 minutes using 3.6.15 and it
> 'seems' to be behaving as before, slowly increasing in execution time. I
> want to run this a bit longer to be absolutely sure and will let you know.
>
> JP
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:
> sqlite-users-boun...@sqlite.org] On Behalf Of Pavel Ivanov
> Sent: 16. júní 2009 12:15
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Database inserts gradually slowing down
>
> How do you do step 5? Like "delete from table" or "delete from table
> where ..."? Do you see any degrade in disk space used by database
> along with slowness?
>
> Pavel
>
> On Tue, Jun 16, 2009 at 8:06 AM, Jens Páll
> Hafsteinsson wrote:
> > Hi
> >
> > I've been running some tests against sqlite and have found that inserts
> are gradually slowing down. Since I'm new to sqlite I might be doing
> something wrong and thought I'd ask here if anyone has seen this before or
> know what might be causing this.
> >
> > The test I'm running creates a database containing a single table (int,
> int, varchar(100)) along with an index (not unique) on the first field. I
> then perform the following operations (all records have unique data in the
> first field):
> >
> >
> > 1.   start a transaction
> >
> > 2.   insert 1000 records
> >
> > 3.   commit
> >
> > 4.   repeat steps 1-3 100 times
> >
> > 5.   delete everything from the table
> >
> > 6.   Start again at step 1
> >
> > The time taken to execute steps 1-4 increases gradually from taking
> around 16 seconds the first time to just over 28 seconds after about 2400
> iterations. To me, this is alarming since this time increase seems not to be
> asymptotic (not sure though, there is a slight curve on the graph and I
> haven't done any curve fitting) and I fear that I will end up with an
> unusable slow database after some time of use. Even if I shut down the test
> application and start again (on the same database and without re-creating
> the table), it just continues running as before it was stopped, that is,
> taking 28 seconds (and increasing) if I stop it after 2400 iterations.
> >
> > At first I ran this for shorter periods without an index and think I saw
> the same behavior, but I have to verify that to be sure.
> >
> > I'm using sqlite version 3.6.14 on Windows XP, compiled using Visual
> Studio 2008.
> >
> > If anyone is interested I can e-mail the collected data and the test
> program I'm using.
> >
> > Any pointers appreciated.
> >
> > Thanks,
> > JP
> >
> > -
> > "What you take away is more important than what you add." Chuck Moore
> >
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
VerifEye Technologies Inc.
905-948-0015x245
7100 Warden Ave, Unit 3
Markham ON, L3R 8B5
Canada
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Jens Páll Hafsteinsson
In step 5 I execute "delete from t1" without any where clause.

I haven't monitored the disk space used (does sqlite use temporary files beside 
the database file?) but the database file itself has been fixed in size at 
around 12MB (12.461.056 bytes) the whole time.

The load on the disk is minimal (sustained 1MB/s) and the CPU load is about 6%, 
which is a bit surprising since I thought I would be putting a huge load on the 
computer running a loop like this. I'm not at all happy to see these low load 
numbers given how the test is programmed (it should practically own the 
machine). The database should utilize the computer much better than this.

I've been running the test now for about 10 minutes using 3.6.15 and it 'seems' 
to be behaving as before, slowly increasing in execution time. I want to run 
this a bit longer to be absolutely sure and will let you know.

JP

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Pavel Ivanov
Sent: 16. júní 2009 12:15
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database inserts gradually slowing down

How do you do step 5? Like "delete from table" or "delete from table
where ..."? Do you see any degrade in disk space used by database
along with slowness?

Pavel

On Tue, Jun 16, 2009 at 8:06 AM, Jens Páll
Hafsteinsson wrote:
> Hi
>
> I've been running some tests against sqlite and have found that inserts are 
> gradually slowing down. Since I'm new to sqlite I might be doing something 
> wrong and thought I'd ask here if anyone has seen this before or know what 
> might be causing this.
>
> The test I'm running creates a database containing a single table (int, int, 
> varchar(100)) along with an index (not unique) on the first field. I then 
> perform the following operations (all records have unique data in the first 
> field):
>
>
> 1.       start a transaction
>
> 2.       insert 1000 records
>
> 3.       commit
>
> 4.       repeat steps 1-3 100 times
>
> 5.       delete everything from the table
>
> 6.       Start again at step 1
>
> The time taken to execute steps 1-4 increases gradually from taking around 16 
> seconds the first time to just over 28 seconds after about 2400 iterations. 
> To me, this is alarming since this time increase seems not to be asymptotic 
> (not sure though, there is a slight curve on the graph and I haven't done any 
> curve fitting) and I fear that I will end up with an unusable slow database 
> after some time of use. Even if I shut down the test application and start 
> again (on the same database and without re-creating the table), it just 
> continues running as before it was stopped, that is, taking 28 seconds (and 
> increasing) if I stop it after 2400 iterations.
>
> At first I ran this for shorter periods without an index and think I saw the 
> same behavior, but I have to verify that to be sure.
>
> I'm using sqlite version 3.6.14 on Windows XP, compiled using Visual Studio 
> 2008.
>
> If anyone is interested I can e-mail the collected data and the test program 
> I'm using.
>
> Any pointers appreciated.
>
> Thanks,
> JP
>
> -
> "What you take away is more important than what you add." Chuck Moore
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Jens Páll Hafsteinsson
No, but I downloaded 3.6.15 and am running it through the test now. Will let 
you know the results.

JP

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of hiral
Sent: 16. júní 2009 12:10
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database inserts gradually slowing down

Sorry, I got it.
Did you try with 3.6.14.2?

Thank you.
-Hiral

On Tue, Jun 16, 2009 at 5:39 PM, hiral  wrote:

> Hi,
>
> Can you please let me know which version of sqlite, are you using?
>
> Thank you.
> -Hiral
>
>   On Tue, Jun 16, 2009 at 5:36 PM, Jens Páll Hafsteinsson <
> j...@lsretail.com> wrote:
>
>> Hi
>>
>> I've been running some tests against sqlite and have found that inserts
>> are gradually slowing down. Since I'm new to sqlite I might be doing
>> something wrong and thought I'd ask here if anyone has seen this before or
>> know what might be causing this.
>>
>> The test I'm running creates a database containing a single table (int,
>> int, varchar(100)) along with an index (not unique) on the first field. I
>> then perform the following operations (all records have unique data in the
>> first field):
>>
>>
>> 1.   start a transaction
>>
>> 2.   insert 1000 records
>>
>> 3.   commit
>>
>> 4.   repeat steps 1-3 100 times
>>
>> 5.   delete everything from the table
>>
>> 6.   Start again at step 1
>>
>> The time taken to execute steps 1-4 increases gradually from taking around
>> 16 seconds the first time to just over 28 seconds after about 2400
>> iterations. To me, this is alarming since this time increase seems not to be
>> asymptotic (not sure though, there is a slight curve on the graph and I
>> haven't done any curve fitting) and I fear that I will end up with an
>> unusable slow database after some time of use. Even if I shut down the test
>> application and start again (on the same database and without re-creating
>> the table), it just continues running as before it was stopped, that is,
>> taking 28 seconds (and increasing) if I stop it after 2400 iterations.
>>
>> At first I ran this for shorter periods without an index and think I saw
>> the same behavior, but I have to verify that to be sure.
>>
>> I'm using sqlite version 3.6.14 on Windows XP, compiled using Visual
>> Studio 2008.
>>
>> If anyone is interested I can e-mail the collected data and the test
>> program I'm using.
>>
>> Any pointers appreciated.
>>
>> Thanks,
>> JP
>>
>> -
>> "What you take away is more important than what you add." Chuck Moore
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Datatypes

2009-06-16 Thread D. Richard Hipp

On Jun 16, 2009, at 8:47 AM, A Drent wrote:
>
> From the docs I read that for the new version:
>
> a.. When new tables are created using CREATE TABLE ... AS SELECT ...  
> the
> datatype of the columns is the simplified SQLite datatype (TEXT,  
> INT, REAL,
> NUMERIC, or BLOB) instead of a copy of the original datatype from  
> the source
> table.
>
> I don't know why this has been done,


It was done to fix a bug.

SQLite allows arbitrary text as the "datatype" of a column.  So you  
could say (for example):

CREATE TABLE t1(a "duh! ++ x hi, yall!(+123)" unique);

And the datatype for t1.a would be "duh! ++ x hi, yall!(+123)".  It  
used to be that this datatype would be copied into the synthesized  
CREATE TABLE statements for tables generated using CREATE TABLE AS.   
But that required that the datatype name be properly quoted so that it  
could be parsed again.  The logic that did this quoting was complex  
and contained bugs.  It was much easier to strip out the whole thing  
and replace it with a primitive datatype generator (that inserts "NUM"  
in place of the complex string shown above) than to try to fix the  
quoting for every possible case.
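
As a concrete (and deliberately silly) illustration, using only the statement
above:

CREATE TABLE t1(a "duh! ++ x hi, yall!(+123)" unique);
CREATE TABLE t2 AS SELECT * FROM t1;
SELECT sql FROM sqlite_master WHERE name = 't2';
-- older versions tried to copy (and re-quote) the odd declared type into the
-- synthesized CREATE TABLE for t2, which is where the quoting bugs lived;
-- current versions write a simplified type instead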


D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Datatypes

2009-06-16 Thread John Machin
On 16/06/2009 10:47 PM, A Drent wrote:
> Sorry, something went wrong on the previous post.

*AND* on this one; you are starting a new topic but you included about 
900 lines from today's digest!!

> From the docs I read that for the new version:
> 
> a.. When new tables are created using CREATE TABLE ... AS SELECT ... the 
> datatype of the columns is the simplified SQLite datatype (TEXT, INT, REAL, 
> NUMERIC, or BLOB) instead of a copy of the original datatype from the source 
> table.

What source table? There may be 0, 1, or many tables involved. E.g.

create table foo as select 1, 2.34, 'hello', x'f00baa', null;

> I don't know why this has been done, as far as I can tell this will cause 
> trouble in several wrappers. I.e. the Delphi wrappers depend on regular 
> datatypes like varchar(xx), number etc. If SQLite will 'translate' this into 
> other datatypes this will cause inconsistancies. If this is necessary why 
> not then just allow the primitive datatypes within the 'create'? Then the 
> reported datatype will be the same as the datatypes uses within the create. 
> What will happen on a create table when the datatype is other then the 
> primitives? How does SQLite translate these? Or am I completely 
> misundertanding things?

Possibly. To save us from misunderstanding you, show us a concrete 
example of something going wrong (in your opinion).

> 
> albert

[BIG SNIP]
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Datatypes

2009-06-16 Thread Igor Tandetnik
A Drent wrote:
> a.. When new tables are created using CREATE TABLE ... AS SELECT ...
> the datatype of the columns is the simplified SQLite datatype (TEXT,
> INT, REAL, NUMERIC, or BLOB) instead of a copy of the original
> datatype from the source table.
>
> I don't know why this has been done

Because, in general, there ain't no such thing as a "source table". 
Consider:

create table newTable as
select 1, a+1, b+c, case when d then e else f end
from oldTable1 join oldTable2 on someCondition;

> If this is necessary why not then just allow the
> primitive datatypes within the 'create'?

I'm not sure I understand. You can use any datatypes you want in CREATE 
TABLE, just not in CREATE TABLE AS SELECT. If you need a table with 
particular column types, create it the way you want, then use INSERT ... 
SELECT to populate it.
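
A minimal sketch of that pattern (the table and column names are only
placeholders):

CREATE TABLE newTable(
  id    INTEGER,
  total NUMBER,
  label VARCHAR(20)
);
INSERT INTO newTable(id, total, label)
  SELECT a, b + c, d FROM oldTable1;   -- placeholder source table and columns

The declared types of newTable then stay exactly as written, which is what the
wrappers mentioned earlier expect.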

Igor Tandetnik



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Datatypes

2009-06-16 Thread A Drent
Sorry, something went wrong on the previous post.

From the docs I read that for the new version:

a.. When new tables are created using CREATE TABLE ... AS SELECT ... the 
datatype of the columns is the simplified SQLite datatype (TEXT, INT, REAL, 
NUMERIC, or BLOB) instead of a copy of the original datatype from the source 
table.

I don't know why this has been done; as far as I can tell this will cause 
trouble in several wrappers. I.e. the Delphi wrappers depend on regular 
datatypes like varchar(xx), number etc. If SQLite will 'translate' this into 
other datatypes this will cause inconsistencies. If this is necessary, why 
not then just allow the primitive datatypes within the 'create'? Then the 
reported datatype will be the same as the datatypes used within the create. 
What will happen on a create table when the datatype is other than the 
primitives? How does SQLite translate these? Or am I completely 
misunderstanding things?

albert

- Original Message - 
From: 
To: 
Sent: Tuesday, June 16, 2009 2:38 PM
Subject: sqlite-users Digest, Vol 18, Issue 63


> Send sqlite-users mailing list submissions to
> sqlite-users@sqlite.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> or, via email, send a message with subject or body 'help' to
> sqlite-users-requ...@sqlite.org
>
> You can reach the person managing the list at
> sqlite-users-ow...@sqlite.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of sqlite-users digest..."
>
>
> Today's Topics:
>
>   1. Database inserts gradually slowing down (Jens Páll Hafsteinsson)
>   2. Re: Database inserts gradually slowing down (hiral)
>   3. Re: Database inserts gradually slowing down (hiral)
>   4. Re: Database inserts gradually slowing down (Pavel Ivanov)
>   5. Re: SQlite3 - SQL injection using ruby (John Elrick)
>   6. Datatypes (A Drent)
>
>
> --
>
> Message: 1
> Date: Tue, 16 Jun 2009 12:06:33 +
> From: Jens Páll Hafsteinsson 
> Subject: [sqlite] Database inserts gradually slowing down
> To: "sqlite-users@sqlite.org" 
> Message-ID:
> <66ed19f93cca594894a9bd4c9ac5b3a32c83bc1...@palli.last.local>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi
>
> I've been running some tests against sqlite and have found that inserts 
> are gradually slowing down. Since I'm new to sqlite I might be doing 
> something wrong and thought I'd ask here if anyone has seen this before or 
> know what might be causing this.
>
> The test I'm running creates a database containing a single table (int, 
> int, varchar(100)) along with an index (not unique) on the first field. I 
> then perform the following operations (all records have unique data in the 
> first field):
>
>
> 1.   start a transaction
>
> 2.   insert 1000 records
>
> 3.   commit
>
> 4.   repeat steps 1-3 100 times
>
> 5.   delete everything from the table
>
> 6.   Start again at step 1
>
> The time taken to execute steps 1-4 increases gradually from taking around 
> 16 seconds the first time to just over 28 seconds after about 2400 
> iterations. To me, this is alarming since this time increase seems not to 
> be asymptotic (not sure though, there is a slight curve on the graph and I 
> haven't done any curve fitting) and I fear that I will end up with an 
> unusable slow database after some time of use. Even if I shut down the 
> test application and start again (on the same database and without 
> re-creating the table), it just continues running as before it was 
> stopped, that is, taking 28 seconds (and increasing) if I stop it after 
> 2400 iterations.
>
> At first I ran this for shorter periods without an index and think I saw 
> the same behavior, but I have to verify that to be sure.
>
> I'm using sqlite version 3.6.14 on Windows XP, compiled using Visual 
> Studio 2008.
>
> If anyone is interested I can e-mail the collected data and the test 
> program I'm using.
>
> Any pointers appreciated.
>
> Thanks,
> JP
>
> -
> "What you take away is more important than what you add." Chuck Moore
>
>
>
> --
>
> Message: 2
> Date: Tue, 16 Jun 2009 17:39:01 +0530
> From: hiral 
> Subject: Re: [sqlite] Database inserts gradually slowing down
> To: General Discussion of SQLite Database 
> Message-ID:
> 
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi,
>
> Can you please let me know which version of sqlite, are you using?
>
> Thank you.
> -Hiral
>
> On Tue, Jun 16, 2009 at 5:36 PM, Jens Páll Hafsteinsson
> wrote:
>
>> Hi
>>
>> I've been running some tests against sqlite and have found that inserts 
>> are
>> gradually slowing down. Since I'm new to sqlite I might be doing 
>> something
>> wrong and thought I'd ask here if anyone has seen this before or know 
>> what
>> might be causing this

[sqlite] Datatypes

2009-06-16 Thread A Drent
From the new version I read:

- Original Message - 
From: 
To: 
Sent: Tuesday, June 16, 2009 2:00 PM
Subject: sqlite-users Digest, Vol 18, Issue 62


> Send sqlite-users mailing list submissions to
> sqlite-users@sqlite.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> or, via email, send a message with subject or body 'help' to
> sqlite-users-requ...@sqlite.org
>
> You can reach the person managing the list at
> sqlite-users-ow...@sqlite.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of sqlite-users digest..."
>
>
> Today's Topics:
>
>   1. Re: sqlite3_step performance degredation (Ken)
>   2. Re: sqlite3_step performance degredation (Simon Slavin)
>   3. Re: sqlite3_step performance degredation (Jim Wilcoxson)
>   4. Re: SQlite3 - SQL injection using ruby (dave lilley)
>   5. Bug in retrieving last rowid? (hartwig.wiesm...@online.nl)
>   6. Sqlite-3.5.9: getting sqlite_autoindex error (h o)
>
>
> --
>
> Message: 1
> Date: Mon, 15 Jun 2009 14:33:04 -0700 (PDT)
> From: Ken 
> Subject: Re: [sqlite] sqlite3_step performance degredation
> To: General Discussion of SQLite Database 
> Message-ID: <780955.31349...@web81003.mail.mud.yahoo.com>
> Content-Type: text/plain; charset=iso-8859-1
>
>
> Also is there an index on the table B.ID field?
>
> --- On Mon, 6/15/09, Mike Borland  wrote:
>
>> From: Mike Borland 
>> Subject: Re: [sqlite] sqlite3_step performance degredation
>> To: "General Discussion of SQLite Database" 
>> Date: Monday, June 15, 2009, 4:11 PM
>> Nuno, unfortunately your psychic
>> skills are a bit off on this one.  Sorry I wasn't more
>> explicit.  I am not using any LIMIT or OFFSET to do any
>> virtual scrolling.  Basically I have table A which has
>> 900 rows.  Table B has 180,000 rows (900 * 200) which
>> has a foreign key relationship back to table A.  So for
>> each row in table A, there are 200 rows in table B.  My
>> query is basically a "SELECT * FROM Table B WHERE ID =
>> TableA.ID".  I'm executing this query 900 times, once
>> for each row in table A.
>>
>> When I start the 900 read iterations (always in the same
>> order), the first one generally reads in about 50ms and by
>> the last read, it's taking roughly 1000ms.  Sometimes
>> it slows down immediately, sometimes after the 100th
>> iteration.  The only absolutely reproducible aspect is
>> that it always slows down eventually and once it slows down,
>> it never speeds back up.  I don't believe it's a
>> locking issue since my timer doesn't start until the query
>> is successfully executed.
>>
>> Any ideas?  Would the occasional write operation in
>> the midst of these reads cause any permanent slow down to
>> the read time?  Thanks.
>>
>> Mike Borland
>>
>> -Original Message-
>> From: Nuno Lucas [mailto:ntlu...@gmail.com]
>>
>> Sent: Friday, June 12, 2009 7:16 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] sqlite3_step performance degredation
>>
>> On Sat, Jun 13, 2009 at 1:52 AM, Mike
>> Borland
>> wrote:
>> > I have a fairly complex program where I am seeing a
>> performance
>> > degradation of the sqlite3_step() function.
>> Basically I'm iterating
>> > roughly 200 rows at a time, over and over.  The only
>> work happening
>> > during the iteration is I'm copying the record into an
>> array.  At first,
>> > sqlite3_step() takes less than a millisecond to run.
>> After 0-50
>> > iterations, it's taking anywhere from 10-100ms.
>> >
>> > Does anybody have any insight into what's happening
>> behind the scenes
>> > with this function to help me track down the cause?
>> I appreciate it!
>>
>> You should explicitly say what your SQL query is. Without
>> that we can
>> only guess.
>>
>> My current "psychic" guess is that you are using LIMIT to
>> obtain those
>> 200 rows, one "page" at a time, and as you go advancing
>> "pages" it
>> becomes slower and slower.
>> If this is true, then you should re-think your design as
>> LIMIT just
>> skips the rows, but it will "generate" them before,
>> meaning it
>> becomes slower as you advance on the offset given.
>> Look at http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor
>> (and
>> notice the "What not to do" at the end, talking about
>> "LIMIT" and
>> "OFFSET").
>>
>> If my psychic abilities are becoming weak, then please
>> supply your
>> exact query that is getting slower (and maybe your
>> database schema)
>> and then someone can give you an exact answer.
>>
>>
>> Regards,
>> ~Nuno Lucas
>>
>> >
>> > Mike Borland
>> >
>> > ___
>> > sqlite-users mailing list
>> > sqlite-users@sqlite.org
>> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>> >
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/m

Re: [sqlite] SQlite3 - SQL injection using ruby

2009-06-16 Thread John Elrick
dave lilley wrote:
> Many thanks to all who have replied,
>
> I now understand the difference and shall use that approach to creating my
> queries.
>
> regarding the "ruby way" it was more how I saw saving code typing by
> injection different table, field and user data into one query thus saving
> typing.
>
> BUT in the interests of eliminating/ reducing SQL injection I shall pass the
> table name to my method and test against it to select which query to use
> against that table and use the SQLite3 binding method.
>
> e.g.
> def makesql (tablename, uservar)
> case tablename
>
> when customers == tablename
>stmt = select * from customers where cust_nos = ?"
>
> when jobs == tablename
>   stmt = 
>
> end
>
> row = db.execute(stmt,uservar)
>
>   

Unless your user is typing in the table name also, you don't have to go 
that far. Interpolation is fine for system defined variables. 


John
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Pavel Ivanov
How do you do step 5? Like "delete from table" or "delete from table
where ..."? Do you see any degrade in disk space used by database
along with slowness?

Pavel

On Tue, Jun 16, 2009 at 8:06 AM, Jens Páll
Hafsteinsson wrote:
> Hi
>
> I've been running some tests against sqlite and have found that inserts are 
> gradually slowing down. Since I'm new to sqlite I might be doing something 
> wrong and thought I'd ask here if anyone has seen this before or know what 
> might be causing this.
>
> The test I'm running creates a database containing a single table (int, int, 
> varchar(100)) along with an index (not unique) on the first field. I then 
> perform the following operations (all records have unique data in the first 
> field):
>
>
> 1.       start a transaction
>
> 2.       insert 1000 records
>
> 3.       commit
>
> 4.       repeat steps 1-3 100 times
>
> 5.       delete everything from the table
>
> 6.       Start again at step 1
>
> The time taken to execute steps 1-4 increases gradually from taking around 16 
> seconds the first time to just over 28 seconds after about 2400 iterations. 
> To me, this is alarming since this time increase seems not to be asymptotic 
> (not sure though, there is a slight curve on the graph and I haven't done any 
> curve fitting) and I fear that I will end up with an unusable slow database 
> after some time of use. Even if I shut down the test application and start 
> again (on the same database and without re-creating the table), it just 
> continues running as before it was stopped, that is, taking 28 seconds (and 
> increasing) if I stop it after 2400 iterations.
>
> At first I ran this for shorter periods without an index and think I saw the 
> same behavior, but I have to verify that to be sure.
>
> I'm using sqlite version 3.6.14 on Windows XP, compiled using Visual Studio 
> 2008.
>
> If anyone is interested I can e-mail the collected data and the test program 
> I'm using.
>
> Any pointers appreciated.
>
> Thanks,
> JP
>
> -
> "What you take away is more important than what you add." Chuck Moore
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread hiral
Sorry, I got it.
Did you try with 3.6.14.2?

Thank you.
-Hiral

On Tue, Jun 16, 2009 at 5:39 PM, hiral  wrote:

> Hi,
>
> Can you please let me know which version of sqlite, are you using?
>
> Thank you.
> -Hiral
>
>   On Tue, Jun 16, 2009 at 5:36 PM, Jens Páll Hafsteinsson <
> j...@lsretail.com> wrote:
>
>> Hi
>>
>> I've been running some tests against sqlite and have found that inserts
>> are gradually slowing down. Since I'm new to sqlite I might be doing
>> something wrong and thought I'd ask here if anyone has seen this before or
>> know what might be causing this.
>>
>> The test I'm running creates a database containing a single table (int,
>> int, varchar(100)) along with an index (not unique) on the first field. I
>> then perform the following operations (all records have unique data in the
>> first field):
>>
>>
>> 1.   start a transaction
>>
>> 2.   insert 1000 records
>>
>> 3.   commit
>>
>> 4.   repeat steps 1-3 100 times
>>
>> 5.   delete everything from the table
>>
>> 6.   Start again at step 1
>>
>> The time taken to execute steps 1-4 increases gradually from taking around
>> 16 seconds the first time to just over 28 seconds after about 2400
>> iterations. To me, this is alarming since this time increase seems not to be
>> asymptotic (not sure though, there is a slight curve on the graph and I
>> haven't done any curve fitting) and I fear that I will end up with an
>> unusable slow database after some time of use. Even if I shut down the test
>> application and start again (on the same database and without re-creating
>> the table), it just continues running as before it was stopped, that is,
>> taking 28 seconds (and increasing) if I stop it after 2400 iterations.
>>
>> At first I ran this for shorter periods without an index and think I saw
>> the same behavior, but I have to verify that to be sure.
>>
>> I'm using sqlite version 3.6.14 on Windows XP, compiled using Visual
>> Studio 2008.
>>
>> If anyone is interested I can e-mail the collected data and the test
>> program I'm using.
>>
>> Any pointers appreciated.
>>
>> Thanks,
>> JP
>>
>> -
>> "What you take away is more important than what you add." Chuck Moore
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread hiral
Hi,

Can you please let me know which version of sqlite you are using?

Thank you.
-Hiral

On Tue, Jun 16, 2009 at 5:36 PM, Jens Páll Hafsteinsson
wrote:

> Hi
>
> I've been running some tests against sqlite and have found that inserts are
> gradually slowing down. Since I'm new to sqlite I might be doing something
> wrong and thought I'd ask here if anyone has seen this before or know what
> might be causing this.
>
> The test I'm running creates a database containing a single table (int,
> int, varchar(100)) along with an index (not unique) on the first field. I
> then perform the following operations (all records have unique data in the
> first field):
>
>
> 1.   start a transaction
>
> 2.   insert 1000 records
>
> 3.   commit
>
> 4.   repeat steps 1-3 100 times
>
> 5.   delete everything from the table
>
> 6.   Start again at step 1
>
> The time taken to execute steps 1-4 increases gradually from taking around
> 16 seconds the first time to just over 28 seconds after about 2400
> iterations. To me, this is alarming since this time increase seems not to be
> asymptotic (not sure though, there is a slight curve on the graph and I
> haven't done any curve fitting) and I fear that I will end up with an
> unusable slow database after some time of use. Even if I shut down the test
> application and start again (on the same database and without re-creating
> the table), it just continues running as before it was stopped, that is,
> taking 28 seconds (and increasing) if I stop it after 2400 iterations.
>
> At first I ran this for shorter periods without an index and think I saw
> the same behavior, but I have to verify that to be sure.
>
> I'm using sqlite version 3.6.14 on Windows XP, compiled using Visual Studio
> 2008.
>
> If anyone is interested I can e-mail the collected data and the test
> program I'm using.
>
> Any pointers appreciated.
>
> Thanks,
> JP
>
> -
> "What you take away is more important than what you add." Chuck Moore
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Database inserts gradually slowing down

2009-06-16 Thread Jens Páll Hafsteinsson
Hi

I've been running some tests against sqlite and have found that inserts are 
gradually slowing down. Since I'm new to sqlite I might be doing something 
wrong and thought I'd ask here if anyone has seen this before or know what 
might be causing this.

The test I'm running creates a database containing a single table (int, int, 
varchar(100)) along with an index (not unique) on the first field. I then 
perform the following operations (all records have unique data in the first 
field); a rough SQL sketch of one pass follows the list:


1.   start a transaction

2.   insert 1000 records

3.   commit

4.   repeat steps 1-3 100 times

5.   delete everything from the table

6.   Start again at step 1
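
In SQL terms one pass looks roughly like this (a sketch only; the column names
are made up, the real records have unique values in the first column):

CREATE TABLE t1(a INT, b INT, c VARCHAR(100));
CREATE INDEX t1_a ON t1(a);   -- not unique

BEGIN;
INSERT INTO t1 VALUES(1, 1, 'some 100-char payload');
-- ... 999 more inserts, each with a new value in the first column ...
COMMIT;
-- the BEGIN/INSERT/COMMIT block runs 100 times, then:
DELETE FROM t1;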

The time taken to execute steps 1-4 increases gradually from taking around 16 
seconds the first time to just over 28 seconds after about 2400 iterations. To 
me, this is alarming since this time increase seems not to be asymptotic (not 
sure though, there is a slight curve on the graph and I haven't done any curve 
fitting) and I fear that I will end up with an unusably slow database after 
some time of use. Even if I shut down the test application and start again (on 
the same database and without re-creating the table), it just continues running 
as before it was stopped, that is, taking 28 seconds (and increasing) if I stop 
it after 2400 iterations.

At first I ran this for shorter periods without an index and think I saw the 
same behavior, but I have to verify that to be sure.

I'm using sqlite version 3.6.14 on Windows XP, compiled using Visual Studio 
2008.

If anyone is interested I can e-mail the collected data and the test program 
I'm using.

Any pointers appreciated.

Thanks,
JP

-
"What you take away is more important than what you add." Chuck Moore

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Sqlite-3.5.9: getting sqlite_autoindex error

2009-06-16 Thread h o
Hi,

I am using sqlite-3.5.9 and observing a 'disk image malformed' error on
nfs; on doing 'PRAGMA integrity_check' I got the following messages...

SQLite version 3.5.9
Enter ".help" for instructions
sqlite> PRAGMA integrity_check;
*** in database main ***
Page 5275 is never used
wrong # of entries in index sqlite_autoindex__1
sqlite>

Can you please let me know what is the problem here.

Thank you.
-Hiral
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Bug in retrieving last rowid?

2009-06-16 Thread hartwig . wiesmann
I have created tables A & AS_FTS

"create table A (id integer primary key, string Text);"

"create virtual table AS_FTS (Name);"

and a trigger

"insert into A_FTS (rowid,Name) values (New.%@,New.%@);"

(and a delete trigger, not shown).

Now, I enter two strings into the table: one and two. I delete table entry one 
and afterwards insert one again two times. Finally I check the last inserted 
rowid using "sqlite_last_insert_rowid". Unfortunately, this is wrong.

BTW: other non-FTS insert triggers seem to work. I am using 3.6.14.

Any ideas?

Hartwig
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users