Re: [sqlite] Support for clustered indexing (or something similar)?

2008-10-26 Thread Dan

On Oct 27, 2008, at 12:38 PM, Julian Bui wrote:

> Thanks for the reply dan.
>
>> You probably don't "need" clustered indexing as such, but this would be
>> the kind of case where it provides some advantages. You can get the same
>> effect in SQLite by including all the data columns in your index
>> definition.
>
> Unfortunately, because I will be making very frequent inserts (every
> 10-100ms), an index spanning all columns would be much too expensive and
> would hurt insert times.

Have you been able to verify this experimentally? Predicting performance
is a tricky exercise.
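One way to run that experiment, sketched here with Python's built-in sqlite3 module (the table and index names are borrowed from the example earlier in this thread; the two timings are what matter, not the absolute numbers, and real results will depend on your schema and hardware):

```python
import sqlite3
import time

def bench(index_sql, n=20000):
    # Fresh in-memory database per run so the two timings are comparable.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE t1(time TIMESTAMP, data1, data2, data3)")
    db.execute(index_sql)
    rows = [(i, i * 2, i * 3, "payload") for i in range(n)]
    start = time.perf_counter()
    with db:  # one transaction, as a batched insert loop would use
        db.executemany("INSERT INTO t1 VALUES (?, ?, ?, ?)", rows)
    return time.perf_counter() - start

narrow = bench("CREATE INDEX time_idx ON t1(time)")
wide = bench("CREATE INDEX time_idx ON t1(time, data1, data2, data3)")
print("narrow: %.3fs  wide: %.3fs" % (narrow, wide))
```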

Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Support for clustered indexing (or something similar)?

2008-10-26 Thread Julian Bui
Thanks for the reply dan.

> You probably don't "need" clustered indexing as such, but this would be
> the kind of case where it provides some advantages. You can get the same
> effect in SQLite by including all the data columns in your index
> definition.
Unfortunately, because I will be making very frequent inserts (every
10-100ms), an index spanning all columns would be much too expensive and
would hurt insert times.

Julian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Support for clustered indexing (or something similar)?

2008-10-26 Thread Dan

On Oct 26, 2008, at 5:15 PM, Julian Bui wrote:

> Hi all,
>
> I have records w/ a timestamp attribute which is not unique and cannot be
> used as a primary key.  These records will be inserted according to
> timestamp value.  From this important fact, I've gathered I need a clustered
> index since my SELECT statements use a time-range in the WHERE clause.
>
> Does sqlite support clustered indexing or something that would take
> advantage of nature of my records and the nature of how they're being
> inserted?

You probably don't "need" clustered indexing as such, but this would be
the kind of case where it provides some advantages. You can get the same
effect in SQLite by including all the data columns in your index
definition. The data will be stored in the database twice, but SELECT
queries on ranges of timestamps will get the same benefits that a
clustered index would provide.

For example, if this is your table:

   CREATE TABLE t1(time TIMESTAMP, data1, data2, data3);

and you do:

   CREATE INDEX time_idx ON t1(time);
   SELECT * FROM t1 WHERE time >= '2008-10-25 10:00' AND time <= '2008-10-25 11:00';

then SQLite will use the index to find the set of rows that match the
WHERE constraint. Each index entry contains a reference to the
corresponding entry in the main table structure that SQLite will use to
find the values of data1, data2 and data3 to return; i.e. a non-clustered
index.

If you do this instead:

   CREATE INDEX time_idx ON t1(time, data1, data2, data3);
   SELECT * FROM t1 WHERE time >= '2008-10-25 10:00' AND time <= '2008-10-25 11:00';

then each index entry will have the corresponding values of data1, data2
and data3 stored with it. In this case, there is no need to seek to the
corresponding table entry for each matching row, as the data stored as
part of the index entry can be inspected to find the values of data1,
data2 and data3 to return; just like a clustered index.

As with all performance tips, you should probably test this with your
specific data to make sure it really does provide some benefits.
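One quick way to check that the second index is really acting like a clustered (covering) index is EXPLAIN QUERY PLAN. A sketch using Python's sqlite3 module, with one synthetic row per minute (the exact plan wording varies between SQLite versions, but recent ones report "USING COVERING INDEX" when the table itself is never consulted):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t1(time TIMESTAMP, data1, data2, data3)")
db.execute("CREATE INDEX time_idx ON t1(time, data1, data2, data3)")
db.executemany("INSERT INTO t1 VALUES (?, ?, ?, ?)",
               [("2008-10-25 10:%02d" % i, i, i, i) for i in range(60)])

# The range query from the example: minutes 00..30 inclusive, 31 rows.
rows = db.execute(
    "SELECT * FROM t1 WHERE time >= '2008-10-25 10:00' "
    "AND time <= '2008-10-25 10:30'").fetchall()

# Ask the planner how it intends to satisfy the same query.
plan = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t1 WHERE time >= '2008-10-25 10:00' "
    "AND time <= '2008-10-25 10:30'").fetchall()
print(len(rows), plan)
```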

Dan.





> Please let me know.
>
> Thanks,
> Julian
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] last_insert_rowid() syntax

2008-10-26 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Karl Lautman wrote:
> Can someone point out to me the syntax error in the following?  I've omitted
> the set-up code for brevity, but cur is a cursor with a connection to the
> database.  Thanks.  
> 
> >>> x = cur.execute('last_insert_rowid()')

last_insert_rowid() is a function just like min, max, hex, length etc.
See http://www.sqlite.org/lang_corefunc.html

So to get the value you need to use:

  cur.execute("select last_insert_rowid()")

It looks like you are using pysqlite so you can just do this which does
the same thing behind the scenes:

  x=cur.lastrowid
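Both routes, side by side in a small sketch (the table name is invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t(x)")
cur.execute("INSERT INTO t VALUES ('hello')")

# pysqlite shortcut: read it straight after the INSERT, before the
# cursor runs any other statement.
rowid_py = cur.lastrowid

# SQL route: last_insert_rowid() is a function, so it needs a SELECT.
(rowid_sql,) = cur.execute("SELECT last_insert_rowid()").fetchone()

print(rowid_py, rowid_sql)  # both 1 for the first insert into a fresh table
```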

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)

iEYEARECAAYFAkkFREsACgkQmOOfHg372QQGcQCg5mJ6n7KY7lIF33nXuj10zQDX
698AnRfN3ZplfMYf4SvT3vzfsg40kUJV
=q4tC
-END PGP SIGNATURE-
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Vacuum needed?

2008-10-26 Thread Mohit Sindhwani
Cory Nelson wrote:
> On Sun, Oct 26, 2008 at 8:17 PM, Mohit Sindhwani <[EMAIL PROTECTED]> wrote:
>   
>> I'm setting up a cron job to delete a bunch of old records every 2 weeks,
>> and initially I just wanted to do delete from table where
>> datetime(created_on, 'localtime') < some_date.
>>
>> Then, I remembered about vacuum - do I need to vacuum the database
>> whenever I delete the records?  Should I just 'exec' the statement by
>> setting the sql to "delete  something ... ; vacuum;" and execute
>> that?  Is that the recommended approach?
>>
>> 
>
> vacuum shrinks the database size by removing empty pages.  sqlite will
> normally reuse empty pages - so vacuum is only useful if you don't
> plan to insert anything else, otherwise it will be slower.
>   


Thanks Cory!  That clears a major worry (regarding the speed of 
vacuuming a large database with a large number of records deleted) in my 
mind.  For now, letting the database take up extra space is not a 
concern, so we'll just leave it in the records-deleted state and let 
SQLite reuse the space as it goes along.

Best regards
Mohit.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Vacuum needed?

2008-10-26 Thread Cory Nelson
On Sun, Oct 26, 2008 at 8:17 PM, Mohit Sindhwani <[EMAIL PROTECTED]> wrote:
> I'm setting up a cron job to delete a bunch of old records every 2 weeks,
> and initially I just wanted to do delete from table where
> datetime(created_on, 'localtime') < some_date.
>
> Then, I remembered about vacuum - do I need to vacuum the database
> whenever I delete the records?  Should I just 'exec' the statement by
> setting the sql to "delete  something ... ; vacuum;" and execute
> that?  Is that the recommended approach?
>

vacuum shrinks the database size by removing empty pages.  sqlite will
normally reuse empty pages - so vacuum is only useful if you don't
plan to insert anything else, otherwise it will be slower.
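The freelist behaviour can be observed directly with PRAGMA freelist_count; a sketch using Python's sqlite3 module (the row count and payload size are arbitrary, chosen only to make the file visibly large):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE t(x)")
with db:  # one transaction for the bulk insert
    db.executemany("INSERT INTO t VALUES (?)",
                   (("x" * 500,) for _ in range(5000)))
size_full = os.path.getsize(path)

with db:
    db.execute("DELETE FROM t")
# The file does not shrink: the emptied pages go on the freelist
# and will be reused by later inserts.
size_after_delete = os.path.getsize(path)
(free_pages,) = db.execute("PRAGMA freelist_count").fetchone()

db.execute("VACUUM")  # rebuilds the file, dropping the free pages
size_after_vacuum = os.path.getsize(path)
print(size_full, size_after_delete, free_pages, size_after_vacuum)
```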

-- 
Cory Nelson
http://www.int64.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] last_insert_rowid() syntax

2008-10-26 Thread Karl Lautman
Can someone point out to me the syntax error in the following?  I've omitted
the set-up code for brevity, but cur is a cursor with a connection to the
database.  Thanks.  

>>> x = cur.execute('last_insert_rowid()')

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
x = cur.execute('last_insert_rowid()')
OperationalError: near "last_insert_rowid": syntax error

Karl

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Vacuum needed?

2008-10-26 Thread Mohit Sindhwani
I'm setting up a cron job to delete a bunch of old records every 2 weeks, 
and initially I just wanted to do delete from table where 
datetime(created_on, 'localtime') < some_date.

Then, I remembered about vacuum - do I need to vacuum the database 
whenever I delete the records?  Should I just 'exec' the statement by 
setting the sql to "delete  something ... ; vacuum;" and execute 
that?  Is that the recommended approach?

Thanks,
Mohit.
10/27/2008 | 11:16 AM.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite3 delete action is too slow

2008-10-26 Thread PennyH

My question is: how can I improve the delete speed? The current speed
cannot satisfy my application.
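As Igor's reply quoted below points out, the second statement deletes roughly 900,000 rows, not 2,000. Beyond that, the usual levers are an index on userId (so the range is found by a seek rather than a full scan) and doing the work inside a single transaction. A sketch with Python's sqlite3 module, on synthetic data (only the column name userId comes from the original post; everything else is made up):

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE xxx(userId INTEGER, payload TEXT)")
with db:
    db.executemany("INSERT INTO xxx VALUES (?, ?)",
                   ((i, "data") for i in range(200000)))

# An index lets SQLite seek straight to the affected range instead of
# testing the WHERE clause against every row in the table.
db.execute("CREATE INDEX xxx_userId ON xxx(userId)")

start = time.perf_counter()
with db:  # a single transaction: one sync to disk, not one per row
    db.execute("DELETE FROM xxx WHERE userId > 50000 AND userId < 150000")
elapsed = time.perf_counter() - start

(remaining,) = db.execute("SELECT count(*) FROM xxx").fetchone()
print(remaining, "%.3fs" % elapsed)
```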


Igor Tandetnik wrote:
> 
> "yhuang" <[EMAIL PROTECTED]>
> wrote in message news:[EMAIL PROTECTED]
>> I create a DB and only one table in the DB. There are 3641043 records
>> in the DB file. Min id is 27081364, Max id is 30902585.
>>
>> I did follow operation:
>>
>> sqlite> delete from XXX where userId>3090 and userId<30902000;
>>
>> took 1'32''
>>
>>
>>
>> sqlite> delete from XXX where userId>2900 and userId<29902000;
>>
>> spent 3 hours, 33 minutes and 26 secs
> 
> If your question is about why the second statement takes so much longer 
> than the first, it's because it deletes 902,000 records vs 2,000 for the 
> first. Did you perhaps mean to write userId>2990?
> 
> Igor Tandetnik 
> 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Sqlite3-delete-action-is-too-slow-tp20163980p20181076.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Replay logging best practices

2008-10-26 Thread David Barrett
What's the right way to use update/commit/rollback hooks to produce a 
replay log?

Currently I'm doing it at a high level by just recording all SQL 
statements into a replay log, and that works really well except fails in 
some cases like with the use of CURRENT_TIMESTAMP.  (Replaying that will 
insert at the time of replay, and not the time of original insert.)

I'm considering switching to a low-level replay log using the hook 
functions.  What's the recommended way to do this?

In particular:

1) How do I hook schema and index changes?  The docs say the update hook 
ignores changes to system tables.

2) Is there any way to determine which columns of a row changed?

3) Replaying INSERT/DELETE is straightforward, where the replay log 
would simply contain something like:

DELETE FROM <table> WHERE rowid=<rowid>;

And:
INSERT INTO <table> VALUES ( <value1>, ... );

But what's the best way to replay an UPDATE?  If I can't determine which 
columns were modified, is there any choice but to update everything?  If 
so, is there any more compact way to do this than:

UPDATE <table> SET <col1>=<value1>, <col2>=<value2>, ... WHERE rowid=<rowid>;

My tables are rather wide, so this will be pretty inefficient, 
especially if I were to update a single column of an entire table.
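For question 3, the full-row fallback looks something like the sketch below, which assumes only what an update hook reports: the table name and the rowid. (Python's stdlib sqlite3 does not expose the update hook, so the sketch simply pretends a hook reported table 't', rowid 1; the table and values are invented.)

```python
import sqlite3

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for db in (src, dst):
    db.execute("CREATE TABLE t(a, b, c)")
    db.execute("INSERT INTO t(rowid, a, b, c) VALUES (1, 'x', 'y', 'z')")

# A change happens on the source; assume a hook reported (table='t', rowid=1).
src.execute("UPDATE t SET b = 'Y' WHERE rowid = 1")

# Full-row fallback: without knowing which columns changed, snapshot the
# whole row on the source and replay it on the destination keyed on rowid.
cols = [r[1] for r in src.execute("PRAGMA table_info(t)")]
row = src.execute("SELECT * FROM t WHERE rowid = 1").fetchone()
assignments = ", ".join("%s = ?" % c for c in cols)
dst.execute("UPDATE t SET %s WHERE rowid = ?" % assignments, (*row, 1))

replayed = dst.execute("SELECT a, b, c FROM t WHERE rowid = 1").fetchone()
print(replayed)  # ('x', 'Y', 'z')
```

It is indeed inefficient for wide tables, which is exactly the concern above; shrinking it requires a before-image of the row so the log can record only the changed columns.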

Thanks for your suggestions!

-david

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] database growing surprising rapidly

2008-10-26 Thread P Kishor
On 10/26/08, Julian Bui <[EMAIL PROTECTED]> wrote:
> Puneet, I think I see what you're saying about the data types and their
>  affinities, but what does that have to do with the MUCH bigger table size
>  than what was expected?
>

It has to do with the fact that you were expecting smallint and bigint to
behave the way they do elsewhere, but SQLite doesn't care about all that.
If it sees the term "int" in the declared type (actually, it is not clear
from the statement in the docs that "If the datatype contains the string
'INT' then it is assigned INTEGER affinity" that smallint would be
interpreted as integer, but for the moment let us assume so), it gives the
column INTEGER affinity. The size of a stored value depends on its
magnitude, so it can be as large as 8 bytes.

The char(8) also means nothing to SQLite. As soon as it sees "char" in
the declared type, it gives the column TEXT affinity, and a value can be
as long as it likes. It is up to your application to croak if the length
of the text is longer than 8 characters.

So, per your original post, instead of 28 bytes per row (smallint +
bigint + smallint + double + char(8)), it would actually be (up to 8
bytes + up to 8 bytes + up to 8 bytes + 8 bytes + 8 bytes). That is 40
bytes. I think there is some internal overhead of about a byte or so
per column for SQLite.

Of course, that is still a lot less than your estimate of 2500+ bytes per row.

Are you sure you are storing your timestamp as Julianday (as a float)?
Else, it would be treated as TEXT.

Are you sure you are storing only 8 characters in the char(8) field?

How about dumping the entire database and even reloading it on another
machine to see what is going on?
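The affinity behaviour described above is easy to see from Python's sqlite3 module (a two-column cut-down of the original table; the big integer and the oversized string are made-up test values):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Same declared types as two of the original columns.
db.execute("CREATE TABLE t(aaa SMALLINT, eee CHAR(8))")

# SMALLINT only grants INTEGER affinity: a stored value may still take up
# to 8 bytes.  CHAR(8) only grants TEXT affinity: the "8" is not enforced.
db.execute("INSERT INTO t VALUES (?, ?)",
           (2 ** 40, "much longer than eight characters"))

row = db.execute(
    "SELECT typeof(aaa), typeof(eee), length(eee) FROM t").fetchone()
print(row)  # ('integer', 'text', 33): the oversized text was stored verbatim
```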





>  Also, I made a mistake.  In an attempt to censor out my table name and
>  attribute names I forgot to fix everything.  So yes, I am making sure I'm
>  using one table and not two.
>
>  The code ought to be:
>
>
>  
>  //ps is a prepared statement
>  ps = conn.prepareStatement("CREATE table IF NOT EXISTS MY_TABLE(aaa SMALLINT
>  NOT NULL, bbb BIGINT NOT NULL, ccc SMALLINT, ddd DOUBLE, eee CHAR(8));");
>  ps.execute();
>
>  
>
>
> ps = conn.prepareStatement("CREATE INDEX IF NOT EXISTS IDX_DDD on MYTABLE
>  (ddd)");
>  ps.execute();
>
>  ...
>
>  dataInsertPs = conn.prepareStatement("INSERT into MY_TABLE(aaa, bbb, ccc,
>  ddd, eee) VALUES (?, ?, ?, ?, ?
>
> //every dataInsertPs gets added to a batch and committed every 1000 records
>
>  
>
>
> On 10/26/08, Julian Bui <[EMAIL PROTECTED]> wrote:
> > HI everyone,
> >
> > I have records in my db that consist of smallint, bigint, smallint, double,
> > char(8).  By my calculation that comes to 2 + 8 + 2 + 8 + 8 = 28 Bytes per
> > record.  I also have an index over the attribute that is a double.
>  Why not start with reading TFM 
>
>  "2.1 Determination Of Column Affinity
>
>  The type affinity of a column is determined by the declared type of
>  the column, according to the following rules:
>
>  If the datatype contains the string "INT" then it is assigned INTEGER 
> affinity.
>
>  If the datatype of the column contains any of the strings "CHAR",
>  "CLOB", or "TEXT" then that column has TEXT affinity. Notice that the
>  type VARCHAR contains the string "CHAR" and is thus assigned TEXT
>  affinity.
>
>  If the datatype for a column contains the string "BLOB" or if no
>  datatype is specified then the column has affinity NONE.
>
>  If the datatype for a column contains any of the strings "REAL",
>  "FLOA", or "DOUB" then the column has REAL affinity
>
>  Otherwise, the affinity is NUMERIC."
>
>  What you expected was not what you got. That was expected.
>
> >
> > I inserted 100,000 records into a clean database and the database grew to
> > 240.0MB.  That comes out to 2519 Bytes per record (240 * 1024 * 1024 /
> > 100,000).  How is that possible?  The index can't add THAT MUCH extra space.
> >
> > Results were duplicated when I started with a clean database and inserted
> > other # of records.
> >
> > This is very surprising.  I must be doing something wrong.
> >
> > I'm attaching some code.  Also to note, I'm using the xerial JDBC driver to
> > do this in java.
> >
> > If anyone knows what may be wrong I'd love to hear feedback.
> >
> > Thanks,
> > Julian
> >
> > //ps is a prepared statement
> > ps = conn.prepareStatement("CREATE table IF NOT EXISTS MY_TABLE(aaa SMALLINT
> > NOT NULL, bbb BIGINT NOT NULL, ccc SMALLINT, ddd DOUBLE, eee CHAR(8));");
> > ps.execute();
> >
> > ps = conn.prepareStatement("CREATE INDEX IF NOT EXISTS IDX_TIME on
> > CAN_MESSAGES (timeStamp)");
> > ps.execute();
> >
> > ...
> >
> > dataInsertPs = conn.prepareStatement("INSERT into CAN_MESSAGES(lru, pgn,
> > bus, timeStamp, messageData) VALUES (?, ?, ?, ?, ?
> > //every dataInsertPs gets 

Re: [sqlite] database growing surprising rapidly

2008-10-26 Thread D. Richard Hipp

On Oct 26, 2008, at 5:58 PM, Julian Bui wrote:

> HI everyone,
>
> I have records in my db that consist of smallint, bigint, smallint,  
> double,
> char(8).  By my calculation that comes to 2 + 8 + 2 + 8 + 8 = 28  
> Bytes per
> record.  I also have an index over the attribute that is a double.
>
> I inserted 100,000 records into a clean database and the database  
> grew to
> 240.0MB.  That comes out to 2519 Bytes per record (240 * 1024 * 1024 /
> 100,000).  How is that possible?  The index can't add THAT MUCH  
> extra space.

2519 bytes per row does seem excessive - by about two orders of
magnitude.  Have you tried running sqlite3_analyzer.exe on the
database to see what is going on?  How big is the database file after
inserting only 100 records, instead of 100,000?  Can you send me (via
private email) a copy of the database with only 100 records so that I
can poke around a bit?

D. Richard Hipp
[EMAIL PROTECTED]



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] database growing surprising rapidly

2008-10-26 Thread Julian Bui
Puneet, I think I see what you're saying about the data types and their
affinities, but what does that have to do with the MUCH bigger table size
than what was expected?

Also, I made a mistake.  In an attempt to censor out my table name and
attribute names I forgot to fix everything.  So yes, I am making sure I'm
using one table and not two.

The code ought to be:


//ps is a prepared statement
ps = conn.prepareStatement("CREATE table IF NOT EXISTS MY_TABLE(aaa SMALLINT
NOT NULL, bbb BIGINT NOT NULL, ccc SMALLINT, ddd DOUBLE, eee CHAR(8));");
ps.execute();



ps = conn.prepareStatement("CREATE INDEX IF NOT EXISTS IDX_DDD on MY_TABLE
(ddd)");
ps.execute();

...

dataInsertPs = conn.prepareStatement("INSERT into MY_TABLE(aaa, bbb, ccc,
ddd, eee) VALUES (?, ?, ?, ?, ?)");
//every dataInsertPs gets added to a batch and committed every 1000 records



On 10/26/08, Julian Bui <[EMAIL PROTECTED]> wrote:
> HI everyone,
>
> I have records in my db that consist of smallint, bigint, smallint, double,
> char(8).  By my calculation that comes to 2 + 8 + 2 + 8 + 8 = 28 Bytes per
> record.  I also have an index over the attribute that is a double.
Why not start with reading TFM 

"2.1 Determination Of Column Affinity

The type affinity of a column is determined by the declared type of
the column, according to the following rules:

If the datatype contains the string "INT" then it is assigned INTEGER affinity.

If the datatype of the column contains any of the strings "CHAR",
"CLOB", or "TEXT" then that column has TEXT affinity. Notice that the
type VARCHAR contains the string "CHAR" and is thus assigned TEXT
affinity.

If the datatype for a column contains the string "BLOB" or if no
datatype is specified then the column has affinity NONE.

If the datatype for a column contains any of the strings "REAL",
"FLOA", or "DOUB" then the column has REAL affinity

Otherwise, the affinity is NUMERIC."

What you expected was not what you got. That was expected.

>
> I inserted 100,000 records into a clean database and the database grew to
> 240.0MB.  That comes out to 2519 Bytes per record (240 * 1024 * 1024 /
> 100,000).  How is that possible?  The index can't add THAT MUCH extra space.
>
> Results were duplicated when I started with a clean database and inserted
> other # of records.
>
> This is very surprising.  I must be doing something wrong.
>
> I'm attaching some code.  Also to note, I'm using the xerial JDBC driver to
> do this in java.
>
> If anyone knows what may be wrong I'd love to hear feedback.
>
> Thanks,
> Julian
>
> //ps is a prepared statement
> ps = conn.prepareStatement("CREATE table IF NOT EXISTS MY_TABLE(aaa SMALLINT
> NOT NULL, bbb BIGINT NOT NULL, ccc SMALLINT, ddd DOUBLE, eee CHAR(8));");
> ps.execute();
>
> ps = conn.prepareStatement("CREATE INDEX IF NOT EXISTS IDX_TIME on
> CAN_MESSAGES (timeStamp)");
> ps.execute();
>
> ...
>
> dataInsertPs = conn.prepareStatement("INSERT into CAN_MESSAGES(lru, pgn,
> bus, timeStamp, messageData) VALUES (?, ?, ?, ?, ?
> //every dataInsertPs gets added to a batch and committed every 1000 records
>
> ___
> sqlite-users mailing list
> sqlite-users at sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

-- 
Puneet Kishor http://punkish.eidesis.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] database growing surprising rapidly

2008-10-26 Thread P Kishor
On 10/26/08, Julian Bui <[EMAIL PROTECTED]> wrote:
> HI everyone,
>
>  I have records in my db that consist of smallint, bigint, smallint, double,
>  char(8).  By my calculation that comes to 2 + 8 + 2 + 8 + 8 = 28 Bytes per
>  record.  I also have an index over the attribute that is a double.

Why not start with reading TFM 

"2.1 Determination Of Column Affinity

The type affinity of a column is determined by the declared type of
the column, according to the following rules:

If the datatype contains the string "INT" then it is assigned INTEGER affinity.

If the datatype of the column contains any of the strings "CHAR",
"CLOB", or "TEXT" then that column has TEXT affinity. Notice that the
type VARCHAR contains the string "CHAR" and is thus assigned TEXT
affinity.

If the datatype for a column contains the string "BLOB" or if no
datatype is specified then the column has affinity NONE.

If the datatype for a column contains any of the strings "REAL",
"FLOA", or "DOUB" then the column has REAL affinity

Otherwise, the affinity is NUMERIC."

What you expected was not what you got. That was expected.

>
>  I inserted 100,000 records into a clean database and the database grew to
>  240.0MB.  That comes out to 2519 Bytes per record (240 * 1024 * 1024 /
>  100,000).  How is that possible?  The index can't add THAT MUCH extra space.
>
>  Results were duplicated when I started with a clean database and inserted
>  other # of records.
>
>  This is very surprising.  I must be doing something wrong.
>
>  I'm attaching some code.  Also to note, I'm using the xerial JDBC driver to
>  do this in java.
>
>  If anyone knows what may be wrong I'd love to hear feedback.
>
>  Thanks,
>  Julian
>
>  
>  //ps is a prepared statement
>  ps = conn.prepareStatement("CREATE table IF NOT EXISTS MY_TABLE(aaa SMALLINT
>  NOT NULL, bbb BIGINT NOT NULL, ccc SMALLINT, ddd DOUBLE, eee CHAR(8));");
>  ps.execute();
>
>  
>
>  ps = conn.prepareStatement("CREATE INDEX IF NOT EXISTS IDX_TIME on
>  CAN_MESSAGES (timeStamp)");
>  ps.execute();
>
>  ...
>
>  dataInsertPs = conn.prepareStatement("INSERT into CAN_MESSAGES(lru, pgn,
>  bus, timeStamp, messageData) VALUES (?, ?, ?, ?, ?
>  //every dataInsertPs gets added to a batch and committed every 1000 records
>
>  
>  ___
>  sqlite-users mailing list
>  sqlite-users@sqlite.org
>  http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
Puneet Kishor http://punkish.eidesis.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] database growing surprising rapidly

2008-10-26 Thread Julian Bui
HI everyone,

I have records in my db that consist of smallint, bigint, smallint, double,
char(8).  By my calculation that comes to 2 + 8 + 2 + 8 + 8 = 28 Bytes per
record.  I also have an index over the attribute that is a double.

I inserted 100,000 records into a clean database and the database grew to
240.0MB.  That comes out to 2519 Bytes per record (240 * 1024 * 1024 /
100,000).  How is that possible?  The index can't add THAT MUCH extra space.

Results were duplicated when I started with a clean database and inserted
other # of records.

This is very surprising.  I must be doing something wrong.

I'm attaching some code.  Also to note, I'm using the xerial JDBC driver to
do this in java.

If anyone knows what may be wrong I'd love to hear feedback.

Thanks,
Julian


//ps is a prepared statement
ps = conn.prepareStatement("CREATE table IF NOT EXISTS MY_TABLE(aaa SMALLINT
NOT NULL, bbb BIGINT NOT NULL, ccc SMALLINT, ddd DOUBLE, eee CHAR(8));");
ps.execute();



ps = conn.prepareStatement("CREATE INDEX IF NOT EXISTS IDX_TIME on
CAN_MESSAGES (timeStamp)");
ps.execute();

...

dataInsertPs = conn.prepareStatement("INSERT into CAN_MESSAGES(lru, pgn,
bus, timeStamp, messageData) VALUES (?, ?, ?, ?, ?)");
//every dataInsertPs gets added to a batch and committed every 1000 records


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Fwd: Help! Excel-->SQLite -- getting numbers to behave like numbers!

2008-10-26 Thread David Akin
Cancel that last plea for help: Excel was adding a trailing space in one
of the fields containing numbers, which I did not discover until
looking at the CSV file in a text editor. A quick search-and-replace
later, I've dumped the data back into SQLite and the problem is solved
...
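The failure mode and an in-database repair can be reproduced in a few lines; a sketch with Python's sqlite3 module (the trailing spaces stand in for what Excel emitted, and AGE is declared without a type here so the imported strings are stored exactly as-is; with a NUMERIC column, whether a padded string converts depends on the SQLite version):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute('CREATE TABLE "BIRTHDAYS" ("NAME" TEXT, "AGE")')
# The trailing spaces make the ages TEXT, so they sort lexically.
db.executemany("INSERT INTO BIRTHDAYS VALUES (?, ?)",
               [("Fred", "9 "), ("Anne", "21 "), ("Ruth", "97 ")])
before = [n for (n,) in db.execute("SELECT NAME FROM BIRTHDAYS ORDER BY AGE")]

# Repair inside SQL instead of a text editor: trim and rewrite as integers.
db.execute("UPDATE BIRTHDAYS SET AGE = CAST(trim(AGE) AS INTEGER)")
after = [n for (n,) in db.execute("SELECT NAME FROM BIRTHDAYS ORDER BY AGE")]

print(before)  # ['Anne', 'Fred', 'Ruth']: text sort, as in the original post
print(after)   # ['Fred', 'Anne', 'Ruth']: numeric sort
```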




-- Forwarded message --
From: David Akin <[EMAIL PROTECTED]>
Date: Sun, Oct 26, 2008 at 2:37 PM
Subject: Help! Excel-->SQLite -- getting numbers to behave like numbers!
To: sqlite-users@sqlite.org


I'm fiddling with SQLite running on Mac OSX and am using the Firefox
extension SQLite Manager ...

I have some Excel tables I'd like to re-create in SQLite.

It looks easy to me:
1. Save your Excel table as a .csv file.
2. In SQLite Manager use DATABASE-->IMPORT ... and follow the directions.

Great. There's my table looking all spiffy in an SQL manager except
that numbers ain't acting like numbers. Here's what I mean:

Here's my table

NAME   AGE
----------
Fred   9
Anne   21
Ruth   97

When I ask SQLite to sort the table from youngest to oldest, it returns:

NAME   AGE
----------
Anne   21
Fred   9
Ruth   97

In the AGE column, the DB is apparently looking only at the first
character in the AGE field and sorting on that. It is not treating the
string in AGE as a complete INTEGER. Now I can go in and edit each
record within SQLite, keying in the values each time for AGE, and the
database will then sort as if the contents of AGE are, in fact, an
integer. But the real data set I'm working with has 1,600-plus records
and I'd rather not go through all that :)

Now I'm no SQL super-jock, so maybe I'm not declaring something
properly in the statement that created the table. Here it is; the
table is called BIRTHDAYS:

CREATE TABLE "BIRTHDAYS" ("NAME" TEXT,"AGE" NUMERIC NOT NULL )

Note: I've also tried giving the AGE field other numeric datatypes
such as INTEGER and FLOATING ...

Thanks!


--
David Akin
---
http://www.davidakin.com






-- 
David Akin
---
http://www.davidakin.com
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Help! Excel-->SQLite -- getting numbers to behave like numbers!

2008-10-26 Thread David Akin
I'm fiddling with SQLite running on Mac OSX and am using the Firefox
extension SQLite Manager ...

I have some Excel tables I'd like to re-create in SQLite.

It looks easy to me:
1. Save your Excel table as a .csv file.
2. In SQLite Manager use DATABASE-->IMPORT ... and follow the directions.

Great. There's my table looking all spiffy in an SQL manager except
that numbers ain't acting like numbers. Here's what I mean:

Here's my table

NAME   AGE
----------
Fred   9
Anne   21
Ruth   97

When I ask SQLite to sort the table from youngest to oldest, it returns:

NAME   AGE
----------
Anne   21
Fred   9
Ruth   97

In the AGE column, the DB is apparently looking only at the first
character in the AGE field and sorting on that. It is not treating the
string in AGE as a complete INTEGER. Now I can go in and edit each
record within SQLite, keying in the values each time for AGE, and the
database will then sort as if the contents of AGE are, in fact, an
integer. But the real data set I'm working with has 1,600-plus records
and I'd rather not go through all that :)

Now I'm no SQL super-jock, so maybe I'm not declaring something
properly in the statement that created the table. Here it is; the
table is called BIRTHDAYS:

CREATE TABLE "BIRTHDAYS" ("NAME" TEXT,"AGE" NUMERIC NOT NULL )

Note: I've also tried giving the AGE field other numeric datatypes
such as INTEGER and FLOATING ...

Thanks!


--
David Akin
---
http://www.davidakin.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Obtaining last row's primary key value

2008-10-26 Thread P Kishor
On 10/26/08, Karl Lautman <[EMAIL PROTECTED]> wrote:
> Let's say I've just added a record to a table where one of the columns is
>  designated an integer primary key, and I have sqlite autoincrement that
>  column for me with each insert.  Now I need to add some records to other
>  tables linked to the first record.  So I want to use the first record's PK
>  value as the foreign key for all these other records.  How can I get the
>  first record's PK?  I'm a newbie, so I can think of all sorts of ways that
>  seem awkward to me, but I assume there's a simple, built-in method.  I just
>  can't find it.  Thanks.

last_insert_rowid()

http://www.sqlite.org/lang_corefunc.html
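In the parent/child shape described above, that looks like this (a sketch; the table and column names are invented for the example):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE parent(id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE child(parent_id INTEGER, detail TEXT)")

# Insert the parent row; SQLite fills in the INTEGER PRIMARY KEY.
db.execute("INSERT INTO parent(name) VALUES ('first record')")
(pk,) = db.execute("SELECT last_insert_rowid()").fetchone()

# Use the captured PK as the foreign key for the linked rows.
db.executemany("INSERT INTO child VALUES (?, ?)",
               [(pk, "a"), (pk, "b")])

(n,) = db.execute(
    "SELECT count(*) FROM child WHERE parent_id = ?", (pk,)).fetchone()
print(pk, n)
```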


>
>  ___
>  sqlite-users mailing list
>  sqlite-users@sqlite.org
>  http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
Puneet Kishor http://punkish.eidesis.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Obtaining last row's primary key value

2008-10-26 Thread Karl Lautman
Let's say I've just added a record to a table where one of the columns is
designated an integer primary key, and I have sqlite autoincrement that
column for me with each insert.  Now I need to add some records to other
tables linked to the first record.  So I want to use the first record's PK
value as the foreign key for all these other records.  How can I get the
first record's PK?  I'm a newbie, so I can think of all sorts of ways that
seem awkward to me, but I assume there's a simple, built-in method.  I just
can't find it.  Thanks.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] rtree cast warnings on 64bit OS - strange parameter use

2008-10-26 Thread William Kyngesburye
On Oct 25, 2008, at 12:21 PM, Dan wrote:

>> I thought the two pAux parameters were odd - one bare and one cast to
>> (int), so I looked up rtreeInit().
>
> Good point. I removed the first of the two "pAux" parameters from
> rtreeInit(). It was not being used.
>
>   http://www.sqlite.org/cvstrac/chngview?cn=5842
>
Well, I only get one warning now, but at least I understand that it's  
expected, since you're not really using the pointer as a pointer.

-
William Kyngesburye 
http://www.kyngchaos.com/

"I ache, therefore I am.  Or in my case - I am, therefore I ache."

- Marvin


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] setting the timezone on windows xp

2008-10-26 Thread Jay Sprenkle
Thanks Doug.

On Sat, Oct 25, 2008 at 11:48 AM, Doug <[EMAIL PROTECTED]> wrote:

> Hi Jay --
>
> I used to have a problem like this a few years back.  I don't remember all
> the hows and whys, but my apps call the following at start up and the
> problems are gone:
>
> _tsetlocale(LC_ALL, _T(""));
> _tzset();
>
> HTH
>
> Doug
>
>
> > -Original Message-
> > From: [EMAIL PROTECTED] [mailto:sqlite-users-
> > [EMAIL PROTECTED] On Behalf Of Jay Sprenkle
> > Sent: Saturday, October 25, 2008 10:32 AM
> > To: General Discussion of SQLite Database
> > Subject: [sqlite] setting the timezone on windows xp
> >
> > Hello all,
> > this isn't really an sqlite question but it tangentially touches on it.
> > Please disregard it if it offends.
> >
> > I'm trying to write an atom feed exporter for my sqlite database and I
> > need
> > to export the utc time. My development box, windows xp, insists (both
> > in C
> > and in SQLite) that UTC is 10 hours different than localtime. I've
> > clearly
> > set the timezone to US CST in the control panel. I've also tried
> > setting
> > both the TZ and TIME_ZONE environment variables (CST6CDT). No luck.
> >
> > Any suggestions?
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
--
The PixAddixImage Collector suite:
http://groups-beta.google.com/group/pixaddix

SqliteImporter and SqliteReplicator: Command line utilities for Sqlite
http://www.reddawn.net/~jsprenkl/Sqlite

Cthulhu Bucks!
http://www.cthulhubucks.com
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Support for clustered indexing (or something similar)?

2008-10-26 Thread Julian Bui
Hi all,

I have records w/ a timestamp attribute which is not unique and cannot be
used as a primary key.  These records will be inserted according to
timestamp value.  From this important fact, I've gathered I need a clustered
index since my SELECT statements use a time-range in the WHERE clause.

Does sqlite support clustered indexing or something that would take
advantage of nature of my records and the nature of how they're being
inserted?

Please let me know.

Thanks,
Julian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users