Re: [sqlite] About select by "=" condition to a string

2008-11-01 Thread Tomas Lee
On 2008 October 31 (Fri) 10:57:34pm PDT, yoky <[EMAIL PROTECTED]> wrote:
>> Hi all,
>> I create a table like this "create table tbl1 (ID integer primary
>> key, name, addr)" ,
>> then insert a record:
>> "insert into tbl1 values(1, 'aa', 'bb')",
>> select the record:
>> "select * from tbl1 where name ='aa' "   ,can get this  record :
>> (1, 'aa', 'bb')
>> I change insert way like this:
>> char *pName = "aa";
>> sqlite3_prepare(db, "insert into tbl1 values(1, ?, 'bb')", -1, &stat, 0);
>> sqlite3_bind_blob( stat, 1, pName,  strlen(pName)+1, 0 );
>> sqlite3_step(stat);
>> Then select the record:
>> sqlite3_prepare(db, "select * from tbl1 where name ='aa' ", -1, &stat, 0);
>> sqlite3_step(stat);
>> By this way, I can not get the record I want : (1, 'aa', 'bb').
>> Change the SQL statement by  "like":
>> sqlite3_prepare(db, "select * from tbl1 where name like 'aa'", -1, &stat, 0);
>> sqlite3_step(stat);
>> I can get the record.
>> Why? and How can I select record by "=" condition to a string?
> 
> 
>>> You are binding a blob and specifying a length of *three*.  strlen(pName)
>>> returns 2 which is the proper length, but you're adding 1 to that.
>>> Therefore the column contains three characters, 'a', 'a', '\0' which does
>>> not equal "aa" but does begin with "aa".
> 
>>> So you want to either use the correct length for the blob (assuming you
>>> really need blobs) or bind a type other than blob.
> Derrell
> --
> Thank you!
> I have tried to bind like this:
> sqlite3_bind_blob( stat, 1, pName,  strlen(pName), 0 );
> binding a blob and specifying a length of *two*, and I still can not select
> the record I want with "select * from tbl1 where name ='aa' ".
> I think maybe I should use sqlite3_bind_text(),
> because this column type is a string.  I will try it next week!
> 
> yoky

Yes, you should use sqlite3_bind_text().  If you want to compare it to
a string, you need to insert it as a string.  TEXT values don't
compare equal to BLOB values.  TEXT values always compare as less than
BLOB values.  See the comparison rules in the SQLite documentation on datatypes.

sqlite> select cast ('aa' as blob) = 'aa';
0
sqlite> select cast ('aa' as blob) > 'aa';
1
sqlite> create table tbl (id, col);
sqlite> insert into tbl (id, col) values (1, 'aa');
sqlite> insert into tbl (id, col) values (2, cast ('aa' as blob));
sqlite> select * from tbl;
1|aa
2|aa
sqlite> select * from tbl where col = 'aa';
1|aa
sqlite> 
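
The corrected insert from the original message would then look something
like this (a sketch only, with error checking omitted and db assumed to
be an open connection):

    sqlite3_stmt *stat;
    const char *pName = "aa";

    sqlite3_prepare_v2(db, "insert into tbl1 values(1, ?, 'bb')", -1, &stat, 0);
    /* -1 length: let SQLite call strlen() itself, so the NUL is not stored */
    sqlite3_bind_text(stat, 1, pName, -1, SQLITE_TRANSIENT);
    sqlite3_step(stat);
    sqlite3_finalize(stat);

After that, "select * from tbl1 where name = 'aa'" matches the row.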


Re: [sqlite] Bug or working as designed?

2008-10-29 Thread Tomas Lee
On 2008 October 29 (Wed) 03:26:45pm PDT, James Sheridan <[EMAIL PROTECTED]> 
wrote:
> Related addendum:
> 
> In reading it appears that MySQL treats "," as a CROSS JOIN and implements it 
> effectively as an INNER JOIN.
> a) Is this correct?

Not really, no.

MySQL treats "," as a CROSS JOIN, which is consistent with everyone else.

Now, in "standard" SQL, when you use "INNER JOIN", you have to use a
join condition.  (A join condition is something like "ON tasks.id =
projects.task_id" or "USING (id)".)  When you use "CROSS JOIN", you
can't use a join condition.

In MySQL, you can always substitute "CROSS JOIN" for "INNER JOIN" and
vice versa.  So if you say "INNER JOIN" but don't give a join condition,
MySQL assumes you mean "CROSS JOIN", and if you use "CROSS JOIN" but
give a join condition, MySQL acts like you meant "INNER JOIN".

So it's not a matter of implementation.  It's just syntax.

> and b) Is Sqlite acting the same or treating it as a true CROSS JOIN?

SQLite treats "," as a CROSS JOIN, the same as MySQL.

> I suspect I'm writing things that I've used with MySQL for a while that are
> just not acting the same on Sqlite.  That's fine, but I do want to know if so :)

In the example you gave, both MySQL and SQLite are going to return 0
matches from a cross join with a table with 0 rows.  That's how cross
joins work.  It sounds like you want a LEFT JOIN.

mysql> CREATE TABLE Query (
-> id INTEGER NOT NULL,
-> creatorID INTEGER NOT NULL,
-> ownerID INTEGER NOT NULL,
-> type TEXT NOT NULL
-> );
Query OK, 0 rows affected (0.03 sec)

mysql> CREATE TABLE UserQuery (
-> userID INTEGER NOT NULL,
-> queryID INTEGER NOT NULL
-> );
Query OK, 0 rows affected (0.03 sec)

mysql> INSERT INTO Query (id, creatorID, ownerID, type) VALUES (1, 2, 3, 'a');
Query OK, 1 row affected (0.01 sec)

mysql> SELECT  Q.*
-> FROM    Query Q,
-> UserQuery UQ
-> WHERE   Q.type = 'a' OR
-> (Q.id = UQ.queryID AND
->  UQ.userID = '1');
Empty set (0.01 sec)

mysql> SELECT  Q.*
-> FROM    Query Q
->  LEFT JOIN UserQuery UQ
->   ON (Q.id = UQ.queryID)
-> WHERE   Q.type = 'a' OR
->  UQ.userID = '1';
+----+-----------+---------+------+
| id | creatorID | ownerID | type |
+----+-----------+---------+------+
|  1 |         2 |       3 | a    |
+----+-----------+---------+------+
1 row in set (0.00 sec)
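
The same LEFT JOIN runs unchanged in SQLite.  Here is a minimal sketch
of issuing it through the C API, assuming db is an open connection that
already contains the two tables above (error checking omitted):

    #include <stdio.h>
    #include <sqlite3.h>

    /* print one result row, pipe-separated, like the sqlite3 shell does */
    static int print_row(void *unused, int ncol, char **val, char **name)
    {
        int i;
        for (i = 0; i < ncol; i++)
            printf("%s%s", val[i] ? val[i] : "NULL", i + 1 < ncol ? "|" : "\n");
        return 0;
    }

    /* ... somewhere after sqlite3_open() ... */
    sqlite3_exec(db,
        "SELECT Q.* FROM Query Q "
        " LEFT JOIN UserQuery UQ ON (Q.id = UQ.queryID) "
        " WHERE Q.type = 'a' OR UQ.userID = '1'",
        print_row, NULL, NULL);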


Re: [sqlite] What happened to .bail?

2008-09-25 Thread Tomas Lee
On 2008 September 25 (Thu) 08:40:13am PDT, "Ribeiro, Glauber" <[EMAIL 
PROTECTED]> wrote:
> The documentation for sqlite3 on the web site:
> http://www.sqlite.org/sqlite.html
> 
> Lists a .bail (on|off) command (stop after hitting an error. Default
> off)
> 
> This doesn't seem to be implemented in the current version. Why? This
> seems to be an important feature for creating robust applications. 

Huh?  I just downloaded the latest sqlite3 command line program from
the website, and the .bail command is still there.


Re: [sqlite] DELETEs using a range from an indexed column

2008-09-22 Thread Tomas Lee
On 2008 September 16 (Tue) 07:12:02am PDT, "Jay A. Kreibich" <[EMAIL 
PROTECTED]> wrote:
> On Mon, Sep 15, 2008 at 10:57:37PM -0700, Tomas Lee scratched on the wall:
> 
>> I want to delete the 1000 members with the lowest scores.  Assume that
>> it is extremely unlikely for two members to have identical scores.
>> Also, the 1000 is arbitrary -- it could be more or less on different
>> days.  But each time I do this deletion, it will be a fixed number.
>> 
>> What's the fastest way to do this?  
> 
> 
> Your best bet is to just try things out and see.
> 
> 
>> Method A:
>> * find the 1000th lowest score:
>> SELECT score FROM members ORDER BY score LIMIT 1 OFFSET 999;
>> * delete the records equal to or lower than that score
>> DELETE FROM members WHERE score <= $thousandth_lowest_score;
> 
> Do it as one command using a sub-select:
> 
> DELETE FROM members WHERE score <= (
> SELECT score FROM members ORDER BY score LIMIT 1 OFFSET 999);
> 
> Personally I don't like this, since it has the potential to delete
> too many records.  Having repeating scores may be rare, but that's
> not never.  On the other hand, getting rid of "ties" at the cut-off
> point may be desirable.
> 
>> Method B:
>> * find the uids for the 1000 lowest scores:
>> SELECT uid FROM members ORDER BY score LIMIT 1000;
>> * delete those records
>> DELETE FROM members WHERE uid IN ([join $uids ,]);
> 
> Again, do it as one command:
> 
> DELETE FROM members WHERE uid IN (
> SELECT uid FROM members ORDER BY score LIMIT 1000);
> 
> I'm not sure about speed, but I like this best from a readability
> standpoint.  It is clean and straightforward, and I wouldn't be
> surprised to find out it is the fastest.
> 
> The only odd thing about this is if several records share the cut-off
> score, there isn't any good way of knowing which will be deleted and
> which will be left behind.  That may or may not matter.
> 
>> Method C:
>> * delete the records as you find them:
>> 
>> sqlite3_prepare_v2(db, "DELETE FROM members WHERE uid = ?", -1, &stmt_d, NULL);
>> sqlite3_prepare_v2(db, "SELECT uid FROM members ORDER BY score LIMIT 1000", -1, &stmt_q, NULL);
>> while (sqlite3_step(stmt_q) == SQLITE_ROW) {
>> int uid = sqlite3_column_int(stmt_q, 0);
>> sqlite3_bind_int(stmt_d, 1, uid);
>> sqlite3_step(stmt_d);
>> sqlite3_reset(stmt_d);
>> }
>> sqlite3_finalize(stmt_d);
>> sqlite3_finalize(stmt_q);
> 
> Essentially a manual version of Method B.
> 
> 
> Which is fastest may depend on the contents of the table and if the
> index will actually be used or not.  There might also be variations if
> the size (the 1000) changes significantly.
> 
> Again, just try it and see.  Using the sub-selects you can do this
> from the command line on a test database to give you a rough idea.

Thanks for the help.  I knew about using sub-selects, but I didn't
want to confuse the issue.  I didn't realize method B and method C
were the same -- I thought SQLite would need to keep all the uids in
memory in method B.

When I was doing my experiments, method A came out faster than method
B, and I like method A better anyway.  But I think either one would
work fast enough for my purposes.
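
For what it's worth, the single-statement form of method B can also take
the batch size as a parameter, so the same prepared statement works on
days when the number changes.  A sketch, with error checking omitted
(n_doomed is whatever today's count happens to be):

    sqlite3_stmt *stmt;

    sqlite3_prepare_v2(db,
        "DELETE FROM members WHERE uid IN "
        "  (SELECT uid FROM members ORDER BY score LIMIT ?)",
        -1, &stmt, NULL);
    sqlite3_bind_int(stmt, 1, n_doomed);   /* e.g. 1000 */
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);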


Re: [sqlite] Date/Time Pains

2008-09-22 Thread Tomas Lee
On 2008 September 22 (Mon) 06:25:34am PDT, jason weaver <[EMAIL PROTECTED]> 
wrote:
> "jason weaver" <[EMAIL PROTECTED]> wrote:
> news:[EMAIL PROTECTED]
>>> However, due to database locking issues, I need to do a bunch of
>>> inserts in one transaction or batch.  Thus, I store them in a simple
>>> queue.  Therefore, the julianday('now') won't work because all of my
>>> batch inserts will have the same date and time.  And that doesn't
>>> work very well. ;)
> 
>> From: "Igor Tandetnik" <[EMAIL PROTECTED]>
>> You will have to deal with this in any case. E.g. on Windows the system
>> timer resolution is 15ms by default. You can insert quite a few records
>> in 15 ms.
>> It is unrealistic to expect that every record could be inserted with a
>> unique timestamp. Find some other way to ensure uniqueness (e.g. just
>> assign a sequential number to each).
> 
> 
> Thanks for your response.  However, my batch inserts and the uniqueness of
> my timestamps aren't the issue.  I guess I didn't explain my situation well
> enough.  Let me try again.
> 
> 
> 
> I take readings every X seconds which gives me plenty of uniqueness for each
> reading.  I save the SQL statements and then insert them in small batches.
> However, from reading this newsgroup I've learned that the correct way to
> put dates into SQLite is as I described before:
> 
> - create table my_table(date_stuff real);
> 
> - insert into my_table values(julianday('now'));
> 
> In my batch loop, I can't use julianday("now") - I need the timestamp
> to reflect when I took the reading.
> 
> 
> 
> If the right way to put datetime in the dbase is the julianday('now')
> format, I need to be able to create and capture that format in python.
> 
> 
> 
> What is the julianday("now") equivalent in python?  I can't find a simple,
> straight-forward answer to this question.

There isn't a Python equivalent of julianday().  Well, you could write
one yourself, and it's not that hard, but it doesn't come with Python.
You don't need it, though.

See, julianday() can convert a lot of different strings that represent
dates and times into the Julian day equivalents.  What you want is the
equivalent of 'now', and that's easy to get!

>>> import time  
>>> time.strftime("%Y-%m-%dT%H:%M:%S",time.gmtime())
'2008-09-22T16:26:10'

Then you use the string that you get and pass that into the julianday()
function in SQLite:

  insert into my_table values(julianday($string_that_strftime_gave_you));

And that's that.

This only gives you resolution down to a second.  If you want
subsecond resolution, then you'll have to use time.time() in Python.
That gives you the number of seconds (which could be a non-integer)
since the epoch.  If you're on a Unix machine, then things are easy:

  julianday($secs_since_epoch, 'unixepoch')

is what you want.
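
From C, for comparison, the same trick is just a bind plus the
'unixepoch' modifier.  A sketch, assuming my_table from above and an
open db handle:

    #include <time.h>

    time_t when = time(NULL);   /* or whatever clock you sampled at reading time */
    sqlite3_stmt *stmt;

    sqlite3_prepare_v2(db,
        "insert into my_table values(julianday(?, 'unixepoch'))",
        -1, &stmt, NULL);
    sqlite3_bind_int64(stmt, 1, (sqlite3_int64)when);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);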

The problem is that you might be on a different machine where the epoch
might be different.  I'm not sure if Python always gives you the Unix
epoch or not -- you'd have to check.  But you can still deal with different
epochs.

First, save the string representing the zero point of the epoch

>>> import time
>>> time.strftime("%Y-%m-%dT%H:%M:%S", time.gmtime(0))
'1970-01-01T00:00:00'

You only need to do this once at the beginning of the program.

Use the time.time() string as before.

Then what you want is

  julianday($epoch_string, '$secs_since_epoch seconds')

And julianday will just start at the epoch and add time.time() to it,
which is what you wanted.

Another thing you can do is just ask SQLite for julianday('now') at the
appropriate time, and save the real number it gives you for later use.

  select julianday('now');

This might be easier.

But do you really need to use the Julian day format?  What do you do with
these dates and times?


[sqlite] DELETEs using a range from an indexed column

2008-09-16 Thread Tomas Lee
I have this schema:

CREATE TABLE members
  (uid INTEGER PRIMARY KEY AUTOINCREMENT,
   name TEXT,
   score INTEGER);
CREATE INDEX members_score_index ON members (score);

I want to delete the 1000 members with the lowest scores.  Assume that
it is extremely unlikely for two members to have identical scores.
Also, the 1000 is arbitrary -- it could be more or less on different
days.  But each time I do this deletion, it will be a fixed number.

What's the fastest way to do this?  I want to lock the database for as
little time as possible.  Would it be:

Method A:

 * find the 1000th lowest score:

   SELECT score FROM members ORDER BY score LIMIT 1 OFFSET 999;

 * delete the records equal to or lower than that score

   DELETE FROM members WHERE score <= $thousandth_lowest_score;

Or would it be:

Method B:

 * find the uids for the 1000 lowest scores:

   SELECT uid FROM members ORDER BY score LIMIT 1000;

 * delete those records

   DELETE FROM members WHERE uid IN ([join $uids ,]);

 or:

   foreach doomed_uid in $uids do:
  DELETE FROM members WHERE uid = $doomed_uid

Or would it be:

Method C:

 * delete the records as you find them:

   sqlite3_prepare_v2(db, "DELETE FROM members WHERE uid = ?", -1, &stmt_d, NULL);
   sqlite3_prepare_v2(db, "SELECT uid FROM members ORDER BY score LIMIT 1000", -1, &stmt_q, NULL);
   while (sqlite3_step(stmt_q) == SQLITE_ROW) {
   int uid = sqlite3_column_int(stmt_q, 0);
   sqlite3_bind_int(stmt_d, 1, uid);
   sqlite3_step(stmt_d);
   sqlite3_reset(stmt_d);
   }
   sqlite3_finalize(stmt_d);
   sqlite3_finalize(stmt_q);

Or perhaps something else entirely?
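
Whichever method wins, I assume I would wrap the whole batch in a single
explicit transaction, so there is only one commit instead of one per
DELETE.  Roughly this, sketched around Method C:

    sqlite3_exec(db, "BEGIN IMMEDIATE", NULL, NULL, NULL);
    /* ... prepare, loop, and finalize exactly as in Method C above ... */
    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);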


[sqlite] Tcl 8.3 and SQLite 3.6.2

2008-09-05 Thread Tomas Lee
How can I use the Tcl extensions of SQLite 3.6.2 with Tcl 8.3?


Re: [sqlite] Dealing with monetary huge values in sqlite

2008-04-13 Thread Tomas Lee
On 2008 April 13 (Sun) 12:43:22pm PDT, Aladdin Lamp? <[EMAIL PROTECTED]> wrote:
> Thank you Nicolas for your answer. I understand that an int64
> certainly gives me enough precision in the general case.
>
> Now what am I supposed to do if the user decides to add a virtual 4
> decimal digits number to another number which has only 2 decimal
> digits? I should first identify this and then multiply by 100 the
> second number in order to be able to use sqlite built-in operators
> and functions... Not so simple, I think...
>
> And in my case, it is the user who determines the precision he
> desires for each number within the application, so I cannot have the
> precision fixed once for all like you suggest.
>
> Aladdin

You may want to look at the "General Decimal Arithmetic" site.  There's
a lot of information there about doing decimal arithmetic, both
fixed-size and arbitrary-precision.  You can download implementations
with very liberal licenses too.
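
If you do end up on the fixed-point route discussed below, the core of
it is small.  A sketch in C (the 10^6 scale is only an example, and the
rounding shown truncates toward zero, so negative amounts need extra
care):

    #include <stdint.h>

    #define SCALE 1000000LL     /* 10^6 fractional units per currency unit */

    int64_t a   = 1995000;      /* 1.995000 */
    int64_t b   =  100250;      /* 0.100250 */
    int64_t sum = a + b;        /* 2.095250, exact */

    /* round to cents (2 decimals) only when presenting the result */
    int64_t cents = (sum * 100 + SCALE / 2) / SCALE;    /* 210, i.e. 2.10 */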



>> Date: Sun, 13 Apr 2008 13:41:46 -0500
>> From: [EMAIL PROTECTED]
>> To: sqlite-users@sqlite.org
>> Subject: Re: [sqlite] Dealing with monetary huge values in sqlite
>> 
>> On Sun, Apr 13, 2008 at 07:37:33PM +0200, Aladdin Lamp? wrote:
>>> In my opinion (please tell me if I'm wrong), your method only works if
>>> you want to *display* the values in your column, and if the decimal
>>> precision doesn't change from line to line.
>>> 
>>> I would like to be able to perform operations (+, /, %, etc.) and to
>>> store intermediary results into other columns (such as x% of the
>>> value, etc.), which may have an arbitrary precision, and - only at the
>>> end - round the result to cents/pennies etc.  This is required in the
>>> technical specifications provided by my client, because high precision
>>> matters for them. Indeed, an error of 0.01$ on 10,000,000 lines would
>>> lead to a significant error at the end...
>> 
>> The proposed method allows you to perform arithmetic operations using
>> SQLite's built-in operators and functions.
>> 
>> If the greatest error your customer is willing to accept is, say, one
>> part in a million (that is, a millionth of a cent), then using 64-bit
>> would allow you to represent values up to ~ 10e13 -- 10 trillion, not so
>> much, but perhaps good enough for you.
>> 
>> If the tolerance is more like one in 10,000 ($0.001), then you can
>> represent up to ~ 1,000 trillion as the largest value. That's still
>> pretty small, IMO, but almost certainly good enough for you.
>> 
>> You can adjust where you put the virtual fixed point. If you can't find
>> a suitable break then you should consider your arbitrary precision
>> extended function scheme (or a database engine that provides
>> the features you need).
>> 
>> Nico


Re: [sqlite] SQLITE_LOCKED behavior

2008-04-13 Thread Tomas Lee
On 2008 April 12 (Sat) 05:44:53pm PDT, Shawn Wilsher <[EMAIL PROTECTED]> wrote:
> When using SQLite 3.5.4.1 (a special branch cut for Mozilla, based
> mostly off of 3.5.4 with some OS/2 fixes), I'm getting SQLITE_LOCKED
> returned unexpectedly.  The documentation seems to indicate that I
> should only be getting SQLITE_LOCKED if I'm calling sqlite3_exec
> recursively writing to the same table.  However, it seems to me that
> I'm having that happen when two different threads are trying to write
> to the same table.  I would expect to get SQLITE_BUSY at this point,
> but perhaps I'm misusing the API or have the wrong expectations.
> 
> This is happening by using a different sqlite3 database pointers, one
> for each thread.

Are you using a shared cache?  You can also get SQLITE_LOCKED when
using a shared cache.  See section 2.2 of the shared-cache mode
documentation.

I've not used a shared cache myself.  One day I was wondering if I
needed to worry about handling SQLITE_LOCKED errors and I came across
that page.  Are these the only times you can get SQLITE_LOCKED errors?
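
In the meantime, the way I tell the two codes apart is roughly this (a
sketch; SQLITE_BUSY is the one a busy handler can help with, and stmt is
whatever statement you are stepping):

    sqlite3_busy_timeout(db, 2000);   /* retry SQLITE_BUSY for up to 2 seconds */

    int rc = sqlite3_step(stmt);
    if (rc == SQLITE_LOCKED) {
        /* conflict inside this connection (or its shared cache):
           retrying the same statement will not clear it, so the
           calling code needs to be reworked instead */
    }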


Re: [sqlite] step back (again)

2008-03-15 Thread Tomas Lee
On 2008 March 15 (Sat) 05:21:53pm PDT, Jeff Hamilton <[EMAIL PROTECTED]> wrote:
> What about something like this:
> 
> SELECT title FROM tracks
>  WHERE singer='Madonna'
>AND (title<:firsttitle OR (title=:firsttitle AND rowid<:firstrowid))
>  ORDER BY title DESC, rowid ASC
>  LIMIT 5;
> 
> Then you only have to remember the single title and rowid of the first
> item in the list. You'd have to add the rowid ASC to your index as
> well, but the index already needs to store the rowid so I don't think
> it would take more space.

That's a clever idea.  But is SQLite's query optimizer smart enough to
know it can use the index on title for that query?  If you re-write it
to be

SELECT title FROM tracks
 WHERE singer='Madonna'
   AND title<=:firsttitle AND (title!=:firsttitle OR rowid<:firstrowid)
 ORDER BY title DESC, rowid ASC
 LIMIT 5;

then it surely should realize that it can use the index on title.
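
One way to check, rather than guess, is to ask the optimizer directly.
A sketch (the exact wording of the output varies between SQLite
versions, but it will name the index if one is used):

    sqlite3_stmt *plan;

    sqlite3_prepare_v2(db,
        "EXPLAIN QUERY PLAN "
        "SELECT title FROM tracks "
        " WHERE singer='Madonna' "
        "   AND title<=:firsttitle AND (title!=:firsttitle OR rowid<:firstrowid) "
        " ORDER BY title DESC, rowid ASC LIMIT 5",
        -1, &plan, NULL);
    while (sqlite3_step(plan) == SQLITE_ROW)
        printf("%s\n", (const char *)sqlite3_column_text(plan, 3));  /* "detail" column */
    sqlite3_finalize(plan);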


Re: [sqlite] Berkeley DB vs. SQLite for threaded application

2008-01-16 Thread Tomas Lee
On 2008 January 16 (Wed) 04:57:42am PST, [EMAIL PROTECTED] wrote:
> Tomas Lee <[EMAIL PROTECTED]> wrote:
> > I've got an application that has Berkeley DB embedded in it.  I want
> > to replace Berkeley DB with SQLite.  (I plan to use the 3.5.4
> > amalgamation, which is the latest I could find.)  The thing is, this
> > application uses threads.  I know threads are evil, but this
> > application uses them, and there it is.  So, I wanted to understand
> > what I had to do to be safe.
> > 
> > As I understand it, Berkeley DB has free-threaded database handles, so
> > my application can open a Berkeley DB database connection and have all
> > of its thread use that same connection.  But SQLite doesn't allow
> > this, so I'm going to have to change things.  In theory, I could just
> > open and close a new connection whenever I want to access the
> > database.  How much of a performance hit is that?
> > 
> 
> As of version 3.5.0, SQLite allows free-threaded database handles.

Ah!  That makes things a lot easier.  Thanks!
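
For anyone who finds this thread later, here is a minimal sketch of what
that allows, one connection shared by several threads, assuming the
library is built with SQLITE_THREADSAFE=1 (the default) and using an
illustrative notes table:

    #include <pthread.h>
    #include <sqlite3.h>

    static sqlite3 *db;                 /* one handle for the whole process */

    static void *worker(void *arg)
    {
        sqlite3_exec(db, "INSERT INTO notes(msg) VALUES('hello')", NULL, NULL, NULL);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        if (!sqlite3_threadsafe())      /* compiled without thread safety */
            return 1;
        sqlite3_open("test.db", &db);
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS notes(msg)", NULL, NULL, NULL);
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sqlite3_close(db);
        return 0;
    }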




[sqlite] Berkeley DB vs. SQLite for threaded application

2008-01-15 Thread Tomas Lee
I've got an application that has Berkeley DB embedded in it.  I want
to replace Berkeley DB with SQLite.  (I plan to use the 3.5.4
amalgamation, which is the latest I could find.)  The thing is, this
application uses threads.  I know threads are evil, but this
application uses them, and there it is.  So, I wanted to understand
what I had to do to be safe.

As I understand it, Berkeley DB has free-threaded database handles, so
my application can open a Berkeley DB database connection and have all
of its threads use that same connection.  But SQLite doesn't allow
this, so I'm going to have to change things.  In theory, I could just
open and close a new connection whenever I want to access the
database.  How much of a performance hit is that?

The SQLite documentation tells me that I can move a connection
handle across threads as long as that connection is holding no fcntl()
locks, and that this is due to fcntl() bugs on some OSes, like RedHat9.
But what if I'm running on an OS that doesn't have these bugs -- do
I have to worry about moving a connection handle across threads?  And
how can I tell if my OS has these bugs?
