Re: [sqlite] commit time

2009-10-21 Thread Dan Kennedy

On Oct 22, 2009, at 11:37 AM, John Crenshaw wrote:

> An open cursor will block. I've watched it. It was a major problem,  
> and
> I spent many many hours stepping through SQLite before I finally  
> figured
> it out. Once I carefully closed out cursors, the problem went away.  
> (In
> my case I had a long running write process trying to commit a
> transaction so it could yield to another connection in a separate  
> thread
> that wanted to write. If cursors were open on a table, the other
> connection would refuse to grab a write lock on that table, even  
> though
> the transaction was committed and there were no open writers.)
>
> I don't remember where for sure (may have been in
> sqlite3BtreeBeginTrans) SQLite specifically checks for open cursors.
>
> The write lock doesn't stop you from reading, but an open cursor DOES
> stop you from writing. You have to check for SQLITE_LOCKED, no way
> around it.

I don't understand the situation described in the first paragraph. But
the statement above is at least not universally true. Tcl test cases
"shared-[01].3.1.1" and "shared-[01].3.1.2" (see around line 229 of
the test/shared.test file in the source distribution) are examples of
one connection writing to a table while a second connection is scanning
through the same table using an open cursor. In this case the
"second connection" is operating in read_uncommitted mode.

Dan.




>
> John

Re: [sqlite] commit time

2009-10-21 Thread John Crenshaw
An open cursor will block. I've watched it. It was a major problem, and
I spent many many hours stepping through SQLite before I finally figured
it out. Once I carefully closed out cursors, the problem went away. (In
my case I had a long running write process trying to commit a
transaction so it could yield to another connection in a separate thread
that wanted to write. If cursors were open on a table, the other
connection would refuse to grab a write lock on that table, even though
the transaction was committed and there were no open writers.)

I don't remember where for sure (may have been in
sqlite3BtreeBeginTrans) SQLite specifically checks for open cursors.

The write lock doesn't stop you from reading, but an open cursor DOES
stop you from writing. You have to check for SQLITE_LOCKED, no way
around it.

John


Re: [sqlite] manipulating arguments (in C)

2009-10-21 Thread Jean-Christophe Deschamps
Roger,

Thank you for your answer.  I knew from old days that va_* things are 
very fragile (not only from the portability point of view).

In the "duct tape programming" situation where I currently am, the best
I could come up with is fixing the max # of arguments at 32 and using
an _ugly_ kludge on the stack.  It gives something along the lines of:

select printf('abc %lli %s %s %g %s %lld',
 -357,
 Proper('febÆhùivÊ 1no 2no'),
 null,
 3.1415926,
 Upper('ådefàßùç'),
 1234567890123456
) as Result;

returning:

Result
abc -357 Febæhùivê 1No 2No  3.14159 ÅDEFÀSSÙÇ 1234567890123456

I confess that the way I pass sqlite3_mprintf(fmt, stuff) the "stuff" 
structure containing ... ahem ... exactly what the stack is supposed to 
contain is probably not a very academic programming model.   (but it 
works >:-} )



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] commit time

2009-10-21 Thread Dan Kennedy

On Oct 22, 2009, at 5:21 AM, Tom Broadbent wrote:

> if thread 1 opens a read cursor in read uncommitted mode it can  
> block a write lock?  i thought the read happens w/o a lock?

If using read-uncommitted mode, a reader thread will not block a
writer thread that is using the same shared-cache. Except, it does
block a writer from modifying the database schema.

Dan.


Re: [sqlite] Inner Join Performance Issue

2009-10-21 Thread John Crenshaw
Try to EXPLAIN the query and verify that the index is actually used.
There are a lot of reasons why this query would probably NOT be using
the index.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Ralf
Sent: Wednesday, October 21, 2009 5:50 PM
To: 'General Discussion of SQLite Database'
Subject: [sqlite] Inner Join Performance Issue

Hello Forum,
I have a select that joins 15 tables; the where clause consists of 8 LIKE
relations (all fields are indexed). This is to implement a sort of "search
engine".
The performance is awful: it takes around 10 sec.
Is this how it should be, or is there anything I can do?

If you need more info, please let me know

Thx
Ralf



Re: [sqlite] commit time

2009-10-21 Thread John Crenshaw
Read sort of does happen without a lock, but write requires a lock, and
SQLite can't grab a write lock if another connection has open cursors
(notwithstanding the fact that they technically don't hold a lock). It's
complicated. Just trust me. You won't get that write lock while cursors
are open, so you'll still have to handle SQLITE_LOCKED or pay the
consequences.

John


Re: [sqlite] index for a group by

2009-10-21 Thread Simon Slavin

On 21 Oct 2009, at 11:34pm, Sylvain Pointeau wrote:

> if your "book" contains all lines (a,b,c,t,d) and you create an index
> on (a,b,c,t)

I assume you meant to add ',d' in there.

> then your index is as fat as your book, isn't it?

Yes.  And it still isn't as useful for any SELECT that doesn't use the  
fields in the right order.  And it's more useful for a SELECT that  
does use those fields in the right order.  And it's less useful if you  
ever want to read the text in an order that makes any sense: I cannot  
read the index to a text book and learn the information the textbook  
wants to teach me.

Simon.


Re: [sqlite] index for a group by

2009-10-21 Thread Nicolas Williams
On Thu, Oct 22, 2009 at 12:34:26AM +0200, Sylvain Pointeau wrote:
> if your "book" contains all lines (a,b,c,t,d) and you create an index on
> (a,b,c,t)
> 
> then your index is as fat as your book, isn't it?

Depends on the size of d.

Also, if you add a constraint declaring t, a, b, and c (you want 't'
first!) to be unique then the DB could make the whole thing smaller than
if you first create the table, then the index.  (I'm not sure if SQLite3
does that, but it could).

Nico
-- 


Re: [sqlite] index for a group by

2009-10-21 Thread Sylvain Pointeau
if your "book" contains all lines (a,b,c,t,d) and you create an index on
(a,b,c,t)

then your index is as fat as your book, isn't it?

cheers,
Sylvain

On Wed, Oct 21, 2009 at 11:52 PM, Simon Slavin  wrote:

>
> On 21 Oct 2009, at 9:19pm, Sylvain Pointeau wrote:
>
> > Thank you for your answers.
> > knowing that I have a table T (a,b,c,d,t)
> > where d is a value
> > a,b,c some dimensions
> > and t the time
> >
> > where I need to make a subset with a "group by" like
> >
> > select a,b,c,sum(d)
> > from T
> > where t>x1 and t<x2
> > group by a,b,c
> >
> > do you have an idea on how to choose the best index in my case?
>
> The first thing that this SELECT command is doing is rejecting all the
> rows which do not have the right value for t.  So your index should
> start with t.  Your SELECT is then grouping by a,b,c.  So my guess at
> a good index would be
>
> CREATE INDEX searchOnT ON T (t,a,b,c)
>
> You can perhaps speed up your search by replacing your 'and' with
> 'between' like this:
>
> select a,b,c,sum(d)
> from T
> where t between x1 and x2
> group by a,b,c
>
> And for other reasons it might also be better to include d in the index:
>
> CREATE INDEX searchOnT ON T (t,a,b,c,d)
>
> > is it better to choose (a,b,c) ?
>
> This would not be as useful because the first thing the computer is
> trying to do is reject most of the table first, by checking the value
> of t.  Only after it has done that do the values of a,b,c become
> important.
>
> > or (a,b,c,t) ?
>
> It checks the value of t first, so you want to put the t first.
>
> > (the issue is that it is like I duplicate my table right?)
>
> No.  An index is not like duplicating your table.  Think of the TABLE
> as a book, and the INDEX as the index at the back of the book.  It's
> not an entire copy of the book, it's a fast way of knowing which page
> in the book to look at.  In a normal book people do not want to find
> all the green objects, all the red objects, all the blue objects, so
> you do not make an index for object colours.  You need to know what
> people are most likely to want to find.
>
> I think you might find it helpful to read some basic information about
> databases and indexing before you start to worry about the details of
> your particular program.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] manipulating arguments (in C)

2009-10-21 Thread Roger Binns

Jean-Christophe Deschamps wrote:
> I feel the need to wrap an SQLite printf-like function into a scalar 
> function.

You can just do that sort of thing in your application.  There is no need to
do it via a SQL function.

> My question is slightly off topic but there are experienced users here 
> who have probably done it before.

You can retrieve any value as any type.  For example if an integer is
returned but you call column_text then it will be converted to text for you.

> In the scalar function (printf), I get arguments as the usual array of 
> pointers. How would you pass the variable list arguments to 
> sqlite3_mprintf()?
> Is there any nice way not imposing a limitation on n?

This kind of thing is extraordinarily tricky in C, error-prone, easy to
crash, and usually non-portable. This is how you receive a variable number
of arguments:

  http://en.wikipedia.org/wiki/Stdarg.h

sqlite3_vmprintf can receive that.  GCC can do some variable-argument
call construction, but it is not really useful here:

  http://gcc.gnu.org/onlinedocs/gcc/Constructing-Calls.html
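The stdarg pattern boils down to the shape below. The function my_mprintf and the use of vsnprintf are just stand-ins to show the forwarding relationship that sqlite3_mprintf has to sqlite3_vmprintf:

```c
#include <stdarg.h>
#include <stdio.h>

/* A varargs front end forwarding to a va_list ("v") variant -- the same
 * relationship sqlite3_mprintf has to sqlite3_vmprintf.  The va_list is
 * handed on whole; the arguments are never unpacked individually. */
static int my_mprintf(char *buf, size_t n, const char *fmt, ...) {
    va_list ap;
    va_start(ap, fmt);                  /* capture the variable arguments */
    int r = vsnprintf(buf, n, fmt, ap); /* forward them to the v-variant */
    va_end(ap);
    return r;                           /* chars that would be written */
}
```

Forwarding a whole va_list like this is the only portable move; composing a *new* varargs call from values decided at runtime is exactly what standard C cannot do, which is the problem described above.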

But what you are really trying to do is take an arbitrary number of
parameters, unwrap them from SQLite types into native types, and then
compose a call to sqlite3_mprintf. The only way this is directly possible
is to use a library that can do it for you at runtime, which pretty much
means this one:

  http://en.wikipedia.org/wiki/Libffi

You really don't want to go down this route.  If you do, then you really
don't want to use C.  As an example, your requirement is trivial to do in
Python - it would be a one-liner using either wrapper.

Roger


Re: [sqlite] commit time

2009-10-21 Thread Tom Broadbent
if thread 1 opens a read cursor in read uncommitted mode it can block a write 
lock?  i thought the read happens w/o a lock?

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of John Crenshaw
Sent: Wednesday, October 21, 2009 12:03 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Good, a single write thread saves you all the hassle involved with
yielding. Unfortunately, even without multiple writers blocking is still
possible. If thread 1 opens a cursor, and thread 2 tries to write before
that cursor has been closed, it will return SQLITE_LOCKED. Since any
read query will return a cursor, there is always a possibility for
blocking, and you need to handle SQLITE_LOCKED.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 21, 2009 2:09 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

very good.  i don't anticipate multiple writers so this should be pretty
simple.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Crenshaw
Sent: Wednesday, October 21, 2009 9:15 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Yes, you have to call sqlite3_enable_shared_cache before opening any
database connections, then execute "PRAGMA read_uncommitted = true;" on
each connection. Blocking can still happen in some situations, but you
can handle it as I described in my original reply.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 21, 2009 12:05 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time


it sounds like this is the feature you recommend using:

"A database connection in read-uncommitted mode _does not attempt to
obtain read-locks before reading_ from database tables as described
above. This can lead to inconsistent query results if another database
connection modifies a table while it is being read, but it also means
that a read-transaction opened by a connection in read-uncommitted mode
can neither block nor be blocked by any other connection."

this is precisely what i need.  thanks very much.


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org]
On Behalf Of John Crenshaw [johncrens...@priacta.com]
Sent: Tuesday, October 20, 2009 7:18 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Sounds like a great candidate for shared cache with PRAGMA
read_uncommitted = true.

If other threads may also need a write lock on that table, you should
handle SQLITE_LOCKED by incrementing a waiter count and calling
sqlite3_unlock_notify. The thread doing the inserting can check to see
if anybody is waiting (blocked) and yield by committing the current
transaction and waiting for the blocked thread to unblock. Be aware, you
should also close any open cursors before yielding, because open cursors
will prevent write locks and you'll waste time yielding for nothing.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Tuesday, October 20, 2009 8:05 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] commit time

i have a simple join table containing two ids from two other tables.  i
have an index on each of the ids in the join table.

CREATE TABLE ContentWordItem (word_id INT, item_id INT);

CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id);
// index to perform fast queries by word_id

CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id);
// index to perform fast deletes by item_id

i have a large amount of data to insert into this join table on a slow
embedded device.  i need to avoid locking this join table for more than
a second or two at a time so that i can make queries on this table.

so here's the question:  how do i insert small chunks of data into this
table w/o taking a hit each time i commit?

what i'm doing is:

* read a chunk of data from flat data file into vector of id
pairs

* begin transaction

* loop thru vector of id pairs binding and inserting

* commit transaction

* repeat until data is exhausted

i'm seeing that the reading, binding, and inserting is very fast (300
ms) but the commit is taking upwards of 3 seconds.  when i increase my
chunk size by a factor of 10 the insert doesn't appear to take 10x
longer but the commit still takes upwards of 3 seconds.  the point is
that the commit hit appears to be much greater than the insert hit but
doesn't appear to scale directly.

it appears that the commit is updating the indexes and taking a long
time.  is this a correct evaluation?

it also appears that the commit takes longer as the size of the table
grows (i.e. the index is getting bigger).  is this expected?

what i'm worried about is that by reducing the chunk size (to avoid
locking the db for a long time) i add a significant amount of time to
the insert process because the commits are costing several seconds.
however, locking the db for a long time is not desirable.

i'm also concerned about the commit time increasing over time as the
amount of data in the table increases.

is there a better approach?

thanks
tom
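(the chunked-load loop described above, sketched with Python's stdlib sqlite3 standing in for the C calls; the chunk size is illustrative. Each COMMIT pays the index-update and sync cost, which is why fewer, larger chunks amortize it better:)

```python
import sqlite3

db = sqlite3.connect(":memory:", isolation_level=None)  # manage txns by hand
db.execute("CREATE TABLE ContentWordItem (word_id INT, item_id INT)")
db.execute("CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id)")
db.execute("CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id)")

def insert_chunked(pairs, chunk=1000):
    """Insert (word_id, item_id) pairs in per-chunk transactions, so the
    table is never write-locked for longer than one chunk's commit."""
    for i in range(0, len(pairs), chunk):
        db.execute("BEGIN")
        db.executemany("INSERT INTO ContentWordItem VALUES (?, ?)",
                       pairs[i:i + chunk])
        db.execute("COMMIT")   # readers can get in between chunks

insert_chunked([(w, w % 7) for w in range(5000)])
```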

[sqlite] manipulating arguments (in C)

2009-10-21 Thread Jean-Christophe Deschamps

I feel the need to wrap an SQLite printf-like function into a scalar 
function.
I wish to use it as in:

select printf(format_string, list of arguments);

My question is slightly off topic but there are experienced users here 
who have probably done it before.

In the scalar function (printf), I get arguments as the usual array of 
pointers. How would you pass the variable list arguments to 
sqlite3_mprintf()?
Is there any nice way not imposing a limitation on n?
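For comparison, higher-level bindings sidestep the va_list problem entirely: the scalar-function callback receives its arguments as an array (the sqlite3_value** in C), and the formatting is done with the host language's own tools rather than by forging a va_list for sqlite3_mprintf (which portable C cannot do). A purely illustrative sketch with Python's stdlib sqlite3:

```python
import sqlite3

def sql_printf(fmt, *args):
    # Arguments arrive as a tuple of already-decoded values, so any
    # number of them can be forwarded -- no fixed limit on n.
    return fmt % args

db = sqlite3.connect(":memory:")
# narg = -1 registers a variadic scalar function (any argument count),
# the same convention as sqlite3_create_function's nArg = -1.
db.create_function("printf", -1, sql_printf)

row = db.execute("SELECT printf('%s is %d', 'x', 42)").fetchone()
```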



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] index for a group by

2009-10-21 Thread Simon Slavin

On 21 Oct 2009, at 9:19pm, Sylvain Pointeau wrote:

> Thank you for your answers.
> knowing that I have a table T (a,b,c,d,t)
> where d is a value
> a,b,c some dimensions
> and t the time
>
> where I need to make a subset with a "group by" like
>
> select a,b,c,sum(d)
> from T
> where t>x1 and t<x2
> group by a,b,c
>
> do you have an idea on how to choose the best index in my case?

The first thing that this SELECT command is doing is rejecting all the  
rows which do not have the right value for t.  So your index should  
start with t.  Your SELECT is then grouping by a,b,c.  So my guess at  
a good index would be

CREATE INDEX searchOnT ON T (t,a,b,c)

You can perhaps speed up your search by replacing your 'and' with  
'between' like this:

select a,b,c,sum(d)
from T
where t between x1 and x2
group by a,b,c

And for other reasons it might also be better to include d in the index:

CREATE INDEX searchOnT ON T (t,a,b,c,d)
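(a small check of the suggestion, with Python's stdlib sqlite3; the data is invented, and including d in the index — to make it covering for this query — is one of the options discussed here, not a requirement:)

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE T (a, b, c, d, t)")
with db:
    db.executemany("INSERT INTO T VALUES (?,?,?,?,?)",
                   [(i % 2, i % 3, i % 5, i, i) for i in range(100)])

# t first, so the range condition prunes rows before grouping; adding d
# makes the index covering for this particular query.
db.execute("CREATE INDEX searchOnT ON T (t, a, b, c, d)")

rows = db.execute("""
    SELECT a, b, c, sum(d)
    FROM T
    WHERE t BETWEEN 10 AND 20
    GROUP BY a, b, c
""").fetchall()
```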

> is it better to choose (a,b,c) ?

This would not be as useful because the first thing the computer is  
trying to do is reject most of the table first, by checking the value  
of t.  Only after it has done that do the values of a,b,c become  
important.

> or (a,b,c,t) ?

It checks the value of t first, so you want to put the t first.

> (the issue is that it is like I duplicate my table right?)

No.  An index is not like duplicating your table.  Think of the TABLE  
as a book, and the INDEX as the index at the back of the book.  It's  
not an entire copy of the book, it's a fast way of knowing which page  
in the book to look at.  In a normal book people do not want to find  
all the green objects, all the red objects, all the blue objects, so  
you do not make an index for object colours.  You need to know what  
people are most likely to want to find.

I think you might find it helpful to read some basic information about  
databases and indexing before you start to worry about the details of  
your particular program.

Simon.


[sqlite] Inner Join Performance Issue

2009-10-21 Thread Ralf
Hello Forum,
I have a select that joins 15 Tables the where clause consist of 8 like
relations (all fields are indexed), this is to implement a sort of "search
engine".
The performance is awful. It takes around 10sec. 
Is this how it should be or is there anything I can do?

If you need more infos pls. let me know

Thx
Ralf



Re: [sqlite] Slow SELECTs in application

2009-10-21 Thread Kees Nuyt
On Wed, 21 Oct 2009 17:35:41 +0100 (BST), Keith Roberts
 wrote:

>On Wed, 21 Oct 2009, Unabashed wrote:
>
>> To: sqlite-users@sqlite.org
>> From: Unabashed 
>> Subject: [sqlite]  Slow SELECTs in application
>> 
>>
>> Hello!
>> I'm using SQLite as DB in my application. My problem consists of two
>> aspects. First, it works great, but on large data SELECTs are very slow
>> (10-20s!). Queries are in one transaction. My table structure is:
>> CREATE TABLE mgWords (
>>  id INTEGER PRIMARY KEY AUTOINCREMENT,
>>  id_norm INTEGER,
>>  word TEXT,
>>  wform TEXT)
>
>You could try removing the AUTOINCREMENT constraint to 
>speed things up a little.
>
>All you need is:
>
>id INTEGER PRIMARY KEY,
>
>That will allow you to reference the rowid using the 'id' 
>identifier. They both refer to the same thing.

id will also alias to rowid with the AUTOINCREMENT clause.
There is no reason why it wouldn't. INTEGER PRIMARY KEY is
decisive by itself.
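(this is easy to confirm; a quick check with Python's stdlib sqlite3:)

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE mgWords (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    id_norm INTEGER, word TEXT, wform TEXT)""")

# No id supplied: it is still auto-assigned, because INTEGER PRIMARY KEY
# (with or without AUTOINCREMENT) is an alias for the rowid.
db.execute("INSERT INTO mgWords (id_norm, word, wform) VALUES (1, 'a', 'b')")
row = db.execute("SELECT id, rowid FROM mgWords").fetchone()
```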

>Please see: http://www.sqlite.org/autoinc.html

, which explicitly states what AUTOINCREMENT does for ROWID.

>for all the details.
-- 
  (  Kees Nuyt
  )
c[_]


Re: [sqlite] LAN and exclusive lock

2009-10-21 Thread Simon Slavin

On 21 Oct 2009, at 5:58pm, Jan wrote:

> thx bruce, but I am addicted to open-source.

If you want an open source server/client SQL engine, designed from the  
ground up to work correctly when accessed from many computers at once,  
I recommend MySQL.  It is easier, faster and more efficient to just  
use MySQL as it is designed than it is to try to turn SQLite into a  
networked concurrent client system.

Simon.


Re: [sqlite] index for a group by

2009-10-21 Thread Sylvain Pointeau
Thank you for your answers.
knowing that I have a table T (a,b,c,d,t)
where d is a value
a,b,c some dimensions
and t the time

where I need to make a subset with a "group by" like

select a,b,c,sum(d)
from T
where t>x1 and t<x2
group by a,b,c

do you have an idea on how to choose the best index in my case?
is it better to choose (a,b,c) ? or (a,b,c,t) ?
(the issue is that it is like I duplicate my table right?)

On Tue, Oct 20, 2009, John Crenshaw wrote:

> Actually, I thought exactly what you said when I saw the question. When
> I saw your answer though I realized I'd been wrong, there are ways I
> could slow indexing down, and therefore, ways to speed it up.
>
> Splitting across transactions is about the insertion of data, not the
> creation of the index. This is for the case where you can't insert first
> and create the index later (maybe you're inserting a lot of data into a
> table that already has data for example.) The recommendation in this
> case is to wrap the whole batch of inserts in a transaction, but to
> commit the transaction at regular intervals, breaking the process into
> multiple pieces so that you don't spill over the memory cache.
> SUPPOSEDLY this positively impacts indexing performance, but I've not
> personally tested that claim. The more significant impact in this case
> is actually the individual transactions you avoid, which makes a huge
> difference.
>
> John
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Pavel Ivanov
> Sent: Tuesday, October 20, 2009 2:47 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] index for a group by
>
> I want to notice, John, that my words are in context "I have table
> with a lot of data, I want to create a particular index on it, how can
> I do it quickly". In this context only your 5 bullet is applicable, I
> admit I've forgot about that. And I don't understand how can one split
> creating of index across several transactions.
>
> Pavel
>
> On Tue, Oct 20, 2009 at 2:15 PM, John Crenshaw
>  wrote:
> >> Nothing in this process can be sped up.
> >
> > Actually, that isn't entirely true. While it always requires a full
> data
> > scan, Some things can make the indexing part of the process faster.
> > Since indexing is done basically using a comparative sort, anything
> that
> > would speed up the sort, will speed up the indexing.
> > 1. It is faster to sort 1000 data points, than to insert 1000
> datapoints
> > into a constantly sorted list. Creating the index after all inserts is
> > faster than creating the index, then inserting.
> > 2. If possible, avoid indexes on long data strings, since the compares
> > can be time consuming.
> > 3. If you have a field that stores one of several strings (as an
> "enum")
> > consider using integers instead. Integers have lower overhead, and can
> > be compared (and sorted) more quickly than strings.
> > 4. If you are feeling really gutsy, you could mod the code and
> implement
> > a radix sort or something similar for integer values. I'm not really
> > recommending this, just saying, inserts and lookups in a radix index
> are
> > faster than a btree.
> > 5. Make sure the memory cache is large enough for the sort. Writing
> data
> > to disk is very costly, compared to sorting in memory. Default is 2000
> > pages (2MB) worth of btree data. If you are about to build an index
> that
> > will require more btree than that, increase the size, or split across
> > several transactions.
> >
> > John
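(a sketch of bullets 1 and 5 above with Python's stdlib sqlite3 — the pattern, not a benchmark; the cache size and data are illustrative:)

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Bullet 5: enlarge the page cache before building the index
# (a negative value means size in KiB rather than pages).
db.execute("PRAGMA cache_size = -8000")

db.execute("CREATE TABLE T (a, b, c, d, t)")
with db:   # one transaction for the whole bulk load
    db.executemany("INSERT INTO T VALUES (?,?,?,?,?)",
                   [(i, i, i, i, i) for i in range(10000)])

# Bullet 1: create the index after the inserts -- one big sort instead
# of 10000 incremental insertions into a constantly sorted b-tree.
db.execute("CREATE INDEX T_abc ON T (a, b, c)")
```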
> >
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Pavel Ivanov
> > Sent: Tuesday, October 20, 2009 7:35 AM
> > To: General Discussion of SQLite Database
> > Subject: Re: [sqlite] index for a group by
> >
> >> please could you let me know which index could be better or faster?
> >
> > For this particular query index on (t,a,b,c) or even on (t,a,b,c,d)
> > would be better and cause the query to execute faster (of course if by
> conditions t>x1 and t<x2 you select only a small part of the
> table).
> >
> >> also do you know by chance how to speed up the index creation?
> >
> > There's no way to do that. SQLite have to scan the whole table, read
> > data from all rows and put necessary information into the index.
> > Nothing in this process can be sped up.
> >
> > Pavel
> >
> > On Mon, Oct 19, 2009 at 5:50 PM, Sylvain Pointeau
> >  wrote:
> >> hello,
> >> I have a table T (a,b,c,d,t)
> >> where c is a value
> >> a,b,c some dimensions
> >> and t the time
> >>
> >> I need to make a subset with a "group by"
> >> like
> >>
> >> select a,b,c,sum(d)
> >> from T
> >> where t>x1 and t<x2
> >> group by a,b,c
> >>
> >> I created an index on a,b,c
> >> but this table is large and the index creation is time consuming (few
> > hours)
> >>
> >> please could you let me know which index could be better or faster?
> >> also do you know by chance how to speed up the index creation?
> >>
> >> Best regards,
> >> Sylvain

[sqlite] R: RE: Like do not use index as previous version

2009-10-21 Thread ge...@iol.it
Thanks for your answer.
 I wrote a simple example to show the difference 
between last and previous version of sqlite.
The problem is that I have some 
prepared statement which can be used either with wildchars or not, depending on 
user input; more complex (and less readable) code will needed if I have to 
check wildchar presence and choose the prepared statement. I think that it 
would be better if this will be resolved inside sqlite engine (and I suppose 
this is a common expected behaviour for a db).
  

>Messaggio originale
>Da: donald.gri...@allscripts.com
>Data: 21/10/2009 20.15
>A: , "General Discussion of SQLite Database"
>Ogg: RE: [sqlite] Like do not use index as previous version
>
>-Original Message-
>From: sqlite-users-boun...@sqlite.org
>[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of ge...@iol.it
>Sent: Wednesday, October 21, 2009 2:03 PM
>To: sqlite-users@sqlite.org
>Subject: [sqlite] Like do not use index as previous version
>
>Hi all,
>it seems that in the latest versions of sqlite3 the LIKE clause stopped
>using indexes; I created a new empty database with SQLite 3.6.13 and ran
>these statements:
>
>CREATE TABLE TEST
>(TEXT_1 text PRIMARY KEY, TEXT_2 text, TEXT_3 text COLLATE NOCASE);
>CREATE INDEX TEST_IDX_2 ON TEST (TEXT_2);
>CREATE INDEX TEST_IDX_3 ON TEST (TEXT_3);
>
>Running explain query plan on "select * from test where text_3 like
>'x';", I have the following result:
>
>0|0|TABLE test WITH INDEX TEST_IDX_3
>
>And it's what I expected.
>
>If I execute the same statements in SQLite 3.6.16 and 3.6.19, I have
>this result:
>
>0|0|TABLE test
>
>So it's not using the index as in the 3.6.13 version.
>The same happens if I try to use the operator GLOB with field text_2: in
>SQLite 3.6.13 it uses the TEST_IDX_2 index but it's not used in SQLite
>3.6.19.
>
>Any suggestions?
>
>==
>
>Regarding suggestions:
>   Since "like" and "glob" are intended for use with wildcards and
>you're not using wildcards, why not use
> "where text_3 == 'x';"
>instead?




Re: [sqlite] Ticket 3810: SQLITE_ERROR on concurrent CREATE TEMP TRIGGER and sqlite3_close()

2009-10-21 Thread Dave Toll
I saw that DRH had added the following comment to the ticket:

"2009-Oct-20 16:49:55 by drh: 
When ever a prepared statement runs, it first checks to see if the schema has 
changed. If the schema has changed, then the statement must be recompiled and 
retried. This loop of checking for schema changes and recompiling is run 5 
times, after which statement execution is abandoned with an error.
What appears to be happening in this ticket is that one thread keeps trying to 
run a prepared statement, but the schema keeps getting changed out from under 
it by another thread. Each time the original thread reprepares its statement, 
another thread changes the schema yet again before the first thread has an 
opportunity to run the statement.

If the attached test program is modified so that the statement is retried even 
if it returns SQLITE_ERROR (in addition to SQLITE_SCHEMA and other values) then 
it works as desired.

So, in other words, this appears to be an application problem."


Retrying on SQLITE_ERROR does indeed work in my test case. However I was under 
the impression that SQLITE_ERROR implied a serious error - should my 
application always retry on SQLITE_ERROR returned from sqlite3_exec() or 
sqlite3_step()? Wouldn't SQLITE_SCHEMA be the more appropriate error code in 
this case?

Cheers,
Dave.
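(the retry loop DRH describes, sketched generically — Python for illustration; which error codes the application should treat as retryable is exactly the open question here, so the retryable set is a parameter, and the RuntimeError stands in for whatever error type your binding raises:)

```python
import time

def run_with_retry(step, retryable=("SQLITE_SCHEMA", "SQLITE_LOCKED"),
                   attempts=5, delay=0.01):
    """Re-run `step` while it fails with a retryable error code,
    up to `attempts` times; the last failure propagates."""
    for i in range(attempts):
        try:
            return step()
        except RuntimeError as exc:        # stand-in for an SQLite error
            if str(exc) not in retryable or i == attempts - 1:
                raise
            time.sleep(delay)              # let the other thread finish

# Illustration with a stub that fails twice with a schema error:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("SQLITE_SCHEMA")
    return "ok"
```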


-Original Message-
From: Roger Binns [mailto:rog...@rogerbinns.com]
Sent: Tue 10/20/2009 6:19 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Ticket 3810: SQLITE_ERROR on concurrent CREATE TEMP 
TRIGGER and sqlite3_close()
 
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Dave Toll wrote:
> Could you point me to the test case you referred to?

Look near the bottom of the ticket where it lists the various checkins.
[6955] is the test case.  You can examine the current source in case that
has changed at http://sqlite.org/src/dir

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkrd8TYACgkQmOOfHg372QQmLgCffJjKBUwjZ6uND78KbHe7fuWq
kGsAoM993+BlkjyBMeACJBYQapF0lQBV
=/txR
-END PGP SIGNATURE-




Re: [sqlite] Like do not use index as previous version

2009-10-21 Thread Griggs, Donald
 

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of ge...@iol.it
Sent: Wednesday, October 21, 2009 2:03 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] Like do not use index as previous version

Hi all,
it seems that in the latest versions of sqlite3 the LIKE clause stopped
using indexes; I created a new empty database with SQLite 3.6.13 and ran
these statements:

CREATE TABLE TEST
(TEXT_1 text PRIMARY KEY, TEXT_2 text, TEXT_3 text COLLATE NOCASE);
CREATE INDEX TEST_IDX_2 ON TEST (TEXT_2);
CREATE INDEX TEST_IDX_3 ON TEST (TEXT_3);

Running explain query plan on "select * from test where text_3 like
'x';", I have the following result:

0|0|TABLE test WITH INDEX TEST_IDX_3

And it's what I expected.

If I execute the same statements in SQLite 3.6.16 and 3.6.19, I have
this result:

0|0|TABLE test

So it's not using the index as in the 3.6.13 version.
The same happens if I try to use the operator GLOB with field text_2: in
SQLite 3.6.13 it uses the TEST_IDX_2 index but it's not used in SQLite
3.6.19.

Any suggestions?

==

Regarding suggestions:
   Since "like" and "glob" are intended for use with wildcards and
you're not using wildcards, why not use
 "where text_3 == 'x';"
instead?
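(Donald's suggestion is easy to check against whatever SQLite version you have — Python's stdlib sqlite3 shown; the exact plan text varies by version, but the index name should appear for the equality form:)

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE TEST
      (TEXT_1 text PRIMARY KEY, TEXT_2 text, TEXT_3 text COLLATE NOCASE);
    CREATE INDEX TEST_IDX_2 ON TEST (TEXT_2);
    CREATE INDEX TEST_IDX_3 ON TEST (TEXT_3);
""")

# Plain equality can always use the index; LIKE is only rewritten to an
# index range scan when the LIKE-optimization conditions hold.
plan = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM TEST WHERE TEXT_3 = 'x'").fetchall()
```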

 


Re: [sqlite] commit time

2009-10-21 Thread Tom Broadbent
very good.  i don't anticipate multiple writers so this should be pretty simple.

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of John Crenshaw
Sent: Wednesday, October 21, 2009 9:15 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Yes, you have to call sqlite3_enable_shared_cache before opening any
database connections, then execute "PRAGMA read_uncommitted = true;" on
each connection. Blocking can still happen in some situations, but you
can handle it as I described in my original reply.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 21, 2009 12:05 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time


it sounds like this is the feature you recommend using:

"A database connection in read-uncommitted mode _does not attempt to
obtain read-locks before reading_ from database tables as described
above. This can lead to inconsistent query results if another database
connection modifies a table while it is being read, but it also means
that a read-transaction opened by a connection in read-uncommitted mode
can neither block nor be blocked by any other connection."

this is precisely what i need.  thanks very much.


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org]
On Behalf Of John Crenshaw [johncrens...@priacta.com]
Sent: Tuesday, October 20, 2009 7:18 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Sounds like a great candidate for shared cache with PRAGMA
read_uncommitted = true.

If other threads may also need a write lock on that table, you should
handle SQLITE_LOCKED by incrementing a waiter count and calling
sqlite3_unlock_notify. The thread doing the inserting can check to see
if anybody is waiting (blocked) and yield by committing the current
transaction and waiting for the blocked thread to unblock. Be aware, you
should also close any open cursors before yielding, because open cursors
will prevent write locks and you'll waste time yielding for nothing.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Tuesday, October 20, 2009 8:05 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] commit time

i have a simple join table containing two ids from two other tables.  i
have an index on each of the ids in the join table.

CREATE TABLE ContentWordItem (word_id INT, item_id INT);

CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id);
// index to perform fast queries by word_id

CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id);
// index to perform fast deletes by item_id

i have a large amount of data to insert into this join table on a slow
embedded device.  i need to avoid locking this join table for more than
a second or two at a time so that i can make queries on this table.

so here's the question:  how do i insert small chunks of data into this
table w/o taking a hit each time i commit?

what i'm doing is:

* read a chunk of data from flat data file into vector of id
pairs

* begin transaction

* loop thru vector of id pairs binding and inserting

* commit transaction

* repeat until data is exhausted

i'm seeing that the reading, binding, and inserting is very fast (300
ms) but the commit is taking upwards of 3 seconds.  when i increase my
chunk size by a factor of 10 the insert doesn't appear to take 10x
longer but the commit still takes upwards of 3 seconds.  the point is
that the commit hit appears to be much greater than the insert hit but
doesn't appear to scale directly.

it appears that the commit is updating the indexes and taking a long
time.  is this a correct evaluation?

it also appears that the commit takes longer as the size of the table
grows (i.e. the index is getting bigger).  is this expected?

what i'm worried about is that by reducing the chunk size (to avoid
locking the db for a long time) i add a significant amount of time to
the insert process because the commits are costing several seconds.
however, locking the db for a long time is not desirable.

i'm also concerned about the commit time increasing over time as the
amount of data in the table increases.

is there a better approach?

thanks
tom


Re: [sqlite] commit time

2009-10-21 Thread Tom Broadbent
hmm.. okay. i'll have to refactor a bit (currently two separate processes).

this is still very helpful.  thanks.

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of John Crenshaw
Sent: Wednesday, October 21, 2009 9:19 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

My understanding is that the shared cache allows table level locking for
multiple threads in a single process, and can do so efficiently because
the threads all share the same memory space, but if multiple processes
attempt to access the database, they will each use the original (full
database lock) methods for concurrency. Therefore, if my understanding
is correct, the "elsewhere" is the location that describes the normal
database level locking.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 21, 2009 12:03 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time


reading up on shared cache mode and found this:

"The locking protocol used to arbitrate between multiple shared-caches
or regular database users is described _elsewhere_."

where is this described?


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org]
On Behalf Of John Crenshaw [johncrens...@priacta.com]
Sent: Tuesday, October 20, 2009 7:18 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Sounds like a great candidate for shared cache with PRAGMA
read_uncommitted = true.

If other threads may also need a write lock on that table, you should
handle SQLITE_LOCKED by incrementing a waiter count and calling
sqlite3_unlock_notify. The thread doing the inserting can check to see
if anybody is waiting (blocked) and yield by committing the current
transaction and waiting for the blocked thread to unblock. Be aware, you
should also close any open cursors before yielding, because open cursors
will prevent write locks and you'll waste time yielding for nothing.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Tuesday, October 20, 2009 8:05 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] commit time

i have a simple join table containing two ids from two other tables.  i
have an index on each of the ids in the join table.

CREATE TABLE ContentWordItem (word_id INT, item_id INT);

CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id);
// index to perform fast queries by word_id

CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id);
// index to perform fast deletes by item_id

i have a large amount of data to insert into this join table on a slow
embedded device.  i need to avoid locking this join table for more than
a second or two at a time so that i can make queries on this table.

so here's the question:  how do i insert small chunks of data into this
table w/o taking a hit each time i commit?

what i'm doing is:

* read a chunk of data from flat data file into vector of id
pairs

* begin transaction

* loop thru vector of id pairs binding and inserting

* commit transaction

* repeat until data is exhausted

i'm seeing that the reading, binding, and inserting is very fast (300
ms) but the commit is taking upwards of 3 seconds.  when i increase my
chunk size by a factor of 10 the insert doesn't appear to take 10x
longer but the commit still takes upwards of 3 seconds.  the point is
that the commit hit appears to be much greater than the insert hit but
doesn't appear to scale directly.

it appears that the commit is updating the indexes and taking a long
time.  is this a correct evaluation?

it also appears that the commit takes longer as the size of the table
grows (i.e. the index is getting bigger).  is this expected?

what i'm worried about is that by reducing the chunk size (to avoid
locking the db for a long time) i add a significant amount of time to
the insert process because the commits are costing several seconds.
however, locking the db for a long time is not desirable.

i'm also concerned about the commit time increasing over time as the
amount of data in the table increases.

is there a better approach?

thanks
tom


[sqlite] Like do not use index as previous version

2009-10-21 Thread ge...@iol.it
Hi all,
it seems that in the latest versions of sqlite3 the LIKE clause stopped
using indexes; I created a new empty database with SQLite 3.6.13 and ran
these statements:

CREATE TABLE TEST
(TEXT_1 text PRIMARY KEY, TEXT_2 text, TEXT_3 text COLLATE NOCASE);
CREATE INDEX TEST_IDX_2 ON TEST (TEXT_2);
CREATE INDEX TEST_IDX_3 ON TEST (TEXT_3);

Running explain query plan on "select * from test where text_3 like
'x';", I have the following result:

0|0|TABLE test WITH INDEX TEST_IDX_3

And it's what I expected.

If I execute the same statements in SQLite 3.6.16 and 3.6.19, I have
this result:

0|0|TABLE test

So it's not using the index as in the 3.6.13 version.
The same happens if I try to use the operator GLOB with field text_2: in
SQLite 3.6.13 it uses the TEST_IDX_2 index but it's not used in SQLite
3.6.19.

Any suggestions?



 





Re: [sqlite] SQLite DB Structure

2009-10-21 Thread Griggs, Donald
Regarding:
   
"I  NEED  HELP! I have SQLite3 DB (contacts from iPhone), it 
contains 1 record,  but when I open it with NOTEPAD.EXE, I see more than 1 
record.
I  need  to  repair  all  records  from  it"

С уважением,

I received your db (via private email) and the good news is that the 
PRAGMA INTEGRITY_CHECK
reports that it is a healthy database.

The *bad* news is that I suspect the data you are looking for was deleted.

From the database point of view, at bottom are the tables and the number of 
rows in each table.  I can provide you a full dump of these records and schema, 
but I suspect that's not what you are seeking.

I know of no utility to help you seek out data from deleted records whose data 
may nonetheless be still present in the database file.  Using something like 
NOTEPAD may actually be your best bet -- see what cyrilic strings you can find. 
  Also, John's suggestion to contact the application's support dept (Apple?) 
may help.

There's some great-looking documention on the sqlite file structure here:
http://www.sqlite.org/fileformat.html
but it's not for the faint-of-heart (i.e., not so easily digested).

I'm reminded of my old "databank" wristwatch.   When its battery died, I joked 
that I had to go get a whole new set of friends.  :-(


0   ABGroup
0   ABGroupChanges
0   ABGroupMembers
3   ABMultiValue
0   ABMultiValueEntry
8   ABMultiValueEntryKey
7   ABMultiValueLabel
3   ABPerson
0   ABPersonChanges
2   ABPersonMultiValueDeletes
3   ABPersonSearchKey
0   ABPhoneLastFour
0   ABRecent
3   ABStore
26  FirstSortSectionCount
29  LastSortSectionCount
16  _SqliteDatabaseProperties
2   sqlite_sequence



Re: [sqlite] Slow SELECTs in application

2009-10-21 Thread Simon Slavin

On 21 Oct 2009, at 2:47pm, Unabashed wrote:

> I'm using SQLite as DB in my application. My problem consists of two
> aspects. First, it works great, but on large data SELECTs are very  
> slow
> (10-20s!). Queries are in one transaction. My table structure is:
> CREATE TABLE mgWords (
>  id INTEGER PRIMARY KEY AUTOINCREMENT,
>  id_norm INTEGER,
>  word TEXT,
>  wform TEXT)
> It seems, that the problem is in sql. How to optimize table to have  
> fast
> selects?

Add the appropriate indexes to your database.

If you would like us to suggest an appropriate index, please supply  
examples of your SELECT commands.
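For example, if the slow SELECTs filter on `word` (an assumption, since the queries weren't shown), a single-column index turns the lookup from a full table scan into an index search; Python's stdlib sqlite3 for illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE mgWords (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    id_norm INTEGER, word TEXT, wform TEXT)""")
with db:
    db.executemany(
        "INSERT INTO mgWords (id_norm, word, wform) VALUES (?,?,?)",
        [(i, "w%d" % i, "f%d" % i) for i in range(1000)])

# The index makes WHERE word = ? an index search instead of a scan.
db.execute("CREATE INDEX mgWords_word ON mgWords(word)")
rows = db.execute("SELECT id_norm FROM mgWords WHERE word = 'w42'").fetchall()
```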

Simon.


Re: [sqlite] LAN and exclusive lock

2009-10-21 Thread Bruce Robertson
I'm addicted to amazingly powerful solutions done in minutes; using  
the one-stop-shopping Lowe's Hardware of data solutions has its own  
addictive powers.

Depends on how you value your time.

On Oct 21, 2009, at 9:58 AM, Jan wrote:

> thx bruce, but I am addicted to open-source.
>
> Bruce Robertson schrieb:
>> You might try Filemaker. That's what it was designed for. Make
>> everything so easy.
>>
>> On Oct 21, 2009, at 8:10 AM, Jan wrote:
>>
>>> Thank you John. Seems postgres might be a better choice. Although it
>>> is
>>> so nice to work with sqlite cause none of this user/administration
>>> "crap"
>>> is necessary.
>



Re: [sqlite] LAN and exclusive lock

2009-10-21 Thread Jan
thx bruce, but I am addicted to open-source.

Bruce Robertson schrieb:
> You might try Filemaker. That's what it was designed for. Make  
> everything so easy.
> 
> On Oct 21, 2009, at 8:10 AM, Jan wrote:
> 
>> Thank you John. Seems postgres might be a better choice. Although it  
>> is
>> so nice to work with sqlite cause none of this user/administration  
>> "crap"
>> is necessary.
> 
> 


Re: [sqlite] Slow SELECTs in application

2009-10-21 Thread John Crenshaw
It isn't just speed. That is probably the cause of the insert error. A
PRIMARY KEY column is implicitly UNIQUE and NOT NULL, but the insert doesn't
specify a value for id. Since it isn't aliased to rowid (and therefore
doesn't autoincrement), it would raise an error.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Keith Roberts
Sent: Wednesday, October 21, 2009 12:36 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Slow SELECTs in application

On Wed, 21 Oct 2009, Unabashed wrote:

> To: sqlite-users@sqlite.org
> From: Unabashed 
> Subject: [sqlite]  Slow SELECTs in application
> 
>
> Hello!
> I'm using SQLite as DB in my application. My problem consists of two
> aspects. First , it works great, but on large data SELECTs are very
slow
> (10-20s!). Queries are in one transaction. My table structure is:
> CREATE TABLE mgWords (
>  id INTEGER PRIMARY KEY AUTOINCREMENT,
>  id_norm INTEGER,
>  word TEXT,
>  wform TEXT)

You could try removing the AUTOINCREMENT constraint to 
speed things up a little.

All you need is:

id INTEGER PRIMARY KEY,

That will allow you to reference the rowid using the 'id' 
identifier. They both refer to the same thing.

Please see: http://www.sqlite.org/autoinc.html

for all the details.

Kind Regards,

Keith Roberts

-
Websites:
http://www.php-debuggers.net
http://www.karsites.net
http://www.raised-from-the-dead.org.uk

All email addresses are challenge-response protected with
TMDA [http://tmda.net]
-


Re: [sqlite] Slow SELECTs in application

2009-10-21 Thread Keith Roberts
On Wed, 21 Oct 2009, Unabashed wrote:

> To: sqlite-users@sqlite.org
> From: Unabashed 
> Subject: [sqlite]  Slow SELECTs in application
> 
>
> Hello!
> I'm using SQLite as DB in my application. My problem consists of two
> aspects. First , it works great, but on large data SELECTs are very slow
> (10-20s!). Queries are in one transaction. My table structure is:
> CREATE TABLE mgWords (
>  id INTEGER PRIMARY KEY AUTOINCREMENT,
>  id_norm INTEGER,
>  word TEXT,
>  wform TEXT)

You could try removing the AUTOINCREMENT constraint to 
speed things up a little.

All you need is:

id INTEGER PRIMARY KEY,

That will allow you to reference the rowid using the 'id' 
identifier. They both refer to the same thing.

Please see: http://www.sqlite.org/autoinc.html

for all the details.
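A quick way to see the equivalence is a minimal sketch with Python's stdlib sqlite3 module (the table name follows this thread; the single inserted row is illustrative):

```python
import sqlite3

# Minimal sketch: a plain "id INTEGER PRIMARY KEY" column is an alias for
# the internal rowid, so SQLite assigns it automatically on insert and
# both names return the same value.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mgWords (id INTEGER PRIMARY KEY, word TEXT)")
conn.execute("INSERT INTO mgWords (word) VALUES ('aword')")  # no id supplied
row = conn.execute("SELECT id, rowid FROM mgWords").fetchone()
print(row)  # (1, 1) -- id and rowid are the same value
```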

Kind Regards,

Keith Roberts

-
Websites:
http://www.php-debuggers.net
http://www.karsites.net
http://www.raised-from-the-dead.org.uk

All email addresses are challenge-response protected with
TMDA [http://tmda.net]
-


Re: [sqlite] Slow SELECTs in application

2009-10-21 Thread John Crenshaw
Someone correct me if I'm wrong, but I don't think that UNIQUE
(id_norm,word,wform) is going to have the desired result. Won't that
create a single tricolumn unique index? I suspect this table needs a
separate index for each. Just put the UNIQUE keyword (with no arguments)
after the type on each column.

Secondly, I think the problem is AUTOINCREMENT. SQLite will alias id to
the rowid (the internal autoincrementing id) if and only if the type is
"INTEGER PRIMARY KEY". It is very picky about this. AUTOINCREMENT is
probably messing it up (and meaningless in SQLite anyway).

If that doesn't help, we probably need to know the error.
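For what it's worth, whether the UNIQUE clause produces one composite index can be checked directly from the schema; a small sketch (table from the original post, AUTOINCREMENT omitted for brevity):

```python
import sqlite3

# Sketch: UNIQUE (id_norm, word, wform) is backed by ONE composite unique
# index over all three columns, not by three single-column indexes.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE mgWords (
    id INTEGER PRIMARY KEY,
    id_norm INTEGER,
    word TEXT,
    wform TEXT,
    UNIQUE (id_norm, word, wform))""")
indexes = conn.execute("PRAGMA index_list(mgWords)").fetchall()
print(len(indexes))  # 1 -- a single automatic index backs the constraint
columns = [row[2] for row in
           conn.execute("PRAGMA index_info(%r)" % str(indexes[0][1]))]
print(columns)  # ['id_norm', 'word', 'wform']
```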

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Unabashed
Sent: Wednesday, October 21, 2009 9:47 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Slow SELECTs in application


Hello!
I'm using SQLite as the DB in my application. My problem consists of two
aspects. First, it works great, but on large data sets SELECTs are very slow
(10-20s!). Queries are in one transaction. My table structure is:
CREATE TABLE mgWords (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  id_norm INTEGER,
  word TEXT,
  wform TEXT)
It seems that the problem is in the SQL. How can I optimize the table to have
fast SELECTs?
The second part of my trouble is in using unique values. When I'm trying to
change the structure to
CREATE TABLE mgWords (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  id_norm INTEGER,
  word TEXT,
  wform TEXT,
  UNIQUE (id_norm,word,wform))
and use
INSERT INTO mgWords (id_norm,word,wform) VALUES (0,'aword','awform')
it causes an error.
I'm not good at SQL, so I'll be very glad to receive a professional
answer.
Sorry, please, for my English - this is not my native language. Thank
you for
reading it :)
-- 
View this message in context:
http://www.nabble.com/Slow-SELECTs-in-application-tp25992880p25992880.ht
ml
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] commit time

2009-10-21 Thread John Crenshaw
My understanding is that the shared cache allows table level locking for
multiple threads in a single process, and can do so efficiently because
the threads all share the same memory space, but if multiple processes
attempt to access the database, they will each use the original (full
database lock) methods for concurrency. Therefore, if my understanding
is correct, the "elsewhere" is the location that describes the normal
database level locking.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 21, 2009 12:03 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time


reading up on shared cache mode and found this:

"The locking protocol used to arbitrate between multiple shared-caches
or regular database users is described _elsewhere_."

where is this described?


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org]
On Behalf Of John Crenshaw [johncrens...@priacta.com]
Sent: Tuesday, October 20, 2009 7:18 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Sounds like a great candidate for shared cache with PRAGMA
read_uncommitted = true.

If other threads may also need a write lock on that table, you should
handle SQLITE_LOCKED by incrementing a waiter count and calling
sqlite3_unlock_notify. The thread doing the inserting can check to see
if anybody is waiting (blocked) and yield by committing the current
transaction and waiting for the blocked thread to unblock. Be aware, you
should also close any open cursors before yielding, because open cursors
will prevent write locks and you'll waste time yielding for nothing.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Tuesday, October 20, 2009 8:05 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] commit time

i have a simple join table containing two ids from two other tables.  i
have an index on each of the ids in the join table.

CREATE TABLE ContentWordItem (word_id INT, item_id INT);

CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id);
// index to perform fast queries by word_id

CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id);
// index to perform fast deletes by item_id

i have a large amount of data to insert into this join table on a slow
embedded device.  i need to avoid locking this join table for more than
a second or two at a time so that i can make queries on this table.

so here's the question:  how do i insert small chunks of data into this
table w/o taking a hit each time i commit?

what i'm doing is:

* read a chunk of data from flat data file into vector of id
pairs

* begin transaction

* loop thru vector of id pairs binding and inserting

* commit transaction

* repeat until data is exhausted
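The loop above can be sketched roughly like this with Python's stdlib sqlite3 module (the chunk size and the generated id pairs are placeholders; the schema is the one from this post):

```python
import sqlite3

# Rough sketch of the chunked-insert loop described above. Each chunk is
# wrapped in its own transaction, so the write lock is held only briefly
# between commits. CHUNK and the generated pairs are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ContentWordItem (word_id INT, item_id INT)")
conn.execute("CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id)")
conn.execute("CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id)")

pairs = [(w, i) for w in range(100) for i in range(10)]  # stand-in for the flat file
CHUNK = 250
for start in range(0, len(pairs), CHUNK):
    with conn:  # BEGIN ... COMMIT around each chunk
        conn.executemany("INSERT INTO ContentWordItem VALUES (?, ?)",
                         pairs[start:start + CHUNK])

total = conn.execute("SELECT COUNT(*) FROM ContentWordItem").fetchone()[0]
print(total)  # 1000
```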

i'm seeing that the reading, binding, and inserting is very fast (300
ms) but the commit is taking upwards of 3 seconds.  when i increase my
chunk size by a factor of 10 the insert doesn't appear to take 10x
longer but the commit still takes upwards of 3 seconds.  the point is
that the commit hit appears to be much greater than the insert hit but
doesn't appear to scale directly.

it appears that the commit is updating the indexes and taking a long
time.  is this a correct evaluation?

it also appears that the commit takes longer as the size of the table
grows (i.e. the index is getting bigger).  is this expected?

what i'm worried about is that by reducing the chunk size (to avoid
locking the db for a long time) i add a significant amount of time to
the insert process because the commits are costing several seconds.
however, locking the db for a long time is not desirable.

i'm also concerned about the commit time increasing over time as the
amount of data in the table increases.

is there a better approach?

thanks
tom

__
This email has been scanned by the MessageLabs Email Security System.
For more information please visit http://www.messagelabs.com/email
__


Re: [sqlite] commit time

2009-10-21 Thread John Crenshaw
Yes, you have to call sqlite3_enable_shared_cache before opening any
database connections, then execute "PRAGMA read_uncommitted = true;" on
each connection. Blocking can still happen in some situations, but you
can handle it as I described in my original reply.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Wednesday, October 21, 2009 12:05 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time


it sounds like this is the feature you recommend using:

"A database connection in read-uncommitted mode _does not attempt to
obtain read-locks before reading_ from database tables as described
above. This can lead to inconsistent query results if another database
connection modifies a table while it is being read, but it also means
that a read-transaction opened by a connection in read-uncommitted mode
can neither block nor be blocked by any other connection."

this is precisely what i need.  thanks very much.


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org]
On Behalf Of John Crenshaw [johncrens...@priacta.com]
Sent: Tuesday, October 20, 2009 7:18 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Sounds like a great candidate for shared cache with PRAGMA
read_uncommitted = true.

If other threads may also need a write lock on that table, you should
handle SQLITE_LOCKED by incrementing a waiter count and calling
sqlite3_unlock_notify. The thread doing the inserting can check to see
if anybody is waiting (blocked) and yield by committing the current
transaction and waiting for the blocked thread to unblock. Be aware, you
should also close any open cursors before yielding, because open cursors
will prevent write locks and you'll waste time yielding for nothing.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Tuesday, October 20, 2009 8:05 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] commit time

i have a simple join table containing two ids from two other tables.  i
have an index on each of the ids in the join table.

CREATE TABLE ContentWordItem (word_id INT, item_id INT);

CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id);
// index to perform fast queries by word_id

CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id);
// index to perform fast deletes by item_id

i have a large amount of data to insert into this join table on a slow
embedded device.  i need to avoid locking this join table for more than
a second or two at a time so that i can make queries on this table.

so here's the question:  how do i insert small chunks of data into this
table w/o taking a hit each time i commit?

what i'm doing is:

* read a chunk of data from flat data file into vector of id
pairs

* begin transaction

* loop thru vector of id pairs binding and inserting

* commit transaction

* repeat until data is exhausted

i'm seeing that the reading, binding, and inserting is very fast (300
ms) but the commit is taking upwards of 3 seconds.  when i increase my
chunk size by a factor of 10 the insert doesn't appear to take 10x
longer but the commit still takes upwards of 3 seconds.  the point is
that the commit hit appears to be much greater than the insert hit but
doesn't appear to scale directly.

it appears that the commit is updating the indexes and taking a long
time.  is this a correct evaluation?

it also appears that the commit takes longer as the size of the table
grows (i.e. the index is getting bigger).  is this expected?

what i'm worried about is that by reducing the chunk size (to avoid
locking the db for a long time) i add a significant amount of time to
the insert process because the commits are costing several seconds.
however, locking the db for a long time is not desirable.

i'm also concerned about the commit time increasing over time as the
amount of data in the table increases.

is there a better approach?

thanks
tom


Re: [sqlite] Problem about write data into the DB

2009-10-21 Thread John Crenshaw
It is likely that the file can't be opened for a very predictable
reason. For example, perhaps the specified path doesn't exist. (A common
variation of this would be a hard coded string with single backslashes,
most languages require you to escape backslashes in strings.) Perhaps
the file is read only, already locked by another process, or has
insufficient permissions. Most likely, this is going to be one of the
regular reasons for failing to open a file.
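In Python's sqlite3 this failure mode surfaces as exactly the error the original poster saw; a small sketch (the path below is deliberately nonexistent):

```python
import sqlite3

# Sketch: connecting through a directory that does not exist (or that the
# process user, e.g. the Apache account, cannot write to) raises the same
# "unable to open database file" error reported in the original post.
try:
    sqlite3.connect("/no/such/dir/app.db")
    opened = True
except sqlite3.OperationalError as exc:
    opened = False
    message = str(exc)
    print(message)  # unable to open database file
```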

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Rich Shepard
Sent: Wednesday, October 21, 2009 11:52 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Problem about write data into the DB

On Wed, 21 Oct 2009, 邓超 wrote:

>  I deployed a django app on my laptop, the whole environment is like
this:
> the OS is UBUNTU904, the web server is Apache, and the database is
> sqlite3. The deployment is success, but when I try to write some data
into
> the database, I get the HTTP 500 error. And I check the error log, it
> shows "*OperationalError: unable to open database file*". What does
this
> error mean? If there are some operation permission need configure?

   I'd look at the django code to see where it opens the database and
what
happens to inform the user if that attempt fails. I know nothing about
django so I cannot suggest where you should look.

Rich


Re: [sqlite] commit time

2009-10-21 Thread Tom Broadbent

it sounds like this is the feature you recommend using:

"A database connection in read-uncommitted mode _does not attempt to obtain 
read-locks before reading_ from database tables as described above. This can 
lead to inconsistent query results if another database connection modifies a 
table while it is being read, but it also means that a read-transaction opened 
by a connection in read-uncommitted mode can neither block nor be blocked by 
any other connection."

this is precisely what i need.  thanks very much.


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] On 
Behalf Of John Crenshaw [johncrens...@priacta.com]
Sent: Tuesday, October 20, 2009 7:18 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Sounds like a great candidate for shared cache with PRAGMA
read_uncommitted = true.

If other threads may also need a write lock on that table, you should
handle SQLITE_LOCKED by incrementing a waiter count and calling
sqlite3_unlock_notify. The thread doing the inserting can check to see
if anybody is waiting (blocked) and yield by committing the current
transaction and waiting for the blocked thread to unblock. Be aware, you
should also close any open cursors before yielding, because open cursors
will prevent write locks and you'll waste time yielding for nothing.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Tuesday, October 20, 2009 8:05 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] commit time

i have a simple join table containing two ids from two other tables.  i
have an index on each of the ids in the join table.

CREATE TABLE ContentWordItem (word_id INT, item_id INT);

CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id);
// index to perform fast queries by word_id

CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id);
// index to perform fast deletes by item_id

i have a large amount of data to insert into this join table on a slow
embedded device.  i need to avoid locking this join table for more than
a second or two at a time so that i can make queries on this table.

so here's the question:  how do i insert small chunks of data into this
table w/o taking a hit each time i commit?

what i'm doing is:

* read a chunk of data from flat data file into vector of id
pairs

* begin transaction

* loop thru vector of id pairs binding and inserting

* commit transaction

* repeat until data is exhausted

i'm seeing that the reading, binding, and inserting is very fast (300
ms) but the commit is taking upwards of 3 seconds.  when i increase my
chunk size by a factor of 10 the insert doesn't appear to take 10x
longer but the commit still takes upwards of 3 seconds.  the point is
that the commit hit appears to be much greater than the insert hit but
doesn't appear to scale directly.

it appears that the commit is updating the indexes and taking a long
time.  is this a correct evaluation?

it also appears that the commit takes longer as the size of the table
grows (i.e. the index is getting bigger).  is this expected?

what i'm worried about is that by reducing the chunk size (to avoid
locking the db for a long time) i add a significant amount of time to
the insert process because the commits are costing several seconds.
however, locking the db for a long time is not desirable.

i'm also concerned about the commit time increasing over time as the
amount of data in the table increases.

is there a better approach?

thanks
tom



Re: [sqlite] commit time

2009-10-21 Thread Tom Broadbent

reading up on shared cache mode and found this:

"The locking protocol used to arbitrate between multiple shared-caches or 
regular database users is described _elsewhere_."

where is this described?


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] On 
Behalf Of John Crenshaw [johncrens...@priacta.com]
Sent: Tuesday, October 20, 2009 7:18 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Sounds like a great candidate for shared cache with PRAGMA
read_uncommitted = true.

If other threads may also need a write lock on that table, you should
handle SQLITE_LOCKED by incrementing a waiter count and calling
sqlite3_unlock_notify. The thread doing the inserting can check to see
if anybody is waiting (blocked) and yield by committing the current
transaction and waiting for the blocked thread to unblock. Be aware, you
should also close any open cursors before yielding, because open cursors
will prevent write locks and you'll waste time yielding for nothing.

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
Sent: Tuesday, October 20, 2009 8:05 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] commit time

i have a simple join table containing two ids from two other tables.  i
have an index on each of the ids in the join table.

CREATE TABLE ContentWordItem (word_id INT, item_id INT);

CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id);
// index to perform fast queries by word_id

CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id);
// index to perform fast deletes by item_id

i have a large amount of data to insert into this join table on a slow
embedded device.  i need to avoid locking this join table for more than
a second or two at a time so that i can make queries on this table.

so here's the question:  how do i insert small chunks of data into this
table w/o taking a hit each time i commit?

what i'm doing is:

* read a chunk of data from flat data file into vector of id
pairs

* begin transaction

* loop thru vector of id pairs binding and inserting

* commit transaction

* repeat until data is exhausted

i'm seeing that the reading, binding, and inserting is very fast (300
ms) but the commit is taking upwards of 3 seconds.  when i increase my
chunk size by a factor of 10 the insert doesn't appear to take 10x
longer but the commit still takes upwards of 3 seconds.  the point is
that the commit hit appears to be much greater than the insert hit but
doesn't appear to scale directly.

it appears that the commit is updating the indexes and taking a long
time.  is this a correct evaluation?

it also appears that the commit takes longer as the size of the table
grows (i.e. the index is getting bigger).  is this expected?

what i'm worried about is that by reducing the chunk size (to avoid
locking the db for a long time) i add a significant amount of time to
the insert process because the commits are costing several seconds.
however, locking the db for a long time is not desirable.

i'm also concerned about the commit time increasing over time as the
amount of data in the table increases.

is there a better approach?

thanks
tom



[sqlite] Mac OS X Automator works on SQLite db

2009-10-21 Thread P Kishor
fascinating... one can feed it a file of commands, or even a string of
commands, and the "Apply SQL" action will run against a specified or a
newly created SQLite database and format the output as plain text,
HTML or CSV.


-- 
Puneet Kishor http://www.punkish.org
Carbon Model http://carbonmodel.org
Charter Member, Open Source Geospatial Foundation http://www.osgeo.org
Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor
Nelson Institute, UW-Madison http://www.nelson.wisc.edu
---
Assertions are politics; backing up assertions with evidence is science
===
Sent from Madison, WI, United States


Re: [sqlite] Problem about write data into the DB

2009-10-21 Thread Rich Shepard
On Wed, 21 Oct 2009, 邓超 wrote:

>  I deployed a django app on my laptop, the whole environment is like this:
> the OS is UBUNTU904, the web server is Apache, and the database is
> sqlite3. The deployment is success, but when I try to write some data into
> the database, I get the HTTP 500 error. And I check the error log, it
> shows "*OperationalError: unable to open database file*". What does this
> error mean? If there are some operation permission need configure?

   I'd look at the django code to see where it opens the database and what
happens to inform the user if that attempt fails. I know nothing about
django so I cannot suggest where you should look.

Rich


[sqlite] Problem about write data into the DB

2009-10-21 Thread 邓超
Hi sqlite-users,
  I deployed a django app on my laptop; the whole environment is like this:
the OS is Ubuntu 9.04, the web server is Apache, and the database is sqlite3.
The deployment was successful, but when I try to write some data into the
database, I get an HTTP 500 error. When I check the error log, it shows
"*OperationalError: unable to open database file*". What does this error mean?
Are there some operation permissions that need to be configured?

-- 
Deng Chao


Re: [sqlite] LAN and exclusive lock

2009-10-21 Thread Jan
Thank you John. Seems postgres might be a better choice. Although it is 
so nice to work with sqlite because none of this user/administration "crap" 
is necessary.

John Elrick schrieb:
> Jan wrote:
>> Hi,
>>
>> Although I read in a recent post by drh that it is not recommended to 
>> use sqlite in a local network I would like to ask if the following 
>> approach would work:
>>
>> A database in the local network needs to be accessed by about 20 people. 
>> I suppose the max. number at the same time will be 4-5. Only one is able 
>> to write to the database at the same time. The one who wants to write to 
>> the database acquires an exclusive lock with "PRAGMA 
>> locking_mode=EXCLUSIVE" if locking_mode is currently NORMAL. AFAIR all 
>> others should then still be able to read, but not to write.
>>
>> Is that correct and more or less safe? Does anyone have experience with 
>> sqlite on a networkdrive?
>>   
> 
> In my experience, adding multi-user capability to an application 
> increases the complexity by at least an order of magnitude.  If you have 
> 20 people who need access there are two options (IMO):
> 
> 1.  Web based application.  In this case the database itself has one and 
> only one consumer, the web server.  If you can keep the access to a 
> single thread, you have multi-user with no greater complexity than 
> single user -- albeit the user will have to stare at a web browser if 
> some long running process interferes.
> 2.  Client/Server.  Again, the database has only one consumer, the local 
> server, which manages all the complex details.  MySQL, Firebird, and 
> PostgreSQL are open source/free/low cost examples of this type of 
> system, however, the tricks that will work for a local database (lists 
> and grids are a big offender here) will NOT work effectively in a C/S 
> environment.
> 
> 
> John


Re: [sqlite] LAN and exclusive lock

2009-10-21 Thread John Elrick
Jan wrote:
> Hi,
>
> Although I read in a recent post by drh that it is not recommended to 
> use sqlite in a local network I would like to ask if the following 
> approach would work:
>
> A database in the local network needs to be accessed by about 20 people. 
> I suppose the max. number at the same time will be 4-5. Only one is able 
> to write to the database at the same time. The one who wants to write to 
> the database acquires an exclusive lock with "PRAGMA 
> locking_mode=EXCLUSIVE" if locking_mode is currently NORMAL. AFAIR all 
> others should then still be able to read, but not to write.
>
> Is that correct and more or less safe? Does anyone have experience with 
> sqlite on a networkdrive?
>   

In my experience, adding multi-user capability to an application 
increases the complexity by at least an order of magnitude.  If you have 
20 people who need access there are two options (IMO):

1.  Web based application.  In this case the database itself has one and 
only one consumer, the web server.  If you can keep the access to a 
single thread, you have multi-user with no greater complexity than 
single user -- albeit the user will have to stare at a web browser if 
some long running process interferes.
2.  Client/Server.  Again, the database has only one consumer, the local 
server, which manages all the complex details.  MySQL, Firebird, and 
PostgreSQL are open source/free/low cost examples of this type of 
system, however, the tricks that will work for a local database (lists 
and grids are a big offender here) will NOT work effectively in a C/S 
environment.


John


Re: [sqlite] Slow SELECTs in application

2009-10-21 Thread O'Neill, Owen


Really does depend on the query (sql) you are running.

To investigate start by looking at the explain plan
http://www.sqlite.org/lang_explain.html
http://www.razorsql.com/docs/sqlite_explain_plan.html

Classic "slow query" problems are table scans - where the engine has to
scan the entire table to get the data you want.
This can be improved by adding indexes on the offending columns - at the
cost of slower insert and update speeds.
Other factors can include whether temp files are being written to ram or
disc
http://www.sqlite.org/tempfiles.html

then after that you're into things like cache sizes.

All the above is quite a simplistic overview - there are many more
devils and tradeoffs lurking in the details when it comes to performance
tuning.
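As a small illustration of the table-scan point (Python sketch; the table is from the "Slow SELECTs" post and the index name is made up; exact plan wording varies between SQLite versions):

```python
import sqlite3

# Sketch: EXPLAIN QUERY PLAN reports a full-table scan until an index on
# the filtered column exists, after which the planner switches to an
# index search.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mgWords (id INTEGER PRIMARY KEY, word TEXT)")

query = "EXPLAIN QUERY PLAN SELECT * FROM mgWords WHERE word = 'aword'"
before = conn.execute(query).fetchone()[-1]
print(before)   # e.g. "SCAN mgWords" (full table scan)

conn.execute("CREATE INDEX idx_word ON mgWords(word)")
after = conn.execute(query).fetchone()[-1]
print(after)    # e.g. "SEARCH mgWords USING INDEX idx_word (word=?)"
```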

Cheers.
Owen.



-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Unabashed
Sent: Wednesday, October 21, 2009 2:47 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] Slow SELECTs in application


Hello!
I'm using SQLite as the DB in my application. My problem consists of two
aspects. First, it works great, but on large data sets SELECTs are very slow
(10-20s!). Queries are in one transaction. My table structure is:
CREATE TABLE mgWords (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  id_norm INTEGER,
  word TEXT,
  wform TEXT)
It seems the problem is in the SQL. How can I optimize the table for
fast SELECTs?
The second part of my trouble is with unique values. When I try to
change the structure to
CREATE TABLE mgWords (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  id_norm INTEGER,
  word TEXT,
  wform TEXT,
  UNIQUE (id_norm,word,wform))
and use
INSERT INTO mgWords (id_norm,word,wform) VALUES (0,'aword','awform')
it causes an error.
I'm not good at SQL, so I'll be very glad to receive a professional
answer.
Sorry for my English; it is not my native language. Thank you for
reading it :)
-- 
View this message in context:
http://www.nabble.com/Slow-SELECTs-in-application-tp25992880p25992880.ht
ml
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Slow SELECTs in application

2009-10-21 Thread Pavel Ivanov
> It seems, that the problem is in sql. How to optimize table to have fast
> selects?

Different SQL queries require different things to be as fast as
possible. There's no single magic pill for all of them.

> INSERT INTO mgWords (id_norm,word,wform) VALUES (0,'aword','awform')
> it clauses error.

Which means you already have a row with these id_norm, word and wform
values. What's the problem?
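The failure, and the usual way around it, can be sketched with the
schema from the question (Python's sqlite3 module used for
illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE mgWords (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  id_norm INTEGER, word TEXT, wform TEXT,
  UNIQUE (id_norm, word, wform))""")

con.execute("INSERT INTO mgWords (id_norm,word,wform) "
            "VALUES (0,'aword','awform')")
try:
    # A second identical insert violates the UNIQUE constraint.
    con.execute("INSERT INTO mgWords (id_norm,word,wform) "
                "VALUES (0,'aword','awform')")
except sqlite3.IntegrityError as e:
    print("constraint violated:", e)

# INSERT OR IGNORE silently skips duplicates instead of raising.
con.execute("INSERT OR IGNORE INTO mgWords (id_norm,word,wform) "
            "VALUES (0,'aword','awform')")
print(con.execute("SELECT count(*) FROM mgWords").fetchone()[0])  # -> 1
```

So the error is the constraint doing its job; if duplicate rows should
simply be skipped, INSERT OR IGNORE is the standard SQLite idiom.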


Pavel

On Wed, Oct 21, 2009 at 9:47 AM, Unabashed  wrote:
>
> Hello!
> I'm using SQLite as DB in my application. My problem consists of two
> aspects. First , it works great, but on large data SELECTs are very slow
> (10-20s!). Queries are in one transaction. My table structure is:
> CREATE TABLE mgWords (
>  id INTEGER PRIMARY KEY AUTOINCREMENT,
>  id_norm INTEGER,
>  word TEXT,
>  wform TEXT)
> It seems, that the problem is in sql. How to optimize table to have fast
> selects?
> Second part of my trouble is in using unique values. When i'm trying to
> change structure to
> CREATE TABLE mgWords (
>  id INTEGER PRIMARY KEY AUTOINCREMENT,
>  id_norm INTEGER,
>  word TEXT,
>  wform TEXT,
>  UNIQUE (id_norm,word,wform))
> and use
> INSERT INTO mgWords (id_norm,word,wform) VALUES (0,'aword','awform')
> it clauses error.
> I'm not good in sql, so I'll be very glad to receive a professional answer.
> Sorry, please, for my English - this is not my native language. Thank you fo
> reading it :)
> --
> View this message in context: 
> http://www.nabble.com/Slow-SELECTs-in-application-tp25992880p25992880.html
> Sent from the SQLite mailing list archive at Nabble.com.
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-21 Thread Ron Arts
Pavel Ivanov wrote:
>> Currently this means adding
>> the new columns to my C-structures, writing access functions, and
>> recompiling. I don't want to do that, because this means my appl *must*
>> be replaced on every database change, and I'd like to be able to
>> run different versions of it in the wild. I was hoping to make it
>> more dynamic and adaptive to database layout changes.
> 
> Sorry, Ron, but I don't get it. You're saying that adding more columns
> means adding more fields into your structure and making your
> application to know about those fields, and that means you need to
> recompile and it's a bad thing. But if you're working with SQLite's
> in-memory database then you have to create all tables in that database
> at startup, so you need an SQL statement in your application
> containing all columns you use, so again adding a column you need to
> recompile, right? Or if you generate your CREATE TABLE statement
> dynamically from postgresql then how does your code know about new
> fields added? How does it work with it? Are your newly added columns
> used only from sql coming directly from clients?
> Maybe you just need to re-think your database schema so that you need
> only one select statement per network request and so your requirements
> to sql engine speed would be lowered by the order of magnitude?
> 

Pavel,

You are right, and I'll look into rewriting a bigger part of our application.

Thanks,
Ron


> Pavel
> 
> On Sun, Oct 18, 2009 at 2:23 PM, Ron Arts  wrote:
>>> P Kishor wrote:
>>> On Sun, Oct 18, 2009 at 10:37 AM, Ron Arts  wrote:
 Very true Simon,

 this has been the fastest way so far and I can do around
 35 selects/second this way, using prepared statements
 (on my machine at least), but I need more speed.

 That's why I want to skip the SQL processing entirely
 and write a C function that reaches directly into the
 internal memory structures to gets my record from there.
>>> I might have missed the discussion, but... why don't you ditch SQLite
>>> and use something like Berkeley DB? Sounds to me you need a hash db
>>> instead of an rdbms, especially since you have no need for SQL.
>>>
>> Well, that's what I use at the moment (I use glib2 g_hash_table()),
>> but now the requirement to accept SQL queries for that database
>> has come up. And I don't want to write my own SQL parser..
>>
>> Next I tried to add a virtual table driver to SQLite for my database.
>> That worked, but because the application is evolving, columns
>> get added/changed on a regular basis. Currently this means adding
>> the new columns to my C-structures, writing access functions, and
>> recompiling. I don't want to do that, because this means my appl *must*
>> be replaced on every database change, and I'd like to be able to
>> run different versions of it in the wild. I was hoping to make it
>> more dynamic and adaptive to database layout changes.
>>
>> Thanks,
>> Ron
>>
>>
 thanks,
 Ron

 Simon Slavin wrote:
> On 18 Oct 2009, at 8:37am, Ron Arts wrote:
>
>> Is there a way to bypass the virtual machine altogether and reach
>> directly
>> into the btree and just retrieve one record by it's oid (primary
>> integer key),
>> and return it in a form that would allow taking out the column
>> values by name?
> The primary integer key column can always be referred to as the
> special name ROWID, even if you have assigned it a column name of your
> own.  So you can do
>
> SELECT ROWID,myCol1,myCol2 FROM myTable
>
> as long as you don't explicitly declare a primary integer key column
> and then change the values in it.
>
> Simon.
>>>
>>>
>>>
>>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] LAN and exclusive lock

2009-10-21 Thread Jan
Pavel Ivanov wrote:
> I'm not sure you want to do that.

No, actually not. Thanks for the clarification.

jan


Re: [sqlite] Persistence in SQLITE3

2009-10-21 Thread Pavel Ivanov
Are you sure you ran it with a real database file, not with an
in-memory or temporary database? A temporary database is used if you
run the sqlite3 command-line tool without any file name given on the
command line.
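A minimal illustration of the difference (the file name is invented),
using Python's sqlite3 module:

```python
import os
import sqlite3
import tempfile

# Temporary/in-memory database: contents vanish with the connection.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t(x)")
mem.close()  # the table is gone now

# File-backed database: contents survive reconnects, and therefore
# restarts of the program.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t(x)")
con.execute("INSERT INTO t VALUES (1)")
con.commit()
con.close()

con = sqlite3.connect(path)
print(con.execute("SELECT count(*) FROM t").fetchone()[0])  # prints 1
```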

Pavel

On Wed, Oct 21, 2009 at 1:32 AM, Saurabh Agarwal  wrote:
> Hi, i am facing a problem, when I download and install the sqlite3.6.18 and
> compile it and then execute it, it works fine.. But if I quit it and again
> restart it then all the previous tables are lost..
>
> Can Someone help me..
>
> Saurabh Agarwal
> BE(Hons) Computer Science
> BITS,Pilani
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] LAN and exclusive lock

2009-10-21 Thread Pavel Ivanov
Disregarding the network drive issue, "PRAGMA locking_mode=EXCLUSIVE"
means that once your writer acquires a PENDING or EXCLUSIVE lock,
nobody else will be able to read the database until the writer closes
its connection. In other words, without this pragma and with relatively
small transactions in the writer, nobody can read the database only
during the brief periods while the writer commits a transaction. But
with this pragma you will shut out all readers after the first
commit. I'm not sure you want to do that.
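A sketch of that effect (the file name is invented), using Python's
sqlite3 module; the second connection cannot read even though the
writer has already committed:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA locking_mode=EXCLUSIVE")
writer.execute("CREATE TABLE t(x)")   # first write takes the lock...
writer.commit()                       # ...and EXCLUSIVE mode keeps it

reader = sqlite3.connect(path, timeout=0.1)
try:
    reader.execute("SELECT * FROM t").fetchall()
except sqlite3.OperationalError as e:
    print("reader shut out:", e)  # typically "database is locked"
```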

Pavel

On Wed, Oct 21, 2009 at 3:44 AM, Jan  wrote:
> Hi,
>
> Although I read in a recent post by drh that it is not recommended to
> use sqlite in a local network I would like to ask if the following
> approach would work:
>
> A database in the local network needs to be accessed by about 20 people.
> I suppose the max. number at the same time will be 4-5. Only one is able
> to write to the database at the same time. The one who wants to write to
> the database acquires an exclusive look with "PRAGMA
> locking_mode=EXCLUSIVE" if locking_mode is currently NORMAL. AFAIR all
> others should then still be able to read, but not to write.
>
> Is that correct and more or less save? Does anyone have experience with
> sqlite on a networkdrive?
>
> Thank you
> Jan
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] SQLite DB Structure

2009-10-21 Thread 1120.free
Hi. I'm from Belarus. Sorry for my English.

I NEED HELP! I have an SQLite3 DB (contacts from an iPhone). It
contains 1 record, but when I open it with NOTEPAD.EXE, I see more than
1 record. I need to recover all the records from it. Maybe this helps:
when I use the VACUUM command the file size decreases a lot. I attached
the file that I need to repair. If you can, please extract the lost
data from it. It's VERY IMPORTANT for me. I don't know who else can
help me.


-- 
Best regards,
 1120.free  mailto:1120.f...@gmail.com




[sqlite] Persistence in SQLITE3

2009-10-21 Thread Saurabh Agarwal
Hi, I am facing a problem. When I download sqlite 3.6.18, compile it,
and then execute it, it works fine. But if I quit it and then restart
it, all the previous tables are lost.

Can someone help me?

Saurabh Agarwal
BE(Hons) Computer Science
BITS,Pilani


[sqlite] LAN and exclusive lock

2009-10-21 Thread Jan
Hi,

Although I read in a recent post by drh that it is not recommended to 
use sqlite in a local network I would like to ask if the following 
approach would work:

A database on the local network needs to be accessed by about 20
people. I suppose the max number at the same time will be 4-5. Only
one is able to write to the database at a time. The one who wants to
write to the database acquires an exclusive lock with "PRAGMA
locking_mode=EXCLUSIVE" if locking_mode is currently NORMAL. AFAIR all
others should then still be able to read, but not to write.

Is that correct and more or less safe? Does anyone have experience with
sqlite on a network drive?

Thank you
Jan


Re: [sqlite] index for a group by

2009-10-21 Thread John Crenshaw
Actually, I thought exactly what you said when I saw the question. When
I saw your answer, though, I realized I'd been wrong: there are ways I
could slow indexing down, and therefore ways to speed it up.

Splitting across transactions is about the insertion of data, not the
creation of the index. This is for the case where you can't insert
first and create the index later (for example, when you're inserting a
lot of data into a table that already has data). The recommendation in
this case is to wrap the whole batch of inserts in a transaction, but
to commit the transaction at regular intervals, breaking the process
into multiple pieces so that you don't spill over the memory cache.
SUPPOSEDLY this positively impacts indexing performance, but I've not
personally tested that claim. The more significant impact in this case
is actually from the individual transactions you avoid, which makes a
huge difference.
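The batched-commit pattern can be sketched like this (schema from the
thread; the batch size, row count, and cache size are arbitrary
illustration values), using Python's sqlite3 module with manual
transaction control:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual BEGIN/COMMIT
cur = con.cursor()
cur.execute("PRAGMA cache_size = -64000")  # ~64 MB page cache (bullet 5)
cur.execute("CREATE TABLE T (a, b, c, d, t)")

BATCH = 10_000
cur.execute("BEGIN")
for n in range(1, 50_001):
    cur.execute("INSERT INTO T VALUES (?,?,?,?,?)",
                (n % 7, n % 11, n % 13, n, n))
    if n % BATCH == 0:        # commit periodically to bound cache usage
        cur.execute("COMMIT")
        cur.execute("BEGIN")
cur.execute("COMMIT")         # final (possibly empty) batch

# Build the index only after the bulk load: one big sort, instead of
# maintaining a sorted tree on every single insert (bullet 1).
cur.execute("CREATE INDEX idx_tabc ON T(t, a, b, c)")
```

The big win is avoiding one implicit transaction (and its fsync) per
insert; the periodic commits just keep each transaction's working set
inside the cache.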

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Pavel Ivanov
Sent: Tuesday, October 20, 2009 2:47 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] index for a group by

I want to note, John, that my words were in the context of "I have a
table with a lot of data, I want to create a particular index on it,
how can I do it quickly". In this context only your 5th bullet is
applicable; I admit I'd forgotten about that. And I don't understand
how one can split the creation of an index across several transactions.

Pavel

On Tue, Oct 20, 2009 at 2:15 PM, John Crenshaw
 wrote:
>> Nothing in this process can be sped up.
>
> Actually, that isn't entirely true. While it always requires a full
> data scan, some things can make the indexing part of the process
> faster. Since indexing is done basically using a comparative sort,
> anything that would speed up the sort will speed up the indexing.
> 1. It is faster to sort 1000 data points than to insert 1000 data
> points into a constantly sorted list. Creating the index after all
> inserts is faster than creating the index, then inserting.
> 2. If possible, avoid indexes on long data strings, since the compares
> can be time consuming.
> 3. If you have a field that stores one of several strings (as an
> "enum"), consider using integers instead. Integers have lower
> overhead, and can be compared (and sorted) more quickly than strings.
> 4. If you are feeling really gutsy, you could mod the code and
> implement a radix sort or something similar for integer values. I'm
> not really recommending this, just saying: inserts and lookups in a
> radix index are faster than in a btree.
> 5. Make sure the memory cache is large enough for the sort. Writing
> data to disk is very costly compared to sorting in memory. The default
> is 2000 pages (2MB) worth of btree data. If you are about to build an
> index that will require more btree than that, increase the size, or
> split across several transactions.
>
> John
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Pavel Ivanov
> Sent: Tuesday, October 20, 2009 7:35 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] index for a group by
>
>> please could you let me know which index could be better or faster?
>
> For this particular query an index on (t,a,b,c) or even on (t,a,b,c,d)
> would be better and cause the query to execute faster (of course, only
> if the conditions t>x1 and t<x2 select a small part of the table).
>
>> also do you know by chance how to speed up the index creation?
>
> There's no way to do that. SQLite have to scan the whole table, read
> data from all rows and put necessary information into the index.
> Nothing in this process can be sped up.
>
> Pavel
>
> On Mon, Oct 19, 2009 at 5:50 PM, Sylvain Pointeau
>  wrote:
>> hello,
>> I have a table T (a,b,c,d,t)
>> where c is a value
>> a,b,c some dimensions
>> and t the time
>>
>> I need to make a subset with a "group by"
>> like
>>
>> select a,b,c,sum(d)
>> from T
>> where t>x1 and t<x2
>> group by a,b,c
>>
>> I created an index on a,b,c
>> but this table is large and the index creation is time consuming (a
>> few hours)
>>
>> please could you let me know which index could be better or faster?
>> also do you know by chance how to speed up the index creation?
>>
>> Best regards,
>> Sylvain
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>