[sqlite] ODBC driver C source file includes a header which was not shipped

2008-04-24 Thread Abshagen, Martin RD-AS2
Hi,

I am trying to use the SQLite3 ODBC driver for C provided by Christian Werner
(http://www.ch-werner.de/sqliteodbc/, package sqliteodbc-0.77-1.src.rpm).
If _WIN32 is defined, sqlite3odbc.c includes a "resource3.h" header file,
which is not shipped with this version. How can I work around this, or where
can I find "resource3.h"?

Best regards

Martin Abshagen
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Concurrency access to SQLite

2008-04-24 Thread Alexander Batyrshin
I got it. I fixed my program by using an "IMMEDIATE" transaction.
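For readers hitting the same problem: a minimal sketch of the fix, shown here in Python's built-in sqlite3 module (which wraps the same C library). The table, function, and column names are illustrative, not from Alexander's program.

```python
import sqlite3

def save_results(path, rows):
    # BEGIN IMMEDIATE takes the write (reserved) lock up front, so the
    # transaction never starts as a reader and later promotes to a
    # writer -- the promotion step is where the deadlock arises.
    con = sqlite3.connect(path, timeout=30, isolation_level=None)
    try:
        con.execute("BEGIN IMMEDIATE")
        con.executemany("INSERT INTO results(id, value) VALUES (?, ?)", rows)
        con.execute("COMMIT")
    finally:
        con.close()
```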

On Fri, Apr 25, 2008 at 12:01 AM, Igor Tandetnik <[EMAIL PROTECTED]> wrote:
> "Alexander Batyrshin" <[EMAIL PROTECTED]>
>  wrote in message
>  news:[EMAIL PROTECTED]
>
> >>  Dropping the read lock is the same as rolling back the
>  >>  transaction.  The first process can, in fact, do this.  And
>  >>  the second process is waiting for the first process to do
>  >>  this.  But the first process cannot do it automatically.  The
>  >>  application must issue a "COMMIT" or "ROLLBACK" command
>  >>  to make it happen.
>  >
>
> > For example, if 2 processes executes simple SQL INSERT commands and
>  > gets situation like above, one of them can easily drop read lock and
>  > wait for another one. There is no problem for this case.
>
>  Two concurrent inserts never result in a deadlock. For a deadlock to
>  occur in SQLite, at least one transaction should start as a read-only
>  (with a select statement) and later attempt to promote to read-write
>  (with insert, update or delete statements). In this case you may get
>  into a situation where the first transaction holds a shared lock and
>  waits to promote it to reserved, and the second one holds a pending
>  lock, wants to promote it to exclusive and waits for all readers (shared
>  locks) to clear.
>
>  Igor Tandetnik
>
>
>
>
>



-- 
Alexander Batyrshin aka bash
bash = Biomechanica Artificial Sabotage Humanoid


[sqlite] Performance statistics?

2008-04-24 Thread Richard Klein
Does SQLite have a mechanism, in addition to the
ANALYZE statement, for recording and dumping
performance statistics?

Thanks,
- Richard Klein



Re: [sqlite] Concurrency access to SQLite

2008-04-24 Thread Igor Tandetnik
"Alexander Batyrshin" <[EMAIL PROTECTED]>
wrote in message
news:[EMAIL PROTECTED]
>>  Dropping the read lock is the same as rolling back the
>>  transaction.  The first process can, in fact, do this.  And
>>  the second process is waiting for the first process to do
>>  this.  But the first process cannot do it automatically.  The
>>  application must issue a "COMMIT" or "ROLLBACK" command
>>  to make it happen.
>
> For example, if 2 processes executes simple SQL INSERT commands and
> gets situation like above, one of them can easily drop read lock and
> wait for another one. There is no problem for this case.

Two concurrent inserts never result in a deadlock. For a deadlock to 
occur in SQLite, at least one transaction should start as a read-only 
(with a select statement) and later attempt to promote to read-write 
(with insert, update or delete statements). In this case you may get 
into a situation where the first transaction holds a shared lock and 
waits to promote it to reserved, and the second one holds a pending 
lock, wants to promote it to exclusive and waits for all readers (shared 
locks) to clear.

Igor Tandetnik 





Re: [sqlite] How to set memory usage as high as possible -- but not too high?

2008-04-24 Thread Ralf Junker
Jay A. Kreibich wrote:

>> >Are you using a 32 bit or 64 bit process.
>> 
>> 32, but it does not matter to the problem.
>
>  When you give examples like "PRAGMA cache_size = 1000;", or 10M
>  pages which would take something on the order of 15 GB of memory to
>  service with the default 1K page size, a few of us start to wonder.

Good point, but I believe you misunderstood my intention. I was interested not 
in absolute but in relative numbers, regardless of a 32- or 64-bit system. 
Everything I need to know is relative to the amount of RAM available when the 
application starts. The aim is to use as much RAM as possible, but never more 
than is reasonably available. That last bit is the difficult part.
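One way to express "relative to available RAM" in code, sketched in Python's sqlite3 module. This is only an illustration under stated assumptions: the sysconf names are Linux-only, and the 50% fraction and 1.5 GB cap are arbitrary placeholders, not values from the thread.

```python
import os
import sqlite3

def open_with_ram_relative_cache(path, fraction=0.5, cap=1536 * 1024 * 1024):
    # Budget a share of physical RAM (Linux-only sysconf names),
    # capped so a 32-bit process stays well inside its address space.
    ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    budget = min(int(ram * fraction), cap)
    con = sqlite3.connect(path)
    # cache_size is counted in pages, so convert the byte budget.
    page_size = con.execute("PRAGMA page_size").fetchone()[0]
    con.execute(f"PRAGMA cache_size = {budget // page_size}")
    return con
```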

>> >I am curious why you think memory is the bottleneck anyway!
>> 
>> It has often been pointed out on this list that inserts into
>> indexed tables (regular or FTS) run faster with a high page cache.
>> My own tests 2nd this. A few 100 MB more or less can make an
>> difference of more than 100%.
>
>  Given that the default page cache is 2000 pages, or on the order of
>  3MB, it seems that you're hitting some serious limits.  If hundreds
>  of megabytes (!) is giving you a return on the order of 2x, then there
>  is no magic bullet-- you aren't going to find a setting that suddenly
>  gives you a 10x speedup.  You're hitting diminishing returns in a
>  serious kind of way.

Some numbers: 32 bit app, inserting 6 million records into 2 tables each (text, 
fts, and blob). The database finally grows to over 6 GB in size. As the last 
step, a simple index is created on one text field.

With the default 2000 pages cache size (1 KB page size), this takes about a 
full day or more. Raising the page cache to some 18 pages uses about 270 MB 
of memory but brings the timing down to less than one hour.

My testing shows that inserts involving lots of random disk seeks (indexes, FTS) 
benefit hugely from a large cache size, for the simple reason that it reduces 
disk I/O.

>  Personally, I'd pick a number, like half your RAM size or ~1.5GB*
>  (whichever is smaller), set the page cache, and be done with it.

That's what I ended up doing. In addition, I regularly check 
sqlite3_memory_used() and reduce the page cache if it exceeds the limit. Then I 
call sqlite3_release_memory() repeatedly until the memory usage has dropped 
sufficiently.

>  It sounds like you've already found most of the other PRAGMAs that
>  are going to get you something.  You might be able to tune the size
>  of your INSERT transactions, but if you're around 1000 or so, going
>  higher isn't likely to buy you too much.

Currently I use just a single transaction for all inserts into a newly created 
database. This reduces the cache flushes to a single one, after all the data 
has been inserted and the memory used is freed.

As another optimization I am looking forward to the new journal PRAGMA, and 
hope not to need a journal file at all.

Thanks for the feedback and my apologies for the late response,

Ralf 



Re: [sqlite] Concurrency access to SQLite

2008-04-24 Thread Alexander Batyrshin
>  Dropping the read lock is the same as rolling back the
>  transaction.  The first process can, in fact, do this.  And
>  the second process is waiting for the first process to do
>  this.  But the first process cannot do it automatically.  The
>  application must issue a "COMMIT" or "ROLLBACK" command
>  to make it happen.

This looks a little odd to me.
For example, if two processes execute simple SQL INSERT commands and
get into a situation like the one above, one of them can easily drop its
read lock and wait for the other one. There is no problem in this case.


Re: [sqlite] How to execute the statment file using sqlite API

2008-04-24 Thread Dennis Cote
Joanne Pham wrote:
> So I don't want to read the file into an array and execute it.
> I have defined the array of characters as below:
>
>   .output outputFile
>   .mode csv
>   select startTime, appName, appType, isAppDeleted, remoteWXId;
>
> but it didn't work.

From the ".output" command above it appears you are using the sqlite3 
shell and not the SQLite API functions from your own application. If 
this is not true then you need to know that all the meta-commands that 
start with a period are implemented by the sqlite3 shell application, 
and not by the sqlite library, so they cannot be used by your application.

If you are using the sqlite3 shell, you can use the .read command to 
execute all the commands in a file.

 sqlite>.read getData.sql

HTH
Dennis Cote
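For completeness, the nearest API-level equivalent of ".read" is to hand the file's whole text to the library yourself. This sketch uses Python's sqlite3 wrapper (executescript plays the role of passing the script to sqlite3_exec at the C level); as noted above, shell meta-commands such as .output and .mode in the file would still fail, since only real SQL reaches the library.

```python
import sqlite3

def run_sql_file(con, filename):
    # Read the whole script and let the library execute each SQL
    # statement in turn. Dot-commands (.read, .mode, ...) are shell
    # features and would raise a syntax error here.
    with open(filename, "r") as f:
        con.executescript(f.read())
```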


Re: [sqlite] Blob truncation

2008-04-24 Thread Dennis Cote
Vasil Boshnyakov wrote:
> 
> It is not so effective to read the file twice - the first time to check the
> compressed size and the second pass - to do the actual streaming in the
> blob. That is what I am trying to avoid. 

Yes, I understand that, but SQLite is designed to store and retrieve 
blobs, fixed size chunks of arbitrary data. It is not designed to 
manipulate the contents of the blobs.

> What do you mean with: "... but you
> can replace it with another 
> blob with a different size."? 

You can't directly change the size of a blob. You can only change the 
size by replacing one blob with another one of a different size.

> Is it possible to store the file in a bigger
> blob and then replace the blob with smaller one without to copy the raw blob
> data?
> 

No, when you replace a blob you will need to copy the data into the 
newly created smaller blob.

Dennis Cote
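A small demonstration of the replace-only model, in Python's sqlite3 (the table name is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files(id INTEGER PRIMARY KEY, data BLOB)")
# Reserve a large blob, as if sized for the worst-case compressed data.
con.execute("INSERT INTO files(data) VALUES (zeroblob(1000))")

# There is no truncate: shrinking means writing a new, smaller value,
# and SQLite rewrites the record rather than trimming it in place.
con.execute("UPDATE files SET data = ? WHERE id = 1", (b"small payload",))
size = con.execute("SELECT length(data) FROM files WHERE id = 1").fetchone()[0]
print(size)  # 13
```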


Re: [sqlite] Concurrency access to SQLite

2008-04-24 Thread D. Richard Hipp

On Apr 24, 2008, at 7:42 AM, Alexander Batyrshin wrote:

> I am not understand this example. First of all second process can't
> promote exclusive lock from reserved. It should use intermediate
> pending lock.

It does go to pending.  But it still cannot complete the transaction
until it is able to promote the pending lock to exclusive.  And it
cannot promote to exclusive until the first process drops its read
lock.

> And secondary why first process can't just drop read
> lock and then invoke busy handler?

Dropping the read lock is the same as rolling back the
transaction.  The first process can, in fact, do this.  And
the second process is waiting for the first process to do
this.  But the first process cannot do it automatically.  The
application must issue a "COMMIT" or "ROLLBACK" command
to make it happen.

>
>
> In this case any write to database that already has process
>
> On Thu, Apr 24, 2008 at 6:01 PM, Simon Davies
> <[EMAIL PROTECTED]> wrote:
>> Alexander,
>>
>> From http://www.sqlite.org/c3ref/busy_handler.html
>>
>> "The presence of a busy handler does not guarantee that it will be
>> invoked when there is lock contention. If SQLite determines that
>> invoking the busy handler could result in a deadlock, it will go  
>> ahead
>> and return SQLITE_BUSY or SQLITE_IOERR_BLOCKED instead of invoking  
>> the
>> busy handler. Consider a scenario where one process is holding a read
>> lock that it is trying to promote to a reserved lock and a second
>> process is holding a reserved lock that it is trying to promote to an
>> exclusive lock. The first process cannot proceed because it is  
>> blocked
>> by the second and the second process cannot proceed because it is
>> blocked by the first. If both processes invoke the busy handlers,
>> neither will make any progress. Therefore, SQLite returns SQLITE_BUSY
>> for the first process, hoping that this will induce the first process
>> to release its read lock and allow the second process to proceed"
>>
>> Rgds,
>> Simon
>>
>> 2008/4/24 Alexander Batyrshin <[EMAIL PROTECTED]>:
>>
>>
>>> Oh... Nope, I am not using any thread-mechanism.
>>> I am using simple processes (via fork). So synchronization should be
>>> task for SQLite library.
>>>
>>> But right now I am confused, because my processes do not blocks on
>>> sqlite3_exec. They immediately report BUSY_TIMEOUT, without awaiting
>>> for time set by sqlite3_busy_timeout.
>>>
>>>
>>>
>>> On Thu, Apr 24, 2008 at 4:29 PM, John Stanton  
>>> <[EMAIL PROTECTED]> wrote:
 If it is one process I would assign a mutex to the resource  
 (Sqlite) and
 wait on it to get access to the resource.  When the Sqlite  
 operation is
 complete release the mutex and the next thread will have exclusive
 access to it.

 If you use pthreads you can use read and write locks to get  
 concurrency
 on reads.

 To my mind syncing on a mutex is better and simpler than polling  
 the
 resource using SQLITE_BUSY.



 Alexander Batyrshin wrote:
> So, you advice me, to implement synchronization inside my  
> process by my self?
>
> On Thu, Apr 24, 2008 at 3:40 PM, John Stanton  
> <[EMAIL PROTECTED]> wrote:
>> You have a single shared resource, Sqlite, and you have to  
>> synchronize
>> access.  You can use the internal locking in Sqlite and use  
>> polling or
>> wait on a mutex or semaphore.
>>
>>
>> Alexander Batyrshin wrote:
>>> Hello All,
>>>
>>> I am observing situation, that my concurrency process does not  
>>> have
>>> access to SQLite database with equal probability.
>>>
>>> Here is example. I have N process that do work like this:
>>>
>>> while (1) {
>>>do_some_work(); // takes ~ 30 sec
>>>save_work_result_to_sqlite(); // takes ~ 1 sec
>>> }
>>>
>>> So, as you can see, these N process has concurrency access to  
>>> SQLite database.
>>> In theory in worst case, save_work_result_to_sqlite() should  
>>> NOT wait
>>> for access to database longer than N * 1 sec.
>>> But in practice, some process blocks on save_work_to_sqlite()  
>>> more
>>> than N*2 sec and dies on my SQLITE_BUSY asserts :/
>>>
>>> So, I am wondering, is there any ideas how to avoid this?
>>>
>>
>>
>
>
>


>>>
>>>
>>>
>>> --
>>> Alexander Batyrshin aka bash
>>> bash = Biomechanica Artificial Sabotage Humanoid

Re: [sqlite] Concurrency access to SQLite

2008-04-24 Thread John Stanton
If you are using processes you can sync them using a semaphore so that 
a process blocks automatically.  Alternatively, do not use sqlite3_exec (it is 
an old interface); instead use sqlite3_prepare ... sqlite3_step.  If 
you get SQLITE_BUSY back from sqlite3_step, pause a hundred ms 
or so and try again.

If you don't use threads you can compile Sqlite without THREADSAFE and 
have it run a little faster.
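The pause-and-retry loop John describes might look like this; sketched in Python's sqlite3, which surfaces SQLITE_BUSY as OperationalError "database is locked". The helper name and retry budget are invented for illustration.

```python
import sqlite3
import time

def execute_with_retry(con, sql, params=(), tries=50, pause=0.1):
    # Retry on SQLITE_BUSY, sleeping ~100 ms between attempts,
    # instead of relying on sqlite3_exec's built-in busy handling.
    for _ in range(tries):
        try:
            return con.execute(sql, params)
        except sqlite3.OperationalError as e:
            if "locked" not in str(e):
                raise  # a real error, not lock contention
            time.sleep(pause)
    raise sqlite3.OperationalError("database is locked (gave up retrying)")
```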

Alexander Batyrshin wrote:
> Oh... Nope, I am not using any thread-mechanism.
> I am using simple processes (via fork). So synchronization should be
> task for SQLite library.
> 
> But right now I am confused, because my processes do not blocks on
> sqlite3_exec. They immediately report BUSY_TIMEOUT, without awaiting
> for time set by sqlite3_busy_timeout.
> 
> 
> On Thu, Apr 24, 2008 at 4:29 PM, John Stanton <[EMAIL PROTECTED]> wrote:
>> If it is one process I would assign a mutex to the resource (Sqlite) and
>>  wait on it to get access to the resource.  When the Sqlite operation is
>>  complete release the mutex and the next thread will have exclusive
>>  access to it.
>>
>>  If you use pthreads you can use read and write locks to get concurrency
>>  on reads.
>>
>>  To my mind syncing on a mutex is better and simpler than polling the
>>  resource using SQLITE_BUSY.
>>
>>
>>
>>  Alexander Batyrshin wrote:
>>  > So, you advice me, to implement synchronization inside my process by my 
>> self?
>>  >
>>  > On Thu, Apr 24, 2008 at 3:40 PM, John Stanton <[EMAIL PROTECTED]> wrote:
>>  >> You have a single shared resource, Sqlite, and you have to synchronize
>>  >>  access.  You can use the internal locking in Sqlite and use polling or
>>  >>  wait on a mutex or semaphore.
>>  >>
>>  >>
>>  >>  Alexander Batyrshin wrote:
>>  >>  >  Hello All,
>>  >>  >
>>  >>  > I am observing situation, that my concurrency process does not have
>>  >>  > access to SQLite database with equal probability.
>>  >>  >
>>  >>  > Here is example. I have N process that do work like this:
>>  >>  >
>>  >>  > while (1) {
>>  >>  > do_some_work(); // takes ~ 30 sec
>>  >>  > save_work_result_to_sqlite(); // takes ~ 1 sec
>>  >>  > }
>>  >>  >
>>  >>  > So, as you can see, these N process has concurrency access to SQLite 
>> database.
>>  >>  > In theory in worst case, save_work_result_to_sqlite() should NOT wait
>>  >>  > for access to database longer than N * 1 sec.
>>  >>  > But in practice, some process blocks on save_work_to_sqlite() more
>>  >>  > than N*2 sec and dies on my SQLITE_BUSY asserts :/
>>  >>  >
>>  >>  > So, I am wondering, is there any ideas how to avoid this?
>>  >>  >
>>  >>
>>  >>
>>  >
>>  >
>>  >
>>
>>
> 
> 
> 



Re: [sqlite] Concurrency access to SQLite

2008-04-24 Thread Alexander Batyrshin
I do not understand this example. First of all, the second process can't
promote to an exclusive lock directly from reserved; it has to go through
the intermediate pending lock. And secondly, why can't the first process
just drop its read lock and then invoke the busy handler?

In this case any write to database that already has process

On Thu, Apr 24, 2008 at 6:01 PM, Simon Davies
<[EMAIL PROTECTED]> wrote:
> Alexander,
>
>  From http://www.sqlite.org/c3ref/busy_handler.html
>
>  "The presence of a busy handler does not guarantee that it will be
>  invoked when there is lock contention. If SQLite determines that
>  invoking the busy handler could result in a deadlock, it will go ahead
>  and return SQLITE_BUSY or SQLITE_IOERR_BLOCKED instead of invoking the
>  busy handler. Consider a scenario where one process is holding a read
>  lock that it is trying to promote to a reserved lock and a second
>  process is holding a reserved lock that it is trying to promote to an
>  exclusive lock. The first process cannot proceed because it is blocked
>  by the second and the second process cannot proceed because it is
>  blocked by the first. If both processes invoke the busy handlers,
>  neither will make any progress. Therefore, SQLite returns SQLITE_BUSY
>  for the first process, hoping that this will induce the first process
>  to release its read lock and allow the second process to proceed"
>
>  Rgds,
>  Simon
>
>  2008/4/24 Alexander Batyrshin <[EMAIL PROTECTED]>:
>
>
> > Oh... Nope, I am not using any thread-mechanism.
>  > I am using simple processes (via fork). So synchronization should be
>  > task for SQLite library.
>  >
>  > But right now I am confused, because my processes do not blocks on
>  > sqlite3_exec. They immediately report BUSY_TIMEOUT, without awaiting
>  > for time set by sqlite3_busy_timeout.
>  >
>  >
>  >
>  > On Thu, Apr 24, 2008 at 4:29 PM, John Stanton <[EMAIL PROTECTED]> wrote:
>  > > If it is one process I would assign a mutex to the resource (Sqlite) and
>  > >  wait on it to get access to the resource.  When the Sqlite operation is
>  > >  complete release the mutex and the next thread will have exclusive
>  > >  access to it.
>  > >
>  > >  If you use pthreads you can use read and write locks to get concurrency
>  > >  on reads.
>  > >
>  > >  To my mind syncing on a mutex is better and simpler than polling the
>  > >  resource using SQLITE_BUSY.
>  > >
>  > >
>  > >
>  > >  Alexander Batyrshin wrote:
>  > >  > So, you advice me, to implement synchronization inside my process by 
> my self?
>  > >  >
>  > >  > On Thu, Apr 24, 2008 at 3:40 PM, John Stanton <[EMAIL PROTECTED]> 
> wrote:
>  > >  >> You have a single shared resource, Sqlite, and you have to 
> synchronize
>  > >  >>  access.  You can use the internal locking in Sqlite and use polling 
> or
>  > >  >>  wait on a mutex or semaphore.
>  > >  >>
>  > >  >>
>  > >  >>  Alexander Batyrshin wrote:
>  > >  >>  >  Hello All,
>  > >  >>  >
>  > >  >>  > I am observing situation, that my concurrency process does not 
> have
>  > >  >>  > access to SQLite database with equal probability.
>  > >  >>  >
>  > >  >>  > Here is example. I have N process that do work like this:
>  > >  >>  >
>  > >  >>  > while (1) {
>  > >  >>  > do_some_work(); // takes ~ 30 sec
>  > >  >>  > save_work_result_to_sqlite(); // takes ~ 1 sec
>  > >  >>  > }
>  > >  >>  >
>  > >  >>  > So, as you can see, these N process has concurrency access to 
> SQLite database.
>  > >  >>  > In theory in worst case, save_work_result_to_sqlite() should NOT 
> wait
>  > >  >>  > for access to database longer than N * 1 sec.
>  > >  >>  > But in practice, some process blocks on save_work_to_sqlite() more
>  > >  >>  > than N*2 sec and dies on my SQLITE_BUSY asserts :/
>  > >  >>  >
>  > >  >>  > So, I am wondering, is there any ideas how to avoid this?
>  > >  >>  >
>  > >  >>
>  > >  >>
>  > >  >
>  > >  >
>  > >  >
>  > >
>  > >
>  >
>  >
>  >
>  > --
>  > Alexander Batyrshin aka bash
>  > bash = Biomechanica Artificial Sabotage Humanoid
>  >
>



-- 
Alexander Batyrshin aka bash
bash = Biomechanica Artificial Sabotage Humanoid

Re: [sqlite] Concurrency access to SQLite

2008-04-24 Thread Simon Davies
Alexander,

From http://www.sqlite.org/c3ref/busy_handler.html

"The presence of a busy handler does not guarantee that it will be
invoked when there is lock contention. If SQLite determines that
invoking the busy handler could result in a deadlock, it will go ahead
and return SQLITE_BUSY or SQLITE_IOERR_BLOCKED instead of invoking the
busy handler. Consider a scenario where one process is holding a read
lock that it is trying to promote to a reserved lock and a second
process is holding a reserved lock that it is trying to promote to an
exclusive lock. The first process cannot proceed because it is blocked
by the second and the second process cannot proceed because it is
blocked by the first. If both processes invoke the busy handlers,
neither will make any progress. Therefore, SQLite returns SQLITE_BUSY
for the first process, hoping that this will induce the first process
to release its read lock and allow the second process to proceed"

Rgds,
Simon

2008/4/24 Alexander Batyrshin <[EMAIL PROTECTED]>:
> Oh... Nope, I am not using any thread-mechanism.
> I am using simple processes (via fork). So synchronization should be
> task for SQLite library.
>
> But right now I am confused, because my processes do not blocks on
> sqlite3_exec. They immediately report BUSY_TIMEOUT, without awaiting
> for time set by sqlite3_busy_timeout.
>
>
>
> On Thu, Apr 24, 2008 at 4:29 PM, John Stanton <[EMAIL PROTECTED]> wrote:
> > If it is one process I would assign a mutex to the resource (Sqlite) and
> >  wait on it to get access to the resource.  When the Sqlite operation is
> >  complete release the mutex and the next thread will have exclusive
> >  access to it.
> >
> >  If you use pthreads you can use read and write locks to get concurrency
> >  on reads.
> >
> >  To my mind syncing on a mutex is better and simpler than polling the
> >  resource using SQLITE_BUSY.
> >
> >
> >
> >  Alexander Batyrshin wrote:
> >  > So, you advice me, to implement synchronization inside my process by my 
> > self?
> >  >
> >  > On Thu, Apr 24, 2008 at 3:40 PM, John Stanton <[EMAIL PROTECTED]> wrote:
> >  >> You have a single shared resource, Sqlite, and you have to synchronize
> >  >>  access.  You can use the internal locking in Sqlite and use polling or
> >  >>  wait on a mutex or semaphore.
> >  >>
> >  >>
> >  >>  Alexander Batyrshin wrote:
> >  >>  >  Hello All,
> >  >>  >
> >  >>  > I am observing situation, that my concurrency process does not have
> >  >>  > access to SQLite database with equal probability.
> >  >>  >
> >  >>  > Here is example. I have N process that do work like this:
> >  >>  >
> >  >>  > while (1) {
> >  >>  > do_some_work(); // takes ~ 30 sec
> >  >>  > save_work_result_to_sqlite(); // takes ~ 1 sec
> >  >>  > }
> >  >>  >
> >  >>  > So, as you can see, these N process has concurrency access to SQLite 
> > database.
> >  >>  > In theory in worst case, save_work_result_to_sqlite() should NOT wait
> >  >>  > for access to database longer than N * 1 sec.
> >  >>  > But in practice, some process blocks on save_work_to_sqlite() more
> >  >>  > than N*2 sec and dies on my SQLITE_BUSY asserts :/
> >  >>  >
> >  >>  > So, I am wondering, is there any ideas how to avoid this?
> >  >>  >
> >  >>
> >  >>
> >  >
> >  >
> >  >
> >
> >
>
>
>
> --
> Alexander Batyrshin aka bash
> bash = Biomechanica Artificial Sabotage Humanoid
>


Re: [sqlite] Concurrency access to SQLite

2008-04-24 Thread Alexander Batyrshin
Oh... Nope, I am not using any threading mechanism.
I am using plain processes (via fork), so synchronization should be the
SQLite library's task.

But right now I am confused, because my processes do not block in
sqlite3_exec. They immediately return SQLITE_BUSY, without waiting for
the time set by sqlite3_busy_timeout.
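For reference, the two common ways to set the timeout (the 5-second value is an arbitrary example). Note that when SQLite detects a potential deadlock it deliberately skips the wait and returns SQLITE_BUSY straight away, which would match the behaviour described here. Sketched with Python's sqlite3 module.

```python
import sqlite3

# Equivalent of sqlite3_busy_timeout(db, 5000): either at connect time...
con = sqlite3.connect(":memory:", timeout=5.0)
# ...or later on an existing handle, via the pragma.
con.execute("PRAGMA busy_timeout = 5000")
print(con.execute("PRAGMA busy_timeout").fetchone()[0])  # 5000
```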


On Thu, Apr 24, 2008 at 4:29 PM, John Stanton <[EMAIL PROTECTED]> wrote:
> If it is one process I would assign a mutex to the resource (Sqlite) and
>  wait on it to get access to the resource.  When the Sqlite operation is
>  complete release the mutex and the next thread will have exclusive
>  access to it.
>
>  If you use pthreads you can use read and write locks to get concurrency
>  on reads.
>
>  To my mind syncing on a mutex is better and simpler than polling the
>  resource using SQLITE_BUSY.
>
>
>
>  Alexander Batyrshin wrote:
>  > So, you advice me, to implement synchronization inside my process by my 
> self?
>  >
>  > On Thu, Apr 24, 2008 at 3:40 PM, John Stanton <[EMAIL PROTECTED]> wrote:
>  >> You have a single shared resource, Sqlite, and you have to synchronize
>  >>  access.  You can use the internal locking in Sqlite and use polling or
>  >>  wait on a mutex or semaphore.
>  >>
>  >>
>  >>  Alexander Batyrshin wrote:
>  >>  >  Hello All,
>  >>  >
>  >>  > I am observing situation, that my concurrency process does not have
>  >>  > access to SQLite database with equal probability.
>  >>  >
>  >>  > Here is example. I have N process that do work like this:
>  >>  >
>  >>  > while (1) {
>  >>  > do_some_work(); // takes ~ 30 sec
>  >>  > save_work_result_to_sqlite(); // takes ~ 1 sec
>  >>  > }
>  >>  >
>  >>  > So, as you can see, these N process has concurrency access to SQLite 
> database.
>  >>  > In theory in worst case, save_work_result_to_sqlite() should NOT wait
>  >>  > for access to database longer than N * 1 sec.
>  >>  > But in practice, some process blocks on save_work_to_sqlite() more
>  >>  > than N*2 sec and dies on my SQLITE_BUSY asserts :/
>  >>  >
>  >>  > So, I am wondering, is there any ideas how to avoid this?
>  >>  >
>  >>
>  >>
>  >
>  >
>  >
>
>



-- 
Alexander Batyrshin aka bash
bash = Biomechanica Artificial Sabotage Humanoid


Re: [sqlite] Concurrency access to SQLite

2008-04-24 Thread John Stanton
If it is one process I would assign a mutex to the resource (Sqlite) and 
wait on it to get access to the resource.  When the Sqlite operation is 
complete release the mutex and the next thread will have exclusive 
access to it.

If you use pthreads you can use read and write locks to get concurrency 
on reads.

To my mind syncing on a mutex is better and simpler than polling the 
resource using SQLITE_BUSY.

Alexander Batyrshin wrote:
> So, you advice me, to implement synchronization inside my process by my self?
> 
> On Thu, Apr 24, 2008 at 3:40 PM, John Stanton <[EMAIL PROTECTED]> wrote:
>> You have a single shared resource, Sqlite, and you have to synchronize
>>  access.  You can use the internal locking in Sqlite and use polling or
>>  wait on a mutex or semaphore.
>>
>>
>>  Alexander Batyrshin wrote:
>>  >  Hello All,
>>  >
>>  > I am observing situation, that my concurrency process does not have
>>  > access to SQLite database with equal probability.
>>  >
>>  > Here is example. I have N process that do work like this:
>>  >
>>  > while (1) {
>>  > do_some_work(); // takes ~ 30 sec
>>  > save_work_result_to_sqlite(); // takes ~ 1 sec
>>  > }
>>  >
>>  > So, as you can see, these N process has concurrency access to SQLite 
>> database.
>>  > In theory in worst case, save_work_result_to_sqlite() should NOT wait
>>  > for access to database longer than N * 1 sec.
>>  > But in practice, some process blocks on save_work_to_sqlite() more
>>  > than N*2 sec and dies on my SQLITE_BUSY asserts :/
>>  >
>>  > So, I am wondering, is there any ideas how to avoid this?
>>  >
>>
>>
> 
> 
> 



Re: [sqlite] Blob truncation

2008-04-24 Thread Vasil Boshnyakov
Dennis Cote:
You can't truncate or resize a blob, but you can replace it with another 
blob with a different size. Your problem is that you don't know the size 
of the compressed blob until after you have done the compression (Note 
truncation may not always work if the compressor actually ends up 
expanding a previously compressed file).

If you can get the compressed size of the file (i.e. by doing a dummy 
compression with output sent to a null device) you can then set the size 
of the blob correctly before you start writing to it.



It is not efficient to read the file twice - once to determine the
compressed size and a second time to do the actual streaming into the
blob. That is what I am trying to avoid. What do you mean by "... but you
can replace it with another blob with a different size"? Is it possible to
store the file in a bigger blob and then replace the blob with a smaller
one without copying the raw blob data?

Thank you for the help!






-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Dennis Cote
Sent: Wednesday, April 23, 2008 5:29 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Blob truncation

[EMAIL PROTECTED] wrote:
> Is there a way to truncate a blob (for example set it size to the file
> size before compression and after inserting the compressed data to
> truncate the unused blob space)? Or it is possible to change the blob size
> "on the fly"?
> 
> Any help is very welcome and thank you in advice!
> 

Please don't hijack threads on other topics.

http://en.wikipedia.org/wiki/Thread_hijacking

You can't truncate or resize a blob, but you can replace it with another 
blob with a different size. Your problem is that you don't know the size 
of the compressed blob until after you have done the compression (Note 
truncation may not always work if the compressor actually ends up 
expanding a previously compressed file).

If you can get the compressed size of the file (i.e. by doing a dummy 
compression with output sent to a null device) you can then set the size 
of the blob correctly before you start writing to it.

HTH
Dennis Cote
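As a rough sketch of Dennis's two-step suggestion -- reserve the blob at the right size up front with zeroblob(), or simply replace it afterwards with a blob of a different size -- here is a small example using the Python stdlib `sqlite3` module for brevity (the C-API counterparts are `sqlite3_bind_zeroblob()` and the incremental blob I/O routines; the table name and data are made up for illustration):

```python
import sqlite3
import zlib

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT PRIMARY KEY, data BLOB)")

raw = b"some file contents " * 1000          # stand-in for the real file

# Reserve space first: zeroblob(N) creates an N-byte blob of zeros
# without materialising N bytes inside the SQL statement itself.
conn.execute("INSERT INTO files VALUES ('f', zeroblob(?))", (len(raw),))

# Once the compressed size is known, replace the blob with one of the
# right size -- this rewrites the value; it does not truncate in place.
compressed = zlib.compress(raw)
conn.execute("UPDATE files SET data = ? WHERE name = 'f'", (compressed,))

size = conn.execute(
    "SELECT length(data) FROM files WHERE name = 'f'").fetchone()[0]
print(size)   # the stored blob now has the new, smaller size
```

The point of the sketch is that the row ends up holding exactly the compressed size, but only because the whole value was replaced, not resized.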


Re: [sqlite] Concurrency access to SQLite

2008-04-24 Thread Alexander Batyrshin
So, you advise me to implement the synchronization between my processes myself?

On Thu, Apr 24, 2008 at 3:40 PM, John Stanton <[EMAIL PROTECTED]> wrote:
> You have a single shared resource, Sqlite, and you have to synchronize
>  access.  You can use the internal locking in Sqlite and use polling or
>  wait on a mutex or semaphore.
>
>
>  Alexander Batyrshin wrote:
>  >  Hello All,
>  >
>  > I am observing situation, that my concurrency process does not have
>  > access to SQLite database with equal probability.
>  >
>  > Here is example. I have N process that do work like this:
>  >
>  > while (1) {
>  > do_some_work(); // takes ~ 30 sec
>  > save_work_result_to_sqlite(); // takes ~ 1 sec
>  > }
>  >
>  > So, as you can see, these N process has concurrency access to SQLite 
> database.
>  > In theory in worst case, save_work_result_to_sqlite() should NOT wait
>  > for access to database longer than N * 1 sec.
>  > But in practice, some process blocks on save_work_to_sqlite() more
>  > than N*2 sec and dies on my SQLITE_BUSY asserts :/
>  >
>  > So, I am wondering, is there any ideas how to avoid this?
>  >
>
>



-- 
Alexander Batyrshin aka bash
bash = Biomechanica Artificial Sabotage Humanoid


Re: [sqlite] Concurrency access to SQLite

2008-04-24 Thread John Stanton
You have a single shared resource, Sqlite, and you have to synchronize 
access.  You can use the internal locking in Sqlite and use polling or 
wait on a mutex or semaphore.
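A minimal sketch of leaning on SQLite's internal locking, using the Python stdlib `sqlite3` module for brevity (in C this corresponds to `BEGIN IMMEDIATE` plus `sqlite3_busy_timeout()`; the file name and schema are hypothetical):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# timeout is the busy timeout in seconds (sqlite3_busy_timeout in C);
# isolation_level=None lets us issue BEGIN/COMMIT ourselves.
a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
b = sqlite3.connect(path, timeout=0.1, isolation_level=None)
a.execute("CREATE TABLE results (x)")

# BEGIN IMMEDIATE takes the write lock up front, so the transaction can
# never deadlock later while promoting a read lock to a write lock.
a.execute("BEGIN IMMEDIATE")
a.execute("INSERT INTO results VALUES (1)")

blocked = False
try:
    b.execute("BEGIN IMMEDIATE")      # another writer is active ...
except sqlite3.OperationalError:      # ... so this fails with SQLITE_BUSY
    blocked = True

a.execute("COMMIT")                   # first writer releases the lock
b.execute("BEGIN IMMEDIATE")          # now the second writer gets in
b.execute("INSERT INTO results VALUES (2)")
b.execute("COMMIT")
```

With a longer busy timeout the second writer would simply poll until the lock clears instead of failing, which is the "use the internal locking and polling" option.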

Alexander Batyrshin wrote:
>  Hello All,
> 
> I am observing situation, that my concurrency process does not have
> access to SQLite database with equal probability.
> 
> Here is example. I have N process that do work like this:
> 
> while (1) {
> do_some_work(); // takes ~ 30 sec
> save_work_result_to_sqlite(); // takes ~ 1 sec
> }
> 
> So, as you can see, these N process has concurrency access to SQLite database.
> In theory in worst case, save_work_result_to_sqlite() should NOT wait
> for access to database longer than N * 1 sec.
> But in practice, some process blocks on save_work_to_sqlite() more
> than N*2 sec and dies on my SQLITE_BUSY asserts :/
> 
> So, I am wondering, is there any ideas how to avoid this?
> 



[sqlite] Concurrency access to SQLite

2008-04-24 Thread Alexander Batyrshin
 Hello All,

I am observing a situation where my concurrent processes do not get
access to the SQLite database with equal probability.

Here is an example. I have N processes that do work like this:

while (1) {
    do_some_work();               // takes ~ 30 sec
    save_work_result_to_sqlite(); // takes ~ 1 sec
}

So, as you can see, these N processes access SQLite concurrently.
In theory, in the worst case, save_work_result_to_sqlite() should NOT
wait for access to the database longer than N * 1 sec.
But in practice, some processes block in save_work_result_to_sqlite() for
more than N * 2 sec and die on my SQLITE_BUSY asserts :/

So, I am wondering, are there any ideas on how to avoid this?

-- 
Alexander Batyrshin aka bash
bash = Biomechanica Artificial Sabotage Humanoid


Re: [sqlite] blob :: storing files within sqlite3

2008-04-24 Thread Roger Binns

João Macaíba wrote:
> I've installed sqlite3 through a binary package. How can I know what is 
> the maximum length permited to a blob column ?

The sqlite3_limit() C API can tell you (it was introduced in 3.5.8).  Unless
SQLite was compiled with a non-default value for SQLITE_MAX_LENGTH, the
maximum is 1 GB.

Roger
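For illustration, the same limit can be queried from the Python stdlib `sqlite3` module, which exposes `sqlite3_limit()` as `Connection.getlimit()` from Python 3.11 on; the fallback constant below is just the default SQLITE_MAX_LENGTH Roger mentions, not something read from the library:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
try:
    # Python 3.11+ wraps the sqlite3_limit() C API as Connection.getlimit()
    max_len = conn.getlimit(sqlite3.SQLITE_LIMIT_LENGTH)
except AttributeError:
    # Older Pythons cannot query it; assume the documented default (1 GB).
    max_len = 1_000_000_000
print(max_len)
```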


Re: [sqlite] blob :: storing files within sqlite3

2008-04-24 Thread Eduardo Morras
At 09:57 24/04/2008, you wrote:

>Hi.
>
>I intend to use sqlite3 to store files.
>
>I've tested inserting an 7.6MB file in a blob column but it returns a
>SQLite3::TooBigException. I'm using ruby API.
>
>I've installed sqlite3 through a binary package. How can I know what is
>the maximum length permited to a blob column ?
>
>Is there any recomendation on how could I store the files ? Split it
>into permited length pieces ?

Don't store the files in the blob itself; store just an absolute/relative
path to the file, or store a compressed/zipped archive. If you choose the
compressed archive, you can put more than one file in it, provided you also
record each file's number.

>Thanks in advance.

HTH

>-- João Macaíba.
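Eduardo's path-in-the-database approach might look like this sketch (Python stdlib `sqlite3`; the file name and schema are hypothetical):

```python
import os
import sqlite3
import tempfile

# Files stay on disk; the database only stores where to find them.
d = tempfile.mkdtemp()
path = os.path.join(d, "report.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 1024)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (id INTEGER PRIMARY KEY, path TEXT)")
conn.execute("INSERT INTO files (path) VALUES (?)", (path,))

# To read a "stored" file, fetch its path and open it normally.
(stored,) = conn.execute("SELECT path FROM files WHERE id = 1").fetchone()
with open(stored, "rb") as f:
    data = f.read()
```

The trade-off is that the database and the filesystem can drift out of sync, so this suits files that are large relative to the metadata you query.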



Re: [sqlite] blob :: storing files within sqlite3

2008-04-24 Thread John Stanton
My guess is that you have encountered a limitation in the Ruby wrapper.
Can you write the BLOB in chunks using your interface?
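Chunked storage along the lines John suggests could be sketched like this (Python stdlib `sqlite3` for brevity; the chunk size, table, and helper names are made up):

```python
import io
import sqlite3

CHUNK = 256 * 1024   # comfortably below any realistic SQLITE_MAX_LENGTH

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chunks ("
             "  name TEXT, seq INTEGER, data BLOB,"
             "  PRIMARY KEY (name, seq))")

def store(conn, name, fileobj):
    """Stream a file into fixed-size blob rows."""
    seq = 0
    while True:
        piece = fileobj.read(CHUNK)
        if not piece:
            break
        conn.execute("INSERT INTO chunks VALUES (?, ?, ?)",
                     (name, seq, piece))
        seq += 1

def load(conn, name):
    """Reassemble the file by concatenating its chunks in order."""
    rows = conn.execute(
        "SELECT data FROM chunks WHERE name = ? ORDER BY seq", (name,))
    return b"".join(row[0] for row in rows)

payload = bytes(range(256)) * 4096          # ~1 MiB of sample data
store(conn, "big_file", io.BytesIO(payload))
ok = load(conn, "big_file") == payload
```

Each row stays small enough for any wrapper, and the full file is recovered by ordering on the sequence number.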

João Macaíba wrote:
> Hi.
> 
> I intend to use sqlite3 to store files.
> 
> I've tested inserting an 7.6MB file in a blob column but it returns a 
> SQLite3::TooBigException. I'm using ruby API.
> 
> I've installed sqlite3 through a binary package. How can I know what is 
> the maximum length permited to a blob column ?
> 
> Is there any recomendation on how could I store the files ? Split it 
> into permited length pieces ?
> 
> Thanks in advance.
> 
> -- João Macaíba.


[sqlite] blob :: storing files within sqlite3

2008-04-24 Thread João Macaíba
Hi.

I intend to use sqlite3 to store files.

I've tested inserting a 7.6 MB file into a blob column, but it returns a 
SQLite3::TooBigException. I'm using the Ruby API.

I've installed sqlite3 through a binary package. How can I find out the 
maximum length permitted for a blob column?

Is there any recommendation on how I could store the files? Split them 
into pieces of the permitted length?

Thanks in advance.

-- João Macaíba.


Re: [sqlite] Database open problem

2008-04-24 Thread John Stanton
In your second case you have the wrong pathname for your database file 
and Sqlite is creating an empty one at whatever path you actually specified.
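One way to fail fast instead of silently getting an empty database at a mistyped path is to open without the "create" flag -- in C, `sqlite3_open_v2()` with `SQLITE_OPEN_READWRITE` but without `SQLITE_OPEN_CREATE`. A sketch using the Python stdlib's URI form of the same idea (the path is deliberately bogus):

```python
import sqlite3

# mode=rw opens an existing database read/write and refuses to create
# one, so a wrong pathname raises an error instead of yielding an
# empty database with no tables in it.
missing = "/no/such/dir/initprint.db"
opened = True
try:
    sqlite3.connect(f"file:{missing}?mode=rw", uri=True)
except sqlite3.OperationalError:
    opened = False
print(opened)
```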

Yang WenYuan wrote:
> Hi,
> 
> There is a very strange problem when I play with the sqlite-ce database.
> 
> I put the database file and my application program together in the same 
> folder. I call sqlite3_open(), the database can be opened properly. I call 
> sqlite3_step(), it return SQLITE_ROW, that means it found the record in the 
> database.
> 
> However, if I put the database file in the other disk, the problem occurred. 
> I call sqlite3_open(), the database can still be opened properly. However, I 
> call sqlite3_step(), it return SQLITE_DONE, that means it cannot found the 
> record in the database. 
> 
> Actually, the database file is exactly same, only different is the store 
> place. I wonder whether it is limitation for sqlite-Wince? 
> 
> Any comment? Thanks! Code is as follows:
> 
> {
>   sqlite3 *db;
>   char *zErrMsg = 0;
>   int rc, length;
>   char *sqlcmd;
>   sqlite3_stmt * stat;
>   TCHAR MsgChar[ 256 ], MsgTmpChar[ 256 ];
> 
> //   * Put in the same folder, it works properly. *
> //rc = sqlite3_open( "initprint.db", &db);
>  
> //   * Put in the different folder, it cannot work properly. * 
>   rc = sqlite3_open( "\\FlashDisk\\System\\initprint.db", &db);
>   if( rc )
>   {
>   sqlite3_close(db);
>   return false;
>   }
> 
>   sqlcmd = sqlite3_mprintf( "SELECT * FROM PRINT WHERE TAGTYPE=%d AND 
> BARCODE='%s';", intTagType, chrBarCode );
>   rc = sqlite3_prepare( db, sqlcmd, -1, &stat, 0 );
>   if( rc != SQLITE_OK )
>   {
>   mbstowcs( MsgTmpChar, sqlite3_errmsg( db ), strlen( 
> sqlite3_errmsg( db ) ) );
>   wsprintf( MsgChar, _T( "SQL error: %s\n" ), MsgTmpChar );
> 
>   MessageBox( MsgChar, _T( "Error Info" ), MB_OK );
> 
>   sqlite3_free( sqlcmd );
>   sqlite3_close(db);
>   return false;
>   }
> 
>   rc = sqlite3_step( stat );  
>   if( rc != SQLITE_ROW )
>   {
>   mbstowcs( MsgTmpChar, sqlite3_errmsg( db ), strlen( 
> sqlite3_errmsg( db ) ) );
>   wsprintf( MsgChar, _T( "SQL error: %s\n" ), MsgTmpChar );
> 
>   MessageBox( MsgChar, _T( "Error Info" ), MB_OK );
> 
>   sqlite3_free( sqlcmd );
>   sqlite3_close(db);
>   return false;
>   }
>   else
>   {
>   const void *printdata = sqlite3_column_blob( stat, 2 );
>   length = sqlite3_column_bytes( stat, 2 );
>   memcpy( buffer, printdata, length );
>   buffer[ length ] = 0x00;
>   *len = length;
>   }
> 
>   rc = sqlite3_finalize( stat );
>   if( rc != SQLITE_OK )
>   {
>   mbstowcs( MsgTmpChar, sqlite3_errmsg( db ), strlen( 
> sqlite3_errmsg( db ) ) );
>   wsprintf( MsgChar, _T( "SQL error: %s\n" ), MsgTmpChar );
> 
>   MessageBox( MsgChar, _T( "Error Info" ), MB_OK );
> 
>   sqlite3_free( sqlcmd );
>   sqlite3_close(db);
>   return false;
>   }
> 
>   sqlite3_free( sqlcmd );
>   sqlite3_close(db);
>   return true;
> 
> }
> 
> 
> WenYuan
> 
> 
> 
>   
> 



[sqlite] Database open problem

2008-04-24 Thread Yang WenYuan
Hi,

There is a very strange problem when I work with the sqlite-ce database.

I put the database file and my application program together in the same folder. 
I call sqlite3_open() and the database opens properly. I call sqlite3_step() 
and it returns SQLITE_ROW, which means it found the record in the database.

However, if I put the database file on another disk, a problem occurs. I call 
sqlite3_open() and the database still opens properly. However, when I call 
sqlite3_step() it returns SQLITE_DONE, which means it cannot find the record 
in the database.

The database file is exactly the same; the only difference is where it is 
stored. I wonder whether this is a limitation of SQLite on WinCE?

Any comments? Thanks! The code is as follows:

{
    sqlite3 *db;
    char *zErrMsg = 0;
    int rc, length;
    char *sqlcmd;
    sqlite3_stmt *stat;
    TCHAR MsgChar[ 256 ], MsgTmpChar[ 256 ];

    /* Put in the same folder, it works properly. */
    /* rc = sqlite3_open( "initprint.db", &db ); */

    /* Put in a different folder, it does not work properly. */
    rc = sqlite3_open( "\\FlashDisk\\System\\initprint.db", &db );
    if( rc )
    {
        sqlite3_close( db );
        return false;
    }

    sqlcmd = sqlite3_mprintf( "SELECT * FROM PRINT WHERE TAGTYPE=%d AND BARCODE='%s';",
                              intTagType, chrBarCode );
    rc = sqlite3_prepare( db, sqlcmd, -1, &stat, 0 );
    if( rc != SQLITE_OK )
    {
        mbstowcs( MsgTmpChar, sqlite3_errmsg( db ), strlen( sqlite3_errmsg( db ) ) );
        wsprintf( MsgChar, _T( "SQL error: %s\n" ), MsgTmpChar );
        MessageBox( MsgChar, _T( "Error Info" ), MB_OK );
        sqlite3_free( sqlcmd );
        sqlite3_close( db );
        return false;
    }

    rc = sqlite3_step( stat );
    if( rc != SQLITE_ROW )
    {
        mbstowcs( MsgTmpChar, sqlite3_errmsg( db ), strlen( sqlite3_errmsg( db ) ) );
        wsprintf( MsgChar, _T( "SQL error: %s\n" ), MsgTmpChar );
        MessageBox( MsgChar, _T( "Error Info" ), MB_OK );
        sqlite3_finalize( stat );  /* finalize before close, or close returns SQLITE_BUSY */
        sqlite3_free( sqlcmd );
        sqlite3_close( db );
        return false;
    }
    else
    {
        const void *printdata = sqlite3_column_blob( stat, 2 );
        length = sqlite3_column_bytes( stat, 2 );
        memcpy( buffer, printdata, length );
        buffer[ length ] = 0x00;
        *len = length;
    }

    rc = sqlite3_finalize( stat );
    if( rc != SQLITE_OK )
    {
        mbstowcs( MsgTmpChar, sqlite3_errmsg( db ), strlen( sqlite3_errmsg( db ) ) );
        wsprintf( MsgChar, _T( "SQL error: %s\n" ), MsgTmpChar );
        MessageBox( MsgChar, _T( "Error Info" ), MB_OK );
        sqlite3_free( sqlcmd );
        sqlite3_close( db );
        return false;
    }

    sqlite3_free( sqlcmd );
    sqlite3_close( db );
    return true;
}


WenYuan



  
