Hello Simon,
Code looks like this:
/* move content */
sqlite3_backup *pBackup;
pBackup = sqlite3_backup_init(destDBHandle, "main", sourceDBHandle,
"main");
if (pBackup)
{
    int sqlErrno;
    if ((sqlErrno = sqlite3_backup_step(pBackup, -1)) != SQLITE_DONE)
On 26 Jun 2014, at 7:42am, Vivek Ranjan viveksar...@gmail.com wrote:
Code looks like this:
Thanks. I was wondering whether you called _step() with a strange value, but
you're calling it with -1, which seems to be the best thing to do in your case.
And I don't see anything else wrong with your
On 25 Jun 2014, at 9:14pm, Vivek Ranjan viveksar...@gmail.com wrote:
sqlite3_backup_init()
http://www.sqlite.org/c3ref/backup_finish.html#sqlite3backupinit
sqlite3_backup_step()
http://www.sqlite.org/c3ref/backup_finish.html#sqlite3backupstep
sqlite3_backup_finish()
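For anyone following along, here is a minimal sketch of the same three-call backup sequence, written with Python's stdlib sqlite3 module rather than the C API (its Connection.backup() wraps sqlite3_backup_init()/_step()/_finish() internally); the table and variable names are invented for illustration:

```python
import sqlite3

# Source database with some sample content.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x INTEGER)")
src.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
src.commit()

# Destination database; backup() copies every page in one pass,
# like calling sqlite3_backup_step(pBackup, -1) in C.
dest = sqlite3.connect(":memory:")
src.backup(dest)

print(dest.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # → 3
```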
On 30 Mar 2010, at 9:02am, Akbar Syed wrote:
Unfortunately, I could not think about any other option than to keep
the devices attached all the time.
I hope you don't have to handle many attachable devices, because SQLite can't
handle more than 30 simultaneous ATTACHes. See section 11 of
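As an illustration of the ATTACH mechanism under discussion, here is a sketch (file and table names invented) of attaching a per-device database file and querying it through one connection; the ceiling on simultaneous attaches is the compile-time SQLITE_MAX_ATTACHED setting:

```python
import os
import sqlite3
import tempfile

# A stand-in for one device's database file.
path = os.path.join(tempfile.mkdtemp(), "device.db")
dev = sqlite3.connect(path)
dev.execute("CREATE TABLE readings (v INTEGER)")
dev.execute("INSERT INTO readings VALUES (42)")
dev.commit()
dev.close()

# The main connection ATTACHes the device database under a schema name
# and can then query it alongside its own tables.
main = sqlite3.connect(":memory:")
main.execute("ATTACH DATABASE ? AS dev1", (path,))
print(main.execute("SELECT v FROM dev1.readings").fetchone()[0])  # → 42
```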
On 26 Mar 2010, at 10:47am, Akbar Syed wrote:
Unfortunately, my application restricts me to using independent
databases rather than a single database,
as each database exists on a different device and contains the info of
that device.
Multiple devices are allowed to connect to my
On Thu, Mar 25, 2010 at 05:22:04PM +0100, Akbar Syed scratched on the wall:
I have been trying to improve the performance and memory usage for my
application whereby i have maximum of 30 databases attached. In total I have
31 databases with 30 databases attached to the first one. Each database
On Sun, 4 Oct 2009, Simon Slavin wrote:
To: General Discussion of SQLite Database sqlite-users@sqlite.org
From: Simon Slavin slav...@hearsay.demon.co.uk
Subject: Re: [sqlite] SQLite performance with lots of data
On 4 Oct 2009, at 6:11pm, Cory Nelson wrote:
On Fri, Oct 2, 2009 at 12:34 PM
On 5 Oct 2009, at 8:02am, Keith Roberts wrote:
On Sun, 4 Oct 2009, Simon Slavin wrote:
But note that the fields of the row are stored in (more or less) a
list. So accessing the 20th column takes twice (-ish) as long as
accessing the 10th column. If you make a table with 100 columns it
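A sketch of the practical consequence of the point above (column names invented): since SQLite decodes a row's fields sequentially, selecting only the columns you need, and putting the most frequently read columns first in the schema, avoids walking past dozens of trailing fields.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# A deliberately wide 100-column table.
cols = ", ".join(f"c{i} INTEGER" for i in range(100))
con.execute(f"CREATE TABLE wide ({cols})")
con.execute(f"INSERT INTO wide VALUES ({', '.join(str(i) for i in range(100))})")

# Reads only the first field of the record instead of all 100,
# which is cheaper than SELECT * for wide rows.
print(con.execute("SELECT c0 FROM wide").fetchone()[0])  # → 0
```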
On Fri, Oct 2, 2009 at 12:34 PM, Cory Nelson phro...@gmail.com wrote:
On Fri, Oct 2, 2009 at 9:45 AM, Francisc Romano fran...@gmail.com wrote:
Wow. I did not expect such a quick answer...
Is there somewhere I can read exactly how fast and how big databases SQLite
can take, please?
SQLite
On 4 Oct 2009, at 6:11pm, Cory Nelson wrote:
On Fri, Oct 2, 2009 at 12:34 PM, Cory Nelson phro...@gmail.com
wrote:
On Fri, Oct 2, 2009 at 9:45 AM, Francisc Romano fran...@gmail.com
wrote:
Wow. I did not expect such a quick answer...
Is there somewhere I can read exactly how fast and
On Fri, Oct 2, 2009 at 11:42 AM, Francisc Romano fran...@gmail.com wrote:
Hello!
I am not entirely certain this is the right way to proceed, but I haven't
been able to find the appropriate official SQLite forum (if one exists).
I want to create a rather complex AIR application that will have
Wow. I did not expect such a quick answer...
Is there somewhere I can read exactly how fast and how big databases SQLite
can take, please?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
On Fri, Oct 2, 2009 at 11:45 AM, Francisc Romano fran...@gmail.com wrote:
Wow. I did not expect such a quick answer...
Is there somewhere I can read exactly how fast and how big databases SQLite
can take, please?
See http://www.sqlite.org/limits.html for how big. You will have to
do your own
Francisc Romano wrote:
how big databases SQLite can take, please?
Someone told me recently they have 37GB and 66 million rows in their data
set. Another user is using the virtual table functionality together with
synthetic indices to optimise
Very good idea! Thank you!
On Fri, Oct 2, 2009 at 9:45 AM, Francisc Romano fran...@gmail.com wrote:
Wow. I did not expect such a quick answer...
Is there somewhere I can read exactly how fast and how big databases SQLite
can take, please?
SQLite uses a b+tree internally, which is logarithmic in complexity.
Every time
the hard disk is shared, so it is a critical resource, and 100
processes doesn't seem realistic on a single processor, dual core or not.
So I can understand your results; I find them not too bad ...
Cheers,
Sylvain
On Thu, May 28, 2009 at 12:38 AM, zhrahman rahman_zao...@yahoo.com
So yes the hard disk is shared. I tried to even load the database in memory.
It is still horribly slow. I want to understand this. If I load the database
in memory, how can I make the memory sharable among 100 processes? I am
running in a quad-core environment. So my goal here is to load the
A few other bits of info:
I am running it on Linux. So to make the long story short, kindly suggest
how to properly have the database shared in memory among n number of
processes, so they can execute select operations (read only, no updates on the
database) efficiently.
zhrahman wrote:
A few other bits of info:
I am running it on Linux. So to make the long story short, kindly suggest
how to properly have the database shared in memory among n number of
processes, so they can execute select operations (read only, no updates on the
database) efficiently.
Multiprocessing
On Thu, May 28, 2009 at 10:53:34AM -0700, zhrahman scratched on the wall:
Few other info
I am running it on Linux. So to make the long story short, kindly suggest
how to properly have the database shared in memory among n number of
processes.
You can't. :memory: databases cannot be
Subject: Re: [sqlite] SQlite performance on multi process env
Hello, Zhrahman,
Regarding: ... kindly suggest how to properly have the database
shared in memory among n number of processes. So they can execute select
operations (read only, no updates on the
database) efficiently
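The usual workaround, sketched below with invented names: a :memory: database is private to the connection that created it, but a database file (placed on tmpfs such as /dev/shm on Linux, if you want RAM speed) can be opened read-only by any number of processes at once.

```python
import os
import sqlite3
import tempfile

# One writer process creates and populates the database file.
path = os.path.join(tempfile.mkdtemp(), "shared.db")
w = sqlite3.connect(path)
w.execute("CREATE TABLE t (x INTEGER)")
w.execute("INSERT INTO t VALUES (7)")
w.commit()
w.close()

# Two independent read-only connections, standing in for two of the
# n reader processes; read-only mode avoids any write locking.
r1 = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
r2 = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
print(r1.execute("SELECT x FROM t").fetchone()[0],
      r2.execute("SELECT x FROM t").fetchone()[0])  # → 7 7
```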
On Wed, 08 Apr 2009 23:17:02 +0200, Florian Nigsch
f...@nigsch.eu wrote:
Hi all,
I have been trying to implement a couple of things in SQLite because I
only need it for myself (so no concurrency issues here).
I am working on Arch Linux (uname: 2.6.28-ARCH #1 SMP PREEMPT Sun Mar
8 10:18:28
at the schema output.
Daniel
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Griggs, Donald
Sent: Wednesday, December 03, 2008 3:51 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite performance woe
Hi again, Daniel,
So I guess you're
Hi again, Daniel,
So I guess you're still having certain queries that take about 200x
longer than with your custom code, right?
There's nothing magical about sqlite, so it's not surprising that code
customized for an application can outperform a generalized SQL engine,
but a factor of 200 does
I am not using the amalgamation version of the source, as I
have my own VFS implementations for two of the platforms I work with,
based on the original win_os.c VFS, and the amalgamation does not
provide the necessary header files (os_common.h and sqliteInt.h) to
make VFS integration
Hi Daniel,
Regarding:
What I'd like to know is if there is anything we can do with
our queries, SQLite set-up or library configuration to improve the
speed?
Unless indices would be inappropriate, did you mention whether you've
defined any indices, and does EXPLAIN QUERY PLAN show that
Subject: Re: [sqlite] SQLite performance woe
Hi Daniel,
Regarding:
What I'd like to know is if there is anything we can do with
our queries, SQLite set-up or library configuration to improve the
speed?
Unless indices would be inappropriate, did you mention whether you've
defined any
, Donald
Sent: Tuesday, December 02, 2008 9:52 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite performance woe
Hi Daniel,
Regarding:
What I'd like to know is if there is anything we can do with
our queries, SQLite set-up or library configuration
] On Behalf Of John Stanton
Sent: Tuesday, December 02, 2008 2:20 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite performance woe
Databases work by using indices. A search for a row in a table of 1
million rows goes from having to do as many as a million row reads
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Brown, Daniel
Sent: Tuesday, December 02, 2008 5:03 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite performance woe
Hello Donald & Others,
I have primary keys set for each
-Original Message-
Subject: Re: [sqlite] SQLite performance woe
I may be confused, but indices sound similar to what I understand primary
keys to do; I already have primary keys on each table. Unless I'm mistaken
as to what primary keys are? From your explanation I guess I'm slightly
of SQLite Database
Subject: Re: [sqlite] SQLite performance woe
Hello Donald & Others,
I have primary keys set for each of the tables but no indices (that I am
aware of), as I simply converted the data from our existing database
system, which does not support indices. As my current system only
Of Griggs, Donald
Sent: Tuesday, December 02, 2008 9:52 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite performance woe
Hi Daniel,
Regarding:
What I'd like to know is if there is anything we can do with
our queries, SQLite set-up or library configuration to improve
: Tuesday, December 02, 2008 5:06 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite performance woe
To efficiently execute the SQL SELECT * FROM mytab WHERE myid = '1234'
you must have an index on the myid column. Each row has an index
which uses a rowid as a key
Brown, Daniel wrote:
I am not using the amalgamation version of the source, as I have my own
VFS implementations for two of the platforms I work with, based on the
original win_os.c VFS, and the amalgamation does not provide the
necessary header
On 4/17/07, Alberto Simões [EMAIL PROTECTED] wrote:
On 4/17/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Alberto Simões [EMAIL PROTECTED] wrote:
Consider the following database schema:
CREATE TABLE tetragrams (word1 INTEGER, word2 INTEGER, word3 INTEGER,
word4
On Wed, 2007-04-18 at 11:06 +0100, Alberto Simões wrote:
On 4/17/07, Alberto Simões [EMAIL PROTECTED] wrote:
On 4/17/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Alberto Simões [EMAIL PROTECTED] wrote:
Consider the following database schema:
CREATE TABLE
SELECT * FROM tetragrams
WHERE word1 = 'x' AND word2||'' = 'y'
ORDER BY occs;
Better as
SELECT * FROM tetragrams
WHERE word1 = 'x' AND +word2 = 'y'
ORDER BY occs;
See http://www.sqlite.org/optoverview.html section 6.
Hugh
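A runnable sketch of the rewrite Hugh describes, using Python's stdlib sqlite3 module and sample integer values: both the `||''` trick and unary `+` stop word2 from constraining an index, but `+` leaves the operand's type affinity alone, which is why the optimization-overview document recommends it. The result sets are identical either way.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tetragrams (
    word1 INTEGER, word2 INTEGER, word3 INTEGER, word4 INTEGER,
    occs INTEGER, PRIMARY KEY (word1, word2, word3, word4))""")
con.execute("INSERT INTO tetragrams VALUES (1, 2, 3, 4, 10)")

# Original form: word2||'' converts the value to TEXT.
a = con.execute(
    "SELECT * FROM tetragrams WHERE word1 = 1 AND word2||'' = '2' "
    "ORDER BY occs").fetchall()
# Recommended form: unary + hides word2 from the optimizer
# without changing its affinity.
b = con.execute(
    "SELECT * FROM tetragrams WHERE word1 = 1 AND +word2 = 2 "
    "ORDER BY occs").fetchall()
print(a == b)  # → True
```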
On Tue, 2007-04-17 at 11:53 +0100, Alberto Simões wrote:
Hi
I've found SQLite faster than MySQL and Postgres for small/medium
databases. Now I have big ones and I really do not want to change, but
I have some performance issues.
Consider the following database schema:
CREATE TABLE
On 4/17/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Alberto Simões [EMAIL PROTECTED] wrote:
Consider the following database schema:
CREATE TABLE tetragrams (word1 INTEGER, word2 INTEGER, word3 INTEGER,
word4 INTEGER, occs INTEGER, PRIMARY KEY (word1, word2, word3,
Alberto Simões [EMAIL PROTECTED] wrote:
Consider the following database schema:
CREATE TABLE tetragrams (word1 INTEGER, word2 INTEGER, word3 INTEGER,
word4 INTEGER, occs INTEGER, PRIMARY KEY (word1, word2, word3,
word4));
CREATE INDEX tet_b ON tetragrams (word2);
CREATE
Ken wrote:
This is a write only app. 100% insert.
Ken,
Why bother putting the data into a database if you are never going to
read it back out? Other formats, such as a flat text file are much
better for logs or archives.
If, in fact, you will be reading the data at some point then
Dennis,
Yes the data will be read later by down stream processing.
I do have the option of either putting the data into sqlite at the start
(when it's read) or putting it into a flat file and then later loading it into a
sqlite db via a downstream job.
A great deal of the data
Ken wrote:
I do have the option of either putting the data into sqlite at the start (when it's read) or putting it into a flat file and then later loading it into a sqlite db via a downstream job.
A great deal of the data columns are simple numeric values, and that's where sqlite
At 15:51 16/03/2007, you wrote:
Dennis,
Yes the data will be read later by down stream processing.
I do have the option of either putting the data into sqlite at
the start (when it's read) or putting it into a flat file and then
later loading it into a sqlite db via a downstream job.
Dennis's insight is apt. Just as you would eliminate common subexpressions
in your programs (or have an optimizer do it for you), it is
a good idea to do the same with systems.
Writing numbers in an endian-agnostic manner can easily be done in your
flat file.
Dennis Cote wrote:
Ken wrote:
Ken wrote:
I'm looking for suggestions on improving performance of my sqlite application.
Here are system timings for a run where the sqlite db has been replaced with a flat file output.
real 0m1.459s
user 0m0.276s
sys 0m0.252s
This is a run when using sqlite as the output
Regarding:
Creation of flat file takes 1.5 secs vs 3 seconds to create sqlite db.
Flat file is 13 MB, sqlite db is 11 MB.
Any ideas how to get the sqlite output timings to a more respectable
level would be appreciated.
I may be way off base if I'm not understanding correctly, but how can
To answer your question:
Yes I can use a flat file at this stage, but eventually it needs to be
imported into some type of structure. So to that end I decided early on to use
sqlite to write the data out.
I was hoping for better performance. The raw I/O to read the data and process
is
ok my bad for poor wording...
I'll try with Synchronous off. I may also try disabling the journal file since
I can easily recreate the data if it is not successful.
Thanks,
Ken
Griggs, Donald [EMAIL PROTECTED] wrote: Regarding:
Creation of flat file takes 1.5 secs vs 3 seconds to
Scott,
The whole job is wrapped in an explicit transaction.
Variables are bound and statements prepared only once, using reset.
This is a write only app. 100% insert.
Ken
Scott Hess [EMAIL PROTECTED] wrote: Are you using explicit transactions at
all? If not, as a quick test,
put
There are no free lunches. When Sqlite stores your data item it not
only writes it into a linked list of pages in a file but also inserts at
least one key into a B-Tree index. It does it quite efficiently, so what
you are seeing is the inevitable overhead of storing the data in a
structured
Hello,
IIRC (it was a while ago), one way to speed up insertion for large
data sets is to drop the indexes, do the inserts (wrapped around a
transaction) and then rebuild the indexes. For smaller data sets, the
drop/rebuild indexes solution doesn't make sense because the time it
takes to
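The drop-insert-rebuild pattern described above can be sketched as follows (table, index, and row contents invented), with the bulk insert wrapped in a single transaction:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE logs (ts INTEGER, msg TEXT)")
con.execute("CREATE INDEX logs_ts ON logs (ts)")

rows = [(i, f"event {i}") for i in range(10_000)]

con.execute("DROP INDEX logs_ts")                 # 1. drop the index
with con:                                         # 2. bulk insert in one transaction
    con.executemany("INSERT INTO logs VALUES (?, ?)", rows)
con.execute("CREATE INDEX logs_ts ON logs (ts)")  # 3. rebuild the index once

print(con.execute("SELECT COUNT(*) FROM logs").fetchone()[0])  # → 10000
```

As the poster notes, for small data sets the drop/rebuild overhead can outweigh the saving, so it only pays off for large batches.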
Donald,
I set the PRAGMA synchronous= OFF and here are the results:
real 0m2.258s
user 0m1.736s
sys  0m0.168s
--
Pragma synchronous= NORMAL
real 0m2.395s
user 0m1.520s
sys  0m0.128s
Pragma synchronous= FULL
real 0m3.228s
user
Tito,
There are no indices built besides the default ones. Hmm, maybe I should try
dropping the primary keys. I'll give that a try as well, GOOD idea!
The entire batch of inserts (about 8 tables) is done in a single transaction.
As an Oracle DBA, I'm pretty familiar with tuning.
Ken [EMAIL PROTECTED] wrote:
I should be able to run with synchronous=off, since
the application maintains state in a separate DB elsewhere.
Just to clarify the implications where, if you run with
synchronous=off and you take a power failure or an OS
crash in the middle of a
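The trade-off being weighed here can be sketched like this (file name invented): synchronous=OFF hands fsync decisions to the OS for speed, at the cost of possible corruption after a power failure or OS crash, while an application crash alone remains safe.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "fast.db")
con = sqlite3.connect(path)

# OFF = don't wait for data to reach the platter; fast but not
# power-failure safe. NORMAL and FULL trade speed for durability.
con.execute("PRAGMA synchronous = OFF")
con.execute("CREATE TABLE t (x)")
con.execute("INSERT INTO t VALUES (1)")
con.commit()

# PRAGMA synchronous reports 0 for OFF, 1 for NORMAL, 2 for FULL.
print(con.execute("PRAGMA synchronous").fetchone()[0])  # → 0
```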
Griggs, Donald wrote on 03/15/2007 01:49:30 PM:
Regarding:
Creation of flat file takes 1.5 secs vs 3 seconds to create sqlite db.
Flat file is 13 MB, sqlite db is 11 MB.
Any ideas how to get the sqlite output timings to a more respectable
level would be appreciated.
I think you may
Tito,
Its even better now!
Synchronous=normal and No primary keys (except 1 table) for auto increment.
real 0m1.975s
user 0m1.436s
sys  0m0.140s
Vs flat file test case:
real 0m0.862s
user 0m0.228s
sys  0m0.188s
This is now very respectable.
Thanks,
Ken
DRH,
Thanks for your valuable insight.
When the DB is closed while in synchronous mode, is it then persistent at the
OS level, even through power failures etc.?
[EMAIL PROTECTED] wrote: Ken wrote:
I should be able to run with synchronous=off. Since
the application maintains state
Ken [EMAIL PROTECTED] wrote:
When the DB is closed while in synchronous mode,
is it then persistent at the OS level, even through power failures etc.?
You don't have to close the DB. All you have to do is
commit. Before the commit finishes, all of your data
is guaranteed to be on oxide.**
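The guarantee being described can be demonstrated with a small sketch (file name invented): with the default synchronous setting, once the commit returns, the data survives closing, or losing, the connection entirely.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "durable.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (x)")
con.execute("INSERT INTO t VALUES (99)")
con.commit()   # data is durable from this point on; no close needed
con.close()

# A fresh connection (or a restarted machine) sees the committed row.
reopened = sqlite3.connect(path)
print(reopened.execute("SELECT x FROM t").fetchone()[0])  # → 99
```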
Hi Ken,
you can get the exact insert speed of the flatfile.dat:
- dump your data into the flat file
- create a virtual table implementation for your flat file
http://www.sqlite.org/cvstrac/wiki?p=VirtualTables1150734307
- and use it from SQLite
http://www.sqlite.org/lang_createvtab.html
Bill King wrote:
Roger Binns wrote:
I'm sorry for being so harsh, and I know I'm not winning any friends
here,
So far no one has agreed with you :-)
This would be incorrect. The correct statement is: so far no one has
vocally agreed with you.
If people didn't agree, this whole once a
@sqlite.org
Subject: Re: [sqlite] sqlite performance, locking threading
Roger Binns wrote:
I'm sorry for being so harsh, and I know I'm not winning any friends
here,
So far no one has agreed with you :-)
This would be incorrect. The correct statement is: so far no one has
vocally
For anyone who is interested i have created a standalone test case
which demonstrates the threading behaviour that i had, or as close as
i can get it. Feel free to use the code for whatever purposes you see
fit.
It will compile on linux and windows, and comes with 4 versions of the
sqlite
Emerson Clarke wrote:
Fix the out of date documentation
The wiki is there and open to all.
I look forward to reading your additions to it.
Gerry
-
To unsubscribe, send email to [EMAIL PROTECTED]
Gerry Snyder wrote:
The wiki is there and open to all.
I look forward to reading your additions to it.
To be fair, only some of the documentation is in the wiki. The
remainder is generated. For example you can't edit any of the pages
listed
Emerson Clarke [EMAIL PROTECTED] wrote:
The problem I had was with sqlite not being compatible with the simple
design that I wanted. I did try several alternate designs, but only
as a way of working around the problem I had with sqlite. It took a
long time but eventually I managed to get
Richard,
I have to admit I am a little disappointed. As the primary author of
the software, I would have thought that you would have a good
understanding of what the thread-safety characteristics of your own
API were.
Suggesting that suppressing the safety checks will result in random
and non
Emerson Clarke wrote:
I have to admit I am a little disappointed. As the primary author of
the software, I would have thought that you would have a good
understanding of what the thread-safety characteristics of your own
API were.
He does! It is
Roger,
Of course you can test threading behaviour; yes, it's not exactly
repeatable, but under most circumstances and with enough test cases you
can catch the problems.
I don't think sqlite is such a large and complicated piece of software
that it would be impossible to reproduce such errors.
Emerson,
I agree with you somewhat. Not 100% convinced, but I, like you, am a little
disappointed in how sqlite handles thread safety and multiple connections. Even
the test_server.c module is not concurrent, as it serializes all processing
to a single thread; this is not concurrent processing.
Ken,
Thanks for your comments. I have coded and tested a module just like
test_server.c, and by disabling the safety checks I have also been able
to code and test an example which uses a single connection, single
transaction, single table with up to 50 threads doing
insert/update/delete with no
Roger Binns wrote:
I'm sorry for being so harsh, and I know I'm not winning any friends
here,
So far no one has agreed with you :-)
This would be incorrect. The correct statement is: so far no one has
vocally agreed with you.
If people didn't agree, this whole once a month people
Emerson Clarke wrote:
The indexing process works like this.
1.) Open a document and parse its contents.
2.) Look up records in the first database based on the contents of the
document, updating records where appropriate and inserting new ones.
3.) Transforming the document based on what was
Bill,
Thanks for the description, that's pretty much how I designed the
index, but with a few modifications. The filesystem becomes the tree
structure, which is indexed by a hash of the original document URL. It
works like a big hashtable, so it's quite scalable.
Sorry if this has been posted
ownership of the
queue, except maybe the main thread.
Michael
-Original Message-
From: Emerson Clarke [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 3 January 2007 00:57
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] sqlite performance, locking threading
Nico,
I have implemented all
On Tue, Jan 02, 2007 at 11:56:42PM +, Emerson Clarke wrote:
The single connection multiple thread alternative apparently has
problems with sqlite3_step being active on more than one thread at the
same moment, so cannot easily be used in a safe way. But it is by far
the fastest and
: Wednesday, 3 January 2007 15:14
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] sqlite performance, locking threading
Michael,
I'm not sure that atomic operations would be a suitable alternative.
The reason why I'm using events/conditions is so that the client thread
blocks until the server thread has
Nicholas,
My apologies, you're right, that explanation had been given.
But I didn't actually take it seriously; I guess I found it hard to
believe that it being the easier option was the only reason why this
limitation was in place.
If this is the case, then surely the fix is simple. Given that
On Thu, Jan 04, 2007 at 12:50:01AM +, Emerson Clarke wrote:
My apologies, you're right, that explanation had been given.
OK.
But I didn't actually take it seriously; I guess I found it hard to
believe that it being the easier option was the only reason why this
limitation was in place.
On Sat, Dec 30, 2006 at 03:34:01PM +, Emerson Clarke wrote:
Technically sqlite is not thread safe. [...]
Solaris man pages describe APIs with requirements like SQLite's as
MT-Safe with exceptions and the exceptions are listed in the man page.
That's still MT-Safe, but the caller has to
Nico,
I have implemented all three strategies (thread-specific connections,
single connection with multiple threads, and a single-thread server with
multiple client threads).
The problem with using thread-specific contexts is that you can't have
a single global transaction which wraps all of those
Emerson Clarke [EMAIL PROTECTED] wrote:
Firstly, can I clarify what you mean regarding "the same moment". Do you
mean that no two threads can be executing the call, or that no two
threads can be in the middle of stepping through a series of results
using the step function (assuming there is
Emerson Clarke wrote:
| I have deliberately tried to avoid giving too much detail on the
| architecture of the index, since that was not the point and I didn't
| want to end up debating it.
I don't want to debate your index architecture either :-).
Roger,
My original question was in fact not a statement. I did not want
sqlite to work differently. Rather the opposite: sqlite already works
differently to the way I, and probably a lot of users, assume that it
would. So all I wanted to know was why that is the case.
It seemed to me that
Emerson Clarke wrote:
| I am left to assume that all other locking mechanisms like ipc and
| files have already been tried and been found wanting. I also assume
| that priority has been given to making sqlite operate across network
| boundaries
Emerson Clarke [EMAIL PROTECTED] wrote:
It seemed to me that making a library which only functioned on a per
thread basis was something that you would have to do deliberately and
by design.
I'm still trying to understand what your complaint is.
--
D. Richard Hipp [EMAIL PROTECTED]
Richard,
My complaint, if you want to call it that, was simply that there are
seemingly artificial constraints on what you can and can't do across
threads.
If I have a linked list, I can use it across threads if I want to,
provided that I synchronise operations in such a way that the list
does
Emerson Clarke wrote:
If I have a linked list, I can use it across threads if I want to,
provided that I synchronise operations in such a way that the list
does not get corrupted.
And of course you also have to know about memory barriers and compiler
re-ordering. That is highly dependent on
Emerson Clarke [EMAIL PROTECTED] wrote:
Richard,
My complaint, if you want to call it that, was simply that there are
seemingly artificial constraints on what you can and can't do across
threads.
If I have a linked list, I can use it across threads if I want to,
provided that I
Roger,
I think sqlite suffers somewhat from a bit of an identity crisis.
Whilst it is both a library and a piece of code which you embed in a
project, it is often talked about as though it is some external
component.
Technically sqlite is not thread safe. Just because the library has
explicitly
Emerson Clarke [EMAIL PROTECTED] wrote:
Even on the
platforms where a single sqlite3 * structure can be used on multiple
threads (provided it is not at the same time), it is not possible to
have a transaction which works across these threads.
I beg to differ. What makes you think this does
Richard,
Ok, I'm pretty clear on the file locking being the cause of the
problems with the sqlite3 * structures, but thanks for confirming it.
I understand that on platforms that don't have this issue it's not a
problem.
But why then can I not have a single transaction wrapping a single
connection
Emerson Clarke [EMAIL PROTECTED] wrote:
But why then can I not have a single transaction wrapping a single
connection which is used within multiple threads, obviously not at the
same time.
You can. What makes you think you can't?
--
D. Richard Hipp [EMAIL PROTECTED]
Richard,
Are you sure we are not just getting into semantic knots here?
Do we have the same definition of "at the same time"? I mean
concurrently, so that both threads use the same sqlite3 * structure,
within mutexes. Each query is allowed to complete before the other one
starts, but each
, 2006 9:34 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] sqlite performance, locking threading
Roger,
I think sqlite suffers somewhat from a bit of an identity crisis.
Whilst it is both a library and a piece of code which you embed in a
project, it is often talked about as though
Emerson Clarke [EMAIL PROTECTED] wrote:
I have code which creates a transaction on a connection in the parent
thread, then creates several child threads which attempt to use the
same connection and transaction in a synchronised manner. It does not
work, and by all the documentation that I
Emerson Clarke [EMAIL PROTECTED] wrote:
Richard,
Are you sure we are not just getting into semantic knots here?
Do we have the same definition of "at the same time"? I mean
concurrently, so that both threads use the same sqlite3 * structure,
within mutexes. Each query is allowed to
Michael Ruck [EMAIL PROTECTED] wrote:
Richard,
I believe his problem is this:
Each query is allowed to complete before the other one starts, but each
thread may have multiple statements or result sets open.
The open result sets / multiple started statements are causing him
headaches.