I established re-try logic to get this to work. :(
On Jun 5, 2009, at 7:08 PM, Rosemary Alles wrote:
>
> I have several (identical) processors accessing a sqlite3 database
> over NFS. I have a busy handler (see below) and use "begin exclusive".
>
> Periodically I get
I have several (identical) processors accessing a sqlite3 database
over NFS. I have a busy handler (see below) and use "begin exclusive".
Periodically I get the following error from sqlite3:
Function:artd_sql_exec_stmt error in stmt:begin exclusive against
database:/wise/fops/ref/artid/l
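The busy handler referred to above ("see below") is cut off in this archive. As a point of reference, a registration of that kind usually looks roughly like the sketch below; the handler name, the 100-attempt limit, and the 100 ms sleep are illustrative assumptions, not the poster's actual code:

    #include <unistd.h>    /* usleep() */
    #include <sqlite3.h>

    /* Hypothetical busy handler: SQLite calls this each time a lock attempt
     * fails.  Returning non-zero asks SQLite to try again; returning 0 makes
     * the pending call give up and report SQLITE_BUSY. */
    static int busy_handler(void *arg, int prior_calls)
    {
        (void)arg;
        if (prior_calls >= 100)     /* give up after roughly 10 seconds */
            return 0;
        usleep(100000);             /* back off for 100 ms              */
        return 1;
    }

    /* Registered once, right after opening the database:              */
    /*     sqlite3_busy_handler(sql_db, busy_handler, NULL);           */

Note that a busy handler only helps where file locking actually works; SQLite's own documentation cautions against keeping database files on NFS because many NFS lock implementations are unreliable.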
Thanks Simon. I have been leaning that way too - considering switching.
-rosemary.
On May 22, 2009, at 5:55 PM, Simon Slavin wrote:
>
> On 23 May 2009, at 12:10am, Rosemary Alles wrote:
>
>> Multiple machines with multiple cpus. [snip]
>
>> The total size of
>> c
will be sufficient?
-rosemary.
On May 22, 2009, at 4:10 PM, Rosemary Alles wrote:
> Dear Olaf,
>
> On May 22, 2009, at 3:21 PM, Olaf Schmidt wrote:
>
>>
>> "Rosemary Alles" schrieb im
>> Newsbeitrag
>> news:f113017d-8851-476d-8e36-56b2c4165...@ipac.calte
Using busy_timeout in itself won't do the job. From what I'm
gathering, I need to roll back the transaction that returned BUSY, reset
the statement, and retry once the database is not BUSY anymore? I
probably also don't want the default BEGIN; I probably want either
IMMEDIATE or EXCLUSIVE. Add
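A minimal sketch of that rollback-reset-retry pattern, assuming a single prepared write statement; the function name, the 200 ms sleep, and the bound on attempts are illustrative assumptions, not anyone's production code:

    #include <unistd.h>    /* usleep() */
    #include <sqlite3.h>

    /* Run one prepared write statement inside BEGIN IMMEDIATE, rolling the
     * transaction back and retrying whenever SQLite reports SQLITE_BUSY. */
    static int step_with_retry(sqlite3 *db, sqlite3_stmt *stmt, int max_tries)
    {
        for (int attempt = 0; attempt < max_tries; attempt++) {
            int rc = sqlite3_exec(db, "BEGIN IMMEDIATE", NULL, NULL, NULL);
            if (rc == SQLITE_BUSY) { usleep(200000); continue; }
            if (rc != SQLITE_OK) return rc;

            rc = sqlite3_step(stmt);
            if (rc == SQLITE_DONE) {
                sqlite3_reset(stmt);
                rc = sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
                if (rc == SQLITE_OK)
                    return SQLITE_OK;
            }

            /* SQLITE_BUSY (or any other failure): undo, reset, maybe retry */
            sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);
            sqlite3_reset(stmt);
            if (rc != SQLITE_BUSY) return rc;   /* a real error: give up    */
            usleep(200000);                     /* locked: wait, then retry */
        }
        return SQLITE_BUSY;
    }

With the default deferred BEGIN, the BUSY can surface at the first write inside the transaction rather than at the BEGIN itself; IMMEDIATE (or EXCLUSIVE) takes the write lock up front, which keeps the retry point in one place.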
Dear Olaf,
On May 22, 2009, at 3:21 PM, Olaf Schmidt wrote:
>
> "Rosemary Alles" schrieb im
> Newsbeitrag
> news:f113017d-8851-476d-8e36-56b2c4165...@ipac.caltech.edu...
>
>> I have a database (simple schema) with two tables on which I
>> p
Hullo all,
Does anyone have solid code examples (in C/C++) or pseudo code of how
to establish re-try code/logic successfully?
I have a database (simple schema) with two tables on which I perform
"concurrent" udpates over NFS (yes, terrible I know - but it's what we
have). Eventually, I get
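One possible shape for such retry logic, sketched under the assumption that each update batch is issued as one "BEGIN IMMEDIATE; ...; COMMIT;" string through sqlite3_exec; the function name, the ten-attempt limit, and the back-off values are made up for illustration:

    #include <unistd.h>    /* usleep() */
    #include <sqlite3.h>

    /* Run a complete "BEGIN IMMEDIATE; ...; COMMIT;" batch, retrying the
     * whole batch with a growing delay while the database is locked. */
    static int exec_batch_with_retry(sqlite3 *db, const char *batch_sql)
    {
        int delay_us = 50000;                       /* start at 50 ms         */
        for (int attempt = 0; attempt < 10; attempt++) {
            int rc = sqlite3_exec(db, batch_sql, NULL, NULL, NULL);
            if (rc != SQLITE_BUSY && rc != SQLITE_LOCKED)
                return rc;                          /* success or a hard error */
            /* A BEGIN may already have run before the lock failed: undo it. */
            sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);
            usleep(delay_us);
            if (delay_us < 1600000)
                delay_us *= 2;                      /* exponential back-off    */
        }
        return SQLITE_BUSY;
    }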
Many thanks to all of you for your responses. Helped a great deal. I
think I'm experiencing a "duh" moment.
:)
-rosemary.
On May 6, 2009, at 10:34 AM, Olaf Schmidt wrote:
>
> "Rosemary Alles" schrieb im
> Newsbeitrag news:AF79A266-B697-4924-
> b304-2b1f
Hullo all,
Run on a single processor, the following query is quite fast:
// Select Statement
sprintf(sql_statements,
        "select lp.%s, lp.%s, lp.%s, lp.%s, pb.%s from %s lp, %s pb "
        "where lp.%s > ? and lp.%s=pb.%s "
        "order by lp.%s, lp.%s, pb.%s",
Hullo all,
Including the following in my c-program:
sql_rc = sqlite3_open_v2(database_name,
                         &sql_db,
                         SQLITE_OPEN_READONLY,
                         NULL);
if (sql_rc != SQLITE_OK) {
    fprintf(stderr, "Function:%s c
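The complete shape of that open-and-check pattern is roughly as follows; the __func__ argument, the message text, and the cleanup behaviour are guesses, since the original fprintf is truncated in this archive:

    sql_rc = sqlite3_open_v2(database_name,
                             &sql_db,
                             SQLITE_OPEN_READONLY,
                             NULL);
    if (sql_rc != SQLITE_OK) {
        /* sqlite3_open_v2() still hands back a handle on failure, so   */
        /* sqlite3_errmsg() can report why the open did not succeed.    */
        fprintf(stderr, "Function:%s cannot open database %s: %s\n",
                __func__, database_name, sqlite3_errmsg(sql_db));
        sqlite3_close(sql_db);
        return sql_rc;
    }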
Thanks Puneet. Those suggestions really help.
-rosemary.
On Apr 7, 2009, at 5:52 PM, P Kishor wrote:
> On Tue, Apr 7, 2009 at 5:18 PM, Rosemary Alles
> wrote:
>> Puneet,
>>
>> As you suggested I have supplied a brief background re: the problem:
>>
>> Ba
a WITH INDEX source_id_index_lp_tbl
Many thanks,
rosemary.
On Apr 7, 2009, at 1:57 PM, P Kishor wrote:
> On Tue, Apr 7, 2009 at 3:45 PM, Rosemary Alles
> wrote:
>> Hullo Puneet,
>>
>> Many thanks for your response.
>>
>> My understanding of a sqlite3 "tra
Is there no difference in behavior between a SINGLE select and several
of them within the context of a transaction?
And yes, each of the many SELECTs has a different WHERE clause.
-rosemary.
On Apr 7, 2009, at 12:38 PM, P Kishor wrote:
> On Tue, Apr 7, 2009 at 2:36 PM, Rosemary Alles
>
p. In what context does one parse the results? Do we not
have a synchronization issue here?
Thanks again,
rosemary
On Apr 6, 2009, at 8:03 PM, Igor Tandetnik wrote:
> "Rosemary Alles" wrote
> in message news:20a6b796-613b-4f5d-bfca-359d6b9fa...@ipac.caltech.edu
>> I want to s
I want to speed up my app. Can I run SELECT statements within the
context of a transaction? If so, how does one handle the query
results? I would assume this cannot be done with sql_prepare,
sql_bind, sql_step? Would I *have* to use sql_exec - such that a
callback can be specified to handle
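For what it's worth, SELECTs inside a transaction can be handled with sqlite3_prepare/bind/step exactly as outside one; sqlite3_exec and a callback are not required. A minimal sketch, reusing the table A columns (source_id, x_pos, y_pos) mentioned elsewhere in this thread and a made-up key value:

    #include <stdio.h>
    #include <sqlite3.h>

    /* Several SELECTs wrapped in one read transaction; each row is handled
     * directly in the step loop, no callback needed. */
    static int run_reads(sqlite3 *db)
    {
        int rc = sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
        if (rc != SQLITE_OK) return rc;

        sqlite3_stmt *stmt = NULL;
        rc = sqlite3_prepare_v2(db,
                "SELECT x_pos, y_pos FROM A WHERE source_id = ?",
                -1, &stmt, NULL);
        if (rc == SQLITE_OK) {
            sqlite3_bind_int(stmt, 1, 42);             /* example key only */
            while ((rc = sqlite3_step(stmt)) == SQLITE_ROW) {
                double x = sqlite3_column_double(stmt, 0);
                double y = sqlite3_column_double(stmt, 1);
                printf("%f %f\n", x, y);
            }
            rc = (rc == SQLITE_DONE) ? SQLITE_OK : rc;
            sqlite3_finalize(stmt);
            /* ...further SELECTs with different WHERE clauses go here... */
        }

        sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
        return rc;
    }

Wrapping many SELECTs in one transaction mainly saves SQLite from acquiring and releasing the read lock for every individual statement; each statement's results are still consumed row by row in its own step loop.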
Hullo everyone,
I'm relatively new to sqlite. I have an optimization problem regarding
an sql query.
Background:
I have a database with two tables, one with -say- 12k rows of data, and
the other with more.
The first table (let's call it A) has the following columns:
source_id, x_pos, y_pos,