the cache size,
multiple connections are now flying. I'm getting insane speeds. SQLite FTW!
Regards.
Werner
pompomJuice wrote:
>
>
> Hello there.
>
> I need some insight into how SQLite's caching works. I have a database
> that
> is quite large (5Gb) sitting on a produc
That sounds like an awesome trick. I will definitely do as you suggest and
decrease cache_size, as even now it does not really seem to help much.
With regard to the memory being volatile and such: that is not really a big
problem for me, as a complete loss of the lookup table is not a
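For anyone following along, the per-connection cache knob being discussed here is PRAGMA cache_size. A minimal sketch using Python's built-in sqlite3 module (an in-memory database stands in for the real file; on a real on-disk database, each connection to the same file holds its own independent cache):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# cache_size is expressed in pages (one page = the database's page_size).
# With many concurrent connections on an IO-taxed server, a smaller
# per-connection cache can reduce memory pressure and thrashing.
conn.execute("PRAGMA cache_size = 500")

# Verify the setting took effect for this connection.
size = conn.execute("PRAGMA cache_size").fetchone()[0]
print(size)
```

In the C API the same statement is simply executed against each `sqlite3*` handle after opening it.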
You need to get things logically working first. RAM drives are great to
> help improve performance where seek times and rotational access
> requirements dictate.
>
>
> pompomJuice <[EMAIL PROTECTED]> wrote:
>
> AArrgh.
>
> That is the one thing that I wont
t me if this is a bad idea!!!
>
>
>
> pompomJuice <[EMAIL PROTECTED]> wrote:
> I suspected something like this, as it makes sense.
>
> I have multiple binaries/different connections ( and I cannot make them
> share a connection ) using this one lookup tab
.
Regards.
Christian Smith-4 wrote:
>
> pompomJuice uttered:
>
>>
>> I suspected something like this, as it makes sense.
>>
>> I have multiple binaries/different connections ( and I cannot make them
>> share a connection ) using this one lookup table and depend
.
Dan Kennedy-4 wrote:
>
> On Tue, 2007-06-19 at 01:06 -0700, pompomJuice wrote:
>> Hello there.
>>
>> I need some insight into how SQLite's caching works. I have a database
>> that
>> is quite large (5Gb) sitting on a production server that's IO is severel
When I pressed "post message" it gave me a "read error: connection reset" or
something, and after the third time I thought I'd restart my browser, only to
see that it did post 3 times!
Hello there.
I need some insight into how SQLite's caching works. I have a database that
is quite large (5Gb) sitting on a production server that's IO is severely
taxed. This causes my SQLite db to perform very poorly. Most of the time my
application just sits there and uses about 10% of a CPU
Thanks.
Igor Tandetnik wrote:
>
> pompomJuice <[EMAIL PROTECTED]> wrote:
>> Basically I am looking for something similar to Oracle's code 1403,
>> where a query returned zero rows.
>
> If a resultset is empty, the very first call to sqlite3_step would
> return SQLITE_DONE, without ever returning SQLITE_ROW.
of the calls
sqlite3_column_text, sqlite3_column_int, sqlite3_column_blob or
sqlite3_column_bytes will perform the best in such a loop.
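Igor's point can be sketched as follows (Python's built-in sqlite3 module is used for illustration; in the C API the equivalent check is the very first sqlite3_step() returning SQLITE_DONE rather than SQLITE_ROW):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k TEXT PRIMARY KEY, v INTEGER)")
conn.execute("INSERT INTO t VALUES ('a', 1)")

# An empty result set is signalled before any column is read: in C the
# first sqlite3_step() returns SQLITE_DONE; Python's wrapper surfaces the
# same thing as fetchone() returning None. No per-column probing needed.
row = conn.execute("SELECT v FROM t WHERE k = 'missing'").fetchone()
print(row is None)
```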
Igor Tandetnik wrote:
>
> pompomJuice <[EMAIL PROTECTED]> wrote:
>> What is the best way to determine that sqlite3_step returned a null
>>
Hello.
What is the best way to determine that sqlite3_step returned a null now? At
the moment the only way I see is checking each select column with
sqlite3_column_bytes and setting the row to null if all of those calls
return 0. Is there maybe a better way? I can't seem to find such an
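For what it's worth, the C API does have a direct answer: sqlite3_column_type(stmt, i) returns SQLITE_NULL for a NULL column, which distinguishes a real NULL from an empty string (both give sqlite3_column_bytes() == 0). A small illustration with Python's built-in sqlite3 module, where SQL NULL maps to None:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k TEXT, v TEXT)")
conn.execute("INSERT INTO t VALUES ('a', NULL)")
conn.execute("INSERT INTO t VALUES ('b', '')")  # empty string, NOT NULL

# sqlite3_column_bytes() is 0 for both rows above, so it cannot tell them
# apart; sqlite3_column_type() (here: the Python value itself) can.
for k, v in conn.execute("SELECT k, v FROM t ORDER BY k"):
    print(k, v is None)
```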
If the column has text affinity, then my
> understanding is that the bound parameter will be converted to text
> prior to execution beginning, a perhaps unnecessary overhead.
>
> --a
>
>
>
> On 4/11/07, pompomJuice <[EMAIL PROTECTED]> wrote:
>>
>>
>
Hi,
I am no expert, but try increasing your btree page size to the default page
size of your storage. I think SQLite defaults to a 1K page size, but I'm sure
you can bump it up to 4K and see if that helps. I work with rather large
databases ( 5-8Gb ) and although increasing my page size from 1K to
happy I
don't have to mess around with the btree anymore, it's a bit complicated.
Thanks for the tip.
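As a footnote to the page-size advice above: PRAGMA page_size only takes effect on an empty database (or after a VACUUM), so it has to be issued before any tables are created. A sketch with Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Must run while the database is still empty; on an existing database the
# new size only applies after a VACUUM. 4096 matches a common filesystem
# block size, which tends to suit IO-bound servers.
conn.execute("PRAGMA page_size = 4096")
conn.execute("CREATE TABLE t (k TEXT PRIMARY KEY, v INTEGER)")

page = conn.execute("PRAGMA page_size").fetchone()[0]
print(page)
```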
drh wrote:
>
> pompomJuice <[EMAIL PROTECTED]> wrote:
>> I could get a maximum of 300-400 lookups per second using
>> a conventional "select * from table where column =
mmm... this is very embarrassing. I will implement the SQL again then.
Clearly I messed up big time.
Thanks for the help.
drh wrote:
>
> pompomJuice <[EMAIL PROTECTED]> wrote:
>> I could get a maximum of 300-400 lookups per second using
>> a conventional "sel
2007-04-09 18:09:36[INFO] INC key = 0
2007-04-09 18:09:36[INFO] cursor stay not next
2007-04-09 18:09:36[INFO] HDR-Offset = `3'
2007-04-09 18:09:36[INFO] Serial type [0]=`21'
2007-04-09 18:09:36[INFO] Serial type [1]=`1'
2007-04-09 18:09:36[INFO] Col[0]=`8400' ( 4 bytes )
2007-04-
Hi.
I am using sqlite in an unusual way. I’m using it as a lookup table which
failed miserably when I realized upon implementation that I could get a
maximum of 300-400 lookups per second using a conventional "select * from
table where column = key" type query. Clearly there was overhead
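Two things commonly explain a few hundred lookups per second with that query shape: no index on the key column (so every lookup scans the table) and re-parsing the SQL on each call. A sketch of both fixes, with a made-up schema, using Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lookup (key TEXT, value INTEGER)")  # hypothetical schema
conn.executemany("INSERT INTO lookup VALUES (?, ?)",
                 [("k%d" % i, i) for i in range(1000)])

# Without an index, "WHERE key = ?" scans the whole table on every lookup.
conn.execute("CREATE INDEX idx_lookup_key ON lookup(key)")

# A parameterized statement is compiled once and reused (sqlite3_prepare
# plus bind/reset in the C API), avoiding per-call parse overhead.
result = conn.execute("SELECT value FROM lookup WHERE key = ?",
                      ("k42",)).fetchone()[0]
print(result)
```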
these
SQLITE_SCHEMA errors.
I hope that answers your questions.
Dan Kennedy-4 wrote:
>
> On Thu, 2007-04-05 at 05:37 -0700, pompomJuice wrote:
>> Yes this is with the 3.3.14 code. I initially got the problem with the
>> 3.3.12
>> code so I just upgraded to the 3.
.
Regards,
Werner
Dan Kennedy-4 wrote:
>
> On Thu, 2007-04-05 at 04:04 -0700, pompomJuice wrote:
>> Ok.
>>
>> I went and re-prepared the statement anyway even though the documentation
>> says it won't work. This trick only works if you finalize the failed
>&
.
Interesting.
pompomJuice wrote:
>
> Hello.
>
> I recently rewrote most of my SQLite wrappers to now ignore SCHEMA errors,
> as these are now automagically handled by the new sqlite3_prepare_v2 API.
> The logic changes were simple so I did not bother to test it and continued
>
Hello.
I recently rewrote most of my SQLite wrappers to now ignore SCHEMA errors,
as these are now automagically handled by the new sqlite3_prepare_v2 API.
The logic changes were simple so I did not bother to test it and continued
with development. Now that development is complete and testing