Schrum, Allan wrote:
>> -----Original Message-----
>> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dmitri Priimak
>> Sent: Wednesday, November 25, 2009 11:39 AM
>> To: General Discussion of SQLite Database
>>
Simon Slavin wrote:
> On 25 Nov 2009, at 6:09pm, Dmitri Priimak wrote:
>
>
>> 000 6166 6c69 6465 7420 206f 706f 6e65 6420
>> 010 7461 6261 7361 2065 7274 6e61 6173 7463
>> 020 6f69 206e 3632 3a20 6620 6c69 2065 7369
>> 030 6520 636e 7972 7470 64
746f 6120 6420 7461 6261 7361 2065
050 684f 6d20 2e79 5720 2065 6166 6c69 6465
060 7420 206f 6f72 6c6c 6162 6b63 7420 6172
Any explanation for this? I do not believe I have SQLite v2 sitting
anywhere on that computer.
--
Dmitri Priimak
into an EXCLUSIVE lock, which is cleared when the update/write completes;
new and pending SHARED locks are then allowed to proceed. This should mean
that with many processes reading and only one writing there is no need to
use the sqlite3_busy_timeout() function, which is to be used when many
processes contend to write.
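A minimal sketch of the readers-plus-one-writer setup in Python, whose `sqlite3.connect(timeout=...)` parameter maps to `sqlite3_busy_timeout()` in the C API (the file name and table here are illustrative, not from the thread):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "app.db")

# The writer's transaction escalates to an EXCLUSIVE lock at commit time.
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
writer.execute("INSERT INTO t (v) VALUES ('hello')")
writer.commit()

# timeout= maps to sqlite3_busy_timeout(): a reader that hits the
# writer's lock retries for up to 5 seconds instead of failing
# immediately with "database is locked".
reader = sqlite3.connect(path, timeout=5.0)
rows = reader.execute("SELECT v FROM t").fetchall()
print(rows)  # [('hello',)]
```

With a single writer the readers' SHARED locks coexist; the timeout only matters during the brief window in which the writer holds the EXCLUSIVE lock.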
Kees Nuyt wrote:
> On Fri, 14 Aug 2009 07:24:31 -0700, Dmitri Priimak wrote:
>
>> Hi.
>>
>> I have a database with a few simple tables. The database is updated
>> regularly and then distributed to the clients, which only use it for
>> reading. So, concurren
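Since the clients in this scheme never write, they can open the shipped file read-only, which makes it impossible for them to ever take a write lock on it. A sketch using SQLite's URI filenames with `mode=ro` (the table and values are made up for illustration):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "dist.db")

# Build the database centrally, then ship the finished file to clients.
con = sqlite3.connect(path)
con.execute("CREATE TABLE prices (sku TEXT PRIMARY KEY, cents INTEGER)")
con.execute("INSERT INTO prices VALUES ('A1', 199)")
con.commit()
con.close()

# Clients open the shipped copy read-only; mode=ro rejects any attempt
# to write, so reader contention on the client side cannot occur.
client = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
price = client.execute("SELECT cents FROM prices WHERE sku='A1'").fetchone()[0]
print(price)  # 199
```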
Thanks a lot, guys.
I will run a few tests with a dummy database populated with my target number
of rows and will let you know the results. I realize now that I seem to have
some unfounded fear of large files (FOLF) :) Hopefully it will pass.
--
Dmitri Priimak
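The dummy-database test described above can be sketched like this; the row count and schema are placeholders to be scaled toward the real target:

```python
import sqlite3, tempfile, os, time

path = os.path.join(tempfile.mkdtemp(), "dummy.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")

# Bulk-insert inside one transaction; committing per row would be
# orders of magnitude slower because every commit forces a sync.
n = 100_000  # scale this up toward the target row count
con.execute("BEGIN")
con.executemany("INSERT INTO t (payload) VALUES (?)",
                (("row %d" % i,) for i in range(n)))
con.commit()

t0 = time.perf_counter()
count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count, "rows counted in %.3f s" % (time.perf_counter() - t0))
```

Timing representative queries against the populated file, rather than guessing from the file size, is what actually answers the scaling question.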
one is even larger. Counting all rows in all tables, it has
about 5 million rows. In a few years it will grow to about 80 million rows.
So, do you think that SQLite can scale to that level?
--
Dmitri Priimak