Hi all. Thank you all for the replies.
I think I should give you more details about my test-bed.
The machine is an iPAQ h5550 (CPU speed about 400MHz).
Cache size = 500 pages * 2KB = 1MB
Cache size = 50 pages * 2KB = 100KB
Cache size = 25 pages * 2KB = 50KB
The test code is written in C.
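For context, a minimal sketch of how those cache configurations map onto the
cache_size pragma, assuming the SQLite 3 C API and the 2KB page size given
above (database handle setup omitted):

  #include <sqlite3.h>

  static void set_cache_1mb(sqlite3 *db) {
    /* 500 pages * 2KB/page = 1MB; use 50 or 25 for the 100KB
    ** and 50KB configurations listed above. */
    sqlite3_exec(db, "PRAGMA cache_size = 500;", 0, 0, 0);
  }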
Tried to compile 2.8.16 but got the errors below. Any suggestions?
Thanks.
C:\Windows CE Tools\wce211\PDT7200\Samples\sqlite\btree_rb.c(314) :
warning C4013: 'printf' undefined; assuming extern returning int
C:\Windows CE Tools\wce211\PDT7200\Samples\sqlite\vdbe.c(389) : warning
C4013: 'getc' und
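For what it's worth, C4013 just means a call was compiled with no prototype
in scope; printf() and getc() are both declared in stdio.h, so a likely fix
(assuming the Windows CE toolchain ships a usable stdio.h) is to add the
include near the top of the affected files:

  /* At the top of btree_rb.c and vdbe.c: declares printf() and
  ** getc(), which silences the C4013 warnings. */
  #include <stdio.h>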
Hello list,
I've had trouble finding the developer list, so I'm posting here.
Ever wanted recursive triggers in sqlite?
This is a somewhat brute-force, wanna-be hack that allows sqlite triggers to
recurse up to a certain depth. I'd guess that trigger deletion might be
broken, as well as other things depending on
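For comparison, later SQLite releases expose this natively; a minimal sketch,
assuming a build with PRAGMA recursive_triggers and sqlite3_limit() support
(the cap of 8 is an arbitrary example):

  #include <sqlite3.h>

  static void enable_bounded_recursion(sqlite3 *db) {
    /* Let triggers fire other triggers (off by default)... */
    sqlite3_exec(db, "PRAGMA recursive_triggers=ON;", 0, 0, 0);
    /* ...but cap the nesting depth so they cannot recurse forever. */
    sqlite3_limit(db, SQLITE_LIMIT_TRIGGER_DEPTH, 8);
  }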
Any thoughts on this problem? I've been running with this patch and it seems
to deal with the memory leak, but there's still no auto-vacuum. :(
Thanks,
Rick Keiner
On 6/9/06, Rick Keiner <[EMAIL PROTECTED]> wrote:
There seems to be a bug in the memoryTruncate function in the pager. When
it iterates through
"Roger Binns" <[EMAIL PROTECTED]> wrote:
> > The new SCM I (and others) are working on will allow you to
> > quickly and easily download the entire source code/wiki/ticket
> > repository and/or synchronize your local repository with remote
> > changes. So ultimately this will not be an issue. But
Hello C.Peachment,
>It appears that VACUUM is not the only SQL command to cause
>this behaviour. I had done a number of INSERT and UPDATE
>commands before closing the database and attempting to use
>it with PHP. The VACUUM command was not used.
I am surprised to read this. I would be interested i
D.Richard Hipp <[EMAIL PROTECTED]> wrote:
>Perhaps it would be sufficient to take snapshots of the wiki and
>ship that with each release?
Yes, shipping wiki snapshots with each build should be fine.
Even better: a versioned wiki, so users of legacy versions can edit and
improve documentation f
The new SCM I (and others) are working on will allow you to
quickly and easily download the entire source code/wiki/ticket
repository and/or synchronize your local repository with remote
changes. So ultimately this will not be an issue. But all that
is still in the future.
Is this available pu
Christian Smith
<[EMAIL PROTECTED]> wrote:
Igor Tandetnik uttered:
You want to enable sharing. Pass FILE_SHARE_READ | FILE_SHARE_WRITE
as the third parameter.
Surely not FILE_SHARE_WRITE! You don't want other processes writing
the database while you're copying it.
The file is already opened
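For reference, a minimal sketch of the CreateFile call under discussion
(the file name is hypothetical):

  #include <windows.h>

  HANDLE open_db_for_copy(void) {
    /* The third parameter (dwShareMode) must allow both readers and
    ** writers, otherwise CreateFile fails with a sharing violation,
    ** because SQLite already has the file open with write access. */
    return CreateFileA("test.db", GENERIC_READ,
                       FILE_SHARE_READ | FILE_SHARE_WRITE,
                       NULL, OPEN_EXISTING,
                       FILE_ATTRIBUTE_NORMAL, NULL);
  }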
At 10:19 21/06/2006, you wrote:
Hi, all
I'm trying to build a database engine based on the uc/os-II RTOS with my
own customized file system (similar to FAT16, but not exactly the
same). I find that SQLite is a good choice.
SQLite is the best choice; we have it running in a PPC440GX embedded
sys
Insun Kang wrote:
Hi.
I tested big-delete and big-insert performance on a Windows CE
device in various cache size configurations (1MB, 100KB, 50KB).
Inserting 3000 records takes 23s, 43s, and 61s respectively under
each cache size configuration.
However, deleting 1000
You'll need to have some communication between your processes
so one knows that the other has locked the file and the copy can
proceed. I wrote my replication program to be run from cron.
It waits for a while trying to establish the correct lock; you might
try the 'delay and retry' method.
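A minimal sketch of that delay-and-retry idea, assuming the SQLite 3 API on
Windows; the file names and retry budget are hypothetical:

  #include <windows.h>
  #include <sqlite3.h>

  static int copy_db_with_retry(sqlite3 *db) {
    int rc, attempts = 0;
    /* Keep trying to take the write lock before copying. */
    do {
      rc = sqlite3_exec(db, "BEGIN IMMEDIATE;", 0, 0, 0);
      if (rc == SQLITE_BUSY) Sleep(250);        /* delay, then retry */
    } while (rc == SQLITE_BUSY && ++attempts < 40);
    if (rc != SQLITE_OK) return rc;
    CopyFileA("test.db", "backup.db", FALSE);   /* copy under the lock */
    return sqlite3_exec(db, "ROLLBACK;", 0, 0, 0);  /* release the lock */
  }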
True
Ha! That did the trick. I tried with only FILE_SHARE_READ and it didn't
work, but I hadn't tried with both of them.
Thanks a lot!
Ran
On 6/21/06, Igor Tandetnik <[EMAIL PROTECTED]> wrote:
Ran <[EMAIL PROTECTED]> wrote:
> Thanks for your reply. I know that I should lock the file before
> copy
Igor Tandetnik uttered:
Ran <[EMAIL PROTECTED]> wrote:
Thanks for your reply. I know that I should lock the file before
copying it, and the "BEGIN IMMEDIATE" is indeed a nice trick.
However, I think I didn't explain my problem clearly. I would like to
copy that file _without_ using the sqlite l
On 6/21/06, Christian Smith <[EMAIL PROTECTED]> wrote:
Adding to the free list will touch each page at most once, and thus
caching adds no benefit (and there is no loss from a smaller cache).
Inserting may touch each page multiple times, for operations such as
rebalancing the tree. Therefore, a larger cache
On 6/21/06, Ran <[EMAIL PROTECTED]> wrote:
Thanks for your reply. I know that I should lock the file before copying it,
and the "BEGIN IMMEDIATE" is indeed a nice trick.
However, I think I didn't explain my problem clearly. I would like to copy
that file _without_ using the sqlite library (so usi
Ran <[EMAIL PROTECTED]> wrote:
Thanks for your reply. I know that I should lock the file before
copying it, and the "BEGIN IMMEDIATE" is indeed a nice trick.
However, I think I didn't explain my problem clearly. I would like to
copy that file _without_ using the sqlite library (so using the windo
Thanks for your reply. I know that I should lock the file before copying it,
and the "BEGIN IMMEDIATE" is indeed a nice trick.
However, I think I didn't explain my problem clearly. I would like to copy
that file _without_ using the sqlite library (so using the windows API
only).
When I try to do
Insun Kang uttered:
Hi.
I tested big-delete and big-insert performance on a Windows CE
device in various cache size configurations (1MB, 100KB, 50KB).
Inserting 3000 records takes 23s, 43s, and 61s respectively under
each cache size configuration.
However, deleting 1000
Christian Smith wrote:
My project will run in an embedded environment, so I have to
take care of RAM consumption. I have gone through the mailing list
but have not found a description of the minimum RAM usage.
Could anyone tell me how much RAM is needed to run SQLite in an
embedded env
Might be obvious, but make sure you do all your inserts and deletes within a
single transaction, as I believe this has a big impact on performance. It might
bring the insert and delete times closer.
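A minimal sketch of that advice, assuming the SQLite 3 API; the table is
hypothetical, and a real test would use a prepared statement:

  #include <sqlite3.h>

  static void bulk_insert(sqlite3 *db) {
    int i;
    /* One transaction around all 3000 inserts avoids a disk sync per
    ** statement, which otherwise dominates the times reported above. */
    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    for (i = 0; i < 3000; i++) {
      sqlite3_exec(db, "INSERT INTO t(x) VALUES(1);", 0, 0, 0);
    }
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);
  }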
I could not port my code quickly to Cygwin, but a quick investigation shows
me that the lock (wrFlag) is never set back to 1. Which API is supposed to
do this?
Right now, I have the following stack trace:
sqlite3_step
sqlite3VdbeExec
OP_OpenRead
Sqlite
Inserts make sqlite write large amounts of data to disk; deletes make
it (quickly) mark affected pages as unused.
On 6/21/06, Insun Kang <[EMAIL PROTECTED]> wrote:
Hi.
I tested big-delete and big-insert performance on a Windows CE
device in various cache size configurations
( 1MB
On 6/21/06, Ran <[EMAIL PROTECTED]> wrote:
I have an application that uses the sqlite3 API and opens the database file.
While the file is open (for reading) by sqlite3, I would like to copy the
database file (so as to have a copy of the file). I guess I need to place a
shared lock on the file (like sq
Hi.
I tested big-delete and big-insert performance on a Windows CE
device in various cache size configurations (1MB, 100KB, 50KB).
Inserting 3000 records takes 23s, 43s, and 61s respectively under
each cache size configuration.
However, deleting 1000 records among 3000 rec
Ҷ�� uttered:
Hi, all
I'm trying to build a database engine based on the uc/os-II RTOS with my own
customized file system (similar to FAT16, but not exactly the same). I
find that SQLite is a good choice.
I have read the SQLite source code for several days, but I still have no
idea where I shou
On Wed, 21 Jun 2006 09:24:35 +0200, Ralf Junker wrote:
>>1. SQLiteSpy is able to read and work with database files
>>formatted by versions of Sqlite earlier than 3.3.6 but it also
>>appears to change the database format rather than leave
>>it as it was found.
>>
>>I use php version 5.1.4 including
Hi all,
I wonder if someone can guide me on how to open the sqlite3 database file
for reading on Windows XP, while the database is already open via the sqlite3
API.
I have an application that uses the sqlite3 API and opens the database file.
While the file is open (for reading) by sqlite3, I would lik
Ralf Junker <[EMAIL PROTECTED]> wrote:
>
> This is especially valuable for all who need to work with older versions
> of SQLite because their environment has not yet updated to the latest
> release. It can be very unfortunate for them to find updated information
> which might be incorre
[EMAIL PROTECTED] wrote:
Eric Bohlman <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
> > I'm thinking that all documentation is better placed in
> > a wiki.
>
> Hmmm. The problem I see is that it makes access to the full
> documentation contingent on connectivity to a possibly ephemera
>>I'm thinking that all documentation is better placed in
>>a wiki.
>
>Hmmm. The problem I see is that it makes access to the full documentation
>contingent on connectivity to a possibly ephemeral external site.
Quite true. I very much consider it a feature of SQLite that each version ships
Hello C.Peachment,
>1. SQLiteSpy is able to read and work with database files
>formatted by versions of Sqlite earlier than 3.3.6 but it also
>appears to change the database format rather than leave
>it as it was found.
>
>I use php version 5.1.4 including Sqlite version 3.2.8.
>There is a databas
Hi, all
I'm trying to build a database engine based on the uc/os-II RTOS with my own
customized file system (similar to FAT16, but not exactly the same). I find
that SQLite is a good choice.
I have read the SQLite source code for several days, but I still have no idea
where I should begin.
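A minimal sketch of the usual starting point for such a port, assuming a
SQLite version new enough to expose the sqlite3_vfs layer (older trees have
os_*.c files to adapt instead); the VFS name is hypothetical:

  #include <string.h>
  #include <sqlite3.h>

  static sqlite3_vfs my_vfs;

  /* Clone the default VFS under a new name; a real port replaces the
  ** xOpen/xAccess/xDelete/... members with wrappers around the custom
  ** FAT16-like file system's calls. */
  static int register_ucos_vfs(void) {
    sqlite3_vfs *def = sqlite3_vfs_find(0);
    if (def == 0) return SQLITE_ERROR;
    memcpy(&my_vfs, def, sizeof(my_vfs));
    my_vfs.zName = "ucos-fs";
    return sqlite3_vfs_register(&my_vfs, 0 /* not the default */);
  }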
[EMAIL PROTECTED] wrote:
I'm thinking that all documentation is better placed in
a wiki.
Hmmm. The problem I see is that it makes access to the full
documentation contingent on connectivity to a possibly ephemeral
external site. Maybe the solution is to incorporate wiki snapshots into
t