On Fri, Dec 7, 2012 at 3:28 PM, Keith Chew keith.c...@gmail.com wrote:
We can see the times taken to do the syscalls are fast, so what is
causing the delay between the seek and write? I had a look at the
code, and found the method seekAndWrite(), which looks pretty straightforward. The only
Hi Richard
On Fri, Dec 7, 2012 at 2:08 AM, Richard Hipp d...@sqlite.org wrote:
If that does cure your latencies, it also adds another problem, in that it
does nothing to prevent two or more processes, or even two threads within
the same process, from trying to use the database at the same time
Hi Dan
On Wed, Dec 5, 2012 at 6:38 PM, Dan Kennedy danielk1...@gmail.com wrote:
If it's not fsync() then IO delays are normally caused by read().
You could try [strace -T -eread ...] to check.
Are SELECT statements fast on the same database? How large is the
database compared to the machines
Hi All
That hypothesis could be easily tested by examining the corresponding strace
output for mysql... I don't know if sqlite3's unaligned log writes are
actually slow (didn't see this info in any of Keith's messages), but if they
are that would suggest the hypothesis is false; it would
On Thu, Dec 6, 2012 at 9:16 AM, Keith Chew keith.c...@gmail.com wrote:
Thank you so much for all the input. I tried 2 things, using only a
single connection (as suggested by Clemen) and to change the page size
to 32K. Single connection did not reduce the average latency, but did
help
On Thu, Dec 6, 2012 at 1:01 PM, Keith Chew keith.c...@gmail.com wrote:
I used sar and iostat to profile what is happening at the disk level,
and below are the results. You can see that the TPS on the disk is
very small, only 6-7 tps, and both mysql and sqlite profiles show very
close patterns
On Thu, Dec 6, 2012 at 3:52 PM, Keith Chew keith.c...@gmail.com wrote:
I am guessing it may have something to do with the locking of the files.
sqlite could be getting blocked by the filesystem when trying to
obtain the lock before writing to the file...
This is a stab in the dark... I had a look
Hi
I have an existing application, which I am experimenting with
replacing MySQL with sqlite. I have 2 boxes, one running MySQL, the
other sqlite.
Execution times for every insert/update/delete SQL are measured, and a
warn log is printed if it takes more than 250ms to perform the SQL.
OS configuration:
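The measurement setup described above can be sketched as a small timing harness. This is illustrative only, assuming a Python wrapper around sqlite3; the helper name and the 250 ms threshold mirror the description, not the actual application code:

```python
import logging
import sqlite3
import time

logging.basicConfig(level=logging.WARNING)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")

def timed_execute(con, sql, params=()):
    # Time each statement and warn when it exceeds the 250 ms threshold.
    start = time.monotonic()
    cur = con.execute(sql, params)
    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms > 250:
        logging.warning("slow SQL (%.0f ms): %s", elapsed_ms, sql)
    return cur

timed_execute(con, "INSERT INTO t VALUES(?)", (1,))
```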
Hi Richard
On Wed, Dec 5, 2012 at 9:05 AM, Richard Hipp d...@sqlite.org wrote:
The $dir will contain initial database contents and scripts of SQL
statements that were run. You can rerun those SQL statement using the
command-line shell to find slow ones, then do things like EXPLAIN QUERY
PLAN
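As a sketch of the suggestion above, EXPLAIN QUERY PLAN can be run against any statement to see whether it scans the table or uses an index (the table and column names here are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user(user_name TEXT)")

# Without an index on user_name, the UPDATE must scan the whole table.
plan = con.execute(
    "EXPLAIN QUERY PLAN UPDATE user SET user_name='x' WHERE user_name='y'"
).fetchall()
for row in plan:
    print(row[-1])
```

Adding `CREATE INDEX` on the filtered column and re-running the same statement shows the plan switch from a scan to an index search.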
On Wed, Dec 5, 2012 at 9:33 AM, Keith Chew keith.c...@gmail.com wrote:
Any other ideas for me to trace this at a lower level would be greatly
appreciated. I think I will need to get deeper into linux's block
layer stuff, so perhaps this is not the correct place to ask the
question, but I
On Wed, Dec 5, 2012 at 10:28 AM, Keith Chew keith.c...@gmail.com wrote:
I wonder what could be causing sqlite to hang so long? Will try to
remove all indexes to see if that narrows things down.
It is not an indexing issue. For one of the UPDATE SQLs, it is
updating a table with only 1 record
On Wed, Dec 5, 2012 at 10:56 AM, Richard Hipp d...@sqlite.org wrote:
It is not an indexing issue. For one of the UPDATE SQLs, it is
updating a table with only 1 record in it. And this takes 350ms...
Can you provide specifics: The schema and the UPDATE statement?
Schema and SQL are below:
Hi Michael
On Wed, Dec 5, 2012 at 10:57 AM, Black, Michael (IS)
michael.bla...@ngc.com wrote:
Could it be waiting on the prior transaction though?
Since disk I/O lies, it might be syncing the last transaction, causing the new
one to wait longer.
Yes, that is possible. In the application, it
On Wed, Dec 5, 2012 at 11:10 AM, Keith Chew keith.c...@gmail.com wrote:
The strange thing is that I am setting sqlite it to use WAL,
autocheckpoint off and synchronous off. Even in this setup, I still
see 350ms transactions times for less than 3 TPS. A bit hard to
believe, so I am now doing
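The configuration described above (WAL, autocheckpoint off, synchronous off) can be reproduced like this; a sketch using Python's sqlite3 module, noting that WAL mode requires an on-disk database:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")
con = sqlite3.connect(path)

# WAL journal mode with checkpointing and syncing disabled, as described.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
con.execute("PRAGMA wal_autocheckpoint=0")
con.execute("PRAGMA synchronous=OFF")

sync = con.execute("PRAGMA synchronous").fetchone()[0]
print(mode, sync)  # wal 0
```

With this setup no fsync() should appear in an strace of commits, which is why 350 ms latencies here are surprising.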
Hi Michael
On Wed, Dec 5, 2012 at 12:02 PM, Black, Michael (IS)
michael.bla...@ngc.com wrote:
Can you re-run your strace as strace -tt and look at the timings to help
pinpoint it?
Since there is no fsync (I know this using grep on the strace output),
it is hard to tell what to look for in the
On Wed, Dec 5, 2012 at 1:14 PM, Keith Chew keith.c...@gmail.com wrote:
Since there is no fsync (I know this using grep on the strace output),
it is hard to tell what to look for in the strace. The output is very
noisy, which makes it hard to go through.
Does anyone know any handy flags
Thanks for this Simon, but unfortunately this is legacy code, which has to
be used for a while. I just did an update to check for column change (we
have a column to tell us the record has changed, so we do not need to check
every column), and the WAL file size growth has dropped significantly.
Hi
I found that when an update SQL is issued against a table, the WAL file
gets updated with the new record, even though the record is exactly the
same as the current record.
Eg user table has 1 record with user_name = 'Keith'. Issuing this SQL will
increase the WAL file:
update user set
Hi Richard
UPDATE user SET user_name='Keith' WHERE user_name!='Keith';
The example I provided was simplified only to explain the scenario. In a
production environment, there are over 40-50 columns, and the suggested
workaround above is impractical.
Regards
Keith
Hi
Ok, thanks for the all the suggestions, I will find a workaround. The
reason I am asking is that I am using sqlite to perform data
synchronisation between a server and client, and after a day, the WAL file
size can grow to 3GB, quite a bit. I will look at improving the SQL update
to reduce
Okay, if you're doing this set of updates to do synchronisation then
you're going about it in a very poor way. We've written a lot on this list
about the problems with synchronisation over the years and you'll find that
your way isn't going to be efficient.
Instead of keeping a list of the
Hi Dan
In both NORMAL and FULL modes, we do the following:
1. Write a bunch of transactions into the WAL file.
2. fsync() the WAL file.
3. Copy the data from the WAL file into the database file.
4. fsync() the database file.
If a power failure occurs at any time, the next process to read
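Steps 1-4 above are what a checkpoint performs; it can also be invoked explicitly rather than waiting for autocheckpoint. A sketch:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path, isolation_level=None)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t(x)")
con.execute("INSERT INTO t VALUES(1)")

# Steps 2-4: fsync the WAL, copy its frames into the database, fsync the db.
# Returns (busy, frames in WAL, frames checkpointed).
busy, log_frames, checkpointed = con.execute(
    "PRAGMA wal_checkpoint(FULL)"
).fetchone()
print(busy, log_frames, checkpointed)
```

With no concurrent readers, a FULL checkpoint copies every WAL frame, so the second and third values come back equal.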
Hi
Using journal_mode=WAL and synchronous=2, it is my understanding (from
the docs and my own power loss testing) that sqlite will maintain
durability on a power loss.
My question is, if sqlite can guarantee durability with synchronous=2,
why can't it (from reading the docs and other posts) do
Hi Simon
Thank you for your reply.
Probably worth us knowing whether you've read the 'Performance
Considerations' section of
http://www.sqlite.org/wal.html
Yes, I have read that, along with all the threads/posts I could find
from the Internet. The thing that I cannot fully understand is
Hi Jay
http://www.sqlite.org/pragma.html#pragma_synchronous
When synchronous is FULL (2), the SQLite database engine will use
the xSync method of the VFS to ensure that all content is safely
written to the disk surface prior to continuing. This ensures
that an
Hi Jay
In WAL mode that's only half incorrect. Your description of NORMAL
seems correct, but FULL should be fully durable. The WAL file may
need to be checkpointed on startup, but the if the commit happens in
WAL/FULL, you should have full durability.
This is the reason for my