> If you could use DTrace you could really find out, but since we have
How about something like sysinternals diskmon?
http://technet.microsoft.com/en-us/sysinternals/bb896646
That should give you (OP) some indication of what disk activity
is going on.
Paul.
On Thu, Nov 3, 2011 at 12:39 PM, Fabian wrote:
> I just tested it, and it made no difference. The root cause of the problem
> is most likely not slow writes, because inserting duplicate values (which
> are ignored instead of written to disk) are just as slow.
If you could use DTrace you could really find out, but since we have
I just tested it, and it made no difference. The root cause of the problem
is most likely not slow writes, because inserting duplicate values (which
are ignored instead of written to disk) are just as slow.
But your suggestion may help me with another problem: when I fill an empty
database with mi
Have you tried setting journal_mode to the default DELETE option? Without
atomic commits, maybe your inserts are going to disk one by one instead of
in a single step when committing your transactions, thus slowing down disk
writes.
Fabian-40 wrote:
>
> 2011/11/2 Black, Michael (IS)
> I do not use WAL, since I have turned 'journal_mode' off (to improve insert
> performance), and as far as I know WAL
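
Not part of the original thread: a minimal Python sketch of the suggestion
above, using the standard sqlite3 module and a hypothetical table
items(value TEXT). It switches journal_mode back to the default DELETE and
groups the inserts into one explicit transaction, so they reach disk in a
single commit rather than one by one.

    import sqlite3

    conn = sqlite3.connect("test.db")            # hypothetical file name
    conn.execute("PRAGMA journal_mode=DELETE")   # back to the default rollback journal
    conn.execute("CREATE TABLE IF NOT EXISTS items (value TEXT)")

    rows = [("row-%d" % i,) for i in range(10_000)]

    # One explicit transaction: all 10,000 inserts are committed in a single
    # step instead of being flushed to disk individually.
    with conn:
        conn.executemany("INSERT INTO items (value) VALUES (?)", rows)

    conn.close()

(Further up the thread Fabian reports that testing the suggestion made no
difference for him.)
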
On Wed, Nov 2, 2011 at 2:13 PM, Fabian wrote:
> 2011/11/2 Nico Williams
>> But note that this can still fail you when the file is larger than
>> available RAM. In that case such a flag would be very bad. And
>> SQLite3 can't know how much RAM is available. The OS can know
>> (sortof) and the u
2011/11/2 Nico Williams
>
> Incidentally, it pays to make the SQLite3 page size match the
> filesystem preferred block size.
>
>
I already have the page_size set to 4096, which should match the default
NTFS cluster size.
> But note that this can still fail you when the file is larger than
> available RAM. In that case such a flag would be very bad. And SQLite3
> can't know how much RAM is available.
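
An aside, not from the thread: a small sketch (hypothetical file name)
showing where page_size has to be set - it only takes effect on a database
that has not been initialised yet, or after a VACUUM on an existing one.

    import sqlite3

    conn = sqlite3.connect("fresh.db")   # hypothetical, empty database file

    # Must be issued before the first table is created; on an existing
    # database the new size only takes effect after a VACUUM.
    conn.execute("PRAGMA page_size=4096")
    conn.execute("CREATE TABLE t (value TEXT)")  # first write fixes the page size

    print(conn.execute("PRAGMA page_size").fetchone())   # -> (4096,)
    conn.close()
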
On Wed, Nov 2, 2011 at 1:20 PM, Fabian wrote:
> Linux will not read the whole file in, but Windows eventually does. The
> inserts go progressively faster when they are reaching halfway, and Windows
> reads very large pages from disk, even if you request only 10 bytes. So in
The largest read will
2011/11/2 Nico Williams
>
> But that's NOT what SQLite3 is doing. SQLite3 is doing random I/O.
> And the OS sees the random I/O pattern and concludes it's better to
> not read the whole file in. So for those 10K inserts you pay -worst
> case- 10K I/Os. At ~12ms per random I/O (likely the seek
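
A back-of-the-envelope check, not from the thread, of the figure quoted
above: if each of 10,000 inserts triggers one uncached random I/O at
roughly 12 ms, the worst case is about two minutes, so a 30-second run
after a reboot sits comfortably inside that bound.

    # Assumed numbers from the message above: 10,000 I/Os at ~12 ms each.
    ios = 10_000
    cost_ms = 12
    print(ios * cost_ms / 1000, "seconds worst case")   # -> 120.0 seconds
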
2011/11/2 Black, Michael (IS)
> Then, when you restart the file is on system disk and it is flushing each
> insert to system disk on the WAL file slowing things down dramatically.
>
I do not use WAL, since I have turned 'journal_mode' off (to improve insert
performance), and as far as I know WAL
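
Not from the thread: journal_mode is a single setting, so OFF and WAL are
mutually exclusive. A quick sketch (hypothetical file name) of confirming
which mode is actually active, plus a pragma often paired with it when
chasing raw insert speed:

    import sqlite3

    conn = sqlite3.connect("test.db")    # hypothetical database file

    # journal_mode=OFF disables the rollback journal entirely; no WAL file
    # exists in this mode, so WAL-based explanations do not apply.
    mode = conn.execute("PRAGMA journal_mode=OFF").fetchone()[0]
    print("active journal mode:", mode)  # -> off

    # Often combined with journal_mode=OFF for speed, at the cost of
    # durability if the machine loses power mid-write:
    conn.execute("PRAGMA synchronous=OFF")
    conn.close()
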
On Wed, Nov 2, 2011 at 11:41 AM, Fabian wrote:
> 2011/11/2 Mr. Puneet Kishor
>> ahh, so you *are* getting expected behavior, just not what *you* expected.
>> Did you have a different number in mind instead of a factor of 300? And, if
>> so, why?
>
> To read an (un-cached) 150mb file from disk, doesn't take 30 seconds. And
> SQLite shouldn't have to
2011/11/2 Simon Slavin
>
> So compare your 'restart-and-INSERT' test with one where you restart, log
> in, then do a few random operations for a minute: start your web browser
> and load a page. Open and close a few documents or control panels. Sit
> and do nothing for a minute. /Then/ carry o
On 2 Nov 2011, at 4:31pm, Fabian wrote:
> 2011/11/2 Mr. Puneet Kishor
>
>> Others will have better answers, but methinks that when you reboot the
>> computer, the operating system's caches are flushed out, which slows the
>> operation. Try working with the db for a bit (SELECT, repeat INSERTs, etc.)
>> and notice if the speed increases again
2011/11/2 Paul Corke
>
> A stateful antivirus that does lots of heavy processing when you first
> open the file?
>
>
I'm running these tests on a completely clean Win7 install, without any
anti-virus (or other software) installed. I should note that it runs
virtualized (in VirtualBox) so I can qui
On 11/02/2011 12:31 PM, Fabian wrote:
2011/11/2 Mr. Puneet Kishor
Others will have better answers, but methinks that when you reboot the
computer, the operating system's caches are flushed out, which slows the
operation. Try working with the db for a bit (SELECT, repeat INSERTs, etc.)
and notice if the speed increases again
On 02 November 2011 16:42, Fabian wrote:
> Maybe there is a very simple explanation, I just can't think of any.
A stateful antivirus that does lots of heavy processing when you first
open the file?
Have you tried:
1) Reboot
2) Wait 10 minutes (don't even touch the computer)
3) Test
Paul.
2011/11/2 Mr. Puneet Kishor
>
> ahh, so you *are* getting expected behavior, just not what *you* expected.
> Did you have a different number in mind instead of a factor of 300? And, if
> so, why?
To read an (un-cached) 150mb file from disk, doesn't take 30 seconds. And
SQLite shouldn't have to
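
Not from the thread: one rough way to put a number on the claim above is to
time a plain sequential read of the cold database file and compare it with
the ~30 seconds the inserts take. The path is a placeholder.

    import time

    # Read the database file sequentially in 1 MiB chunks and time it; on a
    # cold cache this approximates how long pulling the whole ~150 MB file
    # off disk takes, which is the baseline being argued about here.
    path = "test.db"                      # placeholder path
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(1024 * 1024)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    print("read %.0f MB in %.1f s" % (total / 1e6, elapsed))
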
On Nov 2, 2011, at 11:31 AM, Fabian wrote:
> 2011/11/2 Mr. Puneet Kishor
>
>>
>> Others will have better answers, but methinks that when you reboot the
>> computer, the operating system's caches are flushed out, which slows the
>> operation. Try working with the db for a bit (SELECT, repeat INSERTs, etc.)
>> and notice if the speed increases again
I would like to stick my neck out over the chopping block and agree. My
experience is the opposite, but appears to support Puneet's assertion.
With me, it takes my C# application 12 seconds to pass 103,000 records and
insert 98,000 rows into the db from it. The next time I run the
application (wh
2011/11/2 Mr. Puneet Kishor
>
> Others will have better answers, but methinks that when you reboot the
> computer, the operating system's caches are flushed out, which slows the
> operation. Try working with the db for a bit (SELECT, repeat INSERTs, etc.)
> and notice if the speed increases again
On Nov 2, 2011, at 11:24 AM, Fabian wrote:
> Now if I re-open the database, I can add an additional 10.000 rows very
> fast (<1 sec). But if I reboot the (Windows) PC, and insert an additional
> 10.000 rows, it takes at least 30 secs, which seems very slow, if I can add
> the first 1 million in u
I asked a similar question before, but received no response, so maybe it's
a very stupid question, but if so, feel free to say so.
I create a database, create one table with a TEXT column, insert 1 million
rows in 10 secs, create an index in 10 secs, VACUUM the database, and close
the database.
Now if I re-open the database, I can add an additional 10.000 rows very fast
(<1 sec). But if I reboot the (Windows) PC, and insert an additional 10.000
rows, it takes at least 30 secs, which seems very slow.
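
For readers who want to reproduce the scenario, a compact sketch of the
setup described above, with hypothetical table and column names (the
original schema is not shown in the thread):

    import sqlite3, time

    conn = sqlite3.connect("test.db")
    conn.execute("CREATE TABLE t (value TEXT)")    # one TEXT column, as described

    # Insert 1 million rows inside a single transaction.
    with conn:
        conn.executemany("INSERT INTO t (value) VALUES (?)",
                         (("row-%d" % i,) for i in range(1_000_000)))

    conn.execute("CREATE INDEX idx_value ON t(value)")
    conn.commit()
    conn.execute("VACUUM")
    conn.close()

    # Re-open and time an extra batch of 10,000 inserts - the step that is
    # fast on a warm cache but reportedly takes ~30 s right after a reboot.
    conn = sqlite3.connect("test.db")
    start = time.perf_counter()
    with conn:
        conn.executemany("INSERT INTO t (value) VALUES (?)",
                         (("extra-%d" % i,) for i in range(10_000)))
    print("%.2f s for 10,000 inserts" % (time.perf_counter() - start))
    conn.close()
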