"Marcus Grimm" <mgr...@medcom-online.de> schrieb im
Newsbeitrag news:4a1e6034.3030...@medcom-online.de...
> just for anybody who is interested:
>
> I translated Jim's function into Windows code and added
> a page of 1024 bytes that is written instead of a single byte.
> On my Win-XP system I got 55 TPS, much faster than sqlite
> seems to write a page, but that might be related to the
> additional overhead sqlite has to do.

Just tested the TPS here on Windows too, using a
(stdcall, VC8) compile of engine version 3.6.14.1
(based on the normal amalgamation download).

And I basically get what the sqlite site says in case
I use a page size of 4096 - and somewhat better values
when I use a page size of 1024 (always starting with a
fresh DB).

Running a loop of 1000 small transactions (with
Begin Immediate under the hood) gives:

5400rpm notebook disk, Win-XP NTFS:
pagesize=4096
16.26msec, 61TPS (sync=2)
14.86msec, 67TPS (sync=1)
pagesize=1024
8.98msec, 111TPS (sync=2)
7.94msec, 126TPS (sync=1)

The same test against a 7200rpm drive on XP behaves
proportionally (gives somewhat better values, in line
with the rpm ratio).


Under Linux, running the same tests (over the Wine
layer), there finally working against ext3 partitions,
I basically get the (high) values Jim has measured.
So this Wine check (using the same binaries which
produced "normal" results on Windows) shows that this
is apparently caused by the IO subsystem (either due
to an incorrect Wine mapping or by the kernel itself)
and not by wrong default settings in the
amalgamation compile.

Running on Debian sid (kernel 2.6.29.2)
against a WD "green" drive with an ext3 partition
(1TB, with only 5400rpm rotation speed):
pagesize=4096
3.99msec, 251TPS (sync=2)
3.38msec, 296TPS (sync=1)
pagesize=1024
2.82msec, 355TPS (sync=2)
2.79msec, 359TPS (sync=1)

Switching the write cache off on that disk gives
basically the same factor-10 performance decrease
as Jim was reporting: ca. 30 TPS (±3) remaining.
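
(In case somebody wants to repeat that part: I toggled the cache
with hdparm - here a minimal python sketch of the kind of call
involved; the device path /dev/sdb is just an example and needs
adjusting:)

# toggling a disk's write(back) cache via hdparm (needs root)
import subprocess

dev = "/dev/sdb"   # example device path - adjust to the drive under test

subprocess.call(["hdparm", "-W", "0", dev])   # write cache off
# ... run the TPS loop here ...
subprocess.call(["hdparm", "-W", "1", dev])   # write cache back on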


And here, for comparison, the values running within a VM
in VirtualBox, hosted on that very same WD hard disk
(the guest was Win-XP, and the write cache on that disk
was re-enabled beforehand):
pagesize=4096
11.66msec, 86TPS (sync=2)
10.00msec, 100TPS (sync=1)
pagesize=1024
8.27msec, 121TPS (sync=2)
7.00msec, 143TPS (sync=1)

So that too basically matches Jim's results.


And just for those who are interested in values
against one of these "first affordable SSDs without the
JMicron write-lag bug": a small OCZ Vertex 30GB,
running as the system disk.
This was measured on the same Linux machine (Debian sid,
kernel 2.6.29.2, running on ext3 too - and therefore still
without the TRIM command support that was (is?) proposed
for kernel 2.6.30, to support the write patterns of SSDs
somewhat better):
pagesize=4096
2.13msec, 469TPS (sync=2)
1.98msec, 505TPS (sync=1)
pagesize=1024
2.17msec, 461TPS (sync=2)
2.01msec, 497TPS (sync=1)


The write cache on that disk cannot be switched off
(although hdparm reports that the disk has one).
Maybe that is because the internal 64MB cache
of (not only) the Vertex SSDs is "highly important" for
dealing with the flash chips in a performant way in
either case - and I could imagine that these disks will
ensure a proper cache flush to the flash chips even
in case of a power loss, since these disks don't need that
much energy (only 1-2W), so a small GoldCap(acitor)
should be enough to hold and provide the needed energy
for the required "final cache-flushing time frame".

I was a bit disappointed though, since I expected higher values -
but from what I've read in different SSD-related posts, it
seems that (not only) the current Linux kernel still treats these
"disks" as rotational media in the FS layer. I tried my
best to signal to the IO scheduler that we have an SSD here,
but with not much difference (tried 'rotational'=0 and also the
other scheduler switches like noop, deadline etc.) - in the end
the values I just posted were measured with 'noop' as the
scheduler, but even this way only slightly better than with
'deadline', the usual 'cfq' or the former 'anticipatory' default
settings.
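
(For those who want to try the same switches: as far as I
understand it, they are set over sysfs - a minimal python sketch,
where the device name sda is just an example:)

# signal "non-rotational" and pick the IO scheduler via sysfs (needs root)
base = "/sys/block/sda/queue"   # example device - adjust as needed

f = open(base + "/rotational", "w")
f.write("0\n")   # hint to the block layer: non-rotational media (SSD)
f.close()

f = open(base + "/scheduler")
print(f.read().strip())   # e.g. "noop deadline [cfq]" - active one in brackets
f.close()

f = open(base + "/scheduler", "w")
f.write("noop\n")   # switch the IO scheduler to noop
f.close()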

Also tried Jim's python script directly on that Debian box
(python 2.5.4 - sqlite 3.6.13 = the original sid apt packages)
and achieved on the SSD:
9663 TPS (off)
345 TPS (normal)
323 TPS (full)

Then against the 5400rpm disk, which surprisingly came
up with these values when running the python script:
9441 TPS (off)
524 TPS (normal)
497 TPS (full)

Now that was baffling, since I expected the exact opposite.
(I checked twice - the path settings and results regarding
my tests over the Wine layer and also the directly performed
python results - but no mistakes there: the SSD came
up with better results over Wine than the 5400rpm WD disk,
and the opposite is seen when running natively over python,
though against an older sqlite version in that case.)
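
(Since I didn't repost Jim's script here: the core of such a
synchronous-mode measurement boils down to something like the
following sketch - not Jim's original code, just the same idea,
assuming the stock python sqlite3 module:)

# rough sketch of a synchronous-mode TPS test (not Jim's original script)
import os, time, sqlite3

def measure(sync_mode, n=1000, path="transact_test.db"):
    if os.path.exists(path):
        os.remove(path)
    cnn = sqlite3.connect(path, isolation_level=None)  # autocommit, explicit BEGINs
    cnn.execute("pragma synchronous=" + sync_mode)
    cnn.execute("create table t(int32 integer)")
    t = time.time()
    for i in range(n):
        cnn.execute("begin immediate")
        cnn.execute("insert into t values(?)", (i,))
        cnn.execute("commit")
    t = time.time() - t
    cnn.close()
    os.remove(path)
    print("%d TPS (%s)" % (n / t, sync_mode))

for mode in ("off", "normal", "full"):
    measure(mode)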

Hmm - now since the first results from Windows binaries in a
VM were matching the results on a "native XP", I tried
my Windows sqlite binaries (sqlite version 3.6.14.1) again
in the VM - this time against a fresh (and small) VM disk file,
hosted on the SSD and visible inside the VM as a second
hard disk.
Now the relations were switched again - the SSD now
performing somewhat like a real 7200rpm disk on a real XP
machine, giving for example ca. 110 TPS for 4K pages and
sync = full (2) - whereas the XP-system VM disk file, running
on the larger 5400rpm "host disk" (as already posted above),
achieved ca. 86 TPS.


I really have no clue how to interpret all that (especially the
Linux results, be it the python ones or the ones measured over
the Wine layer), but the values I've posted really were checked
twice - no mistakes there.

Are these high values just caused by modern (disk-internal)
writeback caches, where the vendors of these disks made
sure that even in case of a power loss the internal disk cache
can be safely flushed?

Also, reading the following articles did not help much:

Possible data loss in ext4:
http://www.heise.de/english/newsticker/news/134483
At the bottom of this article there's some ext3 behaviour mentioned
which was interesting to read.

Also interesting with regard to current ext3 behaviour was
the following one:

Kernel developers squabble over Ext3 and Ext4
http://www.heise.de/english/newsticker/news/135446

And finally, some new default ext3 settings in the new kernel?

Solving the ext3 latency problem:
http://lwn.net/Articles/328363/

Is fsync() somehow "messed up" on Linux currently?


Olaf Schmidt


P.S.
the following was my test code (VB6, using a cConnection/cCommand
SQLite wrapper):

Const DBPath As String = "c:\transact_test.db"
'Const DBPath As String = "d:\transact_test.db"
Dim Cnn As cConnection, Cmd As cCommand, T As Single, i As Long

  Set Cnn = New cConnection
  Cnn.CreateNewDB DBPath
  Cnn.Execute "pragma page_size=4096"   'switch the comments here,
  'Cnn.Execute "pragma page_size=1024"  'to test the other page size

  Cnn.Execute "pragma synchronous=2"    'same for the sync mode
  'Cnn.Execute "pragma synchronous=1"

  Cnn.Execute "Create Table T(Int32 Integer)"

  T = Timer
    Set Cmd = Cnn.CreateCommand("Insert Into T Values(?)")

    For i = 1 To 1000                   '1000 small transactions
      Cnn.BeginTrans                    'issues "Begin Immediate" under the hood
        Cmd.SetInt32 1, i
        Cmd.Execute
      Cnn.CommitTrans
    Next i

  T = Timer - T                         'total seconds for 1000 transactions,
  Print Format(T, "#.00msec"), Format(1000 / T, "#TPS")  'numerically = msec/transaction

  Set Cmd = Nothing
  Set Cnn = Nothing
  Kill DBPath                           'remove the test db again



