Ross Walker wrote:
On Aug 4, 2009, at 10:22 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:

On Tue, 4 Aug 2009, Ross Walker wrote:
Are you sure that it is faster than an SSD? The data is indeed pushed closer to the disks, but there may be considerably more latency associated with getting that data into the controller NVRAM cache than there is into a dedicated slog SSD.

I don't see how; since the SSD sits behind a controller, the data still has to make it to the controller.

If you take a look at 'iostat -x' output you will see that the system maintains a queue for each device. If it were any other way, then a slow device would slow down access to all of the other devices. If there is concern about lack of bandwidth (PCI-E?) to the controller, then you can use a separate controller for the SSDs.
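
For example, a quick way to watch those per-device queues (the wait/actv columns) and per-device service times, sampling every 5 seconds:

  # extended per-device statistics, 5-second samples, logical device names
  iostat -xn 5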

It's not bandwidth. Though with a lot of mirrors that does become a concern.

Well, the duplexing benefit you mention does hold true. That's a complex real-world scenario that would be hard to benchmark in production.

But easy to see the effects of.

I actually meant to say: hard to benchmark outside of production.

Tests done by others show a considerable NFS write speed advantage when using a dedicated slog SSD rather than a controller's NVRAM cache.

I get pretty good NFS write speeds with NVRAM (40 MB/s with 4k sequential writes). It's a Dell PERC 6/E with 512 MB onboard.

I get 47.9 MB/s (60.7 MB/s peak) here too (also with 512MB NVRAM), but that is not very good when the network is good for 100 MB/s. With an SSD, some other folks here are getting essentially network speed.
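
For comparison, a quick-and-dirty way to reproduce that kind of number from an NFS client (the mount point and transfer size here are just placeholders, not what was actually used):

  # time 1 GB of 4k sequential writes onto the NFS mount
  ptime dd if=/dev/zero of=/mnt/nfs/ddtest bs=4k count=262144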

In testing with ram disks I was only able to get a max of around 60 MB/s with 4k block sizes and 4 outstanding I/Os.

I can do 64k blocks now and get around 115 MB/s.
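
For what it's worth, that kind of ram disk slog test can be set up along these lines (the pool name is hypothetical, and a ram disk slog is for benchmarking only - its contents are gone after a reboot or power loss):

  # create a 1 GB ram disk; it shows up as /dev/ramdisk/slogtest
  ramdiskadm -a slogtest 1g

  # attach it to the pool as a log device, purely to see the upper bound
  zpool add tank log /dev/ramdisk/slogtest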

I just ran some filebench microbenchmarks against my 10 Gbit test box,
which is a Dell R905 with 4 x 2.5 GHz AMD quad-core CPUs and 64 GB RAM.

My current pool consists of 7 mirror vdevs (SATA disks), 2 Intel
X25-E SSDs as slogs and 1 Intel X25-M for the L2ARC.

The pool is an MD1000 array attached to a PERC 6/E using 2 SAS cables.

The NICs are ixgbe-based.
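
For anyone wanting to set up a similar layout, the pool could be assembled roughly like this (a sketch only - the device names are made up, only two of the seven mirror pairs are shown, and whether the two X25-Es are striped or mirrored as slogs is not stated above):

  # seven mirrored pairs of SATA disks (first two pairs shown),
  # the two X25-E SSDs as log devices, the X25-M as L2ARC cache
  zpool create tank \
      mirror c1t0d0 c1t1d0 \
      mirror c1t2d0 c1t3d0 \
      log c2t0d0 c2t1d0 \
      cache c2t2d0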

Here are the numbers:

Randomwrite benchmark - via 10Gbit NFS:
IO Summary: 4483228 ops, 73981.2 ops/s, (0/73981 r/w) 578.0mb/s, 44us cpu/op, 0.0ms latency

Randomread benchmark - via 10Gbit NFS:
IO Summary: 7663903 ops, 126467.4 ops/s, (126467/0 r/w) 988.0mb/s, 5us cpu/op, 0.0ms latency

The real question is whether these numbers can be trusted - I am
currently preparing new test runs with other software so I can do a
comparison.
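
For anyone who wants to run a comparable test, the interactive filebench session looks roughly like this, run from the NFS client against the NFS-mounted directory (the parameters shown are placeholders, not the ones used for the numbers above):

  filebench> load randomwrite
  filebench> set $dir=/mnt/nfs_test
  filebench> set $iosize=8k
  filebench> set $nthreads=16
  filebench> run 60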
There is still bus and controller latency plus SSD latency. I suppose one could use a pair of disks as a slog mirror, enable NVRAM caching just for those, and let the others do write-through with their disk caches.

But this runs into the problem that once the NVRAM becomes full you hit the wall of synchronous disk write performance. With the SSD slog, the write log can be quite large and disk writes are then done in a much more efficient ordered fashion, similar to non-sync writes.
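
One way to watch that behaviour - how much traffic actually lands on the log devices versus the data vdevs - is per-vdev statistics (pool name is hypothetical):

  # per-vdev I/O statistics every 5 seconds; log devices are listed separately
  zpool iostat -v tank 5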

Yes, you have a point there.

So, what SSD disks do you use?

-Ross



--
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
