roland writes:
> >SSDs with capacitor-backed write caches
> >seem to be fastest.
>
> How do you distinguish them from SSDs without one?
> I never saw this explicitly mentioned in the specs.
They probably don't have one then (or they should fire their
entire marketing dept).
Capacitors allow
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@openso
The things I'd pay most attention to would be single-threaded 4K,
32K, and 128K writes to the raw device.
Make sure the SSD has a capacitor and enable the write cache on the
device.
-r
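Roch's benchmark advice above can be sketched quickly. Here is a minimal single-threaded sync-write micro-benchmark, assuming a scratch file path rather than the raw device (testing the raw device would need root and a dedicated disk); O_DSYNC is used as a stand-in for what the slog sees on each commit:

```python
import os
import time

def sync_write_iops(path, bs, seconds=1.0):
    """Single-threaded O_DSYNC write loop; returns ops/second."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
    buf = b"\0" * bs
    ops = 0
    t0 = time.time()
    try:
        while time.time() - t0 < seconds:
            os.write(fd, buf)
            ops += 1
            if ops % 1024 == 0:          # cap file growth; reuse the region
                os.lseek(fd, 0, os.SEEK_SET)
    finally:
        os.close(fd)
    return ops / (time.time() - t0)

if __name__ == "__main__":
    for bs in (4096, 32768, 131072):     # the 4K/32K/128K sizes suggested above
        print("%6d bytes: %8.0f ops/s" % (bs, sync_write_iops("/tmp/slogtest", bs)))
```

Run against a file on the pool (or, carefully, the slog device itself), the three sizes give a rough picture of the device's commit latency.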
On 5 Jul 09, at 12:06, James Lever wrote:
On 04/07/2009, at 3:08 AM, Bob Friesenhahn wrote:
I
> "pe" == Peter Eriksson writes:
pe> With c1t15d0s0 added as log it takes 1:04.2, but with the same
pe> c1t15d0s0 added, but wrapped inside a SVM metadevice the same
pe> operation takes 10.4 seconds...
so now SVM discards cache flushes, too? great.
Oh, and for completeness: if I wrap 'c1t12d0s0' inside a SVM metadevice too, and
use that to create the "TEST" zpool (without a log), the same test command runs
in 36.3 seconds... Ie:
# metadb -f -a -c3 c1t13d0s0
# metainit d0 1 1 c1t13d0s0
# metainit d2 1 1 c1t12d0s0
# zpool create TEST /dev/md/d
You might want to try one thing I just noticed - wrapping the log device inside a SVM
(disksuite) metadevice works wonders for the performance on my test server
(Sun Fire X4240)... I do wonder what the downsides might be (except for having
to fiddle with Disksuite again). Ie:
# zpool create TEST c1
On 07/07/2009, at 8:20 PM, James Andrewartha wrote:
Have you tried putting the slog on this controller, either as an SSD or
regular disk? It's supported by the mega_sas driver, x86 and amd64 only.
What exactly are you suggesting here? Configure one disk on this
array as a dedicated ZIL?
James Lever wrote:
> We also have a PERC 6/E w/512MB BBWC to test with or fall back to if we
> go with a Linux solution.
Have you tried putting the slog on this controller, either as an SSD or
regular disk? It's supported by the mega_sas driver, x86 and amd64 only.
--
James Andrewartha | Sysadmi
On Jul 5, 2009, at 9:20 PM, Richard Elling
wrote:
Ross Walker wrote:
Thanks for the info. SSD is still very much a moving target.
I worry about SSD drives' long-term reliability. If I mirror two of
the same drives, what do you think the probability of a double
failure will be in 3, 4, 5
On 06/07/2009, at 9:31 AM, Ross Walker wrote:
There are two types of SSD drives on the market: the fast-write SLC
(single level cell) and the slow-write MLC (multi level cell). MLC
is usually used in laptops, as SLC drives over 16GB usually go for
$1000+, which isn't cost-effective in a lapt
On 04/07/2009, at 3:08 AM, Bob Friesenhahn wrote:
It seems like you may have selected the wrong SSD product to use.
There seems to be a huge variation in performance (and cost) with
so-called "enterprise" SSDs. SSDs with capacitor-backed write caches
seem to be fastest.
Do you have any
On Sat, 4 Jul 2009, James Lever wrote:
Any insightful observations?
Probably multiple slog devices are used to expand slog size and are not
used in parallel, since that would require somehow knowing the order.
The principal bottleneck is likely the update rate of the first device
in the chain, f
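Bob's point above can be put in numbers: a single-threaded synchronous workload is bounded by the per-commit latency of the log device, so IOPS is roughly 1/latency. The latencies below are assumptions back-computed from the rough figures quoted in this thread (about 4k/s ramdisk, 800/s SSD slog, 200/s no slog), not measurements:

```python
def iops_from_latency(latency_s):
    """Upper bound on single-threaded sync IOPS for a given commit latency."""
    return 1.0 / latency_s

# Illustrative commit latencies (assumed, inferred from the thread's numbers):
for name, lat in [("ramdisk",  0.00025),   # ~0.25 ms
                  ("SSD slog", 0.00125),   # ~1.25 ms
                  ("no slog",  0.005)]:    # ~5 ms (in-pool ZIL on rotating disk)
    print("%-8s -> ~%5.0f ops/s" % (name, iops_from_latency(lat)))
```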
On 04/07/2009, at 2:08 PM, Miles Nordin wrote:
iostat -xcnXTdz c3t31d0 1
on that device being used as a slog, a higher range of output looks
like:
extended device statistics
r/s    w/s    kr/s   kw/s  wait actv wsvc_t asvc_t  %w  %b device
0.0 1477.8    0.0  2955.
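The two columns that survived the truncation are already informative: ~1478 writes/s against ~2955 KB/s written is about 2 KB per ZIL write, and at that rate each write can take at most ~0.7 ms if they are strictly serial. A quick sketch (the kw/s value is cut off in the archive, so the digits after the decimal point are an assumption):

```python
w_per_s  = 1477.8   # writes per second, from the iostat sample above
kw_per_s = 2955.6   # KB written per second (truncated in the archive; assumed)

avg_kb_per_write  = kw_per_s / w_per_s
max_serial_svc_ms = 1000.0 / w_per_s   # upper bound on per-op service time

print("avg write size: %.1f KB" % avg_kb_per_write)
print("max serial service time: %.2f ms" % max_serial_svc_ms)
```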
> "jl" == James Lever writes:
jl> if I had disabled the ZIL, writes would have to go direct to
jl> disk (not ZIL) before returning, which would potentially be
jl> even slower than ZIL on zpool.
no, I'm all but certain you are confused.
jl> Has anybody been measuring the IOPS
On 03/07/2009, at 10:37 PM, Victor Latushkin wrote:
A slog on a ramdisk is analogous to no slog at all with the ZIL disabled
(well, it may actually be a bit worse). If you say that your old
system is 5 years old, the difference in the above numbers may be due to
the difference in CPU and memory speed, and so it
On Fri, Jul 3, 2009 at 7:34 AM, James Lever wrote:
> Hi Mertol,
>
> On 03/07/2009, at 6:49 PM, Mertol Ozyoney wrote:
>
>> ZFS SSD usage behaviour depends heavily on access pattern, and for asynch
>> ops ZFS will not use SSDs. I'd suggest you disable the SSDs, create a
>> ram disk and use it as SL
This is something that I've run into as well across various installs
very similar to the one described (PE2950 backed by an MD1000). I
find that overall the write performance across NFS is absolutely
horrible on 2008.11 and 2009.06. Worse, I use iSCSI under 2008.11 and
it's just fine with
> "vl" == Victor Latushkin writes:
vl> Above results make me question whether your Linux NFS server
vl> is really honoring synchronous semantics or not...
Any idea how to test it?
On Fri, 3 Jul 2009, James Lever wrote:
I did some tests with a ramdisk slog and the write IOPS seemed to run
about the 4k/s mark vs about 800/s when using the SSD as slog and 200/s
without a slog.
It seems like you may have selected the wrong SSD product to use.
There seems to be a huge
Hi Mertol,
On 03/07/2009, at 6:49 PM, Mertol Ozyoney wrote:
ZFS SSD usage behaviour depends heavily on access pattern, and for
asynch ops ZFS will not use SSDs. I'd suggest you disable the
SSDs, create a ram disk and use it as SLOG device to compare the
performance. If performance doesn't
Hi Henrik,
On 03/07/2009, at 8:57 PM, Henrik Johansen wrote:
Have you tried running this locally on your OpenSolaris box - just to
get an idea of what it could deliver in terms of speed ? Which NFS
version are you using ?
Most of the tests shown in my original message are local except the
Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of James Lever
Sent: Friday, July 03, 2009 10:09 AM
To: Brent Jones
Cc: zfs-discuss; storage-disc...@opensolaris.org
Subject: Re: [zfs-discuss] surprisingly poor performance
On 03/07/2009, at 5:03 PM, Bre
On 03/07/2009, at 5:03 PM, Brent Jones wrote:
Are you sure the slog is working right? Try disabling the ZIL to see
if that helps with your NFS performance.
If your performance increases a hundredfold, I suspect the slog
isn't performing well, or even doing its job at all.
The slog appears
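For completeness: on the OpenSolaris builds discussed here (before the per-dataset sync property existed), Brent's ZIL-disable experiment was typically done with an /etc/system tunable. Testing only, since it discards synchronous write semantics, and it needs a reboot:

```
* /etc/system fragment (testing only -- loses sync guarantees on power failure)
set zfs:zil_disable = 1
```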
Hi All,
We have recently acquired hardware for a new fileserver, and my task,
if I want to use OpenSolaris (osol or sxce) on it, is to make it perform
at least as well as Linux (and our 5-year-old fileserver) in our
environment.
Our current file server is a whitebox Debian server with 8x 10,