From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
this is the part I am not certain about - whether it is roughly as cheap
to READ the gzip-9 datasets as it is to read lzjb ones (in terms of CPU
spent on decompression).
Nope. I know LZJB is not LZO,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
I really hope someone better versed in compression - like Saso -
would chime in to say whether gzip-9 vs. lzjb (or lz4) sucks in
terms of read-speeds from the pools. My HDD-based
Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
There are very few situations where the (gzip) option is better than the
default lzjb.
Well, for the most part my question regarded the slowness (or lack
thereof) of gzip DEcompression as compared to lz* algorithms. If there
are files and
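As a rough userland sanity check of that question (not a ZFS measurement - ZFS does its decompression per-block in the kernel, and scratch paths here are arbitrary), gzip's decompression cost barely depends on the level that was used at compression time:

```shell
# Generate a fairly compressible sample, compress at levels 1 and 9,
# then time decompression of each. The two decompression runs should
# take roughly the same wall time, while -9 compresses noticeably slower.
yes "some fairly compressible sample text for the test" | head -n 200000 > /tmp/zfstest.txt
gzip -1 -c /tmp/zfstest.txt > /tmp/zfstest.gz1
gzip -9 -c /tmp/zfstest.txt > /tmp/zfstest.gz9
time gzip -dc /tmp/zfstest.gz1 > /dev/null
time gzip -dc /tmp/zfstest.gz9 > /dev/null
```

The asymmetry the thread is really about is between algorithm families: lz4/lzjb decompress several times faster than gzip at any level, which matters on read-heavy pools.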
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Eugen Leitl
can I make e.g. an LSI SAS3442E do SSD caching directly (it says
something about CacheCade, but I'm not sure whether that's an OS-side
driver thing), since it is supposed to boost IOPS?
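Independent of whatever CacheCade does in the HBA firmware, ZFS can use an SSD as a read cache itself via L2ARC. A sketch, with the pool name and device names purely hypothetical:

```shell
# Attach an SSD as an L2ARC read cache to an existing pool "tank":
zpool add tank cache c2t1d0
# Optionally also a mirrored SLOG to absorb synchronous writes:
zpool add tank log mirror c2t2d0 c2t3d0
# Verify the cache/log vdevs appear in the pool layout:
zpool status tank
```

This keeps the caching policy in ZFS rather than in proprietary controller firmware, which is generally preferable on this list's kind of setup.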
On Tue, Nov 27, 2012 at 12:12:43PM +, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Eugen Leitl
can I make e.g. LSI SAS3442E
directly do SSD caching (it says
Performance-wise, I think you should go for mirrors/raid10, and
separate the pools (i.e. rpool mirror on SSD and data mirror on
HDDs). If you have 4 SSDs, you might mirror the other pair for
zone roots or some databases in datasets delegated into zones,
for example. Don't use dedup. Carve out
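The layout suggested above can be sketched as follows (all device names hypothetical; the mirrored rpool on the first SSD pair would normally be created by the installer):

```shell
# Data pool mirrored across the two nearline HDDs:
zpool create data mirror c1t0d0 c1t1d0
# Second SSD pair as a fast mirrored pool for zone roots / databases:
zpool create fast mirror c2t0d0 c2t1d0
# Cheap-to-decompress compression on the data pool, per this thread;
# dedup left off, as advised above (off is also the default):
zfs set compression=lz4 data
zfs set dedup=off data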
Now that I thought about it some more, a follow-up is due on my advice:
1) While the best practices do (or did) dictate setting up zone roots in
rpool, this is certainly not required - and I maintain lots of
systems which store zones in separate data pools. This minimizes
write-impact on rpools
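Placing a zone's root on the data pool instead of rpool can be sketched like this (zone name, dataset, and path are hypothetical):

```shell
# Dataset on the data pool to hold zone roots:
zfs create data/zones
# Point the zone's zonepath at it (must exist with mode 700 at install):
zonecfg -z web01 "create ; set zonepath=/data/zones/web01"
```

After `zoneadm -z web01 install`, all of the zone's write traffic then lands on the data pool rather than on rpool.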
On Tue, Nov 27, 2012 at 5:13 AM, Eugen Leitl eu...@leitl.org wrote:
Now there are multiple configurations for this.
Some using Linux (root fs on a RAID10, /home on
RAID 1) or zfs. Now, zfs on Linux probably wouldn't
do hybrid zfs pools (would it?)
Sure it does. You can even use the whole disk
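A hybrid pool under ZFS on Linux can be sketched in one command (device names hypothetical; in practice the stable `/dev/disk/by-id/` paths are preferable to `/dev/sdX`):

```shell
# Mirrored HDD pool with an SSD read cache (L2ARC) and a mirrored
# SSD log (SLOG), all created together:
zpool create tank mirror /dev/sdb /dev/sdc \
    cache /dev/sdd \
    log mirror /dev/sde /dev/sdf
```

So the hybrid-pool features (cache and log vdevs) work the same way as on illumos/Solaris.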
Dear internets,
I've got an old SunFire X2100M2 with 6-8 GBytes ECC RAM, which
I wanted to put into use with Linux, using the Linux
VServer patch (an analogue to zones), and 2x 2 TByte
nearline (WD RE4) drives. It occurred to me that the
1U case had enough space to add some SSDs (e.g.
2-4 80