Re: [zfs-discuss] Ssd for zil on a dell 2950

2009-08-19 Thread Monish Shah

Hello Greg,

I'm curious how much performance benefit you gain from the ZIL accelerator. 
Have you measured that?  If not, do you have a gut feel about how much it 
helped?  Also, for what kind of applications does it help?


(I know it helps with synchronous writes.  I'm looking for real world 
answers like: "Our XYZ application was running like a dog and we added an 
SSD for ZIL and the response time improved by X%.")


Of course, I would welcome a reply from anyone who has experience with this, 
not just Greg.


Monish

- Original Message - 
From: "Greg Mason" 

To: "HUGE | David Stahl" 
Cc: "zfs-discuss" 
Sent: Thursday, August 20, 2009 4:04 AM
Subject: Re: [zfs-discuss] Ssd for zil on a dell 2950


Hi David,

We are using them in our Sun X4540 filers. We are actually using 2 SSDs
per pool, to improve throughput (since the logbias feature isn't in an
official release of OpenSolaris yet). I kind of wish they made an 8G or
16G part, since most of the 32G capacity goes to waste.

We had to go the NewEgg route though. We tried to buy some Sun-branded
disks from Sun, but that's a different story. To summarize, we had to
buy the NewEgg parts to ensure a project stayed on-schedule.

Generally, we've been pretty pleased with them. Occasionally, we've had
an SSD that wasn't behaving well. Looks like you can replace log devices
now though... :) We use the 2.5" to 3.5" SATA adapter from IcyDock, in a
Sun X4540 drive sled. If you can attach a standard SATA disk to a Dell
sled, this approach would most likely work for you as well. The only issue
with using third-party parts is that the support organizations for the
software and hardware will make it very clear that such a configuration
is quite unsupported. That said, we've had pretty good luck with them.
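
For anyone setting this up, the log devices just get added to an existing
pool, and a misbehaving one can be swapped out. A minimal sketch, with
made-up device names:

  # add two SSDs as striped log devices (device names are hypothetical)
  zpool add tank log c2t0d0 c2t1d0

  # swap a misbehaving log SSD for a spare
  zpool replace tank c2t0d0 c2t2d0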

-Greg

--
Greg Mason
System Administrator
High Performance Computing Center
Michigan State University


HUGE | David Stahl wrote:
We have a setup with ZFS/ESX/NFS and I am looking to move our zil to a 
solid state drive.
So far I am looking into this one 
http://www.newegg.com/Product/Product.aspx?Item=N82E16820167013

Does anyone have any experience with this drive as a poor man's logzilla?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Monish Shah

A related question:  If you are on a UPS, is it OK to disable the ZIL?

The evil tuning guide says "The ZIL is an essential part of ZFS and should 
never be disabled."  However, if you have a UPS, what can go wrong that 
really requires the ZIL?
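
(For context, the only switch today is the global zil_disable tunable,
which turns off the ZIL for every pool on the host. A sketch, not a
recommendation:

  # in /etc/system -- disables the ZIL for ALL pools; needs a reboot
  set zfs:zil_disable = 1

Bear in mind a UPS only covers power loss; a kernel panic or hardware
fault can still drop the last few seconds of acknowledged synchronous
writes once the ZIL is off.)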


Opinions?

Monish

- Original Message - 
From: "Ross" 

To: 
Sent: Tuesday, June 30, 2009 3:04 PM
Subject: Re: [zfs-discuss] ZFS, power failures, and UPSes


I've seen enough people suffer from corrupted pools that a UPS is 
definitely good advice.  However, I'm running a (very low usage) ZFS 
server at home and it's suffered through at least half a dozen power 
outages without any problems at all.


I do plan to buy a UPS as soon as I can, but it seems to be surviving very 
well so far.

--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss





Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-17 Thread Monish Shah

Unless you're in GIMP working on JPEGs, or doing some kind of MPEG
video editing--or ripping audio (MP3 / AAC / FLAC) stuff. All of
which are probably some of the largest files in most people's
homedirs nowadays.


indeed.  I think only programmers will see any substantial benefit
from compression, since both the code itself and the object files
generated are easily compressible.


If we are talking about data on people's desktops and laptops, yes, it is 
not very common to see a lot of compressible data.  There will be some other 
examples, such as desktops being used for engineering drawings.  The CAD 
files do tend to be compressible and they tend to be big.
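
For anyone who wants to check this against their own data, compression can
be enabled per filesystem and the achieved ratio read back afterwards.  A
minimal sketch (dataset names invented):

  zfs create -o compression=on tank/cad
  # copy some representative files in, then:
  zfs get compressratio tank/cad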


In any case, the really interesting case for compression is for business 
data (databases, e-mail servers, etc.) which tends to be quite compressible.


...


I'd be interested to see benchmarks on MySQL/PostgreSQL performance
with compression enabled.  my *guess* would be it isn't beneficial
since they usually do small reads and writes, and there is little gain
in reading 4 KiB instead of 8 KiB.


OK, now you have switched from compressibility of data to performance 
advantage.  As I said above, this kind of data usually compresses pretty 
well.


I agree that for random reads, there wouldn't be any gain from compression. 
For random writes, in a copy-on-write file system, there might be gains, 
because the blocks may be arranged in sequential fashion anyway.  We are in 
the process of doing some performance tests to prove or disprove this.
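
The test we have in mind is roughly the following (names invented; a real
run would use filebench or the actual database rather than this outline):

  # two datasets identical except for compression
  zfs create -o recordsize=8k -o compression=off  tank/db-plain
  zfs create -o recordsize=8k -o compression=gzip tank/db-gzip

  # drive the same random-write load at each and compare physical I/O
  zpool iostat -v tank 5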


Now, if you are using SSDs for this type of workload, I'm pretty sure that 
compression will help writes.  The reason is that the flash translation 
layer in the SSD has to re-arrange the data and write it page by page.  If 
there is less data to write, there will be fewer program operations.


Given that the write IOPS rating of an SSD is often much lower than its 
read IOPS rating, using compression to improve writes should be of real 
value.


At this point, this is educated guesswork.  I'm going to see if I can get my 
hands on an SSD to prove this.


Monish


what other use cases can benefit from compression?
--
Kjetil T. Homme
Redpill Linpro AS - Changing the game

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss





Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-17 Thread Monish Shah

Hello Richard,


Monish Shah wrote:
What about when the compression is performed in dedicated hardware? 
Shouldn't compression be on by default in that case?  How do I put in an 
RFE for that?


Is there a bugs.intel.com? :-)


I may have misled you.  I'm not asking for Intel to add hardware 
compression.  Actually, we already have gzip compression boards that we have 
integrated into OpenSolaris / ZFS and they are also supported under 
NexentaStor.  What I'm saying is that if such a card is installed, 
compression should be enabled by default.
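
Concretely, today an administrator has to opt in by hand:

  # set once at the pool root; child filesystems inherit it
  zfs set compression=gzip tank

With the card installed, we think that should simply be the out-of-the-box
default.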



NB, Solaris already does this for encryption, which is often a more
computationally intensive operation.


Actually, compression is more compute intensive than symmetric encryption 
(such as AES).


Public key encryption, on the other hand, is horrendously compute intensive, 
much more than compression or symmetric encryption.  But nobody uses 
public key encryption for bulk data encryption, so that doesn't apply.


Your mileage may vary.  You can always come up with compression algorithms 
that don't do a very good job of compressing, but which are light on CPU 
utilization.


Monish


I think the general cases are performed well by current hardware, and
it is already multithreaded. The bigger issue is, as Bob notes, resource
management. There is opportunity for people to work here, especially
since the community has access to large amounts of varied hardware.
Should we spin up a special interest group of some sort?
-- richard




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-16 Thread Monish Shah

Hello,

I would like to add one more point to this.

Everyone seems to agree that compression is useful for reducing load on the 
disks and the disagreement is about the impact on CPU utilization, right?


What about when the compression is performed in dedicated hardware? 
Shouldn't compression be on by default in that case?  How do I put in an RFE 
for that?


Monish




On Mon, 15 Jun 2009, dick hoogendijk wrote:


IF at all, it certainly should not be the DEFAULT.
Compression is a choice, nothing more.


I respectfully disagree somewhat.  Yes, compression should be a
choice, but I think the default should be for it to be enabled.


I agree that "Compression is a choice" and would add:

  Compression is a choice and it is the default.

Just my feelings on the issue.

Dennis Clarke

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss





Re: [zfs-discuss] Asymmetric mirroring

2009-06-11 Thread Monish Shah

Hello,

Thanks to everyone who replied.

Dan, your suggestions (quoted below) are excellent and yes, I do want to 
make this work with SSDs, as well.  However, I didn't tell you one thing.  I 
want to compress the data on the drive.  This would be particularly 
important if an SSD is used, as the cost per GB is high.  This is why I 
wanted to put it in a zpool.


Before somebody points out that compression will increase CPU 
utilization, I'd like to mention that we have hardware accelerated gzip 
compression technology already working with ZFS, so the CPU will not be 
loaded.


I'm also hoping that write IOPS will improve with compression, because more 
writes can be combined into a single block of storage.  I don't know enough 
about ZFS allocation policies to be sure, but we'll try to run some tests.


It looks like, for now, the mirror disks will also have to be SSDs. 
(Perhaps raidz1 will be OK, instead.)  Eventually, we will look into 
modifying ZFS to support the kind of asymmetric mirroring I mentioned in the 
original post.  The other alternative is to modify ZFS to compress L2ARC, 
but that sounds much more complicated to me.  Any insights from ZFS 
developers would be appreciated.
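
For reference, the two all-SSD layouts we are weighing look roughly like
this (device names invented):

  # option 1: striped mirrors, best for IOPS
  zpool create ssdpool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

  # option 2: raidz1, better capacity for the same disks
  zpool create ssdpool raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0

  # either way, gzip via the accelerator card
  zfs set compression=gzip ssdpool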


Monish

Monish Shah
CEO, Indra Networks, Inc.
www.indranetworks.com


Use the SAS drives as l2arc for a pool on sata disks.   If your l2arc is 
the full size of your pool, you won't see reads from the pool (once the 
cache is primed).
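
[In command terms that is just adding the faster devices as cache vdevs,
e.g. (device names invented):

  zpool add tank cache c4t0d0 c4t1d0
]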


If you're purchasing all the gear from new, consider whether SSD in this 
mode would be better than 15k sas.

--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss





[zfs-discuss] Asymmetric mirroring

2009-06-10 Thread Monish Shah

Hello everyone,

I'm wondering if the following makes sense:

To configure a system for high IOPS, I want to have a zpool of 15K RPM SAS 
drives.  For high IOPS, I believe it is best to let ZFS stripe them, instead 
of doing a raidz1 across them.  Therefore, I would like to mirror the drives 
for reliability.


Now, I'm wondering if I can get away with using a large capacity 7200 RPM 
SATA drive as mirror for multiple SAS drives.  For example, say I had 3 SAS 
drives of 150 GB each.  Could I take a 500 GB SATA drive, partition it into 
3 vdevs and use each one as a mirror for one SAS drive?  I believe this is 
possible.


The problem is in performance.  What I want is for all reads to go to the 
SAS drives so that the SATA drive will only see writes.  I'm hoping that due 
to the copy-on-write nature of ZFS, the writes will get bunched into 
sequential blocks, so write bandwidth will be good, even on a SATA drive. 
But, the reads must be kept off the SATA drive.  Is there any way I can get 
ZFS to do that?


Thanks,

Monish 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-03-29 Thread Monish Shah

Hello David and Michael,

Well I might back up the more important stuff offsite. But in theory 
it's all replaceable. Just would be a pain.


And what is the cost of the time to replace it versus the price of a  hard 
disk? Time ~ money.


This is true, but there is one counterpoint.  If you do raidz2, you are 
definitely paying for extra disk(s).  If you stay with raidz1, the cost of 
the time to recover the data would be incurred if and only if he has a 
failure in raidz1 followed by a second failure during the re-build process. 
So, the statistically correct thing to do is to multiply the cost of 
recovery by the probability and see if that exceeds the cost of the new 
drives.
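
To make that concrete with invented numbers: if recovery would cost you
$2,000 worth of time and the chance of a second failure during a rebuild
is around 1%, the expected cost is only about $20, well under the price of
an extra drive.  Plug in your own numbers.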


To be really accurate, the cost of the raidz2 option should also include the 
cost of moving the data from the existing raidz1 to the new raidz2 and then 
rebuilding the old raidz1 as raidz2.


However, all this calculating is probably not worthwhile.  My feeling is: 
it's just a home video server and Michael still has the original media (I 
think).  Raidz1 is good enough.


Monish

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Encryption through compression?

2009-03-12 Thread Monish Shah

Hello Darren,



Monish Shah wrote:

Hello everyone,

My understanding is that the ZFS crypto framework will not release until 
2010.


That is incorrect information; where did you get that from?


It was in Mike Shapiro's presentation at the Open Solaris Storage Summit 
that took place a couple of weeks ago.  Perhaps I mis-read the slide, but 
I'm pretty sure it listed encryption as a feature for 2010.


...


3.  There is no key management framework.


That is impossible; there has to be key management somewhere.


What I meant was that the compression framework does not have a key 
management framework of its own.  With our hardware (which I mentioned 
later in my mail), key management comes with the hardware, since the keys 
are stored in the hardware.  We provide a utility to manage the keys stored 
there.


...

If it is specific to your company's hardware, I doubt it would ever get 
integrated into OpenSolaris particularly given the existing zfs-crypto 
project has no hardware dependencies at all.


The better way to use your encryption hardware is to get it plugged into 
the OpenSolaris cryptographic framework (see the crypto project on 
OpenSolaris.org)


That was precisely what I was thinking originally.  However, if it is not 
due out until 2010, there is a temptation to do our own project, which I 
thought could be done in a couple of months.  (In light of your comment 
below, my estimate may have been wildly optimistic, but the foregoing is 
merely an explanation of what I was thinking.)


Does anyone see any problems with this?  There are probably various 
gotchas here that I haven't thought of.  If you can think of any, please 
let me know.


The various gotchas are the things that have been taking me and the rest 
of the ZFS team a large part of the zfs-crypto project to resolve.  It 
really isn't as simple as you think it is - if it were then the zfs-crypto 
project would be done by now!


If you really want to help get encryption for ZFS then please come and 
join the already existing project rather than starting another one from 
scratch.


If the schedule is much sooner than 2010, I would definitely do so.  What is 
your present schedule estimate?



--
Darren J Moffat



Monish 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Encryption through compression?

2009-03-12 Thread Monish Shah

Hello everyone,

My understanding is that the ZFS crypto framework will not release until 
2010.  In light of that, I'm wondering if the following approach to 
encryption could make sense for some subset of users:


The idea is to use the compression framework to do both compression and 
encryption in one pass.  This would be done by defining a new compression 
type, which might be called "compress-encrypt" or something like that. 
There could be two levels, one that does both compress and encrypt and 
another that does encrypt only.
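
In administrative terms, this would surface as new values of the existing
compression property, along these lines (value names invented):

  zfs set compression=gzip-crypt tank/secure   # compress, then encrypt
  zfs set compression=crypt tank/secure        # encrypt only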


I see the following issues with this approach:

1.  The ZFS compression framework presently accepts compressed data only if 
there was at least a 12.5% reduction.  Data that didn't compress would wind 
up being stored unencrypted, even with encryption turned on.


2.  Meta-data would not be encrypted.  I.e., even if you don't have the key, 
you will be able to do directory listings and see file names, etc.


3.  There is no key management framework.

I would deal with these as follows:

Issue #1 can be solved by changing ZFS code such that it always accepts the 
"compressed" data.  I guess this is an easy change.


Issue #2 may be a limitation to some and a feature to others.  May be OK.

Issue #3 can be solved using encryption hardware (which my company happens 
to make).  The keys are stored in hardware and can be used directly from 
that.  Of course, this means that the solution will be specific to our 
hardware, but that's fine by me.


The idea is that we would do this project on our own and supply this 
modified ZFS with our compression/encryption hardware to our customers.  We 
may submit the patch for inclusion in some future version of OpenSolaris, 
if the 
developers are amenable to that.


Does anyone see any problems with this?  There are probably various gotchas 
here that I haven't thought of.  If you can think of any, please let me 
know.


Thanks,

Monish

Monish Shah
CEO, Indra Networks, Inc.
www.indranetworks.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss