Re: [zfs-discuss] A couple of newbie questions about ZFS compression

2008-11-07 Thread Ross Becker
The compress-on-write behavior is what I expected, but I wanted to validate 
that for sure.  Thank you.
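
Since compression only applies to blocks written after it's enabled, the usual way to get existing data compressed is simply to rewrite it. A minimal sketch of that idea in Python (the /tank/data path is just a placeholder, and it ignores hard links and files that change while being copied):

#!/usr/bin/env python
# Sketch only: rewrite every regular file under a directory so its blocks are
# written fresh and pick up the dataset's current compression setting.
import os
import shutil

def rewrite_in_place(root):
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) or not os.path.isfile(path):
                continue                  # only rewrite plain files
            tmp = path + ".rewrite-tmp"
            shutil.copy2(path, tmp)       # new copy is written with compression
            os.rename(tmp, path)          # atomically replace the original

if __name__ == "__main__":
    rewrite_in_place("/tank/data")        # placeholder mountpoint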

On the 2nd question, the obvious answer is that I'm doing work where the total 
size of the files tells me how much work has been completed, and I don't have 
any other feedback telling me how far along a job is.  When it's a directory of 
100+ files, or a whole tree with hundreds of files, it's not convenient to add 
the file sizes up by hand to get the answer.  I could write a perl script, but 
it honestly should be a built-in command.
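
For what it's worth, here's roughly the kind of thing I mean, as a quick Python sketch (just an illustration, not polished): it sums the logical file sizes from stat(), which ZFS compression doesn't change, so the total reflects the uncompressed data.

#!/usr/bin/env python
# Sketch of a du substitute that totals logical (uncompressed) sizes:
# st_size is the apparent file length, which ZFS compression does not alter.
import os
import sys

def logical_total(root):
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass                      # skip files that vanish or can't be read
    return total

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    print("%s: %d bytes (logical)" % (root, logical_total(root)))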


[zfs-discuss] A couple of newbie questions about ZFS compression

2008-11-07 Thread Ross Becker
I'm about to enable compression on my ZFS filesystem, as most of the data I 
intend to store should be highly compressible.

Before I do so, I'd like to ask a couple of newbie questions:

First - if you were running a ZFS filesystem without compression, wrote some 
files to it, and then turned compression on, would those original uncompressed 
files ever get compressed by some background process, or would they need to be 
copied in order to compress them?

Second - clearly the "du" command shows the post-compression size; OpenSolaris 
doesn't have a man page for it, but I'm wondering whether there's an option to 
make du show the "original" size, or whether there's a suitable replacement I 
can use that will show me the uncompressed size of a directory full of files. 
(No, knowing the compression ratio of the whole filesystem and the du size 
isn't suitable; I'm looking for a straight-up du substitute which would tell me 
original sizes.)


Thanks
   Ross


Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-30 Thread Ross Becker
At this point, ZFS is performing admirably with the Areca card.  Also, that 
card is only 8-port, and the Areca controllers I have are 12-port.  My chassis 
has 24 SATA bays, so being able to cover all the drives with 2 controllers is 
preferable.

Also, the driver for the Areca controllers is being integrated into OpenSolaris 
as we speak, so the next spin of OpenSolaris won't even require me to add the 
driver for it.


--Ross


Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-29 Thread Ross Becker
I have to come back and face the shame;  this was a total newbie mistake on my 
part.

I followed the "ZFS Shortcuts for Noobs" guide off BigAdmin: 
http://wikis.sun.com/display/BigAdmin/ZFS+Shortcuts+for+Noobs

What that had me doing was creating a UFS filesystem on top of a ZFS volume, so 
I was only using the lower layers of ZFS (the pool and the volume) rather than 
ZFS as the filesystem itself.

I just re-did this with end-to-end ZFS, and the results are pretty freaking 
impressive;  ZFS is handily outrunning the hardware RAID.  Bonnie++ is 
achieving 257 MB/s write and 312 MB/s read. 

My apologies for wasting folks' time; this is my first experience with a 
Solaris of recent vintage.

--Ross


Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-26 Thread Ross Becker
That was part of my testing of the RAID controller settings;  turning off the 
controller cache dropped me to 20 MB/s read & write under raidz2/ZFS.


--Ross


Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-26 Thread Ross Becker
Okay, after doing some testing, it appears that the issue is on the ZFS side.  
I fiddled around a while with options on the Areca card, and never got any 
better performance results than my first test. So, my best out of the raidz2 is 
42 MB/s write and 43 MB/s read.  I also tried turning off checksums (not how 
I'd run production, but for testing), and got no performance gain.

After fiddling with options, I destroyed my ZFS filesystem & zpool, and tried 
some single-drive tests.   I simply used newfs to create filesystems on single 
drives, mounted them, and ran some single-drive bonnie++ tests.  On a single 
drive, I got 50 MB/s write & 70 MB/s read.   I also ran two benchmarks on two 
drives simultaneously, and on each of those tests the result dropped by about 
2 MB/s, so I got a combined 96 MB/s write & 136 MB/s read with two separate 
UFS filesystems on two separate disks.

So next steps? 

--Ross


[zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-26 Thread Ross Becker
Well, I just got in a system I intend to be a BIG fileserver.  Background: I 
work for a SAN startup, and we're expecting to collect 30-60 terabytes of Fibre 
Channel traces in our first year.  The purpose of this box is to be a large 
repository for those traces, with statistical analysis run against them. 
Looking at that storage figure, I decided this would be a perfect application 
for ZFS.  I purchased a Super Micro chassis that's 4U and has 24 slots for SATA 
drives.  I've put in one quad-core 2.66 GHz processor & 8 GB of ECC RAM.   I 
put in two Areca 1231ML ( http://www.areca.com.tw/products/pcie341.htm ) 
controllers, which come with Solaris drivers.  I've half-populated the chassis 
with 12 1 TB drives to begin with, and I'm running some experiments.  I loaded 
OpenSolaris 2008.05 on the system.  

I configured an 11-drive RAID6 set + 1 hot spare on the Areca controller, put a 
ZFS filesystem on that RAID volume, and ran bonnie++ against it (16 GB file 
size); it achieved 150 MB/s write & 200 MB/s read.  I then blew that away, 
configured the Areca to present JBOD, and configured ZFS with an 11-disk RAIDZ2 
and a hot spare.  Running bonnie++ against that, it achieved 40 MB/s read and 
40 MB/s write.  I wasn't expecting RAIDZ to outrun the controller-based RAID, 
but I wasn't expecting 1/3 to 1/4 of the performance either.  I've looked at 
the ZFS tuning info on the Solaris site, and mostly what it says is "tuning is 
evil", with a few things about database tuning.   

Anyone got suggestions on whether there's something I might poke at to at least 
get this puppy up closer to 100 MB/s?  Otherwise,  I may dump the JBOD and go 
back to the controller-based RAID.

Cheers
   Ross