Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-11 Thread Jeff Savit

On 11/11/2011 01:02 AM, darkblue wrote:



2011/11/11 Jeff Savit jeff.sa...@oracle.com


On 11/10/2011 06:38 AM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jeff Savit

Also, not a good idea for
performance to partition the disks as you suggest.

Not totally true.  By default, if you partition the disks, then the disk 
write cache gets disabled.  But it's trivial to simply force enable it thus 
solving the problem.


Granted - I just didn't want to get into a long story. With a
self-described 'newbie' building a storage server I felt the best
advice is to keep as simple as possible without adding steps (and
without adding exposition about cache on partitioned disks - but
now that you brought it up, yes, he can certainly do that).
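
For completeness, here's roughly how that's done through format's expert
mode (the device name is just an example, and the menu entries are from
memory, so double-check on your own system):

# format -e c1t0d0
format> cache
cache> write_cache
write_cache> display        (show the current setting)
write_cache> enable         (force-enable the disk write cache)
write_cache> quit
cache> quit
format> quit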

Besides, there's always a way to fill up the 1TB disks :-) Beyond the
OS image, the pool could also store gold images for the guest
virtual machines, maintained separately from the operational images.


How big a partition do you suggest for the Solaris OS?
That's one of the best things about ZFS and *not* putting separate pools 
on the same disk - you don't have to worry about sizing partitions. Use 
two of the rotating disks to install Solaris on a mirrored root pool 
(rpool). The OS build will take up only a small portion of the 1TB of 
usable space (and you don't want to go above 80% full, so it's really 
about 800GB effectively). You can use the remaining space in that pool for 
additional ZFS datasets to hold golden OS images, iTunes, backups, 
whatever. Or simply not worry about it and let there be unused space. 
Disk space is relatively cheap - complexity and effort are not. For all 
we know, the disk space you're buying is more than ample for the 
application and it might not even be worth devising the most 
space-efficient layout.  If that's not the case, then the next topic 
would be how to stretch capacity via clones, compression, and RAIDZn.
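
Just to illustrate (dataset names here are invented), giving the leftover
space a purpose is one command per dataset, and zpool/zfs list will show
how close you are to that 80% mark:

zfs create rpool/goldimages                  # master OS images for guests
zfs create -o compression=on rpool/backups   # compressed backup dataset
zpool list rpool                             # pool size, allocated space, capacity %
zfs list -r rpool                            # per-dataset usage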


Along with several others posting here, I recommend you use Solaris 11 
rather than Solaris 10. A lot of things are much easier, such as 
managing boot environments and sharing file systems via NFS, CIFS, 
iSCSI, and there's a lot of added functionality.  I further (and 
strongly) endorse the suggestion of using a system from Oracle with 
supported OS and hardware, but please, let's not get into any arguments 
about hardware or licensing.
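
To give a flavor of the boot-environment and sharing side (the BE name
and dataset are placeholders, and the exact share syntax shifted between
Solaris 11 updates, so treat this as a sketch):

beadm create before-patching       # clone the active boot environment
beadm list                         # show all boot environments
beadm activate before-patching     # boot from it next time, if needed

zfs set sharenfs=on rpool/export/goldimages   # share a dataset over NFS
zfs set sharesmb=on rpool/export/goldimages   # share the same dataset over CIFS/SMB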


regards, Jeff
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Jeff Savit

On 11/10/2011 06:38 AM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jeff Savit

Also, not a good idea for
performance to partition the disks as you suggest.

Not totally true.  By default, if you partition the disks, then the disk write 
cache gets disabled.  But it's trivial to simply force enable it thus solving 
the problem.

Granted - I just didn't want to get into a long story. With a 
self-described 'newbie' building a storage server I felt the best advice 
is to keep as simple as possible without adding steps (and without 
adding exposition about cache on partitioned disks - but now that you 
brought it up, yes, he can certainly do that).


Besides, there's always a way to fill up the 1TB disks :-) Beyond the 
OS image, the pool could also store gold images for the guest virtual 
machines, maintained separately from the operational images.


regards, Jeff



--


*Jeff Savit* | Principal Sales Consultant
Phone: 602.824.6275 | Email: jeff.sa...@oracle.com | Blog: 
http://blogs.oracle.com/jsavit

Oracle North America Commercial Hardware
Operating Environments & Infrastructure S/W Pillar
2355 E Camelback Rd | Phoenix, AZ 85016



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-09 Thread Jeff Savit

Hi darkblue, comments in-line

On 11/09/2011 06:11 PM, darkblue wrote:

hi, all
I am a newbie on ZFS. Recently, my company started planning to build an 
entry-level enterprise storage server.

here is the hardware list:

1 * XEON 5606
1 * supermirco X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1T harddisk
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B chassis

this storage is going to serve:
1. 100+ VMware and Xen guests
2. backup storage

my original plan is:
1. create a mirrored root within a pair of SSDs, then partition one of 
them for cache (L2ARC). Is this reasonable?
Why would you want your root pool to be on the SSD? Do you expect an 
extremely high I/O rate for the OS disks? Also, not a good idea for 
performance to partition the disks as you suggest.



2. the other pair of SSDs will be used for ZIL

How about using one pair of SSDs for ZIL, and the other pair for L2ARC?


3. I haven't got a clear scheme for the 22 WD disks.
I suggest a mirrored pair of the WD disks for a root ZFS pool, and the 
other 20 disks for a data pool (quite possibly also mirrored) that also 
incorporates the 4 SSDs, using 2 each for ZIL and L2ARC.  If you want to 
isolate different groups of virtual disks then you could have other 
possibilities. Maybe split the 20 disks between guest virtual disks and 
a backup pool. Lots of possibilities.
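
Purely as a sketch (controller/target numbers are invented - substitute
your own device names), that layout might look like:

# 20 rotating disks as 10 mirrored pairs
zpool create tank \
    mirror c2t0d0 c2t1d0  mirror c2t2d0 c2t3d0  mirror c2t4d0 c2t5d0 \
    mirror c2t6d0 c2t7d0  mirror c2t8d0 c2t9d0  mirror c3t0d0 c3t1d0 \
    mirror c3t2d0 c3t3d0  mirror c3t4d0 c3t5d0  mirror c3t6d0 c3t7d0 \
    mirror c3t8d0 c3t9d0

# 2 SSDs as a mirrored ZIL (log), 2 SSDs as L2ARC (cache, never mirrored)
zpool add tank log mirror c4t0d0 c4t1d0
zpool add tank cache c4t2d0 c4t3d0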




any suggestion?
especially how to get step No. 1 done?
Creating the mirrored root pool is easy enough at install time - just 
save the SSDs for the guest virtual disks.  All of this is in the absence 
of the actual performance characteristics you expect, but it's a 
reasonable starting point.
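
And if the installer only put the root pool on one disk, the mirror can
be added afterwards - roughly like this on x86, with example device
names (the second disk needs an SMI label and a suitably sized slice 0):

zpool attach rpool c0t0d0s0 c0t1d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0   # make the new half bootable
zpool status rpool                                                   # watch the resilver finish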


I hope that's useful...  Jeff

--


*Jeff Savit* | Principal Sales Consultant
Phone: 602.824.6275 | Email: jeff.sa...@oracle.com | Blog: 
http://blogs.oracle.com/jsavit

Oracle North America Commercial Hardware
Operating Environments & Infrastructure S/W Pillar
2355 E Camelback Rd | Phoenix, AZ 85016



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Each user has his own zfs filesystem??

2011-07-24 Thread Jeff Savit

On 07/24/2011 08:07 AM, Orvar Korvar wrote:

So, when I created users with the GUI
System > Administration > Users and Groups
menu, it did not automatically create a ZFS filesystem?? I must do that 
manually? Or are the users on a separate user filesystem?

I tried to move a large file from the user's home to /rpool, but the system 
copied it instead of moving it. This is an indication of the user having a 
separate filesystem. But the filesystem is not listed in zfs list.

Orvar,

What does it say in the new users' entries in /etc/passwd, and what do 
you see if you 'ls -l /export/home'? Perhaps you only have directories 
underneath /export/home instead of new ZFS datasets.
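
A quick way to check, and to give a user a dataset of their own (the
username and dataset path below are just placeholders):

grep username /etc/passwd          # confirm the home directory path
ls -l /export/home                 # plain directories, or dataset mountpoints?
zfs list -r rpool/export/home      # any per-user datasets will show up here

zfs create rpool/export/home/username   # create a dataset for the user
                                        # (move any existing files into it afterwards)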


Jeff

--


*Jeff Savit* | Principal Sales Consultant
Phone: 602.824.6275 | Email: jeff.sa...@oracle.com | Blog: 
http://blogs.oracle.com/jsavit

Oracle North America Commercial Hardware
Operating Environments & Infrastructure S/W Pillar
2355 E Camelback Rd | Phoenix, AZ 85016



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Dedup question

2011-01-28 Thread Jeff Savit

 On 01/28/11 02:38 PM, Igor P wrote:

I created a zfs pool with dedup with the following settings:
zpool create data c8t1d0
zfs create data/shared
zfs set dedup=on data/shared

The thing I was wondering about is that it seems like ZFS only dedups at the 
file level and not the block level. When I make multiple copies of a file to 
the store I see an increase in the dedup ratio, but when I copy similar files 
the ratio stays at 1.00x.
Igor, ZFS does indeed perform dedup at the block level. Identical files 
have identical blocks, of course, but similar files may have 
differences where data is inserted, deleted, or changed, so their blocks 
end up different. The same data has to be at the same block alignment to 
produce duplicate blocks. Also, it's important to have lots of RAM or 
high-speed devices for quick access to the dedup metadata, or removing 
data will take a long time, so please use appropriately sized systems. 
That's been discussed a lot on this list.
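
Two quick checks, using the pool name from your example - the pool-wide
ratio, and a dry run that estimates how well the existing data would
dedup and how big the dedup table would be:

zpool get dedupratio data
zdb -S data      # simulates dedup over the pool's data; can take a while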


See Jeff Bonwick's blog for a very good description: 
http://blogs.sun.com/bonwick/entry/zfs_dedup


I hope that's helpful,
  Jeff (a different Jeff)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS slows down over a couple of days

2011-01-12 Thread Jeff Savit

 Stephan,

There are a bunch of tools you can use, mostly provided with Solaris 11 
Express, plus arcstat and arc_summary, which are available as downloads.  
The latter tools will tell you the size and state of the ARC, which may be 
relevant to your issue since you cite memory.  For the list, could you 
describe the ZFS pool configuration (zpool status), and summarize output 
from vmstat, iostat, and zpool iostat?  Also, it might be helpful to 
issue 'prstat -s rss' to see if any process is growing its resident 
memory size.  An excellent source of information is the ZFS Evil Tuning 
Guide (just Google those words).
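
A few quick commands along those lines ('tank' is just a placeholder for
your own pool name):

kstat -p zfs:0:arcstats:size      # current ARC size, in bytes
kstat -p zfs:0:arcstats:c         # ARC target size, in bytes
echo ::memstat | mdb -k           # kernel/anon/free memory breakdown (as root)
prstat -s rss                     # processes sorted by resident set size
zpool iostat -v tank 5            # per-vdev I/O every 5 seconds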


I hope that helps (for a start at least)
  Jeff



On 01/12/11 08:21 AM, Stephan Budach wrote:

Hi all,

I have exchanged my Dell R610 in favor of a Sun Fire 4170 M2 which has 
32 GB RAM installed. I am running Sol11Expr on this host and I use it 
primarily to serve Netatalk AFP shares. From day one, I have noticed 
that the amount of free RAM decreased, and along with that decrease 
the overall performance of ZFS dropped as well.


Now, since I am still quite a Solaris newbie, I cannot seem to track 
down where the heck all the memory has gone and why ZFS performs so 
poorly after an uptime of only 5 days.
I can reboot Solaris, which I did for testing, and that brings 
the performance back to reasonable levels, but otherwise I am quite 
at my wits' end.
To give some numbers: the ZFS performance decreases down to 1/10th of 
the initial throughput, either read or write.


Anybody having some tips up their sleeves, where I should start 
looking for the missing memory?


Cheers,
budy


--


*Jeff Savit* | Principal Sales Consultant
Phone: 602.824.6275 | Email: jeff.sa...@oracle.com | Blog: 
http://blogs.sun.com/jsavit

Oracle North America Commercial Hardware
Operating Environments & Infrastructure S/W Pillar
2355 E Camelback Rd | Phoenix, AZ 85016



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Has anyone seen zpool corruption with VirtualBox shared folders?

2010-09-12 Thread Jeff Savit

Hi Warren,

This may not help much, except perhaps as a way to eliminate possible 
causes, but I ran b134 with VirtualBox and guests on ZFS for quite a 
long time without any such symptoms. My pool is a simple, unmirrored 
one, so the difference may be there. I used shared folders without 
incident. Guests include Linux (several distros, including RH), Windows, 
Solaris, BSD.


--Jeff

On 09/12/10 11:05 AM, Warren Strange wrote:

I posted the following to the VirtualBox forum. I would be interested in 
finding out if anyone else has ever seen zpool corruption with VirtualBox as a 
host on OpenSolaris:

-
I am running OpenSolaris b134 as a VirtualBox host, with a Linux guest.

I have experienced 6-7 instances of my zpool getting corrupted.  I am wondering 
if anyone else has ever seen this before.

This is on a mirrored zpool - using drives from two different manufacturers 
(i.e. it is very unlikely both drives would fail at the same time, with the 
same blocks going bad). I initially thought I might have a memory problem - 
which could explain the simultaneous disk failures. After running memory 
diagnostics for 24 hours with no errors reported, I am beginning to suspect it 
might be something else.

I am using shared folders from the guest - mounted at guest boot up time.

Is it possible that the Solaris vboxsf shared folder kernel driver is causing 
corruption? Being in the kernel, would it allow bypassing of the normal zfs 
integrity mechanisms? Or is it possible there is some locking issue or race 
condition that triggers the corruption?

Anecdotally, when I see the corruption the sequence of events seems to be:

- dmesg reports various vbox drivers being loaded (normal - just loading the 
drivers)
- Guest boots - gets just past the grub boot screen to the initial Red Hat 
boot screen.
- The Guest hangs and never boots.
- zpool status -v  reports corrupted files. The files are on the zpool 
containing the shared folders and the VirtualBox images


Thoughts?
   



--


Jeff Savit | Principal Sales Consultant
Phone: 602.824.6275
Email: jeff.sa...@oracle.com | Blog: http://blogs.sun.com/jsavit
Oracle North America Commercial Hardware
Operating Environments & Infrastructure S/W Pillar
2355 E Camelback Rd | Phoenix, AZ 85016



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] benefits of zfs root over ufs root

2010-04-01 Thread Jeff Savit




On 03/31/10 05:11 PM, Brett wrote:

  Hi Folks,

I'm in a shop that's very resistant to change. The management here is 
looking for major justification of a move away from UFS to ZFS for root 
file systems. Does anyone know of any whitepapers/blogs/discussions 
extolling the benefits of ZFS root over UFS root?

Regards in advance
Rep
  

Hi,

Benefits of ZFS boot are described in a number of places, such as the
ZFS boot discussion at 
http://hub.opensolaris.org/bin/view/Community+Group+zfs/boot and on
BigAdmin, along with a lot of "how to" documents.

Some other URLs you may find helpful:
http://blogs.sun.com/storage/entry/zfs_boot_in_solaris_10
http://blogs.sun.com/tabriz/entry/zfs_boot
http://www.sun.com/bigadmin/content/submitted/zfs_root_clone.jsp

FWIW I touched on it briefly in a blog entry (primarily on functionality
added after the initial ZFS boot support):
http://blogs.sun.com/jsavit/entry/zfs_live_upgrade_and_flash 
and at http://blogs.sun.com/jsavit/entry/a_new_look_at_an 

Here are a few of the specific reasons:

- You have a pool of storage and don't have to worry about creating
slices for /, /var and so forth and finding out you didn't create them
with enough space (or with too much). Putting this another way, you
don't have to preallocate file systems, and they only consume as much
space as they need.

- If you have a volume manager - you no longer need it, which reduces
complexity and possibly cost.

- You get data integrity and mirroring without effort - something you
really want on a boot device. It's just a lot easier.

- Creating an alternative boot environment for Live Upgrade is much
faster and easier, cloning existing boot environments and only storing
changed bits instead of duplicating all of them. You can have as many
boot environments as you feel like instead of being limited by the
number of slices. ZFS lets you leverage snapshots and clones to speed
up and simplify system management. Initial lucreate is faster, and
subsequent ones are MUCH faster (a quick example follows this list).

and perhaps my favorite:
- on-disk data consistency. No more fsck, ever!
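
As a small illustration of the Live Upgrade point above (the BE name and
media path are placeholders):

lucreate -n newBE                                    # clone the current ZFS root BE in seconds
luupgrade -u -n newBE -s /net/server/install/media   # upgrade or patch the inactive clone
luactivate newBE                                     # boot into it at the next reboot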

I hope that's helpful.

regards, Jeff

-- 


Jeff Savit | Principal Sales Consultant
Email: jeff.sa...@oracle.com | Blog: http://blogs.sun.com/jsavit

Oracle North America Commercial Hardware
Infrastructure Software Pillar
2355 E Camelback Rd | Phoenix, AZ 85016






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] fat32 ntfs or zfs

2010-02-27 Thread Jeff Savit




Dick Hoogendijk wrote:

On 27-2-2010 13:15, Mertol Ozyoney wrote:

This depends on what you are looking for. Generally ZFS will be more 
secure due to its checksum feature. Having seen a lot of NTFS/FAT drives 
go south due to bad sectors, I'd not classify them as very secure. 
However, NTFS and FAT can be used on nearly every OS.

And also you shouldn't forget the extra capabilities of ZFS like 
snapshots ...

I'll go with ZFS. Like someone said, with 'copies=2' for extra safety. 
That should do it, I think.

Compression will slow my system down too much, so I'll skip that one.

Dick - while you're working out your options, perhaps reconsider using
compression. I haven't observed the default compression algorithm
slowing things down: the CPU cost is modest and possibly that's
compensated by fewer I/O operations.
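
Both settings are one-liners, and compressratio shows what you're
actually getting back (the dataset name is just an example):

zfs set compression=on tank/data    # default lzjb compression; modest CPU cost
zfs set copies=2 tank/data          # store two copies of every block for extra safety
zfs get compressratio tank/data     # see how much space compression is saving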

regards, Jeff

-- 


Jeff Savit | Principal Sales Consultant
Phone: 732.537.3451
Email: jeff.sa...@sun.com | Blog: http://blogs.sun.com/jsavit

Oracle North America Commercial Hardware
Infrastructure Software Pillar
2355 E Camelback Rd | Phoenix, AZ 85016






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Flash Jumpstart and mini-root version

2010-01-29 Thread Jeff Savit
Apologies if this has already been answered and I missed it.  You need 
to be at Solaris 10 10/09 (that is, u8), or apply the following 
patches to enable this feature:


SPARC:
  - 119534-15: fixes to the /usr/sbin/flarcreate and /usr/sbin/flar commands
  - 124630-26: updates to the install software

x86:
  - 119535-15: fixes to the /usr/sbin/flarcreate and /usr/sbin/flar commands
  - 124631-27: updates to the install software

I blogged about this a few months ago at: 
http://blogs.sun.com/jsavit/entry/zfs_live_upgrade_and_flash so have a 
look at that for a little more detail.
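
For reference, checking for the patch and creating the archive look
roughly like this (archive name and path are examples only):

patchadd -p | grep 119534      # on SPARC; grep for 119535 on x86
flarcreate -n sol10u8-zfsroot /export/install/media/sol10u8.flar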


regards, Jeff


On 01/28/10 08:06 PM, Tony MacDoodle wrote:


Getting the following error when trying to do a ZFS Flash install via 
jumpstart.


error: field 1 - keyword pool

Do I have to have Solaris 10 u8 installed as the mini-root, or will 
previous versions of Solaris 10 work?


jumpstart profile below

install_type flash_install
archive_location nfs://192.168.1.230/export/install/media/sol10u8.flar

partitioning explicit
pool rpool auto 8g 8g yes
bootenv installbe bename c1t0d0s0



--
Jeff Savit
Principal Field Technologist
Sun Microsystems, Inc.
2398 E Camelback Rd   Email: jeff.sa...@sun.com
Phoenix, AZ  85016        http://blogs.sun.com/jsavit/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] dedup question

2009-11-03 Thread Jeff Savit

On 11/ 2/09 07:42 PM, Craig S. Bell wrote:

I just stumbled across a clever visual representation of deduplication:

http://loveallthis.tumblr.com/post/166124704

It's a flowchart of the lyrics to Hey Jude.  =-)

Nothing is compressed, so you can still read all of the words.  Instead, all of 
the duplicates have been folded together.   -cheers, CSB
  
This should reference the prior (April 1, 1984) research by Donald Knuth 
at http://www.cs.utexas.edu/users/arvindn/misc/knuth_song_complexity.pdf  


:-) Jeff

--
Jeff Savit
Principal Field Technologist
Sun Microsystems, Inc.    Phone: 732-537-3451 (x63451)
2398 E Camelback Rd       Email: jeff.sa...@sun.com
Phoenix, AZ  85016        http://blogs.sun.com/jsavit/ 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Performance problems with Thumper and 7TB ZFS pool using RAIDZ2

2009-10-25 Thread Jeff Savit

On 10/24/09 12:31 PM, Jim Mauro wrote:

Posting to zfs-discuss. There's no reason this needs to be
kept confidential.


okay.


5-disk RAIDZ2 - doesn't that equate to only 3 data disks?
Seems pointless - they'd be much better off using mirrors,
which is a better choice for random IO...


Hmm, they're giving up so much capacity as it is, they could just as well 
give up a bit more and get better performance. Great idea!
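
For comparison (device names invented, create one pool or the other):
both layouts give roughly three disks' worth of usable space, but the
mirrored version spends one extra disk and handles random I/O far better.

zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0    # 5-disk RAIDZ2: ~3 data disks
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
    mirror c1t4d0 c1t5d0                                       # 3 mirrored pairs: 3 data disks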


--
Jeff Savit
Principal Field Technologist
Sun Microsystems, Inc.    Phone: 732-537-3451 (x63451)
2398 E Camelback Rd       Email: jeff.sa...@sun.com
Phoenix, AZ  85016        http://blogs.sun.com/jsavit/ 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss