[zfs-discuss] How to get nfs work with zfs?

2011-12-05 Thread darkblue
I am going to share a dir and its subdirs through NFS to virtual hosts,
which include Xen (CentOS/NetBSD) and ESXi, but it failed. The following
steps are what I did:

solaris 11:

 zfs create tank/iso
 zfs create tank/iso/linux
 zfs create tank/iso/windows

 share -F nfs -o rw,nosuid,root=VM-host1:VM-host2 /tank/iso
 chmod -R 777 /tank/iso
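Note that tank/iso/linux and tank/iso/windows are separate ZFS filesystems, so a share of /tank/iso alone does not export them. An alternative sketch (untested here, and the exact sharenfs option syntax varies between Solaris releases) is to let ZFS manage the shares via the inheritable sharenfs property, so each child dataset is exported as well:

```shell
# Set sharenfs once on the parent; child datasets inherit the property
# and are each shared individually.
zfs set sharenfs='rw,nosuid,root=VM-host1:VM-host2' tank/iso

# Verify that tank/iso, tank/iso/linux and tank/iso/windows are all shared.
zfs get -r sharenfs tank/iso
```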


centos:

 mkdir /home/iso
 mount -t nfs -o rw,nosuid solaris11:/tank/iso /home/iso
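An NFSv3 mount of the parent does not cross into the child filesystems on the server, so each child either needs its own mount, or NFSv4 can traverse them from a single mount. A hedged sketch (client paths mirror the server layout and are illustrative):

```shell
# Option 1: mount each child filesystem explicitly over NFSv3.
mkdir -p /home/iso/linux /home/iso/windows
mount -t nfs -o rw,nosuid solaris11:/tank/iso/linux   /home/iso/linux
mount -t nfs -o rw,nosuid solaris11:/tank/iso/windows /home/iso/windows

# Option 2: a single NFSv4 mount, which can cross the server's nested
# filesystems (mirror mounts), provided each child is actually shared.
mount -t nfs4 solaris11:/tank/iso /home/iso
```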


echo newfile > /home/iso/newfile.txt
success

echo newfile > /home/iso/linux/newfile.txt
failed, and displayed: permission denied

and then check the dir on solaris11:

 ls -al /tank/iso

 drwxrwxrwx   5 root root   8 Dec  5 13:04 .
 drwxr-xr-x   4 root root   4 Dec  2 22:45 ..
 drwxrwxrwx   2 root root   2 Dec  2 16:54 bsd
 drwxrwxrwx   2 root root   2 Dec  2 16:54 linux
 -rw-r--r--   1 nobody   nobody 8 Dec  5 12:57 newfile.txt
 drwxrwxrwx   2 root root   2 Dec  2 16:54 windows


check the dir on CentOS:

 ls -al /home/iso

 drwxr-xr-x+ 2 root  root   2 Dec  2 16:54 bsd
 drwxr-xr-x+ 2 root  root   2 Dec  2 16:54 linux
 -rw-r--r--+ 1 nfsnobody nfsnobody  8 Dec  5 12:57 newfile.txt
 drwxr-xr-x+ 2 root  root   2 Dec  2 16:54 windows


I have a couple of questions:
1. Why is the owner of newfile.txt nfsnobody on CentOS, while on Solaris
it's nobody?
2. Why don't the subdirs have write access, and how do I get it?
3. What does the + mean?
4. Do I need to remount a shared dir after changing the file access on
Solaris (the NFS server)?
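On question 3: the trailing + in an ls listing indicates the entry carries an ACL beyond the plain mode bits. It can be inspected on both sides (paths taken from the listings above):

```shell
# On the CentOS client: show the ACL that the + refers to.
getfacl /home/iso/linux

# On the Solaris server: ls -V prints the NFSv4/ZFS ACL entries in full.
ls -V /tank/iso/linux
```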
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-11 Thread darkblue
2011/11/11 Jeff Savit jeff.sa...@oracle.com

  On 11/10/2011 06:38 AM, Edward Ned Harvey wrote:

  From: zfs-discuss-boun...@opensolaris.org
 [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jeff Savit

 Also, not a good idea for
 performance to partition the disks as you suggest.

  Not totally true.  By default, if you partition the disks, then the disk
 write cache gets disabled.  But it's trivial to force-enable it, thus
 solving the problem.
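For reference, the per-disk write cache can be re-enabled from format(1M)'s expert mode; the device name below is hypothetical, and the menu entries may vary by Solaris release:

```shell
# Enter expert mode and select the disk, then walk the cache menus:
format -e c0t0d0
#   format> cache
#   cache> write_cache
#   write_cache> enable
```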


  Granted - I just didn't want to get into a long story. With a
 self-described 'newbie' building a storage server I felt the best advice is
 to keep as simple as possible without adding steps (and without adding
 exposition about cache on partitioned disks - but now that you brought it
 up, yes, he can certainly do that).

 Besides, there's always a way to fill up the 1TB disks :-) Beyond the OS
 image, they could also store gold images for the guest virtual machines,
 maintained separately from the operational images.


How big a partition do you suggest for the Solaris OS?

regards, Jeff




 --


 Jeff Savit | Principal Sales Consultant
 Phone: 602.824.6275 | Email: jeff.sa...@oracle.com | Blog:
 http://blogs.oracle.com/jsavit
 Oracle North America Commercial Hardware
 Operating Environments & Infrastructure S/W Pillar
 2355 E Camelback Rd | Phoenix, AZ 85016






Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-11 Thread darkblue
2011/11/11 Ian Collins i...@ianshome.com

 On 11/11/11 08:52 PM, darkblue wrote:



 2011/11/11 Ian Collins i...@ianshome.com


On 11/11/11 02:42 AM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org]
On Behalf Of darkblue

1 * XEON 5606
1 * supermicro X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1T harddisk
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B chassis

I just want to say, this isn't supported hardware, and
although many people will say they do this without problem,
I've heard just as many people (including myself) saying it's
unstable that way.


I've never had issues with Supermicro boards.  I'm using a similar
model and everything on the board is supported.

I recommend buying either the Oracle hardware, or Nexenta
on whatever hardware they recommend.

Definitely DO NOT run the free version of solaris without
updates and expect it to be reliable.


That's a bit strong.  Yes I do regularly update my supported
(Oracle) systems, but I've never had problems with my own build
Solaris Express systems.

I waste far more time on (now luckily legacy) fully supported
Solaris 10 boxes!


 What does that mean?


 Solaris 10 live upgrade is a pain in the arse!  It gets confused when you
 have lots of filesystems, clones and zones.


  I am going to install Solaris 10 u10 on this server. Is there any
 compatibility problem?

 And which version of Solaris, or a Solaris derivative, do you suggest for
 building storage with the above hardware?


 I'm running 11 Express now, upgrading to Solaris 11 this weekend.  Unless
 you have good reason to use Solaris 10, use Solaris 11 or OpenIndiana.


I once considered OpenIndiana, but it's still in the development stage; I
don't know whether this version (oi_151a) is stable enough for production use.

-- 
 Ian.



Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread darkblue
2011/11/11 Ian Collins i...@ianshome.com

 On 11/11/11 02:42 AM, Edward Ned Harvey wrote:

 From: zfs-discuss-boun...@opensolaris.org [mailto:
 zfs-discuss-boun...@opensolaris.org] On Behalf Of darkblue

 1 * XEON 5606
 1 * supermicro X8DT3-LN4F
 6 * 4G RECC RAM
 22 * WD RE3 1T harddisk
 4 * intel 320 (160G) SSD
 1 * supermicro 846E1-900B chassis

 I just want to say, this isn't supported hardware, and although many
 people will say they do this without problem, I've heard just as many
 people (including myself) saying it's unstable that way.


 I've never had issues with Supermicro boards.  I'm using a similar model
 and everything on the board is supported.

  I recommend buying either the Oracle hardware, or Nexenta on whatever
 hardware they recommend.

 Definitely DO NOT run the free version of solaris without updates and
 expect it to be reliable.


 That's a bit strong.  Yes I do regularly update my supported (Oracle)
 systems, but I've never had problems with my own build Solaris Express
 systems.

 I waste far more time on (now luckily legacy) fully supported Solaris 10
 boxes!


What does that mean?
I am going to install Solaris 10 u10 on this server. Is there any
compatibility problem?
And which version of Solaris, or a Solaris derivative, do you suggest for
building storage with the above hardware?

 --
 Ian.




[zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-09 Thread darkblue
hi, all
I am a newbie on ZFS. Recently, my company has been planning to build an
entry-level enterprise storage server.
here is the hardware list:

1 * XEON 5606
1 * supermicro X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1T harddisk
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B chassis

this storage is going to serve:
1. 100+ VMware and Xen guests
2. backup storage

my original plan is:
1. create a mirrored root pool on a pair of SSDs, then partition one of them
for cache (L2ARC). Is this reasonable?
2. the other pair of SSDs will be used for the ZIL
3. I haven't got a clear scheme for the 22 WD disks yet.

any suggestion?
especially on how to get step 1 done?


Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-09 Thread darkblue
2011/11/10 Jeff Savit jeff.sa...@oracle.com

 Hi darkblue, comments in-line


 On 11/09/2011 06:11 PM, darkblue wrote:

 hi, all
 I am a newbie on ZFS. Recently, my company has been planning to build an
 entry-level enterprise storage server.
 here is the hardware list:

 1 * XEON 5606
 1 * supermicro X8DT3-LN4F
 6 * 4G RECC RAM
 22 * WD RE3 1T harddisk
 4 * intel 320 (160G) SSD
 1 * supermicro 846E1-900B chassis

 this storage is going to serve:
 1. 100+ VMware and Xen guests
 2. backup storage

 my original plan is:
 1. create a mirrored root pool on a pair of SSDs, then partition one of them
 for cache (L2ARC). Is this reasonable?

 Why would you want your root pool to be on the SSD? Do you expect an
 extremely high I/O rate for the OS disks? Also, not a good idea for
 performance to partition the disks as you suggest.

  Because having the Solaris OS occupy a whole 1TB disk is a waste.
 And the RAM is only 24G; can it handle such a big cache (160G)?

2. the other pair of SSDs will be used for the ZIL

How about using one pair of SSDs for the ZIL, and the other pair for L2ARC?
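That split could be sketched as follows on an existing data pool (device names are hypothetical; the log pair is mirrored since losing a ZIL device can cost recent writes, while cache devices need no redundancy):

```shell
# One SSD pair as a mirrored log (ZIL) device:
zpool add tank log mirror c2t0d0 c2t1d0

# The other SSD pair as L2ARC cache (cache vdevs cannot be mirrored):
zpool add tank cache c2t2d0 c2t3d0
```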


 3. I haven't got a clear scheme for the 22 WD disks yet.

 I suggest a mirrored pool on the WD disks for a root ZFS pool, and the
 other 20 disks for a data pool (quite possibly also a mirror) that also
 incorporates the 4 SSD, using 2 each for ZIL and L2ARC.  If you want to
 isolate different groups of virtual disks then you could have other
 possibilities. Maybe split the 20 disks between guest virtual disks and a
 backup pool. Lots of possibilities.
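A rough sketch of that layout, with hypothetical device names (two WD disks for root, the remaining twenty striped together as 2-way mirrors in the data pool):

```shell
# Root pool on two of the WD disks (normally created by the installer):
zpool create rpool mirror c1t0d0 c1t1d0

# Data pool as striped 2-way mirrors; repeat the "mirror diskA diskB"
# group until all 20 disks are in (only the first two pairs shown):
zpool create tank \
  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0
```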

 Hmm, could you give me an example and more detailed info?
Suppose that after mirroring the 20 hard disks we get 10TB of usable space:
6TB will be used for VM guests, and 4TB will be used for backup purposes.
Within the 6TB, 3TB might go through iSCSI to Xen domU, and the other 3TB
through NFS to the VMware guests.
The 4TB might go through NFS for backup.
Thanks in advance.
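The proposed split could be expressed as datasets and zvols roughly like this (all names and quotas are illustrative only, not a tested configuration):

```shell
# 6TB group for guest storage:
zfs create -o quota=6t tank/vguest
zfs create -s -V 3t tank/vguest/xen           # sparse zvol, for iSCSI to Xen domU
zfs create -o quota=3t -o sharenfs=on tank/vguest/vmware   # NFS to VMware guests

# 4TB backup area, also over NFS:
zfs create -o quota=4t -o sharenfs=on tank/backup
```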


 any suggestion?
 especially how to get No 1 step done?

 Creating the mirrored root pool is easy enough at install time - just
 save the SSDs for the guest virtual disks.  All of this is in the absence
 of the actual performance characteristics you expect, but that's a
 reasonable starting point.

 I hope that's useful...  Jeff

That is great, thanks Jeff

