Jim Davis wrote:
We have two aging Netapp filers and can't afford to buy new Netapp gear, so we've been looking with a lot of interest at building NFS fileservers running ZFS as a possible future approach. Two issues have come up in the discussion:

- Adding new disks to a RAID-Z pool (Netapps handle adding new disks very nicely). Mirroring is an alternative, but when you're on a tight budget, losing N/2 of your disk capacity is painful.

What about adding a whole new RAID-Z vdev and dynamically striping across the RAID-Zs? Your capacity and performance will go up with each RAID-Z vdev you add.
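
(The /var/tmp/devN paths in the transcript below are just plain files standing in for disks -- fine for a quick demonstration. Assuming a Solaris box, you could create them with mkfile; zpool wants at least 64 MB per device:)

# mkfile 128m /var/tmp/dev1 /var/tmp/dev2 /var/tmp/dev3 \
         /var/tmp/dev4 /var/tmp/dev5 /var/tmp/dev6 \
         /var/tmp/dev7 /var/tmp/dev8 /var/tmp/dev9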

Such as:
# zpool create swim raidz /var/tmp/dev1 /var/tmp/dev2 /var/tmp/dev3
# zpool status
  pool: swim
 state: ONLINE
 scrub: none requested
config:

        NAME               STATE     READ WRITE CKSUM
        swim               ONLINE       0     0     0
          raidz1           ONLINE       0     0     0
            /var/tmp/dev1  ONLINE       0     0     0
            /var/tmp/dev2  ONLINE       0     0     0
            /var/tmp/dev3  ONLINE       0     0     0

errors: No known data errors
# zpool add swim raidz /var/tmp/dev4 /var/tmp/dev5 /var/tmp/dev6
# zpool status
  pool: swim
 state: ONLINE
 scrub: none requested
config:

        NAME               STATE     READ WRITE CKSUM
        swim               ONLINE       0     0     0
          raidz1           ONLINE       0     0     0
            /var/tmp/dev1  ONLINE       0     0     0
            /var/tmp/dev2  ONLINE       0     0     0
            /var/tmp/dev3  ONLINE       0     0     0
          raidz1           ONLINE       0     0     0
            /var/tmp/dev4  ONLINE       0     0     0
            /var/tmp/dev5  ONLINE       0     0     0
            /var/tmp/dev6  ONLINE       0     0     0

errors: No known data errors
#
# zpool add swim raidz /var/tmp/dev7 /var/tmp/dev8 /var/tmp/dev9
# zpool status
  pool: swim
 state: ONLINE
 scrub: none requested
config:

        NAME               STATE     READ WRITE CKSUM
        swim               ONLINE       0     0     0
          raidz1           ONLINE       0     0     0
            /var/tmp/dev1  ONLINE       0     0     0
            /var/tmp/dev2  ONLINE       0     0     0
            /var/tmp/dev3  ONLINE       0     0     0
          raidz1           ONLINE       0     0     0
            /var/tmp/dev4  ONLINE       0     0     0
            /var/tmp/dev5  ONLINE       0     0     0
            /var/tmp/dev6  ONLINE       0     0     0
          raidz1           ONLINE       0     0     0
            /var/tmp/dev7  ONLINE       0     0     0
            /var/tmp/dev8  ONLINE       0     0     0
            /var/tmp/dev9  ONLINE       0     0     0

errors: No known data errors
#
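
Each "zpool add" grows the pool on the fly, and ZFS stripes new writes across all the top-level raidz vdevs. To confirm the capacity increase, run zpool list after each add (output omitted here; the SIZE column should roughly triple by the end):

# zpool list swim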



- The default scheme of one filesystem per user runs into problems with Linux NFS clients; on one Linux system with 1300 logins, we already have to do symlinks with amd because Linux clients can't mount more than about 255 filesystems at once. We could of course export just one filesystem and make /home/student a subdirectory of it, but then we run into problems with quotas -- and on an undergraduate fileserver, quotas aren't optional!

Have you tried using the automounter, as suggested by the Linux NFS FAQ?
http://nfs.sourceforge.net/#section_b

Look for section "B3. Why can't I mount more than 255 NFS file systems on my client? Why is it sometimes even less than 255?".
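
In practice that usually means a wildcard autofs map, so each user's filesystem is mounted on demand and unmounted when idle -- the client never holds anywhere near 255 mounts at once. A minimal sketch (the map name, server name, and paths here are assumptions, not anything from your setup):

# /etc/auto.master
/home/student   /etc/auto.student   --timeout=60

# /etc/auto.student -- '*' matches the login name, '&' substitutes
# it into the server-side path
*   -rw,hard,intr   zfsserver:/export/home/&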

Let us know if that works or doesn't work.
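
On the quota side, the one-filesystem-per-user model is exactly what makes ZFS quotas easy -- each home directory gets its own dataset with its own quota. A sketch, reusing the pool name from the example above (the dataset name and size are made up):

# zfs create swim/student1
# zfs set quota=1g swim/student1
# zfs get quota swim/student1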

Also, ask for the reasoning/schedule on when they're going to fix this on the Linux NFS alias (I believe it's [EMAIL PROTECTED]). Trond should be able to help you. If going to OpenSolaris clients is not an option, then I would be curious to know why.

eric


Neither of these problems is necessarily a showstopper, but both make the transition more difficult. Any progress on them would help sites like ours make the switch sooner.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
