Re: [zfs-discuss] zpool naming
David,

Thank you so much for sharing this.

Best Regards,
Adele

On 10/25/2011 10:23 AM, David Magda wrote:
> [quoted reply trimmed; David's full message appears below in this digest]
Re: [zfs-discuss] zpool naming
On Tue, October 25, 2011 09:42, adele@oracle.com wrote:
> Hi all,
>
> I have a customer who wants to know the maximum number of characters
> allowed in a zpool name. Are there any restrictions on using special
> characters?

255 characters. Try doing a 'man zpool':

    Creates a new storage pool containing the virtual devices specified on
    the command line. The pool name must begin with a letter, and can only
    contain alphanumeric characters as well as underscore ("_"), dash ("-"),
    and period ("."). The pool names "mirror", "raidz", "spare" and "log"
    are reserved, as are names beginning with the pattern "c[0-9]". The
    vdev specification is described in the "Virtual Devices" section.

Or, use the source, Luke. Going to http://src.opensolaris.org and searching
for "zpool" turns up:

    http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool/zpool_main.c

Inside it is a zpool_do_create() function, which defines a 'char *poolname'
variable. From there, zpool_create() in libzfs/common/libzfs_pool.c calls
zpool_name_valid(), which calls pool_namecheck(), where we end up with the
following code snippet:

    /*
     * Make sure the name is not too long.
     *
     * ZPOOL_MAXNAMELEN is the maximum pool name length used in userland,
     * which is the same as MAXNAMELEN used in the kernel.
     * If the ZPOOL_MAXNAMELEN value is changed, make sure to clean up all
     * places using MAXNAMELEN.
     */
    if (strlen(pool) >= MAXNAMELEN) {
            if (why)
                    *why = NAME_ERR_TOOLONG;
            return (-1);
    }

Check the function for further restrictions:

    http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/common/zfs/zfs_namecheck.c#288
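A quick way to see those rules in action from a shell; the device name below
is just a placeholder for a scratch disk or file-backed vdev, and the exact
error wording varies by release:

    # c2t0d0 is a placeholder device; use a scratch disk or file vdev to test.
    zpool create tank1 c2t0d0            # OK: begins with a letter, alphanumeric
    zpool create backup-2011.10 c2t0d0   # OK: dash and period are allowed
    zpool create raidz c2t0d0            # fails: "raidz" is a reserved name
    zpool create c0data c2t0d0           # fails: names matching "c[0-9]..." are reserved
    zpool create 1pool c2t0d0            # fails: the name must begin with a letter
    # Anything 256 characters or longer trips the MAXNAMELEN check shown above.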
[zfs-discuss] zpool naming
Hi all,

I have a customer who wants to know the maximum number of characters allowed
in a zpool name. Are there any restrictions on using special characters?

Thanks in advance for any advice.

Adele
Re: [zfs-discuss] ZFS in front of MD3000i
On Mon, Oct 24, 2011 at 5:45 PM, Ray Van Dolson wrote:
> Unfortunately, it doesn't look like the MD3000i supports this (though
> this[1] post seems to reference an Enhanced JBOD mode), so we decided to
> create a whole bunch of single-disk RAID0 LUNs and expose those. Great...
> except that the MD3000i only lets you create 16 LUNs and we have 44 disks
> total. :)
>
> Anyone tried this? I guess our best bet will be to just do all the RAID
> stuff on the MD3000i and export one LUN to ZFS.

When I have run into this kind of problem (a RAID unit that does not support
JBOD, or one drive per LUN, or enough LUNs), I usually carve the array disks
into small RAID0 sets and present those to ZFS.

Currently I am reusing some old Sun 3511 arrays, each loaded with 12 x 500 GB
drives. The 3511 supports single-drive LUNs but discourages their use (the
documentation says they are for support use only), so I created RAID0 sets of
two drives each, and ZFS sees 6 x 1 TB LUNs. ZFS then provides my redundancy
and data integrity.

--
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company ( http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players
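For illustration, handing such LUNs to ZFS might look roughly like this; the
device names are invented, and the raidz2 layout is just one possible choice
rather than necessarily what Paul used:

    # Hypothetical device names for the six 1 TB LUNs (each a two-drive RAID0
    # set on the array). A single raidz2 vdev lets ZFS supply the redundancy
    # and end-to-end checksumming on top of the array's plain stripes.
    zpool create tank raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0

    # Confirm the layout before loading data.
    zpool status tank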
Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs
On Fri, Oct 21, 2011 at 9:33 PM, Mike Gerdts wrote:
> Some people have trained their fingers to use the -f option on every
> command that supports it to force the operation. For instance, how often
> do you do rm -rf vs. rm -r and answer questions about every file?

The last time I tried it, the Sun/Oracle Java webconsole plugin for ZFS used
the -f option on every command issued, including zpool create and zpool
import. Very dangerous in my opinion, which is part of the reason I don't use
that interface.

--
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company ( http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players
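A small example of the safety net that habitual -f throws away (the pool name
is hypothetical):

    # Without -f, ZFS refuses to import a pool that appears to be in use by
    # another system (e.g. a shared LUN that was never exported). That
    # refusal is the protection.
    zpool import tank

    # With -f, the check is overridden; if the pool really is live on another
    # host, forcing the import can corrupt it.
    zpool import -f tank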
Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs
> Some people have trained their fingers to use the -f option on every
> command that supports it to force the operation. For instance, how often
> do you do rm -rf vs. rm -r and answer questions about every file?
>
> If various zpool commands (import, create, replace, etc.) are used against
> the wrong disk with a force option, you can clobber a zpool that is in
> active use by another system. In a previous job, my lab environment had a
> bunch of LUNs presented to multiple boxes. This was done for convenience
> in an environment where there would be little impact if an errant command
> were issued. I'd never do that in production without some form of I/O
> fencing in place.

I also have that habit. It is a good point to bear in mind.

Thanks,
Fred
Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs
Paul,

Thanks. I understand now.

Fred

> -----Original Message-----
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Paul Kraus
> Sent: Monday, October 24, 2011 22:38
> To: ZFS Discussions
> Subject: Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs
>
> On Sat, Oct 22, 2011 at 12:36 AM, Paul Kraus wrote:
>
> > Recently someone posted to this list about that _exact_ situation: they
> > loaded an OS onto a pair of drives while a different pair of drives
> > containing an OS was still attached. The zpool on the first pair ended
> > up unable to be imported, and was corrupted. I can post more info when I
> > am back in the office on Monday.
>
> See the thread started on Tue, Aug 2, 2011 at 12:23 PM with a Subject of
> "[zfs-discuss] Wrong rpool used after reinstall!", the follow-ups, and at
> least one additional related thread.
>
> While I agree that you _should_ be able to have multiple unrelated boot
> environments on hard drives at once, it seems prudent to me NOT to do so.
> I assume you _can_ manage multiple ZFS-based boot environments using Live
> Upgrade (or whatever has replaced it in 11). NOTE that I have not done
> this (managed multiple ZFS boot environments with Live Upgrade), but I
> ASSUME you can.
>
> I suspect that the "root" of this potential problem is in the ZFS boot
> code and the use of the same zpool name for multiple zpools at once. By
> having the boot loader use the zpool directly, you get the benefit of ZFS
> redundancy much earlier in the boot process (the only thing that appears
> to load off a single drive is the boot loader itself; everything from
> there on loads from the mirrored zpool, at least on my NCP 3 system, my
> first foray into ZFS root). The danger is that if there are multiple
> zpools with the same (required) name, the boot loader may become
> confused, especially if drives get physically moved around.
>
> --
> Paul Kraus
> -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
> -> Sound Coordinator, Schenectady Light Opera Company ( http://www.sloctheater.org/ )
> -> Technical Advisor, RPI Players
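If you do need to examine another installation's root pool from a running
system, one way to sidestep the name clash is to import it under a different
name and an alternate root; the pool and mount names below are placeholders:

    # List importable (not-yet-imported) pools first to confirm what is there.
    zpool import

    # Import the on-disk pool named "rpool" as "oldrpool", mounted under /a,
    # so it cannot collide with the running system's own rpool. If it was
    # never exported by its original host, -f may be required; verify it is
    # genuinely not in use elsewhere before forcing.
    zpool import -R /a rpool oldrpool

    # Export it again before detaching the drives or reinstalling over them.
    zpool export oldrpool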
[zfs-discuss] unsubscribe
unsubscribe