I second what John wrote. We have never experienced a device taking out all
devices in a zone - because we went with the best practice of one adapter <->
one target per zone from the beginning.
For clarity and sanity's sake, use aliases for each device on both the host
side and the target side, with a naming convention like 'host-fcs2' or
'switch2-rmt12' for the aliases. Create the zones using the alias names, then
roll everything up into a 'config' that you enable (load into flash memory on
the switch).
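
As a rough sketch, the initial setup on a Brocade switch might look like this
(Fabric OS syntax; the alias names, WWNs and config name below are invented
for illustration):

 admin> alicreate "host-fcs2", "10:00:00:05:1e:01:23:45"      # host HBA alias (example WWN)
 admin> alicreate "switch2-rmt12", "50:01:04:f0:00:ab:cd:ef"  # tape drive alias (example WWN)
 admin> zonecreate "z_host-fcs2__switch2-rmt12", "host-fcs2; switch2-rmt12"  # one initiator, one target
 admin> cfgcreate "PROD_CFG", "z_host-fcs2__switch2-rmt12"    # roll the zones up into a config
 admin> cfgadd "PROD_CFG", "z_host-fcs3__switch2-rmt13"       # add further zones (created the same way)
 admin> cfgsave                                               # save to flash
 admin> cfgenable "PROD_CFG"                                  # activate the config

Because each zone name is built from its two aliases, you can read the fabric
layout straight out of cfgshow.
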
Thus, six months down the line, when an adapter or device fails and is
replaced and you are scratching your head at the schema you drew for yourself
on a scrap of A4 ... you just log in to the switch(es) and do:
 admin> alishow                               # list your device aliases
 admin> aliadd "<aliasname>", "<new_WWN>"     # add the new WWN to the alias
 admin> aliremove "<aliasname>", "<old_WWN>"  # remove the old WWN from the alias
 admin> cfgshow                               # show the updated alias in the defined config
 admin> cfgsave                               # save the config to internal flash
 admin> cfgenable "<cfgname>"                 # enable the changed config
 admin> cfgactvshow                           # sanity check the effective config
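
A concrete pass through those steps, assuming the failed HBA was aliased
'host-fcs2' and lives in a config called 'PROD_CFG' (alias, config name and
WWNs are all made up for illustration):

 admin> aliadd "host-fcs2", "10:00:00:05:1e:67:89:ab"     # WWN of the replacement HBA (example)
 admin> aliremove "host-fcs2", "10:00:00:05:1e:01:23:45"  # WWN of the failed HBA (example)
 admin> cfgsave
 admin> cfgenable "PROD_CFG"

Every zone that references 'host-fcs2' picks up the new WWN; none of the zone
definitions themselves need to be touched.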

HTH
Ian Smith
Oxford University Computing Services
England.


On Wednesday 07 Feb 2007 1:57 am, John Monahan wrote:
> It is best practice to put one initiator and one target in each zone.  It
> may seem cumbersome, but it's really not that bad.  You'll be happy you did
> it if you ever have SAN problems down the road.  I have seen one device
> take out all other devices within the same zone before, more than once.
> Just pick a good naming convention for your zones so you can tell exactly
> what is in each zone just from the name.  I also prefer to use aliases, so
> when you replace an HBA or tape drive you just update the alias with the
> new PWWN instead of going in and changing 20 different zones.
>
>
> ***Please note new address, phone number, and email below***
> ______________________________
> John Monahan
> Consultant
> Logicalis
> 5500 Wayzata Blvd Suite 315
> Golden Valley, MN 55416
> Office: 763-417-0552
> Cell: 952-221-6938
> Fax:  952-833-0931
> [EMAIL PROTECTED]
> http://www.us.logicalis.com
>
> "Schneider, John" <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> 02/06/2007 05:05 PM
> Please respond to: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>
> To: ADSM-L@VM.MARIST.EDU
> cc:
> Subject: Tape drive zones for FC drives - best practices
>
> Greetings,
>         My habit with regard to zoning FC tape drives has always been to put
> one host HBA in a zone with all the tape drives it should see, and to have a
> separate zone for each host HBA.  For example, in a situation with 2 host
> HBAs and 10 tape drives, I would have two zones, one with one host HBA and 5
> tape drives, and the other with the other host HBA and 5 tape drives.
> Pretty simple.
>
>         But an IBM consultant working here is telling me that the best
> practice is to have a separate zone for each HBA/tape drive pair.  So in my
> example above, I would have 20 zones instead of two.  His claim is that an
> individual tape drive can hang all the other drives if they are in the same
> zone, but not if they are in separate ones.  Has anyone seen this in real
> life?
>
>         This becomes important to me because I am about to put in new SAN
> switches, and he wants me to follow this recommendation.  I have 2 TSM
> servers with 4 HBAs each, 4 NDMP nodes, and 14 tape drives.  Using my
> scheme I would have 12 zones; with his scheme I would have 56 zones.  That
> seems like a lot of zones, and unnecessarily cumbersome.
>
>         Is it really necessary to isolate each HBA/tape drive pair into a
> separate zone?  Do individual tape drives really hang other drives in their
> zone?
>
> Best Regards,
>
> John D. Schneider
> Sr. System Administrator - Storage
> Sisters of Mercy Health System
> 3637 South Geyer Road
> St. Louis, MO.  63127
> Email:  [EMAIL PROTECTED]
> Office: 314-364-3150, Cell:  314-486-2359
