RE: Storage Area Networking [7:56857]

2002-11-05 Thread Symon Thurlow
I agree with Steven. SANs are really good at addressing a need, but for
smaller companies, having all your storage eggs in one basket can
(potentially) be a problem.

I have done some work for a company that uses ESA1s, the old SCSI-based
StorageWorks units. They had a few troubles with one of them, which held
the data volumes for Exchange, the file server, the database server, etc.
When that baby went down, so did EVERYTHING else.

I guess it is a trade-off between functionality and risk.

Symon




Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=56887&t=56857



Re: Storage Area Networking [7:56857]

2002-11-05 Thread Aaron Ajello
> Is this an example of what you call snapping?
> http://www.samag.com/documents/s=1824/sam0201j/0201j.htm
Yes, it sounds like the same thing.  When you snap with an EMC SAN, you're
not actually making a second copy of the volume.  The SAN just makes the
volume available to two different servers.  Any changes to the volume are
stored in cache, not actually applied to the data.  When the snap session is
ended, any changes made from the production side are folded into the volume
and any changes made on the snap/test side are thrown away.  I don't know
anything about Solaris or Veritas snapping, but I assume it's the same
thing.  They're both done with software, and any changes are stored in cache.
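
To make the mechanics concrete, here is a minimal sketch in Python of how a
snap session like that could behave. It's a hypothetical model written for
illustration, not EMC's actual implementation; the class and names are
invented.

    # Hypothetical model of an EMC-style "snap" session: during the session,
    # writes from either side land in a cache instead of on the base volume.
    # Ending the session folds the production side's changes into the volume
    # and throws the snap/test side's changes away.

    class SnapSession:
        def __init__(self, volume):
            self.volume = volume            # base volume: block number -> data
            self.changes = {"prod": {}, "snap": {}}

        def read(self, side, block):
            # A side sees its own cached changes first, then the base volume.
            return self.changes[side].get(block, self.volume.get(block))

        def write(self, side, block, data):
            # Nothing is written to the base volume during the session.
            self.changes[side][block] = data

        def end(self):
            # Fold production changes in; discard the test side's changes.
            self.volume.update(self.changes["prod"])
            self.changes = {"prod": {}, "snap": {}}

    vol = {0: "jan-orders", 1: "feb-orders"}
    s = SnapSession(vol)
    s.write("snap", 1, "scratch-test-data")   # test server experiments freely
    s.write("prod", 0, "jan-orders-fixed")    # production keeps working
    s.end()
    assert vol == {0: "jan-orders-fixed", 1: "feb-orders"}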

> Are snaps the same thing as a scratch disk?
I don't really know what a scratch disk is.  The term rings a bell
somewhere, but I couldn't tell you.

> Where does one learn how to do volume sizing for growth/performance, or
> does the SAN do it for you automagically in some ways?
You've got to do it the old-fashioned way: lift and move.  An EMC Clariion
will not let you grow a volume.  If you run out of space, you have to create
a new, bigger volume and move the data yourself.  I saw a demonstration of
MTI's SAN product and they said they could make a volume grow, but they also
said most operating systems wouldn't be able to see the increased size.
What the point is then, I have no idea.

> How does this 'better advantage of disk space' work, exactly?
You can create a RAID X, chop it up into multiple volumes and then give
those volumes to multiple servers.
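
A rough sketch of that bookkeeping, with invented names (a toy model, not
any vendor's API): one RAID group's usable capacity gets carved into
volumes, and each volume is handed to a different server.

    # Hypothetical sketch of carving one RAID group into volumes for
    # several servers; the class and method names are made up.

    class RaidGroup:
        def __init__(self, usable_gb):
            self.free_gb = usable_gb
            self.volumes = {}               # volume name -> (size, server)

        def carve(self, name, size_gb, server):
            if size_gb > self.free_gb:
                raise ValueError("not enough free space in the RAID group")
            self.free_gb -= size_gb
            self.volumes[name] = (size_gb, server)

    rg = RaidGroup(usable_gb=400)           # e.g. a RAID 5 set's usable space
    rg.carve("vol-exchange", 100, "mail1")  # one RAID group, four volumes,
    rg.carve("vol-files", 150, "files1")    # four different servers
    rg.carve("vol-db", 100, "db1")
    rg.carve("vol-web", 50, "web1")
    print(rg.free_gb)                       # 0 -- the whole group is in use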

> What do networks have to do with this?
Try not to think of it as an IP network; think of it as a storage network.
Instead of having IP devices on either side, you connect disks to servers.
On an IP network, a workstation connects to an email server.  On a storage
network, a server connects to disk space.  It's networking in a basic sense:
something connected to something else.  That connection goes through a
switch, just like a connection from a workstation passes through a switch to
reach an email server.







Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=56897&t=56857



RE: Storage Area Networking [7:56857]

2002-11-05 Thread Frank Dagenhardt
There is no more risk in the SAN than there would be in a normal array. You
still have controllers and disks that can fail.

Frank 


-Original Message-
From: Symon Thurlow [mailto:sthurlow@webvein.com]
Sent: Tuesday, November 05, 2002 6:42 AM
To: [EMAIL PROTECTED]
Subject: RE: Storage Area Networking [7:56857]


I agree with Steven. SANs are really good at addressing a need, but for
smaller companies, having all your storage eggs in one basket can
(potentially) be a problem.

I have done some work for a company that uses ESA1s, the old SCSI-based
StorageWorks units. They had a few troubles with one of them, which held
the data volumes for Exchange, the file server, the database server, etc.
When that baby went down, so did EVERYTHING else.

I guess it is a trade-off between functionality and risk.

Symon




Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=56903&t=56857



Re: Storage Area Networking [7:56857]

2002-11-05 Thread Priscilla Oppenheimer
Thanks for all the very helpful replies.

FYI, I also finally found a good link at Cisco that describes SANs in a
top-down way, with pictures, and few acronyms. ;-)

Here it is:

http://www.cisco.com/en/US/products/hw/ps4159/ps4358/products_white_paper09186a00800c4660.shtml

Priscilla


RE: Storage Area Networking [7:56857]

2002-11-04 Thread Aaron Ajello
Hey, you just answered my question (spoofing), and now I'm answering yours,
or at least I'll try to...

It's networking in that it uses a switch, but it's not networking at all
like Cisco stuff.  The switch in a SAN connects servers to a storage
processor, and the storage processor is connected to the disks.  A storage
processor is nothing more than a standard motherboard, CPU, memory, and a
small disk running a stripped-down version of NT 4 (if you can believe that).

The switch has fiber coming into it from the servers, and fiber coming out
of it going to the storage processor.  The servers have an HBA (Host Bus
Adapter), which is basically a SCSI card that you connect a fiber cable to.
The other end of that cable attaches to the switch.  So the switch allows
the server to speak to the storage processor.  The storage processor allows
the server to see disks.

server - switch - storage processor - disks
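
Here's a toy Python model of that path, purely to illustrate the roles (all
names invented): the switch provides connectivity, and the storage processor
decides which volumes each connected server is allowed to see.

    # Toy model of the path: server -> switch -> storage processor -> disks.
    # Purely illustrative; classes and names are invented.

    class StorageProcessor:
        def __init__(self):
            self.lun_map = {}                    # server name -> list of LUNs

        def present(self, server, lun):
            self.lun_map.setdefault(server, []).append(lun)

        def luns_for(self, server):
            return self.lun_map.get(server, [])

    class Switch:
        def __init__(self, sp):
            self.sp = sp
            self.ports = set()                   # servers with an HBA cabled in

        def connect(self, server):
            self.ports.add(server)

        def query(self, server):
            # A server only sees disks if it is cabled to the switch AND the
            # storage processor has been told to present LUNs to it.
            if server not in self.ports:
                return []
            return self.sp.luns_for(server)

    sp = StorageProcessor()
    sp.present("mail1", "LUN0")
    fabric = Switch(sp)
    fabric.connect("mail1")
    print(fabric.query("mail1"))   # ['LUN0']
    print(fabric.query("web1"))    # [] -- not cabled, sees nothing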

I manage an EMC Clariion, which is their midrange SAN.  The advantage is
you can add disk space to an existing server quickly and easily, sometimes
without a reboot if it's already connected to the SAN.  You can also easily
move a volume from one server to another.  Another use is something they
call snap.  Snapping is taking a volume and creating a second, virtual copy
that you can allow another server to see.  Say you had a database you
wanted to test something on; you could snap it and then allow another, test
server to see the snap copy.  Then you test whatever you want without
actually affecting the original, production data.  Another advantage is
better use of disk space.  You could create a RAID 5 and then chop it up
into 4 volumes and give one volume to four different servers, or you could
give them all to one server, or whatever else.  You do have to worry about
contention, but it's possible to take better advantage of disk space.
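
For what it's worth, the "move a volume from one server to another" part
can be pictured as nothing more than a mapping change on the storage
processor; the data itself never moves. A hedged sketch (invented names):

    # Hypothetical sketch: "moving" a volume between servers is a change to
    # the storage processor's mapping, not a copy of the data.

    lun_map = {"LUN0": "oldserver"}          # volume -> server it is presented to

    def move_volume(lun, new_server):
        lun_map[lun] = new_server            # the blocks on disk never move

    move_volume("LUN0", "newserver")
    print(lun_map)                           # {'LUN0': 'newserver'}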

Basically you're right, it's a new way of managing hard drives, with added
capabilities.

If you'd like feel free to email me with any questions.  I think SANs are
pretty cool, so I'm happy to talk about them.

-Aaron



Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=56860&t=56857



Re: Storage Area Networking [7:56857]

2002-11-04 Thread dre
Aaron Ajello wrote in message
news:200211050047.AAA01492@groupstudy.com...
> I manage an EMC Clariion, which is their midrange SAN.  The advantage is
> you can add disk space to an existing server quickly and easily, sometimes
> without a reboot if it's already connected to the SAN.  You can also
> easily move a volume from one server to another.  Another use is something
> they call snap.  Snapping is taking a volume and creating a second,
> virtual copy that you can allow another server to see.  Say you had a
> database you wanted to test something on; you could snap it and then allow
> another, test server to see the snap copy.  Then you test whatever you
> want without actually affecting the original, production data.  Another
> advantage is better use of disk space.  You could create a RAID 5 and then
> chop it up into 4 volumes and give one volume to four different servers,
> or you could give them all to one server, or whatever else.  You do have
> to worry about contention, but it's possible to take better advantage of
> disk space.

Aaron, I have some questions you might be able to answer.

Is this an example of what you call snapping?
http://www.samag.com/documents/s=1824/sam0201j/0201j.htm
If so, what's the difference between doing snapping with software
(e.g. Solaris default or Veritas) and with a storage area network?

Are snaps the same thing as a scratch disk?  I guess my question is:
what's a scratch disk?  Why would you want a regular disk (not in a
RAID 0, 1, 0+1, 5), or set of disks, in a volume besides 'snaps'?

Also, I find RAID 0, 0+1, and 5 and volumes interesting because you can
theoretically use up the closest spindles on the disk (which I understand
to be faster for disk I/O) for certain purposes.  Is there any
practice/theory behind doing this?  Is it more of an art or a science?
Where does one learn how to do volume sizing for growth/performance, or
does the SAN do it for you automagically in some ways?  How does this
'better advantage of disk space' work, exactly?  What do networks have to
do with this?

Thanks,
-dre




Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=56863&t=56857



Re: Storage Area Networking [7:56857]

2002-11-04 Thread Steven A. Ridder
We in the Cisco world are just entering the SAN arena, but it isn't new
technology.  The only new thing will be iSCSI.  My company is HP's and
EMC's largest reseller, so we have been doing this stuff for a while, but
it's brand new to me.  I have been picking everyone's brains the past few
months to understand what all the hubbub is about in the SAN arena.  Here
is what I have learned so far.

The obvious: first off, you need an off-disk place to store the data should
the HD fail.  In the beginning there was the tape drive, usually connected
to the same SCSI bus as the hard drives of the server.  Since everything
was SCSI, and local to the server, it was quick and speedy, and you didn't
have to worry about disk timeouts, LUN addressing, distance, etc.  The
limitation was obviously the challenge of managing potentially hundreds of
tape drives.

So someone came out with the idea of creating a large disk system that many
servers could connect to via SCSI.  This offered a more centralized
solution for locally connected servers, but if a large company had many
clusters of servers spread over a large city, state, country, continent and
so on, this solution couldn't meet that need, since the servers still
connected to the central disk system via a SCSI bus.  What was needed was a
way to transport data over a network.  At the time, 10/100 Ethernet was not
fast enough, both because of the 100 Mbit/s limitation (vs the gigabit
speeds of a local SCSI bus) and the MTU of Ethernet.  If I tried to
transfer even a 512-byte chunk of data from one SCSI HD to another over
Ethernet, the HD would time out and give errors.
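
Some back-of-the-envelope arithmetic behind that speed gap (my numbers, not
from the original post; raw link rates only, ignoring protocol overhead and
disk latency):

    # Back-of-the-envelope comparison of raw link rates.

    def transfer_seconds(megabytes, link_mbit_per_s):
        return megabytes * 8 / link_mbit_per_s

    backup_mb = 10_000                             # a 10 GB backup job
    for name, rate in [("Fast Ethernet", 100), ("1G Fibre Channel", 1000)]:
        t = transfer_seconds(backup_mb, rate)
        print(f"{name:>17}: {t / 60:.1f} minutes")
    # Fast Ethernet: ~13.3 minutes of raw transfer; FC: ~1.3 minutes.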

I think this is where FC came in, with initial speeds of 1 Gbit/s and a
direct encapsulation of raw SCSI data, eliminating the timeout issues and
the MTU limit, as a raw block of data could be larger than 1500 bytes.  The
FC spec also offered a way to address LUNs on servers.  The only problem I
can find with FC is that there is no standardization: each FC switch vendor
offers its own flavor of FC, which in turn needs its own approved FC cards
for the server, and each vendor of server/disk system needs to approve its
use.
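
The MTU point is easy to see with numbers: a standard Ethernet frame
carries at most 1500 bytes of payload, while a Fibre Channel frame's data
field can carry up to 2112 bytes, so a 2 KB SCSI data burst fits in a
single FC frame but must be split and reassembled on Ethernet. A quick
illustration:

    import math

    # Illustrative only: how many link-layer frames a single SCSI data
    # burst needs on classic Ethernet vs on Fibre Channel.
    ETHERNET_MTU = 1500     # bytes of payload per standard Ethernet frame
    FC_DATA_FIELD = 2112    # max bytes of payload per Fibre Channel frame

    def frames_needed(burst_bytes, payload_per_frame):
        return math.ceil(burst_bytes / payload_per_frame)

    burst = 2048            # a 2 KB SCSI data burst
    print(frames_needed(burst, ETHERNET_MTU))   # 2 -- split and reassembled
    print(frames_needed(burst, FC_DATA_FIELD))  # 1 -- carried whole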

The next step is iSCSI, which will offer vendor interoperability and
eliminate the separation of IP and FC networks.  On the LAN end, Cisco is
going after Brocade with a new switch in the 9xxx family (can't remember
the exact name) that, on technical merits, beats any Brocade switch hands
down (now if only the EMCs, HPs, Hitachis and IBMs would certify it).  The
9xxx has 128 ports on 1 bus, vs a large Brocade that has 32 ports over 2
buses, for a total of 64.  Not only that, the 9xxx switch looks like a Cat
6k, and is therefore modular, and can combine FC/IP/iSCSI all in 1 box.
Cisco hasn't come up with a go-to-market strategy yet, but I have met with
one of the Technical Product Managers at Cisco, and it's coming any day
now, so expect to see Cisco go head to head with Brocade.
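
Conceptually, iSCSI just wraps SCSI commands in PDUs and carries them over
an ordinary TCP connection, which is what makes them routable over any IP
network. The sketch below is a loose illustration of that idea, not the
real iSCSI PDU format; the framing and names are invented (3260 is the
registered iSCSI port).

    import socket
    import struct

    # Conceptual illustration only -- NOT the real iSCSI PDU layout.
    # The idea: a SCSI command block travels inside a TCP stream, so any
    # IP network (and any IP router) can carry it.

    def wrap_scsi_command(cdb: bytes) -> bytes:
        # Hypothetical framing: a length prefix followed by the SCSI CDB.
        return struct.pack("!I", len(cdb)) + cdb

    READ_10 = bytes([0x28, 0, 0, 0, 0, 0x10, 0, 0, 0x08, 0])  # SCSI READ(10)

    def send_over_ip(host: str, port: int, cdb: bytes) -> None:
        with socket.create_connection((host, port)) as s:
            s.sendall(wrap_scsi_command(cdb))

    # send_over_ip("192.0.2.10", 3260, READ_10)   # 3260 is the iSCSI port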

That may tackle one issue, but I have other needs where I need Cisco today:

Now the big thing is DR, where I can back up data over WANs to a remote DR
site.  The problems I am encountering now are twofold: I can't use a Cisco
WAN router to take FC on the LAN end and send it over a WAN link such as a
T1 or T3.  I have customers doing AVVID and storage, but it's over IP, and
not FC or iSCSI.  Cisco is off on the right foot with AVVID, but it needs
an "S" at the end ("S" is for storage).  Once I can combine all 4 (from
what I can gather, storage is just another application with its own needs:
it *CAN* use a ton of bandwidth and is latency-sensitive, like SNA or
video), I can tell large, LARGE enterprises that we have a great DR
solution.  I don't think that SANs are for most companies, just the large
ones.  The other problem I have is that none of the Cisco gear is
certified, and it doesn't matter how awesome Cisco's gear is: if the
vendors won't certify it, it will fail.  If I had to add a third problem,
I'd say iSCSI hasn't lived up to its hype yet, and there are very few
products (servers and disk systems) out there that offer native iSCSI.

I am not a SAN expert, but I have seen more companies willing to invest in
a SAN than in an IP telephony network, so it's a good thing to learn, but
not today.


Priscilla Oppenheimer wrote in message
news:200211050001.AAA21659@groupstudy.com...
> Is anyone using Storage Area Networking? How do you use it? How well does
> it work? What problems does it solve for you?
>
> Is it really networking, the way we know the term? It sounds like it's
> sort of the next generation of file servers, but it also sounds like it's
> just a new way of managing hard drives.
>
> I'm having a difficult time figuring out what it really is. Thanks for
> helping me understand it.
>
> ___
>
> Priscilla Oppenheimer
> www.troubleshootingnetworks.com
> www.priscilla.com




Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=56871&t=56857

Re: Storage Area Networking [7:56857]

2002-11-04 Thread Howard C. Berkowitz
I'm relatively new to SAN thinking, but I find a remarkable 
similarity between the description of SANs and the internal 
organization of a carrier-grade router. Both seem to have separate 
control processors that control interface processors connected to a 
high-speed crossbar (presumably optical or 
electronic-optical-electronic) switching fabric.




Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=56873&t=56857