RE: High Density Multimode Runs BCP?

2005-01-27 Thread Neil J. McRae


> a) Find/adapt a 24/48 thread inside-plant cable (either 
> multimode, or condition single mode) and connectorize the 
> ends. Adv: Clean, Single, high density cable runs, Dis: Not 
> sure if such a beast exists in multimode, and the whole cable 
> has to be replaced/made redundant if one fiber dies and you 
> need a critical restore, may need a break out shelf.


We use multicore fibre cables in our datacentre and nodes.
In Europe we can obtain these made to order in specific
sizes. If we need to bring them back to a central area we
use an ODF and then patch accordingly with fibre boxes
in cabinets. The multicores are in a rugged PVC-type plastic
sheath [the same type of plastic/PVC that is used for gas piping
in the streets here in Europe]. You have to do some serious
hacking to damage this type of cable, and if you've ever
been to Telehouse in London you'll know the type
of situation that I mean.

This is the company we use in the UK to do a lot
of this work:

http://www.mainframecomms.co.uk/products_cables.html

I can't recommend them highly enough.

You may also want to look at passive optical kit that you
can use to do CWDM cheaply.
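
For reference, the passive grid being alluded to is the ITU-T
CWDM plan: 18 channels spaced 20 nm apart from 1271 to 1611 nm.
A one-line sketch of the channel list (illustrative only):

    # ITU-T G.694.2 CWDM grid: 18 channels, 20 nm spacing, 1271-1611 nm.
    channels_nm = [1271 + 20 * n for n in range(18)]
    print(channels_nm)  # [1271, 1291, ..., 1611]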

Regards,
Neil.



Re: High Density Multimode Runs BCP?

2005-01-26 Thread Valdis Kletnieks
On Thu, 27 Jan 2005 00:48:25 EST, "Hannigan, Martin" said:

> As I said earlier, ribbon isn't designed for data centers,
> nor is innerduct designed for ribbon.
> 
> I'd love to see some photos of people using innerduct+ribbon
> cable. :-)

And let me guess - it probably actually works (more or less) for long enough
that the perpetrators have utilized the usual turnover rate to secure other
employment before the full extent of the horror becomes obvious? ;)

(In a quarter century, I have *yet* to encounter a cable-gone-horrorshow where
the perpetrators are still to be found.  Wonder how the cable *knows* that it's
now safe to go bad...)





RE: High Density Multimode Runs BCP?

2005-01-26 Thread Hannigan, Martin



Just in case some folks are wondering what we are
talking about, here's a decent URL covering it:


http://images.google.com/imgres?imgurl=http://www.tpub.com/neets/tm/30NVM053.GIF&imgrefurl=http://www.tpub.com/neets/tm/107-8.htm&h=387&w=397&sz=13&tbnid=gGUI7fKu6OwJ:&tbnh=116&tbnw=119&start=16&prev=/images%3Fq%3Dfiber%2Bribbon%2Bcable%26hl%3Den%26lr%3D%26c2coff%3D1





--
Martin Hannigan (c) 617-388-2663
VeriSign, Inc.  (w) 703-948-7018
Network Engineer IV   Operations & Infrastructure
[EMAIL PROTECTED]



> -----Original Message-----
> From: Scott McGrath [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, January 26, 2005 10:44 PM
> To: Hannigan, Martin
> Cc: nanog@merit.edu
> Subject: RE: High Density Multimode Runs BCP?
> 
> 
> 
> Hi, Martin
> 
> Yes indeed, the ribbon cable.  Tho' due to the damage factor I probably
> would not specify it again unless I could use innerduct to
> protect it:
> we had some machine room renovations done, and the construction workers
> managed to kink the underfloor runs as well as setting off the Halon
> system several times...
> 
> 
> The ribbon cables work well if they are adequately protected.  If the
> people in the machine room environment are skilled at handling fiber
> there should be no problems.   If however J. Random Laborer 
> has access I
> would go with conventional armored runs.
> 
> 
> Scott C. McGrath
> 
> On Wed, 26 Jan 2005, Hannigan, Martin wrote:
> 
> >
> > The ribbon cable?
> >
> >
> >
> >
> > --
> > Martin Hannigan (c) 617-388-2663
> > VeriSign, Inc.  (w) 703-948-7018
> > Network Engineer IV   Operations & Infrastructure
> > [EMAIL PROTECTED]
> >
> >
> >
> > > -----Original Message-----
> > > From: Scott McGrath [mailto:[EMAIL PROTECTED]
> > > Sent: Wednesday, January 26, 2005 6:44 PM
> > > To: Hannigan, Martin
> > > Cc: Thor Lancelot Simon; nanog@merit.edu
> > > Subject: RE: High Density Multimode Runs BCP?
> > >
> > >
> > >
> > > Hi, Thor
> > >
> > > We used it to create zone distribution points throughout our
> > > datacenter; these ran back to a central distribution point.   This
> > > solution has been
> > > in place for almost 4 years.   We have 10Gb SM ethernet links
> > > traversing
> > > the datacenter which link to the campus distribution center.
> > >
> > > The only downsides we have experienced are
> > >
> > > 1 - Lead time in getting the component parts
> > >
> > > 2 - easily damaged by careless contractors
> > >
> > > 3 - somewhat higher than normal back reflection
> > > on poor terminations
> > >
> > > Scott C. McGrath
> > >
> > > On Wed, 26 Jan 2005, Hannigan, Martin wrote:
> > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Thor Lancelot Simon [mailto:[EMAIL PROTECTED]
> > > > > Sent: Wednesday, January 26, 2005 3:17 PM
> > > > > To: Hannigan, Martin; nanog@merit.edu
> > > > > Subject: Re: High Density Multimode Runs BCP?
> > > > >
> > > > >
> > > > > On Wed, Jan 26, 2005 at 02:49:29PM -0500, Hannigan, 
> Martin wrote:
> > > > > > > >
> > > > > > > > When running say 24-pairs of multi-mode across a
> > > > > datacenter, I have
> > > > > > > > considered a few solutions, but am not sure what is
> > > > > > > common/best practice.
> > > > > > >
> > > > > > > I assume multiplexing up to 10Gb (possibly two links
> > > > > thereof) and then
> > > > > > > back down is cost-prohibitive?  That's probably the
> > > > > "best" practice.
> > > > > >
> > > > > > I think he's talking physical plant. 200m should be
> > > fine. Consult
> > > > > > your equipment for power levels and support distance.
> > > > >
> > > > > Sure -- but given the cost of the new physical plant
> > > installation he's
> > > > > talking about, the fact that he seems to know the present
> > > maximum data
> > > > >

RE: High Density Multimode Runs BCP?

2005-01-26 Thread Hannigan, Martin

> -----Original Message-----
> From: Scott McGrath [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, January 26, 2005 10:44 PM
> To: Hannigan, Martin
> Cc: nanog@merit.edu
> Subject: RE: High Density Multimode Runs BCP?
> 
> 
> 
> Hi, Martin
> 
> Yes indeed, the ribbon cable.  Tho' due to the damage factor I probably
> would not specify it again unless I could use innerduct to
> protect it:
> we had some machine room renovations done, and the construction workers
> managed to kink the underfloor runs as well as setting off the Halon
> system several times...
> 
> 
> The ribbon cables work well if they are adequately protected.  If the
> people in the machine room environment are skilled at handling fiber
> there should be no problems.   If however J. Random Laborer 
> has access I
> would go with conventional armored runs.

Armored runs are for outside plant, or for inside plant that
doesn't have adequate cable management. I've seen armored cable
indoors as a "security" measure, but if you don't control
your data center, you may as well save the money, since
armored cable is like a lock: it keeps the good people out.
It may keep slack electricians from damaging it, but
considering their work is usually heavy industry, I wouldn't
rely on it for that reason alone. :-)

As I said earlier, ribbon isn't designed for data centers,
nor is innerduct designed for ribbon.

I'd love to see some photos of people using innerduct+ribbon
cable. :-)

YMMV.

-M<


RE: High Density Multimode Runs BCP?

2005-01-26 Thread Scott McGrath


Hi, Martin

Yes indeed, the ribbon cable.  Tho' due to the damage factor I probably
would not specify it again unless I could use innerduct to protect it:
we had some machine room renovations done, and the construction workers
managed to kink the underfloor runs as well as setting off the Halon
system several times...


The ribbon cables work well if they are adequately protected.  If the
people in the machine room environment are skilled at handling fiber
there should be no problems.   If, however, J. Random Laborer has access, I
would go with conventional armored runs.


Scott C. McGrath

On Wed, 26 Jan 2005, Hannigan, Martin wrote:

>
> The ribbon cable?
>
>
>
>
> --
> Martin Hannigan (c) 617-388-2663
> VeriSign, Inc.  (w) 703-948-7018
> Network Engineer IV   Operations & Infrastructure
> [EMAIL PROTECTED]
>
>
>
> > -----Original Message-----
> > From: Scott McGrath [mailto:[EMAIL PROTECTED]
> > Sent: Wednesday, January 26, 2005 6:44 PM
> > To: Hannigan, Martin
> > Cc: Thor Lancelot Simon; nanog@merit.edu
> > Subject: RE: High Density Multimode Runs BCP?
> >
> >
> >
> > Hi, Thor
> >
> > We used it to create zone distribution points throughout our
> > datacenter; these ran back to a central distribution point.   This
> > solution has been
> > in place for almost 4 years.   We have 10Gb SM ethernet links
> > traversing
> > the datacenter which link to the campus distribution center.
> >
> > The only downsides we have experienced are
> >
> > 1 - Lead time in getting the component parts
> >
> > 2 - easily damaged by careless contractors
> >
> > 3 - somewhat higher than normal back reflection
> > on poor terminations
> >
> > Scott C. McGrath
> >
> > On Wed, 26 Jan 2005, Hannigan, Martin wrote:
> >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Thor Lancelot Simon [mailto:[EMAIL PROTECTED]
> > > > Sent: Wednesday, January 26, 2005 3:17 PM
> > > > To: Hannigan, Martin; nanog@merit.edu
> > > > Subject: Re: High Density Multimode Runs BCP?
> > > >
> > > >
> > > > On Wed, Jan 26, 2005 at 02:49:29PM -0500, Hannigan, Martin wrote:
> > > > > > >
> > > > > > > When running say 24-pairs of multi-mode across a
> > > > datacenter, I have
> > > > > > > considered a few solutions, but am not sure what is
> > > > > > common/best practice.
> > > > > >
> > > > > > I assume multiplexing up to 10Gb (possibly two links
> > > > thereof) and then
> > > > > > back down is cost-prohibitive?  That's probably the
> > > > "best" practice.
> > > > >
> > > > > I think he's talking physical plant. 200m should be
> > fine. Consult
> > > > > your equipment for power levels and support distance.
> > > >
> > > > Sure -- but given the cost of the new physical plant
> > installation he's
> > > > talking about, the fact that he seems to know the present
> > maximum data
> > > > rate for each physical link, and so forth, I think it does
> > > > make sense to
> > > > ask the question "is the right solution to simply be more
> > economical
> > > > with physical plant by multiplexing to a higher data rate"?
> > > >
> > > > I've never used fibre ribbon, as advocated by someone else in
> > > > this thread,
> > > > and that does sound like a very clever space- and
> > possibly cost-saving
> > > > solution to the puzzle.  But even so, spending tens of
> > thousands of
> > > > dollars to carry 24 discrete physical links hundreds of
> > > > meters across a
> > >
> > > Tens of thousands? 24 strand x 100' @ $5 = $500. Fusion splice
> > > is $25 per splice per strand including termination. The 100m
> > > patch cords are $100.00. It's cheaper to bundle and splice.
> > >
> > > How much does the mux cost?
> > >
> > >
> > > > datacenter, each at what is, these days, not a
> > particularly high data
> > > > rate, may not be the best choice.  There may well be some
> > > > question about
> > > > at which layer it makes sense to aggregate the links --
> > but to me, 

Re: High Density Multimode Runs BCP?

2005-01-26 Thread David Lesher

Speaking on Deep Background, the Press Secretary whispered:
> 
> 
> 
> Look into MPO cabling
> 
> MPO uses fiber ribbon cables, the most common of which is 6x2:
> six strands by two layers

I've helped deploy/retrieve MPO at the recent IETF at the
Hinckley Hilton here in DC. It's not Mil-Spec sturdy, but
with reasonable care it does the job.


-- 
A host is a host from coast to coast .............. [EMAIL PROTECTED]
& no one will talk to a host that's close ....... [v].(301) 56-LINUX
Unless the host (that isn't close) ........................ pob 1433
is busy, hung or dead .................................... 20915-1433


Re: High Density Multimode Runs BCP?

2005-01-26 Thread Thor Lancelot Simon

On Wed, Jan 26, 2005 at 09:17:44PM -0500, John Fraizer wrote:
> 
> >I assume multiplexing up to 10Gb (possibly two links thereof) and then
> >back down is cost-prohibitive?  That's probably the "best" practice.
> 
> It's best practice to put two new points of failure (mux + demux) in a 
> 200m fiber run?

Well, that depends.  To begin with, it's not one run, it's 24 runs.
Deepak described the cost of those 24 runs as:

> I priced up one of these runs at 100m, and I was seeing a list price in
> the ballpark of $2500-$3000 plenum. So I figured it was worth asking if
> here is a better way when we're talking about N times that number. :)

So, to take his lower estimate 24 x $2500, we're talking about $60,000
worth of cable -- and all the bulk and management hassle of 48 strands
of fibre for what is in one sense logically a single run.

It still probably doesn't cover the cost of muxing it up and back down,
but particularly when you consider that space for 48 strands isn't free
either, it is certainly worth thinking about.

I was a little surprised by the $2500/pair figure but that's what he
said.
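
As a sanity check on that arithmetic, a sketch only: the per-pair
price below is Deepak's lower list estimate, and the mux-side cost
is left as an input, since nobody in the thread has quoted one.

    # 24 discrete runs at Deepak's lower list estimate of $2,500/pair.
    pairs = 24
    cable_total = pairs * 2500
    print(f"cable-only build: ${cable_total:,}")   # -> $60,000

    # Muxing wins only if the mux gear at both ends plus one or two
    # fibre pairs comes in under that figure; plug in a real quote.
    def muxing_cheaper(mux_gear_total, fibre_pairs=2, per_pair=2500):
        return mux_gear_total + fibre_pairs * per_pair < cable_total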

Thor


Re: High Density Multimode Runs BCP?

2005-01-26 Thread John Fraizer
Thor Lancelot Simon wrote:
> On Tue, Jan 25, 2005 at 07:23:17PM -0500, Deepak Jain wrote:
> > I have a situation where I want to run Nx24 pairs of GE across a
> > datacenter to several different customers. Runs are about 200 meters max.
> >
> > When running say 24-pairs of multi-mode across a datacenter, I have
> > considered a few solutions, but am not sure what is common/best practice.
>
> I assume multiplexing up to 10Gb (possibly two links thereof) and then
> back down is cost-prohibitive?  That's probably the "best" practice.
>
> Thor

It's best practice to put two new points of failure (mux + demux) in a
200m fiber run?

John


RE: High Density Multimode Runs BCP?

2005-01-26 Thread Scott McGrath


Hi, Thor

We used it to create zone distribution points throughout our datacenter;
these ran back to a central distribution point.   This solution has been
in place for almost 4 years.   We have 10Gb SM ethernet links traversing
the datacenter which link to the campus distribution center.

The only downsides we have experienced are

1 - Lead time in getting the component parts

2 - easily damaged by careless contractors

3 - somewhat higher than normal back reflection
on poor terminations

Scott C. McGrath

On Wed, 26 Jan 2005, Hannigan, Martin wrote:

>
>
> > -----Original Message-----
> > From: Thor Lancelot Simon [mailto:[EMAIL PROTECTED]
> > Sent: Wednesday, January 26, 2005 3:17 PM
> > To: Hannigan, Martin; nanog@merit.edu
> > Subject: Re: High Density Multimode Runs BCP?
> >
> >
> > On Wed, Jan 26, 2005 at 02:49:29PM -0500, Hannigan, Martin wrote:
> > > > >
> > > > > When running say 24-pairs of multi-mode across a
> > datacenter, I have
> > > > > considered a few solutions, but am not sure what is
> > > > common/best practice.
> > > >
> > > > I assume multiplexing up to 10Gb (possibly two links
> > thereof) and then
> > > > back down is cost-prohibitive?  That's probably the
> > "best" practice.
> > >
> > > I think he's talking physical plant. 200m should be fine. Consult
> > > your equipment for power levels and support distance.
> >
> > Sure -- but given the cost of the new physical plant installation he's
> > talking about, the fact that he seems to know the present maximum data
> > rate for each physical link, and so forth, I think it does
> > make sense to
> > ask the question "is the right solution to simply be more economical
> > with physical plant by multiplexing to a higher data rate"?
> >
> > I've never used fibre ribbon, as advocated by someone else in
> > this thread,
> > and that does sound like a very clever space- and possibly cost-saving
> > solution to the puzzle.  But even so, spending tens of thousands of
> > dollars to carry 24 discrete physical links hundreds of
> > meters across a
>
> Tens of thousands? 24 strand x 100' @ $5 = $500. Fusion splice
> is $25 per splice per strand including termination. The 100m
> patch cords are $100.00. It's cheaper to bundle and splice.
>
> How much does the mux cost?
>
>
> > datacenter, each at what is, these days, not a particularly high data
> > rate, may not be the best choice.  There may well be some
> > question about
> > at which layer it makes sense to aggregate the links -- but to me, the
> > question "is it really the best choice of design constraints to take
> > aggregation/multiplexing off the table" is a very substantial one here
> > and not profitably avoided.
>
> Fiber ribbon doesn't "fit" in any long-distance (>7') distribution
> system, rich or poor, that I'm aware of. Racks, cabinets, et al.
> are not very conducive to it. The only application I've seen was
> IBM fiber channel.
>
> Datacenters are sometimes permanent facilities and it's better,
> IMHO, to make things more permanent with cross connect than
> aggregation. It enables you to make your cabinet cabling and
> your termination area cabling almost permanent and maintenance
> free - as well as giving you test, add, move, and drop. It's more
> cable, but less equipment to maintain, support, and reduces
> failure points. It enhances security as well. You can't open
> the cabinet and just jack something in. You have to provision
> behind the locked term area.
>
> I'd love to hear about a positive experience using ribbon cable
> inside a datacenter.
>
>
> >
> > Thor
> >
>


RE: High Density Multimode Runs BCP?

2005-01-26 Thread Hannigan, Martin


> -----Original Message-----
> From: Thor Lancelot Simon [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, January 26, 2005 3:17 PM
> To: Hannigan, Martin; nanog@merit.edu
> Subject: Re: High Density Multimode Runs BCP?
> 
> 
> On Wed, Jan 26, 2005 at 02:49:29PM -0500, Hannigan, Martin wrote:
> > > > 
> > > > When running say 24-pairs of multi-mode across a 
> datacenter, I have 
> > > > considered a few solutions, but am not sure what is 
> > > common/best practice.
> > > 
> > > I assume multiplexing up to 10Gb (possibly two links 
> thereof) and then
> > > back down is cost-prohibitive?  That's probably the 
> "best" practice.
> > 
> > I think he's talking physical plant. 200m should be fine. Consult
> > your equipment for power levels and support distance.
> 
> Sure -- but given the cost of the new physical plant installation he's
> talking about, the fact that he seems to know the present maximum data
> rate for each physical link, and so forth, I think it does 
> make sense to
> ask the question "is the right solution to simply be more economical
> with physical plant by multiplexing to a higher data rate"?
> 
> I've never used fibre ribbon, as advocated by someone else in 
> this thread,
> and that does sound like a very clever space- and possibly cost-saving
> solution to the puzzle.  But even so, spending tens of thousands of
> dollars to carry 24 discrete physical links hundreds of 
> meters across a

Tens of thousands? 24 strand x 100' @ $5 = $500. Fusion splice
is $25 per splice per strand including termination. The 100m
patch cords are $100.00. It's cheaper to bundle and splice.

How much does the mux cost?
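
Worked through with those figures, a sketch only: the splice and
patch-cord counts below are assumptions for a single 24-strand,
100-foot run, since the message doesn't spell them out.

    # Martin's 2005 list figures for one bundled-and-spliced run.
    cable       = 100 * 5        # 24-strand cable: 100' at $5/ft -> $500
    splices     = 24 * 2 * 25    # $25/splice/strand incl. termination, both ends
    patch_cords = 24 * 2 * 100   # assumed: one $100 patch cord per strand per end

    print(f"bundle + splice: ${cable + splices + patch_cords:,}")  # -> $6,500

Even with generous patch-cord counts, that lands an order of magnitude
under the ~$60,000 quoted upthread for 24 discrete pre-terminated runs.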


> datacenter, each at what is, these days, not a particularly high data
> rate, may not be the best choice.  There may well be some 
> question about
> at which layer it makes sense to aggregate the links -- but to me, the
> question "is it really the best choice of design constraints to take
> aggregation/multiplexing off the table" is a very substantial one here
> and not profitably avoided.

Fiber ribbon doesn't "fit" in any long-distance (>7') distribution
system, rich or poor, that I'm aware of. Racks, cabinets, et al.
are not very conducive to it. The only application I've seen was
IBM fiber channel.

Datacenters are sometimes permanent facilities and it's better,
IMHO, to make things more permanent with cross connect than 
aggregation. It enables you to make your cabinet cabling and
your termination area cabling almost permanent and maintenance
free - as well as giving you test, add, move, and drop. It's more
cable, but less equipment to maintain, support, and reduces
failure points. It enhances security as well. You can't open
the cabinet and just jack something in. You have to provision
behind the locked term area.

I'd love to hear about a positive experience using ribbon cable
inside a datacenter.


> 
> Thor
> 


Re: High Density Multimode Runs BCP?

2005-01-26 Thread Thor Lancelot Simon

On Wed, Jan 26, 2005 at 02:49:29PM -0500, Hannigan, Martin wrote:
> > > 
> > > When running say 24-pairs of multi-mode across a datacenter, I have 
> > > considered a few solutions, but am not sure what is 
> > common/best practice.
> > 
> > I assume multiplexing up to 10Gb (possibly two links thereof) and then
> > back down is cost-prohibitive?  That's probably the "best" practice.
> 
> I think he's talking physical plant. 200m should be fine. Consult
> your equipment for power levels and support distance.

Sure -- but given the cost of the new physical plant installation he's
talking about, the fact that he seems to know the present maximum data
rate for each physical link, and so forth, I think it does make sense to
ask the question "is the right solution to simply be more economical
with physical plant by multiplexing to a higher data rate"?

I've never used fibre ribbon, as advocated by someone else in this thread,
and that does sound like a very clever space- and possibly cost-saving
solution to the puzzle.  But even so, spending tens of thousands of
dollars to carry 24 discrete physical links hundreds of meters across a
datacenter, each at what is, these days, not a particularly high data
rate, may not be the best choice.  There may well be some question about
at which layer it makes sense to aggregate the links -- but to me, the
question "is it really the best choice of design constraints to take
aggregation/multiplexing off the table" is a very substantial one here
and not profitably avoided.

Thor


RE: High Density Multimode Runs BCP?

2005-01-26 Thread Hannigan, Martin


> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of
> Thor Lancelot Simon
> Sent: Wednesday, January 26, 2005 2:09 PM
> To: nanog@merit.edu
> Subject: Re: High Density Multimode Runs BCP?
> 
> 
> 
> On Tue, Jan 25, 2005 at 07:23:17PM -0500, Deepak Jain wrote:
> > 
> > 
> > I have a situation where I want to run Nx24 pairs of GE across a 
> > datacenter to several different customers. Runs are about 
> 200 meters max.
> > 
> > When running say 24-pairs of multi-mode across a datacenter, I have 
> > considered a few solutions, but am not sure what is 
> common/best practice.
> 
> I assume multiplexing up to 10Gb (possibly two links thereof) and then
> back down is cost-prohibitive?  That's probably the "best" practice.

I think he's talking physical plant. 200m should be fine. Consult
your equipment for power levels and support distance.
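
By way of illustration, "consult your equipment" boils down to a
link-budget check like the sketch below. Every number in it is a
placeholder, not a data-sheet value; note too that on multimode at
200m, 1000BASE-SX is usually limited by modal bandwidth rather than
loss, so this only covers the power side.

    # Minimal optical link-budget check; all values are illustrative.
    TX_POWER_DBM   = -4.0    # transmitter launch power
    RX_SENS_DBM    = -17.0   # receiver sensitivity
    LOSS_DB_PER_KM = 3.5     # MM attenuation at 850 nm (typical spec ceiling)
    CONNECTOR_DB   = 0.75    # per mated connector pair
    SPLICE_DB      = 0.1     # per fusion splice

    def margin_db(length_m, connectors=2, splices=2):
        loss = (length_m / 1000.0) * LOSS_DB_PER_KM \
             + connectors * CONNECTOR_DB + splices * SPLICE_DB
        return (TX_POWER_DBM - RX_SENS_DBM) - loss

    print(f"{margin_db(200):.1f} dB margin on a 200 m run")  # ~10.6 dB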

Inside plant, dedicated fiber tray

Nothing wrong with running a bundle of MM with the SM
bundles. This method usually uses fiber shelves with
  either pigtailed (factory) or fusion-spliced (best)
  terminations.

Inside plant, no tray, fiber trough

Use factory-terminated strands (patch) only and save
  yourself aggravation and get better reliability. Run
  it loose in the trough from source to destination ports.

Inside plant, no tray, no fiber trough

Use factory-terminated strands (patch) and run them inside
  a 1/4" or larger innerduct from source to destination
  ports.

Spiral wrap is always recommended on the "last 7'" and
some sort of bracing near the port should be provided.

Consult the cable manufacturer for proper bend radius.

Avoid zip ties if possible. Vibration and other factors
make them undesirable.

There's a multitude of combinations of the above. Without
knowing the facility layout and the cross connect/inter connect
standard, it's hard to speculate.

(The above is as close to a BCP as you can get: fairly typical
central office standards via Bellcore. Er, Telcordia.)

YMMV



-M<

 


Re: High Density Multimode Runs BCP?

2005-01-26 Thread Thor Lancelot Simon

On Tue, Jan 25, 2005 at 07:23:17PM -0500, Deepak Jain wrote:
> 
> 
> I have a situation where I want to run Nx24 pairs of GE across a 
> datacenter to several different customers. Runs are about 200 meters max.
> 
> When running say 24-pairs of multi-mode across a datacenter, I have 
> considered a few solutions, but am not sure what is common/best practice.

I assume multiplexing up to 10Gb (possibly two links thereof) and then
back down is cost-prohibitive?  That's probably the "best" practice.
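
The arithmetic behind "possibly two links thereof", as a sketch;
the oversubscription ratio is an assumption, not something Thor
states.

    # How many 10GbE uplinks to carry 24 x GE at a given oversubscription?
    import math

    GE_LINKS, GE_GBPS, UPLINK_GBPS = 24, 1, 10

    def uplinks_needed(oversub=1.0):
        return math.ceil(GE_LINKS * GE_GBPS / (UPLINK_GBPS * oversub))

    print(uplinks_needed(1.0))  # 3 uplinks at full line rate
    print(uplinks_needed(1.2))  # 2 uplinks if 1.2:1 oversubscription is OK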

Thor


Re: High Density Multimode Runs BCP?

2005-01-26 Thread Scott McGrath


Look into MPO cabling

MPO uses fiber ribbon cables, the most common of which is 6x2:
six strands by two layers.

Panduit has several solutions which use cartridges, so you get a
cartridge with your desired termination type and run the MPO cable
between the cartridges.

This cabling, under another name, is also used for IBM mainframe
channel connections.
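
For scale, mapping that onto Deepak's requirement (a sketch assuming
the 6x2 = 12-fibre ribbon described above and 24 duplex links):

    # 24 duplex GE links = 48 strands; how many 12-fibre MPO trunks?
    import math

    pairs            = 24       # duplex links required
    fibres_per_trunk = 6 * 2    # 6 strands x 2 layers per ribbon

    print(math.ceil(pairs * 2 / fibres_per_trunk))  # -> 4 trunks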

Scott C. McGrath

On Tue, 25 Jan 2005, Deepak Jain wrote:

>
>
> I have a situation where I want to run Nx24 pairs of GE across a
> datacenter to several different customers. Runs are about 200 meters max.
>
> When running say 24-pairs of multi-mode across a datacenter, I have
> considered a few solutions, but am not sure what is common/best practice.
>
> a) Find/adapt a 24/48 thread inside-plant cable (either multimode, or
> condition single mode) and connectorize the ends. Adv: Clean, Single,
> high density cable runs, Dis: Not sure if such a beast exists in
> multimode, and the whole cable has to be replaced/made redundant if one
> fiber dies and you need a critical restore, may need a break out shelf.
>
> b) Run 24 duplex MM cables of the proper lengths. Adv: Easy to trace,
> color code, understand. Easy to replace/repair one cable should
> something untoward occur. Can buy/stock pre-terminated cables of the
> proper length for easy restore. Dis: Lots of cables, more riser space.
>
> c) ??
>
> 
>
> So... is there an option C? Does a multimode beastie like A exist
> commonly? Is it generally more cost effective to terminate your own MM
> cables or buy them pre-terminated?
>
> Assume that each of these pairs is going to be used for something like
> 1000B-SX full duplex, and that these are all aggregated trunk links so
> you can't take a single pair of 1000B-SX and break it out to 24xGE at
> the end points with a switch.
>
> I priced up one of these runs at 100m, and I was seeing a list price in
> the ballpark of $2500-$3000 plenum. So I figured it was worth asking if
> there is a better way when we're talking about N times that number. :)
>
> Thanks in advance, I'm sure I just haven't had enough caffeine today.
>
> DJ
>
>


High Density Multimode Runs BCP?

2005-01-25 Thread Deepak Jain

I have a situation where I want to run Nx24 pairs of GE across a 
datacenter to several different customers. Runs are about 200 meters max.

When running say 24-pairs of multi-mode across a datacenter, I have 
considered a few solutions, but am not sure what is common/best practice.

a) Find/adapt a 24/48 thread inside-plant cable (either multimode, or 
condition single mode) and connectorize the ends. Adv: Clean, Single, 
high density cable runs, Dis: Not sure if such a beast exists in 
multimode, and the whole cable has to be replaced/made redundant if one 
fiber dies and you need a critical restore, may need a break out shelf.

b) Run 24 duplex MM cables of the proper lengths. Adv: Easy to trace, 
color code, understand. Easy to replace/repair one cable should 
something untoward occur. Can buy/stock pre-terminated cables of the 
proper length for easy restore. Dis: Lots of cables, more riser space.

c) ??

So... is there an option C? Does a multimode beastie like A exist 
commonly? Is it generally more cost effective to terminate your own MM 
cables or buy them pre-terminated?

Assume that each of these pairs is going to be used for something like 
1000B-SX full duplex, and that these are all aggregated trunk links so 
you can't take a single pair of 1000B-SX and break it out to 24xGE at 
the end points with a switch.

I priced up one of these runs at 100m, and I was seeing a list price in 
the ballpark of $2500-$3000 plenum. So I figured it was worth asking if 
there is a better way when we're talking about N times that number. :)

Thanks in advance; I'm sure I just haven't had enough caffeine today.
DJ