Angled Polish Connectors and DWDM

2012-09-30 Thread Aaron Glenn
dear peoples of NANOG,

I've always held the -- possibly completely false -- notion that
angled connectors (APC) were not ideal for fiber spans carrying
multiple dense/DWDM wavelengths. However, after copious amounts of
googling and duckduckgo'ing I cannot find an opinion or tech note one
way or the other. Are there situations when angled connectors are not
to be used? Are they 'safe' or even recommended for any kind of DWDM
application? I know they're not meant for mating directly with optics,
but for panels, x-conns, and distribution frames...?

I have this sinking feeling I've been misunderstanding angled
connectors all these years. cluebats are appreciated. please be
gentle.

thanks,
aaron



Re: Angled Polish Connectors and DWDM

2012-09-30 Thread Igor Ybema
Hi Aaron,

> Are there situations when angled connectors are not
> to be used? Are they 'safe' or even recommended for any kind of DWDM
> application?

To my knowledge APC is always better than PC connectors. APC is used
to eliminate back reflections: due to the angled end face, reflections
are directed mostly into the cladding rather than back down the core.

The effects of back reflections can be significant. Read this:
http://documents.exfo.com/appnotes/anote044-ang.pdf
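
As a rough illustration, here is a minimal sketch (assuming typical
catalogue return-loss figures of roughly 50 dB for UPC and 65 dB for APC;
these are assumptions, not measurements) of how reflections from several
in-line connectors add up:

import math

# Rough sketch: aggregate reflectance of N in-line connectors.
# Assumed per-connector return-loss figures are typical catalogue
# values (~50 dB UPC, ~65 dB APC), not measured numbers.
def aggregate_return_loss_db(per_connector_rl_db, n_connectors):
    # Incoherent approximation: sum linear reflectances, ignore interference.
    r_linear = 10 ** (-per_connector_rl_db / 10.0)
    return -10 * math.log10(n_connectors * r_linear)

for label, rl in [("UPC", 50.0), ("APC", 65.0)]:
    print(f"{label}: ~{aggregate_return_loss_db(rl, 8):.1f} dB total across 8 connectors")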

I cannot see why APC connectors should not be used, even with DWDM.
The only problem with APC is that there are different angles (8 degrees
and 9 degrees) and different polishes of the tip. When you mate the
wrong connectors (for example, an 8-degree against a 9-degree in a
coupler) you will introduce extra loss. This is probably the reason why
some people are wary of using APC.
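
As a back-of-the-envelope sketch of that mismatch penalty: mating an
8-degree ferrule to a 9-degree one behaves roughly like a 1-degree angular
misalignment between the fiber axes. Assuming textbook values (G.652
mode-field radius of 5.2 um at 1550 nm, air between the end faces), the
standard Gaussian-beam tilt-loss approximation gives:

import math

# Tilt loss between two single-mode fibers (Gaussian-beam approximation):
#   loss_dB = 4.343 * (pi * n * w * theta / lambda)^2
# Assumed values: mode-field radius w = 5.2 um (G.652 at 1550 nm),
# n = 1.0 for an air gap between the mismatched end faces.
def tilt_loss_db(theta_deg, w=5.2e-6, wavelength=1.55e-6, n=1.0):
    theta = math.radians(theta_deg)
    return 4.343 * (math.pi * n * w * theta / wavelength) ** 2

print(f"~{tilt_loss_db(1.0):.2f} dB extra loss for a 1-degree mismatch")
# Note: this ignores the air gap's Fresnel reflection, which in practice
# may be the bigger problem with mismatched angles.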

regards, Igor



Re: Angled Polish Connectors and DWDM

2012-09-30 Thread Mikael Abrahamsson

On Sun, 30 Sep 2012, Igor Ybema wrote:

> To my knowledge APC is always better than PC connectors. APC is used
> to eliminate back reflections: due to the angled end face, reflections
> are directed mostly into the cladding rather than back down the core.


I've been told this is very important in HFC solutions (hybrid fibre 
coax).


Operationally though, having APC is a hassle. I know of companies that
mandated APC for all installations for a few years, but then reverted
to UPC due to operational problems (people putting UPC cables into APC
ODFs, etc.).


So while APC might be technically superior when properly installed, it's 
not always better when looking at the whole system.


--
Mikael Abrahamsson    email: swm...@swm.pp.se



Re: Angled Polish Connectors and DWDM

2012-09-30 Thread Aaron Glenn
On Sun, Sep 30, 2012 at 11:52 AM, Igor Ybema i...@ergens.org wrote:
> Hi Aaron,
>
>> Are there situations when angled connectors are not
>> to be used? Are they 'safe' or even recommended for any kind of DWDM
>> application?
>
> To my knowledge APC is always better than PC connectors. APC is used
> to eliminate back reflections: due to the angled end face, reflections
> are directed mostly into the cladding rather than back down the core.

Indeed. I have always held the idea that APC connectors induce
greater chromatic and/or polarization mode dispersion -- yet I can't
find any resources that claim so, nor does it fit with my working
mental model of how light propagates. It's just something I picked up
during the hellish days of trying to deploy 10GbE before it was cool;
now I'd like to know if it is grounded in any science (-:
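
For what it's worth, the sanity check I keep coming back to: chromatic
dispersion accumulates with path length (delta_t = D * L * delta_lambda),
and a connector adds essentially no path. A quick sketch, with assumed
textbook numbers:

# CD-induced pulse spread scales with length: delta_t = D * L * dlambda.
# Assumed values: D = 17 ps/(nm*km) for G.652 at 1550 nm, 0.1 nm source width.
D = 17.0          # ps/(nm*km)
dlambda = 0.1     # nm
for km, label in [(80.0, "80 km span"), (1e-8, "~10 um of ferrule path")]:
    print(f"{label}: {D * km * dlambda:.3g} ps of spread")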

Thanks
aaron



RE: need help about 40G

2012-09-30 Thread Erik Bais
Hi Deric, 

On the 40G question: there is a clear financial benefit compared to moving
to 100G right now. If you don't need 100G yet, there is a lot of money to
be saved by skipping first-generation 100G. 40G ports will only set you
back around 1,500 US$ per port, and optics are also quite cheap, which
makes it a nice solution.

As already mentioned by Adam in his reply, Extreme Networks has a device
(the BD X8) that can fit 192 40G ports in 14.5U (one third of a rack).
These can also be used as 768 10G ports if you split each 40G into 4*10G,
which you can do just by using different cabling on the optic.
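
To put rough numbers on that (the 44U rack height and the per-port price
are assumptions based on the figures above):

# Rough density/cost arithmetic for the BD X8 figures above.
# Assumptions: 44U rack (14.5U is quoted as one third of a rack),
# ~1500 USD per 40G port, as mentioned in this thread.
ports_40g = 192
chassis_ru = 14.5
rack_ru = 44
per_port_usd = 1500

chassis_per_rack = int(rack_ru // chassis_ru)   # 3 chassis per rack
print(f"{chassis_per_rack * ports_40g} x 40G ports per rack")
print(f"{chassis_per_rack * ports_40g * 4} x 10G ports if split as 4*10G")
print(f"~{per_port_usd / 4:.0f} USD per 10G-equivalent port")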

If you don't require such a large quantity of ports, there are also
smaller (1U) switches with 4*40G option boards, like the X670V from
Extreme. These boxes are inexpensive, stable, and provide a great
price-to-port-density ratio. Also interesting: they have very low
latency, which makes them ideal for storage, cloud and trading
scenarios. There are test reports you can find online if you are looking
not just for 40G but specifically for 40G combined with very low
latency.

I personally haven't used the Mellanox 40G NICs yet, but I have been told
that they have been tested by other customers and work like a charm with
the Extreme kit.

If you have more specific questions, please let me know (off-list is also
fine).


Regards,
Erik Bais 





Re: Angled Polish Connectors and DWDM

2012-09-30 Thread ML

On 9/30/2012 6:14 AM, Aaron Glenn wrote:

>> are directed mostly into the cladding rather than back down the core.
> Indeed. I have always held the idea that APC connectors induce
> greater chromatic and/or polarization mode dispersion -- yet I can't
> find any resources that claim so, nor does it fit with my working
> mental model of how light propagates. It's just something I picked up
> during the hellish days of trying to deploy 10GbE before it was cool;
> now I'd like to know if it is grounded in any science (-:
>
> Thanks
> aaron



Just tossing this in here.  I don't claim to know either way.

On the current project I'm working on, all of the panels are LC-APC.
This does not seem to have been a problem for our DWDM vendor, whom we
have been working with very closely and who, I assume, knows about our
use of APC.

So far our PMD testing has come back clear.



IETF's PIM Working Group conducting deployment survey

2012-09-30 Thread Adrian Farrel
Hi,

After a snafu, the PIM working group has restarted its survey into PIM sparse
mode deployments. 

Please see the email at
http://www.ietf.org/mail-archive/web/pim/current/msg02479.html for more
information. Responses will be anonymised.

Many thanks to all operators who are able to respond.

Adrian




Re: Angled Polish Connectors and DWDM

2012-09-30 Thread Mikael Abrahamsson

On Sun, 30 Sep 2012, ML wrote:


> So far our PMD testing has come back clear.


How have you done the PMD testing?

For verifying PMD and CD through an actual wavelength (not per-fiber, but 
through all the ADMs etc), I haven't really been able to find a good 
solution. Suggestions welcome.


--
Mikael Abrahamsson    email: swm...@swm.pp.se



Re: Angled Polish Connectors and DWDM

2012-09-30 Thread ML

On 9/30/2012 12:46 PM, Mikael Abrahamsson wrote:

> On Sun, 30 Sep 2012, ML wrote:
>
>> So far our PMD testing has come back clear.
>
> How have you done the PMD testing?
>
> For verifying PMD and CD through an actual wavelength (not per-fiber,
> but through all the ADMs etc), I haven't really been able to find a
> good solution. Suggestions welcome.




All tests done per span on dark fiber.

CD via a Nettest FD440: 1520 nm to 1640 nm.
PMD via a PerkinElmer* device: just at 1550 nm.

-ML


* Couldn't find a model number in the test results; just a reference to
PerkinElmer.
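
For anyone wanting to sanity-check numbers like these, a minimal sketch of
the usual rules of thumb (the coefficients below are illustrative
examples, not our measured values):

import math

# Rule-of-thumb check against 10G NRZ limits (illustrative values only).
# Assumptions: 10G bit period ~100 ps, mean-DGD budget ~10% of a bit
# period; the PMD and CD coefficients below are example numbers.
bit_period_ps = 100.0
dgd_budget_ps = 0.1 * bit_period_ps   # ~10 ps for 10G NRZ

pmd_coeff = 0.1    # ps/sqrt(km), example
D = 17.0           # ps/(nm*km), example at 1550 nm
span_km = 80.0

mean_dgd = pmd_coeff * math.sqrt(span_km)   # PMD grows with sqrt(length)
cd_total = D * span_km                      # CD grows linearly with length
print(f"mean DGD: {mean_dgd:.2f} ps (budget ~{dgd_budget_ps:.0f} ps)")
print(f"total CD: {cd_total:.0f} ps/nm (10G NRZ tolerates very roughly 1000 ps/nm)")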




Re: /. Terabit Ethernet is Dead, for Now

2012-09-30 Thread Jimmy Hess
On 9/29/12, Masataka Ohta mo...@necom830.hpcl.titech.ac.jp wrote:
> Jared Mauch wrote:
> ...
>
> The problem is that the physical layers of 100GE (with 10*10G) and
> 10*10GE are identical (if the same plug and cable are used both for
> 100GE and 10*10GE).

Interesting. Well, I would say that holds if there are no technical
improvements that significantly improve performance over the best
possible carrier Ethernet bonding implementation, and no cost savings
at the physical layer from picking the higher-data-rate physical
layer standard, _after_ considering the increased hardware costs of
newly manufactured components for a standard that is merely newer.

E.g., if no fewer transceivers and no fewer strands of fiber are
required, and no shorter wavelength is required -- so it doesn't
enable you to achieve greater throughput over the same amount of light
spectrum on your cabling, and therefore lower cost at sufficient
density -- then there will probably be fairly little point in having
the higher-rate standard exist in the first place, as long as the
bonding mechanisms available for the previous standard are good.

Just keep bonding together more and more data links at basic units of
10GE, until the required throughput capacity has been achieved.


It's not as if a newer 1 Tbit standard will make the bits you send
get read at the other end faster than the speed of light. A newer
standard does not necessarily mean more reliable, technically better,
or more efficient, so it is prudent to consider what is actually
achieved that would benefit the networks considered potential
candidates for the new standard, before actually making it a
standard...
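
To be fair, one real difference is serialization delay: a bonded link
still clocks any single frame onto the wire at the member rate. Rough
numbers, assuming a 1500-byte frame:

# Serialization delay of one frame at various line rates (frame size
# is an assumed example). Bonding N x 10GE still serializes each
# individual frame at 10G.
frame_bits = 1500 * 8
for rate_gbps in (10, 40, 100, 1000):
    ns = frame_bits / (rate_gbps * 1e9) * 1e9
    print(f"{rate_gbps:>4} Gb/s: {ns:.0f} ns per 1500-byte frame")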

--
-JH



Re: guys != gender neutral

2012-09-30 Thread Owen DeLong
'Ugly' would usually be considered pejorative.

Owen

On Sep 29, 2012, at 20:59 , Keith Medcalf kmedc...@dessus.com wrote:

 
> Ugly bags of mostly water?
>
>
> ---
> ()  ascii ribbon campaign against html e-mail
> /\  www.asciiribbon.org
>
>
> -----Original Message-----
> From: Otis L. Surratt, Jr. [mailto:o...@ocosa.com]
> Sent: Friday, 28 September, 2012 05:33
> To: nanog@nanog.org
> Subject: RE: guys != gender neutral
>
> Maybe, in hindsight, the OP of the 'really nasty attacks' thread wishes
> 'NANOGers' had been used instead to address the list. :)
>
> Having all walks of life around, it really makes one careful to truly
> think before speaking. Sometimes we miss this with everything we have
> going on, but no one is perfect.
>
> The bottom line is, no one can really satisfy every individual's
> preference for how they would like to be addressed. If someone is
> determined to be offended, or looks for offense, they will capitalize
> on it, period. I try as much as possible to avoid those situations.
>
> When we refer to our clients in a mass communication, we either use our
> tools to auto-fill their name in the letter, or we address them as
> 'OCOSA Family' or 'All' or 'Clients'. We are a very family-oriented
> business and are down to earth. We'd like to believe our clients are a
> part of our family; some may take offense, but you might never know
> unless an opportunity presented itself.
>
> Personally, I practice using the name of the person I am in
> communication with... not 'buddy', 'bud', 'pal', 'man', 'guys', 'gal',
> 'y'all', etc. When addressing mixed-gender groups, I simply address
> everyone as 'all'. Thus, no mistakes.
>
> When addressing both genders you have to be extremely careful.
> Ultimately, it depends on the audience, and treating all with respect
> seems to work for me.
>
> For example: you could address a group of men and call them 'boys'.
> Well, that might offend some, especially if they are older than you.
> For example: you could address a group of young adults and call them
> 'kids'. Well, that might offend some.
>
> As Owen mentioned, saying 'human' seems okay and true, but because it's
> not the norm it raises some questions. (Internal thinking process: 'Oh,
> I'm a HUMAN; well, that is true' -- and then your temperature gets back
> to normal.) :)
>
> In general, this is life, and I simply have fun and enjoy it because
> it's too short.
>
>
> Otis




Re: /. Terabit Ethernet is Dead, for Now

2012-09-30 Thread joel jaeggli

On 9/30/12 12:05 PM, Jimmy Hess wrote:

> On 9/29/12, Masataka Ohta mo...@necom830.hpcl.titech.ac.jp wrote:
>> Jared Mauch wrote:
>> ...
>>
>> The problem is that the physical layers of 100GE (with 10*10G) and
>> 10*10GE are identical (if the same plug and cable are used both for
>> 100GE and 10*10GE).
>
> Interesting. Well, I would say that holds if there are no technical
> improvements that significantly improve performance over the best
> possible carrier Ethernet bonding implementation, and no cost savings
> at the physical layer from picking the higher-data-rate physical
> layer standard, _after_ considering the increased hardware costs of
> newly manufactured components for a standard that is merely newer.
There is a real-estate problem: 10 SFP+ connectors take a lot more
space than one QSFP+. MTP/MPO connectors and the associated trunk ribbon
cables are a lot more compact than the equivalent 10GbE footprint
terminated as LC. And when you add CWDM, as 40GBASE-LR4 does, the fiber
count drops by a lot.
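
Rough strand arithmetic for delivering 40G (standard lane counts; the
comparison is illustrative):

# Fiber strands needed to deliver 40G, by approach (illustrative).
options = {
    "4 x 10GBASE-LR, duplex LC each": 4 * 2,
    "40GBASE-SR4, MPO parallel (4 lanes each direction)": 8,
    "40GBASE-LR4, four CWDM lambdas on one duplex pair": 2,
}
for label, strands in options.items():
    print(f"{strands} strands: {label}")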

> E.g., if no fewer transceivers and no fewer strands of fiber are
> required, and no shorter wavelength is required -- so it doesn't
> enable you to achieve greater throughput over the same amount of light
> spectrum on your cabling, and therefore lower cost at sufficient
> density -- then there will probably be fairly little point in having
> the higher-rate standard exist in the first place, as long as the
> bonding mechanisms available for the previous standard are good.
>
> Just keep bonding together more and more data links at basic units of
> 10GE, until the required throughput capacity has been achieved.


> It's not as if a newer 1 Tbit standard will make the bits you send
> get read at the other end faster than the speed of light. A newer
> standard does not necessarily mean more reliable, technically better,
> or more efficient, so it is prudent to consider what is actually
> achieved that would benefit the networks considered potential
> candidates for the new standard, before actually making it a
> standard...
>
> --
> -JH






Re: /. Terabit Ethernet is Dead, for Now

2012-09-30 Thread Masataka Ohta
joel jaeggli wrote:

>>> The problem is that the physical layers of 100GE (with 10*10G) and
>>> 10*10GE are identical (if the same plug and cable are used both for
>>> 100GE and 10*10GE).
>>
>> Interesting. Well, I would say that holds if there are no technical
>> improvements that significantly improve performance over the best
>> possible carrier Ethernet bonding implementation, and no cost savings
>> at the physical layer from picking the higher-data-rate physical
>> layer standard, _after_ considering the increased hardware costs of
>> newly manufactured components for a standard that is merely newer.
>
> There is a real-estate problem: 10 SFP+ connectors take a lot more
> space than one QSFP+. MTP/MPO connectors and the associated trunk
> ribbon cables are a lot more compact than the equivalent 10GbE
> footprint terminated as LC.

That's why I wrote:

> (if the same plug and cable are used both for
> 100GE and 10*10GE).

As mentioned in the 40G thread, the 24-port 40GE interface module of
the Extreme BD X8 can be used as 96 ports of 10GE.

> And when you add CWDM, as 40GBASE-LR4 does, the fiber count drops by
> a lot.

That's also possible with 4*10GE, and 4*10GE is a lot more flexible:
it trivially enables a 3*10GE failure mode (keep running on three lanes
if one fails) and allows for very large skew.

Masataka Ohta




Re: /. Terabit Ethernet is Dead, for Now

2012-09-30 Thread Tom Hill

On 30/09/12 20:05, Jimmy Hess wrote:

> On 9/29/12, Masataka Ohta mo...@necom830.hpcl.titech.ac.jp wrote:
>> Jared Mauch wrote:
>> ...
>>
>> The problem is that the physical layers of 100GE (with 10*10G) and
>> 10*10GE are identical (if the same plug and cable are used both for
>> 100GE and 10*10GE).
>
> Interesting. Well, I would say that holds if there are no technical
> improvements that significantly improve performance over the best
> possible carrier Ethernet bonding implementation, and no cost savings
> at the physical layer from picking the higher-data-rate physical
> layer standard, _after_ considering the increased hardware costs of
> newly manufactured components for a standard that is merely newer.
>
> E.g., if no fewer transceivers and no fewer strands of fiber are
> required, and no shorter wavelength is required -- so it doesn't
> enable you to achieve greater throughput over the same amount of light
> spectrum on your cabling, and therefore lower cost at sufficient
> density -- then there will probably be fairly little point in having
> the higher-rate standard exist in the first place, as long as the
> bonding mechanisms available for the previous standard are good.


When you consider 100GBASE-LR4 (with its 4x25G lanes) there is some
efficiency to be gained. ADVA & others now support running each channel
on their DWDM muxes at ~28G, to suit carrying 100GBASE-LR4 over four of
your existing waves. CFPs with 4xSFP+ tunable optics in the front are
out there for this reason.


Once you get your head (and wallet) around that, there is a case for
running each of your waves at 2.5x the rate they're employed at now.
The remaining question is then whether that's cheaper than running
more fibre.
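
The per-wave arithmetic, roughly (the 40-channel plan is just an assumed
example):

# Capacity per DWDM system: 10G waves today vs ~25G payload lanes for
# 100GBASE-LR4 (the ~28G line rate above includes overhead/FEC margin).
channels = 40            # assumed C-band channel plan
today = channels * 10
upgraded = channels * 25
print(f"today:    {today} Gb/s over {channels} waves")
print(f"at ~28G:  {upgraded} Gb/s over {channels} waves ({upgraded/today}x)")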


Still a hard one to justify though, I agree.

I've recently seen a presentation from EPF** (by Juniper) that was
*very* interesting regarding the 100G race, from a technical
perspective. Well worth hunting down if you can, as it details a lot
about optic composition in future standards, optic densities/backplanes,
etc.


Tom

** I couldn't justify going, but the nerd porn is hard to turn down. :)



Re: Angled Polish Connectors and DWDM

2012-09-30 Thread Daniel Griggs
We have been using APC connectors on our DWDM deployments for about 5
years now and have had no issues with them at all.

Additionally, last year we engaged several DWDM system providers, and
one of the questions we asked was whether they knew of any issues with
APC connectors, especially with regard to dual-polarization systems.
All vendors came back reporting no issues with APC connectors.

On 1 October 2012 06:15, ML m...@kenweb.org wrote:

> On 9/30/2012 12:46 PM, Mikael Abrahamsson wrote:
>
>> On Sun, 30 Sep 2012, ML wrote:
>>
>>> So far our PMD testing has come back clear.
>>
>> How have you done the PMD testing?
>>
>> For verifying PMD and CD through an actual wavelength (not per-fiber,
>> but through all the ADMs etc), I haven't really been able to find a
>> good solution. Suggestions welcome.
>
> All tests done per span on dark fiber.
>
> CD via a Nettest FD440: 1520 nm to 1640 nm.
> PMD via a PerkinElmer* device: just at 1550 nm.
>
> -ML
>
> * Couldn't find a model number in the test results; just a reference to
> PerkinElmer.




-- 
Daniel Griggs
Network Operations
e: dan...@fx.net.nz
d: +64 4 4989567