Re: [j-nsp] Re-write rule for GRE interface

2011-01-17 Thread Shiva Shankar
Hi All, thanks for the reply. The platform is M7i, and the Junos version is 9.3.

What I want to achieve is marking all the traffic leaving this GRE interface
with a particular DSCP value, say EF. The content of the GRE packet is a
Layer 3 VPN datagram (an IP datagram inside an MPLS packet).

Here's the output (I've tried applying DSCP marking to the GRE interface, but
it's always set to 00):

Router Manager@head-end-PE1> show class-of-service interface gr-1/2/0
Physical interface: gr-1/2/0, Index: 132
Queues supported: 4, Queues in use: 4
  Scheduler map: bfin-cos, Index: 22125
  Chassis scheduler map: bfin-cos, Index: 22125
  Logical interface: gr-1/2/0.0, Index: 66
Object                 Name                   Type              Index
Rewrite                mark-dscp              dscp              55103
Rewrite                exp-default            exp (mpls-any)    33
Classifier             exp-default            exp               10
Classifier             ipprec-compatibility   ip                13
Here's the config of the interface:
Manager@head-end-PE1 ...s-of-service interfaces gr-1/2/0

scheduler-map smap-cos;
unit 0 {
    rewrite-rules {
        dscp mark-dscp;
    }
}

Router Manager@head-end-PE1 ...-service rewrite-rules dscp
mark-dscp
forwarding-class be {
    loss-priority low code-point be;
    loss-priority high code-point cs3;
}
forwarding-class test1 {
    loss-priority high code-point cs1;
    loss-priority low code-point cs2;
}
forwarding-class ef {
    loss-priority high code-point cs4;
    loss-priority low code-point cs5;
}
forwarding-class nc {
    loss-priority high code-point cs6;
    loss-priority low code-point cs7;
}

Thanks
On Mon, Jan 17, 2011 at 1:04 AM, Diogo Montagner
diogo.montag...@gmail.com wrote:

 Hi Shiva,

 Could you please post the output of 'show class-of-service interface
 gr-x/y/z.abc'?

 Regards
 ./diogo -montagner



 On Fri, Jan 14, 2011 at 10:59 PM, Shiva Shankar shanka...@gmail.com
 wrote:
  Hi All, I'm trying to mark the DSCP value on a GRE packet, so that the
  telco can handle it as per our contracted services. I've tried
  'copy-tos-to-outer-ip-header', but it doesn't work as the inner datagram
  of a GRE packet is an MPLS datagram.
  Here's how it looks on the wire (found using a packet capture):

  Frame > Ethernet II header > IP packet > GRE header > MPLS header >
  original IP packet with data

  A Layer 3 VPN packet, while leaving the local PE towards the remote PE,
  uses a GRE interface which has LDP enabled.

  I've even tried applying DSCP rewrite rules, but it doesn't work. Any
  ideas?
 
  Thanks
  Shiva


[j-nsp] M20 SSB slot 0 failures

2011-01-17 Thread Chris Cappuccio
Hi,

I have four M20 chassis with continuous slot 0 SSB failures. 

These are from two completely different vendors.

My first thought was a bad chassis, but I am getting the same result with a
variety of chassis and SSB cards.  I also have chassis that don't exhibit this
failure with the same SSB cards, which is what leads me to believe that I am
hitting a rash of bad hardware.

The failure is as follows: any SSB tests out fine in slot 1, but the same SSBs
fail in slot 0.  Slot 0 often fails over to slot 1 in operation if both
SSBs are populated in these chassis.

Is this some kind of known problem?  Or am I just the most unlucky person in 
the Juniper M20 world?

Success in slot 1
-

SSB1( vty)# bringup chassis slot-state 1 diag
Slot 1 state changed from 'on-line' to 'diagnostics'

SSB1( vty)# diagnostic set mode manufacturing

SSB1( vty)# diag clear log

SSB1( vty)# diag bchip 1 sdram
[Waiting for completion, a:abort, p:pause]
B SDRAM (Slot 1) test
phase 1, pass 1, B SDRAM (Slot 1) test: Address Test
phase 2, pass 1, B SDRAM (Slot 1) test: Pattern Test
phase 3, pass 1, B SDRAM (Slot 1) test: Walking 0 Test
phase 4, pass 1, B SDRAM (Slot 1) test: Walking 1 Test
phase 5, pass 1, B SDRAM (Slot 1) test: Mem Clear Test
B SDRAM (Slot 1) test completed, 1 pass,  0 errors


SSB1( vty)# diag bchip 1 sdram
[Waiting for completion, a:abort, p:pause]
B SDRAM (Slot 1) test
phase 1, pass 1, B SDRAM (Slot 1) test: Address Test
phase 2, pass 1, B SDRAM (Slot 1) test: Pattern Test
phase 3, pass 1, B SDRAM (Slot 1) test: Walking 0 Test
phase 4, pass 1, B SDRAM (Slot 1) test: Walking 1 Test
phase 5, pass 1, B SDRAM (Slot 1) test: Mem Clear Test
B SDRAM (Slot 1) test completed, 1 pass,  0 errors


Fail in slot 0
--

SSB0( vty)# bringup chassis slot-state 0 diag
Slot 0 state changed from 'diagnostics' to 'diagnostics'

SSB0( vty)# diagnostic set mode manufacturing

SSB0( vty)# diag clear log

SSB0( vty)# diag bchip 0 sdram 
[Waiting for completion, a:abort, p:pause]
B SDRAM (Slot 0) test
phase 1, pass 1, B SDRAM (Slot 0) test: Address Test

*** Fatal error during B SDRAM (Slot 0) test, pass 1,
Data did not compare, Slot 0 (NIC0 B chip SDRAM banks ref. des. U?)


B SDRAM (Slot 0) test completed, 1 pass,  1 error

[Jan  5 21:34:17.356 LOG: Err] Data Error: Bank 0 (global cell 0x3e52): 
Expected 0x5280001f, Observed 0x200200



Re: [j-nsp] Optimal BFD settings for BGP-signaled VPLS?

2011-01-17 Thread Thedin Guruge
Hi,

What I gather is that you have LDP implemented at the MPLS level and the edge
routers are dual-homed to the core routers. Why not consider running LDP over
RSVP? The RSVP LSPs would only be per-link LSPs on the P-PE links, RSVP will
provide sub-second failure detection, and there is no need for a messy
full-mesh RSVP setup. Of course this relies on fast link-down detection at the
physical level as well as by the IGP, but it lets you opt out of BFD with BGP.
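
To make that concrete, tunneling LDP over per-link RSVP LSPs looks roughly like the
sketch below (the interface, LSP name and loopback address are placeholders, not
taken from your network):

protocols {
    rsvp {
        interface ge-0/0/0.0;                    /* the P-PE link */
    }
    mpls {
        label-switched-path pe1-to-p1 {
            to 10.255.0.2;                       /* loopback of the directly connected P router */
            ldp-tunneling;                       /* run LDP over this RSVP LSP */
        }
        interface ge-0/0/0.0;
    }
    ldp {
        interface lo0.0;                         /* targeted LDP session rides the RSVP LSP */
    }
}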

Thedin

On Mon, Jan 17, 2011 at 4:34 AM, Phil Bedard phil...@gmail.com wrote:

 If BGP stability is the main goal, do not use BFD with your BGP sessions.
 Are you using site multi-homing with the connected CE devices or are they
 all single-homed?  I don't know your topology but there may be some
 instances where you would want to run BFD for BGP notification with
 multi-homing.

 What hardware are you using?  We are using 300x3 everywhere and while we
 have seen some isolated false positives, things have been relatively
 stable.

 Also, I would look at the types of failures you sustain on a regular
 basis.  BFD doesn't make restoration faster, it lets you catch issues
 which may not have otherwise been caught like control plane issues. If you
 do not have a history of that maybe BFD isn't really necessary and may
 cause more problems than it solves.  Link failures and most node failures
 (which cause links to go dark) trigger routing protocol events much faster
 than BFD.  We use it because the routers were keeping the physical links
 up during a reboot and would eventually start dropping traffic.

 Phil

 On 1/14/11 9:39 PM, Clarke Morledge chm...@wm.edu wrote:

 I am trying to determine the optimal Bidirectional Forwarding Detection
 (BFD) settings for BGP auto-discovery and layer-2 signaling in a VPLS
 application.
 
 To simplify things, assume I am running LDP for building dynamic-only
 LSPs, as opposed to RSVP.  Assume I am running IS-IS as the IGP with BFD
 enabled on that, too, interconnecting all of the P and PE routers in the
 MPLS cloud.  I am following the Juniper recommendation of 300 ms minimum
 interval with 3 misses before calling a BFD down event.
 
 The network design has a small set of core routers, each one of these
 routers serves as a BGP route reflector.  All of the core routers have
 inter-meshed connections.  Each core router is only one hop away from the
 other.
 
 On the periphery, I have perhaps dozens of distribution routers.  Each
 distribution router is  directly connected to two or more core routers.
 Each distribution router has a BGP session to these core routers;
 therefore, each distribution router is connected to two route reflectors
 for redundancy.
 
 Given that above, what type of minimum interval BFD setting and miss
 count
 would you configure?  I want to try to get to a sub-second convergence
 during node/link failure, but I do not want to tune BFD too tight and
 potentially introduce unnecessary flapping.  I am willing to suffer some
 sporadic loss to the layer-2 connectivity within the VPLS cloud in the
 event of a catastrophe, etc, for a few seconds, but I don't want to
 unnecessarily tear down BGP sessions and wait some 20 to 60 seconds or so
 until BGP rebuilds and redistributes L2 information.
 
 For some time now, I have been playing with 3000 ms interval with 3
 misses
 (that's 9 seconds) as what I thought was a conservative estimate.
 However, I have run into cases where there has been enough router churn
 for various reasons to unnecessarily trip a BFD down event.  My hunch is
 that this router churn is due to buggy JUNOS code, but I don't have
 proof of that yet.  Nevertheless, I want the BGP infrastructure to stay
 solid and ride through transient events in a redundant network.
 
 Are there any factors that I am missing or not thinking thoroughly enough
 about when considering optimal BFD settings?
 
 Thanks.
 
 Clarke Morledge
 College of William and Mary
 Information Technology - Network Engineering
 Jones Hall (Room 18)
 Williamsburg VA 23187


Re: [j-nsp] Optimal BFD settings for BGP-signaled VPLS?

2011-01-17 Thread Keegan Holley
I agree except for using the IGP and RSVP for failure detection.  RSVP and
OSPF/ISIS run in the control plane and BFD is designed to run in the
forwarding plane.  Running BFD will diagnose issues where the control plane
is working but the forwarding plane is not.
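
For reference, the 300x3 timers being discussed are configured per BGP group or
neighbor. A minimal sketch (the group name and neighbor address are placeholders):

protocols {
    bgp {
        group ibgp-to-rr {
            neighbor 10.255.0.1 {
                bfd-liveness-detection {
                    minimum-interval 300;        /* milliseconds, transmit and receive */
                    multiplier 3;                /* missed packets before the session is declared down */
                }
            }
        }
    }
}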

On Mon, Jan 17, 2011 at 3:13 PM, Thedin Guruge the...@gmail.com wrote:

 Hi,

 What I gather is that you have LDP implemented in MPLS level and edge
 routers are dual homed with core routers, why not consider running LDP over
 RSVP, RSVP LSPs will only be per link LSPs between P-PE links. RSVP will
 provide sub second failure times and no need for a dirty full meshed RSVP
 setup. But of course this relies on fast link down detection at a physical
 level as well as by IGP. but you can opt out BFD with BGP.

 Thedin

 On Mon, Jan 17, 2011 at 4:34 AM, Phil Bedard phil...@gmail.com wrote:

  If BGP stability is the main goal, do not use BFD with your BGP sessions.
  Are you using site multi-homing with the connected CE devices or are they
  all single-homed?  I don't know your topology but there may be some
  instances where you would want to run BFD for BGP notification with
  multi-homing.
 
  What hardware are you using?  We are using 300x3 everywhere and while we
  have seen some isolated false positives, things have been relatively
  stable.
 
  Also, I would look at the types of failures you sustain on a regular
  basis.  BFD doesn't make restoration faster, it lets you catch issues
  which may not have otherwise been caught like control plane issues. If
 you
  do not have a history of that maybe BFD isn't really necessary and may
  cause more problems than it solves.  Link failures and most node failures
  (which cause links to go dark) trigger routing protocol events much
 faster
  than BFD.  We use it because the routers were keeping the physical links
  up during a reboot and would eventually start dropping traffic.
 
  Phil
 
  On 1/14/11 9:39 PM, Clarke Morledge chm...@wm.edu wrote:
 
  I am trying to determine the optimal Bidirectional Forwarding Detection
  (BFD) settings for BGP auto-discovery and layer-2 signaling in a VPLS
  application.
  
  To simplify things, assume I am running LDP for building dynamic-only
  LSPs, as opposed to RSVP.  Assume I am running IS-IS as the IGP with BFD
  enabled on that, too, interconnecting all of the P and PE routers in the
  MPLS cloud.  I am following the Juniper recommendation of 300 ms minimum
  interval with 3 misses before calling a BFD down event.
  
  The network design has a small set of core routers, each one of these
  routers serves as a BGP route reflector.  All of the core routers have
  inter-meshed connections.  Each core router is only one hop away from
 the
  other.
  
  On the periphery, I have perhaps dozens of distribution routers.  Each
  distribution router is  directly connected to two or more core routers.
  Each distribution router has a BGP session to these core routers;
  therefore, each distribution router is connected to two route reflectors
  for redundancy.
  
  Given that above, what type of minimum interval BFD setting and miss
  count
  would you configure?  I want to try to get to a sub-second convergence
  during node/link failure, but I do not want to tune BFD too tight and
  potentially introduce unnecessary flapping.  I am willing to suffer some
  sporadic loss to the layer-2 connectivity within the VPLS cloud in the
  event of a catastrophe, etc, for a few seconds, but I don't want to
  unnecessarily tear down BGP sessions and wait some 20 to 60 seconds or
 so
  until BGP rebuilds and redistributes L2 information.
  
  For some time now, I have been playing with 3000 ms interval with 3
  misses
  (that's 9 seconds) as what I thought was a conservative estimate.
  However, I have run into cases where there has been enough router churn
  for various reasons to unnecessarily trip a BFD down event.  My hunch is
  that this router churn is due to buggy JUNOS code, but I don't have
  proof of that yet.  Nevertheless, I want the BGP infrastructure to stay
  solid and ride through transient events in a redundant network.
  
  Are there any factors that I am missing or not thinking thoroughly
 enough
  about when considering optimal BFD settings?
  
  Thanks.
  
  Clarke Morledge
  College of William and Mary
  Information Technology - Network Engineering
  Jones Hall (Room 18)
  Williamsburg VA 23187



Re: [j-nsp] Optimal BFD settings for BGP-signaled VPLS?

2011-01-17 Thread Phil Bedard
Oops, I meant forwarding plane in my original post.   On some of the older
hardware, where it's a centralized function and not independent of the
control plane, it helps catch those failures as well.   I believe this
covers most Juniper hardware, unless I'm mistaken.

Phil 

From:  Keegan Holley keegan.hol...@sungard.com
Date:  Mon, 17 Jan 2011 15:22:01 -0500
To:  Thedin Guruge the...@gmail.com
Cc:  Phil Bedard phil...@gmail.com, juniper-nsp@puck.nether.net, Clarke
Morledge chm...@wm.edu
Subject:  Re: [j-nsp] Optimal BFD settings for BGP-signaled VPLS?

I agree except for using the IGP and RSVP for failure detection.  RSVP and
OSPF/ISIS run in the control plane and BFD is designed to run in the
forwarding plane.  Running BFD will diagnose issues where the control plane
is working but the forwarding plane is not.

On Mon, Jan 17, 2011 at 3:13 PM, Thedin Guruge the...@gmail.com wrote:
 Hi,
 
 What I gather is that you have LDP implemented in MPLS level and edge
 routers are dual homed with core routers, why not consider running LDP over
 RSVP, RSVP LSPs will only be per link LSPs between P-PE links. RSVP will
 provide sub second failure times and no need for a dirty full meshed RSVP
 setup. But of course this relies on fast link down detection at a physical
 level as well as by IGP. but you can opt out BFD with BGP.
 
 Thedin
 
 On Mon, Jan 17, 2011 at 4:34 AM, Phil Bedard phil...@gmail.com wrote:
 
  If BGP stability is the main goal, do not use BFD with your BGP sessions.
  Are you using site multi-homing with the connected CE devices or are they
  all single-homed?  I don't know your topology but there may be some
  instances where you would want to run BFD for BGP notification with
  multi-homing.
 
  What hardware are you using?  We are using 300x3 everywhere and while we
  have seen some isolated false positives, things have been relatively
  stable.
 
  Also, I would look at the types of failures you sustain on a regular
  basis.  BFD doesn't make restoration faster, it lets you catch issues
  which may not have otherwise been caught like control plane issues. If you
  do not have a history of that maybe BFD isn't really necessary and may
  cause more problems than it solves.  Link failures and most node failures
  (which cause links to go dark) trigger routing protocol events much faster
  than BFD.  We use it because the routers were keeping the physical links
  up during a reboot and would eventually start dropping traffic.
 
  Phil
 
  On 1/14/11 9:39 PM, Clarke Morledge chm...@wm.edu wrote:
 
  I am trying to determine the optimal Bidirectional Forwarding Detection
  (BFD) settings for BGP auto-discovery and layer-2 signaling in a VPLS
  application.
  
  To simplify things, assume I am running LDP for building dynamic-only
  LSPs, as opposed to RSVP.  Assume I am running IS-IS as the IGP with BFD
  enabled on that, too, interconnecting all of the P and PE routers in the
  MPLS cloud.  I am following the Juniper recommendation of 300 ms minimum
  interval with 3 misses before calling a BFD down event.
  
  The network design has a small set of core routers, each one of these
  routers serves as a BGP route reflector.  All of the core routers have
  inter-meshed connections.  Each core router is only one hop away from the
  other.
  
  On the periphery, I have perhaps dozens of distribution routers.  Each
  distribution router is  directly connected to two or more core routers.
  Each distribution router has a BGP session to these core routers;
  therefore, each distribution router is connected to two route reflectors
  for redundancy.
  
  Given that above, what type of minimum interval BFD setting and miss
  count
  would you configure?  I want to try to get to a sub-second convergence
  during node/link failure, but I do not want to tune BFD too tight and
  potentially introduce unnecessary flapping.  I am willing to suffer some
  sporadic loss to the layer-2 connectivity within the VPLS cloud in the
  event of a catastrophe, etc, for a few seconds, but I don't want to
  unnecessarily tear down BGP sessions and wait some 20 to 60 seconds or so
  until BGP rebuilds and redistributes L2 information.
  
  For some time now, I have been playing with 3000 ms interval with 3
  misses
  (that's 9 seconds) as what I thought was a conservative estimate.
  However, I have run into cases where there has been enough router churn
  for various reasons to unnecessarily trip a BFD down event.  My hunch is
  that this router churn is due to buggy JUNOS code, but I don't have
  proof of that yet.  Nevertheless, I want the BGP infrastructure to stay
  solid and ride through transient events in a redundant network.
  
  Are there any factors that I am missing or not thinking thoroughly enough
  about when considering optimal BFD settings?
  
  Thanks.
  
  Clarke Morledge
  College of William and Mary
  Information Technology - Network Engineering
  Jones Hall (Room 18)
  Williamsburg VA 23187
  

Re: [j-nsp] Re-write rule for GRE interface

2011-01-17 Thread Dale Shaw
Hi Shiva,

On Monday, January 17, 2011, Shiva Shankar shanka...@gmail.com wrote:
 Hi All, Thanks for the reply. Platform is M7i, and the junos is 9.3

[...]

How are you classifying traffic into the forwarding classes in the
first place? The rewrite-rule assumes traffic has been classified
already. For example, for the 'ef' rewrite-rule to work, you must have
already mapped your voice RTP traffic into the 'ef' forwarding-class.

You need a Behaviour Aggregate (BA) classifier, Multi-Field (MF)
classifier or static classifier applied on the ingress interface(s)
under the class-of-service stanza.
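
As a rough example of the BA case (the classifier and interface names here are
placeholders), something along these lines applied to the ingress interface
would do it:

class-of-service {
    classifiers {
        dscp my-dscp-classifier {
            forwarding-class ef {
                loss-priority low code-points ef;    /* map EF-marked ingress traffic into the ef class */
            }
        }
    }
    interfaces {
        ge-0/1/0 {
            unit 0 {
                classifiers {
                    dscp my-dscp-classifier;
                }
            }
        }
    }
}

An MF classifier would instead be a firewall filter with a 'then
forwarding-class' action, applied as an input filter on the ingress interface.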

Cheers,
Dale


Re: [j-nsp] Optimal BFD settings for BGP-signaled VPLS?

2011-01-17 Thread Keegan Holley
On Mon, Jan 17, 2011 at 3:57 PM, sth...@nethelp.no wrote:

  I agree except for using the IGP and RSVP for failure detection.  RSVP
 and
  OSPF/ISIS run in the control plane and BFD is designed to run in the
  forwarding plane.  Running BFD will diagnose issues where the control
 plane
  is working but the forwarding plane is not.

 The BFD Echo mode is designed to operate in the forwarding plane. But
 BFD also has an Async mode which operates in the control plane.

 As far as I know Juniper doesn't implement Echo mode, only Async mode.
 Which is unfortunate.


Of course I can't find the link now, but just last night I read that prior
to Junos 9.4, echo mode required a command to be entered in order to move BFD
to the forwarding plane.  In 9.4 and later, a new daemon was created to allow
BFD to run in the forwarding plane, and that became the default.  I don't
have time now, but I will post the link later.


Re: [j-nsp] Optimal BFD settings for BGP-signaled VPLS?

2011-01-17 Thread Daniel Verlouw
On Jan 17, 2011, at 11:50 PM, Keegan Holley wrote:
 Of course I can't find the link now, but just last night I read that prior
 to JunOS 9.4 echo mode required a command to be entered in order to move BFD
 to the forwarding plane.  In or after 9.4 a new daemon was created to allow
 BFD to run in the forwarding plane and that became the default.  I don't
 have time now but I will post the link later.

help topic routing-options ppm

Still not echo mode though; it's async regardless of where it's running, RE or
PFE. As Steinar said, echo mode is not supported.
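
If I remember right, what Keegan is describing is the distributed PPM behaviour
controlled under that hierarchy (the default from 9.4 onwards, per the docs);
the knob to pull it back onto the RE is roughly:

routing-options {
    ppm {
        no-delegate-processing;                  /* keep BFD/PPM packet handling on the RE instead of the PFE */
    }
}

Worth double-checking against your release, but either way the BFD sessions are
still async mode.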

--Daniel.


Re: [j-nsp] Optimal BFD settings for BGP-signaled VPLS?

2011-01-17 Thread Colin House

On 18/01/2011 9:50 AM, Keegan Holley wrote:

...  In or after 9.4 a new daemon was created to allow
BFD to run in the forwarding plane and that became the default.  I don't
have time now but I will post the link later.

Is this what you're speaking of?
http://www.juniper.net/techpubs/software/junos/junos94/swconfig-routing/distributed-ppm.html

This may also be of some interest - offloading (part of) VRRP to the PFE
rather than running on the RE:
http://www.juniper.net/techpubs/en_US/junos9.6/information-products/topic-collections/swconfig-high-availability/vrrp-ppmd-enabling.html


cheers,
colin


Re: [j-nsp] Re-write rule for GRE interface

2011-01-17 Thread Diogo Montagner
Hi,

You can also try applying an output firewall filter on the GRE
interface to rewrite the DSCP of the packet.

I think the copy-tos-to-outer-ip-header option will not work because
your inner packet is not an IP packet, and this option only works for an
inner IP packet.

If that does not work, you can apply a firewall filter in the output
direction of your physical interfaces, matching GRE packets plus the source
and destination IP addresses of your tunnel, and then setting the right DSCP values.
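
A rough sketch of that second approach (the addresses and filter name are
placeholders for your tunnel endpoints):

firewall {
    family inet {
        filter mark-gre-ef {
            term gre-tunnel {
                from {
                    source-address 192.0.2.1/32;         /* local tunnel endpoint */
                    destination-address 192.0.2.2/32;    /* remote tunnel endpoint */
                    protocol gre;
                }
                then {
                    dscp ef;                             /* rewrite the outer-header DSCP */
                    accept;
                }
            }
            term everything-else {
                then accept;
            }
        }
    }
}

applied with 'filter output mark-gre-ef' under family inet on the physical
egress interface.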

Another option you could try is output-forwarding-class-map:
http://www.juniper.net/techpubs/en_US/junos9.6/information-products/topic-collections/config-guide-cos/cos-classifying-packets-by-egress-interface.html

HTH
./diogo -montagner



On Tue, Jan 18, 2011 at 4:59 AM, Dale Shaw dale.shaw+j-...@gmail.com wrote:
 Hi Shiva,

 On Monday, January 17, 2011, Shiva Shankar shanka...@gmail.com wrote:
 Hi All, Thanks for the reply. Platform is M7i, and the junos is 9.3

 [...]

 How are you classifying traffic into the forwarding classes in the
 first place? The rewrite-rule assumes traffic has been classified
 already. For example, for the 'ef' rewrite-rule to work, you must have
 already mapped your voice RTP traffic into the 'ef' forwarding-class.

 You need a Behaviour Aggregate (BA) classifier, Multi-Field (MF)
 classifier or static classifier applied on the ingress interface(s)
 under the class-of-service stanza.

 Cheers,
 Dale


Re: [j-nsp] Offline config verification

2011-01-17 Thread Phil Shafer
Jared Mauch writes:
Just load them on the device and rollback and use commit-check as your middle 
step.
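
In other words, roughly (the filename is just an example):

user@router> configure
user@router# load override /var/tmp/candidate.conf
load complete
user@router# commit check
configuration check succeeds
user@router# rollback 0
user@router# exit

where 'rollback 0' throws the loaded candidate away so nothing is ever committed.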

There's also the test configuration command:

user@cli> test configuration server:cli.conf
cli.conf                                      100%   29KB  28.6KB/s   00:00
server:cli.conf:986:(28) fpc value outside range 0..1 for '2/2/0.0' in 'so-2/2/0.0': so-2/2/0.0
  [edit routing-instances one interface]
'interface so-2/2/0.0;'
  fpc value outside range 0..1 for '2/2/0.0' in 'so-2/2/0.0'
warning: statement must contain additional statements
server:cli.conf:991:(28) fpc value outside range 0..1 for '3/2/0.0' in 'so-3/2/0.0': so-3/2/0.0
  [edit routing-instances two interface]
'interface so-3/2/0.0;'
  fpc value outside range 0..1 for '3/2/0.0' in 'so-3/2/0.0'
warning: statement must contain additional statements
error: configuration syntax failed

user@cli>

Thanks,
 Phil