Re: [j-nsp] M-series IPSEC / SP interface and VRF
So this works to establish the tunnels. The problem is that BGP routes received over the tunnel do not function correctly: the routes are properly installed in the VRF, but traffic to those destinations does not pass. Does anyone have experience running BGP like this on the M-series, or does it just not work with next-hop-style IPsec?

Thanks,
-SH

On 11/12/13, 1:34 PM, Scott Harvanek wrote:
> Yep, excellent, I'll give it a whirl, thanks!
> Scott H.

On 11/12/13, 1:24 PM, Alex Arseniev wrote:
> So, if I understand your requirement, you want sp-0/0/0.unit in the VRF, correct? And the outgoing GE interface in inet.0? And where should the decrypted packets be placed, inet.0 or the VRF? And where should the to-be-encrypted packets arrive from, inet.0 or the VRF? If the answer is correct/inet.0/VRF/VRF, then migrate to next-hop-style IPsec and place the inside sp-* unit into the VRF, leaving the outside sp-* unit in inet.0.
> HTH
> Thanks
> Alex

On 12/11/2013 16:35, Scott Harvanek wrote:
> Alex,
> Yes, I tried this, but it looks like you can't set it to the default inet.0 instance, only to other instances. The local gateway in my case is in the default instance and I want the service interface in another, so unless I'm mistaken it's in the default instance by default and this fails?
> Scott H.

On 11/12/13, 11:22 AM, Alex Arseniev wrote:
> Yes:
>
> [edit]
> aarseniev@m120# set services service-set SS1 ipsec-vpn-options local-gateway ?
> Possible completions:
>   address             Local gateway address
>   routing-instance    Name of routing instance that hosts local gateway   <= CHECK THIS OUT!
>
> aarseniev@m120> show version
> Hostname: m120
> Model: m120
> JUNOS Base OS boot [10.4S7.1]
>
> HTH
> Thanks
> Alex

On 12/11/2013 16:05, Scott Harvanek wrote:
> Anyone with any ideas on this?
> Scott H.

On 11/9/13, 12:58 PM, Scott Harvanek wrote:
> Is there a way to build an IPsec tunnel / service interface where the local gateway is NOT in the same routing-instance as the service interface?
> Here's what I'm trying to do:
>
>   [ router A (SRX) ] == switch / IS-IS mesh == [ router B (M10i) ]
>     [ st0.0 / VRF ]                              [ sp-0/0/0.0 / VRF ]
>
> The problem is, I want sp-0/0/0.0 on router B in a VRF but NOT the outside interface on router B. I cannot commit unless the outside/local-gateway side of the IPsec tunnel is in the same routing-instance as the service interface. Is there a way around this? The SRX devices can do this without issue.
>
> service-set {
>     interface-service {
>         service-interface sp-0/0/0.0;    <-- want this in a VRF
>     }
>     ipsec-vpn-options {
>         local-gateway x.x.x.x;           <-- default routing instance
>     }
>     ipsec-vpn-rules ...
> }

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
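[Editor's note: Alex's next-hop-style suggestion quoted above replaces the single sp- unit with an inside and an outside unit, which can sit in different routing instances. A minimal sketch of what that might look like on the M10i; unit numbers, the service-set/rule/VRF names, and addresses are illustrative, not a tested configuration:]

```
interfaces {
    sp-0/0/0 {
        unit 1 {
            family inet;
            service-domain inside;     /* this unit goes into the VRF */
        }
        unit 2 {
            family inet;
            service-domain outside;    /* this unit stays in inet.0 */
        }
    }
}
services {
    service-set SS1 {
        next-hop-service {
            inside-service-interface sp-0/0/0.1;
            outside-service-interface sp-0/0/0.2;
        }
        ipsec-vpn-options {
            local-gateway x.x.x.x;     /* reachable via inet.0 */
        }
        ipsec-vpn-rules RULE1;
    }
}
routing-instances {
    CUST-VRF {
        instance-type vrf;
        interface sp-0/0/0.1;          /* inside unit only */
        /* route-distinguisher, vrf-target, etc. omitted */
    }
}
```

Decrypted traffic then emerges from sp-0/0/0.1 inside the VRF, while the encrypted packets leave via inet.0.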
Re: [j-nsp] M-series IPSEC / SP interface and VRF
For the traffic to be encrypted, the BGP nexthop has to point into the tunnel, which means one of the below:

1/ BGP has to run inside the tunnel, or
2/ You have to have a BGP import policy that changes the nexthop to the tunnel's remote address. If this is eBGP, then also add the accept-remote-nexthop knob.

HTH
Thanks
Alex

On 17/12/2013 16:08, Scott Harvanek wrote:
> So this works to establish the tunnels. The problem is that BGP routes received over the tunnel do not function correctly: the routes are properly installed in the VRF, but traffic to those destinations does not pass. Does anyone have experience running BGP like this on the M-series, or does it just not work with next-hop-style IPsec?
> Thanks,
> -SH

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
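[Editor's note: Alex's option 2, an import policy rewriting the protocol next hop to the tunnel endpoint, might look roughly like this. The policy name, group name, VRF name, and tunnel address are made up for illustration:]

```
policy-options {
    policy-statement NH-TO-TUNNEL {
        term bgp-routes {
            from protocol bgp;
            then {
                next-hop 10.255.255.2;    /* tunnel remote address (illustrative) */
            }
        }
    }
}
routing-instances {
    CUST-VRF {
        protocols {
            bgp {
                group TUNNEL-PEER {
                    import NH-TO-TUNNEL;
                    accept-remote-nexthop;    /* for the eBGP case, per Alex */
                }
            }
        }
    }
}
```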
Re: [j-nsp] M-series IPSEC / SP interface and VRF
BGP is running in the tunnel and the next hop is the far side of the tunnel; everything looks correct. All the routes show the far end of the tunnel and BGP is established inside the VRF, but traffic will not pass except for traffic directly between the two endpoints, e.g. BGP/ICMP on the tunnel subnet. I'm at a loss. I'll pull some info and post it back; maybe someone sees something I don't.

Scott H.

On 12/17/13, 12:27 PM, Alex Arseniev wrote:
> For the traffic to be encrypted, the BGP nexthop has to point into the tunnel, which means one of the below:
> 1/ BGP has to run inside the tunnel, or
> 2/ You have to have a BGP import policy that changes the nexthop to the tunnel's remote address. If this is eBGP, then also add the accept-remote-nexthop knob.
> HTH
> Thanks
> Alex

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
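[Editor's note: for this kind of "BGP up but transit traffic dies" symptom on M-series services IPsec, output along these lines is typically what the list would ask for. Commands are from memory and the exact form may vary by release; the VRF name is illustrative:]

```
user@m10i> show services ipsec-vpn ipsec security-associations detail
user@m10i> show services ipsec-vpn ipsec statistics
user@m10i> show route table CUST-VRF.inet.0 protocol bgp
user@m10i> show interfaces sp-0/0/0 extensive
```

Comparing encrypt/decrypt counters in the statistics output against the sp- interface counters usually shows whether traffic is dying before or after the service PIC.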
Re: [j-nsp] Anybody use dual RE in srx3k? SCM only?
The SCM will also keep a chassis alive in case of RE failure (assuming it's in a cluster and there's an active RE on the other chassis). I don't believe that a second RE is supported in the 3k, but that's primarily because the SCM does everything that a backup RE would be able to do for less $$$ (so even if a second RE were supported, there would be no benefit to paying the extra cost of an RE rather than an SCM).

-Bill

On 12/16/13 6:12 PM, Santiago Martinez santiago.martinez...@gmail.com wrote:
> Hi, the SCM will only allow you to use a secondary control link on the SRX3600. Juniper sales engineering said that they are not planning to add support for a second RE on the SRX3000 family. If you don't have the SCM, the second control port will stay disabled.
> Hope it helps,
> Santiago

On 16 Dec 2013, at 21:26, OBrien, Will obri...@missouri.edu wrote:
> Second REs don't really do anything on SRX... yet. On the 5800s, I had to add them in order to bring up a secondary control link. The only thing they do is init the control plane on the chassis for that link to come up. I believe it's an artifact from stealing the MX chassis. I don't think it does anything for you on the 3600, since that's a different chassis architecture altogether.
> Will

On Dec 16, 2013, at 3:07 PM, Morgan McLean wrote:
> Hi all,
> Looking into installing the SCM module into a couple of SRX3600s I have in production. I notice the diagram from Juniper says slot RE1 for the SCM. Do they support running another RE? Just curious if anybody does this, if it's worth it, or if it's even possible.
> --
> Thanks,
> Morgan

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
[j-nsp] ip fragmentation, different mtu sizes
hi, all:

i have a generic question regarding ip fragmentation. i have two routers; one is cisco and the other is juniper. they are connected back to back with default ethernet mtu (cisco 1522 and juniper 1518, of course with vlans on both ends). i understand that the two vendors have different ways of counting header overhead. when i send icmp pings without specifying packet sizes (just default values), or with packet sizes smaller than those values (1472 on the juniper side and 1500 on the cisco side), everything is fine; but for anything beyond those two values on either end, i get nothing. i thought that, for ip mtu, anything bigger than the ip mtu (or, in juniper terms, the protocol mtu) would be fragmented into multiple packets. did i miss something, or is my understanding incorrect? thanks!

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
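[Editor's note: one way to take the vendor-default arithmetic out of the picture is to pin both the media and protocol MTUs explicitly on the Junos side. A sketch; the interface, VLAN, and address are illustrative:]

```
interfaces {
    ge-0/0/0 {
        vlan-tagging;
        mtu 1518;                 /* media MTU: 1500 + 14 Ethernet header + 4 VLAN tag */
        unit 100 {
            vlan-id 100;
            family inet {
                mtu 1500;         /* protocol (IP) MTU */
                address 192.0.2.1/30;
            }
        }
    }
}
```

With the IP MTU pinned at 1500 on both boxes, a 1472-byte ping payload (1472 + 8 ICMP + 20 IP = 1500) is the largest that goes out unfragmented.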
Re: [j-nsp] ip fragmentation, different mtu sizes
You're correct that they calculate sizes differently. Cisco uses the payload size including headers; Juniper uses just the data-payload size, so, for example, a 9000-byte layer-3 packet on Cisco corresponds to 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes on Juniper. You can get them to send unfragmented ICMP packets by turning on the no-fragment flag: on Junos it's 'do-not-fragment'; in IOS it depends a lot on the version, but it's there.

HTH,
Brent Sweeny, Indiana University

On 12/17/2013 8:03 PM, snort bsd wrote:
> ...i thought that, for ip mtu, anything bigger than the ip mtu (or, in juniper terms, the protocol mtu) would be fragmented into multiple packets. did i miss something, or is my understanding incorrect? thanks!

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
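[Editor's note: concretely, the no-fragment test Brent describes would look something like this on each side; addresses are illustrative. Note the size arguments differ per the header-accounting difference above:]

```
/* Junos: 'size' is the ICMP data payload (1472 + 8 ICMP + 20 IP = 1500-byte IP packet) */
user@junos> ping 192.0.2.1 size 1472 do-not-fragment count 5

/* IOS: 'size' is the whole IP datagram */
Router# ping 192.0.2.2 size 1500 df-bit
```

If these succeed but one byte more fails with the DF bit set, the 1500-byte IP MTU is confirmed end to end.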
Re: [j-nsp] ip fragmentation, different mtu sizes
thanks. it is not about setting df bits; i didn't set the df bit when i sent extended icmp pings between the two routers, and i wasn't interested in that. there are a few posts that clearly explain the differences between the two vendors in terms of mtu calculations; that is not the point here. what i am trying to understand is the fragmentation itself: clearly, with the default media mtu (or ip mtu for that matter), if i send out l3 packets bigger than the protocol mtu (without setting the df bit), why didn't the expected ip fragmentation happen?

On Tuesday, 17 December 2013 9:13 PM, Brent Sweeny swe...@indiana.edu wrote:
> You're correct that they calculate sizes differently. Cisco uses the payload size including headers; Juniper uses just the data-payload size, so, for example, a 9000-byte layer-3 packet on Cisco corresponds to 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes on Juniper. You can get them to send unfragmented ICMP packets by turning on the no-fragment flag: on Junos it's 'do-not-fragment'; in IOS it depends a lot on the version, but it's there.
> HTH,
> Brent Sweeny, Indiana University

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] ip fragmentation, different mtu sizes
I believe it does show 'the expected ip fragmentation'. This is from a Cisco 2921:

tpr-oob#show ip traffic | in frag
   0 fragmented, 0 fragments, 0 couldn't fragment
tpr-oob#ping somehost size 1600
Type escape sequence to abort.
Sending 5, 1600-byte ICMP Echos to somehost, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 64/66/68 ms
tpr-oob#show ip traffic | in frag
   5 fragmented, 10 fragments, 0 couldn't fragment

I don't have a Juniper quiet enough to isolate one test, but it is seeing and counting fragments from a number of sources:

re1> show system statistics ip | match frag
   25525417 fragments received
   0 fragments dropped (dup or out of space)
   0 fragments dropped (queue overflow)
   2 fragments dropped after timeout
   0 fragments dropped due to over limit
   242206410 output datagrams fragmented
   0 fragments created
   7 datagrams that can't be fragmented

On 12/17/2013 10:15 PM, snort bsd wrote:
> ...what i am trying to understand is the fragmentation itself: clearly, with the default media mtu (or ip mtu for that matter), if i send out l3 packets bigger than the protocol mtu (without setting the df bit), why didn't the expected ip fragmentation happen?

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] [c-nsp] ip fragmentation, different mtu sizes
Can you paste the output of 'show ip interface <interface>' from the Cisco and 'show interfaces <interface>' from the Juniper?

Regards,
Amjad Ul Hasnain Qasmi

-----Original Message-----
From: cisco-nsp [mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of snort bsd
Sent: Wednesday, December 18, 2013 6:03 AM
To: juniper-nsp; cisco-nsp
Subject: [c-nsp] ip fragmentation, different mtu sizes

> i have a generic question regarding ip fragmentation. i have two routers; one is cisco and the other is juniper. they are connected back to back with default ethernet mtu (cisco 1522 and juniper 1518, with vlans on both ends)... i thought that, for ip mtu, anything bigger than the ip mtu (or, in juniper terms, the protocol mtu) would be fragmented into multiple packets. did i miss something, or is my understanding incorrect? thanks!

___
cisco-nsp mailing list
cisco-...@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] SRX monitor-interface question
SRX (high end) by default keeps logs on the data plane, and they have to be forwarded to an external syslog server:
http://kb.juniper.net/InfoCenter/index?page=content&id=KB16506
However, from Junos 10 onward you can perhaps copy them from the data plane to the control plane if you want to see them on the console.

Muhammad Fahad Khan
JNCIE-M #756
Lead Network and Security Consultant - IBM
+92-301-8247638
Skype: fahad-ibm
http://pk.linkedin.com/in/muhammadfahadkhan

On Fri, Dec 13, 2013 at 7:28 PM, R S dim0...@hotmail.com wrote:
> The only part missing will remain the local control-plane resources (i.e. logs, SNMP, etc.) that stay on the RG0 secondary. Am I right?

On Fri, 13 Dec 2013 14:58:46 +0300, Asad Raza asadgard...@gmail.com wrote:
> Refer to "data plane" in the following:
> http://kb.juniper.net/InfoCenter/index?page=content&id=KB16224
> Asad

On Friday, December 13, 2013, R S wrote:
> How can I configure syslog/traffic logging directly from the data plane? Some config example? Tks

On Fri, 13 Dec 2013 14:51:58 +0300, Asad Raza wrote:
> It's not recommended to use the control plane for traffic logs; you can configure the SRX to forward traffic logs directly from the data plane. RG0, aka the control plane, controls your routing engine, routing protocols, and chassis. Failing it over will cause your routing daemon to restart, routing protocols to reconverge, and so on...
> Asad

On Friday, December 13, 2013, R S wrote:
> And what about syslog or firewall traffic-logging flows on the RG1-active node if RG0 remains active on the passive one?

On Fri, 13 Dec 2013 16:34:53 +0500, Fahad Khan fahad.k...@gmail.com wrote:
> RG0 only contains the control plane, i.e. the REs. In SRX failover it's not necessary to fail over RG0 when there is a failover in RG1 due to a link failure, so we only do interface-monitor in RG1, RG2, ... not in RG0. RG0 already runs in A/P mode. It can be possible that SRX B is primary in RG0 while secondary in RG1 (meaning SRX A is primary in RG1).

On Fri, Dec 13, 2013 at 2:07 PM, R S dim0...@hotmail.com wrote:
> Hi,
> In an SRX5800 cluster A/P deployment, does anybody recommend monitor-interface also on RG0, or not? Pros? Cons? We did it, but unfortunately during an SPU crash RG0 didn't switch properly, and JTAC told us monitor-interface under RG0 is not recommended in this corner case... Any experience to share is useful.
> Tks

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
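[Editor's note: for reference, the data-plane ("stream mode") logging Fahad and the KB describe is configured roughly like this on a high-end SRX. The source address, stream name, and collector address are made up for illustration:]

```
security {
    log {
        mode stream;
        source-address 10.1.1.1;        /* address the SPUs source log packets from */
        stream TRAFFIC-LOGS {
            format sd-syslog;
            host {
                192.0.2.50;             /* external syslog collector */
            }
        }
    }
}
```

Because the SPUs emit these logs directly, they reach the collector from whichever node holds the active data plane (RG1+), independent of where RG0 sits.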
[j-nsp] R: Re: SRX monitor-interface question
I currently would like to have both traffic logs and local resource logs (system syslog, routing syslog, SNMP, etc.) on the device we are able to reach. We did management in-band, hence we are able to reach it only through RG1. Any idea?

Tks

Sent from mobile

-------- Original message --------
From: Fahad Khan fahad.k...@gmail.com
To: R S dim0...@hotmail.com
Cc: Asad Raza asadgard...@gmail.com, juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] SRX monitor-interface question

> SRX (high end) by default keeps logs on the data plane, and they have to be forwarded to an external syslog server: http://kb.juniper.net/InfoCenter/index?page=content&id=KB16506. However, from Junos 10 onward you can perhaps copy them from the data plane to the control plane if you want to see them on the console.
>
> Muhammad Fahad Khan
> JNCIE-M #756
> Lead Network and Security Consultant - IBM

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp