Re: [j-nsp] 6PE RR next-hop resolution best practices
Thanks Adam, Ivan! I'll try this out shortly.

Best,
James

On Mon, May 18, 2015 at 06:24:32AM, Adam Vitkovsky wrote:
> To resolve the NHs you can do:
>
>   set routing-options rib-groups 0-to-6 import-rib inet.0
>   set routing-options rib-groups 0-to-6 import-rib inet6.0
>   set routing-options rib-groups 0-to-6 import-policy loopbacks
>   set protocols isis/ospf rib-group inet 0-to-6
>
> This should create the ipv4-mapped-in-v6 (::ipv4) addresses in inet6.0.
>
> And then you tell BGP to resolve the NHs in inet6.0:
>
>   set routing-options resolution rib bgp.inet6.0 resolution-ribs inet6.0
>
> adam

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
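[Editor's note: the "loopbacks" import policy referenced in the rib-group config is never shown in the thread. A hypothetical sketch of such a policy, leaking only PE loopback /32s into the rib-group, might look like this (the 192.0.2.0/24 aggregate and term names are invented for illustration):

  set policy-options policy-statement loopbacks term pe-loopbacks from route-filter 192.0.2.0/24 prefix-length-range /32-/32
  set policy-options policy-statement loopbacks term pe-loopbacks then accept
  set policy-options policy-statement loopbacks then reject

Restricting the leak to loopback /32s keeps inet6.0 from being polluted with every IGP route as a ::ipv4 entry.]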
Re: [j-nsp] Junos BGP update generation inefficiency -cause for concern?
On 18/May/15 14:11, Scott Granados wrote:
> I'm not sure exactly what you're looking for, but the peer group system under JunOS is fairly efficient. If you set your export and import policies per group, the bgp process will place these in a peer group and dynamically break off slow members into their own groups, so that one slower peer won't cause the other members in the same peer group to synchronize as slowly. Cisco does not, or at least did not, do this as I understand things. In the Cisco case, if a peer group member lags it causes the other members of the same peer group to lag and doesn't allow updates until the slowest member catches up. Since BGP under Junos breaks this slow guy off on its own, you don't have the same limitation. This all happens dynamically.

If memory serves, Cisco recently developed some kind of feature to deal with slow peers to fix this very issue:

http://www.cisco.com/c/en/us/td/docs/ios/ios_xe/iproute_bgp/configuration/guide/2_xe/irg_xe_book/irg_slow_peer_xe.html

Okay, so maybe not recently...

Mark.
[j-nsp] SRx self-generated traffic
Hello,

I have three questions related to SRX self-generated traffic:

1. How to force the SRX self-generated traffic to get out to the internet through a certain link (suppose I have two internet connections)?
2. Is it possible to carry the self-generated traffic over a VPN tunnel terminated on the SRX?
3. Can we proxy the self-generated traffic to some proxy server?

Regards,
Mahmoud
Re: [j-nsp] 6PE RR next-hop resolution best practices
Hi Ivan, James,

To be honest I was improvising a bit with this one, as we are using bgp.l3vpn-inet6.0 (6VPE), so I thought that 6PE would use bgp.inet6.0. But Ivan, you're right: 6PE would be using inet6.0.

Alright, so by default inet6.0 would try to resolve NHs in inet6.3, right? So it needs to be changed to resolve NHs in itself (while the loopbacks are leaked into inet6.0), so I guess the last piece of the config should actually be:

  set routing-options resolution rib inet6.0 resolution-ribs inet6.0

adam

From: Ivan Ivanov [mailto:ivanov.i...@gmail.com]
Sent: 18 May 2015 14:29
To: Adam Vitkovsky
Cc: James Jun; juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] 6PE RR next-hop resolution best practices

> Hi Adam,
>
> I am not sure if your solution will do the job. Do you have this in production? Table bgp.inet6.0 is an IPv6 table of a routing-instance called 'bgp', or? Routes from the labeled-unicast inet6 family are put in the inet6.0 table, not in bgp.inet6.0. James will confirm that after he tests it. It might be that I am wrong.
>
> Ivan

> On Mon, May 18, 2015 at 7:24 AM, Adam Vitkovsky <adam.vitkov...@gamma.co.uk> wrote:
>> Hi James,
>>
>> James Jun
>> Sent: 16 May 2015 16:20
>>> The problem however is that I'm using the P's also as route-reflectors for distributing BGP throughout the network. So, I need the RR's to make correct BGP best-path decisions, but they can't do that on 6PE routes without having the inet6.3 table to reference the ipv4-mapped-in-v6 next-hops against.
>>
>> I'm sorry, I misunderstood the problem. So the problem isn't that the P's are trying to do an IPv6 lookup instead of label-switching the packets between PEs, but that the RRs are not advertising the IPv6 prefixes because they can't select the best paths, so the IPv6 prefixes are not being exchanged between the PEs, right?
>>
>> To resolve the NHs you can do:
>>
>>   set routing-options rib-groups 0-to-6 import-rib inet.0
>>   set routing-options rib-groups 0-to-6 import-rib inet6.0
>>   set routing-options rib-groups 0-to-6 import-policy loopbacks
>>   set protocols isis/ospf rib-group inet 0-to-6
>>
>> This should create the ipv4-mapped-in-v6 (::ipv4) addresses in inet6.0.
>>
>> And then you tell BGP to resolve the NHs in inet6.0:
>>
>>   set routing-options resolution rib bgp.inet6.0 resolution-ribs inet6.0
>>
>> adam

> --
> Best Regards!
> Ivan Ivanov

---
This email has been scanned for email related threats and delivered safely by Mimecast. For more information please visit http://www.mimecast.com
---
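[Editor's note: pulling Adam's self-correction together with the original rib-group config, the complete sketch for the RRs would be the following (group and policy names taken from the thread, untested; "isis" is used here for illustration, substitute "ospf" if that is the IGP in use):

  set routing-options rib-groups 0-to-6 import-rib inet.0
  set routing-options rib-groups 0-to-6 import-rib inet6.0
  set routing-options rib-groups 0-to-6 import-policy loopbacks
  set protocols isis rib-group inet 0-to-6
  set routing-options resolution rib inet6.0 resolution-ribs inet6.0

The idea is that the IGP-learned loopbacks leaked into inet6.0 become ::ipv4 entries, and the last statement then lets BGP resolve the 6PE ::ffff-style next-hops against inet6.0 itself rather than inet6.3.]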
Re: [j-nsp] Junos BGP update generation inefficiency -cause for concern?
I'm not sure exactly what you're looking for, but the peer group system under JunOS is fairly efficient. If you set your export and import policies per group, the bgp process will place these in a peer group and dynamically break off slow members into their own groups, so that one slower peer won't cause the other members in the same peer group to synchronize as slowly. Cisco does not, or at least did not, do this as I understand things. In the Cisco case, if a peer group member lags it causes the other members of the same peer group to lag and doesn't allow updates until the slowest member catches up. Since BGP under Junos breaks this slow guy off on its own, you don't have the same limitation. This all happens dynamically.

On May 18, 2015, at 7:00 AM, Adam Vitkovsky <adam.vitkov...@gamma.co.uk> wrote:
> Hey buddy,
>
> Saku Ytti
> Sent: 18 May 2015 11:12
>> On (2015-05-18 10:04), Adam Vitkovsky wrote:
>>
>> Hey Adam,
>>
>>> I'd like to ask if the 90's way of BGP generating updates per peer-group is a cause for concern on modern gear. And if not, am I the only one missing some flexibility in BGP peer configuration in Junos? It's really annoying that every time one needs to adjust something for a peer, might even be something session related, a new peer-group has to be carved up.
>>
>> Is there some efficient 2010's way you're thinking about? Spamming the same TCP datagram to multiple hosts has great efficiency benefits, but to be able to capitalize on this, you need to group neighbours who are to receive the same set of routes, or TCP messages.
>
> Yes, I'm thinking about BGP Dynamic Update Peer-Groups: the improved BGP update message generation where the update-group membership is calculated automatically/dynamically by the system based on common egress policies assigned to peers. And of course the configuration using templates (i.e. session templates and policy templates) and inheritance. I've been relying on the above forever in the Cisco world and I miss that much in Junos.
>
>> Making 1 peer == 1 update-group would be easy, but it would make the already hella slow rpd a lot worse.
>
> You see, this is what I mean: the RPD is slow, so why not use the above to speed things up.
>
>> It is best to optimize advertised routes to as few sets as possible, to gain the best benefits. I would recommend not setting export filters on a per-neighbour basis, only at group level. (i.e. not micro-optimizing neighbours to receive exactly what is needed, if extra routes are not actively harmful)
>> --
>> ++ytti
>
> That is an interesting approach indeed, though it's a sacrifice we are forced into because the update generation is not optimal in Junos.
>
> adam
Re: [j-nsp] 6PE RR next-hop resolution best practices
Hi Adam,

I am not sure if your solution will do the job. Do you have this in production? Table bgp.inet6.0 is an IPv6 table of a routing-instance called 'bgp', or? Routes from the labeled-unicast inet6 family are put in the inet6.0 table, not in bgp.inet6.0. James will confirm that after he tests it. It might be that I am wrong.

Ivan

On Mon, May 18, 2015 at 7:24 AM, Adam Vitkovsky <adam.vitkov...@gamma.co.uk> wrote:
> Hi James,
>
> James Jun
> Sent: 16 May 2015 16:20
>> The problem however is that I'm using the P's also as route-reflectors for distributing BGP throughout the network. So, I need the RR's to make correct BGP best-path decisions, but they can't do that on 6PE routes without having the inet6.3 table to reference the ipv4-mapped-in-v6 next-hops against.
>
> I'm sorry, I misunderstood the problem. So the problem isn't that the P's are trying to do an IPv6 lookup instead of label-switching the packets between PEs, but that the RRs are not advertising the IPv6 prefixes because they can't select the best paths, so the IPv6 prefixes are not being exchanged between the PEs, right?
>
> To resolve the NHs you can do:
>
>   set routing-options rib-groups 0-to-6 import-rib inet.0
>   set routing-options rib-groups 0-to-6 import-rib inet6.0
>   set routing-options rib-groups 0-to-6 import-policy loopbacks
>   set protocols isis/ospf rib-group inet 0-to-6
>
> This should create the ipv4-mapped-in-v6 (::ipv4) addresses in inet6.0.
>
> And then you tell BGP to resolve the NHs in inet6.0:
>
>   set routing-options resolution rib bgp.inet6.0 resolution-ribs inet6.0
>
> adam

--
Best Regards!
Ivan Ivanov
Re: [j-nsp] Junos BGP update generation inefficiency -cause for concern?
Hi Saku,

Saku Ytti [mailto:s...@ytti.fi]
Sent: 18 May 2015 12:50
> On 18 May 2015 at 14:00, Adam Vitkovsky <adam.vitkov...@gamma.co.uk> wrote:
>
> Hey Adam,
>
>> Yes, I'm thinking about BGP Dynamic Update Peer-Groups: the improved BGP update message generation where the update-group membership is calculated automatically/dynamically by the system based on common egress policies assigned to peers. And of course the configuration using templates (i.e. session templates and policy templates) and inheritance. I've been relying on the above forever in the Cisco world and I miss that much in Junos.
>
> The update-groups are created dynamically in JunOS as far as I know. That is, if you have a BGP group where neighbors have unique export policies, you will have multiple update-groups in a configuration group.

Yes, that is my understanding: there's definitely an update-group for each configuration group.

> But I guess if you have two neighbors with the same export policies in different configuration groups, it likely won't share the same update-group, haven't tested though.

Yeah, that's the culprit right there: the update-group is based on the configuration group rather than the actual egress policy commonalities. I also suspect that if you have two neighbours in a configuration group, both with the same policy but attached directly to the peer, there would be 3 update-groups created: one for the configuration group and one for each peer.

> I'm personally not too excited about templates in IOS, as I tend to have only like 3-5 peer-groups in IOS, and if I translate the config to template based, the amount of lines in my config increases. I think the break-even would require a rather large amount of groups.
> --
> ++ytti

Of course, it depends on the use case and the divergence of routing policies among peers.

adam
Re: [j-nsp] Junos BGP update generation inefficiency -cause for concern?
From: Mark Tinka [mailto:mark.ti...@seacom.mu]
Sent: 18 May 2015 13:23
> On 18/May/15 14:11, Scott Granados wrote:
>> I'm not sure exactly what you're looking for, but the peer group system under JunOS is fairly efficient. If you set your export and import policies per group, the bgp process will place these in a peer group and dynamically break off slow members into their own groups, so that one slower peer won't cause the other members in the same peer group to synchronize as slowly. Cisco does not, or at least did not, do this as I understand things. In the Cisco case, if a peer group member lags it causes the other members of the same peer group to lag and doesn't allow updates until the slowest member catches up. Since BGP under Junos breaks this slow guy off on its own, you don't have the same limitation. This all happens dynamically.
>
> If memory serves, Cisco recently developed some kind of feature to deal with slow peers to fix this very issue:
>
> http://www.cisco.com/c/en/us/td/docs/ios/ios_xe/iproute_bgp/configuration/guide/2_xe/irg_xe_book/irg_slow_peer_xe.html
>
> Okay, so maybe not recently...
>
> Mark.

Hi Mark,

Haaha, yeah, that one has been around for some time already :)

adam
Re: [j-nsp] Junos BGP update generation inefficiency -cause for concern?
On May 18, 2015, at 8:08 AM, Julien Goodwin <jgood...@studio442.com.au> wrote:
> On 18/05/15 21:49, Saku Ytti wrote:
>> The update-groups are created dynamically in JunOS as far as I know. That is, if you have a BGP group where neighbors have unique export policies, you will have multiple update-groups in a configuration group. But I guess if you have two neighbors with the same export policies in different configuration groups, it likely won't share the same update-group, haven't tested though.
>
> I believe the two groups with the same policies do get two update-groups.

That's right.

> I'm not sure about a neighbor with a neighbor-level policy the same as the group-level one, though.

Most neighbor-level policy changes will split the groups.

When in doubt, use "show bgp groups". It will show more than one group of the same name when the group gets internally split by such configuration.

-- Jeff
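[Editor's note: to illustrate the split Jeff describes, here is a hypothetical configuration where one neighbor carries its own export policy (group, policy names, and addresses are all invented):

  set protocols bgp group IBGP type internal
  set protocols bgp group IBGP export EXPORT-ALL
  set protocols bgp group IBGP neighbor 192.0.2.1
  set protocols bgp group IBGP neighbor 192.0.2.2 export EXPORT-CUSTOM

With a config like this, one would expect "show bgp groups" to list IBGP more than once, since 192.0.2.2's neighbor-level export policy forces it into its own update-group.]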
Re: [j-nsp] Junos BGP update generation inefficiency -cause for concern?
On 18/05/15 21:49, Saku Ytti wrote:
> The update-groups are created dynamically in JunOS as far as I know. That is, if you have a BGP group where neighbors have unique export policies, you will have multiple update-groups in a configuration group. But I guess if you have two neighbors with the same export policies in different configuration groups, it likely won't share the same update-group, haven't tested though.

I believe the two groups with the same policies do get two update-groups.

I'm not sure about a neighbor with a neighbor-level policy the same as the group-level one, though.
[j-nsp] clear show class-of-service fabric statistics summary
Hi,

Do you know if it is possible to clear the counters shown by "show class-of-service fabric statistics summary"?

thx

BR
Alberto
Re: [j-nsp] SRx self-generated traffic
On 18 May 2015, at 11:21 pm, M Abdeljawad via juniper-nsp <juniper-nsp@puck.nether.net> wrote:
> Hello
>
> I have three questions related to SRX self-generated traffic
>
> 1- How to force the SRX self-generated traffic to get out to internet through certain link (suppose I have two internet connections)?

Self-generated traffic will use inet.0 to determine the best path anywhere. I'm not aware of any way to perform policy-based routing on self-generated traffic, as FBF is applied on ingress.

> 2- Is it possible to carry the self-generated traffic over a VPN tunnel terminated on the SRX?

Yes, however there are some caveats to this approach depending on the specific traffic you are generating. In general though, you want to have numbered interfaces (eg: your st0.x interface has an IP address assigned to it) so that the source IP of the traffic is something sane (traffic sourced from an unnumbered tunnel interface will otherwise select the underlying interface IP address, which may be public).

Depending on what you are trying to do, you might find this useful:

  set system default-address-selection

This sources all system-generated IP traffic from the loopback interface if one is defined. Depending on which zone your loopback is in, you can then configure policies to suit.

> 3- Can we proxy the self-generated traffic to some proxy server?

In the case of traffic like syslog, DNS and software updates then yes this should be possible (for various definitions of proxy).

Cheers,
Ben
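[Editor's note: a minimal sketch of the numbered-st0 setup Ben suggests for question 2, with all addresses and the zone name invented for illustration:

  set interfaces st0 unit 0 family inet address 10.255.0.1/30
  set interfaces lo0 unit 0 family inet address 10.255.255.1/32
  set system default-address-selection
  set security zones security-zone vpn interfaces st0.0

With default-address-selection enabled, self-generated traffic routed into the tunnel would be sourced from the lo0 address, so the far end of the VPN needs a return route to that /32 and the relevant security policies must permit traffic from the loopback's zone.]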
Re: [j-nsp] 6PE RR next-hop resolution best practices
Hi James,

James Jun
Sent: 16 May 2015 16:20
> The problem however is that I'm using the P's also as route-reflectors for distributing BGP throughout the network. So, I need the RR's to make correct BGP best-path decisions, but they can't do that on 6PE routes without having the inet6.3 table to reference the ipv4-mapped-in-v6 next-hops against.

I'm sorry, I misunderstood the problem. So the problem isn't that the P's are trying to do an IPv6 lookup instead of label-switching the packets between PEs, but that the RRs are not advertising the IPv6 prefixes because they can't select the best paths, so the IPv6 prefixes are not being exchanged between the PEs, right?

To resolve the NHs you can do:

  set routing-options rib-groups 0-to-6 import-rib inet.0
  set routing-options rib-groups 0-to-6 import-rib inet6.0
  set routing-options rib-groups 0-to-6 import-policy loopbacks
  set protocols isis/ospf rib-group inet 0-to-6

This should create the ipv4-mapped-in-v6 (::ipv4) addresses in inet6.0.

And then you tell BGP to resolve the NHs in inet6.0:

  set routing-options resolution rib bgp.inet6.0 resolution-ribs inet6.0

adam
[j-nsp] Junos BGP update generation inefficiency -cause for concern?
Hi folks,

I'd like to ask if the 90's way of BGP generating updates per peer-group is a cause for concern on modern gear. And if not, am I the only one missing some flexibility in BGP peer configuration in Junos? It's really annoying that every time one needs to adjust something for a peer, might even be something session related, a new peer-group has to be carved up.

adam
Re: [j-nsp] Junos BGP update generation inefficiency -cause for concern?
On (2015-05-18 10:04), Adam Vitkovsky wrote:

Hey Adam,

> I'd like to ask if the 90's way of BGP generating updates per peer-group is a cause for concern on modern gear. And if not, am I the only one missing some flexibility in BGP peer configuration in Junos? It's really annoying that every time one needs to adjust something for a peer, might even be something session related, a new peer-group has to be carved up.

Is there some efficient 2010's way you're thinking about? Spamming the same TCP datagram to multiple hosts has great efficiency benefits, but to be able to capitalize on this, you need to group neighbours who are to receive the same set of routes, or TCP messages.

Making 1 peer == 1 update-group would be easy, but it would make the already hella slow rpd a lot worse.

It is best to optimize advertised routes to as few sets as possible, to gain the best benefits. I would recommend not setting export filters on a per-neighbour basis, only at group level. (i.e. not micro-optimizing neighbours to receive exactly what is needed, if extra routes are not actively harmful)

--
++ytti
Re: [j-nsp] Junos BGP update generation inefficiency -cause for concern?
Hey buddy,

Saku Ytti
Sent: 18 May 2015 11:12
> On (2015-05-18 10:04), Adam Vitkovsky wrote:
>
> Hey Adam,
>
>> I'd like to ask if the 90's way of BGP generating updates per peer-group is a cause for concern on modern gear. And if not, am I the only one missing some flexibility in BGP peer configuration in Junos? It's really annoying that every time one needs to adjust something for a peer, might even be something session related, a new peer-group has to be carved up.
>
> Is there some efficient 2010's way you're thinking about? Spamming the same TCP datagram to multiple hosts has great efficiency benefits, but to be able to capitalize on this, you need to group neighbours who are to receive the same set of routes, or TCP messages.

Yes, I'm thinking about BGP Dynamic Update Peer-Groups: the improved BGP update message generation where the update-group membership is calculated automatically/dynamically by the system based on common egress policies assigned to peers. And of course the configuration using templates (i.e. session templates and policy templates) and inheritance. I've been relying on the above forever in the Cisco world and I miss that much in Junos.

> Making 1 peer == 1 update-group would be easy, but it would make the already hella slow rpd a lot worse.

You see, this is what I mean: the RPD is slow, so why not use the above to speed things up.

> It is best to optimize advertised routes to as few sets as possible, to gain the best benefits. I would recommend not setting export filters on a per-neighbour basis, only at group level. (i.e. not micro-optimizing neighbours to receive exactly what is needed, if extra routes are not actively harmful)
> --
> ++ytti

That is an interesting approach indeed, though it's a sacrifice we are forced into because the update generation is not optimal in Junos.

adam
Re: [j-nsp] Junos BGP update generation inefficiency -cause for concern?
On Mon, May 18, 2015 at 10:04:00AM, Adam Vitkovsky wrote:
> Hi folks,
>
> I'd like to ask if the 90's way of BGP generating updates per peer-group is a cause for concern on modern gear. And if not, am I the only one missing some flexibility in BGP peer configuration in Junos? It's really annoying that every time one needs to adjust something for a peer, might even be something session related, a new peer-group has to be carved up.

Is creation of a new peer-group really needed? We have absolutely no problems configuring common settings at group level and customized settings at peer level:

s...@rt.ov.spb> show configuration protocols bgp group DownLinks
type external;
import deny_any;
export eBGP_to_customer_default;
remove-private;
multipath multiple-as;
/* typical customer setup, only import policy modified */
neighbor XX.XXX.XXX.XX {
    description ...;
    passive;
    import eBGP_from_Customer1;
    peer-as 123456;
}
/* another customer who wants default route together with full-view, so export policy customized too */
neighbor XX.XXX.XXX.XX {
    description ...;
    passive;
    import eBGP_from_Customer2;
    export [ default_originate eBGP_to_customer_default ];
    peer-as 123457;
}
/* yet another customer, multihomed (thus metric-out) and with personal prefix-limit, prefix-filter is not enough in this case */
neighbor XX.XXX.XXX.XX {
    description ...;
    metric-out igp;
    passive;
    import eBGP_from_Customer3;
    family inet {
        unicast {
            prefix-limit {
                maximum 1000;
                teardown 30 idle-timeout 10;
            }
        }
    }
    peer-as 123458;
}

adam
Re: [j-nsp] Junos BGP update generation inefficiency -cause for concern?
On 18 May 2015 at 14:00, Adam Vitkovsky <adam.vitkov...@gamma.co.uk> wrote:

Hey Adam,

> Yes, I'm thinking about BGP Dynamic Update Peer-Groups: the improved BGP update message generation where the update-group membership is calculated automatically/dynamically by the system based on common egress policies assigned to peers. And of course the configuration using templates (i.e. session templates and policy templates) and inheritance. I've been relying on the above forever in the Cisco world and I miss that much in Junos.

The update-groups are created dynamically in JunOS as far as I know. That is, if you have a BGP group where neighbors have unique export policies, you will have multiple update-groups in a configuration group. But I guess if you have two neighbors with the same export policies in different configuration groups, it likely won't share the same update-group, haven't tested though.

I'm personally not too excited about templates in IOS, as I tend to have only like 3-5 peer-groups in IOS, and if I translate the config to template based, the amount of lines in my config increases. I think the break-even would require a rather large amount of groups.

--
++ytti
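[Editor's note: the untested scenario Saku describes (two configuration groups with the identical export policy) would look like this hypothetical snippet, with all names and addresses invented; per the thread, the two groups are expected to end up in separate update-groups despite the shared policy:

  set protocols bgp group PEERS-A type external export STANDARD-EXPORT
  set protocols bgp group PEERS-A neighbor 192.0.2.1 peer-as 64500
  set protocols bgp group PEERS-B type external export STANDARD-EXPORT
  set protocols bgp group PEERS-B neighbor 198.51.100.1 peer-as 64501

"show bgp groups" after commit would confirm whether the update-state is shared or split.]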