Re: [j-nsp] Number of PFE in MX DPC (DPCE-R-4XGE-XFP)

2015-04-09 Thread Abhi via juniper-nsp
Yes, you are right.

Regards,
abhijeet.c
 


On Wednesday, April 8, 2015 1:18 PM, Sachin Rai sachinrai1...@hotmail.com wrote:
 Hi All,

I need to configure a GRE tunnel on a 10-gigabit interface of a DPCE-R-4XGE-XFP DPC.

As per the Juniper documentation, Ethernet and tunnel interfaces cannot
coexist on the same Packet Forwarding Engine of a 10-Gigabit Ethernet
4-port DPC.

I just need to check how many PFEs these DPCs have (AFAIK it should be 4).
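[Editorial note: for context, on this DPC each 10GbE port sits on its own PFE, so dedicating a PFE to tunnel services takes that port out of Ethernet use. A minimal sketch of the usual approach; the slot/PIC numbers and addresses below are placeholders, not from this thread:

```
# Convert the PFE behind fpc 2, pic 0 to tunnel services; the xe- port
# on that PFE can then no longer be used as an Ethernet interface.
set chassis fpc 2 pic 0 tunnel-services bandwidth 10g

# GRE tunnel on the resulting gr- interface (example addresses)
set interfaces gr-2/0/0 unit 0 tunnel source 192.0.2.10
set interfaces gr-2/0/0 unit 0 tunnel destination 198.51.100.1
set interfaces gr-2/0/0 unit 0 family inet address 10.0.0.1/30
```
]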
                         
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] SRX ignores then routing-instance firewall action

2015-04-09 Thread Martin T
Hi,

I have a Juniper SRX firewall cluster with interface reth2.28 facing the
primary Internet connection, interface reth2.128 facing the secondary
Internet connection, and reth1.901 facing the LAN. Incoming traffic uses
the reth2.28 interface. There is a static NAT configuration applied which
changes the destination IP address to 10.70.50.201 if the destination
port is 515:

static {
    rule-set nat {
        from interface reth2.28;
        rule nat {
            match {
                destination-address 192.0.2.1/32;
                destination-port 515;
            }
            then {
                static-nat {
                    prefix {
                        10.70.50.201/32;
                        mapped-port 515;
                    }
                }
            }
        }
    }
}

Now the host with IP address 10.70.50.201 sends a reply (for example, a
TCP SYN+ACK) and the SRX receives it on the reth1.901 interface. I have
an input filter configured on reth1.901 which should force this traffic
to use the routing instance DIA:

firewall {
    filter fallback-to-nat {
        term nat {
            from {
                destination-address {
                    104.236.80.115/32;
                }
                protocol tcp;
                source-port 515;
            }
            then {
                routing-instance DIA;
            }
        }
    }
}

However, according to the flow traceoptions, the router still uses the
inet.0 RIB for routing decisions, not DIA.inet.0:

Apr  9 13:29:18 13:29:21.392241:CID-1:RT:
reth1.901:10.70.50.201/515-104.236.80.115/56022, tcp, flag 12 syn ack
Apr  9 13:29:18 13:29:21.392241:CID-1:RT: find flow: table 0x5115c900,
hash 9435(0x), sa 10.70.50.201, da 104.236.80.115, sp 515, dp
56022, proto 6, tok 9
Apr  9 13:29:18 13:29:21.392241:CID-1:RT:  flow got session.
Apr  9 13:29:18 13:29:21.392241:CID-1:RT:  flow session id 132067
Apr  9 13:29:18 13:29:21.392241:CID-1:RT:  route lookup failed:
dest-ip 104.236.80.115 orig ifp reth2.28 output_ifp reth2.128 fto
0x48bf7b50 orig-zone 7 out-zone 8 vsd 2
Apr  9 13:29:18 13:29:21.392241:CID-1:RT:  packet dropped,   pak
dropped since re-route failed


In the case of DIA.inet.0, the egress interface for 104.236.80.115 would be reth2.28.

Any ideas what might cause such behavior?
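[Editorial note: one possible cause, offered as an assumption rather than a confirmed diagnosis. Filter-based forwarding on Junos generally requires the target routing instance to be of instance-type forwarding, with interface routes shared into it via a RIB group so that next hops resolve in DIA.inet.0; if that is missing, the re-route lookup can fail as in the trace above. A minimal sketch, where the rib-group name and next-hop address are placeholders:

```
set routing-instances DIA instance-type forwarding
set routing-instances DIA routing-options static route 0.0.0.0/0 next-hop 203.0.113.1

# Share interface routes from inet.0 into DIA.inet.0 so the egress
# interface (reth2.28) is resolvable inside the instance
set routing-options rib-groups FBF-RG import-rib [ inet.0 DIA.inet.0 ]
set routing-options interface-routes rib-group inet FBF-RG
```
]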



thanks,
Martin


Re: [j-nsp] Aggregate policer config

2015-04-09 Thread Ben Dale


On 9 Apr 2015, at 10:22 am, Mark Tees markt...@gmail.com wrote:

 I would be curious to know if/how the aggregate behaviour works
 between different line cards/PFE.

I was wondering this too, so I did a bit of digging - Page 198 of Doug Hanks' 
MX Series book suggests it doesn't - quoting:

The same filter can be applied to multiple interfaces at the same time.  By 
default on MX Routers, these filters will sum (or aggregate) their counters and 
policing actions when those interfaces share a PFE.

I've only got MX80s here in the lab just now, which I believe share a single 
PFE across FPC 0 and FPC 1 - I can apply the same filter/policer to both a 10G 
and a 1G interface, and the aggregate bandwidth across the two interfaces is 
dictated by the policer.

 Just to clarify here:
 
 set firewall policer POLICER-800M filter-specific
 set firewall policer POLICER-800M if-exceeding bandwidth-limit 800m
 set firewall policer POLICER-800M if-exceeding burst-size-limit 10m
 set firewall policer POLICER-800M then discard
 
 This should result in the policer/counter actions being created per
 the filter they are used in but still shared within that filter
 providing interface-specific is not used right?

Yes, correct, however I suspect that the policer aggregate would again be per 
PFE.

So, back to the OP's question - you *should* be able to use a single filter, 
provided both your customer's links are on an MPC1 or MPC3E with 1G / 10G MICs.

If that's not the case, then stick with the per-interface 800M policer and just 
apply local-preference to your customer's routes as you import them, to ensure 
their traffic is always preferred via the 10G link (while it's up), and use 
MED/metric to encourage them to use the 10G link for their outbound traffic.
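[Editorial note: to illustrate the single-filter approach discussed above; the filter and interface names below are placeholders, not from this thread:

```
# Filter-specific policer: one token bucket per filter, not per interface
set firewall policer POLICER-800M filter-specific
set firewall policer POLICER-800M if-exceeding bandwidth-limit 800m
set firewall policer POLICER-800M if-exceeding burst-size-limit 10m
set firewall policer POLICER-800M then discard

set firewall family inet filter CUST-IN term police then policer POLICER-800M
set firewall family inet filter CUST-IN term police then accept

# Same filter on both customer-facing ports; when the ports share a PFE,
# the policer aggregates across them
set interfaces xe-0/0/0 unit 0 family inet filter input CUST-IN
set interfaces ge-1/0/0 unit 0 family inet filter input CUST-IN
```
]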

Cheers,

Ben


Re: [j-nsp] BGP route filtering

2015-04-09 Thread Patrick Okui
On  9-Apr-2015 03:07:22 (+0300), Jonathan Call wrote:
 My IPv6 BGP experience is a bit lacking. What would be an
 appropriate IPv6 policy-statement to install only a default route? Is
 it something as basic as this?

In general, your v4 and v6 policies should work the same (emphasis on should).

As for your policy-statement, I do not think it needs the bgp-nets term,
as that is covered by the deny term below it.
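[Editorial note: a minimal sketch of a default-route-only import policy; the policy and group names are placeholders, and the bgp-nets/deny terms from the original question are not shown in this thread:

```
set policy-options policy-statement V6-DEFAULT-ONLY term default from route-filter ::/0 exact
set policy-options policy-statement V6-DEFAULT-ONLY term default then accept
set policy-options policy-statement V6-DEFAULT-ONLY term deny then reject

set protocols bgp group UPSTREAM-V6 import V6-DEFAULT-ONLY
```
]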

--
patrick


