All:

I actually received quite a few responses off-list to this question.  

We have to deal with many different audit/compliance agencies, each with its 
own guidelines. One of those guidelines is that security zones should reside on 
physically separate switches.  However, in an MPLS-based environment they 
allow for VRF/VSI separation on the same physical device.  The reason is that 
each instance has its own RIB and its own FIB structures.  At least, this is 
what I've heard from multiple auditors over the last 6 or 7 years while 
working for different companies.  

I'm questioning this in general because we are looking at OpenFlow.  In 
particular, the question came up: "Are separate structures really necessary?"  
What if the FIB lookup were entirely hash-based (source port included in the 
key), and each entry in the hash table had a mask structure associated with it 
(covering src/dst MAC and IP addresses)?
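To make the idea concrete, here is a minimal sketch of such a single flat FIB: one hash table for all tenants, with the ingress port folded into the lookup key so overlapping tenant prefixes never collide. The class, port names, and next-hop labels are all hypothetical, purely for illustration, and longest-prefix match is emulated by probing one hash bucket per mask length present:

```python
import ipaddress

class FlatFib:
    """Hypothetical single-table FIB keyed on (ingress_port, prefix)."""

    def __init__(self):
        self.table = {}            # (ingress_port, network) -> next_hop
        self.prefix_lens = set()   # mask lengths present, probed longest-first

    def add(self, ingress_port, prefix, next_hop):
        net = ipaddress.ip_network(prefix)
        self.table[(ingress_port, net)] = next_hop
        self.prefix_lens.add(net.prefixlen)

    def lookup(self, ingress_port, dst):
        # Longest-prefix match emulated by one hash probe per mask length.
        for plen in sorted(self.prefix_lens, reverse=True):
            net = ipaddress.ip_network(f"{dst}/{plen}", strict=False)
            hit = self.table.get((ingress_port, net))
            if hit is not None:
                return hit
        return None

fib = FlatFib()
fib.add("ge-0/0/1", "10.0.0.0/8", "nh-red")    # tenant red's access port
fib.add("ge-0/0/2", "10.0.0.0/8", "nh-green")  # same prefix, different tenant
print(fib.lookup("ge-0/0/1", "10.0.0.1"))  # nh-red
print(fib.lookup("ge-0/0/2", "10.0.0.1"))  # nh-green
```

Because the ingress port is part of the hash key, the two tenants' identical 10.0.0.0/8 entries occupy distinct buckets in the same structure, which is roughly the property the separate-FIB requirement is meant to guarantee.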

I previously blogged that a (totally hypothetical) multi-tenant network built 
entirely with PBR or filter-based forwarding would not pass audit because of 
the lack of separate RIB and FIB structures for each tenant in the network.  
Why wouldn't this pass audit?  OpenFlow is similar.  In this potential OpenFlow 
design there would still be separate VRFs on the controllers, but ultimately 
the forwarding would be compiled into this single hash table structure.  

So I'm questioning a basic assumption here: are separate FIB structures for 
each VPN required?  What I am hearing is mainly ASIC/NPU/FPGA design and 
performance concerns.  Robert expressed some concern that one VPN could 
potentially impact other VPNs through something like route instability or 
table corruption of some kind... "crashing" was the word he used :-).
 
I did spray a few lists with this question, but they are lists where the right 
people generally lurk...

 
Derick Winkworth
CCIE #15672 (RS, SP), JNCIE-M #721
http://packetpushers.net/author/dwinkworth


________________________________
From: Robert Raszuk <rob...@raszuk.net>
To: Gert Doering <g...@greenie.muc.de>
Cc: Derick Winkworth <dwinkwo...@att.net>; "juniper-nsp@puck.nether.net" 
<juniper-nsp@puck.nether.net>; "cisco-...@puck.nether.net" 
<cisco-...@puck.nether.net>
Sent: Tuesday, September 27, 2011 3:58 AM
Subject: Re: [c-nsp] general question on VRFs and FIBs...

Hi Gert,

> "address first, VRF second".

Well no one sane would do that ;) I believe what Derick was asking was 
why not have "incoming_interface/table_id -> prefix" lookup.
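For contrast, here is a minimal sketch (plain Python, illustrative names only) of what goes wrong with the opposite order, a plain longest-prefix match run over one merged table without any VRF/interface context in the key, using the three overlapping prefixes from Gert's example quoted below:

```python
import ipaddress

# One merged table: prefix -> owning VRF, with no VRF in the lookup key.
merged = {
    ipaddress.ip_network("10.0.0.0/8"):  "vrf red",
    ipaddress.ip_network("10.0.0.0/16"): "vrf green",
    ipaddress.ip_network("10.0.1.0/24"): "vrf blue",
}

def lpm(dst):
    """Longest-prefix match over the merged table, address first."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in merged if addr in net),
               key=lambda net: net.prefixlen, default=None)
    return merged.get(best)

# A packet arriving on a vrf red interface, destined to 10.0.0.1, matches
# another VRF's more-specific entry instead of red's /8:
print(lpm("10.0.0.1"))  # vrf green
```

The lookup is correct as a pure LPM, but the winning entry belongs to the wrong tenant, which is exactly why the VRF (or ingress interface/table_id) has to scope the lookup rather than merely tag the result.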

And while in software each VRF has separate RIB and FIB data structures, 
for reasons already discussed on the L3VPN IETF mailing list, in actual 
hardware on a given line card this may no longer be the case.

Also, as a side note, most vendors still have not implemented per-interface 
or per-VRF MPLS labels (even in the control plane), so all labels are 
looked up in a global table, with just some additional, essentially 
control-plane-driven tweaks to protect against malicious attacks in the 
CSC/Inter-AS case.
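The distinction can be sketched roughly as follows (label values, VRF names, and actions are hypothetical, just to show the shape of the two lookup models):

```python
# Global label space: every label a neighbor sends is resolved in one shared
# table, so keeping a CSC/Inter-AS neighbor away from another VRF's label
# must be enforced by control-plane filtering, not by the data structure.
global_labels = {
    100001: ("vrf red",   "pop, route in red"),
    100002: ("vrf green", "pop, route in green"),
}

def lookup_global(label):
    return global_labels.get(label)

# Per-interface/per-VRF label space: the ingress context scopes the lookup,
# so the same label value can safely exist in every VRF.
per_vrf_labels = {
    ("vrf red",   16): "pop, route in red",
    ("vrf green", 16): "pop, route in green",  # same label, no clash
}

def lookup_scoped(vrf, label):
    return per_vrf_labels.get((vrf, label))
```

In the scoped model a mislearned or spoofed label from one neighbor simply cannot resolve into another VRF's forwarding state, which is the isolation property the global-table tweaks try to approximate.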

Cheers,
R.

> Hi,
>
> On Mon, Sep 26, 2011 at 01:18:05PM -0700, Derick Winkworth wrote:
>> I'm trying to find an archived discussion or presentation discussing
>> why exactly the industry generally settled on having a separate
>> FIB table for each VRF vs having one FIB table with a column that
>> identifies the VRF instance?  I'm not finding it, but I'm guessing
>> its because of performance issues?
>
> Lookup would fail for overlapping address space if you lookup
> "address first, VRF second".
>
> How do you find the right entry if you have
>
>    10.0.0.0/8 vrf red
>    10.0.0.0/16 vrf green
>    10.0.1.0/24 vrf blue
>
> and try to look up 10.0.0.1 in vrf red?  You'll find the /16 entry, which
> is tagged "vrf green".
>
> Alternatively, you'd need to explode the /8 entry for vrf red if *another*
> VRF adds a more specific for that /8.
>
> gert
>
>
>
> _______________________________________________
> cisco-nsp mailing list  cisco-...@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
