Re: [j-nsp] Full table in L3VPN

2014-09-02 Thread Daniel Roesen
On Mon, Sep 01, 2014 at 08:53:19PM +0200, Johan Borch wrote:
> Is it a good or bad idea to run IP transit (full table ipv4 & ipv6) in a
> MPLS L3VPN and rely on the MP-BGP to carry routes around or is it better to
> skip the MPLS part and run iBGP between the routers with transit links?

I'm very much in favor of this approach for a couple of reasons, some of
which have already been mentioned. The problem was/is that every time I
considered deployment, I found features that were required but missing for
L3VPNs while present for the "global table", as well as scaling limitations
(think of older M-Series; I'm not sure offhand what today's figures look
like for current-generation architectures).

As someone for whom L3VPNs are daily business, treating the Internet as just
another L3VPN feels very natural and desirable. But ask the enterprise
operator next door and I guess you'll get a completely different PoV. :)

Boy, how I would love to get rid of rib-groups, finally get proper
management-interface routing separation etc... sigh.
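
For context, this is roughly the kind of rib-group glue needed today just to
share interface routes between inet.0 and a VRF -- a minimal sketch, with
made-up group and instance names (IF-TO-VRF, CUST-A):

    routing-options {
        rib-groups {
            IF-TO-VRF {
                import-rib [ inet.0 CUST-A.inet.0 ];   # primary table first, then the tables to copy into
            }
        }
        interface-routes {
            rib-group inet IF-TO-VRF;                  # copy IPv4 interface routes into every table in the group
        }
    }

With the Internet itself in a VRF, most of this cross-table copying (and the
confusion it causes) goes away.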

Best regards,
Daniel

-- 
CLUE-RIPE -- Jabber: d...@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0


Re: [j-nsp] Full table in L3VPN

2014-09-02 Thread Mike Daniels
Two reasons immediately come to mind for running it in a VRF.
Firstly, no need for RIB-groups!! :) Secondly, you can separate your core /
management routes from public routes, which can act as another line of defence
(as well as good firewall filters, of course).
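
To make the second point concrete, here is a minimal sketch of an Internet VRF
on a Junos PE; the instance name, AS numbers, RD/RT values and interface are
placeholders rather than a recommendation:

    routing-instances {
        INTERNET {
            instance-type vrf;
            interface xe-0/0/0.100;          # transit/peering interface
            route-distinguisher 65000:1;
            vrf-target target:65000:1;
            vrf-table-label;                 # one label per VRF; arriving MPLS traffic gets an IP lookup in the VRF
            protocols {
                bgp {
                    group TRANSIT {
                        type external;
                        peer-as 64510;
                        neighbor 192.0.2.1;
                    }
                }
            }
        }
    }

inet.0 is then left carrying only infrastructure, core and management routes,
which is exactly the separation described above.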






Re: [j-nsp] Full table in L3VPN

2014-09-01 Thread Saku Ytti
On (2014-09-02 06:56 +0200), Mark Tinka wrote:

> This is one of those polarizing questions where you'll get 
> an equal share of answers from both sides of the bench.

I think the main reason is that it appears scary, and I subscribe to that
notion myself.

When I try to explain it to myself in technical terms, I come up short.
In a smart implementation there should be only an insignificant DRAM increase
due to the few extra bytes of RT and RD, and none of this should affect HW
resource use in any way.
In fact, JunOS and IOS should not have an implied 'default' VRF; it should be
a configured VRF like any other, because that simplifies the code base when
you do not have to build separate VRF-aware and VRF-unaware features.
In fact, if you look at the FIB/HW, the global table there is already just
another VRF, as those structures lend themselves poorly to exceptions. It's
only the control-plane code which, for legacy reasons, contains duplicate code
for the exact same stuff, causing annoying feature-parity problems among other
things.

I think the main benefit would obviously be the ease of adding Internet access
to other VRFs.
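
As an illustration of that last point: once the Internet lives in its own VRF
(say one exporting target:65000:1), giving a customer VRF Internet access is
mostly just route-target policy. A rough sketch with invented names and values:

    policy-options {
        community INTERNET-RT members target:65000:1;
        community CUST-A-RT members target:65000:100;
        policy-statement CUST-A-IMPORT {
            term internet {
                from community INTERNET-RT;     # pull in the Internet table
                then accept;
            }
            term own-sites {
                from community CUST-A-RT;       # plus the customer's own routes
                then accept;
            }
            then reject;
        }
    }
    routing-instances {
        CUST-A {
            instance-type vrf;
            route-distinguisher 65000:100;
            vrf-import CUST-A-IMPORT;
            vrf-export CUST-A-EXPORT;           # mirror policy tagging routes with both targets
        }
    }

In practice you would also filter what the customer may announce toward the
Internet VRF, but the plumbing is route-target policy rather than rib-groups.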


-- 
  ++ytti


Re: [j-nsp] Full table in L3VPN

2014-09-01 Thread Mark Tinka
On Monday, September 01, 2014 08:53:19 PM Johan Borch wrote:

> Is it a good or bad idea to run IP transit (full table
> ipv4 & ipv6) in a MPLS L3VPN and rely on the MP-BGP to
> carry routes around or is it better to skip the MPLS
> part and run iBGP between the routers with transit
> links?

This is one of those polarizing questions where you'll get 
an equal share of answers from both sides of the bench.

We run the IPv4/IPv6 table in global.

We only run l3vpn services in a VRF.

Just us...

I know lots of networks that run the Internet table in a VRF.
Some actually run it in more than one VRF, even.

Mark.



Re: [j-nsp] Full table in L3VPN

2014-09-01 Thread Payam Chychi
Hi Johan, 

What type of hardware are we talking about here? Also, what's your end goal?
Can't really say which is better without knowing what you need at the end of
the day.



-- 
Payam Chychi
Network Engineer / Security Specialist


On Monday, September 1, 2014 at 11:53 AM, Johan Borch wrote:

> Hi!
> 
> Is it a good or bad idea to run IP transit (full table ipv4 & ipv6) in a
> MPLS L3VPN and rely on the MP-BGP to carry routes around or is it better to
> skip the MPLS part and run iBGP between the routers with transit links?
> 
> Any pros/cons with convergence time?
> 
> Regards
> Johan




Re: [j-nsp] full table?

2011-10-12 Thread Dan Farrell
My philosophy on table size and usefulness is tied to the AS distance of the
routes in question.

Arbitrarily speaking, any destination more than 5 AS hops away isn't likely to
need the most efficient BGP-provided route.  At that point, 6 actual
autonomous systems is basically 'across the Internet', and if you have 2 or 3
(or more) providers, they will normally take roughly the same number of IP
hops, with similar latency, to reach the same destination.  For example, if
you are a domestic US network, getting to a network in Malaysia via Sprint or
via AT&T (given OK network conditions, and assuming these carriers do not
directly serve the destination network) will usually involve about the same
latency and hop count.

It's the destinations that are 1 to 4 autonomous systems away where BGP can
make a palpable difference.  For example, you don't want to transit Sprint to
AT&T to reach an AT&T customer if you already peer with AT&T.  Keeping both of
those routes is more important than keeping the two Malaysian route
destinations from the previous example.  And with limited memory, you might be
in the position of having to make that choice.

So if you limit your route tables to a set number of autonomous systems (AS
hops), then you keep the 'relevant' routes, and everything on the periphery
can be covered by a 'carrier-grab-default' or a least-cost link.  Also, those
operators who love to as-path-prepend the hell out of a route will find that
their routes don't even make it into your tables (which is kind of the point).

I always ask for full tables and a default from each provider, and then I pare
down from there as necessary.
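
A hedged sketch of what that looks like as a Junos import policy; the six-AS
threshold and the names are arbitrary placeholders, and you would still keep a
default (or least-cost exit) to cover whatever gets dropped:

    policy-options {
        as-path TOO-MANY-HOPS ".{6,}";          # an AS path of six or more ASes
        policy-statement LIMIT-AS-DISTANCE {
            term drop-distant {
                from as-path TOO-MANY-HOPS;
                then reject;
            }
            term keep-nearby {
                then accept;
            }
        }
    }
    protocols {
        bgp {
            group TRANSIT {
                import LIMIT-AS-DISTANCE;
            }
        }
    }

Heavily prepended paths fall out automatically, since each prepend counts as
one more AS against the regex.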


Dan Farrell
Applied Innovations
da...@appliedi.net

From: juniper-nsp-boun...@puck.nether.net [juniper-nsp-boun...@puck.nether.net] 
On Behalf Of Keegan Holley [keegan.hol...@sungard.com]
Sent: Tuesday, September 20, 2011 1:26 PM
To: juniper-nsp
Subject: [j-nsp] full table?

Is it always necessary to take in a full table?  Why or why not?  In light
of the Saudi Telekom fiasco I'm curious what others think.  This question is
understandably subjective.  We have datacenters with no more than three
upstreams.  We would obviously have to have a few copies of the table for
customers that want to receive it from us, but I'm curious if it is still
necessary to have a full table advertised from every peering.  Several ISPs
will allow you to filter everything longer than, say, /20 and then receive a
default.  Just curious what others think and if anyone is doing this.


Re: [j-nsp] full table?

2011-09-20 Thread Robert Raszuk

Hi Keegan,


Is it always necessary to take in a full table?  Why or why not?  In light
of the Saudi Telekom fiasco I'm curious what others think.  This question is
understandably subjective.  We have datacenters with no more than three
upstreams.  We would obviously have to have a few copies of the table for
customers that want to receive it from us, but I'm curious if it is still
necessary to have a full table advertised from every peering.  Several ISPs
will allow you to filter everything longer than, say, /20 and then receive a
default.  Just curious what others think and if anyone is doing this.


I am just curious whether your question is driven by the issue of handling a
full table in the control plane, or whether the issue is with the RIB and FIB.


If the issue is the full-table control plane on the edge, then one could
explore eBGP multihop advertisements from some box/reflector behind the edge.
Just FYI, a full table of 450K nets and 800K paths (roughly a 2:1 ratio)
should consume around 250 MB of RAM (including the BGP idle footprint).
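
Back-of-envelope, and purely my own arithmetic on those figures:

    250 MB ~= 262,000,000 bytes
    262,000,000 bytes / 800,000 paths ~= 330 bytes per path
    262,000,000 bytes / 450,000 nets  ~= 580 bytes per net (~1.8 paths each)

i.e. a few hundred bytes of BGP state per path is the scale to keep in mind
when sizing control-plane RAM.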


If the issue is boxes actually melting down at the RIB and FIB level, then I
may have an easy, automated and operationally friendly solution for that. In
my ex-Cisco life I invented simple-va, which with a single knob puts into the
RIB and FIB only what is necessary from BGP. 100% correctness in picking what
is necessary is assured. Details can be found in draft-ietf-grow-simple-va-04.
So far quite a few operators have liked it!


Cheers,
R.


Re: [j-nsp] full table?

2011-09-20 Thread Keegan Holley
2011/9/20 Pavel Lunin 

>
>  Is it always necessary to take in a full table?  Why or why not?  In light
>> of the Saudi Telekom fiasco I'm curious what others thing.  This question
>> is
>> understandably subjective.  We have datacenters with no more than three
>> upstreams.  We would obviously have to have a few copies of the table for
>> customers that want to receive it from us, but I'm curious if it is still
>> necessary to have a full table advertised from every peering.  Several
>> ISP's
>> will allow you to filter everything longer than say /20 and then receive a
>> default.  Just curious what others think and if anyone is doing this.
>>
>
> 1. If you have downstream ASes, than full table is [most probably] needed
> to be not just received but also used for forwarding (this is by default,
> but you can alter, see below). Otherwise loops/blackholes are possible in
> specific scenarios (split customer's AS, etc). Workarounds exist for this,
> but they are rather too complicated to maintain and make everyone in NOC to
> remember how it works. Some folks (especially DCs) don't care and just send
> full table to customers, not having it in data plane. For DCs with just few
> customers with their own ASes, this approach works well almost always (until
> something breaks). When issues arise, they troubleshot it and rewrite
> policies to accept additional routes. Not very good, but I saw this many
> times.
>

This is very insightful and probably the biggest hurdle to overcome.  I
definitely have downstreams that require the full table.  The combination of
full tables and skinny tables opens the possibility for routing loops,
suboptimal paths and confusion.  Then there's the NOC factor.

>
> 2. You must remember (people forget these basics surprisingly often), that
> full table, you receive from upstreams or IX-peer, is used to _send_
> traffic. So when you think it's useful to have full table to analyze or
> somehow intellectually influence traffic balancing across uplinks, you must
> remember that it only relates to traffic going _out_ your AS. Every couple
> of weeks I talk to someone who believes, he needs to receive full table,
> since he wants to "balance traffic". When I ask them, how much traffic they
> send out, it usually turns out to be not more than 10% of a single link
> capacity, and they don't experience any problem with it at all. And yes,
> most of them need to balance incoming traffic, but full table has nothing to
> do with this.
>

I often field this question myself.



>
> 6. If, after checking rules 1-5, you are still not sure if you need full
> table -- you definitely don't.
>

> Having this said, I'd say, full table is needed because of the downstream
> ASes. Playing the games described above does not worth it.


Wishful thinking I suppose.  Thanks all for humoring me.




Re: [j-nsp] full table?

2011-09-20 Thread Pavel Lunin



Is it always necessary to take in a full table?  Why or why not?  In light
of the Saudi Telekom fiasco I'm curious what others think.  This question is
understandably subjective.  We have datacenters with no more than three
upstreams.  We would obviously have to have a few copies of the table for
customers that want to receive it from us, but I'm curious if it is still
necessary to have a full table advertised from every peering.  Several ISPs
will allow you to filter everything longer than, say, /20 and then receive a
default.  Just curious what others think and if anyone is doing this.


1. If you have downstream ASes, then the full table [most probably] needs
to be not just received but also used for forwarding (this is the default,
but you can alter it, see below). Otherwise loops/blackholes are possible in
specific scenarios (a split customer AS, etc.). Workarounds exist for this,
but they are rather too complicated to maintain and to make everyone in the
NOC remember how they work. Some folks (especially DCs) don't care and just
send the full table to customers without having it in the data plane. For DCs
with just a few customers with their own ASes, this approach works well almost
always (until something breaks). When issues arise, they troubleshoot them and
rewrite policies to accept additional routes. Not very good, but I have seen
this many times.


2. You must remember (people forget these basics surprisingly often) that
the full table you receive from upstreams or IX peers is used to _send_
traffic. So when you think it's useful to have the full table in order to
analyze or somehow intelligently influence traffic balancing across uplinks,
you must remember that it only relates to traffic going _out_ of your AS.
Every couple of weeks I talk to someone who believes he needs to receive the
full table because he wants to "balance traffic". When I ask how much traffic
they send out, it usually turns out to be no more than 10% of a single link's
capacity, and they don't experience any problem with it at all. And yes, most
of them need to balance incoming traffic, but the full table has nothing to
do with that.


4. Most platforms allow using two or more defaults with ECMP enabled.
Real routers (like the MX series) also allow using bandwidth communities to
make the load balancing genuinely asymmetrical (30% through ISP1, 70% through
ISP2), if you need that, say, because of different uplink capacities (10G and
1G). L3 switches (e.g. the EX series) can't do this because of their much
simpler (and cheaper) PFEs.
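
A rough illustration of the MX-style unequal-cost load balancing mentioned in
point 4; the AS numbers, group names and bandwidth value are placeholders, and
the exact link-bandwidth community behaviour should be checked against your
Junos release:

    policy-options {
        community LINK-BW-10G members bandwidth:65000:1250000000;   # ~10 Gb/s in bytes per second
        policy-statement TAG-ISP1 {
            then {
                community add LINK-BW-10G;      # tag routes learned from the 10G uplink
                accept;
            }
        }
        policy-statement PER-FLOW-LB {
            then {
                load-balance per-packet;        # despite the name, per-flow on modern PFEs
            }
        }
    }
    protocols {
        bgp {
            group ISP1 {
                import TAG-ISP1;                # similar policy on ISP2 with its 1G value
                multipath {
                    multiple-as;
                }
            }
        }
    }
    routing-options {
        forwarding-table {
            export PER-FLOW-LB;
        }
    }

The two exits still have to tie on the usual BGP attributes for multipath to
kick in; the bandwidth community then only skews the traffic split.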


5. It does not depend on the ISP: if you want to strip things out of the full
table (these are _your_ import policies), you can write any policy you wish,
but all these tricks with not receiving "anything longer than X" eliminate
most of the full table's advantages. If you want a partial table, you had
better filter routes somewhat more consciously, e.g. international/domestic,
routes to your/your customers' remote locations, routes to the ASes to which
you send most of your traffic, etc. Of course, your customers won't be very
happy if you send them a partial instead of a full table. So if you filter
something out, you most probably need to play games (not without cheating)
with forwarding along the partial table while sending the full table to
customers. See rule number 1: first, be afraid of loops and blackholes;
second, this is an art and not easy to maintain, especially when different
NOC shifts have to troubleshoot and resolve issues quickly.
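
For concreteness, the "anything longer than" trick mentioned above (and in the
original question) amounts to an import policy along these lines -- a sketch
with a placeholder name, and with the caveat from point 5 that it throws away
much of what the full table is good for:

    policy-options {
        policy-statement SKINNY-TABLE {
            term default-route {
                from {
                    route-filter 0.0.0.0/0 exact;       # keep the default from the upstream
                }
                then accept;
            }
            term short-prefixes {
                from {
                    route-filter 0.0.0.0/0 upto /20;    # keep /0 through /20 only
                }
                then accept;
            }
            then reject;
        }
    }

Applied as an import policy on the upstream BGP group, everything longer than
/20 is dropped and reached via the default instead.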


6. If, after checking rules 1-5, you are still not sure whether you need the
full table, then you definitely don't.


Having said this, I'd say the full table is needed because of the
downstream ASes. Playing the games described above is not worth it.



Re: [j-nsp] full table?

2011-09-20 Thread Keegan Holley
2011/9/20 Mark Tinka 

> On Wednesday, September 21, 2011 01:26:07 AM Keegan Holley
> wrote:
>
> > Is it always necessary to take in a full table?  Why or
> > why not?  In light of the Saudi Telekom fiasco I'm
> > curious what others thing.  This question is
> > understandably subjective.  We have datacenters with no
> > more than three upstreams.  We would obviously have to
> > have a few copies of the table for customers that want
> > to receive it from us, but I'm curious if it is still
> > necessary to have a full table advertised from every
> > peering.  Several ISP's will allow you to filter
> > everything longer than say /20 and then receive a
> > default.  Just curious what others think and if anyone
> > is doing this.
>
> Well, if you're connected to a single provider, you tend to
> have less use for a full table. 0/0 or ::/0 will do just
> fine.
>

I wish :)


>
> If you're multi-homed and need to get full use of those
> links, while taking a full feed isn't absolutely necessary,
> it does help. Folks have gotten by with default, or default
> + partial, or even eBGP Multi-Hop + Loopback peering if
> multi-homed to the same ISP.
>
> If your customers want a full table from you (and you're
> multi-homed), then a full table likely makes sense.
>

I thought about partials for the data center links and having a separate
router that carries the full table for the customers that wish to peer with
us.  This makes me uneasy, though: thoughts of routing loops, or of the
default leaking into the side with the full table.  Just wondering what others
have done.  Maybe this is the right way.
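
One small safeguard for the leak worry: the router that is supposed to hold
the full table can simply refuse a default on import. A minimal sketch with
placeholder names (it addresses the default-route leak, not the loop concern):

    policy-options {
        policy-statement NO-DEFAULT-IN {
            term block-default {
                from {
                    route-filter 0.0.0.0/0 exact;
                    route-filter ::/0 exact;
                }
                then reject;
            }
            then accept;
        }
    }
    protocols {
        bgp {
            group IBGP {
                import NO-DEFAULT-IN;
            }
        }
    }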

>
> If you want to implement very fine control of your routing
> and traffic flow, and perhaps, offer that capability to your
> customers through BGP communities, a full table may make
> sense.
>

We offer that, as do our upstreams.  It's always been easier to use a full
table for everything.  My thought was to develop a lower tier of Internet
service based on smaller devices and manipulate the next hops so that traffic
from the full-table customers can still traverse a feed where we may not be
receiving a full table.  I'm prone to strange ideas though...

>
> If you're modeling the routing table for research, traffic
> engineering studies or some such work, a full table may make
> sense.
>

I wish :)

>
> If you're suffering from old hardware that you don't have
> the cash to upgrade as you run out of FIB slots, having a
> full table is probably the least of your problems, and you
> may consider just default, partial routes, or default + a
> "skinny" full table.
>

I've inherited a lot of old hardware from mergers.  The main problem is that
there aren't enough hands to put in the software-add requests and install the
MXs.  My idea was a "skinny" table for most users and a system of route
reflectors for those that need a full table.

>
> As always, it depends, and YMMV :-).
>

Indeed.  Thanks!


Re: [j-nsp] full table?

2011-09-20 Thread Joel jaeggli
On 9/20/11 10:26 , Keegan Holley wrote:
> Is it always necessary to take in a full table?  Why or why not?  In light
> of the Saudi Telekom fiasco I'm curious what others thing.  This question is
> understandably subjective.  We have datacenters with no more than three
> upstreams.  We would obviously have to have a few copies of the table for
> customers that want to receive it from us, but I'm curious if it is still
> necessary to have a full table advertised from every peering.  Several ISP's
> will allow you to filter everything longer than say /20 and then receive a
> default.  Just curious what others think and if anyone is doing this.

Given the large number of people who have a default as a backup (which can
be measured empirically), there are plenty of things you can do to avoid
taking a full table with relatively minor impact.




Re: [j-nsp] full table?

2011-09-20 Thread Mark Tinka
On Wednesday, September 21, 2011 01:26:07 AM Keegan Holley 
wrote:

> Is it always necessary to take in a full table?  Why or
> why not?  In light of the Saudi Telekom fiasco I'm
> curious what others thing.  This question is
> understandably subjective.  We have datacenters with no
> more than three upstreams.  We would obviously have to
> have a few copies of the table for customers that want
> to receive it from us, but I'm curious if it is still
> necessary to have a full table advertised from every
> peering.  Several ISP's will allow you to filter
> everything longer than say /20 and then receive a
> default.  Just curious what others think and if anyone
> is doing this.

Well, if you're connected to a single provider, you tend to 
have less use for a full table. 0/0 or ::/0 will do just 
fine.

If you're multi-homed and need to get full use of those 
links, while taking a full feed isn't absolutely necessary, 
it does help. Folks have gotten by with default, or default 
+ partial, or even eBGP Multi-Hop + Loopback peering if 
multi-homed to the same ISP.

If your customers want a full table from you (and you're 
multi-homed), then a full table likely makes sense.

If you want to implement very fine control of your routing 
and traffic flow, and perhaps, offer that capability to your 
customers through BGP communities, a full table may make 
sense.

If you're modeling the routing table for research, traffic 
engineering studies or some such work, a full table may make 
sense.

If you're suffering from old hardware that you don't have 
the cash to upgrade as you run out of FIB slots, having a 
full table is probably the least of your problems, and you 
may consider just default, partial routes, or default + a 
"skinny" full table.

As always, it depends, and YMMV :-).

Mark.



Re: [j-nsp] full table?

2011-09-20 Thread Jared Mauch

On Sep 20, 2011, at 1:26 PM, Keegan Holley wrote:

> Is it always necessary to take in a full table?  Why or why not?  In light
> of the Saudi Telekom fiasco I'm curious what others thing.  This question is
> understandably subjective.  We have datacenters with no more than three
> upstreams.  We would obviously have to have a few copies of the table for
> customers that want to receive it from us, but I'm curious if it is still
> necessary to have a full table advertised from every peering.  Several ISP's
> will allow you to filter everything longer than say /20 and then receive a
> default.  Just curious what others think and if anyone is doing this.

Generally, what I've recommended for years for people with older routers is
to just take the 'on-net' prefixes from your upstreams.  This allows you to
send traffic optimally to those networks without needing to hold a 'full'
table.  Anyone that isn't a customer of your upstream is typically also harder
for them to solve issues with.  (Then just point default at your lowest-cost
transit.)

This method can reduce the memory footprint better than picking some arbitrary
netmask to filter on (e.g. /20, /24, etc.).

It may not work as well with some networks: Level3 has about 150k on-net
prefixes last time I checked.
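
Upstreams that support this usually mark their own and their customers'
routes with a documented community, so the import policy ends up looking
roughly like the sketch below. The community value 64500:100 is invented --
check your provider's community guide -- and the default is kept as the
catch-all described above:

    policy-options {
        community ISP-ON-NET members 64500:100;     # hypothetical "customer routes" tag
        policy-statement ISP-ON-NET-ONLY {
            term default-route {
                from {
                    route-filter 0.0.0.0/0 exact;
                }
                then accept;
            }
            term on-net {
                from community ISP-ON-NET;
                then accept;
            }
            then reject;
        }
    }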

- Jared


Re: [j-nsp] Full table inside VRF - J Series

2010-06-20 Thread Deon Vermeulen
Hi Rolf,

Truman is correct.
I just found that the J4350 you are referring to (just so that the forum
knows, Rolf and I work for the same company) has 1 GB of RAM installed but is
already at 81% memory utilization.

...@> show chassis routing-engine
Routing Engine status:
Temperature 30 degrees C / 86 degrees F
CPU temperature 46 degrees C / 114 degrees F
DRAM  1024 MB
Memory utilization  81 percent
CPU utilization:
  User   1 percent
  Real-time threads 15 percent
  Kernel13 percent
  Idle  71 percent
Model  RE-J4350-2540
Serial ID  !
Start time 2010-04-27 22:12:59 CAT
Uptime 54 days, 17 hours, 25 minutes, 47 seconds
Last reboot reason 0x8:power-button hard power off 
Load averages: 1 minute   5 minute  15 minute
   0.04   0.06   0.07

...@> 

According to the Juniper datasheet
(http://www.juniper.net/us/en/local/pdf/datasheets/1000206-en.pdf) the J4350
and J6350 can only be upgraded to a maximum of 2 GB of RAM.

The best approach is to lab this first and then see what the performance looks
like with a full table in an (Internet) VRF.


Kind Regards

Deon Vermeulen





Re: [j-nsp] Full table inside VRF - J Series

2010-06-20 Thread Truman Boyes
Yes, you can do this on a J-series. If you can handle the full table in inet.0,
you can handle the full table in a VRF. Just make sure you have enough RAM to
hold a full table (regardless of the type of routing-instance)...

Truman


On 20/06/2010, at 4:53 PM, Rolf Mendelsohn wrote:

> Hi All,
> 
> Note that my J experience is limited, I've mainly been exposed to lots of C 
> over the years... :>).
> 
> We are looking to try and squeeze a Full table into a vrf on the J Series.
> 
> Is this possible, or is the only bet to go for an M Series or C7200/NPE-G1 or 
> 2?
> 
> cheers
> /rolf
> 
> 

