Thank you, I will look into that unit some more.

Cheers
Ryan


-----Original Message-----
From: Correa Adolfo [mailto:acor...@mcmtelecom.com.mx] 
Sent: Monday, June 27, 2011 9:22 PM
To: Mehmet Akcin; Ryan Finnesey
Cc: juniper-nsp@puck.nether.net
Subject: RE: [j-nsp] What do you think about the MX line?

Yes, I'd go for the MX240 and up, as you can add redundancy and grow
memory in the future.

-----Original Message-----
From: juniper-nsp-boun...@puck.nether.net
[mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of Mehmet Akcin
Sent: Sunday, June 26, 2011 11:25 PM
To: Ryan Finnesey
Cc: juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] What do you think about the MX line?

you probably want the MX240/480/960 instead.

mehmet

On Jun 26, 2011, at 8:59 PM, Ryan Finnesey wrote:

> We are looking at the MX80s for about 60 Gbps of traffic, along with some
private MPLS interconnection.
> Cheers
> Ryan
> 
> 
> -----Original Message-----
> From: Timothy Kaufman [mailto:tkauf...@corp.nac.net]
> Sent: Sunday, June 26, 2011 11:49 PM
> To: Ryan Finnesey; 'mti...@globaltransit.net';
'juniper-nsp@puck.nether.net'
> Subject: Re: [j-nsp] What do you think about the MX line?
> 
> Which one are you looking at?
> How many peers do you plan to configure?
> How much traffic?
> Thanks
> 
> Tim Kaufman
> Sent via BlackBerry
> 
> ----- Original Message -----
> From: juniper-nsp-boun...@puck.nether.net 
> <juniper-nsp-boun...@puck.nether.net>
> To: mti...@globaltransit.net <mti...@globaltransit.net>; 
> juniper-nsp@puck.nether.net <juniper-nsp@puck.nether.net>
> Sent: Sun Jun 26 22:15:34 2011
> Subject: Re: [j-nsp] What do you think about the MX line?
> 
> For us, I will be looking at the MX line mainly for peering.  Is anyone
having issues using them for peering?
> 
> Cheers
> Ryan
> 
> 
> -----Original Message-----
> From: juniper-nsp-boun...@puck.nether.net
> [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of Mark Tinka
> Sent: Sunday, June 26, 2011 9:19 PM
> To: juniper-nsp@puck.nether.net
> Subject: Re: [j-nsp] What do you think about the MX line?
> 
> On Monday, June 27, 2011 06:56:48 AM Keegan Holley wrote:
> 
>> I think the general attitude is positive towards them. 
>> They are a good complement to the M/T series and generally solid, 
>> flexible boxes.  You should probably include how you plan to use them 
>> in your question.  For example, a few list members complain about 
>> multicast/IGMP bugs and other issues with the new Trio-based cards and 
>> some of the new code.  If you don't run a lot of multicast, these 
>> wouldn't really apply to you.
> 
> For us, we use them heavily in the edge, and that hasn't been the
smoothest of rides.
> 
> My guess is if you need them for peering or in the core, you might
have fewer issues, but not necessarily (we already know of core
applications where the MX could be troublesome - but the edge role takes
the cake, by far).
> 
> There are also some limitations, so far, if we use them as BRASes, but
these are mostly bugs or feature unavailability at this time. The
problem is that without the feature being present today, it's hard to
know how the box will scale, which could be a big problem unto itself.
> 
> All in all, it depends on the complexity/sophistication of your
deployment, the role you're placing the MX in, and what features you're
going to need. For some folk, it's the perfect box. For others, it's
less so.
> 
> Mark.
> 
> 
> _______________________________________________
> juniper-nsp mailing list juniper-nsp@puck.nether.net 
> https://puck.nether.net/mailman/listinfo/juniper-nsp
> 


_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

