Re: [j-nsp] thoughts on MVRP?

2013-03-12 Thread Mark Tees
Yes, you would just create the same config for both switch ports.
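
A minimal sketch of that symmetric config on an EX (interface and VLAN names 
are hypothetical): both server-facing ports sit in the same tunneled S-VLAN, 
the customers' C-tags are carried through untouched, and the outer S-tag only 
ever appears on trunk links between switches.

set interfaces ge-0/0/1 unit 0 family ethernet-switching port-mode access
set interfaces ge-0/0/1 unit 0 family ethernet-switching vlan members SVLAN-100
set interfaces ge-0/0/2 unit 0 family ethernet-switching port-mode access
set interfaces ge-0/0/2 unit 0 family ethernet-switching vlan members SVLAN-100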

Untagged space does become an issue, but you could just move management traffic 
to a tag or use a dedicated management interface. I guess this all depends on 
whether the switches your servers are connecting to are acting as the gateways 
too or just doing L2.
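
A minimal sketch of the dedicated-management-interface option (the VLAN ID and 
port name are hypothetical): a second NIC on each server goes into a plain 
access port that stays outside the Q-in-Q S-VLANs.

set vlans server-mgmt vlan-id 999
set interfaces ge-0/0/40 unit 0 family ethernet-switching port-mode access
set interfaces ge-0/0/40 unit 0 family ethernet-switching vlan members server-mgmt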

On 04/03/2013, at 1:11 PM, Luca Salvatore l...@ninefold.com wrote:

 And I guess another question is how VM-to-VM communication would work if VM-A 
 is on Server-A and VM-B is on Server-B.
 If Server-A and Server-B are connected to the same switch, does one switch 
 port add the S-VLAN tag and then the other port on the same switch remove it?
  
 Luca
  
 From: Luca Salvatore 
 Sent: Monday, 4 March 2013 1:03 PM
 To: 'Mark Tees'
 Cc: juniper-nsp@puck.nether.net
 Subject: RE: [j-nsp] thoughts on MVRP?
  
 My issue with Q-in-Q is that the ports connecting to the physical servers 
 will be the ‘customer ports’, which means each will be an access port in a 
 single S-VLAN.
 This creates a management problem for the servers, as we normally manage the 
 servers (via SSH) over a native (untagged) VLAN.
  
 So if I could get around that issue, I think Q-in-Q would be suitable. Anyone 
 know if that’s possible?
  
 Luca
  
 From: Mark Tees [mailto:markt...@gmail.com] 
 Sent: Monday, 4 March 2013 8:08 AM
 To: Luca Salvatore
 Subject: Re: [j-nsp] thoughts on MVRP?
  
 Possibly you could use Q-in-Q to cross the VC cluster; then the VC cluster 
 only needs to know the outer tags.
 http://www.juniper.net/techpubs/en_US/junos9.3/topics/concept/qinq-tunneling-ex-series.html
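 
 A rough sketch of the EX-style config that page describes (the VLAN names, 
 IDs, and interfaces below are hypothetical): the S-VLAN is marked for 
 dot1q-tunneling, the server-facing port becomes an access port in it, and 
 the inter-switch trunk carries only the outer tag.
 
 set ethernet-switching-options dot1q-tunneling ether-type 0x8100
 set vlans SVLAN-100 vlan-id 100
 set vlans SVLAN-100 dot1q-tunneling
 set interfaces ge-0/0/1 unit 0 family ethernet-switching port-mode access
 set interfaces ge-0/0/1 unit 0 family ethernet-switching vlan members SVLAN-100
 set interfaces xe-0/1/0 unit 0 family ethernet-switching port-mode trunk
 set interfaces xe-0/1/0 unit 0 family ethernet-switching vlan members SVLAN-100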
  
 
 But that log message looks like the box is running out of resources 
 somewhere, and given the number of VLANs you are talking about, are you 
 maybe hitting MAC learning limits?
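 
 For a rough check from the CLI (these command forms are from memory, so 
 verify them on your release): count learned MAC entries and look at memory 
 use on the Routing Engine.
 
 show ethernet-switching table | match Learn | count
 show chassis routing-engine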
  
 
 Check with JTAC about that message if you are unsure. 
 
 Sent from some sort of iDevice.
 
 On 03/03/2013, at 9:49 PM, Luca Salvatore l...@ninefold.com wrote:
 
 I don't really need to run STP on them; these are switch ports connecting to 
 physical servers which host hundreds of VMs, so I need to trunk all my VLANs 
 into about 20 ports per switch.
 
 Not quite sure how Q-in-Q would help... How do I configure the ports facing 
 the servers, and who does the tagging?
 
 MVRP looks more like Cisco's VTP, so probably not what I'm after, right?
 
 
 From: Alex Arseniev [alex.arsen...@gmail.com]
 Sent: Sunday, 3 March 2013 7:41 PM
 To: Luca Salvatore; juniper-nsp@puck.nether.net
 Subject: Re: [j-nsp] thoughts on MVRP?
 
 If you don't need to run STP on these VLANs, why not use
 QinQ/dot1q-tunneling?
 http://kb.juniper.net/InfoCenter/index?page=content&id=KB21686&actp=RSS
 Saves you
 Thanks
 Alex
 
 - Original Message -
 From: Luca Salvatore l...@ninefold.com
 To: juniper-nsp@puck.nether.net
 Sent: Sunday, March 03, 2013 12:13 AM
 Subject: [j-nsp] thoughts on MVRP?
 
 
 
 Hi,
 We have a requirement to trunk about 3500 VLANs into multiple ports on some
 EX4200 switches in VC mode.
  
 This breaches the vmember limit by a huge amount, and once we did this I
 saw lots of errors in the logs, such as:
  
 fpc0 RT-HAL,rt_entry_create,2414: failed to allocate memory for route
 entry
 /kernel: RT_PFE: RT msg op 3 (PREFIX CHANGE) failed, err 5 (Invalid)
 fpc0 RT-HAL,rt_entry_add_msg_proc,2702: route entry create failed
 fpc0 RT-HAL,rt_entry_add_msg_proc,2886: proto L2 bridge,len 48 prefix
 06:d4:f2:00:00:cb/48 nh 2850
 fpc0 RT-HAL,rt_entry_create,2414: failed to allocate memory for route
 entry
  
 These messages worry me.  I have been looking into MVRP, which seems like
 it would let us avoid trunking all 3500 VLANs into the switches all the
 time and instead dynamically register VLANs as needed.
  
 Wondering about people's thoughts on MVRP: is this a good use case?  Is it
 stable and reliable?
  
 thanks,
  
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
  
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] thoughts on MVRP?

2013-03-03 Thread Alex Arseniev
If you don't need to run STP on these VLANs, why not use 
QinQ/dot1q-tunneling?

http://kb.juniper.net/InfoCenter/index?page=content&id=KB21686&actp=RSS
Saves you
Thanks
Alex

- Original Message - 
From: Luca Salvatore l...@ninefold.com

To: juniper-nsp@puck.nether.net
Sent: Sunday, March 03, 2013 12:13 AM
Subject: [j-nsp] thoughts on MVRP?



Hi,
We have a requirement to trunk about 3500 VLANs into multiple ports on some 
EX4200 switches in VC mode.


This breaches the vmember limit by a huge amount, and once we did this I 
saw lots of errors in the logs, such as:


fpc0 RT-HAL,rt_entry_create,2414: failed to allocate memory for route 
entry

/kernel: RT_PFE: RT msg op 3 (PREFIX CHANGE) failed, err 5 (Invalid)
fpc0 RT-HAL,rt_entry_add_msg_proc,2702: route entry create failed
fpc0 RT-HAL,rt_entry_add_msg_proc,2886: proto L2 bridge,len 48 prefix 
06:d4:f2:00:00:cb/48 nh 2850
fpc0 RT-HAL,rt_entry_create,2414: failed to allocate memory for route 
entry


These messages worry me.  I have been looking into MVRP, which seems like 
it would let us avoid trunking all 3500 VLANs into the switches all the 
time and instead dynamically register VLANs as needed.


Wondering about people's thoughts on MVRP: is this a good use case?  Is it stable 
and reliable?


thanks,

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] thoughts on MVRP?

2013-03-03 Thread Luca Salvatore
I don't really need to run STP on them; these are switch ports connecting to 
physical servers which host hundreds of VMs, so I need to trunk all my VLANs 
into about 20 ports per switch.

Not quite sure how Q-in-Q would help... How do I configure the ports facing the 
servers, and who does the tagging?

MVRP looks more like Cisco's VTP, so probably not what I'm after, right?


From: Alex Arseniev [alex.arsen...@gmail.com]
Sent: Sunday, 3 March 2013 7:41 PM
To: Luca Salvatore; juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] thoughts on MVRP?

If you don't need to run STP on these VLANs, why not use
QinQ/dot1q-tunneling?
http://kb.juniper.net/InfoCenter/index?page=content&id=KB21686&actp=RSS
Saves you
Thanks
Alex

- Original Message -
From: Luca Salvatore l...@ninefold.com
To: juniper-nsp@puck.nether.net
Sent: Sunday, March 03, 2013 12:13 AM
Subject: [j-nsp] thoughts on MVRP?


 Hi,
 We have a requirement to trunk about 3500 VLANs into multiple ports on some
 EX4200 switches in VC mode.

 This breaches the vmember limit by a huge amount, and once we did this I
 saw lots of errors in the logs, such as:

 fpc0 RT-HAL,rt_entry_create,2414: failed to allocate memory for route
 entry
 /kernel: RT_PFE: RT msg op 3 (PREFIX CHANGE) failed, err 5 (Invalid)
 fpc0 RT-HAL,rt_entry_add_msg_proc,2702: route entry create failed
 fpc0 RT-HAL,rt_entry_add_msg_proc,2886: proto L2 bridge,len 48 prefix
 06:d4:f2:00:00:cb/48 nh 2850
 fpc0 RT-HAL,rt_entry_create,2414: failed to allocate memory for route
 entry

 These messages worry me.  I have been looking into MVRP, which seems like
 it would let us avoid trunking all 3500 VLANs into the switches all the
 time and instead dynamically register VLANs as needed.

 Wondering about people's thoughts on MVRP: is this a good use case?  Is it
 stable and reliable?

 thanks,

 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] thoughts on MVRP?

2013-03-03 Thread Luca Salvatore
My issue with Q-in-Q is that the ports connecting to the physical servers will 
be the ‘customer ports’, which means each will be an access port in a single 
S-VLAN.
This creates a management problem for the servers, as we normally manage the 
servers (via SSH) over a native (untagged) VLAN.

So if I could get around that issue, I think Q-in-Q would be suitable. Anyone 
know if that’s possible?

Luca

From: Mark Tees [mailto:markt...@gmail.com]
Sent: Monday, 4 March 2013 8:08 AM
To: Luca Salvatore
Subject: Re: [j-nsp] thoughts on MVRP?

Possibly you could use Q-in-Q to cross the VC cluster; then the VC cluster only 
needs to know the outer tags.
http://www.juniper.net/techpubs/en_US/junos9.3/topics/concept/qinq-tunneling-ex-series.html


But that log message looks like the box is running out of resources somewhere, 
and given the number of VLANs you are talking about, are you maybe hitting MAC 
learning limits?


Check with JTAC about that message if you are unsure.

Sent from some sort of iDevice.

On 03/03/2013, at 9:49 PM, Luca Salvatore l...@ninefold.com wrote:
I don't really need to run STP on them; these are switch ports connecting to 
physical servers which host hundreds of VMs, so I need to trunk all my VLANs 
into about 20 ports per switch.

Not quite sure how Q-in-Q would help... How do I configure the ports facing the 
servers, and who does the tagging?

MVRP looks more like Cisco's VTP, so probably not what I'm after, right?


From: Alex Arseniev [alex.arsen...@gmail.com]
Sent: Sunday, 3 March 2013 7:41 PM
To: Luca Salvatore; juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] thoughts on MVRP?

If you don't need to run STP on these VLANs, why not use
QinQ/dot1q-tunneling?
http://kb.juniper.net/InfoCenter/index?page=content&id=KB21686&actp=RSS
Saves you
Thanks
Alex

- Original Message -
From: Luca Salvatore l...@ninefold.com
To: juniper-nsp@puck.nether.net
Sent: Sunday, March 03, 2013 12:13 AM
Subject: [j-nsp] thoughts on MVRP?



Hi,
We have a requirement to trunk about 3500 VLANs into multiple ports on some
EX4200 switches in VC mode.

This breaches the vmember limit by a huge amount, and once we did this I
saw lots of errors in the logs, such as:

fpc0 RT-HAL,rt_entry_create,2414: failed to allocate memory for route
entry
/kernel: RT_PFE: RT msg op 3 (PREFIX CHANGE) failed, err 5 (Invalid)
fpc0 RT-HAL,rt_entry_add_msg_proc,2702: route entry create failed
fpc0 RT-HAL,rt_entry_add_msg_proc,2886: proto L2 bridge,len 48 prefix
06:d4:f2:00:00:cb/48 nh 2850
fpc0 RT-HAL,rt_entry_create,2414: failed to allocate memory for route
entry

These messages worry me.  I have been looking into MVRP, which seems like
it would let us avoid trunking all 3500 VLANs into the switches all the
time and instead dynamically register VLANs as needed.

Wondering about people's thoughts on MVRP: is this a good use case?  Is it stable
and reliable?

thanks,

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] thoughts on MVRP?

2013-03-03 Thread Luca Salvatore
And I guess another question is how VM-to-VM communication would work if VM-A 
is on Server-A and VM-B is on Server-B.
If Server-A and Server-B are connected to the same switch, does one switch port 
add the S-VLAN tag and then the other port on the same switch remove it?

Luca

From: Luca Salvatore
Sent: Monday, 4 March 2013 1:03 PM
To: 'Mark Tees'
Cc: juniper-nsp@puck.nether.net
Subject: RE: [j-nsp] thoughts on MVRP?

My issue with Q-in-Q is that the ports connecting to the physical servers will 
be the ‘customer ports’, which means each will be an access port in a single 
S-VLAN.
This creates a management problem for the servers, as we normally manage the 
servers (via SSH) over a native (untagged) VLAN.

So if I could get around that issue, I think Q-in-Q would be suitable. Anyone 
know if that’s possible?

Luca

From: Mark Tees [mailto:markt...@gmail.com]
Sent: Monday, 4 March 2013 8:08 AM
To: Luca Salvatore
Subject: Re: [j-nsp] thoughts on MVRP?

Possibly you could use Q-in-Q to cross the VC cluster; then the VC cluster only 
needs to know the outer tags.
http://www.juniper.net/techpubs/en_US/junos9.3/topics/concept/qinq-tunneling-ex-series.html

But that log message looks like the box is running out of resources somewhere, 
and given the number of VLANs you are talking about, are you maybe hitting MAC 
learning limits?

Check with JTAC about that message if you are unsure.

Sent from some sort of iDevice.

On 03/03/2013, at 9:49 PM, Luca Salvatore l...@ninefold.com wrote:
I don't really need to run STP on them; these are switch ports connecting to 
physical servers which host hundreds of VMs, so I need to trunk all my VLANs 
into about 20 ports per switch.

Not quite sure how Q-in-Q would help... How do I configure the ports facing the 
servers, and who does the tagging?

MVRP looks more like Cisco's VTP, so probably not what I'm after, right?


From: Alex Arseniev [alex.arsen...@gmail.com]
Sent: Sunday, 3 March 2013 7:41 PM
To: Luca Salvatore; juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] thoughts on MVRP?

If you don't need to run STP on these VLANs, why not use
QinQ/dot1q-tunneling?
http://kb.juniper.net/InfoCenter/index?page=content&id=KB21686&actp=RSS
Saves you
Thanks
Alex

- Original Message -
From: Luca Salvatore l...@ninefold.com
To: juniper-nsp@puck.nether.net
Sent: Sunday, March 03, 2013 12:13 AM
Subject: [j-nsp] thoughts on MVRP?


Hi,
We have a requirement to trunk about 3500 VLANs into multiple ports on some
EX4200 switches in VC mode.

This breaches the vmember limit by a huge amount, and once we did this I
saw lots of errors in the logs, such as:

fpc0 RT-HAL,rt_entry_create,2414: failed to allocate memory for route
entry
/kernel: RT_PFE: RT msg op 3 (PREFIX CHANGE) failed, err 5 (Invalid)
fpc0 RT-HAL,rt_entry_add_msg_proc,2702: route entry create failed
fpc0 RT-HAL,rt_entry_add_msg_proc,2886: proto L2 bridge,len 48 prefix
06:d4:f2:00:00:cb/48 nh 2850
fpc0 RT-HAL,rt_entry_create,2414: failed to allocate memory for route
entry

These messages worry me.  I have been looking into MVRP, which seems like
it would let us avoid trunking all 3500 VLANs into the switches all the
time and instead dynamically register VLANs as needed.

Wondering about people's thoughts on MVRP: is this a good use case?  Is it stable
and reliable?

thanks,

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

[j-nsp] thoughts on MVRP?

2013-03-02 Thread Luca Salvatore
Hi,
We have a requirement to trunk about 3500 VLANs into multiple ports on some 
EX4200 switches in VC mode.

This breaches the vmember limit by a huge amount, and once we did this I saw 
lots of errors in the logs, such as:

fpc0 RT-HAL,rt_entry_create,2414: failed to allocate memory for route entry
/kernel: RT_PFE: RT msg op 3 (PREFIX CHANGE) failed, err 5 (Invalid)
fpc0 RT-HAL,rt_entry_add_msg_proc,2702: route entry create failed
fpc0 RT-HAL,rt_entry_add_msg_proc,2886: proto L2 bridge,len 48 prefix 
06:d4:f2:00:00:cb/48 nh 2850
fpc0 RT-HAL,rt_entry_create,2414: failed to allocate memory for route entry

These messages worry me.  I have been looking into MVRP, which seems like it 
would let us avoid trunking all 3500 VLANs into the switches all the time and 
instead dynamically register VLANs as needed.
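
A minimal sketch of what enabling MVRP on an EX trunk might look like (the 
interface name is hypothetical, and the exact knobs may vary by release): MVRP 
runs on trunk ports and registers or deregisters VLANs dynamically based on 
declarations from the neighbour.

set interfaces ge-0/0/10 unit 0 family ethernet-switching port-mode trunk
set protocols mvrp interface ge-0/0/10.0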

Wondering about people's thoughts on MVRP: is this a good use case?  Is it stable 
and reliable?

thanks,

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp