the challenge is that when you tout your vm mobility play as "zero touch" after a 
move (i.e. you don't have to re-ip your vm/application/etc. to ensure 100% 
business continuity), you need stretched layer-2 between locations to make 
that actually work.
things like bgp host-route injection or dns gslb can remove the dependence of 
the application on a specific ip address, but the organization has to be mature 
enough to handle such things, especially in an automated way (a rough sketch of 
the host-route idea is below).
hence the evolution of things like lisp and vxlan within the enterprise/dc, to 
help alleviate some of these problems (i.e. we can run a layer-2 overlay on top 
of a layer-3 network).  mpls has been able to do this for a long time as well, 
but the requirements in the dc have diverged from the service provider world.  
this is slowly changing.  a minimal vxlan sketch follows.
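as a sketch only (again my own example, with invented vlan/vni/group numbers), 
mapping a vlan into a vxlan segment on nx-os with multicast flood-and-learn 
looks roughly like this:

    feature nv overlay
    feature vn-segment-vlan-based

    ! map the local vlan to a vxlan segment id
    vlan 200
     vn-segment 10200

    ! vtep tunnel interface, sourced from a loopback reachable across the l3 fabric
    interface nve1
     no shutdown
     source-interface loopback0
     member vni 10200
      mcast-group 239.1.1.200

the point being: the vlan 200 broadcast domain now rides over any routed 
underlay between the vteps, so the vm keeps its ip without a physical layer-2 
stretch.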

q.
--
quinn snyder | snyd...@gmail.com


> On 1 Feb, 2018, at 10:04, Aaron Gould <aar...@gvtc.com> wrote:
>
> So I think (I could be wrong as I'm not a server guy) that all this L2
> network emulation is because of server virtualization and moving vm's or
> vmotion or something like that, and that they need to be in same ip subnet
> (aka bcast domain).... correct ?
>
> *if* that's true, and *if* all this layer 2 networking madness is because of
> that point stated above, I would think that someone (vendors/standards
> bodies/companies) would/should be working really hard to make that server
> stuff work in different bcast domains (different subnets)...so we wouldn't
> have to do all that L2 stuff
>
> -Aaron
>
