Stephen,

> Aah, Ok. FWIW, splitting up the VIP into instance/"floating IP entity"

Right now I'm not sure what would be best. Currently we don't have an implementation that allows creating a VIP on an external network directly. For example, when an haproxy VIP is created, it gets an address on the tenant network, and a floating IP is then associated with the VIP address. Other providers could allow creating a VIP on an external network directly.
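To make the flow concrete, here is a toy sketch of what the haproxy driver does today as I described it: the VIP gets its address on the tenant network, and a floating IP on the external network is mapped to it afterwards. The names (`create_vip`, `associate_floating_ip`) are illustrative only, not the actual Neutron or LBaaS API:

```python
# Toy model of the current haproxy-driver flow. Names are illustrative,
# not real Neutron calls.

class Port:
    """A port with a fixed address on a specific network."""
    def __init__(self, network, address):
        self.network = network      # network the port lives on
        self.address = address      # fixed IP on that network

def create_vip(tenant_network, address):
    # The VIP address is allocated on the *tenant* network,
    # not on the external network directly.
    return Port(network=tenant_network, address=address)

def associate_floating_ip(external_network, vip_port, floating_address):
    # The floating IP lives on the external network and maps to the VIP
    # port; the tenant can't choose the floating address, which is why
    # floating IPs can't be shared between tenants.
    return {"network": external_network,
            "floating_address": floating_address,
            "mapped_port": vip_port}

vip = create_vip("tenant-net-A", "10.0.0.5")
fip = associate_floating_ip("ext-net", vip, "203.0.113.10")
```

The point of the sketch is just the ordering: the tenant-network port exists first, and the internet-facing address is a separate object layered on top of it.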
Basically, tenants can't share a floating IP because they can't specify a floating IP on the external network. However, a VIP address may or may not be internet-facing. If it's an internal address on a tenant network, then nothing prevents another tenant from having a VIP with the same IP address on its own tenant network. Internet-facing addresses, however, will obviously be different.

Thanks,
Eugene.

On Fri, Feb 14, 2014 at 3:57 AM, Stephen Balukoff <[email protected]> wrote:

> Hi Eugene,
>
> Aah, Ok. FWIW, splitting up the VIP into instance/"floating IP entity"
> separate from listener (ie. carries most of the attributes of VIP, in
> current implementation) still allows us to ensure tenants don't end up
> accidentally sharing an IP address. The "instance" could be associated with
> the neutron network port, and the haproxy listeners (one process per
> listener) could simply be made to listen on that port (ie. in that network
> namespace on the neutron node). There wouldn't be a need for two instances
> to share a single neutron network port.
>
> Has any thought been put to preventing tenants from accidentally sharing
> an IP if we stick with the current model?
>
> Stephen
>
> On Thu, Feb 13, 2014 at 4:20 AM, Eugene Nikanorov <[email protected]> wrote:
>
>> So we have some constraints here because of existing haproxy driver impl,
>> the particular reason is that VIP created by haproxy is not a floating ip,
>> but an ip on the internal tenant network with a neutron port. So ip
>> uniqueness is enforced at port level and not at VIP level. We need to allow
>> VIPs to share the port, that is a part of multiple-vips-per-pool blueprint.
>>
>> Thanks,
>> Eugene.
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>
> _______________________________________________
> OpenStack-dev mailing list
> [email protected]
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
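The "uniqueness enforced at port level, not VIP level" point in the quoted message can be illustrated with a small toy model (again, illustrative Python, not Neutron code): duplicate addresses are rejected per-network, which is exactly why two tenants can reuse the same internal VIP address on their own networks.

```python
# Toy model: IP uniqueness enforced per network (i.e. at the port level).
# Two tenants may use the same internal VIP address on their own networks,
# but a second port with the same address on the *same* network is rejected.

class Network:
    def __init__(self, name):
        self.name = name
        self._allocated = set()   # addresses already in use on this network

    def create_port(self, address):
        if address in self._allocated:
            raise ValueError(f"{address} already in use on {self.name}")
        self._allocated.add(address)
        return (self.name, address)

net_a = Network("tenant-net-A")
net_b = Network("tenant-net-B")

net_a.create_port("10.0.0.5")   # tenant A's VIP
net_b.create_port("10.0.0.5")   # same address, different network: allowed

try:
    net_a.create_port("10.0.0.5")   # duplicate on the same network
except ValueError:
    pass                            # rejected, as expected
```

Nothing here enforces uniqueness across VIPs as such, which matches the constraint described above: allowing multiple VIPs would mean letting them share one port rather than relaxing the per-network check.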
_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
