Re: [Openstack-operators] Openstack with mininet

2017-06-07 Thread Dan Sneddon
Most people use OpenDaylight when connecting Mininet to OpenStack. You
will find many examples here:

https://www.google.com/search?q=mininet+opendaylight+openstack

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

On Wed, Jun 7, 2017 at 6:21 AM, Ahmed Omar Shahidullah
<ahmed_au...@yahoo.com> wrote:
> Hi,
>
> I want to connect OpenStack with Mininet. What I mean by that is that I
> want a multinode setup of OpenStack where the controller node connects with
> the compute nodes through a network created in Mininet. Is there a way to do
> this?
>
> Ahmed
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Openvswitch flat and provider in the same bond

2017-04-19 Thread Dan Sneddon
If all of the VLANs are tagged, then I am having a hard time imagining
why you want to use only one of them as a flat network.

The only difference between a VLAN network and a flat network in Neutron
is that OVS handles tagging/untagging on VLAN networks, while flat
networks are untagged.

It is possible to create a VLAN interface, and then add that VLAN
interface to a new OVS bridge. Since the VLAN interface will already
have stripped the VLAN tags by the time the frames reach the OVS bridge,
you would create a flat network in that case. However, I don't believe
that you can do this while simultaneously adding the other VLANs as VLAN
networks on a different OVS bridge. You might actually have to create
three separate bridges, add the three VLAN interfaces to them, and
create three flat networks instead.
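
For illustration, here is a rough, untested sketch of that
per-VLAN-bridge approach, assuming the bond is bond0 and using made-up
bridge and physnet names:

ip link add link bond0 name bond0.567 type vlan id 567
ip link set bond0.567 up
ovs-vsctl add-br br-vlan567
ovs-vsctl add-port br-vlan567 bond0.567

The OVS agent would then need something like
"bridge_mappings = datacentre:br-ex,physnet567:br-vlan567", and the
Neutron network for VLAN 567 would be created with
provider:network_type flat on physnet567.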

In any case, the traffic will be untagged when it reaches the VM. So
from the VM perspective, all those choices have the same end result.

-- 
Dan Sneddon |  Senior Principal Software Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

On 04/19/2017 12:06 PM, Ignazio Cassano wrote:
> Hi Dan, on the physical switch the 567 is not the native VLAN; it is
> tagged like 555 and 556.
> I know I could set 567 as the native VLAN to receive it untagged.
> But what if I would like more than one flat network?
> I am not skilled in networking, but I think only one native VLAN can be
> set on a switch port.
> Any further solution or suggestion?
> Regards
> Ignazio
> 
> On 19/Apr/2017 20:19, "Dan Sneddon" <dsned...@redhat.com
> <mailto:dsned...@redhat.com>> wrote:
> 
> On 04/19/2017 09:02 AM, Ignazio Cassano wrote:
> > Dear All, in my OpenStack Newton installation, compute and controller
> > nodes have a separate management network NIC and a LACP bond0 where
> > provider VLANs (555, 556) and a flat VLAN (567) are trunked.
> > Since I cannot specify the VLAN ID (567) when I create a flat network,
> > I need to know how I can create the bridge for the flat network in
> > Open vSwitch.
> > For the provider networks I created a bridge br-ex, added bond0 to that
> > bridge, and configured the Open vSwitch agent and ML2 to map br-ex.
> > I don't know what to do for the flat network: must I create another
> > bridge? What interface must I add to the bridge for the flat (567)
> > network?
> > I configured the same scenario with the linuxbridge mechanism driver
> > and it seems easier to do.
> > Sorry for my bad English.
> > Regards
> > Ignazio
> 
> I assume that the VLAN 567 is the native (untagged) VLAN on the port in
> question? If that's so, you can do the following:
> 
> Create two provider networks of "provider:network_type vlan", plus one
> provider network with "provider:network_type flat", with all three using
> the same physical network.
> 
> 
> neutron net-create --provider:physical_network datacentre \
> --provider:network_type vlan --provider:segmentation_id 555 \
> --shared 
> 
> neutron net-create --provider:physical_network datacentre \
> --provider:network_type vlan --provider:segmentation_id 556 \
> --shared 
> 
> neutron net-create --provider:physical_network datacentre \
> --provider:network_type flat --shared 
> 
> 
> Of course, remove --shared if you don't want tenants directly attaching to
> any of the above networks, and add "--router:external" if any of these
> are to be used for SNAT/floating IPs.
> 
> --
> Dan Sneddon |  Senior Principal Software Engineer
> dsned...@redhat.com <mailto:dsned...@redhat.com> | 
> redhat.com/openstack <http://redhat.com/openstack>
> dsneddon:irc|  @dxs:twitter


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Openvswitch flat and provider in the same bond

2017-04-19 Thread Dan Sneddon
On 04/19/2017 09:02 AM, Ignazio Cassano wrote:
> Dear All, in my OpenStack Newton installation, compute and controller
> nodes have a separate management network NIC and a LACP bond0 where
> provider VLANs (555, 556) and a flat VLAN (567) are trunked.
> Since I cannot specify the VLAN ID (567) when I create a flat network,
> I need to know how I can create the bridge for the flat network in
> Open vSwitch.
> For the provider networks I created a bridge br-ex, added bond0 to that
> bridge, and configured the Open vSwitch agent and ML2 to map br-ex.
> I don't know what to do for the flat network: must I create another
> bridge? What interface must I add to the bridge for the flat (567)
> network?
> I configured the same scenario with the linuxbridge mechanism driver
> and it seems easier to do.
> Sorry for my bad English.
> Regards 
> Ignazio

I assume that the VLAN 567 is the native (untagged) VLAN on the port in
question? If that's so, you can do the following:

Create two provider networks of "provider:network_type vlan", plus one
provider network with "provider:network_type flat", with all three using
the same physical network.


neutron net-create --provider:physical_network datacentre \
--provider:network_type vlan --provider:segmentation_id 555 \
--shared 

neutron net-create --provider:physical_network datacentre \
--provider:network_type vlan --provider:segmentation_id 556 \
--shared 

neutron net-create --provider:physical_network datacentre \
--provider:network_type flat --shared 


Of course, remove --shared if you don't want tenants directly attaching to
any of the above networks, and add "--router:external" if any of these
are to be used for SNAT/floating IPs.
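
As an illustration only (the network name, CIDR, gateway, and allocation
pool below are made-up placeholders), an external flat network for
floating IPs might look something like this:

neutron net-create ext-net --provider:physical_network datacentre \
--provider:network_type flat --router:external

neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet \
--disable-dhcp --gateway 203.0.113.1 \
--allocation-pool start=203.0.113.10,end=203.0.113.200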

-- 
Dan Sneddon |  Senior Principal Software Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] help: Multiple external networks with a single L3 agent

2017-02-10 Thread Dan Sneddon
> [DEFAULT]
> ...
> auth_uri = http://controller:5000
> auth_url = http://controller:35357
> auth_region = RegionOne
> auth_plugin = password
> project_domain_id = default
> user_domain_id = default
> project_name = service
> username = neutron
> password = NEUTRON_PASS
> 
> In the [DEFAULT] section, configure the metadata host:
> 
> [DEFAULT]
> ...
> nova_metadata_ip = controller
> 
> In the [DEFAULT] section, configure the metadata proxy shared secret:
> 
> [DEFAULT]
> ...
> metadata_proxy_shared_secret = METADATA_SECRET
> 
> Add the external bridge:
> 
> # ovs-vsctl add-br br-ex
> 
> Add a port to the external bridge that connects to the physical
> external network interface:
> 
> Replace INTERFACE_NAME with the actual interface name. For
> example, eth2 or ens256.
> 
> # ovs-vsctl add-port br-ex p5p2
> 
> Regards
> Gaurav Goyal
> 
> 
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

In my experience, I have been able to modify the bridge mappings and add
a bridge without affecting existing networks or VMs. It is required to
restart the Neutron services after making such a change, but existing
networks and ports will continue to operate while the Neutron services
restart. If you want to have the least impact, I believe that restarting
neutron-server and the L2 agents everywhere (such as the openvswitch
agent) is sufficient; you can leave your L3 agents alone.
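
As a rough sketch of the kind of change I mean (the file path varies by
release and distribution, and the physnet/bridge names are placeholders):

# /etc/neutron/plugins/ml2/openvswitch_agent.ini (ml2_conf.ini on older releases)
[ovs]
bridge_mappings = physnet1:br-ex,physnet2:br-ex2

# restart neutron-server on the controllers, and the L2 agent everywhere:
systemctl restart neutron-server
systemctl restart neutron-openvswitch-agent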

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ironic with top-rack switches management

2017-01-03 Thread Dan Sneddon
I wouldn't say it's ready for prime time, but much of the work was done
in Newton; some patches are merged in master and a few are still in
progress. One thing that remains to be done is to test with a variety
of makes and models of switches.

You can see the progress of the whole set of patches here:
https://bugs.launchpad.net/ironic/+bug/1526403

Snapshot support for Ironic instances is still a wishlist feature:
https://bugs.launchpad.net/mos/+bug/1552348

I think the closest you could get at this point might be to use
Cinder-backed boot volumes for your Ironic nodes, though that would have
an impact on performance and network traffic. You could get snapshots of
just the data by using Cinder volumes for non-boot mount points.

--
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

- Original Message -
> Hello everyone.
> 
> 
> Did someone actually get Ironic running with ToR (top-of-rack) switches
> under Neutron in production? Which switch vendor/plugin (and OS version)
> do you use? Do you have some switch configuration with parts outside of
> Neutron's reach? Is it worth spending effort on the integration, etc.?
> 
> And one more question: does Ironic support snapshotting of bare-metal
> servers? With some kind of agent, etc.?
> 
> Thanks.
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-operators][neutron] neutron-plugin-openvswitch-agent and neutron-openvswitch-agent

2016-11-17 Thread Dan Sneddon
On 11/16/2016 10:48 PM, Akshay Kumar Sanghai wrote:
> Hi,
> I installed a kilo version before and now I am installing mitaka. The
> installation document for mitaka uses linuxbridge agent and not ovs. In
> kilo, it says to install neutron-plugin-openvswitch-agent. In mitaka,
> for linuxbridge, it says to install neutron-linuxbridge-agent.
> 
> Is there any difference between neutron-plugin-openvswitch-agent and
> neutron-openvswitch-agent?
> 
> Thanks
> Akshay
> 
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

On Ubuntu, neutron-plugin-openvswitch-agent is the name of the package
that supplies neutron-openvswitch-agent. Both the
neutron-openvswitch-agent and the neutron-linuxbridge-agent run as
driver plugins under the ML2 plugin framework. I'm not sure what the
name of the package is for neutron-linuxbridge-agent, and it will vary
by distribution.

Neutron was originally designed to be modular, with plugins to handle
the implementation of the API. The limitation was that only one plugin
could run at a time, so ML2 (Modular Layer 2) was developed to run
combinations of type drivers (VLAN, GRE, VXLAN, etc.) and
mechanism drivers (OVS, linuxbridge, 3rd-party mechanisms, etc.).

The old style drivers were known as monolithic drivers, as opposed to
ML2 plugin drivers. More information on ML2 plugins can be found here:
http://docs.openstack.org/developer/neutron/devref/l2_agents.html
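
For reference, a minimal ml2_conf.ini sketch showing how type drivers
and a mechanism driver are combined (the values are only an example):

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2000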

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Who's using TripleO in production?

2016-08-02 Thread Dan Sneddon
On 08/02/2016 09:57 AM, Curtis wrote:
> Hi,
> 
> I'm just curious who, if anyone, is using TripleO in production?
> 
> I'm having a hard time finding anywhere to ask end-user type
> questions. #tripleo irc seems to be just a dev channel. Not sure if
> there is anywhere for end users to ask questions. A quick look at
> stackalytics shows it's mostly RedHat contributions, though again, it
> was a quick look.
> 
> If there were other users it would be cool to perhaps try to have a
> session on it at the upcoming ops midcycle.
> 
> Thanks,
> Curtis.
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

Nearly every commercial customer of Red Hat OpenStack Platform (OSP)
since version 7 (version 9 is coming out imminently) is using TripleO
in production, because the installer is TripleO. That's hundreds of
production installations, some of them very large scale. The exact same
source code is used for RDO and OSP. HP used to use TripleO, but I
don't think they have contributed to TripleO since they updated Helion
with a proprietary installer.

Speaking for myself and the other TripleO developers at Red Hat, we do
try our best to answer user questions in #rdo. You will also find some
production users hanging out there. The best times to ask questions are
during East Coast business hours, or during business hours of GMT +1
(we have a large development office in Brno, CZ with engineers that
work on TripleO). There is also an RDO-specific mailing list available
here: https://www.redhat.com/mailman/listinfo/rdo-list

-- 
Dan Sneddon |  Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
650.254.4025|  dsneddon:irc   @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Neutron DHCP agent local routes

2016-04-22 Thread Dan Sneddon
Yes, there is an unfortunate possibility of confusion around the word
'subnet'. To some, that means a network segment, or VLAN, and to some
that just means the specific IP subnet. Neutron takes the latter view,
and treats multiple subnets as multiple IP ranges on the same VLAN
(which Neutron calls a 'network').

So if you add multiple subnets to a network, then it is assumed that
means multiple subnets on one physical network, or VLAN (even when
using VXLAN or GRE, there is an internal VLAN segment ID).

There is ongoing work to address the use case where you have a set of
networks that belongs to one tenant, but they are different subnets on
different VLANs with routing between them. See the 'routed networks'
work by Carl Baldwin et al. That seems to be what the OP was going for
by adding multiple subnets to one network.

Alternatively, the desired end state can probably be achieved by simply
using one subnet per network, with a separate network for each of
192.168.10.x, 192.168.11.x, and 192.168.12.x, as sketched below.
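
A quick sketch of that alternative (the names are placeholders):

neutron net-create net-10
neutron subnet-create net-10 192.168.10.0/24 --name subnet-10

neutron net-create net-11
neutron subnet-create net-11 192.168.11.0/24 --name subnet-11

neutron net-create net-12
neutron subnet-create net-12 192.168.12.0/24 --name subnet-12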

-- 
Dan Sneddon |  Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
650.254.4025|  dsneddon:irc   @dxs:twitter

On 04/22/2016 05:49 AM, Kevin Benton wrote:
> But if the subnets are on the same VLAN it should work. That's what
> that patch is designed for. There is (currently) an assumption in
> Neutron that subnets in the same network are on the same L2 segment. So
> if a VM wants to communicate with a VM in another subnet on the same
> network, it can cut out the router and use the direct route by ARPing
> for the VM on its own.
> 
> The default is for VMs to only get an IP from one of the subnets on a
> multi subnet network. So if I understand your topology, it exactly
> matches what that patch intended to address (eliminate the hairpinning
> via the router for same network communication).
> 
> Does your router do something special that skipping it breaks things?
> 
> On Apr 22, 2016 05:19, "Remco" <remcon...@gmail.com
> <mailto:remcon...@gmail.com>> wrote:
> 
> Hi Neil,
> 
> Well that explains...
> My networking setup does not conform to the commit message in such
> a way that there are multiple subnets on the network, however the
> instance does not have an IP address in all of them but only in one.
> I guess the only option for now is to patch this code, as a
> workaround.The reason for having multiple subnets on the same L2
> segment is a migration scenario. So I guess I'll just patch this as
> the situation will disappear in the near future as normally subnets
> are separated by L2 domains (vlans) in our setup.
> 
> Thanks!
> Remco
> 
> On Fri, Apr 22, 2016 at 1:16 PM, Neil Jerram
> <neil.jer...@metaswitch.com <mailto:neil.jer...@metaswitch.com>> wrote:
> 
> On 22/04/16 12:03, Remco wrote:
> > Hi Neil,
> >
> > Thanks.
> > The ip route output is as following, i guess the 0.0.0.0 gateway is 
> only
> > listed by cloud-init:
> >
> > debian@instance:/$ ip route
> > default via 192.168.10.1 dev eth0
> > 192.168.10.0/24 <http://192.168.10.0/24>
> <http://192.168.10.0/24> dev eth0  scope link
> > 169.254.169.254 via 192.168.10.1 dev eth0
> > 192.168.11.0/24 <http://192.168.11.0/24>
> <http://192.168.11.0/24> dev eth0  scope link
> > 192.168.12.0/24 <http://192.168.12.0/24>
> <http://192.168.12.0/24> dev eth0  proto kernel  scope
> > link  src 192.168.10.2
> >
> > (ip addresses are altered for security reasons).
> 
> Thanks for these clarifications.
> 
> > I'm not sure what creates these routes. I have two suspects: 
> cloud-init
> > and DHCP. As the same issue is observed on instances without 
> cloud-init
> > this rules out cloud-init.
> > We see the same issue on both Windows and Linux instances.
> 
> OK, I think you're seeing the effect of this DHCP agent code, from
> neutron/agent/linux/dhcp.py:
> 
>     host_routes.extend(["%s,0.0.0.0" % (s.cidr)
>                         for s in self.network.subnets
>                         if (s.ip_version == 4 and
>                             s.cidr != subnet.cidr)])
> 
> AFAICS there is no obvious knob for suppressing this logic.
> 
> The code was added in commit 6dce817c7c2, and the commit
> message says:
> 
> =8<=
> 

Re: [Openstack-operators] keystone authentication on public interface

2016-04-14 Thread Dan Sneddon
On 04/13/2016 07:46 PM, Serguei Bezverkhi (sbezverk) wrote:
> Hello folks,
> 
> I was wondering if you let me know if enabling keystone to listen on public 
> interface for ports 5000 and 35357 is considered as a normal practice. 
> Example if a customer wants to authenticate not via horizon or some other 
> proxy but setting up OS_AUTH_URL=http://blah  variable to be able to run 
> OpenStack commands in cli.
> 
> Thank you in advance
> 
> Serguei  
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

That's a normal practice. You might be surprised to learn that we
already host ports 5000 and 35357 on the Public API address. All that
is needed is to point clients at http://<public_address>:5000/ (or
HTTPS if using SSL).

In general, you want to use port 5000 for all remote Keystone
connections, with the exception that if you want to use the API for
creating users or tenants you need to use the admin API. The only
difference between the two is that 35357 can perform admin functions on
the user database.
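
For example, a remote CLI user would only need something like the
following in their environment (the address is a placeholder, and the
identity API version and HTTP vs. HTTPS depend on your deployment):

export OS_AUTH_URL=http://public.example.com:5000/v2.0
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=secret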

-- 
Dan Sneddon |  Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
650.254.4025|  dsneddon:irc   @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [neutron][ml2] Gutting ML2 from Neutron - possible?

2016-04-01 Thread Dan Sneddon
On 04/01/2016 03:18 PM, Adam Lawson wrote:
> Hi Dan - thanks I think that answers the core plugin question. What is
> Contrail doing with the Neutron service plugin? Are there two plugins?
> 
> //adam
> 
> */
> Adam Lawson/*
> 
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
> 
> On Fri, Apr 1, 2016 at 2:13 PM, Dan Sneddon <dsned...@redhat.com
> <mailto:dsned...@redhat.com>> wrote:
> 
> On 04/01/2016 02:07 PM, Dan Sneddon wrote:
> > On 04/01/2016 01:07 PM, Adam Lawson wrote:
> >> The Contrail team that said they are using their network product
> with
> >> OpenStack without requiring a mechanism driver with the ML2 plugin.
> >> More specifically, they said they don't use or need ML2. I
> didn't have
> >> a chance to ask them to clarify so I'm wondering how that works and
> >> what is current best practice? I think the individual misspoke but I
> >> wanted to see if this is actually being done today.
> >>
> >> Perhaps they replaced ML2 with something - exactly what though?
> >>
> >> //adam
> >>
> >> */
> >> Adam Lawson/*
> >>
> >> AQORN, Inc.
> >> 427 North Tatnall Street
> >> Ste. 58461
> >> Wilmington, Delaware 19801-2230
> >> Toll-free: (844) 4-AQORN-NOW ext. 101
> >> International: +1 302-387-4660
> >> Direct: +1 916-246-2072
> >>
> >>
> >> ___
> >> OpenStack-operators mailing list
> >> OpenStack-operators@lists.openstack.org
> <mailto:OpenStack-operators@lists.openstack.org>
> >>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>
> >
> > ML2 is what's known as a "core driver" for Neutron. This means
> that the
> > main API implementation is done inside this driver. The ML2 driver is
> > modular, and can load sub-drivers (mechanism and type drivers) to
> > implement the specific actions such as creating virtual ethernet
> ports.
> >
> > Contrail replaces this driver with the NeutronPluginContrailCoreV2,
> > which implements the main API itself, without subdrivers. There is no
> > support planned for ML2 mechanism or type drivers within the Contrail
> > plugin. Contrail implements its own intelligent virtual ethernet
> ports,
> > each of which is routing-aware, and the routing is made redundant
> using
> > dynamic multipath routing. This replaces the Open vSwitch
> > bridge/patch/port mechanism.
> >
> > The current documentation [1] for Contrail/RDO covers Packstack. The
> > initial installation is done with ML2+OVS, then the Neutron
> > configuration is modified to load the Contrail driver instead.
> >
> > In OSP-Director 8, it is possible to load the Contrail driver during
> > installation instead of ML2. This is done by including the
> environment
> > file:
> >
> > openstack overcloud deploy --templates /path/to/templates \
> > -e /path/to/templates/environments/neutron-opencontrail.yaml \
> > [...]
> >
> > This environment file will set the following:
> >
> > ###
> > resource_registry:
> >   OS::TripleO::ControllerExtraConfigPre:
> > ../puppet/extraconfig/pre_deploy/controller/neutron-opencontrail.yaml
> >   OS::TripleO::ComputeExtraConfigPre:
> > ../puppet/extraconfig/pre_deploy/compute/neutron-opencontrail.yaml
> >
> > parameter_defaults:
> >   NeutronCorePlugin:
> >
> 
> neutron_plugin_contrail.plugins.opencontrail.contrail_plugin.NeutronPluginContrailCoreV2
> >   NeutronServicePlugins:
> >
> 
> neutron_plugin_contrail.plugins.opencontrail.loadbalancer.plugin.LoadBalancerPlugin
> >   NeutronEnableDHCPAgent: false
> >   NeutronEnableL3Agent: false
> >   NeutronEnableMetadataAgent: false
> >   NeutronEnableOVSAgent: false
> >   NeutronEnableTunnelling: false
> > 
> >
> > The files in the resource_registry section contain configuration
> > settings such as the IP address of the Contrail API server, and the
> > authentication credentials.
>

Re: [Openstack-operators] [neutron][ml2] Gutting ML2 from Neutron - possible?

2016-04-01 Thread Dan Sneddon
On 04/01/2016 02:07 PM, Dan Sneddon wrote:
> On 04/01/2016 01:07 PM, Adam Lawson wrote:
>> The Contrail team said that they are using their network product with
>> OpenStack without requiring a mechanism driver with the ML2 plugin.
>> More specifically, they said they don't use or need ML2. I didn't have
>> a chance to ask them to clarify so I'm wondering how that works and
>> what is current best practice? I think the individual misspoke but I
>> wanted to see if this is actually being done today.
>>
>> Perhaps they replaced ML2 with something - exactly what though?
>>
>> //adam
>>
>> */
>> Adam Lawson/*
>>
>> AQORN, Inc.
>> 427 North Tatnall Street
>> Ste. 58461
>> Wilmington, Delaware 19801-2230
>> Toll-free: (844) 4-AQORN-NOW ext. 101
>> International: +1 302-387-4660
>> Direct: +1 916-246-2072
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
> 
> ML2 is what's known as a "core driver" for Neutron. This means that the
> main API implementation is done inside this driver. The ML2 driver is
> modular, and can load sub-drivers (mechanism and type drivers) to
> implement the specific actions such as creating virtual ethernet ports.
> 
> Contrail replaces this driver with the NeutronPluginContrailCoreV2,
> which implements the main API itself, without subdrivers. There is no
> support planned for ML2 mechanism or type drivers within the Contrail
> plugin. Contrail implements its own intelligent virtual ethernet ports,
> each of which is routing-aware, and the routing is made redundant using
> dynamic multipath routing. This replaces the Open vSwitch
> bridge/patch/port mechanism.
> 
> The current documentation [1] for Contrail/RDO covers Packstack. The
> initial installation is done with ML2+OVS, then the Neutron
> configuration is modified to load the Contrail driver instead.
> 
> In OSP-Director 8, it is possible to load the Contrail driver during
> installation instead of ML2. This is done by including the environment
> file:
> 
> openstack overcloud deploy --templates /path/to/templates \
> -e /path/to/templates/environments/neutron-opencontrail.yaml \
> [...]
> 
> This environment file will set the following:
> 
> ###
> resource_registry:
>   OS::TripleO::ControllerExtraConfigPre:
> ../puppet/extraconfig/pre_deploy/controller/neutron-opencontrail.yaml
>   OS::TripleO::ComputeExtraConfigPre:
> ../puppet/extraconfig/pre_deploy/compute/neutron-opencontrail.yaml
> 
> parameter_defaults:
>   NeutronCorePlugin:
> neutron_plugin_contrail.plugins.opencontrail.contrail_plugin.NeutronPluginContrailCoreV2
>   NeutronServicePlugins:
> neutron_plugin_contrail.plugins.opencontrail.loadbalancer.plugin.LoadBalancerPlugin
>   NeutronEnableDHCPAgent: false
>   NeutronEnableL3Agent: false
>   NeutronEnableMetadataAgent: false
>   NeutronEnableOVSAgent: false
>   NeutronEnableTunnelling: false
> 
> 
> The files in the resource_registry section contain configuration
> settings such as the IP address of the Contrail API server, and the
> authentication credentials.
> 
> In the parameter_defaults section, the NeutronCorePlugin is changed
> from ML2 to the Contrail core plugin. The loadbalancer plugin is also
> relegated to the Contrail load balancer plugin.
> 
> [1] - http://www.opencontrail.org/rdo-openstack-opencontrail-integration/
> 
> The same kind of approach applies to some other 3rd-party Neutron
> plugin providers, although there are also some that use ML2 and do the
> customization at the mechanism and type driver layer. Does that answer
> your questions?
> 

I probably should have used TripleO in the above example to reduce
confusion (OSP-Director is Red Hat's name for the supported version of
TripleO, which uses the same source code).

In any case, it was just an example of how Contrail completely replaces
ML2 when you install it in any OpenStack deployment.

-- 
Dan Sneddon |  Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
650.254.4025|  dsneddon:irc   @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [neutron][ml2] Gutting ML2 from Neutron - possible?

2016-04-01 Thread Dan Sneddon
On 04/01/2016 01:07 PM, Adam Lawson wrote:
> The Contrail team said that they are using their network product with
> OpenStack without requiring a mechanism driver with the ML2 plugin.
> More specifically, they said they don't use or need ML2. I didn't have
> a chance to ask them to clarify so I'm wondering how that works and
> what is current best practice? I think the individual misspoke but I
> wanted to see if this is actually being done today.
> 
> Perhaps they replaced ML2 with something - exactly what though?
> 
> //adam
> 
> */
> Adam Lawson/*
> 
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

ML2 is what's known as a "core driver" for Neutron. This means that the
main API implementation is done inside this driver. The ML2 driver is
modular, and can load sub-drivers (mechanism and type drivers) to
implement the specific actions such as creating virtual ethernet ports.

Contrail replaces this driver with the NeutronPluginContrailCoreV2,
which implements the main API itself, without subdrivers. There is no
support planned for ML2 mechanism or type drivers within the Contrail
plugin. Contrail implements its own intelligent virtual ethernet ports,
each of which is routing-aware, and the routing is made redundant using
dynamic multipath routing. This replaces the Open vSwitch
bridge/patch/port mechanism.

The current documentation [1] for Contrail/RDO covers Packstack. The
initial installation is done with ML2+OVS, then the Neutron
configuration is modified to load the Contrail driver instead.

In OSP-Director 8, it is possible to load the Contrail driver during
installation instead of ML2. This is done by including the environment
file:

openstack overcloud deploy --templates /path/to/templates \
-e /path/to/templates/environments/neutron-opencontrail.yaml \
[...]

This environment file will set the following:

###
resource_registry:
  OS::TripleO::ControllerExtraConfigPre:
../puppet/extraconfig/pre_deploy/controller/neutron-opencontrail.yaml
  OS::TripleO::ComputeExtraConfigPre:
../puppet/extraconfig/pre_deploy/compute/neutron-opencontrail.yaml

parameter_defaults:
  NeutronCorePlugin:
neutron_plugin_contrail.plugins.opencontrail.contrail_plugin.NeutronPluginContrailCoreV2
  NeutronServicePlugins:
neutron_plugin_contrail.plugins.opencontrail.loadbalancer.plugin.LoadBalancerPlugin
  NeutronEnableDHCPAgent: false
  NeutronEnableL3Agent: false
  NeutronEnableMetadataAgent: false
  NeutronEnableOVSAgent: false
  NeutronEnableTunnelling: false


The files in the resource_registry section contain configuration
settings such as the IP address of the Contrail API server, and the
authentication credentials.

In the parameter_defaults section, the NeutronCorePlugin is changed
from ML2 to the Contrail core plugin. The loadbalancer plugin is also
relegated to the Contrail load balancer plugin.

[1] - http://www.opencontrail.org/rdo-openstack-opencontrail-integration/

The same kind of approach applies to some other 3rd-party Neutron
plugin providers, although there are also some that use ML2 and do the
customization at the mechanism and type driver layer. Does that answer
your questions?

-- 
Dan Sneddon |  Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
650.254.4025|  dsneddon:irc   @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [neutron] Interesting networking issue - need help

2016-03-31 Thread Dan Sneddon
On 03/31/2016 10:36 AM, Christopher Hull wrote:
> Hi all;
> Was originally DNS issue, but that was a downstream symptom.
> 
> Instances on Private net can't access internet TCP, but CAN ICMP. ping all.
> Details:
> 1. Instances on Public net work perfectly.
> 2. Instances on Private net can fully access Public net instances, both
> virtual and physical boxes.
>ssh from Private to Public instance works.
>http to OpenStack dashboard (physical box) from Private instance works.
> 3. Private instances can ping everything, including the internet.
> 4. Private instances can NOT TCP to my ATT gateway. (public net)
>HTTP to ATT gateway which has a web interface fails.
>Same is true for internet.  Ping, but no TCP (UDP?)
> 5. Floating IPs work.   I think the Neutron Router is fine.
> 
> Any ideas??
> -Chris
> 
> 
> 
> 
> 
> 
> 
> - Christopher T. Hull
> I am presently seeking a new career opportunity  Please see career page
> http://chrishull.com/career
> 333 Orchard Ave, Sunnyvale CA. 94085
> (415) 385 4865
> chrishul...@gmail.com <mailto:chrishul...@gmail.com>
> http://chrishull.com
> 
> 
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

When ICMP works but TCP doesn't, that is often a sign of an MTU problem.

Especially if you are running VXLAN, you need room for the tunnel
headers. If your MTU is 1500 on the wire, then the VM MTU must be 1450
or smaller to make room for the VXLAN headers. Check
/etc/neutron/dnsmasq-neutron.conf, and make sure this option is set to
at least 50 bytes less than your physical MTU:

/etc/neutron/dnsmasq-neutron.conf:
dhcp-option-force=26,1400
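
A quick way to confirm an MTU problem from inside a VM is to send
non-fragmentable pings of different sizes (the sizes below assume a
1500-byte physical MTU; 28 bytes is the IP + ICMP header overhead):

# 1472 = 1500 - 28; expected to fail if the VXLAN path only carries ~1450 bytes
ping -M do -s 1472 8.8.8.8

# 1422 = 1450 - 28; expected to succeed once the VM MTU is lowered to 1450
ping -M do -s 1422 8.8.8.8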

-- 
Dan Sneddon |  Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
650.254.4025|  dsneddon:irc   @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [kolla] Question about how Operators deploy

2016-02-12 Thread Dan Sneddon
On 02/12/2016 06:04 AM, Steven Dake (stdake) wrote:
> Hi folks,
> 
> Unfortunately I won't be able to make it to the Operator midcycle
> because of budget constraints or I would find the answer to this
> question there.  The Kolla upstream is busy sorting out external ssl
> termination and a question arose in the Kolla community around operator
> requirements for publicURL vs internalURL VIP management.
> 
> At present, Kolla creates 3 Haproxy containers across 3 HA nodes with
> one VIP managed by keepalived.  The VIP is used for internal
> communication only.  Our PUBLIC_URL is set to a DNS name, and we expect
> the Operator to sort out how to map that DNS name to the internal VIP
> used by Kolla.  The way I do this in my home lab is to use NAT to NAT
> my public_URL from the internet (hosted by dyndns) to my internal VIP
> that haproxies to my 3 HA control nodes.  This is secure assuming
> someone doesn't bust through my NAT.
> 
> An alternative has been suggested which is to use TWO vips.  One for
> internal_url, one for public_url.  Then the operator would only be
> responsible for selecting where to to allocate the public_url
> endpoint's VIP.  I think this allows more flexibility without
> necessarily requiring NAT while still delivering a secure solution.
> 
> Not having ever run an OpenStack cloud in production, how do the
> Operators want it?  Our deciding factor here is what Operators want,
> not what is necessarily currently in the code base.  We still have time
> to make this work differently for Mitaka, but I need feedback/advice
> quickly.
> 
> The security guide seems to imply two VIPs are the way to Operate: (big
> diagram):
> http://docs.openstack.org/security-guide/networking/architecture.html
> 
> The IRC discussion is here for reference:
> http://eavesdrop.openstack.org/irclogs/%23kolla/%23kolla.2016-02-12.log.html#t2016-02-12T12:09:08
> 
> Thanks in Advance!
> -steve
> 
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

I am not an operator, but I work with large-scale operators to design
OpenStack networks regularly (more than one per week). In general, the
operators I work with want a separation of their Public from their
Internal APIs. This helps with accounting, since tracking accesses to
the Public API is easier when you don't have to filter out all the
internal service API calls. I have also seen some operators place the
Public APIs into a protected zone that required VPN access to get to,
while the Internal APIs were only accessible from inside the deployment.

Another interesting use case I have seen several times is when a
service VM needs to connect to the Public APIs. I have seen this when a
VM inside the cloud was used to host a self-service portal, so that VM
needs to be able to issue commands against the Public APIs in order to
provision services. In this case, it would have been difficult to
engineer a solution that allowed both the VM and the internal services
to connect to a single API without increasing the attack surface and
reducing security.
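
To illustrate the two-VIP model under discussion, a minimal HAProxy
sketch for a single service might look like this (the addresses are
placeholders, and a real configuration needs the usual global/defaults
sections and timeouts):

frontend keystone_public
    bind 203.0.113.10:5000
    default_backend keystone_api

frontend keystone_internal
    bind 172.16.0.10:5000
    default_backend keystone_api

backend keystone_api
    server ctrl0 172.16.0.11:5000 check
    server ctrl1 172.16.0.12:5000 check
    server ctrl2 172.16.0.13:5000 check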

-- 
Dan Sneddon |  Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
650.254.4025|  dsneddon:irc   @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [neutron] Routing to tenant networks

2016-01-12 Thread Dan Sneddon
On 01/12/2016 09:42 AM, Matt Kassawara wrote:
> Sure, you can use 'neutron router-gateway-set --disable-snat
> ' to disable NAT... just add routes where necessary.
> 
> Seems like implementation of RFC 6598 would occur outside of neutron...
> maybe on the service provider network between clouds? Perhaps someone
> from a service provider can provide more information.
> 
> On Tue, Jan 12, 2016 at 9:46 AM, Mike Spreitzer <mspre...@us.ibm.com
> <mailto:mspre...@us.ibm.com>> wrote:
> 
> Is there any condition under which a Neutron router will route
> packets from a provider network to a tenant network with
> destination address unmolested? E.g., non-RFC1918 addresses on the
> tenant network?  Does Neutron know anything about RFC6598?
> 
> Thanks,
> Mike
> 
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> <mailto:OpenStack-operators@lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> 
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

I can confirm that OpenStack doesn't have Carrier Grade NAT (CGN), but
this RFC simply sets aside a set of addresses which can be used for CGN
(100.64.0.0/10), and lays out some requirements and best practices for
running a CGN network.

I don't see any reason why these addresses couldn't be used. In fact,
giving RFC 6598 a readthrough it appears that Neutron NAT would fulfill
the requirements of this RFC, as long as 100.64.0.0/10 were only used
for Tenant networks and not floating IP addresses.

That said, we already have the RFC 1918 ranges (192.168.0.0/16,
172.16.0.0/12, and 10.0.0.0/8). If a customer were already using all of
these throughout
their network, then I could see using 100.64.0.0/10 in order to have
unique addresses within the OpenStack deployment.
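
Purely as an illustration (the names are placeholders), carving tenant
subnets out of that space would just be:

neutron net-create tenant-net
neutron subnet-create tenant-net 100.64.1.0/24 --name tenant-subnet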

-- 
Dan Sneddon |  Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
650.254.4025|  dsneddon:irc   @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] multiple gateways for network

2015-12-21 Thread Dan Sneddon
On 12/18/2015 06:00 AM, Akshay Kumar Sanghai wrote:
> Hi,
> I have a network ,net1 with 2 VMs connected to it. One router R1 is
> connected to the n/w which connects to ext-net. I have another router
> R2 that connects to a diff network net2 in the tenant. Can i create
> matching rules for the network so that it forward the packet to R2 if
> destined to that network and set R1 as the default gateway for all
> other traffic. Is this possible? I do not want to add routes
> individually to each VM on net1 to forward traffic to net2. I also do
> not want to use a single router and connect to net2 and ext-net.
> 
> I have a use-case of 3 tier network architecture and each network will
> be connected to atleast 2 other networks. For a network ,adding static
> routes to VMs is not a good way to go.
> 
> Thanks,
> Akshay
> 
> 
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

You can instruct the DHCP server to issue extra non-default routes to
the VMs when they get their IPs. For instance, to add a route to the
10.1.1.0/24 network via the router at 10.0.0.254, you would add the
following to the subnet:

neutron subnet-update \
--host-route destination=10.1.1.0/24,nexthop=10.0.0.254 \
<subnet name or ID>

That way the VMs will use the router at 10.0.0.254 to reach the remote
network, and they will use their default route for everything else.
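
Depending on the python-neutronclient version, the list form of the
option may be needed instead; that would be something along these lines
(same placeholder for the subnet as above):

neutron subnet-update <subnet name or ID> --host-routes type=dict list=true \
destination=10.1.1.0/24,nexthop=10.0.0.254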

-- 
Dan Sneddon |  Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
650.254.4025|  dsneddon:irc   @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Service Catalog TNG urls

2015-12-03 Thread Dan Sneddon
>> [...] a node without an external connection (like a Ceph node, for
>> instance) would have to either have its API traffic routed, or it would
>> have to be placed on an external segment. Either choice is not optimal.
>> Routers can be a chokepoint. Ceph nodes should be back-end only.
>>
>> Uniform connection path:
>> If there is only one API, and it is externally accessible, then it is
>> almost certainly on a different network segment than the database, AMQP
>> bus, redis (if applicable), etc. If there is an Internal API it can
>> share a segment with these other services while the Public API is on an
>> external segment.
>>
> 
> It seems a little contrary to me that it's preferable to have a
> software-specific solution to security (internal/external URL in the
> catalog) vs. a decent firewall that doesn't let any traffic through to
> your non-public nodes, *or* well-performing internal routers. Or even
> an internal DNS view override. The latter three all seem like simple
> solutions that allow OpenStack and its clients to be simpler.
> 
> The only reason I can see to perpetuate this version of security in
> networking is IPv4 address space starvation. And that just isn't a
> reason, because you can give all of your nodes IPv6 addresses, and your
> API endpoint an AAAA record, and be done with that. Remember that we're talking
> about "the next 5 years".
> 
>> Conclusion:
>> If there were only one API, then I would personally bind the API to a
>> local non-routed VLAN, then use HAProxy to reflect those URLs
>> externally. This makes the APIs themselves simpler, but still provides
>> the advantages of having multiple endpoints. This introduces a
>> dependency on a proxy, but I've never seen a production deployment that
>> didn't use a load-balancing proxy. In this case, the Keystone endpoint
>> list would show the internal API URLs, but they would not be reachable
>> from outside.
>>
> 
> I think we agree on "this is how we'd do it if we had to", but I wrote
> all of the above because I don't really understand what you're saying.
> If the catalog showed only internal API's, how would external clients
> reach you at all?
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

In my example, the center is squishy, but also non-routed to prevent
remote access. There is more than one approach, so consider mine an
example of one approach rather than a suggestion that one size fits all.

As far as the confusion about what I was saying, I think I was not
using specific enough terminology.

What I meant to say was that if Internal URLs are listed alongside
Public URLs in an endpoint, it's not a problem if the Internal URLs are
on a non-reachable, non-routed network. That prevents someone
accidentally using an HTTP instead of an HTTPS connection.
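
For illustration, a sketch of how such a pair of URLs might be
registered with the old keystone v2 CLI (the region, service ID, and
URLs below are all placeholders):

keystone endpoint-create --region RegionOne \
  --service-id <nova_service_id> \
  --publicurl 'https://api.example.com:8774/v2/%(tenant_id)s' \
  --internalurl 'http://172.16.0.10:8774/v2/%(tenant_id)s' \
  --adminurl 'http://172.16.0.10:8774/v2/%(tenant_id)s'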

-- 
Dan Sneddon |  Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
650.254.4025|  dsneddon:irc   @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Service Catalog TNG urls

2015-12-03 Thread Dan Sneddon
On 12/03/2015 09:46 AM, Fox, Kevin M wrote:
> We use internal to be a private network between the controllers and the
> compute nodes that no one else has access to. Without that, we'd be stuck.
> 
> An OpenStack network that's where all the public services go, that
> isn't external to the cloud for billing purposes does make sense too
> though. Maybe all three should be clearly delineated. Maybe something like:
> 
> * Public - Available outside. may incur costs to access internally
> * Internal - Available publicly by machines in the cloud. no cost is
> ever incurred from using them.
> * Provider - Not exposed to normal users and intended for backend sorts
> of things.
> 
> Hopefully we can make it a strong enough convention that apps can rely
> on it between clouds.
> 
> Thanks,
> Kevin
> ---
> *From:* Jesse Keating [j...@bluebox.net]
> *Sent:* Thursday, December 03, 2015 8:09 AM
> *To:* Sean Dague
> *Cc:* openstack-operators
> *Subject:* Re: [Openstack-operators] Service Catalog TNG urls
> 
> We make use of http urls internally for services to talk to each other,
> but not for human users. All our human users should be using https
> public url. We don't actually utilize the internalURL framework,
> instead we use /etc/hosts entries to change the domain resolution of
> our publicURL entries, and use other config file controls to reflect
> http vs https.
> 
> Partly this is because we don't want to advertise internal IP addresses
> in the catalog. Partly this is because we do not want to advertise an
> http link that a client might accidentally use and pass credentials in
> the clear over the Internet.
> 
> I believe we would be perfectly happy to do away with adminURL and
> internalURL. It'd certainly reduce our per-site configuration entries,
> and reduce confusion around what purpose these entries serve.
> 
> 
> - jlk
> 
> On Thu, Dec 3, 2015 at 6:14 AM, Sean Dague <s...@dague.net
> <mailto:s...@dague.net>> wrote:
> 
> For folks that don't know, we've got an effort under way to look at
> some
> of what's happened with the service catalog, how it's organically
> grown,
> and do some pruning and tuning to make sure it's going to support what
> we want to do with OpenStack for the next 5 years (wiki page to dive
> deeper here - https://wiki.openstack.org/wiki/ServiceCatalogTNG).
> 
> One of the early Open Questions is about urls. Today there is a
> completely free form field to specify urls, and there are conventions
> about having publicURL, internalURL, adminURL. These are, however, only
> conventions.
> 
> The only project that's ever really used adminURL has been Keystone, so
> that's something we feel we can phase out in new representations.
> 
> The real question / concern is around public vs. internal. And
> something
> we'd love feedback from people on.
> 
> When this was brought up in Tokyo the answer we got was that internal
> URL was important because:
> 
> * users trusted it to mean "I won't get charged for bandwidth"
> * it is often http instead of https, which provides a 20% performance
> gain for transferring large amounts of data (i.e. glance images)
> 
> The question is, how hard would it be for sites to be configured so
> that
> internal routing is used whenever possible? Or is this a concept we
> need
> to formalize and make user applications always need to make the
> decision
> about which interface they should access?
> 
> -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> <mailto:OpenStack-operators@lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> 
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

Kevin,

I'm not sure I'm on board with the three endpoints as listed, but at
the very least I would swap Internal and Provider in your example:

* Public - Available outside. may incur costs to access internally
* Internal - Not exposed to normal users and intended for backend sorts
  of things.
* Provider - Available publicly by machines in the cloud. no cost is
  ever incurred from using them.

Provider would then correspond to a provider network, where it is put
in place for the benefit of the VMs.

Re: [Openstack-operators] Service Catalog TNG urls

2015-12-03 Thread Dan Sneddon
On 12/03/2015 06:14 AM, Sean Dague wrote:
> For folks that don't know, we've got an effort under way to look at some
> of what's happened with the service catalog, how it's organically grown,
> and do some pruning and tuning to make sure it's going to support what
> we want to do with OpenStack for the next 5 years (wiki page to dive
> deeper here - https://wiki.openstack.org/wiki/ServiceCatalogTNG).
> 
> One of the early Open Questions is about urls. Today there is a
> completely free form field to specify urls, and there are conventions
> about having publicURL, internalURL, adminURL. These are, however, only
> conventions.
> 
> The only project that's ever really used adminURL has been Keystone, so
> that's something we feel we can phase out in new representations.
> 
> The real question / concern is around public vs. internal. And something
> we'd love feedback from people on.
> 
> When this was brought up in Tokyo the answer we got was that internal
> URL was important because:
> 
> * users trusted it to mean "I won't get charged for bandwidth"
> * it is often http instead of https, which provides a 20% performance
> gain for transferring large amounts of data (i.e. glance images)
> 
> The question is, how hard would it be for sites to be configured so that
> internal routing is used whenever possible? Or is this a concept we need
> to formalize and make user applications always need to make the decision
> about which interface they should access?
> 
>   -Sean
> 

I think the real question is whether we need to bind APIs to multiple
IP addresses, or whether we need to use a proxy to provide external
access to a single API endpoint. It seems unacceptable to me to have
the API only hosted externally, then use routing tricks for the
services to access the APIs.

While I am not an operator myself, I design OpenStack networks for
large (and very large) operators on a regular basis. I can tell you
that there is a strong desire from the customers and partners I deal
with for separate public/internal endpoints for the following reasons:

Performance:
There is a LOT of API traffic in a busy OpenStack deployment. Having
the internal OpenStack processes use the Internal API via HTTP is a
performance advantage. I strongly recommend a separate Internal API
VLAN that is non-routable, to ensure that the unencrypted traffic is
never accidentally exposed outside the deployment.

Security/Auditing/Accounting:
Having a separate Internal API (for the OpenStack services) and a
Public API (for humans and remote automation) allows the operator to
apply a strict firewall in front of the Public API to restrict access
from outside the cloud. Such a device may also help deflect/absorb a
DoS attack against the API. This firewall can be an encryption
endpoint, so the traffic can be unencrypted and examined or logged. I
wouldn't want the extra latency of such a firewall in front of all my
OpenStack internal service calls.

Routing:
If there is only one API, then it has to be externally accessible. This
means that a node without an external connection (like a Ceph node, for
instance) would have to either have its API traffic routed, or it would
have to be placed on an external segment. Either choice is not optimal.
Routers can be a chokepoint. Ceph nodes should be back-end only.

Uniform connection path:
If there is only one API, and it is externally accessible, then it is
almost certainly on a different network segment than the database, AMQP
bus, redis (if applicable), etc. If there is an Internal API it can
share a segment with these other services while the Public API is on an
external segment.

Conclusion:
If there were only one API, then I would personally bind the API to a
local non-routed VLAN, then use HAProxy to reflect those URLs
externally. This makes the APIs themselves simpler, but still provides
the advantages of having multiple endpoints. This introduces a
dependency on a proxy, but I've never seen a production deployment that
didn't use a load-balancing proxy. In this case, the Keystone endpoint
list would show the internal API URLs, but they would not be reachable
from outside.

-- 
Dan Sneddon |  Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
650.254.4025|  dsneddon:irc   @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Double NAT in neutron ?

2015-10-27 Thread Dan Sneddon
If you have a NAT server that translates public IPs to private IPs, then it is
always going to receive the inbound traffic for those public IPs.

So, even if the public IPs are routable on the local network (are you sure they
are?), you wouldn't be able to use those public IPs as long as the NAT server is
listening for inbound traffic to those IPs. You might send traffic out, but the
return traffic is going to go to the NAT server and not your VM.

None of this has anything to do with OpenStack or private IPs; you just have
local routing issues.

-Dan Sneddon

- Original Message -
> Dear All,
> 
> We get a pool of public IPs which statically map to private IP addresses. If
> I assign any one of those private IP addresses to a physical interface it is
> reachable from the internet.
> 
> In the neutron setup I created the external network using the range of those
> private IP addresses and associated them as floating IPs to the instances.
> 
> When I ping/connect using the floating IPs (from the private range) it works,
> but when I use the assigned public IP it cannot ping/connect.
> 
> 
> Our setup:
> internet -> public ip -> natted-private-ip -> neutron-internal-ip -> instance
>            -- Natted (floating ips) --
> 
> Typical setup:
> internet -> public ip -> neutron-internal-ip -> instance
>            -- Natted (floating ips) --
> 
> Any hint ?
> 
> --
> 
> Regards
> 
> Zeeshan Ali Shah
> System Administrator - PDC HPC
> PhD researcher (IT security)
> Kungliga Tekniska Hogskolan
> +46 8 790 9115
> http://www.pdc.kth.se/members/zashah
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Different OpenStack components

2015-10-09 Thread Dan Sneddon
On 10/09/2015 12:47 PM, Abhishek Talwar wrote:
> Hi Folks,
> 
> I have been working with OpenStack for a while now. I know that other
> than the main components (nova, neutron, glance, cinder, horizon,
> tempest, keystone, etc.) there are many more components in OpenStack
> (like Sahara and Trove).
> 
> So, where can I see the list of all existing OpenStack components, and
> is there any documentation for these components so that I can read what
> roles they play?
> 
> Thanks and Regards
> Abhishek Talwar
> 
> 
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

Here is a supposedly up-to-date list of all the project teams. This
will give you an idea of what projects are available, both core and optional.

http://governance.openstack.org/reference/projects/index.html

The official list is the projects.yaml that is linked to from this
page, and it should be up-to-date:

https://wiki.openstack.org/w/index.php?title=Project_Teams

-- 
Dan Sneddon |  Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
650.254.4025|  dsneddon:irc   @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators