Hi Yang,

> Another confusion I have is about network_vlan_ranges. Is this network VLAN
> id range?

Yes, it is. But the range is only used for tenant networks when tenant_network_types == vlan. Neutron will automatically assign a VLAN ID from this range when a user creates a network.

> If so, does it has to match external network?

No. In fact, I wouldn’t include the external VLAN ID at all. Admins can specify any VLAN ID when creating a network and are not limited to the configuration. That would leave you with this configuration:

network_vlan_ranges = physnet1

> For example, we only have one external VLAN we can use as Our provider
> network and that VLAN id is 775 (xxx.xxx.xxx.0/26). Should I define
> network_vlan_ranges as following?
>
> [ml2]
> type_drivers=vlan
> tenant_network_types = vlan
> mechanism_drivers=openvswitch
> #
> [ml2_type_vlan]
> # this tells Openstack that the internal name "physnet1" provides the vlan
> # range 100-199
> network_vlan_ranges = physnet1:775
> #

If you do specify a range of one VLAN, it would look like this:

network_vlan_ranges = physnet1:775:775

In the configuration you shared, users would be limited to creating VLAN networks. If VLAN 775 already exists as a provider network, then the tenant will get an error upon trying to create a network, since there are no VLANs left for allocation. If you don’t have any VLANs available other than 775, you’ll need to look into using GRE or VXLAN instead to overcome that limitation.

James

> On Jun 27, 2015, at 6:47 AM, YANG LI <yan...@clemson.edu> wrote:
>
> Thank you so much, James! This is so helpful.
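To make the allocation behavior concrete, here’s a minimal Python sketch (illustrative only, not Neutron’s actual code) of how a network_vlan_ranges entry acts as a tenant allocation pool: a bare physnet name contributes no tenant pool (admins can still pick any segmentation ID), while a physnet1:775:775 entry is exhausted after a single tenant network.

```python
# Minimal sketch of network_vlan_ranges behavior. Not Neutron's
# implementation; it only mirrors the "physnet" / "physnet:min:max"
# entry format from the [ml2_type_vlan] section.

def parse_vlan_range(entry):
    """Parse 'physnet' or 'physnet:min:max' into (physnet, vlan id pool)."""
    parts = entry.split(":")
    if len(parts) == 1:
        # Bare physnet: no tenant pool; admins may still create
        # provider networks on it with any segmentation ID.
        return parts[0], []
    physnet, vmin, vmax = parts
    return physnet, list(range(int(vmin), int(vmax) + 1))

class VlanPool:
    """Hands out tenant VLAN IDs from a configured range."""

    def __init__(self, entry):
        self.physnet, self.available = parse_vlan_range(entry)

    def allocate(self):
        if not self.available:
            # This is the error a tenant would hit with a one-VLAN range.
            raise RuntimeError("no VLAN IDs left for allocation")
        return self.available.pop(0)

pool = VlanPool("physnet1:775:775")
print(pool.allocate())  # -> 775; a second allocate() raises RuntimeError
```

A second `pool.allocate()` fails immediately, which is why a single-VLAN environment pushes you toward GRE or VXLAN tenant networks.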
>
> Thanks,
> Yang
>
> Sent from my iPhone
>
>> On Jun 26, 2015, at 8:54 AM, James Denton <james.den...@rackspace.com> wrote:
>>
>> You can absolutely have instances in the same network span different
>> compute nodes. As an admin, you can run ‘nova show <instanceid>’ and see
>> the host in the output:
>>
>> root@controller01:~# nova show 7bb18175-87da-4d1f-8dca-2ef07fee9d21 | grep host
>> | OS-EXT-SRV-ATTR:host | compute02 |
>>
>> That info is not available to non-admin users by default.
>>
>> James
>>
>>> On Jun 26, 2015, at 7:38 AM, YANG LI <yan...@clemson.edu> wrote:
>>>
>>> Thanks, James, for the explanation. It makes more sense now. It is
>>> possible that instances on the same tenant network reside on different
>>> compute nodes, right? How do I tell which compute node an instance is on?
>>>
>>> Thanks,
>>> Yang
>>>
>>>> On Jun 24, 2015, at 10:27 AM, James Denton <james.den...@rackspace.com> wrote:
>>>>
>>>> Hello.
>>>>
>>>>> all three nodes will have eth0 on management/api network. since I am
>>>>> using ml2 plugin with vlan for tenant network, I think all compute node
>>>>> should have eth1 as the second nic on provider network. Is this correct?
>>>>> I understand provider network is for instance to get external access to
>>>>> internet, but how is instance live on compute1 communicate with instance
>>>>> live on compute2? are they also go through provider network?
>>>>
>>>> In short, yes.
>>>> If you’re connecting instances to VLAN “provider” networks, traffic
>>>> between instances on different compute nodes will traverse the “provider
>>>> bridge”, get tagged out eth1, and hit the physical switching fabric. Your
>>>> external gateway device could also sit in that VLAN, and the default
>>>> route on the instance would direct external traffic to that device.
>>>>
>>>> In reality, every network has ‘provider’ attributes that describe the
>>>> network type, segmentation id, and bridge interface (for vlan/flat only).
>>>> So tenant networks that leverage VLANs would have provider attributes set
>>>> by Neutron automatically based on the configuration in the ML2 config
>>>> file. If you use Neutron routers that connect to both ‘tenant’ VLAN-based
>>>> networks and external ‘provider’ networks, all of that traffic could
>>>> traverse the same provider bridge on the controller/network node, but
>>>> would be tagged accordingly based on the network (i.e. VLAN 100 for the
>>>> external network, VLAN 200 for the tenant network).
>>>>
>>>> Hope that’s not too confusing!
>>>>
>>>> James
>>>>
>>>>> On Jun 24, 2015, at 8:54 AM, YANG LI <yan...@clemson.edu> wrote:
>>>>>
>>>>> I am working on installing openstack from scratch, but got confused with
>>>>> the network part. I want to have one controller node and two compute
>>>>> nodes.
>>>>>
>>>>> the controller node will only handle the following services:
>>>>> glance-api
>>>>> glance-registry
>>>>> keystone
>>>>> nova-api
>>>>> nova-cert
>>>>> nova-conductor
>>>>> nova-consoleauth
>>>>> nova-novncproxy
>>>>> nova-scheduler
>>>>> qpid
>>>>> mysql
>>>>> neutron-server
>>>>>
>>>>> compute nodes will have the following services:
>>>>> neutron-dhcp-agent
>>>>> neutron-l3-agent
>>>>> neutron-metadata-agent
>>>>> neutron-openvswitch-agent
>>>>> neutron-ovs-cleanup
>>>>> openvswitch
>>>>> nova-compute
>>>>>
>>>>> all three nodes will have eth0 on management/api network.
>>>>> since I am using the ml2 plugin with vlan for tenant networks, I think
>>>>> all compute nodes should have eth1 as the second nic on the provider
>>>>> network. Is this correct? I understand the provider network is for
>>>>> instances to get external access to the internet, but how does an
>>>>> instance living on compute1 communicate with an instance living on
>>>>> compute2? Do they also go through the provider network?
>>>>> _______________________________________________
>>>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>> Post to     : openstack@lists.openstack.org
>>>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
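A footnote to James’s ‘nova show’ tip earlier in the thread: if you need to script the host lookup, the value can be scraped from the captured table output. A small sketch below; the sample string just mimics the table format shown in James’s example (in practice you would call novaclient or the CLI directly, and the second row is only there to show the exact-match check).

```python
# Sketch: pull the compute host out of captured 'nova show' table output.
# The sample text mirrors the "| Field | Value |" format from the thread.

sample = """\
| OS-EXT-SRV-ATTR:host                | compute02 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute02 |
"""

def find_host(show_output):
    """Return the value of the OS-EXT-SRV-ATTR:host row, or None."""
    for line in show_output.splitlines():
        # Strip the outer pipes, then split the row into cells.
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) >= 2 and cells[0] == "OS-EXT-SRV-ATTR:host":
            return cells[1]
    return None

print(find_host(sample))  # -> compute02
```

Remember this field is admin-only by default, so the scrape returns nothing for output gathered as a regular tenant user.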