Re: [openstack-dev] [Zun] Propose a change of Zun core membership
+1 for both. Welcome Pradeep!

Cheers,
Sudipto

On 05/12/16 6:02 AM, Wenzhi Yu wrote:

+1, welcome Pradeep!

On Dec 4, 2016, at 10:49 PM, Qiming Teng wrote:

+1 to both.

On Wed, Nov 30, 2016 at 11:37:59PM +0000, Hongbin Lu wrote:

Hi Zun cores,

I am going to propose the following change to the Zun core reviewers team:

+ Pradeep Kumar Singh (pradeep-singh-u)
- Vivek Jain (vivek-jain-openstack)

Pradeep has proven to be a significant contributor to Zun. He ranks first in number of commits, and his patches are non-trivial and of high quality. His reviews have also been very helpful, and often prompted us to re-think the design. It would be great to have him on the core team.

I would like to thank Vivek for his interest in joining the core team when Zun was founded. However, he has become inactive in the past few months. He is welcome to re-join the core team whenever he becomes active again.

According to the OpenStack Governance process [1], we require a minimum of 4 +1 votes from Zun core reviewers within a 1-week voting window (consider this proposal a +1 vote from me). A vote of -1 is a veto. If we cannot get enough votes, or there is a veto vote prior to the end of the voting window, this proposal is rejected.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [neutron] [sahara] Unable to ping virtual machines with floating IPs
Hi,

I am trying out OpenStack Sahara based on OpenStack Mitaka (a lab experiment), using a single controller and a single compute node. The controller node runs inside a virtual machine on top of the compute node. (This runs Sahara as well.)

The controller node has two interfaces: one public via br0 (LB) on the compute node, and one private via br-ex (OVS). Both IPs are reachable from the controller to the compute host. I use the public interface as the management network.

I run the neutron-l3-agent with br-ex (configured with the 192. range) as the external_bridge on the compute host. I see that the neutron router port state for the 192. network remains in BUILD state, even though the interfaces (namespaces) are all created properly on the compute node and even the router IPs are reachable. I am running the neutron-openvswitch-agent with bridge_mappings set to default:br-ex.

I have created an external FLAT network on the controller with the same subnet range as br-ex (that is, 192.x.x.x) to use for floating IPs. The reason I did this is that I don't have free public floating IPs, hence I created a network topology that looks kind of like below:

With this setup, every time I boot a virtual machine and attach a floating IP (192. range), the IP doesn't ping. However, if I restart iptables on the compute node (which runs the l3-agent and the openvswitch agent), the floating IP becomes pingable and I can also log in to the virtual machine from either the controller or the compute node.

Can someone help me understand this behavior?

Thanks,
Sudipto
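For reference, when a floating IP is wired up correctly, the l3-agent programs NAT rules inside the router namespace along the lines of the fragment below. The addresses and the router UUID here are made-up placeholders, not taken from the setup described above:

```
# ip netns exec qrouter-<router-uuid> iptables -t nat -S | grep 192.168.122.50
-A neutron-l3-agent-OUTPUT -d 192.168.122.50/32 -j DNAT --to-destination 10.0.0.5
-A neutron-l3-agent-PREROUTING -d 192.168.122.50/32 -j DNAT --to-destination 10.0.0.5
-A neutron-l3-agent-float-snat -s 10.0.0.5/32 -j SNAT --to-source 192.168.122.50
```

If pings only start working after an iptables restart, diffing this output (and the FORWARD chain in the filter table) before and after the restart may show which rule or chain was missing or shadowed initially.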
Re: [openstack-dev] [nova][rfc] Booting docker images using nova libvirt
Thanks Devdatta/Maxime for your comments.

I am definitely not rigid about implementing the workflow in Nova, and it's well known that there can be multiple integration points for this work, including docker itself. However, there are two prime reasons why we chose Nova as the integration point in OpenStack:

1. Minimal changes to the VM boot workflow. No need to depend on Swift or any other service.
2. Faster boot-up times, since downloading the virtual machine image is avoided. Downloading the docker filesystems should be comparatively cheap.

Some comments inline.

Thanks,
Sudipto

On 27/07/16 11:59 PM, Maxime Belanger wrote:

+1 on this. Still, you lose all the great stuff about containers, but it is a first step towards a native container orchestration platform.

IMHO, it is not just about losing stuff. We are not emulating a docker workflow. The expectation is to have the ability to run a container inside a virtual machine and then take that filesystem out and run it natively on the hardware as desired. You can debate whether it's really needed in Nova or elsewhere, and I think that's a fair debate. I am sure there are further technical challenges to overcome if we want to think in this direction.

From: Devdatta Kulkarni <devdatta.kulka...@rackspace.com>
Sent: July 27, 2016 12:21:30 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][rfc] Booting docker images using nova libvirt

Hi Sudipta,

There is another approach you can consider which does not need any changes to Nova. The approach works as follows:

- Save the container image tar in Swift
- Generate a Swift tempURL for the container file
- Boot a Nova VM and pass instructions for the following steps through cloud-init / user data
- download the container file from Swift (wget)

I believe this has to be carried out for every docker image? That is, if I have an nginx image and it's provisioned twice, a fresh copy has to be wget'ed every time?
If the Nova workflow is acceptable, then there can be optimizations thought through around this. At this moment, my implementation copies the cached image for each of the containers, at least making subsequent boots faster. Also, how do you tackle the problem of snapshotting a container?

- load it (docker load)
- run it (docker run)

Do you run the docker native commands inside the virtual machine? In that case, do you actually install docker as part of the cloud-init scripts? Do you have numbers w.r.t. the boot time of the container image in this case?

We have implemented this approach in Solum (where we use Heat to deploy a VM and then run the application container on it by providing the above instructions through user_data of the HOT).

Thanks,
Devdatta

-
From: Sudipta Biswas <sbisw...@linux.vnet.ibm.com>
Sent: Wednesday, July 27, 2016 9:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][rfc] Booting docker images using nova libvirt

Premise:

While working with customers, we have realized:

- They want to use containers but are wary of using the same host kernel for multiple containers.
- They already have a significant investment (including skills) in OpenStack's virtual machine workflow and would like to re-use it as much as possible.
- They are very interested in using docker images.

There are some existing approaches, like the Hyper and Secure Containers workflows, which already try to address the first point. But we wanted to arrive at an approach that addresses all three of the above in the context of OpenStack Nova with minimal changes.

Design Considerations:

We tried a few experiments with the present libvirt driver in Nova to accomplish a workflow to deploy containers inside virtual machines in OpenStack via Nova. The fundamental premise of our approach is to run a single container encapsulated in a single VM. This VM image has just the bare minimum operating system required to run it. The container filesystem comes from the docker image.

We would like to get feedback on the approaches below from the community before proposing this as a spec or blueprint.

Approach 1

User workflow:

1. The docker image is obtained in the form of a tar file.
2. Upload this tar file to glance. This support is already there in glance, where a container-type of docker is supported.
3. Use this image along with the nova libvirt driver to deploy a virtual machine.

Following are some of the changes to the OpenStack code that implement this approach:

1. Define a new conf parameter in nova called base_vm_image=/var/lib/libvirt/images/baseimage.qcow2. This option is used to specify the base VM image.
2. Define a new sub_virt_type = container in nova conf. Setting this parameter will ensure mounting of the container filesystem inside the VM.
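The tempURL step in Devdatta's workflow can be sketched as follows. This is a minimal illustration of Swift's standard temp_url_sig scheme (an HMAC-SHA1 over the method, expiry timestamp, and object path); the account, container, and object names below are made up for the example:

```python
import hmac
from hashlib import sha1


def swift_temp_url(path, key, expires, method="GET"):
    """Sign a Swift object path so it can be fetched (e.g. via wget) without a token.

    path    -- object path, e.g. /v1/AUTH_<account>/images/nginx.tar
    key     -- the account's X-Account-Meta-Temp-URL-Key secret
    expires -- Unix timestamp after which the URL stops working
    """
    body = f"{method}\n{expires}\n{path}"
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return f"{path}?temp_url_sig={sig}&temp_url_expires={expires}"


# The VM's cloud-init user data would then wget the signed path from the
# Swift endpoint and docker-load the downloaded tar.
```

As noted inline above, a URL like this would have to be generated (and the tar re-downloaded) for every VM boot unless some caching is layered on top.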
Re: [openstack-dev] [nova][nova-docker] nova-scheduler filters configuration
On 28/07/16 12:34 PM, Yasemin DEMİRAL (BİLGEM BTE) wrote:

Hi, I am working on installing nova-docker on a multi-node OpenStack Mitaka system. How can I configure the nova-scheduler filters for the compute node and the controller node?

You should ideally have the nova-scheduler on the controller node (but it can be anywhere other than the compute nodes themselves). I am not aware of any particular scheduler filter that must be excluded for nova-docker among those that usually apply to virtual machines, but you should be able to enable the filters you need via the nova.conf on your scheduler node.

Thank you,
Yasemin Demiral
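For reference, a minimal sketch of what the filter configuration could look like in nova.conf on the scheduler node under Mitaka. The filter list here is only an illustrative default-style example, not a nova-docker-specific recommendation:

```
[DEFAULT]
# Filters the scheduler applies to every boot request, in order.
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
# Filters that are loaded and may be requested explicitly.
scheduler_available_filters = nova.scheduler.filters.all_filters
```

After editing this file, the nova-scheduler service has to be restarted for the change to take effect.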
[openstack-dev] [nova][rfc] Booting docker images using nova libvirt
Premise:

While working with customers, we have realized:

- They want to use containers but are wary of using the same host kernel for multiple containers.
- They already have a significant investment (including skills) in OpenStack's virtual machine workflow and would like to re-use it as much as possible.
- They are very interested in using docker images.

There are some existing approaches, like the Hyper and Secure Containers workflows, which already try to address the first point. But we wanted to arrive at an approach that addresses all three of the above in the context of OpenStack Nova with minimal changes.

Design Considerations:

We tried a few experiments with the present libvirt driver in Nova to accomplish a workflow to deploy containers inside virtual machines in OpenStack via Nova. The fundamental premise of our approach is to run a single container encapsulated in a single VM. This VM image has just the bare minimum operating system required to run it. The container filesystem comes from the docker image.

We would like to get feedback on the approaches below from the community before proposing this as a spec or blueprint.

Approach 1

User workflow:

1. The docker image is obtained in the form of a tar file.
2. Upload this tar file to glance. This support is already there in glance, where a container-type of docker is supported.
3. Use this image along with the nova libvirt driver to deploy a virtual machine.

Following are some of the changes to the OpenStack code that implement this approach:

1. Define a new conf parameter in nova called base_vm_image=/var/lib/libvirt/images/baseimage.qcow2. This option is used to specify the base VM image.

2. Define a new sub_virt_type = container in nova conf. Setting this parameter will ensure mounting of the container filesystem inside the VM. Unless qemu or kvm is used as virt_type, this workflow will not work at this moment.

3. In virt/libvirt/driver.py we do the following, based on sub_virt_type = container:

- We create a qcow2 disk from the base_vm_image and expose that 'disk' as the boot disk for the virtual machine. Note: this is very similar to a regular virtual machine boot, minus the fact that the image is not downloaded from glance but instead is present on the host.
- We download the docker image into the /var/lib/nova/instances/_base directory and then, for each new virtual machine boot, we create a new directory under /var/lib/nova/instances/ and copy the docker filesystem to it. Note: there are subsequent improvements to this idea that could be made, along the lines of using a union filesystem approach.
- The step above allows each virtual machine to have a different copy of the filesystem.
- We create a 'passthrough' mount of the filesystem via libvirt. This code is already present in the nova libvirt driver, and we just trigger it based on our sub_virt_type parameter.

4. A cloud-init userdata is provided that looks somewhat like this:

    runcmd:
      - mount -t 9p -o trans=virtio share_dir /mnt
      - chroot /mnt /bin/<command_to_run>

The command_to_run is usually the entrypoint for the docker image. There could be better approaches to determine the entrypoint as well (say, from the docker image metadata).

Approach 2

In this approach, the workflow remains the same as the first one, with the exception that the docker image is changed into a qcow2 image using a tool like virt-make-fs before uploading it to glance, instead of a tar file. A tool like virt-make-fs can convert a tar file to a qcow2 image very easily. This image is then downloaded onto the compute node, and a qcow2 disk is created and attached to the virtual machine, which boots using the base_vm_image.

Approach 3

A custom qcow2 image is created using a kernel, an initramfs and the docker image, and uploaded to glance. No changes are needed in OpenStack Nova. It boots as a regular VM.
Changes will be needed in image generation tools, and this will involve a few additional tasks from an operator's point of view.

I look forward to your comments/suggestions on the above.

Thanks,
Sudipto
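As an illustration of the 'passthrough' mount mentioned in Approach 1, the libvirt domain XML the driver would generate could look roughly like the fragment below. The source directory and the instance-uuid placeholder are assumptions for this sketch; the mount tag 'share_dir' matches the tag used in the 9p mount line of the cloud-init userdata:

```
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/var/lib/nova/instances/<instance-uuid>/rootfs'/>
  <target dir='share_dir'/>
</filesystem>
```

Inside the guest, this export is then mounted with `mount -t 9p -o trans=virtio share_dir /mnt`, which is what the runcmd step does.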