[openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-16 Thread yongquan Fu
Dear all,



 We would like to present an extension to the VM-booting functionality of
Nova for the case where a number of homogeneous VMs need to be launched at the same time.



The motivation for our work is to increase the speed of provisioning VMs
for large-scale scientific computing and big data processing. In those cases,
we often need to boot tens or hundreds of virtual machine instances at the
same time.


Currently, in OpenStack, we found that creating a large number
of virtual machine instances is very time-consuming. The reason is that the
booting procedure is a centralized operation that involves performance
bottlenecks. Before a virtual machine can actually be started, OpenStack
either copies the image file (swift) or attaches the image volume (cinder)
from the storage server to the compute node via the network. Booting a single
VM needs to read a large amount of image data from the image storage server,
so creating a large number of virtual machine instances causes a significant
workload on those servers. The servers become quite busy, or even unavailable,
during the deployment phase, and it takes a very long time before the whole
virtual machine cluster is usable.



  Our extension is based on our work on VMThunder, a novel mechanism for
accelerating the deployment of a large number of virtual machine instances.
It is written in Python and can be integrated with OpenStack easily. VMThunder
addresses the problem described above with the following improvements:
on-demand transferring (network-attached storage), compute-node caching, P2P
transferring, and prefetching. VMThunder is a scalable and cost-effective
accelerator for bulk provisioning of virtual machines.
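
To make the idea concrete, here is a minimal, illustrative sketch of how the
four techniques combine on the read path of a booting VM. This is not
VMThunder's actual code; all class and method names are hypothetical.

import threading

CHUNK = 1024 * 1024  # transfer granularity: 1 MiB


class OnDemandImage(object):
    """Read path: local cache -> nearby peers (P2P) -> origin storage."""

    def __init__(self, local_cache, peers, origin):
        self.cache = local_cache   # block-level cache on the compute node
        self.peers = peers         # nearby compute nodes holding chunks (P2P)
        self.origin = origin       # the image/volume storage server

    def read(self, offset, length):
        # For brevity, assume the read does not cross a chunk boundary.
        chunk_id = offset // CHUNK
        data = self._get_chunk(chunk_id)
        # Prefetch the next chunk in the background; boot-time I/O is mostly
        # sequential, so this hides transfer latency.
        threading.Thread(target=self._get_chunk, args=(chunk_id + 1,)).start()
        start = offset - chunk_id * CHUNK
        return data[start:start + length]

    def _get_chunk(self, chunk_id):
        data = self.cache.get(chunk_id)      # on-compute-node caching
        if data is None:
            # Ask peers first so the origin server is not a bottleneck.
            data = self._fetch_from_peers(chunk_id) or self.origin.fetch(chunk_id)
            self.cache.put(chunk_id, data)
        return data

    def _fetch_from_peers(self, chunk_id):
        for peer in self.peers:
            data = peer.try_fetch(chunk_id)  # returns None on a miss
            if data is not None:
                return data
        return None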



  We hope to receive your feedback. Any comments are extremely welcome.
Thanks in advance.



PS:



VMThunder enhanced nova blueprint:
https://blueprints.launchpad.net/nova/+spec/thunderboost

 VMThunder standalone project: https://launchpad.net/vmthunder

 VMThunder prototype: https://github.com/lihuiba/VMThunder

 VMThunder etherpad: https://etherpad.openstack.org/p/vmThunder

 VMThunder portal: http://www.vmthunder.org/

VMThunder paper: http://www.computer.org/csdl/trans/td/preprint/06719385.pdf



  Regards



  vmThunder development group

  PDL

  National University of Defense Technology


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-16 Thread Zhi Yan Liu
Hello Yongquan Fu,

My thoughts:

1. Nova already supports an image caching mechanism. It caches the image on
the compute host that a VM was provisioned from before, so the next
provisioning (booting the same image) doesn't need to transfer it again,
unless the cache manager has cleaned it up.
2. P2P transferring and prefetching are still based on a copy mechanism;
IMHO, a zero-copy approach is better, and even transferring/prefetching
could be optimized by such an approach. (I have not checked VMThunder's
"on-demand transferring", but it is a kind of transferring as well, at
least going by its literal meaning.)
And btw, IMO, there are two ways we can go to follow the zero-copy idea:
a. when Nova and Glance use the same backend storage, we could use the
storage backend's special CoW/snapshot approach to prepare the VM disk
instead of copying/transferring image bits (through HTTP/network or local
copy).
b. without "unified" storage, we could attach a volume/LUN to the compute
node from the backend storage as a base image, then do such a CoW/snapshot
on it to prepare the root/ephemeral disk of the VM. This is just like
boot-from-volume, but the difference is that we do the CoW/snapshot on the
Nova side instead of the Cinder/storage side.
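
As a minimal sketch of option #b (the device path is made up, and it assumes
a LUN from the backend storage has already been attached to the compute
node), a qcow2 overlay created with qemu-img can provide the Nova-side CoW
layer:

import subprocess

# Hypothetical path of the attached base-image LUN (read-only, shared).
base_lun = "/dev/disk/by-path/ip-10.0.0.5:3260-iscsi-iqn.2014-04.org.example:base-lun-0"
root_disk = "/var/lib/nova/instances/instance-0001/disk"

subprocess.check_call([
    "qemu-img", "create", "-f", "qcow2",
    "-o", "backing_file=%s,backing_fmt=raw" % base_lun,  # CoW over the LUN
    root_disk,
])
# Writes land in the local qcow2 overlay; reads of untouched blocks fall
# through to the attached LUN, so no image bits are copied up front.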

For option #a, we have already got some progress:
https://blueprints.launchpad.net/nova/+spec/image-multiple-location
https://blueprints.launchpad.net/nova/+spec/rbd-clone-image-handler
https://blueprints.launchpad.net/nova/+spec/vmware-clone-image-handler

Under the #b approach, my former experience from a previous similar
cloud deployment (not OpenStack) was that with 2 PC-server storage
nodes (plain *local SAS disks*, without any storage backend) +
2-way/multi-path iSCSI + 1G network bandwidth, we could provision 500
VMs in a minute.

As for the VMThunder topic, I think it sounds like a good idea; IMO P2P and
prefetching are valuable optimizations for image transferring.

zhiyan

On Wed, Apr 16, 2014 at 9:14 PM, yongquan Fu  wrote:
> [quoted message trimmed]



Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread lihuiba
>IMHO, zero-copy approach is better
VMThunder's "on-demand transferring" is the same thing as your "zero-copy
approach". VMThunder uses iSCSI as the transferring protocol, which is option
#b of yours.




>Under #b approach, my former experience from our previous similar
>Cloud deployment (not OpenStack) was that: under 2 PC server storage
>nodes (general *local SAS disk*, without any storage backend) +
>2-way/multi-path iSCSI + 1G network bandwidth, we can provisioning 500
>VMs in a minute.

Suppose booting one instance requires reading 300MB of data; then 500 of them
require 150GB. Each of the two storage servers needs to send data at a rate of
150GB/2/60 = 1.25GB/s on average. This is an extremely heavy burden even
for high-end storage appliances. In production systems, such a request (booting
500 VMs in one shot) would significantly disturb other running instances
accessing the same storage nodes.
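
For reference, the arithmetic behind that figure, with the assumed inputs
made explicit:

vms = 500
gb_per_vm = 0.3        # ~300MB of image data read per boot
servers = 2
seconds = 60           # "500 VMs in a minute"

rate = vms * gb_per_vm / servers / seconds
print("%.2f GB/s per storage server" % rate)   # -> 1.25 GB/s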


VMThunder eliminates this problem with P2P transferring and on-compute-node
caching. Even a PC server with one 1Gb NIC (a true PC server!) can boot
500 VMs in a minute with ease. For the first time, VMThunder makes bulk
provisioning of VMs practical for production cloud systems. This is the
essential value of VMThunder.








===
From: Zhi Yan Liu 
Date: 2014-04-17 0:02 GMT+08:00
Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process 
of a number of vms via VMThunder
To: "OpenStack Development Mailing List (not for usage questions)" 



[quoted message trimmed]

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Zhi Yan Liu
On Thu, Apr 17, 2014 at 4:41 PM, lihuiba  wrote:
>>IMHO, zero-copy approach is better
> VMThunder's "on-demand transferring" is the same thing as your "zero-copy
> approach". VMThunder uses iSCSI as the transferring protocol, which is
> option #b of yours.
>

IMO we'd better use a backend-storage-optimized approach to access
remote images from compute nodes instead of using iSCSI only. And from
my experience, I'm sure iSCSI is short on stability under heavy I/O
workload in production environments; it can cause either the VM filesystem
to be marked as read-only or a VM kernel panic.

>
>>Under the #b approach, my former experience from a previous similar
>>cloud deployment (not OpenStack) was that with 2 PC-server storage
>>nodes (plain *local SAS disks*, without any storage backend) +
>>2-way/multi-path iSCSI + 1G network bandwidth, we could provision 500
>>VMs in a minute.
> Suppose booting one instance requires reading 300MB of data; then 500 of
> them require 150GB. Each of the two storage servers needs to send data at a
> rate of 150GB/2/60 = 1.25GB/s on average. This is an extremely heavy burden
> even for high-end storage appliances. In production systems, such a request
> (booting 500 VMs in one shot) would significantly disturb other running
> instances accessing the same storage nodes.
>
> VMThunder eliminates this problem with P2P transferring and on-compute-node
> caching. Even a PC server with one 1Gb NIC (a true PC server!) can boot
> 500 VMs in a minute with ease. For the first time, VMThunder makes bulk
> provisioning of VMs practical for production cloud systems. This is the
> essential value of VMThunder.
>

As I said, currently Nova already has an image caching mechanism, so in
this case P2P is just an approach that could be used for downloading or
preheating the image cache.

I think P2P transferring/pre-caching sounds like a good way to go, as I
mentioned as well, but for this area I'd actually like to see something
like zero-copy + CoR (copy-on-read). On the one hand, we can leverage the
capability of downloading image bits on demand via the zero-copy approach;
on the other hand, we can avoid reading data from the remote image every
time via CoR.

zhiyan

>
> [quoted message trimmed]

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Jesse Pretorius
On 17 April 2014 11:11, Zhi Yan Liu  wrote:

> As I said, currently Nova already has an image caching mechanism, so in
> this case P2P is just an approach that could be used for downloading or
> preheating the image cache.
>
> I think P2P transferring/pre-caching sounds like a good way to go, as I
> mentioned as well, but for this area I'd actually like to see something
> like zero-copy + CoR (copy-on-read). On the one hand, we can leverage the
> capability of downloading image bits on demand via the zero-copy approach;
> on the other hand, we can avoid reading data from the remote image every
> time via CoR.
>

This whole discussion reminded me of this:

https://blueprints.launchpad.net/glance/+spec/glance-bittorrent-delivery
http://tropicaldevel.wordpress.com/2013/01/11/an-image-transfers-service-for-openstack/

The general idea was that Glance would be able to serve images through
torrents, enabling the capability for compute hosts to participate in image
delivery. Well, the second part was where I thought it was going - I'm not
sure if that was the intention.

It didn't seem to go anywhere, but I thought it was a nifty idea.


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread lihuiba
>IMO we'd better use a backend-storage-optimized approach to access
>remote images from compute nodes instead of using iSCSI only. And from
>my experience, I'm sure iSCSI is short on stability under heavy I/O
>workload in production environments; it can cause either the VM filesystem
>to be marked as read-only or a VM kernel panic.

Yes, in this situation the problem lies in the backend storage, so no other
protocol would perform better. However, P2P transferring greatly reduces the
workload on the backend storage, and so increases responsiveness.


>As I said, currently Nova already has an image caching mechanism, so in
>this case P2P is just an approach that could be used for downloading or
>preheating the image cache.

Nova's image caching is file-level, while VMThunder's is block-level. Also,
VMThunder works in conjunction with Cinder, not Glance. VMThunder
currently uses Facebook's flashcache to implement caching; dm-cache and
bcache are also options for the future.



>I think P2P transferring/pre-caching sounds like a good way to go, as I
>mentioned as well, but for this area I'd actually like to see something
>like zero-copy + CoR (copy-on-read). On the one hand, we can leverage the
>capability of downloading image bits on demand via the zero-copy approach;
>on the other hand, we can avoid reading data from the remote image every
>time via CoR.

Yes, on-demand transferring is what you mean by "zero-copy", and caching
is something close to CoR. In fact, we are working on a kernel module called
foolcache that realizes true CoR. See https://github.com/lihuiba/dm-foolcache.
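
For intuition, here is a minimal Python sketch of the copy-on-read idea
(illustrative only, not the actual dm-foolcache kernel module): the first
read of a block is served from the remote base device and persisted locally,
so later reads stay local.

class CopyOnReadDevice(object):
    def __init__(self, remote_dev, local_dev, block_size=4096):
        self.remote = remote_dev      # e.g. an iSCSI-attached base image
        self.local = local_dev        # local disk/SSD acting as the cache
        self.bs = block_size
        self.cached = set()           # real CoR keeps a persistent bitmap

    def read_block(self, n):
        if n in self.cached:
            self.local.seek(n * self.bs)
            return self.local.read(self.bs)
        self.remote.seek(n * self.bs)
        data = self.remote.read(self.bs)
        self.local.seek(n * self.bs)  # populate the cache on first read
        self.local.write(data)
        self.cached.add(n)
        return data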







National Key Laboratory for Parallel and Distributed
Processing, College of Computer Science, National University of Defense
Technology, Changsha, Hunan Province, P.R. China
410073

At 2014-04-17 17:11:48,"Zhi Yan Liu"  wrote:
>[quoted message trimmed]

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread lihuiba
glance-bittorrent-delivery and VMThunder have similar goals: fast provisioning
of a large number of VMs. They share some ideas, like P2P transferring, but
they use different techniques.


VMThunder only downloads the data blocks that are really used by VMs, so as to
reduce the bandwidth and time required for provisioning. We have experiments
showing that only a few hundred MB of data is needed to boot a mainstream OS
like CentOS 6.x, Ubuntu 12.04, Windows 2008, etc., while the images are GBs or
even tens of GBs in size.



National Key Laboratory for Parallel and Distributed
Processing, College of Computer Science, National University of Defense
Technology, Changsha, Hunan Province, P.R. China
410073



At 2014-04-17 19:06:27, "Jesse Pretorius" wrote:

[quoted message trimmed]


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Michael Still
If you'd like to have a go at implementing this in nova's Juno
release, then you need to create a new-style blueprint in the
nova-specs repository. You can find more details about that process at
https://wiki.openstack.org/wiki/Blueprints#Nova

Some initial thoughts though, some of which have already been brought up:

 - _some_ libvirt drivers already have image caching. I am unsure if
all of them do, I'd have to check.

 - we already have blueprints for better support of glance multiple
image locations, it might be better to extend that work than to do
something completely separate.

 - the xen driver already does bittorrent image delivery IIRC; you
could take a look at how they do that.

 - pre-caching images has been proposed for libvirt for a long time,
but never implemented. I think that's definitely something of interest
to deployers.

Cheers,
Michael

On Wed, Apr 16, 2014 at 11:14 PM, yongquan Fu  wrote:
>
> [quoted message trimmed]



-- 
Rackspace Australia



Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Zhi Yan Liu
[message truncated in the archive; the surviving body is quoted text from
earlier in the thread, trimmed]

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Zhi Yan Liu
On Fri, Apr 18, 2014 at 5:19 AM, Michael Still  wrote:
> If you'd like to have a go at implementing this in nova's Juno
> release, then you need to create a new-style blueprint in the
> nova-specs repository. You can find more details about that process at
> https://wiki.openstack.org/wiki/Blueprints#Nova
>
> Some initial thoughts though, some of which have already been brought up:
>
>  - _some_ libvirt drivers already have image caching. I am unsure if
> all of them do, I'd have to check.
>

Thanks for the clarification.

>  - we already have blueprints for better support of glance multiple
> image locations, it might be better to extend that work than to do
> something completely separate.
>

Totally agreed. And I think there are currently (at least) two places that
could be leveraged:

1. Making this an image download plug-in for Nova, either built-in or
independent. I prefer to go this way, but we need to make sure its context is
sufficient for your case. (A rough skeleton follows below.)
2. Making this a built-in or independent image handler plug-in, as part of
the ongoing multiple-image-locations support that Michael mentions here.
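
A hypothetical skeleton of approach #1; the entry-point name and the method
signature here are assumptions for illustration, not Nova's confirmed plug-in
contract:

class P2PDownloadHandler(object):
    """Fetch image bits from peer compute nodes, falling back to Glance."""

    def download(self, context, url_parts, dst_path, metadata, **kwargs):
        # 1. Ask nearby compute nodes (or a torrent swarm) for the image.
        # 2. Fall back to the regular Glance HTTP download on a miss.
        # 3. Write the bits to dst_path for the image cache to pick up.
        raise NotImplementedError("sketch only")


def get_download_handler(**kwargs):
    # A plug-in like this would be registered under an entry point such as
    # 'nova.image.download.modules' (name assumed) in setup.cfg.
    return P2PDownloadHandler()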

zhiyan

>  - the xen driver already does bittorrent image delivery IIRC; you
> could take a look at how they do that.
>
>  - pre-caching images has been proposed for libvirt for a long time,
> but never implemented. I think that's definitely something of interest
> to deployers.
>
> Cheers,
> Michael
>
> On Wed, Apr 16, 2014 at 11:14 PM, yongquan Fu  wrote:
>>
>> [quoted message trimmed]



Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread lihuiba
>>>> Suppose booting one instance requires reading 300MB of data; then 500 of
>>>> them require 150GB. Each of the two storage servers needs to send data at
>>>> a rate of 150GB/2/60 = 1.25GB/s on average. This is an extremely heavy
>>>> burden even for high-end storage appliances. In production systems, such
>>>> a request (booting 500 VMs in one shot) would significantly disturb other
>>>> running instances accessing the same storage nodes.
>
>btw, I believe the case/numbers is not true as well, since remote
>image bits could be loaded on-demand instead of loading them all at boot
>stage.
>
>zhiyan

[message truncated in the archive; the remainder of the quoted text is trimmed]

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread lihuiba
> - _some_ libvirt drivers already have image caching. I am unsure if
>all of them do, I'd have to check.
'$instances_path/_base' is used to cache images downloaded from glance, at
file level, while VMThunder employs fine-grained block-level caching for
volumes served by cinder.
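
Illustrative only: how a block-level cache could front a cinder-served iSCSI
volume on the compute node using dm-cache. The device paths and sizes are
made up, and the dm-cache table syntax should be checked against the kernel
documentation before use.

import subprocess

origin = "/dev/disk/by-path/ip-10.0.0.5:3260-iscsi-iqn.2014-04.org.example:vol-0"
ssd_meta, ssd_data = "/dev/ssd1", "/dev/ssd2"  # cache metadata + cache blocks
sectors = 41943040                             # origin size: 20GiB in 512B sectors

table = "0 %d cache %s %s %s 512 1 writethrough default 0" % (
    sectors, ssd_meta, ssd_data, origin)
subprocess.check_call(["dmsetup", "create", "cached-vol0", "--table", table])
# Reads populate the SSD cache block by block; repeated boots from the same
# base volume then hit the local SSD instead of the storage server.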

> - we already have blueprints for better support of glance multiple
>image locations, it might be better to extend that work than to do
>something completely separate.
Is there a cinder equivalent, i.e. multiple volume locations? We are
considering supporting something like that.

> - the xen driver already does bittorrent image delivery IIRC; you
>could take a look at how they do that.
We are trying to do bittorrent image delivery for libvirt, too.

> - pre-caching images has been proposed for libvirt for a long time,
>but never implemented. I think that's definitely something of interest
>to deployers.

What is pre-caching? Deploying images to compute nodes before they 
are used?



Huiba Li
National Key Laboratory for Parallel and Distributed
Processing, College of Computer Science, National University of Defense
Technology, Changsha, Hunan Province, P.R. China
410073





At 2014-04-18 05:19:23,"Michael Still"  wrote:
>[quoted message trimmed]

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Zhi Yan Liu
[message truncated in the archive; the surviving body is quoted text from
earlier in the thread, trimmed]

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-18 Thread Zhi Yan Liu
>>>>btw, we are doing some works to make Glance integrate Cinder as a
>>>>unified block storage backend.

>>>>> Yes, on-demand transferring is what you mean by "zero-copy", and caching
>>>>> is something close to CoR. In fact, we are working on a kernel module
>>>>> called foolcache that realizes true CoR. See
>>>>> https://github.com/lihuiba/dm-foolcache.

>>>>Yup. And it's really interesting to me, will take a look, thanks for
>>>>sharing.

[message truncated in the archive; the remainder of the quoted text is trimmed]

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-22 Thread Jay Pipes
Hi Vincent, Zhi, Huiba, sorry for the delayed response. See comments inline.

On Tue, 2014-04-22 at 10:59 +0800, Sheng Bo Hou wrote:
> I actually support the idea Huiba has proposed, and I am thinking of
> how to optimize the large data transfer(for example, 100G in a short
> time) as well. 
> I registered two blueprints in nova-specs, one is for an image upload
> plug-in to upload the image to
> glance(https://review.openstack.org/#/c/84671/), the other is a data
> transfer plug-in(https://review.openstack.org/#/c/87207/) for data
> migration among nova nodes. I would like to see other transfer
> protocols, like FTP, bitTorrent, p2p, etc, implemented for data
> transfer in OpenStack besides HTTP. 
> 
> Data transfer may have many use cases. I summarize them into two
> categories. Please feel free to comment on them.
> 1. The machines are located in one network, e.g. one domain, one
> cluster, etc. The characteristic is the machines can access each other
> directly via the IP addresses(VPN is beyond consideration). In this
> case, data can be transferred via iSCSI, NFS, and definitive zero-copy
> as Zhiyan mentioned. 
> 2. The machines are located in different networks, e.g. two data
> centers, two firewalls, etc. The characteristic is the machines can
> not access each other directly via the IP addresses(VPN is beyond
> consideration). The machines are isolated, so they can not be
> connected with iSCSI, NFS, etc. In this case, data have to go via the
> protocols, like HTTP, FTP, p2p, etc. I am not sure whether zero-copy
> can work for this case. Zhiyan, please help me with this doubt. 
> 
> I guess for data transfer, including image downloading, image
> uploading, live migration, etc., OpenStack needs to take into account
> the above two categories of data transfer.

For live migration, we use shared storage so I don't think it's quite
the same as getting/putting image bits from/to arbitrary locations.

>  It is hard to say that one protocol is better than another, or that one
> approach prevails over another (BitTorrent is very cool, but if there is
> only one source and only one target, it would not be much faster than
> a direct FTP). The key is the use
> case(FYI:http://amigotechnotes.wordpress.com/2013/12/23/file-transmission-with-different-sharing-solution-on-nas/).

Right, a good solution would allow for some flexibility via multiple
transfer drivers.

> Jay Pipes has suggested we figure out a blueprint for a separate
> library dedicated to the data(byte) transfer, which may be put in oslo
> and used by any projects in need (Hoping Jay can come in:-)). Huiba,
> Zhiyan, everyone else, do you think we come up with a blueprint about
> the data transfer in oslo can work?

Yes, so I believe the most appropriate solution is to create a library
-- in oslo or as a standalone library like taskflow -- that offers
simple byte streaming and could be used by nova.image to expose
a neat and clean task-based API.

Right now, there is a bunch of random image transfer code spread
throughout nova.image, and each of the virt drivers seems to have a
different re-implementation of similar functionality. I propose we
clean all that up and have nova.image expose an API so that a virt
driver could do something like this:

from nova.image import api as image_api

...

task = image_api.copy(from_path_or_uri, to_path_or_uri)
# do some other work
copy_task_result = task.wait()

Within nova.image.api.copy(), we would use the aforementioned transfer
library to move the image bits from the source to the destination using
the most appropriate method.
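
A minimal sketch (with assumed names; no such library exists yet) of how that
transfer library could dispatch to per-scheme drivers behind the task API
above:

import urlparse   # Python 2, matching the codebase of the time

_DRIVERS = {}     # scheme -> callable(src, dst)

def register(scheme, driver):
    _DRIVERS[scheme] = driver

class CopyTask(object):
    def __init__(self, src, dst):
        self.src, self.dst = src, dst

    def wait(self):
        scheme = urlparse.urlparse(self.src).scheme or "file"
        driver = _DRIVERS[scheme]           # e.g. http, ftp, bittorrent, iscsi
        return driver(self.src, self.dst)   # real code would run this in a pool

def copy(from_path_or_uri, to_path_or_uri):
    # What nova.image.api.copy() could delegate to.
    return CopyTask(from_path_or_uri, to_path_or_uri)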

Best,
-jay




Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-22 Thread lihuiba
>For live migration, we use shared storage so I don't think it's quite
>the same as getting/putting image bits from/to arbitrary locations.
With a good zero-copy transfer lib, live migration support can be
extended to non-shared storage, or cross-datacenter. It's a kind of
added value.



>task = image_api.copy(from_path_or_uri, to_path_or_uri)
># do some other work
>copy_task_result = task.wait()

+1  looks cool!
how about zero-copying?






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-23 Thread Jay Pipes
On Wed, 2014-04-23 at 13:56 +0800, lihuiba wrote:
> >For live migration, we use shared storage so I don't think it's quite
> >the same as getting/putting image bits from/to arbitrary locations.
> With a good zero-copy transfer lib, live migration support can be 
> extended to non-shared storage, or cross-datacenter. It's a kind of
> added value.

Hmm, I totally see the value of doing this. Not sure that there could be
the same kinds of "liveness" guarantees with non-shared-storage, but I
am certainly happy to see a proof of concept in this area! :)

> >task = image_api.copy(from_path_or_uri, to_path_or_uri)
> ># do some other work
> >copy_task_result = task.wait()
> +1  looks cool!
> how about zero-copying?

It would be an implementation detail within nova.image.api.copy()
function (and the aforementioned "image bits mover library") :)

The key here is to leak as little implementation detail as possible out
of the nova.image.api module.
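
For instance, a hedged sketch of how copy() might hide that choice: use
a zero-copy clone when both ends live on the same backend, and fall back
to a streamed transfer otherwise. The rbd:// scheme check and the two
helper stubs are assumptions for illustration only:

from urllib.parse import urlparse


def _clone_task(src, dst):
    return 'clone %s -> %s' % (src, dst)   # placeholder for a storage clone


def _stream_task(src, dst):
    return 'stream %s -> %s' % (src, dst)  # placeholder for a byte transfer


def _same_backend(src_uri, dst_uri):
    # Assumption: URIs like rbd://cluster/pool/image; when both sides
    # share one backend, a CoW clone can stand in for a byte copy.
    s, d = urlparse(src_uri), urlparse(dst_uri)
    return s.scheme == d.scheme == 'rbd' and s.netloc == d.netloc


def copy(src_uri, dst_uri):
    if _same_backend(src_uri, dst_uri):
        return _clone_task(src_uri, dst_uri)   # zero-copy: no bits move
    return _stream_task(src_uri, dst_uri)      # full copy (HTTP/FTP/P2P/...)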

Best,
-jay


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-23 Thread Zhang Zhaoning
> For live migration, we use shared storage so I don't think it's quite the 
> same as getting/putting image bits from/to arbitrary locations.

Hi, I'm the first author of the TPDS paper on VMThunder.

Based on the same techniques and architecture as VMThunder, we have
developed a new live migration method. Like VMThunder, this new live
migration method is a storage-level solution and can support several
popular hypervisors without modifying them.

Technically, this method is a pre-copy and post-copy hybrid, based on
previous work of ours on live storage migration (DLSM: Decoupled Live
Storage Migration with Distributed Device Mapper Storage, accepted by
the 8th IEEE International Symposium on Service-Oriented System
Engineering (SOSE), Oxford, U.K., April 2014).

We are now working on this method and preparing to submit a new paper
to an upcoming conference.

In summary, I think the architecture of VMThunder is a powerful
platform that can support several popular IaaS features, such as
large-scale provisioning, bulk data dissemination, live migration, and
live spawning. (VM cluster live spawning is also ongoing work of ours,
still under research.)


Regards,

Thank you!

~-~-~-~-~-~-~-~-~-~-~
Brian Zhaoning Zhang
PhD Cand.
PDL, NUDT
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-23 Thread Zhi Yan Liu
First, after discussing with Vincent, I'm sure that the "zero-copying"
we talked about in the mail threads above is different from
"full-copying"/transferring. Transferring/full-copying means image bits
duplication; it focuses on using some method (P2P, FTP, HTTP, etc.) to
accelerate image bits replication and transfer between
Glance/backend-storage and the Nova compute host. Zero-copying means NO
bits are duplicated or transferred between those two sides while the
image is prepared for VM provisioning, so the latter focuses on a) how
to attach a remote image volume/disk within the Glance-managed
backend-storage from the Nova compute host directly, b) making the VM's
root disk from the remote template image based on the particular
hypervisor and storage technology, and c) preventing image bits from
being uploaded in the VM snapshot/capture case. They are totally
different to me. (refer: review comments in
https://review.openstack.org/#/c/84671/ )

Second, on the implementation level, putting all image handling related
code into the nova.image namespace sounds neat, but I think it cannot
work (the "leak" point from the last mail). IMO, the
transferring/full-copying logic is more applicable to the nova.image
namespace; such a transferring approach can be implemented on top of
the existing download module plugin structure, e.g. P2P, FTP. But for
zero-copying, given my points above, I consider that implementing it in
nova.virt plus the per-hypervisor virt driver modules makes more sense,
since it's more related to the particular hypervisor and/or storage
technology. (refer: inline comments in
https://review.openstack.org/#/c/86583/6/specs/juno/image-multiple-location.rst
)
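
As one concrete illustration of point b) above, on storage that supports
copy-on-write a virt driver can build the VM's root disk as an overlay
on the template image, so no image bits are duplicated up front. The
qemu-img invocation below is a standard one, but its use here is only a
sketch; the paths and the raw backing format are assumptions:

import subprocess


def make_cow_root_disk(template_path, disk_path):
    # Create a qcow2 overlay whose reads fall through to the template;
    # only the blocks the guest writes end up stored in disk_path.
    subprocess.check_call([
        'qemu-img', 'create', '-f', 'qcow2',
        '-o', 'backing_file=%s,backing_fmt=raw' % template_path,
        disk_path,
    ])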

zhiyan

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-24 Thread Sheng Bo Hou
Right, our discussion ends up with two directions to access the image: 
full-copying (upload and download the whole image) and zero-copying 
(remotely access the image and copy on demand).
For image transfer via "full-copying", the nova.image module works,
because we only care about how the data (bytes/image) is going to be
sent. It does not matter what the hypervisor is or what the storage is.
I think we can take the library approach with task =
image_api.copy(from_path_or_uri, to_path_or_uri) as Jay proposed.
For image transfer via "zero-copying", as Zhi Yan mentioned, it depends
on the hypervisor and the backend storage. nova.virt is able to access
the hypervisor and storage context, but nova.image is not. The image
does not have to be in the same location where the VM is launched. We
care about how the data (image) is accessed (how the VM is launched).
Each of them needs its own module, and it is hard to merge them.
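
A minimal sketch of that split, with every name assumed for illustration
(these are not actual Nova classes): the generic image API only moves
bytes, while the virt driver decides per image whether it can skip the
transfer entirely:

class GenericImageAPI(object):
    """Hypervisor-agnostic full-copy path: it only moves bytes."""

    def copy(self, src_uri, dst_path):
        raise NotImplementedError("HTTP/FTP/P2P transfer would go here")


class ExampleVirtDriver(object):
    """Hypervisor-specific path: may satisfy a request with no transfer."""

    def __init__(self, image_api):
        self.image_api = image_api

    def prepare_root_disk(self, image_uri, disk_path):
        if self._can_zero_copy(image_uri):
            self._clone_from_backend(image_uri, disk_path)  # CoW, no bytes
        else:
            self.image_api.copy(image_uri, disk_path)       # full transfer

    def _can_zero_copy(self, image_uri):
        return image_uri.startswith('rbd://')  # assumption for illustration

    def _clone_from_backend(self, image_uri, disk_path):
        pass  # a backend-specific snapshot/clone would go here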

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCN  E-mail: sb...@cn.ibm.com 
Address: 3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C. 100193



Zhi Yan Liu  
2014/04/24 14:30
Please respond to
"OpenStack Development Mailing List \(not for usage questions\)" 



To
"OpenStack Development Mailing List (not for usage questions)" 
, 
cc

Subject
Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a 
number of vms via VMThunder






First, after the discussing between I and Vincent, I'm sure what we
talked for "zero-copying" in above mail threads is different with
"full-copying"/transferring. The transferring/full-copying means image
bits duplication, it focuses on using some method to accelerate image
bits replication and transferring between Glance/backend-storage and
Nova compute host, like P2P, FTP, HTTP, etc., but zero-copying means
there is NO any bits duplication and transferring happened between
those two sides during the image preparing for VM provisioning, so the
latter one focuses on a) how to attaching remote image volume/disk
within Glance managed backend-storage from Nova compute host directly,
and b) making the VM's root disk from remote template image  based on
such hypervisor and particular storage technology, btw c) preventing
image bits uploading for VM snapshot/capture case. They are totally
different to me. (refer: review comments in
https://review.openstack.org/#/c/84671/ )

Second, on the implementation level, I consider to put all image
handling related code into nova.image namespace sounds neat but I
think it can not work (the "leak" as last mail side here). IMO, the
transferring/full-copying logic is more applicable for nova.image
namespace, such transferring approach can be implemented based on
existing download module plugins structure, e.g. P2P, FTP, but for the
zero-copying, regarding to my above points of view, I consider to
implement it in nova.virt + nova.virt. is make more sense,
since it's more related with particular hypervisor and/or storage
technology. (refer: inline comments in
https://review.openstack.org/#/c/86583/6/specs/juno/image-multiple-location.rst

)

zhiyan

On Wed, Apr 23, 2014 at 11:02 PM, Jay Pipes  wrote:
> On Wed, 2014-04-23 at 13:56 +0800, lihuiba wrote:
>> >For live migration, we use shared storage so I don't think it's quite
>> >the same as getting/putting image bits from/to arbitrary locations.
>> With a good zero-copy transfer lib, live migration support can be
>> extended to non-shared storage, or cross-datacenter. It's a kind of
>> value.
>
> Hmm, I totally see the value of doing this. Not sure that there could be
> the same kinds of "liveness" guarantees with non-shared-storage, but I
> am certainly happy to see a proof of concept in this area! :)
>
>> >task = image_api.copy(from_path_or_uri, to_path_or_uri)
>> ># do some other work
>> >copy_task_result = task.wait()
>> +1  looks cool!
>> how about zero-copying?
>
> It would be an implementation detail within nova.image.api.copy()
> function (and the aforementioned "image bits mover library") :)
>
> The key here is to leak as little implementation detail out of the
> nova.image.api module
>
> Best,
> -jay
>
>> At 2014-04-23 07:21:27,"Jay Pipes"  wrote:
>> >Hi Vincent, Zhi, Huiba, sorry for delayed response. See comments 
inline.
>> >
>> >On Tue, 2014-04-22 at 10:59 +0800, Sheng Bo Hou wrote:
>> >> I actually support the idea Huiba has proposed, and I am thinking of
>> >&g

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-24 Thread Christopher Lefelhocz
Apologies for coming to this discussion late...

On 4/22/14 6:21 PM, "Jay Pipes"  wrote:

>
>Right, a good solution would allow for some flexibility via multiple
>transfer drivers.

+1. In particular I don't think this discussion should degenerate into
zero-copy vs. pre-caching.  I see both as possible solutions depending on
deployer/environment needs.

>
>> Jay Pipes has suggested we figure out a blueprint for a separate
>> library dedicated to the data (byte) transfer, which may be put in oslo
>> and used by any project in need (hoping Jay can come in :-)). Huiba,
>> Zhiyan, everyone else, do you think a blueprint about data transfer
>> in oslo can work?
>
>Yes, so I believe the most appropriate solution is to create a library
>-- in oslo or standalone, like taskflow -- that would offer simple byte
>streaming primitives that could be used by nova.image to expose a neat
>and clean task-based API.
>
>Right now, there is a bunch of random image transfer code spread
>throughout nova.image, and in each of the virt drivers there seem to be
>different re-implementations of similar functionality. I propose we
>clean all that up and have nova.image expose an API so that a virt
>driver could do something like this:
>
>from nova.image import api as image_api
>
>...
>
>task = image_api.copy(from_path_or_uri, to_path_or_uri)
># do some other work
>copy_task_result = task.wait()
>
>Within nova.image.api.copy(), we would use the aforementioned transfer
>library to move the image bits from the source to the destination using
>the most appropriate method.

If I understand correctly, we'll create some common library around this.
It would be good to understand the details a bit better.  I've thought a
bit about this issue.  The one area where I get stuck is providing a
common set of downloads which works across drivers effectively.  Part of
the reason there is a bunch of random image transfer code is historical,
but also because performance was already a problem.  Examples include:
transferring to compute first and then copying to dom0, causing
performance issues; the need in some drivers to download the image
completely to validate it prior to putting it in place; etc.

It may be easy to say we'll push most of this to the dom0, but I know for
Xen our python stack is somewhat limited so that may be an issue.

By the way, we've been working on proposing a simpler image pre-caching
system/strategy.  It focuses specifically on the image caching portion of
this discussion.  For those interested, see the nova-spec
https://review.openstack.org/#/c/85792.  We'd like to leverage whatever
optimized image download strategy is available.
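
To make the caching portion concrete, here is a toy sketch -- not the
design in that spec, and every name is an assumption -- of a pre-cacher
that keeps the most requested images warm on a compute host:

import collections
import os


class ImagePrecacher(object):
    def __init__(self, cache_dir, fetch, max_images=10):
        self.cache_dir = cache_dir
        self.fetch = fetch            # callable(image_id, dst_path)
        self.max_images = max_images
        self.hits = collections.Counter()

    def record_boot(self, image_id):
        # Called whenever an instance boots from image_id.
        self.hits[image_id] += 1

    def refresh(self):
        # Run periodically: pre-fetch the most popular images so later
        # boots find them already cached locally.
        for image_id, _count in self.hits.most_common(self.max_images):
            path = os.path.join(self.cache_dir, image_id)
            if not os.path.exists(path):
                self.fetch(image_id, path)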

Christopher 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-24 Thread Solly Ross
Something to be aware of when planning an image transfer library is that
individual drivers might have optimized support for image transfer in
certain cases (especially when dealing with transferring between
different formats, like raw to qcow2, etc).  This builds on what
Christopher was saying -- there's actually a reason why we have code for
each driver.  While having a common image copying library would be nice,
I think a better way to do it would be to have some sort of library
composed of building blocks, such that each driver could make use of
common functionality while still tailoring the operation to the quirks
of the particular drivers.
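
A rough sketch of what such building blocks could look like (assumed
names, not an existing library): a shared chunked streamer that each
driver wraps with its own quirks, e.g. validating a fully downloaded
image before putting it in place:

def stream(read_fn, write_fn, chunk_size=64 * 1024):
    # Common building block: move bytes chunk by chunk.
    while True:
        chunk = read_fn(chunk_size)
        if not chunk:
            break
        write_fn(chunk)


def download_with_validation(src_path, dst_path, validate):
    # A driver that must fully validate an image before placing it can
    # still reuse the common streamer, then add its own step (for Xen
    # this might be a VHD check; the validate callable is an assumption).
    with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        stream(src.read, dst.write)
    validate(dst_path)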

Best Regards,
Solly Ross


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-26 Thread lihuiba
>Hmm, I totally see the value of doing this. Not sure that there could be
>the same kinds of "liveness" guarantees with non-shared-storage, but I
>am certainly happy to see a proof of concept in this area! :)
By "liveness", if you mean down time of migration, our current results
show that liveness is guaranteed with non-shared-storage. Some preliminary
work has been published in a conference SOSE14, which can be found at
http://www.vmthunder.org/dlsm_sose2014_final.pdf   And we have made
some improvements to it, and the work is still under development. We
are planning to write a new paper and submit it to another conference in 
this summer.




>> how about zero-copying?
>
>It would be an implementation detail within nova.image.api.copy()
>function (and the aforementioned "image bits mover library") :)

IMHO, (pre-)copying and zero-copying are different in nature, and it's
not necessary to mask that difference behind a single interface. With
two sets of interfaces, programmers (users of the copying service) will
be reminded of the cost of (pre-)copying, or of the runtime network
congestion risk of zero-copying.
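
A sketch of that idea with assumed names: keep the two cost models
visible as two entry points, instead of hiding both behind one copy():

class ImageAccess(object):
    def prefetch(self, image_uri, local_path):
        # (Pre-)copying: pay the full transfer cost now; later reads
        # are local and fast.
        raise NotImplementedError

    def attach(self, image_uri):
        # Zero-copying: no up-front transfer; runtime reads go over the
        # network, with the attendant congestion risk.
        raise NotImplementedError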




Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-27 Thread Sheng Bo Hou
I have done a little test for the image download and upload. I created
an API for image access, containing copyFrom and sendTo. I moved the
image download and upload code from XenAPI into the implementation for
HTTP with some modifications, and the code worked for libvirt as well.
copyFrom means to download the image and return the image data, and
different hypervisors can choose to save it in a file or import it to
the datastore; sendTo is used to upload the image, and the image data is
passed in as a parameter.

I also did an investigation into how each hypervisor does the image
upload and download.

For the download:
libvirt, hyper-v and baremetal use the code image_service.download to
download the image and save it into a file.
vmwareapi uses the code image_service.download to download the image and
import it into the datastore.
XenAPI uses image_service.download to download the image for VHD images.

For the upload:
They use image_service.upload to upload the image.

I think we can conclude that it is possible to have a common image
transfer library with different implementations for different protocols.
This is a small demo of the library:
https://review.openstack.org/#/c/90601/ (Jay, is it close to the library
you mentioned?). I just replaced the upload and download part with the
HTTP implementation for the image API and it worked fine.
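
In rough outline, the API shape described above is as follows -- a
reconstruction for illustration only, not the actual demo code, which
lives in the review; the method names follow the description above:

class ImageTransfer(object):
    """Base class; one subclass per protocol (HTTP, FTP, P2P, ...)."""

    def copyFrom(self, image_id):
        # Download and return the image data; the virt driver decides
        # whether to save it to a file or import it into a datastore.
        raise NotImplementedError

    def sendTo(self, image_id, image_data):
        # Upload the given image data to the image service.
        raise NotImplementedError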

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCN  E-mail: sb...@cn.ibm.com 
Address: 3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C. 100193




Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-27 Thread Sheng Bo Hou
Jay, Huiba, Chris, Solly, Zhiyan, and everybody else,

I am so excited that two of the proposals, Image Upload Plugin
(http://summit.openstack.org/cfp/details/353) and Data Transfer Service
Plugin (http://summit.openstack.org/cfp/details/352), have been merged
together and scheduled at the coming design summit. If you show up in
Atlanta, please come to this session
(http://junodesignsummit.sched.org/event/c00119362c07e4cb203d1c4053add187)
and join our discussion, on Wednesday, May 14, 11:50am - 12:30pm.

I will propose a common image transfer library for all the OpenStack
projects to upload and download images. If it is approved, Huiba, you
can feel free to implement the transfer protocols you like with this
library.

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCN  E-mail: sb...@cn.ibm.com 
Address: 3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C. 100193




Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-28 Thread Christopher Lefelhocz
Waiting on review.openstack.org to come back live to look at the demo code and 
provide more accurate feedback…

Interesting and good to hear the code moved easily.  The possibility of
having a functional common image transfer service wasn't questioned
(IMHO).  What I was stating was that we'll need strong data showing that
the common code doesn't degrade download performance for various
drivers/deployments.  I do think having a common set of configuration
options (and driver calls?) for downloads makes a lot of sense (like
glance has done for image_service.download).  I'm just a little more
cautious when it comes to truly common download code at this point.

Christopher


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-30 Thread lihuiba
Sorry for being late. I was busy with something else these days.


It'll be great to have a dedicated image transfer library that provides
both pre-copying and zero-copying semantics, and we are glad to have
VMThunder integrated into it. Before that library is done, however, we
plan to propose a blueprint that solely focuses on integrating VMThunder
into OpenStack, as a plug-in of course. Then we can move VMThunder into
the newly created transfer library through a refactoring process.


Does this plan make sense?






BTW, I'll not be able to go to the summit. It's too far away. Pity.



