Re: [one-users] Copy image on Ceph to file-based datastore

2014-03-23 Thread Stuart Longland
Hi Jaime,
On 21/03/14 19:57, Jaime Melis wrote:
 Have you benchmarked Online write-through caching / Online write-back
 / Offline write-back ?

Benchmarked?  I haven't even written the code yet.

At the moment without caching I'm getting about 120MB/sec in my VMs when
using virtio storage.  I've been told by one of the would-be users that
this is unusably slow.

Sébastien Han did some work with RBD+FlashCache here:
http://www.sebastien-han.fr/blog/2012/11/15/make-your-rbd-fly-with-flashcache/

 Are you proposing we should allow the three options or just stick to one?
 
 Anyways, this sounds like an amazing addon :)

Well, I can see cases where all three are useful.  For production
workloads where data integrity is crucial, you'll want write-through
caching.

For speed-critical workloads, one of the write-back modes would be
faster: online write-back if you need live migration or don't have the
space for offline write-back.
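To make the modes concrete, here's a rough sketch of the flashcache side
of it, in the spirit of Sébastien's write-up.  The device names, pool and
the dry-run stub are all made up for illustration; on a real host you'd
set FLASHCACHE=flashcache_create:

```shell
# Hypothetical sketch: put a local SSD partition in front of a
# kernel-mapped RBD device with flashcache.  FLASHCACHE defaults to a
# dry-run echo so this can be exercised without real hardware.
FLASHCACHE="${FLASHCACHE:-echo flashcache_create}"

SSD_DEV=/dev/sdb1              # local SSD partition (example)
RBD_DEV=/dev/rbd/one/one-42    # mapped RBD image (example)

# -p thru -> write-through (safe: every write also hits the RBD backend)
# -p back -> write-back    (fast: writes are acknowledged from the SSD)
$FLASHCACHE -p thru vm_cache "$SSD_DEV" "$RBD_DEV"
```

The offline variant would instead export the whole image to the SSD up
front, as discussed further down the thread.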

 http://opennebula.org/addons/create/
 
 It'd be nice if in the machine template where you specify individual
 disks, if you could state what caching mode to use.  Alternatives would
 be to specify it in the image template or (least favourable) in the
 datastore template.
 
 
 If I understood correctly, you can use the CACHE option here:
 http://docs.opennebula.org/4.4/user/references/template.html#persistent-and-clone-disks
 
 although it might need a change in order to be able to specify the
 target cache. What do you think the interface should look like?

That looks more like how libvirt handles the cache to the back-end
device.  Then again, we're pretty much taking over whatever caching
libvirt does and replacing it with our own, so maybe it's appropriate to
hijack that option and just tell libvirt to disable its cache.
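i.e. something along these lines in the VM template (the image name is
just an example; CACHE="none" is one of the values the 4.4 docs list):

```
DISK = [
  IMAGE = "ssd-cached-image",  # example image name
  CACHE = "none"               # tell libvirt to skip its own page cache
]
```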
-- 
Stuart Longland
Contractor
VRT Systems
38b Douglas Street, Milton QLD 4064
T: +61 7 3535 9619 | F: +61 7 3535 9699
http://www.vrt.com.au


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Copy image on Ceph to file-based datastore

2014-03-21 Thread Jaime Melis
Hi,

Have you benchmarked Online write-through caching / Online write-back / Offline
write-back ?

Are you proposing we should allow the three options or just stick to one?

Anyways, this sounds like an amazing addon :)
http://opennebula.org/addons/create/

It'd be nice if in the machine template where you specify individual
 disks, if you could state what caching mode to use.  Alternatives would
 be to specify it in the image template or (least favourable) in the
 datastore template.


If I understood correctly, you can use the CACHE option here:
http://docs.opennebula.org/4.4/user/references/template.html#persistent-and-clone-disks

although it might need a change in order to be able to specify the target
cache. What do you think the interface should look like?

cheers,
Jaime

-- 
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jme...@opennebula.org


Re: [one-users] Copy image on Ceph to file-based datastore

2014-03-18 Thread Jaime Melis
Hi Stuart,

before moving on to the implementation details, how are you thinking of
specifying if a VM should run from the datastore or from a local file?

I'm afraid this is going to be very tricky, because we need to figure out
how to tell the core to generate a deployment file that references a local
file and not a Ceph disk [1].

We want the 'if' to go this way:
https://github.com/OpenNebula/one/blob/master/src/vmm/LibVirtDriverKVM.cc#L502
and not this one:
https://github.com/OpenNebula/one/blob/master/src/vmm/LibVirtDriverKVM.cc#L445
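To illustrate the difference, the two branches produce roughly these
kinds of disk sections in the deployment file (illustrative libvirt XML,
not the literal output; paths and names taken from your example):

```
<!-- file-backed disk: the branch we want to hit -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/one/datastores/123/45/disk.0'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- network (Ceph RBD) disk: the branch it hits today -->
<disk type='network' device='disk'>
  <source protocol='rbd' name='pool-name/image-name'/>
  <target dev='vda' bus='virtio'/>
</disk>
```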



On Thu, Mar 13, 2014 at 8:47 AM, Stuart Longland stua...@vrt.com.au wrote:

 Hi all,

 In OpenNebula, it seems we can easily set it up to use Ceph as a
 datastore for images, cloning non-persistent images to make temporary
 copies, etc.

 We can also have the images stored as files on the frontend computer and
 doled out via SSH; and in the case of persistent images, copied back
 when the VM is undeployed.

 What I'm missing is a hybrid between the two.  We've got a 3-node Ceph
 cluster which is working well.  However, compared to a local SSD, the
 Ceph cluster with its 6 spinners is a bit on the slow side.

 There are VMs for which we don't care about resiliency, and so it makes
 sense to use the SSDs attached to the VM hosts, and store the image
 itself in Ceph.

 The thought is, when the VM starts up, rather than having it dished out
 from the frontend computer, the host does a:

 rbd export pool-name/image-name /var/lib/one/datastores/123/45/disk.0

 Then, if the image is persistent, on undeploy it runs:

 rbd import /var/lib/one/datastores/123/45/disk.0 pool-name/image-name

 (Note: I'm not sure whether import will overwrite the image; it may be
 necessary to rbd rm the old image first... or better, import the new
 copy, then remove the old one and rename.)

 I had a tinker looking at the ssh transfer driver, but couldn't quite
 figure out how to reach out to the other datastore without some ugly
 kludges.

 Anyone else done something like this?

 Regards,
 --
 Stuart Longland
 Systems Engineer




-- 
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jme...@opennebula.org


[one-users] Copy image on Ceph to file-based datastore

2014-03-13 Thread Stuart Longland
Hi all,

In OpenNebula, it seems we can easily set it up to use Ceph as a
datastore for images, cloning non-persistent images to make temporary
copies, etc.

We can also have the images stored as files on the frontend computer and
doled out via SSH; and in the case of persistent images, copied back
when the VM is undeployed.

What I'm missing is a hybrid between the two.  We've got a 3-node Ceph
cluster which is working well.  However, compared to a local SSD, the
Ceph cluster with its 6 spinners is a bit on the slow side.

There are VMs for which we don't care about resiliency, and so it makes
sense to use the SSDs attached to the VM hosts, and store the image
itself in Ceph.

The thought is, when the VM starts up, rather than having it dished out
from the frontend computer, the host does a:

rbd export pool-name/image-name /var/lib/one/datastores/123/45/disk.0

Then, if the image is persistent, on undeploy it runs:

rbd import /var/lib/one/datastores/123/45/disk.0 pool-name/image-name

(Note: I'm not sure whether import will overwrite the image; it may be
necessary to rbd rm the old image first... or better, import the new
copy, then remove the old one and rename.)
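A sketch of what I mean for that save-back step (pure illustration: the
pool, image and path are the example names from above, and the RBD
variable defaults to a dry-run echo so it can be tried without a
cluster):

```shell
# Safer save-back for a persistent image: import under a temporary name,
# then swap, so a failed import never clobbers the old copy.
RBD="${RBD:-echo rbd}"   # set RBD=rbd on a real host

POOL=pool-name
IMAGE=image-name
SRC=/var/lib/one/datastores/123/45/disk.0

# rbd import won't overwrite an existing image, so import to a temp name
$RBD import "$SRC" "$POOL/$IMAGE.new"

# only once the import has succeeded, retire the old image and rename
$RBD rm "$POOL/$IMAGE"
$RBD rename "$POOL/$IMAGE.new" "$POOL/$IMAGE"
```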

I had a tinker looking at the ssh transfer driver, but couldn't quite
figure out how to reach out to the other datastore without some ugly
kludges.

Anyone else done something like this?

Regards,
-- 
Stuart Longland
Systems Engineer
VRT Systems
38b Douglas Street, Milton QLD 4064
T: +61 7 3535 9619 | F: +61 7 3535 9699
http://www.vrt.com.au

