Re: [one-users] How to use ceph filesystem

2013-12-11 Thread Jaime Melis
Hi Mario,

yes, the problem is that the datastore space reported by Ceph is the
aggregate of all the pools; as far as I know, you can't ask for the
available space of a specific pool.
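
For context, this is roughly what the Ceph side reports (a sketch with made-up numbers): `ceph df` gives one cluster-wide AVAIL figure that all pools share, so there is no per-pool "available space" to hand back to OpenNebula.

```shell
# Illustrative sketch only (numbers and pool name are made up).
# All pools draw from the same global capacity, so AVAIL is
# cluster-wide; each pool only reports its own USED.
ceph df
# GLOBAL:
#     SIZE     AVAIL    RAW USED    %RAW USED
#     5586G    3297G    2289G       40.97
# POOLS:
#     NAME     ID    USED    %USED    OBJECTS
#     one      3     763G    13.66    195341
```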

Happy to hear it worked in the end.

Cheers,
Jaime


On Tue, Dec 10, 2013 at 7:11 PM, Mario Giammarco mgiamma...@gmail.com wrote:

 Sorry, this is my fault.
 I had created an RBD image called one, not a pool.
 Now I have created a pool called one and it works.
 Please note that I only found this by looking in the logs for an error
 from the qemu-img command.
 The GUI reported the correct size of the filesystem even without the
 pool one created, so I assumed that all was OK.
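
The debugging step Mario describes (the failure only surfaced in the logs, not in the GUI) can be sketched like this; the log path and error message are illustrative, not taken from his system:

```shell
# Sketch: look for the failing qemu-img call in OpenNebula's log.
# The real log lives at /var/log/one/oned.log; a sample file is used
# here so the example is self-contained.
LOG=/tmp/oned.log.sample
printf '%s\n' \
  'Tue Dec 10 [ImM][I]: Copying image to the repository' \
  'Tue Dec 10 [ImM][E]: qemu-img: error: pool one not found' > "$LOG"
grep 'qemu-img' "$LOG"     # surfaces the real cause of the failure
```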






-- 
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com

--

Confidentiality Warning: The information contained in this e-mail and
any accompanying documents, unless otherwise expressly indicated, is
confidential and privileged, and is intended solely for the person
and/or entity to whom it is addressed (i.e. those identified in the
To and cc box). They are the property of C12G Labs S.L..
Unauthorized distribution, review, use, disclosure, or copying of this
communication, or any part thereof, is strictly prohibited and may be
unlawful. If you have received this e-mail in error, please notify us
immediately by e-mail at ab...@c12g.com and delete the e-mail and
attachments and any copy from your system. C12G's thanks you for your
cooperation.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] How to use ceph filesystem

2013-12-10 Thread kenneth samonte
The first requirement is that the front-end node is able to communicate with 
the Ceph cluster. 
Verify it by logging in as the oneadmin user and issuing the command ceph -w. 
The Ceph cluster should respond HEALTH_OK.
Then define it in OpenNebula as an image datastore. No further 
configuration is needed; when you upload images to the front end, 
choose the Ceph datastore to save the images in RBD format.
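
As a hedged sketch, the check described above (run on the front end; the pool name "one" is an assumption):

```shell
# Connectivity check, run on the front-end node.
su - oneadmin            # become the oneadmin user
ceph health              # one-shot check; expect HEALTH_OK
ceph -w                  # same, but keeps watching the cluster log
rbd ls one               # lists images in the "one" pool (name assumed)
```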

Mario Giammarco mgiamma...@gmail.com wrote:



Re: [one-users] How to use ceph filesystem

2013-12-10 Thread Jon
Note: you will have to mount a different RBD for each
/var/lib/oneadmin/datastores/0; you can't mount the same RBD on multiple
hosts, because RBDs are not cluster-aware per se. That is, unless you
put a clustered filesystem on top of your RBDs (google "ceph iscsi"),
but at that point, why not use CephFS, other than it not being ready for
prime time? Layering iSCSI on top of Ceph adds infrastructure complexity
anyway, and introduces a single point of failure, defeating the purpose
of Ceph.

I rewrote my ssh transfer drivers to rsync the files before live-migrating
the VM. There are obvious pitfalls with doing that, though, as there is no
guarantee of consistency.

Ultimately, my solution was to use a separate RBD for swap, or not to
configure any at all (libvirt does support memory ballooning, though I'm
not sure OpenNebula exposes any controls for it), so that only non-VM
files are ever stored in datastores/0 (i.e. only OpenNebula files are
stored there).
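
Jon's rsync-before-migration approach might look roughly like this (host name, VM ID and paths are hypothetical; as he notes, there is no consistency guarantee):

```shell
# Sketch: pre-copy a VM's system-datastore directory before a live
# migration. Anything written after the rsync is not carried over.
VMID=42                                    # hypothetical VM ID
DST=node2                                  # hypothetical target host
DS=/var/lib/one/datastores/0               # stock system datastore path
rsync -a --delete "$DS/$VMID/" "$DST:$DS/$VMID/"
onevm migrate --live "$VMID" "$DST"        # then trigger the migration
```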

Re: [one-users] How to use ceph filesystem

2013-12-10 Thread Mario Giammarco
I have checked the front-end node as the oneadmin user. I have tried
ceph -w, ceph status and rbd ls, and these commands work.
I have also added the oneadmin user to the disk group.
All these checks are OK, but I am not able to add anything to the ceph one RBD.

With a previous installation of 4.2 and the same Ceph cluster everything worked.




Re: [one-users] How to use ceph filesystem

2013-12-10 Thread Mario Giammarco
Sorry, this is my fault.
I had created an RBD image called one, not a pool.
Now I have created a pool called one and it works.
Please note that I only found this by looking in the logs for an error
from the qemu-img command.
The GUI reported the correct size of the filesystem even without the
pool one created, so I assumed that all was OK.




Re: [one-users] How to use ceph filesystem

2013-12-04 Thread Mario Giammarco
I have read all the posts of this interesting thread.
You suggest using Ceph as a shared filesystem, and I agree it is a good
idea.

But I supposed that, because KVM supports Ceph RBD and OpenNebula
supports Ceph, there is a direct way to use it: I mean not going through
the CephFS layer but directly to the RBD layer (also for the system
datastore).
I do not understand what advantages OpenNebula's current Ceph support
has; can you explain them to me?

Thanks,
Mario
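
"Going directly to the RBD layer" is what the Ceph image datastore already does: KVM attaches the image over librbd, with no filesystem in between. At the qemu level that looks roughly like this (pool and image names are made up; OpenNebula generates the equivalent libvirt configuration for you):

```shell
# Sketch: booting a KVM guest straight from an RBD image, with no
# CephFS or local file involved. "one/one-42-0" is a hypothetical
# pool/image pair.
qemu-system-x86_64 -m 1024 \
  -drive format=raw,file=rbd:one/one-42-0
```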




Re: [one-users] How to use ceph filesystem

2013-12-04 Thread Kenneth
 

Going directly to the RBD layer is what you get once you use Ceph as an
IMAGE datastore: OpenNebula interfaces directly with Ceph and your
images are stored in RBD format. That is already the direct way, the
same thing as using KVM with Ceph RBD.

The only place you may want CephFS (which is not RBD) is the SYSTEM
datastore, using shared for the TM. This is what I use. Besides, the
system datastore doesn't contain a lot of files, so if this method is
inefficient, I won't notice it at all.

But if you also want to use RBD for the system datastore, you can still
do it: just mount an RBD image at /var/lib/datastore/0/ on your nodes.
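
Kenneth's RBD-backed system datastore could be set up roughly like this (names and size are illustrative). Note Jon's caveat from earlier in the thread: a plain RBD image must not be mounted on more than one host at a time.

```shell
# Sketch: back the system datastore with an RBD image on ONE node.
# Do NOT mount the same image on several hosts; RBD alone is not
# cluster-aware, so concurrent mounts would corrupt the filesystem.
rbd create one/system-ds --size 102400     # 100 GB image in pool "one"
rbd map one/system-ds                      # exposes /dev/rbd/one/system-ds
mkfs.ext4 /dev/rbd/one/system-ds
mount /dev/rbd/one/system-ds /var/lib/datastore/0
```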

---

Thanks,
Kenneth
Apollo Global Corp.


Re: [one-users] How to use ceph filesystem

2013-12-03 Thread Kenneth
 

Ceph won't be the default image datastore, but you can always choose it
whenever you create an image.

You said you don't have an NFS disk and just use a plain disk for your
system datastore, so you SHOULD use ssh in order to have live
migrations.

Mine uses shared, since I mounted a shared folder on each nebula node.
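
The switch Kenneth describes is made on the system datastore's template; a hedged sketch (datastore ID 0 assumed, and note that update edits the whole template, so keep the other attributes intact):

```shell
# Sketch: change the system datastore's transfer driver to ssh.
onedatastore show 0                # note the current template contents
onedatastore update 0              # opens $EDITOR; set: TM_MAD = "ssh"
onedatastore show 0 | grep TM_MAD  # verify the change took effect
```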

---

Thanks,
Kenneth
Apollo Global Corp.



Re: [one-users] How to use ceph filesystem

2013-12-03 Thread Mario Giammarco
My problem was that, because Ceph is a distributed filesystem (and so it
can be used as an alternative to NFS), I supposed I could use it as a
shared system datastore.
Reading your reply I can see that is not true. Perhaps the official
documentation should clarify this.

In fact I hoped to use Ceph as the system datastore because Ceph is
fault tolerant and NFS is not.

Thanks for the help,
Mario


2013/12/3 Kenneth kenn...@apolloglobal.net

  Ceph won't be the default image datastore, but you can always choose it
 whenever you create an image.

 You said you don't have an NFS disk and just use a plain disk on your
 system datastore, so you *should* use ssh in order to have live
 migrations.

 Mine uses shared, since I mounted a shared folder on each
 nebula node.
 ---

 Thanks,
 Kenneth
 Apollo Global Corp.

  On 12/03/2013 03:01 PM, Mario Giammarco wrote:

 First, thank you for your very detailed reply!


 2013/12/3 Kenneth kenn...@apolloglobal.net

  You don't need to replace the existing datastores; the important thing
 is to set the system datastore to ssh, because you still need to transfer
 files to each node when you deploy a VM.


 So I lose live migration, right?
 If I understand correctly ceph cannot be default datastore also.

  Next, you should make sure that all your nodes are able to communicate
 with the ceph cluster. Issue the command ceph -s on all nodes, including
 the front end, to be sure that they are connected to ceph.




 ... will check...



 oneadmin@cloud-node1:~$ onedatastore list

   ID NAME     SIZE  AVAIL  CLUSTER  IMAGES  TYPE  DS    TM
    0 system      -      -  -             0  sys   -     shared
    1 default  7.3G    71%  -             1  img   fs    shared
    2 files    7.3G    71%  -             0  fil   fs    ssh
  100 cephds   5.5T    59%  -             3  img   ceph  ceph

  Once you have verified that the ceph datastore is active, you can
 upload images through the Sunstone GUI. Be aware that converting images
 to Ceph's RBD format may take quite some time.


 I see in your configuration that system datastore is shared!

 Thanks again,
 Mario




Re: [one-users] How to use ceph filesystem

2013-12-03 Thread Kenneth
 

Actually, I'm using Ceph as the system datastore. I used CephFS
(Ceph FUSE) and mounted it on all nodes at /var/lib/one/datastores/0/.

Regarding ssh as the transfer driver, I haven't really used it, since
I'm all on Ceph for both the system and image datastores. I may be
wrong, but that is how I understand it from the docs.
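
The CephFS mount Kenneth describes might look like this on each node (the monitor address is a made-up placeholder, and exact ceph-fuse options vary by version):

```shell
# Sketch: mount CephFS via FUSE at the system datastore path on
# every node, so the 'shared' TM works without NFS.
ceph-fuse -m mon1.example.com:6789 /var/lib/one/datastores/0
df -h /var/lib/one/datastores/0     # should now show the ceph mount
```
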
---

Thanks,
Kenneth
Apollo Global Corp.



Re: [one-users] How to use ceph filesystem

2013-12-03 Thread Jaime Melis
Hi Mario,

Cephfs CAN be used a shared filesystem datastore. I don't completely agree
with Kenneth's recommendation of using 'ssh' as the TM for the system
datastore. I think you can go for 'shared' as long as you have the
/var/lib/one/datastores/... shared via Cephfs. OpenNebula doesn't care
about what DFS solution you're using, it will simply assume files are
already there.

Another thing worth mentioning: from 4.4 onwards the HOST attribute of
the datastore should be renamed to BRIDGE_LIST.
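
A sketch of what such a datastore template might look like for 4.4 (attribute values are illustrative; pre-4.4 installations used HOST instead of BRIDGE_LIST):

```shell
# Sketch: a Ceph image datastore template for OpenNebula >= 4.4.
# Host and pool names are made up.
cat > ceph.ds <<'EOF'
NAME        = cephds
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one
BRIDGE_LIST = "cloud-node1 cloud-node2"
EOF
# onedatastore create ceph.ds    # then register it on the front end
```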

cheers,
Jaime






-- 
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com



Re: [one-users] How to use ceph filesystem

2013-12-02 Thread Kenneth
 

You don't need to replace existing datastores, the important is you
edit the system datastore as ssh because you still need to transfer
files in each node when you deploy a VM. 

Next, you should make sure that all your nodes are able to communicate with
the ceph cluster. Issue the command ceph -s on all nodes, including the
front end, to be sure that they are connected to ceph. 

Also create a pool in ceph named one, because it will be used as the default
pool. 

Create a pool:

 ceph osd pool create [poolname] [pg_num] [pgp_num]

 Example: ceph osd pool create one 196 196

## Be sure that the pool one is already created on the ceph cluster before
adding a ceph datastore to OpenNebula. Refer to the ceph documentation for
adding pools. 
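
As a side note, pg_num is conventionally chosen as a power of two near
(number of OSDs x 100) / replica count; the 196 in the example works but is
not a power of two. A small sketch of that rule of thumb (the OSD and
replica counts here are made-up examples, not values from this thread):

```shell
# Rule-of-thumb pg_num: largest power of two <= (osds * 100 / replicas).
osds=6
replicas=3
target=$(( osds * 100 / replicas ))   # 200 for this example
pg=1
while [ $(( pg * 2 )) -le "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "suggested pg_num: $pg"          # 128 here
# then, on the cluster: ceph osd pool create one "$pg" "$pg"
```

The final ceph command is left as a comment because it needs a running
cluster; the arithmetic itself runs anywhere.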

## Create a datastore template file:

nano ceph.ds 

## Add the following lines:

NAME      = cephds
DS_MAD    = ceph
TM_MAD    = ceph
# the following line *must* be present
DISK_TYPE = RBD
POOL_NAME = one
HOST      = cloud-node1

## HOST can be any node; just be sure it can reach the ceph nodes. This host
performs the RAW-to-RBD conversions of the VM images, so put the node with
the lowest load here, which in this setup is the front-end node. Save and
exit the file. 
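
The template step can also be done non-interactively; a minimal sketch using
the same attribute values as the example above (adjust POOL_NAME and HOST
for your site):

```shell
# Write the ceph.ds datastore template without opening an editor.
cat > ceph.ds <<'EOF'
NAME      = cephds
DS_MAD    = ceph
TM_MAD    = ceph
DISK_TYPE = RBD
POOL_NAME = one
HOST      = cloud-node1
EOF
echo "ceph.ds written:"
cat ceph.ds
```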

## Create the datastore:

onedatastore create ceph.ds 

## Verify that OpenNebula can see the datastore:

oneadmin@cloud-node1:~$ onedatastore list

   ID NAME    SIZE AVAIL CLUSTER IMAGES TYPE DS   TM
    0 system     -     -       -      0 sys  -    shared
    1 default 7.3G   71%       -      1 img  fs   shared
    2 files   7.3G   71%       -      0 fil  fs   ssh
  100 cephds  5.5T   59%       -      3 img  ceph ceph

Once you have verified that the ceph datastore is active, you can upload
images through the Sunstone GUI. Be aware that converting images to Ceph's
RBD format may take quite some time. 
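
If you would rather check the listing from a script than by eye, something
like the following works; the saved output here is a hypothetical sample
mirroring the listing above:

```shell
# Parse saved `onedatastore list` output and report any datastore that
# uses the ceph DS and TM drivers (columns 8 and 9).
cat > dslist.txt <<'EOF'
  ID NAME    SIZE AVAIL CLUSTER IMAGES TYPE DS   TM
   0 system     -     -       -      0 sys  -    shared
   1 default 7.3G   71%       -      1 img  fs   shared
 100 cephds  5.5T   59%       -      3 img  ceph ceph
EOF
awk '$8 == "ceph" && $9 == "ceph" { print "ceph datastore present:", $2 }' dslist.txt
```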
---

Thanks,
Kenneth
Apollo Global Corp.

On 12/01/2013 07:14 PM, Mario Giammarco wrote: 

 Hello, 
 I am a newbie but I would like to try opennebula with ceph filesystem. 

 I start with default opennebula installation with three datastores: system,
 default and files. 

 Now, even if they are configured as shared they are not an nfs mount,
 simply directories on the disk. 

 Now I have installed ceph, I have created a ceph rbd and datastore but then? 

 Should I delete existing repositories? How can I replace them with ceph? 

 Do I need to replace them? 

 Thanks, 
 Mario 



Re: [one-users] How to use ceph filesystem

2013-12-02 Thread Mario Giammarco
First, thank you for your very detailed reply!


2013/12/3 Kenneth kenn...@apolloglobal.net

  You don't need to replace the existing datastores; the important thing is to
 change the system datastore's transfer driver to ssh, because you still need
 to transfer files to each node when you deploy a VM.


So I lose live migration, right?
If I understand correctly, ceph cannot be the default datastore either.

 Next, you should make sure that all your nodes are able to communicate with
 the ceph cluster. Issue the command ceph -s on all nodes, including the
 front end, to be sure that they are connected to ceph.


... will check...



 oneadmin@cloud-node1:~$ onedatastore list

   ID NAME    SIZE AVAIL CLUSTER IMAGES TYPE DS   TM
    0 system     -     -       -      0 sys  -    shared
    1 default 7.3G   71%       -      1 img  fs   shared
    2 files   7.3G   71%       -      0 fil  fs   ssh
  100 cephds  5.5T   59%       -      3 img  ceph ceph

 Once you have verified that the ceph datastore is active, you can upload
 images through the Sunstone GUI. Be aware that converting images to Ceph's
 RBD format may take quite some time.


I see in your configuration that the system datastore is shared!

Thanks again,
Mario


[one-users] How to use ceph filesystem

2013-12-01 Thread Mario Giammarco
Hello,
I am a newbie but I would like to try opennebula with ceph filesystem.

I start with default opennebula installation with three datastores: system,
default and files.

Now, even if they are configured as shared they are not an nfs mount,
simply directories on the disk.

Now I have installed ceph, I have created a ceph rbd and datastore but then?

Should I delete existing repositories? How can I replace them with ceph?

Do I need to replace them?

Thanks,
Mario