Re: [ceph-users] Openstack RBD EC pool

2019-02-16 Thread Konstantin Shalygin

### ceph.conf
[global]
fsid = b5e30221-a214-353c-b66b-8c37b4349123
mon host = ceph-mon.service.i.ewcs.ch
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
###


## ceph.ec.conf
[global]
fsid = b5e30221-a214-353c-b66b-8c37b4349123
mon host = ceph-mon.service.i..
auth cluster required = cephx
auth service required = cephx
auth client required = cephx

[client.cinder-ec]
rbd default data pool = ewos1-prod_cinder_ec
#
There is no need to split these settings into two files. Use one 
ceph.conf instead.
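
A single file along these lines should be enough (a minimal sketch, reusing 
the mon host and pool name from your mail; only clients that connect as 
cinder-ec pick up the data-pool default):

### /etc/ceph/ceph.conf
[global]
fsid = b5e30221-a214-353c-b66b-8c37b4349123
mon host = ceph-mon.service.i.ewcs.ch
auth cluster required = cephx
auth service required = cephx
auth client required = cephx

[client.cinder-ec]
rbd default data pool = ewos1-prod_cinder_ec
###

Point both cinder backends at this one file via rbd_ceph_conf; the rbd_user 
of each backend decides which [client.*] section applies.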



[client.cinder-ec]
rbd default data pool = ewos1-prod_cinder_ec


But your pool is:


ceph osd pool create cinder_ec 512 512 erasure ec32

i.e. the erasure-coded pool is named cinder_ec, while the client section 
points "rbd default data pool" at ewos1-prod_cinder_ec, so the two don't match.
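
If that create command is what actually ran, the client section has to 
reference that pool name instead (a sketch, assuming the pool really is 
called cinder_ec):

[client.cinder-ec]
rbd default data pool = cinder_ec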




k



[ceph-users] Openstack RBD EC pool

2019-02-15 Thread Florian Engelmann

Hi,

I tried to add a "archive" storage class to our Openstack environment by 
introducing a second storage backend offering RBD volumes having their 
data in an erasure coded pool. As I will have to specify a data-pool I 
tried it as follows:



### keyring files:
ceph.client.cinder.keyring
ceph.client.cinder-ec.keyring

### ceph.conf
[global]
fsid = b5e30221-a214-353c-b66b-8c37b4349123
mon host = ceph-mon.service.i.ewcs.ch
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
###


## ceph.ec.conf
[global]
fsid = b5e30221-a214-353c-b66b-8c37b4349123
mon host = ceph-mon.service.i..
auth cluster required = cephx
auth service required = cephx
auth client required = cephx

[client.cinder-ec]
rbd default data pool = ewos1-prod_cinder_ec
#
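
The cinder-ec client also needs caps on both the replicated metadata pool 
and the EC data pool. A sketch of how such a key could be created (pool 
names as in the create commands further down; the exact caps here are just 
an example, not necessarily what I used):

ceph auth get-or-create client.cinder-ec \
    mon 'profile rbd' \
    osd 'profile rbd pool=cinder_ec_metadata, profile rbd pool=cinder_ec' \
    -o /etc/ceph/ceph.client.cinder-ec.keyring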

# cinder-volume.conf
...
[ceph1-rp3-1]
volume_backend_name = ceph1-rp3-1
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = xxxcc8b-xx-ae16xx
rbd_pool = cinder
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
report_discard_supported = true
rbd_exclusive_cinder_pool = true
enable_deferred_deletion = true
deferred_deletion_delay = 259200
deferred_deletion_purge_interval = 3600

[ceph1-ec-1]
volume_backend_name = ceph1-ec-1
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph.ec.conf
rbd_user = cinder-ec
rbd_secret_uuid = xxcc8b-xx-ae16xx
rbd_pool = cinder_ec_metadata
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 3
rbd_store_chunk_size = 4
rados_connect_timeout = -1
report_discard_supported = true
rbd_exclusive_cinder_pool = true
enable_deferred_deletion = true
deferred_deletion_delay = 259200
deferred_deletion_purge_interval = 3600
##
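
The EC backend is exposed through a volume type, roughly like this (a sketch 
of how the type "ec1" used further down would typically be mapped onto this 
backend):

openstack volume type create ec1
openstack volume type set --property volume_backend_name=ceph1-ec-1 ec1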


I created three pools (for Cinder) like this:
ceph osd pool create cinder 512 512 replicated rack_replicated_rule
ceph osd pool create cinder_ec_metadata 6 6 replicated rack_replicated_rule
ceph osd pool create cinder_ec 512 512 erasure ec32
ceph osd pool set cinder_ec allow_ec_overwrites true
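
For reference, the profile and the overwrite flag can be inspected like this 
(assuming ec32 is a k=3/m=2 profile, as the name suggests):

ceph osd erasure-code-profile get ec32
ceph osd pool get cinder_ec allow_ec_overwrites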


I am able to use the ceph1-rp3-1 backend without any errors (create, attach, 
delete, snapshot). I am also able to create volumes on the EC backend via:


openstack volume create --size 100 --type ec1 myvolume_ec
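
The volume is created successfully; whether its data really goes to the EC 
pool can be checked with something like the following, where volume-<uuid> 
stands for the RBD image name cinder reports:

rbd info cinder_ec_metadata/volume-<uuid>

which should print a "data_pool:" line pointing at the EC pool.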

But I am not able to attach it to any instance. I get errors like:

==> libvirtd.log <==
2019-02-15 22:23:01.771+: 27895: error : 
qemuMonitorJSONCheckError:392 : internal error: unable to execute QEMU 
command 'device_add': Property 'scsi-hd.drive' can't find value 
'drive-scsi0-0-0-3'


My instance has three disks (root, swap and one replicated Cinder volume) 
and its domain XML looks like:



[libvirt domain XML mangled by the list archive. Recoverable details: domain 
instance-254e, uuid 6d41c54b-753a-46c7-a573-bedf8822fbf5, Nova flavor 
metadata (x3-1, created 2019-02-15 21:18:24: memory 16384, disk 80, swap 
8192, ephemeral 0, vcpus 4), os type hvm, emulator 
/usr/bin/qemu-system-x86_64, and three network disks backed by RBD:

  nova/6d41c54b-753a-46c7-a573-bedf8822fbf5_disk        (root)
  nova/6d41c54b-753a-46c7-a573-bedf8822fbf5_disk.swap   (swap)
  cinder/volume-01e8cb68-1f86-4142-958c-fdd1c301833a    (the replicated Cinder 
    volume, serial 01e8cb68-1f86-4142-958c-fdd1c301833a, iotune values 
    125829120 and 1000)]


Any ideas?

All the best,
Florian

