Hi, I'm using CloudStack 4.2 with Ceph as primary storage.
When I snapshot a volume on Ceph (the volume is defined as 20G but actually occupies 1.3G), the snapshot file on secondary storage is a raw file that is even larger than 20G.
Yet when I manually export the RBD image to a raw file with qemu-img, it only occupies 1.3G of actual space.
I'd like to see how CloudStack performs this snapshot in its source code, but after searching for a long time I can't find where that code is. Can anyone point me to it? Thanks!
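The size gap described above is usually just sparseness: a raw file's virtual size (20G) and its allocated size (1.3G) are different things, and whether the backup path preserves holes decides which one you pay for on secondary storage. A minimal sketch with plain files (file names and sizes below are placeholders, not CloudStack paths):

```shell
# Create a sparse 64 MiB raw file: large apparent size, no allocated blocks.
truncate -s 64M sparse.img
stat -c 'apparent=%s bytes, allocated=%b blocks' sparse.img

# cp can preserve the holes; dd (without conv=sparse) writes every byte.
cp --sparse=always sparse.img still-sparse.img
dd if=sparse.img of=dense.img bs=1M status=none
stat -c '%n: allocated=%b blocks' still-sparse.img dense.img
```

A backup path that reads the whole device and writes every block, like a plain dd, produces the fully allocated file seen on secondary storage, while `qemu-img convert` skips runs of zeroes by default and keeps the output sparse.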
Hi, I've run into some problems using RBD.
I successfully added RBD as CloudStack primary storage, but it cannot create
an instance on RBD. This is the error information:
management log:
INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-1:) checking health of
usage server
RBD is supported much better in 4.2; see the relevant part of the Release Notes:
Snapshotting, Backups, Cloning and System VMs for RBD Primary Storage
These new RBD features require at least librbd 0.61.7 (Cuttlefish) and
libvirt 0.9.14 on the KVM hypervisors.
This release of CloudStack will leverage the features of RBD format 2. This
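Those minimum versions can be checked on each KVM host before relying on the new features. A small sketch, assuming GNU `sort -V` is available; the version floors come from the release notes above, while the helper name is made up:

```shell
# ver_ge A B: succeed when dot-separated version A is at least version B.
ver_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Minimums from the release notes: librbd 0.61.7 (Cuttlefish), libvirt 0.9.14.
if command -v libvirtd >/dev/null 2>&1; then
  ver_ge "$(libvirtd --version | awk '{print $NF}')" 0.9.14 \
    && echo "libvirt is new enough" \
    || echo "libvirt is older than 0.9.14"
fi
```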
It seems CloudStack 4.1 still doesn't support Ceph as primary storage?
> From: 474745...@qq.com
> To: users-cn@cloudstack.apache.org
> Subject: Several problems with migration when configuring RBD as primary storage
> Date: Wed, 16 Oct 2013 10:19:45 +0800
>
> The environment is as follows:
> cloudstack 4.1.0
> The hypervisor is KVM, with qemu-kvm 1.2.0 (compiled by hand) and libvirt 1.1.2 (compiled by hand)
> ceph 0.67.3 (installed via yum)
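Migration problems with hand-compiled qemu/libvirt often trace back to the disk definition libvirt is given: both hosts must understand an RBD network disk. For reference, such an element looks roughly like the following (pool/image name, monitor host, and secret UUID are placeholders, not values from this thread):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='rbd/vm-disk-1'>
    <host name='ceph-mon.example.com' port='6789'/>
  </source>
  <auth username='admin'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```

Both the source and destination host need qemu and libvirt builds that accept `protocol='rbd'`, and the ceph secret must be defined on each host, or migration will fail.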
1. […] rbd […] Windows 2003 […] rbd
2. Windows 2003 […] NFS
CloudStack version 4.1.0, NFS […] RBD
RBD: […] mon […] mds […] 4 osd […] the osd hosts also run cloudstack-agent
NFS:
http://snag.gy/92N3F.jpg
[…]:
http://snag.gy/kyCq1.jpg
The environment:
CloudStack 4.1.1, Ceph 0.61.7, Qemu 0.12.1.2-2.355 with RBD enabled
I have successfully added Ceph RBD as primary storage with an Ubuntu 12.04 KVM host,
but it fails on a CentOS 6.4 KVM host in CloudStack.
Below are some of the tests run on the CentOS KVM host; Ceph is reachable via RBD:
[root@centos-kvm01 ~]# qemu-img --help | grep rbd
Supported
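The truncated line above is presumably qemu-img's `Supported formats:` list. A sketch of the check being attempted, using a made-up sample of that line rather than real qemu-img output:

```shell
# Hypothetical sample of qemu-img's help line; real output depends on the build.
formats_line="Supported formats: raw qcow2 qed vmdk rbd"

if printf '%s\n' "$formats_line" | grep -qw rbd; then
  echo "qemu-img was built with rbd support"
else
  echo "no rbd support in this qemu-img build"
fi
```

If `rbd` is missing from the real list, qemu was built without librbd, which by itself explains failing to start instances on RBD from that host.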