Hi list
Is there any solution or documentation for using Ceph as XenServer or Xen backend storage?
Thanks
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi list
I have a test Ceph cluster of 3 nodes (node0: mon; node1: osd and NFS server 1;
node2: osd and NFS server 2).
OS: CentOS 6.6, kernel 3.10.94-1.el6.elrepo.x86_64, Ceph version 0.94.5.
I followed the http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
instructions to set up an
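For context, the guide referenced above essentially maps an RBD image on the NFS server and exports the mounted filesystem over NFS. A minimal sketch of that flow (image name, size, and export path are assumptions, not from the post):

```shell
# Create and map an RBD image on the NFS server node
rbd create nfsshare --size 102400        # 100 GiB image in the default pool
rbd map nfsshare                         # appears as e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir -p /mnt/nfsshare
mount /dev/rbd0 /mnt/nfsshare

# Export the mounted filesystem over NFS
echo "/mnt/nfsshare *(rw,no_root_squash,sync)" >> /etc/exports
exportfs -ra
```

For the two-server setup described (nfs server1 and server2), the guide pairs this with Pacemaker so only one node maps and exports the image at a time.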
Hi
While using ping_pong to test a CephFS file system mounted via ceph-fuse, I found it failed.
My cluster has 4 nodes: one mon/mds and three OSD servers (3 OSDs/server). OS:
CentOS 6.6. I added "fuse_disable_pagecache = true" and "fuse_use_invalidate_cb = true" to
ceph.conf
[root@node2 opt]#
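For reference, a sketch of how those two options would sit in ceph.conf (placement under the [client] section is an assumption; the values are the ones mentioned above):

```ini
[client]
    fuse_disable_pagecache = true
    fuse_use_invalidate_cb = true
```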
Hi list
My cluster has 4 nodes: 1 mon and 3 OSD nodes (4 SATA disks/node), 12 OSDs in total.
Ceph version is 0.72. Each OSD node has a 1 Gbps NIC; the mon node has 2x 1 Gbps NICs.
tgt runs on the mon node and the client is VMware. I upload (copy) a 500 GB file in VMware. The
HW acceleration in VMware had been turned off as Nick
Hi list
My cluster is Ceph 0.8, with the kernel updated to 3.18. 3 servers (4 OSDs/server), 12
OSDs, 1 mon. The cluster status is OK.
[root@node0 ~]# rbd -p rbd create myimage2 --order 22 --size 1
--stripe-unit 65536 --stripe-count 16 --image-format 2
[root@node0 ~]# rbd map myimage2
rbd: add
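The error message is truncated, but a plausible cause (an assumption, not confirmed by the post) is that the kernel RBD client of that era does not support fancy striping (--stripe-count greater than 1) and rejects the map with EINVAL. An image created with default striping should map; a sketch:

```shell
# Fancy striping (--stripe-unit/--stripe-count) is a librbd feature;
# older krbd rejects such images at map time. Default striping maps fine:
rbd -p rbd create myimage3 --order 22 --size 1024 --image-format 2
rbd map myimage3           # should appear as /dev/rbd<N>
```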
--
Sent from my NetEase Mail mobile client
- Original Message -
From: Timofey Titovets nefelim...@gmail.com
To: maoqi1982 maoqi1...@126.com
Cc: ceph-users@lists.ceph.com
Sent: Tue, 23 Jun 2015 08:35:36 +0300
Subject: Re: [ceph-users] ceph0.72 tgt wmware performance very bad
Which backend you use
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Timofey Titovets
Sent: 23 June 2015 06:36
To: maoqi1982
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph0.72 tgt wmware performance very bad
Which backend you use in TGT for rbd?
2015-06-23 5:44
Hi list:
My cluster has 4 servers, 12 OSDs (4 OSDs/server), 1 mon (1 server), and 1 Gbps links.
Ceph version is 0.72, the cluster status is OK, and the client is VMware vCenter.
Using rbd as the tgt backend and exposing a 2 TB LUN via iSCSI to VMware, the performance is
very bad: the bandwidth is just 10 KB/s.
But when using Windows 7
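Timofey's question above is about which tgt backstore the LUN used: a kernel-mapped /dev/rbd device behind the default rdwr backstore, or tgt's direct rbd backstore. A sketch of the latter (target name and image name are examples; requires a tgt build with rbd support):

```shell
# Create an iSCSI target whose LUN is served directly from RBD by tgt
tgtadm --lld iscsi --mode target --op new --tid 1 \
       --targetname iqn.2015-06.com.example:rbd
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
       --bstype rbd --backing-store rbd/myimage
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL
```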
Hi list
Our Ceph (0.72) cluster runs Ubuntu 12.04 and is OK. The client server runs OpenStack
on CentOS 6.4 Final; the kernel has been updated to
kernel-2.6.32-358.123.2.openstack.el6.x86_64.
The problem is that this kernel does not ship rbd.ko or ceph.ko. Can anyone
help me add rbd.ko and ceph.ko to
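A quick way to confirm whether a given kernel ships these modules (a generic check, not specific to this kernel build):

```shell
# Report whether the rbd and ceph modules exist for the running kernel
modinfo rbd  >/dev/null 2>&1 && echo "rbd.ko present"  || echo "rbd.ko missing"
modinfo ceph >/dev/null 2>&1 && echo "ceph.ko present" || echo "ceph.ko missing"
```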
Hi list
I followed http://ceph.com/docs/master/radosgw/federated-config/ to test the
multi-geography function, and it failed. Has anyone successfully deployed federated
gateways? Does the function work in Ceph or not? If anyone has deployed it successfully, please
give me some help
issue?
On 04/17/2014 07:32 AM, maoqi1982 wrote:
Hi list
I followed http://ceph.com/docs/master/radosgw/federated-config/ to
test the multi-geography function, and it failed. Has anyone successfully deployed
federated gateways? Does the function work in Ceph or not? If anyone has
deployed it successfully, please give me
Hi list
Does Dumpling v0.67.4 or v0.71 support the multi-region / disaster recovery
function? If it does, which doc can I refer to for configuring
regions/zones/agents? Could anyone give a link?
Thanks
Hi list
My Ceph version is Dumpling 0.67. I want to use the RGW geo-replication and
disaster recovery function. Can I refer to the doc
http://ceph.com/docs/wip-doc-radosgw/radosgw/federated-config/ (v0.71) to
deploy the regions/zones/agent
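For orientation, the dumpling-era federated setup in that doc boils down to defining regions and zones from JSON specs and then running the sync agent. A rough sketch (region/zone/file names are examples from the doc's pattern, not from this thread):

```shell
# Define a region and a zone from JSON infiles, then refresh the region map
radosgw-admin region set --infile us.json
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json
radosgw-admin regionmap update

# Replication between zones is driven by the separate radosgw-agent
radosgw-agent -c region-data-sync.conf
```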
Hi list
Does the Dumpling version (2013/8/1) have erasure coding as a storage
backend function?