Hi!
Awesome :)) Thanks for such great work!
Cheers,
Sébastien
On 10.08.2013 02:52, Alfredo Deza wrote:
I am very pleased to announce the release of ceph-deploy to the Python
Package Index.
The OS packages are yet to come, I will make sure to update this
thread when they do.
For now, if I run
mount.nfs 10.254.253.9:/xen/9f9aa794-86c0-9c36-a99d-1e5fdc14a206 -o
soft,timeo=133,retrans=2147483647,tcp,noac
it fails with an error saying the mount option doesn't exist.
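One possible cause (my guess, not confirmed in the thread): mount.nfs takes the mount point as its second positional argument, and the command above doesn't have one, so "-o" gets misparsed. A sketch of the corrected invocation, with /mnt/xen-sr as a hypothetical mount point:

```shell
# mount.nfs expects: mount.nfs <remote-share> <mount-point> [-o options]
mkdir -p /mnt/xen-sr
mount.nfs 10.254.253.9:/xen/9f9aa794-86c0-9c36-a99d-1e5fdc14a206 /mnt/xen-sr \
    -o soft,timeo=133,retrans=2147483647,tcp,noac
```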
Moya Solutions, Inc.
am...@moyasolutions.com
0 | 646-918-5238 x 102
F | 646-390-1806
- Original Message -
From: "Sébastien RICCIO"
To: "Andr
Hi ceph-users,
I'm currently evaluating ceph for a project and I'm getting quite low
write performance, so if you have time to read this post and give me
some advice, please do :)
My test setup uses some free hardware we have lying around in our datacenter:
Three ceph server nodes, on each one is r
Hi,
Oh thanks, I'll try again with those removed. My mistake, sorry... I was
trying to follow the conf file guide :)
Cheers,
Sébastien
On 24.07.2013 07:19, Sage Weil wrote:
It's the config file. You no longer need to (or should) enumerate the
daemons in the config file; the sysvinit/upsta
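For anyone else hitting this: a minimal ceph.conf in the style Sage describes keeps only cluster-wide settings and drops the per-daemon sections. All values below are placeholders, not taken from this thread:

```ini
[global]
    fsid = <cluster-fsid>
    mon initial members = a, b, c
    mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3

; no [mon.a] / [osd.0] / [mds.a] sections any more: the init scripts
; discover local daemons from the directories under /var/lib/ceph
```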
Hi! While trying to install ceph using ceph-deploy, the monitor nodes
are stuck waiting on this process:
/usr/bin/python /usr/sbin/ceph-create-keys -i a (or b or c)
I tried to run the command manually and it loops on this:
connect to /var/run/ceph/ceph-mon.a.asok failed with (2) No such file
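My reading of that error (an assumption, not confirmed in the thread) is that ceph-create-keys just polls the monitor's admin socket, so a "(2)" failure means the mon daemon itself never started or never got that far. A sketch of what I'd check:

```shell
# does mon.a's admin socket exist at all?
sock=/var/run/ceph/ceph-mon.a.asok
if [ -S "$sock" ]; then
    # the mon is up: ask it for its state (probing/electing/leader/peon)
    ceph --admin-daemon "$sock" mon_status
else
    # socket missing: the daemon failed to start, so look at its log
    echo "no admin socket at $sock: check /var/log/ceph/ceph-mon.a.log"
fi
```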
Hi,
Same issue here with the disk list: it doesn't list all available disks and
outputs some bogus information.
I also had some problems with ceph-deploy osd create, which I guess uses
some code from osd prepare. Same thing about the temporary directory.
After re-trying the command 3-4 times it fina
? I rebooted the xenserver host and
tried again... It worked...
Some mystery here :)
Cheers,
Sébastien
On 21.07.2013 14:01, Sébastien RICCIO wrote:
Hi,
thanks a lot for your answer. I was able to successfully create the
storage pool with virsh.
However, it is not listed when I issue a virsh
e1: 0/0/1 up
[root@xen-blade05 ~]# ceph osd lspools
0 data,1 metadata,2 rbd,
Any idea where to look now ? :)
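One thing worth checking (my assumption; "rbd" as the libvirt pool name is taken from the earlier messages): virsh pool-list only shows active pools by default, so a pool that was defined but never started stays invisible without --all. A sketch:

```shell
virsh pool-list --all      # inactive pools only show up with --all
virsh pool-start rbd       # activate the defined pool
virsh pool-autostart rbd   # start it automatically from now on
```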
Thanks for your help.
Cheers,
Sébastien
On 21.07.2013 11:42, Wido den Hollander wrote:
Hi,
On 07/21/2013 08:14 AM, Sébastien RICCIO wrote:
Hi !
I'm currently trying to get t
Hi !
I'm currently trying to get the xenserver on centos 6.4 tech preview
working against a test ceph cluster, and I'm having the same issue.
Some info: the cluster is named "ceph", the pool is named "rbd".
ceph.xml:
rbd
ceph
secret.xml:
client.admin
[root@xen-b
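The XML markup of ceph.xml and secret.xml was lost in the archive; only the values "rbd", "ceph", and "client.admin" survive. A plausible reconstruction of such files for a libvirt rbd-backed pool (a sketch only — the monitor host and the secret UUID are placeholders, not from this thread):

```xml
<!-- ceph.xml: libvirt storage pool backed by the "rbd" pool -->
<pool type="rbd">
  <name>rbd</name>
  <source>
    <name>rbd</name>
    <host name="mon-host" port="6789"/>  <!-- placeholder monitor address -->
    <auth type="ceph" username="admin">
      <secret uuid="00000000-0000-0000-0000-000000000000"/>  <!-- placeholder -->
    </auth>
  </source>
</pool>

<!-- secret.xml: the libvirt secret that will hold the client.admin key -->
<secret ephemeral="no" private="no">
  <usage type="ceph">
    <name>client.admin secret</name>
  </usage>
</secret>
```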