for such tools?
Regards,
--
Xavier Villaneau
Storage Software Eng. at Concurrent Computer Corp.
rtuuid-workaround.rules
instead of 60-ceph-by-parttypeuuid.rules, if it's the latter that is used on
your system. The `udevadm test` log should give good clues as to whether
that's the issue or not.
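In case it helps with the debugging, here is a rough Python sketch of how I
would filter the `udevadm test` output for the ceph rules (the /sys path is
only an example, point it at your actual OSD partition):

#!/usr/bin/env python3
"""Rough sketch: show the ceph-related lines of a `udevadm test` run."""
import subprocess

def ceph_udev_lines(devpath="/sys/class/block/sdb1"):
    # udevadm test simulates the rules for this device (it does not
    # execute RUN programs), which is enough to see which rules match.
    result = subprocess.run(
        ["udevadm", "test", devpath],
        capture_output=True, text=True, check=False,
    )
    # Depending on the distro, the simulation log lands on stdout or stderr.
    log = result.stdout + result.stderr
    return [line for line in log.splitlines() if "ceph" in line.lower()]

if __name__ == "__main__":
    for line in ceph_udev_lines():
        print(line)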
Kind Regards,
--
Xavier Villaneau
Software Engineer, Concurrent Computer Corporation
On Sat, Apr
bcrush.org/xvillaneau/crush-docs/raw/v0.1.0/converted/Ceph%20pool%20capacity%20analysis.pdf
Any comment, correction or review is welcome. Additionally, if there are
other common pool usage scenarios that could be covered, I will gladly add
them in.
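If you only want the headline arithmetic for the simple replicated case, it
boils down to something like this (a simplified sketch with made-up numbers,
not a substitute for the document; 0.95 is the default mon_osd_full_ratio):

# Simplified capacity estimate for a replicated pool (example numbers only).
# Usable space is roughly raw capacity * full_ratio / replication size,
# assuming data is spread evenly across OSDs; CRUSH imbalance lowers this.

raw_capacity_tb = 8 * 4.0   # e.g. 8 OSDs of 4 TB each
full_ratio = 0.95           # default mon_osd_full_ratio
replication_size = 3        # the pool's "size" parameter

usable_tb = raw_capacity_tb * full_ratio / replication_size
print("Usable: %.1f TB out of %.1f TB raw" % (usable_tb, raw_capacity_tb))
# -> Usable: 10.1 TB out of 32.0 TB raw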
Best Regards,
--
Xavier Villaneau
Won't solve clus
red in a cluster
- Built-in basic scenarios for "compare" such as adding a node or removing
an OSD.
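To give an idea of what "compare" would output, here is a very rough sketch
of the kind of diff I have in mind, with hypothetical PG-to-OSD mappings (a
real run would take them from crushtool or osdmaptool output instead):

# Very rough sketch of the "compare" idea: given two PG -> OSDs mappings
# (e.g. before and after adding a node), count how many PGs would move.
# The mappings below are made up; a real tool would compute them from
# the two CRUSH maps with crushtool/osdmaptool.

def compare_mappings(before, after):
    moved = sum(
        1 for pg, osds in before.items()
        if set(osds) != set(after.get(pg, []))
    )
    return moved, len(before)

before = {"1.0": [0, 2, 4], "1.1": [1, 3, 5], "1.2": [2, 4, 6]}
after  = {"1.0": [0, 2, 7], "1.1": [1, 3, 5], "1.2": [2, 4, 6]}

moved, total = compare_mappings(before, after)
print("%d of %d PGs would move (%.0f%%)" % (moved, total, 100.0 * moved / total))
# -> 1 of 3 PGs would move (33%)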
Please share your ideas; they will eventually help make a better tool!
Regards,
--
Xavier Villaneau
Software Engineer, working with Ceph during the day and sometimes at night too.
Hello ceph-users,
I am currently running tests on a small cluster, and Cache Tiering is one
of those tests. The cluster runs Ceph 0.87 Giant on three Ubuntu 14.04
servers with the 3.16.0 kernel, for a total of 8 OSDs and 1 MON.
Since there are no SSDs in those servers, I am testing Cache Tiering
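For reference, the standard sequence to attach a cache tier looks roughly
like the following (pool names and the target size are only placeholders,
not necessarily my exact configuration):

# Standard cache-tier setup sequence, wrapping the documented
# `ceph osd tier ...` commands with subprocess. Pool names and the
# target_max_bytes value are placeholders; adjust before running.
import subprocess

STORAGE_POOL = "cold-storage"  # hypothetical backing pool
CACHE_POOL = "hot-cache"       # hypothetical cache pool

commands = [
    ["ceph", "osd", "tier", "add", STORAGE_POOL, CACHE_POOL],
    ["ceph", "osd", "tier", "cache-mode", CACHE_POOL, "writeback"],
    ["ceph", "osd", "tier", "set-overlay", STORAGE_POOL, CACHE_POOL],
    ["ceph", "osd", "pool", "set", CACHE_POOL, "hit_set_type", "bloom"],
    ["ceph", "osd", "pool", "set", CACHE_POOL, "target_max_bytes", str(10 * 2**30)],
]

for cmd in commands:
    print("Running: " + " ".join(cmd))
    subprocess.run(cmd, check=True)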
Hello,
I also had to remove the MDSs on a Giant test cluster a few days ago,
and stumbled upon the same problems.
On 24/02/2015 09:58, ceph-users wrote:
Hi all,
I've set up a ceph cluster using this playbook:
https://github.com/ceph/ceph-ansible
I've configured the following in my hosts list:
[mdss]
ho
Hello,
On 20/02/2015 12:26, Sudarshan Pathak wrote:
Hello everyone,
I have a cluster running with OpenStack. It has 6 OSDs (3 in each of 2
different locations). Each pool has a replication size of 3, with 2 copies
in the primary location and 1 copy at the secondary location.
Everything is running as expected
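(As an aside: when testing this kind of two-site placement, I find it useful
to sanity-check where the copies of each PG actually land. Here is a rough
sketch of such a check, with made-up OSD IDs and location names:)

# Rough sanity check: verify each PG has 2 copies in the primary location
# and 1 in the secondary one. The OSD IDs and locations below are made up;
# real mappings would come from `ceph pg dump` and the CRUSH map.
from collections import Counter

osd_location = {0: "primary", 1: "primary", 2: "primary",
                3: "secondary", 4: "secondary", 5: "secondary"}

pg_mapping = {"2.0": [0, 2, 4], "2.1": [1, 0, 5], "2.2": [3, 4, 1]}

for pg, osds in pg_mapping.items():
    counts = Counter(osd_location[o] for o in osds)
    ok = counts.get("primary") == 2 and counts.get("secondary") == 1
    print("%s -> %s : %s" % (pg, osds, "OK" if ok else "WRONG placement"))
# "2.2" is intentionally wrong (1 primary copy, 2 secondary copies)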