On Monday, January 14, 2013 08:51:57 Alexis GÜNST HORN wrote:
In the end, the client mount point becomes unresponsive, and the only
way to recover is a forced reboot.
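Before falling back to a hard reboot, a lazy unmount sometimes frees the shell, and the kernel client's debugfs view can show what the mount is waiting on. A sketch (the mount point /mnt/cephfs is illustrative, and on a hard hang these commands may themselves block):

```shell
MNT=/mnt/cephfs   # illustrative mount point, not from the original report

# Lazy unmount: detach the mount point immediately, defer cleanup of
# open references. Often enough to regain a usable shell.
umount -l "$MNT" 2>/dev/null || echo "lazy unmount of $MNT failed"

# With root and debugfs mounted, the kernel CephFS client lists its
# in-flight MDS requests here; entries that never drain suggest an
# unresponsive or overloaded MDS rather than a client-side bug.
for f in /sys/kernel/debug/ceph/*/mdsc; do
    [ -e "$f" ] && cat "$f" || :
done
```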
I am going to throw this out there, as I've seen something similar, but not
with Ceph. Back in 2005-ish I was experimenting with ATA over
Hi,
On 01/14/2013 08:51 AM, Alexis GÜNST HORN wrote:
Hello,
I have a 0.56.1 Ceph cluster up and running. RBD is working fine, but
I'm having some trouble with CephFS.
Here is my config:
- only 2 OSD nodes, with 10 disks each + an SSD for the journal
- the OSD hosts are gigabit (public) + gigabit (private)
Hello,
Thanks for your answer.
Both the OSDs and the client run CentOS 6.3 with a 3.7.1 kernel.
And yes, the script creates empty loop devices of different sizes.
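For reference, a script along those lines might look like the following. This is a hypothetical reconstruction, not the OP's actual script; the file names and sizes are illustrative. It creates sparse backing files of different sizes, which is the usual way to get "empty" loop devices without consuming real disk space:

```shell
# Create sparse backing files of different sizes (names/sizes illustrative).
# truncate -s allocates no blocks until data is written.
for size in 1G 2G 4G; do
    truncate -s "$size" "/tmp/backing-$size.img"
done
ls -l /tmp/backing-*.img

# Attaching them as loop devices requires root, e.g.:
#   losetup --find --show /tmp/backing-1G.img
```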
The MDS and MON are on one of the 2 OSD hosts.
I already tried putting them on a separate server.
I know that CephFS is not considered stable yet,
Hello,
I have a 0.56.1 Ceph cluster up and running. RBD is working fine, but
I'm having some trouble with CephFS.
Here is my config:
- only 2 OSD nodes, with 10 disks each + an SSD for the journal
- the OSD hosts are gigabit (public) + gigabit (private)
- one client, which is 10 gigabit
The client mounts a
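The gigabit public / gigabit private split described in the config above is normally declared in ceph.conf. A minimal sketch, with placeholder subnets that would need to be adjusted to the actual networks:

```ini
; placeholder subnets, not from the original report
[global]
    public network  = 192.168.0.0/24   ; client-facing traffic
    cluster network = 10.0.0.0/24      ; OSD replication/heartbeat traffic
```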