Re: [ceph-users] MDS crash when client goes to sleep

2014-03-22 Thread Yan, Zheng
On Sun, Mar 23, 2014 at 11:47 AM, Sage Weil wrote: > Hi, I looked at this a bit earlier and wasn't sure why we would be getting a remote_reset event after a sleep/wake cycle. The patch should fix the crash, but I'm a bit worried something is not quite right on the client side, too...

Re: [ceph-users] Error initializing cluster client: Error

2014-03-22 Thread Kyle Bader
> I have two nodes with 8 OSDs on each. First node running 2 monitors on different virtual machines (mon.1 and mon.2), second node running mon.3. After several reboots (I have tested power failure scenarios) "ceph -w" on node 2 always fails with the message: > root@bes-mon3:~# ceph --verbose -w
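A useful first check in this situation (the reply is truncated before any suggestion, so treat this as a sketch): each monitor's admin socket can be queried locally even when the cluster has no quorum, unlike "ceph -w". Assuming the default socket path and the monitor id from the report:

  # Ask mon.3 for its own view of the quorum; this works even when
  # "ceph -w" cannot initialize a cluster connection.
  ceph --admin-daemon /var/run/ceph/ceph-mon.3.asok mon_status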

Re: [ceph-users] MDS crash when client goes to sleep

2014-03-22 Thread Sage Weil
Hi, I looked at this a bit earlier and wasn't sure why we would be getting a remote_reset event after a sleep/wake cycle. The patch should fix the crash, but I'm a bit worried something is not quite right on the client side, too... sage On Sun, 23 Mar 2014, Yan, Zheng wrote: > thank you for

Re: [ceph-users] OSD Restarts cause excessively high load average and "requests are blocked > 32 sec"

2014-03-22 Thread Quenten Grasso
Hi Kyle, Thanks, I turned on debug ms = 1 and debug osd = 10 and restarted osd.54; here's the log for that one. ceph-osd.54.log.bz2 http://www67.zippyshare.com/v/99704627/file.html Strace of osd.53: strace.zip http://www43.zippyshare.com/v/17581165/file.html Thanks, Quenten -Original Messa
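For reference, those debug levels can also be raised on a live OSD without restarting it; a sketch using standard commands (the osd id matches the one above):

  # inject higher messenger/OSD debug levels into the running daemon
  ceph tell osd.54 injectargs '--debug-ms 1 --debug-osd 10'

  # or persist them in ceph.conf for all OSDs
  [osd]
  debug ms = 1
  debug osd = 10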

Re: [ceph-users] MDS crash when client goes to sleep

2014-03-22 Thread Yan, Zheng
Thank you for reporting this. The patch below should fix the issue.
---
diff --git a/src/mds/MDS.cc b/src/mds/MDS.cc
index 57c7f4a..6b53c14 100644
--- a/src/mds/MDS.cc
+++ b/src/mds/MDS.cc
@@ -2110,6 +2110,7 @@ bool MDS::ms_handle_reset(Connection *con)
   if (session->is_closed()) {
     dout(3) <<

Re: [ceph-users] why object can't be recovered when delete one replica

2014-03-22 Thread Kyle Bader
> I uploaded a file through the swift API, then manually deleted it from the “current” directory on the secondary OSD; why can't the object be recovered? > If I delete it on the primary OSD, the object is deleted directly from the pool .rgw.bucket and can't be recovered from the secondary OSD. > Do
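For context, the standard way to detect and heal a replica that was removed behind Ceph's back is to scrub and repair the owning placement group; a sketch, with the object name and PG id as placeholders:

  # find which PG (and OSDs) hold the object, then repair that PG
  ceph osd map .rgw.bucket my-object
  ceph pg deep-scrub 3.7f
  ceph pg repair 3.7f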

Re: [ceph-users] Mounting with dmcrypt still fails

2014-03-22 Thread Kyle Bader
> ceph-disk-prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb > ceph-disk: Error: Device /dev/sdb2 is in use by a device-mapper mapping (dm-crypt?): dm-0 It sounds like device-mapper still thinks it's using the volume; you might be able to
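One plausible cleanup path, since the reply is truncated before the actual suggestion (the mapping name is a placeholder):

  # see which device-mapper mappings exist and what they sit on
  dmsetup ls
  dmsetup table
  # tear down the stale dm-crypt mapping so /dev/sdb2 is released
  cryptsetup remove <mapping-name>   # or luksClose, if it is a LUKS mapping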

Re: [ceph-users] osd rebalance question

2014-03-22 Thread Kyle Bader
> I need to add an extra server, which hosts several osds, to a running ceph cluster. While adding the osds, ceph would not automatically modify the ceph.conf, so I manually modified the ceph.conf. > And restarted the whole ceph cluster with the command 'service ceph -a restart'. > I just confuse
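Worth noting: a whole-cluster 'service ceph -a restart' is not required when adding OSDs; starting just the new daemons is enough. A sketch of the kind of ceph.conf entry and per-daemon start involved (id and hostname invented):

  [osd.16]
  host = newnode

  # on newnode, start only the new daemon
  service ceph start osd.16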

Re: [ceph-users] OSD + FlashCache vs. Cache Pool for RBD...

2014-03-22 Thread Kyle Bader
> One downside of the above arrangement: I read that support for mapping newer-format RBDs is only present in fairly recent kernels. I'm running Ubuntu 12.04 on the cluster at present with its stock 3.2 kernel. There is a PPA for the 3.11 kernel used in Ubuntu 13.10, but if you're looking
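If stuck on the stock 3.2 kernel, one common workaround is to keep any kernel-mapped images in the old format, which older krbd clients can map; a sketch (pool and image names invented; older releases spell the flag --format 1):

  # create a format-1 image so a 3.2-era kernel client can map it
  rbd create rbd/test-image --size 10240 --image-format 1
  rbd map rbd/test-image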

Re: [ceph-users] What's the difference between using /dev/sdb and /dev/sdb1 as osd?

2014-03-22 Thread Kyle Bader
> If I want to use a disk dedicated to an osd, can I just use something like /dev/sdb instead of /dev/sdb1? Is there any negative impact on performance? You can pass /dev/sdb to ceph-disk-prepare and it will create two partitions, one for the journal (raw partition) and one for the data volume (de
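To illustrate the whole-disk path (device name as in the question):

  # partitions /dev/sdb itself: a data partition plus a raw journal partition
  ceph-disk-prepare --cluster ceph --fs-type xfs /dev/sdb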

Re: [ceph-users] Error initializing cluster client: Error

2014-03-22 Thread Pavel V. Kaygorodov
> Do you keep the config files in sync? The ceph.conf is the same on all servers, and the keys don't differ either. I have checked just now and ceph -w is working fine on all hosts. Mysterious :-/ Pavel. > On 22 March 2014 at 16:11, "Pavel V. Kaygorodov" wrote: > Hi! > I have two nodes with 8 O
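A quick way to verify that kind of sync, for what it's worth (the second hostname is taken from the thread; adjust to taste):

  # compare config and keyring checksums across nodes
  md5sum /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
  ssh bes-mon3 md5sum /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring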

Re: [ceph-users] Error initializing cluster client: Error

2014-03-22 Thread Ирек Фасихов
Do you keep the config files in sync? On 22 March 2014 at 16:11, "Pavel V. Kaygorodov" wrote: > Hi! > I have two nodes with 8 OSDs on each. First node running 2 monitors on different virtual machines (mon.1 and mon.2), second node running mon.3. After several reboots (I have tested power fail

[ceph-users] Error initializing cluster client: Error

2014-03-22 Thread Pavel V. Kaygorodov
Hi! I have two nodes with 8 OSDs on each. First node running 2 monitors on different virtual machines (mon.1 and mon.2), second node running mon.3. After several reboots (I have tested power failure scenarios) "ceph -w" on node 2 always fails with the message: root@bes-mon3:~# ceph --verbose -w Error
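Worth noting for this thread: with mon.1 and mon.2 running as VMs on the same physical node, a failure of that node leaves only mon.3 alive, which is 1 monitor out of 3 and therefore no quorum, so clients cannot initialize a connection, matching the error above. A sketch of the implied monitor layout (hostnames invented apart from bes-mon3):

  [mon.1]
  host = node1-vm1   # VM on physical node 1
  [mon.2]
  host = node1-vm2   # VM on physical node 1 -- shares node 1's fate
  [mon.3]
  host = bes-mon3    # node 2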