Hi all,
Our MDS is still fine today. Thanks everyone!
Regards,
Bazli
-----Original Message-----
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mohd Bazli Ab Karim
Sent: Monday, January 19, 2015 11:38 AM
To: John Spray
Cc: ceph-users@lists.ceph.com; ce
Hi John,
Good shot!
I've increased osd_max_write_size to 1 GB (still smaller than the osd journal
size), and the MDS has now been running fine for an hour.
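For reference, a quick way to double-check the value a running OSD has actually picked up (assuming the default admin socket path on the OSD host) is:

  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_max_write_size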
Now checking whether the fs is still accessible. Will update from time to time.
Thanks again, John.
Regards,
Bazli
-----Original Message-----
From
On Sat, Jan 17, 2015 at 11:47 AM, Lindsay Mathieson wrote:
> On Fri, 16 Jan 2015 08:48:38 AM Wido den Hollander wrote:
>> In the Ceph world, 0.72.2 is ancient and pretty old. If you want to play with
>> CephFS, I recommend you upgrade to 0.90 and also use at least kernel 3.18.
>
> Does the kernel version matter if you are using ceph-fuse?
On Fri, 16 Jan 2015 08:48:38 AM Wido den Hollander wrote:
> In the Ceph world, 0.72.2 is ancient and pretty old. If you want to play with
> CephFS, I recommend you upgrade to 0.90 and also use at least kernel 3.18.
Does the kernel version matter if you are using ceph-fuse?
It has just been pointed out to me that you can also work around this
issue on your existing system by increasing the osd_max_write_size
setting on your OSDs (default 90 MB) to something higher, but still
smaller than your osd journal size. That might get you on a path to
having an accessible filesystem.
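For anyone trying this, a minimal sketch of how the setting might be applied (the 1024 MB value is only illustrative; keep it below your osd journal size):

  # ceph.conf on the OSD hosts -- persists across restarts
  [osd]
      osd max write size = 1024

  # or inject into the running OSDs without a restart
  ceph tell osd.* injectargs '--osd_max_write_size 1024'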
Agreed. I was about to upgrade to 0.90, but have postponed it due to this error.
Any chance for me to recover it first before upgrading?
Thanks, Wido.
Regards,
Bazli
-----Original Message-----
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Wido den Hollander
Hmm, upgrading should help here, as the problematic data structure
(anchortable) no longer exists in the latest version. I haven't
checked, but hopefully we don't try to write it during upgrades.
The bug you're hitting is more or less the same as a similar one we
have with the sessiontable in the
Dear Ceph-Users, Ceph-Devel,
Apologies if you receive a double post of this email.
I am running a Ceph cluster, version 0.72.2, with one MDS at the moment (in fact
there are 3, but 2 are down and only 1 is up).
Plus, I have one CephFS client mounted to it.
Now, the MDS always gets aborted after recovery and after being active for a while.
On 01/16/2015 08:37 AM, Mohd Bazli Ab Karim wrote:
> Dear Ceph-Users, Ceph-Devel,
>
> Apologies if you receive a double post of this email.
>
> I am running a Ceph cluster, version 0.72.2, with one MDS at the moment (in fact
> there are 3, but 2 are down and only 1 is up).
> Plus, I have one CephFS client mounted to it.