[ceph-users] Mysql performance on CephFS vs RBD

2017-04-29 Thread Babu Shanmugam
Hi, I did some basic experiments with MySQL and measured the time taken by a set of operations on CephFS and RBD. The RBD measurements were taken on a 1GB RBD disk with an ext4 filesystem. Following are my observations; the times listed below are in seconds. *Plain file system*
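
Below is a minimal sketch, not from the original post, of how a 1GB RBD image like the one described could be created with the python-rbd bindings; the pool name 'rbd' and image name 'mysql-test' are my own placeholders.

    # Minimal sketch (assumptions: pool 'rbd', image name 'mysql-test');
    # the original post does not say how the image was created.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')          # assumed pool name
        try:
            # 1 GiB image, matching the 1GB disk size mentioned above
            rbd.RBD().create(ioctx, 'mysql-test', 1 * 1024**3)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

The image would then presumably be mapped on the client (e.g. with 'rbd map'), formatted with ext4, and mounted as the MySQL data directory before taking timings like those above.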

Re: [ceph-users] Ceph program memory usage

2017-04-29 Thread Roger Brown
How interesting! Thank you for that.

On Sat, Apr 29, 2017 at 4:04 PM Bryan Henderson wrote:
> A few months ago, I posted here asking why the Ceph program takes so much
> memory (virtual, real, and address space) for what seems to be a simple
> task.
> Nobody knew, but I

[ceph-users] Ceph program memory usage

2017-04-29 Thread Bryan Henderson
A few months ago, I posted here asking why the Ceph program takes so much memory (virtual, real, and address space) for what seems to be a simple task. Nobody knew, but I have since done extensive research, have the answer now, and thought I would publish it here. All it takes to do a Ceph
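
For readers who want to see the kinds of figures Bryan mentions for themselves, here is a small helper of my own (not from his post) that prints the virtual size, resident set, and peak virtual size of a process, read straight from procfs:

    # Print VmPeak/VmSize/VmRSS for a PID (defaults to this process);
    # the field names are standard Linux procfs, the selection is mine.
    import sys

    def mem_figures(pid):
        with open(f'/proc/{pid}/status') as f:
            for line in f:
                if line.startswith(('VmPeak', 'VmSize', 'VmRSS')):
                    print(line.rstrip())

    mem_figures(sys.argv[1] if len(sys.argv) > 1 else 'self')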

Re: [ceph-users] LRC low level plugin configuration can't express maximal erasure resilience

2017-04-29 Thread Loic Dachary
Hi Matan,

On 04/29/2017 10:47 PM, Matan Liram wrote:
> The LRC low-level plugin configuration of the following example copes with
> only a single erasure, even though it could easily protect against two.
>
> In case I use the layers:
> 1: DDc_ _
> 2: DDD_ _ _ _c_
> 3: _ _ _DDD_ _c
>
> Neither of the rules

[ceph-users] LRC low level plugin configuration can't express maximal erasure resilience

2017-04-29 Thread Matan Liram
The LRC low-level plugin configuration of the following example copes with only a single erasure, even though it could easily protect against two.

In case I use the layers:
1: DDc_ _
2: DDD_ _ _ _c_
3: _ _ _DDD_ _c

Neither of the rules protects against 2 failures. However, if we calculate the XOR of the two local
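
The XOR argument can be checked with a toy example of my own (the chunk values and helper names are made up, not from the thread): with local parities p1 = d1^d2 and p2 = d3^d4, their XOR acts as a global parity over all four data chunks, so a double failure inside one local group becomes repairable.

    # Toy check of the XOR-of-local-parities argument (my own example).
    def x(a, b):
        return bytes(i ^ j for i, j in zip(a, b))

    d1, d2, d3, d4 = b'\x01', b'\x02', b'\x04', b'\x08'   # data chunks
    p1, p2 = x(d1, d2), x(d3, d4)                         # local parities
    q = x(p1, p2)                                         # equals d1^d2^d3^d4

    # Erase d1 AND p1 -- two failures the local group alone cannot fix:
    rec_d1 = x(x(q, d2), x(d3, d4))   # d1 = q ^ d2 ^ d3 ^ d4
    rec_p1 = x(rec_d1, d2)            # rebuild the lost local parity
    assert (rec_d1, rec_p1) == (d1, p1)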

Re: [ceph-users] Why is cls_log_add logging so much?

2017-04-29 Thread Willem Jan Withagen
On 29-04-17 00:16, Gregory Farnum wrote:
> On Tue, Apr 4, 2017 at 2:49 AM, Jens Rosenboom wrote:
>> On a busy cluster, I'm seeing a couple of OSDs logging millions of
>> lines like this:
>>
>> 2017-04-04 06:35:18.240136 7f40ff873700 0
>> cls/log/cls_log.cc:129: storing
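
As a rough way to size the problem, a script along these lines (my own sketch; the log path pattern assumes a default package install) counts the offending cls_log lines per OSD log:

    # Count cls_log entries per OSD log file (assumed default paths).
    import glob
    from collections import Counter

    counts = Counter()
    for path in glob.glob('/var/log/ceph/ceph-osd.*.log'):
        with open(path, errors='replace') as f:
            counts[path] = sum(1 for line in f if 'cls/log/cls_log.cc' in line)

    for path, n in counts.most_common():
        print(f'{n:>10}  {path}')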

[ceph-users] Failed to read JournalPointer - MDS error (mds rank 0 is damaged)

2017-04-29 Thread Martin B Nielsen
Hi, We're using Ceph 10.2.5 and CephFS. We had a weird monitor (mon0r0), which was also the current active MDS node, suffer some sort of meltdown. The monitor node called elections on and off over ~1 hour, sometimes with 5-10 min between them. On every occasion the MDS also went through a replay, reconnect, rejoin => active
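
For anyone hitting the same "mds rank 0 is damaged" state, a first diagnostic step (my own hedged sketch, not advice from this thread; check the CephFS disaster-recovery documentation for your release before acting) is to inspect the journal and, only once it is healthy, clear the damaged flag:

    # Hedged sketch only: these commands exist in Jewel-era Ceph, but
    # whether they are appropriate depends on the actual damage -- verify
    # against the disaster-recovery docs first.
    import subprocess

    def run(*cmd):
        print('$', ' '.join(cmd))
        subprocess.run(cmd, check=True)

    run('cephfs-journal-tool', 'journal', 'inspect')  # report journal integrity
    # Only after the journal is known to be repaired/healthy:
    # run('ceph', 'mds', 'repaired', '0')             # clear the damaged rank flag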