I mapped an image to a system and used blockdev to make it read-only, but it failed:

[root@ceph0 mnt]# blockdev --setro /dev/rbd2
[root@ceph0 mnt]# blockdev --getro /dev/rbd2
0
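For comparison, on an ordinary block device the flag sticks and --getro reports 1 (a sketch; /dev/sdb is an assumed device):

    # blockdev --setro /dev/sdb
    # blockdev --getro /dev/sdb
    1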
It's on Centos 6.4 with kernel 3.10.6, Ceph 0.61.8.
Any idea?
Centos 6.4
Kernel 3.10.6
Ceph 0.61.8
My ceph cluster is deployed on three nodes. One rbd image was created, mapped to one of the three nodes, formatted with ext4, and mounted. When rebooting this node, it hung unmounting the file system on the rbd.
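A common way to avoid the hang (a sketch; the mount point /mnt and device /dev/rbd2 are assumptions) is to unmount and unmap the image explicitly before rebooting:

    umount /mnt
    rbd unmap /dev/rbd2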
My guess about the root cause: When the system
Time skews happen frequently when the systems running monitors are restarted. With an ntp server configured, the time skew between systems will be fixed over some time. But the ceph monitors won't notice it at once if there are no time check messages at that moment, so the ceph status will still report the clock skew for a while.
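For reference, a one-shot manual sync clears the skew faster on CentOS 6 (a sketch; the NTP server pool.ntp.org is an assumption):

    service ntpd stop
    ntpdate pool.ntp.org
    service ntpd start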
[mailto:ceph-users-boun...@lists.ceph.com]
On Behalf Of Da Chun Ng
Sent: Monday, 2 September 2013 04:49
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Is it possible to change the pg number after adding new
osds?
According to the doc, the pg numbers should be enlarged for better read/write balance if the osd number is increased. But it seems the pg number cannot be changed on the fly. It's fixed when the pool is created. Am I right?
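For reference, the pg count of an existing pool can be raised with the pool set commands (a sketch; the pool name data and the target count 256 are assumptions):

    ceph osd pool set data pg_num 256
    ceph osd pool set data pgp_num 256

pg_num splits the placement groups; pgp_num must be raised as well before data actually rebalances onto the new groups.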
Centos 6.4
Ceph Cuttlefish 0.61.7 or 0.61.8
I changed the MTU to 9216 (or 9000), then restarted all the cluster nodes. The whole cluster hung, with messages in the mon log as below:

2013-08-26 15:52:43.028554 7fd83f131700 1 mon.ceph0@0(electing).elector(15) init, last seen epoch 15
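For reference, on CentOS 6 a jumbo MTU is usually made persistent in the interface config (a sketch; eth0 is an assumed interface name, and every host and switch port on the path must be set to the same MTU):

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    ONBOOT=yes
    MTU=9000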
CC: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Poor write/random read/random write performance
On 08/19/2013 12:05 PM, Da Chun Ng wrote:
Thank you! Testing now.
How about pg num? I'm using the default size of 64; I tried (100 * osd_num)/replica_size, but it decreased the performance.
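For concreteness, with 15 OSDs and an assumed replica size of 3, that rule of thumb gives:

    100 * 15 / 3 = 500, rounded up to the next power of two = 512

so a pg_num around 512, rather than the default 64, would be the usual starting point.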
I have a 3-node, 15-OSD ceph cluster setup:
* 15 7200 RPM SATA disks, 5 for each node.
* 10G network.
* Intel(R) Xeon(R) CPU E5-2620 (6 cores) 2.00GHz for each node.
* 64G RAM for each node.
I deployed the cluster with ceph-deploy, and created a new data pool for cephfs. Both the data and metadata
Sorry, I forgot to mention the OS and kernel version. It's Centos 6.4 with kernel 3.10.6, fio 2.0.13.
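For reference, a typical fio random-write run against the mounted file system looks like this (a sketch; the directory, block size, file size, and runtime are assumptions):

    fio --name=randwrite --rw=randwrite --bs=4k --direct=1 --size=1G --directory=/mnt --runtime=60 --time_based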
From: dachun...@outlook.com
To: ceph-users@lists.ceph.com
Date: Mon, 19 Aug 2013 11:28:24 +
Subject: [ceph-users] Poor write/random read/random write performance
I have a 3-node, 15-OSD
Subject: Re: [ceph-users] Poor write/random read/random write performance
On 08/19/2013 06:28 AM, Da Chun Ng wrote:
I have a 3-node, 15-OSD ceph cluster setup:
* 15 7200 RPM SATA disks, 5 for each node.
* 10G network
* Intel(R) Xeon(R) CPU E5-2620 (6 cores) 2.00GHz for each node.
Subject: Re: [ceph-users] Poor write/random read/random write performance
On 08/19/2013 08:59 AM, Da Chun Ng wrote:
Thanks very much, Mark!
Yes, I put the data and journal on the same disk, no SSD in my environment.
My controllers are plain SATA II.
Ok, so in this case the lack of WB cache