Hi all,
Has anyone tried setting a cache tier to forward mode in Luminous 12.2.1? Our
cluster cannot write to the rados pool once the mode is set to forward. We set up
the cache tier in forward mode and then ran rados bench. However, the
throughput reported by rados bench is 0, and iostat shows no disk usa
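For reference, a minimal sketch of that kind of test; the pool names "hot-cache" and "rbd" below are placeholders, not the poster's actual pools:
# switch the cache tier to forward mode (Luminous asks for confirmation)
ceph osd tier cache-mode hot-cache forward --yes-i-really-mean-it
# 30-second write benchmark against the base pool
rados bench -p rbd 30 write
# watch per-disk utilization while the benchmark runs
iostat -x 1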
Hi,
To my understanding, the BlueStore write workflow is:
For a normal big write:
1. Write data to block
2. Update metadata in RocksDB
3. RocksDB writes to memory and block.wal
4. Once a threshold is reached, flush entries from block.wal to block.db
For overwrites and small writes:
1. Write data and metadata to ro
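The split between the big-write and small-write paths is controlled by the deferred-write thresholds; as a hedged sketch (osd.0 is a placeholder), the values in effect on a running OSD can be read like this:
# deferred-write thresholds on a running OSD
ceph daemon osd.0 config show | grep bluestore_prefer_deferred_size
# minimum allocation sizes, which also influence the small-write path
ceph daemon osd.0 config show | grep bluestore_min_alloc_size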
Hello,
We plan to change our FileStore OSDs to the BlueStore backend and are doing a survey now.
Two questions need your help.
1. Is there any way to dump the RocksDB so we can check its content?
2. How can I get the space usage of the db partition? We want to
figure out a reasonable size for
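As a hedged sketch of both points (the OSD path and osd.0 are placeholders, and the first command assumes a ceph-kvstore-tool build that includes the bluestore-kv backend and a stopped OSD):
# 1) list the RocksDB keys of a stopped BlueStore OSD
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 list
# 2) DB/WAL usage as reported by BlueFS on a running OSD
ceph daemon osd.0 perf dump | grep -E '"(db|wal|slow)_(total|used)_bytes"'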
You can get the rpm from here:
https://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/old/2.3.0/CentOS/nfs-ganesha.repo
You have to fix the path mismatch error in the repo file manually.
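A rough sketch of the install flow; the exact baseurl correction depends on the actual directory layout on the server, so treat this as illustrative only:
# fetch the repo file, then edit its baseurl to match the real path
curl -o /etc/yum.repos.d/nfs-ganesha.repo \
  https://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/old/2.3.0/CentOS/nfs-ganesha.repo
# after fixing the path mismatch, install the packages
yum install nfs-ganesha nfs-ganesha-ceph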
> On Aug 20, 2017, at 5:38 AM, Marc Roos wrote:
>
>
>
> Where can you get the nfs-ganesha-ceph rpm? Is
You can check the Linux source code to see the features supported by the kernel client.
e.g. linux 4.13-rc5
(https://github.com/torvalds/linux/blob/v4.13-rc5/drivers/block/rbd.c)
in drivers/block/rbd.c:
/* Feature bits */
#define RBD_FEATURE_LAYERING (1ULL<<0)
#define RBD_FEATURE_STRIPINGV2
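If an image has features the kernel client lacks, one common workaround is to disable them; a hedged sketch, where "rbd/myimage" is a placeholder and the exact feature list depends on your kernel:
# show which features are enabled on an image
rbd info rbd/myimage
# disable features the kernel client does not support, e.g.:
rbd feature disable rbd/myimage object-map fast-diff deep-flatten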
Hi all,
We have a cluster whose fsmap and mdsmap have different values. Also, each MDS
has a different mdsmap epoch. The active MDS has epoch 52, and the other two standby
MDSes have 53 and 55, respectively. Why is the mdsmap epoch different on each MDS?
Our cluster:
ceph 11.2.0
3 nodes. Each node has a mo
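To compare the monitors' view with what each daemon reports, something like the following can be used; "mds.node1" is a placeholder for one of the MDS daemon names:
# fsmap/mdsmap as seen by the monitors
ceph fs dump
# epoch and state as reported by one MDS daemon, via its admin socket
ceph daemon mds.node1 status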
> On Jun 6, 2017, at 11:18 AM, jiajia zhong wrote:
>
> it's very similar to ours. but is there any need to separate the osds for
> different pools? why?
> below's our crushmap.
>
> -98 6.29997 root tier_cache
> -94 1.3 host cephn1-ssd
unt = 0? Looks like
it is used to record the read/write recency only? Sorry for the stupid question,
but I'm trying to understand the cache-tier behavior :)
Thanks,
Ting Yi Lin
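The hit-set and recency settings being asked about can be read per pool; a small sketch, with "hot-cache" as a placeholder cache-pool name:
ceph osd pool get hot-cache hit_set_count
ceph osd pool get hot-cache hit_set_period
ceph osd pool get hot-cache min_read_recency_for_promote
ceph osd pool get hot-cache min_write_recency_for_promote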
> On Mon, Jun 5, 2017 at 23:26, TYLin <mailto:wooer...@gmail.com> wrote:
>> On Jun 5, 201
> On Jun 5, 2017, at 6:47 PM, Christian Balzer wrote:
>
> Personally I avoid odd numbered releases, but my needs for stability
> and low update frequency seem to be far off the scale for "normal" Ceph
> users.
>
> W/o precise numbers of files and the size of your SSDs (which type?) it is
> hard
Hi Christian,
Thanks for your quick reply.
> On Jun 5, 2017, at 2:01 PM, Christian Balzer wrote:
>
>
> Hello,
>
> On Mon, 5 Jun 2017 12:25:25 +0800 TYLin wrote:
>
>> Hi all,
>>
>> We’re using cache-tier with write-back mode but the write thr
Hi all,
We’re using a cache tier in write-back mode, but the write throughput is not as
good as we expected. We use CephFS and create a 20GB file in it. While the data is
being written, we use iostat to collect disk statistics. From iostat, we saw that the ssd
(cache tier) is idle most of the time and the hdd (stor
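A minimal way to reproduce that observation, assuming a CephFS mount at "/mnt/cephfs" (a placeholder path):
# write a 20GB file into the CephFS mount
dd if=/dev/zero of=/mnt/cephfs/testfile bs=1M count=20480 conv=fsync
# in another shell, watch ssd vs hdd utilization
iostat -x 2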
wn and
> pause flags.
> #ceph osd unset noout
> #ceph osd unset norecover
> #ceph osd unset norebalance
> #ceph osd unset nobackfill
> #ceph osd unset nodown
> #ceph osd unset pause
> 6. Check and verify that the cluster is in a healthy state, and verify all the
> clients are a
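After unsetting the flags, a quick way to confirm nothing is still set and the cluster is healthy:
ceph osd dump | grep flags
ceph -s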
n OSD daemons.
> I've done those operations and my cluster is working again.
>
> Friday, April 7, 2017, 13:47 +05:00, from TYLin:
>
> Hi all,
>
> We’re trying to stop and then restart our ceph cluster. Our steps are as
> follows:
>
> stop cluster:
> stop mds
Hi all,
We’re trying to stop and then restart our ceph cluster. Our steps are as
follows:
stop cluster:
stop mds -> stop osd -> stop mon
restart cluster:
start mon -> start osd -> start mds
Our cluster gets stuck with CephFS degraded and the MDS replaying the journal. After
restart
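A commonly used variant of this procedure sets cluster flags before shutdown so OSDs are not marked out while they are down; a hedged sketch, assuming systemd-managed daemons and using the same flags unset in the thread above:
# before shutdown: keep the cluster from reacting while daemons are down
for f in noout norecover norebalance nobackfill nodown pause; do ceph osd set $f; done
# stop in order mds -> osd -> mon (run on each node)
systemctl stop ceph-mds.target
systemctl stop ceph-osd.target
systemctl stop ceph-mon.target
# start in order mon -> osd -> mds
systemctl start ceph-mon.target
systemctl start ceph-osd.target
systemctl start ceph-mds.target
# then clear the flags again
for f in noout norecover norebalance nobackfill nodown pause; do ceph osd unset $f; done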
Hi all,
We have a CephFS whose metadata pool and data pool share the same set of OSDs.
According to the PGs calculation:
(100*num_osds) / num_replica
If we have 56 OSDs, we should set 5120 PGs for each pool to make the data evenly
distributed across all the OSDs. However, if we set the metadata pool an
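To plug the actual numbers into that formula, the replica count and current pg_num of each pool can be checked like this; "cephfs_metadata" and "cephfs_data" are placeholder pool names:
ceph osd pool get cephfs_metadata size
ceph osd pool get cephfs_metadata pg_num
ceph osd pool get cephfs_data size
ceph osd pool get cephfs_data pg_num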
Hi all,
We got 4 PGs in active+remapped state in our cluster. With the pool’s ruleset set to
ruleset 0 we got HEALTH_OK. After we set the ruleset to ruleset 1, 4 PGs are
active+remapped. The test result from crushtool also shows that some bad mappings
exist. Does anyone happen to know the reason?
pool 0 'rbd
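The crushtool check mentioned above can be run roughly like this; the file name is a placeholder, and rule 1 with 3 replicas reflects the setup described in the post rather than anything verified here:
# extract the current crush map and test rule 1 for bad mappings
ceph osd getcrushmap -o crushmap.bin
crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-bad-mappings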