[ceph-users] Trouble about reading gwcli disks state

2022-05-16 Thread icy chan
Hi, I would like to ask if anybody knows how to handle the gwcli status below. - Disk state in gwcli shows as "Unknown". - Clients are still mounting the "Unknown" disks and seem to be working normally. Two of the rbd disks show "Unknown" instead of "Online" in gwcli.
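
For anyone hitting the same symptom, a minimal sketch of re-checking the state from a gateway node, assuming a standard ceph-iscsi deployment (the pool and image names are placeholders):

$ gwcli ls                          # dump the full gateway tree, including each disk's state
$ systemctl status rbd-target-api   # the API service gwcli talks to on each gateway
$ rbd -p rbd info disk_1            # hypothetical image name; confirm the backing RBD image is healthy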

[ceph-users] Re: ceph-iscsi issue after upgrading from nautilus to octopus

2021-10-05 Thread icy chan
A related bug report found at: https://tracker.ceph.com/issues/5 Regs, Icy On Fri, 16 Apr 2021 at 10:44, icy chan wrote: > Hi, > I had several clusters running Nautilus and pending upgrade to Octopus. > I am now testing the upgrade steps for the Ceph cluster from Nautilus to Octopus...

[ceph-users] Can single Ceph cluster run on various OS families

2021-07-27 Thread icy chan
Hi, I have Ceph clusters running Nautilus on CentOS 7, but I would like to enjoy new features from later versions (e.g. enhanced iSCSI performance). Since Ceph Octopus no longer fully supports CentOS 7, I plan to migrate the Ceph cluster from CentOS 7 to CentOS 8 or Ubuntu. I would like to know whether a single cluster can run across different OS families during the migration.
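
Ceph compatibility is tied to daemon versions rather than the host OS, so a host-by-host migration generally means running a mixed-OS cluster for a while. A minimal sketch for keeping an eye on that (the OSD id is a placeholder):

$ ceph versions                          # per-daemon version report; versions should converge when done
$ ceph osd metadata 0 | grep '"distro'   # shows which distro the host of a given OSD is running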

[ceph-users] ceph-iscsi issue after upgrading from nautilus to octopus

2021-04-15 Thread icy chan
Hi, I have several clusters running Nautilus and pending upgrade to Octopus. I am now testing the upgrade steps for a Ceph cluster from Nautilus to Octopus using cephadm adopt in a lab, following the link below: - https://docs.ceph.com/en/octopus/cephadm/adoption/ Lab environment: 3 all-in-one
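
For context, the adoption flow from that page converts each legacy daemon into a cephadm-managed container, roughly as follows (the hostname and OSD id are placeholders):

$ cephadm ls                                       # inventory the legacy daemons on this host
$ cephadm adopt --style legacy --name mon.node1    # convert the monitor
$ cephadm adopt --style legacy --name mgr.node1    # convert the manager
$ cephadm adopt --style legacy --name osd.0        # repeat for each OSD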

[ceph-users] Re: Cache pools at or near target size but no evict happen

2020-05-31 Thread icy chan
> ...the objects are getting marked dirty? Do you see "dirty" entries in ceph df detail? > Zitat von icy chan: > > Hi Eugen, > > Sorry for the missing information. "cached-hdd-cache" is the overlay tier of "cache...
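
For readers following along: releases of that era print a DIRTY column in ceph df detail for cache pools, so the question can be checked directly (pool name taken from the thread):

$ ceph df detail | grep -E 'DIRTY|cached-hdd-cache'   # DIRTY counts objects not yet flushed to the base tier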

[ceph-users] Re: Cache pools at or near target size but no evict happen

2020-05-28 Thread icy chan
min_read_recency_for_promote 1 min_write_recency_for_promote 1 stripe_width 0 Regs, Icy On Thu, 28 May 2020 at 18:25, Eugen Block wrote: > I don't see a cache_mode enabled on the pool, did you set one? > Zitat von icy chan: > > Hi, > > I had
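
A minimal sketch of checking the cache mode and, if it is missing, setting one (pool name from the thread; writeback is only one of the available modes):

$ ceph osd dump | grep cache_mode                       # tier pools report their cache_mode here
$ ceph osd tier cache-mode cached-hdd-cache writeback   # enable writeback on the cache tier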

[ceph-users] Cache pools at or near target size but no evict happen

2020-05-28 Thread icy chan
Hi, I have configured a cache tier with a max object count of 500k, but no evict happens when the object count hits the configured maximum. Has anyone experienced this issue? What should I do? $ ceph health detail HEALTH_WARN 1 cache pools at or near target size CACHE_POOL_NEAR_FULL 1 cache pools at or
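
A few things worth checking in this situation, sketched against the cached-hdd-cache pool named elsewhere in the thread (the last command is a heavy one-off workaround, not a fix):

$ ceph osd pool get cached-hdd-cache target_max_objects      # confirm the 500k limit is actually set
$ ceph osd pool get cached-hdd-cache cache_target_full_ratio
$ rados -p cached-hdd-cache cache-flush-evict-all            # manually flush dirty objects and evict clean ones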

[ceph-users] Re: Mismatched object counts between "rados df" and "rados ls" after rbd images removal

2020-05-25 Thread icy chan
...everyone, especially Eugen. Regs, Icy On Thu, 21 May 2020 at 08:33, icy chan wrote: > Hi Eugen, > Thanks for the suggestion. The object count of the rbd pool still stays at 430.11K (all images were deleted 3+ days ago). > I will keep monitoring it and post the results here.

[ceph-users] Re: Mismatched object counts between "rados df" and "rados ls" after rbd images removal

2020-05-20 Thread icy chan
...pool and it took two days until the number of objects was cleaned up, so it's nothing unusual. Just watch the number from time to time and update this thread in case the numbers don't decrease. > Zitat von icy chan: > > Hi Eugen, > > Thanks for your reply...
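
A trivial way to do that periodic watching, with a hypothetical pool name:

$ watch -n 60 'rados df | grep rbd'   # "rbd" is a placeholder; the OBJECTS count should trend downward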

[ceph-users] Re: Mismatched object counts between "rados df" and "rados ls" after rbd images removal

2020-05-20 Thread icy chan
...tory object. > ---snip--- > The gateway.conf is your iSCSI gateway configuration stored in the cluster. > Zitat von icy chan: > > Hi, > > The numbers of object counts from "rados df" and "rados ls" are different > > in...
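
For anyone looking for that object: by default ceph-iscsi keeps its configuration as a single RADOS object named gateway.conf in the rbd pool (the pool is configurable in iscsi-gateway.cfg), so it shows up in a plain listing:

$ rados -p rbd ls | grep gateway.conf   # the persistent ceph-iscsi configuration object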

[ceph-users] Mismatched object counts between "rados df" and "rados ls" after rbd images removal

2020-05-18 Thread icy chan
Hi, the numbers of object counts from "rados df" and "rados ls" are different in my testing environment. I think they may be zero-byte or unclean objects, since I removed all rbd images on top of the pool a few days ago. How can I make it right / find out where those ghost objects are? Or should I
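
A minimal reproduction of the comparison, with "rbd" as a placeholder pool name (note that a plain rados ls does not list snapshot clones, which is one common source of such a mismatch):

$ rados df | grep rbd        # OBJECTS column: per-pool count as tracked by the OSDs
$ rados -p rbd ls | wc -l    # head objects visible to a plain listing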

[ceph-users] Re: Need help on cache tier monitoring

2020-05-17 Thread icy chan
Hi, can anyone help with this? Regs, Icy On Tue, 12 May 2020 at 10:44, icy chan wrote: > Hi, > I had configured a cache tier with the parameters below: > cache_target_dirty_ratio: 0.1 > cache_target_dirty_high_ratio: 0.7 > cache_target_full_ratio: 0.9 > The

[ceph-users] Need help on cache tier monitoring

2020-05-11 Thread icy chan
Hi, I have configured a cache tier with the parameters below: cache_target_dirty_ratio: 0.1 cache_target_dirty_high_ratio: 0.7 cache_target_full_ratio: 0.9 The cache tier did improve performance considerably, and I aim to keep the cache tier with only 10% dirty data. The remaining data (80%)
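
Worth noting when monitoring these: the three ratios are interpreted as fractions of target_max_bytes / target_max_objects, so one of those targets must be set for the tiering agent to act at all. A minimal sketch for reading back the live values (pool name taken from the related threads):

$ ceph osd pool get cached-hdd-cache cache_target_dirty_ratio        # flushing starts above this fraction
$ ceph osd pool get cached-hdd-cache cache_target_dirty_high_ratio   # flushing becomes more aggressive here
$ ceph osd pool get cached-hdd-cache cache_target_full_ratio         # eviction of clean objects starts here
$ ceph osd pool get cached-hdd-cache target_max_objects              # the ratios are fractions of this target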