Hi,
I would like to ask if anybody knows how to handle the gwcli status below:
- The disk state in gwcli shows as "Unknown".
- Clients are still mounting the "Unknown" disks and seem to be working normally.
Two of the rbd disks show "Unknown" instead of "Online" in gwcli.
A related bug report can be found at: https://tracker.ceph.com/issues/5
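In case it helps narrow this down, a few checks I would start with on the gateway nodes. This is only a sketch; the pool name is a placeholder, and the service names assume a standard ceph-iscsi deployment:

```shell
# List the disks as the gateway sees them:
gwcli ls /disks

# An "Unknown" state is often the rbd-target-api service losing
# contact with the cluster; check both iSCSI gateway services:
systemctl status rbd-target-api rbd-target-gw

# Verify the backing RBD images themselves are still healthy:
rbd ls -l <pool-name>
```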
Regs,
Icy
On Fri, 16 Apr 2021 at 10:44, icy chan wrote:
> Hi,
>
> I had several clusters running as nautilus and pending upgrading to
> octopus.
>
> I am now testing the upgrade steps for ceph clu
Hi,
I have Ceph clusters running Nautilus on CentOS 7, but I would like
to enjoy new features from later versions (e.g. enhanced iSCSI
performance).
Since Ceph Octopus no longer fully supports CentOS 7, I am aiming to
migrate the Ceph clusters from CentOS 7 to CentOS 8 or Ubuntu.
I would
Hi,
I have several clusters running Nautilus, pending an upgrade to
Octopus.
I am now testing the upgrade steps for a Ceph cluster from Nautilus
to Octopus using cephadm adopt in the lab, referring to the link below:
- https://docs.ceph.com/en/octopus/cephadm/adoption/
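The core adoption steps I am following from that guide look roughly like this; the daemon names are placeholders for my all-in-one lab nodes, not a definitive procedure:

```shell
# On each host, after installing cephadm, convert the legacy
# (package-based) daemons into cephadm-managed containers:
cephadm adopt --style legacy --name mon.<hostname>
cephadm adopt --style legacy --name mgr.<hostname>
cephadm adopt --style legacy --name osd.0
```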
Lab environment:
3 all-in-one
the objects
> are getting marked dirty?
>
> Do you see "dirty" entries in ceph df detail?
>
>
> Zitat von icy chan :
>
> > Hi Eugen,
> >
> > Sorry for the missing information. "cached-hdd-cache" is the overlay tier
> > of "cache
min_read_recency_for_promote 1
min_write_recency_for_promote 1 stripe_width 0
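For reference, if these recency settings needed changing, my understanding is they can be set at run time like this (the cache pool name is a placeholder):

```shell
# Promote an object to the cache tier after a single recent read/write:
ceph osd pool set <cache-pool> min_read_recency_for_promote 1
ceph osd pool set <cache-pool> min_write_recency_for_promote 1
```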
Regs,
Icy
On Thu, 28 May 2020 at 18:25, Eugen Block wrote:
> I don't see a cache_mode enabled on the pool, did you set one?
>
>
>
> Zitat von icy chan :
>
> > Hi,
> >
> > I had
Hi,
I have configured a cache tier with a maximum object count of 500k, but no
eviction happens when the object count hits the configured maximum.
Has anyone experienced this issue? What should I do?
$ ceph health detail
HEALTH_WARN 1 cache pools at or near target size
CACHE_POOL_NEAR_FULL 1 cache pools at or
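Some checks that might narrow this down; the cache pool name is a placeholder, and this is only a sketch of what I would try:

```shell
# Confirm the object limit is actually set on the cache pool:
ceph osd pool get <cache-pool> target_max_objects

# Eviction only kicks in relative to cache_target_full_ratio,
# so check that value too:
ceph osd pool get <cache-pool> cache_target_full_ratio

# As a manual workaround, flush dirty objects and evict everything:
rados -p <cache-pool> cache-flush-evict-all
```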
everyone especially Eugen.
Regs,
Icy
On Thu, 21 May 2020 at 08:33, icy chan wrote:
> Hi Eugen,
>
> Thanks for the suggestion. The object count of the rbd pool still stays at
> 430.11K. (All images were deleted 3+ days ago.)
> I will keep monitoring it and post the results here.
ol and it took two days until the number of objects
> cleaned up, so it's nothing unusual. Just watch the number from time
> to time and this thread in case the numbers don't decrease.
>
>
> Zitat von icy chan :
>
> > Hi Eugen,
> >
> > Thanks for your reply.
> &
tory object.
> ---snip---
>
>
> The gateway.conf is your iSCSI gateway configuration stored in the cluster.
>
>
> Zitat von icy chan :
>
> > Hi,
> >
> > The numbers of object counts from "rados df" and "rados ls" are different
> > i
Hi,
The object counts from "rados df" and "rados ls" are different
in my testing environment. I think there may be some zero-byte or unclean
objects, since I removed all rbd images on top of the pool a few days ago.
How can I make it right / find out where those ghost objects are? Or I
should
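To compare the two counts more directly, something like the following might help; the pool name is a placeholder:

```shell
# Count objects as "rados ls" sees them (default namespace only):
rados -p <pool> ls | wc -l

# Compare with the per-pool OBJECTS column here:
rados df

# Objects in non-default namespaces are hidden from a plain "ls";
# listing all namespaces can explain part of the difference:
rados -p <pool> ls --all | wc -l
```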
Hi,
Anyone can help on this?
Regs,
Icy
On Tue, 12 May 2020 at 10:44, icy chan wrote:
> Hi,
>
> I had configured a cache tier with below parameters:
> cache_target_dirty_ratio: 0.1
> cache_target_dirty_high_ratio: 0.7
> cache_target_full_ratio: 0.9
>
> The
Hi,
I have configured a cache tier with the parameters below:
cache_target_dirty_ratio: 0.1
cache_target_dirty_high_ratio: 0.7
cache_target_full_ratio: 0.9
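For reference, these ratios are applied to the cache pool like this (the pool name is a placeholder):

```shell
# Start flushing dirty objects once 10% of the pool is dirty:
ceph osd pool set <cache-pool> cache_target_dirty_ratio 0.1
# Flush more aggressively above 70% dirty:
ceph osd pool set <cache-pool> cache_target_dirty_high_ratio 0.7
# Begin evicting clean objects when the pool is 90% full:
ceph osd pool set <cache-pool> cache_target_full_ratio 0.9
```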
The cache tier did improve performance a lot, and I am aiming to keep
the cache tier at only 10% dirty data. The remaining data (80%)