On Wed, Jun 15, 2022 at 7:23 AM Venky Shankar wrote:
>
> On Tue, Jun 14, 2022 at 10:51 PM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/55974
> > Release Notes - https://github.com/ceph/ceph/pull/46576
> >
> > Seeking approvals
(Oops, I had cc'ed this to the old ceph-users list)
On Wed, Jun 15, 2022 at 1:56 PM Casey Bodley wrote:
>
> On Mon, May 11, 2020 at 10:20 AM Abhishek Lekshmanan
> wrote:
> >
> >
> > The basic premise is for an account to be a container for users, and
> > also related functionality like roles &
I have found that I can only reproduce it on clusters built initially on
pacific. My cluster, which went from nautilus to pacific, does not reproduce
the issue. My working theory is that it is related to rocksdb sharding:
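The sharding suspicion above can be checked per OSD with ceph-bluestore-tool (a hedged sketch: the OSD id and path are placeholders for your deployment, and the sharding spec shown is the Pacific-era default from the `bluestore_rocksdb_cfs` option — verify against your release before resharding):

```shell
# Placeholder OSD id/path; adjust for your deployment.
OSD_PATH=/var/lib/ceph/osd/ceph-0

# The OSD must be stopped so ceph-bluestore-tool can open the store.
systemctl stop ceph-osd@0

# Print the RocksDB sharding in effect; OSDs deployed before pacific
# (and never resharded) report no sharding.
ceph-bluestore-tool --path "$OSD_PATH" show-sharding

# Resharding in place is possible but slow; back up first. The spec
# below is the pacific default (see bluestore_rocksdb_cfs).
SHARDING='m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P'
ceph-bluestore-tool --path "$OSD_PATH" reshard --sharding "$SHARDING"

systemctl start ceph-osd@0
```

Comparing `show-sharding` output between an OSD built on pacific and one carried over from nautilus would confirm or rule out the theory.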
Hi,
"df -h" on the OSD host shows 187G is being used.
"du -sh /" shows 36G. bluefs_buffered_io is enabled here.
What's taking that 150G of disk space — cache?
If so, where is that cache file, and is there any way to configure it smaller?
# free -h
              total        used        free      shared
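For the df-vs-du gap specifically, a hedged diagnostic sketch (generic Linux, nothing Ceph-specific assumed): page cache lives in RAM, not on disk, so it never inflates `df` output; space that `df` sees but `du` cannot is usually either files deleted while still held open, or raw block devices (BlueStore OSDs write to block devices directly, outside any filesystem tree `du` can walk):

```shell
# Page cache shows under "buff/cache" in free; it is RAM, not disk,
# so it cannot account for df's 187G.
free -h

# Compare filesystem-level usage (df) with tree-walk usage (du).
df -h /
du -sxh / 2>/dev/null

# Files deleted while a process still holds them open keep their
# blocks allocated; df counts them, du does not.
lsof +L1 2>/dev/null
```

If `lsof +L1` is empty and no filesystem holds the space, the 150G is likely on the OSDs' raw BlueStore devices rather than anywhere `du` can see.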
On Tue, Jun 14, 2022 at 10:51 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/55974
> Release Notes - https://github.com/ceph/ceph/pull/46576
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs -
On Wed, Jun 15, 2022 at 3:21 PM Frank Schilder wrote:
>
> Hi Eugen,
>
> in essence I would like the property "thick provisioned" to be sticky after
> creation and apply to any other operation that would be affected.
>
> To answer the use-case question: this is a disk image on a pool designed for
So basically, you need the reverse sparsify command, right? ;-)
I could only find several mailing-list threads asking why someone would want
thick provisioning, but it happened eventually. I suppose cloning and
flattening the resulting image is not a desirable workaround.
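For reference, thick provisioning at creation time does exist in the rbd CLI (a hedged sketch; pool and image names are placeholders). `--thick-provision` simply writes zeros across the whole image, which is also why later copy-type operations can silently re-sparsify it:

```shell
# Placeholder pool/image names.
POOL=rbd
IMG=thick-img

# Fully allocate at creation: zeros are written to every object, so
# the pool accounts for the full 100G immediately. At the default
# 4 MiB object size that is 100*1024/4 = 25600 objects.
rbd create "$POOL/$IMG" --size 100G --thick-provision

# Inspect actual (provisioned vs used) allocation afterwards.
rbd du "$POOL/$IMG"
```

There is, as far as I know, no flag that makes the property sticky across snapshot/clone/flatten — hence the "reverse sparsify" wish above.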
Quoting Frank Schilder
On Tue, Jun 14, 2022 at 1:21 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/55974
> Release Notes - https://github.com/ceph/ceph/pull/46576
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs - Venky,
On Tue, Jun 14, 2022 at 7:21 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/55974
> Release Notes - https://github.com/ceph/ceph/pull/46576
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs - Venky,
Hi *,
I finally caught some debug logs during the cache pressure warnings.
In the meantime I had doubled the mds_cache_memory_limit to 128 GB
which decreased the number of cache pressure messages significantly, but
they still appear a few times per day.
Turning on debug logs for a few
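The change described above can be sketched with the central config CLI (a hedged sketch: the option takes bytes, and the 128 GiB value is the one mentioned; `cache status` via `ceph tell` may vary by release — `ceph daemon mds.<id> cache status` on the MDS host is the older form):

```shell
# mds_cache_memory_limit takes bytes; compute 128 GiB explicitly.
LIMIT=$((128 * 1024 * 1024 * 1024))   # 137438953472

# Apply via the central config so all MDS daemons pick it up.
ceph config set mds mds_cache_memory_limit "$LIMIT"

# Watch cache usage afterwards (per-MDS).
ceph tell mds.'*' cache status
```

Note the limit is a soft target — the MDS trims toward it, so raising it mainly buys headroom before clients are asked to release caps.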
Hi,
I have Ceph Pacific 16.2.9 with CephFS and 4 MDS (2 active, 2 standby-replay)
==
RANK  STATE   MDS   ACTIVITY      DNS    INOS   DIRS   CAPS
 0    active  mds3  Reqs: 31 /s   162k   159k   69.5k  177k
 1    active  mds1  Reqs:  4 /s   31.0k  28.7k  10.6k
Hi all,
while setting up a system with cephadm under Quincy, I bootstrapped from host A, added mons on hosts B
and C, and rebooted host A.
Afterwards, ceph seemed to be in a healthy state (no OSDs yet, of course), but my host A
was "offline".
I was afraid I had run into
Whoops, already corrected...
Sent from my iPhone
> On 15 Jun 2022, at 09:51, Konstantin Shalygin wrote:
>
> The link has a typo, I think
>
> https://github.com/ceph/go-ceph/releases/tag/v0.16.0
>
>
> k
> Sent from my iPhone
>
>>> On 14 Jun 2022, at 23:37, John Mulligan
>>> wrote:
The link has a typo, I think
https://github.com/ceph/go-ceph/releases/tag/v0.16.0
k
Sent from my iPhone
> On 14 Jun 2022, at 23:37, John Mulligan wrote:
>
> On Tuesday, June 14, 2022 4:29:59 PM EDT John Mulligan wrote:
>> I'm happy to announce another release of the go-ceph API library.
The other fixes landed in nautilus and later releases.
I suggest you upgrade to nautilus as soon as possible; it is a very stable
release (14.2.22)
k
Sent from my iPhone
> On 14 Jun 2022, at 12:13, tao song wrote:
>
>
> Thanks, we have backported some PRs to 12.2.12, but the problem