On 12/16/19 2:42 PM, Lars Täuber wrote:
There seems to be a bug in nautilus.
I'm thinking about increasing the number of PGs for the data pool again, because
the average number of PGs per OSD is now 76.8.
What do you say?
It may be a bug in Nautilus, or it may be in osdmaptool.
Please upload your binary
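The "binary" here presumably means the cluster's binary osdmap; one way to
export it, using the osdmap.om filename that appears later in this thread:

$ ceph osd getmap -o osdmap.om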
Mon, 16 Dec 2019 15:17:37 +0700
Konstantin Shalygin ==> Lars Täuber :
> On 12/16/19 2:42 PM, Lars Täuber wrote:
> > There seems to be a bug in nautilus.
> >
> > I'm thinking about increasing the number of PGs for the data pool again,
> > because the average number of PGs per OSD is now 76.8.
> > Wh
On 12/16/19 3:25 PM, Lars Täuber wrote:
Here it comes.
Maybe there is some bug in osdmaptool: when the number of defined pools is less
than one, do_upmap is not actually executed.
Try like this:
`osdmaptool osdmap.om --upmap upmap.sh --upmap-pool=cephfs_data
--upmap-pool=cephfs_metadata --upmap-deviation=0 --upmap
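A complete invocation along these lines might look like the following;
--upmap-max=100 is an assumed value here, and the exact flag set depends on
the osdmaptool release:

$ osdmaptool osdmap.om --upmap upmap.sh \
      --upmap-pool=cephfs_data --upmap-pool=cephfs_metadata \
      --upmap-deviation=0 --upmap-max=100
$ source upmap.sh   # runs the generated 'ceph osd pg-upmap-items ...' commands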
Mon, 16 Dec 2019 15:38:30 +0700
Konstantin Shalygin ==> Lars Täuber :
> On 12/16/19 3:25 PM, Lars Täuber wrote:
> > Here it comes.
>
> Maybe there is some bug in osdmaptool: when the number of defined pools is
> less than one, do_upmap is not actually executed.
>
> Try like this:
>
> `osdmaptool osdmap.om --upmap u
Hello,
I'd recommend doing this with just one OSD to contrast and compare,
ideally of course with an additional node (but you're unlikely to have
that).
In my (very specific) use case, an older cluster running Jewel and filestore
with collocated journals, a 3-node SSD pool with 5 SSDs each sees 2-3%
Yes, CephFS makes no attempt to maintain atime. If that's something
you care about you should make a ticket and a case for why it's
important. :)
On Sat, Dec 14, 2019 at 5:42 AM Oliver Freyermuth
wrote:
>
> Hi together,
>
> I had a look at ceph-fuse code and if I read it correctly, it does indeed
On Fri, Dec 13, 2019 at 10:36 AM Dan van der Ster wrote:
>
> Hi all,
>
> We have said in the past that an EC pool should have min_size=k+1, for
> the same reasons that a replica 3 pool needs min_size=2.
> And we've heard several stories about replica 3, min_size=1 leading to
> incomplete PGs.
>
>
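For the min_size=k+1 recommendation quoted above, a quick way to check and,
if needed, raise it on an existing EC pool (the pool name "ecpool" and a
k=4, m=2 profile are assumed here purely for illustration):

$ ceph osd pool get ecpool min_size
$ ceph osd pool set ecpool min_size 5   # k+1 for a k=4, m=2 profile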
Hi Gregory,
I saw ceph -s showing 'snaptrim'(?), but I still have these 'removed_snaps'
listed on this pool (also on other pools; I don't remember creating/deleting
snapshots there). A 'ceph tell mds.c scrub start /test/ recursive repair' did
not remove them. Can I remove these, should I, and if so, how?
On Mon, Dec 16, 2019 at 11:34 AM Marc Roos wrote:
>
>
>
> Hi Gregory,
>
> I saw ceph -s showing 'snaptrim'(?), but I still have these
> 'removed_snaps' listed on this pool (also on other pools, I don't
> remember creating/deleting them) A 'ceph tell mds.c scrub start /test/
> recursive repair
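For anyone wanting to look at those entries, the removed_snaps intervals are
visible in the pool details; a minimal way to inspect them (the exact output
format varies by release):

$ ceph osd pool ls detail | grep removed_snaps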
Hi Thomas,
do you have the backward-compatible weight-set still active?
Try removing it with:
$ ceph osd crush weight-set rm-compat
I'm unsure if it solves my similar problem, but the progress looks very
promising.
Cheers,
Lars
Hi!
Is there a way to list all snapshots existing in (a subdir of) CephFS?
I can't use the find command to look for the ".snap" dirs.
I'd like to remove certain (or all) snapshots within a CephFS. But how do I
find them?
Thanks,
Lars
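Since CephFS ".snap" directories are virtual and are not returned by a normal
directory listing, find cannot discover them directly. One hedged workaround
is to probe the .snap directory of every directory in the tree; the mount
point /mnt/cephfs below is only a placeholder:

$ find /mnt/cephfs -type d 2>/dev/null | while read -r d; do
      ls -A "$d/.snap" 2>/dev/null | sed "s|^|$d/.snap/|"
  done

Note that snapshots taken on a parent directory may also show up in child
.snap directories under underscore-prefixed names.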
Hi,
can you please advise how to check whether a weight-set is active, and which one?
Regards
Thomas
On 16.12.2019 at 14:04, Lars Täuber wrote:
> Hi Thomas,
>
> do you have the backward-compatible weight-set still active?
> Try removing it with:
> $ ceph osd crush weight-set rm-compat
>
> I'm unsure if it
Mon, 16 Dec 2019 14:42:49 +0100
Thomas Schneider <74cmo...@gmail.com> ==> Lars Täuber ,
ceph-users@ceph.io :
> Hi,
>
> can you please advise how to verify if and which weight-set is active?
try:
$ ceph osd crush weight-set ls
Lars
>
> Regards
> Thomas
>
> On 16.12.2019 at 14:04, Lars
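To see not only which weight-sets exist but also their values, the listing can
be combined with the dump subcommand; as far as I recall, the
backward-compatible weight-set (the one removed by rm-compat) shows up in the
listing as "(compat)", while per-pool weight-sets are listed by pool name:

$ ceph osd crush weight-set ls
$ ceph osd crush weight-set dump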
I had similar concerns when we moved to bluestore. We run filestore
clusters with HDD OSDs and SSD journals, and I was worried bluestore
wouldn't perform as well with the change from journals to WAL. As it
turns out, our bluestore clusters outperform our filestore clusters in
all regards: latency, t
On Thu, Dec 12, 2019 at 11:46:19PM +, Bryan Stillwell
wrote:
> On our test cluster after upgrading to 14.2.5 I'm having problems with the
> mons pegging a CPU core while moving data around. I'm currently converting
> the OSDs from FileStore to BlueStore by marking the OSDs out in multiple
Sasha,
I was able to get past it by restarting the ceph-mon processes every time it
got stuck, but that's not a very good solution for a production cluster.
Right now I'm trying to narrow down what is causing the problem. Rebuilding
the OSDs with BlueStore doesn't seem to be enough. I believe
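For the restart workaround described above, the usual systemd unit on a
monitor host would be something like this (the mon id is assumed to match the
short hostname):

$ sudo systemctl restart ceph-mon@$(hostname -s)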
Bryan, thank you. We are about to start testing the 14.2.2 -> 14.2.5 upgrade,
so folks here are a bit cautious :-) We don't need to convert, but we may have
to rebuild a few disks after the upgrade.
On Mon, Dec 16, 2019 at 3:57 PM Bryan Stillwell
wrote:
> Sasha,
>
> I was able to get past it by restartin
On 16.12.19 at 11:43, Gregory Farnum wrote:
> Yes, CephFS makes no attempt to maintain atime. If that's something
> you care about you should make a ticket and a case for why it's
> important. :)
Thanks for confirming :-).
For those following along and also interested, I created the ticket here:
Hi guys,
I am running Red Hat Ceph (basically Luminous - ceph version
12.2.12-48.el7cp (26388d73d88602005946d4381cc5796d42904858)) and am seeing
something similar on our test cluster.
One of the mons is running at around 300% CPU non-stop. It doesn't seem to
be the lead mon or one in particular,
Mine was easier than I thought; it turns out it was a bunch of RADOS client
connections stuck trying to do bench cleanup on a no-longer-existing pool,
probably endlessly trying to find where the (also no-longer-existing) OSDs
they need to talk to are.
On Tue, 17 Dec 2019 at 12:41, Rafael Lopez wrote
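One way to see which clients a busy monitor is holding sessions for, which can
help spot stuck connections like those described above (run on the monitor
host; the mon id is assumed to match the short hostname):

$ ceph daemon mon.$(hostname -s) sessions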
I want to run two realms on one Ceph cluster, but I found that RGW uses only
one .rgw.root pool to store RGW metadata. So if I run two realms on one Ceph
cluster, will there be an error?
黄明友
IT Infrastructure Department Manager
V.Photos (cloud photography)
Mobile: +86 13540630430
Customer service: 400 - 806 - 5775
Email: hmy@v.photos
Website: www.v.photos
Zhongshan East 2nd Road, Huangpu District, Shanghai
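As far as I know, multiple realms can coexist in one cluster: they share the
single .rgw.root pool, but each realm's metadata is stored under its own
objects there. A rough sketch of adding a second realm (all names below are
placeholders; each radosgw instance then needs rgw_realm, rgw_zonegroup and
rgw_zone set accordingly):

$ radosgw-admin realm create --rgw-realm=realm2
$ radosgw-admin zonegroup create --rgw-zonegroup=zg2 --rgw-realm=realm2 --master
$ radosgw-admin zone create --rgw-zonegroup=zg2 --rgw-zone=zone2 --master
$ radosgw-admin period update --commit --rgw-realm=realm2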
Hi Konstantin,
the cluster has finished its backfilling.
I got this situation:
$ ceph osd df class hdd
[…]
MIN/MAX VAR: 0.86/1.08 STDDEV: 2.05
Now I created a new upmap.sh and sourced it. The cluster is busy again, moving 3%
of its objects.
I'll report the result.
Thanks for all your hints.
Re
Hello,
I have enabled MGR module "Dashboard" and created a self-signed cert:
root@ld5506:~# ceph dashboard create-self-signed-cert
Self-signed certificate created
Checking the MGR log, I get multiple errors for any action in the
dashboard, with Ceph version 14.2.4.1
(596a387fb278758406deabf997735a1
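A common follow-up after creating a self-signed certificate is to restart the
dashboard module so it picks the certificate up, and then confirm the URL it
serves; these are the standard mgr module commands, shown here only as a
sketch:

$ ceph mgr module disable dashboard
$ ceph mgr module enable dashboard
$ ceph mgr services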