[ceph-users] Re: v14.2.16 Nautilus released

2020-12-17 Thread Dan van der Ster
Thanks for this. Is download.ceph.com more heavily loaded than usual? It's taking more than 24 hours to rsync this release to our local mirror (and AFAICT none of the European mirrors have caught up yet). Cheers, Dan On Thu, Dec 17, 2020 at 3:55 AM David Galloway wrote: > > This is the 16th

[ceph-users] Re: changing OSD IP addresses in octopus/docker environment

2020-12-17 Thread 胡 玮文
What if you just stop the containers, configure the new IP address for that server, then restart the containers? I think it should just work as long as this server can still reach the MONs. > On Dec 18, 2020, at 03:18, Philip Brown wrote: > > I was wondering how to change the IPs used for the OSD
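For anyone trying this, a minimal sketch of that approach on a cephadm-managed host (the fsid placeholder and OSD id 3 are hypothetical; unit names differ on non-cephadm deployments):

    systemctl stop ceph-<fsid>@osd.3.service     # stop the OSD container on this host
    # reconfigure the host's network / IP address here
    systemctl start ceph-<fsid>@osd.3.service    # the OSD re-registers with the MONs on start
    ceph osd tree                                # confirm the OSD comes back "up"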

[ceph-users] MDS Corruption: ceph_assert(!p) in MDCache::add_inode

2020-12-17 Thread Brandon Lyon
This is attempt #3 to submit this issue to this mailing list. I don't expect this to be received. I give up. I have an issue with MDS corruption which so far I haven't been able to resolve using the recovery steps I've found online. I'm on v15.2.6. I've tried all the recovery steps mentioned

[ceph-users] Re: cephfs flags question

2020-12-17 Thread Stefan Kooman
On 12/17/20 7:45 PM, Patrick Donnelly wrote: When a file system is newly created, it's assumed you want all the stable features on, including multiple MDS, directory fragmentation, snapshots, etc. That's what those flags are for. If you've been upgrading your cluster, you need to turn those on
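For clusters that started on an older release, a hedged example of turning those flags on explicitly ("cephfs" is a placeholder file system name; some of these flags only exist, or are only needed, on pre-Nautilus releases):

    ceph fs set cephfs allow_new_snaps true    # enable snapshots
    ceph fs set cephfs allow_multimds true     # multiple active MDS (pre-Nautilus only)
    ceph fs set cephfs allow_dirfrags true     # directory fragmentation (pre-Nautilus only)
    ceph fs get cephfs                         # inspect the resulting flags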

[ceph-users] Re: cephfs flags question

2020-12-17 Thread Stefan Kooman
On 12/17/20 5:54 PM, Patrick Donnelly wrote: file system flags are not the same as the "feature" flags. See this doc for the feature flags: https://docs.ceph.com/en/latest/cephfs/administration/#minimum-client-version Thanks for making that clear. Note that the new "fs feature" and "fs
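For reference, the commands behind that doc link look roughly like this (hedged; "cephfs", the release, and the feature name are placeholders, and the first form is the pre-Octopus one):

    ceph fs set cephfs min_compat_client nautilus                 # pre-Octopus style
    ceph fs required_client_features cephfs add reply_encoding    # Octopus and later
    ceph fs feature ls                                            # list known client features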

[ceph-users] Re: cephfs flags question

2020-12-17 Thread Patrick Donnelly
On Thu, Dec 17, 2020 at 11:35 AM Stefan Kooman wrote: > > On 12/17/20 7:45 PM, Patrick Donnelly wrote: > > > > > When a file system is newly created, it's assumed you want all the > > stable features on, including multiple MDS, directory fragmentation, > > snapshots, etc. That's what those flags

[ceph-users] changing OSD IP addresses in octopus/docker environment

2020-12-17 Thread Philip Brown
I was wondering how to change the IPs used for the OSD servers in my new Octopus-based environment, which uses all those docker/podman images by default. Limiting the date range to within a year doesn't seem to hit anything. An unlimited Google search pulled up

[ceph-users] Re: cephfs flags question

2020-12-17 Thread Patrick Donnelly
On Thu, Dec 17, 2020 at 10:27 AM Stefan Kooman wrote: > > In any case, I think what you're asking is about the file system flags > > and not the required_client_features. > > That's correct. So I checked the file system flags on different clusters > (some installed luminous, some mimic, some

[ceph-users] Re: bug? can't turn off rbd cache?

2020-12-17 Thread Jason Dillaman
On Thu, Dec 17, 2020 at 12:09 PM Philip Brown wrote: > > Huhhh.. > It seems worthwhile to point out two inconsistencies, then. > > 1. the "old way", of ceph config set global rbd_cache false > > doesn't require this odd redundant "global set global" syntax. It is confusing > to users to have to

[ceph-users] Re: performance degradation every 30 seconds

2020-12-17 Thread Philip Brown
One final word of warning for everyone: while I no longer have the performance glitch, I can no longer reproduce it. Doing ceph config set global rbd_cache true does not seem to reproduce the old behaviour, even if I do things like unmap and remap the test rbd. Which is worrying, because

[ceph-users] Re: bug? can't turn off rbd cache?

2020-12-17 Thread Philip Brown
Huhhh.. It seems worthwhile to point out two inconsistencies, then. 1. the "old way", of ceph config set global rbd_cache false doesn't require this odd redundant "global set global" syntax. It is confusing to users to have to specify "global" twice. May I suggest that the syntax for rbd config
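For readers following along, the two forms being compared are roughly as follows (the second "global" in the rbd command is the config entity, which is what makes it look redundant):

    ceph config set global rbd_cache false          # MON config database, cluster-wide
    rbd config global set global rbd_cache false    # rbd tool: subcommand "config global",
                                                    # then the entity ("global"), key, value
    rbd config global get global rbd_cache          # verify the setting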

[ceph-users] Re: performance degradation every 30 seconds

2020-12-17 Thread Philip Brown
I am happy to say, this seems to have been the solution. After running ceph config set global rbd_cache false I can now run the full 256-thread variant: fio --direct=1 --rw=randwrite --bs=4k --ioengine=libaio --filename=/dev/rbd0 --iodepth=256 --numjobs=1 --time_based --group_reporting
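The quoted fio command is cut off by the archive; a complete invocation along those lines (the runtime and job name here are assumptions, not the poster's exact values) would be:

    fio --direct=1 --rw=randwrite --bs=4k --ioengine=libaio \
        --filename=/dev/rbd0 --iodepth=256 --numjobs=1 \
        --time_based --runtime=60 --group_reporting --name=rbd-randwrite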

[ceph-users] Re: cephfs flags question

2020-12-17 Thread Patrick Donnelly
On Thu, Dec 17, 2020 at 3:23 AM Stefan Kooman wrote: > > Hi List, > > In order to reproduce an issue we see on a production cluster (cephFS > client: ceph-fuse outperforms the kernel client by a factor of 5) we would > like a test cluster to have the same cephfs "flags" as > production.

[ceph-users] Re: reliability of rados_stat() function

2020-12-17 Thread Peter Lieven
On 01.12.20 at 17:32, Peter Lieven wrote: > Hi all, > > > the rados_stat() function has a TODO in the comments: > > > * TODO: when are these set, and by whom? can they be out of date? > > Can anyone help with this? How reliably is the pmtime updated? Is there a > minimum update interval? > >
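For context, rados_stat() fills in the object size and pmtime; a quick way to see the same mtime from the shell is the CLI equivalent (pool and object names are placeholders, output format approximate):

    rados -p mypool stat myobject
    # mypool/myobject mtime 2020-12-01T17:32:00.000000+0000, size 4194304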

[ceph-users] Re: bug? can't turn off rbd cache?

2020-12-17 Thread Jason Dillaman
On Thu, Dec 17, 2020 at 11:21 AM Philip Brown wrote: > > I guess I left out in my examples, where I tried rbd_cache as well, and failed > > # rbd config global set rbd_cache false > rbd: invalid config entity: rbd_cache (must be global, client or client.) But that's not a valid command -- you

[ceph-users] Re: bug? can't turn off rbd cache?

2020-12-17 Thread Philip Brown
I guess I left out in my examples, where I tried rbd_cache as well, and failed: # rbd config global set rbd_cache false rbd: invalid config entity: rbd_cache (must be global, client or client.) So, while I am happy to file a documentation pull request... I still need to find the specific

[ceph-users] Re: bug? can't turn off rbd cache?

2020-12-17 Thread Jason Dillaman
On Thu, Dec 17, 2020 at 10:41 AM Philip Brown wrote: > > Huhhh... > > It's unfortunate that every Google search I did for turning off rbd cache > specified "put it in the [client] section". > Doh. > > Maybe this would make a good candidate to update the ceph rbd docs? As an open source project,

[ceph-users] Re: bug? can't turn off rbd cache?

2020-12-17 Thread Philip Brown
Huhhh... It's unfortunate that every Google search I did for turning off rbd cache specified "put it in the [client] section". Doh. Maybe this would make a good candidate to update the ceph rbd docs? Speaking of which... what is the *exact* syntax for that command, please? None of the below

[ceph-users] Namespace usability for mutitenancy

2020-12-17 Thread George Shuklin
Hello. Has anyone started using namespaces in real production for multi-tenancy? How good are they at isolating tenants from each other? Can tenants see each other's presence, quotas, etc.? Is it safe to give access via cephx to (possibly mutually hostile) users to the same pool with
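A hedged sketch of the cephx side of this, restricting two hypothetical tenants to their own namespaces within one shared pool (all names below are placeholders):

    ceph auth get-or-create client.tenant-a mon 'allow r' \
        osd 'allow rw pool=shared-pool namespace=tenant-a'
    ceph auth get-or-create client.tenant-b mon 'allow r' \
        osd 'allow rw pool=shared-pool namespace=tenant-b'
    # each tenant can then only touch objects in its own namespace, e.g.:
    rados -p shared-pool --namespace tenant-a -n client.tenant-a put obj1 ./somefile

With caps scoped like this, reads and writes outside the granted namespace are rejected by the OSDs.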

[ceph-users] cephfs flags question

2020-12-17 Thread Stefan Kooman
Hi List, In order to reproduce an issue we see on a production cluster (cephFS client: ceph-fuse outperforms the kernel client by a factor of 5) we would like a test cluster to have the same cephfs "flags" as production. However, it's not completely clear how certain features influence

[ceph-users] Data migration between clusters

2020-12-17 Thread Szabo, Istvan (Agoda)
What is the easiest and best way to migrate a bucket from an old cluster to a new one? Luminous to Octopus; not sure whether it matters from the data perspective.
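One common approach (a sketch, not the only option) is to copy at the S3 level with a tool such as rclone, using two remotes that point at the old and new RGW endpoints; the remote and bucket names below are placeholders:

    # assumes "old-rgw" and "new-rgw" are configured in rclone.conf with S3 credentials for each cluster
    rclone sync old-rgw:mybucket new-rgw:mybucket --progress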

[ceph-users] who's managing the cephcsi plugin?

2020-12-17 Thread Marc Roos
Is this cephcsi plugin under the control of Red Hat?

[ceph-users] Re: bug? can't turn off rbd cache?

2020-12-17 Thread Jason Dillaman
On Thu, Dec 17, 2020 at 7:22 AM Eugen Block wrote: > > Hi, > > > [client] > > rbd cache = false > > rbd cache writethrough until flush = false > > this is the rbd client's config, not the global MON config you're > reading here: > > > # ceph --admin-daemon `find /var/run/ceph -name 'ceph-mon*'`

[ceph-users] Re: bug? can't turn off rbd cache?

2020-12-17 Thread Eugen Block
Hi, [client] rbd cache = false rbd cache writethrough until flush = false this is the rbd client's config, not the global MON config you're reading here: # ceph --admin-daemon `find /var/run/ceph -name 'ceph-mon*'` config show |grep rbd_cache "rbd_cache": "true", If you want to

[ceph-users] Re: allocate_bluefs_freespace failed to allocate / ceph_abort_msg("bluefs enospc")

2020-12-17 Thread Stephan Austermühle
Hi Igor, thanks for your reply. > To work around it you might want to switch both bluestore and bluefs allocators back to bitmap for now. Indeed, setting both allocators to bitmap brought the OSD back online and the cluster recovered. You rescued my cluster. ;-) Cheers Stephan
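For completeness, the workaround described amounts to something like the following (hedged; the affected OSDs need a restart for the allocator change to take effect):

    ceph config set osd bluestore_allocator bitmap
    ceph config set osd bluefs_allocator bitmap
    # then restart the affected OSD(s)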