[ceph-users] Re: v14.2.16 Nautilus released

2020-12-17 Thread Dan van der Ster
Thanks for this.

Is download.ceph.com more heavily loaded than usual? It's taking more
than 24 hours to rsync this release to our local mirror (and AFAICT
none of the European mirrors have caught up yet).

Cheers, Dan

On Thu, Dec 17, 2020 at 3:55 AM David Galloway  wrote:
>
> This is the 16th backport release in the Nautilus series. This release fixes a
> security flaw in CephFS. We recommend users to update to this release.
>
> Notable Changes
> ---
> * CVE-2020-27781 : OpenStack Manila use of ceph_volume_client.py library 
> allowed
>   tenant access to any Ceph credential's secret. (Kotresh Hiremath 
> Ravishankar,
>   Ramana Raja)
>
>
> Changelog
> -
> * pybind/ceph_volume_client: disallow authorize on existing auth ids (Kotresh
>   Hiremath Ravishankar, Ramana Raja)
>
>
> Getting Ceph
> 
> * Git at git://github.com/ceph/ceph.git
> * Tarball at http://download.ceph.com/tarballs/ceph-14.2.16.tar.gz
> * For packages, see http://docs.ceph.com/docs/master/install/get-packages/
> * Release git sha1: 762032d6f509d5e7ee7dc008d80fe9c87086603c
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: changing OSD IP addresses in octopus/docker environment

2020-12-17 Thread 胡 玮文
What if you just stop the containers, configure the new IP address for that 
server, then restart the containers? I think it should just work as long as 
this server can still reach the MONs.
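
Roughly what I have in mind, assuming cephadm-managed systemd units (the fsid and OSD id below are placeholders):

 # stop the OSD container(s) on that host
 systemctl stop ceph-<fsid>@osd.<id>.service
 # reconfigure the host's IP address with your usual network tooling, then
 systemctl start ceph-<fsid>@osd.<id>.service
 # and check that the OSDs rejoin
 ceph -s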

> On Dec 18, 2020, at 03:18, Philip Brown  wrote:
> 
> I was wondering how to change the IPs used for the OSD servers, in my new 
> Octopus based environment, which uses all those docker/podman images by 
> default.
> 
> Limiting the date range to within a year doesn't seem to hit anything.
> 
> An unlimited Google search pulled up 
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-September/020503.html
> 
> 
> but that references editing /etc/ceph/ceph.conf and changing the [osd.x]
> sections, which don't exist with Octopus, as far as I've found so far.
> 
> It doesn't exist in the top-level host's /etc/ceph/ceph.conf,
> nor does it exist in the container's conf file, as viewed via "cephadm shell".
> 
> So, what are the options here?
> 
> 
> 
> --
> Philip Brown| Sr. Linux System Administrator | Medata, Inc. 
> 5 Peters Canyon Rd Suite 250 
> Irvine CA 92606 
> Office 714.918.1310| Fax 714.918.1325 
> pbr...@medata.com| www.medata.com
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] MDS Corruption: ceph_assert(!p) in MDCache::add_inode

2020-12-17 Thread Brandon Lyon
This is attempt #3 to submit this issue to this mailing list. I don't
expect this to be received. I give up.

I have an issue with MDS corruption which so far I haven't been able to
resolve using the recovery steps I've found online. I'm on v15.2.6. I've
tried all the recovery steps mentioned here, except copying the pool:
https://docs.ceph.com/en/latest/cephfs/disaster-recovery-experts/

When I try to start an MDS instance, it crashes after a few seconds. It
logs a bunch of "bad backtrace on directory inode" errors before failing on
an assertion in MDCache::add_inode, line 313:
https://github.com/ceph/ceph/blob/cb8c61a60551b72614257d632a574d420064c17a/src/mds/MDCache.cc#L313

Here's the output of journalctl -xe: https://pastebin.com/9g1UJaKQ

I asked in the IRC channel, and it was suggested I might be able to
manually delete the duplicate inodes using the RADOS API, though I don't
know specifically how I would do that. I have also cloned the code and
built Ceph with the problem assertion replaced with a return, but I haven't
tried using it yet and I'm saving that as my last resort. I'd appreciate
any help you all can give.

Thank you,
- Brandon Lyon
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephfs flags question

2020-12-17 Thread Stefan Kooman

On 12/17/20 7:45 PM, Patrick Donnelly wrote:



When a file system is newly created, it's assumed you want all the
stable features on, including multiple MDS, directory fragmentation,
snapshots, etc. That's what those flags are for. If you've been
upgrading your cluster, you need to turn those on yourself.

OK, fair enough. So I tried adding "allow_dirfrags" which gives me:

ceph fs set cephfs allow_dirfrags true
Directory fragmentation is now permanently enabled. This command is 
DEPRECATED and will be REMOVED from future releases.


And I enabled snapshot support:

ceph fs set cephfs allow_new_snaps true
enabled new snapshots

However, this has not changed the "flags" of the filesystem in any way. 
So I guess there are still features not enabled that are enabled on 
newly installed clusters. Where can I find a list of features that I can 
enable? I have searched through documentation but I don't see anything 
related. It's also not described / suggested in the part about upgrading 
the MDS cluster (IMHO that would be a logical place) [1].


Gr. Stefan

[1]: https://docs.ceph.com/en/latest/cephfs/upgrading/
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephfs flags question

2020-12-17 Thread Stefan Kooman

On 12/17/20 5:54 PM, Patrick Donnelly wrote:


file system flags are not the same as the "feature" flags. See this
doc for the feature flags:

https://docs.ceph.com/en/latest/cephfs/administration/#minimum-client-version


Thanks for making that clear.


Note that the new "fs feature" and "fs required_client_features"
commands will be new in Pacific. They provide better control on the
exact features you want to require. The old style of specifying the
minimum release was inflexible and made it difficult to require
specific features the kernel client supports. (For example, the kernel
client is only now just about to support "nautilus" because of
messenger v2 support.)


Ah, nice. Both the fs feature part and the v2 support in the kernel client.




We would like to have the test cluster get the "1c" flags and see if we
can reproduce the issue. How can we achieve that?


You can't set 0x1c directly. These correspond to reserved feature bits
for unspecified older Ceph releases. Suggest you just set the
min_compat_client to jewel.


Up to now we have never set a "min_compat_client". But I guess we can 
enforce jewel nowadays (that's the lowest "ceph features" reports for clients).



In any case, I think what you're asking is about the file system flags
and not the required_client_features.


That's correct. So I checked the file system flags on different clusters 
(some installed luminous, some mimic, some nautilus) and for the 
clusters that started as luminous the file system flags are either "1c" 
or "1e". The ones with "1e" have been installed with newer luminous 
releases. So does the filesystem flags bit ever change during the 
lifetime of a cluster? What exactly is the purpose of the filesystem 
flags bit?


Thanks,

Gr. Stefan
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephfs flags question

2020-12-17 Thread Patrick Donnelly
On Thu, Dec 17, 2020 at 11:35 AM Stefan Kooman  wrote:
>
> On 12/17/20 7:45 PM, Patrick Donnelly wrote:
>
> >
> > When a file system is newly created, it's assumed you want all the
> > stable features on, including multiple MDS, directory fragmentation,
> > snapshots, etc. That's what those flags are for. If you've been
> > upgrading your cluster, you need to turn those on yourself.
> OK, fair enough. So I tried adding "allow_dirfrags" which gives me:
>
> ceph fs set cephfs allow_dirfrags true
> Directory fragmentation is now permanently enabled. This command is
> DEPRECATED and will be REMOVED from future releases.
>
> And I enabled snapshot support:
>
> ceph fs set cephfs allow_new_snaps true
> enabled new snapshots
>
> However, this has not changed the "flags" of the filesystem in any way.
> So I guess there are still features not enabled that are enabled on
> newly installed clusters. Where can I find a list of features that I can
> enable?

Apologies for linking code:
https://github.com/ceph/ceph/blob/master/src/include/ceph_fs.h#L275-L285

>I have searched through documentation but I don't see anything
> related. It's also not described / suggested in the part about upgrading
> the MDS cluster (IMHO that would be a logical place) [1].

You're the first person I'm aware of asking for this. :)

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] changing OSD IP addresses in octopus/docker environment

2020-12-17 Thread Philip Brown
I was wondering how to change the IPs used for the OSD servers, in my new 
Octopus based environment, which uses all those docker/podman images by default.

Limiting the date range to within a year doesn't seem to hit anything.

An unlimited Google search pulled up 
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-September/020503.html


but that references editing /etc/ceph/ceph.conf and changing the [osd.x]
sections, which don't exist with Octopus, as far as I've found so far.

It doesn't exist in the top-level host's /etc/ceph/ceph.conf,
nor does it exist in the container's conf file, as viewed via "cephadm shell".

So, what are the options here?



--
Philip Brown| Sr. Linux System Administrator | Medata, Inc. 
5 Peters Canyon Rd Suite 250 
Irvine CA 92606 
Office 714.918.1310| Fax 714.918.1325 
pbr...@medata.com| www.medata.com
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephfs flags question

2020-12-17 Thread Patrick Donnelly
On Thu, Dec 17, 2020 at 10:27 AM Stefan Kooman  wrote:
> > In any case, I think what you're asking is about the file system flags
> > and not the required_client_features.
>
> That's correct. So I checked the file system flags on different clusters
> (some installed luminous, some mimic, some nautilus) and for the
> clusters that started as luminous the file system flags are either "1c"
> or "1e". The ones with "1e" have been installed with newer luminous
> releases. So does the filesystem flags bit ever change during the
> lifetime of a cluster? What exactly is the purpose of the filesystem
> flags bit?

When a file system is newly created, it's assumed you want all the
stable features on, including multiple MDS, directory fragmentation,
snapshots, etc. That's what those flags are for. If you've been
upgrading your cluster, you need to turn those on yourself.
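
For example, on an upgraded cluster you can inspect and enable them yourself; a
rough sketch, assuming the file system is called "cephfs" (the exact set of
toggles depends on your release):

 # show the current file system flags
 ceph fs get cephfs | grep flags
 # enable snapshots; pre-Nautilus clusters also have allow_dirfrags/allow_multimds
 ceph fs set cephfs allow_new_snaps true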

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: bug? cant turn off rbd cache?

2020-12-17 Thread Jason Dillaman
On Thu, Dec 17, 2020 at 12:09 PM Philip Brown  wrote:
>
> Huhhh..
> It seems worthwhile to point out two inconsistencies, then.
>
> 1. the "old way", of  ceph config set global rbd_cache false
>
> doesn't require this odd redundant "global set global" syntax. It is confusing
> to users to have to specify "global" twice.
> May I suggest that the syntax for rbd config be adjusted, so that people can
> simply use
> rbd config global ...
> or
> rbd config client ...
>
> not
> rbd config global set (global/client)

Everyone has different opinions. "rbd help config global set" clearly
defines all the expected parameters. We can't be all things to all
people.

> 2. The fact that "ceph config set global rbd_cache false" works, seems to 
> imply that putting
>
> [global]
> rbd cache = false
>
> in /etc/ceph/ceph.conf should work also.
>
> Except it doesn't.
> Even after fully shutting down every node in the ceph cluster and doing a 
> cold startup.
>
> Is that a bug?

Nope [1]. How would changing a random configuration file on a random
node affect the configuration on another node?

>
>
>
>
> - Original Message -
> From: "Jason Dillaman" 
> To: "Philip Brown" 
> Cc: "dillaman" , "ceph-users" 
> Sent: Thursday, December 17, 2020 8:24:59 AM
> Subject: Re: [ceph-users] Re: bug? cant turn off rbd cache?
>
> On Thu, Dec 17, 2020 at 11:21 AM Philip Brown  wrote:
> >
> > I guess I left out in my examples, where I tried rbd_cache as well, and 
> > failed
> >
> > # rbd config global set rbd_cache  false
> > rbd: invalid config entity: rbd_cache (must be global, client or 
> > client.)
>
> But that's not a valid command -- you attempted to provide the
> configuration key name for the entity (as the error message is telling
> you).
>
> $ rbd config global set global rbd_cache false
> ... or ...
> $ rbd config global set client rbd_cache false
> ... or ...
> $ rbd config global set client.host_a rbd_cache false
>
> >
> >
>

[1] 
https://docs.ceph.com/en/latest/rados/configuration/ceph-conf/#config-sources

-- 
Jason
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: performance degredation every 30 seconds

2020-12-17 Thread Philip Brown


one final word of warning for everyone.

While I no longer have the performance glitch,
I also can no longer reproduce it.

Doing
 ceph config set global rbd_cache true
does not seem to reproduce the old behaviour, even if I do things like unmap
and remap the test rbd.

Which is worrying, because if I can't control the behaviour, who is to say it
won't mysteriously come back?


- Original Message -
From: "Philip Brown" 
To: "Sebastian Trojanowski" 
Cc: "ceph-users" 
Sent: Thursday, December 17, 2020 9:02:05 AM
Subject: Re: [ceph-users] Re: performance degredation every 30 seconds

I am happy to say, this seems to have been the solution.

After running

 ceph config set global rbd_cache false

I can now run the full 256-thread variant,
fio  --direct=1 --rw=randwrite --bs=4k --ioengine=libaio  --filename=/dev/rbd0 
--iodepth=256  --numjobs=1 --time_based --group_reporting --name=iops-test-job 
--runtime=120 --eta-newline=1


and there is no longer a noticeable performance dip.

Thanks Sebastian
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: bug? cant turn off rbd cache?

2020-12-17 Thread Philip Brown
Huhhh..
It seems worthwhile to point out two inconsistencies, then.

1. the "old way", of  ceph config set global rbd_cache false

doesn't require this odd redundant "global set global" syntax. It is confusing
to users to have to specify "global" twice.
May I suggest that the syntax for rbd config be adjusted, so that people can
simply use
rbd config global ...
or 
rbd config client ...

not
rbd config global set (global/client)



2. The fact that "ceph config set global rbd_cache false" works, seems to imply 
that putting

[global]
rbd cache = false

in /etc/ceph/ceph.conf should work also.

Except it doesn't.
Even after fully shutting down every node in the ceph cluster and doing a cold 
startup.

Is that a bug?





- Original Message -
From: "Jason Dillaman" 
To: "Philip Brown" 
Cc: "dillaman" , "ceph-users" 
Sent: Thursday, December 17, 2020 8:24:59 AM
Subject: Re: [ceph-users] Re: bug? cant turn off rbd cache?

On Thu, Dec 17, 2020 at 11:21 AM Philip Brown  wrote:
>
> I guess I left out in my examples, where I tried rbd_cache as well, and failed
>
> # rbd config global set rbd_cache  false
> rbd: invalid config entity: rbd_cache (must be global, client or client.)

But that's not a valid command -- you attempted to provide the
configuration key name for the entity (as the error message is telling
you).

$ rbd config global set global rbd_cache false
... or ...
$ rbd config global set client rbd_cache false
... or ...
$ rbd config global set client.host_a rbd_cache false

>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: performance degredation every 30 seconds

2020-12-17 Thread Philip Brown
I am happy to say, this seems to have been the solution.

After running

 ceph config set global rbd_cache false

I can now run the full 256-thread variant,
fio  --direct=1 --rw=randwrite --bs=4k --ioengine=libaio  --filename=/dev/rbd0 
--iodepth=256  --numjobs=1 --time_based --group_reporting --name=iops-test-job 
--runtime=120 --eta-newline=1


and there is no longer a noticeable performance dip.

Thanks Sebastian


- Original Message -
From: "Sebastian Trojanowski" 
To: "ceph-users" 
Sent: Tuesday, December 15, 2020 1:34:39 AM
Subject: [ceph-users] Re: performance degredation every 30 seconds

Hi,

Check your rbd cache; it's enabled by default, and for SSD/NVMe it is better
to disable it. It looks like your cache/buffers are full and need a flush. It
could harm your env.

BR,
Sebastian
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephfs flags question

2020-12-17 Thread Patrick Donnelly
On Thu, Dec 17, 2020 at 3:23 AM Stefan Kooman  wrote:
>
> Hi List,
>
> In order to reproduce an issue we see on a production cluster (cephFS
> client: ceph-fuse outperform kernel client by a factor of 5) we would
> like to have a test cluster to have the same cephfs "flags" as
> production. However, it's not completely clear how certain features
> influence the cephfs flags. What I could find in the source code,
> cephfs_features.h, is that it *seems* to correspond to the Ceph release.
> For example CEPHFS_FEATURE_NAUTILUS gets a "12" as feature bit. An
> upgraded (Luminous -> Mimic -> Nautilus) cephfs gives us the following
> cephfs flags: "1c".
>
> A (newly installed) Nautilus cluster gives "10" when new snapshots are
> not allowed (ceph fs set cephfs allow_new_snaps false) and "12" when new
> snapshots are allowed (ceph fs set cephfs allow_new_snaps true).

file system flags are not the same as the "feature" flags. See this
doc for the feature flags:

https://docs.ceph.com/en/latest/cephfs/administration/#minimum-client-version

Note that the new "fs feature" and "fs required_client_features"
commands will be new in Pacific. They provide better control on the
exact features you want to require. The old style of specifying the
minimum release was inflexible and made it difficult to require
specific features the kernel client supports. (For example, the kernel
client is only now just about to support "nautilus" because of
messenger v2 support.)
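
A rough idea of what that will look like (Pacific-era syntax, with "cephfs" as
the file system name and "metric_collect" purely as an example feature, so
treat this as a sketch):

 # list the known CephFS client feature bits
 ceph fs feature ls
 # require one specific feature instead of a whole release
 ceph fs required_client_features cephfs add metric_collect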

> We would like to have the test cluster get the "1c" flags and see if we
> can reproduce the issue. How can we achieve that?

You can't set 0x1c directly. These correspond to reserved feature bits
for unspecified older Ceph releases. Suggest you just set the
min_compat_client to jewel.
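
Something like the following, assuming your file system is named "cephfs":

 # raise the minimum client release the MDS will accept
 ceph fs set cephfs min_compat_client jewel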

In any case, I think what you're asking is about the file system flags
and not the required_client_features.

--
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: reliability of rados_stat() function

2020-12-17 Thread Peter Lieven
Am 01.12.20 um 17:32 schrieb Peter Lieven:
> Hi all,
>
>
> the rados_stat() function has a TODO in the comments:
>
>
> * TODO: when are these set, and by whom? can they be out of date?
>
> Can anyone help with this? How reliably is the pmtime updated? Is there a 
> minimum update interval?
>
> Thank you,
>
> Peter
>

Kindly pinging: is there no one who can help with this?


Thank you,

Peter

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: bug? cant turn off rbd cache?

2020-12-17 Thread Jason Dillaman
On Thu, Dec 17, 2020 at 11:21 AM Philip Brown  wrote:
>
> I guess I left out in my examples, where I tried rbd_cache as well, and failed
>
> # rbd config global set rbd_cache  false
> rbd: invalid config entity: rbd_cache (must be global, client or client.)

But that's not a valid command -- you attempted to provide the
configuration key name for the entity (as the error message is telling
you).

$ rbd config global set global rbd_cache false
... or ...
$ rbd config global set client rbd_cache false
... or ...
$ rbd config global set client.host_a rbd_cache false

>
>
> So, while I am happy to file a documentation pull request.. I still need to 
> find the specific command line that actually *works*, for the "rbd config" 
> variant, etc.
>
>
>
> - Original Message -
> From: "Jason Dillaman" 
> To: "Philip Brown" 
> Cc: "ceph-users" 
> Sent: Thursday, December 17, 2020 7:48:22 AM
> Subject: Re: [ceph-users] Re: bug? cant turn off rbd cache?
>
> On Thu, Dec 17, 2020 at 10:41 AM Philip Brown  wrote:
> >
> > Huhhh...
> >
> > It's unfortunate that every Google search I did for turning off the rbd cache
> > specified "put it in the [client] section".
> > Doh.
> >
> > Maybe this would make a good candidate to update the ceph rbd docs?
>
> As an open source project, I highly encourage you to give back in any
> way you can [1].
>
> > Speaking of which.. what is the *exact* syntax for that command please?
> > None of the below work:
> >
> >  rbd config global set rbd_cache  false
> >  rbd config global set rbd cache  false
> >  rbd config global set conf_rbd_cache  false
> >  rbd config global set rbd cache  false
> >  rbd config global set global cache  false
> >
> > and the error messages are not helpful. for example,
> >
> > # rbd config global set  cache  false
> > rbd: invalid config entity: cache (must be global, client or client.)
> >
> >
> > ??? I already specified global??
>
> ... because a global configuration override can apply to all entities,
> to all clients, or to specific clients (hence the examples for the
> entity parameter).
>
> > but then
> >
> > # rbd config global set global cache  false
> > rbd: not rbd option: cache
>
> ... the configuration option is "rbd_cache" as documented here [2].
>
> >
> >
> > Very frustrating.
> >
> >
> >
>


-- 
Jason
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: bug? cant turn off rbd cache?

2020-12-17 Thread Philip Brown
I guess I left out in my examples, where I tried rbd_cache as well, and failed

# rbd config global set rbd_cache  false
rbd: invalid config entity: rbd_cache (must be global, client or client.)



So, while I am happy to file a documentation pull request.. I still need to 
find the specific command line that actually *works*, for the "rbd config" 
variant, etc.



- Original Message -
From: "Jason Dillaman" 
To: "Philip Brown" 
Cc: "ceph-users" 
Sent: Thursday, December 17, 2020 7:48:22 AM
Subject: Re: [ceph-users] Re: bug? cant turn off rbd cache?

On Thu, Dec 17, 2020 at 10:41 AM Philip Brown  wrote:
>
> Huhhh...
>
> It's unfortunate that every Google search I did for turning off the rbd cache
> specified "put it in the [client] section".
> Doh.
>
> Maybe this would make a good candidate to update the ceph rbd docs?

As an open source project, I highly encourage you to give back in any
way you can [1].

> Speaking of which.. what is the *exact* syntax for that command please?
> None of the below work:
>
>  rbd config global set rbd_cache  false
>  rbd config global set rbd cache  false
>  rbd config global set conf_rbd_cache  false
>  rbd config global set rbd cache  false
>  rbd config global set global cache  false
>
> and the error messages are not helpful. for example,
>
> # rbd config global set  cache  false
> rbd: invalid config entity: cache (must be global, client or client.)
>
>
> ??? I already specified global??

... because a global configuration override can apply to all entities,
to all clients, or to specific clients (hence the examples for the
entity parameter).

> but then
>
> # rbd config global set global cache  false
> rbd: not rbd option: cache

... the configuration option is "rbd_cache" as documented here [2].

>
>
> Very frustrating.
>
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: bug? cant turn off rbd cache?

2020-12-17 Thread Jason Dillaman
On Thu, Dec 17, 2020 at 10:41 AM Philip Brown  wrote:
>
> Huhhh...
>
> It's unfortunate that every Google search I did for turning off the rbd cache
> specified "put it in the [client] section".
> Doh.
>
> Maybe this would make a good candidate to update the ceph rbd docs?

As an open source project, I highly encourage you to give back in any
way you can [1].

> Speaking of which.. what is the *exact* syntax for that command please?
> None of the below work:
>
>  rbd config global set rbd_cache  false
>  rbd config global set rbd cache  false
>  rbd config global set conf_rbd_cache  false
>  rbd config global set rbd cache  false
>  rbd config global set global cache  false
>
> and the error messages are not helpful. for example,
>
> # rbd config global set  cache  false
> rbd: invalid config entity: cache (must be global, client or client.)
>
>
> ??? I already specified global??

... because a global configuration override can apply to all entities,
to all clients, or to specific clients (hence the examples for the
entity parameter).

> but then
>
> # rbd config global set global cache  false
> rbd: not rbd option: cache

... the configuration option is "rbd_cache" as documented here [2].

>
>
> Very frustrating.
>
>
>
> - Original Message -
> From: "Jason Dillaman" 
> To: "Eugen Block" 
> Cc: "ceph-users" 
> Sent: Thursday, December 17, 2020 5:09:17 AM
> Subject: [ceph-users] Re: bug? cant turn off rbd cache?
>
>
>
> If you want to change global (librbd) settings, please use "rbd config
> global set ..." (which is just a wrapper around "ceph config set
> global ..."). The conf files only apply to the single host where they
> exist.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>

[1] https://docs.ceph.com/en/latest/start/documenting-ceph/
[2] https://docs.ceph.com/en/latest/rbd/rbd-config-ref/#cache-settings

-- 
Jason
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: bug? cant turn off rbd cache?

2020-12-17 Thread Philip Brown
Huhhh...

It's unfortunate that every Google search I did for turning off the rbd cache
specified "put it in the [client] section".
Doh.

Maybe this would make a good candidate to update the ceph rbd docs?

Speaking of which.. what is the *exact* syntax for that command please?
None of the below work:

 rbd config global set rbd_cache  false
 rbd config global set rbd cache  false
 rbd config global set conf_rbd_cache  false
 rbd config global set rbd cache  false
 rbd config global set global cache  false

and the error messages are not helpful. for example,

# rbd config global set  cache  false
rbd: invalid config entity: cache (must be global, client or client.)


??? I already specified global??

but then 

# rbd config global set global cache  false
rbd: not rbd option: cache



Very frustrating.



- Original Message -
From: "Jason Dillaman" 
To: "Eugen Block" 
Cc: "ceph-users" 
Sent: Thursday, December 17, 2020 5:09:17 AM
Subject: [ceph-users] Re: bug? cant turn off rbd cache?



If you want to change global (librbd) settings, please use "rbd config
global set ..." (which is just a wrapper around "ceph config set
global ..."). The conf files only apply to the single host where they
exist.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Namespace usability for mutitenancy

2020-12-17 Thread George Shuklin
Hello.

Has anyone started using namespaces in real production for
multi-tenancy?

How good is it at isolating tenants from each other? Can they see each
other presence, quotas, etc?

Is it safe to give (possibly mutually hostile) users access via cephx to the
same pool with a 'user per namespace' restriction?

How badly can one user affect others? Quotas restrict space overuse, but
what about IO and omap overuse?
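
For context, the kind of setup I have in mind is a sketch like this (the pool,
namespace and client names are made up):

 # one shared pool, one namespace per tenant, cephx caps pinned to that namespace
 ceph auth get-or-create client.tenant-a \
     mon 'allow r' \
     osd 'allow rw pool=shared namespace=tenant-a'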
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] cephfs flags question

2020-12-17 Thread Stefan Kooman

Hi List,

In order to reproduce an issue we see on a production cluster (cephFS 
client: ceph-fuse outperform kernel client by a factor of 5) we would 
like to have a test cluster to have the same cephfs "flags" as 
production. However, it's not completely clear how certain features 
influence the cephfs flags. What I could find in the source code, 
cephfs_features.h, is that it *seems* to correspond to the Ceph release. 
For example CEPHFS_FEATURE_NAUTILUS gets a "12" as feature bit. An 
upgraded (Luminous -> Mimic -> Nautilus) cephfs gives us the following 
cephfs flags: "1c".


A (newly installed) Nautilus cluster gives "10" when new snapshots are 
not allowed (ceph fs set cephfs allow_new_snaps false) and "12" when new 
snapshots are allowed (ceph fs set cephfs allow_new_snaps true).


We would like to have the test cluster get the "1c" flags and see if we 
can reproduce the issue. How can we achieve that?


Any info on how those cephfs flags are constructed is welcome.

Thanks,

Gr. Stefan
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Data migration between clusters

2020-12-17 Thread Szabo, Istvan (Agoda)
What is the easiest and best way to migrate a bucket from an old cluster to a
new one?

Luminous to Octopus; I'm not sure whether it matters from the data perspective.


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] who's managing the cephcsi plugin?

2020-12-17 Thread Marc Roos



Is this cephcsi plugin under the control of Red Hat?


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: bug? cant turn off rbd cache?

2020-12-17 Thread Jason Dillaman
On Thu, Dec 17, 2020 at 7:22 AM Eugen Block  wrote:
>
> Hi,
>
> > [client]
> > rbd cache = false
> > rbd cache writethrough until flush = false
>
> this is the rbd client's config, not the global MON config you're
> reading here:
>
> > # ceph --admin-daemon `find /var/run/ceph -name 'ceph-mon*'` config
> > show |grep rbd_cache
> > "rbd_cache": "true",
>
> If you want to change the global "rbd_cache" setting you have to
> change that in the [global] (or maybe [mon]) section.
> But I'm not sure what effect this will have.

If you want to change global (librbd) settings, please use "rbd config
global set ..." (which is just a wrapper around "ceph config set
global ..."). The conf files only apply to the single host where they
exist.

> Regards,
> Eugen
>
>
> Zitat von Philip  Brown :
>
> > Oops.. sent this to the "wrong" list previously. (lists.ceph.com)
> > Lets try the proper one this time :-/
> >
> > not sure if this is an actual bug or I'm doing something else wrong.
> > but in Octopus, I have on the master node
> >
> >
> > # ceph --version
> > ceph version 15.2.7 (88e41c6c49beb18add4fdb6b4326ca466d931db8)
> > octopus (stable)
> > # tail -3 /etc/ceph/ceph.conf
> > [client]
> > rbd cache = false
> > rbd cache writethrough until flush = false
> >
> >
> > but I have restarted ALL nodes.. and yet rbd cache is still on.
> >
> > # ceph --admin-daemon `find /var/run/ceph -name 'ceph-mon*'` config
> > show |grep rbd_cache
> > "rbd_cache": "true",
> >
> >
> > What else am I supposed to do???
> >
> >
> >
> >
> > --
> > Philip Brown| Sr. Linux System Administrator | Medata, Inc.
> > 5 Peters Canyon Rd Suite 250
> > Irvine CA 92606
> > Office 714.918.1310| Fax 714.918.1325
> > pbr...@medata.com| www.medata.com
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


-- 
Jason
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: bug? cant turn off rbd cache?

2020-12-17 Thread Eugen Block

Hi,


[client]
rbd cache = false
rbd cache writethrough until flush = false


this is the rbd client's config, not the global MON config you're  
reading here:


# ceph --admin-daemon `find /var/run/ceph -name 'ceph-mon*'` config  
show |grep rbd_cache

"rbd_cache": "true",


If you want to change the global "rbd_cache" setting you have to  
change that in the [global] (or maybe [mon]) section.

But I'm not sure what effect this will have.

Regards,
Eugen


Zitat von Philip  Brown :


Oops.. sent this to the "wrong" list previously. (lists.ceph.com)
Lets try the proper one this time :-/

not sure if this is an actual bug or I'm doing something else wrong.  
but in Octopus, I have on the master node



# ceph --version
ceph version 15.2.7 (88e41c6c49beb18add4fdb6b4326ca466d931db8)  
octopus (stable)

# tail -3 /etc/ceph/ceph.conf
[client]
rbd cache = false
rbd cache writethrough until flush = false


but I have restarted ALL nodes.. and yet rbd cache is still on.

# ceph --admin-daemon `find /var/run/ceph -name 'ceph-mon*'` config  
show |grep rbd_cache

"rbd_cache": "true",


What else am I supposed to do???




--
Philip Brown| Sr. Linux System Administrator | Medata, Inc.
5 Peters Canyon Rd Suite 250
Irvine CA 92606
Office 714.918.1310| Fax 714.918.1325
pbr...@medata.com| www.medata.com
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: allocate_bluefs_freespace failed to allocate / ceph_abort_msg("bluefs enospc")

2020-12-17 Thread Stephan Austermühle

Hi Igor,

thanks for your reply.


To work around it you might want to switch both bluestore and bluefs allocators 
back to bitmap for now.

Indeed, setting both allocators to bitmap brought the OSD back online and the 
cluster recovered.
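
For the archive, this is roughly what I set (option names from memory, so
please double-check them against your release) before restarting the affected
OSD:

 ceph config set osd bluestore_allocator bitmap
 ceph config set osd bluefs_allocator bitmap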

You rescued my cluster. ;-)

Cheers

Stephan



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io