[ceph-users] Re: how to disable ceph version check?

2023-11-07 Thread Boris
You can mute it with

"ceph health mute ALERT"

where ALERT is the upper-case health code shown by "ceph health detail".
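
For reference, the health code for this particular warning is DAEMON_OLD_VERSION, so a
minimal sketch would be (the one-week TTL is just an example; omit it to mute indefinitely):

  ceph health detail                        # confirm the exact code in caps
  ceph health mute DAEMON_OLD_VERSION 1w    # mute for one week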

But I would update asap. 

Cheers
 Boris

> On 08.11.2023 at 02:02, zxcs  wrote:
> 
> Hi, Experts,
> 
> we have a ceph cluster reporting HEALTH_ERR due to multiple old versions.
> 
>health: HEALTH_ERR
>There are daemons running multiple old versions of ceph
> 
> after running `ceph version`, we see three ceph versions in {16.2.*}; these 
> daemons are ceph osds.
> 
> our question is: how can we stop this version check? We cannot upgrade all of 
> the old daemons.
> 
> 
> 
> Thanks,
> Xiong
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: pool(s) do not have an application enabled after upgrade to 17.2.7

2023-11-07 Thread Dmitry Melekhov

On 08.11.2023 00:15, Eugen Block wrote:

Hi,

I think I need to remove pools cephfs.cephfs.meta and 
cephfs.cephfs.data  using


ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]

by the way, as far as I know, deleting pools is not allowed by default, 
so I have to allow it first:

ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'

Thank you!


yes, looks like deleting those pools would be safe. The injectargs 
command should work, although I do it like this:


$ ceph config set mon mon_allow_pool_delete true

And then I set it back to false when I'm done, since other people have 
access to the cluster, to minimize the risk. ;-)
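
Putting the two pieces together, the full sequence would look roughly like this 
(a minimal sketch; pool names taken from this thread):

  ceph config set mon mon_allow_pool_delete true
  ceph osd pool delete cephfs.cephfs.meta cephfs.cephfs.meta --yes-i-really-really-mean-it
  ceph osd pool delete cephfs.cephfs.data cephfs.cephfs.data --yes-i-really-really-mean-it
  ceph config set mon mon_allow_pool_delete false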



Thank you!

Removed pools, looks like everything works :-)

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-07 Thread Jayanth Reddy
Hello Casey,

On further inspection, we identified that bucket policies had been set from the 
initial days, while we were on v16.2.12.
We upgraded the cluster to v17.2.7 two days ago, and the IAM error logs started 
the minute the rgw daemons were upgraded from v16.2.12 to v17.2.7, so it looks 
like there is some issue with parsing.

I'm thinking of downgrading back to v17.2.6 or earlier; please let me know if 
this is a good option for now.

Thanks,
Jayanth

From: Jayanth Reddy 
Sent: Tuesday, November 7, 2023 11:59:38 PM
To: Casey Bodley 
Cc: Wesley Dillingham ; ceph-users 
; Adam Emerson 
Subject: Re: [ceph-users] Re: owner locked out of bucket via bucket policy

Hello Casey,

Thank you for the quick response. I see `rgw_policy_reject_invalid_principals` 
is not present in v17.2.7. Please let me know.

Regards
Jayanth

On Tue, Nov 7, 2023 at 11:50 PM Casey Bodley <cbod...@redhat.com> wrote:
On Tue, Nov 7, 2023 at 12:41 PM Jayanth Reddy <jayanthreddy5...@gmail.com> wrote:
>
> Hello Wesley and Casey,
>
> We've ended up with the same issue and here it appears that even the user 
> with "--admin" isn't able to do anything. We're now unable to figure out if 
> it is due to bucket policies, ACLs or IAM of some sort. I'm seeing these IAM 
> errors in the logs
>
> ```
>
> Nov  7 00:02:00 ceph-05 radosgw[4054570]: req 8786689665323103851 
> 0.00368s s3:get_obj Error reading IAM Policy: Terminate parsing due to 
> Handler error.
>
> Nov  7 22:51:40 ceph-05 radosgw[4054570]: req 13293029267332025583 
> 0.0s s3:list_bucket Error reading IAM Policy: Terminate parsing due 
> to Handler error.

it's failing to parse the bucket policy document, but the error
message doesn't say what's wrong with it

disabling rgw_policy_reject_invalid_principals might help if it's
failing on the Principal

> Nov  7 22:51:40 ceph-05 radosgw[4054570]: req 13293029267332025583 
> 0.0s s3:list_bucket init_permissions on 
> :window-dev[1d0fa0b4-04eb-48f9-889b-a60de865ccd8.24143.10]) failed, ret=-13
> Nov  7 22:51:40 ceph-feed-05 radosgw[4054570]: req 13293029267332025583 
> 0.0s op->ERRORHANDLER: err_no=-13 new_err_no=-13
>
> ```
>
> Please help what's wrong here. We're in Ceph v17.2.7.
>
> Regards,
> Jayanth
>
> On Thu, Oct 26, 2023 at 7:14 PM Wesley Dillingham <w...@wesdillingham.com> wrote:
>>
>> Thank you, this has worked to remove the policy.
>>
>> Respectfully,
>>
>> *Wes Dillingham*
>> w...@wesdillingham.com
>> LinkedIn 
>>
>>
>> On Wed, Oct 25, 2023 at 5:10 PM Casey Bodley <cbod...@redhat.com> wrote:
>>
>> > On Wed, Oct 25, 2023 at 4:59 PM Wesley Dillingham <w...@wesdillingham.com>
>> > wrote:
>> > >
>> > > Thank you, I am not sure (inherited cluster). I presume such an admin
>> > user created after-the-fact would work?
>> >
>> > yes
>> >
>> > > Is there a good way to discover an admin user other than iterate over
>> > all users and retrieve user information? (I presume radosgw-admin user info
>> > --uid=" would illustrate such administrative access?
>> >
>> > not sure there's an easy way to search existing users, but you could
>> > create a temporary admin user for this repair
>> >
>> > >
>> > > Respectfully,
>> > >
>> > > Wes Dillingham
>> > > w...@wesdillingham.com
>> > > LinkedIn
>> > >
>> > >
>> > > On Wed, Oct 25, 2023 at 4:41 PM Casey Bodley <cbod...@redhat.com> wrote:
>> > >>
>> > >> if you have an administrative user (created with --admin), you should
>> > >> be able to use its credentials with awscli to delete or overwrite this
>> > >> bucket policy
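
For reference, the repair described in the quoted advice above would look roughly like
this (a hedged sketch; the endpoint is a placeholder and the bucket name is taken from
the log lines earlier in the thread):

  radosgw-admin user create --uid=temp-admin --display-name="temp admin" --admin
  radosgw-admin user info --uid=temp-admin      # note the access_key / secret_key
  export AWS_ACCESS_KEY_ID=...  AWS_SECRET_ACCESS_KEY=...
  aws --endpoint-url http://rgw.example.com:8080 \
      s3api delete-bucket-policy --bucket window-dev
  radosgw-admin user rm --uid=temp-admin        # remove the temporary user afterwards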
>> > >>
>> > >> On Wed, Oct 25, 2023 at 4:11 PM Wesley Dillingham <
>> > w...@wesdillingham.com> wrote:
>> > >> >
>> > >> > I have a bucket which got injected with bucket policy which locks the
>> > >> > bucket even to the bucket owner. The bucket now cannot be accessed
>> > (even
>> > >> > get its info or delete bucket policy does not work) I have looked in
>> > the
>> > >> > radosgw-admin command for a way to delete a bucket policy but do not
>> > see
>> > >> > anything. I presume I will need to somehow remove the bucket policy
>> > from
>> > >> > however it is stored in the bucket metadata / omap etc. If anyone can
>> > point
>> > >> > me in the right direction on that I would appreciate it. Thanks
>> > >> >
>> > >> > Respectfully,
>> > >> >
>> > >> > *Wes Dillingham*
>> > >> > w...@wesdillingham.com
>> > >> > LinkedIn 
>> > >> > ___
>> > >> > ceph-users mailing list -- 
>> > >> > ceph-users@ceph.io
>> > >> > To unsubscribe send an email to 
>> > >> > ceph-users-le...@ceph.io
>> > >> >
>> > >>
>> >
>> >
>> 

[ceph-users] Permanent KeyError: 'TYPE' ->17.2.7: return self.blkid_api['TYPE'] == 'part'

2023-11-07 Thread Harry G Coin
These repeat for every host, but only after upgrading from the previous 
Quincy release to 17.2.7. As a result, the cluster always shows a warning 
and never reports healthy.


root@noc1:~# ceph health detail

HEALTH_WARN failed to probe daemons or devices
[WRN] CEPHADM_REFRESH_FAILED: failed to probe daemons or devices
   host sysmon1 `cephadm ceph-volume` failed: cephadm exited with an 
error code: 1, stderr: Inferring config 
/var/lib/ceph/4067126d-01cb-40af-824a-881c130140f8/mon.sysmon1/config
Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host 
--stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint 
/usr/sbin/ceph-volume --privileged --group-add=disk --init -e 
CONTAINER_IMAGE=quay.io/ceph/ceph@sha2
56:92e8fa7d8ca17a7a5bbfde6e596fdfecc8e165fcb94d86493f4e6c7b1f326e4e -e 
NODE_NAME=sysmon1 -e CEPH_USE_RANDOM_NONCE=1 -e 
CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v 
/var/run/ceph/4067126d-01cb-40af-824a-881c130140f8:/var
/run/ceph:z -v 
/var/log/ceph/4067126d-01cb-40af-824a-881c130140f8:/var/log/ceph:z -v 
/var/lib/ceph/4067126d-01cb-40af-824a-881c130140f8/crash:/var/lib/ceph/crash:z 
-v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lv
m -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v 
/tmp/ceph-tmpl1e27bun:/etc/ceph/ceph.conf:z 
quay.io/ceph/ceph@sha256:92e8fa7d8ca17a7a5bbfde6e596fdfecc8e165fcb94d86493f4e6c7b1f326e4e 
inventory --format=json-pretty --filter-for-batch
/usr/bin/docker: stderr  stderr: Unknown device, --name=, --path=, or 
absolute path in /dev/ or /sys expected.
/usr/bin/docker: stderr  stderr: Unknown device, --name=, --path=, or 
absolute path in /dev/ or /sys expected.
/usr/bin/docker: stderr  stderr: Unknown device, --name=, --path=, or 
absolute path in /dev/ or /sys expected.
/usr/bin/docker: stderr  stderr: Unknown device, --name=, --path=, or 
absolute path in /dev/ or /sys expected.
/usr/bin/docker: stderr  stderr: Unknown device, --name=, --path=, or 
absolute path in /dev/ or /sys expected.
/usr/bin/docker: stderr  stderr: Unknown device, --name=, --path=, or 
absolute path in /dev/ or /sys expected.

/usr/bin/docker: stderr Traceback (most recent call last):
/usr/bin/docker: stderr   File "/usr/sbin/ceph-volume", line 11, in 

/usr/bin/docker: stderr load_entry_point('ceph-volume==1.0.0', 
'console_scripts', 'ceph-volume')()
/usr/bin/docker: stderr   File 
"/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in 
__init__

/usr/bin/docker: stderr self.main(self.argv)
/usr/bin/docker: stderr   File 
"/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, 
in newfunc

/usr/bin/docker: stderr return f(*a, **kw)
/usr/bin/docker: stderr   File 
"/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main

/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)
/usr/bin/docker: stderr   File 
"/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in 
dispatch

/usr/bin/docker: stderr instance.main()
/usr/bin/docker: stderr   File 
"/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 
60, in main

/usr/bin/docker: stderr list_all=self.args.list_all))
/usr/bin/docker: stderr   File 
"/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 50, 
in __init__

/usr/bin/docker: stderr sys_info.devices.keys()]
/usr/bin/docker: stderr   File 
"/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 49, 
in 

/usr/bin/docker: stderr all_devices_vgs=all_devices_vgs) for k in
/usr/bin/docker: stderr   File 
"/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 147, 
in __init__
/usr/bin/docker: stderr self.available_lvm, 
self.rejected_reasons_lvm = self._check_lvm_reject_reasons()
/usr/bin/docker: stderr   File 
"/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 646, 
in _check_lvm_reject_reasons
/usr/bin/docker: stderr 
rejected.extend(self._check_generic_reject_reasons())
/usr/bin/docker: stderr   File 
"/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 601, 
in _check_generic_reject_reasons

/usr/bin/docker: stderr if self.is_acceptable_device:
/usr/bin/docker: stderr   File 
"/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 502, 
in is_acceptable_device
/usr/bin/docker: stderr return self.is_device or self.is_partition 
or self.is_lv
/usr/bin/docker: stderr   File 
"/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 482, 
in is_partition

/usr/bin/docker: stderr return self.blkid_api['TYPE'] == 'part'
/usr/bin/docker: stderr KeyError: 'TYPE'
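
(For illustration only -- this is not the actual upstream patch. The traceback above
shows device.py assuming blkid always reports a TYPE key for the device, so a defensive
lookup of the kind below would avoid the KeyError when that key is missing:)

  # sketch of a guarded version of the failing property
  @property
  def is_partition(self):
      if not self.blkid_api:
          return False
      return self.blkid_api.get('TYPE') == 'part'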
Traceback (most recent call last):
 File 
"/var/lib/ceph/4067126d-01cb-40af-824a-881c130140f8/cephadm.8b92cafd937eb89681ee011f9e70f85937fd09c4bd61ed4a59981d275a1f255b", 
line 9679, in 

   main()
 File 
"/var/lib/ceph/4067126d-01cb-40af-824a-881c130140f8/cephadm.8b92cafd937eb89681ee011f9e70f85937fd09c4bd61ed4a59981d275a1f255b", 
line 9667, in main

   r = ctx.func(ctx)
 File 

[ceph-users] how to disable ceph version check?

2023-11-07 Thread zxcs
Hi, Experts,

we have a ceph cluster reporting HEALTH_ERR due to multiple old versions. 

health: HEALTH_ERR
There are daemons running multiple old versions of ceph

after running `ceph version`, we see three ceph versions in {16.2.*}; these 
daemons are ceph osds.

our question is: how can we stop this version check? We cannot upgrade all of the 
old daemons.



Thanks,
Xiong
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: MDS stuck in rejoin

2023-11-07 Thread Xiubo Li

Hi Frank,

Recently I found a new possible case that could cause this; please see 
https://github.com/ceph/ceph/pull/54259. This is just a ceph-side fix; 
after this we need to fix it in the kclient too, which hasn't been done yet.


Thanks

- Xiubo

On 8/8/23 17:44, Frank Schilder wrote:

Dear Xiubo,

the nearfull pool is an RBD pool and has nothing to do with the file system. 
All pools for the file system have plenty of capacity.

I think we have an idea what kind of workload caused the issue. We had a user 
run a computation that reads the same file over and over again. He started 100 
such jobs in parallel and our storage servers were at 400% load. I saw 167K 
read IOP/s on an HDD pool that has an aggregated raw IOP/s budget of ca. 11K. 
Clearly, most of this was served from RAM.

It is possible that this extreme load situation triggered a race that remained 
undetected/unreported. There is literally no related message in any logs near 
the time the warning started popping up. It shows up out of nowhere.

We asked the user to change his workflow to use local RAM disk for the input 
files. I don't think we can reproduce the problem anytime soon.

About the bug fixes, I'm eagerly waiting for this and another one. Any idea 
when they might show up in distro kernels?

Thanks and best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14


From: Xiubo Li 
Sent: Tuesday, August 8, 2023 2:57 AM
To: Frank Schilder; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: MDS stuck in rejoin


On 8/7/23 21:54, Frank Schilder wrote:

Dear Xiubo,

I managed to collect some information. It looks like there is nothing in the 
dmesg log around the time the client failed to advance its TID. I collected 
short snippets around the critical time below. I have full logs in case you are 
interested. Its large files, I will need to do an upload for that.

I also have a dump of "mds session ls" output for clients that showed the same 
issue later. Unfortunately, no consistent log information for a single incident.

Here is the summary; please let me know if uploading the full package makes sense:

- Status:

On July 29, 2023

ceph status/df/pool stats/health detail at 01:05:14:
cluster:
  health: HEALTH_WARN
  1 pools nearfull

ceph status/df/pool stats/health detail at 01:05:28:
cluster:
  health: HEALTH_WARN
  1 clients failing to advance oldest client/flush tid
  1 pools nearfull

Okay, then this could be the root cause.

If the pool is nearfull, it could block flushing the journal logs to the pool,
and then the MDS couldn't safely reply to the requests, which would block them
like this.

Could you fix the pool nearfull issue first and then check whether you see
it again?
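
(For reference, a quick way to confirm which pool is nearfull and how close it is;
the commands are standard, the threshold shown is just the default:)

  ceph df detail        # per-pool usage and MAX AVAIL
  ceph osd df tree      # per-OSD fill level; nearfull_ratio defaults to 0.85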



[...]

On July 31, 2023

ceph status/df/pool stats/health detail at 10:36:16:
cluster:
  health: HEALTH_WARN
  1 clients failing to advance oldest client/flush tid
  1 pools nearfull

cluster:
  health: HEALTH_WARN
  1 pools nearfull

- client evict command (date, time, command):

2023-07-31 10:36  ceph tell mds.ceph-11 client evict id=145678457

We have a 1h time difference between the date stamp of the command and the 
dmesg date stamps. However, there seems to be a weird 10min delay from issuing 
the evict command until it shows up in dmesg on the client.

- dmesg:

[Fri Jul 28 12:59:14 2023] beegfs: enabling unsafe global rkey
[Fri Jul 28 12:59:14 2023] beegfs: enabling unsafe global rkey
[Fri Jul 28 12:59:14 2023] beegfs: enabling unsafe global rkey
[Fri Jul 28 12:59:14 2023] beegfs: enabling unsafe global rkey
[Fri Jul 28 12:59:14 2023] beegfs: enabling unsafe global rkey
[Fri Jul 28 12:59:14 2023] beegfs: enabling unsafe global rkey
[Fri Jul 28 16:07:47 2023] slurm.epilog.cl (24175): drop_caches: 3
[Sat Jul 29 18:21:30 2023] libceph: mds2 192.168.32.75:6801 socket closed (con 
state OPEN)
[Sat Jul 29 18:21:30 2023] libceph: mds2 192.168.32.75:6801 socket closed (con 
state OPEN)
[Sat Jul 29 18:21:30 2023] libceph: mds2 192.168.32.75:6801 socket closed (con 
state OPEN)
[Sat Jul 29 18:21:42 2023] ceph: mds2 reconnect start
[Sat Jul 29 18:21:42 2023] ceph: mds2 reconnect start
[Sat Jul 29 18:21:43 2023] ceph: mds2 reconnect start
[Sat Jul 29 18:21:43 2023] ceph: mds2 reconnect success
[Sat Jul 29 18:21:43 2023] ceph: mds2 reconnect success
[Sat Jul 29 18:21:43 2023] ceph: mds2 reconnect success
[Sat Jul 29 18:26:39 2023] ceph: mds2 reconnect start
[Sat Jul 29 18:26:39 2023] ceph: mds2 reconnect start
[Sat Jul 29 18:26:39 2023] ceph: mds2 reconnect start
[Sat Jul 29 18:26:40 2023] ceph: mds2 reconnect success
[Sat Jul 29 18:26:40 2023] ceph: mds2 reconnect success
[Sat Jul 29 18:26:40 2023] ceph: mds2 reconnect success
[Sat Jul 29 18:26:49 2023] ceph: update_snap_trace error -22

This is a known bug and we have fixed it on both the kclient and ceph side:

https://tracker.ceph.com/issues/61200


[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-07 Thread Adam King
I think the orch code itself is doing fine, but a bunch of tests are
failing due to https://tracker.ceph.com/issues/63151. I think that's likely
related to the ganesha build we have included in the container, and if we
want NFS over RGW to work properly in this release I think we'll have to
update it. From previous notes in the tracker, it looks like 5.5-2 is
currently in there (specifically, the nfs-ganesha-rgw-5.5-2.el8s.x86_64 package
probably has the issue).

On Tue, Nov 7, 2023 at 4:02 PM Yuri Weinstein  wrote:

> The 3 PRs mentioned above were merged and I am re-running some tests:
> https://pulpito.ceph.com/?sha1=55e3239498650453ff76a9b06a37f1a6f488c8fd
>
> Still seeking approvals.
> smoke - Laura, Radek, Prashant, Venky in progress
> rados - Neha, Radek, Travis, Ernesto, Adam King
> rgw - Casey in progress
> fs - Venky
> orch - Adam King
> rbd - Ilya approved
> krbd - Ilya approved
> upgrade/quincy-x (reef) - Laura PTL
> powercycle - Brad
> perf-basic - in progress
>
>
> On Tue, Nov 7, 2023 at 8:38 AM Casey Bodley  wrote:
> >
> > On Mon, Nov 6, 2023 at 4:31 PM Yuri Weinstein 
> wrote:
> > >
> > > Details of this release are summarized here:
> > >
> > > https://tracker.ceph.com/issues/63443#note-1
> > >
> > > Seeking approvals/reviews for:
> > >
> > > smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE failures)
> > > rados - Neha, Radek, Travis, Ernesto, Adam King
> > > rgw - Casey
> >
> > rgw results are approved. https://github.com/ceph/ceph/pull/54371
> > merged to reef but is needed on reef-release
> >
> > > fs - Venky
> > > orch - Adam King
> > > rbd - Ilya
> > > krbd - Ilya
> > > upgrade/quincy-x (reef) - Laura PTL
> > > powercycle - Brad
> > > perf-basic - Laura, Prashant (POOL_APP_NOT_ENABLE failures)
> > >
> > > Please reply to this email with approval and/or trackers of known
> > > issues/PRs to address them.
> > >
> > > TIA
> > > YuriW
> > > ___
> > > ceph-users mailing list -- ceph-users@ceph.io
> > > To unsubscribe send an email to ceph-users-le...@ceph.io
> > >
> >
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-07 Thread Yuri Weinstein
The 3 PRs mentioned above were merged and I am re-running some tests:
https://pulpito.ceph.com/?sha1=55e3239498650453ff76a9b06a37f1a6f488c8fd

Still seeking approvals.
smoke - Laura, Radek, Prashant, Venky in progress
rados - Neha, Radek, Travis, Ernesto, Adam King
rgw - Casey in progress
fs - Venky
orch - Adam King
rbd - Ilya approved
krbd - Ilya approved
upgrade/quincy-x (reef) - Laura PTL
powercycle - Brad
perf-basic - in progress


On Tue, Nov 7, 2023 at 8:38 AM Casey Bodley  wrote:
>
> On Mon, Nov 6, 2023 at 4:31 PM Yuri Weinstein  wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/63443#note-1
> >
> > Seeking approvals/reviews for:
> >
> > smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE failures)
> > rados - Neha, Radek, Travis, Ernesto, Adam King
> > rgw - Casey
>
> rgw results are approved. https://github.com/ceph/ceph/pull/54371
> merged to reef but is needed on reef-release
>
> > fs - Venky
> > orch - Adam King
> > rbd - Ilya
> > krbd - Ilya
> > upgrade/quincy-x (reef) - Laura PTL
> > powercycle - Brad
> > perf-basic - Laura, Prashant (POOL_APP_NOT_ENABLE failures)
> >
> > Please reply to this email with approval and/or trackers of known
> > issues/PRs to address them.
> >
> > TIA
> > YuriW
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> >
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-07 Thread Jayanth Reddy
Hello Casey,

Thank you for the quick response. I see
`rgw_policy_reject_invalid_principals` is not present in v17.2.7. Please
let me know.

Regards
Jayanth

On Tue, Nov 7, 2023 at 11:50 PM Casey Bodley  wrote:

> On Tue, Nov 7, 2023 at 12:41 PM Jayanth Reddy
>  wrote:
> >
> > Hello Wesley and Casey,
> >
> > We've ended up with the same issue and here it appears that even the
> user with "--admin" isn't able to do anything. We're now unable to figure
> out if it is due to bucket policies, ACLs or IAM of some sort. I'm seeing
> these IAM errors in the logs
> >
> > ```
> >
> > Nov  7 00:02:00 ceph-05 radosgw[4054570]: req 8786689665323103851
> 0.00368s s3:get_obj Error reading IAM Policy: Terminate parsing due to
> Handler error.
> >
> > Nov  7 22:51:40 ceph-05 radosgw[4054570]: req 13293029267332025583
> 0.0s s3:list_bucket Error reading IAM Policy: Terminate parsing due
> to Handler error.
>
> it's failing to parse the bucket policy document, but the error
> message doesn't say what's wrong with it
>
> disabling rgw_policy_reject_invalid_principals might help if it's
> failing on the Principal
>
> > Nov  7 22:51:40 ceph-05 radosgw[4054570]: req 13293029267332025583
> 0.0s s3:list_bucket init_permissions on
> :window-dev[1d0fa0b4-04eb-48f9-889b-a60de865ccd8.24143.10]) failed, ret=-13
> > Nov  7 22:51:40 ceph-feed-05 radosgw[4054570]: req 13293029267332025583
> 0.0s op->ERRORHANDLER: err_no=-13 new_err_no=-13
> >
> > ```
> >
> > Please help what's wrong here. We're in Ceph v17.2.7.
> >
> > Regards,
> > Jayanth
> >
> > On Thu, Oct 26, 2023 at 7:14 PM Wesley Dillingham 
> wrote:
> >>
> >> Thank you, this has worked to remove the policy.
> >>
> >> Respectfully,
> >>
> >> *Wes Dillingham*
> >> w...@wesdillingham.com
> >> LinkedIn 
> >>
> >>
> >> On Wed, Oct 25, 2023 at 5:10 PM Casey Bodley 
> wrote:
> >>
> >> > On Wed, Oct 25, 2023 at 4:59 PM Wesley Dillingham <
> w...@wesdillingham.com>
> >> > wrote:
> >> > >
> >> > > Thank you, I am not sure (inherited cluster). I presume such an
> admin
> >> > user created after-the-fact would work?
> >> >
> >> > yes
> >> >
> >> > > Is there a good way to discover an admin user other than iterate
> over
> >> > all users and retrieve user information? (I presume radosgw-admin
> user info
> >> > --uid=" would illustrate such administrative access?
> >> >
> >> > not sure there's an easy way to search existing users, but you could
> >> > create a temporary admin user for this repair
> >> >
> >> > >
> >> > > Respectfully,
> >> > >
> >> > > Wes Dillingham
> >> > > w...@wesdillingham.com
> >> > > LinkedIn
> >> > >
> >> > >
> >> > > On Wed, Oct 25, 2023 at 4:41 PM Casey Bodley 
> wrote:
> >> > >>
> >> > >> if you have an administrative user (created with --admin), you
> should
> >> > >> be able to use its credentials with awscli to delete or overwrite
> this
> >> > >> bucket policy
> >> > >>
> >> > >> On Wed, Oct 25, 2023 at 4:11 PM Wesley Dillingham <
> >> > w...@wesdillingham.com> wrote:
> >> > >> >
> >> > >> > I have a bucket which got injected with bucket policy which
> locks the
> >> > >> > bucket even to the bucket owner. The bucket now cannot be
> accessed
> >> > (even
> >> > >> > get its info or delete bucket policy does not work) I have
> looked in
> >> > the
> >> > >> > radosgw-admin command for a way to delete a bucket policy but do
> not
> >> > see
> >> > >> > anything. I presume I will need to somehow remove the bucket
> policy
> >> > from
> >> > >> > however it is stored in the bucket metadata / omap etc. If
> anyone can
> >> > point
> >> > >> > me in the right direction on that I would appreciate it. Thanks
> >> > >> >
> >> > >> > Respectfully,
> >> > >> >
> >> > >> > *Wes Dillingham*
> >> > >> > w...@wesdillingham.com
> >> > >> > LinkedIn 
> >> > >> > ___
> >> > >> > ceph-users mailing list -- ceph-users@ceph.io
> >> > >> > To unsubscribe send an email to ceph-users-le...@ceph.io
> >> > >> >
> >> > >>
> >> >
> >> >
> >> ___
> >> ceph-users mailing list -- ceph-users@ceph.io
> >> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: OSD fails to start after 17.2.6 to 17.2.7 update

2023-11-07 Thread Matthew Booth
I just discovered that rook is tracking this here:
https://github.com/rook/rook/issues/13136

On Tue, 7 Nov 2023 at 18:09, Matthew Booth  wrote:

> On Tue, 7 Nov 2023 at 16:26, Matthew Booth  wrote:
>
>> FYI I left rook as is and reverted to ceph 17.2.6 and the issue is
>> resolved.
>>
>> The code change was added by
>> commit 2e52c029bc2b052bb96f4731c6bb00e30ed209be:
>> ceph-volume: fix broken workaround for atari partitions
>>
>> broken by bea9f4b643ce32268ad79c0fc257b25ff2f8333c
>> This commits fixes that regression.
>>
>> Fixes: https://tracker.ceph.com/issues/62001
>>
>> Signed-off-by: Guillaume Abrioux 
>> (cherry picked from commit b3fd5b513176fb9ba1e6e0595ded4b41d401c68e)
>>
>> It feels like a regression to me.
>>
>
> It looks like the issue is that the argument passed to List.generate is
> '/dev/sdc', but lsblk's NAME field contains 'sdc'. The NAME field was not
> used this way in v17.2.6.
>
> I haven't checked, but I assume that ceph-bluestore-tool can accept either
> 'sdc' or '/dev/sdc'.
>
> Matt
>
>
>>
>> Matt
>>
>> On Tue, 7 Nov 2023 at 16:13, Matthew Booth  wrote:
>>
>>> Firstly I'm rolling out a rook update from v1.12.2 to v1.12.7 (latest
>>> stable) and ceph from 17.2.6 to 17.2.7 at the same time. I mention this in
>>> case the problem is actually caused by rook rather than ceph. It looks like
>>> ceph to my uninitiated eyes, though.
>>>
>>> The update just started bumping my OSDs and the first one fails in the
>>> 'activate' init container. The complete logs for this container are:
>>>
>>> + OSD_ID=5
>>> + CEPH_FSID=
>>> + OSD_UUID=
>>> + OSD_STORE_FLAG=--bluestore
>>> + OSD_DATA_DIR=/var/lib/ceph/osd/ceph-5
>>> + CV_MODE=raw
>>> + DEVICE=/dev/sdc
>>> + cp --no-preserve=mode /etc/temp-ceph/ceph.conf /etc/ceph/ceph.conf
>>> + python3 -c '
>>> import configparser
>>>
>>> config = configparser.ConfigParser()
>>> config.read('\''/etc/ceph/ceph.conf'\'')
>>>
>>> if not config.has_section('\''global'\''):
>>> config['\''global'\''] = {}
>>>
>>> if not config.has_option('\''global'\'','\''fsid'\''):
>>> config['\''global'\'']['\''fsid'\''] = '\'''\''
>>>
>>> with open('\''/etc/ceph/ceph.conf'\'', '\''w'\'') as configfile:
>>> config.write(configfile)
>>> '
>>> + ceph -n client.admin auth get-or-create osd.5 mon 'allow profile osd'
>>> mgr 'allow profile osd' osd 'allow *' -k
>>> /etc/ceph/admin-keyring-store/keyring
>>> [osd.5]
>>> key = 
>>> + [[ raw == \l\v\m ]]
>>> ++ mktemp
>>> + OSD_LIST=/tmp/tmp.CekJVsr9gr
>>> + ceph-volume raw list /dev/sdc
>>> Traceback (most recent call last):
>>>   File "/usr/sbin/ceph-volume", line 11, in 
>>> load_entry_point('ceph-volume==1.0.0', 'console_scripts',
>>> 'ceph-volume')()
>>>   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41,
>>> in __init__
>>> self.main(self.argv)
>>>   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py",
>>> line 59, in newfunc
>>> return f(*a, **kw)
>>>   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153,
>>> in main
>>> terminal.dispatch(self.mapper, subcommand_args)
>>>   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line
>>> 194, in dispatch
>>> instance.main()
>>>   File
>>> "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/main.py", line
>>> 32, in main
>>> terminal.dispatch(self.mapper, self.argv)
>>>   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line
>>> 194, in dispatch
>>> instance.main()
>>>   File
>>> "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py", line
>>> 166, in main
>>> self.list(args)
>>>   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py",
>>> line 16, in is_root
>>> return func(*a, **kw)
>>>   File
>>> "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py", line
>>> 122, in list
>>> report = self.generate(args.device)
>>>   File
>>> "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py", line
>>> 91, in generate
>>> info_device = [info for info in info_devices if info['NAME'] ==
>>> dev][0]
>>> IndexError: list index out of range
>>>
>>> So it has failed executing `ceph-volume raw list /dev/sdc`.
>>>
>>> It looks like this code is new in 17.2.7. Is this a regression? What
>>> would be the simplest way to back out of it?
>>>
>>> Thanks,
>>> Matt
>>> --
>>> Matthew Booth
>>>
>>
>>
>> --
>> Matthew Booth
>>
>
>
> --
> Matthew Booth
>


-- 
Matthew Booth
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-07 Thread Casey Bodley
On Tue, Nov 7, 2023 at 12:41 PM Jayanth Reddy
 wrote:
>
> Hello Wesley and Casey,
>
> We've ended up with the same issue and here it appears that even the user 
> with "--admin" isn't able to do anything. We're now unable to figure out if 
> it is due to bucket policies, ACLs or IAM of some sort. I'm seeing these IAM 
> errors in the logs
>
> ```
>
> Nov  7 00:02:00 ceph-05 radosgw[4054570]: req 8786689665323103851 
> 0.00368s s3:get_obj Error reading IAM Policy: Terminate parsing due to 
> Handler error.
>
> Nov  7 22:51:40 ceph-05 radosgw[4054570]: req 13293029267332025583 
> 0.0s s3:list_bucket Error reading IAM Policy: Terminate parsing due 
> to Handler error.

it's failing to parse the bucket policy document, but the error
message doesn't say what's wrong with it

disabling rgw_policy_reject_invalid_principals might help if it's
failing on the Principal
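
(A hedged sketch of how that would be done -- only applicable on releases where the
option exists; a later reply in this thread notes it is not present in v17.2.7. The
service name is a placeholder:)

  ceph config set client.rgw rgw_policy_reject_invalid_principals false
  ceph orch restart rgw.<service_name>    # restart the rgw daemons to pick it up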

> Nov  7 22:51:40 ceph-05 radosgw[4054570]: req 13293029267332025583 
> 0.0s s3:list_bucket init_permissions on 
> :window-dev[1d0fa0b4-04eb-48f9-889b-a60de865ccd8.24143.10]) failed, ret=-13
> Nov  7 22:51:40 ceph-feed-05 radosgw[4054570]: req 13293029267332025583 
> 0.0s op->ERRORHANDLER: err_no=-13 new_err_no=-13
>
> ```
>
> Please help what's wrong here. We're in Ceph v17.2.7.
>
> Regards,
> Jayanth
>
> On Thu, Oct 26, 2023 at 7:14 PM Wesley Dillingham  
> wrote:
>>
>> Thank you, this has worked to remove the policy.
>>
>> Respectfully,
>>
>> *Wes Dillingham*
>> w...@wesdillingham.com
>> LinkedIn 
>>
>>
>> On Wed, Oct 25, 2023 at 5:10 PM Casey Bodley  wrote:
>>
>> > On Wed, Oct 25, 2023 at 4:59 PM Wesley Dillingham 
>> > wrote:
>> > >
>> > > Thank you, I am not sure (inherited cluster). I presume such an admin
>> > user created after-the-fact would work?
>> >
>> > yes
>> >
>> > > Is there a good way to discover an admin user other than iterate over
>> > all users and retrieve user information? (I presume radosgw-admin user info
>> > --uid=" would illustrate such administrative access?
>> >
>> > not sure there's an easy way to search existing users, but you could
>> > create a temporary admin user for this repair
>> >
>> > >
>> > > Respectfully,
>> > >
>> > > Wes Dillingham
>> > > w...@wesdillingham.com
>> > > LinkedIn
>> > >
>> > >
>> > > On Wed, Oct 25, 2023 at 4:41 PM Casey Bodley  wrote:
>> > >>
>> > >> if you have an administrative user (created with --admin), you should
>> > >> be able to use its credentials with awscli to delete or overwrite this
>> > >> bucket policy
>> > >>
>> > >> On Wed, Oct 25, 2023 at 4:11 PM Wesley Dillingham <
>> > w...@wesdillingham.com> wrote:
>> > >> >
>> > >> > I have a bucket which got injected with bucket policy which locks the
>> > >> > bucket even to the bucket owner. The bucket now cannot be accessed
>> > (even
>> > >> > get its info or delete bucket policy does not work) I have looked in
>> > the
>> > >> > radosgw-admin command for a way to delete a bucket policy but do not
>> > see
>> > >> > anything. I presume I will need to somehow remove the bucket policy
>> > from
>> > >> > however it is stored in the bucket metadata / omap etc. If anyone can
>> > point
>> > >> > me in the right direction on that I would appreciate it. Thanks
>> > >> >
>> > >> > Respectfully,
>> > >> >
>> > >> > *Wes Dillingham*
>> > >> > w...@wesdillingham.com
>> > >> > LinkedIn 
>> > >> > ___
>> > >> > ceph-users mailing list -- ceph-users@ceph.io
>> > >> > To unsubscribe send an email to ceph-users-le...@ceph.io
>> > >> >
>> > >>
>> >
>> >
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: OSD fails to start after 17.2.6 to 17.2.7 update

2023-11-07 Thread Matthew Booth
On Tue, 7 Nov 2023 at 16:26, Matthew Booth  wrote:

> FYI I left rook as is and reverted to ceph 17.2.6 and the issue is
> resolved.
>
> The code change was added by
> commit 2e52c029bc2b052bb96f4731c6bb00e30ed209be:
> ceph-volume: fix broken workaround for atari partitions
>
> broken by bea9f4b643ce32268ad79c0fc257b25ff2f8333c
> This commits fixes that regression.
>
> Fixes: https://tracker.ceph.com/issues/62001
>
> Signed-off-by: Guillaume Abrioux 
> (cherry picked from commit b3fd5b513176fb9ba1e6e0595ded4b41d401c68e)
>
> It feels like a regression to me.
>

It looks like the issue is that the argument passed to List.generate is
'/dev/sdc', but lsblk's NAME field contains 'sdc'. The NAME field was not
used this way in v17.2.6.

I haven't checked, but I assume that ceph-bluestore-tool can accept either
'sdc' or '/dev/sdc'.
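
(For illustration only -- not the actual upstream fix: this is the kind of
normalization that would make both forms match the lsblk NAME column.)

  import os

  def find_device(info_devices, dev):
      # '/dev/sdc' -> 'sdc', so the comparison matches what lsblk reports in NAME
      dev_name = os.path.basename(dev)
      matches = [info for info in info_devices if info.get('NAME') == dev_name]
      return matches[0] if matches else None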

Matt


>
> Matt
>
> On Tue, 7 Nov 2023 at 16:13, Matthew Booth  wrote:
>
>> Firstly I'm rolling out a rook update from v1.12.2 to v1.12.7 (latest
>> stable) and ceph from 17.2.6 to 17.2.7 at the same time. I mention this in
>> case the problem is actually caused by rook rather than ceph. It looks like
>> ceph to my uninitiated eyes, though.
>>
>> The update just started bumping my OSDs and the first one fails in the
>> 'activate' init container. The complete logs for this container are:
>>
>> + OSD_ID=5
>> + CEPH_FSID=
>> + OSD_UUID=
>> + OSD_STORE_FLAG=--bluestore
>> + OSD_DATA_DIR=/var/lib/ceph/osd/ceph-5
>> + CV_MODE=raw
>> + DEVICE=/dev/sdc
>> + cp --no-preserve=mode /etc/temp-ceph/ceph.conf /etc/ceph/ceph.conf
>> + python3 -c '
>> import configparser
>>
>> config = configparser.ConfigParser()
>> config.read('\''/etc/ceph/ceph.conf'\'')
>>
>> if not config.has_section('\''global'\''):
>> config['\''global'\''] = {}
>>
>> if not config.has_option('\''global'\'','\''fsid'\''):
>> config['\''global'\'']['\''fsid'\''] = '\'''\''
>>
>> with open('\''/etc/ceph/ceph.conf'\'', '\''w'\'') as configfile:
>> config.write(configfile)
>> '
>> + ceph -n client.admin auth get-or-create osd.5 mon 'allow profile osd'
>> mgr 'allow profile osd' osd 'allow *' -k
>> /etc/ceph/admin-keyring-store/keyring
>> [osd.5]
>> key = 
>> + [[ raw == \l\v\m ]]
>> ++ mktemp
>> + OSD_LIST=/tmp/tmp.CekJVsr9gr
>> + ceph-volume raw list /dev/sdc
>> Traceback (most recent call last):
>>   File "/usr/sbin/ceph-volume", line 11, in 
>> load_entry_point('ceph-volume==1.0.0', 'console_scripts',
>> 'ceph-volume')()
>>   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41,
>> in __init__
>> self.main(self.argv)
>>   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line
>> 59, in newfunc
>> return f(*a, **kw)
>>   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153,
>> in main
>> terminal.dispatch(self.mapper, subcommand_args)
>>   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line
>> 194, in dispatch
>> instance.main()
>>   File
>> "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/main.py", line
>> 32, in main
>> terminal.dispatch(self.mapper, self.argv)
>>   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line
>> 194, in dispatch
>> instance.main()
>>   File
>> "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py", line
>> 166, in main
>> self.list(args)
>>   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line
>> 16, in is_root
>> return func(*a, **kw)
>>   File
>> "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py", line
>> 122, in list
>> report = self.generate(args.device)
>>   File
>> "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py", line
>> 91, in generate
>> info_device = [info for info in info_devices if info['NAME'] ==
>> dev][0]
>> IndexError: list index out of range
>>
>> So it has failed executing `ceph-volume raw list /dev/sdc`.
>>
>> It looks like this code is new in 17.2.7. Is this a regression? What
>> would be the simplest way to back out of it?
>>
>> Thanks,
>> Matt
>> --
>> Matthew Booth
>>
>
>
> --
> Matthew Booth
>


-- 
Matthew Booth
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-07 Thread Jayanth Reddy
Hello Wesley and Casey,

We've ended up with the same issue and here it appears that even the user
with "--admin" isn't able to do anything. We're now unable to figure out if
it is due to bucket policies, ACLs or IAM of some sort. I'm seeing
these IAM errors in the logs

```

Nov  7 00:02:00 ceph-05 radosgw[4054570]: req 8786689665323103851
0.00368s s3:get_obj Error reading IAM Policy: Terminate parsing due
to Handler error.

Nov  7 22:51:40 ceph-05 radosgw[4054570]: req 13293029267332025583
0.0s s3:list_bucket Error reading IAM Policy: Terminate parsing due
to Handler error.
Nov  7 22:51:40 ceph-05 radosgw[4054570]: req 13293029267332025583
0.0s s3:list_bucket init_permissions on
:window-dev[1d0fa0b4-04eb-48f9-889b-a60de865ccd8.24143.10]) failed, ret=-13
Nov  7 22:51:40 ceph-feed-05 radosgw[4054570]: req 13293029267332025583
0.0s op->ERRORHANDLER: err_no=-13 new_err_no=-13
```

Please help what's wrong here. We're in Ceph v17.2.7.

Regards,
Jayanth

On Thu, Oct 26, 2023 at 7:14 PM Wesley Dillingham 
wrote:

> Thank you, this has worked to remove the policy.
>
> Respectfully,
>
> *Wes Dillingham*
> w...@wesdillingham.com
> LinkedIn 
>
>
> On Wed, Oct 25, 2023 at 5:10 PM Casey Bodley  wrote:
>
> > On Wed, Oct 25, 2023 at 4:59 PM Wesley Dillingham  >
> > wrote:
> > >
> > > Thank you, I am not sure (inherited cluster). I presume such an admin
> > user created after-the-fact would work?
> >
> > yes
> >
> > > Is there a good way to discover an admin user other than iterate over
> > all users and retrieve user information? (I presume radosgw-admin user
> info
> > --uid=" would illustrate such administrative access?
> >
> > not sure there's an easy way to search existing users, but you could
> > create a temporary admin user for this repair
> >
> > >
> > > Respectfully,
> > >
> > > Wes Dillingham
> > > w...@wesdillingham.com
> > > LinkedIn
> > >
> > >
> > > On Wed, Oct 25, 2023 at 4:41 PM Casey Bodley 
> wrote:
> > >>
> > >> if you have an administrative user (created with --admin), you should
> > >> be able to use its credentials with awscli to delete or overwrite this
> > >> bucket policy
> > >>
> > >> On Wed, Oct 25, 2023 at 4:11 PM Wesley Dillingham <
> > w...@wesdillingham.com> wrote:
> > >> >
> > >> > I have a bucket which got injected with bucket policy which locks
> the
> > >> > bucket even to the bucket owner. The bucket now cannot be accessed
> > (even
> > >> > get its info or delete bucket policy does not work) I have looked in
> > the
> > >> > radosgw-admin command for a way to delete a bucket policy but do not
> > see
> > >> > anything. I presume I will need to somehow remove the bucket policy
> > from
> > >> > however it is stored in the bucket metadata / omap etc. If anyone
> can
> > point
> > >> > me in the right direction on that I would appreciate it. Thanks
> > >> >
> > >> > Respectfully,
> > >> >
> > >> > *Wes Dillingham*
> > >> > w...@wesdillingham.com
> > >> > LinkedIn 
> > >> > ___
> > >> > ceph-users mailing list -- ceph-users@ceph.io
> > >> > To unsubscribe send an email to ceph-users-le...@ceph.io
> > >> >
> > >>
> >
> >
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-07 Thread Casey Bodley
On Mon, Nov 6, 2023 at 4:31 PM Yuri Weinstein  wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63443#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE failures)
> rados - Neha, Radek, Travis, Ernesto, Adam King
> rgw - Casey

rgw results are approved. https://github.com/ceph/ceph/pull/54371
merged to reef but is needed on reef-release

> fs - Venky
> orch - Adam King
> rbd - Ilya
> krbd - Ilya
> upgrade/quincy-x (reef) - Laura PTL
> powercycle - Brad
> perf-basic - Laura, Prashant (POOL_APP_NOT_ENABLE failures)
>
> Please reply to this email with approval and/or trackers of known
> issues/PRs to address them.
>
> TIA
> YuriW
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: OSD fails to start after 17.2.6 to 17.2.7 update

2023-11-07 Thread Matthew Booth
FYI I left rook as is and reverted to ceph 17.2.6 and the issue is resolved.

The code change was added by
commit 2e52c029bc2b052bb96f4731c6bb00e30ed209be:
ceph-volume: fix broken workaround for atari partitions

broken by bea9f4b643ce32268ad79c0fc257b25ff2f8333c
This commits fixes that regression.

Fixes: https://tracker.ceph.com/issues/62001

Signed-off-by: Guillaume Abrioux 
(cherry picked from commit b3fd5b513176fb9ba1e6e0595ded4b41d401c68e)

It feels like a regression to me.

Matt

On Tue, 7 Nov 2023 at 16:13, Matthew Booth  wrote:

> Firstly I'm rolling out a rook update from v1.12.2 to v1.12.7 (latest
> stable) and ceph from 17.2.6 to 17.2.7 at the same time. I mention this in
> case the problem is actually caused by rook rather than ceph. It looks like
> ceph to my uninitiated eyes, though.
>
> The update just started bumping my OSDs and the first one fails in the
> 'activate' init container. The complete logs for this container are:
>
> + OSD_ID=5
> + CEPH_FSID=
> + OSD_UUID=
> + OSD_STORE_FLAG=--bluestore
> + OSD_DATA_DIR=/var/lib/ceph/osd/ceph-5
> + CV_MODE=raw
> + DEVICE=/dev/sdc
> + cp --no-preserve=mode /etc/temp-ceph/ceph.conf /etc/ceph/ceph.conf
> + python3 -c '
> import configparser
>
> config = configparser.ConfigParser()
> config.read('\''/etc/ceph/ceph.conf'\'')
>
> if not config.has_section('\''global'\''):
> config['\''global'\''] = {}
>
> if not config.has_option('\''global'\'','\''fsid'\''):
> config['\''global'\'']['\''fsid'\''] = '\'''\''
>
> with open('\''/etc/ceph/ceph.conf'\'', '\''w'\'') as configfile:
> config.write(configfile)
> '
> + ceph -n client.admin auth get-or-create osd.5 mon 'allow profile osd'
> mgr 'allow profile osd' osd 'allow *' -k
> /etc/ceph/admin-keyring-store/keyring
> [osd.5]
> key = 
> + [[ raw == \l\v\m ]]
> ++ mktemp
> + OSD_LIST=/tmp/tmp.CekJVsr9gr
> + ceph-volume raw list /dev/sdc
> Traceback (most recent call last):
>   File "/usr/sbin/ceph-volume", line 11, in 
> load_entry_point('ceph-volume==1.0.0', 'console_scripts',
> 'ceph-volume')()
>   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in
> __init__
> self.main(self.argv)
>   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line
> 59, in newfunc
> return f(*a, **kw)
>   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153,
> in main
> terminal.dispatch(self.mapper, subcommand_args)
>   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line
> 194, in dispatch
> instance.main()
>   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/main.py",
> line 32, in main
> terminal.dispatch(self.mapper, self.argv)
>   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line
> 194, in dispatch
> instance.main()
>   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py",
> line 166, in main
> self.list(args)
>   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line
> 16, in is_root
> return func(*a, **kw)
>   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py",
> line 122, in list
> report = self.generate(args.device)
>   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py",
> line 91, in generate
> info_device = [info for info in info_devices if info['NAME'] == dev][0]
> IndexError: list index out of range
>
> So it has failed executing `ceph-volume raw list /dev/sdc`.
>
> It looks like this code is new in 17.2.7. Is this a regression? What would
> be the simplest way to back out of it?
>
> Thanks,
> Matt
> --
> Matthew Booth
>


-- 
Matthew Booth
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] OSD fails to start after 17.2.6 to 17.2.7 update

2023-11-07 Thread Matthew Booth
Firstly I'm rolling out a rook update from v1.12.2 to v1.12.7 (latest
stable) and ceph from 17.2.6 to 17.2.7 at the same time. I mention this in
case the problem is actually caused by rook rather than ceph. It looks like
ceph to my uninitiated eyes, though.

The update just started bumping my OSDs and the first one fails in the
'activate' init container. The complete logs for this container are:

+ OSD_ID=5
+ CEPH_FSID=
+ OSD_UUID=
+ OSD_STORE_FLAG=--bluestore
+ OSD_DATA_DIR=/var/lib/ceph/osd/ceph-5
+ CV_MODE=raw
+ DEVICE=/dev/sdc
+ cp --no-preserve=mode /etc/temp-ceph/ceph.conf /etc/ceph/ceph.conf
+ python3 -c '
import configparser

config = configparser.ConfigParser()
config.read('\''/etc/ceph/ceph.conf'\'')

if not config.has_section('\''global'\''):
config['\''global'\''] = {}

if not config.has_option('\''global'\'','\''fsid'\''):
config['\''global'\'']['\''fsid'\''] = '\'''\''

with open('\''/etc/ceph/ceph.conf'\'', '\''w'\'') as configfile:
config.write(configfile)
'
+ ceph -n client.admin auth get-or-create osd.5 mon 'allow profile osd' mgr
'allow profile osd' osd 'allow *' -k /etc/ceph/admin-keyring-store/keyring
[osd.5]
key = 
+ [[ raw == \l\v\m ]]
++ mktemp
+ OSD_LIST=/tmp/tmp.CekJVsr9gr
+ ceph-volume raw list /dev/sdc
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 11, in 
load_entry_point('ceph-volume==1.0.0', 'console_scripts',
'ceph-volume')()
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in
__init__
self.main(self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line
59, in newfunc
return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in
main
terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line
194, in dispatch
instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/main.py",
line 32, in main
terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line
194, in dispatch
instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py",
line 166, in main
self.list(args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line
16, in is_root
return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py",
line 122, in list
report = self.generate(args.device)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py",
line 91, in generate
info_device = [info for info in info_devices if info['NAME'] == dev][0]
IndexError: list index out of range

So it has failed executing `ceph-volume raw list /dev/sdc`.

It looks like this code is new in 17.2.7. Is this a regression? What would
be the simplest way to back out of it?

Thanks,
Matt
-- 
Matthew Booth
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Seagate Exos power settings - any experiences at your sites?

2023-11-07 Thread Alex Gorbachev
We have been seeing some odd behavior with scrubbing (very slow) and OSD
warnings on a couple of new clusters.  A bit of research turned up this:

https://www.reddit.com/r/truenas/comments/p1ebnf/seagate_exos_load_cyclingidling_info_solution/

We've installed the tool from https://github.com/Seagate/openSeaChest and
disabled EPC power features similar to:

openSeaChest_PowerControl --scan|grep ST|awk '{print $2}'|xargs -I {}
openSeaChest_PowerControl -d {} --EPCfeature disable

Things seem to be better now on those two clusters. Has anyone seen
anything similar? This would seem to be a huge issue if all defaults on
Exos are wrong (stop-and-go on all Ceph/ZFS workloads).
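
(One way to check whether the drives are actually load-cycling is to watch the SMART
load-cycle counter before and after the change; the device path is just an example and
the attribute name applies to SATA models:)

  smartctl -A /dev/sdX | grep -i load_cycle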
--
Best regards,
Alex Gorbachev
--
Intelligent Systems Services Inc.
http://www.iss-integration.com
https://www.linkedin.com/in/alex-gorbachev-iss/
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Found unknown daemon type ceph-exporter on host after upgrade to 17.2.7

2023-11-07 Thread Dmitry Melekhov

Hello!

I see

[WRN] Found unknown daemon type ceph-exporter on host

for all 3 ceph servers, in the logs and in the dashboard, after upgrading from 
17.2.6 to 17.2.7


and

cephadm ['--image', 
'quay.io/ceph/ceph@sha256:1fcdbead4709a7182047f8ff9726e0f17b0b209aaa6656c5c8b2339b818e70bb', 
'--timeout', '895', 'ls']
2023-11-07 16:15:37,531 7fddb699b740 WARNING version for unknown daemon 
type ceph-exporter


in

cephadm.log


There were no such messages before.

I guess this will not create any problem, but is there any way to fix this?


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] pool(s) do not have an application enabled after upgrade to 17.2.7

2023-11-07 Thread Dmitry Melekhov



Hello!


I'm very new to ceph, sorry for asking extremely basic questions.


I just upgraded from 17.2.6 to 17.2.7 and got the warning:

2 pool(s) do not have an application enabled

These pools are

5 cephfs.cephfs.meta
6 cephfs.cephfs.data

I don't remember why and how I created them, I just followed some 
instruction...

And I don't remember their state before the upgrade :-(
And I see in the dashboard that 0 bytes are used in both pools.

But I have two other pools

3 cephfs_data
4 cephfs_metadata

which are in use by cephfs:

ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

and really have data in them.

Could you tell me, can I just remove these two pools that have no 
application, given that everything works, i.e. cephfs is mounted and accessible?
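
(For reference, the two usual ways to clear this particular warning are either to
delete the pools if they are unused, or to keep them and tag them with an application;
a minimal sketch with the pool names from this message:)

  ceph osd pool application enable cephfs.cephfs.meta cephfs
  ceph osd pool application enable cephfs.cephfs.data cephfs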


Thank you!
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph dashboard reports CephNodeNetworkPacketErrors

2023-11-07 Thread Dominique Ramaekers
Hi David,

Thanks for the quick response!

The bond reports not a single link failure, nor do I register packet loss 
with ping. The network cards in the server have already been replaced, and the 
cables are new. With my setup I easily reach 2K IOPS over the cluster, so I do 
not suspect network congestion when I get the errors at ±300 IOPS and <100 MB/s 
of usage…

I’ll let a network technician look at the switch. I hope he’ll find a reason 
for the packet errors…

Greetings,

Dominique.

Van: David C. 
Verzonden: dinsdag 7 november 2023 11:39
Aan: Dominique Ramaekers 
CC: ceph-users@ceph.io
Onderwerp: Re: [ceph-users] Ceph dashboard reports CephNodeNetworkPacketErrors

Hi Dominique,

The consistency of the data should not be at risk with such a problem.
But on the other hand, it's better to solve the network problem.

Perhaps look at the state of bond0 :
cat /proc/net/bonding/bond0
As well as the usual network checks


Regards,

David CASIER




On Tue, Nov 7, 2023 at 11:20 AM, Dominique Ramaekers 
<dominique.ramaek...@cometal.be> wrote:
Hi,

I'm using Ceph on a 4-host cluster for a year now. I recently discovered the 
Ceph Dashboard :-)

Now I see that the Dashboard reports CephNodeNetworkPacketErrors >0.01% or >10 
packets/s...

Although all systems work great, I'm worried.

'ip -s link show eno5' results:
2: eno5:  mtu 1500 qdisc mq master bond0 
state UP mode DEFAULT group default qlen 1000
link/ether 7a:3b:79:9c:f6:d1 brd ff:ff:ff:ff:ff:ff permaddr 
5c:ba:2c:08:b3:90
RX: bytes   packets errors dropped  missed   mcast
 734153938129 645770129  20160   0   0  342301
TX: bytes   packets errors dropped carrier collsns
1085134190597 923843839  0   0   0   0
altname enp178s0f0

So in average 0,0003% of RX packet errors!

All the four hosts uses the same 10Gb HP switch. The hosts themselves are HP 
Proliant G10 servers. I would expect 0% packet loss...

Anyway. Should I be worried about data consistency? Or can Ceph handle this 
amount of packet errors?

Greetings,

Dominique.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to 
ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Redeploy ceph orch OSDs after reboot, but don't mark as 'unmanaged'

2023-11-07 Thread Janek Bevendorff
Actually, ceph cephadm osd activate doesn't do what I expected it to do. 
It  seems to be looking for new OSDs to create instead of looking for 
existing OSDs to activate. Hence, it does nothing on my hosts and only 
prints 'Created no osd(s) on host XXX; already created?' So this 
wouldn't be an option either, even if I were willing to deploy the admin 
key on the OSD hosts.



On 07/11/2023 11:41, Janek Bevendorff wrote:

Hi,

We have our cluster RAM-booted, so we start from a clean slate after 
every reboot. That means I need to redeploy all OSD daemons as well. 
At the moment, I run cephadm deploy via Salt on the rebooted node, 
which brings the deployed OSDs back up, but the problem with this is 
that the deployed OSD shows up as 'unmanaged' in ceph orch ps afterwards.


I could simply skip the cephadm call and wait for the Ceph 
orchestrator to reconcile and auto-activate the disks, but that can 
take up to 15 minutes, which is unacceptable. Running ceph cephadm osd 
activate is not an option either, since I don't have the admin keyring 
deployed on the OSD hosts (I could do that, but I don't want to).


How can I manually activate the OSDs after a reboot and hand over 
control to the Ceph orchestrator afterwards? I checked the deployments 
in /var/lib/ceph/, but the only difference I found between my 
manual cephadm deployment and what ceph orch does is that the device 
links to /dev/mapper/ceph--... instead of /dev/ceph-...


Any hints appreciated!

Janek 


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Redeploy ceph orch OSDs after reboot, but don't mark as 'unmanaged'

2023-11-07 Thread Janek Bevendorff

Hi,

We have our cluster RAM-booted, so we start from a clean slate after 
every reboot. That means I need to redeploy all OSD daemons as well. At 
the moment, I run cephadm deploy via Salt on the rebooted node, which 
brings the deployed OSDs back up, but the problem with this is that the 
deployed OSD shows up as 'unmanaged' in ceph orch ps afterwards.


I could simply skip the cephadm call and wait for the Ceph orchestrator 
to reconcile and auto-activate the disks, but that can take up to 15 
minutes, which is unacceptable. Running ceph cephadm osd activate is not 
an option either, since I don't have the admin keyring deployed on the 
OSD hosts (I could do that, but I don't want to).


How can I manually activate the OSDs after a reboot and hand over 
control to the Ceph orchestrator afterwards? I checked the deployments 
in /var/lib/ceph/, but the only difference I found between my 
manual cephadm deployment and what ceph orch does is that the device 
links to /dev/mapper/ceph--... instead of /dev/ceph-...
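
A quick way to see that difference is to list where each deployed OSD's block
symlink points (sketch - the exact path depends on the cluster fsid):

for d in /var/lib/ceph/*/osd.*; do
    echo "$d/block -> $(readlink "$d/block")"
done

Both forms normally resolve to the same LV in the end - it is just the
/dev/mapper name versus the /dev/<vg>/<lv> symlink.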


Any hints appreciated!

Janek



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph dashboard reports CephNodeNetworkPacketErrors

2023-11-07 Thread David C.
Hi Dominique,

Data consistency should not be at risk from a problem like this,
but it is still better to fix the network issue.

Perhaps start by looking at the state of bond0:
cat /proc/net/bonding/bond0
as well as running the usual network checks.
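
For example (a quick sketch - the interface names below are the ones from the
mail and may need adjusting):

cat /proc/net/bonding/bond0                 # bond mode, slave state, link failure counts
ethtool eno5                                # negotiated speed/duplex, link detected
ethtool -S eno5 | grep -iE 'err|drop|crc'   # NIC/driver error counters (names vary per driver)
ip -s link show bond0                       # errors as seen on the bond itself

If the errors only grow on one slave interface, that usually points at the
cable, transceiver or switch port rather than at Ceph.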


Cordialement,

*David CASIER*





Le mar. 7 nov. 2023 à 11:20, Dominique Ramaekers <
dominique.ramaek...@cometal.be> a écrit :

> Hi,
>
> I'm using Ceph on a 4-host cluster for a year now. I recently discovered
> the Ceph Dashboard :-)
>
> Now I see that the Dashboard reports CephNodeNetworkPacketErrors >0.01% or
> >10 packets/s...
>
> Although all systems work great, I'm worried.
>
> 'ip -s link show eno5' results:
> 2: eno5:  mtu 1500 qdisc mq master
> bond0 state UP mode DEFAULT group default qlen 1000
> link/ether 7a:3b:79:9c:f6:d1 brd ff:ff:ff:ff:ff:ff permaddr
> 5c:ba:2c:08:b3:90
> RX: bytes   packets errors dropped  missed   mcast
>  734153938129 645770129  20160   0   0  342301
> TX: bytes   packets errors dropped carrier collsns
> 1085134190597 923843839  0   0   0   0
> altname enp178s0f0
>
> So on average about 0,003% of the RX packets had errors!
>
> All four hosts use the same 10 Gb HP switch. The hosts themselves are
> HP ProLiant G10 servers. I would expect 0% packet loss...
>
> Anyway. Should I be worried about data consistency? Or can Ceph handle
> this amount of packet errors?
>
> Greetings,
>
> Dominique.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Ceph dashboard reports CephNodeNetworkPacketErrors

2023-11-07 Thread Dominique Ramaekers
Hi,

I'm using Ceph on a 4-host cluster for a year now. I recently discovered the 
Ceph Dashboard :-)

Now I see that the Dashboard reports CephNodeNetworkPacketErrors >0.01% or >10 
packets/s...

Although all systems work great, I'm worried.

'ip -s link show eno5' results:
2: eno5:  mtu 1500 qdisc mq master bond0 
state UP mode DEFAULT group default qlen 1000
link/ether 7a:3b:79:9c:f6:d1 brd ff:ff:ff:ff:ff:ff permaddr 
5c:ba:2c:08:b3:90
RX: bytes   packets errors dropped  missed   mcast
 734153938129 645770129  20160   0   0  342301
TX: bytes   packets errors dropped carrier collsns
1085134190597 923843839  0   0   0   0
altname enp178s0f0

So on average about 0,003% of the RX packets had errors!

All four hosts use the same 10 Gb HP switch. The hosts themselves are HP 
ProLiant G10 servers. I would expect 0% packet loss...

Anyway. Should I be worried about data consistency? Or can Ceph handle this 
amount of packet errors?

Greetings,

Dominique.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-07 Thread Venky Shankar
On Tue, Nov 7, 2023 at 9:46 AM Venky Shankar  wrote:
>
> Hi Yuri,
>
> On Tue, Nov 7, 2023 at 3:01 AM Yuri Weinstein  wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/63443#note-1
> >
> > Seeking approvals/reviews for:
> >
> > smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLED failures)
> > rados - Neha, Radek, Travis, Ernesto, Adam King
> > rgw - Casey
> > fs - Venky
>
> Please include the qa fixes for POOL_APP_NOT_ENABLED warnings for fs suite 
> from
>
> https://github.com/ceph/ceph/pull/54380
>
> No need to rebuild packages - just using the updated qa suite and
> rerunning the failed/dead jobs would suffice.

fs rerun for failed+dead jobs -
https://pulpito.ceph.com/vshankar-2023-11-07_05:14:36-fs-reef-release-distro-default-smithi/

>
> > orch - Adam King
> > rbd - Ilya
> > krbd - Ilya
> > upgrade/quincy-x (reef) - Laura PTL
> > powercycle - Brad
> > perf-basic - Laura, Prashant (POOL_APP_NOT_ENABLED failures)
> >
> > Please reply to this email with approval and/or trackers of known
> > issues/PRs to address them.
> >
> > TIA
> > YuriW
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> >
>
>
> --
> Cheers,
> Venky



-- 
Cheers,
Venky
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io