[ceph-users] Re: 17.2.7 quincy dashboard issues

2023-10-30 Thread Matthew Darwin
Ok, so I tried the new ceph dashboard by "set-prometheus-api-host" 
(note "host" and not "url") and it returns the wrong data.  We have 4 
ceph clusters going into the same prometheus instance.  How does it 
know which data to pull? Do I need to pass a promql query?
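For reference, a hedged way to check what a shared Prometheus returns for a
single cluster, assuming each cluster's scrape carries a distinguishing label
such as `cluster` (e.g. added via external_labels in prometheus.yml; the
dashboard does not add one for you, and the endpoint below is a placeholder):

curl -sG 'http://prometheus.example:9090/api/v1/query' \
  --data-urlencode 'query=ceph_cluster_total_used_bytes{cluster="test"}'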


The capacity widget at the top right (not using prometheus) shows 35%
of 51 TiB used (test cluster data)... This is correct. The chart shows
used capacity as 1.7 PiB, which is coming from the production cluster
(incorrect).


Ideas?


On 2023-10-30 11:30, Nizamudeen A wrote:
Ah yeah, probably that's why the utilization charts are empty, because
it relies on the prometheus info.

And I raised a PR to disable the new dashboard in quincy.
https://github.com/ceph/ceph/pull/54250

Regards,
Nizam

On Mon, Oct 30, 2023 at 6:09 PM Matthew Darwin  wrote:

Hello,

We're not using prometheus within ceph (ceph dashboards show in our
grafana which is hosted elsewhere). The old dashboard showed the
metrics fine, so not sure why in a patch release we would need to make
configuration changes to get the same metrics. Agree it should be
off by default.

"ceph dashboard feature disable dashboard" works to put the old
dashboard back.  Thanks.
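For reference, the full toggle family as a sketch (the `status` subcommand is
assumed to be available alongside enable/disable; check `ceph dashboard
feature -h` on your release):

ceph dashboard feature status               # list feature toggles
ceph dashboard feature disable dashboard    # fall back to the old landing page
ceph dashboard feature enable dashboard     # bring the new one back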

On 2023-10-30 00:09, Nizamudeen A wrote:
> Hi Matthew,
>
> Is the prometheus configured in the cluster? And also the
> PROMETHEUS_API_URL is set? You can set it manually by ceph dashboard
> set-prometheus-api-url .
>
> You can switch to the old Dashboard by switching the feature toggle in the
> dashboard. `ceph dashboard feature disable dashboard` and reloading the
> page. Probably this should have been disabled by default.
>
> Regards,
> Nizam
>
> On Sun, Oct 29, 2023, 23:04 Matthew Darwin wrote:
>
>> Hi all,
>>
>> I see 17.2.7 quincy is published as debian-bullseye packages.  So I
>> tried it on a test cluster.
>>
>> I must say I was not expecting the big dashboard change in a patch
>> release.  Also all the "cluster utilization" numbers are all blank now
>> (any way to fix it?), so the dashboard is much less usable now.
>>
>> Thoughts?
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Add nats_adapter

2023-10-30 Thread Yuval Lifshitz
Hi Vahideh,
Lua scripting was added in pacific. Did you try uploading that file to a
"pacific" RGW?
What is failing there?
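For reference, uploading a script to RGW typically looks like this sketch
(the file name and context are placeholders; `script put` is the subcommand
that landed with pacific's Lua support):

radosgw-admin script put --infile=nats_adapter.lua --context=postRequest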

Yuval

On Mon, Oct 30, 2023 at 5:04 PM Vahideh Alinouri 
wrote:

> Dear Ceph Users,
>
> I am requesting the backporting of the changes related to nats_adapter.lua.
> This feature is in a version newer than pacific, but we don't have it
> in the pacific version.
>
> I would greatly appreciate it if someone from the Ceph development
> team could backport this change to the pacific version.
>
> Best regards,
> Vahideh Alinouri
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 17.2.7 quincy

2023-10-30 Thread Nizamudeen A
Ah yeah, probably that's why the utilization charts are empty, because it
relies on the prometheus info.

And I raised a PR to disable the new dashboard in quincy.
https://github.com/ceph/ceph/pull/54250

Regards,
Nizam

On Mon, Oct 30, 2023 at 6:09 PM Matthew Darwin  wrote:

> Hello,
>
> We're not using prometheus within ceph (ceph dashboards show in our
> grafana which is hosted elsewhere). The old dashboard showed the
> metrics fine, so not sure why in a patch release we would need to make
> configuration changes to get the same metrics. Agree it should be
> off by default.
>
> "ceph dashboard feature disable dashboard" works to put the old
> dashboard back.  Thanks.
>
> On 2023-10-30 00:09, Nizamudeen A wrote:
> > Hi Matthew,
> >
> > Is the prometheus configured in the cluster? And also the
> > PROMETHEUS_API_URL is set? You can set it manually by ceph dashboard
> > set-prometheus-api-url .
> >
> > You can switch to the old Dashboard by switching the feature toggle in
> the
> > dashboard. `ceph dashboard feature disable dashboard` and reloading the
> > page. Probably this should have been disabled by default.
> >
> > Regards,
> > Nizam
> >
> > On Sun, Oct 29, 2023, 23:04 Matthew Darwin  wrote:
> >
> >> Hi all,
> >>
> >> I see 17.2.7 quincy is published as debian-bullseye packages.  So I
> >> tried it on a test cluster.
> >>
> >> I must say I was not expecting the big dashboard change in a patch
> >> release.  Also all the "cluster utilization" numbers are all blank now
> >> (any way to fix it?), so the dashboard is much less usable now.
> >>
> >> Thoughts?
> >> ___
> >> ceph-users mailing list -- ceph-users@ceph.io
> >> To unsubscribe send an email to ceph-users-le...@ceph.io
> >>
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Add nats_adapter

2023-10-30 Thread Vahideh Alinouri
Dear Ceph Users,

I am requesting the backporting of the changes related to nats_adapter.lua.
This feature is in a version newer than pacific, but we don't have it
in the pacific version.

I would greatly appreciate it if someone from the Ceph development
team could backport this change to the pacific version.

Best regards,
Vahideh Alinouri
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] v17.2.7 Quincy released

2023-10-30 Thread Yuri Weinstein
We're happy to announce the 7th backport release in the Quincy series.

https://ceph.io/en/news/blog/2023/v17-2-7-quincy-released/

Notable Changes
---

* `ceph mgr dump` command now displays the name of the Manager module that
  registered a RADOS client in the `name` field added to elements of the
  `active_clients` array. Previously, only the address of a module's RADOS
  client was shown in the `active_clients` array.

* mClock Scheduler: The mClock scheduler (default scheduler in Quincy) has
  undergone significant usability and design improvements to address the slow
  backfill issue. Some important changes are:

  * The 'balanced' profile is set as the default mClock profile because it
represents a compromise between prioritizing client IO and recovery IO. Users
can then choose either the 'high_client_ops' profile to prioritize client IO
or the 'high_recovery_ops' profile to prioritize recovery IO (a sketch of
switching profiles follows these notes).

  * QoS parameters including reservation and limit are now specified in terms
of a fraction (range: 0.0 to 1.0) of the OSD's IOPS capacity.

  * The cost parameters (osd_mclock_cost_per_io_usec_* and
osd_mclock_cost_per_byte_usec_*) have been removed. The cost of an operation
is now determined using the random IOPS and maximum sequential bandwidth
capability of the OSD's underlying device.

  * Degraded object recovery is given higher priority when compared to misplaced
object recovery because degraded objects present a data safety issue not
present with objects that are merely misplaced. Therefore, backfilling
operations with the 'balanced' and 'high_client_ops' mClock profiles may
progress slower than what was seen with the 'WeightedPriorityQueue' (WPQ)
scheduler.

  * The QoS allocations in all mClock profiles are optimized based on the above
fixes and enhancements.

  * For more detailed information see:
https://docs.ceph.com/en/quincy/rados/configuration/mclock-config-ref/

* RGW: S3 multipart uploads using Server-Side Encryption now replicate
  correctly in multi-site. Previously, the replicas of such objects were
  corrupted on decryption.  A new tool, ``radosgw-admin bucket resync encrypted
  multipart``, can be used to identify these original multipart uploads. The
  ``LastModified`` timestamp of any identified object is incremented by 1
  nanosecond to cause peer zones to replicate it again.  For multi-site
  deployments that make any use of Server-Side Encryption, we recommend
  running this command against every bucket in every zone after all zones have
  upgraded (a per-bucket sketch follows these notes).

* CephFS: The MDS now evicts clients which are not advancing their request
  tids, since this causes a large buildup of session metadata, resulting in
  the MDS going read-only due to the RADOS operation exceeding the size
  threshold. The `mds_session_metadata_threshold` config controls the maximum
  size that (encoded) session metadata can grow to (a sketch follows these
  notes).

* CephFS: After recovering a Ceph File System following the disaster recovery
  procedure, the recovered files under the `lost+found` directory can now be
  deleted.
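
As a sketch, switching between the mClock profiles mentioned above is a single
config change (`osd_mclock_profile` is the documented option):

ceph config set osd osd_mclock_profile balanced            # the new default
ceph config set osd osd_mclock_profile high_client_ops     # favour client IO
ceph config set osd osd_mclock_profile high_recovery_ops   # favour recovery IO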
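One hedged way to run the encrypted-multipart resync against every bucket in
a zone (this assumes `jq` is available and that the command accepts a
`--bucket` argument, per the per-bucket wording above):

for b in $(radosgw-admin bucket list | jq -r '.[]'); do
    radosgw-admin bucket resync encrypted multipart --bucket="$b"
done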
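And the session metadata threshold can be adjusted with a plain config set;
the value below is an arbitrary illustration, not a recommendation:

ceph config set mds mds_session_metadata_threshold 16777216   # bytes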

Getting Ceph

* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-17.2.7.tar.gz
* Containers at https://quay.io/repository/ceph/ceph
* For packages, see https://docs.ceph.com/en/latest/install/get-packages/
* Release git sha1: b12291d110049b2f35e32e0de30d70e9a4c060d2
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Ceph OSD reported Slow operations

2023-10-30 Thread prabhav
We have a production setup of 36 OSDs (SAS disks) totalling 180 TB, allocated
to a single Ceph cluster with 3 monitors and 3 managers. There were 830
volumes and VMs created in OpenStack with Ceph as the backend. On Sep 21,
users reported slowness in accessing the VMs.
Analysing the logs led us to problems with the SAS disks, network congestion,
and the Ceph configuration (all default values were in use). We upgraded the
network from 1Gbps to 10Gbps for both public and cluster networking. There
was no change.
Ceph benchmarking showed that 28 out of 36 OSDs reported very low IOPS of 30
to 50, while the remaining ones showed 300+ IOPS.
We gradually started reducing the load on the ceph cluster, and the volume
count is now 650. The slow operations have gradually reduced, but I am aware
that this is not the solution.
The Ceph configuration was updated with the following changes (runtime
equivalents are sketched below):
osd_journal_size = 10 GB
osd_max_backfills = 1
osd_recovery_max_active = 1
osd_recovery_op_priority = 1
bluestore_cache_trim_max_skip_pinned = 1
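
For reference, the runtime equivalents of the listed changes look like this
sketch (osd_journal_size is a FileStore option that only takes effect when
the journal is recreated, so it is omitted here):

ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
ceph config set osd osd_recovery_op_priority 1
ceph config set osd bluestore_cache_trim_max_skip_pinned 1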

After one month, we are now facing another issue: the mgr daemon stopped on
all 3 quorum nodes and 16 OSDs went down. We could not find the reason in the
ceph-mon and ceph-mgr logs. Please guide me, as it's a production setup.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Solution for heartbeat and slow ops warning

2023-10-30 Thread huongnv

Hi guys,

Have you found a solution for the issue with long heartbeat times and slow
ops warnings?


Thank you so much.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: [quincy - 17.2.6] Lua scripting in the rados gateway - HTTP_REMOTE-ADDR missing

2023-10-30 Thread stephan
Hi Yuval,

this is cool. Thanks for the fast reply and PR. Fingers crossed it gets merged
soon.
This would be very valuable for us and hopefully for others too.

Cheers

Stephan
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Enterprise SSD require for Ceph Reef Cluster

2023-10-30 Thread Nafiz Imtiaz
Hi,

We have a ceph cluster running the reef version. We want to buy some
enterprise SSDs for our ceph cluster, and the drive size we plan for is
1.92TB.

For that, we have selected an Intel model. Please give us your review of this
model, and if you have any other model preference, please share it with us.
Thank you

Brand: Intel
SSD: 1.92TB 2.5'' Enterprise SATA,  6Gb/s
Model: D3-S4510


Regards,


Nafiz Imtiaz

Assistant Manager, Product Development

IT Division

Bangladesh Export Import Company Ltd.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: RGW access logs with bucket name

2023-10-30 Thread Casey Bodley
another option is to enable the rgw ops log, which includes the bucket
name for each request

the http access log line that's visible at log level 1 follows a known
apache format that users can scrape, so i've resisted adding extra
s3-specific stuff like bucket/object names there. there was some
recent discussion around this in
https://github.com/ceph/ceph/pull/50350, which had originally extended
that access log line
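
A sketch of turning the ops log on (these are the long-standing option names;
newer releases can also write it to a file, so check the docs for your
release):

ceph config set client.rgw rgw_enable_ops_log true
ceph config set client.rgw rgw_ops_log_socket_path /var/run/ceph/rgw-ops.sock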

On Mon, Oct 30, 2023 at 6:03 AM Boris Behrens  wrote:
>
> Hi Dan,
>
> we are currently moving all the logging into lua scripts, so it is not an
> issue anymore for us.
>
> Thanks
>
> ps: the ceph analyzer is really cool. plusplus
>
> On Sat, Oct 28, 2023 at 22:03 Dan van der Ster <
> dan.vanders...@clyso.com> wrote:
>
> > Hi Boris,
> >
> > I found that you need to use debug_rgw=10 to see the bucket name :-/
> >
> > e.g.
> > 2023-10-28T19:55:42.288+ 7f34dde06700 10 req 3268931155513085118
> > 0.0s s->object=... s->bucket=xyz-bucket-123
> >
> > Did you find a more convenient way in the meantime? I think we should
> > log bucket name at level 1.
> >
> > Cheers, Dan
> >
> > --
> > Dan van der Ster
> > CTO
> >
> > Clyso GmbH
> > p: +49 89 215252722 | a: Vancouver, Canada
> > w: https://clyso.com | e: dan.vanders...@clyso.com
> >
> > Try our Ceph Analyzer: https://analyzer.clyso.com
> >
> > On Thu, Mar 30, 2023 at 4:15 AM Boris Behrens  wrote:
> > >
> > > Sadly not.
> > > I only see the path/query of a request, but not the hostname.
> > > So when a bucket is accessed via hostname (https://bucket.TLD/object?query)
> > > I only see the object and the query (GET /object?query).
> > > When a bucket is accessed via path (https://TLD/bucket/object?query) I can
> > > also see the bucket in the log (GET bucket/object?query).
> > >
> > > On Thu, Mar 30, 2023 at 12:58 Szabo, Istvan (Agoda) <
> > > istvan.sz...@agoda.com> wrote:
> > >
> > > > It has the full url begins with the bucket name in the beast logs http
> > > > requests, hasn’t it?
> > > >
> > > > Istvan Szabo
> > > > Staff Infrastructure Engineer
> > > > ---
> > > > Agoda Services Co., Ltd.
> > > > e: istvan.sz...@agoda.com
> > > > ---
> > > >
> > > > On 2023. Mar 30., at 17:44, Boris Behrens  wrote:
> > > >
> > > > Bringing up that topic again:
> > > > is it possible to log the bucket name in the rgw client logs?
> > > >
> > > currently I am only able to see the bucket name when someone accesses
> > > the bucket via https://TLD/bucket/object instead of https://bucket.TLD/object.
> > > >
> > > > On Tue, Jan 3, 2023 at 10:25 Boris Behrens wrote:
> > > >
> > > > Hi,
> > > >
> > > > I am looking to move our logs from
> > > >
> > > > /var/log/ceph/ceph-client...log to our log aggregator.
> > > >
> > > >
> > > > Is there a way to have the bucket name in the log file?
> > > >
> > > >
> > > > Or can I write the rgw_enable_ops_log into a file? Maybe I could work
> > with
> > > >
> > > > this.
> > > >
> > > >
> > > > Cheers and happy new year
> > > >
> > > > Boris
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > The "UTF-8-Probleme" self-help group meets this time, as an
> > > > exception, in the large hall.
> > > > ___
> > > > ceph-users mailing list -- ceph-users@ceph.io
> > > > To unsubscribe send an email to ceph-users-le...@ceph.io
> > > >
> > > >
> > > >
> > >
> > >
> > > --
> > > The "UTF-8-Probleme" self-help group meets this time, as an exception,
> > > in the large hall.
> > > ___
> > > ceph-users mailing list -- ceph-users@ceph.io
> > > To unsubscribe send an email to ceph-users-le...@ceph.io
> >
>
>
> --
> The "UTF-8-Probleme" self-help group meets this time, as an exception, in
> the large hall.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Packages for 17.2.7 released without release notes / announcement (Re: Re: Status of Quincy 17.2.5 ?)

2023-10-30 Thread Christian Rohmann

Sorry to dig up this old thread ...

On 25.01.23 10:26, Christian Rohmann wrote:

On 20/10/2022 10:12, Christian Rohmann wrote:

1) May I bring up again my remarks about the timing:

On 19/10/2022 11:46, Christian Rohmann wrote:

I believe the upload of a new release to the repo prior to the
announcement happens quite regularly - it might just be due to the
technical process of releasing.
But I agree it would be nice to have a more "bit flip" approach to
new releases in the repo, so that packages do not appear as updates
before the announcement and the final release and update notes are out.
By my observation, packages are sometimes available on the
download servers via the "last stable" folders, such as
https://download.ceph.com/debian-quincy/, quite some time before the
announcement of a release is out.
I know it's hard to time this right with mirrors requiring some time
to sync files, but it would be nice not to see the packages, or have
people install them, before the release notes and pointers to potential
changes are out.


Today's 16.2.11 release showed the exact issue I described above:

1) 16.2.11 packages are already available via e.g. 
https://download.ceph.com/debian-pacific
2) release notes not yet merged
(https://github.com/ceph/ceph/pull/49839), thus
https://ceph.io/en/news/blog/2022/v16-2-11-pacific-released/ shows a
404 :-)
3) No announcement like 
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/QOCU563UD3D3ZTB5C5BJT5WRSJL5CVSD/ 
to the ML yet.




I really appreciate the work (implementation and also testing) that goes 
into each release.
But the release of 17.2.7 showed the issue of "packages available before 
the news is out":


1) packages are available on e.g. download.ceph.com
2) There are NO release notes at
https://docs.ceph.com/en/latest/releases/ yet

3) And there is no announcement on the ML yet


It would be awesome if you could consider bit-flip releases, with
packages only becoming available together with the communication / release
notes.

Regards

Christian

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 17.2.7 quincy

2023-10-30 Thread Matthew Darwin

Hello,

We're not using prometheus within ceph (ceph dashboards show in our 
grafana which is hosted elsewhere). The old dashboard showed the 
metrics fine, so not sure why in a patch release we would need to make 
configuration changes to get the same metrics. Agree it should be
off by default.


"ceph dashboard feature disable dashboard" works to put the old 
dashboard back.  Thanks.


On 2023-10-30 00:09, Nizamudeen A wrote:

Hi Matthew,

Is the prometheus configured in the cluster? And also the
PROMETHEUS_API_URL is set? You can set it manually by ceph dashboard
set-prometheus-api-url .

You can switch to the old Dashboard by switching the feature toggle in the
dashboard. `ceph dashboard feature disable dashboard` and reloading the
page. Probably this should have been disabled by default.

Regards,
Nizam

On Sun, Oct 29, 2023, 23:04 Matthew Darwin  wrote:


Hi all,

I see 17.2.7 quincy is published as debian-bullseye packages.  So I
tried it on a test cluster.

I must say I was not expecting the big dashboard change in a patch
release.  Also all the "cluster utilization" numbers are all blank now
(any way to fix it?), so the dashboard is much less usable now.

Thoughts?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: RGW access logs with bucket name

2023-10-30 Thread Boris Behrens
Hi Dan,

we are currently moving all the logging into lua scripts, so it is not an
issue anymore for us.

Thanks

ps: the ceph analyzer is really cool. plusplus
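
Since the thread is about getting bucket names logged, here is a minimal
sketch of that Lua approach (Request.Bucket and RGWDebugLog() are per the RGW
Lua docs as I recall them, and the file/context names are placeholders;
verify against your release):

-- log_bucket.lua: log the bucket name for every completed request
if Request.Bucket then
    RGWDebugLog("bucket: " .. Request.Bucket.Name)
end

radosgw-admin script put --infile=log_bucket.lua --context=postRequest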

On Sat, Oct 28, 2023 at 22:03 Dan van der Ster <
dan.vanders...@clyso.com> wrote:

> Hi Boris,
>
> I found that you need to use debug_rgw=10 to see the bucket name :-/
>
> e.g.
> 2023-10-28T19:55:42.288+ 7f34dde06700 10 req 3268931155513085118
> 0.0s s->object=... s->bucket=xyz-bucket-123
>
> Did you find a more convenient way in the meantime? I think we should
> log bucket name at level 1.
>
> Cheers, Dan
>
> --
> Dan van der Ster
> CTO
>
> Clyso GmbH
> p: +49 89 215252722 | a: Vancouver, Canada
> w: https://clyso.com | e: dan.vanders...@clyso.com
>
> Try our Ceph Analyzer: https://analyzer.clyso.com
>
> On Thu, Mar 30, 2023 at 4:15 AM Boris Behrens  wrote:
> >
> > Sadly not.
> > I only see the path/query of a request, but not the hostname.
> > So when a bucket is accessed via hostname (https://bucket.TLD/object?query)
> > I only see the object and the query (GET /object?query).
> > When a bucket is accessed via path (https://TLD/bucket/object?query) I can
> > also see the bucket in the log (GET bucket/object?query).
> >
> > On Thu, Mar 30, 2023 at 12:58 Szabo, Istvan (Agoda) <
> > istvan.sz...@agoda.com> wrote:
> >
> > > It has the full url begins with the bucket name in the beast logs http
> > > requests, hasn’t it?
> > >
> > > Istvan Szabo
> > > Staff Infrastructure Engineer
> > > ---
> > > Agoda Services Co., Ltd.
> > > e: istvan.sz...@agoda.com
> > > ---
> > >
> > > On 2023. Mar 30., at 17:44, Boris Behrens  wrote:
> > >
> > > Bringing up that topic again:
> > > is it possible to log the bucket name in the rgw client logs?
> > >
> > > currently I am only able to see the bucket name when someone accesses
> > > the bucket via https://TLD/bucket/object instead of https://bucket.TLD/object.
> > >
> > > On Tue, Jan 3, 2023 at 10:25 Boris Behrens wrote:
> > >
> > > Hi,
> > >
> > > I am looking to move our logs from
> > >
> > > /var/log/ceph/ceph-client...log to our log aggregator.
> > >
> > >
> > > Is there a way to have the bucket name in the log file?
> > >
> > >
> > > Or can I write the rgw_enable_ops_log into a file? Maybe I could work
> with
> > >
> > > this.
> > >
> > >
> > > Cheers and happy new year
> > >
> > > Boris
> > >
> > >
> > >
> > >
> > > --
> > > The "UTF-8-Probleme" self-help group meets this time, as an exception,
> > > in the large hall.
> > > ___
> > > ceph-users mailing list -- ceph-users@ceph.io
> > > To unsubscribe send an email to ceph-users-le...@ceph.io
> > >
> > >
> > >
> >
> >
> > --
> > The "UTF-8-Probleme" self-help group meets this time, as an exception,
> > in the large hall.
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
>


-- 
The "UTF-8-Probleme" self-help group meets this time, as an exception, in
the large hall.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] dashboard ERROR exception

2023-10-30 Thread farhad kh
I use Ceph 17.2.6. When I deploy two separate RGW realms, each with its own
zonegroup and zone, the dashboard enables access for both object gateways,
and I can create users, buckets, and so on. But when I try to create a bucket
in one of the object gateways, I get the error below:


debug 2023-10-29T12:19:50.697+ 7fd203a26700  0 [dashboard ERROR
rest_client] RGW REST API failed PUT req status: 400
debug 2023-10-29T12:19:50.697+ 7fd203a26700  0 [dashboard ERROR
exception] Dashboard Exception
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/dashboard/controllers/rgw.py", line 304, in
create
lock_enabled)
  File "/usr/share/ceph/mgr/dashboard/rest_client.py", line 534, in
func_wrapper
**kwargs)
  File "/usr/share/ceph/mgr/dashboard/services/rgw_client.py", line 563, in
create_bucket
return request(data=data, headers=headers)
  File "/usr/share/ceph/mgr/dashboard/rest_client.py", line 323, in __call__
data, raw_content, headers)
  File "/usr/share/ceph/mgr/dashboard/rest_client.py", line 452, in
do_request
resp.content)
dashboard.rest_client.RequestException: RGW REST API failed request with
status code 400
(b'{"Code":"InvalidLocationConstraint","Message":"The specified
location-constr'
 b'aint is not
valid","BucketName":"farhad2","RequestId":"tx03fa9d80c50a79d'
 b'b6-00653e4de6-285b3-test","HostId":"285b3-test-test"}')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/share/ceph/mgr/dashboard/services/exception.py", line 47, in
dashboard_exception_handler
return handler(*args, **kwargs)
  File "/lib/python3.6/site-packages/cherrypy/_cpdispatch.py", line 54, in
__call__
return self.callable(*self.args, **self.kwargs)
  File "/usr/share/ceph/mgr/dashboard/controllers/_base_controller.py",
line 258, in inner
ret = func(*args, **kwargs)
  File "/usr/share/ceph/mgr/dashboard/controllers/_rest_controller.py",
line 191, in wrapper
return func(*vpath, **params)
  File "/usr/share/ceph/mgr/dashboard/controllers/rgw.py", line 315, in
create
raise DashboardException(e, http_status_code=500, component='rgw')
dashboard.exceptions.DashboardException: RGW REST API failed request with
status code 400
(b'{"Code":"InvalidLocationConstraint","Message":"The specified
location-constr'
 b'aint is not
valid","BucketName":"farhad2","RequestId":"tx03fa9d80c50a79d'
 b'b6-00653e4de6-285b3-test","HostId":"285b3-test-test"}')
debug 2023-10-29T12:19:50.701+ 7fd203a26700  0 [dashboard INFO request]
[192.168.0.1:55833] [POST] [500] [0.031s] [admin] [252.0B] /api/rgw/bucket
debug 2023-10-29T12:19:50.713+ 7fd204a28700  0 [dashboard ERROR
rest_client] RGW REST API failed GET req status: 404
debug 2023-10-29T12:19:50.715+ 7fd204a28700  0 [dashboard ERROR
exception] Dashboard Exception
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/dashboard/controllers/rgw.py", line 145, in
proxy
result = instance.proxy(method, path, params, None)
  File "/usr/share/ceph/mgr/dashboard/services/rgw_client.py", line 513, in
proxy
params, data)
  File "/usr/share/ceph/mgr/dashboard/rest_client.py", line 534, in
func_wrapper
**kwargs)
  File "/usr/share/ceph/mgr/dashboard/services/rgw_client.py", line 507, in
_proxy_request
raw_content=True)
  File "/usr/share/ceph/mgr/dashboard/rest_client.py", line 323, in __call__
data, raw_content, headers)
  File "/usr/share/ceph/mgr/dashboard/rest_client.py", line 452, in
do_request
resp.content)
dashboard.rest_client.RequestException: RGW REST API failed request with
status code 404
(b'{"Code":"NoSuchBucket","RequestId":"tx086cdcd9547b301e2-00653e4de6-285b3'
 b'-test","HostId":"285b3-test-test"}')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/share/ceph/mgr/dashboard/services/exception.py", line 47, in
dashboard_exception_handler
return handler(*args, **kwargs)
  File "/lib/python3.6/site-packages/cherrypy/_cpdispatch.py", line 54, in
__call__
return self.callable(*self.args, **self.kwargs)
  File "/usr/share/ceph/mgr/dashboard/controllers/_base_controller.py",
line 258, in inner
ret = func(*args, **kwargs)
  File "/usr/share/ceph/mgr/dashboard/controllers/_rest_controller.py",
line 191, in wrapper
return func(*vpath, **params)
  File "/usr/share/ceph/mgr/dashboard/controllers/rgw.py", line 275, in get
result = self.proxy(daemon_name, 'GET', 'bucket', {'bucket': bucket})
  File "/usr/share/ceph/mgr/dashboard/controllers/rgw.py", line 151, in
proxy
raise DashboardException(e, http_status_code=http_status_code,
component='rgw')
dashboard.exceptions.DashboardException: RGW REST API failed request with
status code 404
(b'{"Code":"NoSuchBucket","RequestId":"tx086cdcd9547b301e2-00653e4de6-285b3'
 b'-test","HostId":"285b3-test-test"}')
--
my cluster has two realms:
1) rgw-realm = test,
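
For context, the "LocationConstraint" in the first 400 above is the S3
location supplied at bucket creation, which RGW checks against the zonegroup;
as a hedged illustration, the equivalent request outside the dashboard would
be (the endpoint and names are placeholders):

aws --endpoint-url http://rgw.example:8080 s3api create-bucket \
  --bucket farhad2 --create-bucket-configuration LocationConstraint=test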