Hi,
if your failure domain is "host" and you have enough redundancy (e.g.
replicated size 3 or proper erasure-code profiles and rulesets) you
should be able to reboot without any issue. Depending on how long the
reboot takes, you could set the noout flag; by default, down OSDs are
marked out after 10 minutes.
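For example, a minimal sketch with the stock ceph CLI (the 10-minute
default comes from mon_osd_down_out_interval = 600):

# ceph osd set noout      # keep down OSDs from being marked out during the reboot
# systemctl reboot        # reboot the host
# ceph osd unset noout    # re-enable the automatic mark-out afterwards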
I will look into the bug that you submitted.
Thanks,
Pritha
On Thu, Mar 2, 2023 at 3:46 AM wrote:
> Hello,
>
> I just submitted: https://tracker.ceph.com/issues/58890
>
> Here are more details about the configuration. Note that I've tried a URL
> with and without a trailing `/` slash like what
The latest version of Quincy seems to have problems cleaning up multipart
fragments from canceled uploads.
The bucket is empty:
% s3cmd -c .s3cfg ls s3://warp-benchmark
%
However, it still holds 11 TB of data and 700k objects:
# radosgw-admin bucket stats --bucket=warp-benchmark
{
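One way to inspect and clean up the stale parts by hand is sketched below
(s3cmd's multipart subcommands are assumed available in this build, and
OBJECT / UPLOAD_ID are placeholders):

% s3cmd -c .s3cfg multipart s3://warp-benchmark                 # list in-progress multipart uploads
% s3cmd -c .s3cfg abortmp s3://warp-benchmark/OBJECT UPLOAD_ID  # abort one of them
# radosgw-admin bucket check --bucket=warp-benchmark --check-objects --fix

Whether that last, server-side check actually reclaims the space here is
the open question.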
Hello,
I just submitted: https://tracker.ceph.com/issues/58890
Here are more details about the configuration. Note that I've tried a URL with
and without a trailing `/` slash, like what appears in the `iss` claim.
STS OpenIDConnectProvider
{
"ClientIDList": [
"radosgw"
],
"CreateDate":
> By the sounds of it, a cluster may be configured for the 100 PG / OSD target;
> adding pools to the former configuration scenario will require an increase in
> OSDs to maintain that recommended PG distribution target and accommodate an
> increase of PGs resulting from additional pools.
Thank you for this perspective, Anthony.
I was honestly hoping the autoscaler would work in my case; however, I had
less-than-desired results with it. On 17.2.5 it actually failed to scale
as advertised. I had a pool created via the web console, with 1 PG, then
kicked off a job to migrate data.
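For reference, these are the knobs I was using (standard ceph CLI; the
pool name is a placeholder):

% ceph osd pool autoscale-status                  # current vs. target PG counts per pool
% ceph osd pool set <pool> pg_autoscale_mode on   # or "warn" / "off"
% ceph osd pool set <pool> pg_num 128             # manual override when the autoscaler stalls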
I have a Nautilus cluster with 7 nodes and 210 HDDs. I recently added the 7th
node with 30 OSDs, which are currently rebalancing very slowly. I just noticed
that the Ethernet interface only negotiated a 1Gb connection, even though it
has a 10Gb interface. I’m not sure why, but would like to
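A quick first check would be the negotiated speed, and then forcing a
renegotiation (interface name eth0 is a placeholder):

# ethtool eth0 | grep -E 'Speed|Duplex|Auto-negotiation'   # confirm what was negotiated
# ethtool -s eth0 autoneg on speed 10000 duplex full       # try renegotiating at 10Gb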
Also, here is the ceph version: ceph version 17.2.5
(e04241aa9b639588fa6c864845287d2824cb6b55) quincy (stable)
Hi,
What version of ceph are you using? Can you share the trust policy that is
attached to the role being assumed?
Thanks,
Pritha
On Wed, Mar 1, 2023 at 9:07 PM wrote:
> I've set up RadosGW with STS on top of my ceph cluster. It works fine,
> but I'm also trying to set up authentication
I've set up RadosGW with STS on top of my ceph cluster. It works fine, but
I'm also trying to set up authentication with an OpenID Connect provider. I'm
having a hard time troubleshooting issues because the radosgw log file doesn't
have much information in it. For example, when I try to use
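One standard way to get more detail out of the radosgw log is to raise the
debug levels (stock Ceph config options; 20 is the most verbose):

# ceph config set client.rgw debug_rgw 20   # verbose RGW request logging
# ceph config set client.rgw debug_ms 1     # also log messenger traffic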
As sepia stabilizes we want to plan for this release ASAP.
If you still have outstanding PRs, please tag them "needs-qa" for the
'quincy' milestone and make sure that they are approved and passing checks.
Thx
On Fri, Feb 17, 2023 at 8:24 AM Yuri Weinstein wrote:
>
> Hello
>
> We are planning to start QE
We're actually writing this for RGW right now. It'll be a bit before
it's productized, but it's in the works.
Daniel
On 2/28/23 14:13, Fox, Kevin M wrote:
MinIO no longer lets you read / write from the POSIX side, only through MinIO
itself. :(
Haven't found a replacement yet. If you do,
Hi,
do you know if your crush tree already had the "shadow" tree (probably
not)? If there wasn't a shadow tree ("default~hdd"), then the remapping
is expected. What exact version did you install this cluster with?
storage01:~ # ceph osd crush tree --show-shadow
ID CLASS WEIGHT TYPE NAME
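If the shadow tree is indeed missing and you want to introduce device
classes without mass remapping, the documented crushtool reclassify
workflow is roughly (file names illustrative):

# ceph osd getcrushmap -o original.bin
# crushtool -i original.bin --reclassify --reclassify-root default hdd -o adjusted.bin
# crushtool -i adjusted.bin --compare original.bin   # verify mappings stay (mostly) identical
# ceph osd setcrushmap -i adjusted.bin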