Hi,
Has something changed with 'rbd diff' in Octopus, or have I hit a bug? I am no
longer able to obtain the list of objects that have changed between two
snapshots of an image; it always lists all allocated regions of the RBD image.
This behaviour, however, only occurs when I add the '--whole-object'
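For reference, this is the shape of the command I'm running (pool, image, and snapshot names are placeholders, and it obviously needs a live cluster):

```
# Changed regions between two snapshots of an image.
# Without --whole-object the output is byte-granular extents; with it,
# any object containing a change should be reported as a whole object,
# not as every allocated region of the image.
rbd diff --from-snap snap1 rbd/myimage@snap2 --whole-object --format json
```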
Reed Dier writes:
> I don't have a solution to offer, but I've seen this for years with no
> solution.
> Any time a MGR bounces, be it for upgrades, or a new daemon coming online,
> etc, I'll see a scale spike like is reported below.
Interesting to read that we are not the only ones.
> Just out of curiosity, which MGR plugins are you using?
I was able to figure out the solution with this rule:
step take default
step choose indep 0 type host
step chooseleaf indep 1 type osd
step emit
step take default
step choose indep 0 type host
step chooseleaf indep 1 type osd
step emit
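Spelled out with a rule header for context (the rule name and id here are made up), the idea as I understand it is that each take…emit pass selects distinct hosts and places one chunk per host, so running the pass twice means no single host ends up with more than two chunks:

```
rule ec_two_per_host {
    id 2                               # assumed id
    type erasure
    step take default
    step choose indep 0 type host      # pass 1: distinct hosts, one chunk each
    step chooseleaf indep 1 type osd
    step emit
    step take default                  # pass 2: same selection again, so any
    step choose indep 0 type host      # single host holds at most two chunks
    step chooseleaf indep 1 type osd
    step emit
}
```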
I'm trying to figure out a CRUSH rule that will spread data out across my
cluster as much as possible, but not more than 2 chunks per host.
If I use the default rule with an osd failure domain like this:
step take default
step choose indep 0 type osd
step emit
I get clustering of 3-4 chunks on
Hi all,
I wanted to provide an RCA for the outage you may have been affected by
yesterday. Some services that went down:
- All CI/testing
- quay.ceph.io
- telemetry.ceph.com (your cluster may have gone into HEALTH_WARN if you report
telemetry data)
- lists.ceph.io (so all mailing lists)
All o
> -Original Message-
> From: Eugen Block [mailto:ebl...@nde.ag]
> Sent: Tuesday, May 11, 2021 11:39 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: monitor connection error
>
> Hi,
>
> > What is this error trying to tell me? TIA
>
> it tells you that the cluster is not reachable
I don't have a solution to offer, but I've seen this for years with no solution.
Any time a MGR bounces, be it for upgrades, or a new daemon coming online, etc,
I'll see a scale spike like is reported below.
Just out of curiosity, which MGR plugins are you using?
I have historically used the infl
Hi Patrick,
Thanks for getting back to me. Looks like I found the issue. It's due to the
fact that I thought I had increased the max_file_size on Ceph to 20 TB; it
turns out I missed a zero and set it to 1.89 TB.
I had originally tried to fallocate the space for the 8 TB volume, which kept
erroring
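As a sanity check on the order-of-magnitude mistake, here is the arithmetic (the byte values are my assumption of what was entered; `ceph fs set <fsname> max_file_size <bytes>` is the knob in question):

```python
TB = 10**12   # decimal terabyte
TIB = 2**40   # binary tebibyte

intended = 20 * TB        # the limit I meant to set: 20 TB
actual = intended // 10   # one zero short

print(intended, actual)   # 20000000000000 2000000000000
print(round(actual / TIB, 2))  # ~1.82 TiB, i.e. well under the 8 TB volume
```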
Hi everyone,
Today is the last day to get your proposal in for the Ceph June Month
event! The types of talks include:
* Lightning talk - 5 minutes
* Presentation - 20 minutes with Q&A
* Unconference (BoF) - 40 minutes
We will be confirming with speakers for the date/time by May 16th.
https://ce
The federated user will be allowed to perform only those s3 actions that
are explicitly allowed by the role's permission policy. The permission
policy is there so someone can exercise finer-grained control over which s3
actions are allowed and which are not, hence it differs from what regular users
are
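To make that concrete, a minimal permission policy of that shape might look like the following (the bucket name and action list are placeholders, not taken from this thread):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```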
Hi,
Can you try with the following ARN:
arn:aws:iam:::user/oidc$7f71c7c5-c24f-418e-87ac-aa8fe271289b
The format of the user id is <tenant>$<user-id>, and in
$oidc$7f71c7c5-c24f-418e-87ac-aa8fe271289b, the '$' before oidc is a
separator for a tenant, which is empty here, and the ARN for a user is of the
format: arn
Hi,
I just deployed a test cluster to try that out, too. I only deployed
three MONs, but this should also apply.
I tried to create the third datacenter and put the tiebreaker there but got
the following error:
root@ceph-node-01:/home/cloud
Hi
I have started to see segfaults during multipart upload to one of the
buckets.
The file is about 60 MB in size.
Upload of the same file to a brand-new bucket works OK.
Command used:
aws --profile=tester --endpoint=$HOST_S3_API --region="" s3 cp
./pack-a9201afb4682b74c7c5a5d6070e661662bdfea1a.pack
s3://
Hi all
Scenario is as follows:
Federated user assumes a role via AssumeRoleWithWebIdentity, which gives
permission to create a bucket.
User creates a bucket and becomes an owner (this is visible in Ceph's web
UI as Owner $oidc$7f71c7c5-c24f-418e-87ac-aa8fe271289b).
User cannot list the content of t
Hi all
I'm working on the following scenario
User is authenticated with OIDC and tries to access a bucket which it does
not own.
How to specify user ID etc. to give access to such a user?
By trial and error I found out that principal can be specified as
"Principal": {"Federated":["arn:aws:sts:::a