[ceph-users] Re: monitoring apply_latency / commit_latency ?

2023-03-24 Thread Konstantin Shalygin
Hi Matthias,

The Prometheus exporter already has all of these metrics, so you can set up
Grafana panels as you like.
Also, apply latency is a metric for pre-BlueStore OSDs, i.e. FileStore.
For BlueStore, apply latency is the same as commit latency; you can check this
via the `ceph osd perf` command
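
For example, with the mgr "prometheus" module enabled you can confirm that the
per-OSD latency gauges are exported (a minimal sketch; 9283 is the module's
default port, and the metric/label names below are those of recent releases,
so check the /metrics output on yours):

    # enable the exporter if it is not already on, then query the active mgr host
    ceph mgr module enable prometheus
    curl -s http://localhost:9283/metrics | grep -E 'ceph_osd_(apply|commit)_latency_ms'

In Grafana you can then plot e.g. ceph_osd_commit_latency_ms per ceph_daemon
label to get one series per OSD.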




k

> On 25 Mar 2023, at 00:02, Matthias Ferdinand  wrote:
> 
> Hi,
> 
> I would like to understand how the per-OSD data from "ceph osd perf"
> (i.e. apply_latency, commit_latency) is generated. So far I couldn't
> find documentation on this. "ceph osd perf" output is nice for a quick
> glimpse, but it is not very well suited for graphing. The output values
> apparently come from the most recent 5-second averages.
> 
> With "ceph daemon osd.X perf dump" OTOH, you get quite a lot of latency
> metrics, while it is just not obvious to me how they aggregate into
> apply_latency and commit_latency. Or some comparably easy read latency
> metric (something that is missing completely in "ceph osd perf").
> 
> Can somebody shed some light on this?
> 
> 
> Regards
> Matthias
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-24 Thread Yuri Weinstein
Details of this release are updated here:

https://tracker.ceph.com/issues/59070#note-1
Release Notes - TBD

The slowness we experienced seems to have resolved itself.
Neha, Radek, and Laura, please share any findings if you have them.

Seeking approvals/reviews for:

rados - Neha, Radek, Travis, Ernesto, Adam King (rerun on Build 2 with
PRs merged on top of quincy-release)
rgw - Casey (rerun on Build 2 with PRs merged on top of quincy-release)
fs - Venky

upgrade/octopus-x - Neha, Laura (package issue; Adam Kraitman, any updates?)
upgrade/pacific-x - Neha, Laura, Ilya; see https://tracker.ceph.com/issues/58914
upgrade/quincy-p2p - Neha, Laura
client-upgrade-octopus-quincy-quincy - Neha, Laura (package issue; Adam
Kraitman, any updates?)
powercycle - Brad

Please reply to this email with approval and/or trackers of known
issues/PRs to address them.

Josh, Neha - gibba and LRC upgrades pending major suites approvals.
RC release - pending major suites approvals.

On Tue, Mar 21, 2023 at 1:04 PM Yuri Weinstein  wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/59070#note-1
> Release Notes - TBD
>
> The reruns were in the queue for 4 days because of some slowness issues.
> The core team (Neha, Radek, Laura, and others) are trying to narrow
> down the root cause.
>
> Seeking approvals/reviews for:
>
> rados - Neha, Radek, Travis, Ernesto, Adam King (we still have to test
> and merge at least one PR https://github.com/ceph/ceph/pull/50575 for
> the core)
> rgw - Casey
> fs - Venky (the fs suite has an unusually high number of failed jobs;
> any reason to suspect it in the observed slowness?)
> orch - Adam King
> rbd - Ilya
> krbd - Ilya
> upgrade/octopus-x - Laura is looking into failures
> upgrade/pacific-x - Laura is looking into failures
> upgrade/quincy-p2p - Laura is looking into failures
> client-upgrade-octopus-quincy-quincy - missing packages, Adam Kraitman
> is looking into it
> powercycle - Brad
> ceph-volume - needs a rerun on merged
> https://github.com/ceph/ceph-ansible/pull/7409
>
> Please reply to this email with approval and/or trackers of known
> issues/PRs to address them.
>
> Also, please share any findings or hypotheses about the slowness in the
> execution of the suite.
>
> Josh, Neha - gibba and LRC upgrades pending major suites approvals.
> RC release - pending major suites approvals.
>
> Thx
> YuriW
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] monitoring apply_latency / commit_latency ?

2023-03-24 Thread Matthias Ferdinand
Hi,

I would like to understand how the per-OSD data from "ceph osd perf"
(i.e. apply_latency, commit_latency) is generated. So far I couldn't
find documentation on this. "ceph osd perf" output is nice for a quick
glimpse, but it is not very well suited for graphing. The output values
apparently come from the most recent 5-second averages.

With "ceph daemon osd.X perf dump" OTOH, you get quite a lot of latency
metrics, while it is just not obvious to me how they aggregate into
apply_latency and commit_latency. Or some comparably easy read latency
metric (something that is missing completely in "ceph osd perf").
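
For reference, the kind of derivation I have been experimenting with looks
roughly like this (only a sketch: it assumes jq, access to the OSD's admin
socket, and that the counter is called bluestore.commit_lat on this release;
"ceph daemon osd.0 perf schema" lists the exact names). The avgtime field in a
single dump is an average over the daemon's whole lifetime, so an interval
average has to be derived from two samples:

    # sample the commit latency counter twice, 5 seconds apart
    A=$(ceph daemon osd.0 perf dump | jq '.bluestore.commit_lat')
    sleep 5
    B=$(ceph daemon osd.0 perf dump | jq '.bluestore.commit_lat')
    # average commit latency (in seconds) over that interval
    jq -n --argjson a "$A" --argjson b "$B" \
      'if ($b.avgcount - $a.avgcount) > 0
       then ($b.sum - $a.sum) / ($b.avgcount - $a.avgcount)
       else 0 end'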

Can somebody shed some light on this?


Regards
Matthias
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Generated signurl is accessible from restricted IPs in bucket policy

2023-03-24 Thread Aggelos.Toumasis
Hello Robin,

Thanks a lot for the response! This was my first time posting; I did not get a
notification that my post had been accepted, so I missed your email.
Coming back to your question, the solution was to set up the bucket policies
as described here.


From: Robin H. Johnson 
Date: Friday, 10 February 2023 at 06:57
To: ceph-users@ceph.io 
Subject: [ceph-users] Re: Generated signurl is accessible from restricted IPs 
in bucket policy
On Wed, Feb 08, 2023 at 03:07:20PM -, Aggelos Toumasis wrote:
> Hi there,
>
> We noticed after creating a signurl that the bucket resources were
> accessible from IPs that were originally restricted from accessing
> them (using a bucket policy).  Using the s3cmd utility we confirmed
> that the policy is correctly applied and the bucket can be accessed only
> from the allowed IPs.
>
> Is this expected behavior, or are we missing something?
Can you share the bucket policy?

Also, are you using a reverse proxy in front of RGW? If so, are both the
proxy and RGW configured with the correct headers so that they agree on
the actual source IP?

IIRC, depending on how the policy is written, you can end up with either of
(see the sketch below):
- presigned URL || IP-check
- presigned URL && IP-check
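
For instance (a sketch only; the bucket name and CIDR are placeholders), a
policy written as an explicit Deny with a NotIpAddress condition gives you the
second behaviour, because the Deny also matches requests made with an
otherwise valid presigned URL:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyOutsideAllowedRange",
          "Effect": "Deny",
          "Principal": {"AWS": ["*"]},
          "Action": "s3:*",
          "Resource": [
            "arn:aws:s3:::examplebucket",
            "arn:aws:s3:::examplebucket/*"
          ],
          "Condition": {
            "NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}
          }
        }
      ]
    }

applied with e.g. `s3cmd setpolicy policy.json s3://examplebucket`.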

--
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: EC profiles where m>k (EC 8+12)

2023-03-24 Thread Anthony D'Atri
A custom CRUSH rule can have two steps to enforce that.
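
A sketch of the idea (the rule id, the "default" root and the "room" bucket
type are assumptions about the CRUSH tree):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

then append a rule along these lines to crushmap.txt:

    rule ec_8_12_two_rooms {
        id 99
        type erasure
        step set_chooseleaf_tries 5
        step take default
        step choose indep 2 type room        # first step: pick both rooms
        step chooseleaf indep 10 type host   # second step: 10 distinct hosts per room
        step emit
    }

and recompile and inject it:

    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

With 12 hosts per room this caps each room at 10 of the 20 chunks, so losing a
room still leaves at least k=8 plus two spare chunks.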

> On Mar 24, 2023, at 11:04, Danny Webb  wrote:
> 
> The question I have regarding this setup is: how can you guarantee that the
> 12 m chunks will be distributed evenly across the two rooms?  What would happen
> if by chance all 12 chunks ended up in room B?  Usually you use failure domains
> to control the distribution of chunks across domains, but you can't do that
> here, as you are using host as the failure domain while also needing room to
> be taken into account somehow.
> 
> From: Fabien Sirjean 
> Sent: 24 March 2023 12:00
> To: ceph-users 
> Subject: [ceph-users] EC profiles where m>k (EC 8+12)
> 
> Hi Ceph users!
> 
> An interesting EC setup has been proposed to me that I hadn't thought about before.
> 
> The scenario is: we have two server rooms and we want to store ~4 PiB with the
> ability to lose one server room without losing data or RW availability.
> 
> For context, performance is not critical (mostly cold storage, used as
> a big filesystem).
> 
> The idea is to use EC 8+12 over 24 servers (12 in each server room), so
> if we lose one room we still have half of the EC chunks (10/20) and can
> lose 2 more servers before reaching the point where we lose data.
> 
> I find this pretty elegant in a two-site context, as the
> efficiency is 40% (better than the 33% of three-way replication) and the
> redundancy is good.
> 
> What do you think of this setup? Have you ever used EC profiles with m > k?
> 
> Thanks for sharing your thoughts!
> 
> Cheers,
> 
> Fabien
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
> 
> 
> Danny Webb
> Principal OpenStack Engineer
> danny.w...@thehutgroup.com
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: EC profiles where m>k (EC 8+12)

2023-03-24 Thread Danny Webb
The question I have regarding this setup is: how can you guarantee that the 12
m chunks will be distributed evenly across the two rooms?  What would happen if
by chance all 12 chunks ended up in room B?  Usually you use failure domains to
control the distribution of chunks across domains, but you can't do that here,
as you are using host as the failure domain while also needing room to be
taken into account somehow.

From: Fabien Sirjean 
Sent: 24 March 2023 12:00
To: ceph-users 
Subject: [ceph-users] EC profiles where m>k (EC 8+12)

Hi Ceph users!

An interesting EC setup has been proposed to me that I hadn't thought about before.

The scenario is: we have two server rooms and we want to store ~4 PiB with the
ability to lose one server room without losing data or RW availability.

For context, performance is not critical (mostly cold storage, used as
a big filesystem).

The idea is to use EC 8+12 over 24 servers (12 in each server room), so
if we lose one room we still have half of the EC chunks (10/20) and can
lose 2 more servers before reaching the point where we lose data.

I find this pretty elegant in a two-site context, as the
efficiency is 40% (better than the 33% of three-way replication) and the
redundancy is good.

What do you think of this setup? Have you ever used EC profiles with m > k?

Thanks for sharing your thoughts!

Cheers,

Fabien
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


Danny Webb
Principal OpenStack Engineer
danny.w...@thehutgroup.com
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: With Ceph Quincy, the "ceph" package does not include ceph-volume anymore

2023-03-24 Thread Anthony D'Atri

>> 
>> I would be surprised if I installed "ceph-osd" and didn't get
>> ceph-volume, but I never thought about if "only" installing "ceph" did
>> or did not provide it.

I (perhaps naively) think of `ceph` as just the CLI, usually installing
`ceph-common` too.

>> 
>> (from one of the remaining non-container ceph users)

More out there than we realize

>> 
> 
> Ideally you want your orchestrator to supply volumes to the containers. I
> guess cephadm is slowly evolving toward this concept.
> And on newer distributions I have noticed the feature where, when you type
> a non-existent command, the OS offers to install it. That looks intended
> to me.

Ew, David!  I would disable that if I came across it.  Too much risk of:

* Divergent versions
* Getting the package from the wrong repository
* Filesystem fillage
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: EC profiles where m>k (EC 8+12)

2023-03-24 Thread Fabien Sirjean

Hi, thanks for your reply!

Stretch mode is obviously useful with small pools, but with its size of
4 that is only 25% efficiency and we can't afford it (buying 16 PiB raw for
4 PiB net is quite hard to justify to budget holders...).


Good to hear that you have used such an EC setup in production; thanks for sharing!

Cheers,

F.


On 3/24/23 13:11, Eugen Block wrote:

Hi,

we have multiple customers with such profiles, for example one with k=7
and m=11 for a two-site cluster (20 nodes in total). The customer is pretty
happy with the resiliency because they actually had multiple outages
of one DC and everything kept working fine. Although there's also
stretch mode (which I haven't tested properly yet), I can encourage
you to use such a profile. Just be advised to test your CRUSH rule
properly. ;-)


Regards,
Eugen

Zitat von Fabien Sirjean :


Hi Ceph users!

An interesting EC setup has been proposed to me that I hadn't thought
about before.


The scenario is: we have two server rooms and we want to store ~4 PiB
with the ability to lose one server room without losing data or RW
availability.


For context, performance is not critical (mostly cold storage, used
as a big filesystem).


The idea is to use EC 8+12 over 24 servers (12 in each server room),
so if we lose one room we still have half of the EC chunks (10/20) and
can lose 2 more servers before reaching the point where we lose data.


I find this pretty elegant in a two-site context, as the
efficiency is 40% (better than the 33% of three-way replication) and
the redundancy is good.


What do you think of this setup? Have you ever used EC profiles with
m > k?


Thanks for sharing your thoughts!

Cheers,

Fabien
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: EC profiles where m>k (EC 8+12)

2023-03-24 Thread Eugen Block

Hi,

we have multiple customers with such profiles, for example one with k=7
and m=11 for a two-site cluster (20 nodes in total). The customer is pretty
happy with the resiliency because they actually had multiple outages
of one DC and everything kept working fine. Although there's also
stretch mode (which I haven't tested properly yet), I can encourage
you to use such a profile. Just be advised to test your CRUSH rule
properly. ;-)
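
For example, something along these lines (the rule id and the 20-chunk count
are assumptions matching the 8+12 profile discussed here):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -i crushmap.bin --test --rule 99 --num-rep 20 --show-mappings | head
    # should print nothing if every PG gets a full set of 20 OSDs
    crushtool -i crushmap.bin --test --rule 99 --num-rep 20 --show-bad-mappings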


Regards,
Eugen

Zitat von Fabien Sirjean :


Hi Ceph users!

An interesting EC setup has been proposed to me that I hadn't thought about before.

The scenario is: we have two server rooms and we want to store ~4 PiB with
the ability to lose one server room without losing data or RW
availability.


For context, performance is not critical (mostly cold storage,
used as a big filesystem).


The idea is to use EC 8+12 over 24 servers (12 in each server room),
so if we lose one room we still have half of the EC chunks (10/20) and
can lose 2 more servers before reaching the point where we lose
data.


I find this pretty elegant in a two-site context, as the
efficiency is 40% (better than the 33% of three-way replication) and the
redundancy is good.


What do you think of this setup? Have you ever used EC profiles with m > k?

Thanks for sharing your thoughts!

Cheers,

Fabien
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] EC profiles where m>k (EC 8+12)

2023-03-24 Thread Fabien Sirjean

Hi Ceph users!

An interesting EC setup has been proposed to me that I hadn't thought about before.

The scenario is: we have two server rooms and we want to store ~4 PiB with the
ability to lose one server room without losing data or RW availability.


For context, performance is not critical (mostly cold storage, used as
a big filesystem).


The idea is to use EC 8+12 over 24 servers (12 in each server room), so
if we lose one room we still have half of the EC chunks (10/20) and can
lose 2 more servers before reaching the point where we lose data.
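
For concreteness, the profile would be created roughly like this (a sketch:
the pool name and PG count are placeholders, and the rule that actually pins
10 chunks to each room would still have to be a custom CRUSH rule on top):

    ceph osd erasure-code-profile set ec-8-12 k=8 m=12 crush-failure-domain=host
    ceph osd pool create bigfs_data 2048 2048 erasure ec-8-12
    # needed if this is used directly as a CephFS data pool
    ceph osd pool set bigfs_data allow_ec_overwrites true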


I find this pretty elegant in a two-site context, as the
efficiency is 40% (better than the 33% of three-way replication) and the
redundancy is good.


What do you think of this setup? Have you ever used EC profiles with m > k?

Thanks for sharing your thoughts!

Cheers,

Fabien
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph Mgr/Dashboard Python depedencies: a new approach

2023-03-24 Thread Ernesto Puerta
Hi Casey,

The original idea was to leave this to Reef alone, but given that the
CentOS 9 Quincy release is also blocked by missing Python packages, I think
that it'd make sense to backport it.

I'm coordinating with Pere (in CC) to expedite this. We may need help to
troubleshoot Shaman/rpmbuild issues. Who would be the best one to help with
that?

Regarding your last question, I don't know who's the maintainer of those
packages in EPEL. There's this BZ (https://bugzilla.redhat.com/2166620)
requesting that specific package, but that's only one of the dozen or so
missing packages (plus transitive dependencies)...

Kind Regards,
Ernesto


On Thu, Mar 23, 2023 at 2:19 PM Casey Bodley  wrote:

> hi Ernesto and lists,
>
> > [1] https://github.com/ceph/ceph/pull/47501
>
> are we planning to backport this to quincy so we can support centos 9
> there? enabling that upgrade path on centos 9 was one of the
> conditions for dropping centos 8 support in reef, which i'm still keen
> to do
>
> if not, can we find another resolution to
> https://tracker.ceph.com/issues/58832? as i understand it, all of
> those python packages exist in centos 8. do we know why they were
> dropped for centos 9? have we looked into making those available in
> epel? (cc Ken and Kaleb)
>
> On Fri, Sep 2, 2022 at 12:01 PM Ernesto Puerta 
> wrote:
> >
> > Hi Kevin,
> >
> >>
> >> Isn't this one of the reasons containers were pushed, so that the
> packaging isn't as big a deal?
> >
> >
> > Yes, but the Ceph community has a strong commitment to provide distro
> packages for those users who are not interested in moving to containers.
> >
> >> Is it the continued push to support lots of distros without using
> containers that is the problem?
> >
> >
> > If not a problem, it definitely makes it more challenging. Compiled
> components often sort this out by statically linking deps whose packages
> are not widely available in distros. The approach we're proposing here
> would be the closest equivalent to static linking for interpreted code
> (bundling).
> >
> > Thanks for sharing your questions!
> >
> > Kind regards,
> > Ernesto
> > ___
> > Dev mailing list -- d...@ceph.io
> > To unsubscribe send an email to dev-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: With Ceph Quincy, the "ceph" package does not include ceph-volume anymore

2023-03-24 Thread Geert Kloosterman
On Fri, 2023-03-24 at 08:43 +0100, Janne Johansson wrote:
> 
> Den tors 23 mars 2023 kl 15:18 skrev Geert Kloosterman
> :
> > Hi all,
> > Until Ceph Pacific, installing just the "ceph" package was enough
> > to get everything needed to deploy Ceph.
> > However, with Quincy, ceph-volume was split off into its own
> > package, and it is not automatically installed anymore.
> > 
> > Should I file a bug for this?
> 
> I would be surprised if I installed "ceph-osd" and didn't get
> ceph-volume, but I never thought about if "only" installing "ceph"
> did
> or did not provide it.
> 

In the following commit, "ceph-volume" was split off from "ceph-osd":

https://github.com/ceph/ceph/commit/8e0e9ef382c5749954eb416107e4a7d22f92d41c

There is no dependency from "ceph-osd" on "ceph-volume". Adding this
dependency to "ceph-osd" would prevent your surprise and also
indirectly solve things for the "ceph" package.
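
Until such a dependency lands, the workaround is simply to install the package
explicitly (package names as shipped in the upstream Quincy repositories):

    apt-get install ceph ceph-volume     # Debian/Ubuntu
    dnf install ceph ceph-volume         # RHEL/CentOS family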


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: With Ceph Quincy, the "ceph" package does not include ceph-volume anymore

2023-03-24 Thread Marc
> > Until Ceph Pacific, installing just the "ceph" package was enough to
> > get everything needed to deploy Ceph.
> > However, with Quincy, ceph-volume was split off into its own package,
> > and it is not automatically installed anymore.
> >
> > Should I file a bug for this?
> 
> I would be surprised if I installed "ceph-osd" and didn't get
> ceph-volume, but I never thought about if "only" installing "ceph" did
> or did not provide it.
> 
> (from one of the remaining non-container ceph users)
> 

Ideally you want your orchestrator to supply volumes to the containers. I guess
cephadm is slowly evolving toward this concept.
And on newer distributions I have noticed the feature where, when you type a
non-existent command, the OS offers to install it. That looks intended to me.


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: With Ceph Quincy, the "ceph" package does not include ceph-volume anymore

2023-03-24 Thread Janne Johansson
Den tors 23 mars 2023 kl 15:18 skrev Geert Kloosterman
:
> Hi all,
> Until Ceph Pacific, installing just the "ceph" package was enough to get 
> everything needed to deploy Ceph.
> However, with Quincy, ceph-volume was split off into its own package, and it 
> is not automatically installed anymore.
>
> Should I file a bug for this?

I would be surprised if I installed "ceph-osd" and didn't get
ceph-volume, but I never thought about if "only" installing "ceph" did
or did not provide it.

(from one of the remaining non-container ceph users)

-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io