[ceph-users] Re: v18.2.4 Reef released - blog release note missing issue 61948

2024-07-24 Thread Robin H. Johnson
Hi Yuri et al,

The email announcement includes the fix for 61948, but the linked blog
page omits it entirely.

Suggest adding that note to the blog page.

The language used also differs slightly between the two announcements.

email:
> We're happy to announce the 4th release in the Reef series.
blog:
| This is the third backport release in the Reef series. We recommend
| that all users update to this release.



On Wed, Jul 24, 2024 at 02:12:25PM -0700, Yuri Weinstein wrote:
> We're happy to announce the 4th release in the Reef series.
...
> Notable Changes
> ---
> * RADOS: This release fixes a bug (https://tracker.ceph.com/issues/61948) where
>   pre-reef clients were allowed to connect to the `pg-upmap-primary`
>   (https://docs.ceph.com/en/reef/rados/operations/read-balancer/)
>   interface despite users having set `require-min-compat-client=reef`,
>   leading to an assert in the osds and mons. You are susceptible to this
>   bug in reef versions prior to 18.2.3 if 1) you are using an osdmap
>   generated via the offline osdmaptool with the `--read` option or 2)
>   you have explicitly generated pg-upmap-primary mappings with the CLI
>   command. Please note that the fix is minimal and does not address corner
>   cases such as adding a mapping in the middle of an upgrade or in a partially
>   upgraded cluster (related trackers linked in
>   https://tracker.ceph.com/issues/61948).
>   As such, we recommend removing any existing pg-upmap-primary mappings
>   until remaining issues are addressed in future point releases.
>   See https://tracker.ceph.com/issues/61948#note-32 for instructions
>   on how to remove existing pg-upmap-primary mappings.
This is the missing item on the webpage.
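
For anyone finding this via the archives: the removal it refers to boils
down to roughly the following (the tracker note above is the
authoritative procedure; the pgid here is made up):

  ceph osd dump | grep pg_upmap_primary    # list any existing mappings
  ceph osd rm-pg-upmap-primary 2.7         # then remove them one pgid at a time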

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: Repurposing some Dell R750s for Ceph

2024-07-11 Thread Robin H. Johnson
On Thu, Jul 11, 2024 at 01:16:22PM +, Drew Weaver wrote:
> Hello,
> 
> We would like to repurpose some Dell PowerEdge R750s for a Ceph cluster.
> 
> Currently the servers have one H755N RAID controller for each 8 drives. (2 
> total)
The N variant of H755N specifically? So you have 16 NVME drives in each
server?

> I have been asking their technical support what needs to happen in
> order for us to just rip out those raid controllers and cable the
> backplane directly to the motherboard/PCIe lanes and they haven't been
> super enthusiastic about helping me. I get it just buy another 50
> servers, right? No big deal.
I don't think the motherboard has enough PCIe lanes to natively connect
all the drives: the RAID controller effectively functioned as an
expander, so you needed fewer PCIe lanes on the motherboard.

As the quickest way forward: look for passthrough / single-disk / RAID0
options, in that order, in the controller management tools (perccli etc).

I haven't used the N variant at all, and since it's NVMe presented as
SCSI/SAS, I wouldn't trust reflashing the controller to IT (passthrough)
mode as a solution.
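
If you do want to try the passthrough route, it's something along these
lines on recent perccli builds (untested by me on the H755N, the exact
keywords vary by controller generation, and the enclosure/slot numbers
are placeholders taken from the 'show' output):

  perccli64 /c0 show                            # confirm controller + current personality
  perccli64 /c0/e32/s0 set jbod                 # per-drive passthrough, if the firmware allows it
  perccli64 /c0 add vd type=raid0 drives=32:0   # fallback: one single-drive RAID0 per disk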

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: RGW: Cannot write to bucket anymore

2024-03-21 Thread Robin H. Johnson
On Thu, Mar 21, 2024 at 11:20:44AM +0100, Malte Stroem wrote:
> Hello Robin,
> 
> thanks a lot.
> 
> Yes, I set debug to debug_rgw=20 & debug_ms=1.
> 
> It's that 403 I always get.
> 
> There is no versioning enabled.
> 
> There is a lifecycle policy for removing the files after one day.
Did the object stat call return anything?

Can you show more of the debug output (redact the keys/hostname/filename)?
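
If it's awkward to restart with those options in ceph.conf, you can
usually bump them on the running daemon instead (the admin socket path
depends on your deployment):

  ceph daemon /var/run/ceph/ceph-client.rgw.NAME.asok config set debug_rgw 20/20
  ceph daemon /var/run/ceph/ceph-client.rgw.NAME.asok config set debug_ms 1/1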

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: RGW: Cannot write to bucket anymore

2024-03-19 Thread Robin H. Johnson
On Tue, Mar 19, 2024 at 01:19:34PM +0100, Malte Stroem wrote:
> I checked the policies, lifecycle and versioning.
> 
> Nothing. The user has FULL_CONTROL. Same settings for the user's other 
> buckets he can still write to.
> 
> Wenn setting debugging to higher numbers all I can see is something like 
> this while trying to write to the bucket:
Did you get to debug_rgw=20 & debug_ms=1?
> 
> s3:put_obj reading permissions 
>  
> 
> s3:put_obj init op
> s3:put_obj verifying op mask
> s3:put_obj verifying op permissions
> op->ERRORHANDLER: err_no=-13 new_err_no=-13
> cache get: name=default.rgw.log++script.postrequest. : hit (negative entry)
> s3:put_obj op status=0
> s3:put_obj http status=403
> 1 == req done req=0x7fe8bb60a710 op status=0 http_status=403 
> latency=0.0s ==
Does an object of the same name exist, possibly versioned, somehow owned
by a different user?

`radosgw-admin object stat --bucket=... --object=...`

IIRC there would be specific messages saying it was denied by policy,
but I haven't checked that part of the codebase in some time.

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: RGW core dump at start

2024-02-10 Thread Robin H. Johnson
On Sat, Feb 10, 2024 at 10:05:02AM -0500, Vladimir Sigunov wrote:
> Hello Community!
> I would appreciate any help/suggestions with the massive RGWs outage we are
> facing.
> The cluster's overall status is acceptable (HEALTH_WARN because of some pgs
> not scrubbed in time), and the cluster is operational.
> However, all RGWs fail to start with a core dump.
> The only issue I see at the moment is the RGW GC queue (radosgs-admin gc
> list) that contains 600K records.
> I believe this could be the root cause of the issue. When I pause OSD iops
> (ceph osd pause), all RGWs starting with no issues.
> There are no large OMAPs or any other warnings in ceph -s output.

To get you going for the moment, how about disabling the GC threads in
the RGW daemon and then processing GC asynchronously?

Add "rgw_enable_gc_threads=0" to ceph.conf.

After that, to see why you get the dump, start up a separate RGW
instance with debug logging enabled.
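
Roughly, as a sketch (the section name depends on how your RGWs are
deployed):

  # ceph.conf on the RGW hosts, e.g. under [client.rgw.NAME]:
  #   rgw_enable_gc_threads = false
  # then restart the RGWs and work the backlog down out-of-band:
  radosgw-admin gc list --include-all | head   # sanity-check the queue size
  radosgw-admin gc process                     # drain GC in the foreground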

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136


[ceph-users] Re: CompleteMultipartUpload takes a long time to finish

2024-02-05 Thread Robin H. Johnson
On Mon, Feb 05, 2024 at 07:51:34PM +0100, Ondřej Kukla wrote:
> Hello,
> 
> For some time now I’m struggling with the time it takes to 
> CompleteMultipartUpload on one of my rgw clusters.
> 
> I have a customer with ~8M objects in one bucket uploading quite a large 
> files. From 100GB to like 800GB.
> 
> 
> I’ve noticed when they are uploading ~200GB files that the requests started 
> timeouting on a LB we have infront of the rgw.
> 
> When I’ve started going through the logs I’ve noticed that the 
> CompleteMultipartUpload request took like 700s to finish. Which seemed 
> ok-ish, but the number seem quite large.
> 
> However, when they started uploading 750GB files the time to complete the 
> multipart upload ended around 2500s -> more than 40minutes which seems like a 
> way to much.
> 
> 
> Do you have a similar experience? Is there anything we can do to improve 
> this? How much time does the CompleteMultipartUpload takes on your clusters?
> 
> The cluster is running on version 17.2.6.
How many incomplete MPUs and MPU parts exist in that bucket?

Is it 8M objects based on ListObjects, or the number of objects reported
by radosgw-admin bucket stats?

If there are LOTS of incomplete objects, that can cause extreme cases in
the listing used by both ListObjects & CompleteMultipartUpload.

This is esp. likely if the huge files used many tiny parts (some
tooling defaults to 5MB parts).

An easy way to test this is to ask them to do a single 200GB upload to a
brand new bucket (with no objects).

If *that* case is fast, then it's something about the index entries in
the existing bucket; likely a high proportion of incomplete parts.

CompleteMultipartUpload for 50GB => single-digit seconds.
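
A rough way to get those numbers (bucket name made up; point awscli at
your RGW endpoint):

  aws s3api list-multipart-uploads --bucket big-bucket    # count the Uploads entries
  radosgw-admin bucket stats --bucket=big-bucket          # rgw.multimeta roughly tracks MPU metadata

If the counts are large, an AbortIncompleteMultipartUpload lifecycle rule
is the usual way to keep them under control.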

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: recommendation for barebones server with 8-12 direct attach NVMe?

2024-01-15 Thread Robin H. Johnson
On Mon, Jan 15, 2024 at 03:21:11PM +, Drew Weaver wrote:
> Oh, well what I was going to do wAs just use SATA HBAs on PowerEdge R740s 
> because we don't really care about performance as this is just used as a copy 
> point for backups/archival but the current Ceph cluster we have [Which is 
> based on HDDs attached to Dell RAID controllers with each disk in RAID-0 and 
> works just fine for us] is on EL7 and that is going to be EOL soon. So I 
> thought it might be better on the new cluster to use HBAs instead of having 
> the OSDs just be single disk RAID-0 volumes because I am pretty sure that's 
> the least good scenario whether or not it has been working for us for like 8 
> years now.
> 
> So I asked on the list for recommendations and also read on the website and 
> it really sounds like the only "right way" to run Ceph is by directly 
> attaching disks to a motherboard. I had thought that HBAs were okay before 
> but I am probably confusing that with ZFS/BSD or some other equally 
> hyperspecific requirement. The other note was about how using NVMe seems to 
> be the only right way now too.
> 
> I would've rather just stuck to SATA but I figured if I was going to have to 
> buy all new servers that direct attach the SATA ports right off the 
> motherboards to a backplane I may as well do it with NVMe (even though the 
> price of the media will be a lot higher).
> 
> It would be cool if someone made NVMe drives that were cost competitive and 
> had similar performance to hard drives (meaning, not super expensive but not 
> lightning fast either) because the $/GB on datacenter NVMe drives like 
> Kioxia, etc is still pretty far away from what it is for HDDs (obviously).

I think as a collective, the mailing list didn't do enough to ask about
your use case for the Ceph cluster earlier in the thread.

Now that you say it's just backups/archival, QLC might be excessive for
you (or a great fit if the backups are churned often).

USD70/TB is the best public large-NVMe pricing I'm aware of at present,
for 30TB QLC drives. Smaller-capacity drives do get down to USD50/TB.
2.5" SATA spinning disk is USD20-30/TB.
All of those are much higher than the USD15-20/TB for 3.5" spinning disk
made for 24/7 operation.

Maybe it would also help, as a community, to explain the "why" behind
the perception of the "right way".

It's a tradeoff in what you're doing: you don't want to
bottleneck/saturate critical parts of the system.

PCIe bandwidth: this goes for NVME as well as SATA/SAS.
I won't name the vendor, but I saw a weird NVME server with 50+ drive
slots.  Each drive slot was x4 lane width but had a number of PCIe
expanders in the path from the motherboard, so if you were trying to max
it out, simultaneously using all the drives, each drive only got about
1.7 usable PCIe 4.0 lanes' worth of bandwidth.

Compare that to the Supermicro servers I suggested: the AMD variants use
an H13SSF motherboard, which provides 64x PCIe 5.0 lanes, split into 32x
E3.S drive slots, each with 4x PCIe 4.0 lanes and no over-subscription.

On that same Supermicro system, how do you get the data out? There are
two PCIe 5.0 x16 slots for your network cards, so you only need to
saturate at most HALF the drives to saturate the network.

Taking this back to the SATA/SAS servers: say you had a 16-port HBA with
only PCIe 2.0 x8, a theoretical max of 4GB/sec, and you filled it with
Samsung QVO drives that you could efficiently drive at 560MB/sec each.
The drives can collectively do almost 9GB/sec.
=> probably worthwhile to buy a better HBA.
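
Back-of-envelope for that example, with the figures assumed above:

  echo "$((16 * 560)) MB/s aggregate from the drives vs ~4000 MB/s through a PCIe 2.0 x8 HBA"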

On the HBA side, some of the controllers, in any RAID mode (including
single-disk RAID0), cannot handle saturating every port at the same
time: the little CPU is just doing too much work. Those same controllers
in a passthrough/IT mode are fine because the CPU doesn't do work
anymore.

This turned out more rambling than I intended, but how can we capture
the 'why' of these recommendations into something usable by the
community, so that everybody can read it (esp. those who don't want to
engage on a mailing list)?

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: recommendation for barebones server with 8-12 direct attach NVMe?

2024-01-14 Thread Robin H. Johnson
On Fri, Jan 12, 2024 at 02:32:12PM +, Drew Weaver wrote:
> Hello,
> 
> So we were going to replace a Ceph cluster with some hardware we had
> laying around using SATA HBAs but I was told that the only right way
> to build Ceph in 2023 is with direct attach NVMe.
> 
> Does anyone have any recommendation for a 1U barebones server (we just
> drop in ram disks and cpus) with 8-10 2.5" NVMe bays that are direct
> attached to the motherboard without a bridge or HBA for Ceph
> specifically?
If you're buying new, Supermicro would be my first choice for vendor
based on experience.
https://www.supermicro.com/en/products/nvme

You said 2.5" bays, which makes me think you have existing drives.
There are models to fit that, but if you're also considering new drives,
you can get further density with E1/E3 (EDSFF) form-factor drives.

The only caveat is that you will absolutely want to put a better NIC in
these systems, because 2x10G is easy to saturate with a pile of NVME.

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: Etag change of a parent object

2023-12-14 Thread Robin H. Johnson
On Wed, Dec 13, 2023 at 10:57:10AM +0100, Rok Jaklič wrote:
> Hi,
> 
> shouldn't etag of a "parent" object change when "child" objects are added
> on s3?
> 
> Example:
> 1. I add an object to test bucket: "example/" - size 0
> "example/" has an etag XYZ1
> 2. I add an object to test bucket: "example/test1.txt" - size 12
> "example/test1.txt" has an etag XYZ2
> "example/" has an etag XYZ1 ... should this change?
> 
> I understand that object storage is not hierarchical by design and objects
> are "not connected" by some other means than the bucket name.
If you understand the storage is not hierarchical, why do you think one
object is a parent of the other (child) object?

In your example, there are 2 objects:
"example/"
"example/test1.txt"

The "/" in the name is not special in any way. It would still be 2
objects if the object names were:
"exampleZ"
"exampleZtest1.txt"

I want to understand why you (and others) feel that one object is the
parent of the other (child) object.

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136


[ceph-users] Re: Disable signature url in ceph rgw

2023-12-08 Thread Robin H. Johnson
On Fri, Dec 08, 2023 at 10:41:59AM +0100, marc@singer.services wrote:
> Hi Ceph users
> 
> We are using Ceph Pacific (16) in this specific deployment.
> 
> In our use case we do not want our users to be able to generate signature v4 
> URLs because they bypass the policies that we set on buckets (e.g IP 
> restrictions).
> Currently we have a sidecar reverse proxy running that filters requests with 
> signature URL specific request parameters.
> This is obviously not very efficient and we are looking to replace this 
> somehow in the future.
> 
> 1. Is there an option in RGW to disable this signed URLs (e.g returning 
> status 403)?
> 2. If not is this planned or would it make sense to add it as a configuration 
> option?
> 3. Or is the behaviour of not respecting bucket policies in RGW with 
> signature v4 URLs a bug and they should be actually applied?

Trying to clarify your ask:
- you want ALL requests, including presigned URLs, to be subject to the
  IP restrictions encoded in your bucket policy?
  e.g. auth (signature AND IP-list)

That should be possible with bucket policy.

Can you post the current bucket policy that you have? (redact with
distinct values the IPs, userids, bucket name, any paths, but otherwise
keep it complete).
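
For reference, the shape of statement I mean is roughly this (bucket
name and CIDR are made up; please test it against presigned requests
before relying on it):

  {
    "Version": "2012-10-17",
    "Statement": [{
      "Sid": "DenyFromOutsideAllowedIPs",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": ["arn:aws:s3:::examplebucket", "arn:aws:s3:::examplebucket/*"],
      "Condition": {"NotIpAddress": {"aws:SourceIp": ["198.51.100.0/24"]}}
    }]
  }

Save that as policy.json and apply it with e.g.
aws s3api put-bucket-policy --bucket examplebucket --policy file://policy.json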

You cannot fundamentally stop anybody from generating presigned URLs,
because that's purely a client-side operation. Generating presigned URLs
requires an access key and secret key, at which point they can do
presigned or regular authenticated requests.

P.S. What stops your users from changing the bucket policy?

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136


[ceph-users] Re: RadosGW public HA traffic - best practices?

2023-11-19 Thread Robin H. Johnson
On Fri, Nov 17, 2023 at 11:09:22AM +0100, Boris Behrens wrote:
> Hi,
> I am looking for some experience on how people make their RGW public.
What level of fine-grained control do you have over DNS for your
environment?

If you can use a very short TTL, and dynamically update DNS rapidly,
maybe a DNS-based routing solution would be the quickest win for you?

s3.example.com => A/AAAA record that resolves to only the pod(s) that
are online AND least loaded with traffic. 10 second TTL on the DNS
entry.
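
Zone-file wise that's nothing more exotic than (example values):

  s3.example.com.  10  IN  A  192.0.2.11   ; pod-1, online + least loaded right now
  s3.example.com.  10  IN  A  192.0.2.12   ; pod-2

with your health-checker adding/removing records via dynamic updates or
your DNS provider's API.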

Right now those pods might be direct RGW, or L7LB+RGW (HAProxy, Envoy).

In future, you might iterate the design to be L4LB ingress on those
pods, and have the L7LB+RGW pods doing direct server return.

If a pod goes offline:
0-TTL seconds: some clients might have to retry on a different IP.
TTL+ seconds: failed pod is no longer in the DNS records.

A good piece of overall reading is vbernat's load-balancing with Linux
page:
https://vincent.bernat.ch/en/blog/2018-multi-tier-loadbalancer

It doesn't have the above dynamic DNS solution directly in front of
pods, because it mostly focuses on what can be done with BGP as a common
point. It does however suggest DNS for regional failover.

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: RadosGW strange behavior when using a presigned url generated by SDK PHP

2023-06-30 Thread Robin H. Johnson
On Fri, Jun 30, 2023 at 01:21:57AM -, Huy Nguyen wrote:
> Thanks for your reply,
> Yes, my setup is like the following:
> RGWs (port 8084) -> Nginx (80, 443)
> 
> So this why it make me confuse when :8084 appear with the domain.
> 
> And this behavior only occurs with PHP's generated url, not in Boto3
Put tcpdump or something else between nginx & RGW and capture the
transaction when using Boto3 vs PHP.

I'm relatively sure it's nginx adding it for you.
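
Something like this on the box running nginx, since the backend leg is
plain HTTP on 8084 in your setup:

  tcpdump -i any -s0 -w rgw-backend.pcap 'tcp port 8084'

then compare the Host headers and request lines between the Boto3 and
PHP captures.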

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: RadosGW strange behavior when using a presigned url generated by SDK PHP

2023-06-29 Thread Robin H. Johnson
On Thu, Jun 29, 2023 at 10:46:16AM -, Huy Nguyen wrote:
> Hi,
> I tried to generate a presigned url using SDK PHP, but it doesn't
> work. (I also tried to use boto3 with the same configures and the url
> works normally)
Do you have some sort of load-balancer in the setup? Either HAProxy,
Nginx, or something else.

If the port number isn't in the PHP script's output, by deduction it
must be coming from somewhere else, either as a bug or a misconfiguration.

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136


[ceph-users] Re: 10x more used space than expected

2023-03-14 Thread Robin H. Johnson
On Tue, Mar 14, 2023 at 06:59:51PM +0100, Gaël THEROND wrote:
> Versioning wasn’t enabled, at least not explicitly and for the
> documentation it isn’t enabled by default.
> 
> Using nautilus.
> 
> I’ll get all the required missing information on tomorrow morning, thanks
> for the help!
> 
> Is there a way to tell CEPH to delete versions that aren’t current used one
> with radosgw-admin?
> 
> If not I’ll use the rest api no worries.
Nope, s3 API only.

You should also check for incomplete multiparts. For that, I recommend
using AWSCLI or boto directly. Specifically not s3cmd, because s3cmd
doesn't respect the  flag properly.
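
Hedged examples with awscli (bucket name made up; add --endpoint-url for
your RGW):

  aws s3api list-object-versions --bucket registry-bucket --max-items 50
  aws s3api list-multipart-uploads --bucket registry-bucket
  aws s3api abort-multipart-upload --bucket registry-bucket --key some/key --upload-id UPLOADID

or set a lifecycle rule with AbortIncompleteMultipartUpload so RGW
expires the stale ones on its own.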

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: 10x more used space than expected

2023-03-14 Thread Robin H. Johnson
On Tue, Mar 14, 2023 at 06:34:54PM +0100, Gaël THEROND wrote:
> Hi everyone, I’ve got a quick question regarding one of our RadosGW bucket.
> 
> This bucket is used to store docker registries, and the total amount of
> data we use is supposed to be 4.5Tb BUT it looks like ceph told us we
> rather use ~53Tb of data.
> 
> One interesting thing is, this bucket seems to shard for unknown reason as
> it is supposed to be disabled by default, but even taking that into account
> we’re not supposed to see such a massive amount of additional data isn’t it?
> 
> Here is the bucket stats of it:
> https://paste.opendev.org/show/bdWFRvNFtxyHnbPfXWu9/
At a glance, is versioning enabled?

And if so, are you pruning old versions?

Please share "radosgw-admin metadata get" for the bucket &
bucket-instance.
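
i.e. something like (bucket name is a placeholder):

  radosgw-admin metadata get bucket:registry-bucket
  # take the "id" field from that output, then:
  radosgw-admin metadata get bucket.instance:registry-bucket:INSTANCE_ID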

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: Generated signurl is accessible from restricted IPs in bucket policy

2023-02-09 Thread Robin H. Johnson
On Wed, Feb 08, 2023 at 03:07:20PM -, Aggelos Toumasis wrote:
> Hi there,
> 
> We noticed after creating a signurl that the bucket resources were
> accessible from IPs that were originally restricted from accessing
> them (using a bucket policy).  Using the s3cmd utility we confirmed
> that the Policy is correctly applied and you can access it only for
> the allowed IPs.
>
> Is this an expected behavior or do we miss something?
Can you share the bucket policy?

Also, are you using some reverse proxy in front of RGW, and if so:
are both the proxy & RGW configured for the correct headers so they agree
on the actual source IP?

IIRC, depending on how the policy is written, you can have either of:
- presigned URL || IP-check
- presigned URL && IP-check
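
The usual pairing, as a sketch (adjust to whatever proxy you run):

  # ceph.conf on the RGWs:
  #   rgw_remote_addr_param = HTTP_X_FORWARDED_FOR
  # nginx in front:
  #   proxy_set_header X-Forwarded-For $remote_addr;
  # haproxy in front:
  #   option forwardfor

Without that agreement, aws:SourceIp in the policy is evaluated against
the proxy's address instead of the client's.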

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: Local NTP servers on monitor node's.

2022-03-18 Thread Robin H. Johnson
On Wed, Mar 16, 2022 at 10:49:15AM +, Frank Schilder wrote:
> Returning to this thread, I finally managed to capture the problem I'm
> facing in a log. The time service to the outside world is blocked by
> our organisation's firewall and I'm restricted to use internal time
> servers. Unfortunately, these seem to be periodically unstable. I
> caught a time-excursion in the log extracts shown below. My problem
> now is that such a transient causes time-havoc on the cluster, because
> the servers start to adjust in all directions.
...
> Is there a config to tell the head node to take it easy with jumps in
> the external clock source?
This is the "step" config knobs.

> Here the observation. It is annotated and filtered to contain only
> lines where the offset changes and I reduced it to show the incident
> with few lines, all as seen from the head node:
...
> I know that the providers of the time service should get their act
> together, but I doubt that will happen and I would like to harden my
> time sync config to survive such events without chaos. If anyone can
> point me to a suitable config, please do. I need a way to smoothen out
> steep upstream oscillations, like a low-pass filter would do.
If you did filter out the sudden jumps, you'd end up with your mons
all (rightly) distrusting the bad time service, and then they could
drift on their own.

There are better timenuts than I on the list, but I think the following
MIGHT be a reasonable course of action.

1. Disable time stepping: "tinker stepfwd 0 stepback 0" (the exact syntax might 
vary depending on NTP version)
2. Set up all of your mons to be NTP servers (possibly in addition to the
   existing head node). They should peer with each other explicitly
   (rough ntp.conf sketch below).
3. Set up the rest of your cluster to consume time from the mons ONLY.
4. Optional: if your time service providers are unreliable, investigate
   building/buying your own, and use it to feed time to the mons.
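
A rough ntp.conf sketch for one mon, per items 2-3 (classic ntpd syntax;
hostnames and subnet are made up):

  tinker stepfwd 0 stepback 0        # never step, only slew
  server headnode.internal iburst    # the (unstable) internal upstream
  peer mon2.internal                 # the other mons
  peer mon3.internal
  restrict default kod nomodify notrap nopeer noquery
  restrict 10.0.0.0 mask 255.255.0.0 nomodify notrap   # let the OSD/client hosts sync from this mon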

If all the mons end up distrusting the time-service you have, they
*should* retain consistent time between themselves, and thus the clients
should also keep consistent time.





-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136


[ceph-users] Re: Bug in RGW header x-amz-date parsing

2021-12-12 Thread Robin H. Johnson
On Tue, Dec 07, 2021 at 12:39:12PM -0500, Casey Bodley wrote:
> hi Subu,
> 
> On Tue, Dec 7, 2021 at 12:10 PM Subu Sankara Subramanian
>  wrote:
> >
> > Folks,
> >
> > Is there a bug in ceph RGW date parsing?
> > https://github.com/ceph/ceph/blob/master/src/rgw/rgw_auth_s3.cc#L223 - this
> > line parses the date in x-amz-date as RFC 2616. BUT the format specified by
> > Amazon S3 is ISO 8601 basic - MMDDTHHMMSSZ (
> > https://docs.aws.amazon.com/ses/latest/APIReference-V2/CommonParameters.html).
> 
> this is a link to the Amazon Simple Email Service v2 API, it doesn't
> refer to aws v2 signatures. v2 signatures for s3 are documented at
> https://docs.aws.amazon.com/AmazonS3/latest/userguide/RESTAuthentication.html#RESTAuthenticationTimeStamp,
> which states that "The value of the x-amz-date header must be in one
> of the RFC 2616 formats (http://www.ietf.org/rfc/rfc2616.txt)."
Yes, there is a bug, but it's not the one you're thinking of.

The S3 specification says one thing (RFC2616 dates [3]), and AWS's S3
implementation does something slightly different:
It accepts non-GMT time offsets, and correctly converts them to GMT/UTC.

RGW on the other hand has a subtle failure:
1. RFC2616 section 3.3.1 [1]
   Specifies that HTTP timestamps must be in GMT.
2. Not all clients follow this, and may send it with other timezones!
3. RGW uses strptime(s, "%a, %d %b %Y %H:%M:%S %z", t) to try and parse
   the time.
4. glibc strptime accepts the %z specifier, but discards the value! [2]
5. If the server is set to UTC, and the client sends a non-GMT date,
   it will be parsed as if it's in GMT, and thus potentially be offset
   by some time (usually a multiple of hours due to the geo-political
   nature of timezones).
6. If making a SigV2 request where the timestamp on the 4th line of the
   string to sign is *not* in GMT but carries an offset, e.g. "Sun, 12 Dec 2021
   12:52:48 -0800", RGW will respond with a RequestTimeTooSkewed response.

Partial demo for this bug at [4].

[1] https://datatracker.ietf.org/doc/html/rfc2616#section-3.3.1
[2] https://man7.org/linux/man-pages/man3/strptime.3.html#NOTES
[3] 
https://docs.aws.amazon.com/AmazonS3/latest/userguide/RESTAuthentication.html#RESTAuthenticationTimeStamp
[4] https://gist.github.com/robbat2/930974e21dcdf2b31944f954c46dd904

Credit to one of my co-workers, Max Kuznetsov, for finding the initial
conditions that lead this bug discovery.

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136


[ceph-users] Re: Double slashes in s3 name

2021-04-28 Thread Robin H. Johnson
On Tue, Apr 27, 2021 at 11:55:15AM -0400, Gavin Chen wrote:
> Hello,
> 
> We’ve got some issues when uploading s3 objects with a double slash //
> in the name, and was wondering if anyone else has observed this issue
> with uploading objects to the radosgw?
> 
> When connecting to the cluster to upload an object with the key
> ‘test/my//bucket’ the request returns with a 403
> (SignatureDoesNotMatch) error. 
> 
> Wondering if anyone else has observed this behavior and has any
> workarounds to work with double slashes in the object key name.
I'm not aware of any issues with this, but that absolutely doesn't mean
it's bug-free.

For debugging it, what client are you using? I'd suggest using
debug_rgw=20 AND maximal debug on the client side, and comparing the
signature construction to see why it doesn't match.

This goes doubly if the bug exists with only one of v2 or v4 signatures!

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: Storing 20 billions of immutable objects in Ceph, 75% <16KB

2021-02-17 Thread Robin H. Johnson
On Wed, Feb 17, 2021 at 05:36:53PM +0100, Loïc Dachary wrote:
> Bonjour,
> 
> TL;DR: Is it more advisable to work on Ceph internals to make it
> friendly to this particular workload or write something similar to
> EOS[0] (i.e Rocksdb + Xrootd + RBD)?
CERN's EOSPPC instance, which is one of the biggest from what I can
find, was up around 3.5B files in 2019; you're proposing 10B files, so I
don't know how EOS will handle that. Maybe Dan can chime in on the
scalability there.

Please do keep on this important work! I've tried to do something
similar at a much smaller scale for Gentoo Linux's historical collection
of source code media (distfiles), but am significantly further behind
your effort.

> Let say those 10 billions objects are stored in a single 4+2 erasure
> coded pool with bluestore compression set for objects that have a size
> 32KB and the smallest allocation size for bluestore set to 4KB[3].
> The 750TB won't use the expected 350TB but about 30% more, i.e.
> ~450TB (see [4] for the maths). This space amplification is because
> storing a 1 byte object uses the same space as storing a 16KB object
> (see [5] to repeat the experience at home). In a 4+2 erasure coded
> pool, each of the 6 chunks will use no less than 4KB because that's
> the smallest allocation size for bluestore. That's 4 * 4KB = 16KB
> even when all that is needed is 1 byte.
I think you have an error here: with a 4KB allocation size in a 4+2 pool,
any object sized (0,16K] will take _6_ chunks: 24KB of storage.
Any object sized (16K,32K] will take _12_ chunks: 48KB of storage.

I'd attack this from another side entirely:
- how aggressively do you want to pack objects overall? e.g. if you have
  a few thousand objects in the 4-5K range, do you want zero bytes
  wasted between objects?
- how aggressively do you want to dedup objects that share common data,
  esp if it's not aligned on some common byte margins?
- what are the data portability requirements to move/extract data from
  this system at a later point?
- how complex of an index are you willing to maintain to
  reconstruct/access data?
- What requirements are there about the ordering and accessibility of
  the packs? How related do the pack objects need to be? e.g. are they
  packed as they arrive, in time order, building up successive packs by
  size, or are there many packs and you append to the "correct" pack for a
  given object?

I'm normally distinctly in the camp that object storage systems should
natively expose all objects, but that also doesn't account for your
immutability/append-only nature.

I see your discussion at https://forge.softwareheritage.org/T3054#58977
as well, about the "full scale out" vs "scale up metadata & scale out
data" parts.

To brainstorm parts of an idea, I'm wondering about Git's
still-in-development partial clone work, with the caveat that you intend
to NEVER checkout the entire repository at the same time.

Ideally, using some manner of fuse filesystem (similar to Git Virtual
Filesystem) w/ an index-only clone, naive clients could access the
object they wanted, which would be fetched on demand from the git server
which has mostly git packs and a few sparse objects that are waiting for
packing.

The write path on ingest clients would involve sending back the new
data, and git background processes on some regular interval packing the
loose objects into new packfiles.

Running this on top of CephFS for now means that you get the ability to
move it to future storage systems more easily than any custom RBD/EOS
development you might do: bring up enough space, sync the files over,
profit.

Git handles the deduplication, compression, access methods, and
generates large pack files, which Ceph can store more optimally than the
plethora of tiny objects.

Overall, this isn't great, but there aren't a lot of alternatives as
your great research has noted.

Being able to take a backup of the Git-on-CephFS is also made a lot
easier since it's a filesystem: "just" write out the 350TB to 20x LTO-9
tapes.

Thinking back to older systems, like SGI's hierarchical storage modules
for XFS, the packing overhead starts to become significant for your
objects: some of the underlying mechanisms in the XFS HSM DMAPI, if they
ended up packing immutable objects to tape, still used tar & tar-like
headers (at least 512 bytes per object), so your 10B objects would take
at least 4TB of extra space (before compression).


> It was suggested[6] to have two different pools: one with a 4+2 erasure pool 
> and compression for all objects with a size > 32KB that are expected to 
> compress to 16KB. And another with 3 replicas for the smaller objects to 
> reduce space amplification to a minimum without compromising on durability. A 
> client looking for the object could make two simultaneous requests to the two 
> pools. They would get 404 from one of them and the object from the other.
> 
> Another workaround, is best described in the "Finding a needle in Haystack: 
> Facebook’s photo 

[ceph-users] Re: How OSD encryption affects latency/iops on NVMe, SSD and HDD

2020-09-26 Thread Robin H. Johnson
On Sat, Sep 26, 2020 at 04:50:34PM +, t...@postix.net wrote:
> Hi all,
> 
> For those who use encryption on your OSDs, what effect do you see on
> your NVMe, SSD and HDD vs non-encrypted OSDs? I tried to find some
> info on this subject but there isn't much detail available.
> 
> From experience, dmcrypt is CPU-bound and becomes a bottleneck when
> used on very fast NVMe. Using aes-xts, one can only expect around
> 1600-2000MB/s with 256/512 bit keys.

There's two things to point out as improvements for you.

1. CloudFlare's writeup about reducing latency in dm-crypt earlier this
year:
https://blog.cloudflare.com/speeding-up-linux-disk-encryption/

2. An internal observation, that I don't think was well published yet,
that disabling CONFIG_CRYPTO_STATS may provide significant CPU reduction
(CPU time spent in crypto_stats_* specifically).
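
Two quick things to check/try on a test OSD host (the perf flags need
cryptsetup >= 2.3.4 and a reasonably new kernel; paths are the usual
distro locations):

  grep CRYPTO_STATS /boot/config-$(uname -r) || zgrep CRYPTO_STATS /proc/config.gz
  cryptsetup refresh --perf-no_read_workqueue --perf-no_write_workqueue DMCRYPT_NAME

The second applies the Cloudflare-style workqueue bypass to an
already-open device; benchmark before and after.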

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: Nautilus Scrub and deep-Scrub execution order

2020-09-14 Thread Robin H. Johnson
On Mon, Sep 14, 2020 at 11:40:22AM -, Johannes L wrote:
> Hello Ceph-Users
> 
> after upgrading one of our clusters to Nautilus we noticed the x pgs not 
> scrubbed/deep-scrubbed in time warnings.
> Through some digging we found out that it seems like the scrubbing takes 
> place at random and doesn't take the age of the last scrub/deep-scrub into 
> consideration.
> I dumped the time of the last scrub with a 90 min gap in between:
> ceph pg dump | grep active | awk '{print $22}' | sort | uniq -c
> dumped all
>2434 2020-08-30
>5935 2020-08-31
>1782 2020-09-01
>   2 2020-09-02
>   2 2020-09-03
>   5 2020-09-06
>   3 2020-09-08
>   5 2020-09-09
>  17 2020-09-10
> 259 2020-09-12
>   26672 2020-09-13
>   12036 2020-09-14
> 
> dumped all
>2434 2020-08-30
>5933 2020-08-31
>1782 2020-09-01
>   2 2020-09-02
>   2 2020-09-03
>   5 2020-09-06
>   3 2020-09-08
>   5 2020-09-09
>  17 2020-09-10
>  51 2020-09-12
>   24862 2020-09-13
>   14056 2020-09-14
> 
> It is pretty obvious that the PGs that have been scrubbed a day ago have been 
> scrubbed again for some reason while ones that are 2 weeks old are basically 
> left untouched.
> One way we are currently dealing with this issue is setting the 
> osd_scrub_min_interval to 72h to force the cluster to scrub the older PGs.
> This can't be intentional.
> Has anyone else seen this behavior?
Yes, this has existed for a long time; but the warnings are what's new.

- What's your workload? RBD/RGW/CephFS/???
- Is there a pattern to which pools are behind?

At more than one job now, we have written tooling that drove scrubbing
of the oldest PGs, in addition to or instead of Ceph's own scrub
scheduling.
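
As a rough sketch of that approach (not our exact tooling; field names
shift a little between releases, hence the loose jq):

  ceph pg dump -f json 2>/dev/null \
    | jq -r '[.. | objects | select(.pgid? and .last_deep_scrub_stamp?)]
             | sort_by(.last_deep_scrub_stamp) | .[:20][].pgid' \
    | xargs -n1 ceph pg deep-scrub

run from cron at whatever rate the cluster can absorb.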

The one thing that absolutely stood out in that, however, is that some
PGs took much longer than others or never completed (which meant other
PGs on those OSDs also got delayed). I never got to the bottom of why
while I was at my last job, and it hasn't been a high enough priority at
my current job for the one occurrence we saw (and it may have been a
precursor to a disk failing).

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: S3 bucket lifecycle not deleting old objects

2020-07-28 Thread Robin H. Johnson
On Tue, Jul 28, 2020 at 01:28:14PM +, Alex Hussein-Kershaw wrote:
> Hello,
> 
> I have a problem that old versions of S3 objects are not being deleted. Can 
> anyone advise as to why? I'm using Ceph 14.2.9.
How many objects are in the bucket? If it's a lot, then you may run into
RGW's lifecycle performance limitations: listing each bucket is a very
slow operation for lifecycle prior to improvements made in later
versions (Octopus, with maybe a backport to Nautilus?).

If the bucket doesn't have a lot of operations, you could try running
the 'radosgw-admin lc process' directly, with debug logging, and see
where it gets bogged down.
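
i.e. something like:

  radosgw-admin lc list                                   # per-bucket lifecycle state
  radosgw-admin lc process --debug-rgw=20 --log-to-stderr 2>&1 | tee lc-debug.log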

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: NoSuchKey on key that is visible in s3 list/radosgw bk

2020-07-27 Thread Robin H. Johnson
On Mon, Jul 27, 2020 at 08:02:23PM +0200, Mariusz Gronczewski wrote:
> Hi,
> 
> I've got a problem on Octopus (15.2.3, debian packages) install, bucket
> S3 index shows a file:
> 
> s3cmd ls s3://upvid/255/38355 --recursive
> 2020-07-27 17:48  50584342
> 
> s3://upvid/255/38355/juz_nie_zyjesz_sezon_2___oficjalny_zwiastun___netflix_mp4
> 
> radosgw-admin bi list also shows it
> 
> {
> "type": "plain",
> "idx":
> "255/38355/juz_nie_zyjesz_sezon_2___oficjalny_zwiastun___netflix_mp4",
> "entry": { "name":
> "255/38355/juz_nie_zyjesz_sezon_2___oficjalny_zwiastun___netflix_mp4",
> "instance": "", "ver": {
> "pool": 11,
> "epoch": 853842
> },
> "locator": "",
> "exists": "true",
> "meta": {
> "category": 1,
> "size": 50584342,
> "mtime": "2020-07-27T17:48:27.203008Z",
> "etag": "2b31cc8ce8b1fb92a5f65034f2d12581-7",
> "storage_class": "",
> "owner": "filmweb-app",
> "owner_display_name": "filmweb app user",
> "content_type": "",
> "accounted_size": 50584342,
> "user_data": "",
> "appendable": "false"
> },
> "tag": "_3ubjaztglHXfZr05wZCFCPzebQf-ZFP",
> "flags": 0,
> "pending_map": [],
> "versioned_epoch": 0
> }
> },
> 
> but trying to download it via curl (I've set permissions to public0 only gets 
> me
Does the RADOS object for this still exist?

try:
radosgw-admin object stat --bucket ... --object 
'255/38355/juz_nie_zyjesz_sezon_2___oficjalny_zwiastun___netflix_mp4'

If that doesn't return, then the backing object is gone, and you have a
stale index entry that can be cleaned up in most cases with check
bucket.
For cases where that doesn't fix it, my recommended way to fix it is
write a new 0-byte object to the same name, then delete it.
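
For the stale-entry case that looks like (bucket name from your listing;
be careful on versioned or multisite buckets):

  radosgw-admin bucket check --bucket=upvid --check-objects --fix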

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: RadosGW latency on chuked uploads

2020-06-09 Thread Robin H. Johnson
On Tue, Jun 09, 2020 at 09:07:49PM +0300, Tadas wrote:
> Hello,
> we face like 75-100 ms while doing 600 chunked PUT's.
> while 40-45ms while doing 1k normal PUT's.
> (Even amount of PUT's lowers on chunked PUT way)
> 
> We tried civetweb and beast. Nothing changes.
How close is your test running to the RGWs?

Does it get noticeably worse if you inject artificial latency into the network?

E.g.
https://bencane.com/2012/07/16/tc-adding-simulated-network-latency-to-your-linux-server/

If you can run the test without SSL, then tcpdump should let you see if
your client is trying to stuff the max amount of data into the pipeline
or waiting for an ACK each time.
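
e.g. on the test client (pick the right interface, and remove the qdisc
afterwards):

  tc qdisc add dev eth0 root netem delay 50ms
  tc qdisc del dev eth0 root netem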

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136




[ceph-users] Re: RadosGW latency on chuked uploads

2020-06-09 Thread Robin H. Johnson
On Tue, Jun 09, 2020 at 12:59:10PM +0300, Tadas wrote:
> Hello,
> 
> I have strange issues with radosgw:
> When trying to PUT object with “transfer-encoding: chunked”, I can see high 
> request latencies.
> When trying to PUT the same object as non-chunked – latency is much lower, 
> and also request/s performance is better.
> Perhaps anyone had the same issue?
What is your latency to the RGW?

There's one downside to chunked encoding that I observed with CivetWeb
when I implemented chunked transfer encoding for the Bucket Listing.

Specifically, CivetWeb did not stuff the socket with all available
content, and instead only trickled out entries, waiting for each TCP
window ACK before the next segment was sent.

If the Bucket Listing took a long time to complete within RGW, the
time to the first results was hugely improved, but the time for the full
response MAY be worse if the latency was high, due to having more back &
forth in TCP ACKs.

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136

