Re: [ceph-users] CephFS msg length greater than osd_max_write_size

2019-05-22 Thread Ryan Leimenstoll
. Is there a reason that this wouldn’t be a HEALTH_ERR condition since it represents a significant service degradation? Thanks! Ryan > On May 22, 2019, at 4:20 AM, Yan, Zheng wrote: > > On Tue, May 21, 2019 at 6:10 AM Ryan Leimenstoll > wrote: >> >> Hi all, >> >> We

[ceph-users] CephFS msg length greater than osd_max_write_size

2019-05-20 Thread Ryan Leimenstoll
large a message to the OSD, however my understanding was that the MDS should be using osd_max_write_size to determine the size of that message [0]. Is this maybe a bug in how this is calculated on the MDS side? Thanks! Ryan Leimenstoll rleim...@umiacs.umd.edu University of Maryland Institute for Ad
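The cap under discussion is the `osd_max_write_size` option, whose default in Luminous-era releases is 90 (MiB). A ceph.conf sketch showing where it lives; the value here is illustrative, not a recommendation:

```ini
# ceph.conf fragment (sketch): osd_max_write_size caps the largest single
# write an OSD will accept. 90 MiB is the Luminous-era default; shown only
# to illustrate the knob the thread refers to.
[osd]
osd_max_write_size = 90
```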

Re: [ceph-users] "rgw relaxed s3 bucket names" and underscores

2018-10-02 Thread Ryan Leimenstoll
! Best, Ryan [0] https://tracker.ceph.com/issues/36293 > On Oct 2, 2018, at 6:08 PM, Robin H. Johnson wrote: > > On Tue, Oct 02, 2018 at 12:37:02PM -0400, Ryan Leimenstoll wrote: >> I was hoping to get some clarification on what &

[ceph-users] "rgw relaxed s3 bucket names" and underscores

2018-10-02 Thread Ryan Leimenstoll
hough this is now prohibited by Amazon in US-East and seemingly all of their other regions [0]. Since clients typically follow Amazon’s direction, should RGW be rejecting underscores in these names to be in compliance? (I did notice it already rejects uppercase letters.) Thanks much! Ryan Leimen
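For reference, Amazon's strict naming rules (3-63 characters; lowercase letters, digits, hyphens, and dots; must start and end with a letter or digit; no underscores or uppercase) can be checked client-side. `is_valid_bucket_name` below is our own illustrative helper, not part of any Ceph or AWS tooling:

```shell
#!/usr/bin/env bash
# Illustrative check of Amazon's strict S3 bucket-naming rules: 3-63 chars,
# lowercase letters/digits/hyphens/dots, starting and ending with a letter
# or digit -- underscores and uppercase are rejected.
is_valid_bucket_name() {
  [[ "$1" =~ ^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$ ]]
}
```

For example, `is_valid_bucket_name my_bucket` returns non-zero, which is the behavior the thread argues RGW should mirror.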

Re: [ceph-users] cephfs-data-scan safety on active filesystem

2018-05-08 Thread Ryan Leimenstoll
2018 at 8:50 PM, Ryan Leimenstoll > <rleim...@umiacs.umd.edu> wrote: >> Hi All, >> >> We recently experienced a failure with our 12.2.4 cluster running a CephFS >> instance that resulted in some data loss due to a seemingly problematic OSD >> blocking IO o

[ceph-users] cephfs-data-scan safety on active filesystem

2018-05-07 Thread Ryan Leimenstoll
/docs/luminous/cephfs/disaster-recovery/#recovery-from-missing-metadata-objects Thanks much, Ryan Leimenstoll rleim...@umiacs.umd.edu University of Maryland Institute for Advanced Computer Studies
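The referenced recovery procedure rebuilds metadata by walking the data pool. As a dry-run sketch, the helper below (our own wrapper, not a Ceph tool) only prints the core `cephfs-data-scan` invocations from that document; the real commands are intended to be run against an offline filesystem, which is what the thread is asking about:

```shell
#!/usr/bin/env bash
# Dry-run sketch: print (not run) the core metadata-recovery commands from
# the Luminous disaster-recovery document. print_recovery_steps is our own
# illustrative helper, named here for demonstration only.
print_recovery_steps() {
  local data_pool="$1"
  echo "cephfs-data-scan scan_extents ${data_pool}"
  echo "cephfs-data-scan scan_inodes ${data_pool}"
}
```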

Re: [ceph-users] change radosgw object owner

2018-03-08 Thread Ryan Leimenstoll
the hood. Thanks, Ryan > On Mar 6, 2018, at 2:54 PM, Robin H. Johnson <robb...@gentoo.org> wrote: > > On Tue, Mar 06, 2018 at 02:40:11PM -0500, Ryan Leimenstoll wrote: >> Hi all, >> >> We are trying to move a bucket in radosgw from one user to another in an

[ceph-users] change radosgw object owner

2018-03-06 Thread Ryan Leimenstoll
helpful to have the ability to do this on the radosgw backend. This is especially useful for large buckets/datasets where copying the objects out and into radosgw could be time consuming. Is this something that is currently possible within radosgw? We are running Ceph 12.2.2. Thanks, Ryan
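On the backend, a bucket can be reassigned with `radosgw-admin bucket unlink` followed by `bucket link`; note that in the Luminous era this relinks the bucket without rewriting the ACLs stored on existing objects. The helper below is our own dry-run sketch that only prints the commands, with illustrative argument names:

```shell
#!/usr/bin/env bash
# Dry-run sketch (our own helper): print the radosgw-admin commands that
# move a bucket from one user to another. In Luminous this changes bucket
# ownership but does not rewrite per-object ACLs.
print_bucket_reassign() {
  local bucket="$1" old_uid="$2" new_uid="$3"
  echo "radosgw-admin bucket unlink --bucket=${bucket} --uid=${old_uid}"
  echo "radosgw-admin bucket link --bucket=${bucket} --uid=${new_uid}"
}
```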

Re: [ceph-users] rgw resharding operation seemingly won't end

2017-10-10 Thread Ryan Leimenstoll
are somewhat nervous to reenable dynamic sharding as it seems to have contributed to this problem. Thanks, Ryan > On Oct 9, 2017, at 5:26 PM, Yehuda Sadeh-Weinraub <yeh...@redhat.com> wrote: > > On Mon, Oct 9, 2017 at 1:59 PM, Ryan Leimenstoll > <rleim...@umiacs.umd.edu

[ceph-users] rgw resharding operation seemingly won't end

2017-10-09 Thread Ryan Leimenstoll
on processing returned error r=-22 Can anyone advise on the best path forward to stop the current sharding states and avoid this moving forward? Some other details: - 3 rgw instances - Ceph Luminous 12.2.1 - 584 active OSDs, rgw bucket index is on Intel NVMe OSDs Thanks, Ryan Leim
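Luminous ships `radosgw-admin reshard` subcommands for inspecting and cancelling pending jobs, which is the usual path for a stuck reshard. The helper below is a dry-run sketch of ours that prints rather than runs them:

```shell
#!/usr/bin/env bash
# Dry-run sketch (our own helper): print the radosgw-admin commands used to
# list pending reshard jobs, check one bucket's state, and cancel its job.
print_reshard_cancel() {
  local bucket="$1"
  echo "radosgw-admin reshard list"
  echo "radosgw-admin reshard status --bucket=${bucket}"
  echo "radosgw-admin reshard cancel --bucket=${bucket}"
}
```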

[ceph-users] Luminous RGW dynamic sharding

2017-09-20 Thread Ryan Leimenstoll
candidate phase, I haven’t seen much mention of it. For some time now we have been experiencing blocked requests when deep scrubbing PGs in our bucket index, so this could be quite useful for us. Thanks, Ryan Leimenstoll rleim...@umiacs.umd.edu University of Maryland Institute for Advanced
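Dynamic resharding in Luminous is governed by a pair of rgw options; a ceph.conf sketch with the Luminous defaults (values shown for illustration):

```ini
# ceph.conf fragment (sketch): Luminous dynamic bucket-index resharding.
# rgw_dynamic_resharding defaults to true; rgw_max_objs_per_shard (default
# 100000) is the per-shard object count that triggers a reshard.
[global]
rgw_dynamic_resharding = true
rgw_max_objs_per_shard = 100000
```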

[ceph-users] RGW Multisite Sync Memory Usage

2017-07-26 Thread Ryan Leimenstoll
153970795085 .rgw.buckets.index0 497200 0 3721485483 5926323574 360300980 Thanks, Ryan Leimenstoll University of Maryland Institute for Advanced Computer Studies