Is there a reason that this wouldn’t be a HEALTH_ERR
condition, since it represents a significant service degradation?
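As an aside, here is a minimal sketch of pulling the overall status and the
per-check severities programmatically, assuming the librados Python bindings are
available; the conf path is a placeholder and the JSON field names ("status",
"checks", "severity") reflect Luminous-era output and may differ by release:

import json
import rados

# Connect with a placeholder ceph.conf; adjust conffile/keyring for your cluster.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # Equivalent to `ceph health --format json`.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "health", "format": "json"}), b'')
    health = json.loads(outbuf)
    print("overall:", health.get("status"))
    for name, check in health.get("checks", {}).items():
        print(name, "->", check.get("severity"))
finally:
    cluster.shutdown()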
Thanks!
Ryan
> On May 22, 2019, at 4:20 AM, Yan, Zheng wrote:
>
> On Tue, May 21, 2019 at 6:10 AM Ryan Leimenstoll
> wrote:
>>
>> Hi all,
>>
>> We
large a message to the OSD;
however, my understanding was that the MDS should use osd_max_write_size to
determine the size of that message [0]. Could this be a bug in how the size is
calculated on the MDS side?
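For reference, a quick sketch comparing the value the MDS is actually running
with against an OSD, via the admin socket. The daemon names "mds.ceph01" and
"osd.0" are placeholders, and each call has to run on that daemon's host:

import json
import subprocess

def daemon_config_get(daemon, option):
    # Wraps `ceph daemon <name> config get <option>`, which returns JSON
    # like {"osd_max_write_size": "90"}.
    out = subprocess.check_output(
        ["ceph", "daemon", daemon, "config", "get", option])
    return json.loads(out)[option]

print("MDS view:", daemon_config_get("mds.ceph01", "osd_max_write_size"))
print("OSD view:", daemon_config_get("osd.0", "osd_max_write_size"))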
Thanks!
Ryan Leimenstoll
rleim...@umiacs.umd.edu
University of Maryland Institute for Advanced Computer Studies
Best,
Ryan
[0] https://tracker.ceph.com/issues/36293
> On Oct 2, 2018, at 6:08 PM, Robin H. Johnson wrote:
>
> On Tue, Oct 02, 2018 at 12:37:02PM -0400, Ryan Leimenstoll wrote:
>> I was hoping to get some clarification on what &
though this is now prohibited by
Amazon in US-East and seemingly all of their other regions [0]. Since clients
typically follow Amazon’s direction, should RGW be rejecting underscores in
these names to be in compliance? (I did notice it already rejects uppercase
letters.)
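To illustrate the stricter rule set (this is not something RGW enforces today): a
simplified client-side check for AWS-style bucket names, that is, 3 to 63
characters, lowercase letters, digits, hyphens, and dots, starting and ending with
a letter or digit, no underscores, and not formatted like an IPv4 address. The
full AWS rules have a few more constraints (e.g. no adjacent dots) omitted here:

import re

BUCKET_RE = re.compile(r'^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$')
IPV4_RE = re.compile(r'^\d{1,3}(\.\d{1,3}){3}$')

def is_valid_aws_bucket_name(name):
    # Rejects underscores, uppercase, too-short/long names, and IP-like names.
    return bool(BUCKET_RE.match(name)) and not IPV4_RE.match(name)

assert not is_valid_aws_bucket_name("my_bucket")   # underscore: rejected
assert not is_valid_aws_bucket_name("MyBucket")    # uppercase: rejected
assert is_valid_aws_bucket_name("my-bucket")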
Thanks much!
Ryan Leimenstoll
2018 at 8:50 PM, Ryan Leimenstoll
> <rleim...@umiacs.umd.edu> wrote:
>> Hi All,
>>
>> We recently experienced a failure with our 12.2.4 cluster running a CephFS
>> instance that resulted in some data loss due to a seemingly problematic OSD
>> blocking IO o
/docs/luminous/cephfs/disaster-recovery/#recovery-from-missing-metadata-objects
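For reference, a rough sketch of the journal/session recovery steps described in
that doc, wrapped only for readability; the export path is a placeholder, and
these should only be run with the filesystem offline and after taking a journal
backup, since they discard unflushed metadata:

import subprocess

def run(*args):
    print("+", " ".join(args))
    subprocess.check_call(args)

run("cephfs-journal-tool", "journal", "export", "/root/mds-journal.bin")  # backup first
run("cephfs-journal-tool", "event", "recover_dentries", "summary")
run("cephfs-journal-tool", "journal", "reset")
run("cephfs-table-tool", "all", "reset", "session")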
Thanks much,
Ryan Leimenstoll
rleim...@umiacs.umd.edu
University of Maryland Institute for Advanced Computer Studies
the hood.
Thanks,
Ryan
> On Mar 6, 2018, at 2:54 PM, Robin H. Johnson <robb...@gentoo.org> wrote:
>
> On Tue, Mar 06, 2018 at 02:40:11PM -0500, Ryan Leimenstoll wrote:
>> Hi all,
>>
>> We are trying to move a bucket in radosgw from one user to another in an
helpful to have the ability to do this on
the radosgw backend. This is especially useful for large buckets/datasets where
copying the objects out of and back into radosgw could be time-consuming.
Is this something that is currently possible within radosgw? We are running
Ceph 12.2.2.
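In case it helps frame the question, a sketch (untested) of the relink approach
with radosgw-admin; the bucket and user names are placeholders, and as far as I
understand, link/unlink changes the bucket owner but does not rewrite per-object
ACLs:

import subprocess

def run(*args):
    print("+", " ".join(args))
    subprocess.check_call(args)

bucket, old_uid, new_uid = "projectdata", "old-user", "new-user"  # placeholders

run("radosgw-admin", "bucket", "unlink", "--uid=" + old_uid, "--bucket=" + bucket)
run("radosgw-admin", "bucket", "link", "--uid=" + new_uid, "--bucket=" + bucket)
run("radosgw-admin", "bucket", "stats", "--bucket=" + bucket)  # confirm new owner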
Thanks,
Ryan
are somewhat nervous about re-enabling dynamic sharding, as it seems to have
contributed to this problem.
Thanks,
Ryan
> On Oct 9, 2017, at 5:26 PM, Yehuda Sadeh-Weinraub <yeh...@redhat.com> wrote:
>
> On Mon, Oct 9, 2017 at 1:59 PM, Ryan Leimenstoll
> <rleim...@umiacs.umd.edu> wrote:
on processing
returned error r=-22
Can anyone advise on the best path forward to stop the in-progress resharding
operations and avoid this going forward? (A sketch follows the details below.)
Some other details:
- 3 rgw instances
- Ceph Luminous 12.2.1
- 584 active OSDs, rgw bucket index is on Intel NVMe OSDs
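The sketch mentioned above (the bucket name is a placeholder; reshard cancel is
available on recent Luminous point releases, and setting
rgw_dynamic_resharding = false in ceph.conf keeps it from kicking back in until
the underlying issue is understood):

import json
import subprocess

# List pending/in-progress reshard entries, then cancel the one for the
# affected bucket.
pending = json.loads(subprocess.check_output(["radosgw-admin", "reshard", "list"]))
for entry in pending:
    print(entry)

subprocess.check_call(["radosgw-admin", "reshard", "cancel", "--bucket=bigbucket"])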
Thanks,
Ryan Leimenstoll
candidate phase, I
haven’t seen much mention of it. For some time now we have been experiencing
blocked requests when deep scrubbing PGs in our bucket index, so this could be
quite useful for us.
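Tangentially, a common mitigation sketch for the scrub impact itself: throttle
deep scrubs and confine them to off-hours using standard OSD options. The values
below are illustrative only, and injected settings do not survive a daemon
restart:

import subprocess

settings = {
    "osd_scrub_sleep": "0.1",       # pause between scrub chunks
    "osd_scrub_begin_hour": "22",   # only begin scrubs at night
    "osd_scrub_end_hour": "6",
}
args = " ".join("--{}={}".format(k, v) for k, v in settings.items())

# Equivalent to: ceph tell osd.* injectargs '--osd_scrub_sleep=0.1 ...'
subprocess.check_call(["ceph", "tell", "osd.*", "injectargs", args])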
Thanks,
Ryan Leimenstoll
rleim...@umiacs.umd.edu
University of Maryland Institute for Advanced Computer Studies
[truncated pool statistics for the .rgw.buckets.index pool from the original message]
Thanks,
Ryan Leimenstoll
University of Maryland Institute for Advanced Computer Studies