OK, thanks.
On Fri, Oct 25, 2019 at 6:07 PM Wido den Hollander wrote:
>
>
> On 10/25/19 5:27 AM, luckydog xf wrote:
> > Hi, list,
> >
> > Currently my Ceph cluster has 3 MONs and 9 OSDs, and everything is fine.
> > Now I plan to add one more public network; the initial public network
> > is
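For reference, recent Ceph releases accept a comma-separated list of subnets
for the public network in ceph.conf; a minimal sketch (both subnets below are
placeholders, not values from the thread):

  [global]
  # existing public network plus the new one, comma-separated
  public network = 192.168.1.0/24, 10.10.0.0/24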
Hi!
Yes, resetting journals is exactly what we did, quite a while ago, when the MDS
ran out of memory because a journal entry had an absurdly large number in it (I
think it may have been an inode number). We probably also reset the inode table
later, which I recently learned resets a data
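For reference, the journal and inode table resets described above are typically
done with cephfs-journal-tool and cephfs-table-tool; a sketch, assuming a
single-rank filesystem (the name "cephfs" and the backup path are placeholders,
and these commands are destructive, so export the journal first):

  cephfs-journal-tool --rank=cephfs:0 journal export backup.bin
  cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary
  cephfs-journal-tool --rank=cephfs:0 journal reset
  cephfs-table-tool all reset inode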
Yes, try to get the PGs healthy; then you can just re-provision the down OSDs.
Run a scrub on each of these PGs, then use the commands on the
following page to find out more information for each case:
https://docs.ceph.com/docs/luminous/rados/troubleshooting/troubleshooting-pg/
Focus on the
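A sketch of the kind of per-PG commands involved (<pgid> is a placeholder such
as 1.2f):

  ceph pg deep-scrub <pgid>
  ceph pg <pgid> query
  rados list-inconsistent-obj <pgid> --format=json-pretty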
I should have noted this is with Luminous 12.2.12 and is consistent across
swiftclient versions from 3.0.0 to 3.8.1, which may not be relevant. With a
proper nod I can open a ticket for this – just want to make sure it’s not a
config issue.
[client.rgw.cephrgw-s01]
host = cephrgw-s01
keyring =
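For comparison, a typical rgw client section has roughly this shape (the
keyring path and frontend line below are hypothetical, not taken from the
thread):

  [client.rgw.cephrgw-s01]
  host = cephrgw-s01
  keyring = /etc/ceph/ceph.client.rgw.cephrgw-s01.keyring
  rgw frontends = civetweb port=7480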
Hello,
For several weeks, I have had some OSDs flapping before being marked out of the
cluster by Ceph…
I was hoping for some of Ceph's magic and just gave it some time to auto-heal
(and be able to do all the side work…) but it was a bad idea (what a
surprise :D). I also got some inconsistent PGs, but I was
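For reference, the inconsistent PGs can be enumerated like this (<pool> is a
placeholder):

  ceph health detail
  rados list-inconsistent-pg <pool>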
On Fri, Oct 25, 2019 at 12:11 PM Pickett, Neale T wrote:
> In the last week we have made a few changes to the down filesystem in an
> attempt to fix what we thought was an inode problem:
>
>
> cephfs-data-scan scan_extents # about 1 day with 64 processes
>
> cephfs-data-scan scan_inodes #
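For reference, running the scan across many workers looks roughly like this (a
sketch matching the 64 processes mentioned above; "cephfs_data" is a
placeholder pool name):

  # one scan_extents instance per worker, run concurrently
  for n in $(seq 0 63); do
    cephfs-data-scan scan_extents --worker_n $n --worker_m 64 cephfs_data &
  done
  wait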
Hi Philippe,
Have you looked at the mempool stats yet?
ceph daemon osd.NNN dump_mempools
You may also want to look at the heap stats, and potentially enable
debug 5 for bluestore to see what the priority cache manager is doing.
Typically in these cases we end up seeing a ton of memory
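A sketch of the corresponding commands (osd.NNN is a placeholder id):

  ceph tell osd.NNN heap stats
  ceph daemon osd.NNN config set debug_bluestore 5/5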
Hi,
When uploading to RGW via swift I can set an expiration time. The files being
uploaded are large, so we segment them using the swift upload ‘-S’ arg. This
results in a 0-byte manifest file in the bucket and all the data segments
landing in a *_segments bucket.
When the expiration passes the 0-byte
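For reference, an upload of that shape looks roughly like this (container and
file names are placeholders; X-Delete-After is one way to set the expiry, here
24 hours):

  swift upload -S 1073741824 -H "X-Delete-After: 86400" mycontainer bigfile.bin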
Hi Paul, Nigel,
I'm also seeing "HEALTH_WARN 6 large omap objects" warnings with cephfs
after upgrading to 14.2.4:
The affected OSDs are used (only) by the metadata pool:
POOL     ID  STORED  OBJECTS  USED    %USED  MAX AVAIL
mds_ssd   1  64 GiB    1.74M  65 GiB   4.47    466 GiB
See below for more log
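For reference, the offending omap objects can usually be located like this (the
log path is the default and may differ):

  ceph health detail
  zgrep "Large omap object found" /var/log/ceph/ceph.log*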
I was following the PG autoscaler recommendations and did not get a
recommendation to raise the PG count there.
I'll try that; I am raising it already. But it still seems weird that it would
move data onto almost-full OSDs. See the data distribution, it's horrible; it
ranges from 60 to almost 90% of
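For reference, the relevant views and the manual bump look like this (pool name
and pg_num value are placeholders):

  ceph osd pool autoscale-status
  ceph osd df tree    # per-OSD utilization spread
  ceph osd pool set <pool> pg_num 256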
Hi,
we are seeing quite high memory usage by OSDs since Nautilus, averaging
10 GB/OSD for 10 TB HDDs. But I have had OOM issues on 128 GB systems because
some single OSD processes used up to 32%.
Here is an example of how they look on average: https://i.imgur.com/kXCtxMe.png
Is that normal? I have never seen
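For reference, since BlueStore the per-OSD memory budget is governed by
osd_memory_target; a sketch of checking and capping it (4294967296 is just the
upstream 4 GiB default, not a recommendation):

  ceph config get osd.0 osd_memory_target
  ceph config set osd osd_memory_target 4294967296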