On 9/27/19 3:54 PM, Eugen Block wrote:
Update: I expanded all rocksDB devices, but the warnings still appear:
After expanding, you should tell each OSD to compact, e.g. `ceph tell
osd.0 compact`.
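Since all rocksDB devices were expanded, it may be easiest to compact every
OSD in one pass; a minimal sketch, assuming every id returned by
`ceph osd ls` should be compacted:

    for id in $(ceph osd ls); do
        ceph tell osd.$id compact
    done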
k
This is in part a question of *how many* of those dense OSD nodes you have. If
you have a hundred of them, then most likely they’re spread across a decent
number of racks and the loss of one or two is a tolerable *fraction* of the
whole cluster.
If you have a cluster of just, say, 3-4 of
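To put rough numbers on the fraction argument (illustrative figures, not from
the thread): losing one node out of 100 removes roughly 1% of the cluster's
capacity and PGs, which recovery can usually absorb, while losing one node out
of 4 removes roughly 25% and may not leave enough failure domains for the
CRUSH rule to re-replicate at all.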
Hi everyone,
I am looking for Ceph consulting and support in Brazil. Does anyone on
the list provide consulting services in Brazil?
Regards,
Gesiel
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
On Wed, Oct 2, 2019 at 3:41 PM Paul Emmerich wrote:
>
> On Wed, Oct 2, 2019 at 10:56 PM Robert LeBlanc wrote:
> > Is there a way to have leveldb compact more frequently or cause it to
> > come up for air more frequently and respond to heartbeats and process
> > some IO?
>
> you can manually
From my initial testing it looks like 14.2.4 fully supports the
deduplication mentioned here:
https://docs.ceph.com/docs/master/dev/deduplication/
However, I'm not sure where the struct object_manifest definition fits in
relation to foo and foo-chunk, and I'm not clear on what the
offsets/caspool
The documentation states:
https://docs.ceph.com/docs/mimic/rados/operations/monitoring/
The POOLS section of the output provides a list of pools and the notional usage
of each pool. The output from this section DOES NOT reflect replicas, clones or
snapshots. For example, if you store an object
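To make the notional-vs-raw distinction concrete (illustrative numbers only):
storing a 1 MB object in a pool with 3x replication shows up as roughly 1 MB
in that pool's POOLS line, while the cluster's raw usage grows by about 3 MB,
plus whatever clones or snapshots add on top.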
Thanks Paul. I was speaking more about total OSDs and RAM, rather than a single
node. However, I am considering building a cluster with a large OSD/node count.
This would be for archival use, with reduced performance and availability
requirements. What issues would you anticipate with a large
On Wed, Oct 2, 2019 at 10:56 PM Robert LeBlanc wrote:
> Is there a way to have leveldb compact more frequently or cause it to
> come up for air more frequently and respond to heartbeats and process
> some IO?
you can manually trigger a compaction via the admin socket (or was it
via ceph tell?)
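For reference, both paths exist, though availability varies by release; a
sketch with osd.0 as a stand-in id:

    # on the OSD host, via the admin socket
    ceph daemon osd.0 compact
    # or remotely
    ceph tell osd.0 compact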
I have one or two more stability issues in a cluster I inherited that I
just can't seem to figure out. One issue may be the cause of the other.
This is a Jewel 10.2.11 cluster with ~760 × 10 TB HDDs and 5 GB journals on SSD.
When a large number of files are deleted from
On Sun, Sep 29, 2019 at 8:21 PM Florian Pritz
wrote:
>
> On Sun, Sep 29, 2019 at 10:49:58AM +0800, "Yan, Zheng"
> wrote:
> > > Hanging client (10.1.67.49) kernel log:
> > >
> > > > 2019-09-26T16:08:27.481676+02:00 hostnamefoo kernel: [708596.227148]
> > > > ceph: mds0 reconnect start
> > > >