Re: [ceph-users] ceph; pg scrub errors

2019-09-25 Thread M Ranga Swami Reddy
Repair took almost 6 hours... but after the repair it still sees the scrub errors. On Wed, Sep 25, 2019 at 5:11 AM Brad Hubbard wrote: > On Tue, Sep 24, 2019 at 10:51 PM M Ranga Swami Reddy > wrote: > > > > Interestingly - "rados list-inconsistent-obj ${PG} --format=json" is not > showing any objects incon
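A typical workflow for chasing lingering scrub errors like this looks roughly as follows (a sketch, not advice from the thread — the pool name `rbd` and PG id `2.1a` below are placeholders for your own values):

```shell
# Show which PGs are currently flagged inconsistent
ceph health detail

# List inconsistent PGs for a pool, then inspect one PG's objects
rados list-inconsistent-pg rbd
rados list-inconsistent-obj 2.1a --format=json-pretty

# If list-inconsistent-obj comes back empty (as in this thread), the
# recorded inconsistency data may be stale; re-run a deep scrub so it
# is regenerated before inspecting again
ceph pg deep-scrub 2.1a

# Attempt an automatic repair of the PG
ceph pg repair 2.1a
```

If errors persist after `pg repair`, comparing the object replicas on the individual OSDs is usually the next step.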

Re: [ceph-users] POOL_TARGET_SIZE_BYTES_OVERCOMMITTED

2019-09-25 Thread Oliver Freyermuth
Hi together, can somebody confirm whether I should put this in a ticket, or whether this is wanted (but very unexpected) behaviour? We have some pools which gain a factor of three by compression: POOL ID STORED OBJECTS USED %USED MAX AVAIL
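For context, `POOL_TARGET_SIZE_BYTES_OVERCOMMITTED` is raised by the pg_autoscaler when the sum of the pools' `target_size_bytes` hints exceeds the cluster's capacity; whether compressed (STORED) or raw (USED) size should count against that budget is exactly the question here. A hedged sketch of inspecting and adjusting the hints (`mypool` is a placeholder):

```shell
# Show each pool's size hints and what the autoscaler thinks
ceph osd pool autoscale-status

# Clear an absolute size hint that is causing the overcommit warning
ceph osd pool set mypool target_size_bytes 0

# Alternatively, express the expected consumption as a ratio of the
# cluster instead of an absolute byte count
ceph osd pool set mypool target_size_ratio 0.2
```

Only one of `target_size_bytes` and `target_size_ratio` should normally be set per pool.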

Re: [ceph-users] RADOS EC: is it okay to reduce the number of commits required for reply to client?

2019-09-25 Thread Gregory Farnum
On Thu, Sep 19, 2019 at 12:06 AM Alex Xu wrote: > > Hi Cephers, > > We are testing the write performance of Ceph EC (Luminous, 8 + 4), and > noticed that tail latency is extremely high. Say, avgtime of the 10th > commit is 40ms, acceptable as it's an all-HDD cluster; the 11th is 80ms, > doubled; then the 12th
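Since an EC write must wait for all k+m shard commits, the slowest shard sets the latency, so the per-shard tail can be investigated on the OSDs directly. A hedged sketch of the usual inspection commands (`osd.11` is a placeholder for a suspect OSD):

```shell
# Rough per-OSD commit/apply latency overview across the cluster
ceph osd perf

# Inspect the slowest recently completed ops on one OSD,
# including per-event timestamps inside each op
ceph daemon osd.11 dump_historic_ops

# In-flight ops, useful while a tail-latency spike is happening
ceph daemon osd.11 dump_ops_in_flight
```

Comparing the event timelines of slow ops across the shards of one EC write usually shows whether a single OSD (disk) is consistently the straggler.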

[ceph-users] Ceph RDMA setting for public/cluster network

2019-09-25 Thread Liu, Changcheng
Hi all, Does anyone know how to set "ms_async_rdma_device_name" for OSD in ceph.conf in production environment? When deploying Ceph, it’s better to isolate public & cluster network. For OSD daemon, public & cluster network use different configuration means that they need use diffe
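One way a split like this is commonly sketched (an assumption-laden example, not a verified production recipe — the networks and the `mlx5_0` device name are placeholders, and whether `ms_async_rdma_device_name` can differ per messenger is precisely the open question) is to keep TCP on the public network and run only cluster traffic over RDMA via the per-messenger type options:

```ini
[global]
public_network  = 10.0.1.0/24      # assumption: front-side network
cluster_network = 10.0.2.0/24      # assumption: back-side network

[osd]
# Keep client-facing traffic on the plain async messenger and
# run only cluster (replication/recovery) traffic over RDMA
ms_public_type  = async+posix
ms_cluster_type = async+rdma
ms_async_rdma_device_name = mlx5_0  # assumption: your NIC's verbs device
```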

[ceph-users] slow requests after rocksdb delete wal or table_file_deletion

2019-09-25 Thread lin zhou
Hi cephers, recently I have been testing ceph 12.2.12 with bluestore using cosbench. Both the SATA OSDs and the SSD OSDs have slow requests. Many slow requests occur, and most of the slow logs come after rocksdb "delete wal" or "table_file_deletion" logs. Does it mean the bottleneck is RocksDB? If so, how to improve it; if not, how to fi
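A hedged sketch of how one might test the "RocksDB is the bottleneck" theory from the OSD admin socket (`osd.0` is a placeholder; run against the OSDs that log the slow requests):

```shell
# Dump the OSD's perf counters and pick out the rocksdb section
# (submit/commit latencies, compaction counters, etc.)
ceph daemon osd.0 perf dump | python -m json.tool | grep -i -A 2 rocksdb

# Look at the slowest recently completed ops and where inside the
# op each one spent its time
ceph daemon osd.0 dump_historic_ops

# On a quiet OSD, trigger a manual RocksDB compaction and see
# whether slow requests drop afterwards
ceph daemon osd.0 compact
```

If slow requests cluster around compaction activity, throttling the benchmark load or moving the OSD's DB device to faster media are the usual directions to investigate.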