> [4] librados AIO callbacks are currently single-threaded so high queue
> depths can result in a bottleneck here.
>
>
> On Tue, Jul 12, 2016 at 3:42 AM, Wido den Hollander <w...@42on.com> wrote:
>
>>
>> > On 12 July 201
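A rough way to check how much extra throughput higher queue depths actually buy on a given cluster is to compare rados bench runs at different concurrency levels; the pool name and sizes below are only placeholders:

  # 30-second write test at queue depth 1, then at queue depth 64
  # (-t sets the number of concurrent ops, -b the write size in bytes)
  rados -p rbd bench 30 write -t 1 -b 4096
  rados -p rbd bench 30 write -t 64 -b 4096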
Can anyone explain, or at least point to the lines of code in librbd by
which objects are created? I need to know the relation between objects and
fio's iodepth...
Thanks in advance
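For reference, a minimal fio invocation against an RBD image looks roughly like the following (pool, image, and client names are placeholders); iodepth is simply the number of asynchronous I/Os fio keeps in flight at once:

  # 4k random writes against an RBD image via fio's rbd ioengine;
  # --iodepth sets how many I/Os are outstanding at any moment
  fio --name=rbd-randwrite --ioengine=rbd \
      --clientname=admin --pool=rbd --rbdname=fio_test \
      --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 --direct=1

With RBD's default 4 MB object size, each in-flight 4k write lands in some 4 MB object of the image, so at iodepth=32 up to 32 objects can be written concurrently; the backing objects themselves are created lazily, the first time data is written to them.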
> There should be a line in the log specifying which assert is failing;
> post that along with, say, 10 lines from the top of that.
>
>
>
> Thanks & Regards
>
> Somnath
>
>
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> Of Mansour Shafaei
Hi All,
Has anyone faced a similar issue? I do not have a problem with random read,
sequential read, and sequential writes, though. Every time I try running fio
for random writes, one OSD in the cluster crashes. Here is what I see at the
tail of the log:
ceph version 0.94.6
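Following the suggestion above to locate the failing assert, something like this could pull it out of the OSD log together with the lines just before it (the log path and OSD id are assumptions):

  # show the FAILED assert line plus the 10 log lines preceding it
  grep -n -B 10 'FAILED assert' /var/log/ceph/ceph-osd.0.log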
Running the ./cbt.py commands step by step on a cluster of VMs with a CentOS
7.2 image on them, I realized that the following step does not go through:
pdsh -R ssh -w root@vm10 ceph-authtool --create-keyring --gen-key
--name=mon. /tmp/cbt/ceph/keyring --cap mon 'allow *'ceph-authtool
I particularly see
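One way to narrow this down might be to run the failing step by hand on the target VM and look at the error directly; the host name and paths below are taken from the command above:

  # make sure the working directory exists, then run the keyring step manually
  ssh root@vm10 "mkdir -p /tmp/cbt/ceph"
  ssh root@vm10 "ceph-authtool --create-keyring --gen-key --name=mon. /tmp/cbt/ceph/keyring --cap mon 'allow *'"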