Sorry about the misused term 'OSS' (object storage server, a term often
used in the Lustre filesystem); what I meant is 4 hosts, each managing 12 OSDs.
Thanks to anyone who can answer any of my questions.
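To make questions 1 and 4 below concrete, my rough mental model of the placement is the sketch below. This is an illustration only: real Ceph maps an object to a placement group with the rjenkins hash and then picks OSDs with CRUSH, not with Python's hashlib or the round-robin stand-in used here, and in a real cluster `ceph osd map <pool> <object>` prints the actual PG and acting set.

```python
# Simplified sketch of Ceph's object -> PG -> OSD placement.
# NOT the real algorithm: Ceph uses the rjenkins hash and CRUSH;
# hashlib.md5 and the round-robin pg_to_osds below are stand-ins.
import hashlib

def object_to_pg(object_name: str, pg_num: int) -> int:
    """Stable hash of the object name, reduced mod pg_num -> PG id."""
    h = int.from_bytes(hashlib.md5(object_name.encode()).digest()[:4], "little")
    return h % pg_num

def pg_to_osds(pg_id: int, osds: list, size: int = 3) -> list:
    """Stand-in for CRUSH: choose `size` OSDs for the PG.
    The first entry plays the role of the primary OSD."""
    return [osds[(pg_id + i) % len(osds)] for i in range(size)]

osds = list(range(48))             # 48 OSDs, as in the cluster described above
pg = object_to_pg("myobject", pg_num=256)
acting = pg_to_osds(pg, osds)      # acting[0] would be the primary
print(pg, acting)
```

Since the client computes this mapping itself from the cluster map and then talks to the chosen OSDs directly, the hosts should not sit in the data path, which is what question 4 is asking about.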

Best,
Jialin
NERSC/LBNL

On Sun, Jun 17, 2018 at 11:29 AM Jialin Liu <jaln...@lbl.gov> wrote:

> Hello,
>
> I have a couple of questions regarding I/O on OSDs via librados.
>
>
> 1. How can I check which OSD is receiving data?
>
> 2. Can a write operation return to the application as soon as the write to
> the primary OSD is done, or does it return only once the data has been
> replicated to the two other replicas (size=3)?
>
> 3. What is the I/O size at the lower level in librados? E.g., if I send a
> 100MB request with one thread, does librados split the data into
> transactions of a fixed size?
>
> 4. I have 4 hosts and 48 OSDs; will the 4 hosts become a bottleneck?
> According to the Ceph documentation, once the client has received the
> cluster map it can talk to the OSDs directly, so my assumption is that the
> maximum parallelism depends on the number of OSDs. Is this correct?
>
>
> Best,
>
> Jialin
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com