Hi Kyle and Greg,
Thanks for the response; I will get back to you with more details tomorrow.

Thanks,
Guang
On Oct 22, 2013, at 9:37 AM, Kyle Bader <kyle.ba...@gmail.com> wrote:

> Besides what Mark and Greg said, it could be due to additional hops through
> network devices. What network devices are you using, what is the network
> topology, and does your CRUSH map reflect the network topology?
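> 
> A quick way to sanity-check that, assuming the standard ceph/crushtool CLI,
> is something along the lines of:
> 
>   ceph osd tree
>   ceph osd getcrushmap -o crush.bin && crushtool -d crush.bin -o crush.txt
> 
> and then confirming that the host/rack buckets in the decompiled map match
> the physical layout. (Just a sketch; your map may be organized differently.)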
> 
> On Oct 21, 2013 9:43 AM, "Gregory Farnum" <g...@inktank.com> wrote:
> On Mon, Oct 21, 2013 at 7:13 AM, Guang Yang <yguan...@yahoo.com> wrote:
> > Dear ceph-users,
> > Recently I deployed a Ceph cluster with RadosGW, going from a small one
> > (24 OSDs) to a much bigger one (330 OSDs).
> >
> > When using rados bench to test the small cluster (24 OSDs), the average
> > latency was around 3 ms (object size 5K), while for the larger one (330
> > OSDs) the average latency was around 7 ms (object size 5K), more than
> > twice that of the small cluster.
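> >
> > (For context, a rados bench write test with 5K objects looks roughly like:
> >
> >   rados bench -p <pool> 60 write -b 5120 -t 16
> >
> > where -b is the object size in bytes and -t the number of concurrent
> > operations; the duration and concurrency shown here are placeholders, not
> > necessarily the exact values I ran. The latency above is the average
> > latency reported by rados bench.)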
> >
> > The OSDs in the two clusters have the same configuration: SAS disks, each
> > with two partitions, one for the journal and the other for metadata.
> >
> > As for PG numbers, the small cluster was tested with a pool having 100 PGs,
> > while the pool on the large cluster has 43333 PGs (since I plan to scale
> > the cluster further, I chose a much larger PG count).
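> >
> > (For what it's worth, the commonly cited rule of thumb of roughly
> > (number of OSDs * 100) / replica count, rounded up to a power of two,
> > would suggest 330 * 100 / 3 ~= 11000, i.e. 16384 PGs for this cluster
> > assuming 3x replication; I went well above that to leave headroom for
> > further growth.)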
> >
> > Does my test result make sense? That is, as the number of PGs and OSDs
> > increases, might performance drop (latency increase)?
> 
> Besides what Mark said, can you describe your test in a little more
> detail? Writing/reading, length of time, number of objects, etc.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

