[ceph-users] Re: Strange performance drop and low oss performance

2020-02-06 Thread Janne Johansson
> For object gateway, the performance is obtained with `swift-bench -t 64`, which uses 64 threads concurrently. Will the radosgw and HTTP overhead be so significant (94.5 MB/s to 26 MB/s for cluster1) when multiple threads are used? Thanks in advance!

Can't say what it "must" be, but if I log

[ceph-users] Re: Strange performance drop and low oss performance

2020-02-05 Thread Marc Roos
February 2020 16:34 To: quexian da Cc: ceph-users Subject: [ceph-users] Re: Strange performance drop and low oss performance On Wed 5 Feb 2020 at 16:19, quexian da wrote: > Thanks for your valuable answer! > Is the write cache specific to ceph? Could you please provide some

[ceph-users] Re: Strange performance drop and low oss performance

2020-02-05 Thread quexian da
Thanks for your valuable answer about write cache! For object gateway, the performance is obtained with `swift-bench -t 64`, which uses 64 threads concurrently. Will the radosgw and HTTP overhead be so significant (94.5 MB/s to 26 MB/s for cluster1) when multiple threads are used? Thanks in advance! On
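The comparison in this message can be sketched with the two benchmarks the thread refers to. The pool and config-file names below are placeholders, and the `swift-bench -t 64` invocation is taken from the message as quoted, not independently verified; the `awk` line simply quantifies the reported numbers:

```shell
# Benchmarks as described in the thread (placeholders; run against your
# own cluster -- shown commented out here):
#   rados bench -p <pool> 60 write    # raw RADOS write throughput
#   swift-bench -t 64 <conf_file>     # via radosgw's Swift API, 64 threads

# Quantify the reported drop for cluster1 (94.5 MB/s raw vs 26 MB/s via radosgw):
awk 'BEGIN { printf "radosgw reaches %.0f%% of raw RADOS throughput\n", 26 / 94.5 * 100 }'
```

The gap between the two numbers includes everything radosgw adds on top of RADOS: HTTP parsing, authentication, and per-object metadata operations, so some drop is expected even with 64 client threads.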

[ceph-users] Re: Strange performance drop and low oss performance

2020-02-05 Thread Janne Johansson
On Wed 5 Feb 2020 at 16:19, quexian da wrote: > Thanks for your valuable answer! > Is the write cache specific to ceph? Could you please provide some links to the documentation about the write cache? Thanks! It is all the possible caches used by ceph, by the device driver, the filesystem
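The reply's point is that a write can be acknowledged by several layers of caching before it reaches stable media, so "write cache" is not a single Ceph setting. A sketch of where to look on a Linux OSD host; device paths are placeholders and the commands need real hardware and root, so they are shown commented out:

```shell
# Layers that may buffer a write before it is durable: Ceph itself
# (e.g. the RBD client cache), the kernel page cache / filesystem, the
# device driver, and the drive's own volatile write cache.
# Inspection commands (placeholders, not run here):
#   hdparm -W /dev/sdX                     # query a SATA drive's write-cache state
#   cat /sys/block/sdX/queue/write_cache   # kernel's view: "write back" or "write through"
```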

[ceph-users] Re: Strange performance drop and low oss performance

2020-02-05 Thread quexian da
Thanks for your valuable answer! Is the write cache specific to ceph? Could you please provide some links to the documentation about the write cache? Thanks! Do you have any idea about the slow oss speed? Is it normal that the write performance of the object gateway is slower than that of rados

[ceph-users] Re: Strange performance drop and low oss performance

2020-02-05 Thread Janne Johansson
On Wed 5 Feb 2020 at 11:14, quexian da wrote: > Hello, > I'm a beginner on ceph. I set up three ceph clusters on google cloud. Cluster1 has three nodes and each node has three disks. Cluster2 has three nodes and each node has two disks. Cluster3 has five nodes and each node has five