Re: why we use two ObjectStore::Transaction in ReplicatedBackend::submit_transaction?

2015-11-01 Thread Sage Weil
On Sun, 1 Nov 2015, ??? wrote: > Yes, I think so. Keeping them separate and passing them to ObjectStore::queue_transactions() would increase the time spent on transaction encoding and consume more CPU. > Transaction::append accounts for about 0.8% of CPU in my environment. The transaction encoding is a…
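To make the trade-off being debated concrete, here is a minimal toy model in plain C++ (not Ceph's actual classes; Transaction, submit_merged, and submit_separate are illustrative names): option A merges local_t into op_t with an append before submission, option B keeps the two transactions separate when handing them to the objectstore queue and avoids the extra copy.

#include <cstdint>
#include <list>
#include <string>
#include <vector>

struct Transaction {
    std::vector<uint8_t> bl;   // pretend "encoded" byte stream

    void write(const std::string& oid, const std::string& data) {
        // toy "op": just record the object name and payload
        bl.insert(bl.end(), oid.begin(), oid.end());
        bl.insert(bl.end(), data.begin(), data.end());
    }
    // Appending copies the other transaction's bytes; this copy/re-encode
    // is the extra CPU the thread is measuring (~0.8% for append).
    void append(const Transaction& other) {
        bl.insert(bl.end(), other.bl.begin(), other.bl.end());
    }
};

// Option A: merge local_t into op_t and submit a single transaction.
void submit_merged(std::list<Transaction>& queue, Transaction op_t, Transaction local_t) {
    op_t.append(local_t);                // pays the append cost here
    queue.push_back(std::move(op_t));
}

// Option B: keep them separate and queue both in order, avoiding the append
// copy (the alternative being discussed for queue_transactions()).
void submit_separate(std::list<Transaction>& queue, Transaction op_t, Transaction local_t) {
    queue.push_back(std::move(op_t));
    queue.push_back(std::move(local_t));
}

int main() {
    std::list<Transaction> queue;          // stands in for the objectstore queue
    Transaction op_t, local_t;
    op_t.write("foo", "client data");      // client-op transaction
    local_t.write("pglog", "log entry");   // local metadata transaction
    submit_merged(queue, op_t, local_t);   // or: submit_separate(queue, op_t, local_t);
    return 0;
}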

Re: why we use two ObjectStore::Transaction in ReplicatedBackend::submit_transaction?

2015-11-01 Thread Sage Weil
On Sun, 1 Nov 2015, Sage Weil wrote: > On Sun, 1 Nov 2015, ??? wrote: > > Yes, I think so. Keeping them separate and passing them to ObjectStore::queue_transactions() would increase the time spent on transaction encoding and consume more CPU. > > Transaction::append accounts for about 0.8% of CPU…

RE: why we use two ObjectStore::Transaction in ReplicatedBackend::submit_transaction?

2015-11-01 Thread Somnath Roy
Sage, is it possible that we can't reuse the op_t because it could still be in the messenger queue before parent->log_operation() is called? Thanks & Regards, Somnath…

RE: why we use two ObjectStore::Transaction in ReplicatedBackend::submit_transaction?

2015-11-01 Thread Somnath Roy
Huh.. it seems op_t is already copied in generate_subop() -> ::encode(*op_t, wr->get_data()), so this shouldn't be an issue…
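A hedged sketch of the point Somnath makes, using simplified stand-in types (Transaction and SubOpMessage are toy names, not Ceph's): an ::encode-style serialization copies the transaction's bytes into the outgoing message buffer, so whatever sits in the messenger queue is independent of the original op_t, which can still be handed to the local objectstore.

#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

struct Transaction {
    std::string ops;                           // toy op stream
    void encode(std::vector<uint8_t>& out) const {
        out.insert(out.end(), ops.begin(), ops.end());
    }
};

struct SubOpMessage {
    std::vector<uint8_t> data;                 // stands in for wr->get_data()
};

int main() {
    Transaction op_t;
    op_t.ops = "write foo 4096";

    // generate_subop(): encode op_t into the message sent to the replica.
    SubOpMessage wr;
    op_t.encode(wr.data);                      // an independent copy now lives in the message

    // The local submission path can keep using (or even modifying) op_t;
    // the bytes already queued in the message are unaffected.
    op_t.ops += "; setattr foo _";
    assert(wr.data.size() == std::string("write foo 4096").size());
    return 0;
}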

Only client.admin can mount cephfs by ceph-fuse

2015-11-01 Thread Jaze Lee
Hello, I find that only client.admin can mount CephFS. [root@ceph-base-0 ceph]# ceph auth get client.cephfs_user exported keyring for client.cephfs_user [client.cephfs_user] key = AQDZ3DZWR7nqBxAAzSoU/yRz1oJsOYdYrTAzcw== caps mds = "allow *" caps mon = "allow *" caps osd = "allow *" [root@ceph…

[performance] why rbd_aio_write latency increase from 4ms to 7.3ms after the same test

2015-11-01 Thread hzwulibin
Hi, same environment: after running a test script, the IO latency (obtained with sudo ceph --admin-daemon /run/ceph/guests/ceph-client.*.asok perf dump) increased from about 4 ms to 7.3 ms. qemu version: debian 2.1.2, kernel: 3.10.45-openstack-amd64, system: debian 7.8, ceph: 0.94.5, VM CPU number: 4 (cpu MHz: 2599.…

RE: [performance] why rbd_aio_write latency increase from 4ms to 7.3ms after the same test

2015-11-01 Thread Chen, Xiaoxi
Pre-allocate the volume with dd across the entire RBD before you do any performance test :). In this case, you may want to re-create the RBD, pre-allocate it, and try again…
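For reference, a sketch of the same pre-allocation step done programmatically with the librbd C API instead of dd (the pool name "rbd", image name "testimage", and 4 MB chunk size are assumptions; error handling is abbreviated). The idea is the one Chen describes: touch every offset once so later benchmark writes no longer pay the first-allocation cost on a thin-provisioned image.

#include <rados/librados.h>
#include <rbd/librbd.h>
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    rados_t cluster;
    rados_ioctx_t ioctx;
    rbd_image_t image;

    if (rados_create(&cluster, "admin") < 0) return 1;             // connect as client.admin
    rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
    if (rados_connect(cluster) < 0) return 1;
    if (rados_ioctx_create(cluster, "rbd", &ioctx) < 0) return 1;   // pool name assumed
    if (rbd_open(ioctx, "testimage", &image, NULL) < 0) return 1;   // image name assumed

    rbd_image_info_t info;
    rbd_stat(image, &info, sizeof(info));                           // get image size

    const uint64_t chunk = 4ULL * 1024 * 1024;                      // 4 MB writes, like dd bs=4M
    std::vector<char> zeros(chunk, 0);
    for (uint64_t off = 0; off < info.size; off += chunk) {
        uint64_t len = std::min<uint64_t>(chunk, info.size - off);
        if (rbd_write(image, off, (size_t)len, zeros.data()) < 0) { // fill every offset once
            fprintf(stderr, "write failed at offset %llu\n", (unsigned long long)off);
            break;
        }
    }

    rbd_close(image);
    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    return 0;
}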