Hi Milosz,
The OSD op worker threads which handle requests are part of a sharded thread
pool. We observed that the distribution of work across these shards was somewhat
uneven. When we last checked, most of the new/delete calls were originating
from the Index Manager code in the read path.
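To illustrate the sharding idea, here is a minimal sketch of a sharded work queue in the spirit of the OSD's sharded thread pool. All names here (ShardedQueue, Shard, enqueue) are illustrative, not Ceph's actual API; the point is only that each op is hashed to one shard's queue, so a skewed hash of the incoming op keys can leave some shards busier than others.

```cpp
// Illustrative sharded queue: ops are routed to shards by hash.
// (Hypothetical names; the real OSD ShardedThreadPool differs.)
#include <cassert>
#include <cstddef>
#include <functional>
#include <queue>
#include <vector>

struct Shard {
    std::queue<int> ops;  // queued op identifiers for this shard
};

class ShardedQueue {
    std::vector<Shard> shards_;
public:
    explicit ShardedQueue(std::size_t n) : shards_(n) {}

    // Route an op to a shard by hashing its key (e.g. a PG id).
    // A poor key distribution here is what produces uneven shard load.
    void enqueue(int key) {
        std::size_t idx = std::hash<int>{}(key) % shards_.size();
        shards_[idx].ops.push(key);
    }

    std::size_t shard_depth(std::size_t idx) const { return shards_[idx].ops.size(); }
    std::size_t num_shards() const { return shards_.size(); }
};
```

In the real pool each shard also owns worker threads that drain its queue; the sketch keeps only the routing step, which is where the unevenness originates.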
Thanks,
Viju
Hi,
There is a sync thread (sync_entry in FileStore.cc) which runs periodically
and executes sync_filesystem() to ensure that the data on disk is consistent.
The journal entries are trimmed only after a successful sync_filesystem() call.
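As a rough sketch of that ordering (names like Journal, sync_entry_once, and the trivial sync_filesystem() stand-in are my own, not FileStore's), the key invariant is simply that the trim point only advances after the flush succeeds, so replayable entries are never dropped before the data they describe is durable:

```cpp
// Sketch of "sync, then trim": journal entries survive until a
// successful filesystem sync makes the underlying data durable.
// (Illustrative only; FileStore's sync_entry is far more involved.)
#include <cassert>
#include <vector>

struct Journal {
    std::vector<int> entries;   // pending entries (op sequence numbers)
    int trimmed_up_to = 0;      // highest seq safely trimmed
};

bool sync_filesystem() {
    // Stand-in for flushing the backing filesystem (cf. syncfs(2)).
    return true;
}

// One iteration of the periodic sync loop: trim only on success.
void sync_entry_once(Journal& j) {
    if (j.entries.empty()) return;
    int max_seq = j.entries.back();
    if (sync_filesystem()) {
        j.trimmed_up_to = max_seq;  // everything up to max_seq is durable
        j.entries.clear();          // safe to trim the journal now
    }
    // On failure the entries stay, ready for replay.
}
```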
Thanks
Viju
-Original Message-
From:
Sage,
We tested with the latest version of tcmalloc as well; it exhibited the same
behavior.
Viju
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Sage Weil
Sent: Wednesday, December 03, 2014 9:30 PM
To: Chaitanya Huilgol
Hi,
You can refer to _create_collection, which is invoked in the context of the
OP_MKCOLL transaction opcode (FileStore.cc) for the FileStore backend.
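To make the dispatch shape concrete, here is a simplified sketch of how a backend might map a transaction opcode like OP_MKCOLL to a collection-create handler. Everything below (Store, do_op, the string collection id, the hand-rolled errno values) is illustrative; the real FileStore switch covers many more opcodes and error paths.

```cpp
// Illustrative transaction-opcode dispatch (hypothetical names;
// not FileStore's actual interface).
#include <cassert>
#include <set>
#include <string>

enum Op { OP_MKCOLL = 1 };

struct Store {
    std::set<std::string> collections;

    // cf. _create_collection: create the collection, fail if it exists.
    int create_collection(const std::string& cid) {
        return collections.insert(cid).second ? 0 : -17;  // -EEXIST
    }

    // Decode one transaction op and route it to its handler.
    int do_op(Op op, const std::string& arg) {
        switch (op) {
            case OP_MKCOLL: return create_collection(arg);
            default:        return -95;  // -EOPNOTSUPP
        }
    }
};
```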
Thanks,
Viju
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of xinxin shu
Hi,
I had gotten teuthology to work reasonably well in my local setup some time
back with a few quick, ugly hacks. The main changes were:
1. Explicitly named my test systems plana01, plana02, plana03. Some of
the teuthology code which checks for VM instances does compare with