Re: [ceph-users] Question about rbd flag(RBD_FLAG_OBJECT_MAP_INVALID)

2015-10-27 Thread Shu, Xinxin
Thanks for your reply. Why not rebuild the object map when the object-map feature is enabled? Cheers, xinxin -Original Message- From: Jason Dillaman [mailto:dilla...@redhat.com] Sent: Tuesday, October 27, 2015 9:20 PM To: Shu, Xinxin Cc: ceph-users Subject: Re: Question about rbd flag

RE: question on write transaction

2015-08-03 Thread Shu, Xinxin
Thanks for pointing that out. -Original Message- From: Howard Chu [mailto:h...@symas.com] Sent: Tuesday, August 4, 2015 9:46 AM To: Shu, Xinxin; openldap-technical@openldap.org Subject: Re: question on write transaction Shu, Xinxin wrote: > Hi list, recently I wrote app on LMDB, found t

question on write transaction

2015-08-03 Thread Shu, Xinxin
Hi list, recently I wrote an app on LMDB and found that for every write, LMDB takes a global lock when the transaction is created and releases this lock in mdb_txn_commit; on Ubuntu, this lock is implemented with pthread_mutex_t. In my app, I create a write transaction in a thread, then commit this write

RE: Ceph write path optimization

2015-07-29 Thread Shu, Xinxin
Hi Somnath, any performance data for journal on 128M NVRAM partition with hammer release? Cheers, xinxin -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Somnath Roy Sent: Wednesday, July 29, 2015 5:08 AM To: ceph-devel@vg

RE: large write amplification

2015-05-06 Thread Shu, Xinxin
the degree of B-Tree of lmdb database? Cheers, xinxin -Original Message- From: Леонид Юрьев [mailto:l...@yuriev.ru] Sent: Tuesday, May 05, 2015 6:16 PM To: Shu, Xinxin Cc: openldap-technical@openldap.org Subject: Re: large write amplification Hm, ANY change needs a btree-update. Let have

RE: large write amplification

2015-05-05 Thread Shu, Xinxin
.ru] Sent: Monday, May 04, 2015 6:59 PM To: Shu, Xinxin Cc: openldap-technical@openldap.org Subject: Re: large write amplification Hi, Xinxin. I will try to answer briefly, without a details: - To allow readers be never blocked by a writer, LMDB provides a snapshot of data, indexes and director

large write amplification

2015-05-03 Thread Shu, Xinxin
Hi list, recently I ran micro tests of LMDB on a DC3700 (200GB), using the bench code at https://github.com/hyc/leveldb/tree/benches . I tested fillrandsync mode and collected iostat data, and found that the write amplification is large. For the fillrandsync case: IOPS: 1020 ops/sec. Iostat data shows that w/s

RE: Firefly integration branch : OK

2015-04-27 Thread Shu, Xinxin
Hi loic, can we merge PR (4414 ~ 4416) into integration branch now -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Loic Dachary Sent: Monday, April 27, 2015 4:06 PM To: Shu, Xinxin Cc: Ceph Development Subject: Firefly

RE: LMDB space amplification

2015-04-21 Thread Shu, Xinxin
I tried 1024- and 2048-byte value sizes, but the space amplification is ~2. If I want to reduce this space amplification, what's your suggestion? Cheers, xinxin -Original Message- From: Howard Chu [mailto:h...@symas.com] Sent: Wednesday, April 22, 2015 3:48 AM To: Sh

LMDB space amplification

2015-04-21 Thread Shu, Xinxin
Hi list, recently I have integrated lmdb into ceph and got initial results. I dumped all the records of lmdb; there are several different sizes of key-value pairs: 1. key size: 45 bytes, value size: 124 bytes 2. key size: 85 bytes, value size: 1187 bytes 3. key size: 57 bytes, value size: 135 bytes 4.

LMDB error

2015-04-02 Thread Shu, Xinxin
Hi list, I am working on integrating LMDB with ceph. I hit the following error, "Invalid argument", when I commit a transaction to LMDB; details are described below. Two threads operate on LMDB; each thread is responsible for creating a write transaction and a read transaction, the write transacti

how to enable debug message

2015-04-02 Thread Shu, Xinxin
Hi list, I want to enable debug messages for lmdb. I noticed the MDB_DEBUG macro and defined MDB_DEBUG when compiling lmdb, but did not get any debug messages. How can I enable debug messages?

LMDB error

2015-03-23 Thread Shu, Xinxin
Hi list, I'm now working on integrating LMDB with ceph, and I encounter the following error when initializing a transaction: MDB_BAD_RSLOT: Invalid reuse of reader locktable slot. What I want to know is: in which situations does this error occur? thanks Cheers, xinxin

FW: questions about LMDB

2015-03-19 Thread Shu, Xinxin
CC technical list Cheers, xinxin -Original Message- From: openldap-devel [mailto:openldap-devel-boun...@openldap.org] On Behalf Of Shu, Xinxin Sent: Friday, March 20, 2015 8:50 AM To: Howard Chu; openldap-de...@openldap.org Subject: RE: questions about LMDB Sorry for the wrong mail

RE: keyvaluestore speed up?

2015-03-19 Thread Shu, Xinxin
I think rocksdb can support this configuration. Cheers, xinxin -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Xinze Chi Sent: Thursday, March 19, 2015 5:23 PM To: Sage Weil; sj...@redhat.com; Haomai Wang; ceph-devel@vger.k

RE: questions about LMDB

2015-03-19 Thread Shu, Xinxin
Sorry for the wrong mail list , I will forward this request to technical list Cheers, xinxin -Original Message- From: Howard Chu [mailto:h...@symas.com] Sent: Thursday, March 19, 2015 11:52 PM To: Shu, Xinxin; openldap-devel@openldap.org Subject: Re: questions about LMDB Shu, Xinxin

RE: questions about LMDB

2015-03-19 Thread Shu, Xinxin
Message- From: Shu, Xinxin Sent: Thursday, March 19, 2015 3:49 PM To: openldap-devel@openldap.org Cc: Shu, Xinxin Subject: questions about LMDB Hi list , Recently I read docs about lmdb , there are two sentences 1) readers do not block writers 2) writers do not block readers I can

questions about LMDB

2015-03-19 Thread Shu, Xinxin
Hi list, recently I read docs about lmdb; there are two sentences: 1) readers do not block writers 2) writers do not block readers. I can understand 'readers do not block writers', but cannot understand the second one. Can someone help explain how lmdb achieves 'writers do not block readers

Re: [ceph-users] RADOS pool snaps and RBD

2014-10-20 Thread Shu, Xinxin
comments inline. -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Shu, Xinxin Sent: Tuesday, October 21, 2014 9:13 AM To: Xavier Trilla; ceph-users@lists.ceph.com Subject: Re: [ceph-users] RADOS pool snaps and RBD -Original Message

Re: [ceph-users] RADOS pool snaps and RBD

2014-10-20 Thread Shu, Xinxin
-Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Xavier Trilla Sent: Tuesday, October 21, 2014 12:42 AM To: ceph-users@lists.ceph.com Subject: [ceph-users] RADOS pool snaps and RBD Hi, It seems Ceph doesn't allow rados pool snapshots on RBD po

Re: [ceph-users] how to resolve : start mon assert == 0

2014-10-20 Thread Shu, Xinxin
Please refer to http://tracker.ceph.com/issues/8851 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of minchen Sent: Monday, October 20, 2014 3:42 PM To: ceph-users; ceph-de...@vger.kernel.org Subject: [ceph-users] how to resolve : start mon assert == 0 Hello , all when i r

Re: [ceph-users] Performance doesn't scale well on a full ssd cluster.

2014-10-16 Thread Shu, Xinxin
We observe the same issue on our 12-SSD setup; disabling all logs may be helpful. Cheers, xinxin From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark Wu Sent: Friday, October 17, 2014 12:18 AM To: ceph-users@lists.ceph.com Subject: [ceph-users] Performance doesn't sca

RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params

2014-10-14 Thread Shu, Xinxin
net] Sent: Wednesday, October 15, 2014 10:23 AM To: Shu, Xinxin Cc: Andreas Bluemle; Paul Von-Stamwitz; Stefan Priebe; Somnath Roy; ceph-devel@vger.kernel.org; Zhang, Jian Subject: RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params On Wed, 15 Oct 2014, Shu, Xinxin wrote: > Hi all

RE: severe librbd performance degradation in Giant

2014-09-19 Thread Shu, Xinxin
12 x Intel DC 3700 200GB, every SSD has two OSDs. Cheers, xinxin -Original Message- From: Stefan Priebe [mailto:s.pri...@profihost.ag] Sent: Friday, September 19, 2014 2:54 PM To: Shu, Xinxin; Somnath Roy; Alexandre DERUMIER; Haomai Wang Cc: Sage Weil; Josh Durgin; ceph-devel

RE: severe librbd performance degradation in Giant

2014-09-18 Thread Shu, Xinxin
My bad, with the latest master we got ~120K IOPS. Cheers, xinxin -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Shu, Xinxin Sent: Friday, September 19, 2014 9:08 AM To: Somnath Roy; Alexandre DERUMIER; Haomai Wang Cc

RE: severe librbd performance degradation in Giant

2014-09-18 Thread Shu, Xinxin
I also observed performance degradation on my full SSD setup. I could get ~270K IOPS for 4KB random read with 0.80.4, but with the latest master I only got ~12K IOPS. Cheers, xinxin -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On B

Re: [ceph-users] Tracking the system calls for OSD write

2014-08-14 Thread Shu, Xinxin
The system call is invoked in FileStore::_do_transaction(). Cheers, xinxin From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Sudarsan, Rajesh Sent: Thursday, August 14, 2014 3:01 PM To: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com Subject: [ceph-users] Tracking th

how to enable ENCODE_DUMP debug option for message

2014-08-10 Thread Shu, Xinxin
Hi all, I want to debug messages and noticed that there is an ENCODE_DUMP debug option. I recompiled ceph with the following command line: ./do_autogen.sh -d 1 -e /var/message/ -n && make && make install, then executed 'rbd ls', but in the /var/message/ directory the message file did not g

RE: First attempt at rocksdb monitor store stress testing

2014-08-04 Thread Shu, Xinxin
Hi sage, I created a pull request https://github.com/ceph/rocksdb/pull/4 to fix the issue, please help review. Cheers, xinxin -Original Message- From: Sage Weil [mailto:sw...@redhat.com] Sent: Saturday, August 02, 2014 6:30 AM To: Shu, Xinxin Cc: Mark Nelson; ceph-devel

RE: First attempt at rocksdb monitor store stress testing

2014-07-31 Thread Shu, Xinxin
Hi sage, I created a pull request https://github.com/ceph/rocksdb/pull/3 , please help review. Cheers, xinxin -Original Message- From: Shu, Xinxin Sent: Thursday, July 31, 2014 4:42 PM To: 'Sage Weil' Cc: Mark Nelson; ceph-devel@vger.kernel.org Subject: RE: First attempt

RE: First attempt at rocksdb monitor store stress testing

2014-07-31 Thread Shu, Xinxin
(git version, compile time), since we may not care about these infos, we can remove this line from Makefile.am and generate util/build_version.cc by myself. Cheers, xinxin -Original Message- From: Sage Weil [mailto:sw...@redhat.com] Sent: Thursday, July 31, 2014 10:08 AM To: Shu, Xinxin

RE: First attempt at rocksdb monitor store stress testing

2014-07-30 Thread Shu, Xinxin
Is your report based on the wip-rocksdb-mark branch? Cheers, xinxin -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson Sent: Tuesday, July 29, 2014 12:56 AM To: Shu, Xinxin; ceph-devel@vger.kernel.org Subject: Re

RE: First attempt at rocksdb monitor store stress testing

2014-07-30 Thread Shu, Xinxin
ning disks and SSDs: http://nhm.ceph.com/mon-store-stress/Monitor_Store_Stress_Short_Tests.pdf Mark On 07/27/2014 11:45 PM, Shu, Xinxin wrote: > Hi mark, > > I tested this option on my setup , same issue happened , I will dig into it , > if you want to get info log , there is a worka

RE: First attempt at rocksdb monitor store stress testing

2014-07-30 Thread Shu, Xinxin
Hi mark, which way did you use to set a higher limit: the 'ulimit' command, or enlarging the rocksdb_max_open_files config option? Cheers, xinxin -Original Message- From: Mark Nelson [mailto:mark.nel...@inktank.com] Sent: Thursday, July 31, 2014 1:35 AM To: Shu, Xinxin;

RE: First attempt at rocksdb monitor store stress testing

2014-07-27 Thread Shu, Xinxin
: Saturday, July 26, 2014 12:10 AM To: Shu, Xinxin; ceph-devel@vger.kernel.org Subject: Re: First attempt at rocksdb monitor store stress testing Hi Xinxin, I'm trying to enable the rocksdb log file as described in config_opts using: rocksdb_log = The file gets created but is empty. Any i

RE: First attempt at rocksdb monitor store stress testing

2014-07-24 Thread Shu, Xinxin
To: Shu, Xinxin; ceph-devel@vger.kernel.org Subject: Re: First attempt at rocksdb monitor store stress testing Earlier today I modified the rocksdb options so I could enable universal compaction. Over all performance is lower but I don't see the hang/stall in the middle of the test e

RE: First attempt at rocksdb monitor store stress testing

2014-07-23 Thread Shu, Xinxin
Hi mark, I think this may be related to the 'verify_checksums' config option: when ReadOptions is initialized, this option defaults to true, so all data read from the underlying storage is verified against the corresponding checksums; however, this option cannot be configured in the wip-rocksdb branch. I w

RE: [RFC] add rocksdb support

2014-07-02 Thread Shu, Xinxin
g is right, I think an RWlock or a fine-grained lock is a good suggestion. -Original Message- From: Haomai Wang [mailto:haomaiw...@gmail.com] Sent: Tuesday, July 01, 2014 2:10 PM To: Sushma Gurram Cc: Shu, Xinxin; Mark Nelson; Sage Weil; Zhang, Jian; ceph-devel@vger.kernel.org Subject: Re:

RE: [RFC] add rocksdb support

2014-06-22 Thread Shu, Xinxin
0.495 668.360 0.383 680.673 0.376 -Original Message- From: Shu, Xinxin Sent: Saturday, June 14, 2014 11:50 AM To: Sushma Gurram; Mark Nelson; Sage Weil Cc: ceph-devel@vger.kernel.org; Zhang, Jian Subject: RE: [RFC] add rocksdb support Currently ceph will

RE: [RFC] add rocksdb support

2014-06-13 Thread Shu, Xinxin
- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Sushma Gurram Sent: Saturday, June 14, 2014 2:52 AM To: Shu, Xinxin; Mark Nelson; Sage Weil Cc: ceph-devel@vger.kernel.org; Zhang, Jian Subject: RE: [RFC] add rocksdb support Hi Xinxin, I tried to c

RE: [RFC] add rocksdb support

2014-06-09 Thread Shu, Xinxin
, currently this patch can be found https://github.com/xinxinsh/ceph/tree/wip-rocksdb . -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson Sent: Tuesday, June 10, 2014 1:12 AM To: Shu, Xinxin; Sage Weil Cc: ceph-devel

RE: [RFC] add rocksdb support

2014-05-28 Thread Shu, Xinxin
--- From: Mark Nelson [mailto:mark.nel...@inktank.com] Sent: Wednesday, May 21, 2014 9:06 PM To: Shu, Xinxin; Sage Weil Cc: ceph-devel@vger.kernel.org; Zhang, Jian Subject: Re: [RFC] add rocksdb support On 05/21/2014 07:54 AM, Shu, Xinxin wrote: > Hi, sage > > I will add rocksdb submodule

RE: [RFC] add rocksdb support

2014-05-21 Thread Shu, Xinxin
, 2014 9:19 AM To: Shu, Xinxin Cc: ceph-devel@vger.kernel.org Subject: Re: [RFC] add rocksdb support Hi Xinxin, I've pushed an updated wip-rocksdb to github/liewegas/ceph.git that includes the latest set of patches with the groundwork and your rocksdb patch. There is also a commit that

RE: [RFC] add rocksdb support

2014-03-06 Thread Shu, Xinxin
I didn't get the exact number, but from the size of the files, we don't have billions of entries. -Original Message- From: Andreas Joachim Peters [mailto:andreas.joachim.pet...@cern.ch] Sent: Wednesday, March 05, 2014 5:19 PM To: Haomai Wang; Alexandre DERUMIER Cc: Shu, Xinxin;

RE: [RFC] add rocksdb support

2014-03-05 Thread Shu, Xinxin
with 7.2k disks, replications 2x. -Original Message- From: Alexandre DERUMIER [mailto:aderum...@odiso.com] Sent: Wednesday, March 05, 2014 4:23 PM To: Shu, Xinxin Cc: ceph-devel@vger.kernel.org Subject: Re: [RFC] add rocksdb support >>Hi Alexandre, below is random io test r

RE: [RFC] add rocksdb support

2014-03-04 Thread Shu, Xinxin
rch 04, 2014 12:49 PM To: Shu, Xinxin Cc: ceph-devel@vger.kernel.org Subject: Re: [RFC] add rocksdb support >>Performance Test >>Attached file is the performance comparison of rocksdb and leveldb on four >>nodes with 40 osds, using 'rados bench' as the test tool. The per

[RFC] add rocksdb support

2014-03-02 Thread Shu, Xinxin
Hi all, this patch adds rocksdb support for ceph, enabling rocksdb for the omap directory. The rocksdb source code can be obtained from the link. To use rocksdb, the C++11 standard should be enabled; gcc version >= 4.7 is required for C++11 support. Rocksdb can be installed with the instructions described i

[ceph-users] can not get rbd cache perf counter

2013-11-27 Thread Shu, Xinxin
Recently I wanted to test the performance benefit of the rbd cache. I could not see an obvious performance benefit on my setup, so I tried to make sure the rbd cache was enabled, but I cannot get the rbd cache perf counters. In order to identify how to enable the rbd cache perf counters, I set up a simple setup (one client h

Re: [ceph-users] how to enable rbd cache

2013-11-26 Thread Shu, Xinxin
ter(l_objectcacher_cache_ops_hit, "cache_ops_hit"); plb.add_u64_counter(l_objectcacher_cache_ops_miss, "cache_ops_miss"); plb.add_u64_counter(l_objectcacher_cache_bytes_hit, "cache_bytes_hit"); plb.add_u64_counter(l_objectcacher_cache_bytes_miss, "cache_bytes_mi

Re: [ceph-users] how to enable rbd cache

2013-11-25 Thread Shu, Xinxin
cuttlefish. Openstack is folsom. Is there anything weird for you? Please let me know. -Original Message- From: Mike Dawson [mailto:mike.daw...@cloudapt.com] Sent: Tuesday, November 26, 2013 12:41 AM To: Shu, Xinxin Cc: Gregory Farnum; Mark Nelson; ceph-users@lists.ceph.com Subject:

[ceph-users] how to enable rbd cache

2013-11-25 Thread Shu, Xinxin
Recently I wanted to enable the rbd cache to identify its performance benefit. I added the rbd_cache=true option in my ceph config file and used 'virsh attach-device' to attach the rbd to a vm; below is my vdb xml file. 6b5ff6f4-9f8c-4fe0-84d6-9d795967c7dd i I do not know this i
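For reference, a hedged sketch of the client-side settings involved (option names are the documented ones for this era of Ceph; the cache size shown is an illustrative assumption, not a recommendation). When the rbd is attached through libvirt/QEMU, the disk's cache mode (e.g. cache='writeback' in the <driver> element) must also permit caching, and the admin socket is what exposes the perf counters asked about in the neighbouring thread:

```ini
[client]
    rbd cache = true
    rbd cache size = 33554432                      ; 32 MiB -- illustrative
    admin socket = /var/run/ceph/$name.$pid.asok   ; exposes rbd perf counters
```

With the admin socket in place, `ceph --admin-daemon /var/run/ceph/<name>.<pid>.asok perf dump` should show the cache counters.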

[ceph-users] cannot use "dd" to initialize rbd

2013-06-06 Thread Shu, Xinxin
Hi all, I want to do some performance tests on kernel rbd, and I set up a ceph cluster with 4 hosts; every host has 20 osds, and the journal of each osd is on a separate SSD partition. First I created 48 rbds and mapped them to six clients, 8 rbds per client, then I executed the following command

[ceph-users] wrong device name of kernel rbd

2013-06-04 Thread Shu, Xinxin
I have a ceph setup with cuttlefish for a kernel rbd test. After I mapped the rbds to the clients, I executed 'rbd showmapped'; the output looks as follows:

id pool image   snap device
1  ceph node7_1 -    /dev/rbd1
2  ceph node7_2 -    /dev/rbd2
3  ceph node7_3 -    /dev/rbd3
4  ceph node7_4 -