Thanks for your reply. Why not rebuild the object map when the object-map
feature is enabled?
Cheers,
xinxin
-Original Message-
From: Jason Dillaman [mailto:dilla...@redhat.com]
Sent: Tuesday, October 27, 2015 9:20 PM
To: Shu, Xinxin
Cc: ceph-users
Subject: Re: Question about rbd flag
Thanks for pointing that out
-Original Message-
From: Howard Chu [mailto:h...@symas.com]
Sent: Tuesday, August 4, 2015 9:46 AM
To: Shu, Xinxin; openldap-technical@openldap.org
Subject: Re: question on write transaction
Shu, Xinxin wrote:
> Hi list, recently I wrote app on LMDB, found t
Hi list, recently I wrote an app on LMDB and found that for every write, LMDB
takes a global lock when a transaction is created and releases this lock in
mdb_txn_commit; on Ubuntu, this lock is implemented with pthread_mutex_t. In
my app, I create a write transaction in a thread, then commit this write
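The single-writer behavior described above can be modeled with a global mutex taken at write-transaction begin and dropped at commit. This is a toy Python sketch of that pattern, not LMDB code; all names here are made up for illustration:

```python
import threading

class TinyTxnManager:
    """Toy model of LMDB's single-writer rule: one global mutex is taken
    when a write transaction begins and released on commit. Readers never
    touch this mutex. Illustration only, not LMDB's implementation."""

    def __init__(self):
        self._write_lock = threading.Lock()
        self.data = {}

    def begin_write(self):
        self._write_lock.acquire()  # blocks until the current writer commits

    def commit_write(self):
        self._write_lock.release()

def writer(mgr, key, n):
    for _ in range(n):
        mgr.begin_write()
        mgr.data[key] = mgr.data.get(key, 0) + 1  # safe: one writer at a time
        mgr.commit_write()

mgr = TinyTxnManager()
threads = [threading.Thread(target=writer, args=(mgr, "k", 1000)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(mgr.data["k"])  # 4000: all writers were serialized by the global lock
```

Because every write transaction funnels through one lock, write throughput from multiple threads cannot exceed that of a single writer; batching several updates per transaction is the usual way around this.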
Hi Somnath, do you have any performance data for a journal on a 128M NVRAM
partition with the hammer release?
Cheers,
xinxin
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Wednesday, July 29, 2015 5:08 AM
To: ceph-devel@vg
the degree of B-Tree of lmdb database?
Cheers,
xinxin
-Original Message-
From: Леонид Юрьев [mailto:l...@yuriev.ru]
Sent: Tuesday, May 05, 2015 6:16 PM
To: Shu, Xinxin
Cc: openldap-technical@openldap.org
Subject: Re: large write amplification
Hm, ANY change needs a btree-update.
Let have
.ru]
Sent: Monday, May 04, 2015 6:59 PM
To: Shu, Xinxin
Cc: openldap-technical@openldap.org
Subject: Re: large write amplification
Hi, Xinxin.
I will try to answer briefly, without details:
- To allow readers to never be blocked by a writer, LMDB provides a snapshot of
data, indexes and director
Hi list,
Recently I ran micro-benchmarks of LMDB on a DC3700 (200GB), using the bench code
at https://github.com/hyc/leveldb/tree/benches ; I tested fillrandsync mode,
collected iostat data, and found that the write amplification is large
For fillrandsync case:
IOPS : 1020 ops/sec
Iostat data shows that w/s
Hi Loic, can we merge PRs (4414 ~ 4416) into the integration branch now?
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Loic Dachary
Sent: Monday, April 27, 2015 4:06 PM
To: Shu, Xinxin
Cc: Ceph Development
Subject: Firefly
I tried 1024 & 2048 byte value sizes, but the space amplification is ~2, so if
I want to reduce this space amplification, what's your suggestion?
Cheers,
xinxin
-Original Message-
From: Howard Chu [mailto:h...@symas.com]
Sent: Wednesday, April 22, 2015 3:48 AM
To: Sh
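The ~2x space amplification reported above for 1024-2048-byte values is roughly what simple page math predicts, assuming LMDB's default 4096-byte page size; the header and per-entry overhead constants below are illustrative assumptions, not exact LMDB internals:

```python
PAGE_SIZE = 4096     # LMDB default page size
PAGE_HEADER = 16     # illustrative page-header overhead (assumption)
NODE_OVERHEAD = 8    # illustrative per-entry node overhead (assumption)
KEY_SIZE = 16        # example key size

def packed_amplification(value_size):
    """Space amplification if leaf pages were packed completely full."""
    entry = NODE_OVERHEAD + KEY_SIZE + value_size
    per_page = max(1, (PAGE_SIZE - PAGE_HEADER) // entry)
    return PAGE_SIZE / (per_page * (KEY_SIZE + value_size))

# 2048-byte values: only one entry fits per 4096-byte page, so ~2x
# even with perfectly full pages.
print(round(packed_amplification(2048), 2))  # ~1.98

# 1024-byte values: three entries fit per page, but random inserts leave
# B-tree pages only partly full after splits, pushing the observed
# figure toward 2x as well (fill factor is an assumed typical value).
FILL_FACTOR = 0.67
print(round(packed_amplification(1024) / FILL_FACTOR, 2))  # ~1.96
```

Under this model, the main levers are using values that pack evenly into a page and improving page fill (e.g. sequential/append-order inserts), which matches the direction of the advice usually given on this list.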
Hi list,
Recently I integrated LMDB into Ceph and got initial results. I dumped all
records of LMDB; there are several different sizes of key-value pairs:
1. key size: 45 bytes, value size: 124 bytes
2. key size: 85 bytes, value size: 1187 bytes
3. key size: 57 bytes, value size: 135 bytes
4.
Hi list ,
I am working on integrating LMDB with Ceph, and I hit the following "Invalid
argument" error when committing a transaction to LMDB; details are described
below.
Two threads operate on LMDB; each thread is responsible for creating write
transactions and read transactions. The write transacti
Hi list ,
I want to enable debug messages for LMDB. I noticed the MDB_DEBUG macro and
defined MDB_DEBUG when compiling LMDB, but I did not get any debug messages. How
can I enable them?
Hi list,
I'm now working on integrating LMDB with Ceph, and I encounter the following
error when initializing a transaction:
MDB_BAD_RSLOT: Invalid reuse of reader locktable slot
What I want to know is: in which situations does this error occur? Thanks.
Cheers,
xinxin
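For context, MDB_BAD_RSLOT is returned when a thread's reader-table slot is reused while still marked in use, typically when a thread opens a second concurrent read transaction (or a read transaction is reused across threads) without the MDB_NOTLS flag. A toy Python model of that slot rule follows; it is not LMDB code, and all names are invented for illustration:

```python
import threading

class BadRslotError(Exception):
    """Stands in for LMDB's MDB_BAD_RSLOT return code."""

class ReaderTable:
    """Toy model: without MDB_NOTLS, each OS thread owns exactly one reader
    slot, so a second concurrent read txn on the same thread trips the
    'invalid reuse' check. Illustration only, not LMDB's implementation."""

    def __init__(self):
        self._slots = {}  # thread id -> slot currently in use?

    def begin_read(self):
        tid = threading.get_ident()
        if self._slots.get(tid):
            raise BadRslotError("MDB_BAD_RSLOT: reader slot already in use by this thread")
        self._slots[tid] = True

    def end_read(self):
        self._slots[threading.get_ident()] = False

table = ReaderTable()
table.begin_read()          # first read txn on this thread: fine
try:
    table.begin_read()      # second concurrent read txn on the same thread
except BadRslotError as e:
    print(e)
table.end_read()
```

In other words: finish (commit/abort/reset) a read transaction before starting another on the same thread, keep transactions on the thread that created them, or open the environment with MDB_NOTLS if you need multiple read transactions per thread.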
CC technical list
Cheers,
xinxin
-Original Message-
From: openldap-devel [mailto:openldap-devel-boun...@openldap.org] On Behalf Of
Shu, Xinxin
Sent: Friday, March 20, 2015 8:50 AM
To: Howard Chu; openldap-de...@openldap.org
Subject: RE: questions about LMDB
Sorry for the wrong mail
I think rocksdb can support this configuration.
Cheers,
xinxin
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Xinze Chi
Sent: Thursday, March 19, 2015 5:23 PM
To: Sage Weil; sj...@redhat.com; Haomai Wang; ceph-devel@vger.k
Sorry for the wrong mailing list; I will forward this request to the technical list
Cheers,
xinxin
-Original Message-
From: Howard Chu [mailto:h...@symas.com]
Sent: Thursday, March 19, 2015 11:52 PM
To: Shu, Xinxin; openldap-devel@openldap.org
Subject: Re: questions about LMDB
Shu, Xinxin
Message-
From: Shu, Xinxin
Sent: Thursday, March 19, 2015 3:49 PM
To: openldap-devel@openldap.org
Cc: Shu, Xinxin
Subject: questions about LMDB
Hi list,
Recently I read the LMDB docs; there are two sentences:
1) readers do not block writers
2) writers do not block readers
I can
Hi list,
Recently I read the LMDB docs; there are two sentences:
1) readers do not block writers
2) writers do not block readers
I can understand 'readers do not block writers', but I cannot understand the
second one. Can someone help explain how LMDB achieves 'writers do not block
readers
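The usual answer is copy-on-write: a writer never modifies pages a reader can see; it writes new pages and publishes a new root, so every reader keeps the root (snapshot) that was current when it started. A toy Python sketch of that idea, under the assumption of immutable published versions (not LMDB's actual data structures):

```python
class COWStore:
    """Copy-on-write store: each commit publishes a new immutable version;
    readers pin whatever version was current when they began, so a writer
    never blocks them or changes what they see. Illustration only."""

    def __init__(self):
        self._versions = [{}]            # list of immutable snapshots

    def begin_read(self):
        return self._versions[-1]        # pin the current snapshot

    def commit_write(self, updates):
        new = dict(self._versions[-1])   # copy, never mutate in place
        new.update(updates)
        self._versions.append(new)       # atomically publish the new root

store = COWStore()
store.commit_write({"a": 1})
snap = store.begin_read()      # reader starts before the next write
store.commit_write({"a": 2})   # writer proceeds without blocking the reader
print(snap["a"], store.begin_read()["a"])  # 1 2
```

The reader holding `snap` still sees the old value after the commit; only transactions begun later observe the new root. This is why LMDB readers take no locks on the data pages at all.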
comments inline.
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Shu,
Xinxin
Sent: Tuesday, October 21, 2014 9:13 AM
To: Xavier Trilla; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RADOS pool snaps and RBD
-Original Message
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Xavier
Trilla
Sent: Tuesday, October 21, 2014 12:42 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] RADOS pool snaps and RBD
Hi,
It seems Ceph doesn't allow rados pool snapshots on RBD po
Please refer to http://tracker.ceph.com/issues/8851
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of minchen
Sent: Monday, October 20, 2014 3:42 PM
To: ceph-users; ceph-de...@vger.kernel.org
Subject: [ceph-users] how to resolve : start mon assert == 0
Hello, all
when i r
We observe the same issue on our 12-SSD setup; disabling all logging may be
helpful.
Cheers,
xinxin
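"Disabling the logs" here usually means zeroing Ceph's debug levels in ceph.conf. A typical fragment is shown below; the exact set of debug options worth zeroing depends on your Ceph version, so treat this as a starting point rather than a complete list:

```ini
[osd]
debug osd = 0/0
debug filestore = 0/0
debug journal = 0/0
debug ms = 0/0
```

The `x/y` form sets the log level and the in-memory log level; `0/0` suppresses both, which removes the logging overhead from the OSD hot path.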
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark Wu
Sent: Friday, October 17, 2014 12:18 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Performance doesn't sca
net]
Sent: Wednesday, October 15, 2014 10:23 AM
To: Shu, Xinxin
Cc: Andreas Bluemle; Paul Von-Stamwitz; Stefan Priebe; Somnath Roy;
ceph-devel@vger.kernel.org; Zhang, Jian
Subject: RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
On Wed, 15 Oct 2014, Shu, Xinxin wrote:
> Hi all
12 x Intel DC 3700 200GB; each SSD hosts two OSDs.
Cheers,
xinxin
-Original Message-
From: Stefan Priebe [mailto:s.pri...@profihost.ag]
Sent: Friday, September 19, 2014 2:54 PM
To: Shu, Xinxin; Somnath Roy; Alexandre DERUMIER; Haomai Wang
Cc: Sage Weil; Josh Durgin; ceph-devel
My bad; with the latest master we got ~120K IOPS.
Cheers,
xinxin
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Shu, Xinxin
Sent: Friday, September 19, 2014 9:08 AM
To: Somnath Roy; Alexandre DERUMIER; Haomai Wang
Cc
I also observed performance degradation on my all-SSD setup: I got
~270K IOPS for 4KB random reads with 0.80.4, but with the latest master I only
got ~12K IOPS
Cheers,
xinxin
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On B
The system call is invoked in FileStore::_do_transaction().
Cheers,
xinxin
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Sudarsan, Rajesh
Sent: Thursday, August 14, 2014 3:01 PM
To: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com
Subject: [ceph-users] Tracking th
Hi all ,
I want to dump messages for debugging and noticed that there is an ENCODE_DUMP
debug option. I recompiled ceph with the following command line:
./do_autogen.sh -d 1 -e /var/message/ -n && make && make install
then executed 'rbd ls', but in the /var/message/ directory the message file
did not g
Hi sage,
I created a pull request https://github.com/ceph/rocksdb/pull/4 to fix the
issue, please help review.
Cheers,
xinxin
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Saturday, August 02, 2014 6:30 AM
To: Shu, Xinxin
Cc: Mark Nelson; ceph-devel
Hi sage ,
I created a pull request https://github.com/ceph/rocksdb/pull/3 ; please help
review.
Cheers,
xinxin
-Original Message-
From: Shu, Xinxin
Sent: Thursday, July 31, 2014 4:42 PM
To: 'Sage Weil'
Cc: Mark Nelson; ceph-devel@vger.kernel.org
Subject: RE: First attempt
(git version,
compile time); since we may not care about these infos, we can remove this
line from Makefile.am and generate util/build_version.cc ourselves.
Cheers,
xinxin
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Thursday, July 31, 2014 10:08 AM
To: Shu, Xinxin
Is your report based on the wip-rocksdb-mark branch?
Cheers,
xinxin
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Tuesday, July 29, 2014 12:56 AM
To: Shu, Xinxin; ceph-devel@vger.kernel.org
Subject: Re
ning disks and SSDs:
http://nhm.ceph.com/mon-store-stress/Monitor_Store_Stress_Short_Tests.pdf
Mark
On 07/27/2014 11:45 PM, Shu, Xinxin wrote:
> Hi mark,
>
> I tested this option on my setup and the same issue happened; I will dig into it.
> If you want to get the info log, there is a worka
Hi mark,
How did you set a higher limit: with the 'ulimit' command or by
enlarging the rocksdb_max_open_files config option?
Cheers,
xinxin
-Original Message-
From: Mark Nelson [mailto:mark.nel...@inktank.com]
Sent: Thursday, July 31, 2014 1:35 AM
To: Shu, Xinxin;
: Saturday, July 26, 2014 12:10 AM
To: Shu, Xinxin; ceph-devel@vger.kernel.org
Subject: Re: First attempt at rocksdb monitor store stress testing
Hi Xinxin,
I'm trying to enable the rocksdb log file as described in config_opts using:
rocksdb_log =
The file gets created but is empty. Any i
To: Shu, Xinxin; ceph-devel@vger.kernel.org
Subject: Re: First attempt at rocksdb monitor store stress testing
Earlier today I modified the rocksdb options so I could enable universal
compaction. Overall performance is lower but I don't see the hang/stall in
the middle of the test e
Hi mark,
I think this may be related to the 'verify_checksums' config option: when
ReadOptions is initialized, this option defaults to true, so all data read from
the underlying storage is verified against its checksums; however,
this option cannot be configured in the wip-rocksdb branch. I w
g is right, I think an RWLock or a fine-grained lock is a good
suggestion.
-Original Message-
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Tuesday, July 01, 2014 2:10 PM
To: Sushma Gurram
Cc: Shu, Xinxin; Mark Nelson; Sage Weil; Zhang, Jian; ceph-devel@vger.kernel.org
Subject: Re:
0.495 668.360 0.383
680.673 0.376
-Original Message-
From: Shu, Xinxin
Sent: Saturday, June 14, 2014 11:50 AM
To: Sushma Gurram; Mark Nelson; Sage Weil
Cc: ceph-devel@vger.kernel.org; Zhang, Jian
Subject: RE: [RFC] add rocksdb support
Currently ceph will
-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Sushma Gurram
Sent: Saturday, June 14, 2014 2:52 AM
To: Shu, Xinxin; Mark Nelson; Sage Weil
Cc: ceph-devel@vger.kernel.org; Zhang, Jian
Subject: RE: [RFC] add rocksdb support
Hi Xinxin,
I tried to c
, currently this patch can be found at
https://github.com/xinxinsh/ceph/tree/wip-rocksdb .
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Tuesday, June 10, 2014 1:12 AM
To: Shu, Xinxin; Sage Weil
Cc: ceph-devel
---
From: Mark Nelson [mailto:mark.nel...@inktank.com]
Sent: Wednesday, May 21, 2014 9:06 PM
To: Shu, Xinxin; Sage Weil
Cc: ceph-devel@vger.kernel.org; Zhang, Jian
Subject: Re: [RFC] add rocksdb support
On 05/21/2014 07:54 AM, Shu, Xinxin wrote:
> Hi, sage
>
> I will add rocksdb submodule
, 2014 9:19 AM
To: Shu, Xinxin
Cc: ceph-devel@vger.kernel.org
Subject: Re: [RFC] add rocksdb support
Hi Xinxin,
I've pushed an updated wip-rocksdb to github/liewegas/ceph.git that includes
the latest set of patches with the groundwork and your rocksdb patch. There is
also a commit that
I don't have the exact number, but from the size of the files, we don't have
billions of entries.
-Original Message-
From: Andreas Joachim Peters [mailto:andreas.joachim.pet...@cern.ch]
Sent: Wednesday, March 05, 2014 5:19 PM
To: Haomai Wang; Alexandre DERUMIER
Cc: Shu, Xinxin;
with 7.2k disks, 2x replication.
-Original Message-
From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
Sent: Wednesday, March 05, 2014 4:23 PM
To: Shu, Xinxin
Cc: ceph-devel@vger.kernel.org
Subject: Re: [RFC] add rocksdb support
>>Hi Alexandre, below is random io test r
rch 04, 2014 12:49 PM
To: Shu, Xinxin
Cc: ceph-devel@vger.kernel.org
Subject: Re: [RFC] add rocksdb support
>>Performance Test
>>Attached file is the performance comparison of rocksdb and leveldb on four
>>nodes with 40 osds, using 'rados bench' as the test tool. The per
Hi all,
This patch adds rocksdb support to ceph, enabling rocksdb for the omap directory.
The rocksdb source code can be obtained from link. To use rocksdb, the C++11
standard must be enabled; gcc version >= 4.7 is required for C++11 support. Rocksdb
can be installed with the instructions described i
Recently, I wanted to test the performance benefit of the rbd cache. I could not
get an obvious performance benefit on my setup, so I tried to make sure the rbd
cache was enabled, but I could not get the rbd cache perf counters. In order to
identify how to enable the rbd cache perf counters, I set up a simple setup (one client h
plb.add_u64_counter(l_objectcacher_cache_ops_hit, "cache_ops_hit");
plb.add_u64_counter(l_objectcacher_cache_ops_miss, "cache_ops_miss");
plb.add_u64_counter(l_objectcacher_cache_bytes_hit, "cache_bytes_hit");
plb.add_u64_counter(l_objectcacher_cache_bytes_miss, "cache_bytes_miss");
cuttlefish. Openstack is folsom. Is there anything weird
for you? Please let me know.
-Original Message-
From: Mike Dawson [mailto:mike.daw...@cloudapt.com]
Sent: Tuesday, November 26, 2013 12:41 AM
To: Shu, Xinxin
Cc: Gregory Farnum; Mark Nelson; ceph-users@lists.ceph.com
Subject:
Recently, I wanted to enable the rbd cache to identify its performance benefit.
I added the rbd_cache=true option to my ceph configuration file and used 'virsh
attach-device' to attach the rbd to a VM; below is my vdb XML file.
6b5ff6f4-9f8c-4fe0-84d6-9d795967c7dd
i
I do not know this i
Hi all
I want to do some performance tests on kernel rbd, so I set up a ceph cluster
with 4 hosts; every host has 20 OSDs, and the OSD journals are on a separate SSD
partition.
First I created 48 rbds and mapped them to six clients, 8 rbds per
client; then I executed the following command
I have a ceph setup with cuttlefish for kernel rbd testing. After I mapped rbds
to the clients, I executed 'rbd showmapped'; the output looks as follows:
id pool image   snap device
1  ceph node7_1 -    /dev/rbd1
2  ceph node7_2 -    /dev/rbd2
3  ceph node7_3 -    /dev/rbd3
4  ceph node7_4 -