ceph-disk pyudev implementation

2015-09-09 Thread Chaitanya Huilgol
Hi Loic, As discussed in the multipath tracker, please find the port of ceph-disk based on pyudev (https://pyudev.readthedocs.org/en/latest/, the Python libudev binding). Here is a short summary of the approach: - The current ceph-disk determines various properties of a block device by path
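For readers unfamiliar with pyudev, here is a minimal sketch (not the posted patch) of the kind of lookup it enables, querying udev properties instead of probing /dev paths; property names such as ID_FS_TYPE, ID_PART_ENTRY_TYPE and ID_PATH are assumptions about what the host's udev/blkid rules export:

    import pyudev

    context = pyudev.Context()

    # Enumerate block-device partitions and read their udev properties;
    # a pyudev-based ceph-disk can use these instead of shelling out to blkid/sgdisk.
    for dev in context.list_devices(subsystem='block', DEVTYPE='partition'):
        disk = dev.find_parent('block', 'disk')
        print("%s (on %s)" % (dev.device_node, disk.device_node if disk else '?'))
        print("  fstype:         %s" % dev.get('ID_FS_TYPE'))
        print("  partition GUID: %s" % dev.get('ID_PART_ENTRY_TYPE'))
        print("  by-path id:     %s" % dev.get('ID_PATH'))

Since ceph-disk recognises data and journal partitions by their GPT partition type GUID, ID_PART_ENTRY_TYPE would be the natural property to match on.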

Re: bug with cache/tiering and snapshot reads

2015-09-09 Thread Loic Dachary
Hi Jason, Thanks for the quick reply :-) Cheers. On 09/09/2015 02:55, Jason Dillaman wrote: >> Does "bug with cache/tiering and snapshot reads" >> http://tracker.ceph.com/issues/12748 sound familiar to you? Although it >> looks like a tiering problem, it involves rbd and maybe it rings a bell?

Re: How to save log when test met bugs

2015-09-09 Thread Loic Dachary
[adding ceph-devel as this may also be inconvenient for others] On 09/09/2015 10:23, Ma, Jianpeng wrote: > Hi Loic: > Today I ran test/cephtool-test-mds.sh; a bug in my code caused an osd to go > down. All I saw on the screen was "osd o down " and so on, but I can't > find the related osd

RE: [NewStore]About PGLog Workload With RocksDB

2015-09-09 Thread Dałek, Piotr
> -Original Message- > From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel- > ow...@vger.kernel.org] On Behalf Of Haomai Wang > Sent: Tuesday, September 08, 2015 3:58 PM > To: Sage Weil > Hi Sage, > > I noticed your post on the rocksdb page about making rocksdb aware of short-lived >

pet project: OSD compatible daemon

2015-09-09 Thread Loic Dachary
Hi Ceph, I would like to try to write an OSD-compatible daemon, as a pet project, to learn Go and to better understand the message flow. I suspect it may also be useful for debugging purposes, but that's not my primary incentive. Has anyone tried something similar? If so I'd happily contribute
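Not something from the thread, but a hedged sketch of the very first step such a daemon has to speak: the messenger banner exchange. It assumes a monitor reachable at MON_ADDR (6789 is the default monitor port) and relies on the wire banner being the literal string "ceph v027":

    import socket

    BANNER = b"ceph v027"
    MON_ADDR = ("127.0.0.1", 6789)  # adjust to a reachable monitor

    sock = socket.create_connection(MON_ADDR, timeout=5)
    try:
        # The accepting side sends its banner first; read exactly len(BANNER) bytes.
        data = b""
        while len(data) < len(BANNER):
            chunk = sock.recv(len(BANNER) - len(data))
            if not chunk:
                break
            data += chunk
        print("peer banner: %r" % data)
        if data == BANNER:
            # A real OSD-compatible daemon would now exchange entity addresses
            # and run the connect/accept state machine before any messages flow.
            sock.sendall(BANNER)
    finally:
        sock.close()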

Re: ceph-disk pyudev implementation

2015-09-09 Thread Loic Dachary
Hi, The approach you describe makes sense to me. And you've done a nice job with the refactor. I'm not familiar with pyudev, though, and other Ceph developers may already have an opinion (or answers). When adding new dependencies to Ceph, I think we need to assess how stable / reliable those

try-restart on upgrade, and upgrade procedures in general

2015-09-09 Thread Nathan Cutler
Hi all: I have been tinkering with the %preun and %postun scripts in ceph.spec.in - in particular, the ones for the "ceph" and "ceph-radosgw" packages. Recently, as part of the "wip-systemd" effort, these snippets were updated for compatibility with systemd. Since the "Upgrade procedures"

Re: [ceph-users] jemalloc and transparent hugepage

2015-09-09 Thread Jan Schermer
This is great, thank you! Jan > On 09 Sep 2015, at 12:37, HEWLETT, Paul (Paul) > wrote: > > Hi Jan > > If I can suggest that you look at: > > http://engineering.linkedin.com/performance/optimizing-linux-memory-management-low-latency-high-throughput-databases >

Re: [ceph-users] jemalloc and transparent hugepage

2015-09-09 Thread Jan Schermer
I looked at THP before. It comes enabled on RHEL 6, and on our KVM hosts it merges a lot (~300GB of hugepages on a 400GB KVM footprint). I am probably going to disable it and see if it introduces any problems for me; the most important gain here is a better hit rate in the processor's memory lookup table (the TLB cache)
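A quick, hedged way to see what the kernel is doing before flipping the switch; the sysfs paths are assumptions (upstream kernels expose /sys/kernel/mm/transparent_hugepage/, while the RHEL 6 backport uses /sys/kernel/mm/redhat_transparent_hugepage/), and the active policy is the bracketed word:

    import os

    CANDIDATES = [
        "/sys/kernel/mm/transparent_hugepage/enabled",
        "/sys/kernel/mm/redhat_transparent_hugepage/enabled",  # RHEL 6 backport
    ]

    for path in CANDIDATES:
        if os.path.exists(path):
            with open(path) as f:
                line = f.read().strip()      # e.g. "[always] madvise never"
            active = line[line.find("[") + 1:line.find("]")]
            print("%s -> %s" % (path, active))
            break
    else:
        print("no transparent hugepage knob found")

Writing "never" to the same file (as root) should disable THP for new allocations until reboot, which makes it easy to test before baking the change into boot parameters.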

RE: ceph-disk pyudev implementation

2015-09-09 Thread Chaitanya Huilgol
Inline > -Original Message- > From: Loic Dachary [mailto:l...@dachary.org] > Sent: Wednesday, September 09, 2015 1:05 PM > To: Chaitanya Huilgol; Ceph Development > Subject: Re: ceph-disk pyudev implementation > > Hi, > > The approach you describe makes sense to me. And you've done a nice

Re: ceph-disk pyudev implementation

2015-09-09 Thread Loic Dachary
Inline as well On 09/09/2015 12:37, Chaitanya Huilgol wrote: > Inline > >> -Original Message- >> From: Loic Dachary [mailto:l...@dachary.org] >> Sent: Wednesday, September 09, 2015 1:05 PM >> To: Chaitanya Huilgol; Ceph Development >> Subject: Re: ceph-disk pyudev implementation >> >>

Failed to start osd daemon after upgrading from giant-0.87.1 to hammer-0.94.3

2015-09-09 Thread 王锐
Hi all: I got an error after upgrading my ceph cluster from giant-0.87.2 to hammer-0.94.3. My local environment is: CentOS 6.7 x86_64, Kernel 3.10.86-1.el6.elrepo.x86_64, HDD: XFS, 2TB, Install Package: ceph.com official RPMs x86_64. Step 1: Upgrade the MON server from 0.87.1 to 0.94.3, all is fine!

Re: [ceph-users] jemalloc and transparent hugepage

2015-09-09 Thread HEWLETT, Paul (Paul)
Hi Jan, If I can suggest that you look at: http://engineering.linkedin.com/performance/optimizing-linux-memory-management-low-latency-high-throughput-databases where LinkedIn ended up disabling some of the new kernel features to prevent memory thrashing. Search for Transparent Huge Pages.

rbd lock list command failure

2015-09-09 Thread Huamin Chen
Hi, Running "rbd lock list" inside a Docker container yields mixed results. Sometimes I get the right results, but most times I just get errors. A good run looks like this: [root@host server]# docker run --privileged --net=host -v /dev:/dev -v /sys:/sys ceph/base rbd lock list foo --pool kube
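One way to narrow down whether the failures come from the container environment or from rbd itself is to skip the CLI and query the lockers through the python-rbd bindings. This is only a sketch, assuming the pool "kube" and image "foo" from the command above and a usable /etc/ceph/ceph.conf plus keyring inside the container:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("kube")
        try:
            image = rbd.Image(ioctx, "foo")
            try:
                # Returns a dict describing the lock tag, exclusivity and lockers.
                print(image.list_lockers())
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()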

RE: Ceph Write Path Improvement

2015-09-09 Thread Somnath Roy
Hi, Here is the updated presentation we discussed in the performance meeting today, with performance data incorporated for the scenario where both journal and data are on the same SSD. https://docs.google.com/presentation/d/15-Uqk0b4s1fVV1cG1G6Kba9xafcnIoLvfq8LUY7KBL0/edit#slide=id.p4 Here is the

Re: try-restart on upgrade, and upgrade procedures in general

2015-09-09 Thread Robert LeBlanc
Sounds reasonable to me. - Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Wed, Sep 9, 2015 at 2:48 AM, Nathan Cutler wrote: > Hi all: > > I have been tinkering with the %preun and %postun

RE: About Fio backend with ObjectStore API

2015-09-09 Thread James (Fei) Liu-SSI
Hi Haomai and Casey, The fio-objectstore engine was compiled after the new repo was cloned. However, I still get the error below about an undefined symbol from ObjectStore.h. jamesliu@jamesliu-OptiPlex-7010:~/WorkSpace/ceph/src$ ./fio/fio ./test/objectstore.fio filestore: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K,