Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-10 Thread Alexandre DERUMIER
Hi Sebastien, here are my first results with the Crucial m550 (I'll send results with the Intel s3500 later): - 3 nodes - dell r620 without expander backplane - sas controller : LSI 9207 (no hardware raid or cache) - 2 x E5-2603v2 1.8GHz (4 cores) - 32GB ram - network : 2x gigabit link lacp + 2x gigabit lac

[ceph-users] upload data using swift API

2014-09-10 Thread pragya jain
Hi all! From the Swift perspective, the following steps are performed to upload data: * the user must have an account in the Swift cluster * the user creates a container within the account to store data * the user stores data as an object in the container within the account * object information is updated
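For illustration, the same flow with curl against a Swift-compatible endpoint might look like this (the endpoint, account and token are placeholders, not from the thread):

    # create a container in an existing, authenticated account
    curl -i -X PUT -H "X-Auth-Token: $TOKEN" \
        http://objectstore.example.com/v1/AUTH_myaccount/my-container
    # upload a file as an object into that container
    curl -i -X PUT -H "X-Auth-Token: $TOKEN" -T ./photo.jpg \
        http://objectstore.example.com/v1/AUTH_myaccount/my-container/photo.jpg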

Re: [ceph-users] Why so much inconsistent error in 0.85?

2014-09-10 Thread 廖建锋
The current ceph cluster was compiled by hand, and I have now disabled scrub and deep-scrub until your new dev version is released. I hope the new version can scrub all the data that has already shown errors. From: Haomai Wang Date: 2014-09-11 12:00 To: 廖建锋
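For reference, scrubbing can be toggled cluster-wide with flags like these (a sketch; the thread does not show the exact commands used):

    # disable regular and deep scrubbing cluster-wide
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # re-enable once the fixed version is running
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub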

[ceph-users] monitoring tool for ceph which monitor end-user level usage

2014-09-10 Thread pragya jain
Hi all! Is there any monitoring tool for ceph that monitors end-user-level usage and data transfer for the ceph object storage service? Any pointers to relevant information would be appreciated. Regards, Pragya Jain
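One option on the radosgw side is its built-in usage log; a sketch, assuming usage logging is enabled in ceph.conf (rgw enable usage log = true) and with an illustrative user id:

    # show per-user usage (bytes sent/received, operation counts)
    radosgw-admin usage show --uid=pragya --show-log-entries=false
    # trim old usage entries once they have been collected elsewhere
    radosgw-admin usage trim --start-date=2014-08-01 --end-date=2014-09-01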

[ceph-users] different storage disks as a single storage

2014-09-10 Thread pragya jain
Hi all! I have a very low-level query; please help to clarify it. To store data on a storage cluster, at the bottom there is a heterogeneous set of storage disks, which can include a variety of types such as SSDs, HDDs, flash drives, tapes and any other type as well. The documentation says tha

[ceph-users] error while installing ceph in cluster node

2014-09-10 Thread Subhadip Bagui
Hi, I'm getting the below error while installing ceph on a node using ceph-deploy. I'm executing the command on the admin node as [root@ceph-admin ~]$ ceph-deploy install ceph-mds [ceph-mds][DEBUG ] Loaded plugins: fastestmirror, security [ceph-mds][WARNIN] You need to be root to perform this command.
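A common fix, sketched here from the standard ceph-deploy preflight setup rather than from this thread, is to give the deploying user passwordless sudo on the target node and not run ceph-deploy as root:

    # on the target node (ceph-mds), as root:
    useradd -d /home/ceph -m ceph
    passwd ceph
    echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
    chmod 0440 /etc/sudoers.d/ceph
    # then run ceph-deploy from the admin node as that user over ssh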

Re: [ceph-users] why one osd-op from client can get two osd-op-reply?

2014-09-10 Thread Gregory Farnum
On Wed, Sep 10, 2014 at 8:29 PM, yuelongguang wrote: > > > > > as for ack and ondisk, ceph has size and min_size to decide how many > replicas there are. > if the client receives ack or ondisk, does that mean at least min_size > osds have done the ops? > > i am reading the source code, co

Re: [ceph-users] Why so much inconsistent error in 0.85?

2014-09-10 Thread Haomai Wang
No, you need to wait for the next development release, or you can compile it by hand. On Thu, Sep 11, 2014 at 10:35 AM, 廖建锋 wrote: > haomai wang, > i already use 0.85 which is the latest version of CEPH, is > there any newer version than 0.85? > > > *From:* Haomai Wang > *Date:* 2014-09-11 1

Re: [ceph-users] OpTracker optimization

2014-09-10 Thread Sage Weil
I had two substantive comments on the first patch and then some trivial whitespace nits. Otherwise looks good! thanks- sage On Thu, 11 Sep 2014, Somnath Roy wrote: > Sam/Sage, > I have incorporated all of your comments. Please have a look at the same pull > request. > > https://github.c

Re: [ceph-users] why one osd-op from client can get two osd-op-reply?

2014-09-10 Thread yuelongguang
As for ack and ondisk: ceph has size and min_size to decide how many replicas there are. If the client receives ack or ondisk, does that mean at least min_size osds have done the ops? I am reading the source code; could you help me with two questions. 1. on osd, where is

Re: [ceph-users] Why so much inconsistent error in 0.85?

2014-09-10 Thread 廖建锋
haomai wang, i already use 0.85, which is the latest version of CEPH; is there any newer version than 0.85? From: Haomai Wang Date: 2014-09-11 10:02 To: 廖建锋 CC: ceph-users Subject: Re: [ceph-users] Why s

Re: [ceph-users] Why so much inconsistent error in 0.85?

2014-09-10 Thread Christian Balzer
Hello, On Thu, 11 Sep 2014 00:35:15 + 廖建锋 wrote: First and foremost, screenshot images really have no place in a mailing list. Never mind the wasted space and that what you had there was plain text to begin with, nobody searching for something similar will ever find this thread, as all the i

Re: [ceph-users] Why so much inconsistent error in 0.85?

2014-09-10 Thread Haomai Wang
Please check http://tracker.ceph.com/issues/8589. KeyValueStore is an experimental backend; we are still working to make it stable. You can check out the master branch to fix it. On Thu, Sep 11, 2014 at 8:35 AM, 廖建锋 wrote: > dear, > Is this another big bug of CEPH? > > > >
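A rough sketch of building from the master branch at that time (paths and the -j value are illustrative):

    # grab the source including submodules and build with autotools
    git clone --recursive https://github.com/ceph/ceph.git
    cd ceph
    git checkout master
    ./autogen.sh && ./configure && make -j4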

Re: [ceph-users] OpTracker optimization

2014-09-10 Thread Somnath Roy
Sam/Sage, I have incorporated all of your comments. Please have a look at the same pull request. https://github.com/ceph/ceph/pull/2440 Thanks & Regards Somnath -Original Message- From: Samuel Just [mailto:sam.j...@inktank.com] Sent: Wednesday, September 10, 2014 3:25 PM To: Somnath Ro

Re: [ceph-users] Cache Pool writing too much on ssds, poor performance?

2014-09-10 Thread 廖建锋
I bet he didn't set hit_set yet From: ceph-users Date: 2014-09-11 09:00 To: Andrei Mikhailovsky; ceph-users Subject: Re: [ceph-users] Cache Pool writing too much on ssds, poor performance? Could

Re: [ceph-users] Cache Pool writing too much on ssds, poor performance?

2014-09-10 Thread Chen, Xiaoxi
Could you show your cache tiering configuration? Especially this three parameters. ceph osd pool set hot-storage cache_target_dirty_ratio 0.4 ceph osd pool set hot-storage cache_target_full_ratio 0.8 ceph osd pool set {cachepool} target_max_bytes {#bytes} From: ceph-users [mailto:ceph-users-bo
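For comparison, a fuller cache-tier setup sketch including the hit_set parameters mentioned above (pool name and values are illustrative, not from the thread):

    # record object hits so the tiering agent can make flush/evict decisions
    ceph osd pool set hot-storage hit_set_type bloom
    ceph osd pool set hot-storage hit_set_count 1
    ceph osd pool set hot-storage hit_set_period 3600
    # flush/evict thresholds and an absolute size cap for the cache pool
    ceph osd pool set hot-storage cache_target_dirty_ratio 0.4
    ceph osd pool set hot-storage cache_target_full_ratio 0.8
    ceph osd pool set hot-storage target_max_bytes 100000000000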

[ceph-users] Why so much inconsistent error in 0.85?

2014-09-10 Thread 廖建锋
Dear list, is this another big bug of Ceph? [screenshot attachment]

Re: [ceph-users] CephFS roadmap (was Re: NAS on RBD)

2014-09-10 Thread Blair Bethwaite
On 11 September 2014 08:47, John Spray wrote: > I do think this is something we could think about building a tool for: > lots of people will have comparatively tiny quantities of metadata so > full dumps would be a nice thing to have in our back pockets. Reminds > me of the way Lustre people used

[ceph-users] Upgraded now MDS won't start

2014-09-10 Thread McNamara, Bradley
Hello, This is my first real issue since running Ceph for several months. Here's the situation: I've been running an Emperor cluster for several months. All was good. I decided to upgrade since I'm running Ubuntu 13.10 and 0.72.2. I decided to first upgrade Ceph to 0.80.4, which was the la

Re: [ceph-users] CephFS roadmap (was Re: NAS on RBD)

2014-09-10 Thread John Spray
On Wed, Sep 10, 2014 at 9:05 PM, Gregory Farnum wrote: >> Related, given there is no fsck, how would one go about backing up the >> metadata in order to facilitate DR? Is there even a way for that to >> make sense given the decoupling of data & metadata pools...? > > Uh, depends on the kind of DR

Re: [ceph-users] OpTracker optimization

2014-09-10 Thread Samuel Just
Oh, I changed my mind, your approach is fine. I was unclear. Currently, I just need you to address the other comments. -Sam On Wed, Sep 10, 2014 at 3:13 PM, Somnath Roy wrote: > As I understand, you want me to implement the following. > > 1. Keep this implementation one sharded optracker for th

Re: [ceph-users] OpTracker optimization

2014-09-10 Thread Somnath Roy
As I understand, you want me to implement the following. 1. Keep this implementation one sharded optracker for the ios going through ms_dispatch path. 2. Additionally, for ios going through ms_fast_dispatch, you want me to implement optracker (without internal shard) per opwq shard Am I righ

Re: [ceph-users] OpTracker optimization

2014-09-10 Thread Samuel Just
I don't quite understand. -Sam On Wed, Sep 10, 2014 at 2:38 PM, Somnath Roy wrote: > Thanks Sam. > So, you want me to go with optracker/shadedopWq , right ? > > Regards > Somnath > > -Original Message- > From: Samuel Just [mailto:sam.j...@inktank.com] > Sent: Wednesday, September 10, 2014

Re: [ceph-users] Ceph-deploy bug; CentOS 7, Firefly

2014-09-10 Thread Piers Dawson-Damer
Thanks Alfredo. However, the domain/folder-name part of the URL is fine; it is the RPM file name that seems incorrect. regards, Piers On 10 Sep 2014, at 10:55 pm, Alfredo Deza wrote: > This should get fixed pretty soon, we already have an open issue in > the tracker for it: http://tracker.ceph.

Re: [ceph-users] OpTracker optimization

2014-09-10 Thread Somnath Roy
Thanks Sam. So, you want me to go with optracker/shadedopWq , right ? Regards Somnath -Original Message- From: Samuel Just [mailto:sam.j...@inktank.com] Sent: Wednesday, September 10, 2014 2:36 PM To: Somnath Roy Cc: Sage Weil (sw...@redhat.com); ceph-de...@vger.kernel.org; ceph-users@l

Re: [ceph-users] OpTracker optimization

2014-09-10 Thread Samuel Just
Responded with cosmetic nonsense. Once you've got that and the other comments addressed, I can put it in wip-sam-testing. -Sam On Wed, Sep 10, 2014 at 1:30 PM, Somnath Roy wrote: > Thanks Sam..I responded back :-) > > -Original Message- > From: ceph-devel-ow...@vger.kernel.org > [mailto

Re: [ceph-users] OpTracker optimization

2014-09-10 Thread Somnath Roy
Thanks Sam..I responded back :-) -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Samuel Just Sent: Wednesday, September 10, 2014 11:17 AM To: Somnath Roy Cc: Sage Weil (sw...@redhat.com); ceph-de...@vger.kernel.org; ceph-us

Re: [ceph-users] CephFS roadmap (was Re: NAS on RBD)

2014-09-10 Thread Gregory Farnum
On Tue, Sep 9, 2014 at 6:10 PM, Blair Bethwaite wrote: > Hi Sage, > > Thanks for weighing into this directly and allaying some concerns. > > It would be good to get a better understanding about where the rough > edges are - if deployers have some knowledge of those then they can be > worked around

Re: [ceph-users] OpTracker optimization

2014-09-10 Thread Samuel Just
Added a comment about the approach. -Sam On Tue, Sep 9, 2014 at 1:33 PM, Somnath Roy wrote: > Hi Sam/Sage, > > As we discussed earlier, enabling the present OpTracker code degrades > performance severely. For example, in my setup a single OSD node with 10 > clients is reaching ~103K read iops wi

Re: [ceph-users] max_bucket limit -- safe to disable?

2014-09-10 Thread Gregory Farnum
On Wednesday, September 10, 2014, Daniel Schneller < daniel.schnel...@centerdevice.com> wrote: > On 09 Sep 2014, at 21:43, Gregory Farnum > wrote: > > > Yehuda can talk about this with more expertise than I can, but I think > it should be basically fine. By creating so many buckets you're > decre

Re: [ceph-users] why one osd-op from client can get two osd-op-reply?

2014-09-10 Thread Gregory Farnum
The important bit there is actually near the end of the message output line, where the first says "ack" and the second says "ondisk". I assume you're using btrfs; the ack is returned after the write is applied in-memory and readable by clients. The ondisk (commit) message is returned after it's du

Re: [ceph-users] [ANN] ceph-deploy 1.5.14 released

2014-09-10 Thread Scottix
Suggestion: Could you link to a changelog of new features or major bug fixes when you do new releases? Thanks, Scottix On Wed, Sep 10, 2014 at 6:45 AM, Alfredo Deza wrote: > Hi All, > > There is a new bug-fix release of ceph-deploy that helps prevent the > environment variable > issues that so

[ceph-users] [ANN] ceph-deploy 1.5.14 released

2014-09-10 Thread Alfredo Deza
Hi All, there is a new bug-fix release of ceph-deploy that helps prevent the environment-variable problems that can sometimes occur when depending on them on remote hosts. It is also now possible to specify public and cluster networks when creating a new ceph.conf file. Make sure you update
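For example, the network options look roughly like this (flag names as I recall them for this release; verify with ceph-deploy new --help):

    # create a new ceph.conf with explicit public and cluster networks
    ceph-deploy new --public-network 192.168.1.0/24 \
        --cluster-network 192.168.2.0/24 mon1 mon2 mon3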

Re: [ceph-users] question about RGW

2014-09-10 Thread Sage Weil
[Moving this to ceph-devel, where you're more likely to get a response from a developer!] On Wed, 10 Sep 2014, baijia...@126.com wrote: > when I read the RGW code, I can't understand master_ver inside struct > rgw_bucket_dir_header. > who can explain this struct, especially master_ver and s

Re: [ceph-users] Ceph-deploy bug; CentOS 7, Firefly

2014-09-10 Thread Alfredo Deza
This should get fixed pretty soon; we already have an open issue in the tracker for it: http://tracker.ceph.com/issues/9376 However, it is easy to work around this issue with ceph-deploy by explicitly passing the repo URL so that it will skip the step that fetches it: ceph-deploy install --repo-

[ceph-users] Cache Pool writing too much on ssds, poor performance?

2014-09-10 Thread Andrei Mikhailovsky
Hello guys, I am experimenting with a cache pool and running some tests to see how adding the cache pool improves the overall performance of our small cluster. While testing I've noticed that the cache pool seems to be writing too much to its SSDs. Not sure what the issue

Re: [ceph-users] question about librbd io

2014-09-10 Thread Josh Durgin
On 09/09/2014 07:06 AM, yuelongguang wrote: hi, josh.durgin: i want to know how librbd launches io requests. use case: inside a vm, i use fio to test the rbd disk's io performance. fio's parameters are bs=4k, direct io, qemu cache=none. in this case, if librbd just sends what it gets from the vm, i mean no ga

[ceph-users] osd cpu usage is bigger than 100%

2014-09-10 Thread yuelongguang
hi all, i am testing rbd performance. there is only one vm, which is using rbd as its disk, and inside it fio is doing r/w. the big difference is that i set a large iodepth rather than iodepth=1. what do you think about it, which part is using up the cpu? i want to find the root cause. ---de
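As an illustration of the kind of job being described (parameters are guesses, not the poster's actual fio invocation):

    # random-write run inside the VM against the rbd-backed disk
    fio --name=randwrite --filename=/dev/vdb --rw=randwrite \
        --bs=4k --direct=1 --ioengine=libaio --iodepth=32 \
        --runtime=60 --time_based --group_reporting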

Re: [ceph-users] Ceph on RHEL 7 with multiple OSD's

2014-09-10 Thread yrabl
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On 09/07/2014 07:15 PM, Loic Dachary wrote: > > > On 07/09/2014 14:11, yr...@redhat.com wrote: >> Hi, >> >> I'm trying to install Ceph Firefly on RHEL 7 on my three of my >> storage servers. Each server has 17 HD, thus I thought each will >> have 17

Re: [ceph-users] region creation is failing

2014-09-10 Thread Santhosh Fernandes
Thanks John, I am following the same link. Do I need to follow the "Create a Gateway Configuration" section if I am not using S3? I am still getting this error: radosgw-admin region default --rgw-region=in --name client.radosgw.in-east-1 failed to init region: (2) No such file or directory 2014-09-10 17
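For context, the usual federated-config flow around that command looks roughly like this (the region and instance names follow the poster's; the json file name is illustrative):

    # load the region definition, make it the default, then refresh the region map
    radosgw-admin region set --infile in.json --name client.radosgw.in-east-1
    radosgw-admin region default --rgw-region=in --name client.radosgw.in-east-1
    radosgw-admin regionmap update --name client.radosgw.in-east-1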

Re: [ceph-users] Best practices on Filesystem recovery on RBD block volume?

2014-09-10 Thread Andrei Mikhailovsky
Keith, I think the hypervisor / infrastructure orchestration layer should be able to handle proper snapshotting with io freezing. For instance, we use CloudStack and you can set up automatic snapshots and snapshot retention policies. Cheers Andrei - Original Message - From: "Ilya

Re: [ceph-users] max_bucket limit -- safe to disable?

2014-09-10 Thread Daniel Schneller
On 09 Sep 2014, at 21:43, Gregory Farnum wrote: > Yehuda can talk about this with more expertise than I can, but I think > it should be basically fine. By creating so many buckets you're > decreasing the effectiveness of RGW's metadata caching, which means > the initial lookup in a particular bu
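For reference, the per-user bucket limit itself is adjustable rather than fixed (a sketch; the uid and value are illustrative):

    # raise the default 1000-buckets-per-user cap for one user
    radosgw-admin user modify --uid=johndoe --max-buckets=10000
    # confirm the current setting
    radosgw-admin user info --uid=johndoe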

Re: [ceph-users] Best practices on Filesystem recovery on RBD block volume?

2014-09-10 Thread Ilya Dryomov
On Wed, Sep 10, 2014 at 2:45 PM, Keith Phua wrote: > Hi Andrei, > > Thanks for the suggestion. > > But rbd volume snapshots may only work if the filesystem is in a > consistent state, which means no IO during snapshotting. With cronjob > snapshotting, usually we have no control over clients doing

Re: [ceph-users] Ceph on RHEL 7 with multiple OSD's

2014-09-10 Thread BG
Michal/Marco, thanks for your help; the issue I had was indeed firewalld blocking ports. Still adjusting to the changes in el7, but at least now I have an "active+clean" cluster to start playing with ;)
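For anyone hitting the same thing, the firewalld side looks roughly like this (zone and port range per the usual Ceph defaults):

    # monitors listen on 6789; OSDs and MDS use the 6800-7300 range by default
    firewall-cmd --zone=public --add-port=6789/tcp --permanent
    firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
    firewall-cmd --reload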

Re: [ceph-users] Best practices on Filesystem recovery on RBD block volume?

2014-09-10 Thread Keith Phua
Hi Andrei, thanks for the suggestion. But rbd volume snapshots may only work if the filesystem is in a consistent state, which means no IO during snapshotting. With cronjob snapshotting, we usually have no control over clients doing IO. Having said that, having regular snapshotting is s
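Where the guest filesystem is reachable, one way to quiesce IO around the snapshot is fsfreeze (a sketch; the mountpoint, pool and image names are illustrative):

    # freeze the filesystem, take the RBD snapshot, then thaw
    fsfreeze -f /mnt/data
    rbd snap create rbd/myimage@nightly-$(date +%Y%m%d)
    fsfreeze -u /mnt/data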

Re: [ceph-users] Best practices on Filesystem recovery on RBD block volume?

2014-09-10 Thread Andrei Mikhailovsky
Keith, You should consider doing regular rbd volume snapshots and keep them for N amount of hours/days/months depending on your needs. Cheers Andrei - Original Message - From: "Keith Phua" To: ceph-users@lists.ceph.com Cc: y...@nus.edu.sg, cheechi...@nus.edu.sg, eng...@nus.edu.

[ceph-users] why one osd-op from client can get two osd-op-reply?

2014-09-10 Thread yuelongguang
hi all, i have recently been debugging ceph rbd, and the log shows that one write to an osd can get two replies; the difference between them is the seq. why? thanks ---log- reader got message 6 0x7f58900010a0 osd_op_reply(15 rbd_data.19d92ae8944a.0001 [set-alloc-hint object_size 4194304 write_

[ceph-users] bad performance of leveldb on 0.85

2014-09-10 Thread 廖建锋
Dear list, has anybody compared leveldb performance between 0.80.5 and 0.85? In my previous cluster (0.80.5) the average writing speed was 10MB-15MB; in the current cluster (0.85) the average writing speed is 5M-8M. What is going on? Could the leveldb superblock on the disk cause this?

Re: [ceph-users] Problem with customized crush rule for EC pool

2014-09-10 Thread Loic Dachary
Right: I thought about data loss, but what you're after is data availability. Thanks for explaining :-) On 10/09/2014 04:29, Lei Dong wrote: > Yes, my goal is to make it so that losing 3 OSDs does not lose data. > > My 6 racks may not be in different rooms but they use 6 different > switches, so I want
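One way to express that placement intent at pool-creation time, as opposed to hand-editing the crush rule, is via the erasure-code profile (a sketch with illustrative k/m and pg counts; the poster's actual profile isn't shown in the thread):

    # profile that places each chunk in a different rack
    ceph osd erasure-code-profile set rack-profile \
        k=3 m=3 ruleset-failure-domain=rack
    ceph osd pool create ecpool 1024 1024 erasure rack-profile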