Re: [CEPH-DEVEL] MAX_RBD_IMAGES

2015-09-30 Thread Dan Mick
Doesn't mean anything. It was just a medium-large number of emulated targets. The choice had nothing to do with the kernel. On 09/29/2015 08:58 PM, Shinobu Kinjo wrote: > I just want to know what that number means. > > Based on the Linux kernel, that number doesn't make sense but, for the Ceph

Re: Data-at-rest compression at EC pools blueprint

2015-09-30 Thread Samuel Just
Seems like a reasonable start. Quick back-of-the-envelope calculation suggests 2k of (logical_offset, compressed_offset) pairs per 1MB of data with 8 bytes/pair and 4k chunks, which is probably ok to stuff into an xattr. You should restructure the blueprint to make it independent of EC.
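
A quick check of that figure, assuming 8 bytes per (logical_offset, compressed_offset) pair (the pair encoding is not spelled out in the message):

    /* Rough check of the per-object mapping overhead, assuming 8 bytes per offset pair. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned data_size  = 1024 * 1024;  /* 1 MB of logical data */
        const unsigned chunk_size = 4 * 1024;     /* 4 KB compression chunks */
        const unsigned pair_size  = 8;            /* bytes per (logical_offset, compressed_offset) pair */

        unsigned pairs = data_size / chunk_size;  /* 256 chunks -> 256 pairs */
        printf("%u pairs, %u bytes of mapping metadata per MB\n",
               pairs, pairs * pair_size);         /* 256 pairs, 2048 bytes (~2k) */
        return 0;
    }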

RE: Backend ObjectStore engine performance bench with FIO

2015-09-30 Thread James (Fei) Liu-SSI
Hi Casey and Xiaoxi, The next challenge for us is to figure out why the performance is so bad with fio-ceph-objectstore. With our initial performance data, newstore is ~9 times worse than raw block device access in terms of IOPS and raw data. As I mentioned, the sync engine might be

CEPH_RBD_API: options on image create

2015-09-30 Thread Mykola Golub
Hi, It was mentioned several times earlier that it would be nice to pass options as key/value configuration pairs on image create instead of expanding rbd_create/rbd_clone/rbd_copy for every possible configuration override. What do you think about this API? Introduce rbd_image_options_t and
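
A rough sketch of what such a key/value options interface could look like in C; the type and function names below are illustrative only, not a committed librbd API:

    /* Sketch of a key/value image-options interface; names are illustrative. */
    #include <stdint.h>
    #include <rados/librados.h>   /* for rados_ioctx_t */

    typedef void *rbd_image_options_t;

    int  rbd_image_options_create(rbd_image_options_t *opts);
    int  rbd_image_options_set_uint64(rbd_image_options_t opts,
                                      int optname, uint64_t optval);
    int  rbd_image_options_set_string(rbd_image_options_t opts,
                                      int optname, const char *optval);
    void rbd_image_options_destroy(rbd_image_options_t opts);

    /* create/clone/copy would then take one options handle instead of an
     * ever-growing list of per-feature arguments */
    int  rbd_create_with_options(rados_ioctx_t io, const char *name,
                                 uint64_t size, rbd_image_options_t opts);

Callers would set options by id on the handle before calling create/clone/copy, so new overrides would not require new entry points.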

Re: Teuthology Integration to native openstack

2015-09-30 Thread Bharath Krishna
Hi, Thanks a lot for pointing to the right git and instructions. I have passed that step now and the teuthology VM got created. But the teuthology openstack command fails to parse the instance id from the JSON output of the command below: DEBUG:teuthology.misc:openstack server show -f json teuthology

Re: Teuthology Integration to native openstack

2015-09-30 Thread Loic Dachary
Hi, On 30/09/2015 07:51, Bharath Krishna wrote: > Hi, > > Thanks a lot for pointing to right git and instructions. I have passed > that step now and teuthology VM got created. > > But teuthology openstack command fails to parse the instance id from the > json format output of below command: >

Re: Teuthology Integration to native openstack

2015-09-30 Thread Bharath Krishna
Hi Loic, Does piping the command output of "openstack server show -f json" to jq alter the output format? The OpenStack version being used is Juno. Thank you Regards, M Bharath Krishna On 9/30/15, 2:20 PM, "Loic Dachary" wrote: >Hi, > >On 30/09/2015 07:51, Bharath Krishna

Re: Teuthology Integration to native openstack

2015-09-30 Thread Loic Dachary
On 30/09/2015 11:34, Bharath Krishna wrote: > Hi Loic, > > Does piping the command output of "openstack server show -f json > ” to jq alter the output format? It just displays it nicely but does not otherwise change it. > > Openstack version being used is Juno. That's also the version of

Re: Teuthology Integration to native openstack

2015-09-30 Thread Loic Dachary
Could you send me the full log privately? I suspect something else is happening (not a problem with the tools / cluster version) and I may find a clue in the logs. On 30/09/2015 12:17, Bharath Krishna wrote: > Its the same version I do have as well. > > #openstack --version > openstack 1.7.0 > >

Re: a patch to improve cephfs direct io performance

2015-09-30 Thread zhucaifeng
Hi, Yan iov_iter APIs seem unsuitable for the direct io manipulation below. The iov_iter APIs hide how to iterate over elements, whereas the dio_xxx functions below have explicit control over the iteration. They conflict with each other in principle. The patch for the newest kernel branch is below. Best

Re: Teuthology Integration to native openstack

2015-09-30 Thread Bharath Krishna
It's the same version I have as well. #openstack --version openstack 1.7.0 Thank you. Regards M Bharath Krishna On 9/30/15, 3:42 PM, "Loic Dachary" wrote: > > >On 30/09/2015 11:34, Bharath Krishna wrote: >> Hi Loic, >> >> Does piping the command output of "openstack

Re: branches! infernalis vs master, RIP next

2015-09-30 Thread Daniel Gryniewicz
On Tue, Sep 29, 2015 at 5:12 PM, Sage Weil wrote: > > 1- Target any pull request with a bug fix that should go into infernalis > at the infernalis branch. So, currently, anything targeted at both infernalis and master should have a pull request for infernalis only? Or for

Re: branches! infernalis vs master, RIP next

2015-09-30 Thread Sage Weil
On Wed, 30 Sep 2015, Daniel Gryniewicz wrote: > On Tue, Sep 29, 2015 at 5:12 PM, Sage Weil wrote: > > > > 1- Target any pull request with a bug fix that should go into infernalis > > at the infernalis branch. > > > So, currently, anything targeted at both infernalis and

Re: a patch to improve cephfs direct io performance

2015-09-30 Thread Yan, Zheng
On Wed, Sep 30, 2015 at 5:40 PM, zhucaifeng wrote: > Hi, Yan > > iov_iter APIs seem unsuitable for the direct io manipulation below. > iov_iter APIs > hide how to iterate over elements, whereas dio_xxx below explicitly control > over > the iteration. They conflict

[BUG] commit "namei: d_is_negative() should be checked before ->d_seq validation" breaks ceph-fuse

2015-09-30 Thread Yan, Zheng
Hi, Al I found that commit 766c4cbfac "namei: d_is_negative() should be checked before ->d_seq validation" breaks ceph-fuse. After that commit, lookup_fast can return -ENOENT before calling d_revalidate(). This breaks remote filesystems which allow creating/deleting files from multiple

[PATCH] ceph: fix message length computation

2015-09-30 Thread Arnd Bergmann
create_request_message() computes the maximum length of a message, but uses the wrong type for the time stamp: sizeof(struct timespec) may be 8 or 16 depending on the architecture, while sizeof(struct ceph_timespec) is always 8, and that is what gets put into the message. Found while auditing the
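
The difference is easy to demonstrate with a packed struct of two 32-bit fields mirroring the ceph_timespec wire layout (a user-space illustration, not the kernel patch itself):

    /* User-space illustration of the size mismatch; not the kernel code. */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    struct ceph_timespec_wire {
        uint32_t tv_sec;
        uint32_t tv_nsec;
    } __attribute__((packed));        /* always 8 bytes on the wire */

    int main(void)
    {
        /* struct timespec is typically 16 bytes on 64-bit builds, so sizing
         * the message by it reserves more space than is actually encoded. */
        printf("sizeof(struct timespec)           = %zu\n", sizeof(struct timespec));
        printf("sizeof(struct ceph_timespec_wire) = %zu\n", sizeof(struct ceph_timespec_wire));
        return 0;
    }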

Re: Running the rbd suite on OpenStack

2015-09-30 Thread Jason Dillaman
Excellent news. I believe the following should be solved when this OSD fix [1] is merged. That bug was causing wide-spread test failures for RBD in the latest sepia runs. http://tracker.ceph.com/issues/13309 http://tracker.ceph.com/issues/13310 [1] https://github.com/ceph/ceph/pull/6118

Re: Backend ObjectStore engine performance bench with FIO

2015-09-30 Thread Casey Bodley
Hi Xiaoxi, I pushed a new branch wip-fio-objectstore to ceph's github. I look forward to seeing James' work! Thanks, Casey - Original Message - > Hi Casey, > Would it be better if we create an integration branch on > ceph/ceph/wip-fio-objstore to allow more people to try and

Re: a patch to improve cephfs direct io performance

2015-09-30 Thread zhucaifeng
Hi, Yan dio_get_pagevlen() calculates the length sum of multiple iovs that can be combined into one page vector. If needed, the length sum may be shortened by ceph_osdc_new_request() in ceph_sync_direct_write(). Nevertheless, the number of pages, derived from the length sum, may expand across
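
A user-space sketch of the idea behind dio_get_pagevlen() (assuming 4 KB pages; this is not the kernel patch itself): adjacent iovs are folded into one page vector only while the boundary between them stays page aligned.

    /* User-space sketch, not the actual kernel code: sum the bytes of
     * consecutive iovs that can share one page vector.  Two iovs merge only
     * if the first ends on a page boundary and the second starts on one. */
    #include <stddef.h>
    #include <sys/uio.h>

    #define PAGE_SIZE 4096UL
    #define PAGE_MASK (~(PAGE_SIZE - 1))

    size_t dio_get_pagevlen_sketch(const struct iovec *iov, int nr_segs)
    {
        size_t len = 0;
        int i;

        for (i = 0; i < nr_segs; i++) {
            len += iov[i].iov_len;
            if (i + 1 == nr_segs)
                break;
            /* stop merging at the first boundary that is not page aligned */
            if (((unsigned long)iov[i].iov_base + iov[i].iov_len) & ~PAGE_MASK)
                break;
            if ((unsigned long)iov[i + 1].iov_base & ~PAGE_MASK)
                break;
        }
        return len;
    }

    int main(void)
    {
        /* two segments that meet exactly on a page boundary merge into one
         * page vector covering 1.5 pages */
        static char buf[2 * 4096] __attribute__((aligned(4096)));
        struct iovec iov[2] = {
            { .iov_base = buf,             .iov_len = PAGE_SIZE     },
            { .iov_base = buf + PAGE_SIZE, .iov_len = PAGE_SIZE / 2 },
        };
        return dio_get_pagevlen_sketch(iov, 2) == PAGE_SIZE + PAGE_SIZE / 2 ? 0 : 1;
    }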

09/30/2015 Weekly Ceph Performance Meeting IS ON!

2015-09-30 Thread Mark Nelson
8AM PST as usual! Discussion topics include Somnath's writepath PR and more updates on transparent huge pages testing and async messenger testing. Please feel free to add your own! Here are the links: Etherpad URL: http://pad.ceph.com/p/performance_weekly To join the Meeting:

Data-at-rest compression at EC pools blueprint

2015-09-30 Thread Igor Fedotov
Folks, I've just added a blueprint about adding compression to EC pools as per our previous discussion. Please find it at http://tracker.ceph.com/projects/ceph/wiki/Submissions Not sure I did that in the proper manner - by attaching a new file. Other blueprints look different on this page. Can't