FWIW, there was some discussion of this in OpenStack Swift, and their performance
tests showed that 255 is not the best stripe size on recent XFS. They decided to
use a large xattr boundary size (65535).
https://gist.github.com/smerritt/5e7e650abaa20599ff34
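To make the boundary-size trade-off concrete, here is a rough back-of-the-envelope
sketch (Python; the 381-byte value is the rgw.manifest size mentioned further down
this thread, the two boundaries are the ones discussed above, and this is only the
chunk-count arithmetic, not Swift's or Ceph's actual splitting code):

import math

def chunk_count(value_len, boundary):
    # Number of xattr key/value pairs a value of value_len bytes is striped into.
    return max(1, math.ceil(value_len / boundary))

for boundary in (255, 65535):
    print(boundary, chunk_count(381, boundary))
# boundary 255   -> 2 pairs (for this one attribute alone)
# boundary 65535 -> 1 pair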
> We've since merged something
> that stripes over several small xattrs so that we can keep things inline,
> but it hasn't been backported to hammer yet. See
> c6cdb4081e366f471b372102905a1192910ab2da.
Hi Sage:
You wrote "yet" - should we earmark it for hammer backport?
Nathan
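For reference, the idea behind that striping change is roughly the following. This
is a minimal Python sketch under assumptions, not the actual FileStore code: the
2048-byte per-piece limit and the "@<n>" chunk-name suffix are made up for
illustration.

import os

STRIPE = 2048  # assumed per-piece limit so each piece can stay inline

def set_striped_xattr(path, name, value):
    # Split one logical xattr into several small on-disk xattrs.
    pieces = [value[i:i + STRIPE] for i in range(0, len(value), STRIPE)] or [b""]
    os.setxattr(path, name, pieces[0])
    for n, piece in enumerate(pieces[1:], start=1):
        os.setxattr(path, "%s@%d" % (name, n), piece)

def get_striped_xattr(path, name):
    # Reassemble the pieces; stop at the first missing chunk name.
    value = os.getxattr(path, name)
    n = 1
    while True:
        try:
            value += os.getxattr(path, "%s@%d" % (name, n))
        except OSError:
            return value
        n += 1

# Usage (hypothetical): set_striped_xattr("/mnt/xfs/obj", "user.rgw.manifest", data)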
On Wed, 17 Jun 2015, Zhou, Yuan wrote:
> FWIW, there was some discussion of this in OpenStack Swift, and their
> performance tests showed that 255 is not the best stripe size on recent XFS.
> They decided to use a large xattr boundary size (65535).
>
> https://gist.github.com/smerritt/5e7e650abaa20599ff34
If I read this co
Hi Yuan,
Thanks for sharing the link; it is an interesting read. My understanding of the
test results is that, for a fixed total xattr size, a smaller stripe size incurs
higher read latency, which makes sense since there are more k-v pairs, and with
the size, it needs to get e
After back-porting Sage's patch to Giant, the radosgw xattrs can be kept inline.
I haven't run extensive testing yet; I will update once I have some performance
data to share.
Thanks,
Guang
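Until then, a minimal micro-benchmark along the lines of the Swift gist could look
like the sketch below (Python; the mount point, the 8 KB payload and the 255-byte
stripe are assumptions, and without dropping caches it mostly measures in-memory
lookup cost rather than cold-cache disk behaviour):

import os
import time

PATH = "/mnt/xfs/xattr_test"  # assumed scratch file on the XFS under test
payload = os.urandom(8192)

def avg_read_latency(names, iters=10000):
    # Average wall-clock time to read the whole set of xattr names once.
    start = time.perf_counter()
    for _ in range(iters):
        for n in names:
            os.getxattr(PATH, n)
    return (time.perf_counter() - start) / iters

open(PATH, "wb").close()

# One big xattr vs. the same payload striped into 255-byte pieces.
os.setxattr(PATH, "user.big", payload)
small = []
for i, off in enumerate(range(0, len(payload), 255)):
    name = "user.small.%d" % i
    os.setxattr(PATH, name, payload[off:off + 255])
    small.append(name)

print("1 x 8192B xattr :", avg_read_latency(["user.big"]))
print("%d x 255B xattrs:" % len(small), avg_read_latency(small))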
Thanks Sage for the quick response.
It is on Firefly v0.80.4.
When putting with *rados* directly, the xattrs can stay inline. The problem comes
to light when using radosgw, since we have a bunch of metadata to keep via xattrs,
including:
rgw.idtag : 15 bytes
rgw.manifest : 381 bytes
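A quick way to see what actually lands on disk is to dump the xattrs of the
corresponding filestore object file, along these lines (Python sketch; the path is
a placeholder and the exact on-disk attribute names vary by backend and version):

import os

path = "/var/lib/ceph/osd/ceph-0/current/<pg>/<object-file>"  # placeholder
total = 0
for name in os.listxattr(path):
    size = len(os.getxattr(path, name))
    total += size
    print("%-48s %d bytes" % (name, size))
print("total xattr payload: %d bytes" % total)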
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
GuangYang
Sent: Tuesday, June 16, 2015 11:31 AM
To: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com
Subject: [ceph-users] xattrs vs. omap with radosgw
Hi Cephers,
While looking at disk utilization on the OSDs, I noticed the disks were constantly
busy with a large number of small writes. Further investigation showed that,
because radosgw uses xattrs to store metadata (e.g. etag, content-type, etc.), the
xattrs spill from the inode-local (inline) format to extents, which incu