[Gluster-devel] regression test failure in mainline

2018-02-09 Thread Atin Mukherjee
FYI. One of my patches in mainline has broken a regression test,
tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t,
and a fix for the same has been posted at
https://review.gluster.org/#/c/19536 . I would request someone to review
this change and merge it ASAP so that other patches do not get blocked
because of it.

Re: [Gluster-devel] Release 4.0: Release notes (please read and contribute)

2018-02-09 Thread Ravishankar N



On 02/10/2018 01:24 AM, Shyam Ranganathan wrote:

> On 02/02/2018 10:26 AM, Ravishankar N wrote:
>
>>>> 2) "Replace MD5 usage to enable FIPS support" - Ravi, Amar
>>
>> + Kotresh, who has done most (all, to be precise) of the patches listed in
>> https://github.com/gluster/glusterfs/issues/230, in case he would like
>> to add anything.
>>
>> There is pending work for this w.r.t. rolling upgrade support. I hope
>> to work on this next week, but I cannot commit to anything looking at other
>> things in my queue :(.
>
> I have this confusion reading the issue comments: if one of the
> servers is updated, and other server(s) in the replica are still old,
> would the self-heal daemon work without the fix?
>
> From the comment [1], I understand that the new node's self-heal daemon
> would crash.
>
> If the above is true and the fix is to be at MD5 till  then
> that is a must-fix before the release, as there is no way to upgrade and
> handle heals, and hence avoid split-brains later, as I understand.
>
> Where am I going wrong? Or is this understanding correct?
This is right. In the new node, the shd (afr) requests the checksum from
both bricks (assuming a 1x2 setup). When saving the checksum in its local
structures in the cbk (see __checksum_cbk,
https://github.com/gluster/glusterfs/blob/release-4.0/xlators/cluster/afr/src/afr-self-heal-data.c#L45),
it does a memcpy of SHA256_DIGEST_LENGTH bytes even if the older
brick sends only MD5_DIGEST_LENGTH bytes. It might or might not crash, but it
is an illegal memory access.


The summary of the changes we have in mind:
- Restore md5sum in the code.
- Have a volume set option for the posix xlator tied to
GD_OP_VERSION_4_0_0. By default, without this option set,
posix_rchecksum will still send MD5SUM.
- Amar has introduced a flag in gfx_rchecksum_rsp. At the brick side,
set that flag to 1 only if we are sending SHA256.
- Change the rchecksum fop_cbk signature to include the flag (or maybe
capture the flag in the response xdata dict instead?).
- In afr, depending on whether the flag is set or not, memcpy the
appropriate length.
- After the upgrade is complete and the cluster op-version becomes
GD_OP_VERSION_4_0_0, the user can set the volume option and from then
onwards rchecksum will use SHA256.
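
To make the afr side concrete, here is a minimal sketch of the flag-based
copy (the field and function names below are illustrative only, not the
actual ones from the patch):

    /* Rough sketch only: copy as many digest bytes as the brick actually
     * sent, keyed off a hypothetical "is_sha256" flag in the rchecksum
     * response. */
    #include <string.h>
    #include <openssl/md5.h>   /* MD5_DIGEST_LENGTH (16) */
    #include <openssl/sha.h>   /* SHA256_DIGEST_LENGTH (32) */

    struct rchecksum_reply {
        int is_sha256;                  /* 0 => old brick replied with MD5 */
        unsigned char digest[SHA256_DIGEST_LENGTH];
    };

    static void
    save_strong_checksum(unsigned char *dst, const struct rchecksum_reply *rsp)
    {
        size_t len = rsp->is_sha256 ? SHA256_DIGEST_LENGTH : MD5_DIGEST_LENGTH;
        memcpy(dst, rsp->digest, len);  /* never read past what was sent */
    }

The actual patch would carry the flag through the fop_cbk (or xdata) as
listed above; the point is simply that the memcpy length is no longer
hard-coded to SHA256_DIGEST_LENGTH.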


Regards,
Ravi



>> To add more clarity, for a fresh setup (clients + servers) in 4.0,
>> enabling FIPS works fine. But we need to handle the case of old servers and
>> new clients and vice versa. If this can be considered a bug fix, then
>> here is my attempt at the release notes for this fix:
>>
>> "Previously, if gluster was run on a FIPS-enabled system, it used to
>> crash because gluster used the MD5 checksum in various places like self-heal
>> and geo-rep. This has been fixed by replacing MD5 with SHA256, which is
>> FIPS compliant."
>>
>> I'm happy to update the above text in doc/release-notes/4.0.0.md and
>> send it on gerrit for review.
>
> I can take care of this, no worries. I need the information so that I
> do not misrepresent it :). Provided any which way is fine...

[1] https://github.com/gluster/glusterfs/issues/230#issuecomment-358293386



Re: [Gluster-devel] Release 4.0: Release notes (please read and contribute)

2018-02-09 Thread Shyam Ranganathan
On 02/02/2018 10:26 AM, Ravishankar N wrote:
>>> 2) "Replace MD5 usage to enable FIPS support" - Ravi, Amar
> 
> + Kotresh, who has done most (all, to be precise) of the patches listed in
> https://github.com/gluster/glusterfs/issues/230, in case he would like
> to add anything.
> 
> There is pending work for this w.r.t. rolling upgrade support. I hope
> to work on this next week, but I cannot commit to anything looking at other
> things in my queue :(.

I have this confusion reading the issue comments: if one of the
servers is updated, and other server(s) in the replica are still old,
would the self-heal daemon work without the fix?

From the comment [1], I understand that the new node's self-heal daemon
would crash.

If the above is true and the fix is to be at MD5 till  then
that is a must-fix before the release, as there is no way to upgrade and
handle heals, and hence avoid split-brains later, as I understand.

Where am I going wrong? Or is this understanding correct?

> To add more clarity, for a fresh setup (clients + servers) in 4.0,
> enabling FIPS works fine. But we need to handle the case of old servers and
> new clients and vice versa. If this can be considered a bug fix, then
> here is my attempt at the release notes for this fix:
> 
> "Previously, if gluster was run on a FIPS-enabled system, it used to
> crash because gluster used the MD5 checksum in various places like self-heal
> and geo-rep. This has been fixed by replacing MD5 with SHA256, which is
> FIPS compliant."
> 
> I'm happy to update the above text in doc/release-notes/4.0.0.md and
> send it on gerrit for review.

I can take care of this, no worries. I need the information so that I
do not misrepresent it :). Provided any which way is fine...

[1] https://github.com/gluster/glusterfs/issues/230#issuecomment-358293386

[Gluster-devel] Coverity covscan for 2018-02-09-cb0339f9 (master branch)

2018-02-09 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-02-09-cb0339f9


Re: [Gluster-devel] Glusterfs and Structured data

2018-02-09 Thread Raghavendra Gowdappa


- Original Message -
> From: "Pranith Kumar Karampuri" 
> To: "Raghavendra G" 
> Cc: "Gluster Devel" 
> Sent: Friday, February 9, 2018 2:30:59 PM
> Subject: Re: [Gluster-devel] Glusterfs and Structured data
> 
> 
> 
> On Thu, Feb 8, 2018 at 12:05 PM, Raghavendra G < raghaven...@gluster.com >
> wrote:
> 
> 
> 
> 
> 
> On Tue, Feb 6, 2018 at 8:15 PM, Vijay Bellur < vbel...@redhat.com > wrote:
> 
> 
> 
> 
> 
> On Sun, Feb 4, 2018 at 3:39 AM, Raghavendra Gowdappa < rgowd...@redhat.com >
> wrote:
> 
> 
> All,
> 
> One of our users pointed to the documentation, which says that glusterfs is not good
> for storing "Structured data" [1], while discussing an issue [2].
> 
> 
> As far as I remember, the content around structured data in the Install Guide
> is from a FAQ that was being circulated in Gluster, Inc. indicating the
> startup's market positioning. Most of that was based on not wanting to get
> into performance based comparisons of storage systems that are frequently
> seen in the structured data space.
> 
> 
> Do any of you have more context on the feasibility of storing "structured
> data" on Glusterfs? Is one of the reasons for such a suggestion "staleness
> of metadata" as encountered in bugs like [3]?
> 
> 
> There are challenges that distributed storage systems face when exposed to
> applications that were written for a local filesystem interface. We have
> encountered problems with applications like tar [4] that are not in the
> realm of "Structured data". If we look at the common theme across all these
> problems, it is related to metadata & read after write consistency issues
> with the default translator stack that gets exposed on the client side.
> While the default stack is optimal for other scenarios, it does seem that a
> category of applications needing strict metadata consistency is not well
> served by that. We have observed that disabling a few performance
> translators and tuning cache timeouts for VFS/FUSE have helped to overcome
> some of them. The WIP effort on timestamp consistency across the translator
> stack, patches that have been merged as a result of the bugs that you
> mention & other fixes for outstanding issues should certainly help in
> catering to these workloads better with the file interface.
> 
> There are deployments that I have come across where glusterfs is used for
> storing structured data. gluster-block & qemu-libgfapi overcome the metadata
> consistency problem by exposing a file as a block device & by disabling most
> of the performance translators in the default stack. Workloads that have
> been deemed problematic with the file interface for the reasons alluded
> above, function well with the block interface.
> 
> I agree that gluster-block, due to its usage of a subset of glusterfs fops
> (mostly reads/writes, I guess), runs into fewer consistency issues.
> However, as you've mentioned, we seem to have disabled the perf xlator stack
> in our tests/use-cases till now. Note that the perf xlator stack is one of
> the worst offenders as far as metadata consistency is concerned (with
> relatively fewer scenarios of data inconsistency). So, I wonder,
> * what would be the scenario if we enable perf xlator stack for
> gluster-block?
> * Is performance on gluster-block satisfactory so that we don't need these
> xlators?
> - Or is it that these xlators are not useful for the workload usually run on
> gluster-block (For random read/write workload, read/write caching xlators
> offer less or no advantage)?
> 
> Yes. They are not useful. Block/VM files are opened with O_DIRECT, so we
> don't enable caching at any layer in glusterfs. md-cache could be useful for
> serving fstat from glusterfs. But apart from that I don't see any other
> xlator contributing much.
> 
> 
> 
> - Or theoretically the workload ought to benefit from perf xlators, but we
> don't see that in our results (there are open bugs to this effect)?
> 
> I am asking these questions to ascertain priority on fixing perf xlators for
> (meta)data inconsistencies. If we offer a different solution for these
> workloads, the need for fixing these issues will be less.
> 
> My personal opinion is that both block and fs should work correctly, i.e.,
> caching xlators shouldn't lead to inconsistency issues.

+1. That's my personal opinion too. We'll try to fix these issues. However, we
need to qualify the fixes. It would be helpful if the community can help here.
We'll let the community know when the fixes are in.

> It would be better
> if we are in a position where we choose a workload on block vs fs based on
> their performance for that workload and nothing else. Block/VM usecases
> change the workload of the application for glusterfs, so for small file
> operations the kind of performance you see on block can never be achieved by
> glusterfs with the current architecture/design.
> 
> 
> 
> 
> 
> 
> 
> I feel that we have come a long way from the time the 

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: Release notes (please read and contribute)

2018-02-09 Thread Pranith Kumar Karampuri
On Tue, Jan 30, 2018 at 3:40 AM, Shyam Ranganathan 
wrote:

> Hi,
>
> I have posted an initial draft version of the release notes here [1].
>
> I would like to *suggest* the following contributors to help improve and
> finish the release notes by 6th Feb, 2018. As you read this mail, if
> you feel you cannot contribute, do let us know, so that we can find the
> appropriate contributors for the same.
>
> NOTE: Please use the release tracker to post patches that modify the
> release notes; the bug ID is *1539842* (see [2]).
>
> 1) Aravinda/Kotresh: Geo-replication section in the release notes
>
> 2) Kaushal/Aravinda/ppai: GD2 section in the release notes
>
> 3) Du/Poornima/Pranith: Performance section in the release notes
>

https://review.gluster.org/19535 is posted for EC changes.


>
> 4) Amar: monitoring section in the release notes
>
> Following are individual call outs for certain features:
>
> 1) "Ability to force permissions while creating files/directories on a
> volume" - Niels
>
> 2) "Replace MD5 usage to enable FIPS support" - Ravi, Amar
>
> 3) "Dentry fop serializer xlator on brick stack" - Du
>
> 4) "Add option to disable nftw() based deletes when purging the landfill
> directory" - Amar
>
> 5) "Enhancements for directory listing in readdirp" - Nithya
>
> 6) "xlators should not provide init(), fini() and others directly, but
> have class_methods" - Amar
>
> 7) "New on-wire protocol (XDR) needed to support iattx and cleaner
> dictionary structure" - Amar
>
> 8) "The protocol xlators should prevent sending binary values in a dict
> over the networks" - Amar
>
> 9) "Translator to handle 'global' options" - Amar
>
> Thanks,
> Shyam
>
> [1] GitHub link to the draft release notes:
> https://github.com/gluster/glusterfs/blob/release-4.0/doc/release-notes/4.0.0.md
>
> [2] Initial gerrit patch for the release notes:
> https://review.gluster.org/#/c/19370/



-- 
Pranith

Re: [Gluster-devel] Glusterfs and Structured data

2018-02-09 Thread Pranith Kumar Karampuri
On Thu, Feb 8, 2018 at 12:05 PM, Raghavendra G 
wrote:

>
>
> On Tue, Feb 6, 2018 at 8:15 PM, Vijay Bellur  wrote:
>
>>
>>
>> On Sun, Feb 4, 2018 at 3:39 AM, Raghavendra Gowdappa wrote:
>>
>>> All,
>>>
>>> One of our users pointed to the documentation, which says that glusterfs is not
>>> good for storing "Structured data" [1], while discussing an issue [2].
>>
>>
>>
>> As far as I remember, the content around structured data in the Install
>> Guide is from a FAQ that was being circulated in Gluster, Inc. indicating
>> the startup's market positioning. Most of that was based on not wanting to
>> get into performance based comparisons of storage systems that are
>> frequently seen in the structured data space.
>>
>>
>>> Do any of you have more context on the feasibility of storing
>>> "structured data" on Glusterfs? Is one of the reasons for such a suggestion
>>> "staleness of metadata" as encountered in bugs like [3]?
>>>
>>
>>
>> There are challenges that distributed storage systems face when exposed
>> to applications that were written for a local filesystem interface. We have
>> encountered problems with applications like tar [4] that are not in the
>> realm of "Structured data". If we look at the common theme across all these
>> problems, it is related to metadata & read after write consistency issues
>> with the default translator stack that gets exposed on the client side.
>> While the default stack is optimal for other scenarios, it does seem that a
>> category of applications needing strict metadata consistency is not well
>> served by that. We have observed that disabling a few performance
>> translators and tuning cache timeouts for VFS/FUSE have helped to overcome
>> some of them. The WIP effort on timestamp consistency across the translator
>> stack, patches that have been merged as a result of the bugs that you
>> mention & other fixes for outstanding issues should certainly help in
>> catering to these workloads better with the file interface.
>>
>> There are deployments that I have come across where glusterfs is used for
>> storing structured data. gluster-block  & qemu-libgfapi overcome the
>> metadata consistency problem by exposing a file as a block device & by
>> disabling most of the performance translators in the default stack.
>> Workloads that have been deemed problematic with the file interface for the
>> reasons alluded above, function well with the block interface.
>>
>
> I agree that gluster-block, due to its usage of a subset of glusterfs fops
> (mostly reads/writes, I guess), runs into fewer consistency issues.
> However, as you've mentioned, we seem to have disabled the perf xlator stack
> in our tests/use-cases till now. Note that the perf xlator stack is one of
> the worst offenders as far as metadata consistency is concerned (with
> relatively fewer scenarios of data inconsistency). So, I wonder,
> * what would be the scenario if we enable perf xlator stack for
> gluster-block?
> * Is performance on gluster-block satisfactory so that we don't need these
> xlators?
>   - Or is it that these xlators are not useful for the workload usually
> run on gluster-block (For random read/write workload, read/write caching
> xlators offer less or no advantage)?
>

Yes. They are not useful. Block/VM files are opened with O_DIRECT, so we
don't enable caching at any layer in glusterfs. md-cache could be useful
for serving fstat from glusterfs. But apart from that I don't see any other
xlator contributing much.
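
As a rough illustration of why the caching xlators stay out of the picture
(the path and sizes below are hypothetical; the point is only the O_DIRECT
open flag, which io-cache/read-ahead honour by not caching):

    #define _GNU_SOURCE         /* for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical block-hosting file on a gluster mount */
        int fd = open("/mnt/glustervol/block-store/disk0.img",
                      O_RDWR | O_DIRECT);
        if (fd < 0)
            return 1;

        void *buf = NULL;
        /* O_DIRECT requires aligned buffers, offsets and sizes */
        if (posix_memalign(&buf, 4096, 4096) != 0) {
            close(fd);
            return 1;
        }

        /* Reads/writes on such an fd bypass client-side caching layers */
        ssize_t ret = pread(fd, buf, 4096, 0);

        free(buf);
        close(fd);
        return ret < 0;
    }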


>   - Or theoretically the workload ought to benefit from perf xlators,
> but we don't see that in our results (there are open bugs to this effect)?
>
> I am asking these questions to ascertain priority on fixing perf xlators
> for (meta)data inconsistencies. If we offer a different solution for these
> workloads, the need for fixing these issues will be less.
>

My personal opinion is that both block and fs should work correctly, i.e.,
caching xlators shouldn't lead to inconsistency issues. It would be better
if we are in a position where we choose a workload on block vs fs based on
their performance for that workload and nothing else. Block/VM usecases
change the workload of the application for glusterfs, so for small file
operations the kind of performance you see on block can never be achieved
by glusterfs with the current architecture/design.


>
> I feel that we have come a long way from the time the install guide was
>> written and an update for removing the "staleness of content" might be in
>> order there :-).
>>
>> Regards,
>> Vijay
>>
>> [4] https://bugzilla.redhat.com/show_bug.cgi?id=1058526
>>
>>
>>>
>>> [1] http://docs.gluster.org/en/latest/Install-Guide/Overview/
>>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1512691
>>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1390050
>>>
>>> regards,
>>> Raghavendra

Re: [Gluster-devel] Glusterfs and Structured data

2018-02-09 Thread Raghavendra Gowdappa
+gluster-users

Another guideline we can provide is to disable all performance xlators for
workloads requiring strict metadata consistency (even for non-gluster-block
use cases like a native FUSE mount, etc.). Note that we might still be able
to keep a few perf xlators turned on, but that will require some
experimentation. The safest and easiest approach would be to turn off the
following xlators (example commands follow the list):

* performance.read-ahead
* performance.write-behind
* performance.readdir-ahead and performance.parallel-readdir
* performance.quick-read
* performance.stat-prefetch
* performance.io-cache
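
For example, assuming a volume named testvol (the option names are exactly
the ones listed above), the commands would be:

    gluster volume set testvol performance.read-ahead off
    gluster volume set testvol performance.write-behind off
    gluster volume set testvol performance.readdir-ahead off
    gluster volume set testvol performance.parallel-readdir off
    gluster volume set testvol performance.quick-read off
    gluster volume set testvol performance.stat-prefetch off
    gluster volume set testvol performance.io-cache off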

performance.open-behind can be turned on if the application doesn't require a
file to remain accessible through an fd opened on one mount point while the
file is deleted from a different mount point. As far as metadata
inconsistencies go, I am not aware of any issues with performance.open-behind.

Please note that, as has been pointed out in different mails in this thread,
the perf xlators are one part (albeit a larger one) of the bigger problem of
metadata inconsistency.

regards,
Raghavendra

- Original Message -
> From: "Vijay Bellur" 
> To: "Raghavendra G" 
> Cc: "Raghavendra Gowdappa" , "Gluster Devel" 
> 
> Sent: Friday, February 9, 2018 1:34:25 PM
> Subject: Re: [Gluster-devel] Glusterfs and Structured data
> 
> On Wed, Feb 7, 2018 at 10:35 PM, Raghavendra G 
> wrote:
> 
> >
> >
> > On Tue, Feb 6, 2018 at 8:15 PM, Vijay Bellur  wrote:
> >
> >>
> >>
> >> On Sun, Feb 4, 2018 at 3:39 AM, Raghavendra Gowdappa wrote:
> >>
> >>> All,
> >>>
> >>> One of our users pointed to the documentation, which says that glusterfs is not
> >>> good for storing "Structured data" [1], while discussing an issue [2].
> >>
> >>
> >>
> >> As far as I remember, the content around structured data in the Install
> >> Guide is from a FAQ that was being circulated in Gluster, Inc. indicating
> >> the startup's market positioning. Most of that was based on not wanting to
> >> get into performance based comparisons of storage systems that are
> >> frequently seen in the structured data space.
> >>
> >>
> >>> Do any of you have more context on the feasibility of storing
> >>> "structured data" on Glusterfs? Is one of the reasons for such a
> >>> suggestion
> >>> "staleness of metadata" as encountered in bugs like [3]?
> >>>
> >>
> >>
> >> There are challenges that distributed storage systems face when exposed
> >> to applications that were written for a local filesystem interface. We
> >> have
> >> encountered problems with applications like tar [4] that are not in the
> >> realm of "Structured data". If we look at the common theme across all
> >> these
> >> problems, it is related to metadata & read after write consistency issues
> >> with the default translator stack that gets exposed on the client side.
> >> While the default stack is optimal for other scenarios, it does seem that
> >> a
> >> category of applications needing strict metadata consistency is not well
> >> served by that. We have observed that disabling a few performance
> >> translators and tuning cache timeouts for VFS/FUSE have helped to overcome
> >> some of them. The WIP effort on timestamp consistency across the
> >> translator
> >> stack, patches that have been merged as a result of the bugs that you
> >> mention & other fixes for outstanding issues should certainly help in
> >> catering to these workloads better with the file interface.
> >>
> >> There are deployments that I have come across where glusterfs is used for
> >> storing structured data. gluster-block  & qemu-libgfapi overcome the
> >> metadata consistency problem by exposing a file as a block device & by
> >> disabling most of the performance translators in the default stack.
> >> Workloads that have been deemed problematic with the file interface for
> >> the
> >> reasons alluded above, function well with the block interface.
> >>
> >
> > I agree that gluster-block, due to its usage of a subset of glusterfs fops
> > (mostly reads/writes, I guess), runs into fewer consistency issues.
> > However, as you've mentioned, we seem to have disabled the perf xlator
> > stack in our tests/use-cases till now. Note that the perf xlator stack is
> > one of the worst offenders as far as metadata consistency is concerned
> > (with relatively fewer scenarios of data inconsistency). So, I wonder,
> > * what would be the scenario if we enable perf xlator stack for
> > gluster-block?
> >
> 
> 
> tcmu-runner opens block devices with O_DIRECT. So enabling perf xlators for
> gluster-block would not make a difference as translators like io-cache &
> read-ahead do not enable caching for open() with O_DIRECT. In addition,
> since the bulk of the operations happen to be reads & writes on large files
> with gluster-block, md-cache & quick-read are not appropriate for the stack
> that tcmu-runner operates on.
> 
> 
> 

Re: [Gluster-devel] Glusterfs and Structured data

2018-02-09 Thread Vijay Bellur
On Wed, Feb 7, 2018 at 10:35 PM, Raghavendra G 
wrote:

>
>
> On Tue, Feb 6, 2018 at 8:15 PM, Vijay Bellur  wrote:
>
>>
>>
>> On Sun, Feb 4, 2018 at 3:39 AM, Raghavendra Gowdappa > > wrote:
>>
>>> All,
>>>
>>> One of our users pointed to the documentation, which says that glusterfs is not
>>> good for storing "Structured data" [1], while discussing an issue [2].
>>
>>
>>
>> As far as I remember, the content around structured data in the Install
>> Guide is from a FAQ that was being circulated in Gluster, Inc. indicating
>> the startup's market positioning. Most of that was based on not wanting to
>> get into performance based comparisons of storage systems that are
>> frequently seen in the structured data space.
>>
>>
>>> Do any of you have more context on the feasibility of storing
>>> "structured data" on Glusterfs? Is one of the reasons for such a suggestion
>>> "staleness of metadata" as encountered in bugs like [3]?
>>>
>>
>>
>> There are challenges that distributed storage systems face when exposed
>> to applications that were written for a local filesystem interface. We have
>> encountered problems with applications like tar [4] that are not in the
>> realm of "Structured data". If we look at the common theme across all these
>> problems, it is related to metadata & read after write consistency issues
>> with the default translator stack that gets exposed on the client side.
>> While the default stack is optimal for other scenarios, it does seem that a
>> category of applications needing strict metadata consistency is not well
>> served by that. We have observed that disabling a few performance
>> translators and tuning cache timeouts for VFS/FUSE have helped to overcome
>> some of them. The WIP effort on timestamp consistency across the translator
>> stack, patches that have been merged as a result of the bugs that you
>> mention & other fixes for outstanding issues should certainly help in
>> catering to these workloads better with the file interface.
>>
>> There are deployments that I have come across where glusterfs is used for
>> storing structured data. gluster-block  & qemu-libgfapi overcome the
>> metadata consistency problem by exposing a file as a block device & by
>> disabling most of the performance translators in the default stack.
>> Workloads that have been deemed problematic with the file interface for the
>> reasons alluded above, function well with the block interface.
>>
>
> I agree that gluster-block, due to its usage of a subset of glusterfs fops
> (mostly reads/writes, I guess), runs into fewer consistency issues.
> However, as you've mentioned, we seem to have disabled the perf xlator stack
> in our tests/use-cases till now. Note that the perf xlator stack is one of
> the worst offenders as far as metadata consistency is concerned (with
> relatively fewer scenarios of data inconsistency). So, I wonder,
> * what would be the scenario if we enable perf xlator stack for
> gluster-block?
>


tcmu-runner opens block devices with O_DIRECT. So enabling perf xlators for
gluster-block would not make a difference as translators like io-cache &
read-ahead do not enable caching for open() with O_DIRECT. In addition,
since the bulk of the operations happen to be reads & writes on large files
with gluster-block, md-cache & quick-read are not appropriate for the stack
that tcmu-runner operates on.


* Is performance on gluster-block satisfactory so that we don't need these
> xlators?
>   - Or is it that these xlators are not useful for the workload usually
> run on gluster-block (For random read/write workload, read/write caching
> xlators offer less or no advantage)?
>   - Or theoretically the workload ought to benefit from perf xlators,
> but we don't see that in our results (there are open bugs to this effect)?
>


Owing to the reasons mentioned above, most performance xlators do not seem
very useful for gluster-block workloads.


 Regards,
Vijay