Yes

On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL wrote:
> Means the fs where this brick has been created?
> On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" wrote:
>
>> Is your backend filesystem ext4?
>>
>> On Thu, Apr 13, 2017 at 6:29
No, we are not using sharding

On Apr 12, 2017 7:29 PM, "Alessandro Briosi" wrote:
> On 12/04/2017 14:16, ABHISHEK PALIWAL wrote:
>
> I have done more investigation and found out that the brick dir size is
> equivalent to the gluster mount point but .glusterfs has too much difference
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-04-12-e536bea0
___
Gluster-devel mailing list
Gluster-devel@gluster.org
I have done more investigation and found out that the brick dir size is
equivalent to the gluster mount point but .glusterfs has too much difference

opt/lvmdir/c2/brick
# du -sch *
96K     RNC_Exceptions
36K     configuration
63M     java
176K
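The apparent mismatch between the brick directory and .glusterfs can be illustrated with a small shell sketch (hypothetical paths and sizes, not the poster's actual brick): GlusterFS keeps a GFID hardlink under .glusterfs for every file on the brick, so measuring .glusterfs on its own counts the same data a second time, while a single GNU `du` invocation counts each inode only once.

```shell
# Sketch with hypothetical paths: mimic a brick file plus its GFID hardlink.
tmp=$(mktemp -d)
mkdir "$tmp/.glusterfs"
dd if=/dev/zero of="$tmp/file" bs=1M count=1 status=none
ln "$tmp/file" "$tmp/.glusterfs/gfid-link"   # same inode, as gluster does

du -sk "$tmp/.glusterfs"   # ~1024K when measured in isolation
du -sk "$tmp"              # still ~1024K: GNU du counts each inode once
rm -rf "$tmp"
```

Measuring .glusterfs separately therefore inflates the numbers; `du` over the whole brick in one invocation reflects the real on-disk usage.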
On 04/12/2017 01:57 PM, Mateusz Slupny wrote:
Hi,
I'm observing strange behavior when accessing glusterfs 3.10.0 volume
through FUSE mount: when self-healing, stat() on a file that I know has
non-zero size and is being appended to results in stat() return code 0,
and st_size being set to 0 as well.
Next week I'm planning to find a
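The check described above can be sketched in shell (hypothetical file name; uses GNU coreutils `stat`, whose `%s` format prints st_size): the point is that `stat` itself succeeds, and the question is whether the reported size reflects data already written.

```shell
# Hypothetical reproduction of the check: a file known to be non-empty
# should never report st_size == 0 when stat() itself succeeds.
f=$(mktemp)
printf 'data' > "$f"            # four bytes written before the check
size=$(stat -c %s "$f")         # GNU stat: %s prints st_size
if [ "$size" -eq 0 ]; then
  echo "WARNING: $f reports st_size == 0 despite writes"
else
  echo "st_size=$size"          # prints st_size=4 here
fi
rm -f "$f"
```

On a healthy mount this prints the true size; during the self-heal window described above, the same check would hit the warning branch.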
On Wed, Apr 12, 2017 at 12:07 PM, Atin Mukherjee wrote:
As per http://fstat.rht.gluster.org/weeks/1 the test in $Subject has failed
multiple times and is now blocking most of the patches to pass the
regression. I have a patch https://review.gluster.org/#/c/17033/ to remove
this test entirely and I have the reason in the commit message.
Can this patch