GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-04-07-ba892262
___
Gluster-devel mailing list
Gluster-devel@gluster.org
On Fri, Apr 7, 2017 at 7:57 AM, Mackay, Michael wrote:
> I've updated my patch to work with glusterfs 3.10.0. I thought that
> targeting the latest stable baseline would be best.
>
> Could I ask for a starting point for submitting the change? I see a place
> to submit a change on git, but if you could point me to a starting point in
> the whole process I can take
Hi
Thanks for your quick answer.
My first problem was that I was not performing the operation on a directory;
I have corrected that.
After that correction I found another mistake, related to the dist_layout.
After changing the field and storing it, when I continue with another
operation the disk layout does not match the local one.
Does that mean that if old data is present in the brick and the volume is
not, it should still be visible in our brick directory /opt/lvmdir/c2/brick?
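The layout mismatch described above can usually be inspected directly on the brick: GlusterFS's DHT translator stores each directory's layout (hash range) in the trusted.glusterfs.dht extended attribute. A minimal sketch of checking it, assuming the brick path from this thread; the guard makes it degrade gracefully on hosts without getfattr or a real brick:

```shell
# Sketch: read the DHT layout xattr from a brick directory.
BRICK_DIR="/opt/lvmdir/c2/brick"   # brick path mentioned in this thread
if command -v getfattr >/dev/null 2>&1 && [ -d "$BRICK_DIR" ]; then
    # The hex value encodes the hash range assigned to this brick.
    MSG=$(getfattr -n trusted.glusterfs.dht -e hex "$BRICK_DIR" 2>/dev/null \
          || echo "<trusted.glusterfs.dht not set>")
else
    MSG="skipped: getfattr or $BRICK_DIR not available on this host"
fi
echo "$MSG"
```

Comparing this value across bricks (and against what the client has cached) is one way to see whether the on-disk layout diverged from the local one.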
On Fri, Apr 7, 2017 at 3:04 PM, Ashish Pandey wrote:
>
> If you are creating a fresh volume, then it is your responsibility to have
> clean
Hi Ashish,
I don't think the count of files on the mount point and under .glusterfs/
will remain the same, because I created one file on the gluster mount point
but the count under .glusterfs/ increased by 3. The reason is that it
creates .glusterfs/xx/xx/x... which is two parent dirs and
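The increase by 3 is consistent with how the backend stores files: each file gets a gfid, and .glusterfs/ holds a hard link to it at a path derived from the first two byte-pairs of the gfid, so the two intermediate directories plus the link itself account for three new entries. A sketch of the path construction (the gfid value here is made up purely for illustration):

```shell
# Illustrative gfid (hypothetical value, not taken from this thread).
GFID="7f5a3c21-9d4e-4b6a-8f01-2c3d4e5f6a7b"
# The backend hard link lives at .glusterfs/<gfid[0:2]>/<gfid[2:4]>/<gfid>.
P1=$(printf '%s' "$GFID" | cut -c1-2)
P2=$(printf '%s' "$GFID" | cut -c3-4)
BACKEND_PATH=".glusterfs/$P1/$P2/$GFID"
echo "$BACKEND_PATH"
# → .glusterfs/7f/5a/7f5a3c21-9d4e-4b6a-8f01-2c3d4e5f6a7b
```

Note the first two path components are directories (created on demand), while the final component is a hard link to the file's inode, which is why deleting the file from the mount also drops the .glusterfs entry.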
Hi Ashish,
Even if there is old data, it should be cleaned up by gluster itself, right?
Or do we have to do it manually?
Regards,
Abhishek
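For what it's worth, gluster does not wipe pre-existing brick contents for you; the usual manual cleanup before reusing a directory as a brick is to remove the old data, the .glusterfs/ backend directory, and the volume-id xattr. A sketch against a throwaway temp directory (the real brick path and the setfattr step are only noted in comments, since running them blindly would destroy data):

```shell
# Simulate a stale brick in a temp dir so the cleanup steps are safe to run.
STALE_BRICK=$(mktemp -d)
mkdir -p "$STALE_BRICK/.glusterfs/7f/5a"
touch "$STALE_BRICK/old-file"

# 1. Remove leftover data and the backend gfid store.
rm -rf "$STALE_BRICK/.glusterfs" "$STALE_BRICK/old-file"
# 2. On a real brick (e.g. /opt/lvmdir/c2/brick) you would also strip the
#    volume marker, roughly: setfattr -x trusted.glusterfs.volume-id <brick>
#    (not run here; this temp dir never had that xattr set).

if [ -e "$STALE_BRICK/.glusterfs" ]; then
    echo "cleanup failed"
else
    echo "brick dir is clean"
fi
rmdir "$STALE_BRICK"
```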
On Fri, Apr 7, 2017 at 1:31 PM, Ashish Pandey wrote:
>
> Are you sure that the bricks which you used for this volume was not having
>
Is there any update?
On Thu, Apr 6, 2017 at 12:45 PM, ABHISHEK PALIWAL
wrote:
> Hi,
>
> We are currently experiencing a serious issue with volume space usage by
> glusterfs.
>
> In the below outputs, we can see that the size of the real data in /c
> (glusterfs