There is no need, but it could happen accidentally, so I think it should be
protected against, or should not be permissible.
On Mon, Apr 17, 2017 at 8:36 AM, Atin Mukherjee wrote:
On Mon, 17 Apr 2017 at 08:23, ABHISHEK PALIWAL wrote:
Hi All,
Here are the steps to reproduce the issue.
Reproduction steps:
root@128:~# gluster volume create brick 128.224.95.140:/tmp/brick force
(creates the gluster volume)
volume create: brick: success: please start the volume to access data
root@128:~# gluster volume set brick
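The size difference under discussion can be measured directly on the brick host. A minimal sketch using GNU `du` (the brick path defaults to `/tmp/brick` from the reproduction above; the `mkdir -p` only makes a dry run possible on a machine without the brick):

```shell
# Sketch: split a brick's disk usage into user data vs. the .glusterfs
# metadata tree. BRICK defaults to the path used in the reproduction above.
BRICK="${BRICK:-/tmp/brick}"
mkdir -p "$BRICK/.glusterfs"                             # allow a dry run
du -sk "$BRICK" | awk '{print $1}'                       # total KiB, incl. .glusterfs
du -sk --exclude=.glusterfs "$BRICK" | awk '{print $1}'  # user data only
du -sk "$BRICK/.glusterfs" | awk '{print $1}'            # metadata tree alone
```

If the third number dominates the second, the growth is in gluster's metadata tree rather than in user data.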
On Thu, Apr 13, 2017 at 12:19 PM, ABHISHEK PALIWAL
wrote:
> Yes, it is ext4. But what is the impact of this?
>
Did you have a lot of data before, and did you delete all of it? If I
remember correctly, ext4 doesn't decrease the size of a directory once it
has expanded it. So in ext4
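The ext4 behaviour described here is easy to observe: a directory's inode grows as entries are added, and on ext4 its reported size does not shrink back after the entries are deleted. A small sketch (the final value depends on the backing filesystem, so only the growth is guaranteed):

```python
import os
import tempfile

d = tempfile.mkdtemp()
empty = os.stat(d).st_size          # size of a freshly created directory

# Create many entries so the directory inode has to expand.
for i in range(2000):
    open(os.path.join(d, "file%04d" % i), "w").close()
expanded = os.stat(d).st_size       # larger: the directory grew

# Remove every entry again.
for name in os.listdir(d):
    os.unlink(os.path.join(d, name))
emptied = os.stat(d).st_size        # on ext4 this stays at the expanded size

print(empty, expanded, emptied)
```

On ext4 the last two numbers are typically equal, which is why a brick directory that once held many files keeps reporting a large size.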
On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> Yes
>
On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL
wrote:
> Means the fs where this brick has been created?
> On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri"
> wrote:
>
>> Is your backend filesystem ext4?
On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL
wrote:
> No, we are not using sharding
> On Apr 12, 2017 7:29 PM, "Alessandro Briosi" wrote:
>
On 12/04/2017 14:16, ABHISHEK PALIWAL wrote:
> I did more investigation and found that the brick directory size is
> equivalent to the gluster mount point, but .glusterfs shows a large
> difference
>
You are probably using sharding?
Have a good day.
/Alessandro Briosi/
*METAL.it Nord S.r.l.*
Via
From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
To: "Ashish Pandey" <aspan...@redhat.com>
Cc: "gluster-users" <gluster-users@gluster.org>, "Gluster Devel" <gluster-de...@gluster.org>
Sent: Friday, April 7, 2017 2:28:46 PM
Subject: Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption
Hi Ashish,
I don't think the count of files on the mount point and under .glusterfs/
will remain the same. I created one file on the gluster mount point, but
under .glusterfs/ the count increased by 3. The reason is that gluster
creates .glusterfs/xx/xx/x..., which is two parent directories and
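The layout described above is GlusterFS's standard one: each file gets a hard link under `.glusterfs/`, placed in two directory levels named after the first four hex characters of the file's GFID, so one new file can add up to three entries (two parent directories plus the link). A sketch of the path computation (the GFID value is made up for illustration):

```python
import os

def gfid_backend_path(brick_root, gfid):
    """Return the .glusterfs hard-link path for a file's GFID.

    The first two hex characters of the GFID name the first directory
    level and the next two name the second, which is why creating one
    file can add three entries under .glusterfs/.
    """
    g = gfid.lower()
    return os.path.join(brick_root, ".glusterfs", g[0:2], g[2:4], g)

# Hypothetical GFID, for illustration only.
print(gfid_backend_path("/tmp/brick", "0f16e756-24a4-4b11-9a5d-6d0466cc1d99"))
# -> /tmp/brick/.glusterfs/0f/16/0f16e756-24a4-4b11-9a5d-6d0466cc1d99
```

Because these are hard links, the data blocks are not duplicated, but the directory entries themselves (and ext4's unshrinking directories) can make `.glusterfs/` look disproportionately large.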
Hi Ashish,
Even if there is old data, it should be cleared by gluster itself, right?
Or do you want to do it manually?
Regards,
Abhishek
On Fri, Apr 7, 2017 at 1:31 PM, Ashish Pandey wrote:
>
> Are you sure that the bricks which you used for this volume did not have
>