Yes, it is ext4. But what is the impact of this?
On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
> Yes
>
> On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL wrote:
>
>> You mean the fs where this brick has been created?
>> On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" wrote:
Hi,
I am using 3.10.1 with an EC disperse volume 8+2 (10 nodes, each with
one brick). One of the disks in the set failed; the disk was replaced and the
heal process started.
48 hours have passed, but out of 3.5TB only 688GB of data has been healed. How
can I speed up the heal process?
Also, when I read data which is ava
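For the heal-speed question above, GlusterFS 3.10 exposes tunables for the self-heal daemon on disperse volumes. A minimal sketch, assuming the volume is named "testvol" (a placeholder):

# Allow the self-heal daemon to heal more files in parallel
# ("testvol" is a placeholder volume name)
gluster volume set testvol disperse.shd-max-threads 4
gluster volume set testvol disperse.shd-wait-qlength 2048
# List the entries still pending heal
gluster volume heal testvol info

Raising the thread count also raises the load on the bricks, so it is usually increased gradually.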
Hi,
I think the directory Workhours_2017 was deleted on the master, and on the
slave the delete is failing because there might be stale linkto files
on the back end. These issues are fixed in DHT in the latest versions;
upgrading to the latest version would solve them.
To work around the issue, you might
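The workaround in this snippet is cut off. As a hedged sketch, stale linkto files can usually be identified on a brick as zero-byte files whose mode is exactly 1000 (---------T) and which carry the DHT linkto xattr; the brick path below is a placeholder:

# Find candidate linkto files and show their linkto target xattr
# (/bricks/brick1 is a placeholder brick path)
find /bricks/brick1 -type f -perm 1000 -size 0 |
while read -r f; do
    getfattr -n trusted.glusterfs.dht.linkto -e text "$f"
done

Each hit should be verified before anything is removed, since DHT also creates legitimate linkto files during renames and rebalance.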
Yes
On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL wrote:
> You mean the fs where this brick has been created?
> On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" wrote:
>
>> Is your backend filesystem ext4?
>>
>> On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com
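To make the answer above concrete: the backend filesystem is whatever holds the brick directory, and it can be confirmed with df or stat. A sketch, assuming the brick path is /opt/lvmdir/c2/brick as the du output later in these messages suggests:

# Print the filesystem type backing the brick directory
df -T /opt/lvmdir/c2/brick
# Alternatively, with GNU stat
stat -f -c %T /opt/lvmdir/c2/brick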
You mean the fs where this brick has been created?
On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" wrote:
> Is your backend filesystem ext4?
>
> On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL wrote:
>
>> No, we are not using sharding.
>> On Apr 12, 2017 7:29 PM, "Alessandro Briosi" wrote:
>>
Is your backend filesystem ext4?
On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL wrote:
> No, we are not using sharding.
> On Apr 12, 2017 7:29 PM, "Alessandro Briosi" wrote:
>
>> On 12/04/2017 14:16, ABHISHEK PALIWAL wrote:
>>
>> I have done more investigation and found that the brick dir s
No, we are not using sharding.
On Apr 12, 2017 7:29 PM, "Alessandro Briosi" wrote:
> On 12/04/2017 14:16, ABHISHEK PALIWAL wrote:
>
> I have done more investigation and found that the brick dir size is
> equivalent to the gluster mount point, but .glusterfs shows too much difference
>
>
> You are probably using sharding?
Hi Kotresh,
Thanks for your hint. Adding the "--ignore-missing-args" option to rsync and
restarting geo-replication worked, but this time it only managed to sync
approximately 1/3 of the data before geo-replication went into status
"Failed". Now I get a different type of error, as you can see
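For reference, the rsync flag mentioned here is normally set per geo-replication session rather than in a global rsync configuration. A sketch, where mastervol, slavehost and slavevol are placeholder names:

# Pass the extra option to the rsync invocations of this session
gluster volume geo-replication mastervol slavehost::slavevol \
    config rsync-options "--ignore-missing-args"
# Restart the session so the new option takes effect
gluster volume geo-replication mastervol slavehost::slavevol stop
gluster volume geo-replication mastervol slavehost::slavevol start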
On 12/04/2017 14:16, ABHISHEK PALIWAL wrote:
> I have done more investigation and found that the brick dir size is
> equivalent to the gluster mount point, but .glusterfs shows too much
> difference
>
You are probably using sharding?
Have a good day.
/Alessandro Briosi/
*METAL.it Nord S.r.l.*
Via M
I have done more investigation and found that the brick dir size is
equivalent to the gluster mount point, but .glusterfs shows too much difference

opt/lvmdir/c2/brick
# du -sch *
96K     RNC_Exceptions
36K     configuration
63M     java
176K
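Context for the size mismatch discussed above: .glusterfs keeps a hardlink to every regular file on the brick (and symlinks for directories), so it should add almost no real space of its own. Files there whose link count has dropped to 1 are orphaned gfid entries, a common cause of .glusterfs looking oversized. A hedged check, assuming the full brick path is /opt/lvmdir/c2/brick:

# Regular files under .glusterfs normally have at least two hardlinks
# (the gfid entry plus the real file); link count 1 means a stale entry
find /opt/lvmdir/c2/brick/.glusterfs -type f -links 1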