Re: [Gluster-users] Failed to get quota limits

2018-02-27 Thread Hari Gowtham
Hi,

Yes. This bug is fixed in 3.12.

On Wed, Feb 28, 2018 at 1:18 AM, mabi  wrote:
> Hi,
>
> Hi,
> Thanks for the link to the bug. We will hopefully be moving to 3.12 soon,
> so I guess this bug is also fixed there.
>
> Best regards,
> M.
>
>
> ‐‐‐ Original Message ‐‐‐
>
> On February 27, 2018 9:38 AM, Hari Gowtham  wrote:
>
>>
>>
>> Hi Mabi,
>>
>> The bug is fixed in 3.11. For 3.10 it is yet to be backported and
>>
>> made available.
>>
>> The bug is https://bugzilla.redhat.com/show_bug.cgi?id=1418259.
>>
>> On Sat, Feb 24, 2018 at 4:05 PM, mabi m...@protonmail.ch wrote:
>>
>> > Dear Hari,
>> >
>> > Thank you for getting back to me after having analysed the problem.
>> >
>> > As you said I tried to run "gluster volume quota <volname> list <path>" 
>> > for all of my directories which have a quota and found out that there was 
>> > one directory quota which was missing (stale) as you can see below:
>> >
>> > $ gluster volume quota myvolume list /demo.domain.tld
>> >
>> > Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit 
>> > exceeded?
>> >
>> >
>> > --
>> >
>> > /demo.domain.tld N/A N/A 8.0MB N/A N/A N/A
>> >
>> > So, as you suggested, I added the quota again on that directory and now 
>> > "list" finally works again and shows the quotas for every directory as I 
>> > defined them. That did the trick!
>> >
>> > Now do you know if this bug is already corrected in a new release of 
>> > GlusterFS? If not, do you know when it will be fixed?
>> >
>> > Again many thanks for your help here!
>> >
>> > Best regards,
>> >
>> > M.
>> >
>> > ‐‐‐ Original Message ‐‐‐
>> >
>> > On February 23, 2018 7:45 AM, Hari Gowtham hgowt...@redhat.com wrote:
>> >
>> > > Hi,
>> > >
>> > > There is a bug in 3.10 which doesn't allow the quota list command to
>> > >
>> > > output, if the last entry on the conf file is a stale entry.
>> > >
>> > > The workaround for this is to remove the stale entry at the end. (If
>> > >
>> > > the last two entries are stale then both have to be removed and so on
>> > >
>> > > until the last entry on the conf file is a valid entry).
>> > >
>> > > This can be avoided by adding a new limit. As the new limit you added
>> > >
>> > > didn't work there is another way to check this.
>> > >
>> > > Try quota list command with a specific limit mentioned in the command.
>> > >
>> > > gluster volume quota <volname> list <path>
>> > >
>> > > Make sure this path and the limit are set.
>> > >
>> > > If this works then you need to clean up the last stale entry.
>> > >
>> > > If this doesn't work we need to look further.
>> > >
>> > > Thanks Sanoj for the guidance.
>> > >
>> > > On Wed, Feb 14, 2018 at 1:36 AM, mabi m...@protonmail.ch wrote:
>> > >
>> > > > I tried to set the limits as you suggest by running the following 
>> > > > command.
>> > > >
>> > > > $ sudo gluster volume quota myvolume limit-usage /directory 200GB
>> > > >
>> > > > volume quota : success
>> > > >
>> > > > but then when I list the quotas there is still nothing, so nothing 
>> > > > really happened.
>> > > >
>> > > > I also tried to run stat on all directories which have a quota but 
>> > > > nothing happened either.
>> > > >
>> > > > I will send you tomorrow all the other logfiles as requested.
>> > > >
>> > > > -------- Original Message --------
>> > > >
>> > > > On February 13, 2018 12:20 PM, Hari Gowtham hgowt...@redhat.com wrote:
>> > > >
>> > > > > Were you able to set new limits after seeing this error?
>> > > > >
>> > > > > On Tue, Feb 13, 2018 at 4:19 PM, Hari Gowtham hgowt...@redhat.com 
>> > > > > wrote:
>> > > > >
>> > > > > > Yes, I need the log files in that duration, the log rotated file 
>> > > > > > after
>> > > > > >
>> > > > > > hitting the
>> > > > > >
>> > > > > > issue aren't necessary, but the ones before hitting the issues are 
>> > > > > > needed
>> > > > > >
>> > > > > > (not just when you hit it, the ones even before you hit it).
>> > > > > >
>> > > > > > Yes, you have to do a stat from the client through fuse mount.
>> > > > > >
>> > > > > > On Tue, Feb 13, 2018 at 3:56 PM, mabi m...@protonmail.ch wrote:
>> > > > > >
>> > > > > > > Thank you for your answer. This problem seems to have started 
>> > > > > > > last week, so should I also send you the same log files 
>> > > > > > > but for last week? I think logrotate rotates them on a weekly 
>> > > > > > > basis.
>> > > > > > >
>> > > > > > > The only two quota commands we use are the following:
>> > > > > > >
>> > > > > > > gluster volume quota myvolume limit-usage /directory 10GB
>> > > > > > >
>> > > > > > > gluster volume quota myvolume list
>> > > > > > >
>> > > > > > > basically to set a new quota or to list the current quotas. The 
>> > > > > > > quota list was working in the past yes but we already had a 
>> > > > > > > similar issue where the quotas 

Re: [Gluster-users] On sharded tiered volume, only first shard of new file goes on hot tier.

2018-02-27 Thread Hari Gowtham
Hi Jeff,

Tier and shard are not supported together.
There are likely to be more bugs in this area, as not much effort
has been put into it, and I don't see this support being added in
the near future.


On Tue, Feb 27, 2018 at 11:45 PM, Jeff Byers  wrote:
> Does anyone have any ideas about how to fix, or to work-around the
> following issue?
> Thanks!
>
> Bug 1549714 - On sharded tiered volume, only first shard of new file
> goes on hot tier.
> https://bugzilla.redhat.com/show_bug.cgi?id=1549714
>
> On sharded tiered volume, only first shard of new file goes on hot tier.
>
> On a sharded tiered volume, only the first shard of a new file
> goes on the hot tier, the rest are written to the cold tier.
>
> This is unfortunate for archival applications where the hot
> tier is fast, but the cold tier is very slow. After the tier-
> promote-frequency (default 120 seconds), all of the shards do
> migrate to hot tier, but for archival applications, this
> migration is not helpful since the file is likely to not be
> accessed again just after being copied on, and it will later
> just migrate back to the cold tier.
>
> Sharding should be, and needs to be, used with very large
> archive files because of bug:
>
> Bug 1277112 - Data Tiering:File create and new writes to
> existing file fails when the hot tier is full instead of
> redirecting/flushing the data to cold tier
>
> which sharding with tiering helps mitigate.
>
> This problem occurs in GlusterFS 3.7.18 and 3.12.3, at least.
> I/O size doesn't make any difference, I tried multiple sizes.
> I didn't find any volume configuration options that helped
> in getting all of the new shards to go directly to the hot tier.
>
> # dd if=/dev/root bs=64M of=/samba/tiered-sharded-vol/file-1 count=6
> 402653184 bytes (403 MB) copied, 1.31154 s, 307 MB/s
> # ls -lrtdh /samba/tiered-sharded-vol/*
> /exports/brick-*/tiered-sharded-vol/*
> /exports/brick-*/tiered-sharded-vol/.shard/* 2>/dev/null
> -T0 Feb 27 08:58 /exports/brick-cold/tiered-sharded-vol/file-1
> -rw-r--r--  64M Feb 27 08:58 /exports/brick-hot/tiered-sharded-vol/file-1
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.1
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.2
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.3
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.4
> -rw-r--r-- 384M Feb 27 08:58 /samba/tiered-sharded-vol/file-1
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.5
> # sleep 120
> # ls -lrtdh /samba/tiered-sharded-vol/*
> /exports/brick-*/tiered-sharded-vol/*
> /exports/brick-*/tiered-sharded-vol/.shard/* 2>/dev/null
> -T0 Feb 27 08:58 /exports/brick-cold/tiered-sharded-vol/file-1
> -rw-r--r--  64M Feb 27 08:58 /exports/brick-hot/tiered-sharded-vol/file-1
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.1
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.2
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.3
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.4
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.5
> -rw-r--r-- 384M Feb 27 08:58 /samba/tiered-sharded-vol/file-1
> -T0 Feb 27 09:00
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.5
> -T0 Feb 27 09:00
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.1
> -T0 Feb 27 09:00
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.2
> -T0 Feb 27 09:00
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.3
> -T0 Feb 27 09:00
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.4
>
> Volume Name: tiered-sharded-vol
> Type: Tier
> Volume ID: 8c09077a-371e-4d30-9faa-c9c76d7b1b57
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Hot Tier :
> Hot Tier Type : Distribute
> Number of Bricks: 1
> Brick1: 10.10.60.169:/exports/brick-hot/tiered-sharded-vol
> Cold Tier:
> Cold Tier Type : Distribute
> Number of Bricks: 1
> Brick2: 10.10.60.169:/exports/brick-cold/tiered-sharded-vol
> Options Reconfigured:
> cluster.tier-mode: cache
> features.ctr-enabled: on
> features.shard: on
> features.shard-block-size: 64MB
> server.allow-insecure: on
> performance.quick-read: off
> performance.stat-prefetch: off
> 

Re: [Gluster-users] df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)

2018-02-27 Thread Nithya Balachandran
Hi Jose,

There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.

The "shared-brick-count" values seem fine on stor1. Please send us "grep -n
"share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes
so we can check if they are the cause.
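
If it helps, a minimal sketch of gathering that from the remaining nodes in one go
(hostnames taken from the brick list in this thread; it assumes passwordless ssh
between the nodes):

# collect the shared-brick-count lines from the other two storage nodes
for h in stor2data stor3data; do
    echo "== ${h} =="
    ssh "${h}" 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*'
done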


Regards,
Nithya



[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260

On 28 February 2018 at 03:03, Jose V. Carrión  wrote:

> Hi,
>
> Some days ago all my glusterfs configuration was working fine. Today I
> realized that the total size reported by the df command has changed and is
> smaller than the aggregate capacity of all the bricks in the volume.
>
> I checked that all the volume statuses are fine and all the glusterd daemons
> are running, and there are no errors in the logs; however, df shows a wrong total size.
>
> My configuration for one volume: volumedisk1
> [root@stor1 ~]# gluster volume status volumedisk1  detail
>
> Status of volume: volumedisk1
> 
> --
> Brick: Brick stor1data:/mnt/glusterfs/vol1/brick1
> TCP Port : 49153
> RDMA Port: 0
> Online   : Y
> Pid  : 13579
> File System  : xfs
> Device   : /dev/sdc1
> Mount Options: rw,noatime
> Inode Size   : 512
> Disk Space Free  : 35.0TB
> Total Disk Space : 49.1TB
> Inode Count  : 5273970048
> Free Inodes  : 5273123069
> 
> --
> Brick: Brick stor2data:/mnt/glusterfs/vol1/brick1
> TCP Port : 49153
> RDMA Port: 0
> Online   : Y
> Pid  : 13344
> File System  : xfs
> Device   : /dev/sdc1
> Mount Options: rw,noatime
> Inode Size   : 512
> Disk Space Free  : 35.0TB
> Total Disk Space : 49.1TB
> Inode Count  : 5273970048
> Free Inodes  : 5273124718
> 
> --
> Brick: Brick stor3data:/mnt/disk_c/glusterfs/vol1/brick1
> TCP Port : 49154
> RDMA Port: 0
> Online   : Y
> Pid  : 17439
> File System  : xfs
> Device   : /dev/sdc1
> Mount Options: rw,noatime
> Inode Size   : 512
> Disk Space Free  : 35.7TB
> Total Disk Space : 49.1TB
> Inode Count  : 5273970048
> Free Inodes  : 5273125437
> 
> --
> Brick: Brick stor3data:/mnt/disk_d/glusterfs/vol1/brick1
> TCP Port : 49155
> RDMA Port: 0
> Online   : Y
> Pid  : 17459
> File System  : xfs
> Device   : /dev/sdd1
> Mount Options: rw,noatime
> Inode Size   : 512
> Disk Space Free  : 35.6TB
> Total Disk Space : 49.1TB
> Inode Count  : 5273970048
> Free Inodes  : 5273127036
>
>
> Then the full size for volumedisk1 should be 49.1TB + 49.1TB + 49.1TB + 49.1TB
> = 196.4TB, but df shows:
>
> [root@stor1 ~]# df -h
> FilesystemSize  Used Avail Use% Mounted on
> /dev/sda2  48G   21G   25G  46% /
> tmpfs  32G   80K   32G   1% /dev/shm
> /dev/sda1 190M   62M  119M  35% /boot
> /dev/sda4 395G  251G  124G  68% /data
> /dev/sdb1  26T  601G   25T   3% /mnt/glusterfs/vol0
> /dev/sdc1  50T   15T   36T  29% /mnt/glusterfs/vol1
> stor1data:/volumedisk0
>                        76T  1,6T   74T   3% /volumedisk0
> stor1data:/volumedisk1
>                       148T   42T  106T  29% /volumedisk1
>
> Exactly one brick less: 196.4TB - 49.1TB ≈ 148TB
>
> It's a production system so I hope you can help me.
>
> Thanks in advance.
>
> Jose V.
>
>
> Below some other data of my configuration:
>
> [root@stor1 ~]# gluster volume info
>
> Volume Name: volumedisk0
> Type: Distribute
> Volume ID: 0ee52d94-1131-4061-bcef-bd8cf898da10
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 4
> Transport-type: tcp
> Bricks:
> Brick1: stor1data:/mnt/glusterfs/vol0/brick1
> Brick2: stor2data:/mnt/glusterfs/vol0/brick1
> Brick3: stor3data:/mnt/disk_b1/glusterfs/vol0/brick1
> Brick4: stor3data:/mnt/disk_b2/glusterfs/vol0/brick1
> Options Reconfigured:
> performance.cache-size: 4GB
> cluster.min-free-disk: 1%
> performance.io-thread-count: 16
> performance.readdir-ahead: on
>
> Volume Name: volumedisk1
> Type: Distribute
> Volume ID: 591b7098-800e-4954-82a9-6b6d81c9e0a2
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 4
> Transport-type: tcp
> Bricks:
> Brick1: stor1data:/mnt/glusterfs/vol1/brick1
> Brick2: stor2data:/mnt/glusterfs/vol1/brick1
> Brick3: stor3data:/mnt/disk_c/glusterfs/vol1/brick1
> Brick4: stor3data:/mnt/disk_d/glusterfs/vol1/brick1
> Options 

[Gluster-users] df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)

2018-02-27 Thread Jose V . Carrión
Hi,

Some days ago all my glusterfs configuration was working fine. Today I
realized that the total size reported by the df command has changed and is
smaller than the aggregate capacity of all the bricks in the volume.

I checked that all the volume statuses are fine and all the glusterd daemons
are running, and there are no errors in the logs; however, df shows a wrong total size.

My configuration for one volume: volumedisk1
[root@stor1 ~]# gluster volume status volumedisk1  detail

Status of volume: volumedisk1
--
Brick: Brick stor1data:/mnt/glusterfs/vol1/brick1
TCP Port : 49153
RDMA Port: 0
Online   : Y
Pid  : 13579
File System  : xfs
Device   : /dev/sdc1
Mount Options: rw,noatime
Inode Size   : 512
Disk Space Free  : 35.0TB
Total Disk Space : 49.1TB
Inode Count  : 5273970048
Free Inodes  : 5273123069
--
Brick: Brick stor2data:/mnt/glusterfs/vol1/brick1
TCP Port : 49153
RDMA Port: 0
Online   : Y
Pid  : 13344
File System  : xfs
Device   : /dev/sdc1
Mount Options: rw,noatime
Inode Size   : 512
Disk Space Free  : 35.0TB
Total Disk Space : 49.1TB
Inode Count  : 5273970048
Free Inodes  : 5273124718
--
Brick: Brick stor3data:/mnt/disk_c/glusterfs/vol1/brick1
TCP Port : 49154
RDMA Port: 0
Online   : Y
Pid  : 17439
File System  : xfs
Device   : /dev/sdc1
Mount Options: rw,noatime
Inode Size   : 512
Disk Space Free  : 35.7TB
Total Disk Space : 49.1TB
Inode Count  : 5273970048
Free Inodes  : 5273125437
--
Brick: Brick stor3data:/mnt/disk_d/glusterfs/vol1/brick1
TCP Port : 49155
RDMA Port: 0
Online   : Y
Pid  : 17459
File System  : xfs
Device   : /dev/sdd1
Mount Options: rw,noatime
Inode Size   : 512
Disk Space Free  : 35.6TB
Total Disk Space : 49.1TB
Inode Count  : 5273970048
Free Inodes  : 5273127036


Then the full size for volumedisk1 should be 49.1TB + 49.1TB + 49.1TB + 49.1TB
= 196.4TB, but df shows:

[root@stor1 ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda2  48G   21G   25G  46% /
tmpfs  32G   80K   32G   1% /dev/shm
/dev/sda1 190M   62M  119M  35% /boot
/dev/sda4 395G  251G  124G  68% /data
/dev/sdb1  26T  601G   25T   3% /mnt/glusterfs/vol0
/dev/sdc1  50T   15T   36T  29% /mnt/glusterfs/vol1
stor1data:/volumedisk0
                       76T  1,6T   74T   3% /volumedisk0
stor1data:/volumedisk1
                      148T   42T  106T  29% /volumedisk1

Exactly one brick less: 196.4TB - 49.1TB ≈ 148TB

It's a production system so I hope you can help me.

Thanks in advance.

Jose V.


Below some other data of my configuration:

[root@stor1 ~]# gluster volume info

Volume Name: volumedisk0
Type: Distribute
Volume ID: 0ee52d94-1131-4061-bcef-bd8cf898da10
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: stor1data:/mnt/glusterfs/vol0/brick1
Brick2: stor2data:/mnt/glusterfs/vol0/brick1
Brick3: stor3data:/mnt/disk_b1/glusterfs/vol0/brick1
Brick4: stor3data:/mnt/disk_b2/glusterfs/vol0/brick1
Options Reconfigured:
performance.cache-size: 4GB
cluster.min-free-disk: 1%
performance.io-thread-count: 16
performance.readdir-ahead: on

Volume Name: volumedisk1
Type: Distribute
Volume ID: 591b7098-800e-4954-82a9-6b6d81c9e0a2
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: stor1data:/mnt/glusterfs/vol1/brick1
Brick2: stor2data:/mnt/glusterfs/vol1/brick1
Brick3: stor3data:/mnt/disk_c/glusterfs/vol1/brick1
Brick4: stor3data:/mnt/disk_d/glusterfs/vol1/brick1
Options Reconfigured:
cluster.min-free-inodes: 6%
performance.cache-size: 4GB
cluster.min-free-disk: 1%
performance.io-thread-count: 16
performance.readdir-ahead: on

[root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:
option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:
option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:
option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:
option 

Re: [Gluster-users] mkdir -p, cp -R fails

2018-02-27 Thread Stefan Solbrig
Dear all,

I identified the source of the problem:

If I set "server.root-squash on", the problem is 100% reproducible;
with "server.root-squash off", the problem vanishes.

This is true for glusterfs 3.12.3, 3.12.4 and 3.12.6  (haven't tested other 
versions)
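
For anyone who wants to check whether they are affected, a minimal sketch
(the volume name glurch is taken from the mount output below; treat this as
a diagnostic/workaround sketch, not a recommendation to keep root-squash off):

# check the current value of the option
gluster volume get glurch server.root-squash
# work around the mkdir -p / cp -R failures by disabling root squashing,
# then re-run the test from the fuse mount
gluster volume set glurch server.root-squash off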

best wishes,
Stefan

-- 
Dr. Stefan Solbrig
Universität Regensburg, Fakultät für Physik,
93040 Regensburg, Germany
Tel +49-941-943-2097


> Am 21.01.2018 um 13:15 schrieb Stefan Solbrig :
> 
> Dear all,
> 
> I have a problem with glusterfs 3.12.4
> 
> mkdir -p  fails with "no data available" when umask is 0022, but works when 
> umask is 0002. 
> 
> Also recursive copy (cp -R or cp -r) fails with "no data available", 
> independently of the umask.
> 
> See below for an example to reproduce the error.  I already tried to change 
> transport from rdma to tcp. (Changing the transport works, but the error 
> persists.)
> 
> I'd be grateful for some insight.
> 
> I have small test system that still runs on glusterfs 3.12.3, where 
> everything works fine.
> 
> best wishes,
> Stefan
> 
> 
> 
> [hpcadmin@pcph00131 bug]$ mount | grep gluster 
> qloginx:/glurch.rdma on /glurch type fuse.glusterfs 
> (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072)
> 
> 
> [hpcadmin@pcph00131 bug]$ pwd
> /glurch/test/bug
> 
> [hpcadmin@pcph00131 bug]$ umask 0022 
> [hpcadmin@pcph00131 bug]$ mkdir aa #works
> [hpcadmin@pcph00131 bug]$ mkdir -p aa/bb/cc/dd 
> mkdir: cannot create directory ‘aa/bb’: No data available
> 
> 
> [hpcadmin@pcph00131 bug]$ umask 0002 
> [hpcadmin@pcph00131 bug]$ mkdir -p aa/bb/cc/dd 
> mkdir: cannot create directory ‘aa/bb’: No data available
> 
> [hpcadmin@pcph00131 bug]$ rm -rf aa 
> [hpcadmin@pcph00131 bug]$ mkdir -p aa/bb/cc/dd #works now
> # looks like all directories in the path of mkdir -p ...
> # have to be group-writable... 
> 
> [hpcadmin@pcph00131 bug]$ cp -R aa foo 
> cp: cannot create directory ‘foo/bb’: No data available
> # I cannot get this to work, independent of the umask...
> 
> [hpcadmin@pcph00131 bug]$ glusterd --version 
> glusterfs 3.12.4
> 
> [hpcadmin@pcph00131 bug]$ glusterfs  --version
> glusterfs 3.12.4
> 
> [hpcadmin@pcph00131 bug]$ glusterfsd --version
> glusterfs 3.12.4
> 
> [hpcadmin@pcph00131 bug]$ sudo gluster v status all 
> Status of volume: glurch
> Gluster process TCP Port  RDMA Port  Online  Pid
> --
> Brick qloginx:/gl/lv1osb06/glurchbrick  0 49152  Y   17623
> Brick qloginx:/gl/lv2osb06/glurchbrick  0 49153  Y   17647
> Brick gluster2x:/gl/lv3osb06/glurchbrick0 49152  Y   2845 
> Brick gluster2x:/gl/lv4osb06/glurchbrick0 49153  Y   2868 
> 
> # note: I'm mounting on the qlogin
> 
> 
> 
> -- 
> Dr. Stefan Solbrig
> Universität Regensburg, Fakultät für Physik,
> 93040 Regensburg, Germany
> Tel +49-941-943-2097
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Failed to get quota limits

2018-02-27 Thread mabi
Hi,

Thanks for the link to the bug. We will hopefully be moving to 3.12 soon, so 
I guess this bug is also fixed there.

Best regards,
M.

‐‐‐ Original Message ‐‐‐

On February 27, 2018 9:38 AM, Hari Gowtham  wrote:

> 
> 
> Hi Mabi,
> 
> The bug is fixed in 3.11. For 3.10 it is yet to be backported and
> 
> made available.
> 
> The bug is https://bugzilla.redhat.com/show_bug.cgi?id=1418259.
> 
> On Sat, Feb 24, 2018 at 4:05 PM, mabi m...@protonmail.ch wrote:
> 
> > Dear Hari,
> > 
> > Thank you for getting back to me after having analysed the problem.
> > 
> > As you said I tried to run "gluster volume quota <volname> list <path>" for 
> > all of my directories which have a quota and found out that there was one 
> > directory quota which was missing (stale) as you can see below:
> > 
> > $ gluster volume quota myvolume list /demo.domain.tld
> > 
> > Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit 
> > exceeded?
> > 
> > 
> > --
> > 
> > /demo.domain.tld N/A N/A 8.0MB N/A N/A N/A
> > 
> > So, as you suggested, I added the quota again on that directory and now 
> > "list" finally works again and shows the quotas for every directory as I 
> > defined them. That did the trick!
> > 
> > Now do you know if this bug is already corrected in a new release of 
> > GlusterFS? If not, do you know when it will be fixed?
> > 
> > Again many thanks for your help here!
> > 
> > Best regards,
> > 
> > M.
> > 
> > ‐‐‐ Original Message ‐‐‐
> > 
> > On February 23, 2018 7:45 AM, Hari Gowtham hgowt...@redhat.com wrote:
> > 
> > > Hi,
> > > 
> > > There is a bug in 3.10 which doesn't allow the quota list command to
> > > 
> > > output, if the last entry on the conf file is a stale entry.
> > > 
> > > The workaround for this is to remove the stale entry at the end. (If
> > > 
> > > the last two entries are stale then both have to be removed and so on
> > > 
> > > until the last entry on the conf file is a valid entry).
> > > 
> > > This can be avoided by adding a new limit. As the new limit you added
> > > 
> > > didn't work there is another way to check this.
> > > 
> > > Try quota list command with a specific limit mentioned in the command.
> > > 
> > > gluster volume quota <volname> list <path>
> > > 
> > > Make sure this path and the limit are set.
> > > 
> > > If this works then you need to clean up the last stale entry.
> > > 
> > > If this doesn't work we need to look further.
> > > 
> > > Thanks Sanoj for the guidance.
> > > 
> > > On Wed, Feb 14, 2018 at 1:36 AM, mabi m...@protonmail.ch wrote:
> > > 
> > > > I tried to set the limits as you suggest by running the following 
> > > > command.
> > > > 
> > > > $ sudo gluster volume quota myvolume limit-usage /directory 200GB
> > > > 
> > > > volume quota : success
> > > > 
> > > > but then when I list the quotas there is still nothing, so nothing 
> > > > really happened.
> > > > 
> > > > I also tried to run stat on all directories which have a quota but 
> > > > nothing happened either.
> > > > 
> > > > I will send you tomorrow all the other logfiles as requested.
> > > > 
> > > > -------- Original Message --------
> > > > 
> > > > On February 13, 2018 12:20 PM, Hari Gowtham hgowt...@redhat.com wrote:
> > > > 
> > > > > Were you able to set new limits after seeing this error?
> > > > > 
> > > > > On Tue, Feb 13, 2018 at 4:19 PM, Hari Gowtham hgowt...@redhat.com 
> > > > > wrote:
> > > > > 
> > > > > > Yes, I need the log files in that duration, the log rotated file 
> > > > > > after
> > > > > > 
> > > > > > hitting the
> > > > > > 
> > > > > > issue aren't necessary, but the ones before hitting the issues are 
> > > > > > needed
> > > > > > 
> > > > > > (not just when you hit it, the ones even before you hit it).
> > > > > > 
> > > > > > Yes, you have to do a stat from the client through fuse mount.
> > > > > > 
> > > > > > On Tue, Feb 13, 2018 at 3:56 PM, mabi m...@protonmail.ch wrote:
> > > > > > 
> > > > > > > Thank you for your answer. This problem seems to have started 
> > > > > > > last week, so should I also send you the same log files but for last 
> > > > > > > week? I think logrotate rotates them on a weekly basis.
> > > > > > > 
> > > > > > > The only two quota commands we use are the following:
> > > > > > > 
> > > > > > > gluster volume quota myvolume limit-usage /directory 10GB
> > > > > > > 
> > > > > > > gluster volume quota myvolume list
> > > > > > > 
> > > > > > > basically to set a new quota or to list the current quotas. The 
> > > > > > > quota list was working in the past yes but we already had a 
> > > > > > > similar issue where the quotas disappeared last August 2017:
> > > > > > > 
> > > > > > > http://lists.gluster.org/pipermail/gluster-users/2017-August/031946.html
> > > > > > > 
> > > > > > > In the mean time the only thing we did is to upgrade 

[Gluster-users] On sharded tiered volume, only first shard of new file goes on hot tier.

2018-02-27 Thread Jeff Byers
Does anyone have any ideas about how to fix, or to work-around the
following issue?
Thanks!

Bug 1549714 - On sharded tiered volume, only first shard of new file
goes on hot tier.
https://bugzilla.redhat.com/show_bug.cgi?id=1549714

On sharded tiered volume, only first shard of new file goes on hot tier.

On a sharded tiered volume, only the first shard of a new file
goes on the hot tier, the rest are written to the cold tier.

This is unfortunate for archival applications where the hot
tier is fast, but the cold tier is very slow. After the tier-
promote-frequency (default 120 seconds), all of the shards do
migrate to hot tier, but for archival applications, this
migration is not helpful since the file is likely to not be
accessed again just after being copied on, and it will later
just migrate back to the cold tier.

Sharding should be, and needs to be, used with very large
archive files because of bug:

Bug 1277112 - Data Tiering:File create and new writes to
existing file fails when the hot tier is full instead of
redirecting/flushing the data to cold tier

which sharding with tiering helps mitigate.

This problem occurs in GlusterFS 3.7.18 and 3.12.3, at least.
I/O size doesn't make any difference, I tried multiple sizes.
I didn't find any volume configuration options that helped
in getting all of the new shards to go directly to the hot tier.

# dd if=/dev/root bs=64M of=/samba/tiered-sharded-vol/file-1 count=6
402653184 bytes (403 MB) copied, 1.31154 s, 307 MB/s
# ls -lrtdh /samba/tiered-sharded-vol/*
/exports/brick-*/tiered-sharded-vol/*
/exports/brick-*/tiered-sharded-vol/.shard/* 2>/dev/null
-T0 Feb 27 08:58 /exports/brick-cold/tiered-sharded-vol/file-1
-rw-r--r--  64M Feb 27 08:58 /exports/brick-hot/tiered-sharded-vol/file-1
-rw-r--r--  64M Feb 27 08:58
/exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.1
-rw-r--r--  64M Feb 27 08:58
/exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.2
-rw-r--r--  64M Feb 27 08:58
/exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.3
-rw-r--r--  64M Feb 27 08:58
/exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.4
-rw-r--r-- 384M Feb 27 08:58 /samba/tiered-sharded-vol/file-1
-rw-r--r--  64M Feb 27 08:58
/exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.5
# sleep 120
# ls -lrtdh /samba/tiered-sharded-vol/*
/exports/brick-*/tiered-sharded-vol/*
/exports/brick-*/tiered-sharded-vol/.shard/* 2>/dev/null
-T0 Feb 27 08:58 /exports/brick-cold/tiered-sharded-vol/file-1
-rw-r--r--  64M Feb 27 08:58 /exports/brick-hot/tiered-sharded-vol/file-1
-rw-r--r--  64M Feb 27 08:58
/exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.1
-rw-r--r--  64M Feb 27 08:58
/exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.2
-rw-r--r--  64M Feb 27 08:58
/exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.3
-rw-r--r--  64M Feb 27 08:58
/exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.4
-rw-r--r--  64M Feb 27 08:58
/exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.5
-rw-r--r-- 384M Feb 27 08:58 /samba/tiered-sharded-vol/file-1
-T0 Feb 27 09:00
/exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.5
-T0 Feb 27 09:00
/exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.1
-T0 Feb 27 09:00
/exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.2
-T0 Feb 27 09:00
/exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.3
-T0 Feb 27 09:00
/exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.4

Volume Name: tiered-sharded-vol
Type: Tier
Volume ID: 8c09077a-371e-4d30-9faa-c9c76d7b1b57
Status: Started
Number of Bricks: 2
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 1
Brick1: 10.10.60.169:/exports/brick-hot/tiered-sharded-vol
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 1
Brick2: 10.10.60.169:/exports/brick-cold/tiered-sharded-vol
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
features.shard: on
features.shard-block-size: 64MB
server.allow-insecure: on
performance.quick-read: off
performance.stat-prefetch: off
nfs.disable: on
nfs.addr-namelookup: off
performance.readdir-ahead: on
snap-activate-on-create: enable
cluster.enable-shared-storage: disable


~ Jeff Byers ~
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-Maintainers] Release 4.0: RC1 tagged

2018-02-27 Thread Kaleb S. KEITHLEY
On 02/26/2018 02:03 PM, Shyam Ranganathan wrote:
> Hi,
> 
> RC1 is tagged in the code, and the request for packaging the same is on
> its way.
> 
> We should have packages as early as today, and request the community to
> test the same and return some feedback.
> 
> We have about 3-4 days (till Thursday) for any pending fixes and the
> final release to happen, so shout out in case you face any blockers.
> 
> The RC1 packages should land here:
> https://download.gluster.org/pub/gluster/glusterfs/qa-releases/4.0rc1/
> and like so for CentOS,
> CentOS7:
>   # yum install
> http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
>   # yum install glusterfs-server

Packages for:

* Fedora 27 and 28 (x86_64, aarch64, etc.) are at [1]; Fedora 29
(rawhide) are in rawhide.

* Debian stretch and buster (amd64) are at [1].

* CentOS 7 (x86_64, aarch64, ppc64le) are at [2]. They have been tagged
for testing and should appear soon at [3].

Please test and give feedback on gluster-d...@gluster.org or
#gluster-dev on freenode.

[1] https://download.gluster.org/pub/gluster/glusterfs/qa-releases/4.0rc1/
[2] https://cbs.centos.org/koji/taskinfo?taskID=340364
[3] https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster performance / Dell Idrac enterprise conflict

2018-02-27 Thread Ryan Wilkinson
All volumes are configured as replica 3.  I have no arbiter volumes.
Storage hosts are for storage only and Virt hosts are dedicated Virt
hosts.  I've checked throughput from the Virt hosts to all 3 gluster hosts
and am getting ~9Gb/s.
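
For reference, a rough sketch of the two checks described above (iperf3 is
assumed to be installed on the hosts; the storage hostnames and the test file
are placeholders):

# network throughput from a virt host to each of the three storage hosts
for h in storage1 storage2 storage3; do
    iperf3 -c "${h}" -t 10
done

# sequential read speed inside the test VM, dropping the page cache first
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/path/to/testfile of=/dev/null bs=1M count=4096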

On Tue, Feb 27, 2018 at 1:33 AM, Alex K  wrote:

> What is your gluster setup? Please share the volume details for where the VMs are
> stored. It could be that the slow host is hosting the arbiter volume.
>
> Alex
>
> On Feb 26, 2018 13:46, "Ryan Wilkinson"  wrote:
>
>> Here is info. about the Raid controllers.  Doesn't seem to be the culprit.
>>
>> Slow host:
>> Name PERC H710 Mini (Embedded)
>> Firmware Version 21.3.4-0001
>> Cache Memory Size 512 MB
>> Fast Host:
>>
>> Name PERC H310 Mini (Embedded)
>> Firmware Version 20.12.1-0002
>> Cache Memory Size 0 MB
>> Slow host:
>> Name PERC H310 Mini (Embedded)
>> Firmware Version 20.13.1-0002
>> Cache Memory Size 0 MB
>> Slow host:
>> Name PERC H310 Mini (Embedded)
>> Firmware Version 20.13.3-0001 Cache Memory Size 0 MB
>> Slow Host:
>> Name PERC H710 Mini (Embedded)
>> Firmware Version 21.3.5-0002
>> Cache Memory Size 512 MB
>> Fast Host
>> Perc H730
>> Cache Memory Size 1GB
>>
>> On Mon, Feb 26, 2018 at 9:42 AM, Alvin Starr  wrote:
>>
>>> I would be really surprised if the problem was related to Idrac.
>>>
>>> The Idrac processor is a standalone CPU with its own NIC and runs
>>> independently of the main CPU.
>>>
>>> That being said it does have visibility into the whole system.
>>>
>>> try using dmidecode to compare the systems and take a close look at the
>>> raid controllers and what size and form of cache they have.
>>>
>>> On 02/26/2018 11:34 AM, Ryan Wilkinson wrote:
>>>
>>> I've tested about 12 different Dell servers.  Only a couple of them have
>>> Idrac express and all the others have Idrac Enterprise.  All the boxes with
>>> Enterprise perform poorly and the couple that have express perform well.  I
>>> use the disks in raid mode on all of them.  I've tried a few non-Dell boxes
>>> and they all perform well even though some of them are very old.  I've also
>>> tried disabling Idrac, the Idrac nic, and virtual storage for Idrac with no
>>> success.
>>>
>>> On Mon, Feb 26, 2018 at 9:28 AM, Serkan Çoban 
>>> wrote:
>>>
 I don't think it is related to iDRAC itself, but some configuration
 is wrong or there is some hardware error.
 Did you check the battery of the RAID controller? Do you use the disks in JBOD
 mode or RAID mode?

 On Mon, Feb 26, 2018 at 6:12 PM, Ryan Wilkinson 
 wrote:
 > Thanks for the suggestion.  I tried both of these with no difference in
 > performance. I have tried several other Dell hosts with Idrac Enterprise and
 > am getting the same results.  I also tried a new Dell T130 with Idrac express
 > and was getting over 700 MB/s.  Have any other users had this issue with Idrac
 > Enterprise??
 >
 >
 > On Thu, Feb 22, 2018 at 12:16 AM, Serkan Çoban 
 > wrote:
 >>
 >> "Did you check the BIOS/Power settings? They should be set for high
 >> performance.
 >> Also you can try to boot "intel_idle.max_cstate=0" kernel command
 line
 >> option to be sure CPUs not entering power saving states.
 >>
 >> On Thu, Feb 22, 2018 at 9:59 AM, Ryan Wilkinson 
 >> wrote:
 >> >
 >> >
 >> > I have a 3 host gluster replicated cluster that is providing storage for
 >> > our RHEV environment.  We've been having issues with inconsistent
 >> > performance from the VMs depending on which Hypervisor they are running on.
 >> > I've confirmed throughput to be ~9Gb/s to each of the storage hosts from the
 >> > hypervisors.  I'm getting ~300MB/s disk read speed when our test vm is on
 >> > the slow Hypervisors and over 500 on the faster ones.  The performance
 >> > doesn't seem to be affected much by the cpu or memory in the
 >> > hypervisors.  I have tried a couple of really old boxes and got over 500
 >> > MB/s.  The common thread seems to be that the poorly performing hosts all
 >> > have Dell's Idrac 7 Enterprise.  I have one Hypervisor that has Idrac 7
 >> > express and it performs well.  We've compared system packages and versions
 >> > till we're blue in the face and have been struggling with this for a couple
 >> > of months, but that seems to be the only common denominator.  On one of
 >> > those Idrac 7 hosts I've tried disabling the nic, virtual drive, etc., but
 >> > saw no change in performance.  In addition, I tried 5 new hosts and all
 >> > conform to the Idrac Enterprise theory.  Anyone else had this issue?!
 >> >
 >> >
 >> >
 >> > 

Re: [Gluster-users] Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first

2018-02-27 Thread Ingard Mevåg
We got extremely slow stat calls on our disperse cluster running latest
3.12 with clients also running 3.12.
When we downgraded clients to 3.10 the slow stat problem went away.

We later found out that by disabling disperse.eager-lock we could run the
3.12 clients without much issue (writes are a little bit slower).
There is an open issue on this here :
https://bugzilla.redhat.com/show_bug.cgi?id=1546732
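
A sketch of the workaround mentioned above (the volume name is a placeholder;
revert once the fix referenced in the bug is available):

# disable eager locking on the disperse volume as a workaround
gluster volume set <volname> disperse.eager-lock off
# and to go back to the default once the fix lands:
gluster volume set <volname> disperse.eager-lock on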
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first

2018-02-27 Thread Artem Russakovskii
Any updates on this one?

On Mon, Feb 5, 2018 at 8:18 AM, Tom Fite  wrote:

> Hi all,
>
> I have seen this issue as well, on Gluster 3.12.1. (3 bricks per box, 2
> boxes, distributed-replicate) My testing shows the same thing -- running a
> find on a directory dramatically increases lstat performance. To add
> another clue, the performance degrades again after issuing a call to reset
> the system's cache of dentries and inodes:
>
> # sync; echo 2 > /proc/sys/vm/drop_caches
>
> I think that this shows that it's the system cache that's actually doing
> the heavy lifting here. There are a couple of sysctl tunables that I've
> found helps out with this.
>
> See here:
>
> http://docs.gluster.org/en/latest/Administrator%20Guide/Linux%20Kernel%20Tuning/
>
> Contrary to what that doc says, I've found that setting
> vm.vfs_cache_pressure to a low value increases performance by allowing more
> dentries and inodes to be retained in the cache.
>
> # Set the swappiness to avoid swap when possible.
> vm.swappiness = 10
>
> # Set the cache pressure to prefer inode and dentry cache over file cache.
> This is done to keep as many
> # dentries and inodes in cache as possible, which dramatically improves
> gluster small file performance.
> vm.vfs_cache_pressure = 25
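>
> A rough sketch of making these two settings persistent (the drop-in file
> name below is arbitrary; the values are the ones quoted above):
>
> cat > /etc/sysctl.d/90-gluster-cache.conf <<'EOF'
> vm.swappiness = 10
> vm.vfs_cache_pressure = 25
> EOF
> sysctl --system   # reload all sysctl drop-ins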
>
> For comparison, my config is:
>
> Volume Name: gv0
> Type: Tier
> Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196
> Status: Started
> Snapshot Count: 13
> Number of Bricks: 8
> Transport-type: tcp
> Hot Tier :
> Hot Tier Type : Replicate
> Number of Bricks: 1 x 2 = 2
> Brick1: gluster2:/data/hot_tier/gv0
> Brick2: gluster1:/data/hot_tier/gv0
> Cold Tier:
> Cold Tier Type : Distributed-Replicate
> Number of Bricks: 3 x 2 = 6
> Brick3: gluster1:/data/brick1/gv0
> Brick4: gluster2:/data/brick1/gv0
> Brick5: gluster1:/data/brick2/gv0
> Brick6: gluster2:/data/brick2/gv0
> Brick7: gluster1:/data/brick3/gv0
> Brick8: gluster2:/data/brick3/gv0
> Options Reconfigured:
> performance.cache-max-file-size: 128MB
> cluster.readdir-optimize: on
> cluster.watermark-hi: 95
> features.ctr-sql-db-cachesize: 262144
> cluster.read-freq-threshold: 5
> cluster.write-freq-threshold: 2
> features.record-counters: on
> cluster.tier-promote-frequency: 15000
> cluster.tier-pause: off
> cluster.tier-compact: on
> cluster.tier-mode: cache
> features.ctr-enabled: on
> performance.cache-refresh-timeout: 60
> performance.stat-prefetch: on
> server.outstanding-rpc-limit: 2056
> cluster.lookup-optimize: on
> performance.client-io-threads: off
> nfs.disable: on
> transport.address-family: inet
> features.barrier: disable
> client.event-threads: 4
> server.event-threads: 4
> performance.cache-size: 1GB
> network.inode-lru-limit: 9
> performance.md-cache-timeout: 600
> performance.cache-invalidation: on
> features.cache-invalidation-timeout: 600
> features.cache-invalidation: on
> performance.quick-read: on
> performance.io-cache: on
> performance.nfs.write-behind-window-size: 4MB
> performance.write-behind-window-size: 4MB
> performance.nfs.io-threads: off
> network.tcp-window-size: 1048576
> performance.rda-cache-limit: 64MB
> performance.flush-behind: on
> server.allow-insecure: on
> cluster.tier-demote-frequency: 18000
> cluster.tier-max-files: 100
> cluster.tier-max-promote-file-size: 10485760
> cluster.tier-max-mb: 64000
> features.ctr-sql-db-wal-autocheckpoint: 2500
> cluster.tier-hot-compact-frequency: 86400
> cluster.tier-cold-compact-frequency: 86400
> performance.readdir-ahead: off
> cluster.watermark-low: 50
> storage.build-pgfid: on
> performance.rda-request-size: 128KB
> performance.rda-low-wmark: 4KB
> cluster.min-free-disk: 5%
> auto-delete: enable
>
>
> On Sun, Feb 4, 2018 at 9:44 PM, Amar Tumballi  wrote:
>
>> Thanks for the report Artem,
>>
>> Looks like the issue is about cache warming up. Specially, I suspect
>> rsync doing a 'readdir(), stat(), file operations' loop, where as when a
>> find or ls is issued, we get 'readdirp()' request, which contains the stat
>> information along with entries, which also makes sure cache is up-to-date
>> (at md-cache layer).
>>
>> Note that this is just a off-the memory hypothesis, We surely need to
>> analyse and debug more thoroughly for a proper explanation.  Some one in my
>> team would look at it soon.
>>
>> Regards,
>> Amar
>>
>> On Mon, Feb 5, 2018 at 7:25 AM, Vlad Kopylov  wrote:
>>
>>> You mounting it to the local bricks?
>>>
>>> struggling with same performance issues
>>> try using this volume setting
>>> http://lists.gluster.org/pipermail/gluster-users/2018-January/033397.html
>>> performance.stat-prefetch: on might be it
>>>
>>> seems like when it gets to cache it is fast - those stat fetch which
>>> seem to come from .gluster are slow
>>>
>>> On Sun, Feb 4, 2018 at 3:45 AM, Artem Russakovskii 
>>> wrote:
>>> > An update, and a very interesting one!
>>> >
>>> > After I started stracing rsync, all I could see was lstat calls, quite

Re: [Gluster-users] Quorum in distributed-replicate volume

2018-02-27 Thread Dave Sherohman
On Tue, Feb 27, 2018 at 05:50:49PM +0530, Karthik Subrahmanya wrote:
> gluster volume add-brick <volname> replica 3 arbiter 1 <arbiter-brick1> <arbiter-brick2> <arbiter-brick3>
> is the command. It will convert the existing volume to arbiter volume and
> add the specified bricks as arbiter bricks to the existing subvols.
> Once they are successfully added, self heal should start automatically and
> you can check the status of heal using the command,
> gluster volume heal <volname> info

OK, done and the heal is in progress.  Thanks again for your help!

-- 
Dave Sherohman
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Quorum in distributed-replicate volume

2018-02-27 Thread Karthik Subrahmanya
On Tue, Feb 27, 2018 at 5:35 PM, Dave Sherohman  wrote:

> On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote:
> > > > Since arbiter bricks need not be of same size as the data bricks, if
> you
> > > > can configure three more arbiter bricks
> > > > based on the guidelines in the doc [1], you can do it live and you
> will
> > > > have the distribution count also unchanged.
> > >
> > > I can probably find one or more machines with a few hundred GB free
> > > which could be allocated for arbiter bricks if it would be significantly
> > > simpler and safer than repurposing the existing bricks (and I'm getting
> > > the impression that it probably would be).
> >
> > Yes it is the simpler and safer way of doing that.
> >
> > >   Does it particularly matter
> > > whether the arbiters are all on the same node or on three separate
> > > nodes?
> > >
> >  No it doesn't matter as long as the bricks of same replica subvol are
> not
> > on the same nodes.
>
> OK, great.  So basically just install the gluster server on the new
> node(s), do a peer probe to add them to the cluster, and then
>
> gluster volume create palantir replica 3 arbiter 1 [saruman brick]
> [gandalf brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter
> 2] [cthulhu brick] [mordiggian brick] [arbiter 3]
>
gluster volume add-brick <volname> replica 3 arbiter 1 <arbiter-brick1> <arbiter-brick2> <arbiter-brick3>
is the command. It will convert the existing volume to arbiter volume and
add the specified bricks as arbiter bricks to the existing subvols.
Once they are successfully added, self heal should start automatically and
you can check the status of heal using the command,
gluster volume heal <volname> info
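
A sketch with the volume name from this thread; the arbiter hosts and brick
paths below are placeholders for wherever the three new arbiter bricks end up:

# add one arbiter brick per existing replica pair (3 x 2 volume -> 3 arbiters)
gluster volume add-brick palantir replica 3 arbiter 1 \
    arbiter1:/var/local/arbiter0/data \
    arbiter2:/var/local/arbiter0/data \
    arbiter3:/var/local/arbiter0/data
# then watch the self-heal progress
gluster volume heal palantir info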

Regards,
Karthik
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Quorum in distributed-replicate volume

2018-02-27 Thread Dave Sherohman
On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote:
> > > Since arbiter bricks need not be of same size as the data bricks, if you
> > > can configure three more arbiter bricks
> > > based on the guidelines in the doc [1], you can do it live and you will
> > > have the distribution count also unchanged.
> >
> > I can probably find one or more machines with a few hundred GB free
> > which could be allocated for arbiter bricks if it would be sigificantly
> > simpler and safer than repurposing the existing bricks (and I'm getting
> > the impression that it probably would be).
> 
> Yes it is the simpler and safer way of doing that.
> 
> >   Does it particularly matter
> > whether the arbiters are all on the same node or on three separate
> > nodes?
> >
>  No it doesn't matter as long as the bricks of same replica subvol are not
> on the same nodes.

OK, great.  So basically just install the gluster server on the new
node(s), do a peer probe to add them to the cluster, and then

gluster volume create palantir replica 3 arbiter 1 [saruman brick] [gandalf 
brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter 2] [cthulhu 
brick] [mordiggian brick] [arbiter 3]

Or is there more to it than that?

-- 
Dave Sherohman
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Quorum in distributed-replicate volume

2018-02-27 Thread Karthik Subrahmanya
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman  wrote:

> On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> > If you want to use the first two bricks as arbiter, then you need to be
> > aware of the following things:
> > - Your distribution count will be decreased to 2.
>
> What's the significance of this?  I'm trying to find documentation on
> distribution counts in gluster, but my google-fu is failing me.
>
More distribution, better load balancing.

>
> > - Your data on the first subvol i.e., replica subvol - 1 will be
> > unavailable till it is copied to the other subvols
> > after removing the bricks from the cluster.
>
> Hmm, ok.  I was sure I had seen a reference at some point to a command
> for migrating data off bricks to prepare them for removal.
>
> Is there an easy way to get a list of all files which are present on a
> given brick, then, so that I can see which data would be unavailable
> during this transfer?
>
The easiest way is by doing "ls" on the back end brick.
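
A sketch of that, run directly on one of the servers (the brick path is taken
from the volume info later in this thread; .glusterfs is gluster's internal
metadata directory and is skipped):

# list regular files stored on this brick, ignoring gluster's internal metadata
find /var/local/brick0/data -path '*/.glusterfs' -prune -o -type f -print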

>
> > Since arbiter bricks need not be of same size as the data bricks, if you
> > can configure three more arbiter bricks
> > based on the guidelines in the doc [1], you can do it live and you will
> > have the distribution count also unchanged.
>
> I can probably find one or more machines with a few hundred GB free
> which could be allocated for arbiter bricks if it would be significantly
> simpler and safer than repurposing the existing bricks (and I'm getting
> the impression that it probably would be).

Yes it is the simpler and safer way of doing that.

>   Does it particularly matter
> whether the arbiters are all on the same node or on three separate
> nodes?
>
 No it doesn't matter as long as the bricks of same replica subvol are not
on the same nodes.

Regards,
Karthik

>
> --
> Dave Sherohman
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Quorum in distributed-replicate volume

2018-02-27 Thread Dave Sherohman
On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> If you want to use the first two bricks as arbiter, then you need to be
> aware of the following things:
> - Your distribution count will be decreased to 2.

What's the significance of this?  I'm trying to find documentation on
distribution counts in gluster, but my google-fu is failing me.

> - Your data on the first subvol i.e., replica subvol - 1 will be
> unavailable till it is copied to the other subvols
> after removing the bricks from the cluster.

Hmm, ok.  I was sure I had seen a reference at some point to a command
for migrating data off bricks to prepare them for removal.

Is there an easy way to get a list of all files which are present on a
given brick, then, so that I can see which data would be unavailable
during this transfer?

> Since arbiter bricks need not be of same size as the data bricks, if you
> can configure three more arbiter bricks
> based on the guidelines in the doc [1], you can do it live and you will
> have the distribution count also unchanged.

I can probably find one or more machines with a few hundred GB free
which could be allocated for arbiter bricks if it would be significantly
simpler and safer than repurposing the existing bricks (and I'm getting
the impression that it probably would be).  Does it particularly matter
whether the arbiters are all on the same node or on three separate
nodes?

-- 
Dave Sherohman
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Scheduled AutoCommit Function for WORM Feature

2018-02-27 Thread Karthik Subrahmanya
Hi David,

Yes, it is a good-to-have feature, but AFAIK it is currently not on the
priority/focus list.
Anyone from the community who is interested in implementing this is most
welcome.
Otherwise you will need to wait some more time until it comes into focus.

Thanks & Regards,
Karthik

On Tue, Feb 27, 2018 at 3:52 PM, David Spisla  wrote:

> Hello Gluster Community,
>
> while reading that article:
> https://github.com/gluster/glusterfs-specs/blob/master/
> under_review/worm-compliance.md
>
> there seems to be an interesting feature planned for the WORM Xlator:
>
> *Scheduled Auto-commit*: Scan Triggered Using timeouts for untouched
> files. The next scheduled namespace scan will cause the transition. CTR DB
> via libgfdb can be used to find files that have not changed. This can be
> verified with stat of the file.
>
> Is this feature still in focus? It is very useful, I think. A client would
> not have to trigger a FOP to make a file WORM-retained after the expiration
> of the autocommit-period.
>
> Regards
> David Spisla
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Scheduled AutoCommit Function for WORM Feature

2018-02-27 Thread David Spisla
Hello Gluster Community,

while reading that article:
https://github.com/gluster/glusterfs-specs/blob/master/under_review/worm-compliance.md

there seems to be an interesting feature planned for the WORM Xlator:

*Scheduled Auto-commit*: Scan Triggered Using timeouts for untouched files.
The next scheduled namespace scan will cause the transition. CTR DB via
libgfdb can be used to find files that have not changed. This can be
verified with stat of the file.

Is this feature still in focus? It is very useful, I think. A client would
not have to trigger a FOP to make a file WORM-retained after the expiration
of the autocommit-period.
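
Until such a scheduled scan exists, a very rough client-side approximation could
look like the sketch below (the mount point and the 5-minute auto-commit period
are placeholders; it only lists candidate files that have not been modified for
longer than the period, which is the kind of namespace scan the proposed feature
would automate):

# list regular files on the fuse mount that have not been modified
# for more than the assumed auto-commit period (5 minutes here)
find /mnt/worm-volume -type f -mmin +5 -print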

Regards
David Spisla
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Quorum in distributed-replicate volume

2018-02-27 Thread Karthik Subrahmanya
On Tue, Feb 27, 2018 at 1:40 PM, Dave Sherohman  wrote:

> On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote:
> > I will try to explain how you can end up in split-brain even with cluster
> > wide quorum:
>
> Yep, the explanation made sense.  I hadn't considered the possibility of
> alternating outages.  Thanks!
>
> > > > It would be great if you can consider configuring an arbiter or
> > > > replica 3 volume.
> > >
> > > I can.  My bricks are 2x850G and 4x11T, so I can repurpose the small
> > > bricks as arbiters with minimal effect on capacity.  What would be the
> > > sequence of commands needed to:
> > >
> > > 1) Move all data off of bricks 1 & 2
> > > 2) Remove that replica from the cluster
> > > 3) Re-add those two bricks as arbiters
> > >
> > > (And did I miss any additional steps?)
> > >
> > > Unfortunately, I've been running a few months already with the current
> > > configuration and there are several virtual machines running off the
> > > existing volume, so I'll need to reconfigure it online if possible.
> > >
> > Without knowing the volume configuration it is difficult to suggest the
> > configuration change,
> > and since it is a live system you may end up in data unavailability or
> data
> > loss.
> > Can you give the output of "gluster volume info "
> > and which brick is of what size.
>
> Volume Name: palantir
> Type: Distributed-Replicate
> Volume ID: 48379a50-3210-41b4-9a77-ae143c8bcac0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: saruman:/var/local/brick0/data
> Brick2: gandalf:/var/local/brick0/data
> Brick3: azathoth:/var/local/brick0/data
> Brick4: yog-sothoth:/var/local/brick0/data
> Brick5: cthulhu:/var/local/brick0/data
> Brick6: mordiggian:/var/local/brick0/data
> Options Reconfigured:
> features.scrub: Inactive
> features.bitrot: off
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> network.ping-timeout: 1013
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> features.shard: on
> cluster.data-self-heal-algorithm: full
> storage.owner-uid: 64055
> storage.owner-gid: 64055
>
>
> For brick sizes, saruman/gandalf have
>
> $ df -h /var/local/brick0
> Filesystem   Size  Used Avail Use% Mounted on
> /dev/mapper/gandalf-gluster  885G   55G  786G   7% /var/local/brick0
>
> and the other four have
>
> $ df -h /var/local/brick0
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sdb111T  254G   11T   3% /var/local/brick0
>

If you want to use the first two bricks as arbiter, then you need to be
aware of the following things:
- Your distribution count will be decreased to 2.
- Your data on the first subvol, i.e. replica subvol 1, will be
unavailable until it is copied to the other subvols
after removing the bricks from the cluster.

Since arbiter bricks need not be of same size as the data bricks, if you
can configure three more arbiter bricks
based on the guidelines in the doc [1], you can do it live and you will
have the distribution count also unchanged.

One more thing from the volume info: only the options which have been
reconfigured appear in the volume info output.
The quorum-type is in that list, which means it was manually reconfigured.

[1]
http://docs.gluster.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#arbiter-bricks-sizing

Regards,
Karthik

>
>
> --
> Dave Sherohman
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Failed to get quota limits

2018-02-27 Thread Hari Gowtham
Hi Mabi,

The bug is fixed in 3.11. For 3.10 it is yet to be backported and
made available.

The bug is https://bugzilla.redhat.com/show_bug.cgi?id=1418259.
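
For anyone still on 3.10, a minimal sketch of the per-directory check used in
this thread to find the stale entry (the volume name and the directory list are
taken from this thread / placeholders for your own quota paths):

# list each configured quota path individually; the one that prints nothing
# (or errors out) is the stale entry to re-set or clean up
for dir in /demo.domain.tld /directory /some/other/dir; do
    echo "== ${dir} =="
    gluster volume quota myvolume list "${dir}"
done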

On Sat, Feb 24, 2018 at 4:05 PM, mabi  wrote:
> Dear Hari,
>
> Thank you for getting back to me after having analysed the problem.
>
> As you said I tried to run "gluster volume quota  list " for 
> all of my directories which have a quota and found out that there was one 
> directory quota which was missing (stale) as you can see below:
>
> $ gluster volume quota myvolume list /demo.domain.tld
>   Path   Hard-limit  Soft-limit  Used  
> Available  Soft-limit exceeded? Hard-limit exceeded?
> ---
> /demo.domain.tldN/AN/A  8.0MB 
>   N/A N/AN/A
>
> So, as you suggested, I added the quota on that directory again and now the
> "list" finally works again and shows the quotas for every directory as I
> defined them. That did the trick!
>
> Now, do you know if this bug is already corrected in a newer release of
> GlusterFS? If not, do you know when it will be fixed?
>
> Again many thanks for your help here!
>
> Best regards,
> M.
>
> ‐‐‐ Original Message ‐‐‐
>
> On February 23, 2018 7:45 AM, Hari Gowtham  wrote:
>
>>
>>
>> Hi,
>>
>> There is a bug in 3.10 which doesn't allow the quota list command to
>>
>> output, if the last entry on the conf file is a stale entry.
>>
>> The workaround for this is to remove the stale entry at the end. (If
>>
>> the last two entries are stale then both have to be removed and so on
>>
>> until the last entry on the conf file is a valid entry).
>>
>> This can be avoided by adding a new limit. As the new limit you added
>>
>> didn't work there is another way to check this.
>>
>> Try quota list command with a specific limit mentioned in the command.
>>
>> gluster volume quota  list 
>>
>> Make sure this path and the limit are set.
>>
>> If this works then you need to clean up the last stale entry.
>>
>> If this doesn't work we need to look further.
>>
>> Thanks Sanoj for the guidance.
>>
>> On Wed, Feb 14, 2018 at 1:36 AM, mabi m...@protonmail.ch wrote:
>>
>> > I tried to set the limits as you suggest by running the following command.
>> >
>> > $ sudo gluster volume quota myvolume limit-usage /directory 200GB
>> >
>> > volume quota : success
>> >
>> > but then when I list the quotas there is still nothing, so nothing really 
>> > happened.
>> >
>> > I also tried to run stat on all directories which have a quota but nothing 
>> > happened either.
>> >
>> > I will send you tomorrow all the other logfiles as requested.
>> >
>> > \-\-\-\-\-\-\-\- Original Message 
>> >
>> > On February 13, 2018 12:20 PM, Hari Gowtham hgowt...@redhat.com wrote:
>> >
>> > > Were you able to set new limits after seeing this error?
>> > >
>> > > On Tue, Feb 13, 2018 at 4:19 PM, Hari Gowtham hgowt...@redhat.com wrote:
>> > >
>> > > > Yes, I need the log files in that duration, the log rotated file after
>> > > >
>> > > > hitting the
>> > > >
>> > > > issue aren't necessary, but the ones before hitting the issues are 
>> > > > needed
>> > > >
>> > > > (not just when you hit it, the ones even before you hit it).
>> > > >
>> > > > Yes, you have to do a stat from the client through fuse mount.
>> > > >
>> > > > On Tue, Feb 13, 2018 at 3:56 PM, mabi m...@protonmail.ch wrote:
>> > > >
>> > > > > Thank you for your answer. This problem seem to have started since 
>> > > > > last week, so should I also send you the same log files but for last 
>> > > > > week? I think logrotate rotates them on a weekly basis.
>> > > > >
>> > > > > The only two quota commands we use are the following:
>> > > > >
>> > > > > gluster volume quota myvolume limit-usage /directory 10GB
>> > > > >
>> > > > > gluster volume quota myvolume list
>> > > > >
>> > > > > basically to set a new quota or to list the current quotas. The 
>> > > > > quota list was working in the past yes but we already had a similar 
>> > > > > issue where the quotas disappeared last August 2017:
>> > > > >
>> > > > > http://lists.gluster.org/pipermail/gluster-users/2017-August/031946.html
>> > > > >
>> > > > > In the mean time the only thing we did is to upgrade from 3.8 to 
>> > > > > 3.10.
>> > > > >
>> > > > > There are actually no errors to be seen using any gluster commands. 
>> > > > > The "quota myvolume list" returns simply nothing.
>> > > > >
>> > > > > In order to lookup the directories should I run a "stat" on them? 
>> > > > > and if yes should I do that on a client through the fuse mount?
>> > > > >
>> > > > > \-\-\-\-\-\-\-\- Original Message 
>> > > > >
>> > > > > On February 13, 2018 10:58 AM, Hari Gowtham hgowt...@redhat.com 
>> > > > > wrote:
>> > > > >
>> > > > > > The log provided are from 11th, 

Re: [Gluster-users] Gluster performance / Dell Idrac enterprise conflict

2018-02-27 Thread Alex K
What is your gluster setup? Please share the volume details for where the
VMs are stored. It could be that the slow host is the one hosting the
arbiter volume.
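
(Concretely, the output of the following commands, run on one of the
storage nodes, is the standard way to dump the layout and reconfigured
options -- nothing here is specific to this setup:

    gluster volume info
    gluster volume status

If one of the replica sets has an arbiter, it should show up in the brick
list of 'volume info'.)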

Alex

On Feb 26, 2018 13:46, "Ryan Wilkinson"  wrote:

> Here is the info about the RAID controllers.  They don't seem to be the culprit.
>
> Slow host:
> Name PERC H710 Mini (Embedded)
> Firmware Version 21.3.4-0001
> Cache Memory Size 512 MB
>
> Fast host:
> Name PERC H310 Mini (Embedded)
> Firmware Version 20.12.1-0002
> Cache Memory Size 0 MB
>
> Slow host:
> Name PERC H310 Mini (Embedded)
> Firmware Version 20.13.1-0002
> Cache Memory Size 0 MB
>
> Slow host:
> Name PERC H310 Mini (Embedded)
> Firmware Version 20.13.3-0001
> Cache Memory Size 0 MB
>
> Slow host:
> Name PERC H710 Mini (Embedded)
> Firmware Version 21.3.5-0002
> Cache Memory Size 512 MB
>
> Fast host:
> Name PERC H730
> Cache Memory Size 1 GB
>
> On Mon, Feb 26, 2018 at 9:42 AM, Alvin Starr  wrote:
>
>> I would be really surprised if the problem was related to Idrac.
>>
>> The Idrac processor is a standalone CPU with its own NIC and runs
>> independently of the main CPU.
>>
>> That being said, it does have visibility into the whole system.
>>
>> Try using dmidecode to compare the systems, and take a close look at the
>> RAID controllers and what size and type of cache they have.
>>
>> On 02/26/2018 11:34 AM, Ryan Wilkinson wrote:
>>
>> I've tested about 12 different Dell servers.  Only a couple of them have
>> Idrac Express and all the others have Idrac Enterprise.  All the boxes with
>> Enterprise perform poorly and the couple that have Express perform well.  I
>> use the disks in RAID mode on all of them.  I've tried a few non-Dell boxes
>> and they all perform well even though some of them are very old.  I've also
>> tried disabling Idrac, the Idrac NIC, and virtual storage for Idrac, with no
>> success.
>>
>> On Mon, Feb 26, 2018 at 9:28 AM, Serkan Çoban 
>> wrote:
>>
>>> I don't think it is related to iDRAC itself; more likely some
>>> configuration is wrong or there is a hardware error.
>>> Did you check the battery of the RAID controller? Do you use the disks
>>> in JBOD mode or RAID mode?
>>>
>>> On Mon, Feb 26, 2018 at 6:12 PM, Ryan Wilkinson 
>>> wrote:
>>> > Thanks for the suggestion.  I tried both of these with no difference
>>> > in performance.  I have tried several other Dell hosts with Idrac
>>> > Enterprise and am getting the same results.  I also tried a new Dell
>>> > T130 with Idrac Express and was getting over 700 MB/s.  Have any other
>>> > users had this issue with Idrac Enterprise??
>>> >
>>> >
>>> > On Thu, Feb 22, 2018 at 12:16 AM, Serkan Çoban 
>>> > wrote:
>>> >>
>>> >> Did you check the BIOS/Power settings? They should be set for high
>>> >> performance.
>>> >> Also, you can try booting with the "intel_idle.max_cstate=0" kernel
>>> >> command line option to make sure the CPUs are not entering
>>> >> power-saving states.
>>> >>
>>> >> On Thu, Feb 22, 2018 at 9:59 AM, Ryan Wilkinson 
>>> >> wrote:
>>> >> >
>>> >> >
>>> >> > I have a 3 host gluster replicated cluster that is providing storage
>>> >> > for our RHEV environment.  We've been having issues with inconsistent
>>> >> > performance from the VMs depending on which hypervisor they are
>>> >> > running on.  I've confirmed throughput to be ~9Gb/s to each of the
>>> >> > storage hosts from the hypervisors.  I'm getting ~300 MB/s disk read
>>> >> > speed when our test VM is on the slow hypervisors and over 500 MB/s
>>> >> > on the faster ones.  The performance doesn't seem to be affected much
>>> >> > by the CPU or memory in the hypervisors.  I have tried a couple of
>>> >> > really old boxes and got over 500 MB/s.  The common thread seems to
>>> >> > be that the poorly performing hosts all have Dell's Idrac 7
>>> >> > Enterprise.  I have one hypervisor that has Idrac 7 Express and it
>>> >> > performs well.  We've compared system packages and versions till
>>> >> > we're blue in the face and have been struggling with this for a
>>> >> > couple of months, but that seems to be the only common denominator.
>>> >> > I've tried on one of those Idrac 7 hosts to disable the NIC, virtual
>>> >> > drive, etc., but no change in performance.  In addition, I tried 5
>>> >> > new hosts and all comply with the Idrac Enterprise theory.  Anyone
>>> >> > else had this issue?!
>>> >> >
>>> >> >
>>> >> >
>>> >> > ___
>>> >> > Gluster-users mailing list
>>> >> > Gluster-users@gluster.org
>>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
>>> >
>>> >
>>>
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>> --
>> Alvin Starr   ||   land:  (905)513-7688
>> Netvel Inc.   ||