[Gluster-users] After stop and start wrong port is advertised

2017-09-21 Thread Jo Goossens
Hi,

 
 
We use glusterfs 3.10.5 on Debian 9.

 
When we stop or restart the service, e.g.: service glusterfs-server restart

 
We see that the wrong port gets advertised afterwards. For example:

 
Before restart:

 
Status of volume: public
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.140.41:/gluster/public        49153     0          Y       6364
Brick 192.168.140.42:/gluster/public        49152     0          Y       1483
Brick 192.168.140.43:/gluster/public        49152     0          Y       5913
Self-heal Daemon on localhost               N/A       N/A        Y       5932
Self-heal Daemon on 192.168.140.42          N/A       N/A        Y       13084
Self-heal Daemon on 192.168.140.41          N/A       N/A        Y       15499
Task Status of Volume public
------------------------------------------------------------------------------
There are no active volume tasks

After restart of the service on one of the nodes (192.168.140.43) the port
seems to have changed (but it didn't):

root@app3:/var/log/glusterfs# gluster volume status
Status of volume: public
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.140.41:/gluster/public        49153     0          Y       6364
Brick 192.168.140.42:/gluster/public        49152     0          Y       1483
Brick 192.168.140.43:/gluster/public        49154     0          Y       5913
Self-heal Daemon on localhost               N/A       N/A        Y       4628
Self-heal Daemon on 192.168.140.42          N/A       N/A        Y       3077
Self-heal Daemon on 192.168.140.41          N/A       N/A        Y       28777
Task Status of Volume public
------------------------------------------------------------------------------
There are no active volume tasks

However, the active process is STILL the same pid AND still listening on the
old port:

root@192.168.140.43:/var/log/glusterfs# netstat -tapn | grep gluster
tcp        0      0 0.0.0.0:49152           0.0.0.0:*               LISTEN      
5913/glusterfsd
The other nodes' logs fill up with errors because they can't reach the daemon 
anymore. They try to reach it on the "new" port instead of the old one:
 [2017-09-21 08:33:25.225006] E [socket.c:2327:socket_connect_finish] 
0-public-client-2: connection to 192.168.140.43:49154 failed (Connection 
refused); disconnecting socket
[2017-09-21 08:33:29.226633] I [rpc-clnt.c:2000:rpc_clnt_reconfig] 
0-public-client-2: changing port to 49154 (from 0)
[2017-09-21 08:33:29.227490] E [socket.c:2327:socket_connect_finish] 
0-public-client-2: connection to 192.168.140.43:49154 failed (Connection 
refused); disconnecting socket
[2017-09-21 08:33:33.225849] I [rpc-clnt.c:2000:rpc_clnt_reconfig] 
0-public-client-2: changing port to 49154 (from 0)
[2017-09-21 08:33:33.236395] E [socket.c:2327:socket_connect_finish] 
0-public-client-2: connection to 192.168.140.43:49154 failed (Connection 
refused); disconnecting socket
[2017-09-21 08:33:37.225095] I [rpc-clnt.c:2000:rpc_clnt_reconfig] 
0-public-client-2: changing port to 49154 (from 0)
[2017-09-21 08:33:37.225628] E [socket.c:2327:socket_connect_finish] 
0-public-client-2: connection to 192.168.140.43:49154 failed (Connection 
refused); disconnecting socket
[2017-09-21 08:33:41.225805] I [rpc-clnt.c:2000:rpc_clnt_reconfig] 
0-public-client-2: changing port to 49154 (from 0)
[2017-09-21 08:33:41.226440] E [socket.c:2327:socket_connect_finish] 
0-public-client-2: connection to 192.168.140.43:49154 failed (Connection 
refused); disconnecting socket
So they now try 49154 instead of the old 49152.

Is this also by design? We had a lot of issues because of this recently. We
don't understand why it starts advertising a completely wrong port after
stop/start.
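
(A hedged diagnostic / workaround sketch for anyone hitting the same mismatch; "public" is the volume from the output above and the pid value is the one shown there:)

# compare what glusterd advertises with what the brick really listens on
gluster volume status public
netstat -tapn | grep glusterfsd

# possible workaround (disruptive for that one brick): kill the stale brick
# process and let glusterd respawn it on the port it advertises
kill 5913
gluster volume start public force
gluster volume status public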
 
Regards

Jo Goossens

 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Backup and Restore strategy in Gluster FS 3.8.4

2017-09-21 Thread Diego Remolina
Not so fast, 3.8.4 is the latest if you are using official RHEL rpms
from Red Hat Gluster Storage, so support for that should go through
your Red Hat subscription. If you are using the community packages,
then yes, you want to update to a more current version.

Seems like the latest is: 3.8.4-44.el7

Diego

On Wed, Sep 20, 2017 at 1:45 PM, Ben Werthmann  wrote:
>
>> First, please note that gluster 3.8 is EOL and that 3.8.4 is rather old in
>> the 3.8 release, 3.8.15 is the current (and probably final) release of 3.8.
>>
>> "With the release of GlusterFS-3.12, GlusterFS-3.8 (LTM) and
>> GlusterFS-3.11 (STM) have reached EOL. Except for serious security issues no
>> further updates to these versions are forthcoming. If you find a bug please
>> see if you can reproduce it with 3.10 or 3.12 and
>> file a BZ if appropriate."
>> http://lists.gluster.org/pipermail/packaging/2017-August/000363.html
>>
>> Gluster 3.12 includes '#1428061: Halo Replication feature for AFR
>> translator' which was introduced in 3.11. See Halo Replication feature in
>> AFR has been introduced for a summary. 3.11 is EOL, best to use 3.12 (long
>> term release) if this is of interest to you.
>>
>> On Thu, Aug 24, 2017 at 4:18 AM, Sunil Aggarwal 
>> wrote:
>>>
>>> Hi,
>>>
>>> What is the preferred way of taking glusterfs backup?
>>>
>>> I am using Glusterfs 3.8.4.
>>>
>>> We've configured gluster on thick provisioned LV in which 50% of the VG
>>> is kept free for the LVM snapshot.
>>
>>
>> Gluster Volume Snapshots require each brick to have its own thin LV. Only
>> thin LVs are supported with volume snapshots.
>>
>> Gluster snapshots do not provide a recovery path in the event of
>> catastrophic loss of a Gluster volume.
>>
>>  -  Gluster Volume Snapshots provide point-in-time recovery for a healthy
>> Gluster Volume. A restore will reset the entire volume to a previous
>> point-in-time recovery point. Granular recovery may be performed with admin
>> intervention.
>>  - User Serviceable Snapshots exposes Gluster Volume Snapshots via the
>> .snaps directory in every directory of the mounted volume. Please review
>> documentation for requirements specific to each protocol used to access the
>> gluster volume.
>>
>>>
>>> is it any different than taking a snapshot on a thin-provisioned LV?
>>
>>
>> If you don't want Gluster Volume Snapshots, and are just looking to backup
>> the current state of a volume, Gluster offers a number of features to
>> recover from catastrophic loss of a Gluster volume.
>>
>>  - Simple method: mount the gluster volume and run your backup utility of
>> choice
>>  - glusterfind generates a file list (full and incremental) for passing to
>> another utility to generate a backup. See:
>>   http://docs.gluster.org/en/latest/GlusterFS%20Tools/glusterfind/
>>   https://milindchangireblog.wordpress.com/2016/10/28/why-glusterfind/
>>   http://lists.gluster.org/pipermail/gluster-users/2017-August/032219.html
>>  - Geo-replication provides a distributed, continuous, asynchronous, and
>> incremental replication service from one site to another over Local Area
>> Networks (LANs), Wide Area Networks (WANs), and the Internet. (taken from
>> RedHat doc)
>>
>>
>> Please see docs for additional info about each of the above:
>>
>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/pdf/administration_guide/Red_Hat_Gluster_Storage-3.2-Administration_Guide-en-US.pdf
>>
>>>
>>>
>>> --
>>> Thanks,
>>> Sunil Aggarwal
>>> 844-734-5346
>>>
>>
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]

2017-09-21 Thread Martin Toth
Hello all fellow GlusterFriends,

I would like you to comment on / correct my upgrade procedure steps on a replica 2 
volume of 3.7.x gluster.
Then I would like to change replica 2 to replica 3 in order to correct a quorum 
issue that the infrastructure currently has.

Infrastructure setup:
- all clients running on same nodes as servers (FUSE mounts)
- under gluster there is ZFS pool running as raidz2 with SSD ZLOG/ZIL cache
- both hypervisors run as GlusterFS nodes and also as Qemu compute nodes 
(Ubuntu 16.04 LTS)
- we are running Qemu VMs that access their disks via gfapi (Opennebula)
- we currently run : 1x2 , Type: Replicate volume

Current Versions :
glusterfs-* [package] 3.7.6-1ubuntu1
qemu-*  [package] 2.5+dfsg-5ubuntu10.2glusterfs3.7.14xenial1

What we need (new versions):
- upgrade GlusterFS to the 3.12 LTM version (Ubuntu 16.04 LTS packages are EOL - 
see https://www.gluster.org/community/release-schedule/)
- I want to use https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12 
as the package repository for 3.12
- upgrade Qemu (with built-in support for libgfapi) - 
https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.12
(sadly Ubuntu ships packages built without libgfapi support)
- add a third node to the replica setup of the volume (this is probably the most 
dangerous operation)

Backup Phase
- backup "NFS storage” - raw DATA that runs on VMs
- stop all running VMs
- backup all running VMs (Qcow2 images) outside of gluster

Upgrading Gluster Phase
- killall glusterfs glusterfsd glusterd (on every server)
(this should stop all gluster services - server and client - as they run 
on the same nodes)
- install the new Gluster server and client packages from the repository mentioned 
above (on every server) 
- install Monotek's new qemu glusterfs package with gfapi support enabled (on 
every server) 
- /etc/init.d/glusterfs-server start (on every server)
- /etc/init.d/glusterfs-server status - verify that all runs ok (on every 
server)
- check :
- gluster volume info
- gluster volume status
- check gluster FUSE clients, if mounts working as expected
- test if various VMs are able to boot and run as expected (if libgfapi works 
in Qemu)
- reboot all nodes - do system upgrade of packages
- test and check again

Adding third node to replica 2 setup (replica 2 => replica 3)
(volumes will be mounted and up after upgrade and we tested VMs are able to be 
served with libgfapi = upgrade of gluster successfully completed)
(next we extend replica 2 to replica 3 while volumes are mounted but no data is 
touched = no running VMs, only glusterfs servers and clients on nodes)
- issue command : gluster volume add-brick volume replica 3 
node3.san:/tank/gluster/brick1 (on the new single node - node3; see the sketch 
after this list)
so we change : 
Bricks:
Brick1: node1.san:/tank/gluster/brick1
Brick2: node2.san:/tank/gluster/brick1
to :
Bricks:
Brick1: node1.san:/tank/gluster/brick1
Brick2: node2.san:/tank/gluster/brick1
Brick3: node3.san:/tank/gluster/brick1
- check gluster status
- (is rebalance / heal required here ?)
- start all VMs and start celebration :)
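
(A hedged sketch of that add-brick step plus the follow-up checks; "volume" stands for the real volume name:)

gluster peer probe node3.san                 # make sure the new node is in the trusted pool first
gluster volume add-brick volume replica 3 node3.san:/tank/gluster/brick1
gluster volume info volume                   # should now show Number of Bricks: 1 x 3 = 3
gluster volume heal volume full              # kick off population of the new brick
gluster volume heal volume info              # watch the pending-heal counts drop to 0

(As far as I understand, a rebalance is not needed for a pure replicate volume; the new brick is filled by self-heal.)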

My Questions
- are heal and rebalance necessary in order to upgrade replica 2 to replica 3?
- is this upgrade procedure OK? What more/else should I do in order to do this 
upgrade correctly?

Many thanks to all for your support. I hope my little preparation howto will help 
others to solve the same situation.

Best Regards,
Martin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Performance drop from 3.8 to 3.10

2017-09-21 Thread Lindsay Mathieson
Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly 
substantial drop in read/write performance


env:

- 3 node, replica 3 cluster

- Private dedicated Network: 1Gx3, bond: balance-alb

- was able to down the volume for the upgrade and reboot each node

- Usage: VM Hosting (qemu)

- Sharded Volume

- sequential read performance in VMs has dropped from 700 Mbps to 300 Mbps

- Seq Write has dropped from 115 MB/s (approx) to 110 MB/s

- Write IOPS have dropped from 12 MB/s to 8 MB/s

Apart from increasing the op version I made no changes to the volume 
settings.


op.version is 31004
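
(For reference, a hedged sketch of how the op-version can be read and raised; these option names should exist on 3.10, to the best of my knowledge:)

gluster volume get all cluster.op-version        # op-version the cluster is running at
gluster volume get all cluster.max-op-version    # highest op-version the installed binaries support
gluster volume set all cluster.op-version 31004  # raising it, i.e. what "increasing the op version" refers to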

gluster v info

Volume Name: datastore4
Type: Replicate
Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4
Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4
Brick3: vnh.proxmox.softlog:/tank/vmdata/datastore4
Options Reconfigured:
transport.address-family: inet
cluster.locking-scheme: granular
cluster.granular-entry-heal: yes
features.shard-block-size: 64MB
network.remote-dio: enable
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.stat-prefetch: on
performance.strict-write-ordering: off
nfs.enable-ino32: off
nfs.addr-namelookup: off
nfs.disable: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
features.shard: on
cluster.data-self-heal: on
performance.readdir-ahead: on
performance.low-prio-threads: 32
user.cifs: off
performance.flush-behind: on
server.event-threads: 4
client.event-threads: 4
server.allow-insecure: on


--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Restrict root clients / experimental patch

2017-09-21 Thread Pierre C
Hi All,

I would like to use glusterfs in an environment where storage servers are
managed by an IT service - myself :) - and several users in the
organization can mount the distributed fs. The users are root on their
machines.
As far as I know about glusterfs, a root client user may impersonate any
uid/gid since it provides its uid/gid itself when it talks to the bricks
(like nfsv3).
The thing is, we want to enforce permissions, i.e. user X may only access
files shared with him even if he's root on his machine.
I found a draft spec about glusterfs+kerberos

but not much more so I think it's not possible with glusterfs right now,
correct?
(Also I feel that kerberos would be a bit heavy to manage)

---

A simple hack that I found is to add custom uid/gid fields to clients' SSL
certificates. The bricks use the client's uid/gid specified in its
certificate rather than one specified by the user. This solution has
no effect on performance and there's no need for central authentication.
The only changes needed are in the way client certificates are generated,
plus a small patch to glusterfsd.

I did an experimental implementation of this idea. Custom fields "1.2.3.4.5.6.7"
and "1.2.3.4.5.6.8" are used for uid and gid.
I tried it with a custom CA trusted by all bricks and I issued a few client
certificates.
No server configuration is needed when a new client is added; when a client
is revoked, a CRL must be updated and pushed to all servers.
By the way, I didn't get the glusterfs servers to accept my CRLs, do people
use them?
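
(To make the certificate side concrete, a hedged openssl sketch for embedding those two OIDs; file names and the uid/gid value are made up:)

# openssl.cnf fragment declaring the two custom extensions
[ gluster_client_ext ]
1.2.3.4.5.6.7 = ASN1:UTF8String:1000    # uid
1.2.3.4.5.6.8 = ASN1:UTF8String:1000    # gid

# sign the client's CSR with those extensions using the custom CA
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
    -days 365 -extfile openssl.cnf -extensions gluster_client_ext -out client.pem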

Notes:
 * groups are not handled right now and since users may change groups
regularly I don't think it would be a great idea to freeze them in a
certificate. The bricks could possibly do an LDAP lookup in order to
retrieve and cache the groups for a uid.
 * Clients obviously can't modify their certificates because they are
signed by the CA.

What do you think of this implementation, is it safe?
Do all client operations use auth_glusterfs_v2_authenticate or did I miss
other codepaths?

Thanks!

Pierre Carru
eshard

PS: By the way I found the source code very clean and well organized, nice
job :)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Permission for glusterfs logs.

2017-09-21 Thread Marcin Dulak
Who is rotating the logs? If logrotate, then setfacl may be the way to go:
https://bugzilla.redhat.com/show_bug.cgi?id=77

[root@centos7 ~]# touch /var/log/my.log
[root@centos7 ~]# ls -al /var/log/my.log
-rw-r--r--. 1 root root 0 Sep 21 07:01 /var/log/my.log
[root@centos7 ~]# chmod 600 /var/log/my.log
[root@centos7 ~]# sudo su - vagrant
Last login: Thu Sep 21 07:01:36 UTC 2017 from 10.0.2.2 on pts/0
[vagrant@centos7 ~]$ cat /var/log/my.log
cat: /var/log/my.log: Permission denied
[vagrant@centos7 ~]$ exit
logout
[root@centos7 ~]# setfacl -m u:vagrant:r /var/log/my.log
[root@centos7 ~]# sudo su - vagrant
Last login: Thu Sep 21 07:03:05 UTC 2017 on pts/0
[vagrant@centos7 ~]$ cat /var/log/my.log
[vagrant@localhost ~]$ getfacl /var/log/my.log
getfacl: Removing leading '/' from absolute path names
# file: var/log/my.log
# owner: root
# group: root
user::rw-
user:vagrant:r--
group::---
mask::r--
other::---
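
(And a hedged logrotate sketch to put the ACL back after each rotation, assuming it does not clash with whatever logrotate stanza the gluster packages already ship:)

/var/log/glusterfs/*.log {
    weekly
    rotate 4
    missingok
    compress
    sharedscripts
    postrotate
        setfacl -m u:vagrant:r /var/log/glusterfs/*.log || true
    endscript
}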

Marcin


On Wed, Sep 20, 2017 at 2:07 PM, ABHISHEK PALIWAL 
wrote:

> I have modified the source code and it's working fine, but the permissions of the
> two files below are still not getting changed even after the modification.
>
> 1. cli.log
> 2. file which contains the mounting information for "mount -t glusterfs"
> command
>
> On Wed, Sep 20, 2017 at 5:20 PM, Kaleb S. KEITHLEY 
> wrote:
>
>> On 09/18/2017 09:22 PM, ABHISHEK PALIWAL wrote:
>> > Any suggestion would be appreciated...
>> >
>> > On Sep 18, 2017 15:05, "ABHISHEK PALIWAL" > > > wrote:
>> >
>> > Any quick suggestion.?
>> >
>> > On Mon, Sep 18, 2017 at 1:50 PM, ABHISHEK PALIWAL
>> > > wrote:
>> >
>> > Hi Team,
>> >
>> > As you can see permission for the glusterfs logs in
>> > /var/log/glusterfs is 600.
>> >
>> > drwxr-xr-x 3 root root  140 Jan  1 00:00 ..
>> > *-rw--- 1 root root0 Jan  3 20:21 cmd_history.log*
>> > drwxr-xr-x 2 root root   40 Jan  3 20:21 bricks
>> > drwxr-xr-x 3 root root  100 Jan  3 20:21 .
>> > *-rw--- 1 root root 2102 Jan  3 20:21
>> > etc-glusterfs-glusterd.vol.log*
>> >
>> > Due to that non-root user is not able to access these logs
>> > files, could you please let me know how can I change these
>> > permission. So that non-root user can also access these log
>> files.
>> >
>>
>> There is no "quick fix."  Gluster creates the log files with 0600 — like
>> nearly everything else in /var/log.
>>
>> The admin can chmod the files, but when the logs rotate the new log
>> files will be 0600 again.
>>
>> You'd have to patch the source and rebuild to get different permission
>> bits.
>>
>> You can probably do something with ACLs, but as above, when the logs
>> rotate the new files won't have the ACLs.
>>
>>
>>
>> --
>>
>> Kaleb
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Fwd: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]

2017-09-21 Thread Amye Scavarda
Just making sure this gets through.


-- Forwarded message --
From: Martin Toth 
Date: Thu, Sep 21, 2017 at 9:17 AM
Subject: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
To: gluster-users@gluster.org
Cc: Marek Toth , a...@redhat.com


Hello all fellow GlusterFriends,

I would like you to comment / correct my upgrade procedure steps on
replica 2 volume of 3.7.x gluster.
Then I would like to change replica 2 to replica 3 in order to correct a
quorum issue that the infrastructure currently has.

Infrastructure setup:
- all clients running on same nodes as servers (FUSE mounts)
- under gluster there is ZFS pool running as raidz2 with SSD ZLOG/ZIL cache
- both hypervisors run as GlusterFS nodes and also as Qemu compute
nodes (Ubuntu 16.04 LTS)
- we are running Qemu VMs that access their disks via gfapi (Opennebula)
- we currently run : 1x2 , Type: Replicate volume

Current Versions :
glusterfs-* [package] 3.7.6-1ubuntu1
qemu-* [package] 2.5+dfsg-5ubuntu10.2glusterfs3.7.14xenial1

What we need : (New versions)
- upgrade GlusterFS to the 3.12 LTM version (Ubuntu 16.04 LTS packages are
EOL - see https://www.gluster.org/community/release-schedule/)
- I want to use
https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12 as
package repository for 3.12
- upgrade Qemu (with built-in support for libgfapi) -
https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.12
- (sadly Ubuntu ships packages built without libgfapi support)
- add third node to replica setup of volume (this is probably most
dangerous operation)

Backup Phase
- backup "NFS storage” - raw DATA that runs on VMs
- stop all running VMs
- backup all running VMs (Qcow2 images) outside of gluster

Upgrading Gluster Phase
- killall glusterfs glusterfsd glusterd (on every server)
(this should stop all gluster services - server and client as it runs
on same nodes)
- install new Gluster Server and Client packages from repository
mentioned above (on every server)
- install new Monotek's qemu glusterfs package with gfapi enabled
support (on every server)
- /etc/init.d/glusterfs-server start (on every server)
- /etc/init.d/glusterfs-server status - verify that all runs ok (on
every server)
- check :
- gluster volume info
- gluster volume status
- check gluster FUSE clients, if mounts working as expected
- test if various VMs are able to boot and run as expected (if
libgfapi works in Qemu)
- reboot all nodes - do system upgrade of packages
- test and check again

Adding third node to replica 2 setup (replica 2 => replica 3)
(volumes will be mounted and up after upgrade and we tested VMs are
able to be served with libgfapi = upgrade of gluster successfully
completed)
(next we extend replica 2 to replica 3 while volumes are mounted but
no data is touched = no running VMs, only glusterfs servers and
clients on nodes)
- issue command : gluster volume add-brick volume replica 3
node3.san:/tank/gluster/brick1 (on new single node - node3)
so we change :
Bricks:
Brick1: node1.san:/tank/gluster/brick1
Brick2: node2.san:/tank/gluster/brick1
to :
Bricks:
Brick1: node1.san:/tank/gluster/brick1
Brick2: node2.san:/tank/gluster/brick1
Brick3: node3.san:/tank/gluster/brick1
- check gluster status
- (is rebalance / heal required here ?)
- start all VMs and start celebration :)

My Questions
- are heal and rebalance necessary in order to upgrade replica 2 to replica 3?
- is this upgrade procedure OK? What more/else should I do in order
to do this upgrade correctly?

Many thanks to all for your support. I hope my little preparation howto will
help others to solve the same situation.

Best Regards,
Martin


-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] how to verify bitrot signed file manually?

2017-09-21 Thread Amudhan P
Hi,

I have a file in my brick which was signed by bitrot and later, when
running scrub, it was marked as bad.

Now I want to verify the file again manually, just to clarify my doubt.

How can I do this?
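
(To show what I mean by "manually", a hedged sketch run directly on the brick; the exact xattr layout may differ between versions, and the brick path here is made up:)

# the scrubber's verdict and the stored signature live in extended attributes on the brick file
getfattr -d -m . -e hex /bricks/brick1/path/to/file   # look for trusted.bit-rot.signature and trusted.bit-rot.bad-file

# the signature should embed a sha256 of the file contents (after a short header),
# so it can be compared against:
sha256sum /bricks/brick1/path/to/file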


regards
Amudhan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] "Input/output error" on mkdir for PPC64 based client

2017-09-21 Thread Niels de Vos
On Tue, Sep 19, 2017 at 04:10:26PM -0500, Walter Deignan wrote:
> I recently compiled the 3.10-5 client from source on a few PPC64 systems 
> running RHEL 7.3. They are mounting a Gluster volume which is hosted on 
> more traditional x86 servers.
> 
> Everything seems to be working properly except for creating new 
> directories from the PPC64 clients. The mkdir command gives a 
> "Input/output error" and for the first few minutes the new directory is 
> inaccessible. I checked the backend bricks and confirmed the directory was 
> created properly on all of them. After waiting for 2-5 minutes the 
> directory magically becomes accessible.
> 
> This inaccessible directory issue only appears from the client which 
> created it. When creating the directory from client #1 I can immediately 
> see it with no errors from client #2.
> 
> Using a pre-compiled 3.10-5 package on an x86 client doesn't show the 
> issue.
> 
> I poked around bugzilla but couldn't seem to find anything which matches 
> this.

Maybe https://bugzilla.redhat.com/show_bug.cgi?id=951903 ?
Some of the details have also been captured in
https://github.com/gluster/glusterfs/issues/203

Capturing a tcpdump and opening it up with Wireshark may help in
identifying if GFIDs are mixed up. Possibly some are also mentioned in
different logs, but that might be more difficult to find.
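
(A capture sketch, assuming the default ports, 24007 for glusterd and 49152+ for the bricks:)

tcpdump -i any -s 0 -w /tmp/gluster-ppc64.pcap 'port 24007 or portrange 49152-49251'
# then open the .pcap in Wireshark, which has a GlusterFS dissector, and compare
# the GFIDs in the LOOKUP/MKDIR calls and replies coming from the PPC64 client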

HTH,
Niels


> 
> [root@mqdev1 hafsdev1_gv0]# ls -lh
> total 8.0K
> drwxrwxr-x. 4 mqm  mqm  4.0K Sep 19 15:47 data
> drwxr-xr-x. 2 root root 4.0K Sep 19 15:47 testdir
> [root@mqdev1 hafsdev1_gv0]# mkdir testdir2
> mkdir: cannot create directory ‘testdir2’: Input/output error
> [root@mqdev1 hafsdev1_gv0]# ls
> ls: cannot access testdir2: No such file or directory
> data  testdir  testdir2
> [root@mqdev1 hafsdev1_gv0]# ls -lht
> ls: cannot access testdir2: No such file or directory
> total 8.0K
> drwxr-xr-x. 2 root root 4.0K Sep 19 15:47 testdir
> drwxrwxr-x. 4 mqm  mqm  4.0K Sep 19 15:47 data
> d?? ? ??   ?? testdir2
> [root@mqdev1 hafsdev1_gv0]# cd testdir2
> -bash: cd: testdir2: No such file or directory
> 
> *Wait a few minutes...*
> 
> [root@mqdev1 hafsdev1_gv0]# ls -lht
> total 12K
> drwxr-xr-x. 2 root root 4.0K Sep 19 15:50 testdir2
> drwxr-xr-x. 2 root root 4.0K Sep 19 15:47 testdir
> drwxrwxr-x. 4 mqm  mqm  4.0K Sep 19 15:47 data
> [root@mqdev1 hafsdev1_gv0]#
> 
> My volume config...
> 
> [root@dc-hafsdev1a bricks]# gluster volume info
> 
> Volume Name: gv0
> Type: Replicate
> Volume ID: a2d37705-05cb-4700-8ed8-2cb89376faf0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: dc-hafsdev1a.ulinedm.com:/gluster/bricks/brick1/data
> Brick2: dc-hafsdev1b.ulinedm.com:/gluster/bricks/brick1/data
> Brick3: dc-hafsdev1c.ulinedm.com:/gluster/bricks/brick1/data
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> network.ping-timeout: 2
> features.bitrot: on
> features.scrub: Active
> cluster.server-quorum-ratio: 51%
> 
> -Walter Deignan
> -Uline IT, Systems Architect


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Backup and Restore strategy in Gluster FS 3.8.4

2017-09-21 Thread Ben Werthmann
Good point Diego. Thanks!

On Thu, Sep 21, 2017 at 9:49 AM, Diego Remolina  wrote:

> Not so fast, 3.8.4 is the latest if you are using official RHEL rpms
> from Red Hat Gluster Storage, so support for that should go through
> your Red Hat subscription. If you are using the community packages,
> then yes, you want to update to a more current version.
>
> Seems like the latest is: 3.8.4-44.el7
>
> Diego
>
> On Wed, Sep 20, 2017 at 1:45 PM, Ben Werthmann  wrote:
> >
> >> First, please note that gluster 3.8 is EOL and that 3.8.4 is rather old
> in
> >> the 3.8 release, 3.8.15 is the current (and probably final) release of
> 3.8.
> >>
> >> "With the release of GlusterFS-3.12, GlusterFS-3.8 (LTM) and
> >> GlusterFS-3.11 (STM) have reached EOL. Except for serious security
> issues no
> >> further updates to these versions are forthcoming. If you find a bug
> please
> >> see if you can reproduce it with 3.10 or 3.12 and
> >> file a BZ if appropriate."
> >> http://lists.gluster.org/pipermail/packaging/2017-August/000363.html
> >>
> >> Gluster 3.12 includes '#1428061: Halo Replication feature for AFR
> >> translator' which was introduced in 3.11. See Halo Replication feature
> in
> >> AFR has been introduced for a summary. 3.11 is EOL, best to use 3.12
> (long
> >> term release) if this is of interest to you.
> >>
> >> On Thu, Aug 24, 2017 at 4:18 AM, Sunil Aggarwal 
> >> wrote:
> >>>
> >>> Hi,
> >>>
> >>> What is the preferred way of taking glusterfs backup?
> >>>
> >>> I am using Glusterfs 3.8.4.
> >>>
> >>> We've configured gluster on thick provisioned LV in which 50% of the VG
> >>> is kept free for the LVM snapshot.
> >>
> >>
> >> Gluster Volume Snapshots require each brick to have its own thin LV.
> Only
> >> thin LVs are supported with volume snapshots.
> >>
> >> Gluster snapshots do not provide a recovery path in the event of
> >> catastrophic loss of a Gluster volume.
> >>
> >>  -  Gluster Volume Snapshots provide point-in-time recovery for a
> healthy
> >> Gluster Volume. A restore will reset the entire volume to a previous
> >> point-in-time recovery point. Granular recovery may be performed with
> admin
> >> intervention.
> >>  - User Serviceable Snapshots exposes Gluster Volume Snapshots via the
> >> .snaps directory in every directory of the mounted volume. Please review
> >> documentation for requirements specific to each protocol used to access
> the
> >> gluster volume.
> >>
> >>>
> >>> is it any different than taking a snapshot on a thin-provisioned LV?
> >>
> >>
> >> If you don't want Gluster Volume Snapshots, and are just looking to
> backup
> >> the current state of a volume, Gluster offers a number of features to
> >> recover from catastrophic loss of a Gluster volume.
> >>
> >>  - Simple method: mount the gluster volume and run your backup utility
> of
> >> choice
> >>  - glusterfind generates a file list (full and incremental) for passing
> to
> >> another utility to generate a backup. See:
> >>   http://docs.gluster.org/en/latest/GlusterFS%20Tools/glusterfind/
> >>   https://milindchangireblog.wordpress.com/2016/10/28/why-glusterfind/
> >>   http://lists.gluster.org/pipermail/gluster-users/2017-August/032219.html
> >>  - Geo-replication provides a distributed, continuous, asynchronous, and
> >> incremental replication service from one site to another over Local Area
> >> Networks (LANs), Wide Area Networks (WANs), and the Internet. (taken
> from
> >> RedHat doc)
> >>
> >>
> >> Please see docs for additional info about each of the above:
> >>
> >> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/pdf/administration_guide/Red_Hat_Gluster_Storage-3.2-Administration_Guide-en-US.pdf
> >>
> >>>
> >>>
> >>> --
> >>> Thanks,
> >>> Sunil Aggarwal
> >>> 844-734-5346
> >>>
> >>
> >>
> >
> >
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Arbiter and geo-replication

2017-09-21 Thread Marcus

Hi all!

Today I have a small gluster replication setup on 2 machines.
My plan is to scale this; I do though need some feedback on whether the way I plan 
things is in the right direction.


First of all, I have understood the need for an arbiter.
When I scale this, say that I just have 2 replicas and 1 arbiter: when I 
add another two machines, can I still use the same physical machine as 
the arbiter?
Or, when I add two additional machines, do I have to add another arbiter 
machine as well?
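
(To make the question concrete, a hedged sketch of the layout I have in mind, where the same arbiter host carries one small arbiter brick per replica pair; host and brick names are made up:)

gluster volume create myvol replica 3 arbiter 1 \
    node1:/bricks/b1 node2:/bricks/b1 arb1:/bricks/arb-b1 \
    node3:/bricks/b2 node4:/bricks/b2 arb1:/bricks/arb-b2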


My second question is about geo-replication.
If I want to set up geo-replication on the above gluster cluster, do I need 
to have the exact "same" machines on the geo-replication side?
I know that the disk size should be the same on both the brick and the 
geo-replication side.
So if I have 2 replicas and 1 arbiter, do I need 2 replicas and 1 arbiter 
for the geo-replication?
Or is it sufficient for 2 replicas and 1 arbiter to use 1 replica for 
the geo-replication?
What I wonder is, when I scale my gluster with 2 additional machines, do 
I need 2 machines for geo-replication or 1 machine for geo-replication?
So does adding 2 machines mean adding 4 machines in total, or do I just need 
3 in total?

Is there a need for an arbiter in the geo-replication?

Many questions, but I hope that you can help me out!

Many thanks in advance!

Best regards
Marcus Pedersén


--

*Marcus Pedersén*
/System administrator/


*Interbull Centre*
Department of Animal Breeding & Genetics — SLU
Box 7023, SE-750 07
Uppsala, Sweden

Visiting address:
Room 55614, Ulls väg 26, Ultuna
Uppsala
Sweden

Tel: +46-(0)18-67 1962
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] vfs_fruit and extended attributes

2017-09-21 Thread Terry McGuire
Hello list.  I’m attempting to improve how Samba shares directories on our 
Gluster volume to Mac users by using the vfs_fruit module.  This module does 
wonders for speeding listings and downloads of directories with large numbers 
of files in the Finder, but it kills uploads dead.  Finder gives an error:

The Finder can’t complete the operation because some data in “[filename]” can’t 
be read or written.
(Error code -36)

The man page for the module indicates that the vfs_streams_xattr module must 
also be loaded, like this:

vfs objects = fruit streams_xattr

I’ve done this, and I’ve futzed with various combinations of options, but 
haven’t got writes working.  This issue seems specific to Gluster, as I can 
share a directory that’s not in the Gluster volume and it works fine.  I’m 
guessing the problem has something to do with extended attributes, but, near as 
I can tell, the Gluster volume ought to do xattrs just fine, as its bricks are 
XFS.
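
(For reference, one of the combinations I mean looks roughly like this; the fruit/streams_xattr values are things to experiment with, not a known-good configuration:)

[macshare]
    path = /mnt/gluster/public
    read only = no
    vfs objects = fruit streams_xattr
    fruit:metadata = stream
    fruit:resource = stream
    streams_xattr:prefix = user.
    streams_xattr:store_stream_type = no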

Anyone have any experience with this?  Anyone have vfs_fruit working with 
Gluster?

Regards,
Terry
_
Terry McGuire
Information Services and Technology (IST)
University of Alberta
Edmonton, Alberta, Canada  T6G 2H1
Phone:  780-492-9422



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Permission for glusterfs logs.

2017-09-21 Thread Alex K
Wouldn't a simple "chmod 644 logfile" suffice? This will give read
permissions to all.

Otherwise you could change the group ownership (chgrp), give read
permissions to this group (640), then make the users members of this group.
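
(A quick sketch of that approach; the group name is made up:)

groupadd glusterlog
chgrp glusterlog /var/log/glusterfs/*.log
chmod 640 /var/log/glusterfs/*.log
usermod -aG glusterlog someuser    # the user picks up the group at next login

(As noted elsewhere in the thread, freshly created or rotated log files will come back as root:root 0600, so this has to be reapplied or scripted.)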

Alex


On Sep 20, 2017 2:37 PM, "ABHISHEK PALIWAL"  wrote:

Any suggestion would be appreciated...

On Sep 18, 2017 15:05, "ABHISHEK PALIWAL"  wrote:

> Any quick suggestion.?
>
> On Mon, Sep 18, 2017 at 1:50 PM, ABHISHEK PALIWAL  > wrote:
>
>> Hi Team,
>>
>> As you can see permission for the glusterfs logs in /var/log/glusterfs is
>> 600.
>>
>> drwxr-xr-x 3 root root  140 Jan  1 00:00 ..
>> *-rw--- 1 root root0 Jan  3 20:21 cmd_history.log*
>> drwxr-xr-x 2 root root   40 Jan  3 20:21 bricks
>> drwxr-xr-x 3 root root  100 Jan  3 20:21 .
>> *-rw--- 1 root root 2102 Jan  3 20:21 etc-glusterfs-glusterd.vol.log*
>>
>> Due to that non-root user is not able to access these logs files, could
>> you please let me know how can I change these permission. So that non-root
>> user can also access these log files.
>>
>> Regards,
>> Abhishek Paliwal
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] "Input/output error" on mkdir for PPC64 based client

2017-09-21 Thread Walter Deignan
I put the share into debug mode and then repeated the process from a ppc64 
client and an x86 client. Weirdly the client logs were almost identical.

Here's the ppc64 gluster client log of attempting to create a folder...

-

[2017-09-20 13:34:23.344321] D 
[rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (--> 
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn-0xfdf60)[0x3fff9ec56fe0] 
(--> 
/usr/lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked-0x26060)[0x3fff9ebd9e20]
 
(--> /usr/lib64/libgfrpc.so.0(+0x1a614)[0x3fff9ebda614] (--> 
/usr/lib64/libgfrpc.so.0(rpc_clnt_submit-0x29300)[0x3fff9ebd69b0] (--> 
/usr/lib64/glusterfs/3.10.5/xlator/protocol/client.so(+0x182e0)[0x3fff939182e0] 
) 0-: 10.50.80.102:49152: ping timer event already removed
[2017-09-20 13:34:23.345149] D 
[rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (--> 
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn-0xfdf60)[0x3fff9ec56fe0] 
(--> 
/usr/lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked-0x26060)[0x3fff9ebd9e20]
 
(--> /usr/lib64/libgfrpc.so.0(+0x1a614)[0x3fff9ebda614] (--> 
/usr/lib64/libgfrpc.so.0(rpc_clnt_submit-0x29300)[0x3fff9ebd69b0] (--> 
/usr/lib64/glusterfs/3.10.5/xlator/protocol/client.so(+0x182e0)[0x3fff939182e0] 
) 0-: 10.50.80.103:49152: ping timer event already removed
[2017-09-20 13:34:23.345977] D 
[rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (--> 
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn-0xfdf60)[0x3fff9ec56fe0] 
(--> 
/usr/lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked-0x26060)[0x3fff9ebd9e20]
 
(--> /usr/lib64/libgfrpc.so.0(+0x1a614)[0x3fff9ebda614] (--> 
/usr/lib64/libgfrpc.so.0(rpc_clnt_submit-0x29300)[0x3fff9ebd69b0] (--> 
/usr/lib64/glusterfs/3.10.5/xlator/protocol/client.so(+0x182e0)[0x3fff939182e0] 
) 0-: 10.50.80.104:49152: ping timer event already removed
[2017-09-20 13:34:23.346070] D [MSGID: 0] 
[dht-common.c:1002:dht_revalidate_cbk] 0-gv0-dht: revalidate lookup of / 
returned with op_ret 0 [Structure needs cleaning]
[2017-09-20 13:34:23.347612] D [MSGID: 0] [dht-common.c:2699:dht_lookup] 
0-gv0-dht: Calling fresh lookup for /tempdir3 on gv0-replicate-0
[2017-09-20 13:34:23.348013] D [MSGID: 0] 
[client-rpc-fops.c:2936:client3_3_lookup_cbk] 0-stack-trace: 
stack-address: 0x3fff88001080, gv0-client-1 returned -1 error: No such 
file or directory [No such file or directory]
[2017-09-20 13:34:23.348013] D [MSGID: 0] 
[client-rpc-fops.c:2936:client3_3_lookup_cbk] 0-stack-trace: 
stack-address: 0x3fff88001080, gv0-client-0 returned -1 error: No such 
file or directory [No such file or directory]
[2017-09-20 13:34:23.348083] D [MSGID: 0] 
[client-rpc-fops.c:2936:client3_3_lookup_cbk] 0-stack-trace: 
stack-address: 0x3fff88001080, gv0-client-2 returned -1 error: No such 
file or directory [No such file or directory]
[2017-09-20 13:34:23.348132] D [MSGID: 0] 
[afr-common.c:2264:afr_lookup_done] 0-stack-trace: stack-address: 
0x3fff88001080, gv0-replicate-0 returned -1 error: No such file or 
directory [No such file or directory]
[2017-09-20 13:34:23.348166] D [MSGID: 0] 
[dht-common.c:2284:dht_lookup_cbk] 0-gv0-dht: fresh_lookup returned for 
/tempdir3 with op_ret -1 [No such file or directory]
[2017-09-20 13:34:23.348195] D [MSGID: 0] 
[dht-common.c:2297:dht_lookup_cbk] 0-gv0-dht: Entry /tempdir3 missing on 
subvol gv0-replicate-0
[2017-09-20 13:34:23.348220] D [MSGID: 0] 
[dht-common.c:2068:dht_lookup_everywhere] 0-gv0-dht: winding lookup call 
to 1 subvols
[2017-09-20 13:34:23.348551] D [MSGID: 0] 
[client-rpc-fops.c:2936:client3_3_lookup_cbk] 0-stack-trace: 
stack-address: 0x3fff88001080, gv0-client-1 returned -1 error: No such 
file or directory [No such file or directory]
[2017-09-20 13:34:23.348551] D [MSGID: 0] 
[client-rpc-fops.c:2936:client3_3_lookup_cbk] 0-stack-trace: 
stack-address: 0x3fff88001080, gv0-client-0 returned -1 error: No such 
file or directory [No such file or directory]
[2017-09-20 13:34:23.348613] D [MSGID: 0] 
[client-rpc-fops.c:2936:client3_3_lookup_cbk] 0-stack-trace: 
stack-address: 0x3fff88001080, gv0-client-2 returned -1 error: No such 
file or directory [No such file or directory]
[2017-09-20 13:34:23.348639] D [MSGID: 0] 
[afr-common.c:2264:afr_lookup_done] 0-stack-trace: stack-address: 
0x3fff88001080, gv0-replicate-0 returned -1 error: No such file or 
directory [No such file or directory]
[2017-09-20 13:34:23.348665] D [MSGID: 0] 
[dht-common.c:1870:dht_lookup_everywhere_cbk] 0-gv0-dht: returned with 
op_ret -1 and op_errno 2 (/tempdir3) from subvol gv0-replicate-0
[2017-09-20 13:34:23.348697] D [MSGID: 0] 
[dht-common.c:1535:dht_lookup_everywhere_done] 0-gv0-dht: STATUS: 
hashed_subvol gv0-replicate-0 cached_subvol null
[2017-09-20 13:34:23.348740] D [MSGID: 0] 
[dht-common.c:1596:dht_lookup_everywhere_done] 0-gv0-dht: There was no 
cached file and  unlink on hashed is not skipped /tempdir3
[2017-09-20 13:34:23.348783] D [MSGID: 0] 
[dht-common.c:1599:dht_lookup_everywhere_done] 0-stack-trace: 
stack-address: 

Re: [Gluster-users] "Input/output error" on mkdir for PPC64 based client

2017-09-21 Thread Amar Tumballi
Looks like it is an issue with architecture compatibility in the RPC layer (i.e.,
with XDRs and how they are used). Just glance through the logs of the client process
where you saw the errors, which could give some hints. If you don't
understand the logs, share them, and we will try to look into it.

-Amar

On Wed, Sep 20, 2017 at 2:40 AM, Walter Deignan  wrote:

> I recently compiled the 3.10-5 client from source on a few PPC64 systems
> running RHEL 7.3. They are mounting a Gluster volume which is hosted on
> more traditional x86 servers.
>
> Everything seems to be working properly except for creating new
> directories from the PPC64 clients. The mkdir command gives a "Input/output
> error" and for the first few minutes the new directory is inaccessible. I
> checked the backend bricks and confirmed the directory was created properly
> on all of them. After waiting for 2-5 minutes the directory magically
> becomes accessible.
>
> This inaccessible directory issue only appears from the client which
> created it. When creating the directory from client #1 I can immediately
> see it with no errors from client #2.
>
> Using a pre-compiled 3.10-5 package on an x86 client doesn't show the
> issue.
>
> I poked around bugzilla but couldn't seem to find anything which matches
> this.
>
> [root@mqdev1 hafsdev1_gv0]# ls -lh
> total 8.0K
> drwxrwxr-x. 4 mqm  mqm  4.0K Sep 19 15:47 data
> drwxr-xr-x. 2 root root 4.0K Sep 19 15:47 testdir
> [root@mqdev1 hafsdev1_gv0]# mkdir testdir2
> mkdir: cannot create directory ‘testdir2’: Input/output error
> [root@mqdev1 hafsdev1_gv0]# ls
> ls: cannot access testdir2: No such file or directory
> data  testdir  testdir2
> [root@mqdev1 hafsdev1_gv0]# ls -lht
> ls: cannot access testdir2: No such file or directory
> total 8.0K
> drwxr-xr-x. 2 root root 4.0K Sep 19 15:47 testdir
> drwxrwxr-x. 4 mqm  mqm  4.0K Sep 19 15:47 data
> d?? ? ??   ?? testdir2
> [root@mqdev1 hafsdev1_gv0]# cd testdir2
> -bash: cd: testdir2: No such file or directory
>
> *Wait a few minutes...*
>
> [root@mqdev1 hafsdev1_gv0]# ls -lht
> total 12K
> drwxr-xr-x. 2 root root 4.0K Sep 19 15:50 testdir2
> drwxr-xr-x. 2 root root 4.0K Sep 19 15:47 testdir
> drwxrwxr-x. 4 mqm  mqm  4.0K Sep 19 15:47 data
> [root@mqdev1 hafsdev1_gv0]#
>
> My volume config...
>
> [root@dc-hafsdev1a bricks]# gluster volume info
>
> Volume Name: gv0
> Type: Replicate
> Volume ID: a2d37705-05cb-4700-8ed8-2cb89376faf0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: dc-hafsdev1a.ulinedm.com:/gluster/bricks/brick1/data
> Brick2: dc-hafsdev1b.ulinedm.com:/gluster/bricks/brick1/data
> Brick3: dc-hafsdev1c.ulinedm.com:/gluster/bricks/brick1/data
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> network.ping-timeout: 2
> features.bitrot: on
> features.scrub: Active
> cluster.server-quorum-ratio: 51%
>
> -Walter Deignan
> -Uline IT, Systems Architect
>



-- 
Amar Tumballi (amarts)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] [gluster-users] 3.12 Retrospective

2017-09-21 Thread Amye Scavarda
Want to help the Gluster community improve releases?
Help us by participating in the 3.12 retrospective, available on the
main gluster.org website: https://www.gluster.org/3-12-retrospectives/

I'll keep this poll open until next Friday and will post back with feedback.
Thanks!

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Backup and Restore strategy in Gluster FS 3.8.4

2017-09-21 Thread Ben Werthmann
> First, please note that gluster 3.8 is EOL and that 3.8.4 is rather old in
> the 3.8 release, 3.8.15 is the current (and probably final) release of 3.8.
>
> "With the release of GlusterFS-3.12, GlusterFS-3.8 (LTM) and GlusterFS-
> 3.11 (STM) have reached EOL. Except for serious security issues no
> further updates to these versions are forthcoming. If you find a bug please
> see if you can reproduce it with 3.10 or 3.12 and
> file a BZ if appropriate."
> http://lists.gluster.org/pipermail/packaging/2017-August/000363.html
>
> Gluster 3.12 includes '#1428061 :
> Halo Replication feature for AFR translator' which was introduced in 3.11.
> See Halo Replication feature in AFR has been introduced
>  for a
> summary. 3.11 is EOL, best to use 3.12 (long term release) if this is of
> interest to you.
>
> On Thu, Aug 24, 2017 at 4:18 AM, Sunil Aggarwal 
> wrote:
>
>> Hi,
>>
>> What is the preferred way of taking glusterfs backup?
>>
>> I am using Glusterfs 3.8.4.
>>
>> We've configured gluster on thick provisioned LV in which 50% of the VG
>> is kept free for the LVM snapshot.
>>
>
> Gluster Volume Snapshots require each brick to have its own thin LV. Only
> thin LVs are supported with volume snapshots.
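
(A minimal sketch of the thin-LV layout that volume snapshots expect; VG/LV names and sizes here are hypothetical:)

# one thin pool plus one thin LV per brick, inside a volume group called vg_bricks
lvcreate -L 500G --thinpool brickpool vg_bricks
lvcreate -V 400G --thin -n brick1 vg_bricks/brickpool
mkfs.xfs -i size=512 /dev/vg_bricks/brick1
mount /dev/vg_bricks/brick1 /bricks/brick1

# once every brick of the volume sits on a thin LV:
gluster snapshot create snap1 myvol
gluster snapshot list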
>
> Gluster snapshots do not provide a recovery path in the event of
> catastrophic loss of a Gluster volume.
>
>  -  Gluster Volume Snapshots provide point-in-time recovery for a healthy
> Gluster Volume. A restore will reset the entire volume to a previous
> point-in-time recovery point. Granular recovery may be performed with admin
> intervention.
>  - User Serviceable Snapshots exposes Gluster Volume Snapshots via the
> .snaps directory in every directory of the mounted volume. Please review
> documentation for requirements specific to each protocol used to access the
> gluster volume.
>
>
>> is it any different than taking a snapshot on a thin-provisioned LV?
>>
>
> If you don't want Gluster Volume Snapshots, and are just looking to backup
> the current state of a volume, Gluster offers a number of features to
> recover from catastrophic loss of a Gluster volume.
>
>  - Simple method: mount the gluster volume and run your backup utility of
> choice
>  - glusterfind generates a file list (full and incremental) for passing to
> another utility to generate a backup. See:
>   http://docs.gluster.org/en/latest/GlusterFS%20Tools/glusterfind/
>   https://milindchangireblog.wordpress.com/2016/10/28/why-glusterfind/
>   http://lists.gluster.org/pipermail/gluster-users/2017-August/032219.html
>  - Geo-replication provides a distributed, continuous, asynchronous, and
> incremental replication service from one site to another over Local Area
> Networks (LANs), Wide Area Networks (WANs), and the Internet. (taken from
> RedHat doc)
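
(A rough glusterfind workflow sketch; session and volume names are made up:)

glusterfind create backup_sess myvol                      # one-time session setup
glusterfind pre backup_sess myvol /tmp/changed-files.txt  # full list first run, incremental afterwards
# feed /tmp/changed-files.txt to rsync/tar/your backup tool, then mark the checkpoint:
glusterfind post backup_sess myvol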
>
>
> Please see docs for additional info about each of the above:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/pdf/administration_guide/Red_Hat_Gluster_Storage-3.2-Administration_Guide-en-US.pdf
>
>
>>
>> --
>> Thanks,
>> Sunil Aggarwal
>> 844-734-5346 <%28844%29%20734-5346>
>>
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Sharding option for distributed volumes

2017-09-21 Thread Pavel Kutishchev

Hello folks,

Would someone please advise how to use the sharding option for distributed 
volumes? At the moment I'm facing a problem with exporting big files, which 
are not distributed across the bricks inside one volume.
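
(A hedged sketch, using the option names visible elsewhere in this digest; as far as I know sharding only applies to files created after it is enabled, existing files stay whole:)

gluster volume set myvol features.shard on
gluster volume set myvol features.shard-block-size 64MB
# re-copy the big file onto the volume so it is rewritten as shards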


Thank you in advance.


--
Best regards
Pavel Kutishchev
Golang DevOPS Engineer at
Self employed.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] how many hosts could be down in a 12x(4+2) distributed dispersed volume?

2017-09-21 Thread Mauro Tridici

Dear Sunil Kumar Acharya,

yes, I can confirm that I placed 2 bricks per subvolume per host.

Thank you very much for your support.
Regards,
Mauro tridici



> On 20 Sep 2017, at 09:34, Sunil Kumar Heggodu Gopala Acharya 
> wrote:
> 
> Hi Mauro Tridici,
> 
> From the information provided it appears like you have placed 2 bricks of a 
> subvolume on one host. Please confirm.
> 
> The number of hosts that could go down without losing access to data can be 
> derived based on the brick configuration/distribution. Please let us know the 
> brick distribution plan.
> 
> Regards,
> SUNIL KUMAR ACHARYA
> SENIOR SOFTWARE ENGINEER
> Red Hat
> T: +91-8067935170
> TRIED. TESTED. TRUSTED.
> 
> On Tue, Sep 19, 2017 at 1:09 AM, Mauro Tridici  > wrote:
> Dear All,
> 
> I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume 
> based on the following hardware:
> 
> - 3 gluster servers (each server with 2 CPU 10 cores, 64GB RAM, 12 hard disk 
> SAS 12Gb/s, 10GbE storage network)
> 
> Now, we need to add 3 new servers with the same hardware configuration 
> respecting the current volume topology.
> If I'm right, we will obtain a DISTRIBUTED DISPERSED gluster volume with 12 
> subvolumes, each volume will contain (4+2) bricks, that is a [12x(4+2)] 
> volume.
> 
> My question is: in the current volume configuration, only 2 bricks per 
> subvolume or one host could be down without losing data. What will happen 
> in the next configuration? How many hosts could be down without losing data?
> 
> Thank you very much.
> Mauro Tridici
> 
> 
> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] how many hosts could be down in a 12x(4+2) distributed dispersed volume?

2017-09-21 Thread Mauro Tridici

Dear Serkan,

thank you very much for your support and explanation.
I really appreciated the information you provided.

Regards,
Mauro

> On 20 Sep 2017, at 08:26, Serkan Çoban 
> wrote:
> 
> If you add bricks to the existing volume, one host could be down in each
> three-host group. If you recreate the volume with one brick on each
> host, then two random hosts can be tolerated.
> Assume s1,s2,s3 are the current servers and you add s4,s5,s6 and extend the
> volume. If any two servers in the same group go down you lose data. If
> you choose two random hosts, the probability that you lose data will be 20%
> in this case.
> If you recreate the volume with s1,s2,s3,s4,s5,s6 with one brick on each
> host, any random two servers can go down. If you choose two random hosts,
> the probability that you lose data will be 0% in this case.
> 
> On Mon, Sep 18, 2017 at 10:39 PM, Mauro Tridici  wrote:
>> Dear All,
>> 
>> I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume 
>> based on the following hardware:
>> 
>> - 3 gluster servers (each server with 2 CPU 10 cores, 64GB RAM, 12 hard disk 
>> SAS 12Gb/s, 10GbE storage network)
>> 
>> Now, we need to add 3 new servers with the same hardware configuration 
>> respecting the current volume topology.
>> If I'm right, we will obtain a DISTRIBUTED DISPERSED gluster volume with 12 
>> subvolumes, each volume will contain (4+2) bricks, that is a [12x(4+2)] 
>> volume.
>> 
>> My question is: in the current volume configuration, only 2 bricks per 
>> subvolume or one host could be down without losing data. What will happen 
>> in the next configuration? How many hosts could be down without losing data?
>> 
>> Thank you very much.
>> Mauro Tridici
>> 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] how many hosts could be down in a 12x(4+2) distributed dispersed volume?

2017-09-21 Thread Mauro Tridici

Dear Ashish,

thank you very much for the information and for the provided examples. They 
really help.

I think that it may be useful to stick a label on each server in order to 
identify the group they belong to.
For example:

server n.01 (label: Group 01)
server n.02 (label: Group 01)
server n.03 (label: Group 01)

server n.04 (label: Group 02)
server n.05 (label: Group 02)
server n.06 (label: Group 02)

So, if server n.01 and server n.06 go down I know that there will be no problem 
for the data.
But, if server n.05 and n.06 go down, I can start to cry.
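
(Instead of physical labels, a hedged way to read the grouping straight from gluster: for a dispersed volume the bricks are listed subvolume by subvolume, so every consecutive run of 6 bricks in the listing is one 4+2 set.)

gluster volume info vol | grep -E '^Brick[0-9]+:'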

Thank you again,
Mauro Tridici

> On 20 Sep 2017, at 09:33, Ashish Pandey 
> wrote:
> 
> 
> After adding 3 more nodes you will have 6 nodes and 2 HDs on each node.
> It depends on the way you are going to add new bricks to the existing volume 
> 'vol'.
> I think you should remember that in a given EC sub volume of 4+2, at any 
> point of time 2 bricks could be down.
> When you go from 6 * (4+2) to 12 * (4+2) you have to provide the paths of the bricks 
> you want to add.
> 
> Suppose you want to add 6 bricks and all the 6 bricks are on 3 new nodes (2 
> each); then, with respect to that sub volume, you can tolerate 1 node going 
> down. 
> If you are creating a 12 * (4+2) volume from scratch and providing 12 
> bricks from each server, then in that case even 2 nodes can go down without 
> any issue.
> 
> I think you should focus more on the number of hard drives in a sub volume. 
> You should ask yourself "How many bricks (HDs) within a sub volume will be 
> unavailable if 1 or 2 nodes go down?"
> 
> Ashish
> 
> 
> 
> 
> From: "Mauro Tridici" 
> To: gluster-users@gluster.org
> Sent: Tuesday, September 19, 2017 1:09:06 AM
> Subject: [Gluster-users] how many hosts could be down in a 12x(4+2)
> distributed dispersed volume?
> 
> Dear All,
> 
> I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume 
> based on the following hardware:
> 
> - 3 gluster servers (each server with 2 CPU 10 cores, 64GB RAM, 12 hard disk 
> SAS 12Gb/s, 10GbE storage network)
> 
> Now, we need to add 3 new servers with the same hardware configuration 
> respecting the current volume topology.
> 
> 
> My question is: in the current volume configuration, only 2 bricks per 
> subvolume or one host could be down without losing data. What will happen 
> in the next configuration? How many hosts could be down without losing data?
> 
> Thank you very much.
> Mauro Tridici
> 
> 


-
Mauro Tridici

Fondazione CMCC
CMCC Supercomputing Center
presso Complesso Ecotekne - Università del Salento -
Strada Prov.le Lecce - Monteroni sn
73100 Lecce  IT
http://www.cmcc.it

mobile: (+39) 327 5630841
email: mauro.trid...@cmcc.it

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] how to calculate the ideal value for client.event-threads, server.event-threads and performance.io-thread-count?

2017-09-21 Thread Mauro Tridici
Thank you very much.
I just confirmed default values.
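
(For the record, a hedged sketch of how these can be inspected or changed later if needed; "vol" stands for the volume name:)

gluster volume get vol client.event-threads
gluster volume get vol server.event-threads
gluster volume get vol performance.io-thread-count
gluster volume set vol server.event-threads 4    # example value only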

Regards,
Mauro Tridici

> On 20 Sep 2017, at 09:16, Serkan Çoban 
> wrote:
> 
> Defaults should be fine in your size. In big clusters I usually set
> event-threads to 4.
> 
> On Mon, Sep 18, 2017 at 10:39 PM, Mauro Tridici  wrote:
>> 
>> Dear All,
>> 
>> I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume
>> based on the following hardware:
>> 
>> - 3 gluster servers (each server with 2 CPU 10 cores, 64GB RAM, 12 hard disk
>> SAS 12Gb/s, 10GbE storage network)
>> 
>> Is there a way to detect the ideal value for client.event-threads,
>> server.event-threads and performance.io-thread-count?
>> 
>> Thank you in advance,
>> Mauro Tridici
>> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users