Re: [Gluster-users] @redhat - someone could take a look or ask about - freeipa-us...@redhat.com

2018-01-03 Thread Vijay Bellur
From [1] it does look like this list has been deprecated and you would need
to use freeipa-us...@lists.fedorahosted.org instead. You should be able to
post on that list.

HTH,
Vijay

[1] https://www.redhat.com/mailman/listinfo/freeipa-users

On Wed, Jan 3, 2018 at 7:50 AM, lejeczek  wrote:

> sorry guys to spam a bit - I hope someone from redhat could check whether
> - freeipa-us...@redhat.com - is up & ok?
> I've been a subscriber for a couple of years but now, suddenly(?) I cannot
> mail there, I get:
>
> "
> Sorry, we were unable to deliver your message to the following address.
>
> :
> 554: 5.7.1 : Recipient address rejected: Access
> denied
> "
>
> I suspect this might have something to do with DMARC, but where, at which
> end? Can someone check with the freeipa-users admins/owners?
>
> Anybody here also a freeipa-users member and experience this problem?
>
> many thanks, L.
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Trusted pool authentication & traffic encryption

2018-01-03 Thread Omar Kohl
Hi all,

I have some questions concerning Gluster security.

I was thinking about using Gluster for synchronizing data between my laptop and 
my desktop computer. I realize that this is not the usual use case, but I think 
it should work. I would create one replica-2 volume with one brick on each PC 
plus a FUSE mount of that volume on each PC. I would then always write my data 
to the local FUSE mount. Quite often one of the PCs would be offline but this 
should not be a problem (right?) because they would synchronize as soon as both 
are online.
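
Just to make the question concrete, I imagine the setup would be created
roughly like this (the hostnames "laptop"/"desktop", the volume name "sync"
and the brick path are only placeholders of mine, not something I have
tested yet):

# on the laptop, with glusterd running on both machines
gluster peer probe desktop
gluster volume create sync replica 2 \
laptop:/data/brick/sync desktop:/data/brick/sync
# gluster warns that replica 2 is prone to split-brain and asks to confirm
gluster volume start sync
# on each PC, mount the volume locally via FUSE and only ever write here
mount -t glusterfs localhost:/sync /mnt/sync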

Question 1: The hosts in the trusted peer network know about each other via
hostname or IP address. What would happen if I take my laptop into another
network and someone else has the same IP address as my desktop PC at home? Are
there any circumstances under which the laptop would start sending data to
that third-party machine? What if, for instance, this third party were a
malicious attacker who knew I was using Gluster?

Question 2: If someone has access to my home network, would they see the
clear-text traffic between the two Gluster hosts (i.e. between the brick
processes)?

I think both questions generalize easily to other settings. For instance, an
attacker could try IP spoofing in a datacentre, or they could record all
traffic that passes through a switch.

I suspect both questions might be answered with TLS/SSL encryption (e.g.
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/administration_guide/chap-network_encryption)
but I would like confirmation, and preferably some more details on how the
hosts/bricks authenticate to each other and whether any assumptions are being
made.
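
Reading that chapter, my understanding is that it boils down to something like
the following on both machines; the file paths are the standard ones from the
docs, but the volume name and the CN values are just my placeholders, so
please correct me if I got it wrong:

# each host gets a key and a (self-signed) certificate
openssl genrsa -out /etc/ssl/glusterfs.key 2048
openssl req -new -x509 -key /etc/ssl/glusterfs.key \
-subj "/CN=laptop" -out /etc/ssl/glusterfs.pem
# /etc/ssl/glusterfs.ca = concatenation of both hosts' glusterfs.pem
# enable management (glusterd) encryption on every node, then restart glusterd
touch /var/lib/glusterd/secure-access
# enable I/O-path encryption and restrict which certificate CNs may connect
gluster volume set sync client.ssl on
gluster volume set sync server.ssl on
gluster volume set sync auth.ssl-allow 'laptop,desktop'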

Kind regards,
Omar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] @redhat - someone could take a look or ask about - freeipa-us...@redhat.com

2018-01-03 Thread lejeczek
sorry guys to spam a bit - I hope someone from redhat could 
check whether - freeipa-us...@redhat.com - is up & ok?
I've been a subscriber for a couple of years but now, 
suddenly(?) I cannot mail there, I get:


"
Sorry, we were unable to deliver your message to the 
following address.


:
554: 5.7.1 : Recipient address 
rejected: Access denied

"

I suspect this might have something to do with DMARC, but 
where, at which end? Can someone check with the freeipa-users 
admins/owners?


Anybody here also a freeipa-users member and experience this 
problem?


many thanks, L.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] 2018 - Plans and Expectations on Gluster Community

2018-01-03 Thread Shyam Ranganathan
On 01/03/2018 08:05 AM, Kaleb S. KEITHLEY wrote:
> On 01/02/2018 11:03 PM, Vijay Bellur wrote:
>> ... The people who were writing storhaug never finished it. Keep using
>> 3.10 until storhaug gets finished.
>>
>>
>>
>> Since 3.10 will be EOL in approximately 2 months from now, what would
>> be our answer for NFS HA if storhaug is not finished by then?

Correction: 3.10 will not be EOL when 4.0 is released, as 4.0 is an STM.
The oldest LTM is only EOL'd when the next LTM (4.1) is released. Hence, we
have another 5 months or so before 3.10 is EOL'd.

>>
>>   -   Use ctdb
>>   -   Restore nfs.ganesha CLI support
>>   -   Something else?
>>
>> Have we already documented upgrade instructions for those users of the
>> nfs.ganesha CLI in 3.8? If not, it would be useful to have them listed
>> somewhere.
>>
> 
> I have a pretty high degree of confidence that I can have storhaug
> usable by or before 4.0. The bits I have on my devel box are almost
> ready to post on github.
> 
> I'd like to abandon the github repo at
> https://github.com/linux-ha-storage/storhaug; and create a new repo
> under https://github.com/gluster/storhaug. I dare say there are other
> Linux storage solutions besides gluster+ganesha+samba that storhaug
> doesn't handle.
> 
> And upgrade instructions for what? Upgrading/switching from legacy
> glusterd to storhaug? No, not yet. Doesn't make sense since there's no
> (usable) storhaug yet.
> 
> -- 
> 
> Kaleb
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] stale file handle on gluster NFS client when trying to remove a directory

2018-01-03 Thread Jeevan Patnaik
Hi,

Yes, but it didn't work for about 18 minutes; it kept giving the same
error. Then it suddenly worked.

Regards,
Jeevan.

On Jan 3, 2018 3:27 PM, "Nithya Balachandran"  wrote:

> An ESTALE error usually means the gfid could not be found. Does repeating
> the "rm -rf" delete the directory?
>
> Regards,
> Nithya
>
> On 3 January 2018 at 12:16, Jeevan Patnaik  wrote:
>
>> Hi all,
>>
>> I haven't found any root cause or workaround for this yet. Can anyone
>> help me understand the issue?
>>
>> Regards,
>> Jeevan.
>>
>> On Dec 21, 2017 8:20 PM, "Jeevan Patnaik"  wrote:
>>
>>> Hi,
>>>
>>>
>>> After running rm -rf on a directory, the files under it got deleted, but
>>> the directory itself was not deleted and showed a stale file handle error.
>>> After 18 minutes, I was able to delete the directory. Could anyone help me
>>> understand what could have happened, or when in general I would get such errors?
>>>
>>>
>>> The following is NFS log:
>>>
>>>
>>> [2017-12-21 13:56:01.592256] I [MSGID: 108019]
>>> [afr-transaction.c:1903:afr_post_blocking_entrylk_cbk]
>>> 0-g_sitework2-replicate-5: Blocking entrylks failed.
>>>
>>> [2017-12-21 13:56:01.594350] W [MSGID: 108019]
>>> [afr-lk-common.c:1064:afr_log_entry_locks_failure]
>>> 0-g_sitework2-replicate-4: Unable to obtain sufficient blocking entry locks
>>> on at least one child while attempting RMDIR on
>>> {pgfid:23558c59-87e5-4e90-a610-8a47ec08b27c, name:csrc}.
>>>
>>> [2017-12-21 13:56:01.594648] I [MSGID: 108019]
>>> [afr-transaction.c:1903:afr_post_blocking_entrylk_cbk]
>>> 0-g_sitework2-replicate-4: Blocking entrylks failed.
>>>
>>> [2017-12-21 13:56:01.594790] W [MSGID: 112032]
>>> [nfs3.c:3713:nfs3svc_rmdir_cbk] 0-nfs: df521f4d:
>>> /csrc => -1 (Stale file
>>> handle) [Stale file handle]
>>>
>>> [2017-12-21 13:56:01.594816] W [MSGID: 112199]
>>> [nfs3-helpers.c:3414:nfs3_log_common_res] 0-nfs-nfsv3:
>>> /csrc => (XID: df521f4d,
>>> RMDIR: NFS: 70(Invalid file handle), POSIX: 116(Stale file handle))
>>>
>>> [2017-12-21 13:56:01.590522] W [MSGID: 108019]
>>> [afr-lk-common.c:1064:afr_log_entry_locks_failure]
>>> 0-g_sitework2-replicate-2: Unable to obtain sufficient blocking entry locks
>>> on at least one child while attempting RMDIR on
>>> {pgfid:23558c59-87e5-4e90-a610-8a47ec08b27c, name:csrc}.
>>>
>>> [2017-12-21 13:56:01.590569] W [MSGID: 108019]
>>> [afr-lk-common.c:1064:afr_log_entry_locks_failure]
>>> 0-g_sitework2-replicate-1: Unable to obtain sufficient blocking entry locks
>>> on at least one child while attempting RMDIR on
>>> {pgfid:23558c59-87e5-4e90-a610-8a47ec08b27c, name:csrc}.
>>>
>>>
>>> Regards,
>>>
>>> Jeevan.
>>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterfs, ganesh, and pcs rules

2018-01-03 Thread Renaud Fortier
Have you tried with :
VIP_tlxdmz-nfs1="10.X.X.181"
VIP_tlxdmz-nfs2="10.X.X.182"
Instead of :
VIP_server1="10.X.X.181"
VIP_server2="10.X.X.182"

Also, I don't have HA_VOL_SERVER in my settings, but I'm using Gluster
3.10.x. I think it's deprecated in 3.10 but not in 3.8.

Is your /etc/hosts file or your DNS correct for these servers?
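
If renaming the VIP_ entries turns out to be the fix, applying it would be
roughly the following (assuming your ganesha-ha.conf is in the stock
/etc/ganesha/ location; adjust if yours lives elsewhere):

gluster nfs-ganesha disable
# in /etc/ganesha/ganesha-ha.conf on all nodes, make the VIP entries match
# the node names listed in HA_CLUSTER_NODES:
#   VIP_tlxdmz-nfs1="10.X.X.181"
#   VIP_tlxdmz-nfs2="10.X.X.182"
gluster nfs-ganesha enable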

Renaud


From: Hetz Ben Hamo [mailto:h...@hetz.biz]
Sent: 24 December 2017 04:33
To: Renaud Fortier 
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] glusterfs, ganesh, and pcs rules

I checked, and I have it like this:

# Name of the HA cluster created.
# must be unique within the subnet
HA_NAME="ganesha-nfs"
#
# The gluster server from which to mount the shared data volume.
HA_VOL_SERVER="tlxdmz-nfs1"
#
# N.B. you may use short names or long names; you may not use IP addrs.
# Once you select one, stay with it as it will be mildly unpleasant to
# clean up if you switch later on. Ensure that all names - short and/or
# long - are in DNS or /etc/hosts on all machines in the cluster.
#
# The subset of nodes of the Gluster Trusted Pool that form the ganesha
# HA cluster. Hostname is specified.
HA_CLUSTER_NODES="tlxdmz-nfs1,tlxdmz-nfs2"
#HA_CLUSTER_NODES="server1.lab.redhat.com,server2.lab.redhat.com,..."
#
# Virtual IPs for each of the nodes specified above.
VIP_server1="10.X.X.181"
VIP_server2="10.X.X.182"

Thanks,
Hetz Ben Hamo
You are welcome to visit my consulting blog or my personal blog.

On Thu, Dec 21, 2017 at 3:47 PM, Renaud Fortier wrote:
Hi,
In your ganesha-ha.conf, do you have your virtual IP addresses set to
something like this:

VIP_tlxdmz-nfs1="192.168.22.33"
VIP_tlxdmz-nfs2="192.168.22.34"

Renaud

From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On behalf of Hetz Ben Hamo
Sent: 20 December 2017 04:35
To: gluster-users@gluster.org
Subject: [Gluster-users] glusterfs, ganesh, and pcs rules

Hi,

I've just set up Gluster with NFS-Ganesha again, GlusterFS version 3.8.

When I run the command "gluster nfs-ganesha enable", it returns success.
However, looking at the pcs status, I see this:

[root@tlxdmz-nfs1 ~]# pcs status
Cluster name: ganesha-nfs
Stack: corosync
Current DC: tlxdmz-nfs2 (version 1.1.16-12.el7_4.5-94ff4df) - partition with 
quorum
Last updated: Wed Dec 20 09:20:44 2017
Last change: Wed Dec 20 09:19:27 2017 by root via cibadmin on tlxdmz-nfs1

2 nodes configured
8 resources configured

Online: [ tlxdmz-nfs1 tlxdmz-nfs2 ]

Full list of resources:

 Clone Set: nfs_setup-clone [nfs_setup]
 Started: [ tlxdmz-nfs1 tlxdmz-nfs2 ]
 Clone Set: nfs-mon-clone [nfs-mon]
 Started: [ tlxdmz-nfs1 tlxdmz-nfs2 ]
 Clone Set: nfs-grace-clone [nfs-grace]
 Started: [ tlxdmz-nfs1 tlxdmz-nfs2 ]
 tlxdmz-nfs1-cluster_ip-1   (ocf::heartbeat:IPaddr):Stopped
 tlxdmz-nfs2-cluster_ip-1   (ocf::heartbeat:IPaddr):Stopped

Failed Actions:
* tlxdmz-nfs1-cluster_ip-1_monitor_0 on tlxdmz-nfs2 'not configured' (6): 
call=23, status=complete, exitreason='IP address (the ip parameter) is 
mandatory',
last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=26ms
* tlxdmz-nfs2-cluster_ip-1_monitor_0 on tlxdmz-nfs2 'not configured' (6): 
call=27, status=complete, exitreason='IP address (the ip parameter) is 
mandatory',
last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=26ms
* tlxdmz-nfs1-cluster_ip-1_monitor_0 on tlxdmz-nfs1 'not configured' (6): 
call=23, status=complete, exitreason='IP address (the ip parameter) is 
mandatory',
last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=24ms
* tlxdmz-nfs2-cluster_ip-1_monitor_0 on tlxdmz-nfs1 'not configured' (6): 
call=27, status=complete, exitreason='IP address (the ip parameter) is 
mandatory',
last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=61ms


Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

Any suggestions on how this can be fixed when enabling nfs-ganesha with the
above command, or anything else I can do to fix the failed actions?

Thanks

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] 2018 - Plans and Expectations on Gluster Community

2018-01-03 Thread Kaleb S. KEITHLEY

On 01/02/2018 11:03 PM, Vijay Bellur wrote:
...
The people who were writing storhaug never finished it. Keep using
3.10 until storhaug gets finished.

Since 3.10 will be EOL in approximately 2 months from now, what would be
our answer for NFS HA if storhaug is not finished by then?


  -   Use ctdb
  -   Restore nfs.ganesha CLI support
  -   Something else?

Have we already documented upgrade instructions for those users of the
nfs.ganesha CLI in 3.8? If not, it would be useful to have them listed
somewhere.




I have a pretty high degree of confidence that I can have storhaug 
usable by or before 4.0. The bits I have on my devel box are almost 
ready to post on github.


I'd like to abandon the github repo at 
https://github.com/linux-ha-storage/storhaug; and create a new repo 
under https://github.com/gluster/storhaug. I dare say there are other 
Linux storage solutions besides gluster+ganesha+samba that storhaug 
doesn't handle.


And upgrade instructions for what? Upgrading/switching from legacy 
glusterd to storhaug? No, not yet. Doesn't make sense since there's no 
(usable) storhaug yet.


--

Kaleb


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] which components needs ssh keys?

2018-01-03 Thread Aravinda
Only Geo-replication uses SSH, since it works between two clusters. All
other features are limited to a single cluster/volume, so communication
happens via glusterd (port tcp/24007) and the brick ports (tcp/49152-49251).
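
So apart from the Geo-replication SSH keys there is nothing else key-based to
set up; it is enough that those ports are reachable between the peers. With
firewalld that would look roughly like this on every node (the brick port
range below is the usual default; adjust it if your setup differs):

firewall-cmd --permanent --add-port=24007/tcp          # glusterd
firewall-cmd --permanent --add-port=49152-49251/tcp    # brick processes
firewall-cmd --reload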



On Wednesday 03 January 2018 03:29 PM, lejeczek wrote:

hi everyone

I think geo-repl needs ssh and keys in order to work, but does 
anything else? Self-heal perhaps?
Reason I ask is that I had some old keys Gluster put in when I had
geo-repl, which I removed, and self-heal has now gone rogue; I cannot get
statistics:


..
Gathering crawl statistics on volume WORK has been unsuccessful on 
bricks that are down. Please check if all brick processes are running.

..
vol heal WORK info says Status: all connected, no conflicts.

thanks, L.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users



--
regards
Aravinda VK

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] which components needs ssh keys?

2018-01-03 Thread lejeczek

hi everyone

I think geo-repl needs ssh and keys in order to work, but 
does anything else? Self-heal perhaps?
Reason I ask is that I had some old keys Gluster put in when 
I had geo-repl, which I removed, and self-heal has now gone 
rogue; I cannot get statistics:


..
Gathering crawl statistics on volume WORK has been 
unsuccessful on bricks that are down. Please check if all 
brick processes are running.

..
vol heal WORK info says Status: all connected, no conflicts.
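
To be precise, the commands I'm running are along these lines (volume 
name WORK as above):

gluster volume heal WORK statistics   # -> the "Gathering crawl statistics" error
gluster volume heal WORK info         # -> all connected, nothing pending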

thanks, L.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] stale file handle on gluster NFS client when trying to remove a directory

2018-01-03 Thread Nithya Balachandran
An ESTALE error usually means the gfid could not be found. Does repeating
the "rm -rf" delete the directory?

Regards,
Nithya

On 3 January 2018 at 12:16, Jeevan Patnaik  wrote:

> Hi all,
>
> I haven't found any root cause or workaround for this yet. Can anyone
> help me understand the issue?
>
> Regards,
> Jeevan.
>
> On Dec 21, 2017 8:20 PM, "Jeevan Patnaik"  wrote:
>
>> Hi,
>>
>>
>> After running rm -rf on a directory, the files under it got deleted, but
>> the directory itself was not deleted and showed a stale file handle error.
>> After 18 minutes, I was able to delete the directory. Could anyone help me
>> understand what could have happened, or when in general I would get such errors?
>>
>>
>> The following is NFS log:
>>
>>
>> [2017-12-21 13:56:01.592256] I [MSGID: 108019]
>> [afr-transaction.c:1903:afr_post_blocking_entrylk_cbk]
>> 0-g_sitework2-replicate-5: Blocking entrylks failed.
>>
>> [2017-12-21 13:56:01.594350] W [MSGID: 108019]
>> [afr-lk-common.c:1064:afr_log_entry_locks_failure]
>> 0-g_sitework2-replicate-4: Unable to obtain sufficient blocking entry locks
>> on at least one child while attempting RMDIR on
>> {pgfid:23558c59-87e5-4e90-a610-8a47ec08b27c, name:csrc}.
>>
>> [2017-12-21 13:56:01.594648] I [MSGID: 108019]
>> [afr-transaction.c:1903:afr_post_blocking_entrylk_cbk]
>> 0-g_sitework2-replicate-4: Blocking entrylks failed.
>>
>> [2017-12-21 13:56:01.594790] W [MSGID: 112032]
>> [nfs3.c:3713:nfs3svc_rmdir_cbk] 0-nfs: df521f4d:
>> /csrc => -1 (Stale file
>> handle) [Stale file handle]
>>
>> [2017-12-21 13:56:01.594816] W [MSGID: 112199]
>> [nfs3-helpers.c:3414:nfs3_log_common_res] 0-nfs-nfsv3:
>> /csrc => (XID: df521f4d,
>> RMDIR: NFS: 70(Invalid file handle), POSIX: 116(Stale file handle))
>>
>> [2017-12-21 13:56:01.590522] W [MSGID: 108019]
>> [afr-lk-common.c:1064:afr_log_entry_locks_failure]
>> 0-g_sitework2-replicate-2: Unable to obtain sufficient blocking entry locks
>> on at least one child while attempting RMDIR on
>> {pgfid:23558c59-87e5-4e90-a610-8a47ec08b27c, name:csrc}.
>>
>> [2017-12-21 13:56:01.590569] W [MSGID: 108019]
>> [afr-lk-common.c:1064:afr_log_entry_locks_failure]
>> 0-g_sitework2-replicate-1: Unable to obtain sufficient blocking entry locks
>> on at least one child while attempting RMDIR on
>> {pgfid:23558c59-87e5-4e90-a610-8a47ec08b27c, name:csrc}.
>>
>>
>> Regards,
>>
>> Jeevan.
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users