Re: [Gluster-users] SELinux is preventing /usr/sbin/glusterfsd from write access on the sock_file

2015-02-19 Thread Jeremy Young
I've had issues with the glusterd and glusterfsd sockets getting labeled
var_run_t instead of glusterd_var_run_t.

To fix your problem:

   1. Update your hosts to the latest SELinux policy
   2. Set SELinux to enforcing
   3. Stop any running glusterd or glusterfsd processes (e.g. systemctl
   stop glusterd; pkill -f gluster).
   4. Remove any old socket files from /var/run (rm -f /var/run/*.socket).
   5. Start gluster (systemctl start glusterd).
   6. Check that the sockets were created with a context that gluster can
   access (ls -Z /var/run/*.socket); they should have type glusterd_var_run_t.
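
The steps above (3 through 6) can be sketched as a short root shell script. This is a hedged sketch, not a supported tool: it assumes systemd, and note that /var/run/*.socket may match non-gluster sockets on some hosts, so review before running. It is written as a function so it can be sourced and invoked deliberately.

```shell
#!/bin/sh
# Sketch of recovery steps 3-6 above (assumes systemd and root privileges).
gluster_selinux_reset() {
    systemctl stop glusterd          # step 3: stop the management daemon
    pkill -f gluster || true         # step 3: kill any leftover gluster processes
    rm -f /var/run/*.socket          # step 4: remove stale, mislabeled sockets
    systemctl start glusterd         # step 5: recreate sockets with fresh labels
    ls -Z /var/run/*.socket          # step 6: verify glusterd_var_run_t context
}
```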

Gluster is only allowed to write to the following socket types:
sesearch -A -C -s glusterd_t -c sock_file -p write
Found 18 semantic av rules:
   allow domain setrans_var_run_t : sock_file { write getattr append open } ;
   allow glusterd_t dirsrv_var_run_t : sock_file { write getattr append open } ;
   allow glusterd_t nscd_var_run_t : sock_file { write getattr append open } ;
   allow glusterd_t nslcd_var_run_t : sock_file { write getattr append open } ;
   allow glusterd_t avahi_var_run_t : sock_file { write getattr append open } ;
   allow glusterd_t slapd_var_run_t : sock_file { write getattr append open } ;
   allow glusterd_t sssd_var_lib_t : sock_file { write getattr append open } ;
   allow glusterd_t glusterd_var_lib_t : sock_file { ioctl read write create getattr setattr lock append unlink link rename open } ;
   allow glusterd_t glusterd_var_run_t : sock_file { ioctl read write create getattr setattr lock append unlink link rename open } ;
   allow glusterd_t winbind_var_run_t : sock_file { write getattr append open } ;
   allow glusterd_t devlog_t : sock_file { write getattr append open } ;
   allow glusterd_t glusterd_tmp_t : sock_file { ioctl read write create getattr setattr lock append unlink link rename open } ;
   allow glusterd_t lsassd_var_socket_t : sock_file { write getattr append open } ;
   allow daemon abrt_var_run_t : sock_file { write getattr append open } ;
DT allow daemon cluster_pid : sock_file { write getattr append open } ; [ daemons_enable_cluster_mode ]
EF allow glusterd_t nscd_var_run_t : sock_file { write getattr append open } ; [ nscd_use_shm ]
DT allow glusterd_t nscd_var_run_t : sock_file { ioctl read write getattr lock append open } ; [ nscd_use_shm ]
ET allow glusterd_t pcscd_var_run_t : sock_file { write getattr append open } ; [ allow_kerberos ]
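
To turn output like the above into a quick checklist of the labels glusterd can reach, you can filter on the type field. A small sketch over two pasted sample lines (the full sesearch output is from the policy version above and may differ on your system):

```shell
# Extract the target type from each allow rule granting glusterd_t access to
# sock_file objects (two sample lines pasted from the sesearch output above).
types=$(awk '$1 == "allow" && $2 == "glusterd_t" { print $3 }' <<'EOF'
allow glusterd_t glusterd_var_run_t : sock_file { ioctl read write create getattr setattr lock append unlink link rename open } ;
allow glusterd_t devlog_t : sock_file { write getattr append open } ;
EOF
)
echo "$types"    # glusterd_var_run_t and devlog_t, one per line
```

In practice you would pipe the live sesearch output into the same awk filter instead of a heredoc.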


Even when the sockets are labeled correctly, a user-initiated relabel can
break Gluster.

[root@hostname run]# pwd
/var/run
[root@hostname run]# ls -Z *.socket
srwx--. root root staff_u:object_r:glusterd_var_run_t:s0 30d920e9fce88ae66a86e85c1d9b.socket
srwx--. root root staff_u:object_r:glusterd_var_run_t:s0 8416f5dc522a14421afdf0f100a6947d.socket
srwx--. root root staff_u:object_r:glusterd_var_run_t:s0 85dc678b993d76ebc8ab2fb3f13a7c03.socket
srwx--. root root staff_u:object_r:glusterd_var_run_t:s0 glusterd.socket
[root@hostname run]# restorecon -v *.socket
restorecon reset /var/run/30d920e9fce88ae66a86e85c1d9b.socket context staff_u:object_r:glusterd_var_run_t:s0->staff_u:object_r:var_run_t:s0
restorecon reset /var/run/8416f5dc522a14421afdf0f100a6947d.socket context staff_u:object_r:glusterd_var_run_t:s0->staff_u:object_r:var_run_t:s0
restorecon reset /var/run/85dc678b993d76ebc8ab2fb3f13a7c03.socket context staff_u:object_r:glusterd_var_run_t:s0->staff_u:object_r:var_run_t:s0
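
Since restorecon falls back to the default file-context rules, a durable fix is to teach those rules about gluster's hashed socket names. A hedged sketch only: the regex is an assumption about how your sockets are named, and your policy may already ship an equivalent rule, so check semanage fcontext -l | grep gluster first. Written as a function for review before running as root.

```shell
#!/bin/sh
# Sketch: add a persistent file-context rule so a relabel keeps gluster's
# sockets usable (requires root and the semanage tool from policycoreutils).
persist_gluster_socket_label() {
    # Assumed pattern: gluster's hex-named sockets directly under /var/run.
    semanage fcontext -a -t glusterd_var_run_t '/var/run/[0-9a-f]+\.socket'
    # Re-apply labels; restorecon should now keep glusterd_var_run_t.
    restorecon -v /var/run/*.socket
}
```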


On Thu, Feb 19, 2015 at 8:43 AM, Nathanaël Blanchet blanc...@abes.fr
wrote:

 On freshly installed el7 hosts, selinux prevents gluster from running.
 Setting selinux to permissive or building the relative .pp module resolves
 the issue.
 Does otopi configure selinux for gluster when installing?
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users




-- 
Jeremy Young jrm16...@gmail.com, M.S., RHCSA

Re: [Gluster-users] Firewall ports with v 3.5.2 grumble time

2014-10-30 Thread Jeremy Young
Hi Paul,

I will agree from experience that finding accurate, up-to-date
documentation on how to do some basic configuration of a Gluster volume can
be difficult.  However, this blog post mentions the updated firewall ports.

http://www.jamescoyle.net/how-to/457-glusterfs-firewall-rules

Get rid of 24009-24012 in your firewall configuration and replace them with
49152-4915X (one port per brick, starting at 49152). If you don't actually
need NFS, you can close the 3486X ports that you've opened as well.
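
The brick port range can be computed rather than guessed. A sketch that only prints candidate iptables rules for a given brick count, nothing is applied, and the chain and rule style are assumptions to adapt to your firewall tooling:

```shell
#!/bin/sh
# Print firewall rules for gluster's management ports plus one brick port per
# brick, starting at base port 49152 (output is for review, not applied).
bricks=4
last=$((49152 + bricks - 1))
echo "iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT"
echo "iptables -A INPUT -p tcp --dport 49152:${last} -j ACCEPT"
```

With bricks=4 the second line covers 49152:49155; bump the count as you add bricks.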


From: gluster-users-boun...@gluster.org on behalf of Osborne, Paul
paul.osbo...@canterbury.ac.uk
Sent: Thursday, October 30, 2014 8:58 AM
To: gluster-users@gluster.org
Subject: [Gluster-users] Firewall ports with v 3.5.2 grumble time

Hi,

I have a requirement to run my gluster hosts within a firewalled section of
network, with the consumer hosts in a different segment due to IP address
preservation. Part of our security policy requires that we run local
firewalls on every host, so I have to get the network access locked down
appropriately.

I am running 3.5.2 using the packages provided in the Gluster package
repository as my Linux distribution only includes packages for 3.2 which
seems somewhat ancient.

Following the documentation here:
http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting

I opened up the relevant ports:

34865 – 34867  for gluster
111 for the portmapper
24009 – 24012 as I am using 2 bricks

This though contradicts:

http://gluster.org/community/documentation/index.php/Gluster_3.2:_Installing_GlusterFS_on_Red_Hat_Package_Manager_(RPM)_Distributions

Which says:

Ensure that TCP ports 111, 24007,24008, 24009-(24009 + number of bricks
across all volumes) are open on all Gluster servers. If you will be using
NFS, open additional ports 38465 to 38467

What has not been helpful is that there was no mention of port 2049 for NFS
over TCP - though that is probably my own mistake, as I should have known.

To really confuse matters I noticed that the bricks were not syncing
anyway, and a look at the logs reveals:

/var/log/glusterfs/glfsheal-www.log:[2014-10-30 07:39:48.428286] I
[client-handshake.c:1462:client_setvolume_cbk] 0-www-client-1: Connected to
111.222.333.444:49154, attached to remote volume '/srv/hod/lampe-www'.

along with other entries showing that I also need ports 49154 and 49155 open.

Even gluster volume status reveals some of the ports:

gluster volume status
Status of volume: www
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 194.82.210.140:/srv/hod/lampe-www         49154   Y       3035
Brick 194.82.210.130:/srv/hod/lampe-www         49155   Y       16160
NFS Server on localhost                         2049    Y       16062
Self-heal Daemon on localhost                   N/A     Y       16072
NFS Server on gfse-isr-01                       2049    Y       3040
Self-heal Daemon on gfse-isr-01                 N/A     Y       3045

Task Status of Volume www
------------------------------------------------------------------------------
There are no active volume tasks
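
Rather than reading the ports off by eye, the status output can be parsed. A sketch over a pasted sample; the column layout matches the 3.5.x output above but may shift between releases, so treat the field positions as an assumption:

```shell
#!/bin/sh
# Pull the listening port out of each Brick/NFS line of `gluster volume
# status` output (sample pasted below; N/A port entries are skipped).
ports=$(awk '/^(Brick|NFS Server)/ && $(NF-2) != "N/A" { print $(NF-2) }' <<'EOF'
Brick 194.82.210.140:/srv/hod/lampe-www 49154 Y 3035
Brick 194.82.210.130:/srv/hod/lampe-www 49155 Y 16160
NFS Server on localhost 2049 Y 16062
Self-heal Daemon on localhost N/A Y 16072
EOF
)
echo "$ports"    # 49154, 49155 and 2049, one per line
```

On a live system you would feed `gluster volume status` straight into the awk filter to get the list of ports to open.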


So my query here is: if the bricks are actually using 49154 and 49155 (which
they appear to be), why is this not mentioned in the documentation, and are
there any other ports that I should be aware of?

Thanks

Paul
--

Paul Osborne
Senior Systems Engineer
Infrastructure Services
IT Department
Canterbury Christ Church University
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

-- 
Jeremy Young jrm16...@gmail.com, M.S., RHCSA