Re: [Gluster-devel] Quick question about the latest glusterfs and client side selinux support

2019-06-20 Thread Jiffin Thottan
Hi Janak,

Currently, labelled NFS is supported in nfs-ganesha with the glusterfs FSAL (from 2.8 onwards) and the cephfs FSAL (already present in 2.7).
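In case it helps, a quick client-side check would look roughly like this (server, export path and context are just placeholders; labelled NFS needs an NFSv4.2 mount and selinux enabled on both ends):

    mount -t nfs -o vers=4.2 ganesha-server:/testvol /mnt/nfs
    touch /mnt/nfs/testfile
    chcon -t httpd_sys_content_t /mnt/nfs/testfile
    ls -Z /mnt/nfs/testfile   # should show the context that was just set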

--
Jiffin

- Original Message -
From: "Janak Desai" 
To: "Jiffin Tony Thottan" 
Sent: Thursday, June 20, 2019 9:29:09 PM
Subject: Re: Quick question about the latest glusterfs and client side selinux 
support

Hi Jiffin,

 

I came across your presentation “NFS-Ganesha Weather Report” that you gave at 
FOSDEM’19 in early February this year. In it you mentioned that ongoing 
development in v2.8 included “labelled NFS” support. I see that v2.8 is now 
out. Do you know if labelled NFS support made it in? If it did, is it supported 
only in the CEPHFS FSAL, or do other FSALs include it as well? I took a cursory 
look at the release documents and didn’t see labelled NFS mentioned, so I 
thought I would bug you directly.

 

Thanks.

 

-Janak

 

 

From: Jiffin Tony Thottan 
Date: Tuesday, August 28, 2018 at 12:50 AM
To: Janak Desai , "nde...@redhat.com" 
, "mselv...@redhat.com" 
Cc: "p...@paul-moore.com" 
Subject: Re: Quick question about the latest glusterfs and client side selinux 
support

 

Hi Janak,

Thanks for the interest. A basic selinux xlator is present in the gluster server 
stack; it stores the selinux context on the backend as an xattr. When we developed 
that xlator, there was no client available to test the functionality, and I don't 
know whether the required change in fuse got merged or not. As you mentioned, the 
first thing we need to figure out here is whether the issue is on the server side. 
Can you collect a packet trace with tcpdump on the client while setting/getting 
the selinux context and send it along with the mail?
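For example, something along these lines should be enough (the mount point, server name and context here are just placeholders):

    # on the client, capture traffic to the gluster server while reproducing
    tcpdump -i any -s 0 -w /tmp/selinux-xattr.pcap host <gluster-server>

    # in another shell, trigger the setting/getting of the context
    chcon -t httpd_sys_content_t /mnt/gluster/testfile
    getfattr -n security.selinux /mnt/gluster/testfile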

Regards,

Jiffin

 

On Tuesday 28 August 2018 04:14 AM, Desai, Janak wrote:

Hi Niels, Manikandan, Jiffin,

 

I work for Georgia Tech Research Institute’s CIPHER Lab and am investigating 
the suitability of glusterfs for a couple of large upcoming projects. My ‘google 
research’ is yielding confusing and inconclusive results, so I thought I would 
try to reach out to some of the core developers to get some clarity.

 

We use SELinux extensively in our software solution. I am trying to find out 
whether, with the latest version 4.1 of glusterfs running on the latest version 
of RHEL, I should be able to associate and enforce SELinux contexts from glusterfs 
clients. I see in the 3.11 release notes that the SELinux feature was implemented, 
but then I also see references to kernel work that is not done yet. I also could 
not find any documentation or examples on how to add/integrate the SELinux 
translator to set up and enforce SELinux labels from the client side. In my simple 
test setup, which I mounted using the “selinux” option (which gluster does seem to 
recognize), I am getting an “operation not supported” error. I guess either I am 
not pulling in the SELinux translator or I am running up against other missing 
functionality in the kernel. I would really appreciate it if you could clear this 
up for me. If I am not configuring my mount correctly, I would appreciate a pointer 
to a document or an example.

Our other option is the Lustre filesystem, since it does have working client-side 
association and enforcement of SELinux contexts. However, Lustre appears to be a 
lot more difficult to set up and maintain, and I would rather use glusterfs. We 
need a distributed (or parallel) filesystem that can work with Hadoop. If glusterfs 
doesn’t pan out, I will look at labelled NFS 4.2, which is now available in RHEL 7. 
However, my google research shows much more Hadoop affinity for glusterfs than for 
NFS v4.
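For reference, this is roughly what my test looks like on the client (server and volume names are placeholders):

    # mount the volume with the selinux mount option
    mount -t glusterfs -o selinux server1:/testvol /mnt/gluster

    # then try to set and read back a context on a test file
    touch /mnt/gluster/testfile
    chcon -t httpd_sys_content_t /mnt/gluster/testfile   # this is where I get "operation not supported"
    ls -Z /mnt/gluster/testfile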

 

I am also copying Paul Moore, with whom I collaborated a few years ago as part 
of the team that took Linux through its Common Criteria evaluation, and whom I 
haven’t bugged lately ☺, to see if he can shed some light on any missing kernel 
dependencies. I am currently testing with RHEL 7.5, but would be willing to try 
an upstream kernel if I have to in order to get this proof of concept going. I 
know the underlying problem in the kernel is supporting extended attrs on FUSE 
file systems, but I was wondering (and hoping) that at least setup/enforcement 
of SELinux contexts from the client side is possible for glusterfs.

 

Thanks.

 

-Janak




___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-infra] is_nfs_export_available from nfs.rc failing too often?

2019-04-24 Thread Jiffin Thottan
From the log below, it looks like kernel NFS was started (it may be enabled on the machine).

Did you start rpcbind manually on that machine? If yes, can you please check the 
kernel NFS status before and after starting that service?
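For example, on a systemd-based builder something like this would show it (unit names assumed):

    systemctl status rpcbind
    systemctl status nfs-server     # kernel NFS; normally expected to be inactive on the builders
    systemctl is-enabled nfs-server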

--

Jiffin

- Original Message -
From: "Michael Scherer" 
To: "Atin Mukherjee" 
Cc: "Deepshikha Khandelwal" , "Gluster Devel" 
, "Jiffin Thottan" , 
"gluster-infra" 
Sent: Tuesday, April 23, 2019 7:44:49 PM
Subject: Re: [Gluster-infra] [Gluster-devel] is_nfs_export_available from 
nfs.rc failing too often?

On Monday, 22 April 2019 at 22:57 +0530, Atin Mukherjee wrote:
> Is this back again? The recent patches are failing regression :-\ .

So, on builder206, it took me a while to find that the issue was that
nfs (the service) was running.

./tests/basic/afr/tarissue.t failed, because the nfs initialisation
failed with a rather cryptic message:

[2019-04-23 13:17:05.371733] I [socket.c:991:__socket_server_bind] 0-
socket.nfs-server: process started listening on port (38465)
[2019-04-23 13:17:05.385819] E [socket.c:972:__socket_server_bind] 0-
socket.nfs-server: binding to  failed: Address already in use
[2019-04-23 13:17:05.385843] E [socket.c:974:__socket_server_bind] 0-
socket.nfs-server: Port is already in use
[2019-04-23 13:17:05.385852] E [socket.c:3788:socket_listen] 0-
socket.nfs-server: __socket_server_bind failed;closing socket 14

I found where this came from, but a few things surprised me:

- the order of the prints is different from the order in the code
- the message about "started listening" doesn't take into account the fact
that the bind failed on:


https://github.com/gluster/glusterfs/blob/master/rpc/rpc-transport/socket/src/socket.c#L967

The message about port 38465 also threw me off the track. The real
issue is that the nfs service was already running, and I couldn't find
anything listening on port 38465.

Once I did "service nfs stop", it no longer failed.
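Roughly, the check and the workaround look like this (port 2049 is the standard kernel NFS port, 38465 is the one from the log above; the unit name is assumed for an EL7 builder):

    ss -tlnp | grep -E '2049|38465'   # see what is actually listening
    service nfs stop                  # free the port so Gluster NFS can bind
    systemctl disable nfs             # keep it from coming back at the next reboot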

So far, I do not know why nfs.service was activated.

But at least builder 206 should be fixed, and we know a bit more about what
could be causing some of the failures.

 

> On Wed, 3 Apr 2019 at 19:26, Michael Scherer 
> wrote:
> 
> > On Wednesday, 3 April 2019 at 16:30 +0530, Atin Mukherjee wrote:
> > > On Wed, Apr 3, 2019 at 11:56 AM Jiffin Thottan <
> > > jthot...@redhat.com>
> > > wrote:
> > > 
> > > > Hi,
> > > > 
> > > > is_nfs_export_available is just a wrapper around the "showmount"
> > > > command, AFAIR. I saw the following messages in the console output:
> > > > 
> > > > mount.nfs: rpc.statd is not running but is required for remote
> > > > locking.
> > > > 05:06:55 mount.nfs: Either use '-o nolock' to keep locks local,
> > > > or start statd.
> > > > 05:06:55 mount.nfs: an incorrect mount option was specified
> > > > 
> > > > To me it looks like rpcbind may not be running on the machine.
> > > > Usually rpcbind starts automatically on machines; I don't know
> > > > whether this can happen or not.
> > > > 
> > > 
> > > That's precisely what the question is. Why are we suddenly seeing
> > > this happening so frequently? Today I saw at least 4 to 5 such
> > > failures already.
> > > 
> > > Deepshika - Can you please help in inspecting this?
> > 
> > So we think (we are not sure) that the issue is a bit complex.
> > 
> > What we were investigating was a nightly run failure on aws. When the
> > build crashes, the builder is restarted, since that's the easiest way
> > to clean everything (even with a perfect test suite that cleaned up
> > after itself, we could always end up in a corrupt state on the system,
> > WRT mounts, fs, etc).
> > 
> > In turn, this seems to cause trouble on aws, since cloud-init or
> > something renames the eth0 interface to ens5 without cleaning up the
> > network configuration.
> > 
> > So the network init script fails (because the image says "start eth0"
> > and that's not present), but it fails in a weird way. The network is
> > initialised and working (we can connect), but the dhclient process is
> > not in the right cgroup, and network.service is in a failed state.
> > Restarting the network didn't work. In turn, this means that rpc-statd
> > refuses to start (due to systemd dependencies), which seems to impact
> > various NFS tests.
> > 
> > We have also seen that on some builders, rpcbind picks up some IPv6
> > autoconfiguration, but we can't reproduce that, and there is no IPv6
> > set up anywhere. I suspect

Re: [Gluster-devel] is_nfs_export_available from nfs.rc failing too often?

2019-04-03 Thread Jiffin Thottan
Hi,

is_nfs_export_available is just a wrapper around the "showmount" command, AFAIR.
I saw the following messages in the console output:

 mount.nfs: rpc.statd is not running but is required for remote locking.
05:06:55 mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
05:06:55 mount.nfs: an incorrect mount option was specified

To me it looks like rpcbind may not be running on the machine.
Usually rpcbind starts automatically on machines; I don't know whether this can 
happen or not.
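A quick way to confirm that on the machine (service names assumed for an EL7 builder):

    systemctl status rpcbind rpc-statd
    showmount -e localhost            # roughly what is_nfs_export_available checks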

Regards,
Jiffin


- Original Message -
From: "Atin Mukherjee" 
To: "gluster-infra" , "Gluster Devel" 

Sent: Wednesday, April 3, 2019 10:46:51 AM
Subject: [Gluster-devel] is_nfs_export_available from nfs.rc failing too
often?

I'm observing the above test function failing too often, because of which the 
arbiter-mount.t test fails in many regression jobs. This frequency of failures 
wasn't seen earlier. Does anyone know what has changed recently to cause these 
failures in regression? I also hear that when such a failure happens a reboot is 
required; is that true, and if so, why?

One of the references: 
https://build.gluster.org/job/centos7-regression/5340/consoleFull 


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Release 4.1: Schedule, scope and review focus

2018-04-16 Thread Jiffin Thottan


- Original Message -
From: "Shyam Ranganathan" 
To: "Gluster Devel" , "GlusterFS Maintainers" 

Sent: Wednesday, April 11, 2018 6:37:58 PM
Subject: Re: [Gluster-devel] Release 4.1: Schedule, scope and review focus

On 03/27/2018 02:59 PM, Shyam Ranganathan wrote:
> Hi,
> 
> As we have completed potential scope for 4.1 release (reflected here [1]
> and also here [2]), it's time to talk about the schedule.
> 
> - Branching date (and hence feature exception date): Apr 16th

We are about 5 days away from branching, so now would be a good time to
think about which features are slipping and call them out sooner rather than later!


I need two more weeks; I am planning to complete the feature by the end of April.
Can you please consider it as an exception?
--
Jiffin

> - Week of Apr 16th release notes updated for all features in the release
> - RC0 tagging: Apr 23rd
> - Week of Apr 23rd, upgrade and other testing
> - RCNext: May 7th (if critical failures, or exception features arrive late)
> - RCNext: May 21st
> - Week of May 21st, final upgrade and testing
> - GA readiness call out: May, 28th
> - GA tagging: May, 30th
> - +2-4 days release announcement
> 
> and, review focus. As in older releases, I am starring reviews that are
> submitted against features, this should help if you are looking to help
> accelerate feature commits for the release (IOW, this list is the watch
> list for reviews). This can be found handy here [3].
> 
> So, branching is in about 4 weeks!
> 
> Thanks,
> Shyam
> 
> [1] Issues marked against release 4.1:
> https://github.com/gluster/glusterfs/milestone/5
> 
> [2] github project lane for 4.1:
> https://github.com/gluster/glusterfs/projects/1#column-1075416
> 
> [3] Review focus dashboard:
> https://review.gluster.org/#/q/starredby:srangana%2540redhat.com
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Announcing Glusterfs release 3.12.3 (Long Term Maintenance)

2017-11-16 Thread Jiffin Thottan
The Gluster community is pleased to announce the release of Gluster 3.12.3 
(packages available at [1,2,3]).

Release notes for the release can be found at [4]. 

We still carry the following major issue, which is reported in the release notes 
as follows:

1.) Expanding a gluster volume that is sharded may cause file corruption.

Sharded volumes are typically used for VM images; if such volumes are 
expanded or possibly contracted (i.e. add/remove bricks and rebalance), there are 
reports of VM images getting corrupted.

The last known cause of corruption (Bug #1465123) has a fix in this 
release. As further testing is still in progress, this is retained as a 
major issue.

The status of this bug can be tracked here: #1465123 
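If you are unsure whether a volume uses sharding, it can be checked with something like the following (volume name is a placeholder):

    gluster volume info <volname> | grep features.shard   # should show "features.shard: on" when sharding is enabled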

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.3/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs

[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.3/
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Spurious regression: Checking on 3 test cases

2015-05-26 Thread Jiffin Thottan


- Original Message -
From: Vijay Bellur vbel...@redhat.com
To: Krishnan Parthasarathi kpart...@redhat.com, Shyam 
srang...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Friday, 22 May, 2015 9:49:15 AM
Subject: Re: [Gluster-devel] Spurious regression: Checking on 3 test cases

On 05/22/2015 07:13 AM, Krishnan Parthasarathi wrote:

 Are the following tests in any spurious regression failure lists? or,
 noticed by others?

 1) ./tests/basic/ec/ec-5-1.t
 Run:
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/9363/consoleFull

 2) ./tests/basic/mount-nfs-auth.t
 Run:
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/9406/consoleFull


Right now there is no easy fix for the issue. It may require reconstructing the 
entire netgroup/export structures used for this feature.

 NOTE: This is a regression run for the same patch as in (1) above, and
 ec-5-1.t passed in this instance.

 3) ./tests/performance/open-behind.t
 Run:
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/9407/consoleFull


I remember seeing this test fail a few days back, but I seem to have missed 
including it in the etherpad.

 None of the above tests are being tracked at 
 https://public.pad.fsfe.org/p/gluster-spurious-failures.
 Should we add them to the list?

Yes, and we should add them to is_bad_test() in run-tests.sh too.
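For reference, a minimal sketch of the kind of change meant here in run-tests.sh (names and layout are from memory, not the exact current code):

    function is_bad_test ()
    {
        local name=$1
        for bt in ./tests/basic/ec/ec-5-1.t \
                  ./tests/basic/mount-nfs-auth.t \
                  ./tests/performance/open-behind.t; do
            if [ "$name" == "$bt" ]; then
                return 0   # treat as a known-bad (spurious) test
            fi
        done
        return 1
    }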

Thanks,
Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel