Re: [Gluster-devel] Review request for http://review.gluster.org/#/c/10262

2015-08-11 Thread Atin Mukherjee
Gentle reminder!! Since 3.7.3 is already out, I would like to take this
in for 3.7.4.

On 07/24/2015 09:28 AM, Atin Mukherjee wrote:
> Folks,
> 
> Currently in our vme table we have a few option names which are redundant
> across different translators. For example, cache-size is the same option
> across the io-cache and quick-read xlators. If a user wants to set two
> different values, we have no mechanism for it. What I have done here is
> to use unique names for these redundant options.
> 
> Reviews are highly appreciated, and I think this would be a good
> candidate for 3.7.3.
> 
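To illustrate the collision, here is a tiny model of a flat option table versus per-translator unique names (a simplified sketch only, not the actual glusterd vme table; the dotted option names are hypothetical):

```python
# Simplified model of the redundant-option problem (illustrative only,
# not the real glusterd vme table; the key names are made up).

# With a flat, shared option name, one key serves two translators,
# so setting it for quick-read silently overwrites io-cache's value:
flat_table = {"cache-size": "32MB"}
flat_table["cache-size"] = "128MB"

# With unique, per-translator names the two values can coexist:
unique_table = {
    "io-cache.cache-size": "32MB",
    "quick-read.cache-size": "128MB",
}

def get_option(table, xlator, option):
    """Look up an option by a per-translator unique name, falling
    back to the shared flat name if no unique entry exists."""
    return table.get("%s.%s" % (xlator, option), table.get(option))

print(get_option(flat_table, "io-cache", "cache-size"))    # last write wins
print(get_option(unique_table, "io-cache", "cache-size"))  # per-xlator value
print(get_option(unique_table, "quick-read", "cache-size"))
```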

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster Sharding and Geo-replication

2015-08-11 Thread Aravinda

Hi,

We are considering different approaches to adding support in 
Geo-replication for sharded Gluster Volumes [1].


*Approach 1: Geo-rep: Sync full file*
   - In the Changelog, record only the main file's details, in the same 
brick where it is created
   - Record a DATA entry in the Changelog whenever a sharded file is 
added to or changed
   - Geo-rep rsync checksums the full file through the mount and syncs 
it as a new file
   - Slave-side sharding is managed by the Slave Volume

*Approach 2: Geo-rep: Sync sharded files separately*
   - Geo-rep rsync checksums only the individual shard files
   - Geo-rep syncs each shard file independently as a new file
   - [UNKNOWN] Sync the internal xattrs (file size and block count) of 
the main sharded file to the Slave Volume to maintain the same state as 
on the Master
   - The Sharding translator has to allow file creation under the .shards 
directory for gsyncd, i.e. with the .shards directory as the parent GFID
   - If shard files are modified during a Geo-rep run, the Slave may end 
up with stale data
   - Files on the Slave Volume may not be readable until all shard files 
are synced to the Slave (each brick on the Master syncs files to the 
Slave independently)


The first approach looks cleaner, but we have to analyze rsync's 
checksum performance on big files (sharded on the backend, accessed as 
one big file by rsync).
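The trade-off between the two approaches can be sketched roughly as follows (a toy in-memory model; real sharding stores GFID-named shard files under .shards, and the real shard-block-size is megabytes, not bytes). Approach 1 re-checksums the whole file whenever any shard changes; approach 2 only has to checksum and re-sync the shards that actually differ:

```python
import hashlib

SHARD_SIZE = 4  # bytes; tiny for illustration only


def shards_of(data):
    """Split file content into fixed-size shards, as the shard xlator
    would store them on the backend (simplified: in-memory chunks)."""
    return [data[i:i + SHARD_SIZE] for i in range(0, len(data), SHARD_SIZE)]


def full_checksum(data):
    # Approach 1: checksum the file as one stream through the mount.
    return hashlib.md5(data).hexdigest()


def changed_shards(old, new):
    # Approach 2: compare per-shard checksums; only mismatching shards
    # (plus any new ones) need to be re-synced to the Slave.
    old_sums = [hashlib.md5(s).hexdigest() for s in shards_of(old)]
    new_sums = [hashlib.md5(s).hexdigest() for s in shards_of(new)]
    return [i for i, s in enumerate(new_sums)
            if i >= len(old_sums) or s != old_sums[i]]


old = b"aaaabbbbcccc"
new = b"aaaaBBBBcccc"  # only the middle shard was modified

# Approach 1 must rescan all 12 bytes; approach 2 touches one shard.
assert full_checksum(old) != full_checksum(new)
assert changed_shards(old, new) == [1]
```

With large files the difference is the whole point: the full-file scan cost grows with file size, while the per-shard cost grows only with the amount of changed data.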


Let us know your thoughts. Thanks

Ref:
[1] 
http://www.gluster.org/community/documentation/index.php/Features/sharding-xlator


--
regards
Aravinda



[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting today at 12:00 UTC (~in 50 minutes)

2015-08-11 Thread Mohammed Rafi K C
Hi all,

This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.


Regards
Rafi KC



[Gluster-devel] Please review SSL fix

2015-08-11 Thread Emmanuel Dreyfus
Hi

Could someone please review/merge these fixes?
http://review.gluster.org/11840
http://review.gluster.org/11842
-- 
Emmanuel Dreyfus
m...@netbsd.org


Re: [Gluster-devel] What's the status of selinux integration?

2015-08-11 Thread Bob Arendt

On 08/08/2015 10:04 AM, Niels de Vos wrote:

On Fri, Aug 07, 2015 at 05:30:21PM -0700, Bob Arendt wrote:

>I'm currently using gluster 3.6.2, and I've been exploring the gluster docs
>and source trees.  The man pages seem to indicate that there *should*
>be selinux support, perhaps augmented by adding a --selinux argument
>to glusterd, glusterfsd, and adding a selinux option to the glusterfs mount.

The feature to support SELinux over FUSE mounts boils down to the mount
option "selinux":

   # mount -t glusterfs -o selinux storage.example.com:/volume /mnt

The /sbin/mount.glusterfs helper script parses the "selinux" option and
passes the --selinux argument to the /usr/sbin/glusterfs binary.

The option only affects the client side. Without it, the special SELinux
extended attributes are filtered out and not sent to the bricks (possibly
even with an error returned). As long as the bricks support SELinux,
everything is expected to work.
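The option-to-flag translation described above can be sketched like this (a simplified Python model of the behaviour, not the actual /sbin/mount.glusterfs shell script; only the "selinux" option is handled here):

```python
def translate_mount_options(opts):
    """Map a comma-separated -o mount option string to glusterfs
    command-line arguments. Simplified model: only 'selinux' is
    translated; all other options are passed through untouched."""
    args = []
    remaining = []
    for opt in opts.split(","):
        if opt == "selinux":
            # enables sending security.selinux xattrs to the bricks
            args.append("--selinux")
        else:
            remaining.append(opt)
    return args, remaining

args, rest = translate_mount_options("selinux,log-level=WARNING")
print(args)  # ['--selinux']
```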

In case something is not working correctly, please provide the exact
steps to reproduce with a clear example in a bug report.

 https://bugzilla.redhat.com/enter_bug.cgi?Product=GlusterFS

Thanks,
Niels



Thanks Niels,

I've documented my steps in https://bugzilla.redhat.com/show_bug.cgi?id=1252627
The selinux mount option is set, and I can see that this does result
in the glusterfs process receiving the --selinux switch, but it has no
effect. Is there something server-side that has to be enabled?

Thank you,
-Bob Arendt