Re: [Gluster-devel] Read-only option for a replicated (replication for fail-over) Gluster volume

2017-03-29 Thread Amar Tumballi
On Wed, Mar 22, 2017 at 2:31 AM, Mackay, Michael  wrote:

> At the risk of repeating myself, the POSIX file system underpinnings are
> not a concern – that part is understood and handled.
>
>
>
> I’m also not asking for help to solve this problem, again, to be clear.
> SE Linux is not an option.  To summarize the point of my post:
>
>
>
> I’ve gotten what I want to work.  I have a small list of code changes to
> make it work.  I wish to find out if the Gluster community is interested in
> the changes.
>
>
>
We are happy to take the code changes in. Please submit the changes.

Regards,
Amar



> Thanks
>
> Mike
>
>
>
> *From:* Dustin Black [mailto:dbl...@redhat.com]
> *Sent:* Tuesday, March 21, 2017 12:12 PM
> *To:* Mackay, Michael
> *Cc:* Saravanakumar Arumugam; Atin Mukherjee; gluster-devel@gluster.org
>
> *Subject:* (nwl) Re: [Gluster-devel] Read-only option for a replicated
> (replication for fail-over) Gluster volume
>
>
>
> I don't see how you could accomplish what you're describing purely through
> the gluster code. The bricks are mounted on the servers as standard local
> POSIX file systems, so there is always the chance that something could
> change the data outside of Gluster's control.
>
>
>
> This all seems overly restrictive to me, given that your storage system
> should be locked down from an administrative perspective as a best practice
> in the first place, limiting the risk of any brick-side corruption or in
> your case even writes/changes. But assuming that you have a compliance or
> other requirement that is forcing this configuration, why not simply mount
> the brick local file system as read only, and then also enable the existing
> Gluster read-only translator, providing two layers of protection against
> any writes? Of course this would also restrict any metadata actions on the
> Gluster side, which could be problematic for something like bitrot
> detection and could result in a lot of log noise, I'm guessing. And
> administratively someone could still get in and remount the bricks as r/w,
> so if you _really_ _really_ need it locked down you may also need selinux.
>
>
>
>
> Dustin Black, RHCA
>
> Senior Architect, Software-Defined Storage
>
> Red Hat, Inc.
>
>
>
>
> On Tue, Mar 21, 2017 at 10:52 AM, Mackay, Michael wrote:
>
> Thanks for the help and advice so far.  It’s difficult at times to
> describe what the use case is, so I’ll try here.
>
>
>
> We need to make sure that no one can write to the physical volume in any
> way.  We want to be able to be sure that it can’t be corrupted. We know
> from working with Gluster that we shouldn’t access the brick directly, and
> that’s part of the point.  We want to make it so it’s impossible to write
> to the volume or the brick under any circumstances.  At the same time, we
> like Gluster’s recovery capability, so if one of two copies of the data
> becomes unavailable (due to failure of the host server or maintenance) the
> other copy will still be up and available.
>
>
>
> Essentially, the filesystem under the brick is a physically read-only disk
> that is set up at integration time and delivered read-only.  We won’t want
> to change it after delivery, and (in this case for security) we want it to
> be immutable so we know we can rely on that data to be the same always, no
> matter what.
>
>
>
> All users will get data from the Gluster mount and use it, but from the
> beginning it would be read-only.
>
>
>
> A new delivery might have new data, or changed data, but that’s the only
> time it will change.
>
>
>
> I want to repeat as well that we’ve identified changes in the code
> baseline that allow this to work, if interested.
>
>
>
> I hope that provides the information you were looking for.
>
>
>
> Mike
>
>
>
> *From:* Saravanakumar Arumugam [mailto:sarum...@redhat.com]
> *Sent:* Tuesday, March 21, 2017 10:18 AM
> *To:* Mackay, Michael; Atin Mukherjee
>
>
> *Cc:* gluster-devel@gluster.org
> *Subject:* Re: [Gluster-devel] Read-only option for a replicated
> (replication for fail-over) Gluster volume
>
>
>
>
>
> On 03/21/2017 07:33 PM, Mackay, Michael wrote:
>
> “read-only xlator is loaded at gluster server (brick) stack. so once the
> volume is in place, you'd need to enable read-only option using volume set
> and then you should be able to mount the volume which would provide you the
> read-only access.”
>
>
>
> OK, so fair enough, but is the physical volume on which the brick resides
> allowed to be on a r/o filesystem?
>
>
>
> Again, it’s not just whether Gluster considers the volume to be read-only
> to clients, but whether the gluster brick and its underlying medium can be
> read-only.
>
> No, it is only Gluster that considers it a read-only volume.
>
> If you go and access the gluster brick directly, you will be able to write
> to it.
> In general, you should avoid accessing the bricks directly.
>
>
> Do you mean creating a gluster volume that is read-only right from
> the beginning?

[Gluster-devel] How can add one fop after another fop?

2017-03-29 Thread Tahereh Fattahi
Hi,
After any file is created, I want a setxattr to be performed automatically.
Where and how should I add this operation, and in which translator?
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Release 3.10.1: Scheduled for the 30th of March

2017-03-29 Thread Shyam

On 03/27/2017 12:59 PM, Shyam wrote:

Hi,

It's time to prepare the 3.10.1 release, which falls on the 30th of each
month, and hence would be Mar-30th-2017 this time around.

We have one blocker issue for the release, which is [1] "auth failure
after upgrade to GlusterFS 3.10", that we are tracking using the release
tracker bug [2]. @Atin, can we have this fixed in a day or 2, or does it
look like we may slip beyond that?


This looks almost complete; I assume that in the next 24h we should be 
able to have this backported and merged into 3.10.1.


This means we will, in all probability, tag 3.10.1 tomorrow, and packages 
for various distributions will follow.




This mail is to call out the following:

1) Are there any pending *blocker* bugs that need to be tracked for
3.10.1? If so, mark them against the provided tracker [2] as blockers for
the release, or at the very least post them as a response to this mail.


I have not heard of any other issue (other than the rebalance+shard 
case, for which the root cause is still in progress), so I will assume 
nothing else blocks the minor update.




2) Pending reviews in the 3.10 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [3] to check on the status of your patches to 3.10 and get
these going

3) I have checked what went into 3.8 after the 3.10 release and whether
those fixes are included in the 3.10 branch; the status on this is *green*,
as all fixes ported to 3.8 are also ported to 3.10


This is still green.



4) The first cut of the release notes is posted here [4]; if there are any
specific call-outs for 3.10 beyond bugs, please update the review, or
leave a comment in the review for me to pick up

Thanks,
Shyam

[1] Pending blocker bug for 3.10.1:
https://bugzilla.redhat.com/show_bug.cgi?id=1429117

[2] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.10.1

[3] 3.10 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-10-dashboard


[4] Release notes WIP: https://review.gluster.org/16957
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Proposal to deprecate replace-brick for "distribute only" volumes

2017-03-29 Thread Shyam

On 03/16/2017 08:59 AM, Joe Julian wrote:

In the last few releases, we have changed replace-brick command such
that it can be called only with "commit force" option. When invoked,
this is what happens to the volume:

a. distribute only volume: the given brick is replaced with an empty
brick with 100% probability of data loss.
b. distribute-replicate: the given brick is replaced with an empty
brick and self heal is triggered. If the admin is wise enough to monitor
self heal status before another replace-brick command, data is safe.
c. distribute-disperse: same as above in distribute-replicate

My proposal is to fully deprecate the replace-brick command for
"distribute only" volumes. It should print out an error, "The right way
to replace a brick for a distribute only volume is to add a brick, wait for
rebalance to complete, and remove the old brick", and return a "-1".
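
For illustration, the recommended flow would look roughly like this on a
"distribute only" volume (a rough sketch only; the volume name and brick
paths below are placeholders, not taken from this thread):

  # add the replacement brick to the volume
  gluster volume add-brick demo-vol server2:/bricks/newbrick

  # rebalance the layout and wait for it to finish
  gluster volume rebalance demo-vol start
  gluster volume rebalance demo-vol status

  # drain the old brick ("start" migrates its data off), then remove it
  gluster volume remove-brick demo-vol server1:/bricks/oldbrick start
  gluster volume remove-brick demo-vol server1:/bricks/oldbrick status
  gluster volume remove-brick demo-vol server1:/bricks/oldbrick commit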


I am responding late, assuming this is still not done or WIP. Correct me 
if I am wrong.


Yes, we need the above, as it really does not make any sense, the way 
the current replace-brick command is structured, to lose a pure 
distribute brick's data.







It makes sense.
I just don't see any use of add-brick before remove-brick except the
fact that it will help to keep the overall storage capacity of the
volume intact.
What is the guarantee that the files on the brick which we want to
replace would migrate to the added brick?

If the brick which we want to replace is healthy and we just want to
replace it, then perhaps we should provide a command to copy those
files to the new brick and then remove the old brick.


We used to have a command that did just that. It was replace-brick.


This involves some rebalance trickery IMO (if we leverage that to 
replace the brick in a distribute only volume), which we do not have. I 
would suggest that a github issue be added for such support, and that we 
take in the prevention of the current replace-brick on distribute only 
volumes as the first priority (for obvious reasons, as stated by talur).


Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Read-only option for a replicated (replication for fail-over) Gluster volume

2017-03-29 Thread Mackay, Michael
I’m not sure if this went through (email issues at the office), so re-sending.

From: Mackay, Michael
Sent: Tuesday, March 21, 2017 5:01 PM
To: gluster-devel@gluster.org
Subject: RE: (nwl) Re: [Gluster-devel] Read-only option for a replicated 
(replication for fail-over) Gluster volume

At the risk of repeating myself, the POSIX file system underpinnings are not a 
concern – that part is understood and handled.

I’m also not asking for help to solve this problem, again, to be clear.  SE 
Linux is not an option.  To summarize the point of my post:

I’ve gotten what I want to work.  I have a small list of code changes to make 
it work.  I wish to find out if the Gluster community is interested in the 
changes.

Thanks
Mike

From: Dustin Black [mailto:dbl...@redhat.com]
Sent: Tuesday, March 21, 2017 12:12 PM
To: Mackay, Michael
Cc: Saravanakumar Arumugam; Atin Mukherjee; 
gluster-devel@gluster.org
Subject: (nwl) Re: [Gluster-devel] Read-only option for a replicated 
(replication for fail-over) Gluster volume

I don't see how you could accomplish what you're describing purely through the 
gluster code. The bricks are mounted on the servers as standard local POSIX 
file systems, so there is always the chance that something could change the 
data outside of Gluster's control.

This all seems overly restrictive to me, given that your storage system should 
be locked down from an administrative perspective as a best practice in the 
first place, limiting the risk of any brick-side corruption or in your case 
even writes/changes. But assuming that you have a compliance or other 
requirement that is forcing this configuration, why not simply mount the brick 
local file system as read only, and then also enable the existing Gluster 
read-only translator, providing two layers of protection against any writes? Of 
course this would also restrict any metadata actions on the Gluster side, which 
could be problematic for something like bitrot detection and could result in a 
lot of log noise, I'm guessing. And administratively someone could still get in 
and remount the bricks as r/w, so if you _really_ _really_ need it locked down 
you may also need selinux.
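
For illustration, those two layers of protection would look roughly like
this (a rough sketch only; the volume name, brick path, and mount point are
placeholders, and "features.read-only" is the volume option backed by the
read-only translator):

  # brick side: keep the local filesystem under the brick read-only
  mount -o remount,ro /bricks/brick1

  # gluster side: enable the read-only translator on the volume
  gluster volume set demo-vol features.read-only on

  # write attempts through a client mount should then fail with EROFS
  mount -t glusterfs server1:/demo-vol /mnt/demo-vol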


Dustin Black, RHCA
Senior Architect, Software-Defined Storage
Red Hat, Inc.


On Tue, Mar 21, 2017 at 10:52 AM, Mackay, Michael wrote:
Thanks for the help and advice so far.  It’s difficult at times to describe 
what the use case is, so I’ll try here.

We need to make sure that no one can write to the physical volume in any way.  
We want to be able to be sure that it can’t be corrupted. We know from working 
with Gluster that we shouldn’t access the brick directly, and that’s part of 
the point.  We want to make it so it’s impossible to write to the volume or the 
brick under any circumstances.  At the same time, we like Gluster’s recovery 
capability, so if one of two copies of the data becomes unavailable (due to 
failure of the host server or maintenance) the other copy will still be up and 
available.

Essentially, the filesystem under the brick is a physically read-only disk that 
is set up at integration time and delivered read-only.  We won’t want to change 
it after delivery, and (in this case for security) we want it to be immutable 
so we know we can rely on that data to be the same always, no matter what.

All users will get data from the Gluster mount and use it, but from the 
beginning it would be read-only.

A new delivery might have new data, or changed data, but that’s the only time it 
will change.

I want to repeat as well that we’ve identified changes in the code baseline 
that allow this to work, if interested.

I hope that provides the information you were looking for.

Mike

From: Saravanakumar Arumugam 
[mailto:sarum...@redhat.com]
Sent: Tuesday, March 21, 2017 10:18 AM
To: Mackay, Michael; Atin Mukherjee

Cc: gluster-devel@gluster.org
Subject: Re: [Gluster-devel] Read-only option for a replicated (replication for 
fail-over) Gluster volume


On 03/21/2017 07:33 PM, Mackay, Michael wrote:
“read-only xlator is loaded at gluster server (brick) stack. so once the volume 
is in place, you'd need to enable read-only option using volume set and then 
you should be able to mount the volume which would provide you the read-only 
access.”

OK, so fair enough, but is the physical volume on which the brick resides 
allowed to be on a r/o filesystem?

Again, it’s not just whether Gluster considers the volume to be read-only to 
clients, but whether the gluster brick and its underlying medium can be 
read-only.
No, it is only Gluster that considers it a read-only volume.

If you go and access the gluster brick directly, you will be able to write
to it.
In general, you should avoid accessing the bricks directly.


Do you mean creating a gluster volume that is read-only right from
the beginning?
Can you tell