Re: [Gluster-users] Self healing does not see files to heal

2016-08-16 Thread Ravishankar N

On 08/17/2016 10:40 AM, Krutika Dhananjay wrote:

Good question.

Any attempt from a client to access /.shard or its contents from the 
mount point will be met with an EPERM (Operation not permitted). We do 
not expose .shard on the mount point.




Just to be clear, I was referring to the shard xlator accessing the 
participant shard by sending a named lookup when we access the file (say 
`cat /mount/file > /dev/null`) from the mount.
I removed a shard and its hard-link from one of the bricks of a 2-way 
replica, unmounted the client, stopped and started the volume, and read 
the file from a fresh mount. For some reason (I need to debug why), a 
reverse heal seems to be happening where both bricks of the 2-replica 
volume end up with a zero-byte file for the shard in question.
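
For reference, a rough sketch of that test (volume, brick and GFID names are 
hypothetical; shards live on the brick under .shard as <gfid-of-base-file>.<index>, 
and each shard has a second hard-link under .glusterfs keyed by the shard's own GFID):

getfattr -n trusted.gfid -e hex /bricks/b1/.shard/<base-gfid>.1   # the shard's own GFID
rm /bricks/b1/.shard/<base-gfid>.1                                # remove the shard from one brick
rm /bricks/b1/.glusterfs/<xx>/<yy>/<shard-gfid>                   # and its .glusterfs hard-link
gluster volume stop vol && gluster volume start vol
umount /mnt && mount -t glusterfs srv01:/vol /mnt                 # fresh mount
cat /mnt/file > /dev/null                                         # named lookup + read from the mount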

-Ravi


-Krutika

On Wed, Aug 17, 2016 at 10:04 AM, Ravishankar N
<ravishan...@redhat.com> wrote:


On 08/17/2016 07:25 AM, Lindsay Mathieson wrote:

On 17 August 2016 at 11:24, Ravishankar N
<ravishan...@redhat.com> wrote:

The right way to heal the corrupted files as of now is to
access them from
the mount-point like you did after removing the
hard-links. The list of
files that are corrupted can be obtained with the scrub
status command.


How's that work with sharding where you can't see the shards
from the
mount point?

If sharding xlator does a named lookup of the shard in question as
and when it is accessed, AFR can heal it. But I'm not sure if that
is the case though. Let me check and get back.
-Ravi



___
Gluster-users mailing list
Gluster-users@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-users





___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Self healing does not see files to heal

2016-08-16 Thread Krutika Dhananjay
Not sure. I did check the logs you'd attached. There are some messages that
are unintended on the bricks. I need to find out if that can have any
negative consequences.

-Krutika

On Wed, Aug 17, 2016 at 11:03 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

> Could the problem I had Monday with shards not healing for hours be related
> to this?
>
> On 17 August 2016 at 15:10, Krutika Dhananjay  wrote:
> > Good question.
> >
> > Any attempt from a client to access /.shard or its contents from the
> mount
> > point will be met with an EPERM (Operation not permitted). We do not
> expose
> > .shard on the mount point.
> >
> > -Krutika
> >
> > On Wed, Aug 17, 2016 at 10:04 AM, Ravishankar N 
> > wrote:
> >>
> >> On 08/17/2016 07:25 AM, Lindsay Mathieson wrote:
> >>>
> >>> On 17 August 2016 at 11:24, Ravishankar N 
> wrote:
> 
>  The right way to heal the corrupted files as of now is to access them
>  from
>  the mount-point like you did after removing the hard-links. The list
> of
>  files that are corrupted can be obtained with the scrub status
> command.
> >>>
> >>>
> >>> How's that work with sharding where you can't see the shards from the
> >>> mount point?
> >>>
> >> If sharding xlator does a named lookup of the shard in question as and
> >> when it is accessed, AFR can heal it. But I'm not sure if that is the
> case
> >> though. Let me check and get back.
> >> -Ravi
> >>
> >>
> >>
> >> ___
> >> Gluster-users mailing list
> >> Gluster-users@gluster.org
> >> http://www.gluster.org/mailman/listinfo/gluster-users
> >
> >
>
>
>
> --
> Lindsay
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Self healing does not see files to heal

2016-08-16 Thread Lindsay Mathieson
Could the problem I had Monday with shards not healing for hours be related to this?

On 17 August 2016 at 15:10, Krutika Dhananjay  wrote:
> Good question.
>
> Any attempt from a client to access /.shard or its contents from the mount
> point will be met with an EPERM (Operation not permitted). We do not expose
> .shard on the mount point.
>
> -Krutika
>
> On Wed, Aug 17, 2016 at 10:04 AM, Ravishankar N 
> wrote:
>>
>> On 08/17/2016 07:25 AM, Lindsay Mathieson wrote:
>>>
>>> On 17 August 2016 at 11:24, Ravishankar N  wrote:

 The right way to heal the corrupted files as of now is to access them
 from
 the mount-point like you did after removing the hard-links. The list of
 files that are corrupted can be obtained with the scrub status command.
>>>
>>>
>>> How's that work with sharding where you can't see the shards from the
>>> mount point?
>>>
>> If sharding xlator does a named lookup of the shard in question as and
>> when it is accessed, AFR can heal it. But I'm not sure if that is the case
>> though. Let me check and get back.
>> -Ravi
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>
>



-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Self healing does not see files to heal

2016-08-16 Thread Krutika Dhananjay
Good question.

Any attempt from a client to access /.shard or its contents from the mount
point will be met with an EPERM (Operation not permitted). We do not expose
.shard on the mount point.
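
Per the above, an attempt like the following from a client mount (hypothetical
mount path) would simply be rejected:

ls /mnt/glustervol/.shard
# ls: cannot access /mnt/glustervol/.shard: Operation not permitted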

-Krutika

On Wed, Aug 17, 2016 at 10:04 AM, Ravishankar N 
wrote:

> On 08/17/2016 07:25 AM, Lindsay Mathieson wrote:
>
>> On 17 August 2016 at 11:24, Ravishankar N  wrote:
>>
>>> The right way to heal the corrupted files as of now is to access them
>>> from
>>> the mount-point like you did after removing the hard-links. The list of
>>> files that are corrupted can be obtained with the scrub status command.
>>>
>>
>> How's that work with sharding where you can't see the shards from the
>> mount point?
>>
>> If sharding xlator does a named lookup of the shard in question as and
> when it is accessed, AFR can heal it. But I'm not sure if that is the case
> though. Let me check and get back.
> -Ravi
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Self healing does not see files to heal

2016-08-16 Thread Ravishankar N

On 08/17/2016 07:25 AM, Lindsay Mathieson wrote:

On 17 August 2016 at 11:24, Ravishankar N  wrote:

The right way to heal the corrupted files as of now is to access them from
the mount-point like you did after removing the hard-links. The list of
files that are corrupted can be obtained with the scrub status command.


How's that work with sharding where you can't see the shards from the
mount point?

If sharding xlator does a named lookup of the shard in question as and 
when it is accessed, AFR can heal it. But I'm not sure if that is the 
case though. Let me check and get back.

-Ravi


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster inside containers

2016-08-16 Thread Zach Lanich
Hey guys, has anyone had a few mins to look at the aforementioned decision 
dilemmas I’m faced with? :) I also have a couple follow-up questions:

1. Is it possible to change a Replicated (replica 3, 3-node) setup to a 
Distributed-Replicated (replica 2, 4-node) setup?

2. I’m leaning toward Option #2 in some form, as I feel volumes would provide better 
separation than subdirectories (correct me if I’m wrong), so is there a good 
way to manage access to separate Gluster volumes? I can’t have the containers 
being able to mount whatever volume they want. One option is to mount the correct 
volume from the top down using lxc device add, but if possible, I might avoid 
that as it sort of breaks the rule of isolation for the containers. Do you 
agree?

3. Is it feasible to resize a set of bricks being used for a Gluster volume, 
should I want to add more HDD space on the existing nodes? Or am I just 
going about this the wrong way? Would I just create more bricks on those nodes 
and add them to the Gluster volume? (A rough sketch of the commands these 
questions touch on is included below.)
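
For reference, a minimal sketch of the operations the three questions above refer 
to. All names (volume "gvol", bricks under /data, container "website1", mount 
paths) are hypothetical; this is only meant to make the questions concrete, not 
to recommend a layout:

# 1. Replica 3 (3 nodes) -> distributed-replicate 2x2 (4 nodes): drop one
#    replica, then add a second replica pair and rebalance.
gluster volume remove-brick gvol replica 2 gnode-3:/data/gvol/brick1 force
gluster volume add-brick gvol gnode-3:/data/gvol/brick2 gnode-4:/data/gvol/brick2
gluster volume rebalance gvol start

# 2. Mount a website-specific volume on the LXD host, then expose only that
#    path to the container as a disk device.
mount -t glusterfs gnode-1:/website1 /mnt/website1
lxc config device add website1 webdata disk source=/mnt/website1 path=/var/www

# 3. Grow an existing volume by adding bricks (in multiples of the replica
#    count) and rebalancing.
gluster volume add-brick website1 gnode-1:/data/website1/brick2 \
    gnode-2:/data/website1/brick2 gnode-3:/data/website1/brick2
gluster volume rebalance website1 start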

Best Regards,

Zach Lanich
Business Owner, Entrepreneur, Creative
Owner/CTO
weCreate LLC
www.WeCreate.com

> On Aug 16, 2016, at 1:13 PM, Atin Mukherjee  wrote:
> 
> Adding Luis, Humble, Ashiq to comment as they have done some extensive work 
> on this area.
> 
> On Tuesday 16 August 2016, Zach Lanich wrote:
> Hey guys, I’m having a real hard time figuring out how to handle my Gluster 
> situation for the web hosting setup I’m working on. Here’s the rundown of 
> what I’m trying to accomplish:
> 
> - Load-balanced web nodes (2 nodes right now), each with multiple LXD 
> containers in them (1 container per website)
> - Gluster vols mounted into the containers (I probably need site-specific 
> volumes, not mounting the same volume into all of them)
> 
> Here are 3 scenarios I’ve come up with for a replica 3 (possibly w/ arbiter):
> 
> Option 1. 3 Gluster nodes, one large volume, divided up into subdirs (1 for 
> each website), mounting the respective subdirs into their containers & using 
> ACLs & LXD’s u/g id maps (mixed feelings about security here)
> 
> Option 2. 3 Gluster nodes, website-specific bricks on each, creating 
> website-specific volumes, then mounting those respective volumes into their 
> containers. Example:
> gnode-1
> - /data/website1/brick1
> - /data/website2/brick1
> gnode-2
> - /data/website1/brick2
> - /data/website2/brick2
> gnode-3
> - /data/website1/brick3
> - /data/website2/brick3
> 
> Option 3. 3 Gluster nodes, every website gets its own mini “Gluster 
> Cluster” via LXD containers on the Gluster nodes. Example:
> gnode-1
> - gcontainer-website1
>   - /data/brick1
> - gcontainer-website2
>   - /data/brick1
> gnode-2
> - gcontainer-website1
>   - /data/brick2
> - gcontainer-website2
>   - /data/brick2
> gnode-3
> - gcontainer-website1
>   - /data/brick3
> - gcontainer-website2
>   - /data/brick3
> 
> Where I need help:
> 
> - I don’t know which method is best (or if all 3 are technically possible, 
> though I feel they are)
> 
> My concerns/frustrations:
> 
> - Security
>   - Option 1 - Gives me mixed feelings about putting all customers’ website 
> files on one large volume and mounting subdirs of that volume into the LXD 
> containers, giving the containers R/W to that sub dir using ACLs on the host. 
> Mounting via "lxc device add” supposedly is secure itself, but I’m just not 
> sure here.
> 
> - Performance 
>   - Option 2 - Not sure if Gluster will suffer in any way by using it with 
> say 50 volumes? (one for each customer website)
>   - Option 3 - Not sure if I’m incurring any significant overhead running 
> multiple instances of the Gluster Daemons, etc by creating an isolated 
> Gluster cluster for every customer website. LXD itself is very lightweight, 
> but would this be any worse than running say 50x the FOPs through a single 
> more powerful Gluster cluster?
> 
> - Networking
>   - Option 3 - If all these mini Gluster clusters will be in their own 
> containers, it seems I will have some majorly annoying networking to do. I 
> foresee a couple of ways to do this (and please let me know if you see alt ways):
> - a. Send all Gluster traffic to the Gluster nodes, then use iptables & 
> port forwarding to send traffic to the correct container - Seems like a 
> nightmare. I think I’d have to use different sets of ports for every website’s 
> Gluster cluster.
> - b. Bridge the containers to their host’s internal network and assign 
> the containers unique IPs on the host’s network - Much more realistic, but 
> not 100% sure I can do this atm as I’m on Digital Ocean. I know there’s 
> private networking, but I’m not 100% sure I can assign IPs on that network as 
> DO seems to assign the Droplets private IPs automatically. I foresee IP 
> collisions here. If I have to move to a diff provider to do this, then so be 
> it, but I like the SSDs :)

Re: [Gluster-users] Self healing does not see files to heal

2016-08-16 Thread Lindsay Mathieson
On 17 August 2016 at 11:24, Ravishankar N  wrote:
> The right way to heal the corrupted files as of now is to access them from
> the mount-point like you did after removing the hard-links. The list of
> files that are corrupted can be obtained with the scrub status command.


How's that work with sharding where you can't see the shards from the
mount point?

-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Self healing does not see files to heal

2016-08-16 Thread Ravishankar N

On 08/16/2016 10:44 PM, Дмитрий Глушенок wrote:

Hello,

While testing healing after a bitrot error, it was found that self-healing cannot 
heal files which were manually deleted from a brick. Gluster 3.8.1:

- Create volume, mount it locally and copy test file to it
[root@srv01 ~]# gluster volume create test01 replica 2  srv01:/R1/test01 
srv02:/R1/test01
volume create: test01: success: please start the volume to access data
[root@srv01 ~]# gluster volume start test01
volume start: test01: success
[root@srv01 ~]# mount -t glusterfs srv01:/test01 /mnt
[root@srv01 ~]# cp /etc/passwd /mnt
[root@srv01 ~]# ls -l /mnt
total 2
-rw-r--r--. 1 root root 1505 Aug 16 19:59 passwd

- Then remove the test file from the first brick, as we have to do in case of a bitrot 
error in the file


You also need to remove all hard-links to the corrupted file from the 
brick, including the one in the .glusterfs folder.
There is a bug in heal-full that prevents it from crawling all bricks of 
the replica. The right way to heal the corrupted files as of now is to 
access them from the mount-point like you did after removing the 
hard-links. The list of files that are corrupted can be obtained with 
the scrub status command.
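
A rough sketch of that procedure, reusing the volume and brick names from your 
example (the GFID shown is hypothetical; the .glusterfs hard-link path is built 
from the first two and next two hex characters of the file's GFID):

gluster volume bitrot test01 scrub status              # list the corrupted files/GFIDs
getfattr -n trusted.gfid -e hex /R1/test01/passwd      # e.g. 0xd0c0a1b2... (hypothetical)
rm /R1/test01/passwd                                   # remove the bad copy from the brick
rm /R1/test01/.glusterfs/d0/c0/d0c0a1b2-...            # ...and its .glusterfs hard-link
ls -l /mnt/passwd                                      # lookup from the mount triggers the heal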


Hope this helps,
Ravi


[root@srv01 ~]# rm /R1/test01/passwd
[root@srv01 ~]# ls -l /mnt
total 0
[root@srv01 ~]#

- Issue full self heal
[root@srv01 ~]# gluster volume heal test01 full
Launching heal operation to perform full self heal on volume test01 has been 
successful
Use heal info commands to check status
[root@srv01 ~]# tail -2 /var/log/glusterfs/glustershd.log
[2016-08-16 16:59:56.483767] I [MSGID: 108026] 
[afr-self-heald.c:611:afr_shd_full_healer] 0-test01-replicate-0: starting full 
sweep on subvol test01-client-0
[2016-08-16 16:59:56.486560] I [MSGID: 108026] 
[afr-self-heald.c:621:afr_shd_full_healer] 0-test01-replicate-0: finished full 
sweep on subvol test01-client-0

- Now we still see no files in the mount point (it becomes empty right after 
removing the file from the brick)
[root@srv01 ~]# ls -l /mnt
total 0
[root@srv01 ~]#

- Then try to access the file by using its full name (lookup-optimize and 
readdir-optimize are turned off by default). Now glusterfs shows the file!
[root@srv01 ~]# ls -l /mnt/passwd
-rw-r--r--. 1 root root 1505 Aug 16 19:59 /mnt/passwd

- And it reappeared in the brick
[root@srv01 ~]# ls -l /R1/test01/
total 4
-rw-r--r--. 2 root root 1505 Aug 16 19:59 passwd
[root@srv01 ~]#

Is it a bug, or can we tell self-heal to scan all files on all bricks in the 
volume?

--
Dmitry Glushenok
Jet Infosystems

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer Summit

2016-08-16 Thread Nigel Babu
On Fri, Aug 12, 2016 at 03:48:49PM -0400, Vijay Bellur wrote:
> Hey All,
>
> Gluster Developer Summit 2016 is fast approaching [1] on us. We are looking
> to have talks and discussions related to the following themes in the summit:
>
> 1. Gluster.Next - focusing on features shaping the future of Gluster
>
> 2. Experience - Description of real world experience and feedback from:
>a> Devops and Users deploying Gluster in production
>b> Developers integrating Gluster with other ecosystems
>
> 3. Use cases  - focusing on key use cases that drive Gluster.today and
> Gluster.Next
>
> 4. Stability & Performance - focusing on current improvements to reduce our
> technical debt backlog
>
> 5. Process & infrastructure  - focusing on improving current workflow,
> infrastructure to make life easier for all of us!
>
> If you have a talk/discussion proposal that can be part of these themes,
> please send out your proposal(s) by replying to this thread. Please clearly
> mention the theme for which your proposal is relevant when you do so. We
> will be ending the CFP by 12 midnight PDT on August 31st, 2016.
>
> If you have other topics that do not fit in the themes listed, please feel
> free to propose and we might be able to accommodate some of them as
> lightning talks or something similar.
>
> Please do reach out to me or Amye if you have any questions.
>
> Thanks!
> Vijay
>
> [1] https://www.gluster.org/events/summit2016/
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

Here's my proposal:

Topic: State of the CI and future

It'll cover the following topics:
* Current state of our CI system.
* Planned improvements for the next year.
* A timeboxed discussion about what needs to improve.

--
nigelb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer Summit

2016-08-16 Thread Prasanna Kalever
Hey All,

Here is my topic to present at the Gluster summit

Abstract:

Title: GLUSTER AS BLOCK STORE IN CONTAINERS

As we all know containers are stateless entities which are used to
deploy applications and hence need persistent storage to store
application data for availability across container incarnations.

Persistent storage in containers is of two types: shared and non-shared.

Shared storage:
Consider this as a volume/store where multiple Containers perform both
read and write operations on the same data. Useful for applications
like web servers that need to serve the same data from multiple
container instances.

Non Shared Storage:
Only a single container can perform write operations to this store at
a given time.

This presentation intends to show and discuss how Gluster plays a role as a
non-shared block store in containers.
It introduces the background terminology (LIO, iSCSI, tcmu-runner,
targetcli) and explains the solution for achieving 'Block
store in Containers using gluster', followed by a demo.

The demo will showcase a basic Gluster setup (which could be elaborated on,
based on the audience), then show nodes initiating the iSCSI session,
attaching the iSCSI target as a block device, and serving it to containers
where the application is running and requires persistent storage.
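
A minimal sketch of the flow the demo describes, with hypothetical volume, host
and IQN names (the user:glfs backstore comes from tcmu-runner; exact targetcli
syntax may vary by version):

# Target side: expose a file on a Gluster volume as an iSCSI LUN.
targetcli /backstores/user:glfs create lun0 5G blockvol@gnode-1/lun0.img
targetcli /iscsi create iqn.2016-08.org.gluster:lun0
targetcli /iscsi/iqn.2016-08.org.gluster:lun0/tpg1/luns create /backstores/user:glfs/lun0

# Initiator side (the container host): discover, log in, then format and hand
# the resulting block device to the container as persistent storage.
iscsiadm -m discovery -t sendtargets -p gnode-1
iscsiadm -m node -T iqn.2016-08.org.gluster:lun0 -p gnode-1 --login
mkfs.xfs /dev/sdX    # the newly attached device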

It will show working demos of its integration with:
* Docker
* Kubernetes
* OpenShift

The intention of this presentation is to get feedback from people who
use similar solutions, and also to learn about potential risks for better
defence.
While discussing the TODOs (access locking, encryption, snapshots,
etc.) we could gather some ideas from the audience as well.


Cheers,
--
Prasanna


On Tue, Aug 16, 2016 at 7:23 PM, Kaushal M  wrote:
> Okay. Here's another proposal from me.
>
> # GlusterFS Release process
> An overview of the GlusterFS release process
>
> The GlusterFS release process has recently been updated and documented for
> the first time. In this presentation, I'll be giving an overview of the whole
> release process, including release types, release schedules, patch acceptance
> criteria and the release procedure.
>
> Kaushal
> kshlms...@gmail.com
> Process & Infrastructure
>
> On Mon, Aug 15, 2016 at 5:30 AM, Amye Scavarda  wrote:
>> Kaushal,
>>
>> That's probably best. We'll be able to track similar proposals here.
>> - amye
>>
>> On Sat, Aug 13, 2016 at 6:30 PM, Kaushal M  wrote:
>>>
>>> How do we submit proposals now? Do we just reply here?
>>>
>>>
>>> On 13 Aug 2016 03:49, "Amye Scavarda"  wrote:
>>>
>>> GlusterFS for Users
>>> "GlusterFS for users" introduces you with GlusterFS, it's terminologies,
>>> it's features and how to manage y GlusterFS cluster.
>>>
>>> GlusterFS is a scalable network filesystem. Using commodity hardware, you
>>> can create large, distributed storage solutions for media streaming, data
>>> analysis, and other data and bandwidth-intensive tasks. GlusterFS is free
>>> and open source software.
>>>
>>> This session is more intended for users/admins.
>>> Scope of this session :
>>>
>>> * What is Glusterfs
>>> * Glusterfs terminologies
>>> * Easy steps to get started with glusterfs
>>> * Volume topologies
>>> * Access protocols
>>> * Various features from user perspective :
>>> Replication, Data distribution, Geo-replication, Bit rot detection,
>>> data tiering,  Snapshot, Encryption, containerized glusterfs
>>> * Various configuration files
>>> * Various logs and it's location
>>> * various custom profile for specific use-cases
>>> * Collecting statedump and it's usage
>>> * Few common problems like :
>>>1) replacing a faulty brick
>>>2) resolving split-brain
>>>3) peer disconnect issue
>>>
>>> Bipin Kunal
>>> bku...@redhat.com
>>> User Perspectives
>>>
>>> On Fri, Aug 12, 2016 at 3:18 PM, Amye Scavarda  wrote:

 Demo : Quickly setup GlusterFS cluster
 This demo will let you understand How to setup GlusterFS cluster and how
 to exploit its features.

 GlusterFS is a scalable network filesystem. Using commodity hardware, you
 can create large, distributed storage solutions for media streaming, data
 analysis, and other data and bandwidth-intensive tasks. GlusterFS is free
 and open source software.

 This demo is intended for new user who is willing to setup glusterFS
 cluster.

 This demo will let you understand How to setup GlusterFS cluster and how
 to exploit its features.

 Scope of this session :

 1) Install GlusterFS packages
 2) Create a trusted storage pool
 3) Create a GlusterFS volume
 4) Access GlusterFS volume using various protocols
a) FUSE b) NFS c) CIFS d) NFS-ganesha
 5) Using Snapshot
 6) Creating geo-rep session
 7) Adding/removing/replacing bricks
 8) Bit-rot detection and correction

 Bipin Kunal
 bku...@redhat.com
 User Perspectives

 On Fri, Aug 12, 2016 at 3:17 PM, Amye Scavarda  wrote:
>
> An Update on GlusterD-2.0
> An upda

[Gluster-users] GlusterFS and ACL+SAMBA

2016-08-16 Thread Gilberto Nunes
Hello list

I am trying GlusterFS 3.8.1, compiled from scratch and mounted like this:


 mount -t glusterfs -o acl localhost:/FILES /WORK

And even with the acl parameter, when I try to use Samba with ACLs, I get this
error:

The mount point '/WORK' must be mounted with 'acl' option. This is required
for permissions to work properly.

I have another mount point, named home, with ACL support enabled too, and
it works fine.

All these mount points are ZFS pools, with acltype=posixacl and
aclinherit=passthrough.
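
For reference, a quick way to check whether the FUSE mount itself honours POSIX
ACLs (throwaway test file; this only exercises the mount, not Samba):

touch /WORK/acltest
setfacl -m u:nobody:rw /WORK/acltest   # fails with "Operation not supported" if the acl option did not take effect
getfacl /WORK/acltest                  # should list a user:nobody:rw- entry when ACLs work
rm /WORK/acltest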

I don't know what I can do to solve this issue.

Can somebody help me?

Thanks so much!

Best regards.


Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Self healing does not see files to heal

2016-08-16 Thread Дмитрий Глушенок
Hello,

While testing healing after a bitrot error, it was found that self-healing cannot 
heal files which were manually deleted from a brick. Gluster 3.8.1:

- Create volume, mount it locally and copy test file to it
[root@srv01 ~]# gluster volume create test01 replica 2  srv01:/R1/test01 
srv02:/R1/test01 
volume create: test01: success: please start the volume to access data
[root@srv01 ~]# gluster volume start test01
volume start: test01: success
[root@srv01 ~]# mount -t glusterfs srv01:/test01 /mnt
[root@srv01 ~]# cp /etc/passwd /mnt
[root@srv01 ~]# ls -l /mnt
total 2
-rw-r--r--. 1 root root 1505 Aug 16 19:59 passwd

- Then remove the test file from the first brick, as we have to do in case of a bitrot 
error in the file
[root@srv01 ~]# rm /R1/test01/passwd 
[root@srv01 ~]# ls -l /mnt
total 0
[root@srv01 ~]# 

- Issue full self heal
[root@srv01 ~]# gluster volume heal test01 full
Launching heal operation to perform full self heal on volume test01 has been 
successful 
Use heal info commands to check status
[root@srv01 ~]# tail -2 /var/log/glusterfs/glustershd.log
[2016-08-16 16:59:56.483767] I [MSGID: 108026] 
[afr-self-heald.c:611:afr_shd_full_healer] 0-test01-replicate-0: starting full 
sweep on subvol test01-client-0
[2016-08-16 16:59:56.486560] I [MSGID: 108026] 
[afr-self-heald.c:621:afr_shd_full_healer] 0-test01-replicate-0: finished full 
sweep on subvol test01-client-0

- Now we still see no files in the mount point (it becomes empty right after 
removing the file from the brick)
[root@srv01 ~]# ls -l /mnt
total 0
[root@srv01 ~]# 

- Then try to access the file by using its full name (lookup-optimize and 
readdir-optimize are turned off by default). Now glusterfs shows the file!
[root@srv01 ~]# ls -l /mnt/passwd
-rw-r--r--. 1 root root 1505 Aug 16 19:59 /mnt/passwd

- And it reappeared in the brick
[root@srv01 ~]# ls -l /R1/test01/
total 4
-rw-r--r--. 2 root root 1505 Aug 16 19:59 passwd
[root@srv01 ~]#

Is it a bug, or can we tell self-heal to scan all files on all bricks in the 
volume?

--
Dmitry Glushenok
Jet Infosystems

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster inside containers

2016-08-16 Thread Atin Mukherjee
Adding Luis, Humble, Ashiq to comment as they have done some extensive work
on this area.

On Tuesday 16 August 2016, Zach Lanich  wrote:

> Hey guys, I’m having a real hard time figuring out how to handle my
> Gluster situation for the web hosting setup I’m working on. Here’s the
> rundown of what I’m trying to accomplish:
>
> - Load-balanced web nodes (2 nodes right now), each with multiple LXD
> containers in them (1 container per website)
> - Gluster vols mounted into the containers (I probably need site-specific
> volumes, not mounting the same volume into all of them)
>
> Here are 3 scenarios I’ve come up with for a replica 3 (possibly w/
> arbiter):
>
> *Option 1*. 3 Gluster nodes, one large volume, divided up into subdirs (1
> for each website), mounting the respective subdirs into their containers &
> using ACLs & LXD’s u/g id maps (mixed feelings about security here)
>
> *Option 2*. 3 Gluster nodes, website-specific bricks on each, creating
> website-specific volumes, then mounting those respective volumes into their
> containers. Example:
> gnode-1
> - /data/website1/brick1
> - /data/website2/brick1
> gnode-2
> - /data/website1/brick2
> - /data/website2/brick2
> gnode-3
> - /data/website1/brick3
> - /data/website2/brick3
>
> *Option 3*. 3 Gluster nodes, every website gets its own mini “Gluster
> Cluster” via LXD containers on the Gluster nodes. Example:
> gnode-1
> - gcontainer-website1
>   - /data/brick1
> - gcontainer-website2
>   - /data/brick1
> gnode-2
> - gcontainer-website1
>   - /data/brick2
> - gcontainer-website2
>   - /data/brick2
> gnode-3
> - gcontainer-website1
>   - /data/brick3
> - gcontainer-website2
>   - /data/brick3
>
> *Where I need help:*
>
> - I don’t know which method is best (or if all 3 are technically possible,
> though I feel they are)
>
> *My concerns/frustrations:*
>
> - *Security*
>   - Option 1 - Gives me mixed feelings about putting all customers’
> website files on one large volume and mounting subdirs of that volume into
> the LXD containers, giving the containers R/W to that sub dir using ACLs on
> the host. Mounting via "lxc device add” supposedly is secure itself, but
> I’m just not sure here.
>
> - *Performance *
>   - Option 2 - Not sure if Gluster will suffer in any way by using it with
> say 50 volumes? (one for each customer website)
>   - Option 3 - Not sure if I’m incurring any significant overhead running
> multiple instances of the Gluster Daemons, etc by creating an isolated
> Gluster cluster for every customer website. LXD itself is very lightweight,
> but would this be any worse than running say 50x the FOPs through a single
> more powerful Gluster cluster?
>
> - *Networking*
>   - Option 3 - If all these mini Gluster clusters will be in their own
> containers, it seems I will have some majorly annoying networking to do. I
> foresee a couple of ways to do this (and please let me know if you see alt ways):
> - a. Send all Gluster traffic to the Gluster nodes, then use iptables
> & port forwarding to send traffic to the correct container - Seems like a
> nightmare. I think I’d have to use different sets of ports for every website’s
> Gluster cluster.
> - b. Bridge the containers to their host’s internal network and assign
> the containers unique IPs on the host’s network - Much more realistic, but
> not 100% sure I can do this atm as I’m on Digital Ocean. I know there’s
> private networking, but I’m not 100% sure I can assign IPs on that network
> as DO seems to assign the Droplets private IPs automatically. I foresee IP
> collisions here. If I have to move to a diff provider to do this, then so
> be it, but I like the SSDs :)
>
> I’d appreciate help on this as I’m a bit in over my head, but extremely
> eager to figure this out and make it happen. I’m not 100% aware of the
> Security/Performance/Networking implications are for the above decisions
> and I need an expert so I don’t go too far off in left field.
>
> Best Regards,
>
> Zach Lanich
> *Business Owner, Entrepreneur, Creative*
> *Owner/CTO*
> weCreate LLC
> *www.WeCreate.com *
>
>

-- 
--Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer Summit

2016-08-16 Thread Kaushal M
Okay. Here's another proposal from me.

# GlusterFS Release process
An overview of the GlusterFS release process

The GlusterFS release process has recently been updated and documented for
the first time. In this presentation, I'll be giving an overview of the whole
release process, including release types, release schedules, patch acceptance
criteria and the release procedure.

Kaushal
kshlms...@gmail.com
Process & Infrastructure

On Mon, Aug 15, 2016 at 5:30 AM, Amye Scavarda  wrote:
> Kaushal,
>
> That's probably best. We'll be able to track similar proposals here.
> - amye
>
> On Sat, Aug 13, 2016 at 6:30 PM, Kaushal M  wrote:
>>
>> How do we submit proposals now? Do we just reply here?
>>
>>
>> On 13 Aug 2016 03:49, "Amye Scavarda"  wrote:
>>
>> GlusterFS for Users
>> "GlusterFS for users" introduces you with GlusterFS, it's terminologies,
>> it's features and how to manage y GlusterFS cluster.
>>
>> GlusterFS is a scalable network filesystem. Using commodity hardware, you
>> can create large, distributed storage solutions for media streaming, data
>> analysis, and other data and bandwidth-intensive tasks. GlusterFS is free
>> and open source software.
>>
>> This session is more intended for users/admins.
>> Scope of this session :
>>
>> * What is Glusterfs
>> * Glusterfs terminologies
>> * Easy steps to get started with glusterfs
>> * Volume topologies
>> * Access protocols
>> * Various features from user perspective :
>> Replication, Data distribution, Geo-replication, Bit rot detection,
>> data tiering,  Snapshot, Encryption, containerized glusterfs
>> * Various configuration files
>> * Various logs and it's location
>> * various custom profile for specific use-cases
>> * Collecting statedump and it's usage
>> * Few common problems like :
>>1) replacing a faulty brick
>>2) resolving split-brain
>>3) peer disconnect issue
>>
>> Bipin Kunal
>> bku...@redhat.com
>> User Perspectives
>>
>> On Fri, Aug 12, 2016 at 3:18 PM, Amye Scavarda  wrote:
>>>
>>> Demo : Quickly setup GlusterFS cluster
>>> This demo will let you understand How to setup GlusterFS cluster and how
>>> to exploit its features.
>>>
>>> GlusterFS is a scalable network filesystem. Using commodity hardware, you
>>> can create large, distributed storage solutions for media streaming, data
>>> analysis, and other data and bandwidth-intensive tasks. GlusterFS is free
>>> and open source software.
>>>
>>> This demo is intended for new user who is willing to setup glusterFS
>>> cluster.
>>>
>>> This demo will let you understand How to setup GlusterFS cluster and how
>>> to exploit its features.
>>>
>>> Scope of this session :
>>>
>>> 1) Install GlusterFS packages
>>> 2) Create a trusted storage pool
>>> 3) Create a GlusterFS volume
>>> 4) Access GlusterFS volume using various protocols
>>>a) FUSE b) NFS c) CIFS d) NFS-ganesha
>>> 5) Using Snapshot
>>> 6) Creating geo-rep session
>>> 7) Adding/removing/replacing bricks
>>> 8) Bit-rot detection and correction
>>>
>>> Bipin Kunal
>>> bku...@redhat.com
>>> User Perspectives
>>>
>>> On Fri, Aug 12, 2016 at 3:17 PM, Amye Scavarda  wrote:

 An Update on GlusterD-2.0
 An update on what's been happening in GlusterD-2.0 since the last
 summit.

 Discussion around GlusterD-2.0 was initially started at the last Gluster
 Development summit. Since then we've had many followup discussions, and
 officially started working on GD2. In this talk I'll be providing an update
 on what has been done, what we're doing and what needs to be done.

 Kaushal
 kshlms...@gmail.com
 Future Gluster Features


 On Fri, Aug 12, 2016 at 3:16 PM, Amye Scavarda  wrote:
>
> Challenges with Gluster and Persistent Memory
>
> A discussion of the difficulties posed by persistent memory with
> Gluster and  short and long term steps to address them.
>
> Persistent memory will significantly improve storage performance. But
> these benefits may be hard to realize in Gluster. Gains are mitigated by
> costly network overhead and its deep software layer. It is also likely 
> that
> the high costs of persistent memory will limit deployments. This talk 
> shall
> discuss short and long term steps to take on those problems. Possible
> strategies include better incorporating high speed networks such as
> infiniband, client side caching of metadata, and centralizing DHT's 
> layouts.
> The talk will include discussion and results from a range of experiments 
> in
> software and hardware.
>
> Presenters:
> Dan Lambright, Rafi Parambil dlamb...@redhat.com
> Future Gluster Features
>
> On Fri, Aug 12, 2016 at 3:15 PM, Amye Scavarda  wrote:
>>
>>
>>
>> On Fri, Aug 12, 2016 at 12:48 PM, Vijay Bellur 
>> wrote:
>>>
>>> Hey All,
>>>
>>> Gluster Developer Summit 2016 is fast approaching [1] on us. We are
>>> looking to h

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting (Today)

2016-08-16 Thread Ankit Raj
Hi all,

The weekly Gluster bug triage is about to take place in 15 minutes.

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

Appreciate your participation

Regards,
Ankit Raj
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users