Re: [Gluster-users] Gluster inside containers

2016-08-17 Thread Zach Lanich
It's good to hear the support is coming though. Thanks!


Best Regards,

Zach Lanich
Owner/Lead Developer
weCreate LLC
www.WeCreate.com
814.580.6636

> On Aug 17, 2016, at 8:54 AM, Kaushal M  wrote:
> 
> On Wed, Aug 17, 2016 at 5:18 PM, Humble Devassy Chirammal
>  wrote:
>> Hi Zach,
>> 
>>> 
>> Option 1. 3 Gluster nodes, one large volume, divided up into subdirs (1 for
>> each website), mounting the respective subdirs into their containers & using
>> ACLs & LXD’s u/g id maps (mixed feelings about security here)
>>> 
>> 
>> Which version of GlusterFS is in use here? A patch for GlusterFS
>> sub-directory mount support is available upstream, but I don't think it is
>> in a good state to consume yet. Also, if sub-directory mounts are used, we
>> have to take enough care to ensure isolation of the mounts between the
>> multiple users, i.e. security is a concern here.
> 
> A correction here. Sub-directory mount support hasn't been merged yet.
> It's still a patch under review.
> 
>> 
>>> 
>> Option 2. 3 Gluster nodes, website-specific bricks on each, creating
>> website-specific volumes, then mounting those respective volumes into their
>> containers. Example:
>>gnode-1
>>- /data/website1/brick1
>>- /data/website2/brick1
>>gnode-2
>>- /data/website1/brick2
>>- /data/website2/brick2
>>gnode-3
>>- /data/website1/brick3
>>- /data/website2/brick3
>>> 
>> 
>> Yes, this looks to be an ideal or more consumable approach to me.
>> 
>>> 
>> 
>> Option 3. 3 Gluster nodes, every website gets its own mini “Gluster
>> Cluster” via LXD containers on the Gluster nodes. Example:
>>gnode-1
>>- gcontainer-website1
>>  - /data/brick1
>>- gcontainer-website2
>>  - /data/brick1
>>gnode-2
>>- gcontainer-website1
>>  - /data/brick2
>>- gcontainer-website2
>>  - /data/brick2
>>gnode-3
>>- gcontainer-website1
>>  - /data/brick3
>>- gcontainer-website2
>>  - /data/brick3
>>> 
>> 
>> This is very difficult or complex to achieve and maintain.
>> 
>> In short,  I would vote for option 2.
>> 
>> Also, to be on the safer side, you may want to take snapshots of the volumes
>> or configure backups for these volumes to avoid a single point of failure.
>> 
>> Please let me know if you need any details.
>> 
>> --Humble
>> 
>> 
>> 
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] does root squash work?

2016-08-17 Thread Laurent Bardi
Hi,

I have set up a 2-machine Gluster system (in "mirror" mode).
The backend filesystem is XFS, and I use sssd to query groups/users from an AD.

Everything is OK if I deactivate root squash, but if I activate it:
- for a given user, in their homedir under Gluster I can create a file, but
  not a dir?!
- when I log in via SSH it complains about an Xauthority timeout?

I do not understand where my mistake is.
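
For reference, this is roughly how I toggle root squash per volume while
testing (a minimal sketch; "homes" is a placeholder volume name, not my real
one):

# enable/disable root squash on the volume (placeholder volume name "homes")
gluster volume set homes server.root-squash on
gluster volume get homes server.root-squash        # check the current value
gluster volume set homes server.root-squash off    # behaviour goes back to normal with this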

-- 
Laurent BARDI /  RSI CNRS-IPBS / CRSSI DR14
INSTITUT  de PHARMACOLOGIE et de BIOLOGIE STRUCTURALE
Tel : 05-61-17-59-05http://www.ipbs.fr/
Fax : 05-61-17-59-94Laurent.BardiATipbs.fr
CNRS-IPBS 205 Route de Narbonne 31400 TOULOUSE FRANCE
...
I was undeniably a misanthrope.
I wanted to wade across a swamp infested with fools.
When I reached the other shore, I had become a philanthropist.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer Summit

2016-08-17 Thread Atin Mukherjee
Here is one of the proposals from my end:

"Gluster maintainers responsibilities"
Theme - Process & Infrastructure

- Tracking incoming reviews and managing pending review backlogs with the
help of peer reviews/review marathon on a weekly basis

- Bug triaging & prioritization  - Current form of community bugzilla
triaging is all about putting a keyword "triaged" and assigning the BZ to
right people and at most asking for further logs/information, while this
helps in the initial screening but maintainers need to further look into
them from their component and come up with a plan on "when to fix what"
sort of model for bug fix updates.

- Addressing community users issues on a regular basis (both over email &
IRC)

- Keeping track of overall component health (a culmination of the above three)

The key point to discuss, while touching upon all the above points, is how
to balance all of these activities with the other commitments (mostly
development) you have for the project deliverables.


On Sat, Aug 13, 2016 at 1:18 AM, Vijay Bellur  wrote:

> Hey All,
>
> Gluster Developer Summit 2016 is fast approaching [1] on us. We are
> looking to have talks and discussions related to the following themes in
> the summit:
>
> 1. Gluster.Next - focusing on features shaping the future of Gluster
>
> 2. Experience - Description of real world experience and feedback from:
>a> Devops and Users deploying Gluster in production
>b> Developers integrating Gluster with other ecosystems
>
> 3. Use cases  - focusing on key use cases that drive Gluster.today and
> Gluster.Next
>
> 4. Stability & Performance - focusing on current improvements to reduce
> our technical debt backlog
>
> 5. Process & infrastructure  - focusing on improving current workflow,
> infrastructure to make life easier for all of us!
>
> If you have a talk/discussion proposal that can be part of these themes,
> please send out your proposal(s) by replying to this thread. Please clearly
> mention the theme for which your proposal is relevant when you do so. We
> will be ending the CFP by 12 midnight PDT on August 31st, 2016.
>
> If you have other topics that do not fit in the themes listed, please feel
> free to propose and we might be able to accommodate some of them as
> lightning talks or something similar.
>
> Please do reach out to me or Amye if you have any questions.
>
> Thanks!
> Vijay
>
> [1] https://www.gluster.org/events/summit2016/
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 

--Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] CFP Gluster Developer Summit

2016-08-17 Thread Kaleb S. KEITHLEY
I propose to present on one or more of the following topics:

* NFS-Ganesha Architecture, Roadmap, and Status
* Architecture of the High Availability Solution for Ganesha and Samba
 - detailed walk through and demo of current implementation
 - difference between the current and storhaug implementations
* High Level Overview of autoconf/automake/libtool configuration
 (I gave a presentation in BLR in 2015, so this is perhaps less
interesting?)
* Packaging Howto — RPMs and .debs
 (maybe a breakout session or a BOF. Would like to (re)enlist volunteers
to help build packages.)


-- 

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster inside containers

2016-08-17 Thread Personal
Thanks Humble. Regarding the single point of failure: would there be a single
point of failure in a 4- or 6-node Distributed-Replicated setup? I still have
to wrap my head around exactly how many nodes I need for H/A & linear
scalability over time.
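
For example, something like this 6-node, replica-3 layout is what I'm picturing
(hostnames/paths are placeholders and this is only a sketch, not a real config):

# distributed-replicated: 2 distribution subvolumes x replica 3 = 6 bricks,
# so any single node can go down without taking the volume offline
gluster volume create website1 replica 3 \
    gnode-1:/data/website1/brick1 gnode-2:/data/website1/brick2 gnode-3:/data/website1/brick3 \
    gnode-4:/data/website1/brick4 gnode-5:/data/website1/brick5 gnode-6:/data/website1/brick6
gluster volume start website1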

PS good to hear subdirectory mount support is coming.


Best Regards,

Zach Lanich
Business Owner, Entrepreneur, Creative
Owner/Lead Developer
weCreate LLC
www.WeCreate.com

> On Aug 17, 2016, at 7:48 AM, Humble Devassy Chirammal 
>  wrote:
> 
> Hi Zach, 
> 
> >
> Option 1. 3 Gluster nodes, one large volume, divided up into subdirs (1 for 
> each website), mounting the respective subdirs into their containers & using 
> ACLs & LXD’s u/g id maps (mixed feelings about security here)
> >
> 
> Which version of GlusterFS is in use here? A patch for GlusterFS
> sub-directory mount support is available upstream, but I don't think it is
> in a good state to consume yet. Also, if sub-directory mounts are used, we
> have to take enough care to ensure isolation of the mounts between the
> multiple users, i.e. security is a concern here.
> 
> >
> Option 2. 3 Gluster nodes, website-specific bricks on each, creating
> website-specific volumes, then mounting those respective volumes into their 
> containers. Example:
> gnode-1
> - /data/website1/brick1
> - /data/website2/brick1
> gnode-2
> - /data/website1/brick2
> - /data/website2/brick2
> gnode-3
> - /data/website1/brick3
> - /data/website2/brick3
> >
> 
> Yes, this looks to be an ideal or more consumable approach to me.
> 
> >
> 
> Option 3. 3 Gluster nodes, every website gets its own mini “Gluster
> Cluster” via LXD containers on the Gluster nodes. Example:
> gnode-1
> - gcontainer-website1
>   - /data/brick1
> - gcontainer-website2
>   - /data/brick1
> gnode-2
> - gcontainer-website1
>   - /data/brick2
> - gcontainer-website2
>   - /data/brick2
> gnode-3
> - gcontainer-website1
>   - /data/brick3
> - gcontainer-website2
>   - /data/brick3
> >
> 
> This is very difficult or complex to achieve and maintain. 
> 
> In short,  I would vote for option 2. 
> 
> Also, to be on the safer side, you may want to take snapshots of the volumes
> or configure backups for these volumes to avoid a single point of failure.
> 
> Please let me know if you need any details.
> 
> --Humble
> 
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Minutes : Gluster Community meeting (Wednesday 17th Aug 2016)

2016-08-17 Thread Mohammed Rafi K C
Hi All,

The meeting minutes and logs for this week's meeting are available at
the links below.

Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-17/weekly_community_meeting_17-aug-2016.2016-08-17-12.00.html
Minutes (text):
https://meetbot-raw.fedoraproject.org/gluster-meeting/2016-08-17/weekly_community_meeting_17-aug-2016.2016-08-17-12.00.txt
Log:
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-17/weekly_community_meeting_17-aug-2016.2016-08-17-12.00.log.html

We had a very lively meeting this time, with good participation.
Hope next week's meeting is the same. The next meeting is, as always,
at 1200 UTC next Wednesday in #gluster-meeting. See you all there and
thank you for attending today's meeting.

*Please note that we have decided to do the screening of the remaining 3.6
bugs, as 3.6 has reached EOL. This will take place next Tuesday as part of
the bug triage meeting. If you are a maintainer, please ensure your
presence. Looking forward to seeing everyone for this bug screening.*
Regards!
Rafi KC


Meeting summary
---
* Roll call  (rafi, 12:01:25)
  * The agenda is available at
https://public.pad.fsfe.org/p/gluster-community-meetings  (rafi,
12:02:22)
  * LINK: https://public.pad.fsfe.org/p/gluster-community-meetings
(rafi, 12:02:34)

* Next weeks meeting host  (rafi, 12:05:33)
  * kshlm will be hosting next week community meeting  (rafi, 12:07:49)

* GlusterFS-4.0  (rafi, 12:08:08)

* GlusterFS-3.9  (rafi, 12:15:19)

* GlusterFS-3.8  (rafi, 12:18:10)
  * GlusterFS-3.8.3 is scheduled in first week of Sept  (rafi, 12:21:44)

* GlusterFS-3.7  (rafi, 12:22:55)
  * ACTION: kshlm will send out a reminder for 3.7.15 time lines  (rafi,
12:25:23)

* GlusterFS-3.6  (rafi, 12:26:42)
  * LINK:
http://www.gluster.org/pipermail/maintainers/2016-August/001227.html
(ndevos, 12:30:03)
  * 84 bugs for 3.6 still need to be screened  (ndevos, 12:32:55)

* Infra  (rafi, 12:35:06)

* NFS ganesha  (rafi, 12:36:19)

* Gluster samba  (rafi, 12:41:56)

* Last weeks AIs  (rafi, 12:47:55)

* kshlm to setup a time to go through the 3.6 buglist one last time
  (everyone should attend).  (rafi, 12:48:22)
  * ACTION: kshlm to send reminder to go through the 3.6 buglist one
last time (everyone should attend).  (rafi, 12:50:23)

* open floor  (rafi, 12:50:50)

* Glusto - libraries have been ported by the QE Automation Team and just
  need your +1s on Glusto to begin configuring upstream and make
  available.  (rafi, 12:51:05)

* * umbrella versioning for glusterfs in bugzilla (i.e. 3.9, not 3.9.0,
  3.9.1, etc.  starting with 3.9 release)  (rafi, 12:52:03)
  * ACTION: kkeithley will send more information to gluster ML's about
changing the bugzilla versioning  to umbrella  (rafi, 12:55:32)

Meeting ended at 13:00:12 UTC.




Action Items

* kshlm will send out a reminder for 3.7.15 time lines
* kshlm to send reminder to go through the 3.6 buglist one last time
  (everyone should attend).
* kkeithley will send more information to gluster ML's about changing
  the bugzilla versioning  to umbrella




Action Items, by person
---
* kkeithley
  * kkeithley will send more information to gluster ML's about changing
the bugzilla versioning  to umbrella
* kshlm
  * kshlm will send out a reminder for 3.7.15 time lines
  * kshlm to send reminder to go through the 3.6 buglist one last time
(everyone should attend).
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* rafi (110)
* kshlm (31)
* kkeithley (30)
* ndevos (27)
* ira (18)
* post-factum (10)
* zodbot (6)
* skoduri (4)
* glusterbot (4)
* aravindavk (3)
* ankitraj (1)
* msvbhat (1)
* kotreshhr (1)
* ira_ (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot



On 08/17/2016 02:35 PM, Mohammed Rafi K C wrote:
>
>
> Hi all,
>
> The weekly Gluster community meeting is about to take place three
> hours from now.
>
> Meeting details:
> - location: #gluster-meeting on Freenode IRC
> ( https://webchat.freenode.net/?channels=gluster-meeting )
> - date: every Wednesday
> - time: 12:00 UTC
> (in your terminal, run: date -d "12:00 UTC")
> - agenda: https://public.pad.fsfe.org/p/gluster-community-meetings
>
> Currently the following items are listed:
> * GlusterFS 4.0
> * GlusterFS 3.9
> * GlusterFS 3.8
> * GlusterFS 3.7
> * GlusterFS 3.6
> * Related projects
> * Last week's AIs
> * Open Floor
>
> If you have any topic that needs to be discussed, please add it to the
> Open Floor section as a sub-topic.
>
> Appreciate your participation.
>
> Regards,
> Rafi KC

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster inside containers

2016-08-17 Thread Kaushal M
On Wed, Aug 17, 2016 at 5:18 PM, Humble Devassy Chirammal
 wrote:
> Hi Zach,
>
>>
> Option 1. 3 Gluster nodes, one large volume, divided up into subdirs (1 for
> each website), mounting the respective subdirs into their containers & using
> ACLs & LXD’s u/g id maps (mixed feelings about security here)
>>
>
> Which version of GlusterFS is in use here? A patch for GlusterFS
> sub-directory mount support is available upstream, but I don't think it is
> in a good state to consume yet. Also, if sub-directory mounts are used, we
> have to take enough care to ensure isolation of the mounts between the
> multiple users, i.e. security is a concern here.

A correction here. Sub-directory mount support hasn't been merged yet.
It's still a patch under review.

>
>>
> Option 2. 3 Gluster nodes, website-specific bricks on each, creating
> website-specific volumes, then mounting those respective volumes into their
> containers. Example:
> gnode-1
> - /data/website1/brick1
> - /data/website2/brick1
> gnode-2
> - /data/website1/brick2
> - /data/website2/brick2
> gnode-3
> - /data/website1/brick3
> - /data/website2/brick3
>>
>
> Yes, this looks to be an ideal or more consumable approach to me.
>
>>
>
> Option 3. 3 Gluster nodes, every website gets its own mini “Gluster
> Cluster” via LXD containers on the Gluster nodes. Example:
> gnode-1
> - gcontainer-website1
>   - /data/brick1
> - gcontainer-website2
>   - /data/brick1
> gnode-2
> - gcontainer-website1
>   - /data/brick2
> - gcontainer-website2
>   - /data/brick2
> gnode-3
> - gcontainer-website1
>   - /data/brick3
> - gcontainer-website2
>   - /data/brick3
>>
>
> This is very difficult or complex to achieve and maintain.
>
> In short,  I would vote for option 2.
>
> Also, to be on the safer side, you may want to take snapshots of the volumes
> or configure backups for these volumes to avoid a single point of failure.
>
> Please let me know if you need any details.
>
> --Humble
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Input/Output error only when some files exists

2016-08-17 Thread Berkay Unal
If you use NFS you need to deal with UIDs, GIDs, and permissions (matching the
UIDs etc. on the web servers with the ones on the NFS server). Gluster
heals this pain.

Currently I am upgrading to 3.7.

Thanks for the reply



--
Berkay UNAL
www.berkayunal.com
berkayu...@gmail.com





On Wed, Aug 17, 2016 at 2:11 PM, Serkan Çoban  wrote:

> Why are you using single-server Gluster? NFS is a perfect solution for this.
> You should not delete files directly from bricks.
> Can you umount/mount from the client and see if you get the same error?
> 3.5 is very old; you should use 3.7+.
>
> On Wed, Aug 17, 2016 at 12:35 PM, Berkay Unal 
> wrote:
> > Hi,
> >
> > I have a strange issue and any help would be appreciated much.
> >
> > Here is my setup. The servers are Ubuntu 14.04 and I am using the repo
> > ppa:gluster/glusterfs-3.5 for the Gluster server.
> >
> > Server A (Gluster): I am using the Gluster server with no replication, so
> > it is more like (GlusterFS single-server NFS style). Files are located
> > under /gluster-storage/
> >
> > Server B (Client): I have a web server that needs shared storage. The
> > Gluster server is mounted to /storage-pool/site/
> >
> > The mounted volume on the client is used by a CMS, so it is shared storage
> > for multiple CMS web servers. The current setup was working with no problem
> > until today. When I try to list the files on the client for the folder
> > /storage-pool/site/content I get the following error:
> >
> > "ls: cannot open directory .: No such file or directory"
> >
> > ls was working with no problem until now.
> >
> > When I delete some files from this folder on Server A (the Gluster server),
> > ls starts to work again. If I create the same files again on the server,
> > the client's ls has the problem again.
> >
> > So if certain files exist in that folder, ls is broken and I am getting an
> > Input/Output error.
> >
> > I hope I could explain the problem. Are there any recommendations?
> >
> > Any help would be appreciated. Thanks
> >
> > --
> > Berkay
> >
> >
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster inside containers

2016-08-17 Thread Humble Devassy Chirammal
Hi Zach,

>
*Option 1*. 3 Gluster nodes, one large volume, divided up into subdirs (1
for each website), mounting the respective subdirs into their containers &
using ACLs & LXD’s u/g id maps (mixed feelings about security here)
>

Which version of GlusterFS is in use here? A patch for GlusterFS
sub-directory mount support is available upstream, but I don't think it is
in a good state to consume yet. Also, if sub-directory mounts are used, we
have to take enough care to ensure isolation of the mounts between the
multiple users, i.e. security is a concern here.

>
*Option 2*. 3 Gluster nodes, website-specific bricks on each, creating
website-specific volumes, then mounting those respective volumes into their
containers. Example:
gnode-1
- /data/website1/brick1
- /data/website2/brick1
gnode-2
- /data/website1/brick2
- /data/website2/brick2
gnode-3
- /data/website1/brick3
- /data/website2/brick3
>

Yes, this looks to be an ideal or more consumable approach to me.
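
For example, the per-website volumes in that layout could be created roughly
like this (a minimal sketch reusing the hostnames/paths from your example; the
mount target is a placeholder):

gluster volume create website1 replica 3 \
    gnode-1:/data/website1/brick1 \
    gnode-2:/data/website1/brick2 \
    gnode-3:/data/website1/brick3
gluster volume start website1

gluster volume create website2 replica 3 \
    gnode-1:/data/website2/brick1 \
    gnode-2:/data/website2/brick2 \
    gnode-3:/data/website2/brick3
gluster volume start website2

# each container then mounts only its own site's volume
mount -t glusterfs gnode-1:/website1 /var/www/website1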

>

*Option 3*. 3 Gluster nodes, every website gets its own mini “Gluster
Cluster” via LXD containers on the Gluster nodes. Example:
gnode-1
- gcontainer-website1
  - /data/brick1
- gcontainer-website2
  - /data/brick1
gnode-2
- gcontainer-website1
  - /data/brick2
- gcontainer-website2
  - /data/brick2
gnode-3
- gcontainer-website1
  - /data/brick3
- gcontainer-website2
  - /data/brick3
>

This is very difficult or complex to achieve and maintain.

In short,  I would vote for option 2.

Also, to be on the safer side, you may want to take snapshots of the volumes or
configure backups for these volumes to avoid a single point of failure.
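
A minimal sketch of the snapshot part, assuming the bricks sit on thinly
provisioned LVM (which Gluster volume snapshots require) and the volume names
from the example above:

gluster snapshot create website1-snap1 website1
gluster snapshot list website1
gluster snapshot restore website1-snap1    # only possible while the volume is stopped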

Please let me know if you need any details.

--Humble
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.8.2 : Node not healing

2016-08-17 Thread Lindsay Mathieson
Just as another data point - today I took one server down to add a 
network card. Heal Count got up to around 1500 while I was doing that.


Once the server was back up, it started healing right away, and in under an
hour it was done. While it was healing I brought VMs back up on the
node; this was not a problem.


This of course was a clean shutdown. With the previous one, which had issues,
I had killed glusterfsd with VMs still running on that node.



--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Self healing does not see files to heal

2016-08-17 Thread Дмитрий Глушенок
You are right, stat triggers self-heal. Thank you!

--
Dmitry Glushenok
Jet Infosystems

> On 17 Aug 2016, at 13:38, Ravishankar N wrote:
> 
> On 08/17/2016 03:48 PM, Дмитрий Глушенок wrote:
>> Unfortunately not:
>> 
>> Remount FS, then access test file from second client:
>> 
>> [root@srv02 ~]# umount /mnt
>> [root@srv02 ~]# mount -t glusterfs srv01:/test01 /mnt
>> [root@srv02 ~]# ls -l /mnt/passwd 
>> -rw-r--r--. 1 root root 1505 авг 16 19:59 /mnt/passwd
>> [root@srv02 ~]# ls -l /R1/test01/
>> итого 4
>> -rw-r--r--. 2 root root 1505 авг 16 19:59 passwd
>> [root@srv02 ~]# 
>> 
>> Then remount FS and check if accessing the file from second node triggered 
>> self-heal on first node:
>> 
>> [root@srv01 ~]# umount /mnt
>> [root@srv01 ~]# mount -t glusterfs srv01:/test01 /mnt
>> [root@srv01 ~]# ls -l /mnt
> 
> Can you try `stat /mnt/passwd` from this node after remounting? You need to 
> explicitly lookup the file.  `ls -l /mnt`  is only triggering readdir on the 
> parent directory.
> If that doesn't work, is this mount connected to both clients? i.e. if you 
> create a new file from here, is it getting replicated to both bricks?
> 
> -Ravi
> 
>> итого 0
>> [root@srv01 ~]# ls -l /R1/test01/
>> итого 0
>> [root@srv01 ~]#
>> 
>> Nothing appeared.
>> 
>> [root@srv01 ~]# gluster volume info test01
>>  
>> Volume Name: test01
>> Type: Replicate
>> Volume ID: 2c227085-0b06-4804-805c-ea9c1bb11d8b
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: srv01:/R1/test01
>> Brick2: srv02:/R1/test01
>> Options Reconfigured:
>> features.scrub-freq: hourly
>> features.scrub: Active
>> features.bitrot: on
>> transport.address-family: inet
>> performance.readdir-ahead: on
>> nfs.disable: on
>> [root@srv01 ~]# 
>> 
>> [root@srv01 ~]# gluster volume get test01 all | grep heal
>> cluster.background-self-heal-count  8
>>
>> cluster.metadata-self-heal  on   
>>
>> cluster.data-self-heal  on   
>>
>> cluster.entry-self-heal on   
>>
>> cluster.self-heal-daemonon   
>>
>> cluster.heal-timeout600  
>>
>> cluster.self-heal-window-size   1
>>
>> cluster.data-self-heal-algorithm(null)   
>>
>> cluster.self-heal-readdir-size  1KB  
>>
>> cluster.heal-wait-queue-length  128  
>>
>> features.lock-heal  off  
>>
>> features.lock-heal  off  
>>
>> storage.health-check-interval   30   
>>
>> features.ctr_lookupheal_link_timeout300  
>>
>> features.ctr_lookupheal_inode_timeout   300  
>>
>> cluster.disperse-self-heal-daemon   enable   
>>
>> disperse.background-heals   8
>>
>> disperse.heal-wait-qlength  128  
>>
>> cluster.heal-timeout600  
>>
>> cluster.granular-entry-heal no   
>>
>> [root@srv01 ~]#
>> 
>> --
>> Dmitry Glushenok
>> Jet Infosystems
>> 
>>> On 17 Aug 2016, at 11:30, Ravishankar N wrote:
>>> 
>>> On 08/17/2016 01:48 PM, Дмитрий Глушенок wrote:
 Hello Ravi,
 
 Thank you for reply. Found bug number (for those who will google the 
 email) https://bugzilla.redhat.com/show_bug.cgi?id=1112158 
 
 
 Accessing the removed file from mount-point is not always working because 
 we have to find a special client which DHT will point to the brick with 
 removed file. Otherwise the file will be accessed from good brick and 
 self-healing will not happen (just verified). Or by accessing you meant 
 something like touch?
>>> 
>>> Sorry should have been more explicit. I meant triggering a lookup on that 
>>> file with `stat filename`. I don't think you need a special client. DHT 
>>> sends the lookup to AFR which in turn sends to all its children. When one 
>>> of them returns ENOENT (because you removed it from the brick), AFR will 
>>> automatically trigger heal. I'm guessing it is not always working in your 
>>> case due to caching at various levels and the lookup not coming till AFR. 
>>> If you do it from a fresh mount, it should always work.
>>> -Ravi
>>> 
 Dmitry 

Re: [Gluster-users] Self healing does not see files to heal

2016-08-17 Thread Ravishankar N

On 08/17/2016 03:48 PM, Дмитрий Глушенок wrote:

Unfortunately not:

Remount FS, then access test file from second client:

[root@srv02 ~]# umount /mnt
[root@srv02 ~]# mount -t glusterfs srv01:/test01 /mnt
[root@srv02 ~]# ls -l /mnt/passwd
-rw-r--r--. 1 root root 1505 авг 16 19:59 /mnt/passwd
[root@srv02 ~]# ls -l /R1/test01/
итого 4
-rw-r--r--. 2 root root 1505 авг 16 19:59 passwd
[root@srv02 ~]#

Then remount FS and check if accessing the file from second node 
triggered self-heal on first node:


[root@srv01 ~]# umount /mnt
[root@srv01 ~]# mount -t glusterfs srv01:/test01 /mnt
[root@srv01 ~]# ls -l /mnt


Can you try `stat /mnt/passwd` from this node after remounting? You need 
to explicitly lookup the file.  `ls -l /mnt`  is only triggering readdir 
on the parent directory.
If that doesn't work, is this mount connected to both clients? i.e. if 
you create a new file from here, is it getting replicated to both bricks?


-Ravi


итого 0
[root@srv01 ~]# ls -l /R1/test01/
итого 0
[root@srv01 ~]#

Nothing appeared.

[root@srv01 ~]# gluster volume info test01
Volume Name: test01
Type: Replicate
Volume ID: 2c227085-0b06-4804-805c-ea9c1bb11d8b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: srv01:/R1/test01
Brick2: srv02:/R1/test01
Options Reconfigured:
features.scrub-freq: hourly
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
[root@srv01 ~]#

[root@srv01 ~]# gluster volume get test01 all | grep heal
cluster.background-self-heal-count  8
cluster.metadata-self-heal  on
cluster.data-self-heal  on
cluster.entry-self-heal on
cluster.self-heal-daemonon
cluster.heal-timeout600
cluster.self-heal-window-size   1
cluster.data-self-heal-algorithm(null)
cluster.self-heal-readdir-size  1KB
cluster.heal-wait-queue-length  128
features.lock-heal  off
features.lock-heal  off
storage.health-check-interval   30
features.ctr_lookupheal_link_timeout300
features.ctr_lookupheal_inode_timeout   300
cluster.disperse-self-heal-daemon   enable
disperse.background-heals   8
disperse.heal-wait-qlength  128
cluster.heal-timeout600
cluster.granular-entry-heal no
[root@srv01 ~]#

--
Dmitry Glushenok
Jet Infosystems

On 17 Aug 2016, at 11:30, Ravishankar N wrote:


On 08/17/2016 01:48 PM, Дмитрий Глушенок wrote:

Hello Ravi,

Thank you for reply. Found bug number (for those who will google the 
email) https://bugzilla.redhat.com/show_bug.cgi?id=1112158


Accessing the removed file from mount-point is not always working 
because we have to find a special client which DHT will point to the 
brick with removed file. Otherwise the file will be accessed from 
good brick and self-healing will not happen (just verified). Or by 
accessing you meant something like touch?


Sorry should have been more explicit. I meant triggering a lookup on 
that file with `stat filename`. I don't think you need a special 
client. DHT sends the lookup to AFR which in turn sends to all its 
children. When one of them returns ENOENT (because you removed it 
from the brick), AFR will automatically trigger heal. I'm guessing it 
is not always working in your case due to caching at various levels 
and the lookup not coming till AFR. If you do it from a fresh mount 
,it should always work.

-Ravi


Dmitry Glushenok
Jet Infosystems

On 17 Aug 2016, at 4:24, Ravishankar N wrote:


On 08/16/2016 10:44 PM, Дмитрий Глушенок wrote:

Hello,

While testing healing after bitrot error it was found that self 
healing cannot heal files which were manually deleted from brick. 
Gluster 3.8.1:


- Create volume, mount it locally and copy test file to it
[root@srv01 ~]# gluster volume create test01 replica 2 
 srv01:/R1/test01 srv02:/R1/test01

volume create: test01: success: please start the volume to access data
[root@srv01 ~]# gluster volume start test01
volume start: test01: success
[root@srv01 ~]# mount -t glusterfs srv01:/test01 /mnt
[root@srv01 ~]# cp /etc/passwd /mnt
[root@srv01 ~]# ls -l /mnt
итого 2
-rw-r--r--. 1 root root 1505 авг 16 19:59 passwd

- Then remove test file from first brick like we have to do in 
case of bitrot error in the file


You also need to remove all hard-links to the corrupted file from 
the brick, including the one in the .glusterfs folder.
There is a bug in heal-full that prevents it from crawling all 
bricks of the replica. The right way to heal the corrupted files as 
of now is to access them from the mount-point like you did after 
removing the hard-links. The list of files that are corrupted can 
be obtained with the scrub status command.


Hope this helps,
Ravi


[root@srv01 ~]# rm 

Re: [Gluster-users] Self healing does not see files to heal

2016-08-17 Thread Дмитрий Глушенок
Unfortunately not:

Remount FS, then access test file from second client:

[root@srv02 ~]# umount /mnt
[root@srv02 ~]# mount -t glusterfs srv01:/test01 /mnt
[root@srv02 ~]# ls -l /mnt/passwd 
-rw-r--r--. 1 root root 1505 авг 16 19:59 /mnt/passwd
[root@srv02 ~]# ls -l /R1/test01/
итого 4
-rw-r--r--. 2 root root 1505 авг 16 19:59 passwd
[root@srv02 ~]# 

Then remount FS and check if accessing the file from second node triggered 
self-heal on first node:

[root@srv01 ~]# umount /mnt
[root@srv01 ~]# mount -t glusterfs srv01:/test01 /mnt
[root@srv01 ~]# ls -l /mnt
итого 0
[root@srv01 ~]# ls -l /R1/test01/
итого 0
[root@srv01 ~]#

Nothing appeared.

[root@srv01 ~]# gluster volume info test01
 
Volume Name: test01
Type: Replicate
Volume ID: 2c227085-0b06-4804-805c-ea9c1bb11d8b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: srv01:/R1/test01
Brick2: srv02:/R1/test01
Options Reconfigured:
features.scrub-freq: hourly
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
[root@srv01 ~]# 

[root@srv01 ~]# gluster volume get test01 all | grep heal
cluster.background-self-heal-count  8   
cluster.metadata-self-heal  on  
cluster.data-self-heal  on  
cluster.entry-self-heal on  
cluster.self-heal-daemonon  
cluster.heal-timeout600 
cluster.self-heal-window-size   1   
cluster.data-self-heal-algorithm(null)  
cluster.self-heal-readdir-size  1KB 
cluster.heal-wait-queue-length  128 
features.lock-heal  off 
features.lock-heal  off 
storage.health-check-interval   30  
features.ctr_lookupheal_link_timeout300 
features.ctr_lookupheal_inode_timeout   300 
cluster.disperse-self-heal-daemon   enable  
disperse.background-heals   8   
disperse.heal-wait-qlength  128 
cluster.heal-timeout600 
cluster.granular-entry-heal no  
[root@srv01 ~]#

--
Dmitry Glushenok
Jet Infosystems

> On 17 Aug 2016, at 11:30, Ravishankar N wrote:
> 
> On 08/17/2016 01:48 PM, Дмитрий Глушенок wrote:
>> Hello Ravi,
>> 
>> Thank you for reply. Found bug number (for those who will google the email) 
>> https://bugzilla.redhat.com/show_bug.cgi?id=1112158 
>> 
>> 
>> Accessing the removed file from mount-point is not always working because we 
>> have to find a special client which DHT will point to the brick with removed 
>> file. Otherwise the file will be accessed from good brick and self-healing 
>> will not happen (just verified). Or by accessing you meant something like 
>> touch?
> 
> Sorry should have been more explicit. I meant triggering a lookup on that 
> file with `stat filename`. I don't think you need a special client. DHT sends 
> the lookup to AFR which in turn sends to all its children. When one of them 
> returns ENOENT (because you removed it from the brick), AFR will 
> automatically trigger heal. I'm guessing it is not always working in your 
> case due to caching at various levels and the lookup not coming till AFR. 
> If you do it from a fresh mount, it should always work.
> -Ravi
> 
>> Dmitry Glushenok
>> Jet Infosystems
>> 
>>> On 17 Aug 2016, at 4:24, Ravishankar N wrote:
>>> 
>>> On 08/16/2016 10:44 PM, Дмитрий Глушенок wrote:
 Hello,
 
 While testing healing after bitrot error it was found that self healing 
 cannot heal files which were manually deleted from brick. Gluster 3.8.1:
 
 - Create volume, mount it locally and copy test file to it
 [root@srv01 ~]# gluster volume create test01 replica 2  srv01:/R1/test01 
 srv02:/R1/test01
 volume create: test01: success: please start the volume to access data
 [root@srv01 ~]# gluster volume start test01
 volume start: test01: success
 [root@srv01 ~]# mount -t glusterfs srv01:/test01 /mnt
 [root@srv01 ~]# cp /etc/passwd /mnt
 [root@srv01 ~]# ls -l /mnt
 итого 2
 -rw-r--r--. 1 root root 1505 авг 16 19:59 passwd
 
 - Then remove test file from 

[Gluster-users] Input/Output error only when some files exists

2016-08-17 Thread Berkay Unal
Hi,

I have a strange issue and any help would be much appreciated.

Here is my setup. The servers are Ubuntu 14.04 and I am using the repo
ppa:gluster/glusterfs-3.5 for the Gluster server.

Server A (Gluster): I am using the Gluster server with no replication, so it
is more like (GlusterFS single-server NFS style). Files are located under
/gluster-storage/

Server B (Client): I have a web server that needs shared storage. The
Gluster server is mounted to /storage-pool/site/

The mounted volume on the client is used by a CMS, so it is shared storage
for multiple CMS web servers. The current setup was working with no problem
until today. When I try to list the files on the client for the folder
/storage-pool/site/content I get the following error:

"ls: cannot open directory .: No such file or directory"

ls was working with no problem until now.

When I delete some files from this folder on Server A (the Gluster server),
ls starts to work again. If I create the same files again on the server,
the client's ls has the problem again.

So if certain files exist in that folder, ls is broken and I am getting an
Input/Output error.

I hope I could explain the problem. Are there any recommendations?

Any help would be much appreciated. Thanks

--
Berkay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] REMINDER: Gluster Community meeting (Wednesday 17th Aug 2016)

2016-08-17 Thread Mohammed Rafi K C


Hi all,

The weekly Gluster community meeting is about to take place three
hours from now.

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Wednesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-community-meetings

Currently the following items are listed:
* GlusterFS 4.0
* GlusterFS 3.9
* GlusterFS 3.8
* GlusterFS 3.7
* GlusterFS 3.6
* Related projects
* Last week's AIs
* Open Floor

If you have any topic that needs to be discussed, please add it to the Open
Floor section as a sub-topic.

Appreciate your participation.

Regards,
Rafi KC
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Self healing does not see files to heal

2016-08-17 Thread Ravishankar N

On 08/17/2016 01:48 PM, Дмитрий Глушенок wrote:

Hello Ravi,

Thank you for reply. Found bug number (for those who will google the 
email) https://bugzilla.redhat.com/show_bug.cgi?id=1112158


Accessing the removed file from mount-point is not always working 
because we have to find a special client which DHT will point to the 
brick with removed file. Otherwise the file will be accessed from good 
brick and self-healing will not happen (just verified). Or by 
accessing you meant something like touch?


Sorry, I should have been more explicit. I meant triggering a lookup on
that file with `stat filename`. I don't think you need a special client.
DHT sends the lookup to AFR, which in turn sends it to all its children.
When one of them returns ENOENT (because you removed it from the brick),
AFR will automatically trigger a heal. I'm guessing it is not always
working in your case due to caching at various levels and the lookup not
reaching AFR. If you do it from a fresh mount, it should always work.
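
A minimal sketch of the whole sequence, reusing the srv01:/test01 volume from
your transcript (the gfid-based path under .glusterfs is a placeholder that you
derive from the getfattr output):

# on the bad brick: find the corrupted files and remove them plus their hard-links
gluster volume bitrot test01 scrub status           # lists the corrupted files per brick
getfattr -n trusted.gfid -e hex /R1/test01/passwd   # gfid -> .glusterfs/<g1g2>/<g3g4>/<gfid>
rm /R1/test01/passwd
rm /R1/test01/.glusterfs/GF/ID/GFID-GOES-HERE       # placeholder, built from the gfid above

# from a fresh client mount: an explicit named lookup triggers the heal
umount /mnt
mount -t glusterfs srv01:/test01 /mnt
stat /mnt/passwd
ls -l /R1/test01/                                   # the file should reappear on that brick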

-Ravi


Dmitry Glushenok
Jet Infosystems

On 17 Aug 2016, at 4:24, Ravishankar N wrote:


On 08/16/2016 10:44 PM, Дмитрий Глушенок wrote:

Hello,

While testing healing after bitrot error it was found that self 
healing cannot heal files which were manually deleted from brick. 
Gluster 3.8.1:


- Create volume, mount it locally and copy test file to it
[root@srv01 ~]# gluster volume create test01 replica 2 
 srv01:/R1/test01 srv02:/R1/test01

volume create: test01: success: please start the volume to access data
[root@srv01 ~]# gluster volume start test01
volume start: test01: success
[root@srv01 ~]# mount -t glusterfs srv01:/test01 /mnt
[root@srv01 ~]# cp /etc/passwd /mnt
[root@srv01 ~]# ls -l /mnt
итого 2
-rw-r--r--. 1 root root 1505 авг 16 19:59 passwd

- Then remove test file from first brick like we have to do in case 
of bitrot error in the file


You also need to remove all hard-links to the corrupted file from the 
brick, including the one in the .glusterfs folder.
There is a bug in heal-full that prevents it from crawling all bricks 
of the replica. The right way to heal the corrupted files as of now 
is to access them from the mount-point like you did after removing 
the hard-links. The list of files that are corrupted can be obtained 
with the scrub status command.


Hope this helps,
Ravi


[root@srv01 ~]# rm /R1/test01/passwd
[root@srv01 ~]# ls -l /mnt
итого 0
[root@srv01 ~]#

- Issue full self heal
[root@srv01 ~]# gluster volume heal test01 full
Launching heal operation to perform full self heal on volume test01 
has been successful

Use heal info commands to check status
[root@srv01 ~]# tail -2 /var/log/glusterfs/glustershd.log
[2016-08-16 16:59:56.483767] I [MSGID: 108026] 
[afr-self-heald.c:611:afr_shd_full_healer] 0-test01-replicate-0: 
starting full sweep on subvol test01-client-0
[2016-08-16 16:59:56.486560] I [MSGID: 108026] 
[afr-self-heald.c:621:afr_shd_full_healer] 0-test01-replicate-0: 
finished full sweep on subvol test01-client-0


- Now we still see no files in mount point (it becomes empty right 
after removing file from the brick)

[root@srv01 ~]# ls -l /mnt
итого 0
[root@srv01 ~]#

- Then try to access file by using full name (lookup-optimize and 
readdir-optimize are turned off by default). Now glusterfs shows the 
file!

[root@srv01 ~]# ls -l /mnt/passwd
-rw-r--r--. 1 root root 1505 авг 16 19:59 /mnt/passwd

- And it reappeared in the brick
[root@srv01 ~]# ls -l /R1/test01/
итого 4
-rw-r--r--. 2 root root 1505 авг 16 19:59 passwd
[root@srv01 ~]#

Is it a bug or we can tell self heal to scan all files on all bricks 
in the volume?


--
Dmitry Glushenok
Jet Infosystems

___
Gluster-users mailing list
Gluster-users@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Self healing does not see files to heal

2016-08-17 Thread Дмитрий Глушенок
Hello Ravi,

Thank you for the reply. I found the bug number (for those who will google the
email): https://bugzilla.redhat.com/show_bug.cgi?id=1112158

Accessing the removed file from the mount-point does not always work, because we
have to find a special client which DHT will point to the brick with the removed
file. Otherwise the file will be accessed from the good brick and self-healing
will not happen (just verified). Or by accessing did you mean something like touch?

--
Dmitry Glushenok
Jet Infosystems

> On 17 Aug 2016, at 4:24, Ravishankar N wrote:
> 
> On 08/16/2016 10:44 PM, Дмитрий Глушенок wrote:
>> Hello,
>> 
>> While testing healing after bitrot error it was found that self healing 
>> cannot heal files which were manually deleted from brick. Gluster 3.8.1:
>> 
>> - Create volume, mount it locally and copy test file to it
>> [root@srv01 ~]# gluster volume create test01 replica 2  srv01:/R1/test01 
>> srv02:/R1/test01
>> volume create: test01: success: please start the volume to access data
>> [root@srv01 ~]# gluster volume start test01
>> volume start: test01: success
>> [root@srv01 ~]# mount -t glusterfs srv01:/test01 /mnt
>> [root@srv01 ~]# cp /etc/passwd /mnt
>> [root@srv01 ~]# ls -l /mnt
>> итого 2
>> -rw-r--r--. 1 root root 1505 авг 16 19:59 passwd
>> 
>> - Then remove test file from first brick like we have to do in case of 
>> bitrot error in the file
> 
> You also need to remove all hard-links to the corrupted file from the brick, 
> including the one in the .glusterfs folder.
> There is a bug in heal-full that prevents it from crawling all bricks of the 
> replica. The right way to heal the corrupted files as of now is to access 
> them from the mount-point like you did after removing the hard-links. The 
> list of files that are corrupted can be obtained with the scrub status 
> command.
> 
> Hope this helps,
> Ravi
> 
>> [root@srv01 ~]# rm /R1/test01/passwd
>> [root@srv01 ~]# ls -l /mnt
>> итого 0
>> [root@srv01 ~]#
>> 
>> - Issue full self heal
>> [root@srv01 ~]# gluster volume heal test01 full
>> Launching heal operation to perform full self heal on volume test01 has been 
>> successful
>> Use heal info commands to check status
>> [root@srv01 ~]# tail -2 /var/log/glusterfs/glustershd.log
>> [2016-08-16 16:59:56.483767] I [MSGID: 108026] 
>> [afr-self-heald.c:611:afr_shd_full_healer] 0-test01-replicate-0: starting 
>> full sweep on subvol test01-client-0
>> [2016-08-16 16:59:56.486560] I [MSGID: 108026] 
>> [afr-self-heald.c:621:afr_shd_full_healer] 0-test01-replicate-0: finished 
>> full sweep on subvol test01-client-0
>> 
>> - Now we still see no files in mount point (it becomes empty right after 
>> removing file from the brick)
>> [root@srv01 ~]# ls -l /mnt
>> итого 0
>> [root@srv01 ~]#
>> 
>> - Then try to access file by using full name (lookup-optimize and 
>> readdir-optimize are turned off by default). Now glusterfs shows the file!
>> [root@srv01 ~]# ls -l /mnt/passwd
>> -rw-r--r--. 1 root root 1505 авг 16 19:59 /mnt/passwd
>> 
>> - And it reappeared in the brick
>> [root@srv01 ~]# ls -l /R1/test01/
>> итого 4
>> -rw-r--r--. 2 root root 1505 авг 16 19:59 passwd
>> [root@srv01 ~]#
>> 
>> Is it a bug or we can tell self heal to scan all files on all bricks in the 
>> volume?
>> 
>> --
>> Dmitry Glushenok
>> Jet Infosystems
>> 
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org 
>> http://www.gluster.org/mailman/listinfo/gluster-users 
>> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Self healing does not see files to heal

2016-08-17 Thread Ravishankar N

On 08/17/2016 10:40 AM, Krutika Dhananjay wrote:

Good question.

Any attempt from a client to access /.shard or its contents from the 
mount point will be met with an EPERM (Operation not permitted). We do 
not expose .shard on the mount point.




Just to be clear, I was referring to the shard xlator accessing the
participant shard by sending a named lookup when we access the file (say
`cat /mount/file > /dev/null`) from the mount.
I removed a shard and its hard-link from one of the bricks of a 2-way
replica, unmounted the client, stopped and started the volume, and then
read the file from a fresh mount. For some reason (I need to debug why),
a reverse heal seems to be happening, where both bricks of the 2-replica
volume end up with a zero-byte file for the shard in question.

-Ravi


-Krutika

On Wed, Aug 17, 2016 at 10:04 AM, Ravishankar N 
> wrote:


On 08/17/2016 07:25 AM, Lindsay Mathieson wrote:

On 17 August 2016 at 11:24, Ravishankar N
> wrote:

The right way to heal the corrupted files as of now is to
access them from
the mount-point like you did after removing the
hard-links. The list of
files that are corrupted can be obtained with the scrub
status command.


Hows that work with sharding where you can't see the shards
from the
mount point?

If sharding xlator does a named lookup of the shard in question as
and when it is accessed, AFR can heal it. But I'm not sure if that
is the case though. Let me check and get back.
-Ravi



___
Gluster-users mailing list
Gluster-users@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-users





___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users