Re: [Gluster-users] 3.8.3 Shards Healing Glacier Slow

2016-09-01 Thread David Gossage
On Thu, Sep 1, 2016 at 12:09 AM, Krutika Dhananjay 
wrote:

>
>
> On Wed, Aug 31, 2016 at 8:13 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> Just as a test I did not shut down the one VM on the cluster, as finding a
>> window before the weekend where I can shut down all VMs and fit in a full heal
>> is unlikely, so I wanted to see what occurs.
>>
>>
>> kill -15 brick pid
>> rm -Rf /gluster2/brick1/1
>> mkdir /gluster2/brick1/1
>> mkdir /rhev/data-center/mnt/glusterSD/192.168.71.10\:_glustershard/fake3
>> setfattr -n "user.some-name" -v "some-value"
>> /rhev/data-center/mnt/glusterSD/192.168.71.10\:_glustershard
>>
>> getfattr -d -m . -e hex /gluster2/brick2/1
>> # file: gluster2/brick2/1
>> security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f7
>> 23a756e6c6162656c65645f743a733000
>> trusted.afr.dirty=0x0001
>> trusted.afr.glustershard-client-0=0x0002
>>
>
> This is unusual. The last digit ought to have been 1 on account of "fake3"
> being created while the first brick is offline.
>
> This discussion is becoming unnecessarily lengthy. Mind if we discuss this
> and sort it out on IRC today? At least the communication will be continuous
> and in real time. I'm kdhananjay on #gluster (Freenode). Ping me when
> you're online.
>
> -Krutika
>

Thanks for the assistance this morning.  Looks like I lost my connection in IRC
and didn't realize it, so sorry if you came back looking for me.  Let me
know when the steps you worked out have been reviewed and found safe for
production use, and I'll give them a try.
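A quick way to keep an eye on the remaining entries while testing, using the
volume name from the transcript above, is something like:

gluster volume heal glustershard info
gluster volume heal glustershard statistics heal-count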



>
>
>> trusted.afr.glustershard-client-2=0x
>> trusted.gfid=0x0001
>> trusted.glusterfs.dht=0x0001
>> trusted.glusterfs.volume-id=0x5889332e50ba441e8fa5cce3ae6f3a15
>> user.some-name=0x736f6d652d76616c7565
>>
>> getfattr -d -m . -e hex /gluster2/brick3/1
>> # file: gluster2/brick3/1
>> security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f7
>> 23a756e6c6162656c65645f743a733000
>> trusted.afr.dirty=0x0001
>> trusted.afr.glustershard-client-0=0x0002
>> trusted.gfid=0x0001
>> trusted.glusterfs.volume-id=0x5889332e50ba441e8fa5cce3ae6f3a15
>> user.some-name=0x736f6d652d76616c7565
>>
>> setfattr -n trusted.afr.glustershard-client-0 -v
>> 0x00010002 /gluster2/brick2/1
>> setfattr -n trusted.afr.glustershard-client-0 -v
>> 0x00010002 /gluster2/brick3/1
>>
>> getfattr -d -m . -e hex /gluster2/brick3/1/
>> getfattr: Removing leading '/' from absolute path names
>> # file: gluster2/brick3/1/
>> security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f7
>> 23a756e6c6162656c65645f743a733000
>> trusted.afr.dirty=0x
>> trusted.afr.glustershard-client-0=0x00010002
>> trusted.gfid=0x0001
>> trusted.glusterfs.dht=0x0001
>> trusted.glusterfs.volume-id=0x5889332e50ba441e8fa5cce3ae6f3a15
>> user.some-name=0x736f6d652d76616c7565
>>
>> getfattr -d -m . -e hex /gluster2/brick2/1/
>> getfattr: Removing leading '/' from absolute path names
>> # file: gluster2/brick2/1/
>> security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f7
>> 23a756e6c6162656c65645f743a733000
>> trusted.afr.dirty=0x
>> trusted.afr.glustershard-client-0=0x00010002
>> trusted.afr.glustershard-client-2=0x
>> trusted.gfid=0x0001
>> trusted.glusterfs.dht=0x0001
>> trusted.glusterfs.volume-id=0x5889332e50ba441e8fa5cce3ae6f3a15
>> user.some-name=0x736f6d652d76616c7565
>>
>> gluster v start glustershard force
>>
>> gluster heal counts climbed up and down a little as it healed everything
>> in the visible gluster mount and in .glusterfs for the visible mount files,
>> then stalled with around 15 shards and the fake3 directory still in the list
>>
>> getfattr -d -m . -e hex /gluster2/brick2/1/
>> getfattr: Removing leading '/' from absolute path names
>> # file: gluster2/brick2/1/
>> security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f7
>> 23a756e6c6162656c65645f743a733000
>> trusted.afr.dirty=0x
>> trusted.afr.glustershard-client-0=0x0001
>> trusted.afr.glustershard-client-2=0x
>> trusted.gfid=0x0001
>> trusted.glusterfs.dht=0x0001
>> trusted.glusterfs.volume-id=0x5889332e50ba441e8fa5cce3ae6f3a15
>> user.some-name=0x736f6d652d76616c7565
>>
>> getfattr -d -m . -e hex /gluster2/brick3/1/
>> getfattr: Removing leading '/' from absolute path names
>> # file: gluster2/brick3/1/
>> security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f7
>> 23a756e6c6162656c65645f743a733000
>> trusted.afr.dirty=0x
>> trusted.afr.glustershard-client-0=0x0001
>> 

[Gluster-users] Invitation to join Cloud Openstack Abidjan

2016-09-01 Thread Ousmane Sanogo

Cloud Openstack Abidjan


Join Ousmane Sanogo and 2 other members in Abidjan. Be the first to hear
about upcoming Meetups.

This group is for everyone interested in the open cloud with
Openstack, users and developers in the open-source world.
The goal of the group is to create a community arou...

--

Accept the invitation

https://secure.meetup.com/n/?s=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJkZXN0IjoiaHR0cHM6Ly9zZWN1cmUubWVldHVwLmNvbS9yZWdpc3Rlci8_Z2o9ZWo0cyZjPTIwMzc3NTI2Jl94dGQ9Z3F0bGJXRnBiRjlqYkdsamE5b0FKRFF3WlRaa09UVXhMV1U1T0RNdE5HRTRNeTFoWldNeUxXRTNOV0ZoTmpZek5UY3pNNnBwYm5acGRHVmxYMmxrcURFeU56RTFNVEkyJnJnPWVqNHMmY3R4PWludiZ0YXJnZXRVcmk9aHR0cHMlM0ElMkYlMkZ3d3cubWVldHVwLmNvbSUyRkNsb3VkLU9wZW5zdGFjay1BYmlkamFuJTJGJTNGZ2olM0RlajRzJTI2cmclM0RlajRzIiwiaG9vayI6ImludiIsImVtYWlsX2ludml0ZWVfaWQiOjEyNzE1MTI2LCJpYXQiOjE0NzI3NTQ4MDcsImp0aSI6ImIyZDEwMmI3LTkwMTItNDQ0ZC1iZmMyLTA1MzcwMTQ1YjQ4MCIsImV4cCI6MTQ3Mzk2NDQwN30%3D.97uucX7xlQ6ehrmY5-HJ-hH-OMjXWRtOuQwIp5wOhvc%3D

--

---
Message sent by Meetup on behalf of Ousmane Sanogo
(https://www.meetup.com/Cloud-Openstack-Abidjan/members/212092234/) from
Cloud Openstack Abidjan.


Have a question? You can contact Meetup support via supp...@meetup.com

I no longer wish to receive this type of e-mail 
(https://secure.meetup.com/n/?s=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJob29rIjoiaW52X29wdG91dCIsImRlc3QiOiJodHRwczovL3d3dy5tZWV0dXAuY29tL2FjY291bnQvb3B0b3V0Lz9zdWJtaXQ9dHJ1ZSZlbz10YzImZW1haWw9aW52aXRlJl9tc191bnN1Yj10cnVlIiwiZW1haWwiOiJnbHVzdGVyLXVzZXJzQGdsdXN0ZXIub3JnIiwiaW52aXRlcl9pZCI6MjEyMDkyMjM0LCJpYXQiOjE0NzI3NTQ4MDcsImp0aSI6ImFhNDdhYjdjLTY3MTAtNDU4Mi05ZTkzLWMwM2RmNmFlZGI1MSIsImV4cCI6MTQ3Mzk2NDQwN30%3D.p-pY-frWnI3gofZMnNHyx9Jk0DPtoQrgNOo7TxPZc0A%3D)

Meetup Inc. (https://www.meetup.com/), POB 4668 #37895 New York NY USA 10163
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] group write permissions not being respected

2016-09-01 Thread Pat Haley


Hi Pranith,

The capture is attached as capture.pcap.


On 09/01/2016 01:01 PM, Pranith Kumar Karampuri wrote:
You need to capture the traffic to a file so that we can open the tcpdump in
Wireshark and inspect the uid/gid etc. that are going out on the wire.


On Thu, Sep 1, 2016 at 10:04 PM, Pat Haley > wrote:



Hi Pranith,

Here is the output when I'm trying a touch command that fails with
"Permission denied"

[root@compute-11-10 ~]# tcpdump -nnSs 0 host 10.1.1.4
tcpdump: verbose output suppressed, use -v or -vv for full
protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535
bytes
12:30:46.248293 IP 10.255.255.124.4215828946 > 10.1.1.4.2049: 208
getattr fh 0,0/22
12:30:46.252509 IP 10.1.1.4.2049 > 10.255.255.124.4215828946:
reply ok 240 getattr NON 3 ids 0/3 sz 0
12:30:46.252596 IP 10.255.255.124.4232606162  >
10.1.1.4.2049: 300 getattr fh 0,0/22
12:30:46.253308 IP 10.1.1.4.2049 > 10.255.255.124.4232606162:
reply ok 52 getattr ERROR: Permission denied
12:30:46.253358 IP 10.255.255.124.4249383378  >
10.1.1.4.2049: 216 getattr fh 0,0/22
12:30:46.260347 IP 10.1.1.4.2049 > 10.255.255.124.4249383378:
reply ok 52 getattr ERROR: No such file or directory
12:30:46.300306 IP 10.255.255.124.931 > 10.1.1.4.2049: Flags [.],
ack 1979284005, win 501, options [nop,nop,TS val 490628016 ecr
75449144], length 0
^C
7 packets captured
7 packets received by filter
0 packets dropped by kernel


On 09/01/2016 03:31 AM, Pranith Kumar Karampuri wrote:

hi Pat,
   I think the other thing we should probably look for would
be to see the tcp dump of what uid/gid parameters are sent over the
network when this command is executed.

On Thu, Sep 1, 2016 at 7:14 AM, Pat Haley > wrote:




hi Pat,
  Are you seeing this issue only after migration or even
before? Maybe we should look at the gid numbers on the disk
and the ones that are coming from the client for the given user
to see if they match or not?


-
This issue was not being seen before the migration.  We have
copied the /etc/passwd and /etc/group files from the
front-end machine (the client) to the data server, so they
all match

-

Could you give stat output of the directory in question from
both the brick and the nfs client



--
From the server for gluster:
[root@mseas-data2 ~]# stat /gdata/projects/nsf_alpha
  File: `/gdata/projects/nsf_alpha'
  Size: 4096  Blocks: 8  IO Block: 131072 directory
Device: 13h/19dInode: 13094773206281819436  Links: 13
Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
Access: 2016-08-31 19:08:59.735990904 -0400
Modify: 2016-08-31 16:37:09.048997167 -0400
Change: 2016-08-31 16:37:41.315997148 -0400

From the server for first underlying brick
[root@mseas-data2 ~]# stat /mnt/brick1/projects/nsf_alpha/
  File: `/mnt/brick1/projects/nsf_alpha/'
  Size: 4096  Blocks: 8 IO Block: 4096   directory
Device: 800h/2048dInode: 185630 Links: 13
Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
Access: 2016-08-31 19:08:59.669990907 -0400
Modify: 2016-08-31 16:37:09.048997167 -0400
Change: 2016-08-31 16:37:41.315997148 -0400

From the server for second underlying brick
[root@mseas-data2 ~]# stat /mnt/brick2/projects/nsf_alpha/
  File: `/mnt/brick2/projects/nsf_alpha/'
  Size: 4096  Blocks: 8 IO Block: 4096   directory
Device: 810h/2064dInode: 24085468 Links: 13
Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
Access: 2016-08-31 19:08:59.735990904 -0400
Modify: 2016-08-03 14:01:52.0 -0400
Change: 2016-08-31 16:37:41.315997148 -0400

From the client
[root@mseas FixOwn]# stat /gdata/projects/nsf_alpha
  File: `/gdata/projects/nsf_alpha'
  Size: 4096  Blocks: 8 IO Block: 1048576 directory
Device: 23h/35dInode: 13094773206281819436  Links: 13
Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
Access: 2016-08-31 19:08:59.735990904 -0400
Modify: 2016-08-31 16:37:09.048997167 -0400
Change: 

[Gluster-users] bug-upcall-stat.t always fails on master

2016-09-01 Thread Ravishankar N


Test Summary Report
---
./tests/bugs/upcall/bug-upcall-stat.t (Wstat: 0 Tests: 16 Failed: 2)
  Failed tests:  15-16

https://build.gluster.org/job/centos6-regression/470/consoleFull
https://build.gluster.org/job/centos6-regression/471/consoleFull
https://build.gluster.org/job/centos6-regression/469/console

Please take a look. It's failing locally as well.
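If it helps, a quick way to reproduce this locally is to run the test directly
with prove from the source tree (roughly, as root):

prove -vf ./tests/bugs/upcall/bug-upcall-stat.t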

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] group write permissions not being respected

2016-09-01 Thread Pranith Kumar Karampuri
You need to capture the traffic to a file so that we can open the tcpdump in
Wireshark and inspect the uid/gid etc. that are going out on the wire.
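Something along these lines should work (interface and server address taken
from your earlier trace; the capture file name is just an example):

tcpdump -i eth1 -s 0 -w /tmp/capture.pcap host 10.1.1.4
# reproduce the failing touch, stop the capture with Ctrl-C, then open
# /tmp/capture.pcap in Wireshark and look at the uid/gid in the RPC
# credentials of the NFS calls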

On Thu, Sep 1, 2016 at 10:04 PM, Pat Haley  wrote:

>
> Hi Pranith,
>
> Here is the output when I'm trying a touch command that fails with
> "Permission denied"
>
> [root@compute-11-10 ~]# tcpdump -nnSs 0 host 10.1.1.4
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
> 12:30:46.248293 IP 10.255.255.124.4215828946 > 10.1.1.4.2049: 208 getattr
> fh 0,0/22
> 12:30:46.252509 IP 10.1.1.4.2049 > 10.255.255.124.4215828946: reply ok 240
> getattr NON 3 ids 0/3 sz 0
> 12:30:46.252596 IP 10.255.255.124.4232606162 > 10.1.1.4.2049: 300 getattr
> fh 0,0/22
> 12:30:46.253308 IP 10.1.1.4.2049 > 10.255.255.124.4232606162: reply ok 52
> getattr ERROR: Permission denied
> 12:30:46.253358 IP 10.255.255.124.4249383378 > 10.1.1.4.2049: 216 getattr
> fh 0,0/22
> 12:30:46.260347 IP 10.1.1.4.2049 > 10.255.255.124.4249383378: reply ok 52
> getattr ERROR: No such file or directory
> 12:30:46.300306 IP 10.255.255.124.931 > 10.1.1.4.2049: Flags [.], ack
> 1979284005, win 501, options [nop,nop,TS val 490628016 ecr 75449144],
> length 0
> ^C
> 7 packets captured
> 7 packets received by filter
> 0 packets dropped by kernel
>
>
> On 09/01/2016 03:31 AM, Pranith Kumar Karampuri wrote:
>
> hi Pat,
>I think the other thing we should probably look for would be to see
> the tcp dump of what uid/gid parameters are sent over the network when this
> command is executed.
>
> On Thu, Sep 1, 2016 at 7:14 AM, Pat Haley  wrote:
>
>> 
>> 
>>
>> hi Pat,
>>   Are you seeing this issue only after migration or even before? Maybe
>> we should look at the gid numbers on the disk and the ones that are
>> coming from the client for the given user to see if they match or not?
>>
>> 
>> -
>> This issue was not being seen before the migration.  We have copied the
>> /etc/passwd and /etc/group files from the front-end machine (the client) to
>> the data server, so they all match
>> 
>> -
>>
>> Could you give stat output of the directory in question from both the
>> brick and the nfs client
>>
>> 
>> --
>> From the server for gluster:
>> [root@mseas-data2 ~]# stat /gdata/projects/nsf_alpha
>>   File: `/gdata/projects/nsf_alpha'
>>   Size: 4096  Blocks: 8  IO Block: 131072 directory
>> Device: 13h/19dInode: 13094773206281819436  Links: 13
>> Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
>> Access: 2016-08-31 19:08:59.735990904 -0400
>> Modify: 2016-08-31 16:37:09.048997167 -0400
>> Change: 2016-08-31 16:37:41.315997148 -0400
>>
>> From the server for first underlying brick
>> [root@mseas-data2 ~]# stat /mnt/brick1/projects/nsf_alpha/
>>   File: `/mnt/brick1/projects/nsf_alpha/'
>>   Size: 4096  Blocks: 8  IO Block: 4096   directory
>> Device: 800h/2048dInode: 185630  Links: 13
>> Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
>> Access: 2016-08-31 19:08:59.669990907 -0400
>> Modify: 2016-08-31 16:37:09.048997167 -0400
>> Change: 2016-08-31 16:37:41.315997148 -0400
>>
>> From the server for second underlying brick
>> [root@mseas-data2 ~]# stat /mnt/brick2/projects/nsf_alpha/
>>   File: `/mnt/brick2/projects/nsf_alpha/'
>>   Size: 4096  Blocks: 8  IO Block: 4096   directory
>> Device: 810h/2064dInode: 24085468Links: 13
>> Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
>> Access: 2016-08-31 19:08:59.735990904 -0400
>> Modify: 2016-08-03 14:01:52.0 -0400
>> Change: 2016-08-31 16:37:41.315997148 -0400
>>
>> From the client
>> [root@mseas FixOwn]# stat /gdata/projects/nsf_alpha
>>   File: `/gdata/projects/nsf_alpha'
>>   Size: 4096  Blocks: 8  IO Block: 1048576 directory
>> Device: 23h/35dInode: 13094773206281819436  Links: 13
>> Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
>> Access: 2016-08-31 19:08:59.735990904 -0400
>> Modify: 2016-08-31 16:37:09.048997167 -0400
>> Change: 2016-08-31 16:37:41.315997148 -0400
>>
>> 
>> 
>>
>> Could you also let us know version of gluster you are using
>>
>> 
>> -
>>
>>
>> [root@mseas-data2 ~]# gluster --version
>> glusterfs 3.7.11 built on Apr 27 2016 14:09:22
>>
>> [root@mseas-data2 ~]# gluster volume 

Re: [Gluster-users] CFP for Gluster Developer Summit

2016-09-01 Thread Amye Scavarda
Thanks all! The CfP is closed as of yesterday.
We'll be reaching out next week about the selected talks.

Let me know if you have further questions.
- amye

On Fri, Aug 12, 2016 at 12:48 PM, Vijay Bellur  wrote:

> Hey All,
>
> Gluster Developer Summit 2016 is fast approaching [1]. We are
> looking to have talks and discussions related to the following themes in
> the summit:
>
> 1. Gluster.Next - focusing on features shaping the future of Gluster
>
> 2. Experience - Description of real world experience and feedback from:
>a> Devops and Users deploying Gluster in production
>b> Developers integrating Gluster with other ecosystems
>
> 3. Use cases  - focusing on key use cases that drive Gluster.today and
> Gluster.Next
>
> 4. Stability & Performance - focusing on current improvements to reduce
> our technical debt backlog
>
> 5. Process & infrastructure  - focusing on improving current workflow,
> infrastructure to make life easier for all of us!
>
> If you have a talk/discussion proposal that can be part of these themes,
> please send out your proposal(s) by replying to this thread. Please clearly
> mention the theme for which your proposal is relevant when you do so. We
> will be ending the CFP by 12 midnight PDT on August 31st, 2016.
>
> If you have other topics that do not fit in the themes listed, please feel
> free to propose and we might be able to accommodate some of them as
> lightning talks or something similar.
>
> Please do reach out to me or Amye if you have any questions.
>
> Thanks!
> Vijay
>
> [1] https://www.gluster.org/events/summit2016/
>



-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] group write permissions not being respected

2016-09-01 Thread Pat Haley


Hi Pranith,

Here is the output when I'm trying a touch command that fails with 
"Permission denied"


[root@compute-11-10 ~]# tcpdump -nnSs 0 host 10.1.1.4
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
12:30:46.248293 IP 10.255.255.124.4215828946 > 10.1.1.4.2049: 208 
getattr fh 0,0/22
12:30:46.252509 IP 10.1.1.4.2049 > 10.255.255.124.4215828946: reply ok 
240 getattr NON 3 ids 0/3 sz 0
12:30:46.252596 IP 10.255.255.124.4232606162 > 10.1.1.4.2049: 300 
getattr fh 0,0/22
12:30:46.253308 IP 10.1.1.4.2049 > 10.255.255.124.4232606162: reply ok 
52 getattr ERROR: Permission denied
12:30:46.253358 IP 10.255.255.124.4249383378 > 10.1.1.4.2049: 216 
getattr fh 0,0/22
12:30:46.260347 IP 10.1.1.4.2049 > 10.255.255.124.4249383378: reply ok 
52 getattr ERROR: No such file or directory
12:30:46.300306 IP 10.255.255.124.931 > 10.1.1.4.2049: Flags [.], ack 
1979284005, win 501, options [nop,nop,TS val 490628016 ecr 75449144], 
length 0

^C
7 packets captured
7 packets received by filter
0 packets dropped by kernel


On 09/01/2016 03:31 AM, Pranith Kumar Karampuri wrote:

hi Pat,
   I think the other thing we should probably look for would be to 
see the tcp dump of what uid/gid parameters are sent over the network when 
this command is executed.


On Thu, Sep 1, 2016 at 7:14 AM, Pat Haley > wrote:





hi Pat,
  Are you seeing this issue only after migration or even
before? Maybe we should look at the gid numbers on the disk and
the ones that are coming from the client for the given user to see if
they match or not?


-
This issue was not being seen before the migration.  We have
copied the /etc/passwd and /etc/group files from the front-end
machine (the client) to the data server, so they all match

-

Could you give stat output of the directory in question from both
the brick and the nfs client



--
From the server for gluster:
[root@mseas-data2 ~]# stat /gdata/projects/nsf_alpha
  File: `/gdata/projects/nsf_alpha'
  Size: 4096  Blocks: 8  IO Block: 131072 directory
Device: 13h/19dInode: 13094773206281819436 Links: 13
Access: (2775/drwxrwsr-x)  Uid: (0/ root)   Gid: (  598/nsf_alpha)
Access: 2016-08-31 19:08:59.735990904 -0400
Modify: 2016-08-31 16:37:09.048997167 -0400
Change: 2016-08-31 16:37:41.315997148 -0400

From the server for first underlying brick
[root@mseas-data2 ~]# stat /mnt/brick1/projects/nsf_alpha/
  File: `/mnt/brick1/projects/nsf_alpha/'
  Size: 4096  Blocks: 8  IO Block: 4096   directory
Device: 800h/2048dInode: 185630  Links: 13
Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
Access: 2016-08-31 19:08:59.669990907 -0400
Modify: 2016-08-31 16:37:09.048997167 -0400
Change: 2016-08-31 16:37:41.315997148 -0400

From the server for second underlying brick
[root@mseas-data2 ~]# stat /mnt/brick2/projects/nsf_alpha/
  File: `/mnt/brick2/projects/nsf_alpha/'
  Size: 4096  Blocks: 8  IO Block: 4096   directory
Device: 810h/2064dInode: 24085468Links: 13
Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
Access: 2016-08-31 19:08:59.735990904 -0400
Modify: 2016-08-03 14:01:52.0 -0400
Change: 2016-08-31 16:37:41.315997148 -0400

From the client
[root@mseas FixOwn]# stat /gdata/projects/nsf_alpha
  File: `/gdata/projects/nsf_alpha'
  Size: 4096  Blocks: 8  IO Block: 1048576 directory
Device: 23h/35dInode: 13094773206281819436  Links: 13
Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
Access: 2016-08-31 19:08:59.735990904 -0400
Modify: 2016-08-31 16:37:09.048997167 -0400
Change: 2016-08-31 16:37:41.315997148 -0400




Could you also let us know version of gluster you are using

-



[root@mseas-data2 ~]# gluster --version
glusterfs 3.7.11 built on Apr 27 2016 14:09:22


[root@mseas-data2 ~]# gluster volume info

Volume Name: data-volume
Type: Distribute
Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 

Re: [Gluster-users] Switching bricks

2016-09-01 Thread Pranith Kumar Karampuri
On Tue, Aug 30, 2016 at 5:52 PM, Kevin Lemonnier 
wrote:

> Hi,
>
> I'm about to bump a 1x3 (replicated) volume up to 2x3, but I just realised
> the 3 new servers
> are physically in the same datacenter. Is there a safe way to switch a
> brick from the first
> replica set with one from the second replica set ?
>
> The only way I see how would be to go down to replica 2, removing a brick
> from the first replica,
> then add 2 of the new servers as a second replica set (at that point the
> volume would be 2x2),
> then go up to replica 3 adding the third one plus the one I removed
> earlier.
> That should work, right ? There is no other "better" way of doing it ?
>

What you want to do sounds like replace-brick. From what I remember you use
sharding, so all the replace-brick changes after the 3.7.3 release are already
in; you just need to execute "gluster volume replace-brick <volname>
<old-brick> <new-brick> commit force". Please make sure the new brick has no
data. This will involve a full heal of the brick, so you may want to wait for
that to complete before starting add-brick/rebalance. Let us know if you
find something you don't expect in your test runs of this step.
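A rough sketch with placeholder volume/brick names (the old and new bricks
being the pair you want to swap between the replica sets):

gluster volume replace-brick myvol server1:/bricks/brick1 newserver:/bricks/brick1 commit force
# wait for the full heal of the new brick to finish before add-brick/rebalance
gluster volume heal myvol info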


> Thanks,
> --
> Kevin Lemonnier
> PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Need help to design a data storage

2016-09-01 Thread Xavier Hernandez

Hi,

On 09/08/16 20:43, Gandalf Corvotempesta wrote:

On 09 Aug 2016 19:57, "Ashish Pandey" > wrote:

Yes, redundant data spread across multiple servers. In my example I
mentioned 6 different nodes, each with one brick.

The point is that for 4+2 you can lose any 2 bricks. It could be because
of node failure or brick failure.

1 - 6 bricks on 6 different nodes - any 2 nodes may go down - EC wins

However, if you have only 2 nodes and 3 bricks on each node, then yes,
in this case even if one node goes down, EC will fail because that will
cause 3 bricks to be down.

In this case replica 3 would win.


6 nodes with 1 brick each is an unrealistic case.
A much more common case is multiple nodes with multiple bricks, something
like 9 nodes with 12 bricks each (for example, a 2U Supermicro server
with 12 disks).

In this case, EC replicas could be placed on a single server.


Not really. The disperse sets, like the replica sets, are defined when 
the volume is created. You must make sure that every disperse set is 
made of bricks from different servers. If this condition is satisfied 
while creating the volume, there won't be two fragments of the same file 
on two bricks of the same server.
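As an illustration, a minimal sketch with placeholder server/brick names:
bricks are grouped into disperse sets in the order they are listed, so each
4+2 set below takes exactly one brick from each of the 6 servers.

gluster volume create myvol disperse 6 redundancy 2 \
    srv{1..6}:/bricks/b1 srv{1..6}:/bricks/b2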




And with 9*12 bricks you still have 2 single disks (or one server if
both are placed on the same hardware) as failure domains.
Yes, you'll get 9*(12-2) usable bricks and not (9*12)/3 but you risk
data loss for sure.


It's true that the probability of failure of a distributed-replicated
volume is smaller than that of a distributed-dispersed one. However, if
you are considering big volumes with redundancy 2 or higher, replica gets
prohibitively expensive and wastes a lot of bandwidth.


You can reduce the local disk failure probability by creating bricks over
RAID5 or RAID6 if you want. It will waste more disks, but far fewer than
a replica.




Just a question: with EC, which is the right way to calculate usable space among these 3:

a)  (#servers*#bricks)-#replicas

Or

b) #servers*(#bricks - #replicas)

Or

c) (#servers-#replicas)*#bricks

In case A I'll use 2 disks as replica for the whole volume (exactly like
a RAID6)

In case B I'll use 2 disks from each server as replica

In case C I'll use 2 whole servers as replica (this is the most secure,
as I can lose 2 whole servers)


In fact none of these is completely correct. The redundancy level is per 
disperse set, not for the whole volume.


S: number of servers
D: number of disks per server
N: Disperse set size
R: Disperse redundancy

Usable disks = S * D * (1 - R / N)
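
A quick worked example with the numbers mentioned above (9 servers with 12
disks each) and a hypothetical 4+2 configuration (N = 6, R = 2):

Usable disks = 9 * 12 * (1 - 2/6) = 72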







___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] EC clarification

2016-09-01 Thread Xavier Hernandez

Hi,

On 27/08/16 10:57, Gandalf Corvotempesta wrote:

In short: how can I set the node hosting the erasure codes? In a 16+4 EC
(or bigger) I would like to put the 4 bricks hosting the ECs on 4
different servers so that I can lose 4 servers and still be able to
access/recover data


EC builds several fragments of data for each file. In the case of a 16+4 
configuration, a file of size 1MB is transformed into 20 smaller files 
(fragments) of 64KB.
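
So, as a rough check of the overhead: that 1MB file occupies 20 x 64KB =
1.25MB on the bricks, i.e. a storage overhead of (16+4)/16 = 1.25x.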


To recover the original file *any* subset of 16 fragments is enough. 
There aren't special fragments with more information or importance.


If you put more than one fragment into the same server, you will lose 
all the fragments if the server goes down. If there are more than 4 
fragments on that server, the file will be unrecoverable until the 
server is brought up again.


Putting more than one fragment into a single server only makes sense to 
account for disk failures, since the protection against server failures 
is lower.
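
As an illustration, assuming the 20 fragments of a 16+4 configuration are
spread 2 per server across 10 servers: losing any 2 servers removes 4
fragments and the files remain readable, but losing a 3rd server removes 6
fragments and the files become inaccessible until one of those servers
returns.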


Xavi







___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Data reconstruction from an EC volume

2016-09-01 Thread Xavier Hernandez

Hi,

On 27/08/16 11:01, Gandalf Corvotempesta wrote:

On 25 Jan 2016 15:29, "Serkan Çoban" > wrote:

Hi Pranith,

I want to use the tool in case of disaster.
If somehow we cannot start gluster, or some problem happened during an
upgrade and we cannot roll back or continue, I don't want to lose my files.
I would prefer the tool to connect to the machines and reconstruct the files
to some other path...
It would be great if you could write such a tool.


I'm also waiting for this.
Having a *standalone* tool that would be able to reconstruct files in
case of disaster would be great.


I have this on my todo list. Not sure when I'll be able to do that, though.

BTW, should the tool be able to connect directly to the bricks? Or is
it enough if it can reconstruct the file from the fragments manually
copied locally?


Xavi







___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] group write permissions not being respected

2016-09-01 Thread Pranith Kumar Karampuri
hi Pat,
   I think the other thing we should probably look for would be to see
the tcp dump of what uid/gid parameters are sent over the network when this
command is executed.

On Thu, Sep 1, 2016 at 7:14 AM, Pat Haley  wrote:

> 
> 
>
> hi Pat,
>   Are you seeing this issue only after migration or even before? Maybe
> we should look at the gid numbers on the disk and the ones that are
> coming from the client for the given user to see if they match or not?
>
> 
> -
> This issue was not being seen before the migration.  We have copied the
> /etc/passwd and /etc/group files from the front-end machine (the client) to
> the data server, so they all match
> 
> -
>
> Could you give stat output of the directory in question from both the
> brick and the nfs client
>
> 
> --
> From the server for gluster:
> [root@mseas-data2 ~]# stat /gdata/projects/nsf_alpha
>   File: `/gdata/projects/nsf_alpha'
>   Size: 4096  Blocks: 8  IO Block: 131072 directory
> Device: 13h/19dInode: 13094773206281819436  Links: 13
> Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
> Access: 2016-08-31 19:08:59.735990904 -0400
> Modify: 2016-08-31 16:37:09.048997167 -0400
> Change: 2016-08-31 16:37:41.315997148 -0400
>
> From the server for first underlying brick
> [root@mseas-data2 ~]# stat /mnt/brick1/projects/nsf_alpha/
>   File: `/mnt/brick1/projects/nsf_alpha/'
>   Size: 4096  Blocks: 8  IO Block: 4096   directory
> Device: 800h/2048dInode: 185630  Links: 13
> Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
> Access: 2016-08-31 19:08:59.669990907 -0400
> Modify: 2016-08-31 16:37:09.048997167 -0400
> Change: 2016-08-31 16:37:41.315997148 -0400
>
> From the server for second underlying brick
> [root@mseas-data2 ~]# stat /mnt/brick2/projects/nsf_alpha/
>   File: `/mnt/brick2/projects/nsf_alpha/'
>   Size: 4096  Blocks: 8  IO Block: 4096   directory
> Device: 810h/2064dInode: 24085468Links: 13
> Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
> Access: 2016-08-31 19:08:59.735990904 -0400
> Modify: 2016-08-03 14:01:52.0 -0400
> Change: 2016-08-31 16:37:41.315997148 -0400
>
> From the client
> [root@mseas FixOwn]# stat /gdata/projects/nsf_alpha
>   File: `/gdata/projects/nsf_alpha'
>   Size: 4096  Blocks: 8  IO Block: 1048576 directory
> Device: 23h/35dInode: 13094773206281819436  Links: 13
> Access: (2775/drwxrwsr-x)  Uid: (0/root)   Gid: (  598/nsf_alpha)
> Access: 2016-08-31 19:08:59.735990904 -0400
> Modify: 2016-08-31 16:37:09.048997167 -0400
> Change: 2016-08-31 16:37:41.315997148 -0400
>
> 
> 
>
> Could you also let us know version of gluster you are using
>
> 
> -
>
>
> [root@mseas-data2 ~]# gluster --version
> glusterfs 3.7.11 built on Apr 27 2016 14:09:22
>
> [root@mseas-data2 ~]# gluster volume info
>
> Volume Name: data-volume
> Type: Distribute
> Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: mseas-data2:/mnt/brick1
> Brick2: mseas-data2:/mnt/brick2
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: on
> nfs.export-volumes: off
>
> [root@mseas-data2 ~]# gluster volume status
> Status of volume: data-volume
> Gluster process TCP Port  RDMA Port  Online
> Pid
> 
> --
> Brick mseas-data2:/mnt/brick1   49154 0  Y
> 5005
> Brick mseas-data2:/mnt/brick2   49155 0  Y
> 5010
>
> Task Status of Volume data-volume
> 
> --
> Task : Rebalance
> ID   : 892d9e3a-b38c-4971-b96a-8e4a496685ba
> Status   : completed
>
>
> [root@mseas-data2 ~]# gluster peer status
> Number of Peers: 0
>
>
> 
> -
>
> On Thu, Sep 1, 2016 at 2:46 AM, Pat Haley  wrote:
>
>>
>> Hi,
>>
>> Another piece of data.  There are 2 distinct volumes on the file server
>>
>>1. a straight nfs partition
>>2. a gluster volume (served over nfs)
>>
>> The straight nfs partition does respect the group write permissions,
>> while the gluster 

[Gluster-users] GlusterFS 3.7.15 released

2016-09-01 Thread Kaushal M
GlusterFS 3.7.15 has been released. This is a regularly scheduled
release for GlusterFS-3.7 and includes 26 bug fixes since 3.7.14.
The release-notes can be read at [1].

## Downloads

The tarball can be downloaded from [2].

### Packages

Binary packages have been built and are in the process of being made
available as updates.

The CentOS Storage SIG packages have been built and will become
available in the centos-gluster37-test repository (from the
centos-release-gluster37 package) shortly.
These will be made available in the release repository after some more testing.
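
On CentOS that should roughly translate to the following (repository and
package names as given above; the --enablerepo step is only needed until the
packages reach the release repository):

yum install centos-release-gluster37
yum --enablerepo=centos-gluster37-test update 'glusterfs*'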

Packages for Fedora 23 are queued for testing in Fedora Koji/Bodhi.
They will appear first via dnf in the Updates-Testing repo, then in
the Updates repo.

Packages for Fedora 24, 25, 26; Debian wheezy, jessie, and stretch,
are available now on [2].

Packages for Ubuntu Trusty, Wily, and Xenial are available now in Launchpad.

Packages for SuSE available now in the SuSE build system.

See the READMEs in the respective subdirs at [2] for more details on
how to obtain them.

## Next release

GlusterFS-3.7.16 will be the next release for GlusterFS-3.7, and is
currently targeted for release on 30th September 2016.
The tracker bug[3] for GlusterFS-3.7.16 has been created. Bugs that
need to be included in 3.7.16 need to be marked as dependencies of
this bug.



[1]: 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.15.md
[2]: https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.15/
[3]: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.16
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users