Re: [Gluster-users] 3.8.2 : Node not healing

2016-08-19 Thread Lindsay Mathieson

On 20/08/2016 1:21 AM, David Gossage wrote:
Any issues since then? I was contemplating updating from 3.7.14 -> 3.8.2 
this weekend, prior to doing some work changing up the underlying brick 
RAID levels and needing to do full heals one by one. So far it has been 
fine on my test bed with what limited use I can put on it.


No problems at all; VMs operating normally, performance quite good.


I was wondering if my original problem was being over-hasty with a "heal 
full" command - maybe if I had waited a few minutes it would have started 
healing normally. It's my understanding that a full heal does a hash 
comparison of all shards, which would take a long time and really thrash the CPU.
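
For reference, a minimal sketch of the commands being discussed here, with 
<volname> as a placeholder for the actual volume name. The first triggers an 
index heal of entries already marked as pending, the second the full crawl 
discussed above, and the third shows the outstanding heal count:

# gluster volume heal <volname>
# gluster volume heal <volname> full
# gluster volume heal <volname> info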


I have backups running at the moment, once they are finished I'll repeat 
the test and see how it does when left to its own devices.


--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster 3.7.1 not honoring 'acl' mount option when using autofs

2016-08-19 Thread Ravishankar N
It looks like http://review.gluster.org/#/c/14686/, which went into 
glusterfs 3.7.13, could solve the issue. You might want to test on that 
version (or later) and see if things work.

-Ravi

On 08/19/2016 08:43 PM, Anthony Altemara wrote:

Hello,

On the gluster client, I'm seeing an issue with using Gluster's 'acl' mount 
option when using autofs. When mounting directly, acl support works fine. When 
mounting via autofs, I'm getting 'operation not supported' when attempting to 
use 'setfacl' on a file on the gluster mount.

Glusterfs version: 3.7.1
Autofs version: 5.0.7
CentOS version: 7.2.1511 (Core)

Autofs line:
*   -fstype=glusterfs,acl gluster-server:/&
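
For comparison, the working direct mount shown below would presumably be a 
plain FUSE mount with the acl option, something like the following (volume 
name and mount point are placeholders):

# mount -t glusterfs -o acl gluster-server:/volname /mountpoint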


 WORKS WITH DIRECT MOUNT 

# stat -f /mountpoint/deleteme
   File: "/mountpoint/deleteme"
   ID: 0        Namelen: 255     Type: fuseblk
Block size: 131072 Fundamental block size: 131072
Blocks: Total: 518635600  Free: 347741442  Available: 347741442
Inodes: Total: 1327748864 Free: 1315937975

# getfacl /mountpoint/deleteme
getfacl: Removing leading '/' from absolute path names
# file: mountpoint/deleteme
# owner: root
# group: root
user::rw-
user:git-ro:r--
group::r--
mask::r--
other::r--

# stat -f /mountpoint/deleteme
   File: "/mountpoint/deleteme"
   ID: 0        Namelen: 255     Type: fuseblk
Block size: 131072 Fundamental block size: 131072
Blocks: Total: 518635600  Free: 347740826  Available: 347740826
Inodes: Total: 1327748864 Free: 1315937926

# umount /mountpoint


 BROKEN WITH AUTOFS MOUNT 

### Contents of auto.gluster-server:
*   -fstype=glusterfs,acl gluster-server:/&

# service autofs start

# setfacl -m 'u:git-ro:r' /mountpoint/deleteme
setfacl: /mountpoint/deleteme: Operation not supported

[root@gluster-client-host]  /srv# strace setfacl -m 'u:git-ro:r' 
/mountpoint/deleteme
execve("/usr/bin/setfacl", ["setfacl", "-m", "u:git-ro:r", "/mountpoint/d"...], 
[/* 23 vars */]) = 0
brk(0)  = 0x83a000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7f2a5a211000
access("/etc/ld.so.preload", R_OK)  = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=29014, ...}) = 0
mmap(NULL, 29014, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f2a5a209000
close(3)= 0
open("/lib64/libacl.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\200\37\0\0\0\0\0\0"..., 
832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=37056, ...}) = 0
mmap(NULL, 2130560, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 
0x7f2a59de8000
mprotect(0x7f2a59def000, 2097152, PROT_NONE) = 0
mmap(0x7f2a59fef000, 8192, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7f2a59fef000
close(3)= 0
open("/lib64/libattr.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320\23\0\0\0\0\0\0"..., 
832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=19888, ...}) = 0
mmap(NULL, 2113904, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 
0x7f2a59be3000
mprotect(0x7f2a59be7000, 2093056, PROT_NONE) = 0
mmap(0x7f2a59de6000, 8192, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f2a59de6000
close(3)= 0
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0 \34\2\0\0\0\0\0"..., 
832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=2107816, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7f2a5a208000
mmap(NULL, 3932736, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 
0x7f2a59822000
mprotect(0x7f2a599d8000, 2097152, PROT_NONE) = 0
mmap(0x7f2a59bd8000, 24576, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b6000) = 0x7f2a59bd8000
mmap(0x7f2a59bde000, 16960, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f2a59bde000
close(3)= 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7f2a5a206000
arch_prctl(ARCH_SET_FS, 0x7f2a5a206740) = 0
mprotect(0x7f2a59bd8000, 16384, PROT_READ) = 0
mprotect(0x7f2a59de6000, 4096, PROT_READ) = 0
mprotect(0x7f2a59fef000, 4096, PROT_READ) = 0
mprotect(0x607000, 4096, PROT_READ) = 0
mprotect(0x7f2a5a212000, 4096, PROT_READ) = 0
munmap(0x7f2a5a209000, 29014)   = 0
brk(0)  = 0x83a000
brk(0x85b000)   = 0x85b000
brk(0)  = 0x85b000
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=106065056, ...}) = 0
mmap(NULL, 106065056, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f2a532fb000
close(3)= 0
socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
connect(3, {sa_family=AF_LOCAL, sun_path="/var/run/nscd/socket"}, 

Re: [Gluster-users] One client can effectively hang entire gluster array

2016-08-19 Thread Steve Dainard
As a potential solution on the compute node side, can you have users copy
relevant data from the gluster volume to a local disk (i.e. $TMPDIR), operate
on that disk, write output files to that disk, and then write the results
back to persistent storage once the job is complete?

There are lots of factors to consider, but this is how we operate in a
small compute environment trying to avoid overloading gluster storage
nodes.
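
As an illustration only, a rough sketch of that stage-in/stage-out pattern in 
a job script; the paths, the solver binary, and the volume layout are 
hypothetical, and $TMPDIR is assumed to point at node-local scratch space:

WORKDIR="$TMPDIR/job.$$"                     # per-job scratch dir on local disk
mkdir -p "$WORKDIR"
cp /mnt/gluster/input/case1/* "$WORKDIR"/    # stage input in from the gluster volume
cd "$WORKDIR"
./solver case1.dat > results.dat             # all heavy I/O now hits local disk
cp results.dat /mnt/gluster/output/case1/    # stage results back once, at the end
rm -rf "$WORKDIR"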

On Fri, Jul 8, 2016 at 6:29 AM, Glomski, Patrick <
patrick.glom...@corvidtec.com> wrote:

> Hello, users and devs.
>
> TL;DR: One gluster client can essentially cause denial of service /
> availability loss to entire gluster array. There's no way to stop it and
> almost no way to find the bad client. Probably all (at least 3.6 and 3.7)
> versions are affected.
>
> We have two large replicate gluster arrays (3.6.6 and 3.7.11) that are
> used in a high-performance computing environment. Two file access cases
> cause severe issues with glusterfs: Some of our scientific codes write
> hundreds of files (~400-500) simultaneously (one file or more per processor
> core, so lots of small or large writes) and others read thousands of files
> (2000-3000) simultaneously to grab metadata from each file (lots of small
> reads).
>
> In either of these situations, one glusterfsd process on whatever peer the
> client is currently talking to will skyrocket to *nproc* cpu usage (800%,
> 1600%) and the storage cluster is essentially useless; all other clients
> will eventually try to read or write data to the overloaded peer and, when
> that happens, their connection will hang. Heals between peers hang because
> the load on the peer is around 1.5x the number of cores or more. This
> occurs in either gluster 3.6 or 3.7, is very repeatable, and happens much
> too frequently.
>
> Even worse, there seems to be no definitive way to diagnose which client
> is causing the issues. Getting 'volume status <volname> clients' doesn't help
> because it reports the total number of bytes read/written by each client.
> (a) The metadata in question is tiny compared to the multi-gigabyte output
> files being dealt with and (b) the byte-count is cumulative for the clients
> and the compute nodes are always up with the filesystems mounted, so the
> byte transfer counts are astronomical. The best solution I've come up with
> is to blackhole-route traffic from clients one at a time (effectively push
> the traffic over to the other peer), wait a few minutes for all of the
> backlogged traffic to dissipate (if it's going to), see if the load on
> glusterfsd drops, and repeat until I find the client causing the issue. I
> would *love* any ideas on a better way to find rogue clients.
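
For illustration, the blackhole-route step described above amounts to 
something like this on the overloaded peer (the client address here is 
hypothetical):

# ip route add blackhole 192.0.2.45/32
  (wait a few minutes, watch glusterfsd CPU usage; repeat for the next client)
# ip route del blackhole 192.0.2.45/32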
>
> More importantly, though, there must be some mechanism enforced to stop one
> user from having the capability to render the entire filesystem unavailable
> for all other users. In the worst case, I would even prefer a gluster
> volume option that simply disconnects clients making over some threshold of
> file open requests. That's WAY more preferable than a complete availability
> loss reminiscent of a DDoS attack...
>
> Apologies for the essay and looking forward to any help you can provide.
>
> Thanks,
> Patrick
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] memory leak in glusterd 3.7.x

2016-08-19 Thread Zdenek Styblik
Hello,

we've found a memory leak in glusterd v3.7.x (currently at v3.7.14, but
we have been users of v3.7.x from the beginning).
We have empirically verified that continuous execution of
`gluster volume set <volume> <option> <value>` leads to memory leaks in
glusterd and eventually OOM, although not necessarily OOM of glusterd itself.
The settings which were being set over and over again are
`nfs.addr-namelookup false` and `nfs.disable true`. There might have
been other settings, but I was able to find these in recent logs.
Unfortunately, we don't have the capacity to debug this issue
further (statedumps are quite overwhelming :] ).
The repeated execution was caused by a bug in the Puppet module we're
using (and we were able to address that issue). Therefore, it's safe to
say that the number of affected users, or the likelihood of somebody else
having this problem, is probably low. It's still a memory leak and,
well, a rather serious one if you happen to stumble upon it. Also, it
must be noted that the effect is amplified if you have more than one
volume.
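
For context, the kind of commands that were being re-run continuously look 
like this (<volname> is a placeholder; these are the two options named above):

# gluster volume set <volname> nfs.addr-namelookup false
# gluster volume set <volname> nfs.disable true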

If there is anything I can help with, let me know.

Please, keep me on CC as I'm not subscribed to the mailing list.

Best regards,
Zdenek Styblik
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Fwd: Fwd: glusterfs chmod/chown bug 1343362

2016-08-19 Thread yanping....@xtaotech.com




yanping@xtaotech.com
 
From: yanping@xtaotech.com
Sent: 2016-08-19 19:07
To: ndevos; gluster-devel
Cc: gluster-devel
Subject: Fwd: Fwd: glusterfs chmod/chown bug 1343362




yanping@xtaotech.com
 
From: yanping@xtaotech.com
Sent: 2016-08-17 16:05
To: 
Cc: gluster-devel
Subject: Fwd: glusterfs chmod/chown bug 1343362

 
From: yanping@xtaotech.com
Sent: 2016-08-17 14:33
To: thottanjiffin; kshlmster
Cc: javen.wu; peng.hse
Subject: glusterfs chmod/chown bug 1343362
hello:
   Thanks thotz and kshlm for your fix of the glusterfs bug 1343362
   
https://github.com/gluster/glusterfs/commit/c04ee47d1e9847b50c459734a42681450005ee60

   I am aware that you split the attributes into two variables, one to store 
the argument of setattr and one for the attributes returned by lookup. I was 
thinking that increasing the size of the call_state_t structure just for this 
problem might consume more memory for all unrelated operations. I am trying 
another method and would appreciate your advice. 
   You can see it here:  
https://github.com/YanyunGao/glusterfs/commit/f4e248ce2fd4cea74c19d7c9bc9a3fcd7f8edaca
   
What do you think?
  
  Thanks


yanping@xtaotech.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer Summit

2016-08-19 Thread Manoj Pillai

Here's a proposal ...

Title: State of Gluster Performance
Theme: Stability and Performance

I hope to achieve the following in this talk:

* present a brief overview of current performance for the broad
workload classes: large-file sequential and random workloads,
small-file and metadata-intensive workloads.

* highlight some use-cases where we are seeing really good
performance.

* highlight some of the areas of concern, covering in some detail
the state of analysis and work in progress.

Regards,
Manoj

- Original Message -
> Hey All,
> 
> Gluster Developer Summit 2016 is fast approaching [1] on us. We are
> looking to have talks and discussions related to the following themes in
> the summit:
> 
> 1. Gluster.Next - focusing on features shaping the future of Gluster
> 
> 2. Experience - Description of real world experience and feedback from:
> a> Devops and Users deploying Gluster in production
> b> Developers integrating Gluster with other ecosystems
> 
> 3. Use cases  - focusing on key use cases that drive Gluster.today and
> Gluster.Next
> 
> 4. Stability & Performance - focusing on current improvements to reduce
> our technical debt backlog
> 
> 5. Process & infrastructure  - focusing on improving current workflow,
> infrastructure to make life easier for all of us!
> 
> If you have a talk/discussion proposal that can be part of these themes,
> please send out your proposal(s) by replying to this thread. Please
> clearly mention the theme for which your proposal is relevant when you
> do so. We will be ending the CFP by 12 midnight PDT on August 31st, 2016.
> 
> If you have other topics that do not fit in the themes listed, please
> feel free to propose and we might be able to accommodate some of them as
> lightning talks or something similar.
> 
> Please do reach out to me or Amye if you have any questions.
> 
> Thanks!
> Vijay
> 
> [1] https://www.gluster.org/events/summit2016/
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer Summit

2016-08-19 Thread Humble Devassy Chirammal
You are very welcome !

--Humble


On Fri, Aug 19, 2016 at 5:10 PM, Mohamed Ashiq Liyazudeen <
mliya...@redhat.com> wrote:

> Hi,
>
> I would like to be co-presenter for the talk(Containers and Persistent
> Storage for Containers).
>
> --ashiq
>
> - Original Message -
> From: "Humble Devassy Chirammal" 
> To: "Manoj Pillai" 
> Cc: "Gluster Devel" , "Amye Scavarda" <
> ascav...@redhat.com>, "gluster-users Discussion List" <
> Gluster-users@gluster.org>
> Sent: Friday, August 19, 2016 1:46:52 PM
> Subject: Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer
> Summit
>
> Title: Containers and Persistent Storage for Containers.
>
> Theme: Integration with emerging technologies.
>
> I would like to cover the following topics in this talk.
>
> *) Brief Overview about containers.
> *) Storage in containers
> *) Persistent Storage for containers.
> *) Storage hyperconvergence.
>
> --Humble
>
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
> --
> Regards,
> Mohamed Ashiq.L
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster 3.7.1 not honoring 'acl' mount option when using autofs

2016-08-19 Thread Anthony Altemara
Hello,

On the gluster client, I'm seeing an issue with using Gluster's 'acl' mount 
option when using autofs. When mounting directly, acl support works fine. When 
mounting via autofs, I'm getting 'operation not supported' when attempting to 
use 'setfacl' on a file on the gluster mount.

Glusterfs version: 3.7.1
Autofs version: 5.0.7
CentOS version: 7.2.1511 (Core)

Autofs line:
*   -fstype=glusterfs,acl gluster-server:/&


 WORKS WITH DIRECT MOUNT 

# stat -f /mountpoint/deleteme
  File: "/mountpoint/deleteme"
  ID: 0        Namelen: 255     Type: fuseblk
Block size: 131072 Fundamental block size: 131072
Blocks: Total: 518635600  Free: 347741442  Available: 347741442
Inodes: Total: 1327748864 Free: 1315937975

# getfacl /mountpoint/deleteme
getfacl: Removing leading '/' from absolute path names
# file: mountpoint/deleteme
# owner: root
# group: root
user::rw-
user:git-ro:r--
group::r--
mask::r--
other::r--

# stat -f /mountpoint/deleteme
  File: "/mountpoint/deleteme"
  ID: 0        Namelen: 255     Type: fuseblk
Block size: 131072 Fundamental block size: 131072
Blocks: Total: 518635600  Free: 347740826  Available: 347740826
Inodes: Total: 1327748864 Free: 1315937926

# umount /mountpoint


 BROKEN WITH AUTOFS MOUNT 

### Contents of auto.gluster-server:
*   -fstype=glusterfs,acl gluster-server:/&

# service autofs start

# setfacl -m 'u:git-ro:r' /mountpoint/deleteme
setfacl: /mountpoint/deleteme: Operation not supported
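
One quick way to check whether autofs actually passed the option through is 
to look at the glusterfs client process backing the mount; when the FUSE 
mount helper receives -o acl it is expected to start the client with --acl 
(an assumption worth verifying on your version), so its absence would point 
at autofs dropping the option:

# ps -o args= -C glusterfs | grep /mountpoint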

[root@gluster-client-host]  /srv# strace setfacl -m 'u:git-ro:r' 
/mountpoint/deleteme
execve("/usr/bin/setfacl", ["setfacl", "-m", "u:git-ro:r", "/mountpoint/d"...], 
[/* 23 vars */]) = 0
brk(0)  = 0x83a000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7f2a5a211000
access("/etc/ld.so.preload", R_OK)  = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=29014, ...}) = 0
mmap(NULL, 29014, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f2a5a209000
close(3)= 0
open("/lib64/libacl.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\200\37\0\0\0\0\0\0"..., 
832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=37056, ...}) = 0
mmap(NULL, 2130560, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 
0x7f2a59de8000
mprotect(0x7f2a59def000, 2097152, PROT_NONE) = 0
mmap(0x7f2a59fef000, 8192, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7f2a59fef000
close(3)= 0
open("/lib64/libattr.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320\23\0\0\0\0\0\0"..., 
832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=19888, ...}) = 0
mmap(NULL, 2113904, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 
0x7f2a59be3000
mprotect(0x7f2a59be7000, 2093056, PROT_NONE) = 0
mmap(0x7f2a59de6000, 8192, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f2a59de6000
close(3)= 0
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0 \34\2\0\0\0\0\0"..., 
832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=2107816, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7f2a5a208000
mmap(NULL, 3932736, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 
0x7f2a59822000
mprotect(0x7f2a599d8000, 2097152, PROT_NONE) = 0
mmap(0x7f2a59bd8000, 24576, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b6000) = 0x7f2a59bd8000
mmap(0x7f2a59bde000, 16960, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f2a59bde000
close(3)= 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7f2a5a206000
arch_prctl(ARCH_SET_FS, 0x7f2a5a206740) = 0
mprotect(0x7f2a59bd8000, 16384, PROT_READ) = 0
mprotect(0x7f2a59de6000, 4096, PROT_READ) = 0
mprotect(0x7f2a59fef000, 4096, PROT_READ) = 0
mprotect(0x607000, 4096, PROT_READ) = 0
mprotect(0x7f2a5a212000, 4096, PROT_READ) = 0
munmap(0x7f2a5a209000, 29014)   = 0
brk(0)  = 0x83a000
brk(0x85b000)   = 0x85b000
brk(0)  = 0x85b000
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=106065056, ...}) = 0
mmap(NULL, 106065056, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f2a532fb000
close(3)= 0
socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
connect(3, {sa_family=AF_LOCAL, sun_path="/var/run/nscd/socket"}, 110) = -1 
ENOENT (No such file or directory)
close(3)= 0
socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
connect(3, {sa_family=AF_LOCAL, sun_path="/var/run/nscd/socket"}, 110) = -1 
ENOENT (No 

Re: [Gluster-users] [Gluster-devel] release-3.6 end of life

2016-08-19 Thread Kaleb S. KEITHLEY
On 08/19/2016 11:17 AM, Kaleb S. KEITHLEY wrote:
> On 08/19/2016 08:59 AM, Diego Remolina wrote:
> 
>> My issue is trying to install a particular minor version, or after
>> doing an update to the latest minor version change, i.e. 3.6.5 to
>> 3.6.9, trying to go back to an older release, if there is a problem
>> with the latest.
>>
>> How does one do that in Ubuntu?
>>
>> This shows 3.6.9 is available:
>>
>> https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.6/+packages
>>
>> But nothing under 3.6.9 is there, or if it is how do I get those specific 
>> ones?
>>
> 
> You don't. Launchpad doesn't keep old versions. We (we = gluster
> community) have no control over that.
> 
> If you think you're going to want to install an older version then
> you'll need to save copies while they're available.
> 
> The GlusterFS Community decided a long time ago that using Launchpad was
> the preferred way to go.
> 

And you can build your own. Anyone can get a Launchpad account and
create their own PPAs.

The packaging files to build your own packages are in the git repo at
https://github.com/gluster/glusterfs-debian .  You can build any version
of GlusterFS you want.
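
For anyone trying this, the rough shape of a local build is sketched below; 
the branch name is hypothetical, and depending on the release you may need to 
pair the debian/ packaging with the matching upstream source tarball first, 
so treat this as an outline rather than the documented procedure:

# git clone https://github.com/gluster/glusterfs-debian.git
# cd glusterfs-debian
# git checkout <branch-for-the-release-you-want>
# dpkg-buildpackage -us -uc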


-- 

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.8.2 : Node not healing

2016-08-19 Thread David Gossage
On Wed, Aug 17, 2016 at 6:21 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

> Just as another data point - today I took one server down to add a network
> card. Heal Count got up to around 1500 while I was doing that.
>
> Once the server was back up, it started healing right away; in under an
> hour it was done. While it was healing I brought VMs back up on the node,
> and this was not a problem.
>
> This of course was a clean shutdown. The previous one which had issues, I
> killed glusterfsd with VM's still running on that node.


Any issues since then? I was contemplating updating from 3.7.14 -> 3.8.2
this weekend, prior to doing some work changing up the underlying brick RAID
levels and needing to do full heals one by one. So far it has been fine on my
test bed with what limited use I can put on it.


>
>
> --
> Lindsay Mathieson
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] release-3.6 end of life

2016-08-19 Thread Kaleb S. KEITHLEY
On 08/19/2016 08:59 AM, Diego Remolina wrote:

> My issue is trying to install a particular minor version, or after
> doing an update to the latest minor version change, i.e. 3.6.5 to
> 3.6.9, trying to go back to an older release, if there is a problem
> with the latest.
> 
> How does one do that in Ubuntu?
> 
> This shows 3.6.9 is available:
> 
> https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.6/+packages
> 
> But nothing under 3.6.9 is there, or if it is how do I get those specific 
> ones?
> 

You don't. Launchpad doesn't keep old versions. We (we = gluster
community) have no control over that.

If you think you're going to want to install an older version then
you'll need to save copies while they're available.
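
One low-effort way to save such copies while a build is still published is to
download the .debs without installing them, for example (package list may
need adjusting for your setup):

# apt-get download glusterfs-server glusterfs-client glusterfs-common

The files land in the current directory and can be reinstalled later with
dpkg -i.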

The GlusterFS Community decided a long time ago that using Launchpad was
the preferred way to go.
-- 

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] release-3.6 end of life

2016-08-19 Thread Diego Remolina
Let me correct what I meant in my previous post a little bit: I know
that for different major versions, you need to change the PPA:

So for 3.7
deb http://ppa.launchpad.net/gluster/glusterfs-3.7/ubuntu trusty main
deb-src http://ppa.launchpad.net/gluster/glusterfs-3.7/ubuntu trusty main

For 3.6
deb http://ppa.launchpad.net/gluster/glusterfs-3.6/ubuntu trusty main
deb-src http://ppa.launchpad.net/gluster/glusterfs-3.6/ubuntu trusty main

So changing between major versions works, change repo, then install.
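
In practice that switch is just editing the sources list entries (as above)
and then, roughly (package names may vary slightly between releases):

# sudo apt-get update
# sudo apt-get install glusterfs-server glusterfs-client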

My issue is trying to install a particular minor version, or after
doing an update to the latest minor version change, i.e. 3.6.5 to
3.6.9, trying to go back to an older release, if there is a problem
with the latest.

How does one do that in Ubuntu?

This shows 3.6.9 is available:

https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.6/+packages

But nothing under 3.6.9 is there, or if it is how do I get those specific ones?

Say I need to go back to 3.6.7, I can locate:

https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.6/+build/8350060

But no packages there.

Diego

On Fri, Aug 19, 2016 at 5:57 AM, Diego Remolina  wrote:
> The issue is the PPA does not change and the moment a new version is
> released (even minor versions), the old packages are lost forever. Even if
> you go to the PPA itself, the binaries seem to be deleted there.
>
> apt-cache showpkg glusterfs only shows the latest version available, none
> of the older ones are displayed so that you can then downgrade.
>
> I am not sure if there is another way of doing this, but I have never
> managed to successfully find and install the older releases without manually
> obtaining the .deb files if I can find them.
>
> Diego
>
>
> On Aug 18, 2016 5:33 PM, "Lindsay Mathieson" 
> wrote:
>>
>> On 19/08/2016 3:45 AM, Diego Remolina wrote:
>>>
>>> The one thing that still remains a mystery to me is how to downgrade
>>> glusterfs packages in Ubuntu. I have never been able to do that. There
>>> was also a post from someone about it recently on the list and I do
>>> not think it got any replies.
>>
>>
>> I would have assumed something like:
>>
>>
>> 1. stop volume(s)
>>
>> 2. if needed: reset gluster options not available in older versions
>>
>> 3. if needed: downgrade op-version
>>
>> 4. stop all gluster daemons :)
>>
>> 5. sudo apt-get purge gluster*
>>
>> 6. sudo ppa-purge 
>>
>> 7. Install older ppa.
>>
>> 8. install older gluster service
>>
>> 9. Start services
>>
>> 10. check peers and status
>>
>> 11. start volume
>>
>> 12. test
>>
>>
>> if 2) and 3) can't be done then I presume you can't downgrade.
>>
>> --
>> Lindsay Mathieson
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer Summit

2016-08-19 Thread Mohamed Ashiq Liyazudeen
Hi,

I would like to be co-presenter for the talk(Containers and Persistent Storage 
for Containers).

--ashiq

- Original Message -
From: "Humble Devassy Chirammal" 
To: "Manoj Pillai" 
Cc: "Gluster Devel" , "Amye Scavarda" 
, "gluster-users Discussion List" 

Sent: Friday, August 19, 2016 1:46:52 PM
Subject: Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer Summit

Title: Containers and Persistent Storage for Containers. 

Theme: Integration with emerging technologies. 

I would like to cover the following topics in this talk. 

*) Brief Overview about containers. 
*) Storage in containers 
*) Persistent Storage for containers. 
*) Storage hyperconvergence. 

--Humble 





___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

-- 
Regards, 
Mohamed Ashiq.L 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] release-3.6 end of life

2016-08-19 Thread Diego Remolina
The issue is the PPA does not change and the moment a new version is
released (even minor versions), the old packages are lost forever. Even if
you go to the PPA itself, the binaries seem to be deleted there.

apt-cache showpkg glusterfs only shows the latest version available, none
of the older ones are displayed so that you can then downgrade.

I am not sure if there is another way of doing this, but I have never
managed to successfully find and install the older releases without
manually obtaining the .deb files if I can find them.

Diego

On Aug 18, 2016 5:33 PM, "Lindsay Mathieson" 
wrote:

> On 19/08/2016 3:45 AM, Diego Remolina wrote:
>
>> The one thing that still remains a mystery to me is how to downgrade
>> glusterfs packages in Ubuntu. I have never been able to do that. There
>> was also a post from someone about it recently on the list and I do
>> not think it got any replies.
>>
>
> I would have assumed something like:
>
>
> 1. stop volume(s)
>
> 2. if needed: reset gluster options not available in older versions
>
> 3. if needed: downgrade op-version
>
> 4. stop all gluster daemons :)
>
> 5. sudo apt-get purge gluster*
>
> 6. sudo ppa-purge 
>
> 7. Install older ppa.
>
> 8. install older gluster service
>
> 9. Start services
>
> 10. check peers and status
>
> 11. start volume
>
> 12. test
>
>
> if 2) and 3) can't be done then I presume you can't downgrade.
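
As an aside, the current cluster op-version referred to in step 3 can be read
on a server node from glusterd's info file (assuming the default glusterd
working directory):

# grep operating-version /var/lib/glusterd/glusterd.info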
>
> --
> Lindsay Mathieson
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer Summit

2016-08-19 Thread Humble Devassy Chirammal
Title: Containers and Persistent Storage for Containers.

Theme: Integration with emerging technologies.

I would like to cover the following topics in this talk.

*) Brief Overview about containers.
*) Storage in containers
*) Persistent Storage for containers.
*) Storage hyperconvergence.

--Humble
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer Summit

2016-08-19 Thread Mohammed Rafi K C
Hi All,


Here is my proposal

Title: Debugging a live gluster file system using the .meta directory

Theme : Troubleshooting

Meta is a client-side xlator which provides an interface, similar to the
Linux procfs, to GlusterFS runtime and configuration information.

The contents are provided through a virtual hidden directory called .meta
at the root of the GlusterFS mount (see the short example after the topic
list below).


Planning to cover the following topics:

* Current state of the meta xlator

* Information that can be fetched through the .meta directory

* Debugging with the .meta directory (for both developers and users)

* Enhancements planned for the meta xlator

* Other troubleshooting options like statedump, io-stats, etc.
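
A minimal illustration of what that looks like from a FUSE client; the mount
point is hypothetical and the exact entries under .meta vary between
GlusterFS versions:

# ls /mnt/gv0/.meta
# cat /mnt/gv0/.meta/version
# ls /mnt/gv0/.meta/graphs/active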


Regards

Rafi KC

On 08/13/2016 01:18 AM, Vijay Bellur wrote:
> Hey All,
>
> Gluster Developer Summit 2016 is fast approaching [1] on us. We are
> looking to have talks and discussions related to the following themes
> in the summit:
>
> 1. Gluster.Next - focusing on features shaping the future of Gluster
>
> 2. Experience - Description of real world experience and feedback from:
>a> Devops and Users deploying Gluster in production
>b> Developers integrating Gluster with other
> ecosystems
>
> 3. Use cases  - focusing on key use cases that drive Gluster.today and
> Gluster.Next
>
> 4. Stability & Performance - focusing on current improvements to
> reduce our technical debt backlog
>
> 5. Process & infrastructure  - focusing on improving current workflow,
> infrastructure to make life easier for all of us!
>
> If you have a talk/discussion proposal that can be part of these
> themes, please send out your proposal(s) by replying to this thread.
> Please clearly mention the theme for which your proposal is relevant
> when you do so. We will be ending the CFP by 12 midnight PDT on August
> 31st, 2016.
>
> If you have other topics that do not fit in the themes listed, please
> feel free to propose and we might be able to accommodate some of them
> as lightning talks or something similar.
>
> Please do reach out to me or Amye if you have any questions.
>
> Thanks!
> Vijay
>
> [1] https://www.gluster.org/events/summit2016/
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] CFP Gluster Developer Summit

2016-08-19 Thread Jiffin Tony Thottan



On 17/08/16 19:26, Kaleb S. KEITHLEY wrote:

I propose to present on one or more of the following topics:

* NFS-Ganesha Architecture, Roadmap, and Status


Sorry for the late notice. I am willing to be a co-presenter for the 
above topic.

--
Jiffin


* Architecture of the High Availability Solution for Ganesha and Samba
  - detailed walk through and demo of current implementation
  - difference between the current and storhaug implementations
* High Level Overview of autoconf/automake/libtool configuration
  (I gave a presentation in BLR in 2015, so this is perhaps less
interesting?)
* Packaging Howto — RPMs and .debs
  (maybe a breakout session or a BOF. Would like to (re)enlist volunteers
to help build packages.)




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users