Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-03 Thread Soumya Koduri
Not sure why mountd is registering on port 2049. Please try
resetting the volume options (if that is fine) using "gluster v reset <volname>
force", enable only nfs, and verify the behavior.


"gluster v set  nfs.disable off"

Thanks,
Soumya
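
A minimal sketch of that sequence, assuming the gv0 volume seen later in
this thread (adjust the name as needed):

gluster volume reset gv0 force          # reset all volume options (only if that is acceptable)
gluster volume set gv0 nfs.disable off  # re-enable only Gluster NFS
rpcinfo -p                              # check the mountd/nfs/nfs_acl registrations afterwards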

On 05/04/2016 12:17 PM, ABHISHEK PALIWAL wrote:

Still having the same problem; I am not able to see the NFSACL entry in rpcinfo -p:

rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    3   tcp   2049  mountd
    100005    1   tcp   2049  mountd
    100003    3   tcp   2049  nfs
    100227    3   tcp   2049




On Wed, May 4, 2016 at 12:09 PM, Soumya Koduri <skod...@redhat.com> wrote:



On 05/04/2016 12:04 PM, ABHISHEK PALIWAL wrote:

but the documentation mentioned that Gluster NFS works on 38465-38469; that is
why I took these ports.


Yes. Gluster-NFS uses ports 38465-38469 to register the side-band
protocols it needs, like MOUNTD, NLM, NFSACL, etc. I am not sure, but
maybe the wording in that documentation was misleading.

Thanks,
Soumya


Regards,
Abhishek

On Wed, May 4, 2016 at 12:00 PM, Soumya Koduri <skod...@redhat.com> wrote:

  > nfs.port
  > 38465

 Is there any reason behind choosing this port for the NFS protocol? This
 port number seems to be used for the MOUNT (v3) protocol as well, and
 that may have resulted in port registration failures. Could you
 please change the nfs port to the default (2049) or some other port (other
 than 38465-38469)? Also, please make sure kernel-nfs is stopped.

 Thanks,
 Soumya


 On 05/04/2016 11:49 AM, ABHISHEK PALIWAL wrote:

 Hi Soumya,


 Please find attached nfs.log file.

 Regards,
 Abhishek

 On Wed, May 4, 2016 at 11:45 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com> wrote:


Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-03 Thread Soumya Koduri

> nfs.port
> 38465

Is there any reason behind choosing this port for the NFS protocol? This
port number seems to be used for the MOUNT (v3) protocol as well, and that
may have resulted in port registration failures. Could you please change
the nfs port to the default (2049) or some other port (other than 38465-38469)?
Also, please make sure kernel-nfs is stopped.


Thanks,
Soumya
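
A hedged sketch of those two steps, using the gv0 volume from this thread;
the kernel NFS service name varies by distribution:

gluster volume set gv0 nfs.port 2049    # move Gluster NFS back to the default port
                                        # (or: gluster volume reset gv0 nfs.port)
service nfs stop                        # stop kernel NFS; on systemd-based distros this
                                        # may be 'systemctl stop nfs-server' instead
rpcinfo -p                              # re-check the registrations
netstat -tulpen | grep -E ':111|:2049'  # the listeners should belong to the Gluster/NFS process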


On 05/04/2016 11:49 AM, ABHISHEK PALIWAL wrote:

Hi Soumya,


Please find attached nfs.log file.

Regards,
Abhishek

On Wed, May 4, 2016 at 11:45 AM, ABHISHEK PALIWAL
<abhishpali...@gmail.com> wrote:

Hi Soumya,

Thanks for the reply.

Yes, I am getting the following error in the /var/log/glusterfs/nfs.log file:

[2016-04-25 06:27:23.721851] E [MSGID: 112109] [nfs.c:1482:init]
0-nfs: Failed to initialize protocols

Please suggest how I can resolve it.

Regards,
Abhishek

On Wed, May 4, 2016 at 11:33 AM, Soumya Koduri <skod...@redhat.com> wrote:

Hi Abhishek,

Below 'rpcinfo' output doesn't list the 'nfsacl' protocol. That must
be the reason the client is not able to set ACLs. Could you please
check the log file '/var/log/glusterfs/nfs.log' for any errors
logged with respect to protocol registration failures.

Thanks,
Soumya

On 05/04/2016 11:15 AM, ABHISHEK PALIWAL wrote:

Hi Niels,

Please reply; it is really urgent.

Regards,
Abhishek

On Tue, May 3, 2016 at 11:36 AM, ABHISHEK PALIWAL
 <abhishpali...@gmail.com> wrote:

 Hi Niels,

 Do you require more logs...

 Regards,
 Abhishek

 On Mon, May 2, 2016 at 4:58 PM, ABHISHEK PALIWAL
 <abhishpali...@gmail.com> wrote:

 Hi Niels,


 Here is the output of rpcinfo -p $NFS_SERVER

 root@128:/# rpcinfo -p $NFS_SERVER
    program vers proto   port  service
     100000    4   tcp    111  portmapper
     100000    3   tcp    111  portmapper
     100000    2   tcp    111  portmapper
     100000    4   udp    111  portmapper
     100000    3   udp    111  portmapper
     100000    2   udp    111  portmapper
     100005    3   tcp  38465  mountd
     100005    1   tcp  38465  mountd
     100003    3   tcp  38465  nfs
     100227    3   tcp  38465


 output of the mount command

 #mount -vvv -t nfs -o acl,vers=3
128.224.95.140:/gv0 /tmp/e
 mount: fstab path: "/etc/fstab"
 mount: mtab path:  "/etc/mtab"
 mount: lock path:  "/etc/mtab~"
 mount: temp path:  "/etc/mtab.tmp"
 mount: UID:0
 mount: eUID:   0
 mount: spec:  "128.224.95.140:/gv0"
 mount: node:  "/tmp/e"
 mount: types: "nfs"
 mount: opts:  "acl,vers=3"
 mount: external mount: argv[0] = "/sbin/mount.nfs"
 mount: external mount: argv[1] = "128.224.95.140:/gv0"
 mount: external mount: argv[2] = "/tmp/e"
 mount: external mount: argv[3] = "-v"
 mount: external mount: argv[4] = "-o"
 mount: external mount: argv[5] = "rw,acl,vers=3"
 mount.nfs: timeout set for Mon May  2 16:58:58 2016
 mount.nfs: trying text-based options
 'acl,vers=3,addr=128.224.95.140'
 mount.nfs: prog 13, trying vers=3, prot=6
 mount.nfs: trying 128.224.95.140 prog 13 vers 3
prot TCP
 port 38465
 mount.nfs: prog 15, trying vers=3, prot=17
 mount.nfs: portmap query retrying: RPC: Program not
registered
 mount.nfs: prog 15, trying vers=3, prot=6
 mount.nfs: trying 128.224.95.140 prog 15 vers 3
prot TCP
 port 38465


 On Mon, May 2, 2016 at 4:36 PM, Niels de Vos <nde...@redhat.com> wrote:

 On Mon, May 02, 2016 at 04:14:01PM +0530,
ABHISHEK PALIWAL
 wrote:
  > HI Team,
  >

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-03 Thread ABHISHEK PALIWAL
Hi Soumya,


Please find attached nfs.log file.

Regards,
Abhishek

On Wed, May 4, 2016 at 11:45 AM, ABHISHEK PALIWAL 
wrote:

> HI Soumya,
>
> Thanks for reply.
>
> Yes, I am getting following error in /var/log/glusterfs/nfs.log file
>
> [2016-04-25 06:27:23.721851] E [MSGID: 112109] [nfs.c:1482:init] 0-nfs:
> Failed to initialize protocols
>
> Please suggest me how can I resolve it.
>
> Regards,
> Abhishek
>
> On Wed, May 4, 2016 at 11:33 AM, Soumya Koduri  wrote:
>
>> Hi Abhishek,
>>
>> Below 'rpcinfo' output doesn't list the 'nfsacl' protocol. That must be the
>> reason the client is not able to set ACLs. Could you please check the log file
>> '/var/log/glusterfs/nfs.log' for any errors logged with respect to
>> protocol registration failures.
>>
>> Thanks,
>> Soumya
>>
>> On 05/04/2016 11:15 AM, ABHISHEK PALIWAL wrote:
>>
>>> Hi Niels,
>>>
>>> Please reply it is really urgent.
>>>
>>> Regards,
>>> Abhishek
>>>
>>> On Tue, May 3, 2016 at 11:36 AM, ABHISHEK PALIWAL
>>> <abhishpali...@gmail.com> wrote:
>>>
>>> Hi Niels,
>>>
>>> Do you require more logs...
>>>
>>> Regards,
>>> Abhishek
>>>
>>> On Mon, May 2, 2016 at 4:58 PM, ABHISHEK PALIWAL
>>> <abhishpali...@gmail.com> wrote:
>>>
>>> Hi Niels,
>>>
>>>
>>> Here is the output of rpcinfo -p $NFS_SERVER
>>>
>>> root@128:/# rpcinfo -p $NFS_SERVER
>>> program vers proto   port  service
>>>  100000    4   tcp    111  portmapper
>>>  100000    3   tcp    111  portmapper
>>>  100000    2   tcp    111  portmapper
>>>  100000    4   udp    111  portmapper
>>>  100000    3   udp    111  portmapper
>>>  100000    2   udp    111  portmapper
>>>  100005    3   tcp  38465  mountd
>>>  100005    1   tcp  38465  mountd
>>>  100003    3   tcp  38465  nfs
>>>  100227    3   tcp  38465
>>>
>>>
>>> output of the mount command
>>>
>>> #mount -vvv -t nfs -o acl,vers=3 128.224.95.140:/gv0 /tmp/e
>>> mount: fstab path: "/etc/fstab"
>>> mount: mtab path:  "/etc/mtab"
>>> mount: lock path:  "/etc/mtab~"
>>> mount: temp path:  "/etc/mtab.tmp"
>>> mount: UID:0
>>> mount: eUID:   0
>>> mount: spec:  "128.224.95.140:/gv0"
>>> mount: node:  "/tmp/e"
>>> mount: types: "nfs"
>>> mount: opts:  "acl,vers=3"
>>> mount: external mount: argv[0] = "/sbin/mount.nfs"
>>> mount: external mount: argv[1] = "128.224.95.140:/gv0"
>>> mount: external mount: argv[2] = "/tmp/e"
>>> mount: external mount: argv[3] = "-v"
>>> mount: external mount: argv[4] = "-o"
>>> mount: external mount: argv[5] = "rw,acl,vers=3"
>>> mount.nfs: timeout set for Mon May  2 16:58:58 2016
>>> mount.nfs: trying text-based options
>>> 'acl,vers=3,addr=128.224.95.140'
>>> mount.nfs: prog 13, trying vers=3, prot=6
>>> mount.nfs: trying 128.224.95.140 prog 13 vers 3 prot TCP
>>> port 38465
>>> mount.nfs: prog 15, trying vers=3, prot=17
>>> mount.nfs: portmap query retrying: RPC: Program not registered
>>> mount.nfs: prog 15, trying vers=3, prot=6
>>> mount.nfs: trying 128.224.95.140 prog 15 vers 3 prot TCP
>>> port 38465
>>>
>>>
>>> On Mon, May 2, 2016 at 4:36 PM, Niels de Vos <nde...@redhat.com> wrote:
>>>
>>> On Mon, May 02, 2016 at 04:14:01PM +0530, ABHISHEK PALIWAL
>>> wrote:
>>>  > HI Team,
>>>  >
>>>  > I am exporting gluster volume using GlusterNFS  with ACL
>>> support but at NFS
>>>  > client while running 'setfacl' command getting "setfacl:
>>> /tmp/e: Remote I/O
>>>  > error"
>>>  >
>>>  >
>>>  > Following is the NFS option status for Volume:
>>>  >
>>>  > nfs.enable-ino32
>>>  > no
>>>  > nfs.mem-factor
>>>  > 15
>>>  > nfs.export-dirs
>>>  > on
>>>  > nfs.export-volumes
>>>  > on
>>>  > nfs.addr-namelookup
>>>  > off
>>>  > nfs.dynamic-volumes
>>>  > off
>>>  > nfs.register-with-portmap
>>>  > on
>>>  > nfs.outstanding-rpc-limit
>>>  > 16
>>>  > nfs.port
>>>  > 38465
>>>  > nfs.rpc-auth-unix
>>>  > on
>>>  > nfs.rpc-auth-null
>>>  > on
>>>  > nfs.rpc-auth-allow
>>>  > all
>>>  > nfs.rpc-auth-reject
>>>  > none
>>>  > nfs.ports-insecure
>>>  > off
>>>  > nfs.trusted-sync
>>>  > off
>>>  > nfs.trusted-write
>>>  > off
>>>

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-03 Thread ABHISHEK PALIWAL
Hi Soumya,

Thanks for the reply.

Yes, I am getting the following error in the /var/log/glusterfs/nfs.log file:

[2016-04-25 06:27:23.721851] E [MSGID: 112109] [nfs.c:1482:init] 0-nfs:
Failed to initialize protocols

Please suggest how I can resolve it.

Regards,
Abhishek

On Wed, May 4, 2016 at 11:33 AM, Soumya Koduri  wrote:

> Hi Abhishek,
>
> Below 'rpcinfo' output doesn't list the 'nfsacl' protocol. That must be the
> reason the client is not able to set ACLs. Could you please check the log file
> '/var/log/glusterfs/nfs.log' for any errors logged with respect to
> protocol registration failures.
>
> Thanks,
> Soumya
>
> On 05/04/2016 11:15 AM, ABHISHEK PALIWAL wrote:
>
>> Hi Niels,
>>
>> Please reply it is really urgent.
>>
>> Regards,
>> Abhishek
>>
>> On Tue, May 3, 2016 at 11:36 AM, ABHISHEK PALIWAL
>> <abhishpali...@gmail.com> wrote:
>>
>> Hi Niels,
>>
>> Do you require more logs...
>>
>> Regards,
>> Abhishek
>>
>> On Mon, May 2, 2016 at 4:58 PM, ABHISHEK PALIWAL
>> <abhishpali...@gmail.com> wrote:
>>
>> Hi Niels,
>>
>>
>> Here is the output of rpcinfo -p $NFS_SERVER
>>
>> root@128:/# rpcinfo -p $NFS_SERVER
>> program vers proto   port  service
>>  100000    4   tcp    111  portmapper
>>  100000    3   tcp    111  portmapper
>>  100000    2   tcp    111  portmapper
>>  100000    4   udp    111  portmapper
>>  100000    3   udp    111  portmapper
>>  100000    2   udp    111  portmapper
>>  100005    3   tcp  38465  mountd
>>  100005    1   tcp  38465  mountd
>>  100003    3   tcp  38465  nfs
>>  100227    3   tcp  38465
>>
>>
>> output of the mount command
>>
>> #mount -vvv -t nfs -o acl,vers=3 128.224.95.140:/gv0 /tmp/e
>> mount: fstab path: "/etc/fstab"
>> mount: mtab path:  "/etc/mtab"
>> mount: lock path:  "/etc/mtab~"
>> mount: temp path:  "/etc/mtab.tmp"
>> mount: UID:0
>> mount: eUID:   0
>> mount: spec:  "128.224.95.140:/gv0"
>> mount: node:  "/tmp/e"
>> mount: types: "nfs"
>> mount: opts:  "acl,vers=3"
>> mount: external mount: argv[0] = "/sbin/mount.nfs"
>> mount: external mount: argv[1] = "128.224.95.140:/gv0"
>> mount: external mount: argv[2] = "/tmp/e"
>> mount: external mount: argv[3] = "-v"
>> mount: external mount: argv[4] = "-o"
>> mount: external mount: argv[5] = "rw,acl,vers=3"
>> mount.nfs: timeout set for Mon May  2 16:58:58 2016
>> mount.nfs: trying text-based options
>> 'acl,vers=3,addr=128.224.95.140'
>> mount.nfs: prog 13, trying vers=3, prot=6
>> mount.nfs: trying 128.224.95.140 prog 13 vers 3 prot TCP
>> port 38465
>> mount.nfs: prog 15, trying vers=3, prot=17
>> mount.nfs: portmap query retrying: RPC: Program not registered
>> mount.nfs: prog 15, trying vers=3, prot=6
>> mount.nfs: trying 128.224.95.140 prog 15 vers 3 prot TCP
>> port 38465
>>
>>
>> On Mon, May 2, 2016 at 4:36 PM, Niels de Vos <nde...@redhat.com> wrote:
>>
>> On Mon, May 02, 2016 at 04:14:01PM +0530, ABHISHEK PALIWAL
>> wrote:
>>  > HI Team,
>>  >
>>  > I am exporting gluster volume using GlusterNFS  with ACL
>> support but at NFS
>>  > client while running 'setfacl' command getting "setfacl:
>> /tmp/e: Remote I/O
>>  > error"
>>  >
>>  >
>>  > Following is the NFS option status for Volume:
>>  >
>>  > nfs.enable-ino32
>>  > no
>>  > nfs.mem-factor
>>  > 15
>>  > nfs.export-dirs
>>  > on
>>  > nfs.export-volumes
>>  > on
>>  > nfs.addr-namelookup
>>  > off
>>  > nfs.dynamic-volumes
>>  > off
>>  > nfs.register-with-portmap
>>  > on
>>  > nfs.outstanding-rpc-limit
>>  > 16
>>  > nfs.port
>>  > 38465
>>  > nfs.rpc-auth-unix
>>  > on
>>  > nfs.rpc-auth-null
>>  > on
>>  > nfs.rpc-auth-allow
>>  > all
>>  > nfs.rpc-auth-reject
>>  > none
>>  > nfs.ports-insecure
>>  > off
>>  > nfs.trusted-sync
>>  > off
>>  > nfs.trusted-write
>>  > off
>>  > nfs.volume-access
>>  > read-write
>>  > nfs.export-dir
>>  >
>>  > nfs.disable
>>  > off
>>  > nfs.nlm
>>  > on
>>  > nfs.acl
>>  > on
>>  > nfs.mount-

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-03 Thread Soumya Koduri

Hi Abhishek,

Below 'rpcinfo' output doesn't list the 'nfsacl' protocol. That must be the
reason the client is not able to set ACLs. Could you please check the log file
'/var/log/glusterfs/nfs.log' for any errors logged with respect to
protocol registration failures.


Thanks,
Soumya
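
One quick way to pull just the error lines out of that log (using the
/var/log/glusterfs/nfs.log path referenced later in the thread):

grep ' E \[' /var/log/glusterfs/nfs.log | tail -n 20   # recent error-level entries, e.g. registration failures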

On 05/04/2016 11:15 AM, ABHISHEK PALIWAL wrote:

Hi Niels,

Please reply; it is really urgent.

Regards,
Abhishek

On Tue, May 3, 2016 at 11:36 AM, ABHISHEK PALIWAL
<abhishpali...@gmail.com> wrote:

Hi Niels,

Do you require more logs...

Regards,
Abhishek

On Mon, May 2, 2016 at 4:58 PM, ABHISHEK PALIWAL
<abhishpali...@gmail.com> wrote:

Hi Niels,


Here is the output of rpcinfo -p $NFS_SERVER

root@128:/# rpcinfo -p $NFS_SERVER
program vers proto   port  service
 100000    4   tcp    111  portmapper
 100000    3   tcp    111  portmapper
 100000    2   tcp    111  portmapper
 100000    4   udp    111  portmapper
 100000    3   udp    111  portmapper
 100000    2   udp    111  portmapper
 100005    3   tcp  38465  mountd
 100005    1   tcp  38465  mountd
 100003    3   tcp  38465  nfs
 100227    3   tcp  38465


output of the mount command

#mount -vvv -t nfs -o acl,vers=3 128.224.95.140:/gv0 /tmp/e
mount: fstab path: "/etc/fstab"
mount: mtab path:  "/etc/mtab"
mount: lock path:  "/etc/mtab~"
mount: temp path:  "/etc/mtab.tmp"
mount: UID:0
mount: eUID:   0
mount: spec:  "128.224.95.140:/gv0"
mount: node:  "/tmp/e"
mount: types: "nfs"
mount: opts:  "acl,vers=3"
mount: external mount: argv[0] = "/sbin/mount.nfs"
mount: external mount: argv[1] = "128.224.95.140:/gv0"
mount: external mount: argv[2] = "/tmp/e"
mount: external mount: argv[3] = "-v"
mount: external mount: argv[4] = "-o"
mount: external mount: argv[5] = "rw,acl,vers=3"
mount.nfs: timeout set for Mon May  2 16:58:58 2016
mount.nfs: trying text-based options
'acl,vers=3,addr=128.224.95.140'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: trying 128.224.95.140 prog 13 vers 3 prot TCP
port 38465
mount.nfs: prog 15, trying vers=3, prot=17
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 15, trying vers=3, prot=6
mount.nfs: trying 128.224.95.140 prog 15 vers 3 prot TCP
port 38465


On Mon, May 2, 2016 at 4:36 PM, Niels de Vos <nde...@redhat.com> wrote:

On Mon, May 02, 2016 at 04:14:01PM +0530, ABHISHEK PALIWAL
wrote:
 > HI Team,
 >
 > I am exporting gluster volume using GlusterNFS  with ACL
support but at NFS
 > client while running 'setfacl' command getting "setfacl:
/tmp/e: Remote I/O
 > error"
 >
 >
 > Following is the NFS option status for Volume:
 >
 > nfs.enable-ino32
 > no
 > nfs.mem-factor
 > 15
 > nfs.export-dirs
 > on
 > nfs.export-volumes
 > on
 > nfs.addr-namelookup
 > off
 > nfs.dynamic-volumes
 > off
 > nfs.register-with-portmap
 > on
 > nfs.outstanding-rpc-limit
 > 16
 > nfs.port
 > 38465
 > nfs.rpc-auth-unix
 > on
 > nfs.rpc-auth-null
 > on
 > nfs.rpc-auth-allow
 > all
 > nfs.rpc-auth-reject
 > none
 > nfs.ports-insecure
 > off
 > nfs.trusted-sync
 > off
 > nfs.trusted-write
 > off
 > nfs.volume-access
 > read-write
 > nfs.export-dir
 >
 > nfs.disable
 > off
 > nfs.nlm
 > on
 > nfs.acl
 > on
 > nfs.mount-udp
 > off
 > nfs.mount-rmtab
 > /var/lib/glusterd/nfs/rmtab
 > nfs.rpc-statd
 > /sbin/rpc.statd
 > nfs.server-aux-gids
 > off
 > nfs.drc
 > off
 > nfs.drc-size
 > 0x2
 > nfs.read-size   (1 *
 > 1048576ULL)
 > nfs.write-size  (1 *
 > 1048576ULL)
 > nfs.readdir-size(1 *
 > 1048576ULL)
 > nfs.exports-auth-enable
 > (null)
 > nfs.auth-refresh-interval-sec
 > (null)
 > nfs.auth-

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-03 Thread ABHISHEK PALIWAL
Hi Niels,

Please reply; it is really urgent.

Regards,
Abhishek

On Tue, May 3, 2016 at 11:36 AM, ABHISHEK PALIWAL 
wrote:

> Hi Niels,
>
> Do you require more logs...
>
> Regards,
> Abhishek
>
> On Mon, May 2, 2016 at 4:58 PM, ABHISHEK PALIWAL 
> wrote:
>
>> Hi Niels,
>>
>>
>> Here is the output of rpcinfo -p $NFS_SERVER
>>
>> root@128:/# rpcinfo -p $NFS_SERVER
>>    program vers proto   port  service
>>     100000    4   tcp    111  portmapper
>>     100000    3   tcp    111  portmapper
>>     100000    2   tcp    111  portmapper
>>     100000    4   udp    111  portmapper
>>     100000    3   udp    111  portmapper
>>     100000    2   udp    111  portmapper
>>     100005    3   tcp  38465  mountd
>>     100005    1   tcp  38465  mountd
>>     100003    3   tcp  38465  nfs
>>     100227    3   tcp  38465
>>
>>
>> output of the mount command
>>
>> #mount -vvv -t nfs -o acl,vers=3 128.224.95.140:/gv0 /tmp/e
>> mount: fstab path: "/etc/fstab"
>> mount: mtab path:  "/etc/mtab"
>> mount: lock path:  "/etc/mtab~"
>> mount: temp path:  "/etc/mtab.tmp"
>> mount: UID:0
>> mount: eUID:   0
>> mount: spec:  "128.224.95.140:/gv0"
>> mount: node:  "/tmp/e"
>> mount: types: "nfs"
>> mount: opts:  "acl,vers=3"
>> mount: external mount: argv[0] = "/sbin/mount.nfs"
>> mount: external mount: argv[1] = "128.224.95.140:/gv0"
>> mount: external mount: argv[2] = "/tmp/e"
>> mount: external mount: argv[3] = "-v"
>> mount: external mount: argv[4] = "-o"
>> mount: external mount: argv[5] = "rw,acl,vers=3"
>> mount.nfs: timeout set for Mon May  2 16:58:58 2016
>> mount.nfs: trying text-based options 'acl,vers=3,addr=128.224.95.140'
>> mount.nfs: prog 13, trying vers=3, prot=6
>> mount.nfs: trying 128.224.95.140 prog 13 vers 3 prot TCP port 38465
>> mount.nfs: prog 15, trying vers=3, prot=17
>> mount.nfs: portmap query retrying: RPC: Program not registered
>> mount.nfs: prog 15, trying vers=3, prot=6
>> mount.nfs: trying 128.224.95.140 prog 15 vers 3 prot TCP port 38465
>>
>>
>> On Mon, May 2, 2016 at 4:36 PM, Niels de Vos  wrote:
>>
>>> On Mon, May 02, 2016 at 04:14:01PM +0530, ABHISHEK PALIWAL wrote:
>>> > HI Team,
>>> >
>>> > I am exporting gluster volume using GlusterNFS  with ACL support but
>>> at NFS
>>> > client while running 'setfacl' command getting "setfacl: /tmp/e:
>>> Remote I/O
>>> > error"
>>> >
>>> >
>>> > Following is the NFS option status for Volume:
>>> >
>>> > nfs.enable-ino32
>>> > no
>>> > nfs.mem-factor
>>> > 15
>>> > nfs.export-dirs
>>> > on
>>> > nfs.export-volumes
>>> > on
>>> > nfs.addr-namelookup
>>> > off
>>> > nfs.dynamic-volumes
>>> > off
>>> > nfs.register-with-portmap
>>> > on
>>> > nfs.outstanding-rpc-limit
>>> > 16
>>> > nfs.port
>>> > 38465
>>> > nfs.rpc-auth-unix
>>> > on
>>> > nfs.rpc-auth-null
>>> > on
>>> > nfs.rpc-auth-allow
>>> > all
>>> > nfs.rpc-auth-reject
>>> > none
>>> > nfs.ports-insecure
>>> > off
>>> > nfs.trusted-sync
>>> > off
>>> > nfs.trusted-write
>>> > off
>>> > nfs.volume-access
>>> > read-write
>>> > nfs.export-dir
>>> >
>>> > nfs.disable
>>> > off
>>> > nfs.nlm
>>> > on
>>> > nfs.acl
>>> > on
>>> > nfs.mount-udp
>>> > off
>>> > nfs.mount-rmtab
>>> > /var/lib/glusterd/nfs/rmtab
>>> > nfs.rpc-statd
>>> > /sbin/rpc.statd
>>> > nfs.server-aux-gids
>>> > off
>>> > nfs.drc
>>> > off
>>> > nfs.drc-size
>>> > 0x2
>>> > nfs.read-size   (1 *
>>> > 1048576ULL)
>>> > nfs.write-size  (1 *
>>> > 1048576ULL)
>>> > nfs.readdir-size(1 *
>>> > 1048576ULL)
>>> > nfs.exports-auth-enable
>>> > (null)
>>> > nfs.auth-refresh-interval-sec
>>> > (null)
>>> > nfs.auth-cache-ttl-sec  (null)
>>> >
>>> > Command to mount exported gluster volume on NFS client is
>>> >
>>> > mount -v -t nfs -o acl,vers=3 128.224.95.140:/gv0 /tmp/e
>>>
>>> Could you post the output of mounting with 'mount -vvv ...'? In previous
>>> emails I've asked for the output of 'rpcinfo -p $NFS_SERVER', I do not
>>> think I've seen that yet.
>>>
>>> The port used for NFSv3 ACLs on the NFS-server should be listed in
>>> 'netstat -tulpen' and the PID of the process should be the one of the
>>> Gluster/NFS service.
>>>
>>> HTH,
>>> Niels
>>>
>>>
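
A short sketch of those two checks, reusing $NFS_SERVER and the ports seen
earlier in the thread:

rpcinfo -p $NFS_SERVER | grep -iE 'acl|100227'   # the NFSv3 ACL program (100227) should be registered
netstat -tulpen | grep -E ':2049|:38465'         # the owning PID should be the Gluster/NFS process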
>>> > setfacl -m u:nobody:r /tmp/e
>>> > setfacl: /tmp/e: Remote I/O error
>>> >
>>> > --
>>> >
>>> >
>>> >
>>> >
>>> > Regards
>>> > Abhishek Paliwal
>>>
>>> > ___
>>> > Gluster-devel mailing list
>>> > gluster-de...@gluster.org
>>> > http://www.gluster.org/mailman/listinfo/gluster-devel
>>>
>>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster Compilation error: undefined reference to `yylex'

2016-05-03 Thread Atin Mukherjee
Kaushal has replied in the same thread [1]; perhaps that helps.

[1] http://www.gluster.org/pipermail/gluster-users/2016-May/026514.html

~Atin

On Wed, May 4, 2016 at 10:20 AM, Rob Syme  wrote:

> Hi Atin
>
> Yes, I'm running both ./autogen.sh and ./configure before make but the
> "undefined reference to `yylex'" error remains.
>
> autogen stdout: http://pastebin.com/RWGUe1q1
> autogen stderr: http://pastebin.com/mFeBwBMi
> make stdout: http://pastebin.com/2bmP1Zpp
> make stderr: http://pastebin.com/aGF4iWvs
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] changes to client port range in release 3.1.3

2016-05-03 Thread Vijay Bellur
On Tue, May 3, 2016 at 5:28 AM, Prasanna Kalever  wrote:

> If we see the ephemeral port standards
> 1.  The Internet Assigned Numbers Authority (IANA) suggests the range
> 49152 to 65535
> 2.  Many Linux kernels use the port range 32768 to 61000
> more at [2]
>
> Some of our thoughts include split the current brick port range ( ~16K
> ) into two (may be ~8K each or some other ratio) and use them for
> client and bricks, which could solve the problem but also  introduce a
>  limitation for scalability.
>

I would recommend providing the administrator the ability to override
any logic that we use to implement this behavior.

> The patch [1] goes in 3.1.3, we wanted know if there are any impacts
> caused with these changes.
>
>
> [1] http://review.gluster.org/#/c/13998/


I would ideally have liked the patch to spend some time in the review
queue after this email was sent. It looks like the patch was merged
within 2 hours of the email being sent, which is grossly inadequate if
you are looking to obtain any feedback that can add value. Nevertheless,
I have provided my comments on the patchset; please incorporate them in
a subsequent commit.

Regards,
Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] gluster 3.7.9 permission denied and mv errors

2016-05-03 Thread Glomski, Patrick
Attaching a text file with the same content that is easier to read.

Patrick

On Tue, May 3, 2016 at 4:59 PM, Glomski, Patrick <
patrick.glom...@corvidtec.com> wrote:

> Raghavendra,
>
> Last night the backup had four of these errors and only one of the
> 'retried moves' succeeded. The only one to succeed in moving the file the
> second time had target files on a different gluster peer (gfs01bkp). Not
> sure if that is significant.
>
> Note that I cannot stat the target file over the FUSE mount for any of
> these, but it exists on the bricks. Running an 'ls' on the directory
> containing the file (via FUSE) does not fix the issue. Source and target
> xattrs are appended for all bricks on all machines in the distributed
> volume.
>
> Let me know if there's any other information it would be useful to gather,
> as this issue seems to recur frequently.
>
> Thanks,
> Patrick
>
> # Move failures
>>
>> /bin/mv: cannot move
>> `./homegfs/hpc_shared/motorsports/056-1/data_collected3' to
>> `../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3': File
>> exists
>> /bin/mv: cannot move
>> `./homegfs/hpc_shared/motorsports/090-1/data_collected3' to
>> `../bkp00/./homegfs/hpc_shared/motorsports/090-1/data_collected3': File
>> exists
>> /bin/mv: cannot move
>> `./homegfs/hpc_shared/motorsports/057-2/data_collected3' to
>> `../bkp00/./homegfs/hpc_shared/motorsports/057-2/data_collected3': File
>> exists
>> /bin/mv: cannot move
>> `./homegfs/hpc_shared/motorsports/54/data_collected4' to
>> `../bkp00/./homegfs/hpc_shared/motorsports/54/data_collected4': File exists
>>
>> /bin/mv: cannot move
>> `./homegfs/hpc_shared/motorsports/056-1/data_collected3' to
>> `../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3': File
>> exists
>> /bin/mv: cannot move
>> `./homegfs/hpc_shared/motorsports/090-1/data_collected3' to
>> `../bkp00/./homegfs/hpc_shared/motorsports/090-1/data_collected3': File
>> exists
>> /bin/mv: cannot move
>> `./homegfs/hpc_shared/motorsports/057-2/data_collected3' to
>> `../bkp00/./homegfs/hpc_shared/motorsports/057-2/data_collected3': File
>> exists
>> /bin/mv: cannot move
>> `./homegfs/hpc_shared/motorsports/54/data_collected4' to
>> `../bkp00/./homegfs/hpc_shared/motorsports/54/data_collected4': File exists
>>
>>
>> 
>> retry: renaming ./homegfs/hpc_shared/motorsports/056-1/data_collected3 ->
>> ../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3
>>
>> source xattrs
>>   gfs01bkp
>> getfattr:
>> /data/brick01bkp/gfsbackup/bkp01/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
>> No such file or directory
>> getfattr:
>> /data/brick02bkp/gfsbackup/bkp01/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
>> No such file or directory
>>
>>   gfs02bkp
>> getfattr:
>> /data/brick01bkp/gfsbackup/bkp01/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
>> No such file or directory
>> # file:
>> data/brick02bkp/gfsbackup/bkp01/./homegfs/hpc_shared/motorsports/056-1/data_collected3
>>   trusted.bit-rot.version=0x0200570308980001d157
>>   trusted.gfid=0xe07abd8ae861442ebc0df8b20719af30
>>   trusted.pgfid.1776adb6-2925-49d3-9cca-8a04c29f4c05=0x0001
>>
>> getfattr: Removing leading '/' from absolute path names
>> getfattr:
>> /data/brick03bkp/gfsbackup/bkp01/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
>> No such file or directory
>> getfattr:
>> /data/brick04bkp/gfsbackup/bkp01/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
>> No such file or directory
>> getfattr:
>> /data/brick05bkp/gfsbackup/bkp01/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
>> No such file or directory
>>
>> target xattrs
>>   gfs01bkp
>>getfattr:
>> /data/brick01bkp/gfsbackup/bkp01/../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
>> No such file or directory
>>getfattr:
>> /data/brick02bkp/gfsbackup/bkp01/../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
>> No such file or directory
>>
>>   gfs02bkp
>> # file:
>> data/brick01bkp/gfsbackup/bkp01/../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3
>>   trusted.bit-rot.version=0x0200569bb8d20003ed00
>>   trusted.gfid=0xaefffbd0676649cd95eb6dfc874d7a59
>>   trusted.pgfid.f7c5eff3-f474-433b-b10e-480f8353c6b9=0x0001
>>
>> getfattr: Removing leading '/' from absolute path names
>> # file:
>> data/brick02bkp/gfsbackup/bkp01/../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3
>>   trusted.gfid=0xaefffbd0676649cd95eb6dfc874d7a59
>>
>> trusted.glusterfs.dht.linkto=0x6766736261636b75702d636c69656e742d3200
>>   trusted.pgfid.f7c5eff3-f474-433b-b10e-480f8353c6b9=0x0001
>>
>> getfattr: Removing leading '/' from absolute path names
>> getfattr:
>> /data/brick03bkp/gfsbackup/bkp01/../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_col

Re: [Gluster-users] [Gluster-devel] gluster 3.7.9 permission denied and mv errors

2016-05-03 Thread Glomski, Patrick
Raghavendra,

Last night the backup had four of these errors and only one of the 'retried
moves' succeeded. The only one to succeed in moving the file the second
time had target files on a different gluster peer (gfs01bkp). Not sure if
that is significant.

Note that I cannot stat the target file over the FUSE mount for any of
these, but it exists on the bricks. Running an 'ls' on the directory
containing the file (via FUSE) does not fix the issue. Source and target
xattrs are appended for all bricks on all machines in the distributed
volume.

Let me know if there's any other information it would be useful to gather,
as this issue seems to recur frequently.

Thanks,
Patrick
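
One check that may be worth running on the brick that still holds the
blocking target (the path below is the brick02bkp one from the xattr dump,
normalized; adjust it per node): if the entry is zero bytes with mode
---------T and carries the trusted.glusterfs.dht.linkto xattr, it is a
stale DHT link-to file rather than real data.

F=/data/brick02bkp/gfsbackup/bkp00/homegfs/hpc_shared/motorsports/056-1/data_collected3
stat -c '%s %A' "$F"                                   # expect '0 ---------T' for a link-to file
getfattr -n trusted.glusterfs.dht.linkto -e hex "$F"   # this xattr is present on link-to files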

# Move failures
>
> /bin/mv: cannot move
> `./homegfs/hpc_shared/motorsports/056-1/data_collected3' to
> `../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3': File
> exists
> /bin/mv: cannot move
> `./homegfs/hpc_shared/motorsports/090-1/data_collected3' to
> `../bkp00/./homegfs/hpc_shared/motorsports/090-1/data_collected3': File
> exists
> /bin/mv: cannot move
> `./homegfs/hpc_shared/motorsports/057-2/data_collected3' to
> `../bkp00/./homegfs/hpc_shared/motorsports/057-2/data_collected3': File
> exists
> /bin/mv: cannot move `./homegfs/hpc_shared/motorsports/54/data_collected4'
> to `../bkp00/./homegfs/hpc_shared/motorsports/54/data_collected4': File
> exists
>
> /bin/mv: cannot move
> `./homegfs/hpc_shared/motorsports/056-1/data_collected3' to
> `../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3': File
> exists
> /bin/mv: cannot move
> `./homegfs/hpc_shared/motorsports/090-1/data_collected3' to
> `../bkp00/./homegfs/hpc_shared/motorsports/090-1/data_collected3': File
> exists
> /bin/mv: cannot move
> `./homegfs/hpc_shared/motorsports/057-2/data_collected3' to
> `../bkp00/./homegfs/hpc_shared/motorsports/057-2/data_collected3': File
> exists
> /bin/mv: cannot move `./homegfs/hpc_shared/motorsports/54/data_collected4'
> to `../bkp00/./homegfs/hpc_shared/motorsports/54/data_collected4': File
> exists
>
>
> 
> retry: renaming ./homegfs/hpc_shared/motorsports/056-1/data_collected3 ->
> ../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3
>
> source xattrs
>   gfs01bkp
> getfattr:
> /data/brick01bkp/gfsbackup/bkp01/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
> No such file or directory
> getfattr:
> /data/brick02bkp/gfsbackup/bkp01/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
> No such file or directory
>
>   gfs02bkp
> getfattr:
> /data/brick01bkp/gfsbackup/bkp01/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
> No such file or directory
> # file:
> data/brick02bkp/gfsbackup/bkp01/./homegfs/hpc_shared/motorsports/056-1/data_collected3
>   trusted.bit-rot.version=0x0200570308980001d157
>   trusted.gfid=0xe07abd8ae861442ebc0df8b20719af30
>   trusted.pgfid.1776adb6-2925-49d3-9cca-8a04c29f4c05=0x0001
>
> getfattr: Removing leading '/' from absolute path names
> getfattr:
> /data/brick03bkp/gfsbackup/bkp01/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
> No such file or directory
> getfattr:
> /data/brick04bkp/gfsbackup/bkp01/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
> No such file or directory
> getfattr:
> /data/brick05bkp/gfsbackup/bkp01/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
> No such file or directory
>
> target xattrs
>   gfs01bkp
>getfattr:
> /data/brick01bkp/gfsbackup/bkp01/../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
> No such file or directory
>getfattr:
> /data/brick02bkp/gfsbackup/bkp01/../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
> No such file or directory
>
>   gfs02bkp
> # file:
> data/brick01bkp/gfsbackup/bkp01/../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3
>   trusted.bit-rot.version=0x0200569bb8d20003ed00
>   trusted.gfid=0xaefffbd0676649cd95eb6dfc874d7a59
>   trusted.pgfid.f7c5eff3-f474-433b-b10e-480f8353c6b9=0x0001
>
> getfattr: Removing leading '/' from absolute path names
> # file:
> data/brick02bkp/gfsbackup/bkp01/../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3
>   trusted.gfid=0xaefffbd0676649cd95eb6dfc874d7a59
>   trusted.glusterfs.dht.linkto=0x6766736261636b75702d636c69656e742d3200
>   trusted.pgfid.f7c5eff3-f474-433b-b10e-480f8353c6b9=0x0001
>
> getfattr: Removing leading '/' from absolute path names
> getfattr:
> /data/brick03bkp/gfsbackup/bkp01/../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
> No such file or directory
> getfattr:
> /data/brick04bkp/gfsbackup/bkp01/../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected3:
> No such file or directory
> getfattr:
> /data/brick05bkp/gfsbackup/bkp01/../bkp00/./homegfs/hpc_shared/motorsports/056-1/data_collected

Re: [Gluster-users] how to detach the peer off line, which carries data

2016-05-03 Thread 袁仲
ok, thanks for your help

On Mon, May 2, 2016 at 8:21 PM, Atin Mukherjee  wrote:

>
>
> On 05/02/2016 01:30 PM, 袁仲 wrote:
> > I am sorry for the misunderstanding.
> > Actually I can stop the volume and even delete it. What I really want to
> > express is that the volume must not be stopped or deleted, as
> > some virtual machines are running on it.
> > In the case above, P1 has crashed and I had to reinstall the system on
> > P1, so P1 lost all the information about the volume and the other peers
> > mentioned above. When P1 comes back, I want to probe it into the cluster
> > P2/P3 belong to, and recover bricks b1 and b2. So, what should I do?
> Refer
> https://www.gluster.org/pipermail/gluster-users.old/2016-March/025917.html
> >
> > On Sat, Apr 30, 2016 at 11:04 PM, Atin Mukherjee
> > <atin.mukherje...@gmail.com> wrote:
> >
> > -Atin
> > Sent from one plus one
> > On 30-Apr-2016 8:20 PM, "袁仲" wrote:
> > >
> > > I have a scenes like this:
> > >
> > >
> > > I have 3 peers.  eg. P1, P2 and P3, and each of them has 2 bricks,
> > >
> > > e.g. P1 have 2 bricks, b1 and b2.
> > >
> > >P2 has 2 bricks, b3 and b4.
> > >
> > >P3 has 2 bricks, b5 and b6.
> > >
> > > Based that above, I create  a volume (afr volume) like this:
> > >
> > > b1 and b3 make up a replicate subvolume   rep-sub1
> > >
> > > b4 and b5  make up a replicate subvolume  rep-sub2
> > >
> > > b2 and b6  make up a replicate sub volume rep-sub3
> > >
> > > And rep-sub1,2,3 make up a distribute volume, AND start the volume.
> > >
> > >
> > > Now, P1 has crashed or just got disconnected. I want to detach P1,
> > > and the volume, which has been started, absolutely cannot be stopped or deleted. So I did
> > > this:  gluster peer detach host-P1.
> >
> > This is destructive; detaching a peer hosting bricks definitely
> > needs to be blocked, otherwise you technically lose the volume, as
> > Gluster is a distributed file system. Have you tried to analyze why
> > the node crashed? And is there any specific reason why you
> > want to stop the volume? Replication gives you high
> > availability and your volume would still be accessible.  Even if you
> > want to stop the volume, try the following:
> >
> > 1. Restart glusterd; if it still fails, go to the 2nd step.
> > 2. Go for the peer replacement procedure.
> >
> > Otherwise, you may try 'volume stop force'; it may work too.
> >
> > >
> > > but it does not work; the reason is that P1 has bricks on it,
> > > according to the glusterfs error message printed on the shell.
> > >
> > >
> > > So I commented out the code that led to the error above, and tried again.
> > > It really works. It's amazing. And the VMs running on the volume are all right.
> > >
> > > BUT this leads to a big problem: the glusterd restart failed,
> > > both on P2 and P3. When I remove the stuff below
> > > /var/lib/glusterfs/vols/, it restarts successfully, so I wonder whether there is
> > > something wrong with the volume info.
> > >
> > >
> > > my question is,
> > >
> > > if there is a method to detach  P1 in the scenes above.
> > >
> > > or what issue i will meet if I make it works through modify the
> code source.
> > >
> > >
> > > thanks so much.
> > >
> > >
> > > ___
> > > Gluster-users mailing list
> > > Gluster-users@gluster.org 
> > > http://www.gluster.org/mailman/listinfo/gluster-users
> >
> >
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
> >
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] changes to client port range in release 3.1.3

2016-05-03 Thread Vijay Bellur
> The patch [1] goes in 3.1.3, we wanted know if there are any impacts
> caused with these changes.


What is 3.1.3? We are way past that release in GlusterFS. Are you
referring to 3.7.12?

Regards,
Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Fwd: dht_is_subvol_filled messages on client

2016-05-03 Thread Serkan Çoban
I also checked the df output; all 20 bricks are the same, like below:
/dev/sdu1 7.3T 34M 7.3T 1% /bricks/20
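
Since the dht warning is about inodes rather than blocks, checking inode
usage on the same filesystems may also help; a small sketch, run on each
node hosting a brick of disperse-56 (mount point taken from the line above):

df -i /bricks/20    # inode totals, used and free for the brick filesystem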

On Tue, May 3, 2016 at 1:40 PM, Raghavendra G  wrote:
>
>
> On Mon, May 2, 2016 at 11:41 AM, Serkan Çoban  wrote:
>>
>> >1. What is the output of du -hs ? Please get this
>> > information for each of the bricks that are part of the disperse set.
>
>
> Sorry. I needed df output of the filesystem containing brick. Not du. Sorry
> about that.
>
>>
>> There are 20 bricks in disperse-56 and the du -hs output is like:
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 1.8M /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>> 80K /bricks/20
>>
>> I see that gluster is not writing to this disperse set. All the other
>> disperse sets are filled with 13GB but this one is empty. I see the directory
>> structure created but no files in the directories.
>> How can I fix the issue? I will try a rebalance but I don't think it
>> will write to this disperse set...
>>
>>
>>
>> On Sat, Apr 30, 2016 at 9:22 AM, Raghavendra G 
>> wrote:
>> >
>> >
>> > On Fri, Apr 29, 2016 at 12:32 AM, Serkan Çoban 
>> > wrote:
>> >>
>> >> Hi, I cannot get an answer from user list, so asking to devel list.
>> >>
>> >> I am getting [dht-diskusage.c:277:dht_is_subvol_filled] 0-v0-dht:
>> >> inodes on subvolume 'v0-disperse-56' are at (100.00 %), consider
>> >> adding more bricks.
>> >>
>> >> message in the client logs. My cluster is empty; there are only a couple of
>> >> GB of files for testing. Why does this message appear in syslog?
>> >
>> >
>> > dht uses disk usage information from backend export.
>> >
>> > 1. What is the output of du -hs ? Please get this
>> > information for each of the bricks that are part of the disperse set.
>> > 2. Once you get du information from each brick, the value seen by dht
>> > will
>> > be based on how cluster/disperse aggregates du info (basically statfs
>> > fop).
>> >
>> > The reason for 100% disk usage may be,
>> > In case of 1, backend fs might be shared by data other than brick.
>> > In case of 2, some issues with aggregation.
>> >
>> >> Is it safe to
>> >> ignore it?
>> >
>> >
>> > dht will try not to have data files on the subvol in question
>> > (v0-disperse-56). Hence the lookup cost will be two hops for files hashing
>> > to disperse-56 (note that other fops like read/write/open still have the
>> > cost of a single hop and don't suffer from this penalty). Other than that,
>> > there is no significant harm unless disperse-56 is really running out of space.
>> >
>> > regards,
>> > Raghavendra
>> >
>> >> ___
>> >> Gluster-devel mailing list
>> >> gluster-de...@gluster.org
>> >> http://www.gluster.org/mailman/listinfo/gluster-devel
>> >
>> >
>> >
>> >
>> > --
>> > Raghavendra G
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>
>
>
>
> --
> Raghavendra G
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Fwd: dht_is_subvol_filled messages on client

2016-05-03 Thread Raghavendra G
On Mon, May 2, 2016 at 11:41 AM, Serkan Çoban  wrote:

> >1. What is the output of du -hs ? Please get this
> > information for each of the bricks that are part of the disperse set.
>

Sorry. I needed df output of the filesystem containing brick. Not du. Sorry
about that.


> There are 20 bricks in disperse-56 and the du -hs output is like:
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 1.8M /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
> 80K /bricks/20
>
> I see that gluster is not writing to this disperse set. All the other
> disperse sets are filled with 13GB but this one is empty. I see the directory
> structure created but no files in the directories.
> How can I fix the issue? I will try a rebalance but I don't think it
> will write to this disperse set...
>
>
>
> On Sat, Apr 30, 2016 at 9:22 AM, Raghavendra G 
> wrote:
> >
> >
> > On Fri, Apr 29, 2016 at 12:32 AM, Serkan Çoban 
> > wrote:
> >>
> >> Hi, I cannot get an answer from user list, so asking to devel list.
> >>
> >> I am getting [dht-diskusage.c:277:dht_is_subvol_filled] 0-v0-dht:
> >> inodes on subvolume 'v0-disperse-56' are at (100.00 %), consider
> >> adding more bricks.
> >>
> >> message in the client logs. My cluster is empty; there are only a couple of
> >> GB of files for testing. Why does this message appear in syslog?
> >
> >
> > dht uses disk usage information from backend export.
> >
> > 1. What is the output of du -hs ? Please get this
> > information for each of the bricks that are part of the disperse set.
> > 2. Once you get du information from each brick, the value seen by dht
> will
> > be based on how cluster/disperse aggregates du info (basically statfs
> fop).
> >
> > The reason for 100% disk usage may be,
> > In case of 1, backend fs might be shared by data other than brick.
> > In case of 2, some issues with aggregation.
> >
> >> Is it safe to
> >> ignore it?
> >
> >
> > dht will try not to have data files on the subvol in question
> > (v0-disperse-56). Hence the lookup cost will be two hops for files hashing to
> > disperse-56 (note that other fops like read/write/open still have the cost
> > of a single hop and don't suffer from this penalty). Other than that, there
> > is no significant harm unless disperse-56 is really running out of space.
> >
> > regards,
> > Raghavendra
> >
> >> ___
> >> Gluster-devel mailing list
> >> gluster-de...@gluster.org
> >> http://www.gluster.org/mailman/listinfo/gluster-devel
> >
> >
> >
> >
> > --
> > Raghavendra G
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] changes to client port range in release 3.1.3

2016-05-03 Thread Prasanna Kalever
Hi all,

The various port ranges in glusterfs as of now:  (very high level view)


client:
  In case of bind secure:
    will start from 1023 and go down to 1; in case all these
    ports are exhausted, bind to a random port (a connect() without a bind() call)
  In case of bind insecure:
    will start from 65535 all the way down to 1

bricks/server:
  any port starting from 49152 to 65535

glusterd:
  24007


There was a recent bug: in the bind-secure case, the client sees all ports
as exhausted and connects to a random port which is, unfortunately, in the
brick port-map range. So the client successfully gets connected on a
given port. Now, without this information in glusterd (since pmap
allocation is done only at start), glusterd passes the same port to a brick,
where the brick fails to connect on it (also consider the race situation).


To solve this issue we decided to split the client and brick port ranges. [1]

As usual, the brick port-map range will be the IANA ephemeral port range, i.e.
49152-65535.
For clients, only in case the secure ports are exhausted (which is a rare
case), we decided to fall back to the registered ports, i.e. 49151 - 1024.


If we look at the ephemeral port standards:
1.  The Internet Assigned Numbers Authority (IANA) suggests the range
49152 to 65535
2.  Many Linux kernels use the port range 32768 to 61000
(more at [2])

Some of our thoughts include splitting the current brick port range (~16K)
into two (maybe ~8K each, or some other ratio) and using them for
clients and bricks, which could solve the problem but would also introduce a
limitation on scalability.

The patch [1] goes into 3.1.3; we wanted to know if there are any impacts
caused by these changes.
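
For reference, a quick way to see these ranges on a running node (a sketch,
not part of the patch):

sysctl net.ipv4.ip_local_port_range   # the kernel's local/ephemeral port range
ss -tanp | grep -i gluster            # TCP ports currently held by glusterd, bricks and clients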


[1] http://review.gluster.org/#/c/13998/
[2] https://en.wikipedia.org/wiki/Ephemeral_port


Thanks,
--
Prasanna
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users