Re: [Gluster-users] Gluster Issue

2016-01-07 Thread ABHISHEK PALIWAL
Hello Kaushal,

The following is the glusterd.service file from Gluster 3.7.6:

+++

[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=@localstatedir@/run/glusterd.pid
LimitNOFILE=65536
Environment="LOG_LEVEL=INFO"
EnvironmentFile=-@sysconfdir@/sysconfig/glusterd
ExecStart=@prefix@/sbin/glusterd -p @localstatedir@/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS
KillMode=process

[Install]
WantedBy=multi-user.target

++

But I saw in some blogs that Requires=rpcbind.service is necessary; could
its absence be causing this issue?
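
For reference, a minimal sketch of how the [Unit] section might look with
that dependency added (this is only an assumption based on those blogs, not
a confirmed fix):

+++

[Unit]
Description=GlusterFS, a clustered file-system server
Requires=rpcbind.service
After=network.target rpcbind.service
Before=network-online.target

+++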

I am very new to Gluster, so I do not have much knowledge about it.

I also tried installing other RPMs, but they do not cause systemd to
reload.
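
One way to check which package scriptlet triggers the reload is to inspect
the install scripts of the RPM (a sketch; the package name glusterfs-server
is an assumption, adjust it to the RPM actually installed):

+++

# Show the %pre/%post/%preun/%postun scriptlets of the installed package.
# Packages that ship systemd unit files commonly run
# "systemctl daemon-reload" from these scriptlets.
rpm -q --scripts glusterfs-server

+++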

Regards,
Abhishek

On Fri, Jan 8, 2016 at 12:23 PM, ABHISHEK PALIWAL wrote:

> Hello Kaushal,
>
> May be the following will help you:
>
> I am installing glusterfs as an RPM on PowerPC, and after installation it
> causes the system daemon to reload, producing the following syslog entry:
>
> info systemd[1]: Reloading.
>
> Because of the system daemon reloading, all other userspace cgroup
> applications shut down abnormally, which affects the system.
>
> But when I use glusterfs pre-installed in the rootfs, it works fine.
>
> Please suggest a proper solution to this problem.
>
>
> On Fri, Jan 8, 2016 at 12:13 PM, Kaushal M  wrote:
>
>> On Fri, Jan 8, 2016 at 12:02 PM, ABHISHEK PALIWAL wrote:
>> > Is there anyone who can help me with this? It occurs on gluster 3.7.6
>> > for the PowerPC architecture.
>> >
>> > Regards,
>> > Abhishek
>> >
>> > On Wed, Jan 6, 2016 at 3:45 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
>> > wrote:
>> >>
>> >> Hello,
>> >>
>> >> I am getting the following log message when removing gluster from the
>> >> rootfs and installing gluster as an RPM:
>> >>
>> >>
>> >> info systemd[1]: Reloading.
>>
>> This is just a systemd log and not even related to glusterfs. There is
>> nothing we can do with this.
>>
>> >>
>> >> Due to this trace there are errors in the system, and the cgroup
>> >> functionality is getting affected.
>> >>
>> >>
>> >> When I use gluster from the rootfs, it works fine.
>> >>
>> >> Please provide a solution for this issue.
>> >>
>> >> Thanks in advance.
>> >>
>> >> Regards,
>> >> Abhishek
>> >>
>> >>
>> >
>>
>> Could you describe your problem better? Your mails don't provide any
>> information to work with.
>>
>> >
>> >
>> > --
>> >
>> >
>> >
>> >
>> > Regards
>> > Abhishek Paliwal
>> >
>> > ___
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org
>> > http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster Issue

2016-01-07 Thread ABHISHEK PALIWAL
Hello Kaushal,

May be the following will help you:

I am installing glusterfs as an RPM on PowerPC, and after installation it
causes the system daemon to reload, producing the following syslog entry:

info systemd[1]: Reloading.

Because of the system daemon reloading, all other userspace cgroup
applications shut down abnormally, which affects the system.

But when I use glusterfs pre-installed in the rootfs, it works fine.

Please suggest a proper solution to this problem.


On Fri, Jan 8, 2016 at 12:13 PM, Kaushal M  wrote:

> On Fri, Jan 8, 2016 at 12:02 PM, ABHISHEK PALIWAL wrote:
> > Is there anyone who can help me with this? It occurs on gluster 3.7.6
> > for the PowerPC architecture.
> >
> > Regards,
> > Abhishek
> >
> > On Wed, Jan 6, 2016 at 3:45 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
> > wrote:
> >>
> >> Hello,
> >>
> >> I am getting the following log message when removing gluster from the
> >> rootfs and installing gluster as an RPM:
> >>
> >>
> >> info systemd[1]: Reloading.
>
> This is just a systemd log and not even related to glusterfs. There is
> nothing we can do with this.
>
> >>
> >> Due to this trace there are errors in the system, and the cgroup
> >> functionality is getting affected.
> >>
> >>
> >> When I use gluster from the rootfs, it works fine.
> >>
> >> Please provide a solution for this issue.
> >>
> >> Thanks in advance.
> >>
> >> Regards,
> >> Abhishek
> >>
> >>
> >
>
> Could you describe your problem better? Your mails don't provide any
> information to work with.
>
> >
> >
> > --
> >
> >
> >
> >
> > Regards
> > Abhishek Paliwal
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster Issue

2016-01-07 Thread Kaushal M
On Fri, Jan 8, 2016 at 12:02 PM, ABHISHEK PALIWAL wrote:
> Is there anyone who can help me with this? It occurs on gluster 3.7.6 for
> the PowerPC architecture.
>
> Regards,
> Abhishek
>
> On Wed, Jan 6, 2016 at 3:45 PM, ABHISHEK PALIWAL wrote:
>>
>> Hello,
>>
>> I am getting the following log message when removing gluster from the
>> rootfs and installing gluster as an RPM:
>>
>>
>> info systemd[1]: Reloading.

This is just a systemd log and not even related to glusterfs. There is
nothing we can do with this.

>>
>> Due to this trace there are errors in the system, and the cgroup
>> functionality is getting affected.
>>
>>
>> When I use gluster from the rootfs, it works fine.
>>
>> Please provide a solution for this issue.
>>
>> Thanks in advance.
>>
>> Regards,
>> Abhishek
>>
>>
>

Could you describe your problem better? Your mails don't provide any
information to work with.

>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster Issue

2016-01-07 Thread ABHISHEK PALIWAL
Is there anyone who can help me with this? It occurs on gluster 3.7.6 for
the PowerPC architecture.

Regards,
Abhishek

On Wed, Jan 6, 2016 at 3:45 PM, ABHISHEK PALIWAL wrote:

> Hello,
>
> I am getting the following log message when removing gluster from the
> rootfs and installing gluster as an RPM:
>
>
> info systemd[1]: Reloading.
>
> Due to this trace there are errors in the system, and the cgroup
> functionality is getting affected.
>
>
> When I use gluster from the rootfs, it works fine.
>
> Please provide a solution for this issue.
>
> Thanks in advance.
>
> Regards,
> Abhishek
>
>
>


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Unable To Heal On 3.7 Branch With Arbiter

2016-01-07 Thread Ravishankar N

On 01/08/2016 04:08 AM, Kyle Harris wrote:


Brick kvm:/export/brick1

Status: Transport endpoint is not connected

This seems to indicate that this brick is down. I don't see it being
shown in the output of `gluster volume status` either. Could you check
that? In all probability those 12 entries need to be healed to this
(arbiter) brick.
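
A hedged sketch of the commands that could confirm and fix this (the volume
name gv0 is taken from your output; as far as I know, `gluster volume start
... force` only starts brick processes that are down, leaving running
bricks untouched):

===

# Check whether the arbiter brick process on kvm is online
gluster volume status gv0

# If the kvm brick shows Online: N, start the down brick process
gluster volume start gv0 force

===

Once the brick is back up, `gluster volume heal gv0` should be able to sync
the 12 pending entries to it.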


-Ravi
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users



[Gluster-users] Unable To Heal On 3.7 Branch With Arbiter

2016-01-07 Thread Kyle Harris
Hello,



I have a rather odd situation I'm hoping someone can help me out with.  I
have a 2-node gluster replica with an arbiter, running on the 3.7 branch.
I don't appear to have a split-brain, yet I am unable to heal the cluster
after a power failure.  Perhaps someone can tell me how to fix this?  I
don't see much in the logs that may help.  Here is some command output
that might be helpful:



gluster volume info gv0:

Volume Name: gv0

Type: Replicate

Volume ID: 14e7bb9c-aa5e-4386-8dd2-83a88d93dc54

Status: Started

Number of Bricks: 1 x 3 = 3

Transport-type: tcp

Bricks:

Brick1: server1:/export/brick1

Brick2: server2:/export/brick1

Brick3: kvm:/export/brick1

Options Reconfigured:

nfs.acl: off

performance.readdir-ahead: on

performance.quick-read: off

performance.read-ahead: off

performance.io-cache: off

performance.stat-prefetch: off

cluster.eager-lock: enable

network.remote-dio: enable


---

gluster volume status gv0 detail:

Status of volume: gv0

--

Brick: Brick server1:/export/brick1

TCP Port : 49152

RDMA Port: 0

Online   : Y

Pid  : 4409

File System  : ext3

Device   : /dev/sdb1

Mount Options: rw

Inode Size   : 128

Disk Space Free  : 1.7TB

Total Disk Space : 1.8TB

Inode Count  : 244203520

Free Inodes  : 244203413

--

Brick: Brick server2:/export/brick1

TCP Port : 49152

RDMA Port: 0

Online   : Y

Pid  : 4535

File System  : ext3

Device   : /dev/sdb1

Mount Options: rw

Inode Size   : 128

Disk Space Free  : 1.7TB

Total Disk Space : 1.8TB

Inode Count  : 244203520

Free Inodes  : 244203405


---

Why doesn’t this accomplish anything?

gluster volume heal gv0:

Launching heal operation to perform index self heal on volume gv0 has been
successful

Use heal info commands to check status


---

Or this?

gluster volume heal gv0 full:

Launching heal operation to perform full self heal on volume gv0 has been
successful

Use heal info commands to check status


---

gluster volume heal gv0 info split-brain:

Brick server1:/export/brick1

Number of entries in split-brain: 0



Brick server2:/export/brick1

Number of entries in split-brain: 0



Brick kvm:/export/brick1

Status: Transport endpoint is not connected



---

Why can't I get these to heal?

gluster volume heal gv0 info:

Brick server1:/export/brick1

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/8c524ed9-e382-40cd-9361-60c23a2c1ae2.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/0b16f938-e859-41e3-bb33-fefba749a578.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/715ddb6c-67af-4047-9fa0-728019b49d63.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/b0cdf43c-7e6b-44bf-ab2d-efb14e9d2156.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/d2873b74-f6be-43a9-bdf1-276761e3e228.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/asdf

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/940ee016-8288-4369-9fb8-9c64cb3af256.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/72a33878-59f7-4f6e-b3e1-e137aeb19ced.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/03070877-9cf4-4d55-a66c-fbd3538eedb9.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/c2645723-efd9-474b-8cce-fe07ac9fbba9.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/930196aa-0b85-4482-97ab-3d05e9928884.vhd

Number of entries: 12



Brick server2:/export/brick1

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/03070877-9cf4-4d55-a66c-fbd3538eedb9.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/d2873b74-f6be-43a9-bdf1-276761e3e228.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/715ddb6c-67af-4047-9fa0-728019b49d63.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/930196aa-0b85-4482-97ab-3d05e9928884.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/c2645723-efd9-474b-8cce-fe07ac9fbba9.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/asdf

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/940ee016-8288-4369-9fb8-9c64cb3af256.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/0b16f938-e859-41e3-bb33-fefba749a578.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/8c524ed9-e382-40cd-9361-60c23a2c1ae2.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/72a33878-59f7-4f6e-b3e1-e137aeb19ced.vhd

/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/b0cdf43c-7e6b-44bf-ab2d-efb14e9d2156.vhd

Number of entries: 12



Brick kvm:/export/brick1

Status: Transport endpoint is not connected

---

Thank you.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster Issue

2016-01-07 Thread ABHISHEK PALIWAL
Hello,

I am getting the following log message when removing gluster from the
rootfs and installing gluster as an RPM:


info systemd[1]: Reloading.

Due to this trace there are errors in the system, and the cgroup
functionality is getting affected.


When I use gluster from the rootfs, it works fine.

Please provide a solution for this issue.

Thanks in advance.

Regards,
Abhishek
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-07 Thread Oleksandr Natalenko
OK, I've patched GlusterFS v3.7.6 with 43570a01 and 5cffb56b (the most recent 
revisions) and NFS-Ganesha v2.3.0 with 8685abfc (most recent revision too).

On traversing a GlusterFS volume with many files in one folder via an NFS
mount, I get an assertion failure:

===
ganesha.nfsd: inode.c:716: __inode_forget: Assertion `inode->nlookup >=
nlookup' failed.
===

I used GDB on the NFS-Ganesha process to get the relevant stacktraces (a
sketch of the capture commands follows the gist links):

1. short stacktrace of failed thread:

https://gist.github.com/7f63bb99c530d26ded18

2. full stacktrace of failed thread:

https://gist.github.com/d9bc7bc8f6a0bbff9e86

3. short stacktrace of all threads:

https://gist.github.com/f31da7725306854c719f

4. full stacktrace of all threads:

https://gist.github.com/65cbc562b01211ea5612
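
For reference, a sketch of how stacktraces like these can be captured with
GDB (assuming the process name ganesha.nfsd from the assertion above):

===

# Attach to the running NFS-Ganesha daemon
gdb -p $(pidof ganesha.nfsd)

# Inside GDB: backtrace of the current thread
(gdb) bt

# Full backtraces of all threads
(gdb) thread apply all bt full

===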

GlusterFS volume configuration:

https://gist.github.com/30f0129d16e25d4a5a52

ganesha.conf:

https://gist.github.com/9b5e59b8d6d8cb84c85d

How I mount NFS share:

===
mount -t nfs4 127.0.0.1:/mail_boxes /mnt/tmp -o 
defaults,_netdev,minorversion=2,noac,noacl,lookupcache=none,timeo=100
===

On Thursday, 7 January 2016 at 12:06:42 EET, Soumya Koduri wrote:
> Entries_HWMark = 500;


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gdeploy v.2 - roadmap

2016-01-07 Thread Sachidananda URS
Hi All,

We are making some changes to the gdeploy design. With the new changes, it
is much easier to develop modules without cluttering the core
functionality.

Documentation on the new design can be found at:

https://github.com/gluster/gdeploy/blob/master/doc/gdeploy-2

And development can be tracked at:

https://github.com/gluster/gdeploy/tree/2.0

Any comments and suggestions are welcome.

-sac
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users