On Wed, July 31, 2013 9:12 pm, Bharata B Rao wrote:
> On Fri, Jul 26, 2013 at 10:56 PM, Anand Avati wrote:
>>
>> Looking forward to hearing from you!
>
> Hi Avati,
>
> As many said on the thread, frequent releases will be good.
>
> Another aspect which we felt could be improved is the timely review
On Fri, Jul 26, 2013 at 10:56 PM, Anand Avati wrote:
>
> Looking forward to hearing from you!
Hi Avati,
As many said on the thread, frequent releases will be good.
Another aspect which we felt could be improved is the timely review
for the patches. Though this is the responsibility of the community
In particular:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Self_heal.html
shows output being generated from these commands, which I am not seeing.
Thanks!
Joel
On Wed, Jul 31, 2013 at 3:24 PM, Joel Young wrote:
In particular, I would like to enable client side caching.
Where are the docs that tell which files need to be edited to enable a
translator for a particular volume? What commands need to be done to
restart the client?
I am not finding any good docs on how, as a user, to work with translators.
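Client-side caching in GlusterFS lives in translators such as io-cache and quick-read, and on 3.4 these are normally toggled per volume from the CLI rather than by hand-editing volfiles. A minimal sketch of what that could look like for the volume named home (option names should be double-checked against 'gluster volume set help'; the mount point and server name below are placeholders):
# enable and size the io-cache translator in the client graph
gluster volume set home performance.io-cache on
gluster volume set home performance.cache-size 256MB
gluster volume set home performance.cache-refresh-timeout 10
# quick-read serves small-file reads from cache
gluster volume set home performance.quick-read on
# clients usually pick up the regenerated client volfile automatically;
# remounting is the blunt fallback if they do not
umount /mnt/home && mount -t glusterfs server1:/home /mnt/home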
Using gluster 3.4.0, how do I monitor the heal status for a volume?
In the gluster cli, I type:
"volume heal home info" and it returns nothing, just bringing up the
"gluster>" prompt after waiting a while. Does this mean that the system
thinks nothing is wrong?
If I type "volume heal home", aga
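For reference, here is the same check spelled out from the shell, together with the related subcommands the 3.4 CLI accepts; when nothing needs healing, each brick should be listed with an entry count of zero rather than no output at all:
gluster volume heal home info              # files currently needing heal
gluster volume heal home info healed       # recently healed files
gluster volume heal home info heal-failed  # files the daemon could not heal
gluster volume heal home info split-brain  # files in split-brain
gluster volume status home                 # confirm the Self-heal Daemon shows as online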
Ok Folks, Thanks for helping out. I kicked off my users and forced a
reboot and it looks like it came back up fine.
On Wed, Jul 31, 2013 at 11:45 AM, Joe Julian wrote:
> On 07/31/2013 11:42 AM, Joel Young wrote:
>>
>> On Wed, Jul 31, 2013 at 10:29 AM, Joe Julian wrote:
>>>
>>> To kill a zombie process, you have to kill the parent process.
On 31.07.2013 19:07, Nux! wrote:
On 31.07.2013 18:14, Anand Avati wrote:
On Wed, Jul 31, 2013 at 8:57 AM, Nux! wrote:
On 31.07.2013 16:21, Nux! wrote:
On 31.07.2013 12:29, Nux! wrote:
Hello,
I'm trying to use a volume on Windows via NFS and every operation is
very slow and in the nfs.log
On 07/31/2013 11:42 AM, Joel Young wrote:
On Wed, Jul 31, 2013 at 10:29 AM, Joe Julian wrote:
To kill a zombie process, you have to kill the parent process.
ps -p 23744 -o ppid=
If the result is 1, then you are stuck rebooting. Otherwise, kill that
process.
Thanks Joe. Unfortunately the par
On Wed, Jul 31, 2013 at 10:29 AM, Joe Julian wrote:
> To kill a zombie process, you have to kill the parent process.
>
> ps -p 23744 -o ppid=
>
> If the result is 1, then you are stuck rebooting. Otherwise, kill that
> process.
Thanks Joe. Unfortunately the parent pid is indeed 1 and I'm not
lik
On 31.07.2013 18:14, Anand Avati wrote:
On Wed, Jul 31, 2013 at 8:57 AM, Nux! wrote:
On 31.07.2013 16:21, Nux! wrote:
On 31.07.2013 12:29, Nux! wrote:
Hello,
I'm trying to use a volume on Windows via NFS and every operation is
very slow and in the nfs.log I see the following:
[2013-07-31
We've been monitoring gluster with an icinga/nagios plugin we wrote that looks
at gluster volume status output (using --xml) to check that the volume and all
bricks are in the 'started' state. Additionally, for 'replicate' volumes, we
look at 'gluster volume heal info' and 'gluster volume heal i
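A stripped-down version of that style of check, for anyone who wants the idea without the full plugin (the <status> element name and the meaning of 1 as "online" are taken from the 3.4 --xml output and worth verifying on your version; exit codes follow the usual nagios convention):
#!/bin/sh
# minimal sketch, not the actual plugin: flag any brick/service whose status is not 1
VOL=myvol
XML=$(gluster volume status "$VOL" --xml) || { echo "CRITICAL: gluster CLI failed"; exit 2; }
if echo "$XML" | grep -o '<status>[0-9]*</status>' | grep -qv '<status>1</status>'; then
    echo "CRITICAL: $VOL has a brick or service not in the started state"
    exit 2
fi
echo "OK: all bricks and services of $VOL report started"
exit 0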
To kill a zombie process, you have to kill the parent process.
ps -p 23744 -o ppid=
If the result is 1, then you are stuck rebooting. Otherwise, kill that process.
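Put into a small snippet, the same recipe looks like this (23744 is the stuck glusterfsd pid from earlier in this thread):
parent=$(ps -p 23744 -o ppid= | tr -d ' ')
if [ "$parent" = "1" ]; then
    echo "parent is init; only a reboot will clear the zombie"
else
    kill "$parent"   # once the parent exits, init reaps the zombie
fi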
Deleting a filename does not close the named pipe, so that caused the failure
below.
Joel Young wrote:
>On Tue, Jul 30, 2013 at 1
On Wed, Jul 31, 2013 at 8:57 AM, Nux! wrote:
> On 31.07.2013 16:21, Nux! wrote:
>
>> On 31.07.2013 12:29, Nux! wrote:
>>
>>> Hello,
>>> I'm trying to use a volume on Windows via NFS and every operation is
>>> very slow and in the nfs.log I see the following:
>>> [2013-07-31 11:26:22.644794] W [so
Hi gurus,
I'm back with more shenanigans.
I've been testing a setup with four machines, two drives in each. While
running an rsync to back up a bunch of files to the volume, I simulated a
drive failure by forcing one of the drives to remount read-only. I then
took Joe Julian's advice and brought
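For anyone trying to reproduce this, the failure was simulated roughly along these lines (the brick mount point is a placeholder; remount,ro is just one way to fake a dying drive):
# flip the brick filesystem to read-only to mimic the failure
mount -o remount,ro /bricks/brick1
# ... let the rsync run against the volume, then restore the brick
mount -o remount,rw /bricks/brick1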
On 31.07.2013 17:02, Vijay Bellur wrote:
On 07/31/2013 09:27 PM, Nux! wrote:
On 31.07.2013 16:21, Nux! wrote:
On 31.07.2013 12:29, Nux! wrote:
Hello,
I'm trying to use a volume on Windows via NFS and every operation is
very slow and in the nfs.log I see the following:
[2013-07-31 11:26:22.64
On Tue, Jul 30, 2013 at 10:49 PM, Kaushal M wrote:
> I think I've found the problem. The problem is not with the brick port, but
> instead with the unix domain socket used for communication between glusterd and glusterfsd.
Makes sense.
> So this is most likely due to the zombie process 23744 sti
On 07/31/2013 09:27 PM, Nux! wrote:
On 31.07.2013 16:21, Nux! wrote:
On 31.07.2013 12:29, Nux! wrote:
Hello,
I'm trying to use a volume on Windows via NFS and every operation is
very slow and in the nfs.log I see the following:
[2013-07-31 11:26:22.644794] W [socket.c:514:__socket_rwv]
0-socket.nfs-server: writev failed (Invalid argument)
On 31.07.2013 16:21, Nux! wrote:
On 31.07.2013 12:29, Nux! wrote:
Hello,
I'm trying to use a volume on Windows via NFS and every operation is
very slow and in the nfs.log I see the following:
[2013-07-31 11:26:22.644794] W [socket.c:514:__socket_rwv]
0-socket.nfs-server: writev failed (Invalid argument)
On 31.07.2013 12:29, Nux! wrote:
Hello,
I'm trying to use a volume on Windows via NFS and every operation is
very slow and in the nfs.log I see the following:
[2013-07-31 11:26:22.644794] W [socket.c:514:__socket_rwv]
0-socket.nfs-server: writev failed (Invalid argument)
[2013-07-31 11:26:34.73
Hello,
I'm trying to use a volume on Windows via NFS and every operation is
very slow and in the nfs.log I see the following:
[2013-07-31 11:26:22.644794] W [socket.c:514:__socket_rwv]
0-socket.nfs-server: writev failed (Invalid argument)
[2013-07-31 11:26:34.738955] W [socket.c:514:__socket_
- Original Message -
> From: "Anand Avati"
> To: "Balamurugan Arumugam"
> Cc: "gluster-users", "Pablo", "Gluster Devel"
>
> Sent: Wednesday, July 31, 2013 12:27:57 PM
> Subject: Re: [Gluster-devel] [Gluster-users] new glusterfs logging framework
>
>
> On Tue, Jul 30, 2013 at 11:
I guess I found a solution:
1. Add an additional virtual interface with its own IP
2. Create two folders to export
3. volume create test replica 2 IP1:/folder1 IP2:/folder2
It seems to work, so I only need one machine... and if necessary I could add
another brick from another machine.
Bye
Gregor
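Spelled out with placeholder addresses and paths, the recipe above comes down to something like this (the CLI may still complain that both replicas are on one server and ask for 'force' to be appended):
# second IP on the same NIC (addresses are examples)
ip addr add 192.168.1.11/24 dev eth0 label eth0:1
mkdir -p /export/folder1 /export/folder2
gluster volume create test replica 2 192.168.1.10:/export/folder1 192.168.1.11:/export/folder2
gluster volume start test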
On 28 Jul 2013, at 19:16, Anand Avati wrote:
> You might want to give the native client another shot by setting "option
> max-file-size 128KB" in the quick-read section of the client volfile in
> /var/lib/glusterd//*fuse*.vol (there will be two). Unfortunately this
> is not settable through the
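For illustration, after such an edit the quick-read section of the fuse volfile would read roughly as below; the volume and subvolume names here are invented, and only the max-file-size line is the actual change being suggested. A hand-edited volfile is only picked up on remount and may be regenerated by a later 'gluster volume set', so treat it as a temporary experiment rather than a persistent setting.
volume home-quick-read
    type performance/quick-read
    option max-file-size 128KB
    subvolumes home-io-cache
end-volume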
On 07/31/2013 12:51 PM, Kaushal M wrote:
On Wed, Jul 31, 2013 at 12:23 AM, John Mark Walker wrote:
Yes, please :)
John,
Is there a location for collecting the known issues with glusterfs-3.4?
They can be added to the known issues section in the release notes (part of the doc/ folder in git).
Thanks
On Wed, Jul 31, 2013 at 12:23 AM, John Mark Walker wrote:
>
> Yes, please :)
>
John,
Is there a location for collecting the known issues with glusterfs-3.4?
~kaushal
I have a similar requirement in my setup as well.
Please suggest the correct way to ensure that replication is working.
Sejal
From: Matthew Sacks
To: gluster-users@gluster.org
Date: 31-07-2013 03:21
Subject: [Gluster-users] monitoring gluster replication status
Sent by: gluster-users-boun...@gluster.org