On 09/08/2014 10:29 PM, Krishnan Parthasarathi wrote:
While the proposal for Glusterd-2.0 is making its rounds on the devel/users
lists, let me find out how the Go toolchain fares in debugging a live
application and a core file, with a dash of goroutines and channels for good
effect :-)
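(Not from the proposal, just to make the experiment concrete: a minimal,
illustrative Go program with a few goroutines exchanging work over channels,
long-running enough to attach a debugger to the live process or, with
GOTRACEBACK=crash set on Linux, to obtain a core file via SIGQUIT. All names
below are made up for the sketch.)

// workers.go - a small debugging target: start it, then attach gdb or a
// Go-aware debugger such as Delve to the running process, or send SIGQUIT
// with GOTRACEBACK=crash set to make the runtime abort and leave a core file.
package main

import (
	"fmt"
	"time"
)

// worker squares each job it receives and sends the result back.
func worker(id int, jobs <-chan int, results chan<- int) {
	for j := range jobs {
		time.Sleep(100 * time.Millisecond) // simulate work; leaves time to attach
		results <- j * j
	}
	fmt.Printf("worker %d done\n", id)
}

func main() {
	jobs := make(chan int)
	results := make(chan int)

	// A few goroutines blocked on channel operations give the debugger
	// something interesting to inspect (goroutine stacks, channel state).
	for w := 1; w <= 3; w++ {
		go worker(w, jobs, results)
	}

	go func() {
		for i := 0; i < 1000; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	for i := 0; i < 1000; i++ {
		fmt.Println(<-results)
	}
}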
On Tue, Sep 09, 2014 at 02:03:28AM -0400, Krishnan Parthasarathi wrote:
IIRC, lazy umount is used in glusterd to avoid leaving
behind an entry in /etc/mtab for every internal mount that
failed to unmount for some reason. The replace-brick command doesn't
have a requirement that the umount
Emmanuel,
Yeah, that should do. I don't think you need
to go out of the way to support lazy umount functionality
in *BSD, when it's not essential to the working of replace-brick
and other commands that use lazy umount.
~KP
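(For readers who haven't met it before, here is a small Linux-only sketch of
what a "lazy" umount means at the syscall level. This is illustrative Go, not
glusterd's actual code, which is C. MNT_DETACH is what "umount -l" uses, and
it has no direct *BSD counterpart, which is presumably why the question came
up.)

// lazyumount.go - try a normal unmount first; if the mount point is busy,
// fall back to a lazy (MNT_DETACH) unmount, which detaches the name from the
// filesystem immediately and lets the kernel clean up once the mount is no
// longer in use, so no stale mount entry is left behind.
package main

import (
	"log"
	"os"
	"syscall"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <mount-point>", os.Args[0])
	}
	target := os.Args[1]

	// Plain unmount: fails with EBUSY if anything still uses the mount.
	if err := syscall.Unmount(target, 0); err == nil {
		log.Printf("unmounted %s cleanly", target)
		return
	}

	// Lazy unmount: Linux-specific, equivalent to "umount -l".
	if err := syscall.Unmount(target, syscall.MNT_DETACH); err != nil {
		log.Fatalf("lazy unmount of %s failed: %v", target, err)
	}
	log.Printf("lazily detached %s", target)
}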
- Original Message -
On Tue, Sep 09, 2014 at 02:03:28AM
Hi all,
I ran the following test:
I created a GlusterFS replica volume (replica count 2) with two server
nodes (server A and server B), using XFS as the underlying filesystem, then
mounted the volume on a client node.
Then I shut down the network on the server A node; on the client node,
Anyone is welcome to join us in this IRC meeting on Freenode. It is
happening in #gluster-meeting, about 5 minutes from now.
Please see the agenda:
- https://public.pad.fsfe.org/p/gluster-bug-triage
Thanks,
Niels
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2014-09-09/gluster-meeting.2014-09-09-12.03.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2014-09-09/gluster-meeting.2014-09-09-12.03.txt
Log:
On Tue, Sep 09, 2014 at 03:21:59AM -0400, Krishnan Parthasarathi wrote:
Yeah, that should do. I don't think you need
to go out of the way to support lazy umount functionality
in *BSD, when it's not essential to the working of replace-brick
and other commands that use lazy umount.
Here is my
On 08/06/2014 06:26 PM, Justin Clift wrote:
- Original Message -
Did we get to break the tie? :)
Yep. Latest results are:
* 5:30 PM IST / 12:00 UTC - 47 votes (52%)
* 6:30 PM IST / 13:00 UTC - 9 votes (10%)
* 7:30 PM IST / 14:00 UTC - 12 votes (13%)
* 8:30 PM IST /
Just an FYI, slave22 in Rackspace went weird and couldn't
be rebooted. So a new VM has been created and put in
its place.
If you get warnings about a changed ssh key when logging in
to the new slave22 VM manually, in this instance it's to
be expected. :)
+ Justin
Hi Vijay,
There's something wrong with the release-3.6 branch. ;)
Several of the Rackspace regression VMs have somehow been
killed over the last few hours, and need to be rebuilt.
Not sure exactly what's being done to them, as they stop
responding to ssh, and the best I can do is get them