On 21 April 2016 at 15:01, Bishoy Mikhael wrote:
> you don’t need to create a directory and set the extended attributes
> manually anymore if you are using Gluster 3.7.x.
>
That's really good to know, thanks.
> But the question is, why do you need the replace brick command if you are using ZFS?!
you don’t need to create a directory and set the extended attributes manually
anymore if you are using Gluster 3.7.x.
But the question is, why do you need the replace brick command if you are using
ZFS?!
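(i.e. with ZFS a failed disk can usually be swapped out underneath the brick at the pool level, roughly like this - pool and device names here are placeholders:
# zpool replace tank sdb sdc
The brick path and the gluster volume layout stay untouched.)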
—Bishoy
> On Apr 20, 2016, at 9:50 PM, Lindsay Mathieson
>
Are the steps for replacing a brick still current here?:
https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-faulty-brick
Do we still need to mount the volume, create/delete a directory, and set some
extended attributes?
Or can we just use the replace-brick command?
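i.e. just something like this (volume and brick paths below are placeholders, I haven't tried it yet):
gluster volume replace-brick datastore vnb:/bricks/old vnb:/bricks/new commit force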
Hi,
I'm running gluster 3.6.9 on ubuntu 14.04 on a single test server (under
Vagrant and VirtualBox), with 4 filesystems (in addition to the root), 2 of
which are xfs directly on the disk, and the other 2 are xfs on an LVM config -
the scenario I'm testing for is migration of our production
On 04/20/2016 10:31 PM, Dj Merrill wrote:
> On 04/20/2016 12:06 PM, Atin Mukherjee wrote:
>>> Curious, is there any reason why this isn't automatically updated when
>>> managing the updates with "yum update"?
>> This is still manual as we want to give users the choice of whether they want
>> to use a new feature or not.
On 04/20/2016 12:06 PM, Atin Mukherjee wrote:
>> Curious, is there any reason why this isn't automatically updated when
>> managing the updates with "yum update"?
> This is still manual as we want to give users the choice of whether they want
> to use a new feature or not. If they want, then a manual op-version bump is needed.
-Atin
Sent from one plus one
On 20-Apr-2016 9:22 pm, "Dj Merrill" wrote:
>
> On 04/19/2016 05:42 PM, Atin Mukherjee wrote:
> >> After a brief search, I discovered the following solution for RHGS:
> >> https://access.redhat.com/solutions/2050753 It suggests updating the
> >> op-version of the cluster after the upgrade.
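(For anyone else hitting this after a yum update: as far as I know the bump itself is a single command; 30710 below is only an example value, the exact number depends on the version you upgraded to:
gluster volume set all cluster.op-version 30710
)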
david.h...@mariadb.com
--
David Hill
InfiniDB Development/Customer Support
MariaDB Corporation
http://www.mariadb.com
On 04/19/2016 05:42 PM, Atin Mukherjee wrote:
>> After a brief search, I discovered the following solution for RHGS:
>> https://access.redhat.com/solutions/2050753 It suggests updating the
>> op-version of the cluster after the upgrade. There isn't any evidence of
>> this procedure in the
On 21/04/2016 1:22 AM, Krutika Dhananjay wrote:
Any heal in progress?
-Krutika
A zfs scrub is going on too, which might be affecting things.
It's nearly 2am here and I'm making mistakes, so I'll go to bed, time for
some Zzzz's and hope it's all fixed itself in the morning :)
One of those days, two
Lots of people at Vault this week!
Last month's newsletter included a list of all of the Gluster related
talks, so if you're here, come by the Red Hat booth and say hi!
New things:
3.7.11 Released:
https://www.gluster.org/pipermail/gluster-devel/2016-April/049155.html
We've got a new Events
On 21/04/2016 1:22 AM, Krutika Dhananjay wrote:
Any heal in progress?
Yes, and it drops to normal (<8%) once the heal stops.
Didn't use to be this extreme though. And the heal seems *much* slower.
I reverted to 3.7.10, didn't make any difference.
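(I'm just watching it with the usual heal info command - 'datastore' below is a placeholder for my volume name:
gluster volume heal datastore info
)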
--
Lindsay Mathieson
Any heal in progress?
-Krutika
On Wed, Apr 20, 2016 at 8:17 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> 3.7.11 hasn't been going so well :( first glusterfsd crashed and
> zombified, requiring a reboot. Now it's hogging the CPU; at one time it was
> up to 1000%.
>
> Is it safe to revert to 3.7.10?
3.7.11 hasn't been going so well :( first glusterfsd crashed and
zombified, requiring a reboot. Now it's hogging the CPU; at one time it
was up to 1000%.
Is it safe to revert to 3.7.10?
--
Lindsay Mathieson
On 21/04/2016 12:17 AM, Joe Julian wrote:
A zombied glusterfsd means it's stuck in a kernel operation, likely
some io wait that was hung in the kernel. Since there's no way to
clear that from the kernel, the only option was to reboot.
Yah, I rebooted. Had a failed disk in a ZFS pool that
The only automated way I can think of would be to add a udev rule that
would force start the volume associated with that disk.
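Very rough sketch of the idea (untested; the UUID and the helper script below are made up - the script would remount the brick filesystem and then do a forced volume start):
# /etc/udev/rules.d/99-gluster-brick.rules
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="<uuid-of-brick-fs>", RUN+="/usr/local/bin/remount-and-start-brick.sh"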
On 04/20/2016 01:51 AM, jayakrishnan mm wrote:
Hi,
I am reinserting the HDD on a gluster server after some time. When the
brick is removed, the process gets killed
A zombied glusterfsd means it's stuck in a kernel operation, likely some
io wait that was hung in the kernel. Since there's no way to clear that
from the kernel, the only option was to reboot.
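(You can usually confirm that by looking for a 'D' - uninterruptible sleep - in the STAT column, e.g.:
ps -eo pid,stat,wchan,cmd | grep '[g]lusterfsd'
)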
On 04/19/2016 11:07 PM, Lindsay Mathieson wrote:
A brick has died on node vnb of my cluster.
Hi.
I've been trying to find out what's going on for several days now, but
can't find anything myself, so I'm asking for some help from the GlusterFS
experts ;-)
I'm running 3 replicated gluster volumes between 2 nodes (each node
hosting 3 bricks: one per volume). Components involved:
- CentOS 7.0
The logs for this week's meeting are available at the following links:
- Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-20/gluster_community_meeting_20-apr-2016.2016-04-20-12.01.html
- Minutes (text):
Here are the steps that I do in detail, and the relevant output from the bricks:
I am using below command for volume creation:
gluster volume create v0 disperse 20 redundancy 4 \
1.1.1.{185..204}:/bricks/02 \
1.1.1.{205..224}:/bricks/02 \
1.1.1.{225..244}:/bricks/02 \
1.1.1.{185..204}:/bricks/03 \
On 04/20/2016 02:21 PM, jayakrishnan mm wrote:
Hi,
I am reinserting the HDD on a gluster server after some time. When the
brick is removed, the brick process gets killed by itself after some time.
How can I make the brick process restart automatically after
I remount it?
Do
Hi,
I am reinserting the HDD on a gluster server after some time. When the
brick is removed, the brick process gets killed by itself after some time. How
can I make the brick process restart automatically after I remount
it?
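(The only manual way I know of is a forced volume start after remounting, e.g. with a placeholder volume name:
gluster volume start myvol force
but I'd like this to happen automatically.)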
-JK
On 20 April 2016 at 17:55, Lindsay Mathieson
wrote:
> Data is safe though - RAID1 mirror
Scratch that, 2nd drive in mirror failed :(
Going to need guidance on replacing a brick :)
Trial by fire, this test ...
--
Lindsay
On 20 April 2016 at 16:22, Lindsay Mathieson
wrote:
> Already tried all those, it's a zombie Linux process with parent pid 1,
> so can't be killed short of a reboot.
>
> It seems to have released the socket handle now (49156) but the brick
> still isn't connecting to
Hi Serkan,
On 19/04/16 15:16, Serkan Çoban wrote:
I assume that gluster is used to store the intermediate files before the reduce
phase
Nope, gluster is the destination of the distcp command: hadoop distcp -m
50 http://nn1:8020/path/to/folder file:///mnt/gluster
This runs maps on the datanodes, which
On 20 April 2016 at 16:15, Bishoy Mikhael wrote:
> try restarting glusterd:
> # service glusterd restart
>
> if it didn't work, try killing the glusterfsd PID(s):
> # kill $(ps -ef | grep glusterfsd | awk '{print $2}')
> then restart glusterd:
> # service glusterd restart
try restarting glusterd:
# service glusterd restart
if it didn't work, try killing the glusterfsd PID(s):
# kill $(ps -ef | grep glusterfsd | awk '{print $2}')
then restart glusterd:
# service glusterd restart
PS: killing glusterfsd that way will kill all the bricks on that node, but
restarting glusterd will bring them back up.
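(A variant of the kill that avoids picking up the grep process itself would be:
# pkill glusterfsd
or
# kill $(pgrep glusterfsd)
)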
A brick has died on node vnb of my cluster. Unfortunately it has left a
zombie glusterfsd process which is holding the brick socket so I can't
restart it. Any advice on how to work round that asap would be
appreciated.
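(For reference I'm checking what still holds the brick port - 49156 in my case - with e.g.:
ss -tlnp | grep 49156
or
lsof -i :49156
and it's the old glusterfsd.)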
Tail of brick logging:
[2016-04-20 05:41:37.325846] I [dict.c:473:dict_get]