Hi,
Sorry for the delay, it took a long time to reproduce. But currently we
have the same issue again. It happened after resetting all nodes.
Quorum is enabled. Logs and details below.
gluster volume heal vm_storage_volume info split-brain
Gathering list of split brain entries on volume
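(For reference, a hedged sketch of the manual resolution procedure commonly
documented for this release line, in case the listed entries need to be fixed
by hand; the brick path, file name and gfid below are placeholders, not values
from this report:)
# on the brick holding the bad copy: note the gfid, then remove the file
# and its .glusterfs hard link, and trigger a heal
getfattr -d -m . -e hex /bricks/brick1/vm_storage/images/vm01.img
rm /bricks/brick1/vm_storage/images/vm01.img
rm /bricks/brick1/vm_storage/.glusterfs/ab/cd/abcd1234-0000-0000-0000-000000000000
gluster volume heal vm_storage_volume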
Gluster gets a good mention in this article, which is also good for explaining
what you do for a living to aged relatives :)
https://medium.com/message/how-paper-magazines-web-engineers-scaled-kim-kardashians-back-end-sfw-6367f8d37688
Marcus
--
Marcus Bointon
Technical Director, Synchromedia
Hello,
I tried using the Linux kernel's NFS server by exporting mount.glusterfs mounts,
and, man, it didn't go well either (similar stale file handle issues from
time to time). Actually, it's not just glusterfs: all FUSE-based
file systems run into problems with the kernel's NFS server.
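(For what it's worth, a minimal sketch of the kind of /etc/exports entry this
needs, assuming the FUSE mount is at /mnt/glusterfs; knfsd requires an explicit
fsid= for FUSE-backed exports, and the fsid value and client network here are
only examples:)
# /etc/exports
/mnt/glusterfs  192.168.0.0/24(rw,sync,no_subtree_check,fsid=14)
# then re-export:
exportfs -ra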
Hi Pranith,
Yes, the very same (chalcogen_eg_oxy...@yahoo.com). Justin Clift sent me a
mail a while back telling me that it is better if we all use our business
email addresses, so I made myself a new profile.
Glusterfs complains about /proc/sys/net/ipv4/ip_local_reserved_ports
because we use a
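(As an aside, a quick way to check whether that sysctl exists on a given kernel
and what it currently holds; the reserved range below is only an example:)
cat /proc/sys/net/ipv4/ip_local_reserved_ports
sysctl -w net.ipv4.ip_local_reserved_ports=24007-24008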
Dear all,
For a few days now, some users have been raising IT issues concerning GlusterFS
NFS shares, and more specifically the NFS shares available on my HPC
clusters.
Indeed, from their workstations, if they try to copy some
directories/files from the NFS share to somewhere else (taking
On 2015-01-21 at 18:35, Bartłomiej Syryjczyk wrote:
On 2015-01-21 at 15:22, Lindsay Mathieson wrote:
On 21 January 2015 at 23:46, Bartłomiej Syryjczyk
bsyryjc...@kamsoft.pl wrote:
# mount -t glusterfs apache1:/testvol /mnt/gluster
Mount failed.
Please help with a solution to this problem. I set geo-replication to
synchronize files (about 5M). After starting, synchronization begins, but after
copying about 80K files it runs out of space on tmpfs (/run). Is this normal
for geo-replication or not?
Perhaps I did something wrong?
dpkg -l | grep glust
ii
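(A quick way to confirm that it is tmpfs on /run filling up during the sync,
and to grow it temporarily; the 1G size is only an example:)
df -h /run
mount -o remount,size=1G /run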
Hi Peter,
Can you please try manually mounting those volumes using any other NFS
client and check whether you are able to perform write operations. Also,
please collect the gluster NFS log while doing so.
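For example, something along these lines should be enough (gluster NFS serves
NFSv3 only; server name, volume and mount point below are placeholders):
mount -t nfs -o vers=3,nolock server1:/volname /mnt/nfstest
dd if=/dev/zero of=/mnt/nfstest/writetest bs=1M count=10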
Thanks,
Soumya
On 01/22/2015 08:18 AM, Peter Auyeung wrote:
Hi,
We have been having 5
Hi All,
Please let me know if there is any way to change the brick port of a gluster
volume.
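(Not sure this covers your case, but on releases whose glusterd.vol supports it,
the port range bricks are assigned from can be moved via the base-port option;
the value below is only an example, and glusterd needs a restart afterwards:)
# /etc/glusterfs/glusterd.vol, inside the "volume management" block:
option base-port 49152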
Good thing they put SFW for that article. I was about to get worried ;)
-JM
- Original Message -
Gluster gets a good mention in this article, which is also good for
explaining what you do for a living to aged relatives :)
I've got a problem with mounting. Can anyone help?
# mount -t glusterfs apache1:/testvol /mnt/gluster
Mount failed. Please check the log file for more details.
Log: http://pastebin.com/GzkbEGCw
Oracle Linux Server release 7.0
Kernel 3.8.13-55.1.2.el7uek.x86_64
glusterfs packages from official yum
Hi Guys,
I am using gluster 3.5.2, and for a couple of directories I am getting used
space reported as 16384.0 PB.
node1:~# gluster volume quota homedir list |grep PB
/storage/home1 3.0GB 90% 16384.0PB 3.0GB
/storage/home2 1.0GB 90%
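(In case it helps with debugging, a hedged sketch of how to inspect the on-disk
quota accounting for such a directory directly on a brick; the brick path is a
placeholder. A huge value in trusted.glusterfs.quota.size would point at broken
accounting rather than real usage:)
getfattr -d -m . -e hex /bricks/brick1/storage/home1 | grep quota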
My gut still says it could be related to the multipath.
I never got an answer to whether the bricks are using the multipathed
devices via the mpathXX device, or whether you are directly using the dm-X device.
If dm-X, then are you ensuring that you are NOT using two dm-X devices that map
to the same LUN on
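(For reference, a quick way to see which dm-X node backs which multipath map
and WWID:)
multipath -ll
ls -l /dev/mapper/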
Hi Deepak,
Please find below details.
* cat multipath.conf
multipath {
uid 162
gid 162
wwid 360050763008084b07808
mode 0777
alias nova
}
* ls -l /dev/mapper/
nova -> ../dm-0
* df -h
/dev/mapper/nova 120T 4.1T 116T 4% /gluster1
* ls /gluster1/nova/
brick0 brick1
Hi Omar,
If the issue is happening consistently, can we get a reproducible
test case for it?
Thanks,
Vijay
On Wednesday 21 January 2015 02:47 AM, Omkar Kulkarni wrote:
Hi Guys,
I am using gluster 3.5.2 and for couple of directories I am getting
used space as 16384.0 PB
node1:~#
On 01/22/2015 11:02 PM, kenji kondo wrote:
Hi Guys,
I'm using GlusterFS version 3.3.2.
I have a problem where I am unable to delete files, as shown below:
-
$ cat log/d0105f.ai.hook
hello!
$ rm log/d0105f.ai.hook
rm: cannot remove `log/d0105f.ai.hook': No such file or directory
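(A hedged sketch of a first diagnostic step for this kind of failure: compare
the entry and its gfid on each brick that should hold it; brick paths below are
placeholders:)
ls -l /bricks/brick1/log/d0105f.ai.hook
getfattr -d -m . -e hex /bricks/brick1/log/d0105f.ai.hook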
Maybe start the mount daemon from shell, like this?
/usr/sbin/glusterfs --debug --volfile-server=glnode1 --volfile-id=/testvol
/mnt/gluster
You could get some useful debug data on your terminal.
However, it's more likely you have a configuration-related problem here.
So the output of the
Yes, but...
There are ways to do it by manually building .vol files. You lose all
ability to manage your cluster live and you can no longer use the CLI.
Generally it's not a good idea unless you consider yourself an expert.
On 01/22/2015 07:51 AM, RAKESH P B wrote:
Hi All,
Please let me
Hi All,
There is a fairly recent thread in reddit/sysadmin on practical
GlusterFS usage [1]. If you could describe your experience with
GlusterFS there and/or upvote posts that seem relevant to you, it could
be helpful to folks beyond our community too.
Thanks,
Vijay
[1]
I am also working on a similar issue on 3.5.2
https://bugzilla.redhat.com/show_bug.cgi?id=917901
From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on
behalf of Omkar Kulkarni [omkarajitkulka...@gmail.com]
Sent: Tuesday, January 20,
Hi Guys,
I'm using GlusterFS version 3.3.2.
I have a problem where I am unable to delete files, as shown below:
-
$ cat log/d0105f.ai.hook
hello!
$ rm log/d0105f.ai.hook
rm: cannot remove `log/d0105f.ai.hook': No such file or directory
-
This problem was found
Hi Soumya,
I was able to mount the same volume on another NFS client and perform writes.
I got the following nfs.log entries during the write:
[2015-01-22 17:39:03.528405] I
[afr-self-heal-common.c:2868:afr_log_self_heal_completion_status]
0-sas02-replicate-1: metadata self heal is successfully
We keep getting lock errors in /var/log/messages after we stop and start
glusterfs-server:
Jan 20 12:35:14 hdpnnprod001 kernel: lockd: server 10.101.165.67 not
responding, timed out
Jan 20 12:36:14 hdpnnprod001 kernel: lockd: server 10.101.165.67 not
responding, timed out
So far only this host
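(If locking is not actually needed on these mounts, a commonly used workaround
is to mount with locking disabled, and rpcinfo shows whether the lock manager is
registered at all; the volume name and mount point below are placeholders:)
rpcinfo -p 10.101.165.67 | grep nlockmgr
mount -t nfs -o vers=3,nolock 10.101.165.67:/volname /mnt/volname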
On 01/22/2015 02:07 PM, A Ghoshal wrote:
Hi Pranith,
Yes, the very same (chalcogen_eg_oxy...@yahoo.com). Justin Clift sent
me a mail a while back telling me that it is better if we all use our
business email addresses, so I made myself a new profile.
Glusterfs complains about
Sorry, wrong ML earlier
On 01/23/2015 12:33 PM, Sahina Bose wrote:
On 01/22/2015 08:03 PM, Demeter Tibor wrote:
Hello,
I have an oVirt 3.5.0 cluster with three nodes, and we are using glusterfs
to serve backend storage for the VMs. Glusterfs is on the same servers
as oVirt.
We have Gluster
Hi Arnold,
It seems you gave the output on only one brick. Could you also
provide it on the other brick as well? Sorry I didn't make that clear in my
earlier mail.
Pranith
On 01/23/2015 10:44 AM, Arnold Yang wrote:
Hi Pranith,
Here is the output for the commands provided by you, anything