Hi,
We want to upgrade Gluster from 3.6.1 to 3.7.1. I have a question about quota: do we have to take the change in configuration into account? If so, what steps should we follow?
Thanks.
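One low-risk way to approach this, sketched here with a placeholder volume name (volname), is to record the quota configuration before the upgrade and verify it afterwards:

```shell
# record version and limits before upgrading (volname is a placeholder)
gluster --version
gluster volume quota volname list
# ...upgrade the packages and restart glusterd on each node...
# then verify the same limits are still reported
gluster volume quota volname list
```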
___
Gluster-users mailing list
Gluster-users@gluster.org
Hi,
I have a question about the heal process. Sometimes failed heal entries appear after executing gluster volume heal VOLNAME statistics. In which cases can this happen?
Thanks
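For what it's worth, a few related commands (volume name is a placeholder) help narrow down why entries show up as failed; the statistics output only summarizes what the self-heal daemon attempted:

```shell
gluster volume heal volname info              # entries still pending heal
gluster volume heal volname info split-brain  # entries heal cannot resolve by itself
gluster volume heal volname statistics heal-count
```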
Hi,
I am monitoring Gluster with scripts that launch other scripts. All of them are funnelled through one script that checks whether any glusterd process is active and, if the response is false, launches the check.
All the checks are:
- gluster volume info volname
- gluster volume heal volname
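A minimal sketch of such a wrapper, assuming the two checks above and a placeholder volume name; it only fires the checks when a glusterd process is actually present, and exits cleanly either way so a cron-driven monitor does not pile up errors:

```shell
#!/bin/sh
# run the volume checks only when glusterd is alive
if pgrep -x glusterd >/dev/null 2>&1; then
    gluster volume info volname
    gluster volume heal volname
else
    echo "glusterd not running; skipping checks" >&2
fi
```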
Hi,
I have a cluster with 3 nodes in pre-production. Yesterday, one node went down. The error that I have seen is this:
[2015-05-28 19:04:27.305560] E [glusterd-syncop.c:1578:gd_sync_task_begin]
0-management: Unable to acquire lock for cfe-gv1
The message I [MSGID: 106006]
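The "Unable to acquire lock" message usually means another management operation, or a stale lock left by a disconnected node, is holding the cluster-wide volume lock. A common recovery sketch, assuming systemd:

```shell
gluster volume status        # typically fails with the same lock error
systemctl restart glusterd   # on the node suspected of holding the stale lock
gluster volume status        # should succeed once the lock is released
```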
Hi,
I have upgraded Gluster, with quota enabled, from release 3.6.3 to release 3.7, and when I restarted the nodes, the log shows an error:
E [MSGID: 106012] [glusterd-utils.c:2670:glusterd_compare_friend_volume]
0-management: Cksums of quota configuration of volume cfe-gv1 differ. local
cksum
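When the quota checksums differ, glusterd rejects the peer. One recovery approach (hedged: paths below are the standard glusterd working directory, good-node is a placeholder hostname, and you should take backups first) is to copy the quota configuration from a node whose checksum is good:

```shell
# on the rejected node
systemctl stop glusterd
cp -a /var/lib/glusterd/vols/cfe-gv1 /root/cfe-gv1.bak   # backup first
scp good-node:/var/lib/glusterd/vols/cfe-gv1/quota.conf* \
    /var/lib/glusterd/vols/cfe-gv1/
systemctl start glusterd
gluster peer status
```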
Hi,
I had a problem with an XFS filesystem on Gluster. The device-mapper thin-pool metadata filled up:
Mar 27 13:58:18 srv-vln-des2 kernel: device-mapper: space map metadata:
unable to allocate new metadata block
Mar 27 13:58:18 srv-vln-des2 kernel: device-mapper: thin: 252:2: metadata
operation
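The kernel messages point at the device-mapper thin pool running out of metadata space rather than XFS itself. A sketch of how to check and grow it with LVM (volume-group and pool names are placeholders):

```shell
lvs -a -o lv_name,lv_size,data_percent,metadata_percent vg0
lvextend --poolmetadatasize +1G vg0/thinpool   # grow the pool's metadata LV
xfs_repair -n /dev/vg0/thinvol                 # then check the filesystem read-only
```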
Hi,
Today, the glusterd daemon was killed due to excessive memory consumption:
[3505254.762715] Out of memory: Kill process 7780 (glusterd) score 581 or
sacrifice child
[3505254.763451] Killed process 7780 (glusterd) total-vm:3537640kB,
anon-rss:1205240kB, file-rss:672kB
I have installed
this?
Thanks
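As a small illustration (not part of the original report), the OOM-killer line above can be parsed so a monitor alerts on glusterd's resident memory before the kernel steps in; the helper name is hypothetical:

```python
import re

# matches kernel OOM lines like the "Killed process ... glusterd" line above
OOM_RE = re.compile(r"total-vm:(\d+)kB, anon-rss:(\d+)kB, file-rss:(\d+)kB")

def parse_oom_line(line):
    """Return (total_vm_kb, anon_rss_kb, file_rss_kb), or None if no match."""
    m = OOM_RE.search(line)
    return tuple(int(g) for g in m.groups()) if m else None

line = ("[3505254.763451] Killed process 7780 (glusterd) "
        "total-vm:3537640kB, anon-rss:1205240kB, file-rss:672kB")
print(parse_oom_line(line))  # (3537640, 1205240, 672)
```

To see where the memory actually went on a live node, gluster processes write a statedump when sent SIGUSR1, typically under the directory set by server.statedump-path (commonly /var/run/gluster).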
2015-03-18 11:36 GMT+01:00 Atin Mukherjee amukh...@redhat.com:
On 03/18/2015 03:24 PM, Félix de Lelelis wrote:
Hi,
I have a problem with GlusterFS 3.6. I am monitoring it with scripts that launch gluster volume status VOLNAME detail and gluster volume profile VOLNAME info. When these scripts have been running for about 1-2 hours, with a check every minute, Gluster gets blocked and the node generates Another transaction
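That symptom matches two CLI commands racing for glusterd's single cluster lock. One mitigation sketch is to serialize the checks through flock so only one gluster command runs at a time; the lock-file path and timeout are assumptions, and the demo runs echo instead of a real gluster command:

```shell
#!/bin/sh
LOCKFILE=${LOCKFILE:-/tmp/gluster-mon.lock}

run_serialized() {
    # wait at most 55s so a 1-minute check cadence cannot stack up
    flock -w 55 "$LOCKFILE" -c "$*"
}

# real use: run_serialized "gluster volume status volname detail"
#           run_serialized "gluster volume profile volname info"
run_serialized "echo serialized-check-ran"
```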
Hi,
I have a cluster with 2 nodes, and sometimes when I launch gluster volume status, an error appears in the log:
[2015-03-16 17:24:25.215352] E
[glusterd-utils.c:7364:glusterd_add_inode_size_to_dict] 0-management:
xfs_info exited with non-zero exit status
[2015-03-16 17:24:25.215379] E
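glusterd shells out to xfs_info on each brick to report its inode size, so the error usually just means that call failed. Reproducing it by hand (brick path is a placeholder) narrows down whether the brick really is on XFS:

```shell
df -T /export/brick1      # confirm the brick filesystem type
xfs_info /export/brick1   # a non-zero exit here reproduces the log message
```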
Hi,
What is the difference between these types of fops: read, readdir, and readdirp? I am monitoring, and I am interested in read and write operations, but when I launch the cat command on the client, only readdirp changes in the profile.
Thanks.
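Roughly: read is a file read, readdir lists directory entries, and readdirp is readdir plus stat information for each entry in one call (what FUSE clients normally issue). A cat may not register as a READ fop because the client page cache serves repeated reads, and small files can be fetched during lookup by the quick-read translator. A sketch to make reads visible in the profile (volume and mount names are placeholders):

```shell
gluster volume profile volname start
dd if=/mnt/volname/somefile of=/dev/null bs=1M iflag=direct  # bypass the page cache
gluster volume profile volname info                          # READ counters should now move
```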
Hi,
Does anyone know how to obtain stats on fops or I/O operations in Gluster? The idea is to integrate these scripts with Zabbix.
Thanks.
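One common pattern is a Zabbix UserParameter that shells out to the gluster CLI; most commands also accept --xml for machine-readable output. A minimal, hypothetical wrapper (the demo calls echo so it runs anywhere):

```python
import subprocess

def run_cli(args):
    """Run a CLI command and return (exit_code, stdout) for a monitor script."""
    p = subprocess.run(args, capture_output=True, text=True)
    return p.returncode, p.stdout

# real use would be something like:
#   run_cli(["gluster", "volume", "profile", "volname", "info", "--xml"])
code, out = run_cli(["echo", "42"])
print(code, out.strip())  # 0 42
```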
Hi,
I am testing geo-replication in gluster-3.6. I have created 2 sessions, but after I deleted the last one, glusterd still has it registered:
[2015-02-23 08:51:25.440521] I
[glusterd-geo-rep.c:3907:glusterd_get_gsync_status_mst_slv] 0-:
geo-replication status prueba srv-vln-des3-priv1::back-cfe
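If glusterd still reports a deleted session, it may be working from leftover session metadata. A place to look (standard glusterd working directory; the session names come from the log above):

```shell
gluster volume geo-replication status   # what glusterd still believes exists
ls /var/lib/glusterd/geo-replication/   # leftover per-session directories
```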
Hi,
Is there any way to obtain information about the last changelog applied on the slave and master nodes in geo-replication?
Thanks
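The closest built-in view I know of is the per-session status detail, which includes crawl and checkpoint information; the master and slave names below are placeholders:

```shell
gluster volume geo-replication mastervol slavehost::slavevol status detail
```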
Hi,
I would like to know whether the splitmount utility is useful for Gluster versions above 3.3. I think that since version 3.3 I also need to delete the hard link in a split-brain case; is that true?
Thanks
Hi,
I am simulating a split-brain condition on my cluster, but I have not been able to produce it. I have disconnected the nodes and created a file with the same name and different contents, but the self-heal process always takes the latest copy of the file.
How can I create that condition?
thanks
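With only a network disconnect, AFR's pending-change xattrs usually still identify a single good copy, so self-heal resolves it. To manufacture a genuine split-brain, each replica must take a write the other never sees. A sketch for a replica-2 volume (volume, mount path, and PIDs are placeholders; disabling the self-heal daemon keeps heals from running between the steps):

```shell
gluster volume set volname cluster.self-heal-daemon off
gluster volume status volname        # note the brick PIDs
kill <PID-of-brick-on-node-B>
echo "version A" > /mnt/volname/testfile
gluster volume start volname force   # restart the dead brick
kill <PID-of-brick-on-node-A>
echo "version B" > /mnt/volname/testfile
gluster volume start volname force
gluster volume set volname cluster.self-heal-daemon on
gluster volume heal volname info split-brain
```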
Hi,
I don't understand whether the changelog is needed only for replica and geo-replica situations; the filesystem is filling up with a lot of those files in /.glusterfs/changelogs. Is there any way to reduce them or do without that directory?
Thanks.
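The changelog translator exists mainly to feed geo-replication; plain AFR replication does not need it. A hedged sketch (option names are the changelog translator's; verify them against your version):

```shell
# if geo-replication is not used on this volume, turn changelogs off entirely
gluster volume set volname changelog.changelog off
# if geo-rep is in use, rotate less often so fewer files accumulate
# (rollover-time is in seconds; the default is small)
gluster volume set volname changelog.rollover-time 300
```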
Hi,
Last week we upgraded our cluster to version 3.6. I noticed the following error in the log:
W [socket.c:611:__socket_rwv] 0-management: readv on
/var/run/f3fcde54ca5d30115274155a37baa079.socket failed (Invalid argument)
Is it due to the NFS daemon?
Thanks.
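Quite possibly: that readv warning commonly comes from glusterd polling the socket of a Gluster NFS server that is not running. If NFS access is not needed, a sketch to silence it (per volume, placeholder name):

```shell
gluster volume set volname nfs.disable on
gluster volume status volname nfs    # confirm no NFS server is expected
```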
Hi,
I don't know if this is the correct way to report this; if I should send it to another address, please let me know.
I have a problem after upgrading my cluster from version 3.5 to version 3.6. When I started the volumes, I saw a lot of errors:
[2015-02-11 11:23:18.231142] W