Thanks Kotresh.
The existing geo-replication create command does many things
asynchronously (distributing and updating SSH keys using hook scripts),
which makes problems difficult to diagnose. This tool performs all the
steps synchronously, so we get to know about issues immediately.
Run this t
Hi Jonathan,
This issue has been fixed in glusterfs-3.7.4; you can upgrade to that
version.
We will also fix this in 3.6, and it will be available in the next
release, 3.6.6.
I have filed a bug for 3.6 to track this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1259578
Thanks,
Vijay
On
Hi everyone,
is it normal to see the following sort of errors in the brick's logs?
[2015-09-02 21:26:40.808486] E [posix.c:1864:posix_create] 0-repsilo-posix: setting xattrs on
/mnt/glusterRawL/CDB_data/10704/RAW_DATA/PCIE_ATCA_ADC_01.BOARD_12.CHANNEL_019.1.h5 failed
(Operation not supported)
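That error usually means the brick's backing filesystem rejected the extended-attribute call. As a quick sanity check, here is a minimal sketch (Linux-only; the helper name `supports_xattrs` is mine, not Gluster code) that probes whether a directory's filesystem accepts user xattrs, the same class of operation posix_create is failing on:

```python
import os
import tempfile

def supports_xattrs(directory: str) -> bool:
    """Probe whether the filesystem under `directory` accepts user xattrs.

    Gluster bricks need xattr support; an EOPNOTSUPP from setxattr is the
    same "Operation not supported" seen in the brick log above.
    """
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        os.setxattr(path, "user.xattr_probe", b"1")
        return True
    except OSError:
        return False
    finally:
        os.close(fd)
        os.unlink(path)

print(supports_xattrs(tempfile.gettempdir()))
```

If this prints False for the brick directory, the filesystem (or its mount options) is the problem, not Gluster.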
Hi Aravinda,
I used it yesterday; it greatly simplifies the geo-rep setup.
It would be great if it were enhanced to troubleshoot what is
wrong in an already broken setup.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Aravinda"
> To: "Gluster Devel" , "gluster-users"
>
> S
Raghavendra and Raghavendra,
Thanks, I will enable tracing and reply with the logs. I will also rebuild my test
bed to use simpler Apache configs. I appreciate your efforts; it's good to
know we should expect things to "just work" as a starting point, which gives me
hope we can fix this here. To
Hi Everybody,
Perhaps I asked too many questions at once in my first mail, sorry...
But if anyone can provide any info on the one question below, it might
help,
Q) I realise that if a file has ---------T perms (only the sticky bit set),
zero size, and a linkto xattr, then it is a GlusterFS linkto file.
But, we also have othe
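For reference, that three-part check can be sketched as a small predicate (the function and its argument names are mine, not Gluster code; I am assuming the xattr in question is DHT's trusted.glusterfs.dht.linkto):

```python
import stat

def looks_like_linkto(mode: int, size: int, has_linkto_xattr: bool) -> bool:
    """True if stat results match a DHT linkto file: a zero-byte regular
    file whose only permission bit is the sticky bit (shown as ---------T),
    carrying the linkto xattr."""
    return (stat.S_ISREG(mode)
            and stat.S_IMODE(mode) == stat.S_ISVTX  # sticky bit only
            and size == 0
            and has_linkto_xattr)
```

All three conditions must hold; a zero-byte sticky file without the xattr, for example, is not a linkto file.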
Hi,
I have created a CLI tool in Python to simplify the geo-replication setup
process. The tool takes care of running the gsec_create command,
distributing the SSH keys from the master to all slave nodes, etc. All in
one single command :)
Initial password less SSH login is not required, this tool prompts th
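For comparison, the manual sequence the tool wraps looks roughly like this (illustrative only; MASTERVOL, SLAVEVOL, and slavehost are placeholders):

```
# On a master node: generate the common pem keys
gluster system:: execute gsec_create

# Create the geo-rep session and push the keys to the slave nodes
gluster volume geo-replication MASTERVOL slavehost::SLAVEVOL create push-pem
```

The tool runs these steps, plus the key distribution normally done by hook scripts, in one synchronous command.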
Just double-checked the location of the snapshot files.
The documentation says they should be here:
A directory named snap will be created under the vol directory
(/glusterd/vols//snap), under which each created snap
will be a self-contained directory with meta files and snap volumes
http:
So what would be the fastest possible way to make a backup of the entire
file system to one single file? Would that probably be dd?
e.g.:
sudo umount /run/gluster/snaps/7cb4b2c8f8a64ceaba62bc4ca6cd76b2/brick1
sudo dd if=/dev/mapper/gluster-506cb09085b2428e9daca8ac0857c2c9_0 | gzip >
snap01.gz
Tha
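A file-level alternative to block-level dd is to tar the mounted snapshot directory into one archive. A minimal sketch, with a throwaway directory standing in for the snapshot mount point (all paths here are hypothetical):

```shell
# Stand-in for the mounted snapshot brick; in practice this would be
# something like /run/gluster/snaps/<uuid>/brick1
src=$(mktemp -d)
echo "sample" > "$src/file1"

# Archive the whole tree into a single compressed file
tar -czf /tmp/snap01.tar.gz -C "$src" .
ls -l /tmp/snap01.tar.gz
```

Unlike dd of the LV, tar only copies the used blocks, so the archive is usually much smaller, at the cost of losing the exact block-level image.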
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-02/gluster-meeting.2015-09-02-12.00.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-02/gluster-meeting.2015-09-02-12.00.txt
Log:
http://meetbot.fedoraproject.org/gluster-meeting/2015-09
Hi,
I have a 3-way replicated Gluster architecture.
It is working properly, but when monitoring
the files that need to be healed, the count keeps
increasing, and when listing the files it seems
that the same files are being healed every time:
gluster volume heal VOLUME info > 32000 files
Heal-failed on each
On 09/02/2015 12:45 PM, Raghavendra Bhat wrote:
Hi Christian,
I have been working on it for the last couple of days. I have not been
able to recreate the issue. I will keep trying to recreate it and get
back to you in a day or two.
Regards,
Raghavendra Bhat
Hi Christian,
As per our tests (me and Rag
- Original Message -
> From: "Merlin Morgenstern"
> To: "gluster-users"
> Sent: Wednesday, September 2, 2015 1:07:30 PM
> Subject: [Gluster-users] snapshot directory not available on glusterfs 3.7.2
>
> According to the docs, snapshots should be present at "gs:/snaps" just as
> volumes
- Original Message -
> From: "Merlin Morgenstern"
> To: "Rajesh Joseph"
> Cc: "gluster-users"
> Sent: Wednesday, September 2, 2015 11:53:05 AM
> Subject: Re: [Gluster-users] gluster volume snap shot - basic questions
>
> Thank you Rajesh for your help. I have a thinly provisioned LVM n
Hi All,
In about 2 hours from now we will have the regular weekly Gluster
Community meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 12:00 UTC, 14:00 CEST, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public
Hi everybody,
we are experiencing random problems accessing some files on a volume via NFS. Any help is highly
appreciated.
The setup:
CentOS release 6.5
glusterfs-libs-3.4.7-1.el6.x86_64
glusterfs-fuse-3.4.7-1.el6.x86_64
glusterfs-server-3.4.7-1.el6.x86_64
glusterfs-3.4.7-1.el6.x86_64
gluster
Hello,
What I can say from my limited knowledge of Gluster:
- If each server mounts the volume from itself (its own name in fstab),
then it should only ask the others for metadata and fetch file data
locally from itself. Use backupvolfile-server to provide an alternate
server in case of issue at bo
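A hedged example of the mount-option side of that (hostnames, volume name, and mount point are hypothetical):

```
# /etc/fstab on server1 -- mount from itself, fall back to server2
server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0
```

backupvolfile-server only affects where the client fetches the volfile at mount time; once mounted, the client talks to all bricks directly.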
According to the docs, snapshots should be present at "gs:/snaps" just as
volumes are under "gs:/volume". This is not the case. I can see mounted
snaps under /var/run/gluster/snaps/UUID/...
Furthermore, the name of the snapshot is not as declared in the command:
e.g. "snap1" becomes "snap1_timestamp".
Hi Christian,
I have been working on it for the last couple of days. I have not been able to
recreate the issue. I will keep trying to recreate it and get back to you in a
day or two.
Regards,
Raghavendra Bhat
On 09/02/2015 12:45 AM, Christian Rice wrote:
This is still an issue for me, I don’t need any