Hi Cyril,
Need some clarifications. Comments inline.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Cyril N PEPONNET (Cyril)"
> To: "Kotresh Hiremath Ravishankar"
> Cc: "gluster-users"
> Sent: Tuesday, May 26, 2015 11:43:44 PM
> Subject: Re: [Gluster-users] Geo-Replicat
[+ gluster list]
On 05/25/2015 09:32 PM, p...@email.cz wrote:
Hello,
can anybody help me with a hanging replica-2 stripe-2 datastore on a
4-node cluster?
oVirt - ovirt-engine-lib-3.5.2.1-1.el7.centos.noarch
gluster - glusterfs-server-3.7.0-2.el7.x86_64
VM - Centos 7.1
If I use any bigger writ
Hi HUANG,
I have hit that problem a few times. It occurred on different volumes when one
node dropped out of volume management because of a network problem, or more
often because the glusterd daemon stopped.
So you have to check that the glusterd daemon is running on each node:
"service glusterd status"
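A quick way to run that check from one box is to loop over the nodes; the hostnames below are placeholders for your own peers, so this is only a sketch:

```shell
# Hypothetical node names -- substitute your own peers.
for h in node1 node2 node3 node4; do
    echo "== $h =="
    ssh "$h" 'service glusterd status'
done
```

Running `gluster peer status` from any node will also show which peers are currently disconnected.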
Hi,
It looks like Geo-rep starts from the previously remembered sync time even
though the Slave Volume has been recreated. Please try the following steps:
1. Stop Geo-rep.
2. Delete all files from the Slave.
3. Remove the stime xattrs from all Master Bricks:
setfattr -x trusted.glusterfs...stime
Where,
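A sketch of step 3, assuming a brick root at /bricks/brick1 (the path is a placeholder, and the exact xattr key is elided above; the real stime key embeds the master and slave volume UUIDs, so list the xattrs first rather than guessing the name):

```shell
# Run as root on each master brick root (paths are assumptions).
# List all trusted.glusterfs.* xattrs to find the exact stime key:
getfattr -d -m 'trusted\.glusterfs\.' -e hex /bricks/brick1
# Then remove the stime key that getfattr printed, e.g.:
# setfattr -x trusted.glusterfs.<master-uuid>.<slave-uuid>.stime /bricks/brick1
```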
I have a two-node replicated volume. I recently rebuilt one node, and while
they are being re-synced, even with a gigabit interconnect they only transfer
at 300 Mbps, with 6 cores at 2.0 utilization.
I turned on performance.lower.threads.disable, which didn't change much, and
stat'd the whole volume. There
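For reference, stat'ing the whole volume means recursively stat'ing every file from a client mount, which makes the replicate translator check each entry and queue any needed heals. A minimal sketch, with the mount point as a placeholder:

```shell
# Touch every entry's metadata from a FUSE mount of the volume
# (the mount point is an assumption -- substitute your own).
find /mnt/glustervol -exec stat {} \; > /dev/null
```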
Dear all,
I found a strange case: I removed the directory from the server side, including
the hidden files under .glusterfs/, yet I can still list the directory.
Server side:
ls:
/data1/gdata1/home/liyb/boss/TestRelease/TestRelease-00-00-80/run/hc/4420real/log:
No such file or directory
ls:
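One way to chase a stale entry like this is to check, on every server, whether any brick still holds the directory and what gfid it carries; the brick and directory paths below are placeholders, so this is only a sketch:

```shell
# On each server, for each brick (paths are assumptions):
ls -ld /bricks/brick1/<path-to-dir>
# If it exists, read its gfid; the matching entry lives under the
# brick's internal .glusterfs store and can also go stale:
getfattr -n trusted.gfid -e hex /bricks/brick1/<path-to-dir>
```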
So, changelog is still active, but I noticed that some files were missing.
So I'm running an rsync -avn between the two volumes (master and slave) to sync
them again by touching the missing files (hoping geo-rep will do the rest).
One question: can I set the slave volume read-only? Because if somebody chang
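For the read-only question, one candidate is the volume-level read-only option, sketched below with placeholder mount points and volume name. Whether geo-rep's own internal mount bypasses the option depends on the version, so verify on a test volume before relying on it:

```shell
# Dry-run compare of master and slave FUSE mounts (mount points are
# assumptions); -n lists what differs without copying anything.
rsync -avn /mnt/master/ /mnt/slave/

# Candidate for making the slave read-only to ordinary clients
# (volume name is a placeholder; confirm geo-rep still syncs first):
gluster volume set slavevol features.read-only on
```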
Hi again,
geo-replication.indexing cannot be turned off without deleting the
geo-replication session?!
I don't know what else to do, besides recreating my slave volume again.
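That matches the usual ordering: the session owns the index, so the option can only be changed after the session is deleted. A sketch with placeholder names (MASTERVOL, slavehost, slavevol are assumptions):

```shell
# Placeholder names -- substitute your own volumes and slave host.
gluster volume geo-replication MASTERVOL slavehost::slavevol stop
gluster volume geo-replication MASTERVOL slavehost::slavevol delete
gluster volume set MASTERVOL geo-replication.indexing off
```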
2015-05-26 13:28 GMT+01:00 wodel youchi :
> Hi again,
>
> As I mentioned earlier, I had to recreate my slave volume and
Hello all,
I'm facing a weird issue with GlusterFS used as a Distributed volume to
store our backup files. Let me detail it:
We are using NetBackup as our company's backup solution.
This consists of one server managing all backups coming from different clients.
Backups are all done on disk (and t
Hi, everyone! My name is Louis, and I am a newcomer to glusterfs. Recently,
I have wanted to set up a cluster using glusterfs, with nfs-ganesha as a
user-space NFS server for performance reasons. It works well when I
use a Linux client to access the volume created by the g
Louis Zuckerman was/is working on automatic package builds for Ubuntu.
Infos: https://github.com/semiosis/glusterfs-debian/issues/5
Regards
André
On 23.05.2015 at 19:00, Tom Pepper wrote:
> Just wondering if we can expect 3.6.3 to make it to launchpad anytime soon?
>
> Thanks,
> -t
>
> __
Hi Louis,
AFAIK, we have never tested nfs-ganesha+glusterfs with a Windows client. It
would be good if you could collect and provide a packet trace, cores, or
logs (with nfs-ganesha at least at the NIV_DEBUG level) on the server side
while you run these tests, to debug further.
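A hedged sketch of collecting both at once; the interface name and file paths are assumptions, and -N sets ganesha's log level at startup:

```shell
# Capture NFS traffic while the Windows client runs the test
# (interface name is an assumption):
tcpdump -i eth0 -s 0 -w /tmp/nfs-windows.pcap port 2049 &

# Restart ganesha at debug verbosity (config and log paths are assumptions):
ganesha.nfsd -f /etc/ganesha/ganesha.conf -L /var/log/ganesha.log -N NIV_DEBUG
```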
Thanks,
Soumya
On 05/25/
On 05/26/2015 04:05 PM, Atin Mukherjee wrote:
> Hi all,
>
> This meeting is scheduled for anyone that is interested in learning more
> about, or assisting with the Bug Triage.
>
> Meeting details:
> - location: #gluster-meeting on Freenode IRC
> ( https://webchat.freenode.net/?chann
Hi again,
As I mentioned earlier, I had to recreate my slave volume and restart the
geo-replication.
As usual, the geo-replication went well at the beginning, but after
restoring another container on the MASTERS, we started getting these errors:
On Master:
[2015-05-26 11:56:04.858262] I [mon
Sorry for the wrong incomplete message sent by mistake earlier.
Hi Sussant,
Extremely sorry for the belated reply, and thanks for your input. We will try
running a rebalance, then create a set of small files with random pattern
generation and check where they fall by DM_TYPE DHT.
I have one short query:
We were also thinking to use (in case we need ) Translat
Hi all,
This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
Hi Glusterfs Experts,
We are testing glusterfs 3.7.0 tarball on our 10 Node glusterfs cluster.
Each node has 36 drives; please find the volume info below:
Volume Name: vaulttest5
Type: Distributed-Disperse
Volume ID: 68e082a6-9819-4885-856c-1510cd201bd9
Status: Started
Number of Bricks: 36 x (8