Re: [Gluster-users] [Gluster-devel] Glusterfs as a root file system on the same node

2015-10-15 Thread satish kondapalli
Hi Atin, First of all I am not sure how the datacenter cluster nodes boot. My assumption is that cluster nodes boot from their local disk. Let's suppose each node has some number of HDDs and one of them is the boot disk. Gluster is running on each node to distribute the storage (except the boot disk). He

[Gluster-users] Unnecessary healing in 3-node replication setup on reboot

2015-10-15 Thread Udo Giacomozzi
Hello everybody, I'm new to this list, apologies if I'm asking something stupid.. ;-) I'm using GlusterFS on three nodes as the foundation for a 3-node high-availability Proxmox cluster. GlusterFS is mostly used to store the HDD images of a number of VMs and is accessed via NFS. My problem i

[Gluster-users] Gluster snapshot Prerequisites

2015-10-15 Thread 彭繼霆
Hi guys, In document, https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Snapshots.html#Prerequisites37 Prerequisites say that: "Only *linear LVM* is supported with Red Hat Gluster Storage 3.0." Is there any problem if I build a
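Whether a brick's backing logical volume is linear can be checked before attempting snapshots. A minimal sketch (run as root on each storage node; it assumes the bricks sit on LVM at all):

```shell
# Report the segment type of every logical volume; the segtype column
# ("linear", "thin", "striped", ...) shows whether each brick LV matches
# the documented snapshot prerequisite.
lvs -o vg_name,lv_name,segtype
```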

Re: [Gluster-users] Unnecessary healing in 3-node replication setup on reboot

2015-10-15 Thread Lindsay Mathieson
The gluster.org Debian wheezy repo installs 3.6.6 safely on Proxmox 3.4, I use it myself. Lindsay Mathieson -Original Message- From: "Udo Giacomozzi" Sent: 15/10/2015 6:34 PM To: "gluster-users@gluster.org" Subject: [Gluster-users] Unnecessary healing in 3-node replication setup on reboo

Re: [Gluster-users] Gluster 4.0 - upgrades & backward compatibility strategy

2015-10-15 Thread Mauro M.
One feature I would like to see in 4.0 is the ability to have a volume started with only ONE brick up and running, at least as a configurable option if not the default. This was possible in 3.5, perhaps more by accident than by design, but it disappeared in 3.6, and it is a major issue if I want to run a

Re: [Gluster-users] Gluster 4.0 - upgrades & backward compatibility strategy

2015-10-15 Thread Mauro M.
To date my experience with upgrades has been a disaster: in two cases I was unable to start my volume and eventually had to downgrade. What I want to recommend is that there be an EXTENSIVE REGRESSION TEST. The most important goal is that NOTHING that works with the previous release shoul

Re: [Gluster-users] Gluster 4.0 - upgrades & backward compatibility strategy

2015-10-15 Thread Roman
Oh, and yes: please make sure to double-check the .deb packages :) Last time there was a bug because of which volumes didn't start after the upgrade. I know these are made by volunteers, but the devs should give them the appropriate signals, I believe. 2015-10-15 11:29 GMT+03:00 Mauro M. : > To date my experien

Re: [Gluster-users] Geo-rep failing initial sync

2015-10-15 Thread Aravinda
Status looks good. Two master bricks are Active and participating in syncing. Please let us know the issue you are observing. regards Aravinda On 10/15/2015 11:40 AM, Wade Fitzpatrick wrote: I have twice now tried to configure geo-replication of our Stripe-Replicate volume to a remote Stripe v

Re: [Gluster-users] geo-replications invalid names when using rsyncd

2015-10-15 Thread Aravinda
The slave will be eventually consistent. If rsync created temp files in the master volume and renamed them, that gets recorded in the changelogs (journal). The exact same steps will be replayed on the slave volume. If there are no errors, geo-rep should unlink the temp files on the slave and retain the actual files. Let us know if the issue pers
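As a quick check on whether the changelog replay has caught up, the geo-replication status command can be consulted; a sketch, where the volume and slave names are placeholders rather than anything from this thread:

```shell
# Inspect the geo-replication session; a CHANGELOG Crawl state with a
# FAILURES count that is not growing suggests renames are being replayed
# on the slave. <mastervol> and <slavehost>::<slavevol> are placeholders.
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> status detail
```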

Re: [Gluster-users] Geo-rep failing initial sync

2015-10-15 Thread Wade Fitzpatrick
Well I'm kind of worried about the 3 million failures listed in the FAILURES column, the timestamp showing that syncing "stalled" 2 days ago and the fact that only half of the files have been transferred to the remote volume. On 15/10/2015 9:27 pm, Aravinda wrote: Status looks good. Two master

Re: [Gluster-users] geo-replications invalid names when using rsyncd

2015-10-15 Thread Brian Ericson
Thanks! As near as I can tell, GlusterFS thinks it's done -- I finally ended up renaming the files myself after waiting a couple of days. If I take an idle master/slave (no pending writes) and do an rsync to copy a file to the master volume, I can see that the file is otherwise correct (

[Gluster-users] what is recommended glusterfs version for production use?

2015-10-15 Thread Khoi Mai
Hi, My current production glusterfs version is 3.4.2-1 on CentOS 6.6. I'm trying to find the appropriate version to upgrade to for a production environment. Does the community have recommendations? Do I go with 3.5 or 3.6? Where may I read about the change logs of each release? Thanks Khoi Ma

[Gluster-users] 3.6.6 healing issues?

2015-10-15 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
Hi, I am seeing what I can best describe as an oddity: my monitoring is telling me that there is an issue (Nagios touches a file and then removes it, to check that read/write access is available on the client mount point), while gluster says that there is not an issue on the server mounting the file s
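The kind of touch-then-remove probe described can be sketched as a small script; the MOUNT variable and probe file name here are assumptions for illustration, not taken from the thread (in production MOUNT would be the Gluster client mount point):

```shell
# Minimal read/write probe in the style of the Nagios check described:
# create a file on the mount, then remove it, and report the result.
MOUNT="${MOUNT:-/tmp}"               # stand-in for the Gluster client mount
PROBE="$MOUNT/.rw_probe.$$"          # unique probe file per invocation

if touch "$PROBE" && rm -f "$PROBE"; then
    STATUS="OK: $MOUNT is read-writable"
else
    STATUS="CRITICAL: cannot write to $MOUNT"
fi
echo "$STATUS"
```

A monitoring wrapper would normally turn the two outcomes into Nagios exit codes 0 and 2.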

Re: [Gluster-users] File changed as we read it

2015-10-15 Thread Gabriel Kuri
I enabled the option on the volume, but it did not fix the issue, the problem is still there. Gabe On Tue, Oct 6, 2015 at 6:10 PM, Krutika Dhananjay wrote: > Let me know if this helps: > > http://www.gluster.org/pipermail/gluster-users/2015-September/023641.html > > -Krutika > > ---

Re: [Gluster-users] Unnecessary healing in 3-node replication setup on reboot

2015-10-15 Thread Lindsay Mathieson
On 15 October 2015 at 17:26, Udo Giacomozzi wrote: > My problem is, that every time I reboot one of the nodes, Gluster starts > healing all of the files. Since they are quite big, it takes up to ~15-30 > minutes to complete. It completes successfully, but I have to be extremely > careful not to m
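For the healing-on-reboot problem described above, the pending heal queue can be inspected per brick before and after a reboot; a sketch with a placeholder volume name:

```shell
# List the entries still pending self-heal on each brick of the volume,
# then show just the per-brick counts. <volname> is a placeholder.
gluster volume heal <volname> info
gluster volume heal <volname> statistics heal-count
```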

Re: [Gluster-users] Importing bricks/datastore into new gluster

2015-10-15 Thread Lindsay Mathieson
On 15 October 2015 at 11:15, Pranith Kumar Karampuri wrote: > Okay, so re-installation is going to change root partition, but the brick > data is going to remain intact, am I correct? Are you going to stop the > volume, re-install all the machines in cluster and bring them back up, or > you want

Re: [Gluster-users] Geo-rep failing initial sync

2015-10-15 Thread Wade Fitzpatrick
I now have a situation similar to https://bugzilla.redhat.com/show_bug.cgi?id=1202649 but trying to register to report the bug, I don't receive the confirmation email to my account so I can't register. Stopping and starting geo-replication has no effect and in fact now shows no status at all.

Re: [Gluster-users] [Gluster-devel] Backup support for GlusterFS

2015-10-15 Thread Pranith Kumar Karampuri
Probably a good question on gluster-users (CCed) Pranith On 10/14/2015 03:57 AM, Brian Lahoue wrote: Has anyone tested backing up a fairly large Gluster implementation with Amanda/ZManda recently? ___ Gluster-devel mailing list gluster-de...@

Re: [Gluster-users] [ovirt-users] glusterfsd is not not default enabled after a host installation

2015-10-15 Thread Ravishankar N
+gluster-users On 10/16/2015 03:27 AM, Nir Soffer wrote: This is a good question for the gluster mailing list. On Oct 15, 2015, 4:07 PM, "Nathanaël Blanchet" <blanc...@abes.fr> wrote: Hello, I noticed after several different installations that the glusterd daemon was
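If the management daemon is simply not enabled at boot, the usual fix would be one of the following, depending on the init system; this assumes the service is named glusterd, as in standard packaging:

```shell
# Enable glusterd at boot; pick the line matching the host's init system.
systemctl enable glusterd     # systemd hosts (EL7, recent Fedora/Debian)
chkconfig glusterd on         # SysV init hosts (EL6)
```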

Re: [Gluster-users] Geo-rep failing initial sync

2015-10-15 Thread Aravinda
Oh ok. I overlooked the status output. Please share the geo-replication logs from "james" and "hilton" nodes. regards Aravinda On 10/15/2015 05:55 PM, Wade Fitzpatrick wrote: Well I'm kind of worried about the 3 million failures listed in the FAILURES column, the timestamp showing that syncing

Re: [Gluster-users] geo-replications invalid names when using rsyncd

2015-10-15 Thread Aravinda
Do you see any errors in Master logs? (/var/log/glusterfs/geo-replication//*.log) regards Aravinda On 10/15/2015 07:51 PM, Brian Ericson wrote: Thanks! As near as I can tell, the GlusterFS thinks it's done -- I finally ended up renaming the files myself after waiting a couple of days. If I
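A minimal way to scan those master logs for problems, where the volume-name component of the path is a placeholder for the actual master volume:

```shell
# Grep the geo-replication master logs for recent errors;
# <mastervol> is a placeholder for the master volume's log directory.
grep -iE 'error|exception|traceback' \
    /var/log/glusterfs/geo-replication/<mastervol>/*.log | tail -n 20
```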