[Gluster-users] gluster-3.7.3-1 el6 problem

2015-09-07 Thread Camelia Botez
I configured 2 Gluster servers and 2 Gluster clients, all of them running
CentOS 6.6.
On all 4 machines I have the same release of the Gluster software (server,
client, etc.): 3.7.3-1.
On the clients I mounted the volume from the server with the following
options in /etc/fstab:

Glusterfs rw,defaults,_netdev 0 0


I cannot remove files or directories created on the mounted volume.
I get the error:
Cannot remove directory: Transport endpoint is not connected.

What needs to be done?


Thank you
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] What is the recommended backup strategy for GlusterFS?

2015-09-07 Thread Aravinda

We have one more tool: glusterfind!

This tool ships with the Gluster installation if you are using Gluster 3.7.
glusterfind enables changelogging (journaling) on a Gluster volume and uses
that information to detect the changes that happened in the volume.


1. Create a glusterfind session using: glusterfind create <sessionname> <volname>


2. Do a full backup.
3. Run the glusterfind pre command to generate an output file with the list
of changes that happened in the Gluster volume after glusterfind create. For
usage information, run glusterfind pre --help

4. Consume that output file and back up only the files listed in it.
5. After consuming the output file, run the glusterfind post command
(glusterfind post --help). A rough sketch of the whole workflow is below.
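
For example, a minimal nightly wrapper around these steps might look like
the sketch below. The session/volume names and paths are only placeholders,
and how you consume the outfile depends on your backup tool (the outfile
format is described in the doc link below):

# one-time setup: create the session, then take a full backup
glusterfind create nightly myvol
rsync -a /mnt/myvol/ /backup/full/

# every backup run: list changes since the last run, back them up,
# then mark the session as consumed
glusterfind pre nightly myvol /tmp/myvol-changes.txt
# each line of the outfile gives a change type (NEW/MODIFY/RENAME/DELETE)
# and a path; extract the paths and feed them to rsync, tar, or your tool
glusterfind post nightly myvol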


Doc link: 
http://gluster.readthedocs.org/en/latest/GlusterFS%20Tools/glusterfind/index.html


This tool is newly released with Gluster 3.7. Please report issues or
request features here:
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS


regards
Aravinda

On 09/06/2015 12:37 AM, Mathieu Chateau wrote:

Hello,

For my needs, it's about having a simple "photo" of the files that were
present 5 days ago, for example.

But I do not want to store file data twice, as most files didn't change.
Using snapshots is convenient of course, but it's risky, as you lose both
data and snapshot in case of failure (a snapshot only contains delta
blocks).
Rsync with hard links is more resistant (the inode stays until the last
reference is removed).
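
For example (a rough sketch, with made-up paths, assuming the volume is
mounted at /mnt/glustervol), rsync's --link-dest gives exactly this kind of
hard-link rotation:

# yesterday's backup already exists under /backup/2015-09-06;
# files that did not change become hard links into it instead of new copies
rsync -a --delete \
  --link-dest=/backup/2015-09-06 \
  /mnt/glustervol/ /backup/2015-09-07/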


But I'm interested to hear about production setups relying on it.

Cordialement,
Mathieu CHATEAU
http://www.lotp.fr

2015-09-05 21:03 GMT+02:00 M S Vishwanath Bhat:


On 5 Sep 2015 12:57 am, "Mathieu Chateau" wrote:
>
> Hello,
>
> So far I use rsnapshot. This script does rsync with rotation, and most
importantly the same files are stored only once through hard links
(inodes). I save space, but rsync still needs to walk all folders to
find new files.
>
> I am also interested in solution 1), but it needs to be stored on
distinct drives/servers. We can't afford to lose data and snapshots in
case of human error or disaster.
>
>
>
> Cordialement,
> Mathieu CHATEAU
> http://www.lotp.fr
>
> 2015-09-03 13:05 GMT+02:00 Merlin Morgenstern:
>>
>> I have about 1M files in a GlusterFS volume with rep 2 on 3 nodes
running gluster 3.7.3.
>>
>> What would be a recommended automated backup strategy for this
setup?
>>
>> I already considered the following:

Have you considered GlusterFS geo-replication? It's actually meant for
disaster recovery, but it might suit your backup use case as well.
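
Roughly, setting it up boils down to the commands below (just a sketch:
volume and host names are placeholders, and you need passwordless SSH to
the slave host plus an existing slave volume first; see the
geo-replication docs for the full prerequisites):

gluster volume geo-replication myvol backupnode::backupvol create push-pem
gluster volume geo-replication myvol backupnode::backupvol start
gluster volume geo-replication myvol backupnode::backupvol status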

My two cents

//MS

>>
>> 1) glusterfs snapshots in combination with dd. This
unfortunately was not possible so far, as I could not find any
info on how to make an image file out of the snapshots and how to
automate the snapshot procedure.
>>
>> 2) rsync the mounted file share to a second directory and do a
tar on the entire directory after rsync completes
>>
>> 3) combination of 1 and 2. Doing a snapshot that gets mounted
automatically and then rsyncing from there. Problem: how to automate
snapshots and how to know the mount path
>>
>> Currently I am only able to do the second option, but the first
option seems to be the most attractive.
>>
>> Thank you for any help on this.
>>






___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster-3.7.3-1 el6 problem

2015-09-07 Thread Vijay Bellur

On Monday 07 September 2015 01:14 PM, Camelia Botez wrote:

I configured 2 Gluster servers and 2 Gluster clients, all of them running
CentOS 6.6.

On all 4 machines I have the same release of the Gluster software (server,
client, etc.): 3.7.3-1.

On the clients I mounted the volume from the server with the following
options in /etc/fstab:

Glusterfs rw,defaults,_netdev 0 0

I cannot remove files or directories created on the mounted volume.

I get the error:

Cannot remove directory: Transport endpoint is not connected.



Do you notice any errors in the client log file around the time this 
rmdir operation is performed?
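
In case it helps to locate it: on the client, the fuse mount log is
normally under /var/log/glusterfs/, named after the mount point with
slashes replaced by dashes. Something along these lines (mount point and
volume name are just examples) should show whether the client lost its
connection to the bricks:

# on the client, assuming the volume is mounted at /mnt/gvol
grep -iE 'disconnect|not connected' /var/log/glusterfs/mnt-gvol.log | tail -n 50

# on one of the servers: are all brick processes online?
gluster volume status <VOLNAME>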


Regards,
Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Q] Successful Gluster Peer Probe via GRE tunnel, but State returns Accepted peer request (Connected/Disconnected)

2015-09-07 Thread Atin Mukherjee
+ Humble

Considering you recently faced a similar problem, you could share the
workaround.
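
In the meantime, one generic check (just a sketch, using the addresses
from the mail below): as far as I know, glusterd on the probed side records
the peer under the source address of the incoming connection, so it is
worth confirming which source address the probe traffic actually uses:

# on U1: which source address is used to reach the probed address?
ip route get 169.254.1.1
# if this shows "dev test-gre src 10.10.10.1", that is the tunnel address
# U2 ends up recording for U1 (observation #2)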

On 09/08/2015 09:15 AM, Tohru_Kao wrote:
> Hi all,
> 
> This is regarding using Gluster 3.7.4 with a GRE tunnel between 2 Ubuntu
> (14.04.3) VMs.
> 
> ### Observation
> 
> After a successful gluster peer probe, the status shows a few strange things:
> 
> #1: The State is "Accepted peer request (Connected/Disconnected)".
> #2: The IP address returned is the tunnel IP, not the probed host IP.
> 
> * Environments and GRE commands I applied are listed below.
> * GRE tunnel settings were done by
> referring to http://ask.xmodulo.com/create-gre-tunnel-linux.html
> 
> ### Questions
> 
> Q1:
>   Do I need to add/set additional parameters in glusterd.vol because of
> GRE?
> 
> Q2:
>   Are my GRE tunnel command settings wrong, or am I missing some commands?
> 
> Any idea?
> 
> Appreciate any comments or pointers.
> 
> ### Note:
> * Peer Probe via external IPs works properly.
> 
> Thanks in advance.
> -JaCoder
> 
> ##  Environment and Commands ## 
> Environment:
>   U1: external ip address:  172.16.213.128   internal network:
> 169.254.0.0/24
>   U2: external ip address:  172.16.213.129   internal network:
> 169.254.1.0/24
> 
> ### Output of ‘gluster peer probe’ and status
> 
> On U1
> # gluster peer probe 169.254.1.1
> peer probe: success. 
> 
> # gluster peer status
> Number of Peers: 1
> 
> Hostname: 169.254.1.1
> Uuid: b6519618-e3aa-4307-afce-8f3d0dae39fc
> State: Accepted peer request (Connected)
> ---
> On U2
> 
> # gluster peer status
> Number of Peers: 1
> 
> Hostname: 10.10.10.1
> Uuid: 2306591b-25d6-41cf-ba50-db30ef0687bb
> State: Accepted peer request (Disconnected)
> 
> 
> ### glusterd.vol Files
> On U1
> 
> # cat /etc/glusterfs/glusterd.vol 
> volume management
> type mgmt/glusterd
> option working-directory /var/lib/glusterd
> option transport.socket.bind-address 169.254.0.1
> option transport-type socket,rdma
> option transport.socket.keepalive-time 10
> option transport.socket.keepalive-interval 2
> option transport.socket.read-fail-log off
> option ping-timeout 30
> #   option base-port 49152
> end-volume
> 
> ---
> On U2
> 
> # cat /etc/glusterfs/glusterd.vol 
> volume management
> type mgmt/glusterd
> option working-directory /var/lib/glusterd
> option transport.socket.bind-address 169.254.1.1
> option transport-type socket,rdma
> option transport.socket.keepalive-time 10
> option transport.socket.keepalive-interval 2
> option transport.socket.read-fail-log off
> option ping-timeout 30
> #   option base-port 49152
> end-volume
> 
> ### Output of 'ifconfig | grep inet'
> 
> On U1
> # ifconfig | grep inet
>   inet addr:172.16.213.128  Bcast:172.16.213.255  Mask:255.255.255.0
>   inet addr:169.254.0.1  Bcast:169.254.0.255  Mask:255.255.255.0
>   inet addr:127.0.0.1  Mask:255.0.0.0
>   inet addr:10.10.10.1  P-t-P:10.10.10.1  Mask:255.255.255.0
> 
> On U2
> # ifconfig | grep inet
>   inet addr:172.16.213.129  Bcast:172.16.213.255  Mask:255.255.255.0
>   inet addr:169.254.1.1  Bcast:169.254.1.255  Mask:255.255.255.0
>   inet addr:127.0.0.1  Mask:255.0.0.0
>   inet addr:10.10.10.2  P-t-P:10.10.10.2  Mask:255.255.255.0
> 
> 
> ### Applied GRE Tunnel Commands
> On U1
> 
> modprobe ip_gre
> ip tunnel add test-gre mode gre remote 172.16.213.129
> local 172.16.213.128 ttl 255
> ip link set test-gre up
> ip addr add 10.10.10.1/24 dev test-gre
> ip route add 169.254.1.0/24 dev test-gre
> 
> On U2
> 
> modprobe ip_gre
> ip tunnel add test-gre mode gre remote 172.16.213.128
> local 172.16.213.129 ttl 255
> ip link set test-gre up
> ip addr add 10.10.10.2/24 dev test-gre
> ip route add 169.254.0.0/24 dev test-gre
> 
> ### Output of 'ip route show'
> On U1
> 
> # ip route show
> default via 172.16.213.2 dev eth0 
> 10.10.10.0/24 dev test-gre  proto kernel  scope link  src 10.10.10.1 
> 169.254.0.0/24 dev ux-br0  proto kernel  scope link  src 169.254.0.1 
> 169.254.1.0/24 dev test-gre  scope link 
> 172.16.213.0/24 dev eth0  proto kernel  scope link  src 172.16.213.128 
> 
> # ping 169.254.1.1
> PING 169.254.1.1 (169.254.1.1) 56(84) bytes of data.
> 64 bytes from 169.254.1.1: icmp_seq=1 ttl=64 time=0.340 ms
> 64 bytes from 169.254.1.1: icmp_seq=2 ttl=64 time=0.305 ms
> 
> ---
> On U2
> 
> # ip route show
> default via 172.16.213.2 dev eth0 
> 10.10.10.0/24 dev test-gre  proto kernel  scope link  src 10.10.10.2 
> 169.254.0.0/24 dev test-gre  scope link 
> 169.254.1.0/24 dev ux-br0  proto kernel  scope link  src 169.254.1.1 
> 172.16.213.0/24 dev eth0  proto kernel  scope link  src 172.16.213.100
> 
> # ping 169.254.0.1
> PING 169.254.0.1 (169.254.0.1) 56(84) bytes of data.
> 64 bytes from 169.254.0.1: icmp_seq=1 ttl=64 time=0.720 ms
> 64 bytes from 169.254.0.1: icmp_seq=2 ttl=64 time=0.298 ms
> 
> 
> === End ===