[Gluster-users] Geo-replication slaves are faulty after startup

2016-11-25 Thread Alexandr Porunov
Hello,

I want to set up geo-replication between two volumes. The volumes themselves
work just fine, but geo-replication doesn't work at all.

My master volume nodes are:
192.168.0.120
192.168.0.121
192.168.0.122

My slave volume nodes are:
192.168.0.123
192.168.0.124
192.168.0.125

My OS is: CentOS 7
I am running GlusterFS 3.8.5

Here is the status of geo-replication session:
# gluster volume geo-replication gv0 geoaccount@192.168.0.123::gv0 status

MASTER NODE      MASTER VOL    MASTER BRICK        SLAVE USER    SLAVE                            SLAVE NODE       STATUS    CRAWL STATUS       LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------------
192.168.0.120    gv0           /data/brick1/gv0    geoaccount    geoaccount@192.168.0.123::gv0    192.168.0.123    Active    Changelog Crawl    2016-11-25 22:25:12
192.168.0.121    gv0           /data/brick1/gv0    geoaccount    geoaccount@192.168.0.123::gv0    N/A              Faulty    N/A                N/A
192.168.0.122    gv0           /data/brick1/gv0    geoaccount    geoaccount@192.168.0.123::gv0    N/A              Faulty    N/A                N/A
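
For a slightly richer view of the Faulty sessions, the same command also
accepts "status detail", which adds checkpoint and failure-count columns (a
sketch; the exact columns vary between releases):

# gluster volume geo-replication gv0 geoaccount@192.168.0.123::gv0 status detail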


I don't understand why it doesn't work. Here are the relevant log files from
the master node (192.168.0.120):
/var/log/glusterfs/etc-glusterfs-glusterd.vol.log -
http://paste.openstack.org/show/590503/

/var/log/glusterfs/mnt.log - http://paste.openstack.org/show/590504/

/var/log/glusterfs/run-gluster-shared_storage.log -
http://paste.openstack.org/show/590505/

/var/log/glusterfs/geo-replication/gv0/ssh%3A%2F%2Fgeoaccount%40192.168.0.123%3Agluster%3A%2F%2F127.0.0.1%3Agv0.log
- http://paste.openstack.org/show/590506/

Here is a log file from the slave node (192.168.0.123):
/var/log/glusterfs/geo-replication-slaves/5afe64e3-d4e9-452b-a9cf-10674e052616\:gluster%3A%2F%2F127.0.0.1%3Agv0.gluster.log
 - http://paste.openstack.org/show/590507/

Here is how I have created a session:
On slave nodes:
useradd geoaccount
groupadd geogroup
usermod -a -G geogroup geoaccount
usermod -a -G geogroup root
passwd geoaccount
mkdir -p /var/mountbroker-root
chown root:root -R /var/mountbroker-root
chmod 0711 -R /var/mountbroker-root
chown root:geogroup -R /var/lib/glusterd/geo-replication/*
chmod g=rwx,u=rwx,o-rwx -R /var/lib/glusterd/geo-replication/*
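
A hedged sanity check of the above (a sketch, using only standard commands)
is to verify the account, group and permissions on each slave node:

id geoaccount
ls -ld /var/mountbroker-root
ls -l /var/lib/glusterd/geo-replication

geoaccount should list geogroup among its groups, /var/mountbroker-root should
be 0711 and owned by root:root, and the files under
/var/lib/glusterd/geo-replication should be owned root:geogroup and
group-writable.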

On the slave (192.168.0.123):
gluster system:: execute mountbroker opt mountbroker-root
/var/mountbroker-root
gluster system:: execute mountbroker opt geo-replication-log-group geogroup
gluster system:: execute mountbroker opt rpc-auth-allow-insecure on
gluster system:: execute mountbroker user geoaccount gv0
/usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount gv0 gv0
gluster volume set all cluster.enable-shared-storage enable

Then I restarted glusterd on all the slave nodes:
systemctl restart glusterd

On the master node (192.168.0.120):
ssh-keygen
ssh-copy-id geoaccount@192.168.0.123
gluster system:: execute gsec_create container
gluster volume set all cluster.enable-shared-storage enable
gluster volume geo-replication gv0 geoaccount@192.168.0.123::gv0 create
ssh-port 22 push-pem
gluster volume geo-replication gv0 geoaccount@192.168.0.123::gv0 config
remote-gsyncd /usr/libexec/glusterfs/gsyncd
gluster volume geo-replication gv0 geoaccount@192.168.0.123::gv0 config
use-meta-volume true
gluster volume geo-replication gv0 geoaccount@192.168.0.123::gv0 config
sync-jobs 3
gluster volume geo-replication gv0 geoaccount@192.168.0.123::gv0 start
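
A hedged check after the steps above (the pem path below is the default
location used by gsec_create/push-pem; adjust if your layout differs): each
master node should be able to reach the slave as geoaccount without a password
prompt, and, since use-meta-volume is true, the shared storage volume should
be mounted (typically at /run/gluster/shared_storage):

ssh -i /var/lib/glusterd/geo-replication/secret.pem geoaccount@192.168.0.123
df -h /run/gluster/shared_storage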

Does somebody know what is wrong with this installation? I have tried to set
up geo-replication several times, but without success. Please help me.

Sincerely,
Alexandr
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Unable to start/deploy VMs after Qemu/Gluster upgrade to 2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1

2016-11-25 Thread Lindsay Mathieson

On 26/11/2016 1:47 AM, Lindsay Mathieson wrote:

On 26/11/2016 1:11 AM, Martin Toth wrote:

Qemu is from https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.7


Why? Proxmox qemu already has gluster support built in.



Ooops, sorry, wrong list - thought this was the proxmox list.

--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Unable to start/deploy VMs after Qemu/Gluster upgrade to 2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1

2016-11-25 Thread Lindsay Mathieson

On 26/11/2016 1:11 AM, Martin Toth wrote:

Qemu is from https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.7


Why? Proxmox qemu already has gluster support built in.

--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Unable to start/deploy VMs after Qemu/Gluster upgrade to 2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1

2016-11-25 Thread Martin Toth
Hello all,

we are using your qemu packages to deploy qemu VMs on our gluster via gfapi.
A recent upgrade broke our qemu and we are no longer able to deploy / start
VMs.

Gluster is running OK and mounted with FUSE; everything looks fine, so there
is probably some problem with qemu while accessing gluster through gfapi.

These are our current versions (Qemu is from 
https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.7 ):
ii  glusterfs-client     3.7.17-ubuntu1~trusty2                        amd64  clustered file-system (client package)
ii  glusterfs-common     3.7.17-ubuntu1~trusty2                        amd64  GlusterFS common libraries and translator modules
ii  glusterfs-server     3.7.17-ubuntu1~trusty2                        amd64  clustered file-system (server package)
ii  qemu-keymaps         2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1  all    QEMU keyboard maps
ii  qemu-kvm             2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1  amd64  QEMU Full virtualization
ii  qemu-system-common   2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1  amd64  QEMU full system emulation binaries (common files)
ii  qemu-system-x86      2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1  amd64  QEMU full system emulation binaries (x86)
ii  qemu-utils           2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1  amd64  QEMU utilities

The error we see is attached lower in the mail. Do you have any suggestions
as to what could cause this problem?

Thanks in advance for your help.

Regards,
Martin

Volume Name: vmvol
Type: Replicate
Volume ID: a72b5c9e-b8ff-488e-b10f-5ba4b71e62b8
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1.storage.internal:/gluster/vmvol/brick1
Brick2: node2.storage.internal:/gluster/vmvol/brick1
Brick3: node3.storage.internal:/gluster/vmvol/brick1
Options Reconfigured:
cluster.self-heal-daemon: on
nfs.disable: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
performance.stat-prefetch: on
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
cluster.eager-lock: enable
storage.owner-gid: 9869
storage.owner-uid: 9869
server.allow-insecure: on
performance.readdir-ahead: on
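
For reference, gfapi can also be exercised outside of a VM start with
qemu-img, which goes through the same gluster block driver (a sketch, reusing
the gluster:// URL from the qemu command line below):

qemu-img info gluster://node1:24007/vmvol/67/disk.1

If that also fails, the problem is in the qemu/gfapi packages rather than in
the VM definition.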

2016-11-25 12:54:06.121+: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin 
QEMU_AUDIO_DRV=none /usr/bin/qemu-system-x86_64 -name one-67 -S -machine 
pc-i440fx-trusty,accel=kvm,usb=off -m 1024 -realtime mlock=off -smp 
1,sockets=1,cores=1,threads=1 -uuid 3c703b26-3f57-44d0-8d76-bb281fd8902c 
-no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-67.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown 
-boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive 
file=gluster://node1:24007/vmvol/67/disk.1,if=none,id=drive-ide0-0-0,format=qcow2,cache=none
 -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 
-drive 
file=/var/lib/one//datastores/112/67/disk.0,if=none,id=drive-ide0-0-1,readonly=on,format=raw,cache=none
 -device ide-cd,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1,bootindex=3 
-drive file=/var/lib/one//datastores/112/67/disk.2,if=none,id=drive-ide0-1-0
 ,readonly=on,format=raw -device 
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev 
tap,fd=24,id=hostnet0 -device 
rtl8139,netdev=hostnet0,id=net0,mac=02:00:0a:c8:64:1f,bus=pci.0,addr=0x3,bootindex=1
 -vnc 0.0.0.0:67 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
[2016-11-25 12:54:06.254217] I [MSGID: 104045] [glfs-master.c:96:notify] 
0-gfapi: New graph 6e6f6465-322d-3231-3330-392d32303136 (0) coming up
[2016-11-25 12:54:06.254246] I [MSGID: 114020] [client.c:2113:notify] 
0-vmvol-client-0: parent translators are ready, attempting connect on transport
[2016-11-25 12:54:06.254473] I [MSGID: 114020] [client.c:2113:notify] 
0-vmvol-client-1: parent translators are ready, attempting connect on transport
[2016-11-25 12:54:06.254672] I [MSGID: 114020] [client.c:2113:notify] 
0-vmvol-client-2: parent translators are ready, attempting connect on transport
[2016-11-25 12:54:06.254844] I [rpc-clnt.c:1960:rpc_clnt_reconfig] 
0-vmvol-client-0: changing port to 49152 (from 0)
[2016-11-25 12:54:06.255303] I [MSGID: 114057] 
[client-handshake.c:1437:select_server_supported_programs] 0-vmvol-client-0: 
Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-11-25 12:54:06.255391] I [rpc-clnt.c:1960:rpc_clnt_reconfig] 
0-vmvol-client-2: changing port to 49152 (from 0)
[2016-11-25 12:54:06.255844] I [MSGID: 114057] 
[client-handshake.c:1437:select_server_supported_programs] 0-vmvol-clie

Re: [Gluster-users] geo replication as backup

2016-11-25 Thread Ivan Rossi
I would not say that it is the only, or the official, way.
For example, the bareos (bareos.org) backup system can talk to gluster via
gfapi, IIRC.
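
(For reference, the checkpoint workflow discussed in the quoted thread below
is roughly the following; a sketch with placeholder volume/host names:

gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL config checkpoint now
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL status detail

The second command reports whether the checkpoint has completed.)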

On 21 Nov 2016 at 17:32, "Gandalf Corvotempesta" <
gandalf.corvotempe...@gmail.com> wrote:
>
> 2016-11-21 15:48 GMT+01:00 Aravinda :
> > When you set a checkpoint, you can watch the status of checkpoint
> > completion using geo-rep status. Once the checkpoint completes, it is
> > guaranteed that everything created before the Checkpoint Time is synced
> > to the slave. (Note: it only ensures that all creates/updates done before
> > the checkpoint time are synced; Geo-rep may also sync files which are
> > created/modified after the Checkpoint Time.)
> >
> > Read more about Checkpoint here
> > http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/#checkpoint
>
> Thank you.
> So, can I assume this is the official (and only) way to properly
> back up a Gluster storage?
> I'm saying "only" way because it would be impossible to back up a
> multi-terabyte storage with any other software.
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Impact of low optical power on GlusterFS?

2016-11-25 Thread Pieter Baele
Hi,

We have some important volumes on replica 2 sets (Red Hat Storage). Today we
had an incident in which we couldn't mount one of the volumes anymore; the
other (less used) volumes had no problems.

At more or less the same moment we also received a warning/alarm from our
networking team that one of the optical/fibre 10G uplinks on those servers is
having low power issues (< -10dB).

Which impact can this have on Gluster?

In the logs, chronologically:
- a lot of selfheals: dht_log_new_layout_for_dir_selfheal
- after this, from a client, for both servers: "has not responded in the last
42 seconds, disconnecting."
- saved_frames_unwind
- remote operation failures
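
For what it's worth, the 42-second disconnect matches the default value of
the client-side network.ping-timeout option; a hedged check on the affected
volume (assuming a release that supports "volume get") would be:

gluster volume get <volname> network.ping-timeout

A flapping uplink can make clients hit this timeout and drop their
connections to the bricks.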

I can upload a lot of files to RH support, but AFAIK the best expertise is
available here :-)


Thx
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] DISPERSED VOLUME

2016-11-25 Thread Serkan Çoban
I think you should try with a bigger file: 1, 10, 100, 1000 KB?
Small files might just be replicated to the bricks... (just a guess).
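
A hedged way to see the dispersal in action (a sketch that reuses the paths
from the quoted setup below; on-brick sizes include some EC padding/metadata):

On the client mount:
dd if=/dev/urandom of=/home/cli1/gv7_dispersed_directory/bigfile bs=1M count=30

On each server:
ls -lh /data/brick1/gv7/bigfile

With disperse-data 3 / redundancy 1, each of the four bricks should hold
roughly a third of the file (about 10M here) rather than a full copy.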

On Fri, Nov 25, 2016 at 12:41 PM, Alexandre Blanca
 wrote:
> Hi,
>
> I am a beginner in distributed file systems and I currently work on
> Glusterfs.
> I work with 4 VM : srv1, srv2, srv3 and cli1
> I tested several types of volume (distributed, replicated, striped ...),
> which for me correspond to JBOD, RAID 1 and RAID 0.
> When I try to make a dispersed volume (RAID 5/6), something confuses
> me ...
>
>
> gluster volume create gv7 disperse-data 3 redundancy 1
> ipserver1:/data/brick1/gv7 ipserver2:/data/brick1/gv7
> ipserver3:/data/brick1/gv7 ipserver4:/data/brick1/gv7
>
>
> gluster volume info
>
>
> Volume Name: gv7
> Type: Disperse
> Status: Created
> Number of Bricks: 4
> Transport-type: tcp
> Bricks:
> Brick1: ipserver1:/data/brick1/gv7
> Brick2: ipserver2:/data/brick1/gv7
> Brick3: ipserver3:/data/brick1/gv7
> Brick4: ipserver4:/data/brick1/gv7
>
> gluster volume start gv7
>
>
> mkdir /home/cli1/gv7_dispersed_directory
>
>
> mount -t glusterfs ipserver1:/gv7 /home/cli1/gv7_dispersed_directory
>
>
>
> Now, when I create a file on my mount point (gv7_dispersed_directory):
>
>
> cd /home/cli1/gv7_dispersed_directory
>
>
> echo 'hello world !' >> test_file
>
>
> I can see in my srv1 :
>
>
> cd /data/brick1/gv7
>
>
> cat test
>
>
> hello world !
>
>
> in my srv2 :
>
>
>
> cd /data/brick1/gv7
>
>
>
> cat test
>
>
>
> hello world !
>
>
> in my srv4:
>
>
>
> cd /data/brick1/gv7
>
>
>
> cat test
>
>
>
> hello world !
>
>
> but in my srv3 :
>
>
>
> cd /data/brick1/gv7
>
>
>
> cat test
>
>
>
> hello world !
>
> hello world !
>
>
> Why?! The output on server 3 displays 'hello world !' twice. Parity?
> Redundancy? I don't know...
>
> Best regards
>
> Alex
>
>
>
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] DISPERSED VOLUME

2016-11-25 Thread Alexandre Blanca
Hi,

I am a beginner in distributed file systems and I currently work on Glusterfs.
I work with 4 VMs: srv1, srv2, srv3 and cli1.
I tested several types of volume (distributed, replicated, striped ...), which
for me correspond to JBOD, RAID 1 and RAID 0.
When I try to make a dispersed volume (RAID 5/6), something confuses me ...

gluster volume create gv7 disperse-data 3 redundancy 1
ipserver1:/data/brick1/gv7 ipserver2:/data/brick1/gv7
ipserver3:/data/brick1/gv7 ipserver4:/data/brick1/gv7

gluster volume info

Volume Name: gv7
Type: Disperse
Status: Created
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: ipserver1:/data/brick1/gv7
Brick2: ipserver2:/data/brick1/gv7
Brick3: ipserver3:/data/brick1/gv7
Brick4: ipserver4:/data/brick1/gv7

gluster volume start gv7

mkdir /home/cli1/gv7_dispersed_directory

mount -t glusterfs ipserver1:/gv7 /home/cli1/gv7_dispersed_directory

Now, when I create a file on my mount point (gv7_dispersed_directory):

cd /home/cli1/gv7_dispersed_directory

echo 'hello world !' >> test_file

I can see in my srv1:

cd /data/brick1/gv7
cat test
hello world !

in my srv2:

cd /data/brick1/gv7
cat test
hello world !

in my srv4:

cd /data/brick1/gv7
cat test
hello world !

but in my srv3:

cd /data/brick1/gv7
cat test
hello world !
hello world !

Why?! The output on server 3 displays 'hello world !' twice. Parity?
Redundancy? I don't know...
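
For anyone digging further, the erasure-coding metadata gluster keeps on each
brick can be inspected directly (a sketch, assuming root access on a brick
server and that the attr tools are installed):

getfattr -d -m . -e hex /data/brick1/gv7/test_file

On a dispersed volume the trusted.ec.* attributes (for example trusted.ec.size
and trusted.ec.version) show that the bricks hold fragments managed by the
disperse translator rather than independent whole-file replicas.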

Best regards 

Alex 















___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users