Re: [Gluster-users] gluster and kvm livemigration

2014-01-26 Thread Paul Boven

Hi James, everyone,

When debugging things, I already came across this bug. It is unlikely to 
be the cause of our issues:


Firstly, we migrated from 3.4.0 to 3.4.1, so we already had the possible 
port number conflict, but things worked fine with 3.4.0.


Secondly, I don't see the 'address already in use' messages in my 
logfiles (see the qemu logfiles I posted).


Also, the migration itself doesn't fail: it works fine, the guest ends 
up on the other server, it's just that the migrated guest loses 
read/write access to its filesystem.


Regards, Paul Boven.

On 01/24/2014 01:19 AM, James wrote:

Not sure if it's related at all, but is there any chance this has
anything to do with:

https://bugzilla.redhat.com/show_bug.cgi?id=987555

It came to mind as something to do with glusterfs+libvirt+migration.

HTH,
James



On Thu, Jan 16, 2014 at 5:52 AM, Bernhard Glomm
bernhard.gl...@ecologic.eu wrote:

I experienced a strange behavior of glusterfs during livemigration
of a qemu-kvm guest
using a 10GB file on a mirrored gluster 3.4.2 volume
(both on ubuntu 13.04)
I run
virsh migrate --verbose --live --unsafe --p2p --domain atom01
--desturi qemu+ssh://target_ip/system
and the migration works,
the running machine is pingable and keeps sending pings.
Nevertheless, when I let the machine touch a file during migration,
it stops, complaining that its filesystem is read-only (from the
moment the migration finished).
A reboot from inside the machine fails:
the machine goes down and comes up with an error
"unable to write to sector xx on hd0"
(then falling into the initrd).
A
virsh destroy VM && virsh start VM
leads to a perfectly running VM again,
no matter on which of the two hosts I start the machine.
Does anybody have better experience with live migration?
Any hint on how to debug this?
TIA
Bernhard

--

*Ecologic Institute**Bernhard Glomm*
IT Administration

Phone:  +49 (30) 86880 134
Fax:    +49 (30) 86880 100
Skype:  bernhard.glomm.ecologic

Website: http://ecologic.eu | Video:
http://www.youtube.com/v/hZtiK04A9Yo | Newsletter:
http://ecologic.eu/newsletter/subscribe | Facebook:
http://www.facebook.com/Ecologic.Institute | Linkedin:
http://www.linkedin.com/company/ecologic-institute-berlin-germany
| Twitter: http://twitter.com/EcologicBerlin | YouTube:
http://www.youtube.com/user/EcologicInstitute | Google+:
http://plus.google.com/113756356645020994482
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 |
10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 |
USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users




--
Paul Boven bo...@jive.nl +31 (0)521-596547
Unix/Linux/Networking specialist
Joint Institute for VLBI in Europe - www.jive.nl
VLBI - It's a fringe science
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster and kvm livemigration

2014-01-26 Thread Paul Boven

Hi Bernhard,

Indeed I see the same behaviour:
When a guest is running, it is owned by libvirt:kvm on both servers.
When a guest is stopped, it is owned by root:root on both servers.
In a failed migration, the ownership changes to root:root.

I'm not convinced though that it is a simple unix permission problem, 
because after a failed migration, the guest.raw image is completely 
unreadable on the destination machine, even for root (permission 
denied), whereas I can still read it fine (e.g. dd or md5sum) on the 
originating server.


Regards, Paul Boven.

On 01/23/2014 08:10 PM, BGM wrote:

Hi Paul,
thanks, nice report.
Did you file the bug?
Can you run a
watch 'tree -pfungiA <path to your vm images pool>'
on both hosts,
with some VMs running, some stopped;
then start a machine and
trigger the migration.
At some point, the ownership of the vm image file flips from
libvirtd (running machine) to root (the normal permission, but only when stopped).
If the ownership/permission flips that way,
libvirtd on the receiving side
can't write that file ...
Does the group/ACL permission flip likewise?
Regards
Bernhard

On 23.01.2014, at 16:49, Paul Boven bo...@jive.nl wrote:


Hi Bernhard,

I'm having exactly the same problem on Ubuntu 13.04 with the 3.4.1 packages 
from semiosis. It worked fine with glusterfs-3.4.0.

We've been trying to debug this on the list, but haven't found the smoking gun 
yet.

Please have a look at the URL below, and see if it matches what you are 
experiencing?

http://epboven.home.xs4all.nl/gluster-migrate.html

Regards, Paul Boven.

On 01/23/2014 04:27 PM, Bernhard Glomm wrote:


I had/have problems with live-migrating a virtual machine on a two-sided
replica volume.

I run ubuntu 13.04 and gluster 3.4.2 from semiosis.

With network.remote-dio set to enable I can use cache mode = none as
the performance option for the virtual disks,
so live migration works without --unsafe.

I'm triggering the migration now through the Virtual Machine Manager as an
unprivileged user which is a member of the libvirtd group.

After migration the disks become read-only, because
during migration the disk file's ownership changes from
libvirt-qemu to root.


What am I missing?


TIA


Bernhard



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users



--
Paul Boven bo...@jive.nl +31 (0)521-596547
Unix/Linux/Networking specialist
Joint Institute for VLBI in Europe - www.jive.nl
VLBI - It's a fringe science
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users



--
Paul Boven bo...@jive.nl +31 (0)521-596547
Unix/Linux/Networking specialist
Joint Institute for VLBI in Europe - www.jive.nl
VLBI - It's a fringe science
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster and kvm livemigration

2014-01-26 Thread Andrew Lau
Have you tried setting the uid/gid as part of the gluster volume? oVirt,
for example, uses 36:36 for virt.

gluster volume set DATA storage.owner-uid 36
gluster volume set DATA storage.owner-gid 36

I'm assuming that setting this will enforce these ownership permissions on all
files.
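
For the Ubuntu hosts discussed in this thread, a minimal sketch of that approach (assuming uid 107 = libvirt-qemu and gid 104 = kvm, the numbers that show up later in the thread; verify with "id libvirt-qemu" on your own hosts) would be:

# the volume has to be stopped before setting these, as noted elsewhere in the thread
gluster volume stop glfs_atom01
gluster volume set glfs_atom01 storage.owner-uid 107
gluster volume set glfs_atom01 storage.owner-gid 104
gluster volume start glfs_atom01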

On Sun, Jan 26, 2014 at 10:42 PM, Paul Boven bo...@jive.nl wrote:

 Hi Bernhard,

 Indeed I see the same behaviour:
 When a guest is running, it is owned by libvirt:kvm on both servers.
 When a guest is stopped, it is owned by root:root on both servers.
 In a failed migration, the ownership changes to root:root.

 I'm not convinced though that it is a simple unix permission problem,
 because after a failed migration, the guest.raw image is completely
 unreadable on the destination machine, even for root (permission denied),
 whereas I can still read it fine (e.g. dd or md5sum) on the originating
 server.

 Regards, Paul Boven.


 On 01/23/2014 08:10 PM, BGM wrote:

 Hi Paul,
 thanks, nice report.
 Did you file the bug?
 Can you run a
 watch 'tree -pfungiA <path to your vm images pool>'
 on both hosts,
 with some VMs running, some stopped;
 then start a machine and
 trigger the migration.
 At some point, the ownership of the vm image file flips from
 libvirtd (running machine) to root (the normal permission, but only when
 stopped).
 If the ownership/permission flips that way,
 libvirtd on the receiving side
 can't write that file ...
 Does the group/ACL permission flip likewise?
 Regards
 Bernhard

 On 23.01.2014, at 16:49, Paul Boven bo...@jive.nl wrote:

  Hi Bernhard,

 I'm having exactly the same problem on Ubuntu 13.04 with the 3.4.1
 packages from semiosis. It worked fine with glusterfs-3.4.0.

 We've been trying to debug this on the list, but haven't found the
 smoking gun yet.

 Please have a look at the URL below, and see if it matches what you are
 experiencing?

 http://epboven.home.xs4all.nl/gluster-migrate.html

 Regards, Paul Boven.

 On 01/23/2014 04:27 PM, Bernhard Glomm wrote:


 I had/have problems with live-migrating a virtual machine on a two-sided
 replica volume.

 I run ubuntu 13.04 and gluster 3.4.2 from semiosis.

 With network.remote-dio set to enable I can use cache mode = none as
 the performance option for the virtual disks,
 so live migration works without --unsafe.

 I'm triggering the migration now through the Virtual Machine Manager as an
 unprivileged user which is a member of the libvirtd group.

 After migration the disks become read-only, because
 during migration the disk file's ownership changes from
 libvirt-qemu to root.


 What am I missing?


 TIA


 Bernhard



 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users



 --
 Paul Boven bo...@jive.nl +31 (0)521-596547
 Unix/Linux/Networking specialist
 Joint Institute for VLBI in Europe - www.jive.nl
 VLBI - It's a fringe science
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users



 --
 Paul Boven bo...@jive.nl +31 (0)521-596547
 Unix/Linux/Networking specialist
 Joint Institute for VLBI in Europe - www.jive.nl
 VLBI - It's a fringe science
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster and kvm livemigration

2014-01-24 Thread Bernhard Glomm

samuli wrote: 
  Can you try to set storage.owner-uid and storage.owner-gid to 
  libvirt-qemu? To do that you have to stop the volume.

Hi Samuli, hi all,
I tried setting storage.owner-uid and storage.owner-gid to libvirt-qemu, as
suggested, but with the same effect: during live migration the ownership of the
image file changes from libvirt-qemu/kvm to root/root.
root@pong[/5]:~ # gluster volume info glfs_atom01
Volume Name: glfs_atom01
Type: Replicate
Volume ID: f28f0f62-37b3-4b10-8e86-9b373f4c0e75
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.24.1.11:/ecopool/fs_atom01
Brick2: 172.24.1.13:/ecopool/fs_atom01
Options Reconfigured:
storage.owner-gid: 104
storage.owner-uid: 107
network.remote-dio: enable
This is tree -pfungiA <path to where my images live>; atom01 is running:
[-rw------- libvirt- kvm     ]  /srv/vms/mnt_atom01/atom01.img
[drwxr-xr-x libvirt- kvm     ]  /srv/vms/mnt_atom02
[-rw------- root     root    ]  /srv/vms/mnt_atom02/atom02.img
[drwxr-xr-x libvirt- kvm     ]  /srv/vms/mnt_atom03

Now I migrate through the VirtualMachineManager and, watching tree, I see the
permissions changing to:
[drwxr-xr-x libvirt- kvm     ]  /srv/vms/mnt_atom01
[-rw------- root     root    ]  /srv/vms/mnt_atom01/atom01.img
[drwxr-xr-x libvirt- kvm     ]  /srv/vms/mnt_atom02
[-rw------- root     root    ]  /srv/vms/mnt_atom02/atom02.img
From inside atom01 (the VM) the filesystem becomes read-only. But in contrast
to http://epboven.home.xs4all.nl/gluster-migrate.html
I can still read all files and checksum them, there is just no write access.
From outside, the image file behaves as Paul described: as long as the machine
is running I can't read the file.
root@pong[/5]:~ # virsh list
 Id    Name                           State
 6     atom01                         running

root@pong[/5]:~ # l /srv/vms/mnt_atom01/atom01.img
-rw------- 1 root root 10G Jan 24 10:20 /srv/vms/mnt_atom01/atom01.img
root@pong[/5]:~ # file /srv/vms/mnt_atom01/atom01.img
/srv/vms/mnt_atom01/atom01.img: writable, regular file, no read permission
root@pong[/5]:~ # md5sum /srv/vms/mnt_atom01/atom01.img
md5sum: /srv/vms/mnt_atom01/atom01.img: Permission denied
root@pong[/5]:~ # virsh destroy atom01
Domain atom01 destroyed

root@pong[/5]:~ # l /srv/vms/mnt_atom01/atom01.img
-rw------- 1 root root 10G Jan 24 10:20 /srv/vms/mnt_atom01/atom01.img
root@pong[/5]:~ # file /srv/vms/mnt_atom01/atom01.img
/srv/vms/mnt_atom01/atom01.img: x86 boot sector; partition 1: ID=0x83, starthead 1, startsector 63, 16777165 sectors; partition 2: ID=0xf, starthead 254, startsector 16777228, 1677718 sectors, code offset 0x63
root@pong[/5]:~ # md5sum /srv/vms/mnt_atom01/atom01.img
9d048558deb46fef7b24e8895711c554  /srv/vms/mnt_atom01/atom01.img
root@pong[/5]:~ #

But interestingly, the source of the migration can access the file after the
migration completed. Like so: start atom01 on host ping, migrate it to pong:

root@pong[/8]:~ # file /srv/vms/mnt_atom01/atom01.img
/srv/vms/mnt_atom01/atom01.img: writable, regular file, no read permission

root@ping[/5]:~ # file /srv/vms/mnt_atom01/atom01.img
/srv/vms/mnt_atom01/atom01.img: x86 boot sector; partition 1: ID=0x83, starthead 1, startsector 63, 16777165 sectors; partition 2: ID=0xf, starthead 254, startsector 16777228, 1677718 sectors, code offset 0x63
100% reproducible 
Regards
Bernhard
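
A quick way to narrow this down further would be to compare the client view with the brick view, e.g. (a sketch using the mount and brick paths from the volume info above; adjust to your layout):

# through the FUSE mount (client view)
stat -c '%U:%G %a %n' /srv/vms/mnt_atom01/atom01.img
# directly on the bricks (server view)
ssh 172.24.1.11 "stat -c '%U:%G %a %n' /ecopool/fs_atom01/atom01.img"
ssh 172.24.1.13 "stat -c '%U:%G %a %n' /ecopool/fs_atom01/atom01.img"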
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster and kvm livemigration

2014-01-24 Thread Bernhard Glomm

I submitted 
Bug 1057645 https://bugzilla.redhat.com/show_bug.cgi?id=1057645


Bernhard
On 24.01.2014 11:07:49, Bernhard Glomm wrote:
 samuli wrote: 
   Can you try to set storage.owner-uid and storage.owner-gid to 
   libvirt-qemu? To do that you have to stop the volume.
 hi samuli, hi all 
 I tried setting storage.owner-uid and storage.owner-gid to libvirt-qemu, as
 suggested, but with the same effect: during live migration the ownership of
 the image file changes from libvirt-qemu/kvm to root/root.

 root@pong[/5]:~ # gluster volume info glfs_atom01
 Volume Name: glfs_atom01
 Type: Replicate
 Volume ID: f28f0f62-37b3-4b10-8e86-9b373f4c0e75
 Status: Started
 Number of Bricks: 1 x 2 = 2
 Transport-type: tcp
 Bricks:
 Brick1: 172.24.1.11:/ecopool/fs_atom01
 Brick2: 172.24.1.13:/ecopool/fs_atom01
 Options Reconfigured:
 storage.owner-gid: 104
 storage.owner-uid: 107
 network.remote-dio: enable

 This is tree -pfungiA <path to where my images live>; atom01 is running:
 [-rw------- libvirt- kvm     ]  /srv/vms/mnt_atom01/atom01.img
 [drwxr-xr-x libvirt- kvm     ]  /srv/vms/mnt_atom02
 [-rw------- root     root    ]  /srv/vms/mnt_atom02/atom02.img
 [drwxr-xr-x libvirt- kvm     ]  /srv/vms/mnt_atom03

 Now I migrate through the VirtualMachineManager and, watching tree, I see the
 permissions changing to:
 [drwxr-xr-x libvirt- kvm     ]  /srv/vms/mnt_atom01
 [-rw------- root     root    ]  /srv/vms/mnt_atom01/atom01.img
 [drwxr-xr-x libvirt- kvm     ]  /srv/vms/mnt_atom02
 [-rw------- root     root    ]  /srv/vms/mnt_atom02/atom02.img

 From inside atom01 (the VM) the filesystem becomes read-only. But in
 contrast to http://epboven.home.xs4all.nl/gluster-migrate.html
 I can still read all files and checksum them, there is just no write access.
 From outside, the image file behaves as Paul described: as long as the machine
 is running I can't read the file.

 root@pong[/5]:~ # virsh list
  Id    Name                           State
  6     atom01                         running

 root@pong[/5]:~ # l /srv/vms/mnt_atom01/atom01.img
 -rw------- 1 root root 10G Jan 24 10:20 /srv/vms/mnt_atom01/atom01.img
 root@pong[/5]:~ # file /srv/vms/mnt_atom01/atom01.img
 /srv/vms/mnt_atom01/atom01.img: writable, regular file, no read permission
 root@pong[/5]:~ # md5sum /srv/vms/mnt_atom01/atom01.img
 md5sum: /srv/vms/mnt_atom01/atom01.img: Permission denied
 root@pong[/5]:~ # virsh destroy atom01
 Domain atom01 destroyed

 root@pong[/5]:~ # l /srv/vms/mnt_atom01/atom01.img
 -rw------- 1 root root 10G Jan 24 10:20 /srv/vms/mnt_atom01/atom01.img
 root@pong[/5]:~ # file /srv/vms/mnt_atom01/atom01.img
 /srv/vms/mnt_atom01/atom01.img: x86 boot sector; partition 1: ID=0x83,
 starthead 1, startsector 63, 16777165 sectors; partition 2: ID=0xf, starthead
 254, startsector 16777228, 1677718 sectors, code offset 0x63
 root@pong[/5]:~ # md5sum /srv/vms/mnt_atom01/atom01.img
 9d048558deb46fef7b24e8895711c554  /srv/vms/mnt_atom01/atom01.img
 root@pong[/5]:~ #

 But interestingly, the source of the migration can access the file after the
 migration completed. Like so: start atom01 on host ping, migrate it to pong:
 root@pong[/8]:~ # file /srv/vms/mnt_atom01/atom01.img
 /srv/vms/mnt_atom01/atom01.img: writable, regular file, no read permission
 root@ping[/5]:~ # file /srv/vms/mnt_atom01/atom01.img
 /srv/vms/mnt_atom01/atom01.img: x86 boot sector; partition 1: ID=0x83,
 starthead 1, startsector 63, 16777165 sectors; partition 2: ID=0xf, starthead
 254, startsector 16777228, 1677718 sectors, code offset 0x63
 100% reproducible 
 Regards
 Bernhard ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users



-- 
Bernhard Glomm
IT Administration

Phone: +49 (30) 86880 134
Fax:   +49 (30) 86880 100
Skype: bernhard.glomm.ecologic

Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster and kvm livemigration

2014-01-23 Thread Bernhard Glomm


I had/have problems with live-migrating a virtual machine on a two-sided
replica volume.

I run ubuntu 13.04 and gluster 3.4.2 from semiosis.

With network.remote-dio set to enable I can use cache mode = none as
the performance option for the virtual disks,
so live migration works without --unsafe.

I'm triggering the migration now through the Virtual Machine Manager as an
unprivileged user which is a member of the libvirtd group.

After migration the disks become read-only, because
during migration the disk file's ownership changes from
libvirt-qemu to root.
What am I missing?
TIA
Bernhard
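
As a sanity check for this kind of setup, something along these lines can be run on the host (a sketch; the volume and image names are the ones used elsewhere in this thread):

# confirm the reconfigured options on the volume
gluster volume info glfs_atom01 | grep -E 'owner|remote-dio'
# confirm the migrating user really is in the libvirtd group
id -nG | grep -w libvirtd
# and check the image ownership right after the migration
ls -l /srv/vms/mnt_atom01/atom01.img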
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster and kvm livemigration

2014-01-23 Thread Paul Boven

Hi Bernhard,

I'm having exactly the same problem on Ubuntu 13.04 with the 3.4.1 
packages from semiosis. It worked fine with glusterfs-3.4.0.


We've been trying to debug this on the list, but haven't found the 
smoking gun yet.


Please have a look at the URL below, and see if it matches what you are 
experiencing?


http://epboven.home.xs4all.nl/gluster-migrate.html

Regards, Paul Boven.

On 01/23/2014 04:27 PM, Bernhard Glomm wrote:


I had/have problems with live-migrating a virtual machine on a two-sided
replica volume.

I run ubuntu 13.04 and gluster 3.4.2 from semiosis.

With network.remote-dio set to enable I can use cache mode = none as
the performance option for the virtual disks,
so live migration works without --unsafe.

I'm triggering the migration now through the Virtual Machine Manager as an
unprivileged user which is a member of the libvirtd group.

After migration the disks become read-only, because
during migration the disk file's ownership changes from
libvirt-qemu to root.


What am I missing?


TIA


Bernhard



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users




--
Paul Boven bo...@jive.nl +31 (0)521-596547
Unix/Linux/Networking specialist
Joint Institute for VLBI in Europe - www.jive.nl
VLBI - It's a fringe science
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster and kvm livemigration

2014-01-23 Thread Vijay Bellur

On 01/23/2014 09:19 PM, Paul Boven wrote:

Hi Bernhard,

I'm having exactly the same problem on Ubuntu 13.04 with the 3.4.1
packages from semiosis. It worked fine with glusterfs-3.4.0.

We've been trying to debug this on the list, but haven't found the
smoking gun yet.

Please have a look at the URL below, and see if it matches what you are
experiencing?

http://epboven.home.xs4all.nl/gluster-migrate.html



I think it would be a good idea to track this as a bug report. Would it 
be possible to open a new bug at [1] with client and server log files?


Thanks,
Vijay

[1] https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster and kvm livemigration

2014-01-23 Thread BGM
Hi Paul,
thanks, nice report.
Did you file the bug?
Can you run a
watch 'tree -pfungiA <path to your vm images pool>'
on both hosts,
with some VMs running, some stopped;
then start a machine and
trigger the migration.
At some point, the ownership of the vm image file flips from
libvirtd (running machine) to root (the normal permission, but only when stopped).
If the ownership/permission flips that way,
libvirtd on the receiving side
can't write that file ...
Does the group/ACL permission flip likewise?
Regards
Bernhard
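
Concretely, that could look like this (just a sketch; /srv/vms is the image pool path used elsewhere in this thread):

# refresh every second, showing permissions, owner, group and full paths
watch -n 1 "tree -pfungiA /srv/vms"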

On 23.01.2014, at 16:49, Paul Boven bo...@jive.nl wrote:

 Hi Bernhard,
 
 I'm having exactly the same problem on Ubuntu 13.04 with the 3.4.1 packages 
 from semiosis. It worked fine with glusterfs-3.4.0.
 
 We've been trying to debug this on the list, but haven't found the smoking 
 gun yet.
 
 Please have a look at the URL below, and see if it matches what you are 
 experiencing?
 
 http://epboven.home.xs4all.nl/gluster-migrate.html
 
 Regards, Paul Boven.
 
 On 01/23/2014 04:27 PM, Bernhard Glomm wrote:
 
 I had/have problems with live-migrating a virtual machine on a two-sided
 replica volume.

 I run ubuntu 13.04 and gluster 3.4.2 from semiosis.

 With network.remote-dio set to enable I can use cache mode = none as
 the performance option for the virtual disks,
 so live migration works without --unsafe.

 I'm triggering the migration now through the Virtual Machine Manager as an
 unprivileged user which is a member of the libvirtd group.

 After migration the disks become read-only, because
 during migration the disk file's ownership changes from
 libvirt-qemu to root.
 
 
 What am I missing?
 
 
 TIA
 
 
 Bernhard
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
 
 
 -- 
 Paul Boven bo...@jive.nl +31 (0)521-596547
 Unix/Linux/Networking specialist
 Joint Institute for VLBI in Europe - www.jive.nl
 VLBI - It's a fringe science
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster and kvm livemigration

2014-01-23 Thread James
Not sure if it's related at all, but is there any chance this has anything
to do with:

https://bugzilla.redhat.com/show_bug.cgi?id=987555

It came to mind as something to do with glusterfs+libvirt+migration.

HTH,
James



On Thu, Jan 16, 2014 at 5:52 AM, Bernhard Glomm
bernhard.gl...@ecologic.eu wrote:

 I experienced a strange behavior of glusterfs during livemigration
 of a qemu-kvm guest
 using a 10GB file on a mirrored gluster 3.4.2 volume
 (both on ubuntu 13.04)
 I run
 virsh migrate --verbose --live --unsafe --p2p --domain atom01 --desturi
 qemu+ssh://target_ip/system
 and the migration works,
 the running machine is pingable and keeps sending pings.
 Nevertheless, when I let the machine touch a file during migration,
 it stops, complaining that its filesystem is read-only (from the moment
 the migration finished).
 A reboot from inside the machine fails:
 the machine goes down and comes up with an error
 "unable to write to sector xx on hd0"
 (then falling into the initrd).
 A
 virsh destroy VM && virsh start VM
 leads to a perfectly running VM again,
 no matter on which of the two hosts I start the machine.
 Does anybody have better experience with live migration?
 Any hint on how to debug this?
 TIA
 Bernhard
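
 For reference, the "touch a file during migration" test can be as simple as a loop like this inside the guest (a sketch with a made-up file name):

 # run inside the guest before triggering the migration; it fails with
 # "Read-only file system" the moment the root filesystem goes read-only
 while true; do date >> /root/migration-write-test && sync; sleep 1; done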

 --
 *Ecologic Institute*   *Bernhard Glomm*
 IT Administration

 Phone: +49 (30) 86880 134   Fax: +49 (30) 86880 100   Skype: bernhard.glomm.ecologic
 Website: http://ecologic.eu | Video: http://www.youtube.com/v/hZtiK04A9Yo | Newsletter: http://ecologic.eu/newsletter/subscribe | Facebook: http://www.facebook.com/Ecologic.Institute | Linkedin: http://www.linkedin.com/company/ecologic-institute-berlin-germany | Twitter: http://twitter.com/EcologicBerlin | YouTube: http://www.youtube.com/user/EcologicInstitute | Google+: http://plus.google.com/113756356645020994482
 Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
 GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
 Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
 --

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster and kvm livemigration

2014-01-23 Thread Samuli Heinonen

On 23.1.2014 17:27, Bernhard Glomm wrote:



After migration the disks become read-only, because
during migration the disk file's ownership changes from
libvirt-qemu to root.


What am I missing?


I'm not sure about this, but is it possible that this is because of
different ownership and permissions on the bricks?


Can you try to set storage.owner-uid and storage.owner-gid to
libvirt-qemu? To do that you have to stop the volume.


-samuli
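
To check that, something like the following could be run on both brick hosts (a sketch using the brick path that appears elsewhere in this thread; the uid/gid numbers are whatever your distribution assigned):

# numeric owner/group of the image as stored on the brick
ls -ln /ecopool/fs_atom01/
# what libvirt-qemu and kvm map to on this host
id libvirt-qemu
getent group kvm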

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster and kvm livemigration

2014-01-20 Thread Bernhard Glomm
On 17.01.2014 16:45:20, Samuli Heinonen wrote:
 Hello Bernhard,
 Can you test if setting option network.remote-dio to enable allows you to use 
 cache=none?
 

Hi Samuli,
no, network.remote-dio enable didn't change it.
I've got a too-unclean system meanwhile, will do a full reinstall of the hosts
and then come back to this topic.
Thanks, Bernhard
 -samuli
 
 Bernhard Glomm bernhard.gl...@ecologic.eu wrote on 17.1.2014 at 16:41:
 
  Pranith,
I stopped the volume
started it again,
mounted it on both hosts
started the VM
did the livemigration
and collected the logs:
- etc-glusterfs-glusterd.vol.log
- glustershd.log
- srv-vm_infrastructure-vm-atom01.log
- cli.log
from the beginning of the gluster volume start.
  You can find them here (parts 1 to 3):
http://pastebin.com/mnATm2BE
http://pastebin.com/RYZFP3E9
http://pastebin.com/HAXEGd54
  
  further more:
gluster --version: glusterfs 3.4.2 built on Jan 11 2014 03:21:47
ubuntu: raring
filesystem on the gluster bricks: zfs-0.6.2
  
  gluster volume info fs_vm_atom01 
Volume Name: fs_vm_atom01
Type: Replicate
Volume ID: fea9bdcf-783e-442a-831d-f564f8dbe551
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.24.1.11:/zp_ping_1/fs_vm_atom01
Brick2: 172.24.1.13:/zp_pong_1/fs_vm_atom01
Options Reconfigured:
diagnostics.client-log-level: DEBUG
server.allow-insecure: on
  
  
  disk part of VM configuration
  
  <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writethrough'/>
      <source file='/srv/vm_infrastructure/vm/atom01/atom01.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x' bus='0x00' slot='0x04' function='0x0'/>
    </disk>


  I can't use <source protocol='gluster' ...>
  as Josh suggested, because I couldn't get
  my qemu recompiled with gluster enabled yet.

  Are there other special tuning parameters for kvm/qemu to set on gluster?
  As mentioned: everything works except the live migration (the disk image file
  becomes read-only),
  and I have to use something other than cache=none...
  
  TIA
  
  Bernhard
  

  On 17.01.2014 05:04:52, Pranith Kumar Karampuri wrote:
   Bernhard,
   Configuration seems ok. Could you please give the log files of the bricks 
   and mount please. If you think it is not a big procedure to do this live 
   migration, could you set client-log-level to DEBUG and provide the log 
   files of that run.
   
   Pranith
   
   - Original Message -
From: Bernhard Glomm bernhard.gl...@ecologic.eu
To: pkara...@redhat.com
Cc: gluster-users@gluster.org
Sent: Thursday, January 16, 2014 5:58:17 PM
Subject: Re: [Gluster-users] gluster and kvm livemigration


hi Pranith

# gluster volume info fs_vm_atom01
 
Volume Name: fs_vm_atom01
Type: Replicate
Volume ID: fea9bdcf-783e-442a-831d-f564f8dbe551
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.24.1.11:/zp_ping_1/fs_vm_atom01
Brick2: 172.24.1.13:/zp_pong_1/fs_vm_atom01
Options Reconfigured:
diagnostics.client-log-level: ERROR


TIA
Bernhard


On 16.01.2014 13:05:12, Pranith Kumar Karampuri wrote:
 hi Bernhard,
 Could you give gluster volume info output?
 
 Pranith
 
 - Original Message -
  From: Bernhard Glomm   
  bernhard.gl...@ecologic.eu
  To:   gluster-users@gluster.org
  Sent: Thursday, January 16, 2014 4:22:36 PM
  Subject: [Gluster-users] gluster and kvm livemigration
  
  I experienced a strange behavior of glusterfs during livemigration
  of a qemu-kvm guest
  using a 10GB file on a mirrored gluster 3.4.2 volume
  (both on ubuntu 13.04)
  I run
  virsh migrate --verbose --live --unsafe --p2p --domain atom01 
  --desturi
  qemu+ssh://target_ip/system
  and the migration works,
  the running machine is pingable and keeps sending pings.
Nevertheless, when I let the machine touch a file during migration,
it stops, complaining that its filesystem is read-only (from the
moment the migration finished).
A reboot from inside the machine fails:
the machine goes down and comes up with an error
"unable to write to sector xx on hd0"
(then falling into the initrd).
A
virsh destroy VM && virsh start VM
leads to a perfectly running VM again,
no matter on which of the two hosts I start the machine.
Does anybody have better experience with live migration?
Any hint on how to debug this?
  TIA
  Bernhard
  
  --
  
  
  Bernhard Glomm
  IT Administration
  
  Phone: +49 (30) 86880 134
  Fax: +49 (30

Re: [Gluster-users] gluster and kvm livemigration

2014-01-17 Thread Bernhard Glomm

Pranith,
I stopped the volume
started it again,
mounted it on both hosts
started the VM
did the livemigration
and collected the logs:
- etc-glusterfs-glusterd.vol.log
- glustershd.log
- srv-vm_infrastructure-vm-atom01.log
- cli.log
from the beginning of the gluster volume start.
You can find them here (parts 1 to 3): http://pastebin.com/mnATm2BE
http://pastebin.com/RYZFP3E9
http://pastebin.com/HAXEGd54


further more:
gluster --version: glusterfs 3.4.2 built on Jan 11 2014 03:21:47

ubuntu: raring
filesystem on the gluster bricks: zfs-0.6.2

gluster volume info fs_vm_atom01 
Volume Name: fs_vm_atom01
Type: Replicate
Volume ID: fea9bdcf-783e-442a-831d-f564f8dbe551
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.24.1.11:/zp_ping_1/fs_vm_atom01
Brick2: 172.24.1.13:/zp_pong_1/fs_vm_atom01
Options Reconfigured:
diagnostics.client-log-level: DEBUG
server.allow-insecure: on


disk part of VM configuration

<emulator>/usr/bin/kvm-spice</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writethrough'/>
      <source file='/srv/vm_infrastructure/vm/atom01/atom01.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x' bus='0x00' slot='0x04' function='0x0'/>
    </disk>


I can't use <source protocol='gluster' ...>
as Josh suggested, because I couldn't get
my qemu recompiled with gluster enabled yet.

Are there other special tuning parameters for kvm/qemu to set on gluster?
As mentioned: everything works except the live migration (the disk image file
becomes read-only),
and I have to use something other than cache=none...

TIA

Bernhard

On 17.01.2014 05:04:52, Pranith Kumar Karampuri wrote:
 Bernhard,
 Configuration seems ok. Could you please give the log files of the bricks 
 and mount please. If you think it is not a big procedure to do this live 
 migration, could you set client-log-level to DEBUG and provide the log files 
 of that run.
 
 Pranith
 
 - Original Message -
  From: Bernhard Glomm   bernhard.gl...@ecologic.eu  
  To:   pkara...@redhat.com
  Cc:   gluster-users@gluster.org
  Sent: Thursday, January 16, 2014 5:58:17 PM
  Subject: Re: [Gluster-users] gluster and kvm livemigration
  
  
  hi Pranith
  
  # gluster volume info fs_vm_atom01
   
  Volume Name: fs_vm_atom01
  Type: Replicate
  Volume ID: fea9bdcf-783e-442a-831d-f564f8dbe551
  Status: Started
  Number of Bricks: 1 x 2 = 2
  Transport-type: tcp
  Bricks:
  Brick1: 172.24.1.11:/zp_ping_1/fs_vm_atom01
  Brick2: 172.24.1.13:/zp_pong_1/fs_vm_atom01
  Options Reconfigured:
  diagnostics.client-log-level: ERROR
  
  
  TIA
  Bernhard
  
  
  On 16.01.2014 13:05:12, Pranith Kumar Karampuri wrote:
   hi Bernhard,
   Could you give gluster volume info output?
   
   Pranith
   
   - Original Message -
From: Bernhard Glomm   bernhard.gl...@ecologic.eu 
 
To:   gluster-users@gluster.org
Sent: Thursday, January 16, 2014 4:22:36 PM
Subject: [Gluster-users] gluster and kvm livemigration

I experienced a strange behavior of glusterfs during livemigration
of a qemu-kvm guest
using a 10GB file on a mirrored gluster 3.4.2 volume
(both on ubuntu 13.04)
I run
virsh migrate --verbose --live --unsafe --p2p --domain atom01 --desturi
qemu+ssh://target_ip/system
and the migration works,
the running machine is pingable and keeps sending pings.
Nevertheless, when I let the machine touch a file during migration,
it stops, complaining that its filesystem is read-only (from the
moment the migration finished).
A reboot from inside the machine fails:
the machine goes down and comes up with an error
"unable to write to sector xx on hd0"
(then falling into the initrd).
A
virsh destroy VM && virsh start VM
leads to a perfectly running VM again,
no matter on which of the two hosts I start the machine.
Does anybody have better experience with live migration?
Any hint on how to debug this?
TIA
Bernhard

--


Bernhard Glomm
IT Administration

Phone:  +49 (30) 86880 134
Fax:+49 (30) 86880 100
Skype:  bernhard.glomm.ecologic

Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 
10717
Berlin
| Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.:
DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

  
  
  
  --
  Bernhard Glomm
  IT Administration
  Phone: +49 (30) 86880 134
  Fax:   +49 (30) 86880 100
  Skype: bernhard.glomm.ecologic

Re: [Gluster-users] gluster and kvm livemigration

2014-01-17 Thread Samuli Heinonen
Hello Bernhard,

Can you test if setting option network.remote-dio to enable allows you to use 
cache=none?

-samuli
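
That would look roughly like this (a sketch, using the volume name from Bernhard's earlier mails):

gluster volume set fs_vm_atom01 network.remote-dio enable
gluster volume info fs_vm_atom01   # should now show network.remote-dio: enable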

Bernhard Glomm bernhard.gl...@ecologic.eu wrote on 17.1.2014 at 16:41:

 Pranith,
 I stopped the volume
 started it again,
 mounted it on both hosts
 started the VM
 did the livemigration
 and collected the logs:
 - etc-glusterfs-glusterd.vol.log
 - glustershd.log
 - srv-vm_infrastructure-vm-atom01.log
 - cli.log
 from the beginning of the gluster volume start.
 You can find them here (parts 1 to 3):
 http://pastebin.com/mnATm2BE
 http://pastebin.com/RYZFP3E9
 http://pastebin.com/HAXEGd54
 
 further more:
 gluster --version: glusterfs 3.4.2 built on Jan 11 2014 03:21:47
 ubuntu: raring
 filesystem on the gluster bricks: zfs-0.6.2
 
 gluster volume info fs_vm_atom01 
 Volume Name: fs_vm_atom01
 Type: Replicate
 Volume ID: fea9bdcf-783e-442a-831d-f564f8dbe551
 Status: Started
 Number of Bricks: 1 x 2 = 2
 Transport-type: tcp
 Bricks:
 Brick1: 172.24.1.11:/zp_ping_1/fs_vm_atom01
 Brick2: 172.24.1.13:/zp_pong_1/fs_vm_atom01
 Options Reconfigured:
 diagnostics.client-log-level: DEBUG
 server.allow-insecure: on
 
 disk part of VM configuration
 
 <emulator>/usr/bin/kvm-spice</emulator>
 <disk type='file' device='disk'>
   <driver name='qemu' type='raw' cache='writethrough'/>
   <source file='/srv/vm_infrastructure/vm/atom01/atom01.img'/>
   <target dev='vda' bus='virtio'/>
   <address type='pci' domain='0x' bus='0x00' slot='0x04' function='0x0'/>
 </disk>
 
 I can't use <source protocol='gluster' ...>
 as Josh suggested, because I couldn't get
 my qemu recompiled with gluster enabled yet.
 
 Are there other special tuning parameters for kvm/qemu to set on gluster?
 As mentioned: everything works except the live migration (the disk image file
 becomes read-only),
 and I have to use something other than cache=none...
 
 TIA
 
 Bernhard
 
 
 On 17.01.2014 05:04:52, Pranith Kumar Karampuri wrote:
 Bernhard,
 Configuration seems ok. Could you please give the log files of the bricks and 
 mount please. If you think it is not a big procedure to do this live 
 migration, could you set client-log-level to DEBUG and provide the log files 
 of that run.
 
 Pranith
 
 - Original Message -
 From: Bernhard Glomm bernhard.gl...@ecologic.eu
 To: pkara...@redhat.com
 Cc: gluster-users@gluster.org
 Sent: Thursday, January 16, 2014 5:58:17 PM
 Subject: Re: [Gluster-users] gluster and kvm livemigration
 
 
 hi Pranith
 
 # gluster volume info fs_vm_atom01
  
 Volume Name: fs_vm_atom01
 Type: Replicate
 Volume ID: fea9bdcf-783e-442a-831d-f564f8dbe551
 Status: Started
 Number of Bricks: 1 x 2 = 2
 Transport-type: tcp
 Bricks:
 Brick1: 172.24.1.11:/zp_ping_1/fs_vm_atom01
 Brick2: 172.24.1.13:/zp_pong_1/fs_vm_atom01
 Options Reconfigured:
 diagnostics.client-log-level: ERROR
 
 
 TIA
 Bernhard
 
 
 On 16.01.2014 13:05:12, Pranith Kumar Karampuri wrote:
 hi Bernhard,
 Could you give gluster volume info output?
 
 Pranith
 
 - Original Message -
 From: Bernhard Glomm   bernhard.gl...@ecologic.eu  
 To:   gluster-users@gluster.org
 Sent: Thursday, January 16, 2014 4:22:36 PM
 Subject: [Gluster-users] gluster and kvm livemigration
 
 I experienced a strange behavior of glusterfs during livemigration
 of a qemu-kvm guest
 using a 10GB file on a mirrored gluster 3.4.2 volume
 (both on ubuntu 13.04)
 I run
 virsh migrate --verbose --live --unsafe --p2p --domain atom01 --desturi
 qemu+ssh://target_ip/system
 and the migration works,
 the running machine is pingable and keeps sending pings.
 Nevertheless, when I let the machine touch a file during migration,
 it stops, complaining that its filesystem is read-only (from the moment
 the migration finished).
 A reboot from inside the machine fails:
 the machine goes down and comes up with an error
 "unable to write to sector xx on hd0"
 (then falling into the initrd).
 A
 virsh destroy VM && virsh start VM
 leads to a perfectly running VM again,
 no matter on which of the two hosts I start the machine.
 Does anybody have better experience with live migration?
 Any hint on how to debug this?
 TIA
 Bernhard
 
 --
 
 
 Bernhard Glomm
 IT Administration
 
 Phone: +49 (30) 86880 134
 Fax: +49 (30) 86880 100
 Skype: bernhard.glomm.ecologic
 
 Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717
 Berlin
 | Germany
 GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.:
 DE811963464
 Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
 
 
 
 
 --
 Bernhard Glomm
 IT Administration
 Phone: +49 (30) 86880 134
 Fax:   +49 (30) 86880 100
 Skype: bernhard.glomm.ecologic
 Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
 GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464

Re: [Gluster-users] gluster and kvm livemigration

2014-01-16 Thread Pranith Kumar Karampuri
hi Bernhard,
   Could you give gluster volume info output?

Pranith

- Original Message -
 From: Bernhard Glomm bernhard.gl...@ecologic.eu
 To: gluster-users@gluster.org
 Sent: Thursday, January 16, 2014 4:22:36 PM
 Subject: [Gluster-users] gluster and kvm livemigration
 
 I experienced a strange behavior of glusterfs during livemigration
 of a qemu-kvm guest
 using a 10GB file on a mirrored gluster 3.4.2 volume
 (both on ubuntu 13.04)
 I run
 virsh migrate --verbose --live --unsafe --p2p --domain atom01 --desturi
 qemu+ssh://target_ip/system
 and the migration works,
 the running machine is pingable and keeps sending pings.
 Nevertheless, when I let the machine touch a file during migration,
 it stops, complaining that its filesystem is read-only (from the moment
 the migration finished).
 A reboot from inside the machine fails:
 the machine goes down and comes up with an error
 "unable to write to sector xx on hd0"
 (then falling into the initrd).
 A
 virsh destroy VM && virsh start VM
 leads to a perfectly running VM again,
 no matter on which of the two hosts I start the machine.
 Does anybody have better experience with live migration?
 Any hint on how to debug this?
 TIA
 Bernhard
 
 --
   
 
   Bernhard Glomm
 IT Administration
 
   Phone:  +49 (30) 86880 134
   Fax:+49 (30) 86880 100
   Skype:  bernhard.glomm.ecologic
   
   Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 
 Berlin
   | Germany
 GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.:
 DE811963464
 Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
   
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster and kvm livemigration

2014-01-16 Thread Bernhard Glomm

hi Pranith

# gluster volume info fs_vm_atom01 
 
Volume Name: fs_vm_atom01
Type: Replicate
Volume ID: fea9bdcf-783e-442a-831d-f564f8dbe551
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.24.1.11:/zp_ping_1/fs_vm_atom01
Brick2: 172.24.1.13:/zp_pong_1/fs_vm_atom01
Options Reconfigured:
diagnostics.client-log-level: ERROR


TIA
Bernhard


On 16.01.2014 13:05:12, Pranith Kumar Karampuri wrote:
 hi Bernhard,
Could you give gluster volume info output?
 
 Pranith
 
 - Original Message -
  From: Bernhard Glomm   bernhard.gl...@ecologic.eu  
  To:   gluster-users@gluster.org
  Sent: Thursday, January 16, 2014 4:22:36 PM
  Subject: [Gluster-users] gluster and kvm livemigration
  
  I experienced a strange behavior of glusterfs during livemigration
  of a qemu-kvm guest
  using a 10GB file on a mirrored gluster 3.4.2 volume
  (both on ubuntu 13.04)
  I run
  virsh migrate --verbose --live --unsafe --p2p --domain atom01 --desturi
  qemu+ssh://target_ip/system
  and the migration works,
  the running machine is pingable and keeps sending pings.
  Nevertheless, when I let the machine touch a file during migration,
  it stops, complaining that its filesystem is read-only (from the moment
  the migration finished).
  A reboot from inside the machine fails:
  the machine goes down and comes up with an error
  "unable to write to sector xx on hd0"
  (then falling into the initrd).
  A
  virsh destroy VM && virsh start VM
  leads to a perfectly running VM again,
  no matter on which of the two hosts I start the machine.
  Does anybody have better experience with live migration?
  Any hint on how to debug this?
  TIA
  Bernhard
  
  --
  
  
  Bernhard Glomm
  IT Administration
  
  Phone:  +49 (30) 86880 134
  Fax:+49 (30) 86880 100
  Skype:  bernhard.glomm.ecologic
  
  Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 
  Berlin
  | Germany
  GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.:
  DE811963464
  Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
  
  
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-users




-- 
Bernhard Glomm
IT Administration

Phone: +49 (30) 86880 134
Fax:   +49 (30) 86880 100
Skype: bernhard.glomm.ecologic

Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster and kvm livemigration

2014-01-16 Thread Bernhard Glomm

btw:

The VMs are configured to use cache=writethrough,
since cache=none doesn't work (I can't install the machine
because of I/O errors when I try to use the disk,
and I can't start an installed machine with cache=none).
With cache=writethrough and cache=writeback
the machine starts and runs, but live migration
shows the same issue both times:
the disk becomes read-only on successful migration,
remount and reboot fail,
and only a shutdown and a restart works.

TIA

Bernhard 
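
To double-check which cache mode a domain is actually using, something like this works (a sketch; atom01 is the domain name used in this thread):

virsh dumpxml atom01 | grep "driver name="
# e.g.  <driver name='qemu' type='raw' cache='writethrough'/>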


  I experienced a strange behavior of glusterfs during livemigration
  of a qemu-kvm guest
  using a 10GB file on a mirrored gluster 3.4.2 volume
  (both on ubuntu 13.04)
  I run
  virsh migrate --verbose --live --unsafe --p2p --domain atom01 --desturi
  qemu+ssh://target_ip/system
  and the migration works,
  the running machine is pingable and keeps sending pings.
  Nevertheless, when I let the machine touch a file during migration,
  it stops, complaining that its filesystem is read-only (from the moment
  the migration finished).
  A reboot from inside the machine fails:
  the machine goes down and comes up with an error
  "unable to write to sector xx on hd0"
  (then falling into the initrd).
  A
  virsh destroy VM && virsh start VM
  leads to a perfectly running VM again,
  no matter on which of the two hosts I start the machine.
  Does anybody have better experience with live migration?
  Any hint on how to debug this?


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster and kvm livemigration

2014-01-16 Thread Josh Boon
Hey Bernhard, 

I've always needed the

server.allow-insecure: on

for successful migrations.

Also, I've not had any issues with cache=none with my machines. A copy of a 
disk segment from one of my working machines 


<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='gluster' name='VMARRAY/HFMMNG1.img'>
    <host name='10.9.1.2' port=''/>
  </source>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x' bus='0x00' slot='0x04' function='0x0'/>
</disk>




Best, 

Josh 
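
For reference, a sketch of the pieces usually involved here (VMARRAY is the volume name from Josh's example; the glusterd.vol change is an assumption about what a libgfapi setup typically needs, so double-check it for your version):

gluster volume set VMARRAY server.allow-insecure on
# plus, on each server, allow insecure ports for glusterd itself and restart it:
#   /etc/glusterfs/glusterd.vol:  option rpc-auth-allow-insecure on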


- Original Message -

From: Bernhard Glomm bernhard.gl...@ecologic.eu 
To: pkara...@redhat.com 
Cc: gluster-users@gluster.org 
Sent: Thursday, January 16, 2014 9:02:54 AM 
Subject: Re: [Gluster-users] gluster and kvm livemigration 

btw: 

The VMs are configured to use cache=writethrough,
since cache=none doesn't work (I can't install the machine
because of I/O errors when I try to use the disk,
and I can't start an installed machine with cache=none).
With cache=writethrough and cache=writeback
the machine starts and runs, but live migration
shows the same issue both times:
the disk becomes read-only on successful migration,
remount and reboot fail,
and only a shutdown and a restart works.

TIA 

Bernhard 






I experienced a strange behavior of glusterfs during livemigration 
of a qemu-kvm guest 
using a 10GB file on a mirrored gluster 3.4.2 volume 
(both on ubuntu 13.04) 
I run 
virsh migrate --verbose --live --unsafe --p2p --domain atom01 --desturi 
qemu+ssh://target_ip/system 
and the migration works, 
the running machine is pingable and keeps sending pings. 
Nevertheless, when I let the machine touch a file during migration,
it stops, complaining that its filesystem is read-only (from the moment
the migration finished).
A reboot from inside the machine fails:
the machine goes down and comes up with an error
"unable to write to sector xx on hd0"
(then falling into the initrd).
A
virsh destroy VM && virsh start VM
leads to a perfectly running VM again,
no matter on which of the two hosts I start the machine.
Does anybody have better experience with live migration?
Any hint on how to debug this?




___ 
Gluster-users mailing list 
Gluster-users@gluster.org 
http://supercolony.gluster.org/mailman/listinfo/gluster-users 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster and kvm livemigration

2014-01-16 Thread Pranith Kumar Karampuri
Bernhard,
Configuration seems OK. Could you please give the log files of the bricks
and the mount? If it is not a big procedure to do this live migration,
could you set client-log-level to DEBUG and provide the log files of
that run?

Pranith
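
A sketch of that debug run (fs_vm_atom01 is the volume name from elsewhere in the thread; the log locations are the usual glusterfs defaults and may differ on your install):

gluster volume set fs_vm_atom01 diagnostics.client-log-level DEBUG
# reproduce the migration, then collect:
#   client (mount) log: /var/log/glusterfs/<mountpoint>.log (e.g. srv-vm_infrastructure-vm-atom01.log)
#   brick logs:         /var/log/glusterfs/bricks/ on each server
gluster volume set fs_vm_atom01 diagnostics.client-log-level ERROR   # revert afterwards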

- Original Message -
 From: Bernhard Glomm bernhard.gl...@ecologic.eu
 To: pkara...@redhat.com
 Cc: gluster-users@gluster.org
 Sent: Thursday, January 16, 2014 5:58:17 PM
 Subject: Re: [Gluster-users] gluster and kvm livemigration
 
 
 hi Pranith
 
 # gluster volume info fs_vm_atom01
  
 Volume Name: fs_vm_atom01
 Type: Replicate
 Volume ID: fea9bdcf-783e-442a-831d-f564f8dbe551
 Status: Started
 Number of Bricks: 1 x 2 = 2
 Transport-type: tcp
 Bricks:
 Brick1: 172.24.1.11:/zp_ping_1/fs_vm_atom01
 Brick2: 172.24.1.13:/zp_pong_1/fs_vm_atom01
 Options Reconfigured:
 diagnostics.client-log-level: ERROR
 
 
 TIA
 Bernhard
 
 
 On 16.01.2014 13:05:12, Pranith Kumar Karampuri wrote:
  hi Bernhard,
 Could you give gluster volume info output?
  
  Pranith
  
  - Original Message -
   From: Bernhard Glomm   bernhard.gl...@ecologic.eu  
   To:   gluster-users@gluster.org
   Sent: Thursday, January 16, 2014 4:22:36 PM
   Subject: [Gluster-users] gluster and kvm livemigration
   
   I experienced a strange behavior of glusterfs during livemigration
   of a qemu-kvm guest
   using a 10GB file on a mirrored gluster 3.4.2 volume
   (both on ubuntu 13.04)
   I run
   virsh migrate --verbose --live --unsafe --p2p --domain atom01 --desturi
   qemu+ssh://target_ip/system
   and the migration works,
   the running machine is pingable and keeps sending pings.
   Nevertheless, when I let the machine touch a file during migration,
   it stops, complaining that its filesystem is read-only (from the moment
   the migration finished).
   A reboot from inside the machine fails:
   the machine goes down and comes up with an error
   "unable to write to sector xx on hd0"
   (then falling into the initrd).
   A
   virsh destroy VM && virsh start VM
   leads to a perfectly running VM again,
   no matter on which of the two hosts I start the machine.
   Does anybody have better experience with live migration?
   Any hint on how to debug this?
   TIA
   Bernhard
   
   --
   
   
 Bernhard Glomm
   IT Administration
   
 Phone:  +49 (30) 86880 134
 Fax:+49 (30) 86880 100
 Skype:  bernhard.glomm.ecologic
   
 Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717
 Berlin
 | Germany
   GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.:
   DE811963464
   Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
   
   
   ___
   Gluster-users mailing list
   Gluster-users@gluster.org
   http://supercolony.gluster.org/mailman/listinfo/gluster-users
 
 
 
 
 --
 Bernhard Glomm
 IT Administration

 Phone: +49 (30) 86880 134
 Fax:   +49 (30) 86880 100
 Skype: bernhard.glomm.ecologic

 Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
 GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
 Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
 
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users