Hi Josh Boon,

Below are my setup details:


# qemu-system-x86_64 --version
QEMU emulator version 2.3.0 (Debian 1:2.3+dfsg-5ubuntu4), Copyright (c) 2003-2008 Fabrice Bellard

# gluster --version
glusterfs 3.6.3 built on Jul 29 2015 16:01:10
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

# lsb_release -a
Distributor ID: Ubuntu
Description:    Ubuntu 14.04 LTS
Release:        14.04
Codename:       trusty

# gluster vol info
Volume Name: vol1
Type: Replicate
Volume ID: ad78ac6c-c55e-4f4a-8b1b-a11865f1d01e
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.1.156:/brick1
Brick2: 10.70.1.156:/brick2
Options Reconfigured:
server.allow-insecure: on
storage.owner-uid: 116
storage.owner-gid: 125

# gluster vol status
Status of volume: vol1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.1.156:/brick1                               49152   Y       3726
Brick 10.70.1.156:/brick2                               49153   Y       7014
NFS Server on localhost                                 2049    Y       7028
Self-heal Daemon on localhost                           N/A     Y       7035

Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks

# du -sh /brick1/vm1.img
8.6G    /brick1/vm1.img

# du -sh /brick2/vm1.img
8.6G    /brick2/vm1.img


# cat vm1.xml

...

<devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='network' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source protocol='gluster' name='vol1/vm1.img'>
        <host name='10.70.1.156' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/pkalever/work/qemu/ubuntu-14.04-server-amd64.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>

...

</devices>
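For reference, the <disk type='network'> stanza above maps to a QEMU -drive option using the gluster:// URL scheme. The following is a sketch only (host, volume, and image names taken from the XML above; a real libvirt-generated command line carries many more options):

```shell
# Build the -drive argument equivalent to the libvirt disk stanza above.
# Sketch only; the remaining VM options are omitted.
HOST=10.70.1.156
VOL=vol1
IMG=vm1.img
DRIVE="file=gluster://$HOST/$VOL/$IMG,format=qcow2,cache=none,if=virtio"
echo "$DRIVE"
# qemu-system-x86_64 -drive "$DRIVE" ...
```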



With the setup given above, once the VM booted successfully I wrote 5G using the dd command and did not encounter any crashes.
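For reference, the in-guest write test was along these lines (a hypothetical reconstruction; the output path is a placeholder, and the original run wrote ~5G rather than the small count shown here):

```shell
# Sketch of the in-guest dd write test against the gluster-backed disk.
# The original report wrote 5G; count is kept small here for illustration.
OUT=/tmp/ddtest.img
dd if=/dev/zero of="$OUT" bs=1M count=64 conv=fsync 2>/dev/null
stat -c '%s' "$OUT"    # 64 MiB = 67108864 bytes
rm -f "$OUT"
```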

I believe qemu 2.3.0 does not have the segfault issue.

Please let me know if any further information is required.


Thanks & regards,
Prasanna Kumar K.



----- Original Message -----
From: "Prasanna Kalever" <pkale...@redhat.com>
To: "Josh Boon" <glus...@joshboon.com>
Cc: "Pranith Kumar Karampuri" <pkara...@redhat.com>, "SATHEESARAN" 
<sasun...@redhat.com>
Sent: Tuesday, July 28, 2015 5:04:46 PM
Subject: Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

Hi Josh Boon,

I am finally able to boot the VM.

For now, I disabled apparmor with: update-rc.d -f apparmor remove

Thanks for the support. Now I shall try to reproduce the actual problem :)

Best Regards,
Prasanna Kumar K.




----- Original Message -----
From: "Prasanna Kalever" <pkale...@redhat.com>
To: "Josh Boon" <glus...@joshboon.com>
Cc: "Pranith Kumar Karampuri" <pkara...@redhat.com>, "SATHEESARAN" 
<sasun...@redhat.com>
Sent: Tuesday, July 28, 2015 3:45:12 PM
Subject: Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

Hi Josh Boon,

The apparmor trick didn't work.

However, I can still see some apparmor="DENIED" logs (even after adding your lines to /etc/apparmor.d/abstractions/libvirt-qemu):



Jul 27 11:46:16 dhcp-0-156 kernel: [ 1152.096873] type=1400 audit(1437977776.500:55): apparmor="DENIED" operation="open" profile="libvirt-331cf08a-c120-45fb-92fe-c5369c8fdf12" name="/sys/devices/system/node/" pid=3094 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Jul 27 11:46:16 dhcp-0-156 kernel: [ 1152.096902] type=1400 audit(1437977776.500:56): apparmor="DENIED" operation="open" profile="libvirt-331cf08a-c120-45fb-92fe-c5369c8fdf12" name="/sys/devices/system/cpu/" pid=3094 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Jul 27 11:46:16 dhcp-0-156 kernel: [ 1152.098569] type=1400 audit(1437977776.500:57): apparmor="DENIED" operation="file_mmap" profile="libvirt-331cf08a-c120-45fb-92fe-c5369c8fdf12" name="/usr/lib/x86_64-linux-gnu/qemu/block-curl.so" pid=3094 comm="qemu-system-x86" requested_mask="m" denied_mask="m" fsuid=0 ouid=107
Jul 27 11:46:16 dhcp-0-156 kernel: [ 1152.098681] type=1400 audit(1437977776.500:58): apparmor="DENIED" operation="file_mmap" profile="libvirt-331cf08a-c120-45fb-92fe-c5369c8fdf12" name="/usr/lib/x86_64-linux-gnu/qemu/block-rbd.so" pid=3094 comm="qemu-system-x86" requested_mask="m" denied_mask="m" fsuid=0 ouid=107
Jul 27 11:46:16 dhcp-0-156 kernel: [ 1152.098733] type=1400 audit(1437977776.500:59): apparmor="DENIED" operation="file_mmap" profile="libvirt-331cf08a-c120-45fb-92fe-c5369c8fdf12" name="/usr/lib/x86_64-linux-gnu/qemu/block-gluster.so" pid=3094 comm="qemu-system-x86" requested_mask="m" denied_mask="m" fsuid=0 ouid=107


I am still working on it, please find the logs attached.

Sorry for the delay, I was working on something else in parallel.

Thanks & Regards,
Prasanna Kumar K.

----- Original Message -----
From: "Josh Boon" <glus...@joshboon.com>
To: "Prasanna Kalever" <pkale...@redhat.com>
Cc: "Pranith Kumar Karampuri" <pkara...@redhat.com>, "SATHEESARAN" 
<sasun...@redhat.com>
Sent: Thursday, July 23, 2015 9:42:56 PM
Subject: Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

Hi Prasanna,

You've hit apparmor. I should work on providing better instructions for folks 
over at gluster.org :)

To fix up apparmor, edit /etc/apparmor.d/abstractions/libvirt-qemu and include at the top of the file:

 # For gluster use
  /usr/lib/x86_64-linux-gnu/glusterfs/** rmix,
  /proc/sys/net/ipv4/ip_local_reserved_ports r,
  /tmp/** rwcx,

Then restart apparmor:

service apparmor restart

If you still have issues, please send me the output of your syslog and of "qemu-img --help".
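The edit-and-restart steps above could be scripted roughly as follows (a sketch: the profile path and rules are taken from the instructions above; using append rather than inserting at the top is an assumption, since AppArmor abstraction rules are generally order-independent):

```shell
# Sketch: add the gluster rules to the libvirt-qemu abstraction, then
# restart apparmor so running profiles pick up the change.
PROFILE=/etc/apparmor.d/abstractions/libvirt-qemu
cat >> "$PROFILE" <<'EOF'
 # For gluster use
  /usr/lib/x86_64-linux-gnu/glusterfs/** rmix,
  /proc/sys/net/ipv4/ip_local_reserved_ports r,
  /tmp/** rwcx,
EOF
service apparmor restart
```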


----- Original Message -----
From: "Prasanna Kalever" <pkale...@redhat.com>
To: "Josh Boon" <glus...@joshboon.com>
Cc: "Pranith Kumar Karampuri" <pkara...@redhat.com>, "SATHEESARAN" 
<sasun...@redhat.com>
Sent: Thursday, July 23, 2015 12:20:33 PM
Subject: Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

Hi Josh Boon,

Thanks for your reply

To avoid issues caused by nested VMs, I have now moved to this setup:

1. Native Ubuntu 14.04
2. qemu from the ppa provided
3. gluster 3.6.3 compiled from source

To set up the VMs, I followed your procedure (modifying VMARRAY, the volume name, and the IP), but I am still encountering some issues:

# virsh start HFM

error: Failed to start domain HFM
error: internal error: process exited while connecting to monitor: Failed to open module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so: failed to map segment from shared object: Permission denied
Failed to open module: /usr/lib/x86_64-linux-gnu/qemu/block-rbd.so: failed to map segment from shared object: Permission denied
Failed to open module: /usr/lib/x86_64-linux-gnu/qemu/block-gluster.so: failed to map segment from shared object: Permission denied
qemu-system-x86_64: -drive file=gluster://10.70.1.156/mnt/vm1_mount//HFM.img,if=none,id=drive-virtio-disk0,format=raw,cache=none: Unknown protocol 'gluster'


Best Regards,
Prasanna Kumar K.

----- Original Message -----
From: Josh Boon <glus...@joshboon.com>
To: Prasanna Kalever <pkale...@redhat.com>
Cc: Pranith Kumar Karampuri <pkara...@redhat.com>
Sent: Mon, 20 Jul 2015 09:02:32 -0400 (EDT)
Subject: Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

I'm not familiar with the virt-install method. Can you provide me with the output of virsh dumpxml VMubuntu_64?

I've attached the template XML I use and the seed files necessary to generate an ISO for Ubuntu's cloud images, if you'd like to go that route. Steps for use follow:
1. Modify template.xml for your gluster array. I'm using 10.9.1.1 and VMARRAY as the name of my gluster volume. Also don't forget to modify the processor (if unsupported) and the hostname. Define the machine.
2. Build out a preseed ISO (or not). The two *data files attached recreate the cloud-init template necessary for a self-contained install. None of it is absolutely necessary, but it can help make your stay on the server more pleasant. Change values as necessary, then generate the instance id and the ISO image:
2a. apt-get install genisoimage; echo "instance-id: $(uuidgen || echo i-abcdefg)" >> meta-data ; genisoimage -output /mnt/VMARRAY/$IMAGE-seed.iso -volid cidata -joliet -rock meta-data user-data
3. virsh start --console $MACHINE
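Step 2a, expanded into separate commands for readability (a sketch; $IMAGE and the /mnt/VMARRAY path are placeholders from the steps above):

```shell
# Sketch of step 2a: build a cloud-init "NoCloud" seed ISO from the
# attached meta-data/user-data files. IMAGE and /mnt/VMARRAY are placeholders.
apt-get install -y genisoimage
echo "instance-id: $(uuidgen || echo i-abcdefg)" >> meta-data
genisoimage -output /mnt/VMARRAY/"$IMAGE"-seed.iso \
    -volid cidata -joliet -rock meta-data user-data
```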


----- Original Message -----
From: "Prasanna Kalever" <pkale...@redhat.com>
To: "Josh Boon" <glus...@joshboon.com>
Cc: "Pranith Kumar Karampuri" <pkara...@redhat.com>
Sent: Monday, July 13, 2015 9:48:08 AM
Subject: RE: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

Hi Josh Boon,


I am facing an issue ("ERROR    XML error: No PCI buses available") when running # virsh start <VM>

Setup details:
---------------

Native install: Fedora 21
Running "Ubuntu amd64 trusty 14.04.02 LTS" as a VM using virt-manager in Fedora 21.
Installed qemu and libseccomp from the ppa http://ppa.launchpad.net/josh-boon/qemu-edge-glusterfs/ubuntu and the others from the trusty repo.

The issue occurs while starting a level-2 VM in Ubuntu 14.04 (which is itself a VM in Fedora).


Steps used:
----------

# qemu-img create -f qcow2 gluster://IP/VOLNAME/vm1_file.img 15G
Successful

# mount gluster volume
Successful

# virt-install -n VMubuntu_64 -r 256 --disk path=/mnt/vm1_mount/vm1_file.img,bus=virtio,size=4 -c /home/pkalever/ubuntu-14.04-server-amd64.iso --network network=default,model=virtio --nographics --accelerate

WARNING  KVM acceleration not available, using 'qemu'
Starting install...
ERROR    XML error: No PCI buses available
Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
  virsh --connect qemu:///system start VMubuntu_64
otherwise, please restart your installation.


Other details:
--------------
The following commands were executed in the level-1 VM (Ubuntu 14.04 amd64):

# qemu-x86_64 -version
  qemu-x86_64 version 2.3.0 (Debian 1:2.3+dfsg-5ubuntu4), Copyright (c) 2003-2008 Fabrice Bellard

# glusterfs --version
  glusterfs 3.6.3 built on Jul  9 2015 15:51:55
  Repository revision: git://git.gluster.com/glusterfs.git
  ...

# libvirtd --version
  libvirtd (libvirt) 1.2.2

# uname -a
  Linux root 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

# lsb_release -a
  Distributor ID: Ubuntu
  Description:    Ubuntu 14.04.2 LTS
  Release:    14.04
  Codename:   trusty




I suspect this issue might be caused by the two-level nested VM setup.

Please let me know in case you need any other information regarding the setup.

Best Regards, 
Prasanna Kumar K. 


----- Forwarded Message -----
From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
To: "Josh Boon" <glus...@joshboon.com>, gluster-in...@gluster.org, "Gluster 
Devel" <gluster-devel@gluster.org>, "Prasanna Kalever" <pkale...@redhat.com>
Sent: Thursday, July 9, 2015 11:02:06 AM
Subject: Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

CC Prasanna who will be looking into it. 

On 07/06/2015 07:30 PM, Josh Boon wrote: 



Hey folks, 

Does anyone have a test environment running Ubuntu 14.04, QEMU 2.0, and Gluster 3.6.3? I'm looking to have some folks test out QEMU 2.3 for stability and performance and see if it removes the segfault errors. Another group of folks are experiencing the same segfaults I still experience, but looking over their logs, my theory of it being related to a self-heal didn't work out. I've included the stack trace below from their environment, which matches mine. I've already put together a PPA over at https://launchpad.net/~josh-boon/+archive/ubuntu/qemu-edge-glusterfs with QEMU 2.3 and deps built for trusty. If anyone has the time or resources I could use, I'd appreciate the support. I'd like to get this ironed out so I can give my full vote of confidence to Gluster again.


Thanks, 
Josh 

Stack 
#0 0x00007f369c95248c in ?? () 
No symbol table info available. 
#1 0x00007f369bd2b3b1 in glfs_io_async_cbk (ret=<optimized out>, 
frame=<optimized out>, data=0x7f369ee536c0) at glfs-fops.c:598 
gio = 0x7f369ee536c0 
#2 0x00007f369badb66a in syncopctx_setfspid (pid=0x7f369ee536c0) at 
syncop.c:191 
opctx = 0x0 
ret = -1 
#3 0x0000000000100011 in ?? () 
No symbol table info available. 
#4 0x00007f36a5ae26b0 in ?? () 
No symbol table info available. 
#5 0x00007f36a81e2800 in ?? () 
No symbol table info available. 
#6 0x00007f36a5ae26b0 in ?? () 
No symbol table info available. 
#7 0x00007f36a81e2800 in ?? () 
No symbol table info available. 
#8 0x0000000000000000 in ?? () 
No symbol table info available. 

Full log attached. 


_______________________________________________
Gluster-devel mailing list Gluster-devel@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-devel 


