Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-24 Thread Gianluca Cecchi
On Wed, Sep 25, 2013 at 8:02 AM, Itamar Heim  wrote:

>> Suggestion:
>> If page
>> http://www.ovirt.org/Features/GlusterFS_Storage_Domain
>> is the reference, perhaps it would be better to explicitly specify
>> that one has to start the created volume before adding a storage
>> domain based on it.
>> Not knowing Gluster could lead one to think that the start phase is the
>> responsibility of storage domain creation itself ...
>
>
> its a wiki - please edit/fix it ;)

I was in doubt because it is not explicitly presented as a wiki but more as infra ...
I'll do it.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-24 Thread Gianluca Cecchi
On Wed, Sep 25, 2013 at 8:11 AM, Vijay Bellur  wrote:

>
>
> Have the following configuration changes been done?
>
> 1) gluster volume set  server.allow-insecure on
>
> 2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this
> line:
> option rpc-auth-allow-insecure on
>
> Post 2), restarting glusterd would be necessary.
>
> Regards,
> Vijay


No, because I didn't find this kind of info anywhere... ;-)

Done on both hosts (step 1 only once), and I see that the GUI
detects the change in the volume settings.
Now the VM can start (I see the qemu process on ovnode02), but it seems
to remain stuck on the hourglass state icon.
After 5 minutes it still remains in the "executing" phase in the tasks list.

Eventually I'm going to restart the nodes completely
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-24 Thread Vijay Bellur

On 09/25/2013 11:36 AM, Gianluca Cecchi wrote:

qemu-system-x86_64: -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads:
Gluster connection failed for server=ovnode01 port=0 volume=gv01
image=20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
transport=tcp
qemu-system-x86_64: -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads:
could not open disk image
gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161:
No data available
2013-09-25 05:42:32.291+: shutting down



Have the following configuration changes been done?

1) gluster volume set  server.allow-insecure on

2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this 
line:

option rpc-auth-allow-insecure on

Post 2), restarting glusterd would be necessary.
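For reference, a concrete sketch of the whole sequence (volume name gv01
taken from this thread; adjust to your setup):

# 1) once, from any gluster node:
gluster volume set gv01 server.allow-insecure on

# 2) on every gluster node, add this line to the volume management block
#    of /etc/glusterfs/glusterd.vol:
#      option rpc-auth-allow-insecure on
#    and then restart glusterd:
service glusterd restart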

Regards,
Vijay
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-24 Thread Gianluca Cecchi
So it seems the problem is:

file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads:
Gluster connection failed for server=ovnode01 port=0 volume=gv01
image=20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
transport=tcp
qemu-system-x86_64: -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads:
could not open disk image

It is the same in the qemu.log of both hosts.
On the other one I have:
2013-09-25 05:42:35.454+: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name C6 -S -machine
pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 2048 -smp
1,sockets=1,cores=1,threads=1 -uuid
409c5dbe-5e70-40de-bf73-46ef484ea2d7 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=19-3,serial=421FAF48-83D1-08DC-F2ED-F2894F8BC56D,uuid=409c5dbe-5e70-40de-bf73-46ef484ea2d7
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/C6.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2013-09-25T05:42:35,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:10,bus=pci.0,addr=0x3
-chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.com.redhat.rhevm.vdsm,server,nowait
-device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.org.qemu.guest_agent.0,server,nowait
-device 
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice 
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=67108864 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
qemu-system-x86_64: -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads:
Gluster connection failed for server=ovnode01 port=0 volume=gv01
image=20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
transport=tcp
qemu-system-x86_64: -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads:
could not open disk image
gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161:
No data available
2013-09-25 05:42:38.620+: shutting down

Currently iptables, as set up by the install, is as follows (I checked the
option to set up iptables from the GUI when I added the host).
Do I have to add anything for gluster?

[root@ovnode01 qemu]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:54321
ACCEPT     tcp  --
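For what it's worth, a hedged sketch of the extra rules gluster typically
needs (glusterd listens on TCP 24007; brick ports depend on the gluster
version: 49152 and up on 3.4+, 24009 and up on older releases):

iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd/management
iptables -I INPUT -p tcp --dport 49152:49216 -j ACCEPT   # brick ports (3.4+)
service iptables save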

Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-24 Thread Gianluca Cecchi
The oVirt hosts are VMs inside an ESX 5.1 infrastructure.
I think all is OK in terms of nested virtualization, though.
The CPU of the ESX host is an E7-4870 and the cluster is defined as "Intel Nehalem Family".
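A quick hedged check that VT-x is really being passed through to the oVirt
host VMs (the vmx flag has to show up inside the guest):

grep -c vmx /proc/cpuinfo   # non-zero means the flag is exposed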

selinux is in permissive mode

[root@ovnode01 libvirt]# vdsClient -s localhost getVdsCapabilities
HBAInventory = {'FC': [], 'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:6344c23973df'}]}
ISCSIInitiatorName = 'iqn.1994-05.com.redhat:6344c23973df'
bondings = {'bond0': {'addr': '',
  'cfg': {},
  'hwaddr': '8e:a1:3b:0c:83:47',
  'ipv6addrs': [],
  'mtu': '1500',
  'netmask': '',
  'slaves': []}}
bridges = {'ovirtmgmt': {'addr': '192.168.33.41',
 'cfg': {'BOOTPROTO': 'none',
 'DEFROUTE': 'yes',
 'DELAY': '0',
 'DEVICE': 'ovirtmgmt',
 'GATEWAY': '192.168.33.15',
 'IPADDR': '192.168.33.41',
 'NETMASK': '255.255.255.0',
 'NM_CONTROLLED': 'no',
 'ONBOOT': 'yes',
 'STP': 'no',
 'TYPE': 'Bridge'},
 'gateway': '192.168.33.15',
 'ipv6addrs': ['fe80::250:56ff:fe9f:686b/64'],
 'ipv6gateway': '::',
 'mtu': '1500',
 'netmask': '255.255.255.0',
 'ports': ['eth0'],
 'stp': 'off'}}
clusterLevels = ['3.0', '3.1', '3.2', '3.3']
cpuCores = '4'
cpuFlags =
'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,mmx,fxsr,sse,sse2,ss,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,nopl,xtopology,tsc_reliable,nonstop_tsc,aperfmperf,pni,monitor,vmx,ssse3,cx16,sse4_1,sse4_2,x2apic,popcnt,lahf_lm,ida,arat,epb,dtherm,tpr_shadow,vnmi,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270'
cpuModel = 'Intel(R) Xeon(R) CPU E7- 4870  @ 2.40GHz'
cpuSockets = '4'
cpuSpeed = '2394.000'
cpuThreads = '4'
emulatedMachines = ['pc',
'q35',
'isapc',
'pc-0.10',
'pc-0.11',
'pc-0.12',
'pc-0.13',
'pc-0.14',
'pc-0.15',
'pc-1.0',
'pc-1.1',
'pc-1.2',
'pc-1.3',
'none']
guestOverhead = '65'
hooks = {}
kvmEnabled = 'true'
lastClient = '192.168.33.40'
lastClientIface = 'ovirtmgmt'
management_ip = '0.0.0.0'
memSize = '16050'
netConfigDirty = 'False'
networks = {'ovirtmgmt': {'addr': '192.168.33.41',
  'bridged': True,
  'cfg': {'BOOTPROTO': 'none',
  'DEFROUTE': 'yes',
  'DELAY': '0',
  'DEVICE': 'ovirtmgmt',
  'GATEWAY': '192.168.33.15',
  'IPADDR': '192.168.33.41',
  'NETMASK': '255.255.255.0',
  'NM_CONTROLLED': 'no',
  'ONBOOT': 'yes',
  'STP': 'no',
  'TYPE': 'Bridge'},
  'gateway': '192.168.33.15',
  'iface': 'ovirtmgmt',
  'ipv6addrs': ['fe80::250:56ff:fe9f:686b/64'],
  'ipv6gateway': '::',
  'mtu': '1500',
  'netmask': '255.255.255.0',
  'ports': ['eth0'],
  'stp': 'off'}}
nics = {'ens224': {'addr': '192.168.230.31',
   'cfg': {'BOOTPROTO': 'static',
   'DEVICE': 'ens224',
   'HWADDR': '00:50:56:9F:3C:B0',
   'IPADDR': '192.168.230.31',
   'NETMASK': '255.255.255.0',
   'NM_CONTR

Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-24 Thread Itamar Heim

On 09/25/2013 02:10 AM, Gianluca Cecchi wrote:

Hello,
I'm testing GlusterFS on 3.3 with fedora 19 systems.
One engine (ovirt) + 2 nodes (ovnode01 and ovnode02)

Successfully created a gluster volume composed of two bricks (one for
each vdsm node), distributed replicated.

Suggestion:
If page
http://www.ovirt.org/Features/GlusterFS_Storage_Domain
is the reference, perhaps it would be better to explicitly specify
that one has to start the created volume before adding a storage
domain based on it.
Not knowing Gluster could lead one to think that the start phase is the
responsibility of storage domain creation itself ...


its a wiki - please edit/fix it ;)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-24 Thread Kanagaraj

On 09/25/2013 04:40 AM, Gianluca Cecchi wrote:

Hello,
I'm testing GlusterFS on 3.3 with fedora 19 systems.
One engine (ovirt) + 2 nodes (ovnode01 and ovnode02)

Successfully created a gluster volume composed of two bricks (one for
each vdsm node), distributed replicated.

Suggestion:
If page
http://www.ovirt.org/Features/GlusterFS_Storage_Domain
is the reference, perhaps it would be better to explicitly specify
that one has to start the created volume before adding a storage
domain based on it.
Not knowing Gluster could lead one to think that the start phase is the
responsibility of storage domain creation itself ...

All seems ok from a configuration point of view.
Uploaded a CentOS 6.4 iso image into my ISO_DOMAIN (nfs exported from
the engine.. this will be another thread...)
Created a server VM with a 10 GB thin-allocated disk.

I get an error when starting the VM

on engine.log
2013-09-25 00:43:16,027 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-44) Rerun vm
409c5dbe-5e70-40de-bf73-46ef484ea2d7. Called from vds ovnode02
2013-09-25 00:43:16,031 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(pool-6-thread-48) Correlation ID: 5ea15175, Job ID:
48128550-3633-4da4-8d9c-ab704be02f02, Call Stack: null, Custom Event
ID: -1, Message: Failed to run VM C6 on Host ovnode02.
2013-09-25 00:43:16,057 INFO  [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-48) Lock Acquired to object EngineLock [exclusiveLocks=
key: 409c5dbe-5e70-40de-bf73-46ef484ea2d7 value: VM
, sharedLocks= ]
2013-09-25 00:43:16,070 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-6-thread-48) START, IsVmDuringInitiatingVDSCommand( vmId =
409c5dbe-5e70-40de-bf73-46ef484ea2d7), log id: 7979c53b
2013-09-25 00:43:16,071 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-6-thread-48) FINISH, IsVmDuringInitiatingVDSCommand, return:
false, log id: 7979c53b
2013-09-25 00:43:16,086 INFO  [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-48) Running command: RunVmCommand internal: false.
Entities affected :  ID: 409c5dbe-5e70-40de-bf73-46ef484ea2d7 Type: VM
2013-09-25 00:43:16,110 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand]
(pool-6-thread-48) START, IsoPrefixVDSCommand( storagePoolId =
6b3175e6-6fa2-473f-ba21-38917c413ba9, ignoreFailoverLimit = false),
log id: 7fd62f0f
2013-09-25 00:43:16,111 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand]
(pool-6-thread
...


On node vdsm.log
Thread-2915::ERROR::2013-09-25
00:43:20,108::vm::2062::vm.Vm::(_startUnderlyingVm)
vmId=`409c5dbe-5e70-40de-bf73-46ef484ea2d7`::The vm start process
failed
Traceback (most recent call last):
   File "/usr/share/vdsm/vm.py", line 2022, in _startUnderlyingVm
 self._run()
   File "/usr/share/vdsm/vm.py", line 2906, in _run
 self._connection.createXML(domxml, flags),
   File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py",
line 76, in wrapper
 ret = f(*args, **kwargs)
   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2805, in createXML
 if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: Unable to read from monitor: Connection reset by peer
Thread-2915::DEBUG::2013-09-25
00:43:20,176::vm::2448::vm.Vm::(setDownStatus)
vmId=`409c5dbe-5e70-40de-bf73-46ef484ea2d7`::Changed state to Down:
Unable to read from monitor: Connection reset by peer
libvirtEventLoop::WARNING::2013-09-25
00:43:20,114::clientIF::337::vds::(teardownVolumePath) Drive is not a
vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2
VOLWM_FREE_PCT:50 _blockDev:False _checkIoTuneCategories:>
_customize:> _deviceXML: _makeName:>
_validateIoTuneParams:> apparentsize:0 blockDev:False
cache:none conf:{'status': 'Down', 'acpiEnable': 'true',
'emulatedMachine': 'pc-1.0', 'vmId':
'409c5dbe-5e70-40de-bf73-46ef484ea2d7', 'pid': '0',
'memGuaranteedSize': 1365, 'timeOffset': '0', 'keyboardLayout':
'en-us', 'displayPort': '-1', 'displaySecurePort': '-1',
'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'Nehalem', 'custom': {},
'clientIp': '', 'exitCode': 1, 'nicModel': 'rtl8139,pv',
'smartcardEnable': 'false', 'kvmEnable': 'true', 'pitReinjection':
'false', 'transparentHugePages': 'true', 'devices': [{'device':
'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'device':
'qxl', 'specParams': {'vram': '65536'}, 'type': 'video', 'deviceId':
'70eadea2-6b53-

Let me know if you need full logs

The disk image itself seems ok:

[root@ovnode02 ~]# ll
/rhev/data-center/mnt/glusterSD/ovnode01\:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/
total 1025
-rw-rw----. 1 vdsm kvm 10737418240 Sep 25 00:42
dff09892-bc60-4de5-85c0-2a1fa215a161
-rw-rw----. 1 vdsm kvm 1048576 Sep 25 00:42
dff09892-bc60-4de5-85c0-2a1fa215a161.lease
-rw-r--r--. 1 vdsm kvm 268 Sep 25 00:42
dff09892-bc60-4de5-85c0-2a1fa215a161.meta

[root@ovnod

[Users] unable to start vm in 3.3 and f19 with gluster

2013-09-24 Thread Gianluca Cecchi
Hello,
I'm testing GlusterFS on 3.3 with fedora 19 systems.
One engine (ovirt) + 2 nodes (ovnode01 and ovnode02)

Successfully created a gluster volume composed of two bricks (one for
each vdsm node), distributed replicated.

Suggestion:
If page
http://www.ovirt.org/Features/GlusterFS_Storage_Domain
is the reference, perhaps it would be better to explicitly specify
that one has to start the created volume before adding a storage
domain based on it.
Not knowing Gluster could lead one to think that the start phase is the
responsibility of storage domain creation itself ...
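A sketch of the missing step, assuming the volume from this thread is
named gv01:

gluster volume start gv01
gluster volume info gv01    # Status should then report: Started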

All seems ok from a configuration point of view.
Uploaded a CentOS 6.4 iso image into my ISO_DOMAIN (nfs exported from
the engine.. this will be another thread...)
Created a server VM with a 10 GB thin-allocated disk.

I get an error when starting the VM

on engine.log
2013-09-25 00:43:16,027 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-44) Rerun vm
409c5dbe-5e70-40de-bf73-46ef484ea2d7. Called from vds ovnode02
2013-09-25 00:43:16,031 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(pool-6-thread-48) Correlation ID: 5ea15175, Job ID:
48128550-3633-4da4-8d9c-ab704be02f02, Call Stack: null, Custom Event
ID: -1, Message: Failed to run VM C6 on Host ovnode02.
2013-09-25 00:43:16,057 INFO  [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-48) Lock Acquired to object EngineLock [exclusiveLocks=
key: 409c5dbe-5e70-40de-bf73-46ef484ea2d7 value: VM
, sharedLocks= ]
2013-09-25 00:43:16,070 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-6-thread-48) START, IsVmDuringInitiatingVDSCommand( vmId =
409c5dbe-5e70-40de-bf73-46ef484ea2d7), log id: 7979c53b
2013-09-25 00:43:16,071 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-6-thread-48) FINISH, IsVmDuringInitiatingVDSCommand, return:
false, log id: 7979c53b
2013-09-25 00:43:16,086 INFO  [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-48) Running command: RunVmCommand internal: false.
Entities affected :  ID: 409c5dbe-5e70-40de-bf73-46ef484ea2d7 Type: VM
2013-09-25 00:43:16,110 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand]
(pool-6-thread-48) START, IsoPrefixVDSCommand( storagePoolId =
6b3175e6-6fa2-473f-ba21-38917c413ba9, ignoreFailoverLimit = false),
log id: 7fd62f0f
2013-09-25 00:43:16,111 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand]
(pool-6-thread
...


On node vdsm.log
Thread-2915::ERROR::2013-09-25
00:43:20,108::vm::2062::vm.Vm::(_startUnderlyingVm)
vmId=`409c5dbe-5e70-40de-bf73-46ef484ea2d7`::The vm start process
failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 2022, in _startUnderlyingVm
self._run()
  File "/usr/share/vdsm/vm.py", line 2906, in _run
self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py",
line 76, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2805, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: Unable to read from monitor: Connection reset by peer
Thread-2915::DEBUG::2013-09-25
00:43:20,176::vm::2448::vm.Vm::(setDownStatus)
vmId=`409c5dbe-5e70-40de-bf73-46ef484ea2d7`::Changed state to Down:
Unable to read from monitor: Connection reset by peer
libvirtEventLoop::WARNING::2013-09-25
00:43:20,114::clientIF::337::vds::(teardownVolumePath) Drive is not a
vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2
VOLWM_FREE_PCT:50 _blockDev:False _checkIoTuneCategories:>
_customize:> _deviceXML: _makeName:>
_validateIoTuneParams:> apparentsize:0 blockDev:False
cache:none conf:{'status': 'Down', 'acpiEnable': 'true',
'emulatedMachine': 'pc-1.0', 'vmId':
'409c5dbe-5e70-40de-bf73-46ef484ea2d7', 'pid': '0',
'memGuaranteedSize': 1365, 'timeOffset': '0', 'keyboardLayout':
'en-us', 'displayPort': '-1', 'displaySecurePort': '-1',
'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'Nehalem', 'custom': {},
'clientIp': '', 'exitCode': 1, 'nicModel': 'rtl8139,pv',
'smartcardEnable': 'false', 'kvmEnable': 'true', 'pitReinjection':
'false', 'transparentHugePages': 'true', 'devices': [{'device':
'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'device':
'qxl', 'specParams': {'vram': '65536'}, 'type': 'video', 'deviceId':
'70eadea2-6b53-

Let me know if you need full logs

The disk image itself seems ok:

[root@ovnode02 ~]# ll
/rhev/data-center/mnt/glusterSD/ovnode01\:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/
total 1025
-rw-rw----. 1 vdsm kvm 10737418240 Sep 25 00:42
dff09892-bc60-4de5-85c0-2a1fa215a161
-rw-rw----. 1 vdsm kvm 1048576 Sep 25 00:42
dff09892-bc60-4de5-85c0-2a1fa215a161.lease
-rw-r--r--. 1 vdsm kvm 268 Sep 25 00:42
dff09892-bc60-4de5-85c0-2a1fa215a161.meta

[root@ovnode02 ~]# qemu-img info
/rhev/data-center/mnt/glusterSD/ovn

Re: [Users] Unable to finish AIO 3.3.0 - VDSM

2013-09-24 Thread Nicholas Kesick
> > Date: Tue, 24 Sep 2013 13:28:26 +0100
> > From: dan...@redhat.com
> > To: cybertimber2...@hotmail.com
> > CC: jbro...@redhat.com; masa...@redhat.com; alo...@redhat.com; 
> > users@ovirt.org
> > Subject: Re: [Users] Unable to finish AIO 3.3.0 - VDSM
> > 
> 
> 
> > 
> > Here, Vdsm is trying to configure em1 with no ip address (because it
> > found no ifcfg-em1 to begin with). But then, it fails to do so since
> > NetworkManager is still running.
> > 
> > So if possible, make sure ifcfg-em1 exists (and has the correct
> > BOOTPROTO=dhcp in it) and that NetworkManager is off before initiating
> > installation. That's annoying, I know. It should be fixed, for sure. But
> > currently it is a must.
> > 
> > Regards,
> > Dan.
> Hopefully I didn't miss any other comments in that snippet of the log file
> ^^;; It's good to know why it keeps failing. I'm just trying to figure out
> how to move forward from here, and I'll take a crack at it this evening.
> I thought that NetworkManager only needs to be disabled if you are using a
> static IP? I did try disabling NM before I realized it said only for
> static, and had a failure, but probably because of the interface/ifcfg issue.
> I will try again this evening.
> 
> I'll try to jump into IRC by 5pm EDT if you happen to be around.
> 
I did a mv /etc/sysconfig/network-scripts/ifcfg-enp4s0
/etc/sysconfig/network-scripts/ifcfg-em1, and then edited the file to say
NAME="em1" instead of NAME="enp4s0", even though ifconfig showed the "em1"
interface already*. Rebooted, logged into the webadmin, reinstalled the VDSM
host with "configure firewall" unchecked, and VDSM came up.
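For anyone hitting the same thing, a minimal sketch of what I believe the
resulting file should contain (device name em1 and DHCP assumed from this
thread), plus turning NetworkManager off as Dan suggested:

# /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE=em1
NAME=em1
BOOTPROTO=dhcp
ONBOOT=yes
NM_CONTROLLED=no

systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network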
*It turns out that these interfaces (e.g. enp4s0) are called "aliases", so
enp4s0 is em1, and apparently Fedora 19 is using them in some (but not all)
instances. Not sure what triggers it, but either creating the proper ifcfg
file, or moving/editing it to the correct interface name, will help get things
running. Thanks everyone! - Nick
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration issues with ovirt 3.3

2013-09-24 Thread Dan Kenigsberg
On Tue, Sep 24, 2013 at 02:41:58PM -0300, emi...@gmail.com wrote:
> Thanks for your answer Dan!
> 
> Yesterday I was talking with a user on IRC who gave me the hint to
> upgrade libvirt to 1.1.2, after he had successfully tried live
> migration on his installation.
> 
> I've upgraded libvirt but I'm still having the issue. I'm sending you the
> logs that you asked for and the information below:
> OS Version:
> Fedora - 19 - 3
> Kernel Version:
> 3.11.1 - 200.fc19.x86_64
> KVM Version:
> 1.4.2 - 9.fc19
> LIBVIRT Version:
> libvirt-1.1.2-1.fc19
> VDSM Version:
> vdsm-4.12.1-2.fc19
> SPICE Version:
> 0.12.4 - 1.fc19
> iSCSI Initiator Name:
> iqn.1994-05.com.redhat:d990cf85cdeb
> SPM Priority:
> Medium
> Active VMs:
> 1
> CPU Name:
> Intel Westmere Family
> CPU Type:
> Intel(R) Xeon(R) CPU   E5620  @ 2.40GHz
> CPU Sockets:
> 1
> CPU Cores per Socket:
> 4
> CPU Threads per Core:
> 2 (SMT Enabled)
> Physical Memory:
> 12007 MB total, 2762 MB used, 9245 MB free
> Swap Size:
> 15999 MB total, 0 MB used, 15999 MB free
> Shared Memory:
> 0%
> Max free Memory for scheduling new VMs:
> 15511.5 MB
> Memory Page Sharing:
> Inactive
> Automatic Large Pages:
> Always
> 
> (Both hypervisors have the same hardware and software version)
> 
> I'm going to keep trying some things because something must have gotten
> messed up: now I have a VM with Debian that doesn't start, giving me the
> errors "Failed to run VM debian on Host ovirt1." and "Failed to run VM
> debian on Host ovirt2."
> 
> Anyway, I'll wait for your answer.
> Best regards!
> Emiliano

Your destination Vdsm has

vmId=`1f7e60c7-51cb-469a-8016-58a5837f3316`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 2022, in _startUnderlyingVm
self._run()
  File "/usr/share/vdsm/vm.py", line 2819, in _run
devices = self.buildConfDevices()
  File "/usr/share/vdsm/vm.py", line 1839, in buildConfDevices
devices = self.getConfDevices()
  File "/usr/share/vdsm/vm.py", line 1806, in getConfDevices
self.normalizeDrivesIndices(devices[DISK_DEVICES])
  File "/usr/share/vdsm/vm.py", line 1990, in normalizeDrivesIndices
if drv['iface'] not in self._usedIndices:
KeyError: 'iface'

Which looks just like
Bug 1011472 - [vdsm] cannot recover VM upon vdsm restart after a disk has
been hot plugged to it.

Could it be that you have hot-plugged a disk to your VM at the source host?
Somehow, Vdsm forgets to keep the 'iface' element passed from Engine for the
hot-plugged disk.

Dan.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] vdsm Domain monitor error

2013-09-24 Thread Eduardo Ramos

I think I found it, but I don't know how to remove it:

/sbin/lvm vgs --config " devices { preferred_names = [\"^/dev/mapper/\"] 
ignore_suspended_devices=1 write_cache_state=0 
disable_after_error_count=3 filter = [ 
\"a%36000eb396eb9c0540033|3600508b1001c80dabd7195030a341559%\", 
\"r%.*%\" ] }  global {  locking_type=1  prioritise_write_locks=1 
wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " 
--noheadings --units b --nosuffix --separator '|' -o tags


In the output, there it is:

MDT_POOL_DOMAINS=*0226b818-59a6-41bc-8590-91f520aa7859:Active*&44&c332da29-ba9f-4c94-8fa9-346bb8e04e2a:Active&44&51eb6183-157d-4015-ae0f-1c7ffb1731c0:Active&44&0e0be898-6e04-4469-bb32-91f3cf8146d1:Active,MDT__SHA_CKSUM=0ccf56122a8384461c8da7b0eda19e9bdcbd23bf

Any idea to remove it?
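Heavily hedged, since I'm not sure it is safe: LVM itself would let you swap
the tag on the master domain's VG, e.g.

vgchange --deltag 'MDT_POOL_DOMAINS=<current value>' <master-domain-vg>
vgchange --addtag 'MDT_POOL_DOMAINS=<value without the stale domain>' <master-domain-vg>

but vdsm also keeps a checksum over these tags (the MDT__SHA_CKSUM above), so
hand-editing them could corrupt the pool metadata; I would only try it with
vdsm stopped and a metadata backup at hand.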

On 09/24/2013 02:14 PM, Eduardo Ramos wrote:
This storage domain doesn't exist anymore. There is an entry in
postgres with:


"Domain VMExport was forcibly removed by admin@internal"

It was a NFS Export domain.

Is there any chance it is causing problems with iscsi data domain
operations? Now I can create disks and VMs, but I can't remove them. I
tried to export a VM, and engine.log returned this:


2013-09-24 14:03:05,180 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-3-thread-33) [547e3abf] Failed in MoveImageGroupVDS method
2013-09-24 14:03:05,182 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-3-thread-33) [547e3abf] Error code MoveImageError and error 
message IRSGenericException: IRSErrorException: Failed to 
MoveImageGroupVDS, error = Error moving image: 
('spUUID=9dbc7bb1-c460-4202-8f10-862d2ed3ed9a, 
srcDomUUID=c332da29-ba9f-4c94-8fa9-346bb8e04e2a, 
dstDomUUID=51eb6183-157d-4015-ae0f-1c7ffb1731c0, 
imgUUID=483d8af2-beb2-45cc-b73e-4597e31a6fc0, vmUUID=, op=1, 
force=false, postZero=false force=false',)
2013-09-24 14:03:05,184 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(pool-3-thread-33) [547e3abf] IrsBroker::Failed::MoveImageGroupVDS due 
to: IRSErrorException: IRSGenericException: IRSErrorException: Failed 
to MoveImageGroupVDS, error = Error moving image: 
('spUUID=9dbc7bb1-c460-4202-8f10-862d2ed3ed9a, 
srcDomUUID=c332da29-ba9f-4c94-8fa9-346bb8e04e2a, 
dstDomUUID=51eb6183-157d-4015-ae0f-1c7ffb1731c0, 
imgUUID=483d8af2-beb2-45cc-b73e-4597e31a6fc0, vmUUID=, op=1, 
force=false, postZero=false force=false',)




On 09/24/2013 01:09 PM, Dafna Ron wrote:

vdsm cannot find your storage.
check your storage and network connection to it.

On 09/24/2013 03:31 PM, Eduardo Ramos wrote:

Hi all!

I'm getting a strange error on my SPM:

Message from syslogd@darwin at Sep 24 11:19:58 ...
<11>vdsm Storage.DomainMonitorThread ERROR Error while collecting 
domain 0226b818-59a6-41bc-8590-91f520aa7859 monitoring 
information#012Traceback (most recent call last):#012 File 
"/usr/share/vdsm/storage/domainMonitor.py", line 182, in 
_monitorDomain#012 self.domain = sdCache.produce(self.sdUUID)#012 
File "/usr/share/vdsm/storage/sdc.py", line 97, in produce#012 
domain.getRealDomain()#012 File "/usr/share/vdsm/storage/sdc.py", 
line 52, in getRealDomain#012 return 
self._cache._realProduce(self._sdUUID)#012 File 
"/usr/share/vdsm/storage/sdc.py", line 121, in _realProduce#012 
domain = self._findDomain(sdUUID)#012 File 
"/usr/share/vdsm/storage/sdc.py", line 152, in _findDomain#012 raise 
se.StorageDomainDoesNotExist(sdUUID)#012StorageDomainDoesNotExist: 
Storage domain does not exist: 
(u'0226b818-59a6-41bc-8590-91f520aa7859',)


I also cannot remove disks. When I try, this immediately appears in the
'Events' log of webadmin:


*Data Center is being initialized, please wait for initialization to 
complete.*
*User eduardo.ramos failed to initiate removing of disk 
012.167_teste_InfoDoc_Disk1 from domain VMs.*


Could someone help me?

Thanks


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Unable to Resize VM Disk in oVirt 3.3 (Upgraded from 3.2)

2013-09-24 Thread Itamar Heim

On 09/24/2013 10:18 PM, H. Haven Liu wrote:

Apparently I needed to log out and log back in? Because after that, the "edit"
button is no longer grayed out!


einav - thoughts?



On Sep 24, 2013, at 12:16 PM, "H. Haven Liu"  wrote:


I reinstalled the hosts, and changed DC to 3.3.

Both DC and Cluster are reporting "Compatibility Version" of 3.3

On Sep 24, 2013, at 11:26 AM, Itamar Heim  wrote:


On 09/24/2013 07:15 PM, H. Haven Liu wrote:

Hello,

I upgraded our installation of oVirt from 3.2 to 3.3, and one of the features I was looking forward to was the ability to resize 
VM disk. However, it appears that the feature is still not available to me. I selected the "Virtual Machines" tab, 
selected a VM, selected the "Disks" sub-tab, and selected a disk; but the "Edit" button is grayed out. The 
disk status is "OK" and the VM status is "Up".

Help is appreciated.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



did you also upgrade the cluster and DC to 3.3 (changing compatibility level, 
after hosts were upgraded to 3.3 vdsm)?


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Unable to Resize VM Disk in oVirt 3.3 (Upgraded from 3.2)

2013-09-24 Thread H. Haven Liu
Apparently I needed to log out and log back in? Because after that, the "edit"
button is no longer grayed out!

On Sep 24, 2013, at 12:16 PM, "H. Haven Liu"  wrote:

> I reinstalled the hosts, and changed DC to 3.3.
> 
> Both DC and Cluster are reporting "Compatibility Version" of 3.3
> 
> On Sep 24, 2013, at 11:26 AM, Itamar Heim  wrote:
> 
>> On 09/24/2013 07:15 PM, H. Haven Liu wrote:
>>> Hello,
>>> 
>>> I upgraded our installation of oVirt from 3.2 to 3.3, and one of the 
>>> features I was looking forward to was the ability to resize VM disk. 
>>> However, it appears that the feature is still not available to me. I 
>>> selected the "Virtual Machines" tab, selected a VM, selected the "Disks" 
>>> sub-tab, and selected a disk; but the "Edit" button is grayed out. The disk 
>>> status is "OK" and the VM status is "Up".
>>> 
>>> Help is appreciated.
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>> 
>> 
>> did you also upgrade the cluster and DC to 3.3 (changing compatibility 
>> level, after hosts were upgraded to 3.3 vdsm)?
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Unable to Resize VM Disk in oVirt 3.3 (Upgraded from 3.2)

2013-09-24 Thread H. Haven Liu
I reinstalled the hosts, and changed DC to 3.3.

Both DC and Cluster are reporting "Compatibility Version" of 3.3

On Sep 24, 2013, at 11:26 AM, Itamar Heim  wrote:

> On 09/24/2013 07:15 PM, H. Haven Liu wrote:
>> Hello,
>> 
>> I upgraded our installation of oVirt from 3.2 to 3.3, and one of the 
>> features I was looking forward to was the ability to resize VM disk. 
>> However, it appears that the feature is still not available to me. I 
>> selected the "Virtual Machines" tab, selected a VM, selected the "Disks" 
>> sub-tab, and selected a disk; but the "Edit" button is grayed out. The disk 
>> status is "OK" and the VM status is "Up".
>> 
>> Help is appreciated.
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>> 
> 
> did you also upgrade the cluster and DC to 3.3 (changing compatibility level, 
> after hosts were upgraded to 3.3 vdsm)?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Unable to Resize VM Disk in oVirt 3.3 (Upgraded from 3.2)

2013-09-24 Thread Itamar Heim

On 09/24/2013 07:15 PM, H. Haven Liu wrote:

Hello,

I upgraded our installation of oVirt from 3.2 to 3.3, and one of the features I was looking forward to was the ability to resize 
VM disk. However, it appears that the feature is still not available to me. I selected the "Virtual Machines" tab, 
selected a VM, selected the "Disks" sub-tab, and selected a disk; but the "Edit" button is grayed out. The 
disk status is "OK" and the VM status is "Up".

Help is appreciated.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



did you also upgrade the cluster and DC to 3.3 (changing compatibility 
level, after hosts were upgraded to 3.3 vdsm)?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Glance with oVirt

2013-09-24 Thread Itamar Heim

On 09/24/2013 06:06 PM, Jason Brooks wrote:

On Tue, 2013-09-24 at 16:15 +0200, Riccardo Brunetti wrote:

Dear ovirt users.
I'm trying to setup an oVirt 3.3 installation using an already existing
OpenStack glance service as an external provider.
When I define the external provider, I put:

Openstack Image as "Type"
the glance service endpoint as "URL" (ie. http://xx.xx.xx.xx:9292) I
used the openstack public url.
check "Requires Authentication"
put the administrator user/password/tenant in the following fields.

Unfortunately the connection test always fails and the glance provider
doesn't work in ovirt.


You also need to run this from the command line (of your engine):

engine-config --set KeystoneAuthUrl=http://:35357


maybe open a bug on the missing warning that keystone is not configured
when trying to use glance/neutron with authentication?




And then restart the ovirt-engine service.

Jason



In the engine.log I can see:

2013-09-24 15:57:30,665 INFO
[org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
(ajp--127.0.0.1-8702-3) Running command: TestProviderConnectivityCommand
internal: false. Entities affected :  ID:
aaa0----123456789aaa Type: System
2013-09-24 15:57:30,671 ERROR
[org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
(ajp--127.0.0.1-8702-3) Command
org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand throw
Vdc Bll exception. With error message VdcBLLException: (Failed with VDSM
error PROVIDER_FAILURE and code 5050)
2013-09-24 15:57:30,674 ERROR
[org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
(ajp--127.0.0.1-8702-3) Transaction rolled-back for command:
org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand.

The glance URL is reachable from the oVirt engine host, but looking with
tcpdump on the glance service I noticed that
no connections come up when I use "requires authentication", while
a connection happens if I do not use "requires authentication"
(even if the test ultimately fails).

My OS is CentOS-6.4 and my packages are the following:

[root@rhvmgr03 ovirt-engine]# rpm -qa | grep ovi
ovirt-host-deploy-1.1.1-1.el6.noarch
ovirt-log-collector-3.3.0-1.el6.noarch
ovirt-engine-cli-3.3.0.4-1.el6.noarch
ovirt-engine-webadmin-portal-3.3.0-4.el6.noarch
ovirt-engine-tools-3.3.0-4.el6.noarch
ovirt-release-el6-8-1.noarch
ovirt-engine-sdk-python-3.3.0.6-1.el6.noarch
ovirt-iso-uploader-3.3.0-1.el6.noarch
ovirt-host-deploy-java-1.1.1-1.el6.noarch
ovirt-engine-userportal-3.3.0-4.el6.noarch
ovirt-engine-backend-3.3.0-4.el6.noarch
ovirt-engine-setup-3.3.0-4.el6.noarch
ovirt-engine-3.3.0-4.el6.noarch
ovirt-image-uploader-3.3.0-1.el6.noarch
ovirt-engine-lib-3.3.0-4.el6.noarch
ovirt-engine-restapi-3.3.0-4.el6.noarch
ovirt-engine-dbscripts-3.3.0-4.el6.noarch

Do you have some suggestion to debug or solve this issue?

Thanks a lot.
Best Regards
R. Brunetti
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] vdsm Domain monitor error

2013-09-24 Thread Eduardo Ramos
This storage domain doesn't exist anymore. There is an entry in postgres
with:


"Domain VMExport was forcibly removed by admin@internal"

It was a NFS Export domain.

Is there any chance it is causing problems with iscsi data domain
operations? Now I can create disks and VMs, but I can't remove them. I
tried to export a VM, and engine.log returned this:


2013-09-24 14:03:05,180 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-3-thread-33) [547e3abf] Failed in MoveImageGroupVDS method
2013-09-24 14:03:05,182 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-3-thread-33) [547e3abf] Error code MoveImageError and error 
message IRSGenericException: IRSErrorException: Failed to 
MoveImageGroupVDS, error = Error moving image: 
('spUUID=9dbc7bb1-c460-4202-8f10-862d2ed3ed9a, 
srcDomUUID=c332da29-ba9f-4c94-8fa9-346bb8e04e2a, 
dstDomUUID=51eb6183-157d-4015-ae0f-1c7ffb1731c0, 
imgUUID=483d8af2-beb2-45cc-b73e-4597e31a6fc0, vmUUID=, op=1, 
force=false, postZero=false force=false',)
2013-09-24 14:03:05,184 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(pool-3-thread-33) [547e3abf] IrsBroker::Failed::MoveImageGroupVDS due 
to: IRSErrorException: IRSGenericException: IRSErrorException: Failed to 
MoveImageGroupVDS, error = Error moving image: 
('spUUID=9dbc7bb1-c460-4202-8f10-862d2ed3ed9a, 
srcDomUUID=c332da29-ba9f-4c94-8fa9-346bb8e04e2a, 
dstDomUUID=51eb6183-157d-4015-ae0f-1c7ffb1731c0, 
imgUUID=483d8af2-beb2-45cc-b73e-4597e31a6fc0, vmUUID=, op=1, 
force=false, postZero=false force=false',)




On 09/24/2013 01:09 PM, Dafna Ron wrote:

vdsm cannot find your storage.
check your storage and network connection to it.

On 09/24/2013 03:31 PM, Eduardo Ramos wrote:

Hi all!

I'm getting a strange error on my SPM:

Message from syslogd@darwin at Sep 24 11:19:58 ...
<11>vdsm Storage.DomainMonitorThread ERROR Error while collecting 
domain 0226b818-59a6-41bc-8590-91f520aa7859 monitoring 
information#012Traceback (most recent call last):#012 File 
"/usr/share/vdsm/storage/domainMonitor.py", line 182, in 
_monitorDomain#012 self.domain = sdCache.produce(self.sdUUID)#012 
File "/usr/share/vdsm/storage/sdc.py", line 97, in produce#012 
domain.getRealDomain()#012 File "/usr/share/vdsm/storage/sdc.py", 
line 52, in getRealDomain#012 return 
self._cache._realProduce(self._sdUUID)#012 File 
"/usr/share/vdsm/storage/sdc.py", line 121, in _realProduce#012 
domain = self._findDomain(sdUUID)#012 File 
"/usr/share/vdsm/storage/sdc.py", line 152, in _findDomain#012 raise 
se.StorageDomainDoesNotExist(sdUUID)#012StorageDomainDoesNotExist: 
Storage domain does not exist: 
(u'0226b818-59a6-41bc-8590-91f520aa7859',)


I also cannot remove disks. When I try, this immediately appears in the
'Events' log of webadmin:


*Data Center is being initialized, please wait for initialization to 
complete.*
*User eduardo.ramos failed to initiate removing of disk 
012.167_teste_InfoDoc_Disk1 from domain VMs.*


Could someone help me?

Thanks


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Unable to finish AIO 3.3.0 - VDSM

2013-09-24 Thread Nicholas Kesick





> Date: Tue, 24 Sep 2013 13:28:26 +0100
> From: dan...@redhat.com
> To: cybertimber2...@hotmail.com
> CC: jbro...@redhat.com; masa...@redhat.com; alo...@redhat.com; users@ovirt.org
> Subject: Re: [Users] Unable to finish AIO 3.3.0 - VDSM
> 
> On Mon, Sep 23, 2013 at 06:10:09PM -0400, Nicholas Kesick wrote:
> > Ok the thread got a little fragmented, so I'm trying to merge these
> > together. Let me know if I missed something.
> >  
> > - Original Message -
> > > > From: "Dan Kenigsberg" 
> > > > To: "Jason Brooks" 
> > > > Cc: "Nicholas Kesick" , "oVirt Mailing 
> > > > List" 
> > > > Sent: Monday, September 23, 2013 1:23:28 PM
> > > > Subject: Re: [Users] Unable to finish AIO 3.3.0 - VDSM
> > > > 
> > > > On Mon, Sep 23, 2013 at 03:29:10PM -0400, Jason Brooks wrote:
> > > > > 
> > > > > 
> > > > > > 
> > > > > > Hi Nicholas, I just installed an F19 AIO without any problem. My 
> > > > > > install
> > > > > > was
> > > > > > only minimal, though. I restored my snapshot to pre-ovirt install 
> > > > > > and
> > > > > > added
> > > > > > the "standard" group, rebooted, installed, and vdsm still installed
> > > > > > normally.
> > > > > > 
> > > > > > I'm wondering if it makes a difference if the system starts out with
> > > > > > minimal+standard, rather than starting out minimal and adding 
> > > > > > standard
> > > > > > after...
> > > > > > 
> > > > > > This is with dhcp addressing.
> > > > > 
> > > > > Another difference -- my AIO machine has nics w/ the regular eth0 
> > > > > naming --
> > > > > don't know if the biosdevname bits could be causing an issue...
> > > > 
> > > > Would I be wrong to assume that you had
> > > > /etc/sysconfig/network-scripts/ifcfg-eth0 defined before installation
> > > > began?
> > > 
> > > My systems do always have this defined before installation begins. I 
> > > almost always
> > > do PXE installs of Fedora. Wonder how it differs from a DVD install...
> > > 
> > > Jason
> > Good question. My particular attempts with ovirt 3.3 have been by
> > using the netinstall.iso. I can try a DVD install with
> > minimal+standard. For what it's worth that's what I've always used,
> > especially after that thread about minimal missing tar, and that part
> > of the install or setup requires tar. 
> >  
> > I do wonder if the interface names are messing things up. I don't know
> > if something changed upstream, or if it's part of the net install, but
> > interfaces aren't named eth# or em# (embedded) / p#p# (PCI) anymore.
> > Mine are way more cryptic now (enp4s0) and it's very annoying.
> > I know there wasn't an ifcfg-eth0, but there is an ifcfg-enp4s0.
> > ifconfig currently reports that I'm using em1, but there is no config
> > file for that. hmm.
> 
> Naming per se should not matter. I have seen ovirt install on hosts with
> all kinds of nic names.
> 
> However could we get to the bottom of the relation between enp4s0 and
> em1? Do you have two physical nics, or just one? Which of them is
> physically connected to the outer world? Your /var/log/messages suggests that
> it's your em1. THAT nic should have its ifcfg file before Vdsm is
> installed on the host.
> 
There is only one NIC on the system, a NIC that is embedded on the motherboard.
During install it's listed as enp4s0. Not sure what it's called after the first or
second boot, but there is an ifcfg-enp4s0 for it.
Currently on the system, the output of ifconfig doesn't list that interface,
but instead lists em1. If I try an ifdown enp4s0, em1 goes down. It's like they
are linked, but I can't find any reference to that. It's off at the moment, so
when I can boot it up I'll provide more info.
I might reinstall and see how it progresses from being named enp4s0 to em1. 
Worst case I'll disable biosdevname.
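A sketch of that worst case, assuming grub2 on Fedora 19: append
"biosdevname=0 net.ifnames=0" to GRUB_CMDLINE_LINUX in /etc/default/grub, then:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot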
> >  
> > > On Mon, Sep 23, 2013 at 02:25:49PM -0400, Moti Asayag wrote:
> > > > I have looked at the getVdsCapabilities reported by VDSM for the first 
> > > > time, on which the engine based its
> > > > setupNetwork command for configuring the management network:
> > > > 
> > > > 'lastClientIface': 'em1',
> > > > 'nics': {'em1': {'netmask': '255.255.255.0', 'addr': '192.168.2.9', 
> > > > 'hwaddr': 'a4:ba:db:ec:ea:cd', 'cfg': {}, 'ipv6addrs': 
> > > > ['fe80::a6ba:dbff:feec:eacd/64', 
> > > > '2001:4830:1692:1:a6ba:dbff:feec:eacd/64'], 'speed': 1000, 'mtu': 
> > > > '1500'}}
> > > > 
> > > > Based on that input, the engine sends setupNetwork command to configure 
> > > > the management network on top of 'em1' nic.
> > > > However, since it has no bootprotocol or gateway, it is identified as 
> > > > bootproto=NONE, which results in the engine not passing the ip 
> > > > address/subnet/gateway to vdsm; therefore the command fails.
> > > 
> > > This seems very similar to what triggered
> > > 
> > >  Bug 987813 - [RFE] report BOOTPROTO and BONDING_OPTS independent of
> > >  netdevice.cfg
> > > 
> > > Vdsm does not really cope with network definitions that are not
> > > ifcfg-based. I do not know what makes Fedora 19 sometimes use ifcfg

[Users] Unable to Resize VM Disk in oVirt 3.3 (Upgraded from 3.2)

2013-09-24 Thread H. Haven Liu
Hello,

I upgraded our installation of oVirt from 3.2 to 3.3, and one of the features I 
was looking forward to was the ability to resize VM disk. However, it appears 
that the feature is still not available to me. I selected the "Virtual 
Machines" tab, selected a VM, selected the "Disks" sub-tab, and selected a 
disk; but the "Edit" button is grayed out. The disk status is "OK" and the VM 
status is "Up".

Help is appreciated.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] vdsm Domain monitor error

2013-09-24 Thread Dafna Ron

vdsm cannot find your storage.
check your storage and network connection to it.
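A couple of hedged starting points for an iscsi data domain (run on the SPM
host):

iscsiadm -m session    # are the iscsi sessions to the storage still up?
multipath -ll          # are the multipath devices backing the domain present?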

On 09/24/2013 03:31 PM, Eduardo Ramos wrote:

Hi all!

I'm getting a strange error on my SPM:

Message from syslogd@darwin at Sep 24 11:19:58 ...
<11>vdsm Storage.DomainMonitorThread ERROR Error while collecting 
domain 0226b818-59a6-41bc-8590-91f520aa7859 monitoring 
information#012Traceback (most recent call last):#012 File 
"/usr/share/vdsm/storage/domainMonitor.py", line 182, in 
_monitorDomain#012 self.domain = sdCache.produce(self.sdUUID)#012 File 
"/usr/share/vdsm/storage/sdc.py", line 97, in produce#012 
domain.getRealDomain()#012 File "/usr/share/vdsm/storage/sdc.py", line 
52, in getRealDomain#012 return 
self._cache._realProduce(self._sdUUID)#012 File 
"/usr/share/vdsm/storage/sdc.py", line 121, in _realProduce#012 domain 
= self._findDomain(sdUUID)#012 File "/usr/share/vdsm/storage/sdc.py", 
line 152, in _findDomain#012 raise 
se.StorageDomainDoesNotExist(sdUUID)#012StorageDomainDoesNotExist: 
Storage domain does not exist: (u'0226b818-59a6-41bc-8590-91f520aa7859',)


I also cannot remove disks. When I try, this immediately appears in the
'Events' log of webadmin:


*Data Center is being initialized, please wait for initialization to 
complete.*
*User eduardo.ramos failed to initiate removing of disk 
012.167_teste_InfoDoc_Disk1 from domain VMs.*


Could someone help me?

Thanks


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] vdsm live migration errors in latest master

2013-09-24 Thread Federico Simoncelli
- Original Message -
> From: "Dan Kenigsberg" 
> To: "Dead Horse" 
> Cc: "" , vdsm-de...@fedorahosted.org, 
> fsimo...@redhat.com, aba...@redhat.com
> Sent: Tuesday, September 24, 2013 11:44:48 AM
> Subject: Re: [Users] vdsm live migration errors in latest master
> 
> On Mon, Sep 23, 2013 at 04:05:34PM -0500, Dead Horse wrote:
> > Seeing failed live migrations and these errors in the vdsm logs with latest
> > VDSM/Engine master.
> > Hosts are EL6.4
> 
> Thanks for posting this report.
> 
> The log is from the source of migration, right?
> Could you trace the history of the hosts of this VM? Could it be that it
> was started on an older version of vdsm (say ovirt-3.3.0) and then (due
> to migration or vdsm upgrade) got into a host with a much newer vdsm?
> 
> Would you share the vmCreate (or vmMigrationCreate) line for this Vm in
> your log? It smells like an unintended regression of
> http://gerrit.ovirt.org/17714
> vm: extend shared property to support locking
> 
> solving it may not be trivial, as we should not call
> _normalizeDriveSharedAttribute() automatically on the migration destination,
> as it may well still be a part of a 3.3 clusterLevel.
> 
> Also, migration from vdsm with extended shared property, to an ovirt 3.3
> vdsm is going to explode (in a different way), since the destination
> does not expect the extended values.
> 
> Federico, do we have a choice but to revert that patch, and use
> something like "shared3" property instead?

I filed a bug at:

https://bugzilla.redhat.com/show_bug.cgi?id=1011608

A possible fix could be:

http://gerrit.ovirt.org/#/c/19509

-- 
Federico
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Glance with oVirt

2013-09-24 Thread Riccardo Brunetti
On 09/24/2013 05:06 PM, Jason Brooks wrote:
> On Tue, 2013-09-24 at 16:15 +0200, Riccardo Brunetti wrote:
>> Dear ovirt users.
>> I'm trying to setup an oVirt 3.3 installation using an already existing
>> OpenStack glance service as an external provider.
>> When I define the external provider, I put:
>>
>> Openstack Image as "Type"
>> the glance service endpoint as "URL" (ie. http://xx.xx.xx.xx:9292) I
>> used the openstack public url.
>> check "Requires Authentication"
>> put the administrator user/password/tenant in the following fields.
>>
>> Unfortunately the connection test always fails and the glance provider
>> doesn't work in ovirt.
> You also need to run this from the command line (of your engine):
>
> engine-config --set KeystoneAuthUrl=http://:35357
>
> And then restart the ovirt-engine service.
>
> Jason
>
>> In the engine.log I can see:
>>
>> 2013-09-24 15:57:30,665 INFO 
>> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
>> (ajp--127.0.0.1-8702-3) Running command: TestProviderConnectivityCommand
>> internal: false. Entities affected :  ID:
>> aaa0----123456789aaa Type: System
>> 2013-09-24 15:57:30,671 ERROR
>> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
>> (ajp--127.0.0.1-8702-3) Command
>> org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand throw
>> Vdc Bll exception. With error message VdcBLLException: (Failed with VDSM
>> error PROVIDER_FAILURE and code 5050)
>> 2013-09-24 15:57:30,674 ERROR
>> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
>> (ajp--127.0.0.1-8702-3) Transaction rolled-back for command:
>> org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand.
>>
>> The glance URL is reachable from the oVirt engine host, but looking with
>> tcpdump on the glance service I noticed that
>> no connections come up when I use "requires authentication", while
>> a connection happens if I do not use "requires authentication"
>> (even if the test ultimately fails).
>>
>> My OS is CentOS-6.4 and my packages are the following:
>>
>> [root@rhvmgr03 ovirt-engine]# rpm -qa | grep ovi
>> ovirt-host-deploy-1.1.1-1.el6.noarch
>> ovirt-log-collector-3.3.0-1.el6.noarch
>> ovirt-engine-cli-3.3.0.4-1.el6.noarch
>> ovirt-engine-webadmin-portal-3.3.0-4.el6.noarch
>> ovirt-engine-tools-3.3.0-4.el6.noarch
>> ovirt-release-el6-8-1.noarch
>> ovirt-engine-sdk-python-3.3.0.6-1.el6.noarch
>> ovirt-iso-uploader-3.3.0-1.el6.noarch
>> ovirt-host-deploy-java-1.1.1-1.el6.noarch
>> ovirt-engine-userportal-3.3.0-4.el6.noarch
>> ovirt-engine-backend-3.3.0-4.el6.noarch
>> ovirt-engine-setup-3.3.0-4.el6.noarch
>> ovirt-engine-3.3.0-4.el6.noarch
>> ovirt-image-uploader-3.3.0-1.el6.noarch
>> ovirt-engine-lib-3.3.0-4.el6.noarch
>> ovirt-engine-restapi-3.3.0-4.el6.noarch
>> ovirt-engine-dbscripts-3.3.0-4.el6.noarch
>>
>> Do you have some suggestion to debug or solve this issue?
>>
>> Thanks a lot.
>> Best Regards
>> R. Brunetti
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
Ok, now it works.
Thank you all very much for your help.

Riccardo
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Glance with oVirt

2013-09-24 Thread Riccardo Brunetti
On 09/24/2013 04:56 PM, Gianluca Cecchi wrote:
> On Tue, Sep 24, 2013 at 4:15 PM, Riccardo Brunetti  wrote:
>> Dear ovirt users.
>> I'm trying to setup an oVirt 3.3 installation using an already existing
>> OpenStack glance service as an external provider.
>> When I define the external provider, I put:
>>
>> Openstack Image as "Type"
>> the glance service endpoint as "URL" (ie. http://xx.xx.xx.xx:9292) I
>> used the openstack public url.
>> check "Requires Authentication"
>> put the administrator user/password/tenant in the following fields.
>>
>> Unfortunately the connection test always fails and the glance provider
>> doesn't work in ovirt.
> Could you give more details regarding how you configured the glance
> service in your openstack environment?
> Does it rely on swift as its store?
>
> Gianluca
Hi Gianluca. Thank you for the prompt reply.

This is the output of the command keystone endpoint-list (for the part
that concerns the glance service):

| 78bf9c3381604e1ba1aa837c7a768173 | regionOne | http://10.0.54.7:9292 | http://10.0.54.7:9292 | http://10.0.54.7:9292 | acb2d31657c5421aaddb1522e2e2340b |

All the public/internal/admin URLs are the same, and are reachable from
the ovirt engine host:

# telnet 10.0.54.7 9292

Trying 10.0.54.7...
Connected to 10.0.54.7.
Escape character is '^]'.

Glance is using local storage, not swift, to store images. Note that
I can use the images from the OpenStack testbed that I previously prepared.

Riccardo
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Glance with oVirt

2013-09-24 Thread Jason Brooks
On Tue, 2013-09-24 at 16:15 +0200, Riccardo Brunetti wrote:
> Dear ovirt users.
> I'm trying to setup an oVirt 3.3 installation using an already existing
> OpenStack glance service as an external provider.
> When I define the external provider, I put:
> 
> Openstack Image as "Type"
> the glance service endpoint as "URL" (ie. http://xx.xx.xx.xx:9292) I
> used the openstack public url.
> check "Requires Authentication"
> put the administrator user/password/tenant in the following fields.
> 
> Unfortunately the connection test always fails and the glance provider
> doesn't work in ovirt.

You also need to run this from the command line (of your engine):

engine-config --set KeystoneAuthUrl=http://<keystone-host>:35357

And then restart the ovirt-engine service.
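
For example, assuming Keystone's admin endpoint runs on the same host as
Glance earlier in this thread (10.0.54.7 is an assumption here; substitute
your Keystone host):

# engine-config --set KeystoneAuthUrl=http://10.0.54.7:35357
# service ovirt-engine restart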

Jason

> 
> In the engine.log I can see:
> 
> 2013-09-24 15:57:30,665 INFO 
> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
> (ajp--127.0.0.1-8702-3) Running command: TestProviderConnectivityCommand
> internal: false. Entities affected :  ID:
aaa00000-0000-0000-0000-123456789aaa Type: System
> 2013-09-24 15:57:30,671 ERROR
> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
> (ajp--127.0.0.1-8702-3) Command
> org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand throw
> Vdc Bll exception. With error message VdcBLLException: (Failed with VDSM
> error PROVIDER_FAILURE and code 5050)
> 2013-09-24 15:57:30,674 ERROR
> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
> (ajp--127.0.0.1-8702-3) Transaction rolled-back for command:
> org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand.
> 
> The glance URL is reachable from the oVirt engine host, but looking with
> tcpdump on the glance service I noticed that
> no connections come up when I use "requires authentication";
> a connection happens if I do not use "requires authentication"
> (even if the test fails ultimately)
> 
> My OS is CentOS-6.4 and my packages are the following:
> 
> [root@rhvmgr03 ovirt-engine]# rpm -qa | grep ovi
> ovirt-host-deploy-1.1.1-1.el6.noarch
> ovirt-log-collector-3.3.0-1.el6.noarch
> ovirt-engine-cli-3.3.0.4-1.el6.noarch
> ovirt-engine-webadmin-portal-3.3.0-4.el6.noarch
> ovirt-engine-tools-3.3.0-4.el6.noarch
> ovirt-release-el6-8-1.noarch
> ovirt-engine-sdk-python-3.3.0.6-1.el6.noarch
> ovirt-iso-uploader-3.3.0-1.el6.noarch
> ovirt-host-deploy-java-1.1.1-1.el6.noarch
> ovirt-engine-userportal-3.3.0-4.el6.noarch
> ovirt-engine-backend-3.3.0-4.el6.noarch
> ovirt-engine-setup-3.3.0-4.el6.noarch
> ovirt-engine-3.3.0-4.el6.noarch
> ovirt-image-uploader-3.3.0-1.el6.noarch
> ovirt-engine-lib-3.3.0-4.el6.noarch
> ovirt-engine-restapi-3.3.0-4.el6.noarch
> ovirt-engine-dbscripts-3.3.0-4.el6.noarch
> 
> Do you have any suggestions to debug or solve this issue?
> 
> Thanks a lot.
> Best Regards
> R. Brunetti
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Glance with oVirt

2013-09-24 Thread Gianluca Cecchi
On Tue, Sep 24, 2013 at 4:15 PM, Riccardo Brunetti  wrote:
> Dear ovirt users.
> I'm trying to set up an oVirt 3.3 installation using an already existing
> OpenStack glance service as an external provider.
> When I define the external provider, I put:
>
> Openstack Image as "Type"
> the glance service endpoint as "URL" (i.e. http://xx.xx.xx.xx:9292); I
> used the OpenStack public URL.
> check "Requires Authentication"
> put the administrator user/password/tenant in the following fields.
>
> Unfortunately the connection test always fails and the glance provider
> doesn't work in ovirt.

Could you give more details regarding how you configured the glance
service in your OpenStack environment?
Does it rely on Swift as its store?

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] How can I make a VM "immortal"?

2013-09-24 Thread lofyer
On 09/24/13 19:57, René Koch (ovido) wrote:
> On Tue, 2013-09-24 at 08:44 +0800, lofyer wrote:
>> On 2013/9/24 6:03, Itamar Heim wrote:
>>> On 09/23/2013 06:18 PM, lofyer wrote:
>>>> Besides assigning a watchdog device to it, are there any other ways to
>>>> make the VM autostart even if a user shuts it down manually?
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>> if a user shut down a vm from inside or via engine?
>>> i suggest an external script on engine monitoring its status and 
>>> starting it for such a use case
>> You mean an anacrontab script?
>
> You can for example use Nagios/Icinga with the event handler
> functionality. When your vm is down, Nagios/Icinga can start it again
> via an event handler script (which will use the Python SDK or REST API to
> start the vm).
>
>
> Regards,
> René
>
>
> PS: Had some mail/dns issues today, so maybe some mails with suggestions
> are missing on my side...
>
>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
That's interesting, I'll give it a try later since I've never used
Nagios/Icinga before.
For now I'm using an anacrontab script that starts the "down" VMs every
10 minutes.
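
A minimal sketch of such a script with the oVirt Python SDK
(ovirt-engine-sdk-python); the engine URL and credentials are
placeholders, and insecure=True skips certificate checks, so this is for
testing only:

from ovirtsdk.api import API

# Connect to the engine's REST API (placeholder URL and credentials).
api = API(url='https://engine.example.com/api',
          username='admin@internal',
          password='secret',
          insecure=True)
try:
    # Start every VM that is currently down.
    for vm in api.vms.list():
        if vm.get_status().get_state() == 'down':
            vm.start()
finally:
    api.disconnect()

Run from cron every few minutes this does the same job; in practice you
would filter on specific VM names, since it also restarts VMs that were
stopped on purpose.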
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] vdsm Domain monitor error

2013-09-24 Thread Eduardo Ramos

Hi all!

I'm getting a strange error on my SPM:

Message from syslogd@darwin at Sep 24 11:19:58 ...
 vdsm Storage.DomainMonitorThread ERROR Error while collecting
 domain 0226b818-59a6-41bc-8590-91f520aa7859 monitoring information
 Traceback (most recent call last):
   File "/usr/share/vdsm/storage/domainMonitor.py", line 182, in _monitorDomain
     self.domain = sdCache.produce(self.sdUUID)
   File "/usr/share/vdsm/storage/sdc.py", line 97, in produce
     domain.getRealDomain()
   File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
     return self._cache._realProduce(self._sdUUID)
   File "/usr/share/vdsm/storage/sdc.py", line 121, in _realProduce
     domain = self._findDomain(sdUUID)
   File "/usr/share/vdsm/storage/sdc.py", line 152, in _findDomain
     raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist: (u'0226b818-59a6-41bc-8590-91f520aa7859',)


I also cannot remove disks. When I try, this immediately appears in the
'Events' log of webadmin:


Data Center is being initialized, please wait for initialization to
complete.
User eduardo.ramos failed to initiate removing of disk
012.167_teste_InfoDoc_Disk1 from domain VMs.


Could someone help me?

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Glance with oVirt

2013-09-24 Thread Riccardo Brunetti
Dear ovirt users.
I'm trying to set up an oVirt 3.3 installation using an already existing
OpenStack glance service as an external provider.
When I define the external provider, I put:

Openstack Image as "Type"
the glance service endpoint as "URL" (i.e. http://xx.xx.xx.xx:9292); I
used the OpenStack public URL.
check "Requires Authentication"
put the administrator user/password/tenant in the following fields.

Unfortunately the connection test always fails and the glance provider
doesn't work in ovirt.

In the engine.log I can see:

2013-09-24 15:57:30,665 INFO 
[org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
(ajp--127.0.0.1-8702-3) Running command: TestProviderConnectivityCommand
internal: false. Entities affected :  ID:
aaa00000-0000-0000-0000-123456789aaa Type: System
2013-09-24 15:57:30,671 ERROR
[org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
(ajp--127.0.0.1-8702-3) Command
org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand throw
Vdc Bll exception. With error message VdcBLLException: (Failed with VDSM
error PROVIDER_FAILURE and code 5050)
2013-09-24 15:57:30,674 ERROR
[org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
(ajp--127.0.0.1-8702-3) Transaction rolled-back for command:
org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand.

The glance URL is reachable from the oVirt engine host, but looking with
tcpdump on the glance service I noticed that
no connections come up when I use "requires authentication";
a connection happens if I do not use "requires authentication"
(even if the test fails ultimately)

My OS is CentOS-6.4 and my packages are the following:

[root@rhvmgr03 ovirt-engine]# rpm -qa | grep ovi
ovirt-host-deploy-1.1.1-1.el6.noarch
ovirt-log-collector-3.3.0-1.el6.noarch
ovirt-engine-cli-3.3.0.4-1.el6.noarch
ovirt-engine-webadmin-portal-3.3.0-4.el6.noarch
ovirt-engine-tools-3.3.0-4.el6.noarch
ovirt-release-el6-8-1.noarch
ovirt-engine-sdk-python-3.3.0.6-1.el6.noarch
ovirt-iso-uploader-3.3.0-1.el6.noarch
ovirt-host-deploy-java-1.1.1-1.el6.noarch
ovirt-engine-userportal-3.3.0-4.el6.noarch
ovirt-engine-backend-3.3.0-4.el6.noarch
ovirt-engine-setup-3.3.0-4.el6.noarch
ovirt-engine-3.3.0-4.el6.noarch
ovirt-image-uploader-3.3.0-1.el6.noarch
ovirt-engine-lib-3.3.0-4.el6.noarch
ovirt-engine-restapi-3.3.0-4.el6.noarch
ovirt-engine-dbscripts-3.3.0-4.el6.noarch

Do you have any suggestions to debug or solve this issue?

Thanks a lot.
Best Regards
R. Brunetti
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] oVirt Weekly Meeting Minutes -- 2013-09-24

2013-09-24 Thread Mike Burns
Minutes: 
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-09-24-13.01.html
Minutes (text): 
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-09-24-13.01.txt
Log: 
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-09-24-13.01.log.html



#ovirt: oVirt Weekly Meeting



Meeting started by mburns at 13:01:17 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-09-24-13.01.log.html
.



Meeting summary
---
* agenda and roll call  (mburns, 13:01:22)
  * 3.3 updates  (mburns, 13:01:46)
  * 3.4 planning  (mburns, 13:01:52)
  * conferences and workshops  (mburns, 13:02:15)
  * infra update  (mburns, 13:02:23)
  * Other Topics  (mburns, 13:02:27)

* 3.3 updates  (mburns, 13:04:09)
  * there are a few important issues that should be fixed for 3.3.0.1
(mburns, 13:04:38)
  * patch:  19217  (mburns, 13:05:41)
  * bugs 1007980 1008938 1009100  (mburns, 13:05:56)
  * oschreib plans to have builds available next week  (mburns,
13:06:08)
  * ACTION: oschreib to coordinate builds with non-engine components if
needed  (mburns, 13:07:28)
  * ACTION: oschreib to create 3.3.0.1 tracker and close 3.3.0 tracker
(mburns, 13:07:38)
  * plan is to post builds next week to updates-testing, then promote to
stable the following week  (mburns, 13:09:38)
  * plan for 3.3.1 -- rebase engine and vdsm  (mburns, 13:10:25)
  * engine rebase is complete  (mburns, 13:10:56)
  * issue with vdsm rebase  (mburns, 13:11:01)
  * vdsm should be ready by end of next week, in time for posting the
following week  (mburns, 13:19:07)
  * IDEA: scheduled release date for 3.3.1 is 28-Oct  (mburns, 13:22:02)
  * AGREED: beta posting set for week of 8-Oct  (mburns, 13:24:41)
  * AGREED: tentative release date -- week of 28-Oct  (mburns, 13:25:45)
  * mburns working on a solution for gluster storage domains on EL6
(mburns, 13:26:46)
  * no solution yet, but it's being worked on  (mburns, 13:27:00)

* 3.4 planning  (mburns, 13:28:12)
  * feature collection and planning is underway  (mburns, 13:29:14)
  * no hard dates planned until after review of requested features is
complete  (mburns, 13:30:03)
  * itamar to propose process change for the release  (mburns, 13:31:14)

* Conferences and Workshops  (mburns, 13:31:50)
  * LINK: http://www.ovirt.org/Upcoming_events   (mburns, 13:34:04)

* Infra update  (mburns, 13:34:27)
  * added third host at rackspace  (mburns, 13:37:27)
  * plan to install then migrate existing vms  (mburns, 13:37:36)
  * plan to create a gluster-based cluster  (mburns, 13:37:44)
  * working on installing artifactory, but some issues encountered,
still working  (mburns, 13:39:26)
  * for other details, please refer to the minutes:
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-09-23-14.02.html
(mburns, 13:39:49)

* Other Topics  (mburns, 13:39:58)

Meeting ended at 13:44:17 UTC.




Action Items

* oschreib to coordinate builds with non-engine components if needed
* oschreib to create 3.3.0.1 tracker and close 3.3.0 tracker




Action Items, by person
---
* oschreib
  * oschreib to coordinate builds with non-engine components if needed
  * oschreib to create 3.3.0.1 tracker and close 3.3.0 tracker
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* mburns (82)
* danken (16)
* oschreib (12)
* ewoud (8)
* abaron (8)
* sbonazzo (6)
* dcaro (3)
* itamar (3)
* ovirtbot (3)
* dneary (2)
* Rydekull (2)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] How can I make a VM "immortal"?

2013-09-24 Thread Koch (ovido)

On Tue, 2013-09-24 at 08:44 +0800, lofyer wrote:
> On 2013/9/24 6:03, Itamar Heim wrote:
> > On 09/23/2013 06:18 PM, lofyer wrote:
> >> Besides assigning a watchdog device to it, are there any other ways to
> >> make the VM autostart even if a user shuts it down manually?
> >> ___
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >
> > if a user shut down a vm from inside or via engine?
> > i suggest an external script on engine monitoring its status and 
> > starting it for such a use case
> You mean an anacrontab script?


You can for example use Nagios/Icinga with the event handler
functionality. When your vm is down, Nagios/Icinga can start it again
via an event handler script (which will use the Python SDK or REST API to
start the vm).


Regards,
René


PS: Had some mail/dns issues today, so maybe some mails with suggestions
are missing on my side...


> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration issues with ovirt 3.3

2013-09-24 Thread Dan Kenigsberg
On Mon, Sep 23, 2013 at 12:15:04PM -0300, emi...@gmail.com wrote:
> Hi,
> 
> I'm running ovirt-engine 3.3 on a server with fedora 19, plus two hosts with
> fedora 19 running vdsm and gluster. I'm using the repositories as
> described here: http://www.ovirt.org/OVirt_3.3_TestDay with the
> [ovirt-beta] & [ovirt-stable] repos enabled and the [ovirt-nightly] repo disabled.
> 
> I've configured a datacenter with glusterfs active and the two hosts. I've
> installed a VM and when I do a migration it fails with the message *"VM
> pfSense1 is down. Exit message: 'iface'."* and the VM reboots.

Could you share vdsm.log from the source and destination?

> Also if I
> try to take a snapshot of the VM with Save Memory checked, it fails with the
> message *"VM pfSense1 is down. Exit message: Lost connection with qemu
> process."* If I take a snapshot without Save Memory checked, it works.

Here, beyond vdsm.log, I would ask to see /var/log/libvirt/qemu/pfSense1.log
to understand why qemu crashed. Please report the versions of your
qemu-kvm and kernel.
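
For example, on Fedora:

# rpm -q qemu-kvm libvirt kernel
# uname -r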

> 
> I've tried to restart the libvirtd service but it's still happening.

Would you make sure you upgrade to the newest libvirt? We had some annoying
bugs resolved by a recent upgrade there.

> 
> Before this I've tried the cluster with NFS storage and had problems with
> migration too, but the error messages were different. Now I'm trying with
> gluster because this is what I want to use.
> 
> Could you give me any hint about this?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] vdsm live migration errors in latest master

2013-09-24 Thread Dan Kenigsberg
On Mon, Sep 23, 2013 at 04:05:34PM -0500, Dead Horse wrote:
> Seeing failed live migrations and these errors in the vdsm logs with latest
> VDSM/Engine master.
> Hosts are EL6.4

Thanks for posting this report.

The log is from the source of migration, right?
Could you trace the history of the hosts of this VM? Could it be that it
was started on an older version of vdsm (say ovirt-3.3.0) and then (due
to migration or vdsm upgrade) got into a host with a much newer vdsm?

Would you share the vmCreate (or vmMigrationCreate) line for this VM in
your log? It smells like an unintended regression of
http://gerrit.ovirt.org/17714
vm: extend shared property to support locking

Solving it may not be trivial, as we should not call
_normalizeDriveSharedAttribute() automatically on the migration destination,
as it may well still be a part of a 3.3 clusterLevel.

Also, migration from a vdsm with the extended shared property to an ovirt 3.3
vdsm is going to explode (in a different way), since the destination
does not expect the extended values.

Federico, do we have a choice but to revert that patch, and use
something like "shared3" property instead?

> 
> Thread-1306::ERROR::2013-09-23
> 16:02:42,422::BindingXMLRPC::993::vds::(wrapper) unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/BindingXMLRPC.py", line 979, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/BindingXMLRPC.py", line 211, in vmDestroy
>     return vm.destroy()
>   File "/usr/share/vdsm/API.py", line 323, in destroy
>     res = v.destroy()
>   File "/usr/share/vdsm/vm.py", line 4326, in destroy
>     response = self.releaseVm()
>   File "/usr/share/vdsm/vm.py", line 4292, in releaseVm
>     self._cleanup()
>   File "/usr/share/vdsm/vm.py", line 2750, in _cleanup
>     self._cleanupDrives()
>   File "/usr/share/vdsm/vm.py", line 2482, in _cleanupDrives
>     drive, exc_info=True)
>   File "/usr/lib64/python2.6/logging/__init__.py", line 1329, in error
>     self.logger.error(msg, *args, **kwargs)
>   File "/usr/lib64/python2.6/logging/__init__.py", line 1082, in error
>     self._log(ERROR, msg, args, **kwargs)
>   File "/usr/lib64/python2.6/logging/__init__.py", line 1173, in _log
>     self.handle(record)
>   File "/usr/lib64/python2.6/logging/__init__.py", line 1183, in handle
>     self.callHandlers(record)
>   File "/usr/lib64/python2.6/logging/__init__.py", line 1220, in callHandlers
>     hdlr.handle(record)
>   File "/usr/lib64/python2.6/logging/__init__.py", line 679, in handle
>     self.emit(record)
>   File "/usr/lib64/python2.6/logging/handlers.py", line 780, in emit
>     msg = self.format(record)
>   File "/usr/lib64/python2.6/logging/__init__.py", line 654, in format
>     return fmt.format(record)
>   File "/usr/lib64/python2.6/logging/__init__.py", line 436, in format
>     record.message = record.getMessage()
>   File "/usr/lib64/python2.6/logging/__init__.py", line 306, in getMessage
>     msg = msg % self.args
>   File "/usr/share/vdsm/vm.py", line 107, in __str__
>     if not a.startswith('__')]
>   File "/usr/share/vdsm/vm.py", line 1344, in hasVolumeLeases
>     if self.shared != DRIVE_SHARED_TYPE.EXCLUSIVE:
> AttributeError: 'Drive' object has no attribute 'shared'
> 
> - DHC

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] How can I make a VM "immortal"?

2013-09-24 Thread Michael Pasternak

Actually this is a bug [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1005562
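
For reference, a plain (non-run-once) start through the REST API is an
empty action POSTed to the VM's start link; {vm:id} below is a
placeholder for the VM's id:

POST /api/vms/{vm:id}/start HTTP/1.1
Content-Type: application/xml

<action/>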

On 09/24/2013 09:53 AM, lof yer wrote:
> That's fine, I thought there was an option in the api that makes the start a
> normal one.
> 
> 2013/9/24 Itamar Heim <ih...@redhat.com>
> 
> On 09/24/2013 06:34 AM, lof yer wrote:
> 
> Ok, that's easy to accomplish.
> But when I use restapi to start a VM, why does the log show me that it
> runs in RUNONCE mode rather than NORMAL START?
> 
> 
> since the api allows you to pass any parameter to affect the run.
> does it matter?
> 
> 
> 
> 2013/9/24 lofyer <lof...@gmail.com>
> 
> 
> On 2013/9/24 6:03, Itamar Heim wrote:
> 
> On 09/23/2013 06:18 PM, lofyer wrote:
> 
> Besides assigning a watchdog device to it, are there any
> other ways to
> make the VM autostart even if a user shuts it down
> manually?
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> if a user shut down a vm from inside or via engine?
> i suggest an external script on engine monitoring its status 
> and
> starting it for such a use case
> 
> You mean an anacrontab script?
> 
> 
> 
> 


-- 

Michael Pasternak
RedHat, ENG-Virtualization R&D
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users