Re: [Users] Fatal error during migration

2012-09-20 Thread Michal Skrivanek
Well, looks like 16514 is not open on the node. I guess it should be; TLS
migration is new in 3.1, isn't it?

On 20 Sep 2012, at 15:25, Mike Burns  wrote:

> On Thu, 2012-09-20 at 06:46 -0400, Doron Fediuck wrote:
>> 
>> __
>>From: "Dmitriy A Pyryakov" 
>>To: "Michal Skrivanek" 
>>Cc: users@ovirt.org
>>Sent: Thursday, September 20, 2012 1:34:46 PM
>>Subject: Re: [Users] Fatal error during migration
>> 
>> 
>> 
>>Michal Skrivanek wrote on 20.09.2012 16:23:31:
>> 
>>> From: Michal Skrivanek 
>>> To: Dmitriy A Pyryakov 
>>> Cc: users@ovirt.org
>>> Date: 20.09.2012 16:24
>>> Subject: Re: [Users] Fatal error during migration
>>> 
>>> 
>>> On Sep 20, 2012, at 12:19 , Dmitriy A Pyryakov wrote:
>>> 
 Michal Skrivanek wrote on 20.09.2012 16:13:16:
 
> From: Michal Skrivanek 
> To: Dmitriy A Pyryakov 
> Cc: users@ovirt.org
> Date: 20.09.2012 16:13
> Subject: Re: [Users] Fatal error during migration
> 
> 
> On Sep 20, 2012, at 12:07 , Dmitriy A Pyryakov wrote:
> 
>> Michal Skrivanek wrote on 20.09.2012 16:02:11:
>> 
>>> From: Michal Skrivanek 
>>> To: Dmitriy A Pyryakov 
>>> Cc: users@ovirt.org
>>> Date: 20.09.2012 16:02
>>> Subject: Re: [Users] Fatal error during migration
>>> 
>>> Hi,
>>> well, so what is the other side saying? Maybe some connectivity
>>> problems between those 2 hosts? A firewall?
>>> 
>>> Thanks,
>>> michal
>> 
>> Yes, the firewall is not configured properly by default. If I stop it,
>> migration completes.
>> Thanks.
> The default is supposed to be:
> 
> # oVirt default firewall configuration. Automatically generated by
> vdsm bootstrap script.
> *filter
> :INPUT ACCEPT [0:0]
> :FORWARD ACCEPT [0:0]
> :OUTPUT ACCEPT [0:0]
> -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
> -A INPUT -p icmp -j ACCEPT
> -A INPUT -i lo -j ACCEPT
> # vdsm
> -A INPUT -p tcp --dport 54321 -j ACCEPT
> # libvirt tls
> -A INPUT -p tcp --dport 16514 -j ACCEPT
> # SSH
> -A INPUT -p tcp --dport 22 -j ACCEPT
> # guest consoles
> -A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
> # migration
> -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
> # snmp
> -A INPUT -p udp --dport 161 -j ACCEPT
> # Reject any other input traffic
> -A INPUT -j REJECT --reject-with icmp-host-prohibited
> -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited
> COMMIT
 
 my default is:
 
 # cat /etc/sysconfig/iptables
 # oVirt automatically generated firewall configuration
 *filter
 :INPUT ACCEPT [0:0]
 :FORWARD ACCEPT [0:0]
 :OUTPUT ACCEPT [0:0]
 -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
 -A INPUT -p icmp -j ACCEPT
 -A INPUT -i lo -j ACCEPT
 #vdsm
 -A INPUT -p tcp --dport 54321 -j ACCEPT
 # SSH
 -A INPUT -p tcp --dport 22 -j ACCEPT
 # guest consoles
 -A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
 # migration
 -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
 # snmp
 -A INPUT -p udp --dport 161 -j ACCEPT
 #
 -A INPUT -j REJECT --reject-with icmp-host-prohibited
 -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited
 COMMIT
 
> 
> did you change it manually or is the default missing anything?
 
 The default is missing the "libvirt tls" rule.
>>> was it an upgrade of some sort?
>>No.
>> 
>>> These are installed at node setup from ovirt-engine. Check the
>>> engine version and/or the IPTablesConfig row in the vdc_options
>>> table on the engine.
>> 
>>oVirt engine version: 3.1.0-2.fc17
>> 
>>engine=# select * from vdc_options where option_id=100;
>>option_id | option_name | option_value | version
>>-----------+----------------+---------------+---------
>>100 | IPTablesConfig | # oVirt default firewall configuration.
>>Automatically generated by vdsm bootstrap script.+| general
>>| | *filter +|
>>| | :INPUT ACCEPT [0:0] +|
>>| | :FORWARD ACCEPT [0:0] +|
>>| | :OUTPUT ACCEPT [0:0] +|
>>| | -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT +|
>>| | -A INPUT -p icmp -j ACCEPT +|
>>| | -A INPUT -i lo -j ACCEPT +|
>>| | # vdsm +|
>>| | -A INPUT -p tcp --dport 54321 -j ACCEPT +|
>>| | # libvirt tls +|
>>| | -A INPUT -p tcp --dport 16514 -j ACCEPT +|
>>| | # SSH +|
>>| | -A INPUT -p tcp --dport 22 -j ACCEPT +|
>>| | # guest con
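The missing rule can be checked for mechanically on the node. A minimal sketch, assuming an iptables-save style ruleset file (the helper name and sample path are hypothetical; the durable fix is correcting IPTablesConfig on the engine so redeployed nodes get the rule):

```shell
#!/bin/sh
# Check a saved iptables ruleset for the libvirt TLS port (16514) that
# TLS migration in 3.1 needs, and say how to fix it if it is missing.
check_tls_rule() {
    # $1: path to an iptables-save style ruleset
    if grep -q -- '--dport 16514 -j ACCEPT' "$1"; then
        echo "ok: 16514 open"
    else
        echo "missing: add '-A INPUT -p tcp --dport 16514 -j ACCEPT' before the final REJECT"
    fi
}

# Demo against a ruleset like the one quoted above, minus the TLS rule:
cat > /tmp/iptables.sample <<'EOF'
*filter
-A INPUT -p tcp --dport 54321 -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
EOF
check_tls_rule /tmp/iptables.sample
```

On a live node the same check could be run against `iptables-save` output; note the ACCEPT rule must come before the final REJECT or it never matches.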

Re: [Users] non-operational state as host does not meet cluster's minimum CPU level.

2012-09-20 Thread wujieke
Thanks a lot, Mark.

Attached is the output for reference.

-Original Message-
From: Mark Wu [mailto:wu...@linux.vnet.ibm.com] 
Sent: Friday, September 21, 2012 2:15 PM
To: wujieke
Cc: 'Itamar Heim'; users@ovirt.org
Subject: Re: [Users] non-operational state as host does not meet clusters'
minimu CPU level.

On 09/21/2012 01:01 PM, wujieke wrote:
> I followed the wiki page to re-install oVirt with the all-in-one version.
> My local host in oVirt is working now.
> Thanks a lot.
>
> Btw: the command "virsh capabilities" errors out:
>
> [root@localhost ~]# virsh capabilities
> Please enter your authentication name:
> Please enter your password:
> error: Failed to reconnect to the hypervisor
> error: no valid connection
> error: authentication failed: Failed to step SASL negotiation: -1 (SASL(-1):
> generic failure: All-whitespace username.)
>
> any idea?
Please try "virsh -r capabilities"
>
> -Original Message-
> From: Itamar Heim [mailto:ih...@redhat.com]
> Sent: Friday, September 21, 2012 12:44 PM
> To: wujieke
> Cc: node-de...@ovirt.org; users@ovirt.org
> Subject: Re: [Users] non-operational state as host does not meet clusters'
> minimu CPU level.
>
> On 09/21/2012 03:54 AM, wujieke wrote:
>> [root@localhost ~]# vdsClient -s 0 getVdsCaps | grep -i flags
>>   cpuFlags =
>> fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_timer,aes,xsave,avx,lahf_lm,ida,arat,epb,xsaveopt,pln,pts,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_coreduo,model_Conroe
>>
>> seems it only supports model_Conroe?
> and output of: virsh capabilities?
>
>
>> -Original Message-
>> From: Itamar Heim [mailto:ih...@redhat.com]
>> Sent: Thursday, September 20, 2012 10:04 PM
>> To: wujieke
>> Cc: node-de...@ovirt.org; users@ovirt.org
>> Subject: Re: [Users] non-operational state as host does not meet
clusters'
>> minimu CPU level.
>>
>> On 09/20/2012 12:19 PM, wujieke wrote:
>>> Hi everyone, if this isn't the right mailing list, please point me
>>> elsewhere. Thanks.
>>>
>>> I am trying to install oVirt on my Xeon E5-2650 processor on a Dell
>>> server, which is installed with Fedora 17. Then I create a new host,
>>> which is actually the same server that ovirt-engine is running on.
>>>
>>> The host is created and starts "Installing", but it ends in the
>>> "Non Operational" state.
>>>
>>> Error:
>>>
>>> Host CPU type is not compatible with cluster properties, missing CPU
>>> feature: model_sandybridge.
>>>
>>> But in my cluster I selected the "SandyBridge" CPU, and my Xeon E5 is
>>> also in the Sandy Bridge family. This error also led my server to reboot.
>>>
>>> Any help is appreciated.
>>>
>>> Btw: I have enabled Intel VT in the BIOS and modprobed the kvm and
>>> kvm-intel modules. Attached is a screenshot of the error.
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>> please send output of this command from the host (not engine) 
>> vdsClient -s 0 getVdsCaps | grep -i flags
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


  
[Attachment: output of "virsh capabilities" from the host, with its XML tags stripped by the archive. Recoverable details: host UUID 44454c4c-4b00-1059-8044-c6c04f463358; host CPU arch x86_64, model SandyBridge, vendor Intel; migration transport tcp; secmodel selinux (DOI 0); hvm guest support for both 32-bit and 64-bit via /usr/bin/qemu-system-x86_64 and /usr/bin/qemu-kvm, machine types pc-0.10 through pc-1.0 plus isapc.]

Re: [Users] non-operational state as host does not meet cluster's minimum CPU level.

2012-09-20 Thread Mark Wu

On 09/21/2012 01:01 PM, wujieke wrote:

I followed the wiki page to re-install oVirt with the all-in-one version. My
local host in oVirt is working now.
Thanks a lot.

Btw: the command "virsh capabilities" errors out:

[root@localhost ~]# virsh capabilities
Please enter your authentication name:
Please enter your password:
error: Failed to reconnect to the hypervisor
error: no valid connection
error: authentication failed: Failed to step SASL negotiation: -1 (SASL(-1):
generic failure: All-whitespace username.)

any idea?

Please try "virsh -r capabilities"


-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com]
Sent: Friday, September 21, 2012 12:44 PM
To: wujieke
Cc: node-de...@ovirt.org; users@ovirt.org
Subject: Re: [Users] non-operational state as host does not meet clusters'
minimu CPU level.

On 09/21/2012 03:54 AM, wujieke wrote:

[root@localhost ~]# vdsClient -s 0 getVdsCaps | grep -i flags
  cpuFlags =
fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_timer,aes,xsave,avx,lahf_lm,ida,arat,epb,xsaveopt,pln,pts,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_coreduo,model_Conroe

seems it only supports model_Conroe?

and output of: virsh capabilities?



-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com]
Sent: Thursday, September 20, 2012 10:04 PM
To: wujieke
Cc: node-de...@ovirt.org; users@ovirt.org
Subject: Re: [Users] non-operational state as host does not meet clusters'
minimu CPU level.

On 09/20/2012 12:19 PM, wujieke wrote:

Hi everyone, if this isn't the right mailing list, please point me
elsewhere. Thanks.

I am trying to install oVirt on my Xeon E5-2650 processor on a Dell
server, which is installed with Fedora 17. Then I create a new host,
which is actually the same server that ovirt-engine is running on.

The host is created and starts "Installing", but it ends in the
"Non Operational" state.

Error:

Host CPU type is not compatible with cluster properties, missing CPU
feature: model_sandybridge.

But in my cluster I selected the "SandyBridge" CPU, and my Xeon E5 is
also in the Sandy Bridge family. This error also led my server to reboot.

Any help is appreciated.

Btw: I have enabled Intel VT in the BIOS and modprobed the kvm and
kvm-intel modules. Attached is a screenshot of the error.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


please send output of this command from the host (not engine)
vdsClient -s 0 getVdsCaps | grep -i flags



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Can't login with the user 'admin'

2012-09-20 Thread Mark Wu

Yair,

After running engine-cleanup and then engine-setup,  the problem 
disappeared.  I don't know what happened.  Anyway,  thanks for your help!


Mark


On 09/19/2012 02:48 PM, Yair Zaslavsky wrote:

Not sure how informative it is in this case.
What also bothers me is that you got "failed to decrypt" when you used
engine-config -g.


When you start the jboss server, and look at the engine.log and 
server.log, do you see some prints with "failed to decrypt"?




On 09/19/2012 09:26 AM, Mark Wu wrote:

On 09/19/2012 02:04 PM, Yair Zaslavsky wrote:

Mark,
Can you please provide engine logs here?


Yair,

Here's the related engine log:

2012-09-19 14:24:37,899 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand]
(QuartzScheduler_Worker-72) XML RPC error in command GetCapabilitiesVDS
( Vds: host2 ), the error was: java.util.concurrent.ExecutionException:
java.lang.reflect.InvocationTargetException,
SunCertPathBuilderException: unable to find valid certification path to
requested target
2012-09-19 14:24:37,941 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand]
(QuartzScheduler_Worker-73) XML RPC error in command GetCapabilitiesVDS
( Vds: Host1 ), the error was: java.util.concurrent.ExecutionException:
java.lang.reflect.InvocationTargetException,
SunCertPathBuilderException: unable to find valid certification path to
requested target
2012-09-19 14:24:38,637 ERROR
[org.ovirt.engine.core.bll.LoginAdminUserCommand]
(ajp--127.0.0.1-8009-10) USER_FAILED_TO_AUTHENTICATE : admin
2012-09-19 14:24:38,637 WARN
[org.ovirt.engine.core.bll.LoginAdminUserCommand]
(ajp--127.0.0.1-8009-10) CanDoAction of action LoginAdminUser failed.
Reasons:USER_FAILED_TO_AUTHENTICATE
2012-09-19 14:24:39,999 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand]
(QuartzScheduler_Worker-77) XML RPC error in command GetCapabilitiesVDS
( Vds: Host1 ), the error was: java.util.concurrent.ExecutionException:
java.lang.reflect.InvocationTargetException,
SunCertPathBuilderException: unable to find valid certification path to
requested target
...



On 09/19/2012 09:02 AM, Mark Wu wrote:


After upgrading ovirt-engine (new version:
ovirt-engine-3.1.0-3.1345126685.git7649eed.fc17), I can't log in with
the user 'admin'. Here's my upgrade process:
yum remove ovirt-engine
yum install ovirt-engine
engine-setup (entering the same password for 'admin' as before)

The setup script finished successfully, but I can't log in with the
'admin' user. I tried to run engine-setup again, but it didn't help.

I also tried to change password with engine-config:

# engine-config -g AdminPassword
Failed to decrypt the current value
# engine-config -s AdminPassword=
'' is not a valid value for type Password.

It always complains that the value is not valid, whatever I input.

Has anyone hit this problem before? Any idea how to resolve it?
Thanks


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users








___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] non-operational state as host does not meet cluster's minimum CPU level.

2012-09-20 Thread wujieke
I followed the wiki page to re-install oVirt with the all-in-one version. My
local host in oVirt is working now.
Thanks a lot.

Btw: the command "virsh capabilities" errors out:

[root@localhost ~]# virsh capabilities
Please enter your authentication name:
Please enter your password:
error: Failed to reconnect to the hypervisor
error: no valid connection
error: authentication failed: Failed to step SASL negotiation: -1 (SASL(-1):
generic failure: All-whitespace username.)

any idea?

-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com] 
Sent: Friday, September 21, 2012 12:44 PM
To: wujieke
Cc: node-de...@ovirt.org; users@ovirt.org
Subject: Re: [Users] non-operational state as host does not meet clusters'
minimu CPU level.

On 09/21/2012 03:54 AM, wujieke wrote:
> [root@localhost ~]# vdsClient -s 0 getVdsCaps | grep -i flags
>  cpuFlags =
> fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_timer,aes,xsave,avx,lahf_lm,ida,arat,epb,xsaveopt,pln,pts,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_coreduo,model_Conroe
>
> seems it only supports model_Conroe?

and output of: virsh capabilities?



>
> -Original Message-
> From: Itamar Heim [mailto:ih...@redhat.com]
> Sent: Thursday, September 20, 2012 10:04 PM
> To: wujieke
> Cc: node-de...@ovirt.org; users@ovirt.org
> Subject: Re: [Users] non-operational state as host does not meet clusters'
> minimu CPU level.
>
> On 09/20/2012 12:19 PM, wujieke wrote:
>> Hi everyone, if this isn't the right mailing list, please point me
>> elsewhere. Thanks.
>>
>> I am trying to install oVirt on my Xeon E5-2650 processor on a Dell
>> server, which is installed with Fedora 17. Then I create a new host,
>> which is actually the same server that ovirt-engine is running on.
>>
>> The host is created and starts "Installing", but it ends in the
>> "Non Operational" state.
>>
>> Error:
>>
>> Host CPU type is not compatible with cluster properties, missing CPU
>> feature: model_sandybridge.
>>
>> But in my cluster I selected the "SandyBridge" CPU, and my Xeon E5 is
>> also in the Sandy Bridge family. This error also led my server to reboot.
>>
>> Any help is appreciated.
>>
>> Btw: I have enabled Intel VT in the BIOS and modprobed the kvm and
>> kvm-intel modules. Attached is a screenshot of the error.
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
> please send output of this command from the host (not engine) 
> vdsClient -s 0 getVdsCaps | grep -i flags
>


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Jenkins.ovirt.org

2012-09-20 Thread Bret Palsson
Never mind, guys. I just added "make srpm" in the Post Steps > Execute Shell
box. It works perfectly. I'll send it to koji for packaging.

Thanks!

-Bret
> Hi there! I've set up my own Jenkins server for building ovirt-engine. I'm
> wondering if I can get a copy of the ovirt-engine_create_rpms project
> properties. I imagine all the *_create_rpms projects have about the same
> setup, with some tweaks in the scripts?
> 
> Thanks!
> 
> -Bret
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] non-operational state as host does not meet cluster's minimum CPU level.

2012-09-20 Thread Itamar Heim

On 09/21/2012 03:54 AM, wujieke wrote:

[root@localhost ~]# vdsClient -s 0 getVdsCaps | grep -i flags
 cpuFlags =
fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_timer,aes,xsave,avx,lahf_lm,ida,arat,epb,xsaveopt,pln,pts,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_coreduo,model_Conroe

seems it only supports model_Conroe?


and output of: virsh capabilities?





-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com]
Sent: Thursday, September 20, 2012 10:04 PM
To: wujieke
Cc: node-de...@ovirt.org; users@ovirt.org
Subject: Re: [Users] non-operational state as host does not meet clusters'
minimu CPU level.

On 09/20/2012 12:19 PM, wujieke wrote:

Hi everyone, if this isn't the right mailing list, please point me
elsewhere. Thanks.

I am trying to install oVirt on my Xeon E5-2650 processor on a Dell
server, which is installed with Fedora 17. Then I create a new host,
which is actually the same server that ovirt-engine is running on.

The host is created and starts "Installing", but it ends in the
"Non Operational" state.

Error:

Host CPU type is not compatible with cluster properties, missing CPU
feature: model_sandybridge.

But in my cluster I selected the "SandyBridge" CPU, and my Xeon E5 is
also in the Sandy Bridge family. This error also led my server to reboot.

Any help is appreciated.

Btw: I have enabled Intel VT in the BIOS and modprobed the kvm and
kvm-intel modules. Attached is a screenshot of the error.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



please send output of this command from the host (not engine) vdsClient -s 0
getVdsCaps | grep -i flags




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Jenkins.ovirt.org

2012-09-20 Thread Bret Palsson
Hi there! I've set up my own Jenkins server for building ovirt-engine. I'm
wondering if I can get a copy of the ovirt-engine_create_rpms project
properties. I imagine all the *_create_rpms projects have about the same
setup, with some tweaks in the scripts?

Thanks!

-Bret
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] vdsm/engine do not like Infiniband

2012-09-20 Thread Dead Horse
I have updated the bug.

The only side effect I noted is that VDSM does not seem to be able to
read/report TX/RX stats for IB cards properly.
Also, the MAC address fields in the admin/user portals seem to be of
fixed size, and the IB card HW address overflows them.

- DHC

On Thu, Sep 20, 2012 at 3:14 AM, Dan Kenigsberg  wrote:

> On Fri, Sep 14, 2012 at 02:13:37PM -0500, Dead Horse wrote:
> > This is a test setup so no worries about future breakage via upgrade.
> > I ended up stopping the engine service, dumping the database, and
> > altering the vds_interface table's "mac_addr" column, increasing the
> > character varying length from 20 to 60.
> > I then restored the altered database and went about business as usual.
>
> Please note in the BZ that this is the only change that is required. It
> would make pushing this upstream much easier.
>
> Thanks!
>
> >
> > I had to make the edit offline because there are quite a few DB views and
> > rules dependent on that table.
> >
> > - DHC
> >
> > On Fri, Sep 14, 2012 at 2:51 AM, Itamar Heim  wrote:
> >
> > > On 09/14/2012 06:59 AM, Dead Horse wrote:
> > >
> > >> Bug opened BZ857294
> > >> (https://bugzilla.redhat.com/show_bug.cgi?id=857294)
>
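The workaround described above amounts to widening one column. A sketch of the offline change, under the assumptions stated in the message (engine service stopped, and the dependent views dropped and recreated around the change; their names are not shown here):

```sql
-- Widen vds_interface.mac_addr (varchar(20) -> varchar(60)) so an
-- Infiniband hardware address fits; per the message, this is the only
-- schema change required.
ALTER TABLE vds_interface
    ALTER COLUMN mac_addr TYPE character varying(60);
```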
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] non-operational state as host does not meet cluster's minimum CPU level.

2012-09-20 Thread wujieke
[root@localhost ~]# vdsClient -s 0 getVdsCaps | grep -i flags
cpuFlags =
fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_timer,aes,xsave,avx,lahf_lm,ida,arat,epb,xsaveopt,pln,pts,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_coreduo,model_Conroe

seems it only supports model_Conroe?
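The model_* entries in that flags list are what the engine compares against the cluster CPU level. A minimal sketch of the check, using an abbreviated copy of the flags quoted above (the variable contents and the exact capitalization of the SandyBridge flag are assumptions; the messages show both model_sandybridge and model_Conroe spellings):

```shell
#!/bin/sh
# vdsm encodes the CPU levels a host can satisfy as model_* pseudo-flags
# inside cpuFlags. The cluster's CPU level must match one of them.
FLAGS="fpu,vme,de,pse,vmx,aes,avx,model_coreduo,model_Conroe"  # abbreviated

if echo "$FLAGS" | tr ',' '\n' | grep -qx 'model_SandyBridge'; then
    echo "host satisfies the SandyBridge cluster level"
else
    echo "host only supports:" $(echo "$FLAGS" | tr ',' '\n' | grep '^model_')
fi
```

With the full list above the result is the same: only model_coreduo and model_Conroe appear, which matches the engine's complaint about the missing SandyBridge feature.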

-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com] 
Sent: Thursday, September 20, 2012 10:04 PM
To: wujieke
Cc: node-de...@ovirt.org; users@ovirt.org
Subject: Re: [Users] non-operational state as host does not meet clusters'
minimu CPU level.

On 09/20/2012 12:19 PM, wujieke wrote:
> Hi everyone, if this isn't the right mailing list, please point me
> elsewhere. Thanks.
>
> I am trying to install oVirt on my Xeon E5-2650 processor on a Dell
> server, which is installed with Fedora 17. Then I create a new host,
> which is actually the same server that ovirt-engine is running on.
>
> The host is created and starts "Installing", but it ends in the
> "Non Operational" state.
>
> Error:
>
> Host CPU type is not compatible with cluster properties, missing CPU
> feature: model_sandybridge.
>
> But in my cluster I selected the "SandyBridge" CPU, and my Xeon E5 is
> also in the Sandy Bridge family. This error also led my server to reboot.
>
> Any help is appreciated.
>
> Btw: I have enabled Intel VT in the BIOS and modprobed the kvm and
> kvm-intel modules. Attached is a screenshot of the error.
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

please send output of this command from the host (not engine) vdsClient -s 0
getVdsCaps | grep -i flags

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Is there a way to force remove a host?

2012-09-20 Thread Itamar Heim

On 09/20/2012 07:11 PM, Dominic Kaiser wrote:

Sorry, I did not explain.

I had tried to remove the host and had no luck troubleshooting it. I
then removed it and reused it as a storage unit, reinstalling Fedora
17. I foolishly thought that I could just remove the host manually. It
physically is not there (my fault, I know). Is there a way that you know
of to remove a host by brute force?


why can't you just move it to maint and delete it?
(you can right click and 'confirm host shutdown manually' to release any 
resources supposedly held by it)




dk

On Thu, Sep 20, 2012 at 12:00 PM, Eli Mesika <emes...@redhat.com> wrote:



- Original Message -
 > From: "Dominic Kaiser" mailto:domi...@bostonvineyard.org>>
 > To: users@ovirt.org 
 > Sent: Thursday, September 20, 2012 6:44:58 PM
 > Subject: [Users] Is there a way to force remove a host?
 >
 >
 > I could not remove the old host even when the others were up. Can I
 > force-remove it? I do not need it anymore.

Dominic, please attach engine/vdsm logs so we will be able to see
why the Host is not removed.
Thanks
 >
 >
 > --
 > Dominic Kaiser
 > Greater Boston Vineyard
 > Director of Operations
 >
 > cell: 617-230-1412 
 > fax: 617-252-0238 
 > email: domi...@bostonvineyard.org 
 >
 >
 >
 > ___
 > Users mailing list
 > Users@ovirt.org 
 > http://lists.ovirt.org/mailman/listinfo/users
 >




--
Dominic Kaiser
Greater Boston Vineyard
Director of Operations

cell: 617-230-1412
fax: 617-252-0238
email: domi...@bostonvineyard.org 




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] SPM not selected after host failed

2012-09-20 Thread Itamar Heim

On 09/20/2012 06:58 PM, Jorick Astrego wrote:

On 09/20/2012 04:36 PM, users-requ...@ovirt.org wrote:

Date: Thu, 20 Sep 2012 17:13:25 +0300
From: Itamar Heim
To: patrick.hurrelm...@lobster.de
Cc: users@ovirt.org
Subject: Re: [Users] SPM not selected after host failed
Message-ID: <505b2485.9080...@redhat.com>
Content-Type: text/plain; charset=ISO-8859-15; format=flowed

On 09/20/2012 05:09 PM, Patrick Hurrelmann wrote:

>On 20.09.2012 16:01, Itamar Heim wrote:

>>>Power management is configured for both nodes. But this might be the
>>>problem: we use the integrated IPMI over LAN power management, and
>>>if I pull the plug on the machine the power management becomes
>>>unavailable, too.
>>>
>>>Could this be the problem?

>>
>>yes... no auto recovery if it can't verify the node was fenced.
>>for your tests, maybe power off the machine as opposed to
>>"no power"?

>
>Ugh, this is ugly. I'm evaluating oVirt currently myself and have
>already suffered from a dead PSU that took down IPMI as well. I really
>don't want to imagine what happens if the host with the SPM goes down
>due to a power failure :/ Is there really no other way? I guess multiple
>fence devices are not possible right now. E.g. first try to fence via
>IPMI and, if that fails, pull the plug via an APC MasterSwitch. Any
>thoughts?

SPM would be down until you manually confirm shutdown in this case.
SPM doesn't affect running VMs on NFS/posix/local domains, and only
thinly provisioned VMs on block storage (iscsi/FC).

question, if no power, would the APC still work?
why not just use it to fence instead of IPMI?

(and helping us close the gap on support for multiple fence devices
would be great)


--

Message: 8
Date: Thu, 20 Sep 2012 16:24:47 +0200
From: Patrick Hurrelmann
To:users@ovirt.org
Subject: Re: [Users] SPM not selected after host failed
Message-ID:<505b272f.7000...@lobster.de>
Content-Type: text/plain; charset=ISO-8859-15

On 20.09.2012 16:13, Itamar Heim wrote:

>On 09/20/2012 05:09 PM, Patrick Hurrelmann wrote:

>>On 20.09.2012 16:01, Itamar Heim wrote:

 Power management is configured for both nodes. But this might be the
 problem: we use the integrated IPMI over LAN power management, and
 if I pull the plug on the machine the power management becomes
 unavailable, too.

Could this be the problem?

>>>
>>>yes... no auto recovery if it can't verify the node was fenced.
>>>for your tests, maybe power off the machine as opposed to
>>>"no power"?

>>
>>Ugh, this is ugly. I'm evaluating oVirt currently myself and have
>>already suffered from a dead PSU that took down IPMI as well. I
really
>>don't want to imagine what happens if the host with SPM goes down
due to
>>a power failure :/ Is there really no other way? I guess multiple
fence
>>devices are not possible right now. E.g. first try to fence via
IPMI and
>>if that fails pull the plug via APC MasterSwitch. Any thoughts?

>
>SPM would be down until you manually confirm shutdown in this case.
>SPM doesn't affect running VMs on NFS/posix/local domains, and only
>thinly provisioned VMs on block storage (iscsi/FC).
>
>question, if no power, would the APC still work?
>why not just use it to fence instead of IPMI?
>
>(and helping us close the gap on support for multiple fence devices
>would be great)
>

Ok, maybe I wasn't precise enough. With power failure I actually meant a
broken PSU on the server, and I won't be running any local/NFS storage,
only iSCSI.
But you're right that in such a situation fencing via APC would be
sufficient. I was mixing up my different environments. My lab only has
IPMI right now, while the live environment will most likely have APC as
well.

Regards
Patrick

We don't have an APC, but we have dual PSUs on two independent power
feeds with independent backup power. Would we be sufficiently protected?



It is a matter of risk management: if both fail, you will need to
manually fence the host to free the resources on it (VMs or the SPM role).
If both power supplies go down, you usually notice, and you have bigger
problems than this.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Is there a way to force remove a host?

2012-09-20 Thread Dominic Kaiser
Sorry, I did not explain.

I had tried to remove the host and had no luck troubleshooting it. I then
removed it and reused it as a storage unit, reinstalling Fedora 17. I
foolishly thought that I could just remove the host manually. It
physically is not there (my fault, I know). Is there a way that you know of
to remove a host by brute force?

dk

On Thu, Sep 20, 2012 at 12:00 PM, Eli Mesika  wrote:

>
>
> - Original Message -
> > From: "Dominic Kaiser" 
> > To: users@ovirt.org
> > Sent: Thursday, September 20, 2012 6:44:58 PM
> > Subject: [Users] Is there a way to force remove a host?
> >
> >
> > I could not remove the old host even when the others were up. Can I
> > force-remove it? I do not need it anymore.
>
> Dominic, please attach engine/vdsm logs so we will be able to see why the
> Host is not removed.
> Thanks
> >
> >
> > --
> > Dominic Kaiser
> > Greater Boston Vineyard
> > Director of Operations
> >
> > cell: 617-230-1412
> > fax: 617-252-0238
> > email: domi...@bostonvineyard.org
> >
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>



-- 
Dominic Kaiser
Greater Boston Vineyard
Director of Operations

cell: 617-230-1412
fax: 617-252-0238
email: domi...@bostonvineyard.org
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Is there a way to force remove a host?

2012-09-20 Thread Eli Mesika


- Original Message -
> From: "Dominic Kaiser" 
> To: users@ovirt.org
> Sent: Thursday, September 20, 2012 6:44:58 PM
> Subject: [Users] Is there a way to force remove a host?
> 
> 
> I could not remove the old host even when the others were up. Can I
> force-remove it? I do not need it anymore.

Dominic, please attach engine/vdsm logs so we will be able to see why the Host 
is not removed.
Thanks
> 
> 
> --
> Dominic Kaiser
> Greater Boston Vineyard
> Director of Operations
> 
> cell: 617-230-1412
> fax: 617-252-0238
> email: domi...@bostonvineyard.org
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


Re: [Users] SPM not selected after host failed

2012-09-20 Thread Jorick Astrego

On 09/20/2012 04:36 PM, users-requ...@ovirt.org wrote:

Date: Thu, 20 Sep 2012 17:13:25 +0300
From: Itamar Heim 
To: patrick.hurrelm...@lobster.de
Cc: users@ovirt.org
Subject: Re: [Users] SPM not selected after host failed
Message-ID: <505b2485.9080...@redhat.com>
Content-Type: text/plain; charset=ISO-8859-15; format=flowed

On 09/20/2012 05:09 PM, Patrick Hurrelmann wrote:

>On 20.09.2012 16:01, Itamar Heim wrote:

>>>Power management is configured for both nodes. But this might be the
>>>problem: we use the integrated IPMI over LAN power management - and
>>>if I pull the plug on the machine the power management becomes un-
>>>available, too.
>>>
>>>Could this be the problem?

>>
>>yes... no auto recovery if can't verify node was fenced.
>>for your tests, maybe power off the machine for your tests as opposed to
>>"no power"?

>
>Ugh, this is ugly. I'm evaluating oVirt currently myself and have
>already suffered from a dead PSU that took down IPMI as well. I really
>don't want to imagine what happens if the host with SPM goes down due to
>a power failure :/ Is there really no other way? I guess multiple fence
>devices are not possible right now. E.g. first try to fence via IPMI and
>if that fails pull the plug via APC MasterSwitch. Any thoughts?

SPM would be down until you manually confirm shutdown in this case.
SPM doesn't affect running VMs on NFS/posix/local domains, and only
thinly provisioned VMs on block storage (iscsi/FC).

question, if no power, would the APC still work?
why not just use it to fence instead of IPMI?

(and helping us close the gap on support for multiple fence devices
would be great)


--

Message: 8
Date: Thu, 20 Sep 2012 16:24:47 +0200
From: Patrick Hurrelmann
To: users@ovirt.org
Subject: Re: [Users] SPM not selected after host failed
Message-ID: <505b272f.7000...@lobster.de>
Content-Type: text/plain; charset=ISO-8859-15

On 20.09.2012 16:13, Itamar Heim wrote:

>On 09/20/2012 05:09 PM, Patrick Hurrelmann wrote:

>>On 20.09.2012 16:01, Itamar Heim wrote:

Power management is configured for both nodes. But this might be the
problem: we use the integrated IPMI over LAN power management - and
if I pull the plug on the machine the power management becomes un-
available, too.

Could this be the problem?

>>>
>>>yes... no auto recovery if can't verify node was fenced.
>>>for your tests, maybe power off the machine for your tests as opposed to
>>>"no power"?

>>
>>Ugh, this is ugly. I'm evaluating oVirt currently myself and have
>>already suffered from a dead PSU that took down IPMI as well. I really
>>don't want to imagine what happens if the host with SPM goes down due to
>>a power failure :/ Is there really no other way? I guess multiple fence
>>devices are not possible right now. E.g. first try to fence via IPMI and
>>if that fails pull the plug via APC MasterSwitch. Any thoughts?

>
>SPM would be down until you manually confirm shutdown in this case.
>SPM doesn't affect running VMs on NFS/posix/local domains, and only
>thinly provisioned VMs on block storage (iscsi/FC).
>
>question, if no power, would the APC still work?
>why not just use it to fence instead of IPMI?
>
>(and helping us close the gap on support for multiple fence devices
>would be great)
>

Ok, maybe I wasn't precise enough. With power failure I actually meant a
broken PSU on the server and I won't be running any local/NFS storage
but only iSCSI.
But you're right with your point that in such situation fencing via APC
would be sufficient. I was mixing my different environments. My lab only
has IPMI right now, while the live environment most likely will have APC
as well.

Regards
Patrick
We don't have an APC, but we have dual PSUs on two independent power 
feeds with independent backup power. Would we be sufficiently protected?


--
Kind Regards,

Netbulae
Jorick Astrego




[Users] Is there a way to force remove a host?

2012-09-20 Thread Dominic Kaiser
I could not remove the old host even if the others were up.  Can I force
remove it? I do not need it anymore.

-- 
Dominic Kaiser
Greater Boston Vineyard
Director of Operations

cell: 617-230-1412
fax: 617-252-0238
email: domi...@bostonvineyard.org


Re: [Users] [Engine-devel] base url of ovirt

2012-09-20 Thread Jon Thomas
On Thu, 2012-09-20 at 16:40 +0200, Juan Hernandez wrote:
> On 09/20/2012 04:36 PM, Jon Thomas wrote:
> > On Wed, 2012-09-19 at 10:53 +0200, Juan Hernandez wrote:
> >> On 09/19/2012 10:19 AM, Alon Bar-Lev wrote:
> >>>
> >>>
> >>> - Original Message -
>  From: "Juan Hernandez" 
>  To: "Jon Thomas" 
>  Cc: engine-de...@ovirt.org, users@ovirt.org
>  Sent: Wednesday, September 19, 2012 11:08:26 AM
>  Subject: Re: [Users] base url of ovirt
> 
>  Copying engine-devel, as I think this is something we should discuss
>  and
>  maybe do.
> 
>  On 09/18/2012 10:50 PM, Jon Thomas wrote:
> > Is there some config in the engine to set up the web interface base
> > url
> > so that instead of https://localhost.localdomain/ it is
> > https://localhost.localdomain/ovirt ?
> 
>  No, there is no such config.
> 
>  I think this should be the default, I mean, we should have this
>  /ovirt
>  prefix in all our URLs, to make coexistence with other users of the
>  web
>  server easy.
> >>>
> >>> Totally agree.
> >>>
> >>> We discussed that, Itamar agreed to go ahead URL change for 
> >>> ovirt-engine-4.0...
> >>>
> >>> Moving namespace out of root provides many advantages including including 
> >>> simpler apache configuration, easier to use proxies, ability to host 
> >>> multiple applications.
> >>>
> >>> Alon.
> >>
> >> Jon, as you see this will probably go in release 4.0, which is the
> >> future. Meanwhile if what you need is to use the web server with other
> >> applications you could try to replace the directives in
> >> /etc/httpd/conf.d/ovirt-engine.conf with the following:
> >>
> >> ProxyPassMatch ^/(ca.crt|engine.ssh.key.txt)$ ajp://localhost:8009/$1
> >> ProxyPassMatch ^/(api|webadmin|UserPortal|OvirtEngineWeb)(/.*)?$
> >> ajp://localhost:8009/$1$2
> >>
> >> That will change your configuration so that only the URLs that are
> >> really required for the engine will be redirected to it. A notable
> >> exception will be the welcome page, but you probably can live without
> >> it, just use the following to get to the UI:
> >>
> >> https://whatever.example.com/webadmin
> >> https://whatever.example.com/UserPortal
> >>
> >> Take into account that ProxyPassMatch directives are processed in the
> >> order they appear, so if you have another application with conflicting
> >> ProxyPassMatch directives the result can be unexpected.
> > 
> > Thx, that worked better than what I was trying. BTW, I still get the
> > welcome page. I also had to add
> > 
> > WSGIScriptAlias /auth/login 
> > /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
> > 
> > to openstack-dashboard.conf for dashboard to work.
> 
> It is strange that you still get the welcome page, you shouldn't. Maybe
> it is in your browser's cache. Try to reload it.

So it's at http://localhost:8080/ and the default fedora test page is at
http://localhost

> 
> Take into account that I may have missed some of the required URLs. I
> would really appreciate if you report back here if you have any further
> problem.




Re: [Users] [Engine-devel] base url of ovirt

2012-09-20 Thread Juan Hernandez
On 09/20/2012 04:36 PM, Jon Thomas wrote:
> On Wed, 2012-09-19 at 10:53 +0200, Juan Hernandez wrote:
>> On 09/19/2012 10:19 AM, Alon Bar-Lev wrote:
>>>
>>>
>>> - Original Message -
 From: "Juan Hernandez" 
 To: "Jon Thomas" 
 Cc: engine-de...@ovirt.org, users@ovirt.org
 Sent: Wednesday, September 19, 2012 11:08:26 AM
 Subject: Re: [Users] base url of ovirt

 Copying engine-devel, as I think this is something we should discuss
 and
 maybe do.

 On 09/18/2012 10:50 PM, Jon Thomas wrote:
> Is there some config in the engine to set up the web interface base
> url
> so that instead of https://localhost.localdomain/ it is
> https://localhost.localdomain/ovirt ?

 No, there is no such config.

 I think this should be the default, I mean, we should have this
 /ovirt
 prefix in all our URLs, to make coexistence with other users of the
 web
 server easy.
>>>
>>> Totally agree.
>>>
>>> We discussed that, Itamar agreed to go ahead URL change for 
>>> ovirt-engine-4.0...
>>>
>>> Moving namespace out of root provides many advantages including including 
>>> simpler apache configuration, easier to use proxies, ability to host 
>>> multiple applications.
>>>
>>> Alon.
>>
>> Jon, as you see this will probably go in release 4.0, which is the
>> future. Meanwhile if what you need is to use the web server with other
>> applications you could try to replace the directives in
>> /etc/httpd/conf.d/ovirt-engine.conf with the following:
>>
>> ProxyPassMatch ^/(ca.crt|engine.ssh.key.txt)$ ajp://localhost:8009/$1
>> ProxyPassMatch ^/(api|webadmin|UserPortal|OvirtEngineWeb)(/.*)?$
>> ajp://localhost:8009/$1$2
>>
>> That will change your configuration so that only the URLs that are
>> really required for the engine will be redirected to it. A notable
>> exception will be the welcome page, but you probably can live without
>> it, just use the following to get to the UI:
>>
>> https://whatever.example.com/webadmin
>> https://whatever.example.com/UserPortal
>>
>> Take into account that ProxyPassMatch directives are processed in the
>> order they appear, so if you have another application with conflicting
>> ProxyPassMatch directives the result can be unexpected.
> 
> Thx, that worked better than what I was trying. BTW, I still get the
> welcome page. I also had to add
> 
> WSGIScriptAlias /auth/login 
> /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
> 
> to openstack-dashboard.conf for dashboard to work.

It is strange that you still get the welcome page, you shouldn't. Maybe
it is in your browser's cache. Try to reload it.

Take into account that I may have missed some of the required URLs. I
would really appreciate if you report back here if you have any further
problem.
-- 
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.
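Juan's warning about directive ordering can be illustrated concretely. If a second application (a hypothetical /dashboard app here, not one from this thread) is proxied on the same Apache server, a broad pattern placed before the engine's rules would shadow them, so the more specific engine patterns should come first:

```apache
# Illustrative ordering of proxy rules in httpd conf.d.
# Apache processes ProxyPassMatch directives in order, so the
# specific engine patterns must precede any broader rule.
ProxyPassMatch ^/(ca.crt|engine.ssh.key.txt)$ ajp://localhost:8009/$1
ProxyPassMatch ^/(api|webadmin|UserPortal|OvirtEngineWeb)(/.*)?$ ajp://localhost:8009/$1$2
# Hypothetical second application; if its pattern overlapped the
# engine URLs and appeared first, the engine rules would never match.
ProxyPass /dashboard http://localhost:8081/dashboard
```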


Re: [Users] [Engine-devel] base url of ovirt

2012-09-20 Thread Jon Thomas
On Wed, 2012-09-19 at 10:53 +0200, Juan Hernandez wrote:
> On 09/19/2012 10:19 AM, Alon Bar-Lev wrote:
> > 
> > 
> > - Original Message -
> >> From: "Juan Hernandez" 
> >> To: "Jon Thomas" 
> >> Cc: engine-de...@ovirt.org, users@ovirt.org
> >> Sent: Wednesday, September 19, 2012 11:08:26 AM
> >> Subject: Re: [Users] base url of ovirt
> >>
> >> Copying engine-devel, as I think this is something we should discuss
> >> and
> >> maybe do.
> >>
> >> On 09/18/2012 10:50 PM, Jon Thomas wrote:
> >>> Is there some config in the engine to set up the web interface base
> >>> url
> >>> so that instead of https://localhost.localdomain/ it is
> >>> https://localhost.localdomain/ovirt ?
> >>
> >> No, there is no such config.
> >>
> >> I think this should be the default, I mean, we should have this
> >> /ovirt
> >> prefix in all our URLs, to make coexistence with other users of the
> >> web
> >> server easy.
> > 
> > Totally agree.
> > 
> > We discussed that, Itamar agreed to go ahead URL change for 
> > ovirt-engine-4.0...
> > 
> > Moving namespace out of root provides many advantages including including 
> > simpler apache configuration, easier to use proxies, ability to host 
> > multiple applications.
> > 
> > Alon.
> 
> Jon, as you see this will probably go in release 4.0, which is the
> future. Meanwhile if what you need is to use the web server with other
> applications you could try to replace the directives in
> /etc/httpd/conf.d/ovirt-engine.conf with the following:
> 
> ProxyPassMatch ^/(ca.crt|engine.ssh.key.txt)$ ajp://localhost:8009/$1
> ProxyPassMatch ^/(api|webadmin|UserPortal|OvirtEngineWeb)(/.*)?$
> ajp://localhost:8009/$1$2
> 
> That will change your configuration so that only the URLs that are
> really required for the engine will be redirected to it. A notable
> exception will be the welcome page, but you probably can live without
> it, just use the following to get to the UI:
> 
> https://whatever.example.com/webadmin
> https://whatever.example.com/UserPortal
> 
> Take into account that ProxyPassMatch directives are processed in the
> order they appear, so if you have another application with conflicting
> ProxyPassMatch directives the result can be unexpected.

Thx, that worked better than what I was trying. BTW, I still get the
welcome page. I also had to add

WSGIScriptAlias /auth/login 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi

to openstack-dashboard.conf for dashboard to work.



> 




Re: [Users] SPM not selected after host failed

2012-09-20 Thread Patrick Hurrelmann
On 20.09.2012 16:13, Itamar Heim wrote:
> On 09/20/2012 05:09 PM, Patrick Hurrelmann wrote:
>> On 20.09.2012 16:01, Itamar Heim wrote:
 Power management is configured for both nodes. But this might be the
 problem: we use the integrated IPMI over LAN power management - and
 if I pull the plug on the machine the power management becomes un-
 available, too.

 Could this be the problem?
>>>
>>> yes... no auto recovery if can't verify node was fenced.
>>> for your tests, maybe power off the machine for your tests as opposed to
>>> "no power"?
>>
>> Ugh, this is ugly. I'm evaluating oVirt currently myself and have
>> already suffered from a dead PSU that took down IPMI as well. I really
>> don't want to imagine what happens if the host with SPM goes down due to
>> a power failure :/ Is there really no other way? I guess multiple fence
>> devices are not possible right now. E.g. first try to fence via IPMI and
>> if that fails pull the plug via APC MasterSwitch. Any thoughts?
> 
> SPM would be down until you manually confirm shutdown in this case.
> SPM doesn't affect running VMs on NFS/posix/local domains, and only 
> thinly provisioned VMs on block storage (iscsi/FC).
> 
> question, if no power, would the APC still work?
> why not just use it to fence instead of IPMI?
> 
> (and helping us close the gap on support for multiple fence devices 
> would be great)
> 

Ok, maybe I wasn't precise enough. With power failure I actually meant a
broken PSU on the server and I won't be running any local/NFS storage
but only iSCSI.
But you're right with your point that in such situation fencing via APC
would be sufficient. I was mixing my different environments. My lab only
has IPMI right now, while the live environment most likely will have APC
as well.

Regards
Patrick

-- 
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich


Re: [Users] SPM not selected after host failed

2012-09-20 Thread Itamar Heim

On 09/20/2012 05:09 PM, Patrick Hurrelmann wrote:

On 20.09.2012 16:01, Itamar Heim wrote:

Power management is configured for both nodes. But this might be the
problem: we use the integrated IPMI over LAN power management - and
if I pull the plug on the machine the power management becomes un-
available, too.

Could this be the problem?


yes... no auto recovery if can't verify node was fenced.
for your tests, maybe power off the machine for your tests as opposed to
"no power"?


Ugh, this is ugly. I'm evaluating oVirt currently myself and have
already suffered from a dead PSU that took down IPMI as well. I really
don't want to imagine what happens if the host with SPM goes down due to
a power failure :/ Is there really no other way? I guess multiple fence
devices are not possible right now. E.g. first try to fence via IPMI and
if that fails pull the plug via APC MasterSwitch. Any thoughts?


SPM would be down until you manually confirm shutdown in this case.
SPM doesn't affect running VMs on NFS/posix/local domains, and only 
thinly provisioned VMs on block storage (iscsi/FC).


question, if no power, would the APC still work?
why not just use it to fence instead of IPMI?

(and helping us close the gap on support for multiple fence devices 
would be great)

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] SPM not selected after host failed

2012-09-20 Thread Patrick Hurrelmann
On 20.09.2012 16:01, Itamar Heim wrote:
>> Power management is configured for both nodes. But this might be the
>> problem: we use the integrated IPMI over LAN power management - and
>> if I pull the plug on the machine the power management becomes un-
>> available, too.
>>
>> Could this be the problem?
> 
> yes... no auto recovery if can't verify node was fenced.
> for your tests, maybe power off the machine for your tests as opposed to 
> "no power"?

Ugh, this is ugly. I'm evaluating oVirt currently myself and have
already suffered from a dead PSU that took down IPMI as well. I really
don't want to imagine what happens if the host with SPM goes down due to
a power failure :/ Is there really no other way? I guess multiple fence
devices are not possible right now. E.g. first try to fence via IPMI and
if that fails pull the plug via APC MasterSwitch. Any thoughts?

Regards
Patrick

-- 
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich


Re: [Users] non-operational state as host does not meet clusters' minimu CPU level.

2012-09-20 Thread Itamar Heim

On 09/20/2012 12:19 PM, wujieke wrote:

Hi everyone, if this is not the right mailing list, please point it out. Thanks.

I am trying to install oVirt on a Dell server with a Xeon E5-2650
processor, running Fedora 17. I then create a new host, which is
actually the same server that ovirt-engine is running on.

The host is created and starts "installing", but it ends in the "Non
Operational" state.

Error:

Host CPU type is not compatible with cluster properties, missing CPU
feature: model_sandybridge.

But in my cluster I selected the "SandyBridge" CPU type, and my Xeon E5
is in the Sandy Bridge family.  This error also caused my server to reboot.

Any help is appreciated.

Btw: I have enabled Intel VT in the BIOS and loaded the kvm and kvm-intel
modules. Attached is a screenshot of the error.






please send output of this command from the host (not engine)
vdsClient -s 0 getVdsCaps | grep -i flags
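The engine's "missing CPU feature: model_sandybridge" error essentially means a required flag is absent from the host's reported capability list. A toy illustration of that check (the required-flag set below is an assumption for demonstration only; the authoritative list is whatever vdsClient reports and the engine expects):

```python
# Sketch: decide whether a flags line (as grepped from getVdsCaps or
# /proc/cpuinfo) satisfies a CPU level. SANDYBRIDGE_HINTS is an
# illustrative subset, not the engine's real feature definition.

SANDYBRIDGE_HINTS = {"avx", "aes", "xsave"}

def missing_features(flags_line, required=SANDYBRIDGE_HINTS):
    """Return the sorted list of `required` flags absent from `flags_line`."""
    present = set(flags_line.split())
    return sorted(required - present)
```

If the returned list is non-empty for the host's flags, the cluster CPU level would have to be lowered (or virtualization extensions enabled) to match.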



Re: [Users] SPM not selected after host failed

2012-09-20 Thread Marc-Christian Schröer | ingenit GmbH & Co. KG
Am 20.09.2012 15:34, schrieb Itamar Heim:

Hello Itamar,

thank you for your answer.

> is power management configured on both hosts?
> since the non responsive node happened to be the SPM, it must be fenced.
> engine should do this automatically (and this is what you did manually by 
> 'confirm host has been rebooted').
> but engine can only do this automatically if power management is configured 
> on both hosts.

Power management is configured for both nodes. But this might be the
problem: we use the integrated IPMI over LAN power management - and
if I pull the plug on the machine, the power management becomes
unavailable, too.

Could this be the problem?

Kind regards,
  Marc

-- 


 Dipl.-Inform. Marc-Christian Schröer  schro...@ingenit.com
 Geschäftsführer / CEO
 --
 ingenit GmbH & Co. KG   Tel. +49 (0)231 58 698-120
 Emil-Figge-Strasse 76-80Fax. +49 (0)231 58 698-121
 D-44227 Dortmund   www.ingenit.com

 Registergericht: Amtsgericht Dortmund, HRA 13 914
 Gesellschafter : Thomas Klute, Marc-Christian Schröer




Re: [Users] non-operational state as host does not meet clusters' minimu CPU level.

2012-09-20 Thread Doron Fediuck
wujieke, 
Can you please run on the host: 
vdsClient 0 getVdsCaps 

and post the results? 

- Original Message -

> From: "wujieke" 
> To: node-de...@ovirt.org, users@ovirt.org
> Cc: "wujieke" 
> Sent: Thursday, September 20, 2012 12:19:10 PM
> Subject: [Users] non-operational state as host does not meet
> clusters' minimu CPU level.

> Hi everyone, if this is not the right mailing list, please point it
> out. Thanks.

> I am trying to install oVirt on a Dell server with a Xeon E5-2650
> processor, running Fedora 17. I then create a new host, which is
> actually the same server that ovirt-engine is running on.
> The host is created and starts "installing", but it ends in the "Non
> Operational" state.
> Error:

> Host CPU type is not compatible with cluster properties, missing CPU
> feature: model_sandybridge.

> But in my cluster I selected the "SandyBridge" CPU type, and my Xeon
> E5 is in the Sandy Bridge family. This error also caused my server to
> reboot.

> Any help is appreciated.
> Btw: I have enabled Intel VT in the BIOS and loaded the kvm and
> kvm-intel modules. Attached is a screenshot of the error.

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] SPM not selected after host failed

2012-09-20 Thread Itamar Heim
On 09/20/2012 04:55 PM, "Marc-Christian Schröer | ingenit GmbH & Co. KG" 
wrote:

Am 20.09.2012 15:34, schrieb Itamar Heim:

Hello Itamar,

thank you for your answer.


is power management configured on both hosts?
since the non responsive node happened to be the SPM, it must be fenced.
engine should do this automatically (and this is what you did manually by 
'confirm host has been rebooted').
but engine can only do this automatically if power management is configured on 
both hosts.


Power management is configured for both nodes. But this might be the
problem: we use the integrated IPMI over LAN power management - and
if I pull the plug on the machine the power management becomes un-
available, too.

Could this be the problem?


yes... no auto recovery if can't verify node was fenced.
for your tests, maybe power off the machine for your tests as opposed to 
"no power"?


you could use the APC, but if there's no power, even the APC won't reply to fencing.



Kind regards,
   Marc







Re: [Users] Autostarting VMs ?

2012-09-20 Thread Oved Ourfalli


- Original Message -
> From: "Rob Coward" 
> To: users@ovirt.org
> Sent: Thursday, September 20, 2012 4:44:54 PM
> Subject: [Users] Autostarting VMs ?
> 
> Hi,
> I'm new to oVirt and currently just have a single server installed
> with
> the allinone setup. All seems to be working well atm, apart from a
> small
> gripe about spice consoles not working (out of the box) for non-linux
> admin consoles. I just have one question that I hope someone on this
> list might be able to help me with.
> 
> How do you configure oVirt to auto-start vms when it starts after a
> system boot ? I assume that there must be a way for this to happen
> and
> I'm just missing a really obvious option right under my nose.
> 
There is currently no such option for individual VMs.
There is a pre-start option for VM pools, where you set the number of VMs you 
would like to be available from the pool; in that case the engine will make 
sure (if possible) to start that number of VMs automatically.
See the feature page at http://wiki.ovirt.org/wiki/Features/PrestartedVm


> Thanks in advance,
> Rob
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


Re: [Users] Autostarting VMs ?

2012-09-20 Thread Itamar Heim

On 09/20/2012 04:44 PM, Rob Coward wrote:

Hi,
I'm new to oVirt and currently just have a single server installed with
the allinone setup. All seems to be working well atm, apart from a small
gripe about spice consoles not working (out of the box) for non-linux
admin consoles. I just have one question that I hope someone on this
list might be able to help me with.


do you mean windows admins?
http://wiki.ovirt.org/wiki/How_to_Connect_to_SPICE_Console_With_Portal

would be nice if someone will rpm-ify these manual steps.



How do you configure oVirt to auto-start vms when it starts after a
system boot ? I assume that there must be a way for this to happen and
I'm just missing a really obvious option right under my nose.


not yet. though you can write a script leveraging the ovirt api, sdk or 
cli to do so
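Itamar's suggestion of a boot-time script can be sketched as follows. The `api` object stands in for an SDK connection (the ovirtsdk API object roughly matched this shape at the time, but the exact attribute names used here are assumptions, so treat them as illustrative):

```python
# Sketch: start every VM that is currently down. `api` is any object
# exposing api.vms.list() -> iterable of VMs with .name, .status and
# .start(); the attribute names are assumed, not a documented API.

def start_down_vms(api, wanted=None):
    """Start all (or only the named subset of) VMs that are down.

    Returns the list of VM names that were started.
    """
    started = []
    for vm in api.vms.list():
        if wanted is not None and vm.name not in wanted:
            continue               # restrict to an explicit allow-list
        if vm.status == "down":
            vm.start()
            started.append(vm.name)
    return started
```

Run from an init script (or cron @reboot) after the engine is up, such a helper approximates the missing auto-start feature; the `wanted` set limits it to VMs you actually want resurrected.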



[Users] Autostarting VMs ?

2012-09-20 Thread Rob Coward

Hi,
I'm new to oVirt and currently just have a single server installed with 
the allinone setup. All seems to be working well atm, apart from a small 
gripe about spice consoles not working (out of the box) for non-linux 
admin consoles. I just have one question that I hope someone on this 
list might be able to help me with.


How do you configure oVirt to auto-start vms when it starts after a 
system boot ? I assume that there must be a way for this to happen and 
I'm just missing a really obvious option right under my nose.


Thanks in advance,
Rob


Re: [Users] SPM not selected after host failed

2012-09-20 Thread Itamar Heim
On 09/20/2012 09:02 AM, "Marc-Christian Schröer | ingenit GmbH & Co. KG" 
wrote:

Hello all,

we are currently in the process of evaluating oVirt as a basis for our
new virtualization environment. As far as our evaluation has progressed,
it seems to be the way to go, but when testing the high availability
features I ran into a serious problem:

Our testing setup looks like this: 2 hosts on Dell R210 and R210II machines,
and a separate machine running the managing application in JBoss and providing
storage space through NFS. Under normal conditions everything works fine:
I can migrate machines between the two nodes, I can add a third node,
access everything by VNC, and monitor the VMs really nicely; the power management
feature of the R210s works just fine.

Then, when simulating the loss of a host by pulling the plug on the machine
(yes, that is kind of a crude check), some things seem to go terribly wrong:
the system detects the host being unresponsive and assumes it is down. But
the host happens to be the SPM, and the other does not take over this function.
This leaves the whole cluster in an unresponsive state and my datacenter
is gone. I tracked down the problem in the log files to the point where
the engine tries to migrate the SPM to another node:

2012-09-20 07:54:40,836 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(QuartzScheduler_Worker-60) SPM selection - vds seems as spm node03
2012-09-20 07:54:40,837 WARN  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(QuartzScheduler_Worker-60) spm vds is non responsive, stopping spm selection.
2012-09-20 07:54:44,344 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] 
(QuartzScheduler_Worker-51) XML RPC error in command GetCapabilitiesVDS ( Vds: 
node03 ),
the error was: java.util.concurrent.ExecutionException: 
java.lang.reflect.InvocationTargetException, NoRouteToHostException: Keine 
Route zum Zielrechner
2012-09-20 07:54:47,345 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] 
(QuartzScheduler_Worker-47) XML RPC error in command GetCapabilitiesVDS ( Vds: 
node03 ),
the error was: java.util.concurrent.ExecutionException: 
java.lang.reflect.InvocationTargetException, NoRouteToHostException: Keine 
Route zum Zielrechner
2012-09-20 07:54:50,869 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(QuartzScheduler_Worker-69) hostFromVds::selectedVds - node04, spmStatus Free, 
storage
pool ingenit
2012-09-20 07:54:50,892 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(QuartzScheduler_Worker-69) SPM Init: could not find reported vds or not up -
pool:ingenit vds_spm_id: 2
2012-09-20 07:54:50,905 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(QuartzScheduler_Worker-69) SPM selection - vds seems as spm node03

As far as I understand these logs, the engine detects node03 not being
responsive, starts electing a new SPM but does not find node04. That is
strange as the host is online, pingable and worked just fine as part of
the cluster.

What I can do to remedy the situation is use the management interface to
set "Confirm Host has been rebooted" and then switch the host into maintenance
mode. After that the responsive node takes over and the VMs are
migrated, too.

Has anyone experienced a similar problem? Is this by design, and killing
off the SPM is just a bad coincidence that always requires manual intervention?
I would hope not :-)

I tried to google some answers, but aside from a thread in May that did
not help I came up empty.

Thanks in advance for all the help...

Kind regards from Germany,
   Marc



is power management configured on both hosts?
since the non-responsive node happened to be the SPM, it must be fenced.
engine should do this automatically (and this is what you did manually 
by 'confirm host has been rebooted').
but engine can only do this automatically if power management is 
configured on both hosts.




Re: [Users] HA: Re: HP Integrated Lights Out 3

2012-09-20 Thread Mike Burns
On Thu, 2012-09-20 at 06:51 -0400, Doron Fediuck wrote:
> 
> __
> From: "Dmitriy A Pyryakov" 
> To: "Itamar Heim" 
> Cc: "Mike Burns" , users@ovirt.org
> Sent: Thursday, September 20, 2012 1:13:55 PM
> Subject: [Users] HA: Re:  HP Integrated Lights Out 3
> 
> 
> 
> Itamar Heim  wrote on 20.09.2012 16:01:54:
> 
> > From: Itamar Heim 
> > To: Eli Mesika 
> > Cc: Dmitriy A Pyryakov , 
> > users@ovirt.org, Roy Golan , Mike Burns
> 
> > Date: 20.09.2012 16:02
> > Subject: Re: [Users] HP Integrated Lights Out 3
> > 
> > On 09/20/2012 12:58 PM, Eli Mesika wrote:
> > >
> > >
> > > - Original Message -
> > >> From: "Dmitriy A Pyryakov" 
> > >> To: "Eli Mesika" 
> > >> Cc: "Itamar Heim" , users@ovirt.org
> > >> Sent: Thursday, September 20, 2012 12:05:58 PM
> > >> Subject: Re: [Users] HP Integrated Lights Out 3
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> Eli Mesika  wrote on 20.09.2012 14:55:41:
> > >>> From: Eli Mesika 
> > >>> To: Dmitriy A Pyryakov 
> > >>> Cc: users@ovirt.org, Itamar Heim 
> > >>> Date: 20.09.2012 14:55
> > >>> Subject: Re: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out
> > >>> 3
> > >>>
> > >>>
> > >>>
> > >>> - Original Message -
> >  From: "Dmitriy A Pyryakov" 
> >  To: "Itamar Heim" 
> >  Cc: users@ovirt.org
> >  Sent: Thursday, September 20, 2012 9:59:34 AM
> >  Subject: [Users] HA: Re: HA: Re: HA: Re: HP Integrated
> Lights Out
> >  3
> > 
> > 
> > 
> > 
> > 
> > 
> >  I change Fedora 17 hosts to ovirt nodes (first -
> 2.5.0-2.0.fc17,
> > 
> > please note editing a file on an ovirt node requires you to
> persist it, 
> > or it will be lost in next boot.
> > mike can explain this better than me.
> 
> What can I do to save my configuration changes at boot time?
> 
> 
> 
> Added Mike here as well.
> Just FYI, most relevant conf' files are persisted during approval or
> installation.
> iirc there's a script you can use to persist a specific file.
> Something like:
> /usr/libexec/ovirt-functions ovirt_store_config 

Just run:

# persist /path/to/file

As for the py file you want to update, it's not easy to do.  

  * on ovirt-node, run # mount -o remount,rw /
  * Get the original .py from either git or a Fedora host and put it
in the right place on ovirt-node
  * edit the .py file
  * python -m compileall /path/to/python/file
  * persist /path/to/python/file.pyc
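The steps above can be tried safely on a throwaway module first. This is only a sketch: the module name and paths are made up, and `python3` stands in for the Python 2.7 `python` found on the 2012-era nodes.

```shell
# stand-in for the vdsm .py being patched -- a throwaway module in a temp dir
tmp=$(mktemp -d)
echo 'VALUE = 42' > "$tmp/example.py"

# equivalent of the step: python -m compileall /path/to/python/file
python3 -m compileall -q "$tmp"

# the generated bytecode is the file you would then `persist` on oVirt Node
# (it lands next to the source on Python 2, under __pycache__ on Python 3)
find "$tmp" -name '*.pyc'
```

On the node itself, the same compile step is run against the real file under /usr/share/vdsm, and the resulting .pyc is the path given to `persist`.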

> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Fatal error during migration

2012-09-20 Thread Mike Burns
On Thu, 2012-09-20 at 06:46 -0400, Doron Fediuck wrote:
> 
> __
> From: "Dmitriy A Pyryakov" 
> To: "Michal Skrivanek" 
> Cc: users@ovirt.org
> Sent: Thursday, September 20, 2012 1:34:46 PM
> Subject: Re: [Users] Fatal error during migration
> 
> 
> 
> Michal Skrivanek  wrote on
> 20.09.2012 16:23:31:
> 
> > From: Michal Skrivanek 
> > To: Dmitriy A Pyryakov 
> > Cc: users@ovirt.org
> > Date: 20.09.2012 16:24
> > Subject: Re: [Users] Fatal error during migration
> > 
> > 
> > On Sep 20, 2012, at 12:19 , Dmitriy A Pyryakov wrote:
> > 
> > > Michal Skrivanek  wrote on
> > > 20.09.2012 16:13:16:
> > > 
> > > > From: Michal Skrivanek 
> > > > To: Dmitriy A Pyryakov 
> > > > Cc: users@ovirt.org
> > > > Date: 20.09.2012 16:13
> > > > Subject: Re: [Users] Fatal error during migration
> > > > 
> > > > 
> > > > On Sep 20, 2012, at 12:07 , Dmitriy A Pyryakov wrote:
> > > > 
> > > > > Michal Skrivanek 
> > > > > wrote on 20.09.2012 16:02:11:
> > > > > 
> > > > > > From: Michal Skrivanek 
> > > > > > To: Dmitriy A Pyryakov 
> > > > > > Cc: users@ovirt.org
> > > > > > Date: 20.09.2012 16:02
> > > > > > Subject: Re: [Users] Fatal error during migration
> > > > > > 
> > > > > > Hi,
> > > > > > well, so what is the other side saying? Maybe some
> connectivity 
> > > > > > problems between those 2 hosts? firewall? 
> > > > > > 
> > > > > > Thanks,
> > > > > > michal
> > > > > 
> > > > > Yes, the firewall is not configured properly by default. If I stop
> > > > > it, the migration succeeds.
> > > > > Thanks.
> > > > The default is supposed to be:
> > > > 
> > > > # oVirt default firewall configuration. Automatically
> generated by 
> > > > vdsm bootstrap script.
> > > > *filter
> > > > :INPUT ACCEPT [0:0]
> > > > :FORWARD ACCEPT [0:0]
> > > > :OUTPUT ACCEPT [0:0]
> > > > -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
> > > > -A INPUT -p icmp -j ACCEPT
> > > > -A INPUT -i lo -j ACCEPT
> > > > # vdsm
> > > > -A INPUT -p tcp --dport 54321 -j ACCEPT
> > > > # libvirt tls
> > > > -A INPUT -p tcp --dport 16514 -j ACCEPT
> > > > # SSH
> > > > -A INPUT -p tcp --dport 22 -j ACCEPT
> > > > # guest consoles
> > > > -A INPUT -p tcp -m multiport --dports 5634:6166 -j
> ACCEPT
> > > > # migration
> > > > -A INPUT -p tcp -m multiport --dports 49152:49216 -j
> ACCEPT
> > > > # snmp
> > > > -A INPUT -p udp --dport 161 -j ACCEPT
> > > > # Reject any other input traffic
> > > > -A INPUT -j REJECT --reject-with icmp-host-prohibited
> > > > -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT
> --reject-with
> > > > icmp-host-prohibited
> > > > COMMIT
> > > 
> > > my default is:
> > > 
> > > # cat /etc/sysconfig/iptables
> > > # oVirt automatically generated firewall configuration
> > > *filter
> > > :INPUT ACCEPT [0:0]
> > > :FORWARD ACCEPT [0:0]
> > > :OUTPUT ACCEPT [0:0]
> > > -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
> > > -A INPUT -p icmp -j ACCEPT
> > > -A INPUT -i lo -j ACCEPT
> > > #vdsm
> > > -A INPUT -p tcp --dport 54321 -j ACCEPT
> > > # SSH
> > > -A INPUT -p tcp --dport 22 -j ACCEPT
> > > # guest consoles
> > > -A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
> > > # migration
> > > -A INPUT -p tcp -m multiport --dports 49152:49216 -j
> ACCEPT
> > > # snmp
> > > -A INPUT -p udp --dport 161 -j ACCEPT
> > > #
> > > -A INPUT -j REJECT --reject-with icmp-host-prohibited
> > > -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT
> --reject-
> > with icmp-host-prohibited
> > > COMMIT
> > > 
> > > > 
> > > > did you change it manually or is the default missing
> anything?
> > > 
> > > the default is missing the "libvirt tls" rule.
> > was it an upgrade of some sort?
> No.
> 
> > These are installed at node setup 
> > from ovirt-engine. Check the engine version and/or the 
> > IPTablesConfig in vdc_options table on engine
> 
> oVirt engine version: 3.1.0-2.fc17
> 
> engine=# select * from vdc_options where option_id=100;
> option_id 

[Users] ovirt-cli 3.2.0.3 - important changes

2012-09-20 Thread Michael Pasternak

Two commands were renamed:

- The "create" command was renamed to "add" (#855773).
- The "delete" command was renamed to "remove" (#855769).


The authentication procedure changed:

- Added username/password prompt and configuration-file functionality (see [1]
for more details).

[1] http://wiki.ovirt.org/wiki/CLI#Connect


* For a complete list of changes, see the change log.

-- 

Michael Pasternak
RedHat, ENG-Virtualization R&D
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] where do bond interfaces come from when adding a node that is FC17

2012-09-20 Thread Dennis Jacobfeuerborn
On 09/20/2012 08:10 AM, Mark Wu wrote:
> On 09/19/2012 11:25 PM, Christopher Maestas wrote:
>> When you add a fc17 node and look at it, it seems to add:
>> * ovirtmgmt bridge 
>> * p2p1 interface 
>> * and bond0-bond3. 
>>
>> Where does the bonding information get stored? It doesn't seem to be in
>> /etc/sysconfig/network-scripts?
>>
> The bonding devices are created dynamically when vdsmd starts.
>> The reason I ask, is I have two nodes one of which has the bonding
>> interfaces up and the other which doesn't. The one that doesn't have the
>> bonding interfaces up is failing to be added to the ovirt node list.
>>
> I am not sure how the failure to add the host is related to the bonding
> interfaces. You could check the logs
> /tmp/vds_bootstrap.xx.log and /tmp/vds_installer.xx.log during node
> installation.

Why doesn't vdsm write its log files to /var/log/vdsm, where people would
expect them to be? I think writing them to /tmp is a really bad habit that
should be fixed sooner rather than later, especially given the importance of
the logs when something goes wrong.

Regards,
  Dennis

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] HA: Re: HP Integrated Lights Out 3

2012-09-20 Thread Doron Fediuck
- Original Message -

> From: "Dmitriy A Pyryakov" 
> To: "Itamar Heim" 
> Cc: "Mike Burns" , users@ovirt.org
> Sent: Thursday, September 20, 2012 1:13:55 PM
> Subject: [Users] HA: Re: HP Integrated Lights Out 3

> Itamar Heim  wrote on 20.09.2012 16:01:54:

> > From: Itamar Heim 
> > To: Eli Mesika 
> > Cc: Dmitriy A Pyryakov ,
> > users@ovirt.org, Roy Golan , Mike Burns
> > 
> > Date: 20.09.2012 16:02
> > Subject: Re: [Users] HP Integrated Lights Out 3
> >
> > On 09/20/2012 12:58 PM, Eli Mesika wrote:
> > >
> > >
> > > - Original Message -
> > >> From: "Dmitriy A Pyryakov" 
> > >> To: "Eli Mesika" 
> > >> Cc: "Itamar Heim" , users@ovirt.org
> > >> Sent: Thursday, September 20, 2012 12:05:58 PM
> > >> Subject: Re: [Users] HP Integrated Lights Out 3
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> Eli Mesika  wrote on 20.09.2012 14:55:41:
> > >>> From: Eli Mesika 
> > >>> To: Dmitriy A Pyryakov 
> > >>> Cc: users@ovirt.org, Itamar Heim 
> > >>> Date: 20.09.2012 14:55
> > >>> Subject: Re: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights
> > >>> Out
> > >>> 3
> > >>>
> > >>>
> > >>>
> > >>> - Original Message -
> >  From: "Dmitriy A Pyryakov" 
> >  To: "Itamar Heim" 
> >  Cc: users@ovirt.org
> >  Sent: Thursday, September 20, 2012 9:59:34 AM
> >  Subject: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights
> >  Out
> >  3
> > 
> > 
> > 
> > 
> > 
> > 
> >  I change Fedora 17 hosts to ovirt nodes (first -
> >  2.5.0-2.0.fc17,
> >
> > please note editing a file on an ovirt node requires you to persist
> > it,
> > or it will be lost in next boot.
> > mike can explain this better than me.

> What can I do to save my configuration changes at boot time?

Added Mike here as well. 
Just FYI, most relevant conf' files are persisted during approval or 
installation. 
iirc there's a script you can use to persist a specific file. 
Something like: 
/usr/libexec/ovirt-functions ovirt_store_config  
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] where do bond interfaces come from when adding a node that is FC17

2012-09-20 Thread Igor Lvovsky


- Original Message -
> From: "Mark Wu" 
> To: "Christopher Maestas" 
> Cc: users@ovirt.org
> Sent: Thursday, September 20, 2012 9:10:56 AM
> Subject: Re: [Users] where do bond interfaces come from when adding a node 
> that   is FC17
> 
> 
> On 09/19/2012 11:25 PM, Christopher Maestas wrote:
> 
> 
> 
> When you add a fc17 node and look at it, it seems to add:
> * ovirtmgmt bridge
> * p2p1 interface
> * and bond0-bond3.
> 
> 
> Where does the bonding information get stored? It doesn't seem to be
> in /etc/sysconfig/network-scripts?
> 
> The bonding devices are created dynamically when vdsmd starts.
> 
  
Correct, vdsm creates 'empty' bond0-4 by default for future use

> 
> 
> The reason I ask, is I have two nodes one of which has the bonding
> interfaces up and the other which doesn't. The one that doesn't have
> the bonding interfaces up is failing to be added to the ovirt node
> list.
> 

Sorry, but I don't really understand what the configuration on your hosts is.
Could you please send the output of ifconfig from both hosts and the output of
'vdsClient 0 getVdsCaps'?
 
> I am not sure how the failure to add the host is related to the bonding
> interfaces. You could check the logs
> /tmp/vds_bootstrap.xx.log and /tmp/vds_installer.xx.log
> during node installation.
> 
> 
> 
> Thanks,
> 
> -cdm
> 
> 
> ___
> Users mailing list Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Fatal error during migration

2012-09-20 Thread Doron Fediuck
- Original Message -

> From: "Dmitriy A Pyryakov" 
> To: "Michal Skrivanek" 
> Cc: users@ovirt.org
> Sent: Thursday, September 20, 2012 1:34:46 PM
> Subject: Re: [Users] Fatal error during migration

> Michal Skrivanek  wrote on 20.09.2012
> 16:23:31:

> > From: Michal Skrivanek 
> > To: Dmitriy A Pyryakov 
> > Cc: users@ovirt.org
> > Date: 20.09.2012 16:24
> > Subject: Re: [Users] Fatal error during migration
> >
> >
> > On Sep 20, 2012, at 12:19 , Dmitriy A Pyryakov wrote:
> >
> > > Michal Skrivanek  wrote on
> > > 20.09.2012 16:13:16:
> > >
> > > > From: Michal Skrivanek 
> > > > To: Dmitriy A Pyryakov 
> > > > Cc: users@ovirt.org
> > > > Date: 20.09.2012 16:13
> > > > Subject: Re: [Users] Fatal error during migration
> > > >
> > > >
> > > > On Sep 20, 2012, at 12:07 , Dmitriy A Pyryakov wrote:
> > > >
> > > > > Michal Skrivanek  wrote on
> > > > > 20.09.2012 16:02:11:
> > > > >
> > > > > > From: Michal Skrivanek 
> > > > > > To: Dmitriy A Pyryakov 
> > > > > > Cc: users@ovirt.org
> > > > > > Date: 20.09.2012 16:02
> > > > > > Subject: Re: [Users] Fatal error during migration
> > > > > >
> > > > > > Hi,
> > > > > > well, so what is the other side saying? Maybe some
> > > > > > connectivity
> > > > > > problems between those 2 hosts? firewall?
> > > > > >
> > > > > > Thanks,
> > > > > > michal
> > > > >
> > > > > Yes, the firewall is not configured properly by default. If I
> > > > > stop it, the migration succeeds.
> > > > > Thanks.
> > > > The default is supposed to be:
> > > >
> > > > # oVirt default firewall configuration. Automatically generated
> > > > by
> > > > vdsm bootstrap script.
> > > > *filter
> > > > :INPUT ACCEPT [0:0]
> > > > :FORWARD ACCEPT [0:0]
> > > > :OUTPUT ACCEPT [0:0]
> > > > -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
> > > > -A INPUT -p icmp -j ACCEPT
> > > > -A INPUT -i lo -j ACCEPT
> > > > # vdsm
> > > > -A INPUT -p tcp --dport 54321 -j ACCEPT
> > > > # libvirt tls
> > > > -A INPUT -p tcp --dport 16514 -j ACCEPT
> > > > # SSH
> > > > -A INPUT -p tcp --dport 22 -j ACCEPT
> > > > # guest consoles
> > > > -A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
> > > > # migration
> > > > -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
> > > > # snmp
> > > > -A INPUT -p udp --dport 161 -j ACCEPT
> > > > # Reject any other input traffic
> > > > -A INPUT -j REJECT --reject-with icmp-host-prohibited
> > > > -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT
> > > > --reject-with
> > > > icmp-host-prohibited
> > > > COMMIT
> > >
> > > my default is:
> > >
> > > # cat /etc/sysconfig/iptables
> > > # oVirt automatically generated firewall configuration
> > > *filter
> > > :INPUT ACCEPT [0:0]
> > > :FORWARD ACCEPT [0:0]
> > > :OUTPUT ACCEPT [0:0]
> > > -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
> > > -A INPUT -p icmp -j ACCEPT
> > > -A INPUT -i lo -j ACCEPT
> > > #vdsm
> > > -A INPUT -p tcp --dport 54321 -j ACCEPT
> > > # SSH
> > > -A INPUT -p tcp --dport 22 -j ACCEPT
> > > # guest consoles
> > > -A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
> > > # migration
> > > -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
> > > # snmp
> > > -A INPUT -p udp --dport 161 -j ACCEPT
> > > #
> > > -A INPUT -j REJECT --reject-with icmp-host-prohibited
> > > -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-
> > with icmp-host-prohibited
> > > COMMIT
> > >
> > > >
> > > > did you change it manually or is the default missing anything?
> > >
> > > the default is missing the "libvirt tls" rule.
> > was it an upgrade of some sort?
> No.

> > These are installed at node setup
> > from ovirt-engine. Check the engine version and/or the
> > IPTablesConfig in vdc_options table on engine

> oVirt engine version: 3.1.0-2.fc17

> engine=# select * from vdc_options where option_id=100;
> option_id | option_name | option_value | version
> ---++---+-
> 100 | IPTablesConfig | # oVirt default firewall configuration.
> Automatically generated by vdsm bootstrap script.+| general
> | | *filter +|
> | | :INPUT ACCEPT [0:0] +|
> | | :FORWARD ACCEPT [0:0] +|
> | | :OUTPUT ACCEPT [0:0] +|
> | | -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT +|
> | | -A INPUT -p icmp -j ACCEPT +|
> | | -A INPUT -i lo -j ACCEPT +|
> | | # vdsm +|
> | | -A INPUT -p tcp --dport 54321 -j ACCEPT +|
> | | # libvirt tls +|
> | | -A INPUT -p tcp --dport 16514 -j ACCEPT +|
> | | # SSH +|
> | | -A INPUT -p tcp --dport 22 -j ACCEPT +|
> | | # guest consoles +|
> | | -A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT +|
> | | # migration +|
> | | -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT +|
> | | # snmp +|
> | | -A INPUT -p udp --dport 161 -j ACCEPT +|
> | | # Reject any other input traffic +|
> | | -A INPUT -j REJECT --reject-with icmp-host-prohibited +|
> | | -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT
> | | --rejec

Re: [Users] Fatal error during migration

2012-09-20 Thread Michal Skrivanek

On Sep 20, 2012, at 12:19 , Dmitriy A Pyryakov wrote:

> Michal Skrivanek  wrote on 20.09.2012 16:13:16:
> 
> > From: Michal Skrivanek 
> > To: Dmitriy A Pyryakov 
> > Cc: users@ovirt.org
> > Date: 20.09.2012 16:13
> > Subject: Re: [Users] Fatal error during migration
> > 
> > 
> > On Sep 20, 2012, at 12:07 , Dmitriy A Pyryakov wrote:
> > 
> > > Michal Skrivanek  wrote on
> > > 20.09.2012 16:02:11:
> > > 
> > > > From: Michal Skrivanek 
> > > > To: Dmitriy A Pyryakov 
> > > > Cc: users@ovirt.org
> > > > Date: 20.09.2012 16:02
> > > > Subject: Re: [Users] Fatal error during migration
> > > > 
> > > > Hi,
> > > > well, so what is the other side saying? Maybe some connectivity 
> > > > problems between those 2 hosts? firewall? 
> > > > 
> > > > Thanks,
> > > > michal
> > > 
> > > Yes, the firewall is not configured properly by default. If I stop it,
> > > the migration succeeds.
> > > Thanks.
> > The default is supposed to be:
> > 
> > # oVirt default firewall configuration. Automatically generated by 
> > vdsm bootstrap script.
> > *filter
> > :INPUT ACCEPT [0:0]
> > :FORWARD ACCEPT [0:0]
> > :OUTPUT ACCEPT [0:0]
> > -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
> > -A INPUT -p icmp -j ACCEPT
> > -A INPUT -i lo -j ACCEPT
> > # vdsm
> > -A INPUT -p tcp --dport 54321 -j ACCEPT
> > # libvirt tls
> > -A INPUT -p tcp --dport 16514 -j ACCEPT
> > # SSH
> > -A INPUT -p tcp --dport 22 -j ACCEPT
> > # guest consoles
> > -A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
> > # migration
> > -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
> > # snmp
> > -A INPUT -p udp --dport 161 -j ACCEPT
> > # Reject any other input traffic
> > -A INPUT -j REJECT --reject-with icmp-host-prohibited
> > -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with
> > icmp-host-prohibited
> > COMMIT
> 
> my default is:
> 
> # cat /etc/sysconfig/iptables
> # oVirt automatically generated firewall configuration
> *filter
> :INPUT ACCEPT [0:0]
> :FORWARD ACCEPT [0:0]
> :OUTPUT ACCEPT [0:0]
> -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
> -A INPUT -p icmp -j ACCEPT
> -A INPUT -i lo -j ACCEPT
> #vdsm
> -A INPUT -p tcp --dport 54321 -j ACCEPT
> # SSH
> -A INPUT -p tcp --dport 22 -j ACCEPT
> # guest consoles
> -A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
> # migration
> -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
> # snmp
> -A INPUT -p udp --dport 161 -j ACCEPT
> #
> -A INPUT -j REJECT --reject-with icmp-host-prohibited
> -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with 
> icmp-host-prohibited
> COMMIT
> 
> > 
> > did you change it manually or is the default missing anything?
> 
> the default is missing the "libvirt tls" rule.
was it an upgrade of some sort? These are installed at node setup from 
ovirt-engine. Check the engine version and/or the IPTablesConfig in vdc_options 
table on engine

> 
> > thanks,
> > michal
> > > > On Sep 20, 2012, at 11:55 , Dmitriy A Pyryakov wrote:
> > > > 
> > > > > Hello,
> > > > > 
> > > > > I have two oVirt nodes ovirt-node-iso-2.5.0-2.0.fc17.
> > > > > 
> > > > > When I try to migrate VM from one host to another, I have an 
> > > > error: Migration failed due to Error: Fatal error during migration.
> > > > > 
> > > > > vdsm.log:
> > > > > Thread-3797::DEBUG::2012-09-20 09:42:56,439::BindingXMLRPC::
> > > > 859::vds::(wrapper) client [192.168.10.10]::call vmMigrate with 
> > > > ({'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 'vmId': 
> > > > '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'},) {} 
> > > > flowID [180ad979]
> > > > > Thread-3797::DEBUG::2012-09-20 09:42:56,439::API::441::vds::
> > > > (migrate) {'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 
> > > > 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'}
> > > > > Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::122::vm.Vm::
> > > > (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > > fc2aeeae2e86`::Destination server is: 192.168.10.12:54321
> > > > > Thread-3797::DEBUG::2012-09-20 09:42:56,441::BindingXMLRPC::
> > > > 865::vds::(wrapper) return vmMigrate with {'status': {'message': 
> > > > 'Migration process starting', 'code': 0}}
> > > > > Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::124::vm.Vm::
> > > > (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > > fc2aeeae2e86`::Initiating connection with destination
> > > > > Thread-3798::DEBUG::2012-09-20 09:42:56,452::libvirtvm::
> > > > 240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > > fc2aeeae2e86`::Disk hdc stats not available
> > > > > Thread-3798::DEBUG::2012-09-20 09:42:56,457::vm::170::vm.Vm::
> > > > (_prepareGuest) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > > fc2aeeae2e86`::migration Process begins
> > > > > Thread-3798::DEBUG::2012-09-20 09:42:56,475::vm::217::vm.Vm::(run)
> > > > vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration semaphore 
> > > > acquired
> > > > > Thread-3798::DEBUG::2012-09-20 09:42:56,888::libvirtvm::
> > > > 427::vm.

[Users] HA: Re: Fatal error during migration

2012-09-20 Thread Dmitriy A Pyryakov
Michal Skrivanek  wrote on 20.09.2012
16:13:16:

> From: Michal Skrivanek 
> To: Dmitriy A Pyryakov 
> Cc: users@ovirt.org
> Date: 20.09.2012 16:13
> Subject: Re: [Users] Fatal error during migration
>
>
> On Sep 20, 2012, at 12:07 , Dmitriy A Pyryakov wrote:
>
> > Michal Skrivanek  wrote on
> > 20.09.2012 16:02:11:
> >
> > > From: Michal Skrivanek 
> > > To: Dmitriy A Pyryakov 
> > > Cc: users@ovirt.org
> > > Date: 20.09.2012 16:02
> > > Subject: Re: [Users] Fatal error during migration
> > >
> > > Hi,
> > > well, so what is the other side saying? Maybe some connectivity
> > > problems between those 2 hosts? firewall?
> > >
> > > Thanks,
> > > michal
> >
> > Yes, the firewall is not configured properly by default. If I stop it,
> > the migration succeeds.
> > Thanks.
> The default is supposed to be:
>
> # oVirt default firewall configuration. Automatically generated by
> vdsm bootstrap script.
> *filter
> :INPUT ACCEPT [0:0]
> :FORWARD ACCEPT [0:0]
> :OUTPUT ACCEPT [0:0]
> -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
> -A INPUT -p icmp -j ACCEPT
> -A INPUT -i lo -j ACCEPT
> # vdsm
> -A INPUT -p tcp --dport 54321 -j ACCEPT
> # libvirt tls
> -A INPUT -p tcp --dport 16514 -j ACCEPT
> # SSH
> -A INPUT -p tcp --dport 22 -j ACCEPT
> # guest consoles
> -A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
> # migration
> -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
> # snmp
> -A INPUT -p udp --dport 161 -j ACCEPT
> # Reject any other input traffic
> -A INPUT -j REJECT --reject-with icmp-host-prohibited
> -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with
> icmp-host-prohibited
> COMMIT

my default is:

# cat /etc/sysconfig/iptables
# oVirt automatically generated firewall configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
#vdsm
-A INPUT -p tcp --dport 54321 -j ACCEPT
# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
# guest consoles
-A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
# migration
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
# snmp
-A INPUT -p udp --dport 161 -j ACCEPT
#
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with
icmp-host-prohibited
COMMIT

>
> did you change it manually or is the default missing anything?

the default is missing the "libvirt tls" rule.

> thanks,
> michal
> > > On Sep 20, 2012, at 11:55 , Dmitriy A Pyryakov wrote:
> > >
> > > > Hello,
> > > >
> > > > I have two oVirt nodes ovirt-node-iso-2.5.0-2.0.fc17.
> > > >
> > > > When I try to migrate VM from one host to another, I have an
> > > error: Migration failed due to Error: Fatal error during migration.
> > > >
> > > > vdsm.log:
> > > > Thread-3797::DEBUG::2012-09-20 09:42:56,439::BindingXMLRPC::
> > > 859::vds::(wrapper) client [192.168.10.10]::call vmMigrate with
> > > ({'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 'vmId':
> > > '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'},) {}
> > > flowID [180ad979]
> > > > Thread-3797::DEBUG::2012-09-20 09:42:56,439::API::441::vds::
> > > (migrate) {'src': '192.168.10.13', 'dst': '192.168.10.12:54321',
> > > 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'}
> > > > Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::122::vm.Vm::
> > > (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::Destination server is: 192.168.10.12:54321
> > > > Thread-3797::DEBUG::2012-09-20 09:42:56,441::BindingXMLRPC::
> > > 865::vds::(wrapper) return vmMigrate with {'status': {'message':
> > > 'Migration process starting', 'code': 0}}
> > > > Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::124::vm.Vm::
> > > (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::Initiating connection with destination
> > > > Thread-3798::DEBUG::2012-09-20 09:42:56,452::libvirtvm::
> > > 240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::Disk hdc stats not available
> > > > Thread-3798::DEBUG::2012-09-20 09:42:56,457::vm::170::vm.Vm::
> > > (_prepareGuest) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::migration Process begins
> > > > Thread-3798::DEBUG::2012-09-20 09:42:56,475::vm::217::vm.Vm::(run)
> > > vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration semaphore
acquired
> > > > Thread-3798::DEBUG::2012-09-20 09:42:56,888::libvirtvm::
> > > 427::vm.Vm::(_startUnderlyingMigration)
> > > vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration to
> > > qemu+tls://192.168.10.12/system
> > > > Thread-3799::DEBUG::2012-09-20 09:42:56,889::libvirtvm::
> > > 325::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::migration downtime thread started
> > > > Thread-3800::DEBUG::2012-09-20 09:42:56,890::libvirtvm::
> > > 353::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::starting migration monitor thread
> > > > Thread-3798::DEBUG::20
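Comparing the two configs in this thread, the node's file lacks the two "libvirt tls" lines. A minimal sketch of patching them in, demonstrated here on a temporary copy; on a real node the file would be /etc/sysconfig/iptables, the rules would need to be reloaded, and on oVirt Node the file would also have to be `persist`ed afterwards:

```shell
# work on a temp copy of a config fragment so this is safe to run anywhere
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
#vdsm
-A INPUT -p tcp --dport 54321 -j ACCEPT
# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
EOF

# insert the missing comment and rule right after the vdsm rule,
# mirroring the engine's default template (16514 is libvirt's TLS port)
sed -i '/--dport 54321 -j ACCEPT/a # libvirt tls' "$cfg"
sed -i '/# libvirt tls/a -A INPUT -p tcp --dport 16514 -j ACCEPT' "$cfg"

# show the inserted pair
grep -A1 'libvirt tls' "$cfg"
```

Whether the rules are then reloaded via `service iptables restart` or a systemd unit depends on the host; that part is an assumption, not something stated in the thread.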

[Users] HA: Re: HP Integrated Lights Out 3

2012-09-20 Thread Dmitriy A Pyryakov
Itamar Heim  wrote on 20.09.2012 16:01:54:

> From: Itamar Heim 
> To: Eli Mesika 
> Cc: Dmitriy A Pyryakov ,
> users@ovirt.org, Roy Golan , Mike Burns

> Date: 20.09.2012 16:02
> Subject: Re: [Users] HP Integrated Lights Out 3
>
> On 09/20/2012 12:58 PM, Eli Mesika wrote:
> >
> >
> > - Original Message -
> >> From: "Dmitriy A Pyryakov" 
> >> To: "Eli Mesika" 
> >> Cc: "Itamar Heim" , users@ovirt.org
> >> Sent: Thursday, September 20, 2012 12:05:58 PM
> >> Subject: Re: [Users] HP Integrated Lights Out 3
> >>
> >>
> >>
> >>
> >>
> >> Eli Mesika  wrote on 20.09.2012 14:55:41:
> >>> From: Eli Mesika 
> >>> To: Dmitriy A Pyryakov 
> >>> Cc: users@ovirt.org, Itamar Heim 
> >>> Date: 20.09.2012 14:55
> >>> Subject: Re: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out
> >>> 3
> >>>
> >>>
> >>>
> >>> - Original Message -
>  From: "Dmitriy A Pyryakov" 
>  To: "Itamar Heim" 
>  Cc: users@ovirt.org
>  Sent: Thursday, September 20, 2012 9:59:34 AM
>  Subject: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out
>  3
> 
> 
> 
> 
> 
> 
>  I change Fedora 17 hosts to ovirt nodes (first - 2.5.0-2.0.fc17,
>
> please note editing a file on an ovirt node requires you to persist it,
> or it will be lost in next boot.
> mike can explain this better than me.

What can I do to save my configuration changes at boot time?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Fatal error during migration

2012-09-20 Thread Michal Skrivanek

On Sep 20, 2012, at 12:07 , Dmitriy A Pyryakov wrote:

> Michal Skrivanek  wrote on 20.09.2012 16:02:11:
> 
> > From: Michal Skrivanek 
> > To: Dmitriy A Pyryakov 
> > Cc: users@ovirt.org
> > Date: 20.09.2012 16:02
> > Subject: Re: [Users] Fatal error during migration
> > 
> > Hi,
> > well, so what is the other side saying? Maybe some connectivity 
> > problems between those 2 hosts? firewall? 
> > 
> > Thanks,
> > michal
> 
> Yes, the firewall is not configured properly by default. If I stop it, the
> migration succeeds.
> Thanks.
The default is supposed to be:

# oVirt default firewall configuration. Automatically generated by vdsm 
bootstrap script.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
# vdsm
-A INPUT -p tcp --dport 54321 -j ACCEPT
# libvirt tls
-A INPUT -p tcp --dport 16514 -j ACCEPT
# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
# guest consoles
-A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
# migration
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
# snmp
-A INPUT -p udp --dport 161 -j ACCEPT
# Reject any other input traffic
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with 
icmp-host-prohibited
COMMIT


did you change it manually or is the default missing anything?

thanks,
michal
> > On Sep 20, 2012, at 11:55 , Dmitriy A Pyryakov wrote:
> > 
> > > Hello,
> > > 
> > > I have two oVirt nodes ovirt-node-iso-2.5.0-2.0.fc17.
> > > 
> > > When I try to migrate VM from one host to another, I have an 
> > error: Migration failed due to Error: Fatal error during migration.
> > > 
> > > vdsm.log:
> > > Thread-3797::DEBUG::2012-09-20 09:42:56,439::BindingXMLRPC::
> > 859::vds::(wrapper) client [192.168.10.10]::call vmMigrate with 
> > ({'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 'vmId': 
> > '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'},) {} 
> > flowID [180ad979]
> > > Thread-3797::DEBUG::2012-09-20 09:42:56,439::API::441::vds::
> > (migrate) {'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 
> > 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'}
> > > Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::122::vm.Vm::
> > (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-
> > fc2aeeae2e86`::Destination server is: 192.168.10.12:54321
> > > Thread-3797::DEBUG::2012-09-20 09:42:56,441::BindingXMLRPC::
> > 865::vds::(wrapper) return vmMigrate with {'status': {'message': 
> > 'Migration process starting', 'code': 0}}
> > > Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::124::vm.Vm::
> > (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-
> > fc2aeeae2e86`::Initiating connection with destination
> > > Thread-3798::DEBUG::2012-09-20 09:42:56,452::libvirtvm::
> > 240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-
> > fc2aeeae2e86`::Disk hdc stats not available
> > > Thread-3798::DEBUG::2012-09-20 09:42:56,457::vm::170::vm.Vm::
> > (_prepareGuest) vmId=`2bf3e6eb-49e4-42c7-8188-
> > fc2aeeae2e86`::migration Process begins
> > > Thread-3798::DEBUG::2012-09-20 09:42:56,475::vm::217::vm.Vm::(run)
> > vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration semaphore acquired
> > > Thread-3798::DEBUG::2012-09-20 09:42:56,888::libvirtvm::
> > 427::vm.Vm::(_startUnderlyingMigration) 
> > vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration to 
> > qemu+tls://192.168.10.12/system
> > > Thread-3799::DEBUG::2012-09-20 09:42:56,889::libvirtvm::
> > 325::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
> > fc2aeeae2e86`::migration downtime thread started
> > > Thread-3800::DEBUG::2012-09-20 09:42:56,890::libvirtvm::
> > 353::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
> > fc2aeeae2e86`::starting migration monitor thread
> > > Thread-3798::DEBUG::2012-09-20 09:42:56,903::libvirtvm::
> > 340::vm.Vm::(cancel) vmId=`2bf3e6eb-49e4-42c7-8188-
> > fc2aeeae2e86`::canceling migration downtime thread
> > > Thread-3798::DEBUG::2012-09-20 09:42:56,904::libvirtvm::
> > 390::vm.Vm::(stop) vmId=`2bf3e6eb-49e4-42c7-8188-
> > fc2aeeae2e86`::stopping migration monitor thread
> > > Thread-3799::DEBUG::2012-09-20 09:42:56,904::libvirtvm::
> > 337::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
> > fc2aeeae2e86`::migration downtime thread exiting
> > > Thread-3798::ERROR::2012-09-20 09:42:56,905::vm::176::vm.Vm::
> > (_recover) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::operation 
> > failed: Failed to connect to remote libvirt URI qemu+tls://192.168.
> > 10.12/system
> > > Thread-3798::ERROR::2012-09-20 09:42:56,977::vm::240::vm.Vm::(run)
> > vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Failed to migrate
> > > Traceback (most recent call last):
> > > File "/usr/share/vdsm/vm.py", line 223, in run
> > > File "/usr/share/vdsm/libvirtvm.py", line 451, in 
> > > _startUnderlyingMigration
> > > File "/usr/share/vdsm/libvirtvm.py", line 491, in f
> > > File "/usr/lib/python2.7/

Re: [Users] HP Integrated Lights Out 3

2012-09-20 Thread Dmitriy A Pyryakov
Eli Mesika  wrote on 20.09.2012 15:58:58:

> From: Eli Mesika 
> To: Dmitriy A Pyryakov 
> Cc: Itamar Heim , users@ovirt.org, Roy Golan
> 
> Date: 20.09.2012 15:59
> Subject: Re: [Users] HP Integrated Lights Out 3
>
>
>
> - Original Message -
> > From: "Dmitriy A Pyryakov" 
> > To: "Eli Mesika" 
> > Cc: "Itamar Heim" , users@ovirt.org
> > Sent: Thursday, September 20, 2012 12:05:58 PM
> > Subject: Re: [Users] HP Integrated Lights Out 3
> >
> >
> >
> >
> >
> > Eli Mesika  wrote on 20.09.2012 14:55:41: > From:
> > Eli Mesika  > To: Dmitriy A Pyryakov
> > 
> > > Cc: users@ovirt.org, Itamar Heim 
> > > Date: 20.09.2012 14:55
> > > Subject: Re: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out
> > > 3
> > >
> > >
> > >
> > > - Original Message -
> > > > From: "Dmitriy A Pyryakov" 
> > > > To: "Itamar Heim" 
> > > > Cc: users@ovirt.org
> > > > Sent: Thursday, September 20, 2012 9:59:34 AM
> > > > Subject: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out
> > > > 3
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > I changed the Fedora 17 hosts to oVirt nodes (first - 2.5.0-2.0.fc17,
> > > > second - 2.5.1-1.0.fc17). SPM is on 2.5.0-2.0.fc17. ilo3 doesn't work.
> > > > The options are now presented in vdsm.log.
> > >
> > > Can you paste here the call to fenceNode from the vdsm.log, thanks
> > Of course,
> >
> > vdsm.log
> > Thread-1882::DEBUG::2012-09-20
> > 09:02:52,920::API::1024::vds::(fenceNode)
> > fenceNode(addr=192.168.10.103,port=,agent=ipmilan,user=Administrator,passwd=,action=status,secure=,options=)

>
> See, here in the PM Status command, the options are empty in VDSM
>
> > Thread-1882::DEBUG::2012-09-20
> > 09:02:53,951::API::1050::vds::(fenceNode) rc 1 in
> > agent=fence_ipmilan
> > ipaddr=192.168.10.103
> > login=Administrator
> > option=status
> > passwd=
> > out Getting status of IPMI:192.168.10.103...Chassis power = Unknown
> > Failed
> > err
> >
> > engine.log:
> > 2012-09-20 15:02:54,034 INFO
> > [org.ovirt.engine.core.bll.FencingExecutor] (ajp--0.0.0.0-8009-5)
> > Executing  Power Management command, Proxy
> > Host:hyper1.ovirt.com, Agent:ipmilan, Target Host:, Management
> > IP:192.168.10.103, User:Administrator, Options:lanplus,power_wait=4
> > 2012-09-20 15:02:54,056 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
> > (ajp--0.0.0.0-8009-5) START, FenceVdsVDSCommand(vdsId =
> > 0a268762-02d7-11e2-b750-0011856cf23e, targetVdsId =
> > c57f5aa0-0301-11e2-8c67-0011856cf23e, action = Status, ip =
> > 192.168.10.103, port = , type = ipmilan, user = Administrator,
> > password = **, options = 'lanplus,power_wait=4'), log id:
> > 5821013b
>
> While we can still see that the engine sends those options correctly.
> CCing Roy.
> Roy, it seems connected to the bug you had resolved, but Dmitriy
> claims to have the right vdsm with the fix; any ideas?

I can't apply the vdsm fix on oVirt nodes (because I don't
have the /usr/share/vdsm/BindingXMLRPC.py file). I can do it on FC17 hosts
only.


>
> >
> > > >BindingXMLRPC.py not found on proxy
> > > > host in /usr/share/vdsm. Only BindingXMLRPC.pyc file. Itamar Heim
> > > >  wrote on 14.09.2012 13:46:35:
> > > >
> > > > > From: Itamar Heim 
> > > > > To: Darrell Budic 
> > > > > Cc: Dmitriy A Pyryakov ,
> > > > > users@ovirt.org
> > > > > Date: 14.09.2012 13:46
> > > > > Subject: Re: [Users] HA: Re: HA: Re: HP Integrated Lights Out 3
> > > > >
> > > > > On 09/14/2012 02:32 AM, Darrell Budic wrote:
> > > > > > That fix worked for me (ipmilan wise, anyway. Still no go on
> > > > > > ilo,
> > > > > > but we
> > > > > > knew that, right?). Thanks Itamar!
> > > > > >
> > > > > > Dmitriy, make sure you do this to all your host nodes, it may
> > > > > > run
> > > > > > the
> > > > > > test from any of them. You'll also want to be sure you delete
> > > > > > /usr/share/vdsm/BindingXMLRPC.pyc and .pyo, otherwise the
> > > > > > compiled
> > > > > > python is likely to still get used. Finally, I did need to
> > > > > > restart vdsmd
> > > > > > on all my nodes, "service vdsmd restart" on my Centos 6.3
> > > > > > system.
> > > > > > Glad
> > > > > > to know you can do that without causing problems for running
> > > > > > vms.
> > > > > >
> > > > > > I did notice that the ovirt management GUI still shows 3
> > > > > > Alerts
> > > > > > in the
> > > > > > alert area, and they are all "Power Management test failed"
> > > > > > errors dated
> > > > > > from the first time their particular node was added to the
> > > > > > cluster. This
> > > > > > is even after restarting a vdsmd again and seeing "Host xxx
> > > > > > power
> > > > > > management was verified successfully." in the event log.
> > > > >
> > > > > because the engine doesn't go and run 'test power management'
> > > > > all
> > > > > the
> > > > > time...
> > > > > click edit host, power management tab, click 'test'.
> > > > >
> > > >
> > > > ___
> > > > Users mailing list
> > > > Users@ovirt.org
> > > > http://lists.ovirt.

[Users] HA: Re: Fatal error during migration

2012-09-20 Thread Dmitriy A Pyryakov
Michal Skrivanek  wrote on 20.09.2012
16:02:11:

> From: Michal Skrivanek 
> To: Dmitriy A Pyryakov 
> Cc: users@ovirt.org
> Date: 20.09.2012 16:02
> Subject: Re: [Users] Fatal error during migration
>
> Hi,
> well, so what is the other side saying? Maybe some connectivity
> problems between those 2 hosts? firewall?
>
> Thanks,
> michal

Yes, the firewall is not configured properly by default. If I stop it, the
migration completes.
Thanks.

> On Sep 20, 2012, at 11:55 , Dmitriy A Pyryakov wrote:
>
> > Hello,
> >
> > I have two oVirt nodes ovirt-node-iso-2.5.0-2.0.fc17.
> >
> > When I try to migrate VM from one host to another, I have an
> error: Migration failed due to Error: Fatal error during migration.
> >
> > vdsm.log:
> > Thread-3797::DEBUG::2012-09-20 09:42:56,439::BindingXMLRPC::
> 859::vds::(wrapper) client [192.168.10.10]::call vmMigrate with
> ({'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 'vmId':
> '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'},) {}
> flowID [180ad979]
> > Thread-3797::DEBUG::2012-09-20 09:42:56,439::API::441::vds::
> (migrate) {'src': '192.168.10.13', 'dst': '192.168.10.12:54321',
> 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'}
> > Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::122::vm.Vm::
> (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-
> fc2aeeae2e86`::Destination server is: 192.168.10.12:54321
> > Thread-3797::DEBUG::2012-09-20 09:42:56,441::BindingXMLRPC::
> 865::vds::(wrapper) return vmMigrate with {'status': {'message':
> 'Migration process starting', 'code': 0}}
> > Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::124::vm.Vm::
> (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-
> fc2aeeae2e86`::Initiating connection with destination
> > Thread-3798::DEBUG::2012-09-20 09:42:56,452::libvirtvm::
> 240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-
> fc2aeeae2e86`::Disk hdc stats not available
> > Thread-3798::DEBUG::2012-09-20 09:42:56,457::vm::170::vm.Vm::
> (_prepareGuest) vmId=`2bf3e6eb-49e4-42c7-8188-
> fc2aeeae2e86`::migration Process begins
> > Thread-3798::DEBUG::2012-09-20 09:42:56,475::vm::217::vm.Vm::(run)
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration semaphore acquired
> > Thread-3798::DEBUG::2012-09-20 09:42:56,888::libvirtvm::
> 427::vm.Vm::(_startUnderlyingMigration)
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration to
> qemu+tls://192.168.10.12/system
> > Thread-3799::DEBUG::2012-09-20 09:42:56,889::libvirtvm::
> 325::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
> fc2aeeae2e86`::migration downtime thread started
> > Thread-3800::DEBUG::2012-09-20 09:42:56,890::libvirtvm::
> 353::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
> fc2aeeae2e86`::starting migration monitor thread
> > Thread-3798::DEBUG::2012-09-20 09:42:56,903::libvirtvm::
> 340::vm.Vm::(cancel) vmId=`2bf3e6eb-49e4-42c7-8188-
> fc2aeeae2e86`::canceling migration downtime thread
> > Thread-3798::DEBUG::2012-09-20 09:42:56,904::libvirtvm::
> 390::vm.Vm::(stop) vmId=`2bf3e6eb-49e4-42c7-8188-
> fc2aeeae2e86`::stopping migration monitor thread
> > Thread-3799::DEBUG::2012-09-20 09:42:56,904::libvirtvm::
> 337::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
> fc2aeeae2e86`::migration downtime thread exiting
> > Thread-3798::ERROR::2012-09-20 09:42:56,905::vm::176::vm.Vm::
> (_recover) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::operation
> failed: Failed to connect to remote libvirt URI qemu+tls://192.168.
> 10.12/system
> > Thread-3798::ERROR::2012-09-20 09:42:56,977::vm::240::vm.Vm::(run)
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Failed to migrate
> > Traceback (most recent call last):
> > File "/usr/share/vdsm/vm.py", line 223, in run
> > File "/usr/share/vdsm/libvirtvm.py", line 451, in
_startUnderlyingMigration
> > File "/usr/share/vdsm/libvirtvm.py", line 491, in f
> > File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
> line 82, in wrapper
> > File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1034,
> in migrateToURI2
> > libvirtError: operation failed: Failed to connect to remote
> libvirt URI qemu+tls://192.168.10.12/system
> >
> > Thread-3802::DEBUG::2012-09-20 09:42:57,793::BindingXMLRPC::
> 859::vds::(wrapper) client [192.168.10.10]::call vmGetStats with
> ('2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86',) {}
> > Thread-3802::DEBUG::2012-09-20 09:42:57,793::libvirtvm::
> 240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-
> fc2aeeae2e86`::Disk hdc stats not available
> > Thread-3802::DEBUG::2012-09-20 09:42:57,794::BindingXMLRPC::
> 865::vds::(wrapper) return vmGetStats with {'status': {'message':
> 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username':
> 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '22047',
> 'displayIp': '192.168.10.13', 'displayPort': u'5912', 'session':
> 'Unknown', 'displaySecurePort': u'5913', 'timeOffset': '0', 'hash':
> '3018874162324753083', 'pauseCode': 'NOERR', 'clientIp': '',
> 'kvmEnable': 'true', 'network': {u'vnet6': {'macAddr': '00:1a:4a:a8:
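For readers hitting the same failure: the root cause above is the destination host's libvirt TLS port (16514) being blocked. A minimal reachability check that could be run from the source host; the destination IP and port are taken from the logs in this thread, and the helper itself is an illustrative sketch, not part of vdsm:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Example with the destination from the log above (commented out so the
# sketch runs anywhere):
# print(port_open("192.168.10.12", 16514))
```

If this returns False while libvirtd is listening on the destination, the firewall rule for 16514 shown earlier in the thread is the likely missing piece.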

Re: [Users] Fatal error during migration

2012-09-20 Thread Michal Skrivanek
Hi,
well, so what is the other side saying? Maybe some connectivity problems 
between those 2 hosts? firewall? 

Thanks,
michal

On Sep 20, 2012, at 11:55 , Dmitriy A Pyryakov wrote:

> Hello,
> 
> I have two oVirt nodes ovirt-node-iso-2.5.0-2.0.fc17.
> 
> When I try to migrate VM from one host to another, I have an error: Migration 
> failed due to Error: Fatal error during migration.
> 
> vdsm.log:
> Thread-3797::DEBUG::2012-09-20 
> 09:42:56,439::BindingXMLRPC::859::vds::(wrapper) client [192.168.10.10]::call 
> vmMigrate with ({'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 
> 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'},) {} 
> flowID [180ad979]
> Thread-3797::DEBUG::2012-09-20 09:42:56,439::API::441::vds::(migrate) {'src': 
> '192.168.10.13', 'dst': '192.168.10.12:54321', 'vmId': 
> '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'}
> Thread-3798::DEBUG::2012-09-20 
> 09:42:56,441::vm::122::vm.Vm::(_setupVdsConnection) 
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Destination server is: 
> 192.168.10.12:54321
> Thread-3797::DEBUG::2012-09-20 
> 09:42:56,441::BindingXMLRPC::865::vds::(wrapper) return vmMigrate with 
> {'status': {'message': 'Migration process starting', 'code': 0}}
> Thread-3798::DEBUG::2012-09-20 
> 09:42:56,441::vm::124::vm.Vm::(_setupVdsConnection) 
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Initiating connection with 
> destination
> Thread-3798::DEBUG::2012-09-20 
> 09:42:56,452::libvirtvm::240::vm.Vm::(_getDiskStats) 
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Disk hdc stats not available
> Thread-3798::DEBUG::2012-09-20 09:42:56,457::vm::170::vm.Vm::(_prepareGuest) 
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration Process begins
> Thread-3798::DEBUG::2012-09-20 09:42:56,475::vm::217::vm.Vm::(run) 
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration semaphore acquired
> Thread-3798::DEBUG::2012-09-20 
> 09:42:56,888::libvirtvm::427::vm.Vm::(_startUnderlyingMigration) 
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration to 
> qemu+tls://192.168.10.12/system
> Thread-3799::DEBUG::2012-09-20 09:42:56,889::libvirtvm::325::vm.Vm::(run) 
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration downtime thread started
> Thread-3800::DEBUG::2012-09-20 09:42:56,890::libvirtvm::353::vm.Vm::(run) 
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration monitor thread
> Thread-3798::DEBUG::2012-09-20 09:42:56,903::libvirtvm::340::vm.Vm::(cancel) 
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::canceling migration downtime 
> thread
> Thread-3798::DEBUG::2012-09-20 09:42:56,904::libvirtvm::390::vm.Vm::(stop) 
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::stopping migration monitor thread
> Thread-3799::DEBUG::2012-09-20 09:42:56,904::libvirtvm::337::vm.Vm::(run) 
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration downtime thread exiting
> Thread-3798::ERROR::2012-09-20 09:42:56,905::vm::176::vm.Vm::(_recover) 
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::operation failed: Failed to 
> connect to remote libvirt URI qemu+tls://192.168.10.12/system
> Thread-3798::ERROR::2012-09-20 09:42:56,977::vm::240::vm.Vm::(run) 
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Failed to migrate
> Traceback (most recent call last):
> File "/usr/share/vdsm/vm.py", line 223, in run
> File "/usr/share/vdsm/libvirtvm.py", line 451, in _startUnderlyingMigration
> File "/usr/share/vdsm/libvirtvm.py", line 491, in f
> File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 82, 
> in wrapper
> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1034, in 
> migrateToURI2
> libvirtError: operation failed: Failed to connect to remote libvirt URI 
> qemu+tls://192.168.10.12/system
> 
> Thread-3802::DEBUG::2012-09-20 
> 09:42:57,793::BindingXMLRPC::859::vds::(wrapper) client [192.168.10.10]::call 
> vmGetStats with ('2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86',) {}
> Thread-3802::DEBUG::2012-09-20 
> 09:42:57,793::libvirtvm::240::vm.Vm::(_getDiskStats) 
> vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Disk hdc stats not available
> Thread-3802::DEBUG::2012-09-20 
> 09:42:57,794::BindingXMLRPC::865::vds::(wrapper) return vmGetStats with 
> {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 
> 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '22047', 
> 'displayIp': '192.168.10.13', 'displayPort': u'5912', 'session': 'Unknown', 
> 'displaySecurePort': u'5913', 'timeOffset': '0', 'hash': 
> '3018874162324753083', 'pauseCode': 'NOERR', 'clientIp': '', 'kvmEnable': 
> 'true', 'network': {u'vnet6': {'macAddr': '00:1a:4a:a8:0a:08', 'rxDropped': 
> '0', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 
> 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet6'}}, 
> 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'displayType': 'qxl', 
> 'cpuUser': '13.27', 'disks': {u'hdc': {'flushLatency': '0', 'readLatency': 
> '0', 'writeLatency': '0'}, u'hda'
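A note on the two addresses in the log: the engine's 'dst' parameter carries the vdsm port (54321), while the failing connection targets libvirt's TLS URI on a different port. A hedged sketch of how the destination string maps to that URI (illustrative only, not vdsm's actual code):

```python
def migration_uri(dst):
    """Derive the libvirt TLS migration URI from the engine's
    'dst' value, which arrives as "host:vdsm_port"."""
    host = dst.split(":")[0]  # drop the vdsm port; libvirt TLS uses its own
    return "qemu+tls://%s/system" % host

print(migration_uri("192.168.10.12:54321"))  # qemu+tls://192.168.10.12/system
```

This is why opening only the vdsm port (54321) is not enough for migration: the libvirt TLS port (16514 by default) must be reachable as well.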

Re: [Users] HP Integrated Lights Out 3

2012-09-20 Thread Itamar Heim

On 09/20/2012 12:58 PM, Eli Mesika wrote:



- Original Message -

From: "Dmitriy A Pyryakov" 
To: "Eli Mesika" 
Cc: "Itamar Heim" , users@ovirt.org
Sent: Thursday, September 20, 2012 12:05:58 PM
Subject: Re: [Users] HP Integrated Lights Out 3





Eli Mesika  wrote on 20.09.2012 14:55:41: > From:
Eli Mesika  > To: Dmitriy A Pyryakov


Cc: users@ovirt.org, Itamar Heim 
Date: 20.09.2012 14:55
Subject: Re: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out
3



- Original Message -

From: "Dmitriy A Pyryakov" 
To: "Itamar Heim" 
Cc: users@ovirt.org
Sent: Thursday, September 20, 2012 9:59:34 AM
Subject: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out
3






I changed the Fedora 17 hosts to oVirt nodes (first - 2.5.0-2.0.fc17,


please note editing a file on an oVirt node requires you to persist it,
or it will be lost on the next boot.

mike can explain this better than me.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] HP Integrated Lights Out 3

2012-09-20 Thread Eli Mesika


- Original Message -
> From: "Dmitriy A Pyryakov" 
> To: "Eli Mesika" 
> Cc: "Itamar Heim" , users@ovirt.org
> Sent: Thursday, September 20, 2012 12:05:58 PM
> Subject: Re: [Users] HP Integrated Lights Out 3
> 
> 
> 
> 
> 
> Eli Mesika  wrote on 20.09.2012 14:55:41: > From:
> Eli Mesika  > To: Dmitriy A Pyryakov
> 
> > Cc: users@ovirt.org, Itamar Heim 
> > Date: 20.09.2012 14:55
> > Subject: Re: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out
> > 3
> > 
> > 
> > 
> > - Original Message -
> > > From: "Dmitriy A Pyryakov" 
> > > To: "Itamar Heim" 
> > > Cc: users@ovirt.org
> > > Sent: Thursday, September 20, 2012 9:59:34 AM
> > > Subject: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out
> > > 3
> > > 
> > > 
> > > 
> > > 
> > > 
> > > 
> > > I changed the Fedora 17 hosts to oVirt nodes (first - 2.5.0-2.0.fc17,
> > > second - 2.5.1-1.0.fc17). SPM is on 2.5.0-2.0.fc17. ilo3 doesn't work.
> > > The options are now presented in vdsm.log.
> > 
> > Can you paste here the call to fenceNode from the vdsm.log, thanks
> Of course,
> 
> vdsm.log
> Thread-1882::DEBUG::2012-09-20
> 09:02:52,920::API::1024::vds::(fenceNode)
> fenceNode(addr=192.168.10.103,port=,agent=ipmilan,user=Administrator,passwd=,action=status,secure=,options=)

See, here in the PM Status command, the options are empty in VDSM

> Thread-1882::DEBUG::2012-09-20
> 09:02:53,951::API::1050::vds::(fenceNode) rc 1 in
> agent=fence_ipmilan
> ipaddr=192.168.10.103
> login=Administrator
> option=status
> passwd=
> out Getting status of IPMI:192.168.10.103...Chassis power = Unknown
> Failed
> err
> 
> engine.log:
> 2012-09-20 15:02:54,034 INFO
> [org.ovirt.engine.core.bll.FencingExecutor] (ajp--0.0.0.0-8009-5)
> Executing  Power Management command, Proxy
> Host:hyper1.ovirt.com, Agent:ipmilan, Target Host:, Management
> IP:192.168.10.103, User:Administrator, Options:lanplus,power_wait=4
> 2012-09-20 15:02:54,056 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
> (ajp--0.0.0.0-8009-5) START, FenceVdsVDSCommand(vdsId =
> 0a268762-02d7-11e2-b750-0011856cf23e, targetVdsId =
> c57f5aa0-0301-11e2-8c67-0011856cf23e, action = Status, ip =
> 192.168.10.103, port = , type = ipmilan, user = Administrator,
> password = **, options = 'lanplus,power_wait=4'), log id:
> 5821013b

While we can still see that the engine sends those options correctly.
CCing Roy.
Roy, it seems connected to the bug you had resolved, but Dmitriy claims to have
the right vdsm with the fix; any ideas?


> 
> > >BindingXMLRPC.py not found on proxy
> > > host in /usr/share/vdsm. Only BindingXMLRPC.pyc file. Itamar Heim
> > >  wrote on 14.09.2012 13:46:35:
> > > 
> > > > From: Itamar Heim 
> > > > To: Darrell Budic 
> > > > Cc: Dmitriy A Pyryakov ,
> > > > users@ovirt.org
> > > > Date: 14.09.2012 13:46
> > > > Subject: Re: [Users] HA: Re: HA: Re: HP Integrated Lights Out 3
> > > > 
> > > > On 09/14/2012 02:32 AM, Darrell Budic wrote:
> > > > > That fix worked for me (ipmilan wise, anyway. Still no go on
> > > > > ilo,
> > > > > but we
> > > > > knew that, right?). Thanks Itamar!
> > > > > 
> > > > > Dmitriy, make sure you do this to all your host nodes, it may
> > > > > run
> > > > > the
> > > > > test from any of them. You'll also want to be sure you delete
> > > > > /usr/share/vdsm/BindingXMLRPC.pyc and .pyo, otherwise the
> > > > > compiled
> > > > > python is likely to still get used. Finally, I did need to
> > > > > restart vdsmd
> > > > > on all my nodes, "service vdsmd restart" on my Centos 6.3
> > > > > system.
> > > > > Glad
> > > > > to know you can do that without causing problems for running
> > > > > vms.
> > > > > 
> > > > > I did notice that the ovirt management GUI still shows 3
> > > > > Alerts
> > > > > in the
> > > > > alert area, and they are all "Power Management test failed"
> > > > > errors dated
> > > > > from the first time their particular node was added to the
> > > > > cluster. This
> > > > > is even after restarting a vdsmd again and seeing "Host xxx
> > > > > power
> > > > > management was verified successfully." in the event log.
> > > > 
> > > > because the engine doesn't go and run 'test power management'
> > > > all
> > > > the
> > > > time...
> > > > click edit host, power management tab, click 'test'.
> > > > 
> > > 
> > > ___
> > > Users mailing list
> > > Users@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> > > 
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
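For context on the empty options= seen in the vdsm.log: fence agents such as fence_ipmilan read key=value pairs on stdin — exactly the block logged above — and the engine's "lanplus,power_wait=4" string should be folded into it. A hypothetical sketch of that data flow; the bare-flag "lanplus" to "lanplus=1" mapping is an assumption, and this is not vdsm's actual implementation:

```python
def fence_stdin(addr, login, passwd, action, options=""):
    """Build the key=value stdin block a fence agent consumes.
    'options' is the comma-separated extras the engine sends
    (e.g. "lanplus,power_wait=4"); the bug discussed in this
    thread is that it arrives empty at VDSM."""
    lines = [
        "agent=fence_ipmilan",
        "ipaddr=%s" % addr,
        "login=%s" % login,
        "passwd=%s" % passwd,
        "option=%s" % action,
    ]
    for opt in options.split(","):
        opt = opt.strip()
        if not opt:
            continue  # an empty options string (the bug) adds nothing
        # assumed mapping: bare flags become name=1 (hypothetical)
        lines.append(opt if "=" in opt else opt + "=1")
    return "\n".join(lines)

print(fence_stdin("192.168.10.103", "Administrator", "", "status",
                  "lanplus,power_wait=4"))
```

With an empty options string the agent never sees lanplus, which is required for iLO3, so the status call fails exactly as in the log.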


[Users] Fatal error during migration

2012-09-20 Thread Dmitriy A Pyryakov


Hello,

I have two oVirt nodes ovirt-node-iso-2.5.0-2.0.fc17.

When I try to migrate VM from one host to another, I have an error:
Migration failed due to Error: Fatal error during migration.

vdsm.log:
Thread-3797::DEBUG::2012-09-20
09:42:56,439::BindingXMLRPC::859::vds::(wrapper) client
[192.168.10.10]::call vmMigrate with ({'src': '192.168.10.13', 'dst':
'192.168.10.12:54321', 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86',
'method': 'online'},) {} flowID [180ad979]
Thread-3797::DEBUG::2012-09-20 09:42:56,439::API::441::vds::(migrate)
{'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 'vmId':
'2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'}
Thread-3798::DEBUG::2012-09-20
09:42:56,441::vm::122::vm.Vm::(_setupVdsConnection)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Destination server is:
192.168.10.12:54321
Thread-3797::DEBUG::2012-09-20
09:42:56,441::BindingXMLRPC::865::vds::(wrapper) return vmMigrate with
{'status': {'message': 'Migration process starting', 'code': 0}}
Thread-3798::DEBUG::2012-09-20
09:42:56,441::vm::124::vm.Vm::(_setupVdsConnection)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Initiating connection with
destination
Thread-3798::DEBUG::2012-09-20
09:42:56,452::libvirtvm::240::vm.Vm::(_getDiskStats)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Disk hdc stats not available
Thread-3798::DEBUG::2012-09-20
09:42:56,457::vm::170::vm.Vm::(_prepareGuest)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration Process begins
Thread-3798::DEBUG::2012-09-20 09:42:56,475::vm::217::vm.Vm::(run)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration semaphore acquired
Thread-3798::DEBUG::2012-09-20
09:42:56,888::libvirtvm::427::vm.Vm::(_startUnderlyingMigration)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration to qemu+tls://192.168.10.12/system
Thread-3799::DEBUG::2012-09-20 09:42:56,889::libvirtvm::325::vm.Vm::(run)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration downtime thread
started
Thread-3800::DEBUG::2012-09-20 09:42:56,890::libvirtvm::353::vm.Vm::(run)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration monitor
thread
Thread-3798::DEBUG::2012-09-20
09:42:56,903::libvirtvm::340::vm.Vm::(cancel)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::canceling migration downtime
thread
Thread-3798::DEBUG::2012-09-20 09:42:56,904::libvirtvm::390::vm.Vm::(stop)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::stopping migration monitor
thread
Thread-3799::DEBUG::2012-09-20 09:42:56,904::libvirtvm::337::vm.Vm::(run)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration downtime thread
exiting
Thread-3798::ERROR::2012-09-20 09:42:56,905::vm::176::vm.Vm::(_recover)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::operation failed: Failed to
connect to remote libvirt URI qemu+tls://192.168.10.12/system
Thread-3798::ERROR::2012-09-20 09:42:56,977::vm::240::vm.Vm::(run)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 223, in run
  File "/usr/share/vdsm/libvirtvm.py", line 451, in
_startUnderlyingMigration
  File "/usr/share/vdsm/libvirtvm.py", line 491, in f
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line
82, in wrapper
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1034, in
migrateToURI2
libvirtError: operation failed: Failed to connect to remote libvirt URI
qemu+tls://192.168.10.12/system
Thread-3802::DEBUG::2012-09-20
09:42:57,793::BindingXMLRPC::859::vds::(wrapper) client
[192.168.10.10]::call vmGetStats with
('2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86',) {}
Thread-3802::DEBUG::2012-09-20
09:42:57,793::libvirtvm::240::vm.Vm::(_getDiskStats)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Disk hdc stats not available
Thread-3802::DEBUG::2012-09-20
09:42:57,794::BindingXMLRPC::865::vds::(wrapper) return vmGetStats with
{'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Up',
'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid':
'22047', 'displayIp': '192.168.10.13', 'displayPort': u'5912', 'session':
'Unknown', 'displaySecurePort': u'5913', 'timeOffset': '0', 'hash':
'3018874162324753083', 'pauseCode': 'NOERR', 'clientIp': '', 'kvmEnable':
'true', 'network': {u'vnet6': {'macAddr': '00:1a:4a:a8:0a:08', 'rxDropped':
'0', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0',
'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet6'}},
'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'displayType': 'qxl',
'cpuUser': '13.27', 'disks': {u'hdc': {'flushLatency': '0', 'readLatency':
'0', 'writeLatency': '0'}, u'hda': {'readLatency': '6183805',
'apparentsize': '11811160064', 'writeLatency': '0', 'imageID':
'd96d19f6-5a28-4fef-892f-4a04549d4e38', 'flushLatency': '0', 'readRate':
'271.87', 'truesize': '11811160064', 'writeRate': '0.00'}},
'monitorResponse': '0', 'statsAge': '0.77', 'cpuIdle': '86.73',
'elapsedTime': '3941', 'vmType': 'kvm', 'cpuSys': '0.00', 'appsList': [],
'guestIPs': '', 'nice': ''}

Re: [Users] HP Integrated Lights Out 3

2012-09-20 Thread Dmitriy A Pyryakov
Eli Mesika  wrote on 20.09.2012 14:55:41:

> From: Eli Mesika 
> To: Dmitriy A Pyryakov 
> Cc: users@ovirt.org, Itamar Heim 
> Date: 20.09.2012 14:55
> Subject: Re: [Users] HA: Re:  HA: Re:  HA: Re:   HP Integrated Lights Out 3
>
>
>
> - Original Message -
> > From: "Dmitriy A Pyryakov" 
> > To: "Itamar Heim" 
> > Cc: users@ovirt.org
> > Sent: Thursday, September 20, 2012 9:59:34 AM
> > Subject: [Users] HA: Re:  HA: Re:  HA: Re:   HP Integrated Lights Out 3
> >
> >
> >
> >
> >
> >
> > I changed the Fedora 17 hosts to oVirt nodes (first - 2.5.0-2.0.fc17,
> > second - 2.5.1-1.0.fc17). SPM is on 2.5.0-2.0.fc17. ilo3 doesn't work.
> > The options are now presented in vdsm.log.
>
> Can you paste here the call to fenceNode from the vdsm.log, thanks
Of course,

vdsm.log
Thread-1882::DEBUG::2012-09-20 09:02:52,920::API::1024::vds::(fenceNode)
fenceNode(addr=192.168.10.103,port=,agent=ipmilan,user=Administrator,passwd=,action=status,secure=,options=)
Thread-1882::DEBUG::2012-09-20 09:02:53,951::API::1050::vds::(fenceNode) rc
1 in agent=fence_ipmilan
ipaddr=192.168.10.103
login=Administrator
option=status
passwd=
 out Getting status of IPMI:192.168.10.103...Chassis power = Unknown
Failed
 err

engine.log:
2012-09-20 15:02:54,034 INFO  [org.ovirt.engine.core.bll.FencingExecutor]
(ajp--0.0.0.0-8009-5) Executing  Power Management command, Proxy
Host:hyper1.ovirt.com, Agent:ipmilan, Target Host:, Management
IP:192.168.10.103, User:Administrator, Options:lanplus,power_wait=4
2012-09-20 15:02:54,056 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(ajp--0.0.0.0-8009-5) START, FenceVdsVDSCommand(vdsId =
0a268762-02d7-11e2-b750-0011856cf23e, targetVdsId =
c57f5aa0-0301-11e2-8c67-0011856cf23e, action = Status, ip = 192.168.10.103,
port = , type = ipmilan, user = Administrator, password = **, options =
'lanplus,power_wait=4'), log id: 5821013b

> >BindingXMLRPC.py not found on proxy
> > host in /usr/share/vdsm. Only BindingXMLRPC.pyc file. Itamar Heim
> >  wrote on 14.09.2012 13:46:35:
> >
> > > From: Itamar Heim 
> > > To: Darrell Budic 
> > > Cc: Dmitriy A Pyryakov ,
> > > users@ovirt.org
> > > Date: 14.09.2012 13:46
> > > Subject: Re: [Users] HA: Re: HA: Re: HP Integrated Lights Out 3
> > >
> > > On 09/14/2012 02:32 AM, Darrell Budic wrote:
> > > > That fix worked for me (ipmilan wise, anyway. Still no go on ilo,
> > > > but we
> > > > knew that, right?). Thanks Itamar!
> > > >
> > > > Dmitriy, make sure you do this to all your host nodes, it may run
> > > > the
> > > > test from any of them. You'll also want to be sure you delete
> > > > /usr/share/vdsm/BindingXMLRPC.pyc and .pyo, otherwise the
> > > > compiled
> > > > python is likely to still get used. Finally, I did need to
> > > > restart vdsmd
> > > > on all my nodes, "service vdsmd restart" on my Centos 6.3 system.
> > > > Glad
> > > > to know you can do that without causing problems for running vms.
> > > >
> > > > I did notice that the ovirt management GUI still shows 3 Alerts
> > > > in the
> > > > alert area, and they are all "Power Management test failed"
> > > > errors dated
> > > > from the first time their particular node was added to the
> > > > cluster. This
> > > > is even after restarting a vdsmd again and seeing "Host xxx power
> > > > management was verified successfully." in the event log.
> > >
> > > because the engine doesn't go and run 'test power management' all
> > > the
> > > time...
> > > click edit host, power management tab, click 'test'.
> > >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out 3

2012-09-20 Thread Eli Mesika


- Original Message -
> From: "Dmitriy A Pyryakov" 
> To: "Itamar Heim" 
> Cc: users@ovirt.org
> Sent: Thursday, September 20, 2012 9:59:34 AM
> Subject: [Users] HA: Re:  HA: Re:  HA: Re:   HP Integrated Lights Out 3
> 
> 
> 
> 
> 
> 
> I changed the Fedora 17 hosts to oVirt nodes (first - 2.5.0-2.0.fc17,
> second - 2.5.1-1.0.fc17). SPM is on 2.5.0-2.0.fc17. ilo3 doesn't work.
> The options are now presented in vdsm.log.

Can you paste here the call to fenceNode from the vdsm.log, thanks

>BindingXMLRPC.py not found on proxy
> host in /usr/share/vdsm. Only BindingXMLRPC.pyc file. Itamar Heim
>  wrote on 14.09.2012 13:46:35:
> 
> > From: Itamar Heim 
> > To: Darrell Budic 
> > Cc: Dmitriy A Pyryakov ,
> > users@ovirt.org
> > Date: 14.09.2012 13:46
> > Subject: Re: [Users] HA: Re: HA: Re: HP Integrated Lights Out 3
> > 
> > On 09/14/2012 02:32 AM, Darrell Budic wrote:
> > > That fix worked for me (ipmilan wise, anyway. Still no go on ilo,
> > > but we
> > > knew that, right?). Thanks Itamar!
> > > 
> > > Dmitriy, make sure you do this to all your host nodes, it may run
> > > the
> > > test from any of them. You'll also want to be sure you delete
> > > /usr/share/vdsm/BindingXMLRPC.pyc and .pyo, otherwise the
> > > compiled
> > > python is likely to still get used. Finally, I did need to
> > > restart vdsmd
> > > on all my nodes, "service vdsmd restart" on my Centos 6.3 system.
> > > Glad
> > > to know you can do that without causing problems for running vms.
> > > 
> > > I did notice that the ovirt management GUI still shows 3 Alerts
> > > in the
> > > alert area, and they are all "Power Management test failed"
> > > errors dated
> > > from the first time their particular node was added to the
> > > cluster. This
> > > is even after restarting vdsmd again and seeing "Host xxx power
> > > management was verified successfully." in the event log.
> > 
> > because the engine doesn't go and run 'test power management' all
> > the
> > time...
> > click edit host, power management tab, click 'test'.
> > 
> 
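A minimal sketch of the cleanup Darrell describes above, to be run on each host (the paths are the standard vdsm install locations; as he notes, the restart did not disturb running VMs on his CentOS 6.3 setup):

```shell
# Remove the stale compiled bytecode so the patched BindingXMLRPC.py
# is actually loaded, then restart vdsmd to pick it up.
rm -f /usr/share/vdsm/BindingXMLRPC.pyc /usr/share/vdsm/BindingXMLRPC.pyo
service vdsmd restart
```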
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] vdsm/engine do not like Infiniband

2012-09-20 Thread Dan Kenigsberg
On Fri, Sep 14, 2012 at 02:13:37PM -0500, Dead Horse wrote:
> This is a test setup so no worries about future breakage via upgrade.
> I ended up stopping the engine service, dumping the database, and altering
> the table vds_interface --> column "mac_addr", increasing the character
> varying length from 20 to 60.
> I then restored the altered database and went about business as usual.

Please note in the BZ that this is the only change that is required. It
would make pushing this upstream much easier.

Thanks!

> 
> I had to make the edit offline because there are quite a few DB views and
> rules dependent on that table.
> 
> - DHC
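The schema change described above could be expressed as follows (a sketch only, assuming PostgreSQL; as noted, the dependent views and rules on vds_interface may force you to apply the edit to an offline dump rather than the live table):

```sql
-- Widen mac_addr from varchar(20) to varchar(60), as described above.
-- Dependent views/rules on vds_interface may block this on a live
-- database, hence the dump-edit-restore approach used in the report.
ALTER TABLE vds_interface
    ALTER COLUMN mac_addr TYPE character varying(60);
```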
> 
> On Fri, Sep 14, 2012 at 2:51 AM, Itamar Heim  wrote:
> 
> > On 09/14/2012 06:59 AM, Dead Horse wrote:
> >
> >> Bug opened: BZ857294
> >> (https://bugzilla.redhat.com/show_bug.cgi?id=857294
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users