Re: [Users] FW: ovirt 3.2.1 custom bonding options

2013-06-26 Thread Dan Kenigsberg
On Wed, Jun 26, 2013 at 01:56:09AM -0400, Sven Knohsalla wrote:
> Hi,
> 
> for all oVirt users who want to run their nodes in balance-rr mode:
> Go to "Network Interfaces"
> --> "Setup Host Networks"
> 
> Edit the bond interface you want to change
> --> "Bonding mode"--> "Custom:"
> --> "Custom mode" --> mode=0 miimon=100 (depends on your scenario)

Is there any reason not to include miimon=100 within the predefined
"Mode 0" selection? I believe that Engine should.


Re: [Users] FW: ovirt 3.2.1 custom bonding options

2013-06-26 Thread Sven Knohsalla
Hi Dan,

I first tried this without setting miimon, but then the miimon polling interval
was set to 0 ms (oVirt Engine 3.2.1), as seen via /proc/net/bonding/bond[n].

That is why I added the miimon=100 setting to the custom bonding options in
oVirt. As this is a very common setting, I would suggest making it the default
for oVirt too.
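
For anyone who wants to double-check what actually took effect on the node, a
quick sanity check (assuming the bond is named bond0; adjust to your setup) is:

    # show the active bonding mode and MII polling interval
    grep -E 'Bonding Mode|MII Polling Interval' /proc/net/bonding/bond0
    # the same values are exposed through sysfs
    cat /sys/class/net/bond0/bonding/mode /sys/class/net/bond0/bonding/miimon

If miimon shows 0 there, link monitoring is effectively disabled.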

Cheers,
Sven.

Sven Knohsalla | System Administration | Netbiscuits

Office +49 631 68036 433 | Fax +49 631 68036 111  |E-Mail 
s.knohsa...@netbiscuits.com | Skype: netbiscuits.admin 
Netbiscuits GmbH | Europaallee 10 | 67657 | GERMANY

-----Original Message-----
From: Dan Kenigsberg [mailto:dan...@redhat.com]
Sent: Wednesday, June 26, 2013 09:04
To: Sven Knohsalla; Lior Vernia
Cc: users@ovirt.org
Subject: Re: [Users] FW: ovirt 3.2.1 custom bonding options

On Wed, Jun 26, 2013 at 01:56:09AM -0400, Sven Knohsalla wrote:
> Hi,
> 
> for all oVirt users who want to run their nodes in balance-rr mode:
> Go to "Network Interfaces"
> --> "Setup Host Networks"
> 
> Edit the bond interface you want to change
> --> "Bonding mode"--> "Custom:"
> --> "Custom mode" --> mode=0 miimon=100 (depends on your scenario)

Is there any reason not to include miimon=100 within the predefined "Mode 0" 
selection? I believe that Engine should.


Re: [Users] VM Pausing with Storage IO errors

2013-06-26 Thread Dafna Ron

You can look at the VM's qemu log and the libvirt log to get more info:
/var/log/libvirtd.log and /var/log/libvirt/qemu/<vm name>.log.

Also, if the VM is pausing right after it starts, then on the host the VM
runs on, right after the libvirt XML is logged, you can probably see a
reason why the VM was paused.


You can attach all the logs (engine log, vdsm logs from the SPM and from the
host the VM is running on, libvirtd log and the VM's qemu log) and send them
to the list, and I'll go through them to see if I can spot anything.
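
As a rough sketch, those pieces can be pulled together like this on the host
running the VM (the VM name is a placeholder, and the exact log messages may
differ between versions):

    # vdsm side: look for pause/IO error events around the time of the pause
    grep -i 'pause' /var/log/vdsm/vdsm.log | tail -n 20
    # libvirt daemon log and the per-VM qemu log
    tail -n 200 /var/log/libvirtd.log
    tail -n 200 /var/log/libvirt/qemu/<vm-name>.log
    # engine side (on the engine machine)
    grep -i 'paused due to' /var/log/ovirt-engine/engine.log | tail -n 20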


Dafna


On 06/25/2013 04:40 PM, Jason Lawer wrote:

Hi,

I have an oVirt 3.2.1 system with a single VM which pauses soon after
launch with the message "VM has been paused due to storage IO error".
When it first occurred it mentioned an issue with no storage, which is
odd since over half of my 4TB iSCSI pool is free. I have checked the
SAN, the hosts and the engine logs, but I haven't been able to find
anything that indicates the cause.


Is anyone able to at least point me in the direction I should look 
further?


Details :

3x Dell R720 VM Nodes (Dual Proc Sandy bridge Xeon, 96gb ram, boot 
from SAN)


Dell MD 3220i iSCSI storage array

Hosts are running:
  CentOS 6.4
  VDSM: vdsm-4.10.3-0.36.23.el6
  libvirt: libvirt-0.10.2-18.el6_4.8
  kernel: 2.6.32-358.11.1.el6.x86_64
  kvm: 0.12.1.2-2.355.el6.5

Engine is :  oVirt Engine Version: 3.2.2-1.1.43.el6



--
Dafna Ron


Re: [Users] NetApp plugin/package. Is this available or planned ?

2013-06-26 Thread Matt .
Hi,

I have checked this group at the NetApp communities.

It's unclear to me whether this will stay free forever. I think it should,
and I hope it will, since the community can support such modules much
better than someone who pays for it and just wants it to work.

Let's hope for the best.

Matt


2013/6/24 Itamar Heim 

> On 06/24/2013 06:45 PM, Mike Burns wrote:
>
>> On 06/19/2013 06:37 AM, Matt . wrote:
>>
>>> Hi All,
>>>
>>> If I'm right I remember from the past that there was an ovirt-netapp
>>> package available, but I'm not sure. At the moment it's not in the repo,
>>> at least not 3.3
>>>
>>> What is the status of this package/plugin ? I'm able to test this on a
>>> cluster and would like to help here if possible.
>>>
>>> I hope it's there, or at least somewhere.
>>>
>>> Thanks,
>>>
>>> Matt
>>>
>>
>> Hi Jon,
>>
>> Any info you can share on this?
>>
>
> http://captainkvm.com/2013/06/virtual-storage-console-for-rhev/
>
>


Re: [Users] Fedora upgrading from 3.1 to 3.2

2013-06-26 Thread Alex Lourie



On Tue, Jun 25, 2013 at 7:07 PM, Matthew Curry wrote:
Has anyone tried 3.1 to 3.2 on CentOS? Or even 3.1 to 3.3 rc/beta/etc.?
I am going to be forced down that path soon, so I am wondering what kind
of hurt I am in for...

Lol...



Hi Matthew,

I suspect your mileage will be much smoother due to the much smaller
changes between CentOS releases compared to Fedora :-)




Thanks,
Matthew Curry

On 6/25/13 8:40 AM, Alex Lourie wrote:
> Hi Eli,
>
> The following is the error we're getting when upgrading upstream 3.1
> to 3.2:
>
> 2013-06-25 14:52:11::DEBUG::common_utils::434::root:: Executing
> command --> '/usr/share/ovirt-engine/dbscripts/upgrade.sh -s localhost
> -p 5432 -u engine -d engine_2013_06_25_14_49_31'
> 2013-06-25 14:52:16::DEBUG::common_utils::472::root:: output =
> /usr/share/ovirt-engine/dbscripts /usr/share/ovirt-engine/dbscripts
> upgrade script detected a change in Config, View or Stored Procedure...
>
> 2013-06-25 14:52:16::DEBUG::common_utils::473::root:: stderr =
> psql:drop_old_uuid_functions.sql:25: ERROR:  cannot drop function
> uuid_nil() because extension uuid-ossp requires it
> HINT:  You can drop extension uuid-ossp instead.
> CONTEXT:  SQL statement "drop function if exists uuid_nil()"
> PL/pgSQL function __temp_drop_old_uuid_functions() line 10 at SQL
> statement
> psql:create_functions.sql:671: ERROR:  must be owner of function
> uuid_generate_v1
>
> Any idea how we can fix it manually?
>
> Thanks,
> Alex.
>
> On Tue, Jun 25, 2013 at 3:58 PM, Karli Sjöberg wrote:
>> On Tue 2013-06-25 at 14:59 +0300, Moran Goldboim wrote:
>>> On 06/25/2013 02:52 PM, Alex Lourie wrote (quoting his earlier
>>> exchange with Karli Sjöberg):
>>>
>>> Alex: I've written up all we know at [1], please try to follow it (if
>>> you haven't yet) and let us know how it goes. If anything goes wrong,
>>> we will look for the ways to resolve it.
>>>
>>> Karli: Hi Alex! Awesome job, I'm going to test this right away and
>>> let you know how it goes. Wish me luck ;)
>>>
>>> Alex: Good luck! May the force be with you.
>>>
>>> Karli: No no, the source Alex, the source :)  /K
>>>
>>> Alex: Sure, that one too.
>>>
>>> Karli: Well, it wasn't with me for long, I have hit BZ 911939 and
>>> 901786, where fedup fails to find my LVM volumes. In the dracut shell
>>> (to where it gets dropped) I have good output from blkid and lvm
>>> pvs|vgs|lvs, but no nodes are created in /dev/mapper, so the system
>>> cannot boot and the upgrade fails. Do you have any suggestions there
>>> that might help? In the meantime, I'll try upgrading using
>>> http://fedoraproject.org/wiki/Upgrading_Fedora_using_yum instead and
>>> see if that gives better success.
>>>
>>> Alex: Sorry to hear that. Different people have experienced multiple
>>> problems upgrading lvm-based F17 systems to F18. I don't think that
>>> there's a magic solution that would just work; each case is usually
>>> solved on its own. Let us know how you progress with upgrading with
>>> yum.
>>>
>>> i used same article [1] and had a successful go with it - good luck...
>>>
>>> [1]
>>> https://fedoraproject.org/wiki/Upgrading_Fedora_using_yum#Fedora_17_-.3E_Fedora_18
>>
>> Yes, I had much better success following that article, and managed to
>> upgrade Fedora to 18. I had to tune my kernel parameters a little for
>> the postgres upgrade to work, but then engine-upgrade fails just as it
>> did the last time we tried. The log is attached. Hoping to hear back
>> from you soon with ideas on what to try next.
>>
>> --
>> Best regards
>> ---------------------------------------------------------------------
>>
>> Karli Sjöberg
>> Swedish University of Agricultural Sciences
>> Box 7079 (Visiting Address Kronåsvägen 8)
>> S-750 07 Uppsala, Sweden
>> Phone:  +46-(0)18-67 15 66
>> karli.sjob...@slu.se
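
For what it's worth, a minimal sketch of the direction the HINT in the quoted
error points at (dropping the uuid-ossp extension rather than the individual
functions, and fixing ownership of uuid_generate_v1), assuming the engine
database is called "engine" and a postgres superuser is available. This is
only a sketch, not a verified fix, so back up the database first:

    # back up the engine database before touching anything
    pg_dump -U postgres engine > engine-backup.sql
    # the upgrade script tries to drop the old uuid functions; per the HINT,
    # dropping the extension instead may let the script proceed
    psql -U postgres engine -c 'DROP EXTENSION IF EXISTS "uuid-ossp" CASCADE;'
    # the second error suggests the function is owned by postgres rather than
    # the engine user; reassigning ownership is another possible angle
    psql -U postgres engine -c 'ALTER FUNCTION uuid_generate_v1() OWNER TO engine;'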


[Users] suspend state

2013-06-26 Thread Nathanaël Blanchet

Hello,

Each time I suspend any VM on oVirt 3.2, it is slow to suspend (about 2 min)
and very slow to resume (about 5 min). Is this regular behaviour?


--
Nathanaël Blanchet

Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr



Re: [Users] suspend state

2013-06-26 Thread Michal Skrivanek

On Jun 26, 2013, at 12:09 , Nathanaël Blanchet  wrote:

> Hello,
> 
> Each time I suspend any vm on ovirt 3.2, it's long to suspend (2 min) and 
> very long to resume (about 5 min).Is this a regular behaviour?
Hi,
on latest master?
Are you using our new fancy RAM snapshot functionality (there's a checkbox)?

Thanks,
michal

> -- 
> Nathanaël Blanchet
> 
> Supervision réseau
> Pôle exploitation et maintenance
> Département des systèmes d'information
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> 
> blanc...@abes.fr 



Re: [Users] ids sanlock error

2013-06-26 Thread Itamar Heim

On 06/26/2013 04:35 AM, Tony Feldmann wrote:

I was messing around with some things and force removed my DC.  My
cluster is still there with the 2 gluster volumes, however I cannot move
that cluster into a new dc, I just get the following error in engine.log:

2013-06-25 20:26:15,218 ERROR
[org.ovirt.engine.core.bll.AddVdsSpmIdCommand] (ajp--127.0.0.1-8702-2)
[7d1289a6] Command org.ovirt.engine.core.bll.AddVdsSpmIdCommand throw
exception: org.springframework.dao.DuplicateKeyException:
CallableStatementCallback; SQL [{call insertvds_spm_id_map(?, ?, ?)}];
ERROR: duplicate key value violates unique constraint "pk_vds_spm_id_map"
   Detail: Key (storage_pool_id,
vds_spm_id)=(084def30-1e19-4777-9251-8eb1f7569b53, 1) already exists.
   Where: SQL statement "INSERT INTO vds_spm_id_map(storage_pool_id,
vds_id, vds_spm_id)
 VALUES(v_storage_pool_id, v_vds_id, v_vds_spm_id)"


I would really like to get this back into a dc without destroying my
gluster volumes and losing my data.  Can anyone please point me in the
right direction?



But if you removed the DC, moving the cluster is meaningless - you can
just create a new cluster and move the hosts to it?

(the VMs reside in the DC storage domains, not in the cluster)

The above error message looks familiar - I think there was a bug fixed
for it a while back.




On Tue, Jun 25, 2013 at 8:19 AM, Tony Feldmann <trfeldm...@gmail.com> wrote:

I have a 2 node cluster with engine running on one of the nodes.  It
has 2 gluster volumes that replicate between the hosts as its shared
storage.  Last night one of my systems crashed.  It looks like all
of my data is present, however the ids file seems to be corrupt on
my master domain.  I tried to do a hexdump -c on the ids file, but
it just gave an input/output error.  Sanlock.log shows error -5.  Is
there a way to rebuild the ids file, or can I tell ovirt to use the
other domain as the master so I can get back up and running?
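
As an aside, the sanlock CLI can at least show what the daemon and the on-disk
lockspace think is going on. A hedged sketch (the dom_md path below is only an
example of where file-based domains keep the ids file):

    # what the sanlock daemon currently knows about lockspaces and resources
    sanlock client status
    # dump the lockspace records straight from the ids file (path is an example)
    sanlock direct dump /rhev/data-center/mnt/<server:_export>/<sd-uuid>/dom_md/ids
    # reinitialising the lockspace is sometimes suggested for a corrupt ids file,
    # e.g. something like the line below, but only with the domain deactivated,
    # after backing the file up, and ideally after confirming on the list:
    # sanlock direct init -s <sd-uuid>:0:<path-to-ids>:0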









Re: [Users] ids sanlock error

2013-06-26 Thread Tony Feldmann
It won't let me move the hosts as they have gluster volumes in that cluster.


On Wed, Jun 26, 2013 at 6:49 AM, Itamar Heim  wrote:

> On 06/26/2013 04:35 AM, Tony Feldmann wrote:
>
>> [...]
>
> But if you removed the DC, moving the cluster is meaningless - you can
> just create a new cluster and move the hosts to it?
> (the VMs reside in the DC storage domains, not in the cluster)
>
> The above error message looks familiar - I think there was a bug fixed
> for it a while back.
>


Re: [Users] ids sanlock error

2013-06-26 Thread Itamar Heim

On 06/26/2013 02:55 PM, Tony Feldmann wrote:

It won't let me move the hosts as they have gluster volumes in that cluster.


so the cluster is both virt and gluster?




On Wed, Jun 26, 2013 at 6:49 AM, Itamar Heim <ih...@redhat.com> wrote:

    [...]








Re: [Users] ids sanlock error

2013-06-26 Thread Tony Feldmann
yes.


On Wed, Jun 26, 2013 at 6:57 AM, Itamar Heim  wrote:

> On 06/26/2013 02:55 PM, Tony Feldmann wrote:
>
>> It won't let me move the hosts as they have gluster volumes in that
>> cluster.
>>
>
> so the cluster is both virt and gluster?
>
>
>>
>> On Wed, Jun 26, 2013 at 6:49 AM, Itamar Heim <ih...@redhat.com> wrote:
>>
>> [...]
>


Re: [Users] ids sanlock error

2013-06-26 Thread Itamar Heim

On 06/26/2013 02:59 PM, Tony Feldmann wrote:

yes.


what's the content of this table?
vds_spm_id_map

(also of vds_static)

thanks




On Wed, Jun 26, 2013 at 6:57 AM, Itamar Heim <ih...@redhat.com> wrote:

    [...]









Re: [Users] suspend state

2013-06-26 Thread Michal Skrivanek

On Jun 26, 2013, at 13:46 , Nathanaël Blanchet  wrote:

> 
> On 26/06/2013 12:18, Michal Skrivanek wrote:
>> On Jun 26, 2013, at 12:09 , Nathanaël Blanchet  wrote:
>> 
>>> Hello,
>>> 
>>> Each time I suspend any vm on ovirt 3.2, it's long to suspend (2 min) and 
>>> very long to resume (about 5 min).Is this a regular behaviour?
>> Hi,
>> on latest master?
>> Are you using our new fancy RAM snapshot functionality (there's a checkbox)?
>> 
>> Thanks,
>> michal
> it is ovirt 3.2.2 el6, where can I find this checkbox?
Only in master or (future) 3.3 :-)  Sorry, I meant for snapshots, not
suspend/resume.

No, it should not take this long for suspend/resume. High load? Too big a VM?
Can you try with plain virsh?
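
A quick way to compare outside oVirt, on the host where the VM runs (the VM
name is a placeholder; virsh on an oVirt host may ask for the SASL credentials
that vdsm configured):

    # time a suspend (save to file) and a resume (restore) directly through libvirt
    time virsh save <vm-name> /tmp/<vm-name>.mem
    time virsh restore /tmp/<vm-name>.mem

If plain virsh is fast, the slowness is more likely on the oVirt/vdsm side; if
it is just as slow, that points more at storage speed or memory size.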


>>> -- 
>>> Nathanaël Blanchet
>>> 
>>> Supervision réseau
>>> Pôle exploitation et maintenance
>>> Département des systèmes d'information
>>> 227 avenue Professeur-Jean-Louis-Viala
>>> 34193 MONTPELLIER CEDEX 5   
>>> Tél. 33 (0)4 67 54 84 55
>>> Fax  33 (0)4 67 54 84 14
>>> 
>>> blanc...@abes.fr
> 
> -- 
> Nathanaël Blanchet
> 
> Supervision réseau
> Pôle exploitation et maintenance
> Département des systèmes d'information
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
> 



Re: [Users] ids sanlock error

2013-06-26 Thread Tony Feldmann
I am not sure how to retrieve that info.


On Wed, Jun 26, 2013 at 7:00 AM, Itamar Heim  wrote:

> On 06/26/2013 02:59 PM, Tony Feldmann wrote:
>
>> yes.
>>
>
> what's the content of this table?
> vds_spm_id_map
>
> (also of vds_static)
>
> thanks
>
>
>>
>> On Wed, Jun 26, 2013 at 6:57 AM, Itamar Heim <ih...@redhat.com> wrote:
>>
>> [...]
>

Re: [Users] ids sanlock error

2013-06-26 Thread Itamar Heim

On 06/26/2013 03:41 PM, Tony Feldmann wrote:

I am not sure how to retrieve that info.



psql engine postgres -c "select * from vds_static;"
psql engine postgres -c "select * from vds_spm_id_map;"




On Wed, Jun 26, 2013 at 7:00 AM, Itamar Heim <ih...@redhat.com> wrote:

    [...]

Re: [Users] ids sanlock error

2013-06-26 Thread Tony Feldmann
I ended up removing the gluster volumes and then the cluster.  I was a
little frustrated with why I couldn't get it to work and made a hasty
decision.  Thank you very much for your response, but I am basically going
to have to re-build at this point.


On Wed, Jun 26, 2013 at 7:48 AM, Itamar Heim  wrote:

> On 06/26/2013 03:41 PM, Tony Feldmann wrote:
>
>> I am not sure how to retrieve that info.
>>
>>
> psql engine postgres -c "select * from vds_static;"
> psql engine postgres -c "select * from vds_spm_id_map;"
>
>
>
>> On Wed, Jun 26, 2013 at 7:00 AM, Itamar Heim <ih...@redhat.com> wrote:
>>
>> [...]

[Users] bad order of vdisk

2013-06-26 Thread Nathanaël Blanchet

Hi,

My VM has two regular disks, named vda and vdb. I added a third, LUN-based
disk, and now vdb is the LUN and vdc is the second disk... which can be very
annoying. It would be safer to enumerate the disks in the order they were
created, wouldn't it? Or is it a bug?
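
For what it's worth, a couple of hedged ways to see which name ended up where
(the VM name is a placeholder):

    # from the host: target device names and their backing paths, as libvirt sees them
    virsh domblklist <vm-name>
    # inside the guest: virtio disks show up under /dev/disk/by-id with the
    # serial oVirt assigns, which can be matched against the disk IDs in the UI
    ls -l /dev/disk/by-id/ | grep -i virtio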


--
Nathanaël Blanchet

Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr



[Users] oVirt Weekly Meeting Minutes -- 2013-06-26

2013-06-26 Thread Mike Burns
Minutes: 
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-06-26-14.03.html
Minutes (text): 
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-06-26-14.03.txt
Log: 
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-06-26-14.03.log.html



#ovirt: oVirt Weekly Meeting



Meeting started by mburns at 14:03:04 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-06-26-14.03.log.html
.



Meeting summary
---
* agenda and roll call  (mburns, 14:03:17)
  * 3.2 Update Stream  (mburns, 14:03:38)
  * 3.3 Status Update  (mburns, 14:03:43)
  * Infra report  (mburns, 14:03:53)
  * Conferences and Workshops  (mburns, 14:03:59)
  * Other Topics  (mburns, 14:04:04)

* 3.2 Update Stream  (mburns, 14:06:21)
  * mburns didn't finish vdsm -17 push, will finish now  (mburns,
14:06:51)
  * ACTION: mburns still needs to follow up on node build  (mburns,
14:07:07)
  * upgrade 3.1->3.2 documentation being worked out  (mburns, 14:09:23)
  * no code fix  (mburns, 14:09:25)

* 3.3 Status Update  (mburns, 14:14:10)
  * LINK: http://www.ovirt.org/OVirt_3.3_release-management   (mburns,
14:14:21)
  * 3.2 quick async update -- vdsm -17 pushed to stable...  (mburns,
14:17:10)
  * need owner for html5_spice package  (mburns, 14:21:09)
  * need to get F18, F19, EL6 builds for hosting on ovirt.org  (mburns,
14:21:38)
  * Vinzenz Feenstra nominated for ownership  (mburns, 14:22:07)
  * vdsm requires libvirt from virt-preview for 3.3  (mburns, 14:22:58)
  * ACTION: mburns to update ovirt-release rpm to include virt-preview
repo  (mburns, 14:23:20)
  * ACTION: evilissimo to produce EL6 F18 F19 rpms for html5_spice and
deliver to mburns for hosting on ovirt.org  (mburns, 14:24:24)
  * EL6 libvirt already has the feature enabled  (mburns, 14:26:49)
  * ovirt-node will enable virt-preview repo for fedora based builds
going forward (patch submitted this morning)  (mburns, 14:27:14)
  * virt features should be done by end of month  (mburns, 14:28:07)
  * storage features:  one optional feature is unlikely, one Must is
risky, but should make it, all others are looking good  (mburns,
14:30:46)
  * node features -- all green and merged  (mburns, 14:31:26)
  * UX:  frontend refactor is risky, target for end of july  (mburns,
14:33:40)
  * integration:  self-hosted engine is on track  (mburns, 14:34:05)
  * guest-agent is preparing Ubuntu packaging  (mburns, 14:35:09)
  * UX -- GWT(P) upgrade also at risk (target end of July)  (mburns,
14:36:19)
  * gluster -- hooks mgmt in, rest api for hooks not done but on target,
swift service mgmt posted, api for swift is not done  (mburns,
14:38:16)
  * SLA -- scheduler is tight, but ok  (mburns, 14:41:15)
  * LINK:

https://admin.fedoraproject.org/updates/FEDORA-2013-3792/spice-html5-0.1.2-2.fc18?_csrf_token=b63ad0b0ce0bbfc5133e389bef6899aa015d7e38
(evilissimo, 14:42:12)
  * LINK:

https://admin.fedoraproject.org/updates/FEDORA-EPEL-2013-0644/spice-html5-0.1.2-2.el6?_csrf_token=b63ad0b0ce0bbfc5133e389bef6899aa015d7e38
(evilissimo, 14:42:32)
  * SLA -- QoS is questionable for design, all others in  (mburns,
14:43:33)
  * infra status -- everything on track  (mburns, 14:43:40)
  * html5_spice in Fedora already  (mburns, 14:45:16)
  * network -- all on track for freeze  (mburns, 14:47:31)
  * Summary:  almost everything is on track  (mburns, 14:47:42)
  * At Risk:  UX features -- Refactor and GWT(P) upgrade  (mburns,
14:48:24)
  * tight, but ok for now:  SLA -- scheduler, Storage -- manage storage
connections  (mburns, 14:49:27)
  * All others are on track  (mburns, 14:49:36)

* Infra update  (mburns, 14:50:56)
  * LINK: http://lists.ovirt.org/pipermail/infra/2013-June/003424.html
(ewoud, 14:53:00)
  * work ongoing with slaves @ rackspace  (mburns, 14:53:54)
  * will be updating f17 slave to f19  (mburns, 14:54:05)
  * quaid looking for someone to step as infra project coordinator
(mburns, 14:54:24)
  * AGREED: drop f17 slave in favor of f19 slave, add new slaves for
f18/f19 soon, can add f17 if needed later  (mburns, 14:57:34)

* Conferences and Workshops  (mburns, 14:58:51)
  * planning underway for next Workshop (no location announced yet)
(mburns, 15:00:56)
  * planning underway for oVirt developer meeting in Edinburgh as part
of KVM Forum (late October)  (mburns, 15:01:25)
  * would like to get as many devs as possible there for planning the
future  (mburns, 15:01:43)
  * Theron presented oVirt and Gluster integration at Red Hat Summit
recently  (mburns, 15:02:39)
  * including a live demo in the oVirt booth  (mburns, 15:02:47)
  * mburns has a talk submitted for CloudOpen NA  (mburns, 15:03:04)
  * submission deadline for LinuxCon/CloudOpen EU (in Edinburgh) is
21-July  (mburns, 15:04:34)
  * CFP also open for KVM Forum  (mburns, 15:04:55)
  * no closing date listed, so get submissions in ASAP  (mburns,
15:0

[Users] too much debugging in ovirt-node

2013-06-26 Thread Winfried de Heiden
Hi all,

Using ovirt-node-iso-2.6.1-20120228.fc18.iso (2012 seems to be a typo,
it must be 2013?), logging is done with far too much debugging.

Changing all the "DEBUG" entries to "WARNING" in /etc/vdsm/logger.conf and
running "persist /etc/vdsm/logger.conf" solved it for /var/log/vdsm/vdsm.log.

However, /var/log/libvirtd.log also shows tons of debug messages. The
file /etc/libvirt/libvirtd.conf shows:

listen_addr="0.0.0.0"
unix_sock_group="kvm"
unix_sock_rw_perms="0770"
auth_unix_rw="sasl"
host_uuid="06304eff-1c91-4e1e-86e2-d773621dcab3"
log_outputs="1:file:/var/log/libvirtd.log"
ca_file="/etc/pki/vdsm/certs/cacert.pem"
cert_file="/etc/pki/vdsm/certs/vdsmcert.pem"
key_file="/etc/pki/vdsm/keys/vdsmkey.pem"

Changing log_outputs="1:file:/var/log/libvirtd.log" to
log_outputs="3:file:/var/log/libvirtd.log"

with persist (or unpersist first, then persist) doesn't help. After a
reboot, log_outputs="1:file:/var/log/libvirtd.log" appears again.

How to decrease the log level for libvirtd?

Kind regards,

Winfried




Re: [Users] engine-update problem

2013-06-26 Thread Juan Pablo Lorier
Hi Sandro,

I'm finally back at the office and have started working on the upgrade.
Your suggestion worked, but I have a different problem now.
This is the output of engine-upgrade:

Updating service configuration...   [ ERROR ]

 **Error: Upgrade failed, rolling back**
 **Reason: Can't find web config file**

In the upgrade log I found this:

2013-06-26 13:58:24::ERROR::engine-upgrade::::root:: Traceback (most
recent call last):
  File "/usr/bin/engine-upgrade", line 1101, in main
runFunc([utils.updateEngineSysconfig], MSG_INFO_UPDATE_ENGINE_SYSCONFIG)
  File "/usr/bin/engine-upgrade", line 604, in runFunc
func()
  File "/usr/share/ovirt-engine/scripts/common_utils.py", line 1695, in
updateEngineSysconfig
raise Exception("Can't find web config file")
Exception: Can't find web config file

I don't know which web config file it is referring to.
Do you want me to attach the full log?
Regards,

Juan Pablo



[Users] remove vm after failed import

2013-06-26 Thread Tony Feldmann
Hello, I was wondering if anyone knows how I could get the disk image of a
failed VM import unlocked so that I can delete the VM. The host that was
doing the import crashed partway through, and the VM image has been locked
ever since. It also looks like the VM disk's storage domain is listed as the
export domain rather than the data domain on the system housing the VMs.
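
In case it helps the discussion, a hedged, read-only way to see what the
engine database thinks about the disk (assuming the usual schema where
imagestatus 2 means LOCKED and 1 means OK; names and values should be
double-checked before changing anything, and the database backed up first):

    # list disk images the engine currently considers locked
    psql engine postgres -c "select image_guid, imagestatus from images where imagestatus = 2;"
    # the commonly circulated manual unlock is to set such rows back to 1 (OK),
    # but only after a backup and once the import is definitely not still running:
    # psql engine postgres -c "update images set imagestatus = 1 where image_guid = '<image-guid>';"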