[ovirt-users] Network Security / Separation
Hi folks,

I am currently looking for a way to isolate each VM's network traffic so that no VM can sniff another's traffic. Currently I am playing around with the Neutron integration, which gives me more question marks than answers for now (even the documentation seems to be incomplete/outdated).

Is there any other solution, one that does not require creating a new VLAN for each VM, to make sure that no one can sniff anyone else's traffic?

Cheers,

Juergen

--
Sent from the Delta quadrant using Borg technology!
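A non-VLAN building block worth evaluating alongside Neutron is libvirt's nwfilter framework. The following is only a sketch: the commands and filter names are plain libvirt (clean-traffic and friends ship with libvirt itself), while how a given oVirt release exposes them through VNIC profiles is left to verify. Note the bundled filters stop MAC/IP/ARP spoofing, which removes the usual tricks for redirecting a neighbour's traffic on a bridge; they do not encrypt or fully isolate anything.

-- snip --
# list the filters libvirt ships with (clean-traffic, no-mac-spoofing,
# no-ip-spoofing, no-arp-spoofing, ...)
virsh nwfilter-list

# inspect what one of them actually does
virsh nwfilter-dumpxml clean-traffic

# attached per interface in the domain XML, it looks like:
#   <interface type='bridge'>
#     ...
#     <filterref filter='clean-traffic'/>
#   </interface>
-- snip --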
Re: [Users] Migration from NFS-backed VMs to iSCSI LUNs without exporting them on the oVirt Export Share
On Fri, Jan 17, 2014 at 11:48 PM, Itamar Heim wrote:
> On 01/09/2014 10:16 AM, squadra wrote:
> > Hello folks,
> >
> > since the NFS discussion reminded me that I still have some VMs left to migrate... here is a somewhat special question, and I am open to non-best-practice, hacky solutions, too.
> >
> > The situation is:
> >
> > - 2 oVirt clusters - same DC - one NFS-backed, one iSCSI-backed
> > - the NFS share and the iSCSI LUNs are exported from the same physical machine
> > - both clusters use the same filer, just different LUNs/protocols
> >
> > So I thought about something simple like just moving the VM folder from A to B and doing a little bit of database voodoo?
> >
> > Has anyone done something like this yet? Or is storage live migration already working for this? The docs didn't tell me very much about it.
>
> are the VMs thin or pre-allocated? with or without snapshots?

Mostly thin, but I wouldn't care about losing the thin feature, since the underlying filer runs ZFS with compression. And no, no snapshots through oVirt. Have you got an ugly hack suggestion for me? :D

Edit: whoops, sorry, that wasn't meant to go only to you, Itamar. I will take more care!

--
Sent from the Delta quadrant using Borg technology!
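For the disk data itself, a hack along these lines would boil down to a qemu-img copy from the NFS domain's image file onto the LV backing a freshly created iSCSI disk of the same size. A minimal sketch follows; every UUID, path, and VG name is a placeholder, and the engine-side "database voodoo" is deliberately left out:

-- snip --
# placeholders, not real oVirt paths
SRC=/rhev/data-center/mnt/filer:_export/SD_UUID/images/IMG_UUID/VOL_UUID
DST=/dev/VG_NAME/LV_UUID

# check the actual source format first
qemu-img info "$SRC"

# copy a thin qcow2 image onto the raw logical volume
qemu-img convert -f qcow2 -O raw "$SRC" "$DST"
-- snip --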
Re: [Users] Experience with low-cost NFS storage as VM storage?
Try it; I bet you will get better latency results with a properly configured iSCSI target/initiator. By the way, FreeBSD 10 includes a kernel-based iSCSI target now, which has been working well for me for some time: easy to set up and performing well (not to forget ZFS ;) ).

On Thu, Jan 9, 2014 at 9:20 AM, Markus Stockhausen wrote:
> > From: Karli Sjöberg [karli.sjob...@slu.se]
> > Sent: Thursday, 9 January 2014 08:48
> > To: squa...@gmail.com
> > Cc: users@ovirt.org; Markus Stockhausen
> > Subject: Re: [Users] Experience with low-cost NFS storage as VM storage?
> >
> > On Thu, 2014-01-09 at 08:35 +0100, squadra wrote:
> > > Right, try multipathing with NFS :)
> >
> > Yes, that's what I meant, maybe I could have been more clear about that, sorry. Multipathing (and the load-balancing it brings) is what really separates iSCSI from NFS.
> >
> > What I'd be interested in knowing is at what breaking point not having multipathing becomes an issue. I mean, we might not have such a big VM park, about 300-400 VMs. But so far we are running without multipathing, using good ole' NFS, with no performance issues this far. It would be good to know beforehand if we're headed for a wall of some sort, and about "when" we'll hit it...
> >
> > /K
>
> If that is really a concern for the initial question about a "low-cost NFS solution", LACP on the NFS filer side will mitigate the bottleneck from too many hypervisors.
>
> My personal headache is the I/O performance of QEMU. More details here:
> http://lists.nongnu.org/archive/html/qemu-discuss/2013-12/msg00028.html
> Or to make it short: each I/O in a VM gets a penalty of 370us. That is much more than in ESX environments.
>
> I would be interested whether this is the same in iSCSI setups.
>
> Markus

--
Sent from the Delta quadrant using Borg technology!
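For reference, the FreeBSD 10 kernel target (ctld) really is only a short config away. A minimal sketch, assuming a ZFS zvol as backing store (pool, zvol, and IQN names are made up):

-- snip --
cat > /etc/ctl.conf <<'EOF'
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 0.0.0.0
}

target iqn.2014-01.org.example:ovirt0 {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /dev/zvol/tank/ovirt-lun0
    }
}
EOF

# enable and start the daemon
echo 'ctld_enable="YES"' >> /etc/rc.conf
service ctld start
-- snip --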
[Users] Migration from NFS-backed VMs to iSCSI LUNs without exporting them on the oVirt Export Share
Hello folks,

since the NFS discussion reminded me that I still have some VMs left to migrate... here is a somewhat special question, and I am open to non-best-practice, hacky solutions, too.

The situation is:

- 2 oVirt clusters - same DC - one NFS-backed, one iSCSI-backed
- the NFS share and the iSCSI LUNs are exported from the same physical machine
- both clusters use the same filer, just different LUNs/protocols

So I thought about something simple like just moving the VM folder from A to B and doing a little bit of database voodoo?

Has anyone done something like this yet? Or is storage live migration already working for this? The docs didn't tell me very much about it.

Cheers,

Juergen

--
Sent from the Delta quadrant using Borg technology!
Re: [Users] Experience with low-cost NFS storage as VM storage?
Another point is that correctly configured multipathing is far more robust when a single path fails. On the software side, I have seen countless NFS servers that were unresponsive because of lockd issues, for example, and only a reboot fixed it, since lockd is kernel-based. Another con for me is that NFS HA is rather complicated, with maybe a 50/50 chance that a failover works without any clients dying.

Don't get me wrong, NFS is great for small setups: easy to set up, easy to scale, and I use it widely for content sharing and home directories. But I am cured as far as VM images on NFS are concerned.

On Thu, Jan 9, 2014 at 8:48 AM, Karli Sjöberg wrote:
> On Thu, 2014-01-09 at 08:35 +0100, squadra wrote:
> > Right, try multipathing with NFS :)
>
> Yes, that's what I meant, maybe I could have been more clear about that, sorry. Multipathing (and the load-balancing it brings) is what really separates iSCSI from NFS.
>
> What I'd be interested in knowing is at what breaking point not having multipathing becomes an issue. I mean, we might not have such a big VM park, about 300-400 VMs. But so far we are running without multipathing, using good ole' NFS, with no performance issues this far. It would be good to know beforehand if we're headed for a wall of some sort, and about "when" we'll hit it...
>
> /K
>
> > On Jan 9, 2014 8:30 AM, "Karli Sjöberg" wrote:
> > > On Thu, 2014-01-09 at 07:10 +0000, Markus Stockhausen wrote:
> > > > From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of squadra [squa...@gmail.com]
> > > > Sent: Wednesday, 8 January 2014 17:15
> > > > To: users@ovirt.org
> > > > Subject: Re: [Users] Experience with low-cost NFS storage as VM storage?
> > > >
> > > > Better go for iSCSI or something else... I would avoid NFS for VM hosting. FreeBSD 10 delivers a kernel iSCSI target now, which works great so far. Or go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution.
> > > >
> > > > Cheers,
> > > >
> > > > Juergen
> > >
> > > That is usually a matter of taste and the available environment. The minimal differences in performance usually only show up if you drive the storage to its limits. I guess you could help Sven better if you had some hard facts why to favour iSCSI.
> > >
> > > Best regards.
> > >
> > > Markus
> >
> > The only technical difference I can think of is the iSCSI-level load-balancing. With NFS you set up the network with LACP and let that load-balance for you (and you should probably do that with iSCSI as well, but you don't strictly have to). I think it has to do with a chance of trying to go beyond the capacity of one network interface at the same time, from one host (higher bandwidth), that makes people try iSCSI instead of plain NFS. I have tried that but was never able to achieve that effect, so in our situation there's no difference. Comparing them both in benchmarks, there was no performance difference at all, at least for our storage systems, which are based on FreeBSD.
> >
> > /K

--
Sent from the Delta quadrant using Borg technology!
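To illustrate the multipathing argument concretely: with two portals on the filer, the Linux initiator side is only a handful of commands. Addresses and IQN below are examples, not taken from this thread:

-- snip --
# log into the same target over two portals
iscsiadm -m discovery -t sendtargets -p 10.0.0.1
iscsiadm -m node -T iqn.2014-01.org.example:ovirt0 -p 10.0.0.1:3260 --login
iscsiadm -m node -T iqn.2014-01.org.example:ovirt0 -p 10.0.1.1:3260 --login

# one multipath device, two paths; a single-path outage keeps I/O running
multipath -ll
-- snip --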
Re: [Users] Experience with low-cost NFS storage as VM storage?
Right, try multipathing with NFS :)

On Jan 9, 2014 8:30 AM, "Karli Sjöberg" wrote:
> On Thu, 2014-01-09 at 07:10 +0000, Markus Stockhausen wrote:
> > > From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of squadra [squa...@gmail.com]
> > > Sent: Wednesday, 8 January 2014 17:15
> > > To: users@ovirt.org
> > > Subject: Re: [Users] Experience with low-cost NFS storage as VM storage?
> > >
> > > Better go for iSCSI or something else... I would avoid NFS for VM hosting. FreeBSD 10 delivers a kernel iSCSI target now, which works great so far. Or go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution.
> > >
> > > Cheers,
> > >
> > > Juergen
> >
> > That is usually a matter of taste and the available environment. The minimal differences in performance usually only show up if you drive the storage to its limits. I guess you could help Sven better if you had some hard facts why to favour iSCSI.
> >
> > Best regards.
> >
> > Markus
>
> The only technical difference I can think of is the iSCSI-level load-balancing. With NFS you set up the network with LACP and let that load-balance for you (and you should probably do that with iSCSI as well, but you don't strictly have to). I think it has to do with a chance of trying to go beyond the capacity of one network interface at the same time, from one host (higher bandwidth), that makes people try iSCSI instead of plain NFS. I have tried that but was never able to achieve that effect, so in our situation there's no difference. Comparing them both in benchmarks, there was no performance difference at all, at least for our storage systems, which are based on FreeBSD.
>
> /K
Re: [Users] Experience with low-cost NFS storage as VM storage?
There are already enough articles on the web about NFS problems related to locking, latency, etc. Eh, stacking one protocol onto another to fix a problem, and then maybe one more to glue them together... Google for the SUSE PDF "Why NFS sucks"; I don't agree with the whole paper, NFS has its place, too. But not as a production filer for VMs.

Cheers,

Juergen, the NFS lover

On Jan 9, 2014 8:10 AM, "Markus Stockhausen" wrote:
> > From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of squadra [squa...@gmail.com]
> > Sent: Wednesday, 8 January 2014 17:15
> > To: users@ovirt.org
> > Subject: Re: [Users] Experience with low-cost NFS storage as VM storage?
> >
> > Better go for iSCSI or something else... I would avoid NFS for VM hosting. FreeBSD 10 delivers a kernel iSCSI target now, which works great so far. Or go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution.
> >
> > Cheers,
> >
> > Juergen
>
> That is usually a matter of taste and the available environment. The minimal differences in performance usually only show up if you drive the storage to its limits. I guess you could help Sven better if you had some hard facts why to favour iSCSI.
>
> Best regards.
>
> Markus
Re: [Users] Experience with low-cost NFS storage as VM storage?
Better go for iSCSI or something else... I would avoid NFS for VM hosting. FreeBSD 10 delivers a kernel iSCSI target now, which works great so far. Or go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution.

Cheers,

Juergen

On Wed, Jan 8, 2014 at 2:34 PM, noc wrote:
> On 8-1-2014 13:18, Sven Kieske wrote:
> > PS: Bonus question: does anyone utilize the NFS servers also as compute nodes?
>
> We do, temporarily. It is NOT recommended :-) because:
> - you can't update your NFS server without shutting down all VMs
> - a myriad of other reasons
>
> Still, I did a reboot of our NFS server to update all nodes/engine from 3.2.2 to 3.3.2. How? I made a script which did a virsh suspend on each VM, which freezes all its I/O, and then ran yum update/reboot on the NFS server. It worked, but it's not good for your stress levels.
>
> Joop

--
Sent from the Delta quadrant using Borg technology!
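A rough reconstruction of the kind of script Joop describes, to be run on each hypervisor before taking the NFS server down (assumptions: direct virsh access on the hosts, and that every running VM should be frozen):

-- snip --
#!/bin/sh
# suspend every running VM, freezing its I/O
VMS=$(virsh list --name)
for vm in $VMS; do
    virsh suspend "$vm"
done

# ... update/reboot the NFS server here, wait until it is back ...

# resume the same set of VMs
for vm in $VMS; do
    virsh resume "$vm"
done
-- snip --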
Re: [Users] virtio-rng / crypto inside VMs
haveged is worth mentioning as a pretty good alternative solution: http://www.issihosts.com/haveged/

Cheers,

Juergen

On Fri, Dec 13, 2013 at 9:32 AM, Sven Kieske wrote:
> Answering myself: it seems virtio-rng will be in 3.4:
> https://bugzilla.redhat.com/show_bug.cgi?id=977079
>
> But I don't find it in the planning:
>
> https://docs.google.com/spreadsheet/ccc?key=0AuAtmJW_VMCRdHJ6N1M3d1F1UTJTS1dSMnZwMF9XWVE&usp=sharing#gid=0
>
> Nevertheless, it would be cool if someone could give some advice on how to handle entropy until 3.4 gets released (and I have time to upgrade).
>
> On 13.12.2013 09:09, Sven Kieske wrote:
> > Hi,
> >
> > I'm just wondering: how is the state of the virtio-rng implementation?
> >
> > I'm asking because I need to regenerate SSH host keys in newly deployed VMs.
> >
> > (I seem to be the only person, or everybody else has found the solution, or nobody thinks about security, or a mixture of the above?)
> >
> > Additionally, I found no real guidance on how many entropy bits should be available to generate a secure key inside a VM, besides these numbers:
> >
> > http://www.ietf.org/rfc/rfc1750.txt suggests about 128 bits of entropy for a single cryptographic operation.
> >
> > Various other sources mention ranges between 100-200 or even at least 4096 entropy bits.
> >
> > Would it be a workaround to add a virtual sound device and use this one for /dev/random? (But it would be useless if you have no real sound hardware, I guess.)
> >
> > Additionally, when you want to regenerate host keys in e.g. Ubuntu, 3 keys get generated, so you need even more entropy to be on the safe side.
> >
> > If you have any links to best practices or some good news regarding the state of virtio-rng, that would be awesome.
> >
> > Currently my VMs have around 130-160 entropy bits available.
>
> --
> Mit freundlichen Grüßen / Regards
>
> Sven Kieske
>
> Systemadministrator
> Mittwald CM Service GmbH & Co. KG
> Königsberger Straße 6
> 32339 Espelkamp
> T: +49-5772-293-100
> F: +49-5772-293-333
> https://www.mittwald.de

--
Sent from the Delta quadrant using Borg technology!
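Until virtio-rng arrives, the haveged route is only a few commands inside the guest. A sketch, with the assumption that haveged comes from the EPEL repository on EL6-era systems:

-- snip --
# check the pool; values around 130-160 match what Sven reports
cat /proc/sys/kernel/random/entropy_avail

# install and enable haveged (assumes EPEL is configured)
yum install haveged
service haveged start
chkconfig haveged on

# entropy_avail should now stay in the thousands
cat /proc/sys/kernel/random/entropy_avail
-- snip --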
Re: [Users] oVirt Engine memory usage going crazy since 3.3 upgrade
Same here: everything is still running smoothly and flawlessly after several days, and memory usage is still at a normal level. No more lags, no more filled 8 GB swap partition. Again, a huge thanks to Sahina :)

On Sat, Dec 14, 2013 at 9:49 PM, Jiří Sléžka wrote:
> Thanks, I had the same issue. This workaround seems to fix it.
>
> On 12.12.2013 08:42, Sahina Bose wrote:
> > [Adding ovirt users]
> >
> > On 12/12/2013 12:09 PM, squadra wrote:
> > > Hello Sahina,
> > >
> > > CentOS 6.5
> > >
> > > [root@ovirt:~]$ rpm -aq |grep jdk
> > > java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64
> > > java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
> > > [root@ovirt:~]$
> > >
> > > 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
> > >
> > > It's running inside VMware ESX, if that matters. Also, previously it was a 3.2 installation based on dreyou's RPMs; I don't know if that could be the root cause?
> > >
> > > Currently the VM is able to use up to 8 GB with 4 cores, running alone on the host. But the engine manages to eat the 8 GB plus 12 GB of swap.
> > >
> > > pmap output is here (not going crazy right now, since I restarted it a few minutes ago):
> > >
> > > http://pastebin.com/fuNEZqMA
> >
> > This looks very similar to https://bugzilla.redhat.com/show_bug.cgi?id=1028966.
> >
> > Please use the workaround in comment 27.
> >
> > > Cheers,
> > >
> > > Jürgen
> > >
> > > On Thu, Dec 12, 2013 at 7:32 AM, Sahina Bose <sab...@redhat.com> wrote:
> > > > On 12/12/2013 11:43 AM, squadra wrote:
> > > > > Hi,
> > > > >
> > > > > I upgraded my oVirt 3.2 a few days ago. With 3.2, memory usage was within a level I rated as normal (~3 GB used, for controlling 6 nodes with about 30 VMs on them). After the 3.3.1 upgrade, the engine goes crazy and uses about 3x the memory after some hours. Right after starting ovirt-engine everything is fine; after about 20 hours the host starts swapping, etc.
> > > > >
> > > > > Here's the process in "crazy state":
> > > > >
> > > > > ovirt 8749 107 59.2 10592340 4776040 ? Sl Dec10 2189:35 ovirt-engine -server -XX:+TieredCompilation -Xms1g -Xmx1g -XX:PermSize=256m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Dsun.rmi.dgc.client.gcInterva
> > > > >
> > > > > and here right now, after a restart:
> > > > >
> > > > > ovirt 28350 43.7 8.7 3895304 705964 ? Sl 07:09 1:09 ovirt-engine -server -XX:+TieredCompilation -Xms1g -Xmx1g -XX:PermSize=256m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Dsun.rmi.dgc.client.gcInterval
> > > > >
> > > > > Also, the web interface is getting really laggy after a few hours of runtime (already before the host starts swapping).
> > > > >
> > > > > Has anyone got an idea what is causing this? Which logs should I provide to dig further into this?
> > > >
> > > > Which OS are you running on? And which version of Java?
> > > >
> > > > Could you attach the output of pmap?
> > > >
> > > > > Thanks & cheers,
> > > > >
> > > > > Juergen

--
Sent from the Delta quadrant using Borg technology!
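For anyone hitting the same leak, a small observation loop (a sketch, not from the thread) makes the growth easy to document before and after applying the workaround:

-- snip --
# sample the engine's memory footprint every 10 minutes
while true; do
    PID=$(pgrep -f ovirt-engine | head -n 1)
    date >> /var/tmp/engine-mem.log
    ps -o rss=,vsz= -p "$PID" >> /var/tmp/engine-mem.log
    pmap -x "$PID" | tail -n 1 >> /var/tmp/engine-mem.log   # "total" line
    sleep 600
done
-- snip --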
[Users] oVirt Engine memory usage going crazy since 3.3 upgrade
Hi,

I upgraded my oVirt 3.2 a few days ago. With 3.2, memory usage was within a level I rated as normal (~3 GB used, for controlling 6 nodes with about 30 VMs on them). After the 3.3.1 upgrade, the engine goes crazy and uses about 3x the memory after some hours. Right after starting ovirt-engine everything is fine; after about 20 hours the host starts swapping, etc.

Here's the process in "crazy state":

ovirt 8749 107 59.2 10592340 4776040 ? Sl Dec10 2189:35 ovirt-engine -server -XX:+TieredCompilation -Xms1g -Xmx1g -XX:PermSize=256m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Dsun.rmi.dgc.client.gcInterva

and here right now, after a restart:

ovirt 28350 43.7 8.7 3895304 705964 ? Sl 07:09 1:09 ovirt-engine -server -XX:+TieredCompilation -Xms1g -Xmx1g -XX:PermSize=256m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Dsun.rmi.dgc.client.gcInterval

Also, the web interface is getting really laggy after a few hours of runtime (already before the host starts swapping).

Has anyone got an idea what is causing this? Which logs should I provide to dig further into this?

Thanks & cheers,

Juergen

--
Sent from the Delta quadrant using Borg technology!
Re: [Users] CentOS upgrade from 3.2 to 3.3
Same here, it worked absolutely flawlessly. From me also a huge thank you!

Cheers,

Juergen

On Mon, Dec 2, 2013 at 3:32 PM, Karli Sjöberg wrote:
> Hi all!
>
> Just wanted to express my deepest admiration for the progress of this project. You may or may not remember my quest for upgrading from 3.1 to 3.2, and just how a seemingly trivial thing can turn out to be quite the ordeal...
>
> This time around, we followed this post by dreyou on how to go from his 3.2 repo to the ovirt.org stable 3.3 repo:
> http://wiki.dreyou.org/dokuwiki/doku.php?id=ovirt_rpm_start33
>
> And the whole process _just worked_! Something that actually made me even more nervous, left me feeling like "OK, so this is going just too well, I've got a bad feeling about this...":) But no, nothing ever blew us out of the sky, and the entire process of upgrading the engine and six hosts went through in just under an hour from start to finish! The best part is of course that our customers' VMs never even noticed; a completely live upgrade. Awesome!
>
> I mean, I worked my ass off trying to go from 3.1 to 3.2 (I've actually put a mental block on the actual time it took, but probably six months of planning and trial and error), so to have this done in under an hour... All I can say is THANK YOU! Both to you developers of oVirt, and a special thank you to dreyou for posting such a well-written manual. Thank you. Thank you. Thank you:)
>
> --
> Med Vänliga Hälsningar
> ---
> Karli Sjöberg
> Swedish University of Agricultural Sciences
> Box 7079 (Visiting Address Kronåsvägen 8)
> S-750 07 Uppsala, Sweden
> Phone: +46-(0)18-67 15 66
> karli.sjob...@slu.se

--
Sent from the Delta quadrant using Borg technology!
Re: [Users] Trouble upgrading (was: oVirt 3.3.1 release)
http://www.ovirt.org/OVirt_3.3.1_release_notes will answer all your questions.

On Thu, Nov 21, 2013 at 5:32 PM, Bob Doolittle wrote:
> Yay!
>
> Congratulations to all of the oVirt team.
>
> I am having trouble locating upgrade instructions, however. There's nothing in the release notes.
>
> I discovered through trial and error that running "engine-setup" again handles the upgrade of the engine.
>
> But I don't know how to upgrade my RHEL 6.4 KVM host. When I try to run "yum update" it fails due to dependency errors, notably in:
> glusterfs
> qemu
> vdsm
>
> and also due to a multilib version error between vdsm-python 4.12 and 4.13.
>
> What's the proper upgrade procedure for a host?
>
> Thanks,
> Bob
>
> On 11/21/2013 10:43 AM, Kiril Nesenko wrote:
> > The oVirt development team is very happy to announce the general availability of oVirt 3.3.1 as of November 21st 2013. This release solidifies oVirt as a leading KVM management application and open source alternative to VMware vSphere.
> >
> > oVirt is available now for Fedora 19 and Red Hat Enterprise Linux 6.4 (or similar).
> >
> > See the release notes [1] for a list of the new features and bugs fixed.
> >
> > [1] http://www.ovirt.org/OVirt_3.3.1_release_notes
> >
> > - Kiril

--
Sent from the Delta quadrant using Borg technology!
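For the host side, a rough sketch of what such an upgrade could look like; the package globs are an assumption, and the host should be put into maintenance via the webadmin UI first:

-- snip --
# clear cached metadata; stale mirrors are a common cause of
# multilib clashes like the vdsm-python 4.12/4.13 one above
yum clean all

# update the virtualization stack (glob list is illustrative)
yum update vdsm\* qemu\* glusterfs\*

# restart vdsm (or reboot), then activate the host again in the UI
service vdsmd restart
-- snip --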
Re: [Users] Compatibility: RHEV-M (registered, valid entitlement) + oVirt/RHEV-H for the lab part
On Fri, Nov 1, 2013 at 6:43 AM, Itamar Heim wrote:
> On 11/01/2013 02:36 AM, squadra wrote:
> > Hi Itamar,
> >
> > Yep, I expected some problems, but I didn't plan to mix RHEV-H and oVirt nodes within the same cluster. As long as I don't have to expect other clusters to break, I would give it a try. Subscriptions for the lab/staging environment are nonsense in this case, just not needed. And a second management node, based on oVirt, is also overkill.
> >
> > Let's see if it kills my pets.
>
> this isn't only about same-cluster compatibility, rather potentially trying to use a verb which doesn't exist in ovirt-3.2 and is assumed by rhev 3.2. so YMMV, a lot.
> for *sure* do not try to put them in the same DC, or your rhev cluster/DC may suffer.

Okay, stop scaring me! Sadly it's the infrastructure without which our company, simply said, is completely down. So I won't try...

But something else came to my mind: we are a Red Hat Ready partner, so I have access to NFR subscriptions. Will I get in trouble with Red Hat support if I use NFR subscriptions in a lab area controlled by a regularly subscribed setup? I know: ask Red Hat support, and totally off-topic. Sorry for that, but maybe someone knows or has their own experience, especially when it comes to a support case and Red Hat wants a log collection.

> > cheers,
> >
> > On Fri, Nov 1, 2013 at 12:35 AM, Itamar Heim <ih...@redhat.com> wrote:
> > > On 10/31/2013 12:02 AM, squadra wrote:
> > > > hi,
> > > >
> > > > I run a fully subscribed RHEV 3.2 cluster, meaning RHEV-M + RHEV-H nodes. But, as most or all of you might understand, it would be desirable to use the free open-source version for an extra pair of host systems to be used as a lab/testing environment before deploying our stuff live.
> > > >
> > > > So, has anyone got experience with the compatibility of oVirt Node images and RHEV-M (3.2 in my case)? Does this simply work?
> > > >
> > > > cheers,
> > > >
> > > > juergen
> > >
> > > rhev-m makes assumptions on the compatibility level. not all rhev versions are 1:1 with ovirt versions, as some features get disabled or backported to rhev.
> > > I wouldn't recommend mixing the two together.

--
Sent from the Delta quadrant using Borg technology!
Re: [Users] Compatibility: RHEV-M (registered, valid entitlement) + oVirt/RHEV-H for the lab part
Hi Itamar,

Yep, I expected some problems, but I didn't plan to mix RHEV-H and oVirt nodes within the same cluster. As long as I don't have to expect other clusters to break, I would give it a try. Subscriptions for the lab/staging environment are nonsense in this case, just not needed. And a second management node, based on oVirt, is also overkill.

Let's see if it kills my pets.

Cheers,

On Fri, Nov 1, 2013 at 12:35 AM, Itamar Heim wrote:
> On 10/31/2013 12:02 AM, squadra wrote:
> > hi,
> >
> > I run a fully subscribed RHEV 3.2 cluster, meaning RHEV-M + RHEV-H nodes. But, as most or all of you might understand, it would be desirable to use the free open-source version for an extra pair of host systems to be used as a lab/testing environment before deploying our stuff live.
> >
> > So, has anyone got experience with the compatibility of oVirt Node images and RHEV-M (3.2 in my case)? Does this simply work?
> >
> > cheers,
> >
> > juergen
>
> rhev-m makes assumptions on the compatibility level. not all rhev versions are 1:1 with ovirt versions, as some features get disabled or backported to rhev.
> I wouldn't recommend mixing the two together.

--
Sent from the Delta quadrant using Borg technology!
[Users] Compatibility: RHEV-M (registered, valid entitlement) + oVirt/RHEV-H for the lab part
hi,

I run a fully subscribed RHEV 3.2 cluster, meaning RHEV-M + RHEV-H nodes. But, as most or all of you might understand, it would be desirable to use the free open-source version for an extra pair of host systems to be used as a lab/testing environment before deploying our stuff live.

So, has anyone got experience with the compatibility of oVirt Node images and RHEV-M (3.2 in my case)? Does this simply work?

cheers,

juergen

--
Sent from the Delta quadrant using Borg technology!
Re: [Users] so, what do you want next in oVirt?
I would also vote for EqualLogic support, or at least the possibility to set the iSCSI configuration to "manual", which should include the option to disable the "must use multipathd" requirement. Not every iSCSI SAN uses it...

On Wed, Sep 11, 2013 at 7:02 AM, Wagner, Kai wrote:
> Hi all,
>
> What about a live snapshot delete function? It's great to create live snapshots, but for business-critical VMs it's also necessary to delete online snapshots.
>
> Greetz
>
> -----Original Message-----
> From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] on behalf of Baptiste AGASSE
> Sent: Tuesday, 10 September 2013 17:58
> To: Itamar Heim
> Cc: users@ovirt.org
> Subject: Re: [Users] so, what do you want next in oVirt?
>
> Hi all,
>
> ----- Original Message -----
> > From: "Itamar Heim"
> > To: users@ovirt.org
> > Sent: Tuesday, 20 August 2013 23:19:16
> > Subject: [Users] so, what do you want next in oVirt?
> >
> > earlier in the year we did a survey for feature requests / improvements / etc.
> >
> > since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
> >
> > below [1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).
> >
> > Thanks,
> > Itamar
> >
> > [1] from the top 12:
> > V Allow disk resize
> > V Integrate Nagios/Zabbix monitoring - via a UI plugin
> > V Highly available engine - via hosted engine [2]
> > V Open vSwitch integration - via Neutron integration
> > X Allow cloning VMs without template
> > ? Enable hypervisor upgrade/updates through engine [3]
> > V Allow engine on an oVirt hosted VM - via hosted engine [2]
> > V Enable guest configuration (root password, SSH keys, network) via guest agent in engine - via cloud-init
> > X Integrate v2v into engine
> > ? Bond/extend ovirtmgmt with a second network for HA/increased bandwidth [4]
> > X Integrate scheduling of snapshots and VM export for backups in engine [5]
> > V Spice - support Google Chrome - via MIME-based launch
> >
> > Other items mentioned in the previous survey which should be covered by now:
> > - Fix timeout when adding local host during all-in-one configuration
> > - Fix engine set-up when SELinux is disabled
> > - Provide packages for el6 (CentOS, Red Hat Enterprise Linux)
> > - Allow multiple VMs to be deployed from the same template at the same time
> > - ISO domains on local/GlusterFS
> > - Show IP addresses in Virtual Machines -> Network Interfaces
> > - OpenStack Quantum support (now called Neutron)
> > - noVNC support
> > - Support spice.html5 and websocket proxy
> > - Add other guest OSes to list
> > - Port oVirt guest agent to Ubuntu [6]
> > - SLA - Allow resource time-sharing
> > - Spice - Mac client (via MIME-based launch)
> > - Spice - port XPI plug-in to Windows (not sure this will happen, but MIME-based launch allows using Firefox now)
> > - Spice - client for Ubuntu/Debian (should be covered via MIME-based launch)
> >
> > [2] hosted engine is in active development, but not released yet.
> > [3] host update is supported, but not for general yum update.
> > [4] a lot of improvements were done in this space, but I'm not sure if they cover this exact use case.
> > [5] backup api is now being pushed to master, and orchestration of backups should probably happen via 3rd-party backup vendors?
> > [6] I'm not sure packaging exists yet, but Ubuntu is covered for the basic functionality of the guest agent.
>
> Thanks for this thread!
>
> - iSCSI EqualLogic SAN support, or use of standard iSCSI tools/configuration
> - SSO for the web UI and CLI (IPA integration)
> - PXE boot for nodes
> - VM dependencies on startup
>
> Have a nice day.
>
> Regards.
>
> ---
> Baptiste
>
> it-novum GmbH
>
> i. A. Kai Wagner
> Consultant
>
> Tel: +49 (661) 103-762
> Fax: +49 (661) 103-17762
> kai.wag...@it-novum.com
> it-novum GmbH * Edelzeller Straße 44 * 36043 Fulda * http://www.it-novum.com

--
Sent from the Delta quadrant using Borg technology!
Re: [Users] CentOS 6.4 + oVirt 3.2 + NFS Backend Problems
On Mon, Jul 29, 2013 at 1:46 PM, Karli Sjöberg wrote:
> On Mon, 2013-07-29 at 13:26 +0200, squadra wrote:
> > Hi Karli,
> >
> > I already thought I was the only one with that combination ;)
>
> Well, I happen to be using Fedora for the engine/hosts, but when it comes to the NFS server, why settle for anything less, right? :) I imagine you're in it for the same reason as me; "the last word in filesystems"...

Exactly :)

> > On Mon, Jul 29, 2013 at 1:11 PM, Karli Sjöberg wrote:
> > > On Wed, 2013-07-24 at 23:35 +0200, squadra wrote:
> > > > Maybe I found a workaround on the NFS server side, an option for the mountd service:
> > > >
> > > >     -S    Tell mountd to suspend/resume execution of the nfsd threads whenever the exports list is being reloaded. This avoids intermittent access errors for clients that do NFS RPCs while the exports are being reloaded, but introduces a delay in RPC response while the reload is in progress. If mountd crashes while an exports load is in progress, mountd must be restarted to get the nfsd threads running again, if this option is used.
> > > >
> > > > So far I was able to reload the exports list twice without any randomly suspended VM. Let's see whether this is a real solution or I just got lucky twice.
> > >
> > > It would seem we are in the same boat :) Actually I hadn't thought about it before, but you're right; issuing a "service mountd reload" does pause a large number of VMs, frickin' annoying really. I mean, the NFS server doesn't care what or whom it's serving; you could be creating a new export for a completely different system, without even having oVirt in mind, before customers start to call, wondering why their VMs have stopped responding!?
> >
> > Exactly the same here.
> >
> > > I actually tried that "-S" but it didn't work for me at all, and looking at the man page for mountd, there's no mention of it either, even though we are presumably running the same version:
> > > # uname -r
> > > 9.1-RELEASE
> > >
> > > Or are you perhaps tracking -STABLE, and there's a minor difference there?
> >
> > I am tracking -STABLE, but the man page of mountd on a 9.1-STABLE (snapshot release) also shows the -S parameter:
> >
> > 9.1-STABLE FreeBSD 9.1-STABLE #0: Sun Jul 7 10:53:46 UTC 2013 r...@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
> >
> > 9.2-PRERELEASE FreeBSD 9.2-PRERELEASE #6: Thu Jul 18 02:41:57 CEST 2013 root@filer1.intern.
>
> OK, so we're not using the same versions, -STABLE != -RELEASE, and I only use -RELEASE. But that explains it. I guess I can wait for 9.2-RELEASE to get rid of that nuisance. Thanks for the info!

Just checked the changes in -stable; here we go...

http://svnweb.freebsd.org/base/stable/9/usr.sbin/mountd/mountd.c?revision=243739&view=markup

9.2 is not so far away :)

Cheers,

Juergen

--
Sent from the Delta quadrant using Borg technology!
Re: [Users] CentOS 6.4 + oVirt 3.2 + NFS Backend Problems
Hi Karli,

I already thought I was the only one with that combination ;)

On Mon, Jul 29, 2013 at 1:11 PM, Karli Sjöberg wrote:
> On Wed, 2013-07-24 at 23:35 +0200, squadra wrote:
> > Maybe I found a workaround on the NFS server side, an option for the mountd service:
> >
> >     -S    Tell mountd to suspend/resume execution of the nfsd threads whenever the exports list is being reloaded. This avoids intermittent access errors for clients that do NFS RPCs while the exports are being reloaded, but introduces a delay in RPC response while the reload is in progress. If mountd crashes while an exports load is in progress, mountd must be restarted to get the nfsd threads running again, if this option is used.
> >
> > So far I was able to reload the exports list twice without any randomly suspended VM. Let's see whether this is a real solution or I just got lucky twice.
>
> It would seem we are in the same boat :) Actually I hadn't thought about it before, but you're right; issuing a "service mountd reload" does pause a large number of VMs, frickin' annoying really. I mean, the NFS server doesn't care what or whom it's serving; you could be creating a new export for a completely different system, without even having oVirt in mind, before customers start to call, wondering why their VMs have stopped responding!?

Exactly the same here.

> I actually tried that "-S" but it didn't work for me at all, and looking at the man page for mountd, there's no mention of it either, even though we are presumably running the same version:
> # uname -r
> 9.1-RELEASE
>
> Or are you perhaps tracking -STABLE, and there's a minor difference there?

I am tracking -STABLE, but the man page of mountd on a 9.1-STABLE (snapshot release) also shows the -S parameter:

9.1-STABLE FreeBSD 9.1-STABLE #0: Sun Jul 7 10:53:46 UTC 2013 r...@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64

9.2-PRERELEASE FreeBSD 9.2-PRERELEASE #6: Thu Jul 18 02:41:57 CEST 2013 root@filer1.intern.

Both systems provide -S for mountd, and so far I haven't had any more problems. Let's see if it keeps going well.

> > But I am still interested in parameters that make VDSM more tolerant of short interruptions. Instantly suspending a VM after such a short "outage" is not very nice.
>
> +1!
>
> /Karli

Cheers,

Juergen

--
Sent from the Delta quadrant using Borg technology!
Re: [Users] CentOS 6.4 + oVirt 3.2 + NFS Backend Problems
Maybe I found a workaround on the NFS server side, an option for the mountd service:

    -S    Tell mountd to suspend/resume execution of the nfsd threads whenever the exports list is being reloaded. This avoids intermittent access errors for clients that do NFS RPCs while the exports are being reloaded, but introduces a delay in RPC response while the reload is in progress. If mountd crashes while an exports load is in progress, mountd must be restarted to get the nfsd threads running again, if this option is used.

So far I was able to reload the exports list twice without any randomly suspended VM. Let's see whether this is a real solution or I just got lucky twice.

But I am still interested in parameters that make VDSM more tolerant of short interruptions. Instantly suspending a VM after such a short "outage" is not very nice.

On Wed, Jul 24, 2013 at 11:04 PM, squadra wrote:
> Hi folks,
>
> I have a setup running with the following specs:
>
> 4 VM hosts - CentOS 6.4 - latest oVirt 3.2 from dreyou
>
> vdsm-xmlrpc-4.10.3-0.36.23.el6.noarch
> vdsm-cli-4.10.3-0.36.23.el6.noarch
> vdsm-python-4.10.3-0.36.23.el6.x86_64
> vdsm-4.10.3-0.36.23.el6.x86_64
> qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.5.x86_64
> qemu-kvm-rhev-0.12.1.2-2.355.el6.5.x86_64
> qemu-img-rhev-0.12.1.2-2.355.el6.5.x86_64
> gpxe-roms-qemu-0.9.7-6.9.el6.noarch
>
> The management node is also running the latest 3.2 from dreyou:
>
> ovirt-engine-cli-3.2.0.10-1.el6.noarch
> ovirt-engine-jbossas711-1-0.x86_64
> ovirt-engine-tools-3.2.1-1.41.el6.noarch
> ovirt-engine-backend-3.2.1-1.41.el6.noarch
> ovirt-engine-sdk-3.2.0.9-1.el6.noarch
> ovirt-engine-userportal-3.2.1-1.41.el6.noarch
> ovirt-engine-setup-3.2.1-1.41.el6.noarch
> ovirt-engine-webadmin-portal-3.2.1-1.41.el6.noarch
> ovirt-engine-dbscripts-3.2.1-1.41.el6.noarch
> ovirt-engine-3.2.1-1.41.el6.noarch
> ovirt-engine-genericapi-3.2.1-1.41.el6.noarch
> ovirt-engine-restapi-3.2.1-1.41.el6.noarch
>
> The VMs run from a FreeBSD 9.1 NFS server, which works absolutely flawlessly until I need to reload the /etc/exports file on the NFS server. For this, the NFS server itself doesn't need to be restarted; just the mountd daemon is HUP'ed.
>
> But after sending a HUP to mountd, oVirt immediately thinks there was a problem with the storage backend and suspends some VMs at random. Luckily those VMs can be resumed instantly without further issues.
>
> The VM hosts don't show any NFS-related errors, so I expect vdsm or the engine to be checking the NFS server continuously.
>
> The only thing I can find in the vdsm.log of an affected host is:
>
> -- snip --
> Thread-539::DEBUG::2013-07-24 22:29:46,935::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> Thread-539::DEBUG::2013-07-24 22:29:46,935::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> Thread-539::DEBUG::2013-07-24 22:29:46,935::task::957::TaskManager.Task::(_decref) Task=`9332cd24-d899-4226-b0a2-93544ee737b4`::ref 0 aborting False
> libvirtEventLoop::INFO::2013-07-24 22:29:55,142::libvirtvm::2509::vm.Vm::(_onAbnormalStop) vmId=`244f6c8d-bc2b-4669-8f6d-bd957222b946`::abnormal vm stop device virtio-disk0 error eother
> libvirtEventLoop::DEBUG::2013-07-24 22:29:55,143::libvirtvm::3079::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`244f6c8d-bc2b-4669-8f6d-bd957222b946`::event Suspended detail 2 opaque None
> libvirtEventLoop::INFO::2013-07-24 22:29:55,143::libvirtvm::2509::vm.Vm::(_onAbnormalStop) vmId=`244f6c8d-bc2b-4669-8f6d-bd957222b946`::abnormal vm stop device virtio-disk0 error eother
> -- snip --
>
> I am a little bit at a dead end currently, since reloading an NFS server's export table isn't an unusual task, and everything is working as expected; oVirt just seems way too picky.
>
> Is there any possibility to make this check a little bit more tolerant?
>
> I tried setting "sd_health_check_delay = 30" in vdsm.conf, but this didn't change anything.
>
> Has anyone got an idea how I can get rid of this annoying problem?
>
> Cheers,
>
> Juergen

--
Sent from the Delta quadrant using Borg technology!
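On the FreeBSD server, persisting the workaround could look like this (assuming a mountd new enough to know -S, i.e. the 9.2/-STABLE builds discussed in this thread):

-- snip --
# preserve any flags already passed to mountd (e.g. -r) when editing
echo 'mountd_flags="-S"' >> /etc/rc.conf
service mountd restart

# from now on, reloading exports should no longer produce the
# intermittent NFS RPC errors that suspend VMs on the oVirt hosts
service mountd reload
-- snip --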
[Users] CentOS 6.4 + oVirt 3.2 + NFS Backend Problems
Hi folks,

I have a setup running with the following specs:

4 VM hosts - CentOS 6.4 - latest oVirt 3.2 from dreyou

vdsm-xmlrpc-4.10.3-0.36.23.el6.noarch
vdsm-cli-4.10.3-0.36.23.el6.noarch
vdsm-python-4.10.3-0.36.23.el6.x86_64
vdsm-4.10.3-0.36.23.el6.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-img-rhev-0.12.1.2-2.355.el6.5.x86_64
gpxe-roms-qemu-0.9.7-6.9.el6.noarch

The management node is also running the latest 3.2 from dreyou:

ovirt-engine-cli-3.2.0.10-1.el6.noarch
ovirt-engine-jbossas711-1-0.x86_64
ovirt-engine-tools-3.2.1-1.41.el6.noarch
ovirt-engine-backend-3.2.1-1.41.el6.noarch
ovirt-engine-sdk-3.2.0.9-1.el6.noarch
ovirt-engine-userportal-3.2.1-1.41.el6.noarch
ovirt-engine-setup-3.2.1-1.41.el6.noarch
ovirt-engine-webadmin-portal-3.2.1-1.41.el6.noarch
ovirt-engine-dbscripts-3.2.1-1.41.el6.noarch
ovirt-engine-3.2.1-1.41.el6.noarch
ovirt-engine-genericapi-3.2.1-1.41.el6.noarch
ovirt-engine-restapi-3.2.1-1.41.el6.noarch

The VMs run from a FreeBSD 9.1 NFS server, which works absolutely flawlessly until I need to reload the /etc/exports file on the NFS server. For this, the NFS server itself doesn't need to be restarted; just the mountd daemon is HUP'ed.

But after sending a HUP to mountd, oVirt immediately thinks there was a problem with the storage backend and suspends some VMs at random. Luckily those VMs can be resumed instantly without further issues.

The VM hosts don't show any NFS-related errors, so I expect vdsm or the engine to be checking the NFS server continuously.

The only thing I can find in the vdsm.log of an affected host is:

-- snip --
Thread-539::DEBUG::2013-07-24 22:29:46,935::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-539::DEBUG::2013-07-24 22:29:46,935::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-539::DEBUG::2013-07-24 22:29:46,935::task::957::TaskManager.Task::(_decref) Task=`9332cd24-d899-4226-b0a2-93544ee737b4`::ref 0 aborting False
libvirtEventLoop::INFO::2013-07-24 22:29:55,142::libvirtvm::2509::vm.Vm::(_onAbnormalStop) vmId=`244f6c8d-bc2b-4669-8f6d-bd957222b946`::abnormal vm stop device virtio-disk0 error eother
libvirtEventLoop::DEBUG::2013-07-24 22:29:55,143::libvirtvm::3079::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`244f6c8d-bc2b-4669-8f6d-bd957222b946`::event Suspended detail 2 opaque None
libvirtEventLoop::INFO::2013-07-24 22:29:55,143::libvirtvm::2509::vm.Vm::(_onAbnormalStop) vmId=`244f6c8d-bc2b-4669-8f6d-bd957222b946`::abnormal vm stop device virtio-disk0 error eother
-- snip --

I am a little bit at a dead end currently, since reloading an NFS server's export table isn't an unusual task, and everything is working as expected; oVirt just seems way too picky.

Is there any possibility to make this check a little bit more tolerant?

I tried setting "sd_health_check_delay = 30" in vdsm.conf, but this didn't change anything.

Has anyone got an idea how I can get rid of this annoying problem?

Cheers,

Juergen

--
Sent from the Delta quadrant using Borg technology!
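The vdsm.conf experiment from the post above, expressed as a sketch; whether sd_health_check_delay belongs in the [irs] section of this vdsm build is an assumption to verify against the config defaults shipped with your version:

-- snip --
# append on each host, then restart vdsm
# (section name [irs] is an assumption, see note above)
cat >> /etc/vdsm/vdsm.conf <<'EOF'
[irs]
sd_health_check_delay = 30
EOF
service vdsmd restart
-- snip --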