On 05.02.2016 08:56, Nicolas Ecarnot wrote:
> On 04/02/2016 22:35, Colin Coe wrote:
>> Is the oVirt agent up to date?
>
> yum -y upgrade
> ... [blah blah blah]
> ... reboot
> and then :
>
> # cat /etc/centos-release
> CentOS Linux release 7.2.1511 (Core)
>
> # rpm -qa|grep -i agent
> ovirt-guest
On 07.09.2015 14:44, Patrick Hurrelmann wrote:
> On 07.09.2015 13:54, Dan Kenigsberg wrote:
>> On Mon, Sep 07, 2015 at 11:47:48AM +0200, Patrick Hurrelmann wrote:
>>> On 06.09.2015 11:30, Dan Kenigsberg wrote:
>>>> On Fri, Sep 04, 2015 at 10:26:39AM +0200, Patrick
On 07.09.2015 13:54, Dan Kenigsberg wrote:
> On Mon, Sep 07, 2015 at 11:47:48AM +0200, Patrick Hurrelmann wrote:
>> On 06.09.2015 11:30, Dan Kenigsberg wrote:
>>> On Fri, Sep 04, 2015 at 10:26:39AM +0200, Patrick Hurrelmann wrote:
>>>> Hi all,
>>>>
On 21.11.2014 22:28, Chris Adams wrote:
> I have set up oVirt with hosted engine, on an iSCSI volume. On both
> nodes, the kernel logs the following about every 10 seconds:
>
> Nov 21 15:27:49 node8 kernel: ovirt-ha-broker: sending ioctl 5401 to a
> partition!
>
> Is this a known bug, something
On 02.01.2014 20:12, Dan Ferris wrote:
> Hi,
>
> Has anyone run across this error:
>
> Cannot run VM. Invalid time zone for given OS type.
>
> The OS type for these VMs is set to Linux Other. They were all exported
> from an Ovirt 3.2 cluster and are being reimported into an Ovirt 3.3
> cluster. N
On 25.11.2013 12:13, Gianluca Cecchi wrote:
> On Mon, Nov 25, 2013 at 11:59 AM, Vinzenz Feenstra wrote:
>>
>> This should be fixed now :-)
>> https://admin.fedoraproject.org/updates/ovirt-guest-agent-1.0.8-5.el5
>>
>
> Hi, I get now this on CentOS 5.10 x86_64 system
>
> [g.cecchi@c510 ~]$ sudo /s
On 12.11.2013 15:33, Dan Kenigsberg wrote:
> I suspect you are not interested in "excuses" for each of the failures,
> let us look forwards. My conclusions are:
> - Do not require non-yet-existing rpms. If we require a feature that is
> not yet in Fedora/Centos, we must wait. This is already in e
On 12.11.2013 19:15, Mike Burns wrote:
> On 11/12/2013 03:51 PM, Douglas Schilling Landgraf wrote:
>>
>> Indeed, that's bad. It has been included from a patch only on Fedora
>> koji build rawhide. The others points here already have been answered by
>> others developers. Anyway, we have updated the
On 12.11.2013 11:31, Sandro Bonazzola wrote:
> On 12/11/2013 10:34, Patrick Hurrelmann wrote:
>> Hi all,
>>
>> sorry for this rant, but...
>>
>> I now tried several times to test the beta 3.3.1 rpms, but they can't
>> even be installed in the most
On 12.11.2013 11:07, Assaf Muller wrote:
> Regarding the pep8 breakage - Try updating your pep8.
>
Hi,
thanks for the hint, but according to
http://www.ovirt.org/Vdsm_Developers the latest python-pep8
(python-pep8-1.3.3-3.el6) for el6 is already installed.
And further digging shows that probabl
Hi all,
sorry for this rant, but...
I now tried several times to test the beta 3.3.1 rpms, but most of the
time they can't even be installed. One time it required a future
selinux-policy, although the needed selinux fix was delivered in a much
lower version. Now the rpms have broken requirements
>> Alright, just verified it. A vm started on a 6.3 host can be
>> successfully migrated to the new 6.4 host and then back to any other 6.3
>> host. It just won't migrate a vm started on 6.4 to any host running 6.3.
>
> This surprises me. Engine should have used the same emulatedMachine
> value, i
On 07.03.2013 16:18, Dan Kenigsberg wrote:
> On Thu, Mar 07, 2013 at 03:59:27PM +0100, Patrick Hurrelmann wrote:
>> On 05.03.2013 13:49, Dan Kenigsberg wrote:
>>> On Tue, Mar 05, 2013 at 12:32:31PM +0100, Patrick Hurrelmann wrote:
>>>> On 05.03.20
On 05.03.2013 13:49, Dan Kenigsberg wrote:
> On Tue, Mar 05, 2013 at 12:32:31PM +0100, Patrick Hurrelmann wrote:
>> On 05.03.2013 11:14, Dan Kenigsberg wrote:
>>
>>>>>>
>>>>>> My version of vdsm as stated
On 05.03.2013 11:14, Dan Kenigsberg wrote:
My version of vdsm as stated by Dreyou:
v 4.10.0-0.46 (.15), built from
b59c8430b2a511bcea3bc1a954eee4ca1c0f4861 (branch ovirt-3.1)
I can't see that Ia241b09c96fa16441ba9421f61a2f9a417f0d978 was merged
into the 3.1 branch?
>
On 05.03.2013 10:54, Dan Kenigsberg wrote:
> On Tue, Mar 05, 2013 at 10:21:16AM +0100, Patrick Hurrelmann wrote:
>> On 04.03.2013 21:52, Itamar Heim wrote:
>>> On 04/03/2013 19:03, Patrick Hurrelmann wrote:
>>>> Hi list,
>>>>
>>>> I tested th
On 04.03.2013 21:52, Itamar Heim wrote:
> On 04/03/2013 19:03, Patrick Hurrelmann wrote:
>> Hi list,
>>
>> I tested the upcoming CentOS 6.4 release with my lab installation of
>> oVirt 3.1 and it fails to play well.
>>
>> Background: freshly installe
Hi list,
I tested the upcoming CentOS 6.4 release with my lab installation of
oVirt 3.1 and it fails to play well.
Background: freshly installed CentOS 6.3 host in a Nehalem CPU-type
Cluster with 2 other hosts. Storage is iSCSI. Datacenter and Cluster are
both version 3.1. oVirt 3.1 was installed
On 24.01.2013 18:05, Patrick Hurrelmann wrote:
> Hi list,
>
> after rebooting one host (single host dc with local storage) the local
> storage domain can't be attached again. The host was set to maintenance
> mode and all running VMs were shut down prior to the reboot.
>
On 24.01.2013 18:08, Dafna Ron wrote:
> Before you do this be sure that the export domain is *really* *not
> attached to* *any* *DC*!
> if you look under the storage main tab it should appear as unattached or
> it should not be in the setup or under a DC in any other setup at all.
>
> 1. go to the
Hi list,
in one datacenter I'm facing problems with my export storage. The dc is
of type single host with local storage. On the host I see that the nfs
export domain is still connected, but the engine does not show this and
therefore it cannot be used for exports or detached.
Trying to add attach
On 09.01.2013 15:48, Joern Ott wrote:
>
>
>> -----Original Message-----
>> From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf
>> Of Rick Beldin
>> Sent: Tuesday, 8 January 2013 15:54
>> To: Itamar Heim
>> Cc: users@ovirt.org
>> Subject: Re: [Users] What do you want to see i
On 03.01.2013 17:25, Patrick Hurrelmann wrote:
> On 03.01.2013 17:08, Itamar Heim wrote:
>> Hi Everyone,
>>
>> as we wrap oVirt 3.2, I wanted to check with oVirt users on what they
>> find good/useful in oVirt, and what they would like to see
>> improved/added
On 03.01.2013 23:13, Moran Goldboim wrote:
> On 01/03/2013 07:42 PM, Darrell Budic wrote:
>>
>> On Jan 3, 2013, at 10:25 AM, Patrick Hurrelmann wrote:
>>
>>> On 03.01.2013 17:08, Itamar Heim wrote:
>>>> Hi Everyone,
>>>>
>>>> as
On 03.01.2013 17:08, Itamar Heim wrote:
> Hi Everyone,
>
> as we wrap oVirt 3.2, I wanted to check with oVirt users on what they
> find good/useful in oVirt, and what they would like to see
> improved/added in coming versions?
>
> Thanks,
> Itamar
For me, I'd like to see official rpms for
On 28.09.2012 15:10, Itamar Heim wrote:
> On 09/28/2012 03:04 PM, Patrick Hurrelmann wrote:
>>>> Is there anything I can do to reset that stuck state and bring the VM
>>>> back to live?
>>>>
>>>> Best regards
>>>> Patrick
>>>
>> Is there anything I can do to reset that stuck state and bring the VM
>> back to live?
>>
>> Best regards
>> Patrick
>>
>
> try moving all vm's from that host (migrate them to the other hosts),
> then fence it (or shutdown manually and right click, confirm shutdown)
> to try and release the v
Hi List,
in my test lab the iSCSI SAN crashed and caused some mess. My cluster
has 3 hosts running VMs. The SPM node was fenced and automatically
shut down due to the storage crash. All VMs running on the other 2 hosts
were paused. I recovered the storage and powered on the fenced
node. All V
On 20.09.2012 16:13, Itamar Heim wrote:
> On 09/20/2012 05:09 PM, Patrick Hurrelmann wrote:
>> On 20.09.2012 16:01, Itamar Heim wrote:
>>>> Power management is configured for both nodes. But this might be the
>>>> problem: we use the integrated IPMI over LAN powe
On 20.09.2012 16:01, Itamar Heim wrote:
>> Power management is configured for both nodes. But this might be the
>> problem: we use the integrated IPMI over LAN power management - and
>> if I pull the plug on the machine the power management becomes un-
>> available, too.
>>
>> Could this be the pro