Hi, I have a big problem with oVirt. I use version 4.2.7 with a self-hosted
engine. The problem is that when I try to start the ovirt-engine VM with the
command hosted-engine --vm-start, the output shows
"VM exists and is down, cleaning up and restarting"
when running: hosted-engine --vm-s
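A minimal sketch of the usual check-and-restart sequence for a hosted engine
stuck in this state (these are standard hosted-engine options, but whether
they resolve this depends on why the VM stays Down):

    # check the HA agents' view of the engine VM on each host
    hosted-engine --vm-status

    # force the stale VM down, then start it again
    hosted-engine --vm-poweroff
    hosted-engine --vm-start

    # if the HA services keep getting in the way, restart them first
    systemctl restart ovirt-ha-agent ovirt-ha-broker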
You may have this one instead. I just encountered it last night, still seems to
be an issue.
https://bugzilla.redhat.com/show_bug.cgi?id=1666795
> On Mar 15, 2019, at 4:25 PM, Hesham Ahmed wrote:
>
> I had reported this here: https://bugzilla.redhat.com/show_bug.cgi?id=1687126
>
> Has anyone
Upgrading Gluster from version 3.12 or 4.1 (shipped with oVirt 4.2.x) to 5.3
(in oVirt 4.3) seems to cause this due to a bug in the gluster upgrade process.
It’s an unfortunate side effect of us upgrading oVirt hyper-converged systems.
A fresh install should be fine, but I’d wait for gluster to get
Thanks,
I can see now from "ovn-sbctl show" on the engine machine that 2
of our hosts haven't deployed ovn
● ovn-controller.service - OVN controller daemon
Loaded: loaded (/usr/lib/systemd/system/ovn-controller.service; disabled;
vendor preset: disabled)
Active: inactive (d
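If ovn-controller is merely disabled on those hosts, a sketch of bringing them
back into the overlay (vdsm-tool ovn-config takes the OVN central/engine IP
and the local tunnel IP; ENGINE_IP and HOST_TUNNEL_IP are placeholders for
your environment):

    # enable and start the controller
    systemctl enable --now ovn-controller

    # re-point the host at the OVN southbound database
    vdsm-tool ovn-config ENGINE_IP HOST_TUNNEL_IP

    # verify from the engine machine
    ovn-sbctl show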
I had reported this here: https://bugzilla.redhat.com/show_bug.cgi?id=1687126
Has anyone else faced this with 4.3.1?
Thank you for the information; I will monitor the BZ for updates.
Hi,
something I’m seeing in vdsm.log that I think is gluster-related is the
following message:
2019-03-15 05:58:28,980-0700 INFO (jsonrpc/6) [root] managedvolume not
supported: Managed Volume Not Supported. Missing package os-brick.: ('Cannot
import os_brick',) (caps:148)
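That message is logged at INFO level and just means the optional
managed-volume (cinderlib) integration is unavailable on the host; a quick
check, assuming the CentOS 7 package name python2-os-brick (an assumption,
verify against your repos):

    # is the os-brick library present at all?
    rpm -q python2-os-brick

    # only worth installing if you actually use managed block storage
    yum install python2-os-brick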
I've now got the second host working; the directory /etc/openvswitch/ was
owned by root instead of openvswitch:openvswitch.
Thanks,
Paul S.
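For anyone hitting the same thing, a sketch of the fix Paul describes,
assuming the stock openvswitch user and group from the CentOS packages:

    # restore the expected ownership, then restart the affected services
    chown -R openvswitch:openvswitch /etc/openvswitch
    systemctl restart openvswitch ovn-controller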
Hi,
I have some oVirt clusters in various configurations.
One cluster is based on CentOS 7 hosts, another on ovirt-node-ng.
While the second was successfully updated from 4.2.7 to 4.2.8, attempts to
update hosts of the first one end with:
Error: Package: vdsm-4.20.46-1.el7.x86_64 (ovirt-4.2)
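When yum stops on a vdsm dependency like this, a generic first-pass sketch is
to rule out stale metadata and a missing repo (generic because the rest of the
error message is cut off above):

    # refresh metadata and retry with the full dependency error
    yum clean all
    yum update vdsm

    # confirm the oVirt 4.2 repos are actually enabled on this host
    yum repolist enabled | grep -i ovirt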
That is essentially the behaviour that I've seen. I wonder if perhaps it
could be related to the increased heal activity that occurs on the volumes
during reboots of nodes after updating.
On Fri, Mar 15, 2019 at 12:43 PM Ron Jerome wrote:
> Just FYI, I have observed similar issues where a volum
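One way to tell whether this is just heal traffic settling after reboots is
to watch the pending-heal counts drain (VOLNAME is a placeholder; "info
summary" needs a reasonably recent Gluster release):

    # pending self-heals per brick for one volume
    gluster volume heal VOLNAME info summary

    # overall brick and volume health
    gluster volume status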
On Fri, Mar 15, 2019 at 5:15 PM Miguel Duarte de Mora Barroso
wrote:
>
> On Fri, Mar 15, 2019 at 3:49 PM Staniforth, Paul
> wrote:
> >
> > Thanks,
> > I can see now from "ovn-sbctl show" on the engine machine
> > that 2 of our hosts haven't deployed ovn
> >
> > ● ovn-controller.s
On Fri, Mar 15, 2019 at 3:49 PM Staniforth, Paul
wrote:
>
> Thanks,
> I can see now from "ovn-sbctl show" on the engine machine that
> 2 of our hosts haven't deployed ovn
>
> ● ovn-controller.service - OVN controller daemon
>Loaded: loaded (/usr/lib/systemd/system/ovn-controll
Just FYI, I have observed similar issues where a volume becomes unstable
for a period of time after the upgrade, but then seems to settle down after
a while. I've only witnessed this in the 4.3.x versions. I suspect it's
more of a Gluster issue than an oVirt one, but troubling nonetheless.
On Fri, 15
Hi!
We have run into a problem migrating a VM's disk. During a live disk
migration everything went well until "Removing Snapshot Auto-generated
for Live Storage Migration". The operation started and got stuck in the
"Preparing to merge" stage. The task was visible as an async task for a
coup
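When a merge sticks in "Preparing to merge", it can help to compare what the
engine thinks is running against what VDSM on the SPM host reports (the
vdsm-client verbs are from the standard VDSM Host API; ENGINE_FQDN and the
credentials are placeholders, and -k assumes a self-signed certificate):

    # on the SPM host: storage tasks VDSM knows about
    vdsm-client Host getAllTasksInfo
    vdsm-client Host getAllTasksStatuses

    # engine side: currently tracked jobs via the REST API
    curl -k -s -u admin@internal:PASSWORD -H 'Accept: application/xml' \
        https://ENGINE_FQDN/ovirt-engine/api/jobs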
On Fri, Mar 15, 2019 at 2:45 PM Jagi Sarcilla <
jagi.sarci...@cevalogistics.com> wrote:
> Is there a way to disable the PCID flag during Hosted Engine setup, via the
> command line or via Cockpit? The processor doesn't support the PCID flag.
>
> When the setup is trying to start up the ovirt
Is there a way to disable the PCID flag during Hosted Engine setup, via the
command line or via Cockpit? The processor doesn't support the PCID flag, so
when the setup tries to start the oVirt appliance it won't start because of
the missing processor flag.
Any help is very much appreciated.
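A quick check of which side is at fault before touching the cluster CPU type
(both commands are generic; they only show whether the host and the
hypervisor expose pcid at all):

    # does the host CPU advertise pcid?
    grep -o pcid /proc/cpuinfo | sort -u

    # what libvirt itself exposes
    virsh -r capabilities | grep -i pcid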
Yes, that is correct. I don't know if the upgrade to 4.3.1 itself caused
issues, or if rebooting all hosts again to apply node updates is what started
causing brick issues for me again. I started having similar brick issues
after upgrading to 4.3 originally, which seemed to have
stabili
On Fri, Mar 15, 2019 at 2:00 PM Simon Coter
wrote:
> Hi,
>
> something that I’m seeing in the vdsm.log, that I think is gluster related
> is the following message:
>
> 2019-03-15 05:58:28,980-0700 INFO (jsonrpc/6) [root] managedvolume not
> supported: Managed Volume Not Supported.
On Fri, Mar 15, 2019 at 1:46 PM Strahil Nikolov <
hunter86...@yahoo.com> wrote:
>
> > I along with others had GlusterFS issues after 4.3 upgrades, the "failed
> > to dispatch handler" issue with bricks going down intermittently. After
> > some time it seemed to have corrected itself (at le
On Fri, Mar 15, 2019 at 1:38 PM Jayme
wrote:
> I along with others had GlusterFS issues after 4.3 upgrades, the "failed to
> dispatch handler" issue with bricks going down intermittently. After some
> time it seemed to have corrected itself (at least in my environment) and I
> hadn't
>I along with others had GlusterFS issues after 4.3 upgrades, the "failed to
>dispatch handler" issue with bricks going down intermittently. After some time
>it seemed to have corrected itself (at least in my environment) and I hadn't
>had any brick problems in a while. I upgraded my three no
I along with others had GlusterFS issues after 4.3 upgrades, the "failed to
dispatch handler" issue with bricks going down intermittently. After some
time it seemed to have corrected itself (at least in my environment) and I
hadn't had any brick problems in a while. I upgraded my three node HCI
clu
On Thu, Mar 14, 2019 at 3:04 PM Staniforth, Paul
wrote:
>
> Thanks Miguel,
> if we configure it to connect to a physical network and
> select the Data Centre Network I assume it will create the overlay network
> on top of that logical network.
Let me clarify; the network
Please ignore this one - I'm just too stupid and I didn't realize that
Deletion Protection was enabled.
Strahil
On Friday, March 15, 2019 at 11:27:08 AM GMT+2, Strahil Nikolov
wrote:
Hi Community,
I have the following problem. A VM was created based on a template and after
powerof
On Fri, Mar 15, 2019 at 8:12 AM Strahil Nikolov wrote:
Ok,
I have managed to recover again and no issues are detected this time. I guess
this case is quite rare and nobody has experienced that.
> Hi, can you please explain how you fixed it?
I have set global maintenance again, defined the Ho
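For reference, the global-maintenance sequence being described (standard
hosted-engine options; the remaining recovery steps are cut off in the
message above):

    # stop the HA agents from managing the engine VM
    hosted-engine --set-maintenance --mode=global

    # ...redefine/fix the engine VM, then start it manually
    hosted-engine --vm-start

    # hand control back to the HA agents afterwards
    hosted-engine --set-maintenance --mode=none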
Hi Community,
I have the following problem. A VM was created based on a template, and after
poweroff/shutdown it cannot be removed - the button is greyed out.
Has anyone hit such an issue? Any hint where to look?
Best Regards, Strahil Nikolov
> On 14 Mar 2019, at 18:24, Wood Peter wrote:
>
> Hi,
>
> I need to migrate a few dozen VMs from ovirt-4.1.1 to ovirt-4.2.8.
>
> I did a few following this procedure:
> VMs export -> Detach Export domain -> Attach Export to new ovirt -> Import VMs
you can mark multiple VMs like that, all of
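If selecting a few dozen VMs in the UI gets tedious, the export step is also
scriptable through the REST API (VM_ID, ENGINE_FQDN, the credentials and the
export domain name are all placeholders; -k assumes a self-signed engine
certificate):

    # export one VM to the export storage domain
    curl -k -s -u admin@internal:PASSWORD \
        -H 'Content-Type: application/xml' -X POST \
        -d '<action><storage_domain><name>export</name></storage_domain></action>' \
        https://ENGINE_FQDN/ovirt-engine/api/vms/VM_ID/export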
I am trying to deploy hosted-engine 4.3.2-rc2 on iSCSI.
I put in an IPv4 portal address and targets get discovered. However, they
are returned by the Synology hosts with both IPv4 and IPv6 addresses.
LUN discovery then fails while attempting to connect to the IPv6 address.
I tried hosted-engine --deploy --4 to
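One host-side workaround to test, if the deploy host simply must not see the
IPv6 portals (IFACE is a placeholder; the cleaner fix is disabling IPv6 for
the iSCSI service on the Synology itself):

    # disable IPv6 on the storage-facing interface before deploying
    sysctl -w net.ipv6.conf.IFACE.disable_ipv6=1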
On Fri, Mar 15, 2019 at 8:12 AM Strahil Nikolov
wrote:
> Ok,
>
> I have managed to recover again and no issues are detected this time.
> I guess this case is quite rare and nobody has experienced that.
>
Hi,
can you please explain how you fixed it?
>
> Best Regards,
> Strahil Nikolov
>
> On Wednesday,
Ok,
I have managed to recover again and no issues are detected this time. I guess
this case is quite rare and nobody has experienced that.
Best Regards, Strahil Nikolov
On Wednesday, March 13, 2019 at 1:03:38 PM GMT+2, Strahil Nikolov
wrote:
Dear Simone,
it seems that there is some kin