or even
> worse, weeks.
>
> I am now running on CentOS 7 (manual install) with a manual Gluster 7.0
> installation and the current oVirt. So far so good.
>
> Time will tell :)
>
> On 24/02/2020 18:11, Strahil Nikolov wrote:
> > On February 24, 2020 5:10:40 PM GMT+02:00, Hesham
My issue is with Gluster 6.7 (the default with oVirt 4.3.7) as is the case
with Christian. I still have the failing volume and disks and can share any
information required.
On Mon, Feb 24, 2020 at 6:21 PM Strahil Nikolov
wrote:
> On February 24, 2020 1:55:34 PM GMT+02:00, Hesham Ahmed
>
new gluster volume and copy the disks from the
failing volume as root to resolve this.
Did you create a bug report in Bugzilla for this?
Regards,
Hesham Ahmed
On Wed, Feb 5, 2020 at 1:01 AM Christian Reiss
wrote:
> Thanks for replying,
>
> What I just wrote Stahil was:
>
>
>
; Indivar Nair
>
> On Tue, Mar 26, 2019 at 8:48 PM Hesham Ahmed wrote:
>
>> Create a new Cluster in oVirt and disable its "Virt Service" while
>> enabling the "Gluster Service". Then add the gluster nodes to this
>> Cluster and you will be able t
Create a new Cluster in oVirt and disable its "Virt Service" while
enabling the "Gluster Service". Then add the gluster nodes to this
Cluster and you will be able to manage them using oVirt without using
them for virtualization.
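For reference, here is a minimal sketch of doing the same through the oVirt Python SDK; everything below (engine URL, credentials, cluster name, CPU type, data center name) is a placeholder assumption, not taken from this thread:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- replace with your engine URL and credentials.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

clusters_service = connection.system_service().clusters_service()

# Equivalent of unchecking "Virt Service" and checking "Gluster Service" in the UI:
clusters_service.add(
    types.Cluster(
        name='gluster-only',
        virt_service=False,
        gluster_service=True,
        cpu=types.Cpu(
            architecture=types.Architecture.X86_64,
            type='Intel Conroe Family',
        ),
        data_center=types.DataCenter(name='Default'),
    ),
)

connection.close()

Once such a cluster exists, the gluster nodes can be added to it like any other host and managed (volumes, bricks, snapshots) without being used for virtualization.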
On Tue, Mar 26, 2019 at 5:43 PM Indivar Nair wrote:
>
> Hi All,
>
>
')[0]
> vm_service = vms_service.vm_service(vm.id)
> # Find the host:
> hosts_service = connection.system_service().hosts_service()
> host = hosts_service.list(search='name=myhost')[0]
>
>
>
>
> On Mon, Mar 25, 2019 at 2:49 PM Hesham Ahmed wrote:
>
>
I don't think there is a pre-installed CLI tool for export to OVA;
however, you can use this:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/export_vm_as_ova.py
Make sure you change the Engine URL, username, password, VM and Host
values to match your requirements.
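As a rough sketch of what that amounts to (placeholder URL, credentials, VM name 'myvm', host name 'myhost' and target directory; not the original poster's values):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # engine API URL
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

system_service = connection.system_service()

# Find the virtual machine to export:
vms_service = system_service.vms_service()
vm = vms_service.list(search='name=myvm')[0]
vm_service = vms_service.vm_service(vm.id)

# Find the host that will write the OVA file:
hosts_service = system_service.hosts_service()
host = hosts_service.list(search='name=myhost')[0]

# Export the VM as an OVA to a directory on that host
# (available since engine/SDK 4.2):
vm_service.export_to_path_on_host(
    host=types.Host(id=host.id),
    directory='/tmp',
    filename='myvm.ova',
    wait=True,
)

connection.close()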
On Mon, Mar 25
We're using the upload_disk.py from the oVirt SDK examples:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py
chmod +x upload_disk.py
Change the engine URL/user/pass on lines 108-110, copy the ca.pem file from
the engine (/etc/pki/.../apache-ca.pem) to the working directory and finally s
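For illustration, the connection block being edited looks roughly like this (placeholder values; it assumes the apache-ca.pem copied from the engine was saved locally as ca.pem):

import ovirtsdk4 as sdk

# These are the values to adapt before running upload_disk.py:
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # engine URL
    username='admin@internal',                          # engine user
    password='password',                                # engine password
    ca_file='ca.pem',  # CA certificate copied from the engine host
)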
it last night, still
> seems to be an issue.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1666795
>
> > On Mar 15, 2019, at 4:25 PM, Hesham Ahmed wrote:
> >
> > I had reported this here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1687126
> >
I had reported this here: https://bugzilla.redhat.com/show_bug.cgi?id=1687126
Has anyone else faced this with 4.3.1?
olume creation from within Cockpit without any issues.
On Sun, Mar 10, 2019 at 2:55 PM Strahil wrote:
>
> Check if you have a repo called sac-gluster-ansible.
>
> Best Regards,
> Strahil Nikolov
> On Mar 10, 2019 08:21, Hesham Ahmed wrote:
> >
> > On a new 4.3.1 oVirt N
On a new 4.3.1 oVirt Node installation, when trying to deploy HCI
(also when trying to add a new gluster volume to existing clusters)
using Cockpit, an error is displayed "gluster-ansible-roles is not
installed on Host. To continue deployment, please install
gluster-ansible-roles on Host and try ag
Spice HTML5 was removed many versions back. I believe it was no longer
being maintained and there wasn't much interest.
On Thu, Oct 11, 2018 at 11:15 AM wrote:
>
> Hello!
> Strange, but I have no spice-html5 option in the VM console settings.
> https://prnt.sc/l4qz00
> Should I add a spice proxy for t
bonded interfaces (2 x 1G each), with one
dedicated for gluster, or get a dedicated 10G NIC for gluster.
On Thu, Sep 27, 2018 at 4:00 PM Hesham Ahmed wrote:
>
> You can install any CentOS compatible custom software on oVirt nodes
> without much trouble.
> On Thu, Sep 27, 2018 at 3:06 PM S
You can install any CentOS compatible custom software on oVirt nodes
without much trouble.
On Thu, Sep 27, 2018 at 3:06 PM Stefano Danzi wrote:
>
> Hi!
>
> I need to install HP SSA and HP SHM on hosts and I don't know if this is
> supported on oVirt node.
>
> Il 27/09/2
Unless you have a reason to use CentOS, I suggest you use oVirt Node;
it is much more optimized out of the box for oVirt.
On Thu, Sep 27, 2018 at 2:25 PM Stefano Danzi wrote:
>
> Hello!
>
> I'm almost ready to start with a new oVirt deployment. I will use CentOS
> 7, self-hosted engine and gluster
You would need three servers for gluster based hyperconverged oVirt
deployment.
On Tue, Sep 11, 2018, 8:41 PM Keith Winn wrote:
> Cool, it is good to know that I was on the right track. Thanks Again.
Starting with oVirt 4.2.4 (also in 4.2.5 and maybe in 4.2.3) I am facing
some sort of memory leak. The memory usage on the hosts keeps
increasing until it reaches somewhere around 97%. Putting the host in
maintenance and back resolves it. The memory usage by the qemu-kvm
processes is way above the defined
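For what it's worth, a hedged sketch of automating that maintenance/activate cycle with the oVirt Python SDK; the host name and connection details below are placeholders, not from this report:

import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=myhost')[0]
host_service = hosts_service.host_service(host.id)

def wait_for_status(wanted):
    # Poll the host until it reaches the wanted status.
    while host_service.get().status != wanted:
        time.sleep(5)

# Put the host into maintenance (running VMs are migrated off) and bring it back:
host_service.deactivate()
wait_for_status(types.HostStatus.MAINTENANCE)
host_service.activate()
wait_for_status(types.HostStatus.UP)

connection.close()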
On Mon, Jul 9, 2018 at 3:52 PM Sahina Bose wrote:
>
>
>
> On Mon, Jul 9, 2018 at 5:41 PM, Hesham Ahmed wrote:
>>
>> Thanks Sahina for the update,
>>
>> I am using gluster geo-replication for DR in a different installation;
>> however, I was not aware tha
lumes hosting VM
images a work in progress with a bug tracker or is it something not
expected to change?
On Mon, Jul 9, 2018 at 2:58 PM Sahina Bose wrote:
>
>
>
> On Sun, Jul 8, 2018 at 3:29 PM, Hesham Ahmed wrote:
>>
>> I also noticed that Gluster Snapshots have the
Hesham Ahmed wrote:
>
> I also noticed that Gluster Snapshots have the SAME UUID as the main
> LV and if using UUID in fstab, the snapshot device is sometimes
> mounted instead of the primary LV
>
> For instance:
> /etc/fstab contains the following line:
>
> UUID=a0b85d33-
I also noticed that Gluster Snapshots have the SAME UUID as the main
LV and if using UUID in fstab, the snapshot device is sometimes
mounted instead of the primary LV
For instance:
/etc/fstab contains the following line:
UUID=a0b85d33-7150-448a-9a70-6391750b90ad /gluster_bricks/gv01_data01
auto i
The correct way to allow the hosted engine to use other available gluster
peers in case of failure of the specified peer is to pass the
--config-append option during setup, as described at
https://ovirt.org/develop/release-management/features/sla/self-hosted-engine-gluster-support/
If you want to change th
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
OSError: [Errno 2] No such file or directory: vdo
On Mon, Jun 25, 2018 at 6:09 AM Hesham Ahmed wrote:
I am receiving the following error in the journal repeatedly every few minutes
on all 3 nodes of a hyperconverged oVirt 4.2.3 setup running oVirt Nodes:
Jun 25 06:03:26 vhost01.somedomain.com vdsm[45222]: ERROR Internal server error
Traceback (most r
Log file attached to the bug. Do let me know if you need anything else.
On Thu, Mar 8, 2018, 4:32 PM Sahina Bose wrote:
> Thanks for your report, we will take a look. Could you attach the
> engine.log to the bug?
>
> On Wed, Mar 7, 2018 at 11:20 PM, Hesham Ahmed wrote:
>
I am having issues with the Gluster Snapshot UI since upgrading to 4.2 and
now with 4.2.1. The UI doesn't appear as I explained in the bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1530186
I can now see the UI when I clear the cookies and try the snapshots UI from
within the volume details