On Wed, Feb 23, 2022 at 7:23 PM Gilboa Davara
wrote:
> On Wed, Feb 23, 2022 at 12:46 PM Sandro Bonazzola
> wrote:
>
>> On Wed, Feb 23, 2022 at 11:36 AM Gilboa Davara
>> wrote:
>> >
>> > Hello,
>> >
>> > Gluster is still mentioned in the release page.
>> > Will it be
I’ve always been told that migrating self-hosted-engine storage was a backup,
shutdown, and rebuild-from-backup procedure.
In my iSCSI environment it has never worked (more due to the history of my
environment than to the procedure itself).
Since we have too many hosts in a datacenter, we’ve
Hello,
I am currently running oVirt 4.2.5.3-1.el7 with a Hosted Engine running on
NFS-based storage.
I need to migrate the backend NFS store for the Hosted Engine to a
new share.
Is there specific documentation for this procedure? I see multiple
references to a general procedure
You can try to play a little with the I/O threads (but don't jump too fast).
What are your I/O scheduler and mount options? You can reduce I/O lookups if you
specify 'noatime' and the SELinux context in the mount options.
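As an illustration, such options might look like this in /etc/fstab for a Gluster brick (the device path, mount point, and context type here are examples, not taken from your setup):

```
# XFS brick mounted with noatime and a fixed SELinux context
/dev/mapper/gluster_vg-brick1 /gluster_bricks/data xfs defaults,noatime,context="system_u:object_r:glusterd_brick_t:s0" 0 0
```

Fixing the context at mount time means SELinux does not have to look up a per-file label on every access, and noatime avoids a metadata write on every read.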
A real performance killer is latency. What is the latency
You can always create overrides like this:
/etc/systemd/system/<unit>.d/someconfname.conf
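For instance, a drop-in that makes nginx wait for the network to be up (the unit and file names here are just examples) could be:

```
# /etc/systemd/system/nginx.service.d/wait-online.conf
[Unit]
After=network-online.target
Wants=network-online.target
```

Run `systemctl daemon-reload` afterwards so systemd picks up the drop-in.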
Best Regards,
Strahil Nikolov
On Wed, Feb 23, 2022 at 14:53, jb wrote:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Nope, it's in AppStream (CentOS Stream), but I never tested it.
Best Regards,
Strahil Nikolov
On Wed, Feb 23, 2022 at 12:42, Gilboa Davara wrote:
On Wed, Feb 23, 2022 at 12:46 PM Sandro Bonazzola
wrote:
> On Wed, Feb 23, 2022 at 11:36 AM Gilboa Davara
> wrote:
> >
> > Hello,
> >
> > Gluster is still mentioned in the release page.
> > Will it be supported as a storage backend in 4.5?
>
>
> As RHGS is going end of life in 2024
Hello All,
I believe the network is performing as expected; I ran an iperf test:
[root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# iperf3 -c 10.10.1.2
Connecting to host 10.10.1.2, port 5201
[ 5] local 10.10.1.1 port 38422 connected to 10.10.1.2 port 5201
[ ID] Interval Transfer
Thanks for the detailed instructions, Nir. I'm going to scrounge up some
hardware.
By the way, if anyone else would like to work on NVMe/TCP support: for an
NVMe/TCP target you can either use Lightbits (talk to me offline for
details) or use the upstream Linux NVMe/TCP target. Lightbits is a
On Wed, Feb 23, 2022 at 4:20 PM Muli Ben-Yehuda wrote:
>
> Thanks, Nir and Benny (nice to run into you again, Nir!). I'm a neophyte in
> ovirt and vdsm... What's the simplest way to set up a development
> environment? Is it possible to set up a "standalone" vdsm environment to hack
> support
Thanks, Nir and Benny (nice to run into you again, Nir!). I'm a neophyte in
ovirt and vdsm... What's the simplest way to set up a development
environment? Is it possible to set up a "standalone" vdsm environment to
hack support for nvme/tcp or do I need "full ovirt" to make it work?
Cheers,
Muli
On Wed, Feb 23, 2022 at 2:48 PM Benny Zlotnik wrote:
>
> So I started looking in the logs and tried to follow along with the
> code, but things didn't make sense and then I saw it's ovirt 4.3 which
> makes things more complicated :)
> Unfortunately because GUID is sent in the metadata the volume
Have you verified that you're actually getting 10Gbps between the hosts?
-derek
On Wed, February 23, 2022 9:02 am, Alex Morrison wrote:
> Hello Derek,
>
> We have a 10Gig connection dedicated to the storage network, nothing else
> is on that switch.
>
> On Wed, Feb 23, 2022 at 9:49 AM Derek
Hello Derek,
We have a 10Gig connection dedicated to the storage network, nothing else
is on that switch.
On Wed, Feb 23, 2022 at 9:49 AM Derek Atkins wrote:
> Hi,
>
> Another question which I don't see answered: What is the underlying
> connectivity between the Gluster hosts?
>
> -derek
>
>
Hi,
Another question which I don't see answered: What is the underlying
connectivity between the Gluster hosts?
-derek
On Wed, February 23, 2022 8:39 am, Alex Morrison wrote:
> Hello Sunil,
>
> [root@ovirt1 ~]# gluster --version
> glusterfs 8.6
>
> same on all hosts
>
> On Wed, Feb 23, 2022
Hello Sunil,
[root@ovirt1 ~]# gluster --version
glusterfs 8.6
same on all hosts
On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
shegg...@redhat.com> wrote:
> Hi,
>
> Which version of gluster is in use?
>
> Regards,
>
> Sunil kumar Acharya
>
> Red Hat
>
>
Yes I know, it was a bad workaround, but somehow Debian had issues with
auto-mounting CIFS. I have fixed it now by enabling
systemd-networkd-wait-online, but then I also had to override the nginx
service to wait for network-online.target.
On 21.02.22 at 18:14, Strahil Nikolov wrote:
Don't do that.
So I started looking in the logs and tried to follow along with the
code, but things didn't make sense and then I saw it's ovirt 4.3 which
makes things more complicated :)
Unfortunately, because the GUID is sent in the metadata, the volume is
treated as a vdsm-managed volume[2] for the udev rule
On Wed, Feb 23, 2022 at 11:36 AM Gilboa Davara
wrote:
>
> Hello,
>
> Gluster is still mentioned in the release page.
> Will it be supported as a storage backend in 4.5?
As RHGS is going end of life in 2024, it is being deprecated for RHV.
The upstream Gluster project has no plan for
Certainly, thanks for your help!
I put cinderlib and engine.log here:
http://www.mulix.org/misc/ovirt-logs-20220223123641.tar.gz
If you grep for 'mulivm1' you will see for example:
2022-02-22 04:31:04,473-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default
task-10)
On Mon, Feb 21, 2022 at 12:07 PM Strahil Nikolov
wrote:
> You can blacklist packages in dnf with a specific version, and thus you
> don't need to blacklist the whole repo.
>
> Best Regards,
> Strahil Nikolov
>
>
Hello,
Understood.
Per your QEMU 6.2 question: how can I test it? Is it packaged in some
Hello,
Gluster is still mentioned in the release page.
Will it be supported as a storage backend in 4.5?
- Gilboa
On Tue, Feb 22, 2022 at 4:57 PM Sandro Bonazzola
wrote:
> The oVirt development team leads are pleased to inform that the
> schedule for oVirt 4.5.0 has been finalized.
>
> The
Hi,
We haven't tested this, and we do not have any code to handle NVMe/TCP
drivers, only iSCSI and RBD. Given the path seen in the logs
('/dev/mapper'), it looks like it might require code changes to support
this.
Can you share cinderlib[1] and engine logs to see what is returned by
the driver? I
On Wed, 23 Feb 2022, Adam Xu wrote:
How can we convert CentOS 8 to CentOS 8 Stream? Thanks.
dnf install centos-release-stream
dnf swap centos-{linux,stream}-repos
dnf distro-sync
Note that the last command is effectively a yum update that syncs your
packages with all of the installed repos,
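Put together, the conversion above could be scripted roughly like this (a sketch only; it assumes a root shell on a CentOS Linux 8 machine with network access, so test it on a non-production host first):

```
#!/bin/bash
set -e
# Pull in the CentOS Stream release package
dnf -y install centos-release-stream
# Swap the repo packages: centos-linux-repos -> centos-stream-repos
dnf -y swap centos-{linux,stream}-repos
# Sync every installed package to the versions in the now-enabled repos
dnf -y distro-sync
```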
Hi everyone,
We are trying to set up oVirt (4.3.10 at the moment, customer preference) to
use Lightbits (https://www.lightbitslabs.com) storage via our OpenStack Cinder
driver with cinderlib. The cinderlib and Cinder driver bits are working fine,
but when oVirt tries to attach the device to a
Hello All,
We have 3 servers with a RAID 50 array each, and we are having extreme
performance issues with our Gluster: writes on Gluster seem to take at
least 3 times longer than on the RAID directly. Can this be improved? I've
read through several other performance-issue threads but have been
Hello. To install the self-hosted engine using hosted-engine --deploy: # yum
install ovirt-hosted-engine-setup.
To install the self-hosted engine using the Cockpit user
interface: # yum install cockpit-ovirt-dashboard.
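Summarized as commands (package names as given above; this assumes an EL host with the oVirt repositories already enabled):

```
# CLI-driven deployment:
yum install ovirt-hosted-engine-setup
hosted-engine --deploy

# Or, for the Cockpit-based flow:
yum install cockpit-ovirt-dashboard
```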