[ovirt-users] Re: OVS switch type for hosted-engine

2022-03-09 Thread ravi k
> Just to close this thread, we were able to manually convert our hosted-engine 
> 4.1.1

Hello Devin,
Thanks a lot for this. I was setting up a cluster and intended to enable OVN in 
it. I came across this thread while searching for a solution. 
Has this changed in version 4.3 or 4.4 so that we can specify the switch type 
when creating a self-hosted engine? Any ideas?
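
As a side note, converting an existing cluster's switch type after deployment 
appears possible through the engine REST API; a rough, untested sketch (engine 
URL, credentials and cluster ID are placeholders, and hosts will likely need to 
be reinstalled afterwards):

    curl -k -u 'admin@internal:PASSWORD' \
      -H 'Content-Type: application/xml' -X PUT \
      'https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID' \
      -d '<cluster><switch_type>ovs</switch_type></cluster>'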

Regards,
Ravi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/743UMYSQV3MQGBFTGTW7D7UL3N5ECIEZ/


[ovirt-users] Re: mdadm vs. JBOD

2022-03-09 Thread jonas
Thanks to Nikolov and Strahil for the valuable input! I was off for a few 
weeks, so I would like to apologize if I'm potentially reviving a zombie thread.

I am a bit confused about where to go with this environment after the 
discontinuation of the hyperconverged setup. What alternative options are there 
for us? Or do you think going the Gluster way would still be advisable, even 
though it seems it is being discontinued over time?

Thanks for any input on this!

Best regards,
Jonas

On 1/22/22 14:31, Strahil Nikolov via Users wrote:

> Using the wizard utilizes the Gluster Ansible roles.
> I would highly recommend using it, unless you know what you are doing (for 
> example, storage alignment when using hardware RAID).
> 
> Keep in mind that the DHT xlator (the logic in distributed volumes) is shard 
> aware, so your shards are spread between subvolumes and additional 
> performance can be gained. So using distributed-replicated volumes has its 
> benefits.
> 
> If you decide to avoid the software RAID, use only replica 3 volumes, as with 
> SSDs/NVMes the failures are usually not physical but logical (maximum writes 
> reached -> predictive failure -> total failure).
> 
> Also, consider mounting via noatime/relatime and 
> context="system_u:object_r:glusterd_brick_t:s0" for your gluster bricks.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> > On Fri, Jan 21, 2022 at 11:00, Gilboa Davara gilb...@gmail.com wrote:
> > 
> > 
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/U2ZEWLRF5D6FENQEI5QXL77CMWB7XF32/
> > 
> > 
> 
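
A minimal /etc/fstab sketch of the brick mount options Strahil suggests above 
(device, mountpoint and filesystem are placeholders for illustration):

    # gluster brick: noatime plus the SELinux brick context
    /dev/gluster_vg/brick1  /gluster_bricks/brick1  xfs  noatime,context="system_u:object_r:glusterd_brick_t:s0"  0 0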
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/26AHNDSOJSIVTGYOEUFOY444YYBZCAIW/


[ovirt-users] Re: Long shot on VM failing to start

2022-03-09 Thread Tomáš Golembiovský
On Wed, Mar 09, 2022 at 12:07:51AM -, si...@justconnect.ie wrote:
> We are moving to an oVirt 4.4.9 environment from a 4.3.4.3 environment and 
> have run into a major issue.
> The method for migration to the new system is to shutdown a VM on the old 
> system, build a new VM on the new system with the same IP/name etc.

This seems unnecessarily elaborate. Was the method choice deliberate?
Why not migrate the storage domains or update the clusters?

> This has worked for several VMs so far but the latest new build failed and we 
> had to roll back - i.e. start the old VM. The old VM fails to start with the 
> following message in the VMs libvirt qemu log:
> 
> libvirtd quit during handshake: Input/output error
> shutting down, reason=failed

This is not very informative, but it seems to me that something breaks when
trying to start the qemu process. Could you check the journal for libvirtd to
see if there are any errors? Also please check the journal for vdsm and make
sure the service did not restart.
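
For example, something along these lines (a sketch; the service names assume a 
standard oVirt host):

    # errors from libvirtd around the time of the failed start
    journalctl -u libvirtd -p err --since "1 hour ago"

    # vdsm messages, and whether the service restarted unexpectedly
    journalctl -u vdsmd --since "1 hour ago"
    systemctl status vdsmd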

Hope this helps,

Tomas

> 
> Any ideas?
> Regards
> Simon...
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4XOQCOLVNTOL2RWJKGREBVMVEHGTV6OX/

-- 
Tomáš Golembiovský 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HSSKORFJVOMLMKPELEZH4GKRNBDUOAZV/


[ovirt-users] Re: OVIRT INSTALLATION IN SAS RAID

2022-03-09 Thread Strahil Nikolov via Users
You can boot CentOS 7 -> Troubleshooting and then execute lspci.
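
For instance, from that rescue shell (a sketch; the 1000:0060 vendor/device 
pair is taken from the hardware IDs quoted below):

    # list storage controllers with numeric vendor:device IDs
    lspci -nn | grep -i -E 'raid|sas'

    # show which kernel driver, if any, claimed the controller
    lspci -k -d 1000:0060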
Best Regards,
Strahil Nikolov
 
 
On Tue, Mar 8, 2022 at 16:08, Muhammad Riyaz wrote:

I was unable to run the lspci command, since I am not able to boot the server 
with a live CD or install Linux on it.

Here are some more details about the driver, gathered after installing Windows 
Server 2012:

Device Description: DELL PERC 6/i Integrated

Device Instance path:
PCI\VEN_1000_0060_1F0C1028_04\4&254D1C7F&0&0020

Hardware IDs:
PCI\VEN_1000_0060_1F0C1028_04
PCI\VEN_1000_0060_1F0C1028
PCI\VEN_1000_0060_010400
PCI\VEN_1000_0060_0104

Compatible IDs:
PCI\VEN_1000_0060_04
PCI\VEN_1000_0060
PCI\VEN_1000_010400
PCI\VEN_1000_0104
PCI\VEN_1000
PCI\CC_010400_0
PCI\CC_010400
PCI\CC_0104_0
PCI\CC_0104

Driver version:
6.600.21.8

Matching device ID:
PCI\VEN_1000_0060_1F0C1028

Service:
Megasas

Configuration ID:
megasas.inf:PCI\VEN_1000_0060_1F0C1028,Install_INT.NT

Sent from Mail for Windows

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VDN5LJLEWEO5SFHHZGL34CDEPS3ZN6VD/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L7T5DJ2SHSAHBVLBJ2C55T35QK524XNQ/


[ovirt-users] Re: oVirt reboot fails

2022-03-09 Thread Strahil Nikolov via Users
It seems that you are using EFI and something corrupted the installation.
The fastest approach is to delete the oVirt node from the UI, reinstall it and 
add it back via the UI.
I suspect something happened to your OS filesystem, but it could be a bug in the OS.
If it were a regular CentOS Stream system, I would follow 
https://access.redhat.com/solutions/3486741
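
For reference, the usual EL8-family EFI GRUB recovery flow looks roughly like 
the following (a sketch for a regular system, not oVirt Node; the EFI directory 
name varies by distribution, and the linked solution is authoritative):

    # from the installation ISO's rescue mode, with the system mounted:
    chroot /mnt/sysimage

    # regenerate the GRUB configuration (the EFI path is an assumption;
    # adjust the distro directory under /boot/efi/EFI as needed)
    grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg

    # if the bootloader files themselves are damaged, reinstalling
    # the packages that own them is another recovery path
    dnf reinstall grub2-efi-x64 shim-x64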
Best Regards,
Strahil Nikolov
 
 
  On Tue, Mar 8, 2022 at 8:25, dean--- via Users wrote:   
Rebooting oVirt fails on a RAID array installed on a Cisco UCS C220 M5.  It 
fails using either legacy BIOS or UEFI with the error…

error: ../../grub-core/fs/fshelp.c:258:file 
`//ovirt-node-ng-4.4.10.1-0.20220202.0+1/vmlinuz-4.18.0-358.el8.x86_64’ not 
found.
Error: ../../grub-core/loader/i386/efi/linux.c:94:you need to load the kernel 
first.

Press any key to continue…

    Failed to boot both default and fallback entries.

Press any key to continue…


Any attempts to recover using the installation/rescue ISO also fail and lock 
up.


All Googled solutions I've tried so far have not worked.

Does anyone know how to prevent this from happening and the correct method to 
recover when it does?

Thanks!

... Dean
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UYCJWN4AZZRWDQ47VBA6K37T3W46OF32/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IAWKIBPFZXQIYGNO5RR34YU5HSJFTFIC/


[ovirt-users] Re: Account on Zanata

2022-03-09 Thread ちゃーりー
Hi,

Thank you for creating the account.
I fixed a few Japanese translation mistakes I had found before asking for the
account. I'll also try some of the texts that are not finished yet.

Thank you,
Yoshihiro Hayashi


On Tue, Mar 8, 2022 at 20:59, Sharon Gratch wrote:

> Hi,
>
>
> I created an account for you on Zanata and added you to the Japanese
> translation group.
> I'll send you the account details in a separate mail.
>
>
> Thanks!
> Sharon
>
> On Mon, Feb 21, 2022 at 10:41 AM Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Sat, Feb 19, 2022 at 14:18, ちゃーりー wrote:
>>
>>> Hi,
>>>
>>> I'm Yoshihiro Hayashi, just an oVirt user.
>>> I found a mistake in the Japanese translation of the oVirt web UI, and I'm
>>> going to fix it.
>>> I would be grateful if someone could create a Zanata account for me.
>>>
>>
>> Welcome aboard, Hayashi-san. Thanks for being willing to help!
>>
>> --
>>
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>>
>> *Red Hat respects your work life balance. Therefore there is no need to
>> answer this email out of your office hours.*
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VL5M4H5OBJSBIVKPOXQ2IPPIHCHHEP3G/
>>
>

-- 

林 佳寛 (Yoshihiro Hayashi)
   sir...@gmail.com
 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BROXTYI76ORRVVESRYHJ7HCDBX62YBDJ/


[ovirt-users] Re: New oVirt self-hosted engine deployment : design ideas

2022-03-09 Thread ravi k
> If you are going to have more hosts, please note that it's not
> recommended to have more than 8 hosted-engine hosts. So if you'll
> still want to keep them all in the same cluster, some of them will be
> HE hosts and some not - this might be slightly confusing, depending on
> your use case.
Thanks a lot for replying. I'm planning to have just two self-hosted engine 
hosts in cluster01. I'll create a cluster02 and add the remaining regular hosts 
to it. 

> You can try searching the list archives for previous discussions about
> topology/architecture that people had over the years.
> 
> You might want to check also this somewhat-old but still mostly
> relevant doc, which is for RHV, but probably applies 99% to oVirt as
> well:
> 
> https://www.redhat.com/en/resources/best-practice-rhv-technology-detail
> 
> Good luck and best regards,
I'll go through this doc and will also search the threads. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FCA27NIWKSDPNGMSEH2CLOBTLU37J3IC/


[ovirt-users] Re: New oVirt self-hosted engine deployment : design ideas

2022-03-09 Thread Yedidyah Bar David
On Wed, Mar 9, 2022 at 11:10 AM ravi k  wrote:
>
> Hello,
> We are creating a self-hosted engine deployment and have come up with a draft 
> design. I thought I'd get your thoughts on improving it. It is still a test 
> setup, so we can make changes to make it resilient.
> We have four hosts, host01..04. I did the self-hosted engine deployment on 
> the first node which created a dc, cluster01 and a storage domain 
> hosted_storage. I added host02 also as a self-hosted engine host to cluster01.
> Now the questions :-)
> 1. It is recommended not to use this SD hosted_storage for regular VMs.

Indeed.

> So I'll create another SD dc_sd01. Should I use this dc_sd01 and cluster01  
> when creating regular VMs?

Yes.

> What's the best practice?

These are two slightly different questions, but the answer is yes anyway.

If you are going to have more hosts, please note that it's not
recommended to have more than 8 hosted-engine hosts. So if you'll
still want to keep them all in the same cluster, some of them will be
HE hosts and some not - this might be slightly confusing, depending on
your use case.

> 2. It is a bit confusing to get my head around this concept of running 
> regular VMs on these self-hosted engine hosts. Can I just run regular VMs in 
> these hosts and they'll run fine?

Yes.

> 3. Please do suggest any other recommendations from experience in terms of 
> designing the clusters, storage domains etc. It'll help as it is a new setup 
> and we have the scope to make changes.

You can try searching the list archives for previous discussions about
topology/architecture that people had over the years.

You might want to check also this somewhat-old but still mostly
relevant doc, which is for RHV, but probably applies 99% to oVirt as
well:

https://www.redhat.com/en/resources/best-practice-rhv-technology-detail

Good luck and best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PTGYVFF7AUHATDX7ZZGDYXGVOVEJ2PDK/


[ovirt-users] New oVirt self-hosted engine deployment : design ideas

2022-03-09 Thread ravi k
Hello,
We are creating a self-hosted engine deployment and have come up with a draft 
design. I thought I'd get your thoughts on improving it. It is still a test 
setup, so we can make changes to make it resilient. 
We have four hosts, host01..04. I did the self-hosted engine deployment on the 
first node which created a dc, cluster01 and a storage domain hosted_storage. I 
added host02 also as a self-hosted engine host to cluster01.
Now the questions :-)
1. It is recommended not to use this SD hosted_storage for regular VMs. So I'll 
create another SD dc_sd01. Should I use this dc_sd01 and cluster01 when 
creating regular VMs? What's the best practice? (See the sketch after question 
3 below.)
2. It is a bit confusing to get my head around this concept of running regular 
VMs on these self-hosted engine hosts. Can I just run regular VMs in these 
hosts and they'll run fine?
3. Please do suggest any other recommendations from experience in terms of 
designing the clusters, storage domains etc. It'll help as it is a new setup 
and we have the scope to make changes. 
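
As a sketch for question 1: the extra data domain can also be created through 
the engine REST API, assuming an NFS export (engine URL, credentials, host and 
export path are placeholders):

    curl -k -u 'admin@internal:PASSWORD' \
      -H 'Content-Type: application/xml' -X POST \
      'https://engine.example.com/ovirt-engine/api/storagedomains' \
      -d '<storage_domain>
            <name>dc_sd01</name>
            <type>data</type>
            <host><name>host01</name></host>
            <storage>
              <type>nfs</type>
              <address>nfs.example.com</address>
              <path>/exports/dc_sd01</path>
            </storage>
          </storage_domain>'

The new domain then still has to be attached to the data center (a POST to 
/ovirt-engine/api/datacenters/ID/storagedomains) before it becomes active.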

Regards,
Ravi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VPBNU6TW2AOXPANQCF5KIXRHNTCNXL4Q/


[ovirt-users] Re: Important changes to the oVirt Terraform Provider

2022-03-09 Thread marek

Hi Janos,

any news?

Marek


On 07/01/2022 at 19:00, Janos Bonic wrote:

Hello Marek, hello everyone,

I'm sorry I didn't update you earlier. Unfortunately, we had a key 
team member leave our team, which pushed back our release by some 
time. We are still pursuing the matter according to the original plan 
and will release the TF provider, but we will need some more time to 
work on it.


We'll keep the repository on GitHub updated with the developments we do.

Once again, I'm sorry for the delay.

Janos


On Wed, Jan 5, 2022, 10:03 PM marek  wrote:

Hi,

any plan for release?

Marek

On 06/10/2021 at 12:53, Janos Bonic wrote:


Dear oVirt community,

We are making sweeping and backwards-incompatible changes to the
oVirt Terraform provider. *We want your feedback before we make
these changes.*

Here’s the short list what we would like to change, please read
the details below.

 1. The current |master| branch will be renamed to |legacy|. The
    usage of this provider will be phased out within Red Hat
    around the end / beginning of next year. If you want to
    create a fork, we are happy to add a link to your fork to the
    readme.
 2. A new |main| branch will be created and a *new Terraform
    provider* written from scratch on the basis of go-ovirt-client
    (a preview is available). This provider will only have limited
    functionality in its first release.
 3. This new provider will be released to the Terraform registry,
    and will have full test coverage and documentation. This
    provider will be released as version v2.0.0 when ready, to
    signal that it is built on the Terraform SDK v2.
 4. A copy of this new Terraform provider will be kept in the
    |v1| branch and backported to the Terraform SDK v1 for the
    benefit of the OpenShift Installer. We will not tag any
    releases, and we will not release this backported version in
    binary form.
 5. We are hosting a *community call* on the 14th of October at
    13:00 UTC. Please join to provide feedback and suggest
    changes to this plan.


Why are we doing this?

The original Terraform provider for oVirt was written four years
ago by @Maigard at EMSL-MSC. The oVirt fork of this provider is
about 2 years old and went through rapid expansion, adding a
large number of features.

Unfortunately, this continuous rapid growth came at a price: the
original test infrastructure deteriorated, and certain resources,
especially virtual machine creation, ballooned to a size we
feel has become unmaintainable.

If you tried to contribute to the Terraform provider recently,
you may have noticed that our review process has become extremely
slow. We can no longer run the original tests, and our end-to-end
test suite is not integrated outside of the OpenShift CI system.
Every change to the provider requires one of only 3 people to
review the code and also run a manual test suite that is
currently only runnable on one computer.

We also noticed an increasing number of bugs reported on
OpenShift on oVirt/RHV related to the Terraform provider.

Our original plan was that we would fix the test infrastructure
and then subsequently slowly transition API calls to
go-ovirt-client, but that resulted in a PR of over 5000 lines
of code that cannot in good conscience be merged in a single
piece. Splitting it up is difficult, and would likely result in
broken functionality where test coverage is not present.


What are we changing for you, the users?

First of all, documentation. You can already preview the new
documentation. You will notice that the provider currently only
supports a small set of features. You can find the full list of
features we are planning for the first release on GitHub.
However, if you are using resources like cluster creation, etc.,
these will currently not work, and we recommend sticking to the
old provider for the time being.

The second big change will be how resources are treated. Instead
of creating large resources that need to call several of the
oVirt APIs, we will create resources that each call only one
API. This will lead to fewer