Hi Laura,
This is something I've been wondering about for quite some time but never
bothered to ask: how can one change the order of storage domains so the
right one shows up by default?
Thank you,
Fil
--
Dmitry Filonov
Linux Administrator
SBGrid Core | Harvard Medical School
250 Longwood Ave, SGM-
Nicolas:
Thank you very much for this! Looks like exactly what I was looking for...
The first burst was somewhat frightening, but it ended well, and now no
storage domain is overused :)
One quick question: Is it possible to limit balancing to more than one
datacenter? I have 3 datacenters and I'd l
I'm aware of the heal process, but it's unclear to me whether the update
continues to run while the volumes are healing and resumes when they are
done. There doesn't seem to be any indication in the UI (unless I'm
mistaken).
On Tue, Aug 6, 2019 at 6:06 PM Robert O'Kane wrote:
> Hello,
>
> Often(?), upd
Hello,
Often(?), updates to a hypervisor that also has (provides) a Gluster
brick take the hypervisor offline (updates often require a reboot).
This reboot then makes the brick "out of sync" and it has to be resync'd.
I find it a "feature" that another host that is also part of a gluster
do
Hello!
The UX Research team at Red Hat is conducting a new oVirt study and is
looking to test out the usability of a few features in oVirt.
We are currently recruiting participants located in the United States and
internationally. If you are interested, please use this calendar to sign up
for a sp
That worked. Thank you.
On Tue, Aug 6, 2019, 5:07 AM Dominik Holler wrote:
>
>
> On Tue, Aug 6, 2019 at 1:25 PM Sandro Bonazzola
> wrote:
>
>>
>>
>> On Sun, Aug 4, 2019 at 8:54 AM Vincent Royer <
>> vinc...@epicenergy.ca> wrote:
>>
>>> I had a failed HCI replica 3 deployment, a f
I also am spanned over two switches. You can use bonding, you just can't
use 802.3ad (LACP) mode.
I have MGMT bonded to two gig switches and storage bonded to two 10g
switches for Gluster. Each switch has its own fw/router in HA. So we can
lose either switch, either router, or any single interface or cabl
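For reference, a bond like the one described above (members split across two independent switches, which rules out 802.3ad/LACP) could be created with NetworkManager roughly as follows. This is a sketch, not the poster's actual configuration; interface and connection names are placeholders:

```shell
# Active-backup bonding needs no switch-side LACP configuration,
# so the member NICs can be cabled to two independent switches.
nmcli con add type bond con-name bond0 ifname bond0 \
    bond.options "mode=active-backup,miimon=100"
nmcli con add type ethernet con-name bond0-p1 ifname eth0 master bond0
nmcli con add type ethernet con-name bond0-p2 ifname eth1 master bond0
nmcli con up bond0
```

With `mode=active-backup`, only one member carries traffic at a time; failover happens when `miimon` detects link loss, so losing either switch leaves the bond up.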
Hi Jayme,
I can't recall such a healing time.
Can you please retry and attach the engine & vdsm logs so we'll be smarter?
*Regards,*
*Shani Leviim*
On Tue, Aug 6, 2019 at 5:24 PM Jayme wrote:
> I've yet to have cluster upgrade finish updating my three host HCI
> cluster. The most recent try
I've yet to have a cluster upgrade finish updating my three-host HCI
cluster. The most recent try was today, moving from oVirt 4.3.3 to
4.3.5.5. The first host updates normally, but when it moves on to the
second host it fails to put it into maintenance and the cluster upgrade
stops.
I suspect this i
Hi Jason,
A while ago I wrote a "storage balancer" for exactly that: it moves disks
between storage domains to keep them below a maximum occupation
threshold. You can find the project at [1].
It's not perfect, but it has been working for us for the last 3 years
with no issues.
That won't avoid p
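The balancing idea described above can be sketched as plain selection logic, independent of any oVirt SDK calls (the domain names and sizes below are invented for illustration, not taken from the project):

```python
def pick_moves(domains, max_occupation=0.8):
    """Given {name: (used_gb, total_gb)}, suggest one move per
    over-threshold domain, targeting the least-occupied domain."""
    occ = {name: used / total for name, (used, total) in domains.items()}
    target = min(occ, key=occ.get)  # least-occupied domain
    return [(name, target) for name, o in occ.items()
            if o > max_occupation and name != target]

domains = {
    "sd1": (900, 1000),   # 90% full -> over threshold
    "sd2": (200, 1000),   # 20% full -> move target
    "sd3": (850, 1000),   # 85% full -> over threshold
}
print(pick_moves(domains))  # [('sd1', 'sd2'), ('sd3', 'sd2')]
```

A real balancer would then issue the disk-move operations through the engine API and re-check occupation after each move, since a large disk can push the target over the threshold.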
On Mon, Jul 22, 2019 at 12:34 PM Edoardo Mazza wrote:
> Hello everyone,
> I need to create a bond for VM interfaces, but I don't know what the
> best solution is. Can you help me?
>
Would you like to bond the interfaces of a VM?
This might be useful for reliability during live migration, see
https:/
Hi Edoardo,
Can you please supply some more details about the bond you're trying to
create?
If it's VLAN bonding, this page may assist you:
https://www.ovirt.org/develop/networking/bonding-vlan-bridge.html#bonding-vlan-bridge
*Regards,*
*Shani Leviim*
On Mon, Jul 22, 2019 at 1:34 PM Edoa
On Tue, Aug 6, 2019 at 1:25 PM Sandro Bonazzola wrote:
>
>
> On Sun, Aug 4, 2019 at 8:54 AM Vincent Royer <
> vinc...@epicenergy.ca> wrote:
>
>> I had a failed HCI replica 3 deployment, a fresh 4.3.5.1 install. I fixed
>> some things, ran the cleanup script and rebooted.
>>
>> The f
On Wed, Jul 31, 2019 at 4:41 PM Michael Frank
wrote:
> Hi,
>
> For several days I have been trying to install the hosted engine onto an
> iSCSI multipath device, without success.
> Some information on the environment:
> - Version 4.3.3
> - using two 10gbe interfaces as single bond for
On Sun, Aug 4, 2019 at 8:54 AM Vincent Royer
wrote:
> I had a failed HCI replica 3 deployment, a fresh 4.3.5.1 install. I fixed
> some things, ran the cleanup script and rebooted.
>
> The first host's logs are now full of:
>
> ovs|00330|stream_ssl|ERR|Certificate must be configured
On Thu, Jul 25, 2019 at 5:22 PM Rick A
wrote:
>
> So I updated our environment a bit ago during a scheduled down time from
> 4.2 to 4.3. Everything went smoothly, but it looks like I forgot to update
> the Data Center Compatibility (See alert message below). My question is,
> can I
Hi Jason,
From a user experience perspective, you could change the order of the
storage domains so the current first one is not always the first one in
line. This might help to get users to select a variety of storage domains
instead of always selecting the current first one.
Best,
Laura
On Tue
On Wed, Jul 17, 2019 at 5:12 PM Strahil Nikolov <
hunter86...@yahoo.com> wrote:
> Hi Sandro,
>
> It seems this issue is not related to current RC (earliest entry found for
> 02 Feb 2019).
>
> I have followed RH Solution #3338001 (
> https://access.redhat.com/solutions/3338001) and la
On Tue, Jul 30, 2019 at 3:49 PM Maton, Brett <
mat...@ltresources.co.uk> wrote:
> Hi,
>
> I just ran yum update on my test cluster and ran into the following
> issue:
> I did notice that the python2-ioprocess is currently installed from the
> ovirt-4.2 repo...
>
> Any suggestio
On Sat, Aug 3, 2019 at 12:38 PM Strahil
wrote:
> Maybe the libvirtd <-> libgfapi communication is somehow broken.
> I'm using FUSE and I have no issues, but my lab is not I/O intensive.
>
I would recommend using FUSE instead of libgfapi on GlusterFS 6 storage.
Adding +Sahina Bos
Hello
I'm trying to figure out a way to automatically distribute our storage
domain occupation evenly, or at least avoid them getting full. We have a
lot of users creating VMs, and they seem to select the first available
storage domain, so one is nearly full and the rest are barely used.
Is there
This has been solved in today's ovirt-engine-4.3.5.5 release.
On Mon, Aug 5, 2019 at 9:18 PM Jayme wrote:
> Does anyone know if the snapshot upgrade bug has been resolved in latest
> 4.3.5 versions?
>
> On Wed, Jul 31, 2019 at 7:03 AM Yedidyah Bar David
> wrote:
>
>> On Wed, Jul 31,
The oVirt Team has just released a new version of the ovirt-engine
package that fixes one upgrade-related issue. [1]
We recommend that users experiencing upgrade issues try again with this
new release.
Thanks,
[1] https://bugzilla.redhat.com/1734699
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERIN
On 05. 08. 2019 21:20, Vincent Royer wrote:
> I tried deployment of 4.3.5.1 using teams and it didn't work. I did
> get into the engine using the temp URL on the host, but the teams
> showed up as individual NICs. Any changes made, like assigning a new
> logical network to the NIC, failed and I lo
Thank you very much for all the information; it helped me understand it better.
Unfortunately I can't get it to work in Python :(
On Fri, Jul 26, 2019 at 2:09 PM Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:
> On Thu, Jul 25, 2019 at 3:50 PM ada per wrote:
> >
> > Hello everyone,
> >