Hi Jim,
Just to throw my 2 cents in, one of my clusters is very similar to yours,
& I'm not having any of the issues you complain about. One thing I would
strongly recommend you do however is bond your NICs with LACP 802.3ad -
either 2x1Gbit for oVirt & 2x1Gbit for Gluster, or bond all of your NICs
... of (relatively) low bandwidth connections. The only place that might get
much benefit from
single 10Gbit links would be on our distributed storage layer, although
with 10 nodes, each with 4x1Gbit LAGGs, even that's holding up quite well.
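If you want to sanity-check that a bond actually negotiated 802.3ad rather
than silently falling back, the kernel's own view is enough. A minimal
sketch, assuming a Linux host & a bond device named "bond0":

    #!/usr/bin/env python3
    # Sketch: report the bonding mode & slaves the kernel actually negotiated.
    # Assumes a Linux host and a bond device named "bond0" - adjust as needed.
    from pathlib import Path

    def bond_summary(bond="bond0"):
        lines = Path(f"/proc/net/bonding/{bond}").read_text().splitlines()
        mode = next((l.split(":", 1)[1].strip() for l in lines
                     if l.startswith("Bonding Mode")), "unknown")
        slaves = [l.split(":", 1)[1].strip() for l in lines
                  if l.startswith("Slave Interface")]
        return mode, slaves

    if __name__ == "__main__":
        mode, slaves = bond_summary()
        print("mode:  ", mode)
        print("slaves:", ", ".join(slaves) or "none")

If the mode line doesn't read "IEEE 802.3ad Dynamic link aggregation", the
switch side probably isn't doing LACP.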
Let's see how the tests go tomorrow...
On 05/14/2018, 11:33 PM Chris Adams wrote:
>
>> Once upon a time, Doug Ingham said:
>> > Correct!
>> >
>> > | Single 1Gbit virtual interface
>> > |
>> > VM Host Switch stack
>> >|
>>
(mode 4) if possible.
> 2018-05-14 16:20 GMT-03:00 Doug Ingham:
>
>> On 14 May 2018 at 15:03, Vinícius Ferrão wrote:
>>
>>> You should use better hashing algorithms for LACP.
>>>
>>> Take a look at this explanation: https://w
...setting up LACP between the VM & the host. For reasons of stability, my 4.1
cluster's switch type is currently "Linux Bridge", not "OVS". Ergo my
question: is LACP on the VM possible with that, or will I have to use ALB?
Regards,
Doug
>
>
> On 14 May 2018, at 15:
On 14 May 2018 at 15:01, Juan Pablo wrote:
> LACP is not intended for maximizing throughput.
> If you are using iSCSI, you should use multipathd instead.
>
> regards,
>
Umm, maximising the total throughput for multiple concurrent connections is
most definitely one of the uses of LACP. In this c
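To illustrate the point: each flow gets hashed onto a single slave, so one
connection tops out at 1Gbit, but many concurrent connections spread across
the links & the aggregate scales. A toy sketch of a layer3+4-style hash (the
kernel's real xmit_hash_policy maths differs in detail, this is only to show
the behaviour):

    #!/usr/bin/env python3
    # Toy model: a layer3+4 transmit hash pins each flow (4-tuple) to one slave
    # of a 4x1Gbit bond, so a single connection tops out at 1Gbit while many
    # concurrent connections spread across all four links.
    SLAVES = ["eth0", "eth1", "eth2", "eth3"]

    def pick_slave(src_ip, src_port, dst_ip, dst_port):
        # Any deterministic function of the flow tuple sends every packet of
        # that flow down the same link.
        return SLAVES[hash((src_ip, src_port, dst_ip, dst_port)) % len(SLAVES)]

    # A single flow always lands on the same slave:
    print(pick_slave("10.0.0.5", 51000, "10.0.0.9", 3260))

    # Twenty flows from different source ports spread over the bond:
    for port in range(51000, 51020):
        print(port, "->", pick_slave("10.0.0.5", port, "10.0.0.9", 3260))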
Hi All,
My hosts have all of their interfaces bonded via LACP to maximise
throughput, however the VMs are still limited to Gbit virtual interfaces.
Is there a way to configure my VMs to take full advantage of the bonded
physical interfaces?
One way might be adding several VIFs to each VM & using
The two key errors I'd investigate are these...
2018-05-10 03:24:21,048+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:
> /gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27'
I've plugged this into our monitoring.
When the UPSes are at 50%, it puts the general cluster into global
maintenance & then triggers a shutdown action on all of the VMs in the
cluster's service group via the monitoring agent (you could use an SNMP
trap if you use agentless monitoring). Once all of
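The shutdown action itself can be a few lines against the Python SDK. A rough
sketch, where the engine URL, credentials & the "ups-managed" tag/search are
all placeholders to adapt to your own environment:

    #!/usr/bin/env python3
    # Sketch: shut down every running VM matched by a search, for use as the
    # action behind a UPS/monitoring trigger. The engine URL, credentials and
    # the "ups-managed" tag are placeholders - adjust to your environment.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url="https://engine.example.com/ovirt-engine/api",
        username="admin@internal",
        password="secret",
        ca_file="/etc/pki/ovirt-engine/ca.pem",
    )
    try:
        vms_service = connection.system_service().vms_service()
        for vm in vms_service.list(search="status=up and tag=ups-managed"):
            print("Shutting down", vm.name)
            vms_service.vm_service(vm.id).shutdown()
    finally:
        connection.close()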
Err... by reading the hardware specs in the standard manner? E.g. dmidecode,
etc.
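If you'd rather script it from inside the guest, a minimal sketch (Linux
guest assumed; dmidecode needs root):

    #!/usr/bin/env python3
    # Sketch: read basic guest specs from inside the VM. /proc covers vCPUs &
    # memory; dmidecode (as root) shows the SMBIOS data the hypervisor exposes.
    import subprocess
    from pathlib import Path

    cpuinfo = Path("/proc/cpuinfo").read_text()
    vcpus = sum(1 for line in cpuinfo.splitlines() if line.startswith("processor"))

    meminfo = Path("/proc/meminfo").read_text()
    mem_kb = next(int(line.split()[1]) for line in meminfo.splitlines()
                  if line.startswith("MemTotal:"))

    print("vCPUs: ", vcpus)
    print("Memory:", mem_kb // 1024, "MiB")
    print(subprocess.run(["dmidecode", "-t", "system"],
                         capture_output=True, text=True).stdout)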
On 15 April 2018 at 01:28, TomK wrote:
> From within an oVirt (KVM) guest machine, how can I read the guest
> specific definitions such as memory, CPU, disk etc configuration that the
> guest was given?
>
> I would l
I sent Marek an email with my username last week, offering to do the pt_br
translation, but I've still had no response.
Rgds,
On 22 August 2017 at 09:38, Gianluca Cecchi
wrote:
> On Mon, Aug 14, 2017 at 8:37 PM, Jakub Niedermertl
> wrote:
>
>> Hi all,
>>
>> new VM Portal project [1] - a re
Hi All,
Just today I noticed that guests can now pass discards to the underlying
shared filesystem.
http://www.ovirt.org/develop/release-management/features/storage/pass-discard-from-guest-to-underlying-storage/
Is this supported by all of the main Linux guest OSes running the virt
agent?
And wh
> Only problem I would like to manage is that I have gluster network shared
> with ovirtmgmt one.
> Can I move it now with these updated packages?
>
Are the gluster peers configured with the same hostnames/IPs as your hosts
within oVirt?
Once they're configured on the same network, separating the
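A quick way to compare them is to list the host part of each brick. A small
sketch, assuming it's run as root on one of the gluster peers:

    #!/usr/bin/env python3
    # Sketch: list the host part of every brick so it can be compared against
    # the host addresses oVirt knows about.
    import re
    import subprocess

    out = subprocess.run(["gluster", "volume", "info"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        # Brick lines look like "Brick1: s0.dc0.example.com:/gluster/brick/brick1"
        match = re.match(r"Brick\d+:\s*(\S+?):", line)
        if match:
            print(match.group(1))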
Hey Matthew,
I think it's VDSM that handles the pausing & resuming of the VMs.
An analogous small-scale scenario...the Gluster layer for one of our
smaller oVirt clusters temporarily lost quorum the other week, locking all
I/O for about 30 minutes. The VMs all went into pause & then resumed
automatically.
Hi Bryan,
On 23 March 2017 at 23:54, Bryan Sockel wrote:
>
> Hi,
>
> I am attempting to deploy an appliance to a bonded interface, and I'm
> getting this error when it attempts to set up the bridge:
>
>
> [ ERROR ] Failed to execute stage 'Misc configuration': Failed to setup
> networks {'ovirtmgmt
Fedora has signed drivers; however, whilst I can't speak for Windows 10,
I've still not had any luck getting any of the VirtIO & Spice drivers
working on Windows Server 2016.
The services are *running*, but there doesn't seem to be any actual
communication going on between the hypervisor & guest...
16GB is just the recommended amount of memory. The more items your Engine
has to manage, the more memory it will consume, so whilst it might not be
using that amount of memory at the moment, it will do as you expand your
cluster.
On 20 February 2017 at 16:22, FERNANDO FREDIANI
wrote:
> Hello fol
Hi Nir,
On 16 Feb 2017 22:41, "Nir Soffer" wrote:
On Fri, Feb 17, 2017 at 3:16 AM, Doug Ingham wrote:
> Well that didn't go so well. I deleted both dom_md/ids & dom_md/leases in
> the cloned volume, and I still can't import the storage domain.
You cannot delet
e='GLUSTERFS',
connectionList='[StorageServerConnections:{id='5e5f6610-c759-448b-a53d-9a456f513681',
connection='localhost:data-teste2', iqn='null', vfsType='glusterfs',
mountOptions='null', nfsVersion='null', nfsRetrans=
Hi Nir,
On 16 February 2017 at 13:55, Nir Soffer wrote:
> On Mon, Feb 13, 2017 at 3:35 PM, Doug Ingham wrote:
> > Hi Sahina,
> >
> > On 13 February 2017 at 05:45, Sahina Bose wrote:
> >>
> >> Any errors in the gluster mount logs for this gluster volume
https://github.com/wefixit-AT/oVirtBackup
...although I understand the API calls it uses have been deprecated in 4.1.
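If you'd rather roll your own against the current v4 SDK, a rough sketch -
the engine URL, credentials & VM name are placeholders, and a production
version should wait for each snapshot to reach the OK state before carrying
on:

    #!/usr/bin/env python3
    # Sketch: take a dated snapshot of one VM and prune snapshots older than
    # RETAIN_DAYS days. Run it from cron for a simple schedule.
    from datetime import datetime, timedelta
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    VM_NAME, RETAIN_DAYS, PREFIX = "important-vm", 7, "scheduled-"

    connection = sdk.Connection(
        url="https://engine.example.com/ovirt-engine/api",
        username="admin@internal",
        password="secret",
        ca_file="/etc/pki/ovirt-engine/ca.pem",
    )
    try:
        vms_service = connection.system_service().vms_service()
        vm = vms_service.list(search="name=" + VM_NAME)[0]
        snaps_service = vms_service.vm_service(vm.id).snapshots_service()

        # Today's snapshot, with the date encoded in the description.
        snaps_service.add(types.Snapshot(
            description=PREFIX + datetime.now().strftime("%Y-%m-%d")))

        # Prune the ones this script created more than RETAIN_DAYS ago.
        cutoff = (datetime.now() - timedelta(days=RETAIN_DAYS)).strftime("%Y-%m-%d")
        for snap in snaps_service.list():
            desc = snap.description or ""
            if desc.startswith(PREFIX) and desc[len(PREFIX):] < cutoff:
                snaps_service.snapshot_service(snap.id).remove()
    finally:
        connection.close()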
On 15 February 2017 at 14:38, Pat Riehecky wrote:
> Has someone got a script to automate scheduling snapshots of a specific
> system (and retaining them for X days)?
>
> Pat
>
>
Regards,
> Andrej
>
> On 13 February 2017 at 21:17, Doug Ingham wrote:
>
>> Hey Guys,
>> I've gone through both oVirt's & Red Hat's API docs, but I can only find
>> info on getting the global maintenance state & setting local maintenance on
Hey Guys,
I've gone through both oVirt's & Red Hat's API docs, but I can only find
info on getting the global maintenance state & setting local maintenance on
specific hosts.
Is it not possible to set global maintenance via the API?
I'm writing up a new script for our engine-backup routine, but
...daemon that
runs with VDSM, independently of the HE, so I'd basically have to bring the
volume down & wait for the leases to expire/delete them* before I can
import the domain.
*I understand removing /dom_md/leases/ should do the job?
>
> On Thu, Feb 9, 2017 at 11:57 PM, Doug Ingham
> Thanks, I will do some reading on how gluster handles quorum and heal
> operations but your procedure sounds like a sensible way to operate this
> cluster.
>
> Regards,
>
> Chris.
>
>
> On 2017-02-11 18:08, Doug Ingham wrote:
>
>
>
> On 11 February 2017 at 13:32
On 11 February 2017 at 13:32, Bartosiak-Jentys, Chris <
chris.bartosiak-jen...@certico.co.uk> wrote:
> Hello list,
>
> Just wanted to get your opinion on my ovirt home lab setup. While this is
> not a production setup I would like it to run relatively reliably so please
> tell me if the following
Hey Guys,
I currently use dedicated interfaces & hostnames to separate gluster
traffic on my "hyperconverged" hosts.
For example, the first node uses "v0" for its management interface & "s0"
for its gluster interface.
With this setup, I notice that all functions under the "Volumes" tab work,
how
On 9 February 2017 at 10:08, Gianluca Cecchi
wrote:
> On Wed, Feb 8, 2017 at 10:59 AM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> Hello,
>> what is considered the best way to shutdown and restart an hypervisor,
>> supposing plain CentOS 7 host?
>>
>> For example to cover these sce
On 9 February 2017 at 15:48, Yaniv Kaul wrote:
>
>
> On Thu, Feb 9, 2017 at 6:00 PM, Doug Ingham wrote:
>
>>
>>
>> On 9 February 2017 at 12:03, Dan Yasny wrote:
>>
>>>
>>> On Thu, Feb 9, 2017 at 9:55 AM, Doug Ingham wrote:
>>>
Some interesting output from the vdsm log...
2017-02-09 15:16:24,051 INFO (jsonrpc/1) [storage.StorageDomain] Resource
namespace 01_img_60455567-ad30-42e3-a9df-62fe86c7fd25 already registered
(sd:731)
2017-02-09 15:16:24,051 INFO (jsonrpc/1) [storage.StorageDomain] Resource
namespace 02_vol_604
Hi All,
My original HE died & was proving too much of a hassle to restore, so I've
set up a new HE on a new host & now want to import my previous data storage
domain with my VMs.
The problem is when I try to attach the new domain to the datacenter, it
hangs for a minute and then comes back with, "
On 9 February 2017 at 12:03, Dan Yasny wrote:
>
> On Thu, Feb 9, 2017 at 9:55 AM, Doug Ingham wrote:
>
>> Hi Dan,
>>
>> On 8 February 2017 at 18:26, Dan Yasny wrote:
>>>
>>>
>>> But seriously, above all, I'd recommend you backup the engine
Hi Dan,
On 8 February 2017 at 18:26, Dan Yasny wrote:
>
>
> But seriously, above all, I'd recommend you backup the engine (it comes
> with a utility) often and well. I do it via cron every hour in production,
> keeping a rotation of hourly and daily backups, just in case. It doesn't
> take much s
Hi Dan,
On 8 February 2017 at 18:10, Dan Yasny wrote:
>
>
> On Wed, Feb 8, 2017 at 4:07 PM, Doug Ingham wrote:
>
>> Hi Guys,
>> My Hosted-Engine has failed & it looks like the easiest solution will be
>> to install a new one. Now before I try to re-add t
Hi Guys,
My Hosted-Engine has failed & it looks like the easiest solution will be
to install a new one. Now before I try to re-add the old hosts (still
running the guest VMs) & import the storage domain into the new engine, in
case things don't go to plan, I want to make sure I'm able to bring up
On 6 February 2017 at 13:30, Simone Tiraboschi wrote:
>
>
>1. What problems can I expect to have with VMs added/modified
>since the last backup?
>
> Modified VMs will be reverted to the previous configuration;
>>> additional VMs should be seen as external VMs, then you cou
Hi All, Simone,
On 24 January 2017 at 10:11, Simone Tiraboschi wrote:
>
>
> On Tue, Jan 24, 2017 at 1:49 PM, Doug Ingham wrote:
>
>> Hey guys,
>> Just giving this a bump in the hope that someone might be able to
>> advise...
>>
>> Hi all,
>>
On 31 January 2017 at 13:27, Gianluca Cecchi
wrote:
> This in CentOS 7.3 plain hosts used as hypervisors and intended for oVirt
> 4.0 and 4.1 hosts.
> In particular for performance related packages such as
> bwm-ng
> iftop
> htop
> nethogs
> and the like.
> Thanks,
> Gianluca
>
We've been ru
Hey guys,
Would anyone be able to tell me the name/location of the gluster client
log when mounting through libgfapi?
Cheers,
--
Doug
>
> its memory resynchronised one last time
>
Actually, thinking about it, rather than diffing *all* of the memory on the
first host to resync it at the last moment, the hypervisor probably
simultaneously copies the current state of memory & uses copy-on-write
(COW) to write all new transactions to
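Either way, the usual scheme is iterative pre-copy. A toy model of the
convergence behaviour (not what QEMU literally does):

    #!/usr/bin/env python3
    # Toy model of iterative pre-copy: keep re-copying the pages dirtied during
    # the previous pass until the remainder is small enough to send while the
    # VM is briefly paused.
    TOTAL_PAGES = 100_000
    DIRTY_RATE = 0.05        # fraction of copied pages re-dirtied during a pass
    PAUSE_THRESHOLD = 1_000  # pages we're willing to copy while paused

    dirty, passes = TOTAL_PAGES, 0
    while dirty > PAUSE_THRESHOLD:
        passes += 1
        copied = dirty
        # The guest keeps running while we copy, dirtying pages roughly in
        # proportion to how long (i.e. how much) the pass took.
        dirty = int(copied * DIRTY_RATE)
        print(f"pass {passes}: copied {copied} pages, {dirty} dirtied meanwhile")

    print(f"pause the VM, copy the final {dirty} pages, resume on the destination")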
Hi Gianluca,
My educated guess...When you live migrate a VM, its state in memory is
copied over to the new host, but the VM still remains online during this
period to minimise downtime. Once its state in memory is fully copied to
the new host, the VM is paused on the original host, its memory
resynchronised one last time
Make sure to enable global maintenance mode before doing so!
The Hosted-Engine is just a manager for the underlying hypervisors, which
will keep running the VMs as usual until the engine comes back online.
Disable maintenance mode afterwards.
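If you're scripting the restart, the toggle is easy to wrap. A minimal
sketch, assuming it's run as root on one of the hosted-engine hosts:

    #!/usr/bin/env python3
    # Sketch: wrap the global maintenance toggle for scripted engine restarts.
    import subprocess
    import sys

    def set_global_maintenance(mode):
        # mode is "global" or "none"
        subprocess.run(["hosted-engine", "--set-maintenance", "--mode=" + mode],
                       check=True)

    if __name__ == "__main__":
        set_global_maintenance(sys.argv[1] if len(sys.argv) > 1 else "global")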
On 26 January 2017 at 13:48, Wout Peeters wrote:
> H
On 24 January 2017 at 15:15, emanuel.santosvar...@mahle.com wrote:
> If I access the UI via "ALIAS" I get the Error-Page "The client is not
> authorized to request an authorization. It's required to access the system
> using FQDN.
>
> What can I do to get UI working through ALIAS and real hostname
Hey guys,
Just giving this a bump in the hope that someone might be able to advise...
Hi all,
> One of our engines has had a DB failure* & it seems there was an
> unnoticed problem in its backup routine, meaning the last backup I've got
> is a couple of weeks old.
> Luckily, VDSM has kept the un
Hi all,
One of our engines has had a DB failure* & it seems there was an unnoticed
problem in its backup routine, meaning the last backup I've got is a couple
of weeks old.
Luckily, VDSM has kept the underlying VMs running without any
interruptions, so my objective is to get the HE back online & g
s in the future, however
my HE has since borked itself & I'm now in the process of
restoring/redeploying it. I've got access to the logs, but the engine & API
are now offline.
Doug
> On Tue, Jan 17, 2017 at 1:52 PM, Doug Ingham wrote:
>
>> Hi Tomas,
>>
>
a GUI issue. VM management was unaffected.
Doug
On Mon, Jan 9, 2017 at 8:09 PM, Doug Ingham wrote:
> Hi all,
> We had some hiccups in our datacenter over the new year which caused some
> problems with our hosted engine.
>
> I've managed to get everything back up & running,
Hey all,
Each of my hosts/nodes also hosts its own gluster bricks for the storage
domains, and peers over a dedicated FQDN & interface.
For example, the first server is set up like the following...
eth0: v0.dc0.example.com (10.10.10.100)
eth1: s0.dc0.example.com (10.123.123.100)
As it's a self-hosted
Hi all,
We had some hiccups in our datacenter over the new year which caused some
problems with our hosted engine.
I've managed to get everything back up & running, however now one of the
VMs is listed twice in the UI. When I click on the VM, both items are
highlighted & I'm able to configure & m