Hi,
I have a 4-node cluster set up, and my storage options right now are an FC-based
storage, one partition per node on a local drive (~200GB each), and an NFS-based
NAS device. I want to set up the export and ISO domains on the NAS, and there are no
issues or questions regarding those two. I wasn't aware that
I need to scratch Gluster off the list, because the setup is based on CentOS 6.5, so
essential prerequisites like qemu 1.3 and libvirt 1.0.1 are not met.
Any info regarding FC storage domain would be appreciated though.
Thanks
Ivan
On Sunday, 1 June 2014 11:44:33, combus...@archlinux.us wrote:
> Hi,
>
> I
One word of caution so far: when exporting any VM, the node that acts as the SPM
is stressed out to the max. I relieved the stress by a fair margin by
lowering the libvirtd and vdsm log levels to WARNING. That shortened the
export procedure by a factor of at least five. But the vdsm process on the SPM no
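For reference, the log-level change described above would typically look like this on an EL6 host. This is only a sketch: the exact handler names in logger.conf vary between vdsm releases, so check your own file before editing.

```ini
# /etc/libvirt/libvirtd.conf
# Numeric levels: 1 = DEBUG, 2 = INFO, 3 = WARNING, 4 = ERROR
log_level = 3

# /etc/vdsm/logger.conf (standard Python logging config format)
# Raise the root logger to WARNING to cut log I/O on the SPM node.
[logger_root]
level=WARNING
handlers=syslog,logfile
```

Restart the daemons afterwards (`service libvirtd restart; service vdsmd restart` on CentOS 6) so the new levels take effect.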
Hi Andrew,
this is something that I saw in my logs too, first on one node and then
on the other three. When that happened on all four of them, the engine was
corrupted beyond repair.
First of all, I think that message is saying that sanlock can't get a
lock on the shared storage that you defined
gration, then things went all to ... My strong
recommendation is not to use the self-hosted engine feature for production
purposes until the mentioned bug is resolved. But it would really help
to hear from someone on the dev team on this one.
Thanks,
Andrew
On Fri, Jun 6, 2014 at 3:20 PM, combuste
It was all working
except that log message :(
Thanks,
Andrew
OK, I have good news and bad news :)
Good news is that I can run different VMs on different nodes when all
of their drives are on the FC storage domain. I don't think that all of the I/O
is running through the SPM, but I need to test that. Simply put, for every
virtual disk that you create on the shared
Bad news happens only when running a VM for the first time, if it helps...
ded it in reproduce steps.
If you know other steps to reproduce this error without blocking the connection
to the storage, it would also be wonderful if you could provide them.
Thanks
- Original Message -
From: "Andrew Lau"
To: "combuster"
Cc: "users"
Sent: Monday, Jun
his at the time of the virtual disk
creation, in case I have selected to run it from a specific node?
, in 15 years I've never seen an extX fs so
badly damaged, and the fact that this happened during migration only
reinforced that thought.
On Tue, Jun 10, 2014 at 3:11 PM, combuster wrote:
Nah, I've explicitly allowed the hosted-engine VM to access the NAS
device as the NFS
/etc/libvirt/libvirtd.conf and /etc/vdsm/logger.conf
, but unfortunately maybe I've jumped to conclusions; last weekend, that
very same thin-provisioned VM was running a simple export for 3 hrs
before I killed the process. But I wondered:
1. The process that runs behind the export is qemu-i
Well if it's a bug then it would be resolved by now :)
https://bugzilla.redhat.com/show_bug.cgi?id=1026662
I had the same doubts as you did. I really don't know why it wouldn't
connect to the iLO if the default port is specified, but I'm glad that you
found a workaround.
Ivan
On 06/30/2014 08:36
Hi,
you need at least two servers in the cluster for the PM test to succeed. If
you do, make sure that the IP address of the iLO is pingable from all hosts
in the cluster. The oVirt engine log would also help in troubleshooting the
issue.
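A quick way to verify that precondition; the iLO address and credentials below are placeholders, not values from this thread:

```shell
# Run from every host in the cluster; replace 10.0.0.50 with your iLO address.
ping -c 3 10.0.0.50

# Optionally check that IPMI itself responds (placeholder credentials).
ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret chassis power status
```

If ping succeeds from every host but the IPMI query fails, the problem is in the iLO configuration or credentials rather than the network path.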
On 01/18/2016 02:10 PM, alireza sadeh seighalan wrote:
hi every
Hi Fil,
this worked for me a couple of months back:
http://lists.ovirt.org/pipermail/users/2015-November/036235.html
I'll try to set this up again and see if there are any issues. Which
oVirt release are you running?
Ivan
On 01/18/2016 02:56 PM, Fil Di Noto wrote:
I'm having trouble sett
rce False)
bf482d82-d8f9-442d-ba93-da5ec225c8c3::DEBUG::2016-01-19
18:03:20,798::task::993::Storage.TaskManager.Task::(_decref)
Task=`bf482d82-d8f9-442d-ba93-da5ec225c8c3`::ref 1 aborting True
bf482d82-d8f9-442d-ba93-da5ec225c8c3::DEBUG::2016-01-19
18:03:20,799::task::919::Storage.TaskManager.Task::
Increasing the network ping timeout and lowering the number of I/O threads
helped. The disk image gets created, but during that time the nodes are pretty
much unresponsive. I should've expected that on my setup...
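As a sketch of where that tuning lives: the worker-pool options below existed in /etc/vdsm/vdsm.conf on 3.x-era hosts, but the exact option names and defaults vary by release, so treat these as assumptions and verify them against the config shipped with your vdsm before applying anything.

```ini
# /etc/vdsm/vdsm.conf
[irs]
# Fewer storage I/O worker processes (the default was around 100 on vdsm 4.16).
process_pool_size = 50
# Allow storage operations more time before they are timed out.
process_pool_timeout = 180
```

The engine-side network timeout is set with engine-config on the engine host; `engine-config --list` shows the option names available in your version. Restart vdsmd after editing vdsm.conf.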
In any case, I hope this helps...
Ivan
On 01/19/2016 06:43 PM, combuster wrote:
OK, se
Hi Clint, you might want to check the macspoof hook features here:
https://github.com/oVirt/vdsm/tree/master/vdsm_hooks/macspoof
This should override the ARP/MAC-spoofing filtering, which might be the cause of
your issues with the OpenVPN setup (first guess).
On 03/05/2016 07:30 PM, Clint Boggio wrote:
I
there anything required to
be installed on the hypervisor hosts themselves?
Thanks,
Chris
On Mar 5, 2016 1:47 PM, "combuster" <combus...@gmail.com> wrote:
Ignore the link (minor accident while pasting). Yum will download the
appropriate one from the repos.
On 03/05/2016 08:09 PM, combuster wrote:
Just the hook rpm (vdsm-hook-macspoof
<http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/noarch/vdsm-hook-macspoof-4.16.10-0.el7.noarch.
rd and set the value "true" for it.
If you want to remove filtering for a single interface, then replace
steps 2 and 3 as outlined in the README.
Kind regards,
Ivan
On 03/05/2016 08:21 PM, cl...@theboggios.com wrote:
It's great to know that it's working.
Best of luck, Clint.
On 03/05/2016 09:09 PM, cl...@theboggios.com wrote:
On 2016-03-05 13:34, combuster wrote:
Correct procedure would be:
1. On each of your ovirt nodes run:
yum install vdsm-hook-macspoof
2. On the engine run:
sudo engine
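The engine-side command is cut off above. For reference, the typical procedure from the vdsm-hook-macspoof README looks roughly like this; the exact UserDefinedVMProperties syntax is version-dependent, so verify it against the README for your release:

```shell
# 1. On each oVirt node: install the hook, then restart vdsmd.
yum install -y vdsm-hook-macspoof
service vdsmd restart

# 2. On the engine host: register the macspoof custom VM property.
sudo engine-config -s "UserDefinedVMProperties=macspoof=^(true|false)$"

# 3. Restart the engine so the new property is picked up.
sudo service ovirt-engine restart
```

After that, set macspoof to true in the VM's Custom Properties to disable MAC/ARP filtering for all of that VM's NICs.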
that scenario?