From: Fabian Deutsch fdeut...@redhat.com
To: dougsl...@redhat.com
Cc: Tolik Litovsky tlito...@redhat.com, users@ovirt.org
Sent: Wednesday, 21 January, 2015 8:30:40 PM
Subject: Re: [ovirt-users] oVirt node weekly talk
----- Original Message -----
On 01/21/2015 10:05 AM, Tolik Litovsky wrote:
Hello Fabian
Can we move the oVirt node weekly talk to another weekday?
Or just a bit earlier?
Both are ok for me.
Let's see: I'd suggest moving it to Mondays, 3 p.m. UTC.
Does that work for everybody?
On Jan 21, 2015, at 9:45 AM, Jorick Astrego j.astr...@netbulae.eu wrote:
Hi,
In the quickstart guide we have the iptables rules for a Fedora 19 host,
but currently we run firewalld on the host (CentOS 7).
I've converted the rules to a service xml for the zone, but I can't
figure out the firewalld translation for -A FORWARD -m physdev !
--physdev-is-bridged -j REJECT
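For a rule like that physdev REJECT, which has no zone/service equivalent, firewalld's direct interface can carry raw iptables arguments. A sketch of `/etc/firewalld/direct.xml`, assuming a firewalld version with direct-rule support (as shipped in CentOS 7):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Sketch: the physdev REJECT rule from the quickstart guide,
     passed through firewalld's direct interface. -->
<direct>
  <rule ipv="ipv4" table="filter" chain="FORWARD" priority="0">-m physdev ! --physdev-is-bridged -j REJECT</rule>
</direct>
```

The same rule can be added from the command line with `firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -m physdev '!' --physdev-is-bridged -j REJECT` (note the quoted `!` to keep the shell from interpreting it).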
Minutes:http://ovirt.org/meetings/ovirt/2015/ovirt.2015-01-21-15.01.html
Minutes (text): http://ovirt.org/meetings/ovirt/2015/ovirt.2015-01-21-15.01.txt
Log:
http://ovirt.org/meetings/ovirt/2015/ovirt.2015-01-21-15.01.log.html
=
#ovirt: oVirt Weekly
Hi
It sounds like you have one iSCSI storage domain which includes the disks.
You can create another iSCSI storage domain and then migrate the disks to
it.
This way all the disks will move to the new domain and you will be able to
remove the old SD without losing the disks.
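A sketch of how such a per-disk migration can be scripted against the oVirt REST API (this assumes the 3.x API's `move` action on disks; both UUIDs below are placeholders, not values from this thread):

```xml
<!-- Sketch: POST /api/disks/<disk-uuid>/move
     moves one disk to the target storage domain.
     The storage_domain id below is a placeholder. -->
<action>
  <storage_domain id="11111111-2222-3333-4444-555555555555"/>
</action>
```

Repeating this for each disk empties the old storage domain so it can be put into maintenance and removed.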
Regards,
The oVirt team is pleased to announce that the oVirt 3.5.1 Final Release is now
available as of Jan 21st 2015.
This release is available now for Fedora 20, Red Hat Enterprise Linux
6.6, CentOS 6.6 (or similar) and Red Hat Enterprise Linux
Cool! Finally!
... and thank you!
On 21.01.2015 17:10, Sandro Bonazzola wrote:
--
Cheers
Douglas
___
Users mailing list
Users@ovirt.org
All:
I have been asked to provide slide templates for presentations that might be
created for future events. Through 2014, we have been using the green-on-white
slide template, and now, thanks to Eidan Hildesheim, we have a new 2015 edition
that is based more on the Patternfly interface.
Dear all,
I am trying to configure power management on ovirt v3.5
(ovirt-node-iso-3.5.0.ovirt35.20140912.el6) and using two Fujitsu PRIMERGY
RX2540 M1 as node hypervisor. I used Fujitsu iRMC port on power management
configuration. The test gives a message Test failed, argument of type
On 20/01/15 09:30, Lars Nielsen wrote:
On 20/01/15 09:09, Martin Pavlík wrote:
ccing Carlos as he might be able to help
Hi Lars,
1) Remove as many restrictions as you can from your NFS export: allow all
hosts to access it, and if removing the restrictions works, add them
back step by step.
2) make
Added the direct LUN so all hypervisors can see it; no effect. Still the
same error.
2015-01-21 10:39 GMT+01:00 Koen Vanoppen vanoppen.k...@gmail.com:
I noticed that there was a difference in the number of attached LUN's
between the hypervisors. This is because we have a VM with direct LUN's.
Hi,
We have no blockers for oVirt 3.5.1 GA [1] so we're good to go with the release.
We're now gathering final release rpms and performing last sanity tests.
Maintainers:
Please review packages list and provide required rpms
There are still 50 bugs [2] targeted to 3.5.1.
Excluding node and
Hi,
I don't have much news for 3.6 this week:
ACTION: Features proposed for 3.6.0 must now be collected in the 3.6 Google doc
[1] and reviewed by maintainers.
Once the review process is finished, the remaining key milestones for this release will
be scheduled.
For reference, external project schedules
Unfortunately, the new Master Data Domain remained Active only for a few minutes,
and then the following error was thrown (as can be seen in the Events tab):
Failed to Reconstruct Master Domain for Data Center
Afterwards, all the other Data Domains became Inactive and the whole Data Center
I noticed that there was a difference in the number of attached LUNs
between the hypervisors. This is because we have a VM with direct LUNs. Should
these LUNs on this particular VM also be attached to the other hypervisors,
or to one hypervisor only?
2015-01-21 8:29 GMT+01:00 Koen Vanoppen
Ok, after rebooting all the hypervisors I'm left with the following errors:
Thread-16::ERROR::2015-01-21
13:39:42,644::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
for unfetched domain 6cf8c48e-fbed-4b68-b376-57eab3039878
Thread-16::ERROR::2015-01-21
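For triaging a vdsm.log full of entries like these, a minimal sketch (log format assumed from the lines above; `unfetched_domains` is an illustrative helper, not a VDSM function) that pulls out the unique storage-domain UUIDs VDSM reports as unfetched:

```python
import re

# Matches the message part of VDSM entries like the ones quoted above:
# Thread-16::ERROR::...::Storage.StorageDomainCache::(_findDomain)
# looking for unfetched domain 6cf8c48e-fbed-4b68-b376-57eab3039878
UUID_RE = re.compile(
    r"looking for unfetched domain "
    r"([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})"
)

def unfetched_domains(log_text):
    """Return the sorted, unique storage-domain UUIDs reported as unfetched."""
    return sorted(set(UUID_RE.findall(log_text)))

log = (
    "Thread-16::ERROR::2015-01-21 "
    "13:39:42,644::sdc::137::Storage.StorageDomainCache::(_findDomain) "
    "looking for unfetched domain 6cf8c48e-fbed-4b68-b376-57eab3039878\n"
)
print(unfetched_domains(log))  # ['6cf8c48e-fbed-4b68-b376-57eab3039878']
```

A recurring UUID here is a quick pointer to which storage domain the hosts cannot reach.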
Dear All,
The following appears in my ovirt-engine.log recently:
2015-01-21 13:41:44,181 WARN
[org.ovirt.engine.core.bll.AddVmFromScratchCommand]
(DefaultQuartzScheduler_Worker-52) [69997d54] CanDoAction of action
AddVmFromScratch failed.
Hello Fabian,
Can we move the oVirt node weekly talk to another weekday?
Or just a bit earlier?
Best regards,
Tolik
Hi guys,
I have some VMs with preallocated disks on a - still green - iSCSI storage
domain, where one of the disks is about to die, which caused the iSCSI
mount to change to read-only. Of course, all VMs allocated on this storage have been
paused or turned off.
Is there any way to save these VMs (disks)?