Re: [ovirt-users] changing Master Storage Domain

2018-01-05 Thread Giorgio Bersano
Hi,
I'm sorry to say your suggestion is infeasible. This is a production
environment and I'm unable to put any SD into maintenance apart from the one
I want to shut down.

I was hoping there was some weighting mechanism to influence the choice of
the MSD, but I haven't had the time to look at the algorithm that makes that
choice.
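
For what it's worth, the current holder of the Master role can at least be
observed. A minimal sketch, assuming direct read access to the engine
database (not a supported interface, and type 0 = Master is my reading of
the schema; verify on your own install before relying on it):

  sudo -u postgres psql engine -c \
    "SELECT storage_name, storage_domain_type FROM storage_domain_static;"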

Thank you anyway,
Giorgio.


2018-01-05 10:55 GMT+01:00 Gobinda Das <go...@redhat.com>:

> In that case, move the other SDs to maintenance and keep only the SD which
> you want to be the MSD active. Then, once you move your present MSD to
> maintenance, the active one will become the MSD.
>
> On Wed, Jan 3, 2018 at 10:27 PM, Giorgio Bersano <
> giorgio.bers...@gmail.com> wrote:
>
>> Hello,
>> I have a question regarding the Master Storage Domain. I already searched
>> the net without results, so I'm asking here.
>>
>> In our production setup (ovirt 4.1.7) we have five iSCSI Data Storage
>> Domains and we unfortunately need to remove the one that is the Master SD.
>> That's not difficult, I know the procedure as I did it in the past:
>> - Move every disk belonging to the MSD to another SD
>> - Put the MSD in maintenance
>> - Detach it then Remove it
>>
>> Another SD is automatically selected as Master and everything should be
>> OK (for the sake of simplicity I did not mention the other operations
>> needed on the storage array and the hosts to physically remove the LUN).
>>
>> The tricky part is that I don't want a random choice: I want to choose
>> which SD becomes the new Master.
>> Is there any specific procedure to obtain this result?
>>
>> TIA,
>> Giorgio.
>>
>
>
> --
> Thanks,
> Gobinda
> +91-9019047912 <+91%2090190%2047912>
>


[ovirt-users] changing Master Storage Domain

2018-01-03 Thread Giorgio Bersano
Hello,
I have a question regarding the Master Storage Domain. I already searched the
net without results, so I'm asking here.

In our production setup (ovirt 4.1.7) we have five iSCSI Data Storage
Domains and we unfortunately need to remove the one that is the Master SD.
That's not difficult, I know the procedure as I did it in the past:
- Move every disk belonging to the MSD to another SD
- Put the MSD in maintenance
- Detach it then Remove it

Another SD is automatically selected as Master and everything should be OK
(for the sake of simplicity I did not mention the other operations needed
on the storage array and the hosts to physically remove the LUN).
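
For completeness, a hedged sketch of that host-side cleanup (WWID, device
name, target IQN and portal below are placeholders, not values from our
setup):

  multipath -ll                          # find the WWID of the removed LUN
  multipath -f 36000d31...               # flush its multipath map
  echo 1 > /sys/block/sdX/device/delete  # drop each stale SCSI path device
  # log out of the portal only if no other LUN still uses it:
  iscsiadm -m node -T iqn.2001-05.com.example:lun1 -p 192.0.2.10 -u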

The tricky part is that I don't want a random choice: I want to choose which
SD becomes the new Master.
Is there any specific procedure to obtain this result?

TIA,
Giorgio.


Re: [ovirt-users] CentOS 7.3 host, VM live migration fails

2016-12-17 Thread Giorgio Bersano
2016-12-17 13:48 GMT+01:00 Yaniv Kaul <yk...@redhat.com>:
>
> On Dec 17, 2016 1:50 PM, "Giorgio Bersano" <giorgio.bers...@gmail.com>
> wrote:
>
> Hello everyone,
> ...
> If so, I'm asking to make it very explicit on the ovirt.org website that
> one must NOT UPGRADE TO CENTOS 7.3 until qemu-kvm-ev 2.6 is available.
> If I had read [1] I would probably have refrained from proceeding, but I
> missed the message.
>
> What to do now? ATM I'm not in trouble and I can stand the situation for
> a few days. Does someone know when qemu-kvm-ev 2.6 will be available (or,
> put differently, how reliable is it to use packages from ovirt-4.0-pre)?
>
>
> It's already available from ovirt repos.
> Y.
>
>
Are you sure? This is what I get ATM:

 [root@vbox92 ~]# yum list \*qemu\*ev\*
 Loaded plugins: fastestmirror, ps, rhnplugin
 This system is receiving updates from RHN Classic or Red Hat Satellite.
 Loading mirror speeds from cached hostfile
  * ovirt-4.0: ftp.nluug.nl
  * ovirt-4.0-epel: epel.besthosting.ua
 Installed Packages
 qemu-img-ev.x86_64
10:2.3.0-31.el7.16.1   @ovirt-4.0
 qemu-kvm-common-ev.x86_64
 10:2.3.0-31.el7.16.1   @ovirt-4.0
 qemu-kvm-ev.x86_64
10:2.3.0-31.el7.16.1   @ovirt-4.0
 qemu-kvm-tools-ev.x86_64
10:2.3.0-31.el7.16.1   @ovirt-4.0
 Available Packages
 qemu-kvm-ev-debuginfo.x86_64
10:2.3.0-31.el7.16.1   ovirt-4.0

 [root@vbox92 ~]# yum update \*qemu\*ev\*
 Loaded plugins: fastestmirror, ps, rhnplugin
 This system is receiving updates from RHN Classic or Red Hat Satellite.
 Loading mirror speeds from cached hostfile
  * ovirt-4.0: ftp.nluug.nl
  * ovirt-4.0-epel: epel.besthosting.ua
 No packages marked for update

Are you suggesting taking packages from ovirt-4.0-pre? If the packages
are not yet in the main repo, I think some big warning sign on the website
(in the 4.0.5 and 4.0.6 release notes?) would be welcome. I would probably
have missed it anyway (busy week), but someone else could be luckier
than me.
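
In case anyone wants to try before the official update lands, this is
roughly how I would pull the packages in (the repo id is the one named
above; treat the exact invocation as an assumption):

  yum --enablerepo=ovirt-4.0-pre clean metadata
  yum --enablerepo=ovirt-4.0-pre update 'qemu-*-ev'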

Best regards,
Giorgio.


> Thank you anyway, oVirt is an amazing product indeed.
> Best regards,
> Giorgio.
>
> 1- CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6
> https://www.mail-archive.com/users@ovirt.org/msg37610.html
>
>
>


[ovirt-users] CentOS 7.3 host, VM live migration fails

2016-12-17 Thread Giorgio Bersano
Hello everyone,
I'm happily running an oVirt datacenter in production since Spring 2014; it
was updated to 4.0.5 two weeks ago.

The engine runs on a machine outside the DataCenter (no Hosted Engine).
Until two days ago the architecture was: engine on CentOS 7.2 + iSCSI storage
+ 3 CentOS 7.2 hosts.

Yesterday I updated the hosts to CentOS 7.3 and now I can't live migrate
the VMs. The engine is still CentOS 7.2.

I did not see problems while there was at least one 7.2 host, but VMs are now
stuck on their hosts (or you have to shut down the VM and then start it up).
Could it be because of the lack of qemu-kvm-ev 2.6 [1]?
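
A quick way to confirm what each host is actually running (standard
commands, nothing oVirt-specific):

  rpm -q qemu-kvm-ev
  virsh -r version   # read-only query; shows the running libvirt/QEMU versions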

If so, I'm asking to make it very explicit on the ovirt.org website that one
must NOT UPGRADE TO CENTOS 7.3 until qemu-kvm-ev 2.6 is available.
If I had read [1] I would probably have refrained from proceeding, but I
missed the message.

What to do now? ATM I'm not in trouble and I can stand the situation for a
few days. Does someone know when qemu-kvm-ev 2.6 will be available (or, put
differently, how reliable is it to use packages from ovirt-4.0-pre)?

Thank you anyway, oVirt is an amazing product indeed.
Best regards,
Giorgio.

1- CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6
https://www.mail-archive.com/users@ovirt.org/msg37610.html


Re: [ovirt-users] VDI experience to share?

2016-06-28 Thread Giorgio Bersano
Hi everybody,
thank you for taking the time to share your experiences.
It seems we now have some stuff to digest and tests to run. I think
I'll report back when things settle down.

Best regards,
Giorgio.


Re: [ovirt-users] host usb

2016-06-16 Thread Giorgio Bersano
2016-06-16 14:59 GMT+02:00 ffuentes :
> Giorgio,
>
> Thanks for your reply.
>
> Where can I get vdsm from 3.6.7* for my host?
>
> Thanks again!
>
> Regards,
>
> Fernando Fuentes


From the oVirt site: "In order to install oVirt 3.6.7 Release
Candidate you've to enable oVirt 3.6 release candidate repository", so
you'd better use the instructions in http://www.ovirt.org/release/3.6.7/

Obviously you have to put the host into maintenance before upgrading vdsm,
but this is a normal operating procedure.

Best Regards,
Giorgio.


Re: [ovirt-users] VDI experience to share?

2016-06-16 Thread Giorgio Bersano
2016-06-15 19:20 GMT+02:00 Gianluca Cecchi :
>
> Why not a nuc, or similar device from other vendors that are emerging?
> I don't know for the lan security part, but you can find a nuc5cpyh with
> celeron or nuc5ppyh with pentium respectively at 140 and 170 euros. Both
> have 6w tdp.
> You have to add at least memory but with further 20 euros you get 4gb.
> Disk optional, you can use sdxc or boot from lan
> Just a suggestion for further investigation.
> I'm currently using a top line nuc6 as an hypervisor without any problem
> with 3-4 vms, so I think a bottom line nuc can serve optimally as a thin
> client without its costs.
> Some of them have also Kensington anti theft that can be a good idea due to
> their size.
> HTH,
> Gianluca

Hi Gianluca,
we were looking for a prepackaged solution because of the lack of
human resources to devote to the project.
But if pursuing this search becomes too exhausting we would probably
develop a Linux solution, and in that case the kind of terminal you
suggested is indeed interesting.

Thanks,
Giorgio.


Re: [ovirt-users] host usb

2016-06-16 Thread Giorgio Bersano
2016-06-16 6:39 GMT+02:00 Fernando Fuentes :
> I am now getting this when I try to start it:
>
>
> VM methub is down with error. Exit message: Hook Error.
>
> To get the hostusb setup I did:
>
> engine-config -s
> "UserDefinedVMProperties=hostusb=^0x[0-9a-fA-F]{4}:0x[0-9a-fA-F]{4}$"
> ovirt-engine restart
>
> and the host has the usb hooked installed.
> But when I go an add the custom property "hostusb" the vm wont start.
>
> Again this is on oVirt 3.6 Centos 6.5 x86_64
> and a host of Centos 7 x86_64
>
> Any ideas?
>
> TIA!
>

Hi Fernando,
in 3.6, to pass a host USB device to a VM you don't need a hook, as this
is now standard in the system.

But if you have vdsm from 3.6.6 it doesn't work because of a bug. It
has already been fixed, and if you update vdsm* to 3.6.7rc it will
work. I had the same problem but now it's OK.

Going to the administration portal after the upgrade, you should see
your USB device in the "Host devices" tab of the "Host"; if it isn't
there, click "Refresh Capabilities" and it should come up.
Then go to "Virtual Machines", select your VM, then "Host devices"
and "Add device", and you're almost there.

I learned this thanks to something posted on this list, and probably a
Bugzilla entry too, but I'm unable to find it ATM.
If you see references to IOMMU, ignore them unless you need to expose a
PCI device.

HTH,
Giorgio.


Re: [ovirt-users] VDI experience to share?

2016-06-15 Thread Giorgio Bersano
2016-06-15 12:21 GMT+02:00 Donny Davis :
> Do you have a requirement for 3d acceleration on the VDI guests?

On the first deployment no, it is basically for front-office and
back-office activity.
But we would probably also be asked to try it for multimedia activities,
like casual guests watching internet videos at the local public
library.
So two different kinds of devices, I suppose.

Anything to suggest?

Thank you,
Giorgio.


Re: [ovirt-users] VDI experience to share?

2016-06-15 Thread Giorgio Bersano
2016-06-15 12:56 GMT+02:00 Ondra Machacek <omach...@redhat.com>:
> On 06/15/2016 12:26 PM, Michal Skrivanek wrote:
>>
>>
>>> On 15 Jun 2016, at 12:18, Giorgio Bersano <giorgio.bers...@gmail.com>
>>> wrote:
>>>
>>> Hi everyone,
>>> I've been asked to deploy a VDI solution based on our oVirt
>>> infrastructure.
>>> What we have in production is a 3.6 manager (standalone, not HE) with
>>> a 3.5 cluster (CentOS 6) and a 3.6 cluster (CentOS 7), iSCSI storage,
>>> fully redundant networking.
>>>
>>> What is not clear to me is the client side, especially because we have
>>> been asked to implement a thin client solution but I've been almost
>>> unable to find suitable devices.
>>
>>
>> if that client can still be a PC, albeit diskless, it’s still easier and
>> probably cheaper than any other special hw.
>>
>>>
>>> Is there anyone on this list willing to share his/her experience on
>>> this topic? Probably my search skills are lacking, but I've only seen
>>> references to IGEL. Other brands?
>>
>>
>> not that I know of, and even that one had (or still has?) some issues
>> with SPICE performance as it’s not kept up to date
>>
>>> There is another strong requirement: our network infrastructure makes
>>> use of 802.1x to authenticate client devices and it would be highly
>>> advisable to respect that constraint.
>>
>>
>> for the VDI connections? I don’t think SPICE supports that, but please
>> bring it up on spice list to make sure.
>> if it would be for oVirt user portal then, I guess with pluggable aaa we
>> can support anything. Ondro?
>>
>
> It depends on the use case: if an Apache module which uses RADIUS is OK,
> then yes, it should work.
> The problem is that we currently support only LDAP as the authorization
> backend.

Hi, here I'm speaking of wired network authentication (and nothing more).
What we have in place now: network ports are confined to a VLAN that is only
useful to authenticate the PC (Windows). When the PC boots it
interacts with the RADIUS server (freeradius) using PEAP-MSCHAPv2. If
the PC is registered in the Active Directory and authenticates against
it (at machine level, not user level), the switch port is given a VLAN
based on attributes stored in the AD and is enabled to communicate
without restrictions.

With thin clients we would like to have something similar, but it would
even be fine to directly instruct freeradius to enable the port and
set the VLAN on the basis of the thin client's MAC address.
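
A minimal sketch of that MAC-based fallback in the freeradius users file
(MAC address and VLAN id are examples; the switch must be configured for MAC
authentication bypass):

  # /etc/raddb/users -- accept the known MAC and push the VLAN to the switch
  "00c0ffee1234" Cleartext-Password := "00c0ffee1234"
          Tunnel-Type = VLAN,
          Tunnel-Medium-Type = IEEE-802,
          Tunnel-Private-Group-Id = "42"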

I've just discovered that Wyse ThinOS thin clients (Dell) support
802.1x; I wonder if it is compatible with oVirt...
Time to search the SPICE lists, as Michal suggested.

Thanks,
Giorgio.


[ovirt-users] VDI experience to share?

2016-06-15 Thread Giorgio Bersano
Hi everyone,
I've been asked to deploy a VDI solution based on our oVirt infrastructure.
What we have in production is a 3.6 manager (standalone, not HE) with
a 3.5 cluster (CentOS 6) and a 3.6 cluster (CentOS 7), iSCSI storage,
fully redundant networking.

What is not clear to me is the client side, especially because we have
been asked to implement a thin client solution but I've been almost
unable to find suitable devices.

Is there anyone on this list willing to share his/her experience on
this topic? Probably my search skills are lacking, but I've only seen
references to IGEL. Other brands?
There is another strong requirement: our network infrastructure makes
use of 802.1x to authenticate client devices and it would be highly
advisable to respect that constraint.

TIA,
Giorgio.


Re: [ovirt-users] [ovirt-devel] Networking fails for VM running on Centos6.7.Works on Centos6.5

2015-11-29 Thread Giorgio Bersano
2015-11-29 8:59 GMT+01:00 Dan Kenigsberg :
> On Sat, Nov 28, 2015 at 08:10:06PM +0530, mad Engineer wrote:
>> hello all, I am having a strange network issue with VMs that are running
>> on CentOS 6.7 oVirt nodes.
>>
>> I recently added one more oVirt node which is running CentOS 6.7, and
>> upgraded from CentOS 6.5 to CentOS 6.7 on all other nodes.
>>
>> All VMs running on nodes with CentOS 6.7 as the host operating system fail
>> to reach the network gateway, but if I reboot the same host into the
>> CentOS 6.5 kernel everything works fine (without changing any network
>> configuration).
>>
>> Initially I thought it was a configuration issue, but it's there on all
>> nodes; if I reboot into the old kernel everything works.
>>
>> I am aware of the ghost vlan0 issue in the CentOS 6.6 kernel, but not
>> aware of any issue in CentOS 6.7. Also, all my servers are up to date.
>>
>>
>> All physical interfaces are in access mode VLAN connected to nexus 5k
>> switches.
>>
>>
>> working kernel- 2.6.32-431.20.3.el6.x86_64
>>
>> non working kernel- 2.6.32-573.8.1.el6.x86_64

Do you have the possibility to test with
kernel-2.6.32-504.16.2.el6.x86_64 (it's in CentOS Vault now) ?

What I have seen in our environment is that - regarding this problem -
it is the latest correctly working kernel.
There are clear signs of misbehaviour due to changes in the VLAN code
between that kernel and the next one (2.6.32-504.23.4.el6.x86_64). Not
always; it may also depend on the NIC driver involved.

Also take a look at https://bugs.centos.org/view.php?id=9467 and
https://bugzilla.redhat.com/show_bug.cgi?id=1263561 .
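
If it helps, pulling that kernel in for a test should be as simple as the
following (the C6.6-* repo ids are my assumption, based on the stock
/etc/yum.repos.d/CentOS-Vault.repo; check yours):

  yum --enablerepo=C6.6-updates install kernel-2.6.32-504.16.2.el6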

> Can you provide the topology of your VM network config (vlan, bond, bond
> options, bridge options)? Do you have an IP address on the bridge?
>
> (I have not seen this happen myself)

Dan, did you see this
https://www.mail-archive.com/users@ovirt.org/msg28561.html  thread?

We have seen this bug in oVirt (bond+VLAN+bridges) but also in simple
KVM with VLAN+bridges, and with plain servers using VLANs directly on the
NIC. Therefore I think the topology is almost irrelevant and the
problem lies in the VLAN code.

I would be very interested in
https://bugzilla.redhat.com/show_bug.cgi?id=1264316 but public access
is denied. Do you have access to it? I hope so.

Best regards,
Giorgio.


Re: [ovirt-users] Issue with kernel 2.6.32-573.3.1.el6.x86_64?

2015-10-23 Thread Giorgio Bersano
Hi Michael,
if you have the possibility to test with
kernel-2.6.32-504.16.2.el6.x86_64 and
kernel-2.6.32-504.23.4.el6.x86_64 please do it.

I expect that 2.6.32-504.16.2.el6 is the latest correctly working
kernel. Please tell us if this is your case.
There are clear signs of misbehaviour due to changes in the VLAN code
between the two kernels above.

Also take a look at https://bugs.centos.org/view.php?id=9467 and
https://bugzilla.redhat.com/show_bug.cgi?id=1263561 .
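
To boot a specific older kernel for the test (el6 uses grub 0.97; count your
own entries, the index below is an example):

  grep ^title /boot/grub/grub.conf   # list installed kernels, 0-based order
  # then set default=<index of the kernel to test> in /boot/grub/grub.conf
  # and reboot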

Best regards,
Giorgio.

P.S. Sorry for top-posting. I tried to re-thread this reply but I
couldn't do it in an effective way :(



2015-10-21 23:05 GMT+02:00 Michael Kleinpaste:
> VMs are on different VLANs and use a central Vyos VM as the firewall and
> default gateway.  The only indication I had that the packets were getting
> dropped or being sent out of order was by tshark'ing the traffic.  Tons and
> tons of resends.
>
> The problem was definitely resolved after I dropped back to the prior kernel
> (2.6.32-504.12.2.el6.x86_64).
>
>
> On Tue, Oct 20, 2015 at 11:51 PM Ido Barkan  wrote:
>>
>> Hi Michael,
>> Can you describe your network architecture for this vm (inside the host).
>> Do you know where are the packets get dropped?
>> Ido
>>
>> On Tue, Sep 22, 2015 at 1:19 AM, Michael Kleinpaste
>>  wrote:
>> > Nobody's seen this?
>> >
>> > On Wed, Sep 16, 2015 at 9:08 AM Michael Kleinpaste
>> >  wrote:
>> >>
>> >> So I patched my vhosts and updated the kernel to
>> >> 2.6.32-573.3.1.el6.x86_64.  Afterwards the networking became unstable
>> >> for my
>> >> vyatta firewall vm.  Lots of packet loss and out of order packets
>> >> (based on
>> >> my tshark at the time).
>> >>
>> >> Has anybody else experienced this?
>> >> --
>> >> Michael Kleinpaste


[ovirt-users] problem with iSCSI/multipath.

2015-07-27 Thread Giorgio Bersano
Hi all.
We have an oVirt cluster happily running in production since the
beginning of 2014.
It started as a 3.3 beta and is now version 3.4.4-1.el6.

Shared storage provided by an HP P2000 G3 iSCSI MSA.
The storage server is fully redundant (2 controllers, dual port disks,
4 iscsi connections per controller) and so is the connectivity (two
switches, multiple ethernet cards per server).

From now on let's talk only about iSCSI connectivity.
The two oldest servers have 2 NICs each; they were configured by hand,
setting routes so that every iSCSI target is reachable from every NIC.
On the new server we installed oVirt 3.5 to have a look at the
network configuration provided by oVirt.
In Data Center - iSCSI Multipathing we defined an iSCSI Bond binding
together 3 of the server's NICs and the 8 NICs of the MSA.
The result is a system that has been functioning for months.
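
For reference, the hand-made routing on the two older servers boils down to
something like this (a reconstruction using the addresses from the routing
table included at the end of this mail; the per-target /32 routes visible in
that listing are added on top of this):

  ip route add 192.168.126.0/24 dev eth3 src 192.168.126.64 table 3
  ip rule add from 192.168.126.64 table 3
  ip route add 192.168.126.0/24 dev eth4 src 192.168.126.65 table 4
  ip rule add from 192.168.126.65 table 4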

Recently we had to do an upgrade of the storage firmware.
This activity uploads the firmware to one of the MSA controllers and then
reboots it. If successful, this is repeated on the other controller.
There is an impact on I/O performance, but there should be no
problems, as every volume on the MSA remains visible on other paths.

Well, that's the theory.
On the two hand-configured hosts we had no significant problems.
On the 3.5 host, VMs started to migrate due to storage problems; then
the situation got worse, and it took more than an hour to bring
the system back to a good operating level.

I am inclined to believe that the culprit is the server's routing
table. It seems to me that the oVirt-generated one is too simplistic and
prone to problems in case of connectivity loss (as in our situation, or
when you have to reboot one of the switches).

Anyone on this list with strong experience of a similar setup?

I have included below some background information.
I'm available to provide anything useful to further investigate the case.

TIA,
Giorgio.


---
context information
---

oVirt Compatibility Version: 3.4

two FUJITSU PRIMERGY RX300 S5 hosts
CPU:  Intel(R) Xeon(R) E5504 @ 2.00GHz  / Intel Nehalem Family
OS Version: RHEL - 6 - 6.el6.centos.12.2
Kernel Version: 2.6.32 - 504.16.2.el6.x86_64
KVM Version: 0.12.1.2 - 2.448.el6_6.2
LIBVIRT Version: libvirt-0.10.2-46.el6_6.6
VDSM Version: vdsm-4.14.17-0.el6
RAM: 40GB
mom-0.4.3-1.el6.noarch.rpm
ovirt-release34-1.0.3-1.noarch.rpm
qemu-img-rhev-0.12.1.2-2.448.el6_6.2.x86_64.rpm
qemu-kvm-rhev-0.12.1.2-2.448.el6_6.2.x86_64.rpm
qemu-kvm-rhev-tools-0.12.1.2-2.448.el6_6.2.x86_64.rpm
vdsm-4.14.17-0.el6.x86_64.rpm
vdsm-cli-4.14.17-0.el6.noarch.rpm
vdsm-hook-hostusb-4.14.17-0.el6.noarch.rpm
vdsm-hook-macspoof-4.14.17-0.el6.noarch.rpm
vdsm-python-4.14.17-0.el6.x86_64.rpm
vdsm-python-zombiereaper-4.14.17-0.el6.noarch.rpm
vdsm-xmlrpc-4.14.17-0.el6.noarch.rpm

# ip route list table all |grep 192.168.126.
192.168.126.87 dev eth4  table 4  proto kernel  scope link  src 192.168.126.65
192.168.126.86 dev eth4  table 4  proto kernel  scope link  src 192.168.126.65
192.168.126.81 dev eth4  table 4  proto kernel  scope link  src 192.168.126.65
192.168.126.80 dev eth4  table 4  proto kernel  scope link  src 192.168.126.65
192.168.126.77 dev eth4  table 4  proto kernel  scope link  src 192.168.126.65
192.168.126.0/24 dev eth4  table 4  proto kernel  scope link  src 192.168.126.65
192.168.126.0/24 dev eth3  proto kernel  scope link  src 192.168.126.64
192.168.126.0/24 dev eth4  proto kernel  scope link  src 192.168.126.65
192.168.126.85 dev eth3  table 3  proto kernel  scope link  src 192.168.126.64
192.168.126.84 dev eth3  table 3  proto kernel  scope link  src 192.168.126.64
192.168.126.83 dev eth3  table 3  proto kernel  scope link  src 192.168.126.64
192.168.126.82 dev eth3  table 3  proto kernel  scope link  src 192.168.126.64
192.168.126.76 dev eth3  table 3  proto kernel  scope link  src 192.168.126.64
192.168.126.0/24 dev eth3  table 3  proto kernel  scope link  src 192.168.126.64
broadcast 192.168.126.0 dev eth3  table local  proto kernel  scope
link  src 192.168.126.64
broadcast 192.168.126.0 dev eth4  table local  proto kernel  scope
link  src 192.168.126.65
local 192.168.126.65 dev eth4  table local  proto kernel  scope host
src 192.168.126.65
local 192.168.126.64 dev eth3  table local  proto kernel  scope host
src 192.168.126.64
broadcast 192.168.126.255 dev eth3  table local  proto kernel  scope
link  src 192.168.126.64
broadcast 192.168.126.255 dev eth4  table local  proto kernel  scope
link  src 192.168.126.65


one HP ProLiant DL560 Gen8 host
CPU:  Intel(R) Xeon(R) CPU E5-4610 v2 @ 2.30GHz / Intel SandyBridge Family
OS Version:RHEL - 6 - 6.el6.centos.12.2
Kernel Version: 2.6.32 - 504.16.2.el6.x86_64
KVM Version: 0.12.1.2 - 2.448.el6_6.2
LIBVIRT Version: libvirt-0.10.2-46.el6_6.6
VDSM Version: vdsm-4.16.14-0.el6
RAM: 256GB
mom-0.4.3-1.el6.noarch.rpm
ovirt-release35-002-1.noarch.rpm
qemu-img-rhev-0.12.1.2-2.448.el6_6.2.x86_64.rpm
qemu-kvm-rhev-0.12.1.2-2.448.el6_6.2.x86_64.rpm

Re: [ovirt-users] qemu-kvm from Jenkins - renamed packages to rhev?

2014-11-03 Thread Giorgio Bersano
2014-10-30 23:42 GMT+01:00 Itamar Heim ih...@redhat.com:
 On 10/30/2014 09:05 PM, Trey Dockendorf wrote:

 In the past in order to have live snapshots work in CentOS 6.5 I had to
 use RPMs from
 http://jenkins.ovirt.org/job/qemu-kvm-rhev_create-rpms_el6/.  I've
 noticed now that the RPMs have been renamed and no longer match the
 names of those distributed by CentOS.  Are the CentOS builds of QEMU now
 supporting live snapshots, or are these builds in Jenkins still required?

 Thanks,
 - Trey




 native centos - no
 the jenkins job isn't needed as well as the build is now in the repo (and
 3.5 now requires qemu-kvm-rhev to avoid such problems)
 http://resources.ovirt.org/pub/ovirt-3.5/rpm/el6Server/x86_64/

Hi,
I would like to point out that - thanks to the CentOS 6.6 release -
qemu-{img,kvm}*0.12.1.2-2.448.el6_6.x86_64 are now available, but the
qemu*rhev* packages in the 3.5 repository you mentioned are still stuck at
0.12.1.2-2.415.el6_5.14.x86_64.
Is there any possibility of an upgrade?

TIA, Giorgio.


Re: [ovirt-users] oVirt italian translation

2014-08-11 Thread Giorgio Bersano
2014-08-08 19:25 GMT+02:00 Einav Cohen eco...@redhat.com:
 Hi Giorgio,

 you can apply patch http://gerrit.ovirt.org/#/c/31262/ and patch 
 http://gerrit.ovirt.org/#/c/31263
 ('master', draft) and build in development environment in order to see the 
 results of your Italian
 translation up until now in the 'master' version of the 'ovirt' Zanata 
 project.
 If needed, I will create patches for the 'ovirt-engine-3.5' code-branch / 
 'ovirt-3.5' zanata-version
 as well - let me know if this is needed.

Hi,
I'm now in a place where I have limited connectivity, so there is nothing
I can tell about it (and this will be so for a couple more weeks). I think
Gianluca is coming back from his holidays, so let's see what he thinks.

 reminding in this opportunity that you should keep the Italian translation 
 updated in both the
 'ovirt-3.5' [1] and 'master' [2] versions of the 'ovirt' Zanata project.
 Currently, the Italian translation completion in 'master' stands at 32.19% 
 and in 'ovirt-3.5'
 it stands at 34.31%, which implies that you may have been updating 
 translations only in 'ovirt-3.5'
 recently - so just pointing out that updating those translations in 'master' 
 as well is extremely
 important: updating the translations in only the 'ovirt-3.5' version will 
 result in translations
 regressions in future ovirt-engine releases.

Don't worry, we are keeping the two branches in sync, but in the
'master' branch I'm marking as Fuzzy those translations that I think
I have to revise with Gianluca.
Hence the difference you see in the percentage of completion, as Zanata
doesn't count those phrases as completed.

Bye,
Giorgio.

 please let me know if you have any questions.

 
 Thanks,
 Einav


 [1] 
 https://translate.zanata.org/zanata/iteration/view/ovirt/ovirt-3.5?cid=33770
 [2] https://translate.zanata.org/zanata/iteration/view/ovirt/master?cid=33770

 - Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Einav Cohen eco...@redhat.com, aw...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Friday, August 8, 2014 9:05:23 AM
 Subject: [ovirt-users] oVirt italian translation

 Hi oVirt UX team,
 we (Gianluca and I) are slowly progressing on the job of having
 Italian as a fully translated language in oVirt.

 One of our problems is that we don't know what the outcome of our work
 will be until a release embedding Italian is available.
 Probably the first one will not be something to be really proud of.
 Having a prototype to get a glance of how it looks now would be really
 useful; never mind the general quality.
 So I asked Sandro whether it could be possible to have the Italian
 translation in the 3.5 RC release at the point where it is now (almost 35%
 done) and then remove it from the official 3.5 release, but he said it is
 not within his possibilities.

 An alternative he suggested was to have a special 3.5 development
 branch (something like ovirt-3.5-ita) just to give us the opportunity
 to get a taste of what it would be in the end.

 Please tell us what do you think about.
 Thanks, Giorgio.


[ovirt-users] oVirt italian translation

2014-08-08 Thread Giorgio Bersano
Hi oVirt UX team,
we (Gianluca and I) are slowly progressing on the job of having
Italian as a fully translated language in oVirt.

One of our problems is that we don't know what the outcome of our work
will be until a release embedding Italian is available.
Probably the first one will not be something to be really proud of.
Having a prototype to get a glance of how it looks now would be really
useful; never mind the general quality.
So I asked Sandro whether it could be possible to have the Italian
translation in the 3.5 RC release at the point where it is now (almost 35%
done) and then remove it from the official 3.5 release, but he said it is
not within his possibilities.

An alternative he suggested was to have a special 3.5 development
branch (something like ovirt-3.5-ita) just to give us the opportunity
to get a taste of what it would be in the end.

Please tell us what do you think about.
Thanks, Giorgio.


Re: [ovirt-users] Auto-SOLVED, but read anyway : Invalid status on Data Center. Setting status to Non Responsive.

2014-05-14 Thread Giorgio Bersano
2014-05-09 15:55 GMT+02:00 Nicolas Ecarnot nico...@ecarnot.net:
 Hi,

 On our second oVirt setup in 3.4.0-1.el6 (that was running fine), I did a
 yum upgrade on the engine (...sigh...).
 Then rebooted the engine.
 This machine is hosting the NFS export domain.
 Though the VM are still running, the storage domain is in invalid status.
 You'll find below the engine.log.

 At first sight, I thought it was the same issue as :
 http://lists.ovirt.org/pipermail/users/2014-March/022161.html
 because it looked very similar.
 But the NFS export domain connection seemed OK (tested).
 I did try every trick I could thought of, restarting, checking anything...
 Our cluster stayed in a broken state.

 On second sight, I saw that when rebooting the engine, then NFS export
 domain was not mounted correctly (I wrote a static /dev/sd-something in
 fstab, and the iscsi manager changed the letter. Next time, I'll use LVM or
 a label).
 So the NFS served was void/empty/black hole.

 I just realized all the above, and spent my afternoon in cold sweat.
 Correcting the NFS mounting and restarting the engine did the trick.
 What still disturbs me is that the unavailability of the NFS export domain
 should NOT be a reason for the MASTER storage domain to break!

 Following the URL above and the BZ opened by the user
 (https://bugzilla.redhat.com/show_bug.cgi?id=1072900), I see this has been
 corrected in 3.4.1. What gives a perfectly connected NFS export domain, but
 empty?

Hi,
sorry for jumping late onto an old thread; I'm the one who reported that bug.
I have two things to say:
- taking advantage of a rare opportunity to turn off my production
cluster, I put it back into that critical situation, and I can confirm
that with oVirt 3.4.1 the problem has been solved.

 PS : I see no 3.4.1 update on CentOS repo.

- me too, until I installed ovirt-release34.rpm (see
http://www.ovirt.org/OVirt_3.4.1_release_notes ). All went smoothly
after that.

Best Regards,
Giorgio.


[Users] windows VM console

2014-04-03 Thread Giorgio Bersano
Hi all,
I recently imported into oVirt a Windows 2003R2 VM which previously ran
on virt-manager/libvirt/KVM.
All right (almost).
The only weirdness is that I'm unable to have a VNC or SPICE console.

Clicking the green terminal icon in webadmin makes me download a .rdp
file regardless of what I have set in the VM's definition.

Can anyone tell me what controls the production of a .rdp instead of a .vv?
I think this has something to do with virt-v2v because other windows
VM directly created in oVirt behave properly.

What could I investigate more? Which config file?
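
For reference, the protocol the engine has configured for the VM can also be
checked from the REST API; a hedged sketch (engine URL, credentials and VM
name are placeholders):

  curl -s -k -u admin@internal:secret \
    "https://engine.example.com/api/vms?search=name%3Dwin2003r2" \
    | grep -A3 '<display>'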

TIA,
Giorgio.


Re: [Users] windows VM console

2014-04-03 Thread Giorgio Bersano
Thank you Tomas,
 - right click the VM and select the Console Options and check there
that did the trick!

I was missing that menu.

Best regards,
Giorgio.


2014-04-03 17:30 GMT+02:00 Tomas Jelinek tjeli...@redhat.com:
 Hi Giorgio,

 you can try this things:
 - have a look at the webadmin at the edit VM dialog at the console-protocol
 - right click the VM and select the Console Options and check there
 - if nothing helps you can try a different browser or delete the browser data 
 (since we persist the last selection on frontend and something could go wrong)

 Tomas

 - Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: users@ovirt.org Users@ovirt.org
 Sent: Thursday, April 3, 2014 5:08:33 PM
 Subject: [Users] windows VM console

 Hi all,
 I recently imported in oVirt a Windows 2003R2 VM which previously ran
 on virt-manager/libvirt/KVM.
 All right (almost).
 The only weirdness is that I'm unable to have a VNC or SPICE console.

 Clicking the green terminal icon in webadmin makes me download a .rdp
 file regardless of what I have set in the VM's definition.

 Can anyone tell me what controls the production of a .rdp instead of a .vv?
 I think this has something to do with virt-v2v because other windows
 VM directly created in oVirt behave properly.

 What could I investigate more? Which config file?

 TIA,
 Giorgio.


Re: [Users] oVirt 3.5 planning

2014-03-26 Thread Giorgio Bersano
2014-03-26 11:20 GMT+01:00 Sven Kieske s.kie...@mittwald.de:
 Ah cool, I didn't stumble upon the qemu-ga, thanks for that!

 However yes, ubuntu 12.04 is precise release, but I don't
 know of any packaged version of qemu-ga for it, but maybe
 I just missed it as the centos one?


No, it seems to be available only from Raring Ringtail onwards :-(


 Am 26.03.2014 11:13, schrieb Giorgio Bersano:
 Actually qemu-guest-agent IS available in the standard CentOS
 distribution ( ATM
 qemu-guest-agent-0.12.1.2-2.415.el6_5.6.{x86_64,i686}.rpm ).

 Regarding ubuntu 12.04, isn't it  Ubuntu Precise Pangolin LTS (this
 is from oVirt 3.4.0) ?

 HTH,
 Giorgio.

 --
 Mit freundlichen Grüßen / Regards

 Sven Kieske

 Systemadministrator
 Mittwald CM Service GmbH  Co. KG
 Königsberger Straße 6
 32339 Espelkamp
 T: +49-5772-293-100
 F: +49-5772-293-333
 https://www.mittwald.de
 Geschäftsführer: Robert Meyer
 St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
 Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen


Re: [Users] SAN storage expansion

2014-03-19 Thread Giorgio Bersano
2014-03-18 10:28 GMT+01:00 Giorgio Bersano giorgio.bers...@gmail.com:
 2014-03-18 9:15 GMT+01:00 Elad Ben Aharon ebena...@redhat.com:
 Hi,

 LUN sizes are updated using the multipath mechanism. In order to
 update the storage domain size, you'll need to update the physical volume
 size using LVM on your hypervisors.

 Please do the following:
 1) Put the storage domain to maintenance
 2) Execute on your hosts:
 'pvresize --setphysicalvolumesize' with the new physical volume size (150G)
 3) Activate the storage domain

 Thank you Elad,
 I knew a pvresize command was required, but I wanted to be sure that I
 wasn't disrupting any storage parameter bookkeeping by the oVirt
 system.
 I'm wondering if it's because of this that the SD has to be put into
 maintenance.

 Any chance to have the SD resized without putting it offline, for
 example doing pvresize on the SPM node and then pvscan on every other
 host involved?
 I'm sure there are use cases where this would be of great convenience.


Well, I'm back on this.
First of all, Elad, your procedure works indeed.

Moreover, I tried this sequence:
1) Expand the volume using storage specific tools

2) on the SPM host execute
 pvresize /path/of/the/physical/volume  # (no need to specify the
size, it takes the full volume space)

3) (just to be sure) on every other host involved execute
 pvscan

4) the SD is shown with correct, updated information.
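
As a sanity check, I also verify that both LVM layers picked up the new size
(plain LVM commands, run on the SPM host):

  pvs /path/of/the/physical/volume
  vgs   # the domain's VG should now show the additional free space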

Maybe I am overzealous, but I prefer to ask the experts:
could not putting the SD into maintenance have unexpected consequences?

TIA,
Giorgio.


 - Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: users@ovirt.org Users@ovirt.org
 Sent: Monday, March 17, 2014 7:25:36 PM
 Subject: [Users] SAN storage expansion

 Hi all,
 I'm happily continuing my experiments with oVirt and I need a
 suggestion regarding storage management.

 I'm using an iSCSI based Storage Domain and I'm trying to understand
 which is the correct way to extend it's size some time after it's
 creation.

 Please consider the following steps:

 1) Using storage specific tools create a new volume on iSCSI storage
 (e.g. 130.0 GiB)

 2) in oVirt webadmin: Storage - New Domain , select the previously
 created storage (130 GB), OK, after some time it's online:
  Size: 129 GB ; Available: 125 GB ; Used: 4 GB

 3) Use according to your needs (even noop is OK)

 4) Using storage specific tools expand the new volume (e.g. 20.0 GiB
 more, now it says 150.0GiB)

 5) in webadmin: Storage, click on the domain, Edit. Looking at
 Luns > Targets, now it correctly sees 150 GB

 BUT
 on the Storage tab it continues to be seen with
  Size: 129 GB ; Available: 125 GB ; Used: 4 GB


 Do you think it is possible to make oVirt aware of the new available
 storage space? Am I missing something obvious?

 My setup is oVirt 3.4.0 RC2 with fully patched CentOS 6.5 hosts.

 Best regards,
 Giorgio.


Re: [Users] SAN storage expansion

2014-03-18 Thread Giorgio Bersano
2014-03-18 9:15 GMT+01:00 Elad Ben Aharon ebena...@redhat.com:
 Hi,

 LUN sizes are updated using the multipath mechanism. In order to update
 the storage domain size, you'll need to update the physical volume size
 using LVM on your hypervisors.

 Please do the following:
 1) Put the storage domain to maintenance
 2) Execute on your hosts:
 'pvresize --setphysicalvolumesize' with the new physical volume size (150G)
 3) Activate the storage domain

Thank you Elad,
I knew a pvresize command was required, but I wanted to be sure that I
wasn't disrupting any storage parameter bookkeeping by the oVirt
system.
I'm wondering if it's because of this that the SD has to be put into
maintenance.

Any chance to have the SD resized without putting it offline, for
example doing pvresize on the SPM node and then pvscan on every other
host involved?
I'm sure there are use cases where this would be of great convenience.


 - Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: users@ovirt.org Users@ovirt.org
 Sent: Monday, March 17, 2014 7:25:36 PM
 Subject: [Users] SAN storage expansion

 Hi all,
 I'm happily continuing my experiments with oVirt and I need a
 suggestion regarding storage management.

 I'm using an iSCSI based Storage Domain and I'm trying to understand
 which is the correct way to extend it's size some time after it's
 creation.

 Please consider the following steps:

 1) Using storage specific tools create a new volume on iSCSI storage
 (e.g. 130.0 GiB)

 2) in oVirt webadmin: Storage - New Domain , select the previously
 created storage (130 GB), OK, after some time it's online:
  Size: 129 GB ; Available: 125 GB ; Used: 4 GB

 3) Use according to your needs (even noop is OK)

 4) Using storage specific tools expand the new volume (e.g. 20.0 GiB
 more, now it says 150.0GiB)

 5) in webadmin: Storage, click on the domain, Edit. Looking at
 Luns > Targets, now it correctly sees 150 GB

 BUT
 on the Storage tab it continues to be seen with
  Size: 129 GB ; Available: 125 GB ; Used: 4 GB


 Do you think it is possible to make oVirt aware of the new available
 storage space? Am I missing something obvious?

 My setup is oVirt 3.4.0 RC2 with fully patched CentOS 6.5 hosts.

 Best regards,
 Giorgio.


Re: [Users] SAN storage expansion

2014-03-18 Thread Giorgio Bersano
2014-03-18 11:00 GMT+01:00 Liron Aravot lara...@redhat.com:
 Hi Giorgio,
 perhaps I missed something - but why don't you want to extend the domain by
 right-clicking and editing it?


Hi Liron,
we are talking about an iSCSI storage domain.
In the edit dialogue I don't see anything regarding resize or extend.
If I just click OK at this point, nothing changes. Maybe it's me
missing something.

Which action would trigger a pvresize?

In the meantime I'll try Elad's procedure.


 - Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Elad Ben Aharon ebena...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 18, 2014 11:28:14 AM
 Subject: Re: [Users] SAN storage expansion

 2014-03-18 9:15 GMT+01:00 Elad Ben Aharon ebena...@redhat.com:
  Hi,
 
   LUN sizes are updated using the multipath mechanism. In order to
   update the storage domain size, you'll need to update the physical volume
   size using LVM on your hypervisors.
 
  Please do the following:
  1) Put the storage domain to maintenance
  2) Execute on your hosts:
  'pvresize --setphysicalvolumesize' with the new physical volume size (150G)
  3) Activate the storage domain

 Thank you Elad,
 I knew it was required a pvresize command but wanted to be sure that I
 wasn't disrupting possible storage parameters bookkeeping by the oVirt
 system.
 I'm wondering if it's because of this that SD has to be put in maintenance.

 Any chance to have the SD resize without putting it offline, for
 example doing pvresize on the SPM node and then pvscan on every other
 host involved?
 I'm sure there are use cases where this is would be of great convenience.


  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: users@ovirt.org Users@ovirt.org
  Sent: Monday, March 17, 2014 7:25:36 PM
  Subject: [Users] SAN storage expansion
 
  Hi all,
  I'm happily continuing my experiments with oVirt and I need a
  suggestion regarding storage management.
 
  I'm using an iSCSI based Storage Domain and I'm trying to understand
  which is the correct way to extend it's size some time after it's
  creation.
 
  Please consider the following steps:
 
  1) Using storage specific tools create a new volume on iSCSI storage
  (e.g. 130.0 GiB)
 
  2) in oVirt webadmin: Storage - New Domain , select the previously
  created storage (130 GB), OK, after some time it's online:
   Size: 129 GB ; Available: 125 GB ; Used: 4 GB
 
  3) Use according to your needs (even noop is OK)
 
  4) Using storage specific tools expand the new volume (e.g. 20.0 GiB
  more, now it says 150.0GiB)
 
  5) in webadmin: Storage, click on the domain, Edit. Looking at
  Luns > Targets, now it correctly sees 150 GB
 
  BUT
  on the Storage tab it continues to be seen with
   Size: 129 GB ; Available: 125 GB ; Used: 4 GB
 
 
  Do you think it is possible to make oVirt aware of the new available
  storage space? Am I missing something obvious?
 
  My setup is oVirt 3.4.0 RC2 with fully patched CentOS 6.5 hosts.
 
  Best regards,
  Giorgio.


Re: [Users] [ANN] oVirt 3.4.0 Second Release Candidate is now available

2014-03-13 Thread Giorgio Bersano
2014-03-12 22:25 GMT+01:00 Alon Bar-Lev alo...@redhat.com:


 - Original Message -
 From: Alon Bar-Lev alo...@redhat.com
 To: Sandro Bonazzola sbona...@redhat.com, Giorgio Bersano 
 giorgio.bers...@gmail.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Wednesday, March 12, 2014 11:24:51 PM
 Subject: Re: [Users] [ANN] oVirt 3.4.0 Second Release Candidate is now 
 available

  
   Alon, looks like changes in otopi may have pulled in i686 instead of arch
   specific packages.
   Can you check it?
 
  yes, looks like, it is related to the provides change.
  this is part of the reason a new rc was required.

 Hello Giorgio,

 Can you please test next otopi rc (rc4)[1], I believe I fixed this.
 Eventually I will re-write the entire yum...

Hi,
I had some problems with http://mirrorlist.centos.org/ but in the
end I was able to test, and all went well.

Bye,
Giorgio.



 Oh... forgot... Sandro, can you please publish rc4?


 Regards,
 Alon

 [1] http://jenkins.ovirt.org/job/manual-build-tarball/273/


Re: [Users] [ANN] oVirt 3.4.0 Second Release Candidate is now available

2014-03-12 Thread Giorgio Bersano
2014-03-11 23:43 GMT+01:00 Nir Soffer nsof...@redhat.com:
 - Original Message -
 From: Sandro Bonazzola sbona...@redhat.com
 To: annou...@ovirt.org, Users@ovirt.org, engine-devel 
 engine-de...@ovirt.org, arch a...@ovirt.org, VDSM
 Project Development vdsm-de...@lists.fedorahosted.org
 Sent: Tuesday, March 11, 2014 6:17:03 PM
 Subject: [Users] [ANN] oVirt 3.4.0 Second Release Candidate is now available

 The oVirt team is pleased to announce that the 3.4.0 Second Release Candidate
 is now available for testing.
 ...
 [1] http://www.ovirt.org/OVirt_3.4.0_release_notes

 I noticed that not all vdsm fixes are listed in the release notes

 74b4a27 xmlrpc: [Fix] Use correct base class for parsing request
 https://bugzilla.redhat.com/1074063

 d456d75 xmlrpc: Support HTTP 1.1
 https://bugzilla.redhat.com/1070476

 Nir


Hi,
just want to let you know that upgrading the engine from 3.4.0 RC to
3.4.0 RC2 pulls in three i686 rpms:
 glibc-2.12-1.132.el6.i686.rpm
 iptables-1.4.7-11.el6.i686.rpm
 nss-softokn-freebl-3.14.3-9.el6.i686.rpm
(this is a fully patched CentOS 6.5 x86_64 install).

Then I ran
 # yum remove iptables.i686 nss-softokn-freebl.i686 glibc.i686
to evict the 32-bit packages from my engine, and all is fine again.
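
To keep them out for good, one option (standard yum configuration, my own
suggestion rather than anything oVirt-specific) is:

  # in the [main] section of /etc/yum.conf
  exclude=*.i686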

Best regards,
Giorgio.


Re: [Users] [ANN] oVirt 3.4.0 Second Release Candidate is now available

2014-03-12 Thread Giorgio Bersano
2014-03-12 14:53 GMT+01:00 Sandro Bonazzola sbona...@redhat.com:
 Il 12/03/2014 14:45, Giorgio Bersano ha scritto:
 2014-03-11 23:43 GMT+01:00 Nir Soffer nsof...@redhat.com:
 - Original Message -
 From: Sandro Bonazzola sbona...@redhat.com
 To: annou...@ovirt.org, Users@ovirt.org, engine-devel 
 engine-de...@ovirt.org, arch a...@ovirt.org, VDSM
 Project Development vdsm-de...@lists.fedorahosted.org
 Sent: Tuesday, March 11, 2014 6:17:03 PM
 Subject: [Users] [ANN] oVirt 3.4.0 Second Release Candidate is now 
 available

 The oVirt team is pleased to announce that the 3.4.0 Second Release 
 Candidate
 is now available for testing.
 ...
 [1] http://www.ovirt.org/OVirt_3.4.0_release_notes

 I noticed that not all vdsm fixes are listed in the release notes

 74b4a27 xmlrpc: [Fix] Use correct base class for parsing request
 https://bugzilla.redhat.com/1074063

 d456d75 xmlrpc: Support HTTP 1.1
 https://bugzilla.redhat.com/1070476

 Nir


 Hi,
 just want to let you know that upgrading the engine from 3.4.0 RC to
 3.4.0 RC2 pulls in three i686 rpms:
  glibc-2.12-1.132.el6.i686.rpm
  iptables-1.4.7-11.el6.i686.rpm
  nss-softokn-freebl-3.14.3-9.el6.i686.rpm
 (this is a fully patched CentOS 6.5 x86_64 install).


 Sounds really weird. Anybody else hit this? Can you determine which package 
 pulled in that dep?


It surely is iptables that pulled the two other rpms in.

I think it was requested in some way because during the
 # engine-setup
phase I answered Yes to the firewall question
 Do you want Setup to configure the firewall? (Yes, No) [Yes]:

In the very same situation in the past I had no 32-bit packages installed.

I think there are no useful messages in the setup log (extract follows):

 ...
2014-03-12 13:46:55 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum queue package iptables for install
Loading mirror speeds from cached hostfile
 * base: artfiles.org
 * epel: be.mirror.eurid.eu
 * extras: artfiles.org
 * ovirt-epel: be.mirror.eurid.eu
 * updates: centos.bio.lmu.de
2014-03-12 13:47:02 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum processing package
iptables-1.4.7-11.el6.i686 for install
2014-03-12 13:47:04 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum package iptables-1.4.7-11.el6.i686 queued
2014-03-12 13:47:04 DEBUG otopi.context context._executeMethod:138
Stage packages METHOD
otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages.Plugin.packages
Checking for new repos for mirrors
2014-03-12 13:47:04 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum queue package ovirt-engine for install
2014-03-12 13:47:07 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum processing package
ovirt-engine-3.4.0-0.13.rc.el6.noarch for install
2014-03-12 13:47:07 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum package
ovirt-engine-3.4.0-0.13.rc.el6.noarch queued
2014-03-12 13:47:07 DEBUG otopi.context context._executeMethod:138
Stage packages METHOD
otopi.plugins.otopi.packagers.yumpackager.Plugin._packages
2014-03-12 13:47:07 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum Building transaction
2014-03-12 13:47:08 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum Transaction built
2014-03-12 13:47:08 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum Transaction Summary:
2014-03-12 13:47:08 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum install- glibc-2.12-1.132.el6.i686
2014-03-12 13:47:08 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum install- iptables-1.4.7-11.el6.i686
2014-03-12 13:47:08 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum install-
nss-softokn-freebl-3.14.3-9.el6.i686
2014-03-12 13:47:08 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum updated-
ovirt-engine-3.4.0-0.12.master.20140228075627.el6.noarch
2014-03-12 13:47:08 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum update -
ovirt-engine-3.4.0-0.13.rc.el6.noarch
  ...
2014-03-12 13:47:08 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager._packages:254 Transaction Summary:
2014-03-12 13:47:08 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager._packages:259 install - glibc-2.12-1.132.el6.i686
2014-03-12 13:47:08 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager._packages:259 install - iptables-1.4.7-11.el6.i686
2014-03-12 13:47:08 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager._packages:259 install -
nss-softokn-freebl-3.14.3-9.el6.i686
2014-03-12 13:47:08 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager._packages:259 updated -
ovirt-engine-3.4.0-0.12.master.20140228075627.el6.noarch
2014-03-12 13:47:08 DEBUG

Re: [Users] Win8 on oVirt

2014-03-11 Thread Giorgio Bersano
2014-03-11 12:40 GMT+01:00 Michael Wagenknecht wagenkne...@fuh-e.de:
 The node runs with CentOS 6.5
 I Understand, no Win8 guests on CentOS nodes.

Hi,
I haven't tried Win8, but regarding the flags I have this:

oVirt 3.4.0 RC

host: CentOS 6.5 with current updates

[root@host1 ~]# cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 26
model name  : Intel(R) Xeon(R) CPU   E5504  @ 2.00GHz
stepping: 5
cpu MHz : 2000.165
cache size  : 4096 KB
physical id : 0
siblings: 4
core id : 0
cpu cores   : 4
apicid  : 0
initial apicid  : 0
fpu : yes
fpu_exception   : yes
cpuid level : 11
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe
syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good
xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2
ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm dts tpr_shadow
vnmi flexpriority ept vpid
bogomips: 4000.33
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:
[root@host1 ~]#


guest: CentOS 6.5 with current updates

[root@guest1 ~]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel Core i7 9xx (Nehalem Class Core i7)
stepping : 3
cpu MHz : 2000.066
cache size : 4096 KB
fpu : yes
fpu_exception : yes
cpuid level : 4
wp : yes
flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
pse36 clflush mmx fxsr sse sse2 syscall nx lm constant_tsc
unfair_spinlock pni ssse3 cx16 sse4_1 sse4_2 x2apic popcnt hypervisor
lahf_lm
bogomips : 4000.13
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
[root@guest1 ~]#


SEP and NX are present in guest as in host.
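
A quick way to double-check any flag of interest on both sides (standard
tools, run in host and guest):

  grep -o -w -e sep -e nx /proc/cpuinfo | sort | uniq -c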

Best regards,
Giorgio.


Re: [Users] Data Center Non Responsive / Contending

2014-03-05 Thread Giorgio Bersano
2014-03-04 17:32 GMT+01:00 Sven Kieske s.kie...@mittwald.de:
 Would you mind sharing the link to it?
 I didn't find it.

 Thanks!


Here it is: BZ 1072900 ( https://bugzilla.redhat.com/show_bug.cgi?id=1072900 )


Re: [Users] Data Center Non Responsive / Contending

2014-03-05 Thread Giorgio Bersano
2014-03-04 22:35 GMT+01:00 Liron Aravot lara...@redhat.com:


 - Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Liron Aravot lara...@redhat.com
 Cc: Meital Bourvine mbour...@redhat.com, users@ovirt.org 
 Users@ovirt.org, fsimo...@redhat.com
 Sent: Tuesday, March 4, 2014 6:10:27 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending

 2014-03-04 16:37 GMT+01:00 Liron Aravot lara...@redhat.com:
 
 
  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Liron Aravot lara...@redhat.com
  Cc: Meital Bourvine mbour...@redhat.com, users@ovirt.org
  Users@ovirt.org, fsimo...@redhat.com
  Sent: Tuesday, March 4, 2014 5:31:01 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 16:03 GMT+01:00 Liron Aravot lara...@redhat.com:
   Hi Giorgio,
   Apparently the issue is caused because there is no connectivity to the
   export domain and then we fail on spmStart - that's obviously a bug that
   shouldn't happen.
 
  Hi Liron,
  we are reaching the same conclusion.
 
   can you open a bug for the issue?
  Surely I will
 
   in the meanwhile, as it seems to still exist - seems to me like the way
   for
   solving it would be either to fix the connectivity issue between vdsm
   and
   the storage domain or to downgrade your vdsm version to before this
   issue
   was introduced.
 
 
  I have some problems with your suggestion(s):
  - I cannot fix the connectivity between vdsm and the storage domain
  because, as I already said, it is exposed by a VM in this very same
  DataCenter, and if the DC doesn't go up, the NFS server can't either.
  - I don't understand what it means to downgrade vdsm: to which
  point in time?

  It seems I've put myself - again - in a chicken-and-egg situation,
  where the DC depends on THIS export domain but
  the export domain isn't available if the DC isn't running.
 
  This export domain isn't that important to me. I can throw it away
  without any problem.
 
  What if we edit the DB and remove any instances related to it? Any
  adverse consequences?
 
 
  Ok, please perform a full db backup before attempting the following:
  1. right-click on the domain and choose Destroy
  2. move all hosts to maintenance
  3. log into the database and run the following sql command:
  update storage_pool where id = '{you id goes here}' set
  master_domain_version = master_domain_version + 1;
  4. activate a host.

 Ok Liron, that did the trick!


Just for the record, the correct command was this one:

update storage_pool  set master_domain_version = master_domain_version + 1
where id = '{your id goes here}' ;
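
For anyone finding this thread in the archives, an untested sketch of
the whole sequence as I would run it on the engine host (database name
and user are the usual defaults here; adjust them, and the
authentication, to your setup):

# full backup first, as Liron recommended
pg_dump -U postgres engine > engine-backup-$(date +%F).sql
# then bump the master domain version for the pool
psql -U postgres engine -c "update storage_pool set master_domain_version = master_domain_version + 1 where id = '{your id goes here}';"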

Best regards,
Giorgio.

 Up and running again, even that VM supposed to be the server acting as
 export domain.

 Now I've to run away as I'm late to a meeting but tomorrow I'll file a
 bug regarding this.

 Thanks to you and Meital for your assistance,
 Giorgio.

 Sure, happy that everything is fine!

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-05 Thread Giorgio Bersano
2014-03-05 14:58 GMT+01:00 Liron Aravot lara...@redhat.com:

 Giorgio,
 I've added the following patch to resolve the issue: 
 http://gerrit.ovirt.org/#/c/25424/
 Have you opened the bug for it? if so, please provide me the bug number so i 
 could assign the patch to it.
 Thanks.

Good that you already have a patch. BZ 1072900 .
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Giorgio Bersano
2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
 StorageDomainDoesNotExist: Storage domain does not exist: 
 (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)

 What's the output of:
 lvs
 vdsClient -s 0 getStorageDomainsList

 If it exists in the list, please run:
 vdsClient -s 0 getStorageDomainInfo 1810e5eb-9eb6-4797-ac50-8023a939f312


I'm attaching a compressed archive to avoid mangling by googlemail client.

Indeed the NFS storage with that id is not in the list of available
storage as it is brought up by a VM that has to be run in this very
same cluster. Obviously it isn't running at the moment.

You find this in the DB:

COPY storage_domain_static (id, storage, storage_name,
storage_domain_type, storage_type, storage_domain_format_type,
_create_date, _update_date, recoverable, last_time_used_as_master,
storage_description, storage_comment) FROM stdin;
...
1810e5eb-9eb6-4797-ac50-8023a939f312  11d4972d-f227-49ed-b997-f33cf4b2aa26  nfs02EXPORT  3  1  0  2014-02-28 18:11:23.17092+01  \N  t  0  \N  \N
...

Also, disks for that VM are carved from the Master Data Domain that is
not available ATM.

To put it in other words: I thought that the availability of an export domain
wasn't critical for bringing up a Data Center. Am I wrong?

Thanks,
Giorgio.


lvs+vdsclient.txt.gz
Description: GNU Zip compressed data
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Giorgio Bersano
2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
 Master data domain must be reachable in order for the DC to be up.
 Export domain shouldn't affect the dc status.
 Are you sure that you've created the export domain as an export domain, and 
 not as a regular nfs?


Yes, I am.

Don't know how to extract this info from DB, but in webadmin, in the
storage list, I have these info:

Domain Name: nfs02EXPORT
Domain Type: Export
Storage Type: NFS
Format: V1
Cross Data-Center Status: Inactive
Total Space: [N/A]
Free Space: [N/A]

ATM my only Data Domain is based on iSCSI, no NFS.





 - Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Meital Bourvine mbour...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 4:16:19 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending

 2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
  StorageDomainDoesNotExist: Storage domain does not exist:
  (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
 
  What's the output of:
  lvs
  vdsClient -s 0 getStorageDomainsList
 
  If it exists in the list, please run:
  vdsClient -s 0 getStorageDomainInfo 1810e5eb-9eb6-4797-ac50-8023a939f312
 

 I'm attaching a compressed archive to avoid mangling by googlemail client.

 Indeed the NFS storage with that id is not in the list of available
 storage as it is brought up by a VM that has to be run in this very
 same cluster. Obviously it isn't running at the moment.

 You find this in the DB:

 COPY storage_domain_static (id, storage, storage_name,
 storage_domain_type, storage_type, storage_domain_format_type,
 _create_date, _update_date, recoverable, last_time_used_as_master,
 storage_description, storage_comment) FROM stdin;
 ...
 1810e5eb-9eb6-4797-ac50-8023a939f312  11d4972d-f227-49ed-b997-f33cf4b2aa26  nfs02EXPORT  3  1  0  2014-02-28 18:11:23.17092+01  \N  t  0  \N  \N
 ...

 Also, disks for that VM are carved from the Master Data Domain that is
 not available ATM.

 To put it in other words: I thought that the availability of an export domain
 wasn't critical for bringing up a Data Center. Am I wrong?

 Thanks,
 Giorgio.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Giorgio Bersano
2014-03-04 15:38 GMT+01:00 Meital Bourvine mbour...@redhat.com:
 Ok, and is the iscsi functional at the moment?


I think so.
For example I see in the DB that the id of my Master Data Domain,
dt02clu6070, is a689cb30-743e-4261-bfd1-b8b194dc85db; then:

[root@vbox70 ~]# lvs a689cb30-743e-4261-bfd1-b8b194dc85db
  LV                                   VG                                   Attr   LSize
  4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400 a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   3,62g
  5c8bb733-4b0c-43a9-9471-0fde3d159fb2 a689cb30-743e-4261-bfd1-b8b194dc85db -wi---  11,00g
  7b617ab1-70c1-42ea-9303-ceffac1da72d a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   3,88g
  e4b86b91-80ec-4bba-8372-10522046ee6b a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   9,00g
  ids                                  a689cb30-743e-4261-bfd1-b8b194dc85db -wi-ao 128,00m
  inbox                                a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 128,00m
  leases                               a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a-   2,00g
  master                               a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a-   1,00g
  metadata                             a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 512,00m
  outbox                               a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 128,00m

I can read from the LVs that have the LVM Available bit set:

[root@vbox70 ~]# dd if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/ids
bs=1M of=/dev/null
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 0,0323692 s, 4,1 GB/s

[root@vbox70 ~]# dd if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/ids
bs=1M |od -xc |head -20
00020101221000200030200
020   ! 022 002  \0 003  \0  \0  \0  \0  \0  \0 002  \0  \0
0200001
 \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0
04000010007
001  \0  \0  \0  \0  \0  \0  \0  \a  \0  \0  \0  \0  \0  \0  \0
0603661393862633033
 \0  \0  \0  \0  \0  \0  \0  \0   a   6   8   9   c   b   3   0
100372d33342d6532343136622d64662d31
  -   7   4   3   e   -   4   2   6   1   -   b   f   d   1   -
120386231623439636435386264
  b   8   b   1   9   4   d   c   8   5   d   b  \0  \0  \0  \0
1403638343839663932
 \0  \0  \0  \0  \0  \0  \0  \0   8   6   8   4   f   9   2   9
160612d62372d6638346564622d38302d35
  -   a   7   b   f   -   4   8   d   e   -   b   0   8   5   -
200656363306539353766306364762e6f62
  c   e   0   c   9   e   7   5   0   f   d   c   .   v   b   o
22037782e307270006926de
  x   7   0   .   p   r   i  \0 336 \0  \0  \0  \0  \0  \0
[root@vbox70 ~]#

Obviously I can't read from LVs that aren't available:

[root@vbox70 ~]# dd
if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400
bs=1M of=/dev/null
dd: opening
`/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400':
No such file or directory
[root@vbox70 ~]#

But those LVs are the VMs' disks and I suppose their availability is
managed by oVirt.



 - Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Meital Bourvine mbour...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 4:35:07 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending

 2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
  Master data domain must be reachable in order for the DC to be up.
  Export domain shouldn't affect the dc status.
  Are you sure that you've created the export domain as an export domain, and
  not as a regular nfs?
 

 Yes, I am.

 Don't know how to extract this info from DB, but in webadmin, in the
 storage list, I have these info:

 Domain Name: nfs02EXPORT
 Domain Type: Export
 Storage Type: NFS
 Format: V1
 Cross Data-Center Status: Inactive
 Total Space: [N/A]
 Free Space: [N/A]

 ATM my only Data Domain is based on iSCSI, no NFS.





  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:16:19 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   StorageDomainDoesNotExist: Storage domain does not exist:
   (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
  
   What's the output of:
   lvs
   vdsClient -s 0 getStorageDomainsList
  
   If it exists in the list, please run:
   vdsClient -s 0 getStorageDomainInfo 1810e5eb-9eb6-4797-ac50-8023a939f312
  
 
  I'm attaching a compressed archive to avoid mangling by googlemail client.
 
  Indeed

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Giorgio Bersano
2014-03-04 16:03 GMT+01:00 Liron Aravot lara...@redhat.com:
 Hi Giorgio,
 Apparently the issue is caused because there is no connectivity to the export
 domain and then we fail on spmStart - that's obviously a bug that shouldn't
 happen.

Hi Liron,
we are reaching the same conclusion.

 can you open a bug for the issue?
Surely I will

 in the meanwhile, as it still seems to exist - it seems to me the way to
 solve it would be either to fix the connectivity issue between vdsm and the
 storage domain or to downgrade your vdsm to a version from before this issue
 was introduced.


I have some problems with your suggestion(s):
- I cannot fix the connectivity between vdsm and the storage domain
because, as I already said, it is exposed by a VM in this very same
DataCenter and if the DC doesn't go up, the NFS server can't come up either.
- I don't understand what it means to downgrade vdsm: to which
point in time?

It seems I've put myself - again - in a situation of the egg-or-the-chicken
type, where the SD depends on THIS export domain but
the export domain isn't available if the DC isn't running.

This export domain isn't that important to me. I can throw it away
without any problem.

What if we edit the DB and remove any instances related to it? Any
adverse consequences?




 6a519e95-62ef-445b-9a98-f05c81592c85::WARNING::2014-03-04
 13:05:31,489::lvm::377::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['
 Volume group 1810e5eb-9eb6-4797-ac50-8023a939f312 not found', '  Skipping
 volume group 1810e5eb-9eb6-4797-ac50-8023a939f312']
 6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04
 13:05:31,499::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
 1810e5eb-9eb6-4797-ac50-8023a939f312 not found
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist: 
 (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
 6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04 
 13:05:31,500::sp::329::Storage.StoragePool::(startSpm) Unexpected error
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/sp.py, line 296, in startSpm
 self._updateDomainsRole()
   File /usr/share/vdsm/storage/securable.py, line 75, in wrapper
 return method(self, *args, **kwargs)
   File /usr/share/vdsm/storage/sp.py, line 205, in _updateDomainsRole
 domain = sdCache.produce(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 98, in produce
 domain.getRealDomain()
   File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
 return self._cache._realProduce(self._sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
 domain = self._findDomain(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)




 - Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Meital Bourvine mbour...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 4:35:07 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending

 2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
  Master data domain must be reachable in order for the DC to be up.
  Export domain shouldn't affect the dc status.
  Are you sure that you've created the export domain as an export domain, and
  not as a regular nfs?
 

 Yes, I am.

 Don't know how to extract this info from DB, but in webadmin, in the
 storage list, I have these info:

 Domain Name: nfs02EXPORT
 Domain Type: Export
 Storage Type: NFS
 Format: V1
 Cross Data-Center Status: Inactive
 Total Space: [N/A]
 Free Space: [N/A]

 ATM my only Data Domain is based on iSCSI, no NFS.





  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:16:19 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   StorageDomainDoesNotExist: Storage domain does not exist:
   (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
  
   What's the output of:
   lvs
   vdsClient -s 0 getStorageDomainsList
  
   If it exists in the list, please run:
   vdsClient -s 0 getStorageDomainInfo 1810e5eb-9eb6-4797-ac50-8023a939f312
  
 
  I'm attaching a compressed archive to avoid mangling by googlemail client.
 
  Indeed the NFS storage with that id is not in the list of available
  storage as it is brought up by a VM that has to be run in this very
  same cluster. Obviously it isn't running at the moment.
 
  You find this in the DB:
 
  COPY storage_domain_static (id, storage, storage_name

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Giorgio Bersano
2014-03-04 16:25 GMT+01:00 Liron Aravot lara...@redhat.com:


 - Original Message -
 From: Liron Aravot lara...@redhat.com
 To: Giorgio Bersano giorgio.bers...@gmail.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 5:03:44 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending

 Hi Giorgio,
 Apparently the issue is caused because there is no connectivity to the export
 domain and then we fail on spmStart - that's obviously a bug that shouldn't
 happen.
 can you open a bug for the issue?
 in the meanwhile, as it still seems to exist - it seems to me the way to
 solve it would be either to fix the connectivity issue between vdsm and
 the storage domain or to downgrade your vdsm to a version from before this
 issue was introduced.

 by the way, solution that we can go with is to remove the domain manually 
 from the engine and forcibly cause to reconstruction of the pool metadata, so 
 that issue should be resolved.


Do you mean Destroy from the webadmin?



 note that if it'll happen for further domains in the future the same 
 procedure would be required.
 up to your choice we can proceed with solution - let me know on which way 
 you'd want to go.

 6a519e95-62ef-445b-9a98-f05c81592c85::WARNING::2014-03-04
 13:05:31,489::lvm::377::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['
 Volume group 1810e5eb-9eb6-4797-ac50-8023a939f312 not found', '  Skipping
 volume group 1810e5eb-9eb6-4797-ac50-8023a939f312']
 6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04
 13:05:31,499::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
 1810e5eb-9eb6-4797-ac50-8023a939f312 not found
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist:
 (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
 6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04
 13:05:31,500::sp::329::Storage.StoragePool::(startSpm) Unexpected error
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/sp.py, line 296, in startSpm
 self._updateDomainsRole()
   File /usr/share/vdsm/storage/securable.py, line 75, in wrapper
 return method(self, *args, **kwargs)
   File /usr/share/vdsm/storage/sp.py, line 205, in _updateDomainsRole
 domain = sdCache.produce(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 98, in produce
 domain.getRealDomain()
   File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
 return self._cache._realProduce(self._sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
 domain = self._findDomain(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)




 - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:35:07 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   Master data domain must be reachable in order for the DC to be up.
   Export domain shouldn't affect the dc status.
   Are you sure that you've created the export domain as an export domain,
   and
   not as a regular nfs?
  
 
  Yes, I am.
 
  Don't know how to extract this info from DB, but in webadmin, in the
  storage list, I have these info:
 
  Domain Name: nfs02EXPORT
  Domain Type: Export
  Storage Type: NFS
  Format: V1
  Cross Data-Center Status: Inactive
  Total Space: [N/A]
  Free Space: [N/A]
 
  ATM my only Data Domain is based on iSCSI, no NFS.
 
 
 
 
 
   - Original Message -
   From: Giorgio Bersano giorgio.bers...@gmail.com
   To: Meital Bourvine mbour...@redhat.com
   Cc: users@ovirt.org Users@ovirt.org
   Sent: Tuesday, March 4, 2014 4:16:19 PM
   Subject: Re: [Users] Data Center Non Responsive / Contending
  
   2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
StorageDomainDoesNotExist: Storage domain does not exist:
(u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
   
What's the output of:
lvs
vdsClient -s 0 getStorageDomainsList
   
If it exists in the list, please run:
vdsClient -s 0 getStorageDomainInfo
1810e5eb-9eb6-4797-ac50-8023a939f312
   
  
   I'm attaching a compressed archive to avoid mangling by googlemail
   client.
  
   Indeed the NFS storage with that id is not in the list of available
   storage as it is brought up by a VM that has to be run in this very
   same cluster. Obviously it isn't running at the moment.
  
   You find this in the DB:
  
   COPY storage_domain_static (id, storage, storage_name,
   storage_domain_type

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Giorgio Bersano
2014-03-04 16:37 GMT+01:00 Liron Aravot lara...@redhat.com:


 - Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Liron Aravot lara...@redhat.com
 Cc: Meital Bourvine mbour...@redhat.com, users@ovirt.org 
 Users@ovirt.org, fsimo...@redhat.com
 Sent: Tuesday, March 4, 2014 5:31:01 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending

 2014-03-04 16:03 GMT+01:00 Liron Aravot lara...@redhat.com:
  Hi Giorgio,
  Apparently the issue is caused because there is no connectivity to the
  export domain and then we fail on spmStart - that's obviously a bug that
  shouldn't happen.

 Hi Liron,
 we are reaching the same conclusion.

  can you open a bug for the issue?
 Surely I will

  in the meanwhile, as it still seems to exist - it seems to me the way to
  solve it would be either to fix the connectivity issue between vdsm and
  the storage domain or to downgrade your vdsm to a version from before this
  issue was introduced.


 I have some problems with your suggestion(s):
 - I cannot fix the connectivity between vdsm and the storage domain
 because, as I already said, it is exposed by a VM in this very same
 DataCenter and if the DC doesn't go up, the NFS server can't come up either.
 - I don't understand what it means to downgrade vdsm: to which
 point in time?

 It seems I've put myself - again - in a situation of the egg-or-the-chicken
 type, where the SD depends on THIS export domain but
 the export domain isn't available if the DC isn't running.

 This export domain isn't that important to me. I can throw it away
 without any problem.

 What if we edit the DB and remove any instances related to it? Any
 adverse consequences?


 Ok, please perform a full db backup before attempting the following:
 1. right-click on the domain and choose Destroy
 2. move all hosts to maintenance
 3. log into the database and run the following sql command:
 update storage_pool where id = '{you id goes here}' set master_domain_version 
 = master_domain_version + 1;
 4. activate a host.

Ok Liron, that did the trick!

Up and running again, even that VM supposed to be the server acting as
export domain.

Now I've to run away as I'm late to a meeting but tomorrow I'll file a
bug regarding this.

Thanks to you and Meital for your assistance,
Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] How do you move an host with local-storage into regular Data Center?

2014-02-27 Thread Giorgio Bersano
2014-02-26 17:35 GMT+01:00 Dafna Ron d...@redhat.com:
 you did not remove the storage before moving the host.

Well, I tried but then there was always something impossible to do,
i.e. the Local Storage Domain was the only SD so, it being the Master
SD, it wasn't possible to remove it...
something like the chicken or the egg dilemma.

 If you select the force remove DC option it should clean all object under
 that DC (just make sure you are selecting the one you want to remove ;))

That worked. I was just trying to avoid that forced option, but you
confirmed me it was the only one.

Thank you,
Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] How do you move an host with local-storage into regular Data Center?

2014-02-27 Thread Giorgio Bersano
2014-02-27 14:46 GMT+01:00 Gadi Ickowicz gicko...@redhat.com:
 Actually, to remove the last domain in a DC the following steps will do it
 and clean up everything without resorting to manual steps:

 1. Deactivate and then detach all domains in the DC except last one
 2. Deactivate last domain
 3. Remove the datacenter (this takes a while since actually the host has to 
 contend for SPM again, like reactivating the domain, in order to remove the 
 DC)
 4. DC should be removed and domain should remain unattached - now domain can 
 be removed
 5. Move the host to maintenance and switch its cluster to the proper DC
 (should now be possible since its DC has been removed and is now blank) or
 move the host directly to the proper cluster

 One thing to note when using force remove for DC is that it does *not* clean 
 up the actual storage - only references to it from the DB (and that is why it 
 disappears from the UI). You have to manually go to the host and clean the 
 storage itself to free up that space.

 Gadi Ickowicz


Thank you Gadi,
I'll try this procedure as soon as I have time to experiment.
Was it already documented somewhere? In that case please forgive my ignorance.

Giorgio
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.5 planning

2014-02-26 Thread Giorgio Bersano
2014-02-25 15:39 GMT+01:00 Gianluca Cecchi gianluca.cec...@gmail.com:
 2014-02-24 17:59 GMT+01:00 Itamar Heim ih...@redhat.com:

 with oVirt 3.4 getting close to GA with many many great features, time
 to collect requests for 3.5...

 Signed rpms as in:
 http://lists.ovirt.org/pipermail/users/2014-January/019627.html
 and the mentioned ticket inside Dan answer:
 https://fedorahosted.org/ovirt/ticket/99

 +1

 Hopefully in 3.4 too... ;-)

 +1
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] How do you move an host with local-storage into regular Data Center?

2014-02-26 Thread Giorgio Bersano
Hi all,
I need your help to clean up my DataCenter.

I'm new to this wonderful product so I'm exploring to understand what's offered.
I'm talking about 3.4.0beta3 (CentOS 6.5).

In the beginning it was a normal system with two hosts, an iSCSI
storage, and the engine installed as a regular KVM guest on another
host (external to the oVirt setup).
So far so good.

Then I selected one of the two hosts, put it in maintenance mode,
clicked on Configure Local Storage, accepted the defaults for Data
Center, Cluster and Storage, put in an appropriate path to local
storage...
Now I have another DC (hostname-Local), another Cluster
(hostname-Local) and another SD (hostname-Local). This host has been
migrated from the original Cluster and it is now in the
hostname-Local Cluster. Nothing to regret, I was expecting something
like that.

Obviously I now have a single host in my main DC and so I'm unable
to do VM migration and so on.

After some tests I decide to go back to the original situation.
I put the host in maintenance mode again, Edit, select the correct
DC and the host is now back in its place.

Now I try to remove the spurious DC to clean up the situation but the
result is an error popup:
Error while executing action: Cannot remove Data Center. There is no
active Host in the Data Center.

OK, I move the host back in. But now when I select the DC the
Remove button is obviously greyed out.

Well, for the moment I'm happy to move the host back into the regular
DC to regain full functionality of my cluster, but I would like to
clean up my setup by removing the other, useless, DC.
Does anyone know how to get out of this?
I'm probably missing something obvious but here I'm stuck.

TIA,
Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.5 planning

2014-02-25 Thread Giorgio Bersano
2014-02-24 17:59 GMT+01:00 Itamar Heim ih...@redhat.com:
 with oVirt 3.4 getting close to GA with many many great features, time to 
 collect requests for 3.5...


My favourite RFE is already in bugzilla:
Enable Hosted Engine Configuration when environment is configured with
SAN Storage Backend (iSCSI, FC).
https://bugzilla.redhat.com/show_bug.cgi?id=1036731 (Thank you Scott Herold)

Best regards,
Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.5 planning

2014-02-25 Thread Giorgio Bersano
2014-02-25 15:02 GMT+01:00 Giorgio Bersano giorgio.bers...@gmail.com:
 2014-02-24 17:59 GMT+01:00 Itamar Heim ih...@redhat.com:
 with oVirt 3.4 getting close to GA with many many great features, time to 
 collect requests for 3.5...


 My favourite RFE is already in bugzilla:
 Enable Hosted Engine Configuration when environment is configured with
 SAN Storage Backend (iSCSI, FC).
 https://bugzilla.redhat.com/show_bug.cgi?id=1036731 (Thank you Scott Herold)


Maybe this one is even more important: the ability to import a Storage
Domain http://www.ovirt.org/Features/ImportStorageDomain .

Best regards,
Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Asking for advice on hosted engine

2014-02-17 Thread Giorgio Bersano
Hello everybody,
I discovered oVirt a couple of months ago when I was looking for the
best way to manage our small infrastructure. I have read every document
I considered useful, but I would like to receive advice from the many
experts that are on this list.

I think it's worth an introduction (I hope it doesn't bore you).

I work in a small local government entity and I try to manage
effectively our limited resources.
We have many years of experience with Linux and especially with CentOS,
which we have deployed on PCs (e.g. as firewalls in remote
locations) and, above all, on servers.

We have been using Xen virtualization from the early days of CentOS 5
and we have built positive experience with KVM too.
I have to say that libvirt in a small environment like ours is really
a nice tool.
So nothing to regret.

Trying to go a little further, as already said, I stumbled upon oVirt
and I've found the project intriguing.

At the moment we are thinking of deploying it on a small environment
of four very similar servers each having:
- a couple of Xeon E5504
- 6 x 1Gb ethernet interfaces
- 40 GB of RAM
two of them have 72 GB of disk (mirrored)
two of them have almost 500GB of useful RAID array

Moreover we have an HP iSCSI storage that should easily satisfy our
current storage requirements.

So, given our small server pool, dedicating another host just to run
the engine seems too high a requirement.

Enter hosted engine and the picture takes on brighter colors. Well, I'm
usually not the adventurous guy but after experimenting a little with
oVirt 3.4 I developed better confidence.
We want to install the engine on the two hosts with the smaller disks.

As far as I know, installing hosted engine mandates NFS storage. But we
want this to be highly available too, and possibly to have it on the
very same hosts.

Here is my solution: make a Gluster replicated volume across the two
hosts and take advantage of Gluster's built-in NFS server.
Then I put 127.0.0.1 as the address of the NFS server in the
hosted-engine-setup so  the host is always able to reach the storage
server (itself).
GlusterFS configuration is done outside of oVirt which, as far as the
engine's storage is concerned, doesn't even know it's a Gluster volume.
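
For the sake of completeness, the Gluster side boils down to something
like the following untested sketch (the host names and the brick path
/gluster/engine-brick are just examples, not my real ones):

# on one of the two hosts, once the bricks are prepared:
gluster volume create engine replica 2 \
  host1:/gluster/engine-brick host2:/gluster/engine-brick
gluster volume start engine
# Gluster's built-in NFS server exports the volume automatically;
# check it is visible locally:
showmount -e localhost
# then, during hosted-engine --deploy, I answer 127.0.0.1:/engine
# when asked for the NFS storage connection.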

Relax, we've finally reached the point where I'm asking advice :-)

Storage and virtualization experts, do you see in this configuration
any pitfall that I've overlooked given my inexperience in oVirt,
Gluster, NFS or clustered filesystems?
Do you think that not only is it feasible (I know it is, I built it and
it's working now) but also reliable and dependable, and that I'm not
risking my neck on this setup?

I've obviously made some tests but I'm not at the confidence level of
saying that everything about the design is right.

OK, I think I've already written too much; better to stop and humbly
wait for your opinions, but I'm obviously here if any clarification on
my part is needed.

Thank you very much for reading until this point.
Best Regards,
Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] hosted-engine deployment doesn't accept legal email address (ovirt-3.4.0-prerelease)

2014-02-14 Thread Giorgio Bersano
2014-02-12 17:28 GMT+01:00 Doron Fediuck dfedi...@redhat.com:

 Hi Giorgio.
 Patches are always welcomed!

 I'd start with opening a bug, as mail verification may need
 some attention.

 Next, to get the code you can use-
 git clone git://gerrit.ovirt.org/ovirt-hosted-engine-setup

 and then make a patch and push it to gerrit.

 Thanks again!
 Doron

Hi Doron,
here it is: https://bugzilla.redhat.com/show_bug.cgi?id=1065269 .

To speed things up I've put the patch in there, as I'm not familiar
with gerrit ATM.
Please push it by yourself if you agree; in the meantime I'll try to
learn how to use it.

TIA,
Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] hosted-engine deployment doesn't accept legal email address (ovirt-3.4.0-prerelease)

2014-02-12 Thread Giorgio Bersano
Hi,
I am happily testing the hosted engine feature; sorry I was late for
the test day, but I dare to provide some feedback anyway.
To be clearer, I prefer to report one item per email.

This is related to a really minor issue; I think it doesn't deserve a
Bugzilla entry, but I'm ready to file one if requested.

So, during
  #  hosted-engine --deploy
in the step named --== HOSTED ENGINE CONFIGURATION ==-- we are asked
to provide a couple of email addresses, but addresses with a hyphen
before the @ sign (e.g. my-n...@mydomain.com ) aren't allowed and
throw the following:
  [ ERROR ] Invalid input, please try again

As a workaround I used another address but, looking at
/usr/share/ovirt-hosted-engine-setup/plugins/ovirt-hosted-engine-setup/ha/ha_notifications.py
(ovirt-hosted-engine-setup-1.1.0-0.5.beta2.el6.noarch)
I've found that _RE_EMAIL_ADDRESS is using a simplified pattern (hope
that gmail doesn't mangle the text too much):

[a-zA-Z0-9_.+]+
@
[a-z0-9.-]+

Even without covering the full intricacy of what is allowed (see Local
part in http://en.wikipedia.org/wiki/Email_address ) I think the
following (not tested) could be more appropriate:

[a-zA-Z0-9]
[a-zA-Z0-9_.+-]+
[a-zA-Z0-9]
@
[a-z0-9.-]+

I don't know if it really works as I've never programmed in Python, but
I hope you get the idea.
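
If someone wants to sanity-check it, here is a quick (untested)
one-liner from the shell, with a made-up address and anchors added
just for the test:

python -c 'import re; print(bool(re.match(r"^[a-zA-Z0-9][a-zA-Z0-9_.+-]+[a-zA-Z0-9]@[a-z0-9.-]+$", "my-name@mydomain.com")))'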

Best regards,
Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users