Re: [ovirt-users] Changing MAC Pool

2017-06-18 Thread Michael Burman
Hi Mahdi

What version are you running? It is possible to extend the MAC pool range.

Before 4.1 you can extend the MAC pool range globally with the engine-config
command, for example:

- engine-config -s
MacPoolRanges=00:00:00:00:00:00-00:00:00:10:00:00,00:00:00:02:00:00-00:03:00:00:00:0A


- restart ovirt-engine service
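
To confirm the new value before restarting, you can read it back with the same
tool (a quick sanity check, not required):

- engine-config -g MacPoolRanges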

From version 4.1, the MAC pool range moved to the cluster level, and it is now
possible to create/edit/extend the MAC pool range for each cluster
separately via the UI:

- 'Clusters' > edit cluster > 'MAC Address Pool' sub tab >
add/extend/edit/remove ranges
- Or via 'Configure' it is possible to create MAC pool entities and then
assign them to the desired clusters (a rough REST sketch follows below).
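
For automation, the same thing can be done through the REST API. This is only a
sketch: the /macpools collection and the cluster's mac_pool element are part of
the v4 API, but treat the exact element layout, the engine address, credentials
and the POOL_ID/CLUSTER_ID values below as placeholders to verify against your
engine's own API documentation:

  # create a new MAC pool
  curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
    -d '<mac_pool><name>cluster1-pool</name><ranges><range><from>00:1a:4a:10:00:00</from><to>00:1a:4a:10:0f:ff</to></range></ranges></mac_pool>' \
    https://engine.example.com/ovirt-engine/api/macpools

  # assign it to a cluster
  curl -k -u admin@internal:PASSWORD -X PUT -H 'Content-Type: application/xml' \
    -d '<cluster><mac_pool id="POOL_ID"/></cluster>' \
    https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID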

Cheers)

On Sun, Jun 18, 2017 at 1:25 PM, Mahdi Adnan 
wrote:

> Hi,
>
>
> I ran into an issue where i have no more MAC in the MAC pool.
>
> I used the default MAC pool and now i want to create a new one for the
> Cluster.
>
> Is it possible to create new MAC pool for the cluster without affecting
> the VMs ?
>
>
> Appreciate your help.
>
>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Michael Burman
RedHat Israel, RHV-M Network QE

Mobile: 054-5355725
IRC: mburman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Changing MAC Pool

2017-06-18 Thread Mahdi Adnan
Hi,


I ran into an issue where I have no more MACs in the MAC pool.

I used the default MAC pool and now I want to create a new one for the cluster.

Is it possible to create a new MAC pool for the cluster without affecting the
VMs?


Appreciate your help.


--

Respectfully
Mahdi A. Mahdi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-06-18 Thread Mike DePaulo
On Thu, May 18, 2017 at 10:03 AM, Sachidananda URS  wrote:

> Hi,
>
> On Thu, May 18, 2017 at 7:08 PM, Sahina Bose  wrote:
>
>>
>>
>> On Thu, May 18, 2017 at 3:20 PM, Mike DePaulo 
>> wrote:
>>
>>> Well, I tried both of the following:
>>> 1. Having only a boot partition and a PV for the OS that does not take
>>> up the entire disk, and then specifying "sda" in Hosted Engine Setup.
>>> 2. Having not only a boot partition and a PV for the OS, but also an
>>> empty (and not formatted) /dev/sda3 PV that I created with fdisk.
>>> Then, specifying "sda3" in Hosted Engine Setup.
>>>
>>> Both attempts resulted in errors like this:
>>> failed: [centerpoint.ad.depaulo.org] (item=/dev/sda3) => {"failed":
>>> true, "failed_when_result": true, "item": "/dev/sda3", "msg": "
>>> Device /dev/sda3 not found (or ignored by filtering).\n", "rc": 5}
>>>
>>
>> Can you provide the gdeploy logs? I think it's at ~/.gdeploy/gdeploy.log
>>
>>
>>>
>>> It seems like having gluster bricks on the same disk as the OS doesn't
>>> work at all.
>>>
>>>
>
> Hi, /dev/sda3 should work; the error here is possibly due to a leftover
> filesystem signature.
>
> Can you please set wipefs=yes? For example
>
> [pv]
> action=create
> wipefs=yes
> devices=/dev/sda3
>
> -sac
>
>
Sorry for the long delay.

This worked. Thank you very much.

-Mike
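
For anyone hitting the same "Device ... not found (or ignored by filtering)"
error, the equivalent manual check outside gdeploy would be something along
these lines, run on the host and only if the partition holds nothing you need,
since -a wipes all signatures:

  wipefs /dev/sda3        # list any leftover filesystem/RAID signatures
  wipefs -a /dev/sda3     # remove them so pvcreate/gdeploy can proceed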
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-06-18 Thread Evgenia Tokar
Hi Gianluca,

So sorry for the late reply; we have tried reproducing this bug and figuring
out a solution.

As Sharon mentioned earlier, the graphics configuration for the VM is wrong
and unsupported.
We weren't able to reproduce this, and we don't have a solution for it.

Are you experiencing this issue in any other environment? If not, it might
be that something was misconfigured at some earlier stage and caused the
error now.

Thanks,
Jenny


On Fri, Jun 16, 2017 at 11:10 PM, Yaniv Kaul  wrote:

>
>
> On Fri, Jun 16, 2017 at 5:20 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Thu, Apr 27, 2017 at 11:25 AM, Evgenia Tokar 
>> wrote:
>>
>>> Hi,
>>>
>>> It looks like the graphical console fields are not editable for the hosted
>>> engine VM.
>>> We are trying to figure out how to solve this issue; it is not
>>> recommended to change DB values manually.
>>>
>>> Thanks,
>>> Jenny
>>>
>>>
>>> On Thu, Apr 27, 2017 at 10:49 AM, Gianluca Cecchi <
>>> gianluca.cec...@gmail.com> wrote:
>>>
 On Thu, Apr 27, 2017 at 9:46 AM, Gianluca Cecchi <
 gianluca.cec...@gmail.com> wrote:

>
>
> BTW: if I try to set the video type to Cirrus from the web admin GUI (and
> automatically the Graphics Protocol becomes "VNC"), I get this when I press
> the OK button:
>
> Error while executing action:
>
> HostedEngine:
>
>- There was an attempt to change Hosted Engine VM values that are
>locked.
>
> The same if I choose "VGA"
> Gianluca
>


 I verified that I already have in place this parameter:

 [root@ractorshe ~]# engine-config -g AllowEditingHostedEngine
 AllowEditingHostedEngine: true version: general
 [root@ractorshe ~]#


>>>
>> Hello, is there a solution for this problem?
>> I'm now in 4.1.2 but still not able to access the engine console.
>>
>
> I thought https://bugzilla.redhat.com/show_bug.cgi?id=1441570 was
> supposed to handle it...
> Can you share more information in the bug?
> Y.
>
>
>>
>> [root@ractor ~]# hosted-engine --add-console-password --password=pippo
>> no graphics devices configured
>> [root@ractor ~]#
>>
>> In web admin
>>
>> Graphics protocol: None  (while in edit vm screen it appears as "SPICE"
>> and still I can't modify it)
>> Video Type: QXL
>>
>> Any chance for the upcoming 4.1.3? Can I test it if there are new changes
>> related to this problem?
>>
>> the qemu-kvm command line for hosted engine is now this one:
>>
>> qemu  8761 1  0 May30 ?01:33:29 /usr/libexec/qemu-kvm
>> -name guest=c71,debug-threads=on -S -object secret,id=masterKey0,format=ra
>> w,file=/var/lib/libvirt/qemu/domain-3-c71/master-key.aes -machine
>> pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m
>> size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp
>> 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa
>> node,nodeid=0,cpus=0,mem=1024 -uuid 202e6f2e-f8a1-4e81-a079-c775e86a58d5
>> -smbios type=1,manufacturer=oVirt,product=oVirt
>> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0054-5910-
>> 8056-C4C04F30354A,uuid=202e6f2e-f8a1-4e81-a079-c775e86a58d5
>> -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/va
>> r/lib/libvirt/qemu/domain-3-c71/monitor.sock,server,nowait -mon
>> chardev=charmonitor,id=monitor,mode=control -rtc
>> base=2017-05-30T13:18:37,driftfix=slew -global
>> kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
>> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
>> -drive if=none,id=drive-ide0-1-0,readonly=on -device
>> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
>> file=/rhev/data-center/0001-0001-0001-0001-00ec/
>> 556abaa8-0fcc-4042-963b-f27db5e03837/images/7d5dd44f-
>> f5d1-4984-9e76-2b2f5e42a915/6d873dbd-c59d-4d6c-958f-
>> a4a389b94be5,format=raw,if=none,id=drive-virtio-disk0,
>> serial=7d5dd44f-f5d1-4984-9e76-2b2f5e42a915,cache=none,
>> werror=stop,rerror=stop,aio=threads -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virti
>> o-disk0,id=virtio-disk0,bootindex=1 -netdev
>> tap,fd=33,id=hostnet0,vhost=on,vhostfd=35 -device
>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x3
>> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/2
>> 02e6f2e-f8a1-4e81-a079-c775e86a58d5.com.redhat.rhevm.vdsm,server,nowait
>> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel
>> 0,id=channel0,name=com.redhat.rhevm.vdsm -chardev
>> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/2
>> 02e6f2e-f8a1-4e81-a079-c775e86a58d5.org.qemu.guest_agent.0,server,nowait
>> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel
>> 1,id=channel1,name=org.qemu.guest_agent.0 -chardev
>> spicevmc,id=charchannel2,name=vdagent -device
>> 

Re: [ovirt-users] HostedEngine VM not visible, but running

2017-06-18 Thread Evgenia Tokar
Hi,

What version are you running?

For the hosted engine VM to be imported and displayed in the engine, you
must first create a master storage domain.

What do you mean the hosted engine commands are failing? What happens when
you run hosted-engine --vm-status now?
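
As a sketch of what to collect on the host you deployed from (assuming the
hosted-engine HA services are installed there under their usual names):

  hosted-engine --vm-status
  systemctl status ovirt-ha-agent ovirt-ha-broker
  journalctl -u ovirt-ha-agent --since today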

Jenny Tokar


On Thu, Jun 15, 2017 at 6:32 PM, cmc  wrote:

> Hi,
>
> I've migrated from a bare-metal engine to a hosted engine. There were
> no errors during the install, however, the hosted engine did not get
> started. I tried running:
>
> hosted-engine --status
>
> on the host I deployed it on, and it returns nothing (exit code is 1
> however). I could not ping it either. So I tried starting it via
> 'hosted-engine --vm-start' and it returned:
>
> Virtual machine does not exist
>
> But it then became available. I logged into it successfully. It is not
> in the list of VMs however.
>
> Any ideas why the hosted-engine commands fail, and why it is not in
> the list of virtual machines?
>
> Thanks for any help,
>
> Cam
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVirt 4.1.2 - trim/discard on HDD/XFS/NFS contraproductive

2017-06-18 Thread Markus Stockhausen
> From: Yaniv Kaul [yk...@redhat.com]
> Sent: Sunday, 18 June 2017 09:58
> To: Markus Stockhausen
> Cc: Ovirt Users
> Subject: Re: [ovirt-users] OVirt 4.1.2 - trim/discard on HDD/XFS/NFS
> contraproductive

On Sat, Jun 17, 2017 at 1:25 AM, Markus Stockhausen 
> wrote:
Hi,

we just set up a new 4.1.2 oVirt cluster. It is a quite normal
HDD/XFS/NFS stack that worked quite well with 4.0 in the past.
Inside the VMs we use XFS too.

To our surprise we observe abysmally high IO during mkfs.xfs
and fstrim inside the VM. A simple example:

Step 1: Create 100G Thin disk
Result 1: Disk occupies ~10M on storage

Step 2: Format disk inside VM with mkfs.xfs
Result 2: Disk occupies 100G on storage

Changing the discard flag on the disk does not have any effect.

> Are you sure it's discarding, at all?
> 1. NFS: only NFSv4.2 supports discard. Is that the case in your setup?
> 2. What's the value of /sys/block//queue/discard_granularity ?
> 3. Can you share the mkfs.xfs command line?
> 4. Are you sure it's not a raw-sparse image?

The questions should be answered in BZ1462504. When talking about thin
provisioned disks I'm only referring to the oVirt disk option, so I might
be mixing something up here. Nevertheless, the following is more than
strange to me:

- Create disk image: File on storage is small
- Format inside VM: File on storage is fully allocated
- Move around in Ovirt to another NFS storage: File is small again.

That means:
- mkfs.xfs inside the VM, and therefore qemu, is hammering (empty) data into all blocks
- But this data must be zeros, as it can be compacted afterwards.
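
One quick way to see whether the image file is really allocated or only looks
big — a sketch run against the file on the NFS export, where /path/to/image is
a placeholder for whatever your storage domain uses:

  ls -lh /path/to/image        # apparent (virtual) size
  du -h /path/to/image         # blocks actually allocated on the filesystem
  qemu-img info /path/to/image # virtual size vs. disk size as qemu sees it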

Best regards.

Markus




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVirt 4.1.2 - trim/discard on HDD/XFS/NFS contraproductive

2017-06-18 Thread Idan Shaby
Right, but I just wanted to emphasize that disabling "Enable Discard" for
that disk will cause qemu to ignore these UNMAP commands and not pass them on
to the underlying storage.
So if you've got this flag disabled, there's no reason to use fstrim. It
makes sense to use it only when "Enable Discard" is enabled.


Regards,
Idan

On Sun, Jun 18, 2017 at 11:13 AM, Fabrice Bacchella <
fabrice.bacche...@orange.fr> wrote:

>
> > On 18 June 2017 at 08:00, Idan Shaby wrote:
> > If you don't need live discarding, shutdown the VM and disable the
> "Enable Discard" option. That will cause qemu to ignore the live UNMAP SCSI
> commands coming from the guest and not pass it on to the underlying storage.
> > Note that this makes fstrim completely redundant, as the purpose of the
> command is to discard unused blocks under the given path.
>
> Redundant ? Useless you mean ? From my comprehension, the purpose to
> fstrim is to send UNMAP SCSI on batch instead of mount -o discard that send
> them synchronously.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVirt 4.1.2 - trim/discard on HDD/XFS/NFS contraproductive

2017-06-18 Thread Markus Stockhausen
Thanks for all your feedback.

I'm trying to collect all the info in BZ1462504.


From: Fabrice Bacchella [fabrice.bacche...@orange.fr]
Sent: Sunday, 18 June 2017 10:13
To: Idan Shaby
Cc: Markus Stockhausen; Ovirt Users
Subject: Re: [ovirt-users] OVirt 4.1.2 - trim/discard on HDD/XFS/NFS
contraproductive

> On 18 June 2017 at 08:00, Idan Shaby wrote:
> If you don't need live discarding, shutdown the VM and disable the "Enable 
> Discard" option. That will cause qemu to ignore the live UNMAP SCSI commands 
> coming from the guest and not pass it on to the underlying storage.
> Note that this makes fstrim completely redundant, as the purpose of the 
> command is to discard unused blocks under the given path.

Redundant? Useless, you mean? From my understanding, the purpose of fstrim is
to send UNMAP SCSI commands in a batch, instead of mount -o discard, which
sends them synchronously.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVirt 4.1.2 - trim/discard on HDD/XFS/NFS contraproductive

2017-06-18 Thread Fabrice Bacchella

> On 18 June 2017 at 08:00, Idan Shaby wrote:
> If you don't need live discarding, shutdown the VM and disable the "Enable 
> Discard" option. That will cause qemu to ignore the live UNMAP SCSI commands 
> coming from the guest and not pass it on to the underlying storage.
> Note that this makes fstrim completely redundant, as the purpose of the 
> command is to discard unused blocks under the given path.

Redundant? Useless, you mean? From my understanding, the purpose of fstrim is
to send UNMAP SCSI commands in a batch, instead of mount -o discard, which
sends them synchronously.
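
To illustrate the two approaches (a sketch with a placeholder device and mount
point, not specific to this setup): mounting with the discard option issues
UNMAP as blocks are freed, while fstrim walks the filesystem and discards free
space in one batch — many distributions ship an fstrim.timer unit for exactly
that:

  mount -o discard /dev/vdb1 /data     # online/continuous discard
  fstrim -v /data                      # one-off batched discard
  systemctl enable --now fstrim.timer  # periodic batched discard (if the unit exists)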
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine VM and services not working

2017-06-18 Thread Yaniv Kaul
On Sat, Jun 17, 2017 at 12:50 AM,  wrote:

> If I reinstall and then rerun the hosted-engine setup, how do I get the VMs
> in their current running state back into, and recognised by, the new
> hosted engine?
>

The current running state is again quite challenging. You'll need to fix the
hosted engine.

Can you import the storage domain? (not for running VMs)
Y.


> Kind regards
>
> Andrew
>
> On 17 Jun 2017, at 6:54 AM, Yaniv Kaul  wrote:
>
>
>
> On Fri, Jun 16, 2017 at 9:11 AM, Andrew Dent 
> wrote:
>
>> Hi
>>
>> Well I've got myself into a fine mess.
>>
>> host01 was setup with hosted-engine v4.1. This was successful.
>> Imported 3 VMs from a v3.6 OVirt AIO instance. (This OVirt 3.6 is still
>> running with more VMs on it)
>> Tried to add host02 to the new Ovirt 4.1 setup. This partially succeeded
>> but I couldn't add any storage domains to it. Cannot remember why.
>> In Ovirt engine UI I removed host02.
>> I reinstalled host02 with Centos7, tried to add it and Ovirt UI told me
>> it was already there (but it wasn't listed in the UI).
>> Renamed the reinstalled host02 to host03, changed the ipaddress, reconfig
>> the DNS server and added host03 into the Ovirt Engine UI.
>> All good, and I was able to import more VMs to it.
>> I was also able to shutdown a VM on host01 assign it to host03 and start
>> the VM. Cool, everything working.
>> The above was all last couple of weeks.
>>
>> This week I performed some yum updates on the Engine VM. No reboot.
>> Today I noticed that the oVirt services in the Engine VM were in an endless
>> restart loop. They would be up for 5 minutes and then die.
>> Looking into /var/log/ovirt-engine/engine.log, I could only see
>> errors relating to host02. oVirt was trying to find it, failing, and then
>> falling over.
>> I ran "hosted-engine --clean-metadata" thinking it would cleanup and
>> remove bad references to hosts, but now realise that was a really bad idea
>> as it didn't do what I'd hoped.
>> At this point the sequence below worked, I could login to Ovirt UI but
>> after 5 minutes the services would be off
>> service ovirt-engine restart
>> service ovirt-websocket-proxy restart
>> service httpd restart
>>
>> I saw some reference to having to remove hosts from the database by hand
>> in situations where, under the hood of oVirt, a decommissioned host was still
>> listed but wasn't showing in the GUI.
>> So I removed reference to host02 (vds_id and host_id) in the following
>> tables in this order.
>> vds_dynamic
>> vds_statistics
>> vds_static
>> host_device
>>
>> Now when I try to start ovirt-websocket it will not start
>> service ovirt-websocket start
>> Redirecting to /bin/systemctl start  ovirt-websocket.service
>> Failed to start ovirt-websocket.service: Unit not found.
>>
>> I'm now thinking that I need to do the following in the engine VM
>>
>> # engine-cleanup
>> # yum remove ovirt-engine
>> # yum install ovirt-engine
>> # engine-setup
>>
>> But to run engine-cleanup I need to put the engine-vm into maintenance
>> mode and because of the --clean-metadata that I ran earlier on host01 I
>> cannot do that.
>>
>> What is the best course of action from here?
>>
>
> To be honest, with all the steps taken above, I'd install everything
> (including OS) from scratch...
> There's a bit too much mess to try to clean up properly here.
> Y.
>
>
>>
>> Cheers
>>
>>
>> Andrew
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVirt 4.1.2 - trim/discard on HDD/XFS/NFS contraproductive

2017-06-18 Thread Yaniv Kaul
On Sat, Jun 17, 2017 at 1:25 AM, Markus Stockhausen  wrote:

> Hi,
>
> we just set up a new 4.1.2 OVirt cluster. It is a quite normal
> HDD/XFS/NFS stack that worked quit well with 4.0 in the past.
> Inside the VMs we use XFS too.
>
> To our surprise we observe abysmal high IO during mkfs.xfs
> and fstrim inside the VM. A simple example:
>
> Step 1: Create 100G Thin disk
> Result 1: Disk occupies ~10M on storage
>
> Step 2: Format disk inside VM with mkfs.xfs
> Result 2: Disk occupies 100G on storage
>
> Changing the discard flag on the disk does not have any effect.
>

Are you sure it's discarding, at all?
1. NFS: only NFSv4.2 supports discard. Is that the case in your setup?
2. What's the value of /sys/block//queue/discard_granularity ?
3. Can you share the mkfs.xfs command line?
4. Are you sure it's not a raw-sparse image?
Y.
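
Regarding question 1 above, a quick way to confirm the negotiated NFS version
on the host (a sketch; the mount point names are whatever your setup uses):

  nfsstat -m           # shows vers=... per NFS mount
  mount | grep nfs     # alternatively, look for vers=4.2 in the mount options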


> Am I missing something?
>
> Best regards.
>
> Markus
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt adding routes

2017-06-18 Thread Edward Haas
Hi Alan,

The oVirt host agent (VDSM) has a multi-gateway/source-routing feature which
allows gateways to be specified per network, in addition to the host-level
routes.
By default, and depending on which version you use, only the ovirtmgmt
(management) network defines the host default route, but all networks
(including ovirtmgmt) have a gateway definition which is set on a
per-network basis.

Network-based routes are defined using separate routing tables and rules.
For more information, please see the feature page:
http://www.ovirt.org/develop/release-management/features/network/multiple-gateways
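
To see what the feature actually installed on the host, the per-network tables
can be inspected with standard iproute2 commands — a sketch using the table
number from your output (table ids and network names are specific to each
host):

  ip rule show                    # rules pointing at the per-network tables
  ip route show table 2886865805  # routes VDSM added for that network
  ip route show                   # the main table, where the host default route should live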

It should not block or interfere with your traffic, unless you are doing
something unexpected, like having a default route that collides with the one
defined through the ovirtmgmt network.

Thanks,
Edy.


On Tue, Jun 13, 2017 at 6:28 PM, Alan Griffiths 
wrote:

> Hi,
>
> When installing an ovirt host I got these routes automatically added
>
> default dev ovirtmgmt  table 2886865805  scope link
> 172.18.19.128/26 via 172.18.19.141 dev ovirtmgmt  table 2886865805
>
> What is their intended purpose? It seems to be stopping packets from being
> correctly routed to the local gateway.
>
> Thanks,
>
> Alan
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Recognizing Subinterfaces on oVirt Host

2017-06-18 Thread Edward Haas
Hello,

The sub-interface you are referring to is just an alias for adding a
secondary IP to the same interface.
In networking terms, it is the same network with multiple subnets (some can
even overlap) and there is no separation between them.
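
For illustration, an em1:1 style alias is equivalent to adding a labelled
secondary address with iproute2 (the address below is just a placeholder):

  ip addr add 192.0.2.10/24 dev em1 label em1:1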

oVirt networks are considered layer 2; as such, only sub-interfaces that
reflect that are supported: VLANs.

I am not sure if an interface named em1:1 collides in some way with the root
interface em1; we have never tested such a scenario in oVirt.
But assuming it does not, and em1:1 is just another interface that the
oVirt host ignores, you can attach the VMs to the em1 interface and do
whatever you like with em1:1.

Another path is to compose your own handling of the scenario you need using
the oVirt hooks. See:
http://www.ovirt.org/develop/developer-guide/vdsm/hooks
Here is one specific for defining secondary addresses:
https://github.com/oVirt/vdsm/tree/master/vdsm_hooks/extra_ipv4_addrs

Thanks,
Edy.

On Sat, Jun 17, 2017 at 12:14 AM, Adam Mills  wrote:

> Hey Team!
>
> We are trying to nest some of our existing technology into the oVirt host
> as to not have to reinvent tooling, etc. Our proposal is to have a
> sub-interface on the 10G nic and place the VMs in that network. The network
> will be advertised to the Top of Rack switch via BGP.
>
> My current issue is that the oVirt web interface does not recognize the
> existence of an em1:1 interface of network. Given the above parameters, is
> there another way to accomplish what we are trying to do?
>
> Thanks in advance!
>
> Please refer to MS Paint style Visio for a visual
>
> [image: Inline image 1]
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVirt 4.1.2 - trim/discard on HDD/XFS/NFS contraproductive

2017-06-18 Thread Nir Soffer
On Sun, Jun 18, 2017 at 9:01 AM, Idan Shaby wrote:

> Hi Markus,
>
> AFAIK, mkfs.xfs tries to discard all the blocks before formatting the
> device.
> If you don't want it to do that, you can use the "-K Do not attempt to
> discard blocks at mkfs time" option of mkfs.xfs.
>
> In oVirt 4.1 we introduced the "Enable Discard" flag for a virtual
> machine's disk.
> When enabled, qemu is configured to pass on live UNMAP SCSI commands from
> the guest to the underlying storage.
> If you don't need live discarding, shutdown the VM and disable the "Enable
> Discard" option. That will cause qemu to ignore the live UNMAP SCSI
> commands coming from the guest and not pass it on to the underlying storage.
> Note that this makes fstrim completely redundant, as the purpose of the
> command is to discard unused blocks under the given path.
>

I think we need a bug for this, both for documenting this issue and for
investigating why discarding unused blocks allocates and zeroes all blocks.
This behaviour is unhelpful.

Markus, can you check if performing the same discard from the host leads to
the same result?



>
> Regards,
> Idan
>
> On Sat, Jun 17, 2017 at 1:25 AM, Markus Stockhausen <
> stockhau...@collogia.de> wrote:
>
>> Hi,
>>
>> we just set up a new 4.1.2 OVirt cluster. It is a quite normal
>> HDD/XFS/NFS stack that worked quit well with 4.0 in the past.
>> Inside the VMs we use XFS too.
>>
>> To our surprise we observe abysmal high IO during mkfs.xfs
>> and fstrim inside the VM. A simple example:
>>
>> Step 1: Create 100G Thin disk
>> Result 1: Disk occupies ~10M on storage
>>
>> Step 2: Format disk inside VM with mkfs.xfs
>> Result 2: Disk occupies 100G on storage
>>
>> Changing the discard flag on the disk does not have any effect.
>>
>> Am I missing something?
>>
>> Best regards.
>>
>> Markus
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVirt 4.1.2 - trim/discard on HDD/XFS/NFS contraproductive

2017-06-18 Thread Idan Shaby
Hi Markus,

AFAIK, mkfs.xfs tries to discard all the blocks before formatting the
device.
If you don't want it to do that, you can use the -K option of mkfs.xfs
("Do not attempt to discard blocks at mkfs time").
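
As a quick sketch of both knobs mentioned in this thread (the device and mount
point are placeholders):

  mkfs.xfs -K /dev/vdb1    # format without discarding blocks first
  fstrim -v /mnt/data      # later, discard unused blocks in one batch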

In oVirt 4.1 we introduced the "Enable Discard" flag for a virtual
machine's disk.
When enabled, qemu is configured to pass on live UNMAP SCSI commands from
the guest to the underlying storage.
If you don't need live discarding, shut down the VM and disable the "Enable
Discard" option. That will cause qemu to ignore the live UNMAP SCSI
commands coming from the guest and not pass them on to the underlying storage.
Note that this makes fstrim completely redundant, as the purpose of that
command is to discard unused blocks under the given path.


Regards,
Idan

On Sat, Jun 17, 2017 at 1:25 AM, Markus Stockhausen  wrote:

> Hi,
>
> we just set up a new 4.1.2 OVirt cluster. It is a quite normal
> HDD/XFS/NFS stack that worked quit well with 4.0 in the past.
> Inside the VMs we use XFS too.
>
> To our surprise we observe abysmal high IO during mkfs.xfs
> and fstrim inside the VM. A simple example:
>
> Step 1: Create 100G Thin disk
> Result 1: Disk occupies ~10M on storage
>
> Step 2: Format disk inside VM with mkfs.xfs
> Result 2: Disk occupies 100G on storage
>
> Changing the discard flag on the disk does not have any effect.
>
> Am I missing something?
>
> Best regards.
>
> Markus
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users