Re: [ovirt-users] [Users] Post-Install Engine VM Changes Feasible?

2014-05-14 Thread Giuseppe Ragusa
Hi all,
sorry for the late reply.

I noticed that I missed the deviceId property on my additional-nic line below,
but I can confirm that the engine VM (installed with my previously modified
template in /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in, as
outlined below) is still up and running, apparently fine, without it (I verified
that the deviceId property has not been added automatically to
/etc/ovirt-hosted-engine/vm.conf).
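
A quick way to repeat that check on a live setup (a sketch; the path is the one
mentioned above):

# list any deviceId properties in the live hosted-engine VM definition
grep -n deviceId /etc/ovirt-hosted-engine/vm.conf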

I admit that modifying a package file not marked as configuration (under
/usr/share... may the FHS forgive me... :) is not best practice, but modifying
the configuration file (under /etc...) afterwards seemed more error prone, since
the change must then be propagated to every further node.

In order to have a clear picture of the matter (and to write/add to a wiki page
on engine VM customization), I'd like to read more on the syntax of these
vm.conf files (they are neither libvirt XML files nor OTOPI files) and learn
which properties are default, which are required, and so on.

By simple analogy, as an example, I thought that a unique index property would
be needed when adding a NIC (as for the IDE/virtio disk devices), but Andrew's
example does not add one...
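
For reference, a NIC line carrying both of the properties discussed here (index
and deviceId) would look something like the following, modelled on the device
lines quoted later in this thread (the @NIC2_UUID@ placeholder is mine, not
from any shipped template):

devices={index:8,nicModel:pv,macAddr:02:16:3e:4f:c4:b0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC2_UUID@,address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}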

Any pointers to doc/code for further enlightenment?

Many thanks in advance,
Giuseppe

> Date: Thu, 10 Apr 2014 08:40:25 +0200
> From: sbona...@redhat.com
> To: and...@andrewklau.com
> CC: giuseppe.rag...@hotmail.com; j...@wrale.com; users@ovirt.org
> Subject: Re: [Users] Post-Install Engine VM Changes Feasible?
> [...]

Re: [ovirt-users] [Users] Post-Install Engine VM Changes Feasible?

2014-04-09 Thread Andrew Lau
On Tue, Apr 8, 2014 at 8:52 PM, Andrew Lau and...@andrewklau.com wrote:
> [...]
> Are there any good practices for getting the MAC address so it won't
> clash with the ones VDSM would generate? I assume the same applies
> for the deviceId?
> Did you also change the slot?


This worked successfully:

yum -y install python-virtinst

# generate uuid and mac address
echo 'import virtinst.util ; print virtinst.util.uuidToString(virtinst.util.randomUUID())' | python
# (the second one-liner is cut off in the archive; presumably the MAC twin,
# using virtinst.util's MAC generator -- my reconstruction, not the original)
echo 'import virtinst.util ; print virtinst.util.randomMAC("xen")' | python
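
A dependency-free alternative sketch using only the Python standard library
(the 02:16:3e prefix mirrors the locally-administered MAC Giuseppe uses later
in the thread; choosing it here is my assumption, not part of the original
mail):

# print a random UUID and a locally-administered MAC
python -c 'import uuid, random; print(uuid.uuid4()); print("02:16:3e:%02x:%02x:%02x" % tuple(random.randint(0, 255) for _ in range(3)))'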

Re: [Users] Post-Install Engine VM Changes Feasible?

2014-04-08 Thread Andrew Lau
On Mon, Mar 17, 2014 at 8:01 PM, Sandro Bonazzola sbona...@redhat.com wrote:
> [...]

So would you simply add a new line under the original devices line? I.e.:
devices={nicModel:pv,macAddr:00:16:3e:6d:34:78,linkActive:true,network:ovirtmgmt,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:0c8a1710-casd-407a-94e8-5b09e55fa141,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}

Are there any good practices for getting the MAC address so it won't
clash with the ones VDSM would generate? I assume the same applies
for the deviceId?
Did you also change the slot?
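
For the deviceId part, any freshly generated UUID appears to work (per
Giuseppe's later observation that nothing adds one automatically); a minimal
sketch with standard tools, assuming uuidgen from util-linux and a bash shell:

# a UUID usable as a deviceId
uuidgen
# a MAC from the locally-administered 02:xx range (octets are random placeholders)
printf '02:16:3e:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))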



Re: [Users] Post-Install Engine VM Changes Feasible?

2014-03-17 Thread Sandro Bonazzola
On 15/03/2014 12:44, Giuseppe Ragusa wrote:
> [...]

Note that you should also be able to edit /etc/ovirt-hosted-engine/vm.conf 
after setup:
- put the system in global maintenance
- edit the vm.conf file on all the hosts running the hosted engine
- shut down the VM: hosted-engine --vm-shutdown
- start the VM again: hosted-engine --vm-start
- exit global maintenance
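
As a command sequence, that procedure would look roughly like this (a sketch
assuming the 3.4-era hosted-engine CLI flags):

# on one host: enter global maintenance so the HA agents leave the VM alone
hosted-engine --set-maintenance --mode=global
# edit /etc/ovirt-hosted-engine/vm.conf on every hosted-engine host, then:
hosted-engine --vm-shutdown
hosted-engine --vm-start
# finally, leave global maintenance
hosted-engine --set-maintenance --mode=none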

Giuseppe, Joshua: can you share your changes in a guide for Hosted engine users 
on ovirt.org wiki?



 

[Users] Post-Install Engine VM Changes Feasible?

2014-03-15 Thread Joshua Dotson
Hi,

I'm in the process of installing 3.4 RC(2?) on Fedora 19.  I'm using hosted
engine, introspective GlusterFS+keepalived+NFS à la [1], across six nodes.

I have a layered networking topology ((V)LANs for public, internal,
storage, compute and ipmi).  I am comfortable doing the bridging for each
interface myself via /etc/sysconfig/network-scripts/ifcfg-*.
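
For instance, one such bridge might be defined like this (a sketch; device
names and addresses are placeholders):

# /etc/sysconfig/network-scripts/ifcfg-lan
DEVICE=lan
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
DELAY=0

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BRIDGE=lan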

Here's my desired topology:
http://www.asciiflow.com/#Draw6325992559863447154

Here's my keepalived setup:
https://gist.github.com/josh-at-knoesis/98618a16418101225726
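
The gist has the full configuration; the heart of such a keepalived setup is a
VRRP instance along these lines (a sketch; all values are placeholders):

vrrp_instance NFS_VIP {
    state BACKUP
    interface lan
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.1.100
    }
}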

I'm writing a lot of documentation of the many steps I'm taking.  I hope to
eventually release a distributed introspective all-in-one (including
distributed storage) guide.

Looking at vm.conf.in, it looks like I'd by default end up with one
interface on my engine, probably on my internal VLAN, as that's where I'd
like the control traffic to flow.  I definitely could do NAT, but I'd be
most happy to see the engine have a presence on all of the LANs, if for no
other reason than because I want to send backups directly over the storage
VLAN.

I'll cut to it:  I believe I could successfully alter the vdsm template (
vm.conf.in) to give me the extra interfaces I require.  It hit me, however,
that I could just take the defaults for the initial install.  Later, I
think I'll be able to come back with virsh and make my changes to the
gracefully disabled VM.  Is this true?

[1] http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/

Thanks,
Joshua


Re: [Users] Post-Install Engine VM Changes Feasible?

2014-03-15 Thread Giuseppe Ragusa
Hi Joshua,

> Date: Sat, 15 Mar 2014 02:32:59 -0400
> From: j...@wrale.com
> To: users@ovirt.org
> Subject: [Users] Post-Install Engine VM Changes Feasible?
> [...]



I started from the same reference[1] and ended up statically modifying 
vm.conf.in before launching setup, like this:

cp -a /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in.orig
cat << EOM > /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in
vmId=@VM_UUID@
memSize=@MEM_SIZE@
display=@CONSOLE_TYPE@
devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
devices={device:scsi,model:virtio-scsi,type:controller}
devices={index:4,nicModel:pv,macAddr:@MAC_ADDR@,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={index:8,nicModel:pv,macAddr:02:16:3e:4f:c4:b0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
vmName=@NAME@
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
smp=@VCPUS@
cpuType=@CPU_TYPE@
emulatedMachine=@EMULATED_MACHINE@
EOM
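
A quick way to review the customization against the pristine copy saved above:

diff -u /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in.orig /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in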

I simply added a second NIC (with a fixed MAC address from the
locally-administered pool, since I didn't know how to auto-generate one) and
added an index for NICs too (mimicking the storage devices setup already
present).

My network setup is much simpler than yours: the ovirtmgmt bridge is on an
isolated oVirt-management-only network without a gateway; my actual LAN, with
gateway and Internet access (for package updates/installation), is connected to
the lan bridge; and the SAN/migration LAN is a further (not bridged) 10 Gb/s
isolated network for which I do not expect to need Engine/VM reachability (so
no third interface for the Engine), since all actions should be performed from
the Engine only through the vdsm hosts (I use a split-DNS setup by means of
carefully crafted hosts files on the Engine and the vdsm hosts).
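
The split-DNS idea, expressed as hosts files, might look like this (a sketch;
names and addresses are placeholders):

# on each vdsm host: the Engine resolves to its management-network address
192.168.200.2    engine.example.com engine
# on the Engine VM: the vdsm hosts resolve to their management-network addresses
192.168.200.11   node1.example.com node1
192.168.200.12   node2.example.com node2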

I can confirm that the engine vm gets created as expected and that network 
connectivity works.

Unfortunately I cannot validate the whole design yet, since I'm still debugging 
HA-agent problems that prevent a reliable Engine/SD startup.

Hope it helps.

Greetings,
Giuseppe

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users