Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Sébastien Han
Hum, I don't see the problem; it's possible to load-balance VIPs with LVS,
they are just IPs... Can I see your conf?

--
Regards,
Sébastien Han.


On Thu, Feb 14, 2013 at 8:34 PM, Samuel Winchenbach swinc...@gmail.com wrote:

 Well, I think I will have to go with one IP per service and forget about
 load balancing.  It seems as though with LVS, routing requests internally
 through the VIP is difficult (impossible?), at least with LVS-DR.  It seems
 like a shame not to be able to distribute the work among the controller
 nodes.


 On Thu, Feb 14, 2013 at 9:50 AM, Samuel Winchenbach swinc...@gmail.com wrote:

 Hi Sébastien,

 I have two hosts with public interfaces and a number (~8) of compute nodes
 behind them.   I am trying to set the two public nodes up for HA and load
 balancing, and I plan to run all the OpenStack services on these two nodes in
 Active/Active where possible.   I currently have MySQL and RabbitMQ set up
 in Pacemaker with a DRBD backend.

 That is a quick summary.   If there is anything else I can answer about
 my setup please let me know.

 Thanks,
 Sam


 On Thu, Feb 14, 2013 at 9:26 AM, Sébastien Han han.sebast...@gmail.com wrote:

 Well, I don't know your setup (whether you use a load balancer for the API
 services or an active/passive Pacemaker), but in the end it's not that many
 IPs, I guess. I dare say that Keepalived sounds outdated to me...

 If you use Pacemaker and want to have the same IP for all the resources,
 simply create a resource group with all the OpenStack services inside it
 (it's ugly, but if it's what you want :)). Give me more info about your
 setup and we can go further in the discussion :).
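
 (For reference, a minimal crm shell sketch of such a group; the resource
 names and the address are placeholders, and the individual service
 primitives are assumed to be defined already:

   crm configure primitive p_ip ocf:heartbeat:IPaddr2 \
       params ip="192.168.42.103" cidr_netmask="24" op monitor interval="30s"
   crm configure group g_openstack p_ip p_keystone p_glance-api p_nova-api

 A group is both ordered and colocated, so every service in it follows the
 VIP onto a single node; that is why this gives you HA but not load
 balancing.)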

 --
 Regards,
 Sébastien Han.


 On Thu, Feb 14, 2013 at 3:15 PM, Samuel Winchenbach 
 swinc...@gmail.com wrote:

 The only real problem is that it would consume a lot of IP addresses
 when exposing the public interfaces.   I _think_ I may have the solution in
 your blog actually:
 http://www.sebastien-han.fr/blog/2012/10/19/highly-available-lvs/
 and
 http://clusterlabs.org/wiki/Using_ldirectord

 I am trying to weigh the pros and cons of this method vs
 keepalived/haproxy and just biting the bullet and using one IP per service.


 On Thu, Feb 14, 2013 at 4:17 AM, Sébastien Han han.sebast...@gmail.com
  wrote:

 What's the problem with having one IP per service pool?

 --
 Regards,
 Sébastien Han.


 On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach swinc...@gmail.com wrote:

 What if the VIP is created on a different host than the one keystone is
 started on?   It seems like you either need to set net.ipv4.ip_nonlocal_bind = 1
 or create a colocation in pacemaker (which would either require all
 services to be on the same host, or have an IP per service).
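
 (A hedged aside on the first option: ip_nonlocal_bind lets a daemon bind an
 address that is not yet configured locally, so keystone can listen on the
 VIP no matter where Pacemaker puts it. On Ubuntu that would look something
 like:

   # /etc/sysctl.conf
   net.ipv4.ip_nonlocal_bind = 1

 applied with "sysctl -p".)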




 On Wed, Feb 13, 2013 at 2:28 PM, Razique Mahroua razique.mahr...@gmail.com wrote:

 There we go
 https://review.openstack.org/#/c/21581/

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 On 13 Feb 2013, at 20:15, Razique Mahroua razique.mahr...@gmail.com
 wrote:

 I'm currently updating that part of the documentation - indeed it
 states that two IPs are used, but in fact, you end up with only one VIP 
 for
 the API service.
 I'll send the patch tonight

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 On 13 Feb 2013, at 20:05, Samuel Winchenbach swinc...@gmail.com
 wrote:

 In that documentation it looks like each OpenStack service gets its
 own IP (keystone is being assigned 192.168.42.103 and glance is getting
 192.168.42.104).

 I might be missing something too, because in the section titled
 "Configure the VIP" it creates a primitive called p_api-ip (or p_ip_api if
 you read the text above it), and then in "Adding Keystone resource to
 Pacemaker" it creates a group with p_ip_keystone???


 Stranger yet, "Configuring OpenStack Services to use High Available
 Glance API" says:  "For Nova, for example, if your Glance API
 service IP address is 192.168.42.104 as in the configuration explained
 here, you would use the following line in your nova.conf file:
 glance_api_servers = 192.168.42.103".  But in the step before, it set
 "registry_host = 192.168.42.104"?

 So I am not sure which IP you would connect to here...

 Sam



 On Wed, Feb 13, 2013 at 1:29 PM, JuanFra Rodriguez Cardoso 
 juanfra.rodriguez.card...@gmail.com wrote:

 Hi Samuel:

 Yes, it's possible with pacemaker. Look at
 http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html.

 Regards,
 JuanFra


 2013/2/13 Samuel Winchenbach swinc...@gmail.com

  Hi All,

 I currently have a HA OpenStack cluster running where the
 OpenStack services are kept alive with a combination of haproxy and
 keepalived.

 Is it possible to configure pacemaker so that all the OpenStack
 services are served by the same IP?  With keepalived I have a virtual IP
 that can move from server to server, and haproxy sends the request to a
 machine that has a live service.   This 

Re: [Openstack] Ability to view Ubuntu boot process in VNC console

2013-02-15 Thread JuanFra Rodriguez Cardoso
Hi Andrii:

In this post we talk about it.
http://openstack.markmail.org/thread/tqza3vv4ap4out2q

Regards!
-- 
JuanFra


2013/2/14 Andrii Loshkovskyi loshkovs...@gmail.com

 Hello,

 I tried setting the value with/without brackets and encountered the
 following error:

 Invalid output terminal ttyS0

 As far as I know I can check the kernel boot parameters this way:

 cat /proc/cmdline
 root=/dev/vda console=ttyS0 selinux=0

 The line above is the same every time, even after I apply new changes
 to the GRUB config.
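
 (A hedged aside: GRUB_TERMINAL takes a terminal type such as "console" or
 "serial", not a tty device name, which would explain the "Invalid output
 terminal ttyS0" error. The usual serial-console settings in
 /etc/default/grub on Ubuntu 12.04 look something like:

   GRUB_CMDLINE_LINUX="console=tty1 console=ttyS0,115200n8"
   GRUB_TERMINAL=serial
   GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"

 followed by update-grub. Note this only matters if the VM boots through
 GRUB at all; if Nova/libvirt supplies the kernel directly, the guest's
 GRUB config is bypassed.)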

 On Thu, Feb 14, 2013 at 4:37 PM, Joe Breu joseph.b...@rackspace.com wrote:

  Hello Andrii,

  Can you try setting GRUB_TERMINAL to ttyS0, update the grub config, and
 boot the image?

   ---
 Joseph Breu
 Deployment Engineer
 Rackspace Private Cloud
 210-312-3508

  On Feb 14, 2013, at 8:11 AM, Andrii Loshkovskyi wrote:

 Currently, when I boot up an Ubuntu virtual machine, the boot process
 messages do not show up in the VNC console. Everything I see in the
 console is "iPXE Booting from ROM...", a black screen, and a login prompt at
 the end. I am using OpenStack Essex and my VM's image was built from Ubuntu
 Server 12.04 LTS. I tried editing the GRUB config file /etc/default/grub
 and updating the configuration with update-grub afterwards. In particular, I
 made sure there is no "quiet" option in the Linux command line parameters,
 GRUB_TERMINAL=console is uncommented, etc.

 The problem is it looks like Nova does not use this GRUB config at all.
 Any changes applied to the GRUB config are not visible in the boot process.
 I studied documentation and googled a lot trying to understand how the VM
 boot process works in OpenStack, but with no success.

 I need your help on this issue. I would appreciate it if someone could share
 advice on how to view the boot messages while booting up a VM, or point me to
 the proper documentation. Let me know if you need more details on my
 configuration, as it looks to be a rather general issue.

  Thank you for your help.
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





 --
 Kind regards,
 Andrii Loshkovskyi

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Chathura M. Sarathchandra Magurawalage
Hello Anil,

I cannot SSH into the VM, so I can't run ifconfig from the VM.

I am using Quantum, with quantum-plugin-openvswitch-agent,
quantum-dhcp-agent and quantum-l3-agent, as described in the guide.

Thanks.

-
Chathura Madhusanka Sarathchandra Magurawalage.
1NW.2.1, Desk 2
School of Computer Science and Electronic Engineering
University Of Essex
United Kingdom.

Email: csar...@essex.ac.uk
  chathura.sarathchan...@gmail.com
  77.chath...@gmail.com


On 15 February 2013 07:34, Anil Vishnoi vishnoia...@gmail.com wrote:

 Did your VM get an IP address? Can you paste the output of ifconfig from
 your VM? Are you using nova-network or Quantum? If Quantum, which plugin
 are you using?


  On Fri, Feb 15, 2013 at 4:28 AM, Chathura M. Sarathchandra Magurawalage 
 77.chath...@gmail.com wrote:

  Hello,

 I followed the folsom basic install instructions in
 http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html

 But now I am not able to ping either the private or the floating ip of
 the instances.

 Can someone please help?

 Instance log:

 [0.00] Initializing cgroup subsys cpuset
 [0.00] Initializing cgroup subsys cpu
 [0.00] Linux version 3.2.0-37-virtual (buildd@allspice) (gcc version 
 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan 24 15:48:03 
 UTC 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
 [0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-virtual 
 root=LABEL=cloudimg-rootfs ro console=ttyS0
 [0.00] KERNEL supported cpus:
 [0.00]   Intel GenuineIntel
 [0.00]   AMD AuthenticAMD
 [0.00]   Centaur CentaurHauls
 [0.00] BIOS-provided physical RAM map:
 [0.00]  BIOS-e820:  - 0009bc00 (usable)
 [0.00]  BIOS-e820: 0009bc00 - 000a (reserved)
 [0.00]  BIOS-e820: 000f - 0010 (reserved)
 [0.00]  BIOS-e820: 0010 - 7fffd000 (usable)
 [0.00]  BIOS-e820: 7fffd000 - 8000 (reserved)
 [0.00]  BIOS-e820: feffc000 - ff00 (reserved)
 [0.00]  BIOS-e820: fffc - 0001 (reserved)
 [0.00] NX (Execute Disable) protection: active
 [0.00] DMI 2.4 present.
 [0.00] No AGP bridge found
 [0.00] last_pfn = 0x7fffd max_arch_pfn = 0x4
 [0.00] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
 [0.00] found SMP MP-table at [880fdae0] fdae0
 [0.00] init_memory_mapping: -7fffd000
 [0.00] RAMDISK: 3776c000 - 37bae000
 [0.00] ACPI: RSDP 000fd980 00014 (v00 BOCHS )
 [0.00] ACPI: RSDT 7fffd7b0 00034 (v01 BOCHS  BXPCRSDT 
 0001 BXPC 0001)
 [0.00] ACPI: FACP 7f80 00074 (v01 BOCHS  BXPCFACP 
 0001 BXPC 0001)
 [0.00] ACPI: DSDT 7fffd9b0 02589 (v01   BXPC   BXDSDT 
 0001 INTL 20100528)
 [0.00] ACPI: FACS 7f40 00040
 [0.00] ACPI: SSDT 7fffd910 0009E (v01 BOCHS  BXPCSSDT 
 0001 BXPC 0001)
 [0.00] ACPI: APIC 7fffd830 00072 (v01 BOCHS  BXPCAPIC 
 0001 BXPC 0001)
 [0.00] ACPI: HPET 7fffd7f0 00038 (v01 BOCHS  BXPCHPET 
 0001 BXPC 0001)
 [0.00] No NUMA configuration found
 [0.00] Faking a node at -7fffd000
 [0.00] Initmem setup node 0 -7fffd000
 [0.00]   NODE_DATA [7fff8000 - 7fffcfff]
 [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
 [0.00] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
 [0.00] Zone PFN ranges:
 [0.00]   DMA  0x0010 - 0x1000
 [0.00]   DMA320x1000 - 0x0010
 [0.00]   Normal   empty
 [0.00] Movable zone start PFN for each node
 [0.00] early_node_map[2] active PFN ranges
 [0.00] 0: 0x0010 - 0x009b
 [0.00] 0: 0x0100 - 0x0007fffd
 [0.00] ACPI: PM-Timer IO Port: 0xb008
 [0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
 [0.00] ACPI: IOAPIC (id[0x01] address[0xfec0] gsi_base[0])
 [0.00] IOAPIC[0]: apic_id 1, version 17, address 0xfec0, GSI 0-23
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
 [0.00] Using ACPI (MADT) for SMP configuration information
 [0.00] ACPI: HPET id: 0x8086a201 base: 

Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread JuanFra Rodriguez Cardoso
Have you tried to ping the VM from the host itself?
Have you enabled PING and SSH in 'Access and security policies'?

Regards!
JuanFra


2013/2/15 Chathura M. Sarathchandra Magurawalage 77.chath...@gmail.com

 Hello Anil,

 I cannot SSH into the VM, so I can't run ifconfig from the VM.

 I am using Quantum, with quantum-plugin-openvswitch-agent,
 quantum-dhcp-agent and quantum-l3-agent, as described in the guide.

 Thanks.


 -
 Chathura Madhusanka Sarathchandra Magurawalage.
 1NW.2.1, Desk 2
 School of Computer Science and Electronic Engineering
 University Of Essex
 United Kingdom.

 Email: csar...@essex.ac.uk
    chathura.sarathchan...@gmail.com
    77.chath...@gmail.com


 On 15 February 2013 07:34, Anil Vishnoi vishnoia...@gmail.com wrote:

 Did your VM get an IP address? Can you paste the output of ifconfig from
 your VM? Are you using nova-network or Quantum? If Quantum, which plugin
 are you using?


  On Fri, Feb 15, 2013 at 4:28 AM, Chathura M. Sarathchandra Magurawalage
 77.chath...@gmail.com wrote:

  Hello,

 I followed the folsom basic install instructions in
 http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html

 But now I am not able to ping either the private or the floating ip of
 the instances.

 Can someone please help?

 Instance log:

 [0.00] Initializing cgroup subsys cpuset
 [0.00] Initializing cgroup subsys cpu
 [0.00] Linux version 3.2.0-37-virtual (buildd@allspice) (gcc 
 version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan 24 
 15:48:03 UTC 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
 [0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-virtual 
 root=LABEL=cloudimg-rootfs ro console=ttyS0
 [0.00] KERNEL supported cpus:
 [0.00]   Intel GenuineIntel
 [0.00]   AMD AuthenticAMD
 [0.00]   Centaur CentaurHauls
 [0.00] BIOS-provided physical RAM map:
 [0.00]  BIOS-e820:  - 0009bc00 (usable)
 [0.00]  BIOS-e820: 0009bc00 - 000a (reserved)
 [0.00]  BIOS-e820: 000f - 0010 (reserved)
 [0.00]  BIOS-e820: 0010 - 7fffd000 (usable)
 [0.00]  BIOS-e820: 7fffd000 - 8000 (reserved)
 [0.00]  BIOS-e820: feffc000 - ff00 (reserved)
 [0.00]  BIOS-e820: fffc - 0001 (reserved)
 [0.00] NX (Execute Disable) protection: active
 [0.00] DMI 2.4 present.
 [0.00] No AGP bridge found
 [0.00] last_pfn = 0x7fffd max_arch_pfn = 0x4
 [0.00] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
 [0.00] found SMP MP-table at [880fdae0] fdae0
 [0.00] init_memory_mapping: -7fffd000
 [0.00] RAMDISK: 3776c000 - 37bae000
 [0.00] ACPI: RSDP 000fd980 00014 (v00 BOCHS )
 [0.00] ACPI: RSDT 7fffd7b0 00034 (v01 BOCHS  BXPCRSDT 
 0001 BXPC 0001)
 [0.00] ACPI: FACP 7f80 00074 (v01 BOCHS  BXPCFACP 
 0001 BXPC 0001)
 [0.00] ACPI: DSDT 7fffd9b0 02589 (v01   BXPC   BXDSDT 
 0001 INTL 20100528)
 [0.00] ACPI: FACS 7f40 00040
 [0.00] ACPI: SSDT 7fffd910 0009E (v01 BOCHS  BXPCSSDT 
 0001 BXPC 0001)
 [0.00] ACPI: APIC 7fffd830 00072 (v01 BOCHS  BXPCAPIC 
 0001 BXPC 0001)
 [0.00] ACPI: HPET 7fffd7f0 00038 (v01 BOCHS  BXPCHPET 
 0001 BXPC 0001)
 [0.00] No NUMA configuration found
 [0.00] Faking a node at -7fffd000
 [0.00] Initmem setup node 0 -7fffd000
 [0.00]   NODE_DATA [7fff8000 - 7fffcfff]
 [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
 [0.00] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
 [0.00] Zone PFN ranges:
 [0.00]   DMA  0x0010 - 0x1000
 [0.00]   DMA320x1000 - 0x0010
 [0.00]   Normal   empty
 [0.00] Movable zone start PFN for each node
 [0.00] early_node_map[2] active PFN ranges
 [0.00] 0: 0x0010 - 0x009b
 [0.00] 0: 0x0100 - 0x0007fffd
 [0.00] ACPI: PM-Timer IO Port: 0xb008
 [0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
 [0.00] ACPI: IOAPIC (id[0x01] address[0xfec0] gsi_base[0])
 [0.00] IOAPIC[0]: apic_id 1, version 17, address 0xfec0, GSI 
 0-23
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 

Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Jean-Baptiste RANSY
Hello Chathura,

It's normal that your compute node has no route to the tenant network:
Quantum and Open vSwitch provide the Layer 2 link and, as I can see, the VM
obtains an IP address.
So we can assume that Quantum and Open vSwitch are set up correctly.

Same question as JuanFra: have you enabled PING and SSH in 'Access and
security policies'?

One other thing:

cloud-init (in the VM) is unable to retrieve metadata. Is nova-api-metadata
running on your compute node?
If yes, check your nova.conf.
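
(A hedged sketch of the metadata-related nova.conf settings on Folsom; the
address is a placeholder for your management IP, and the defaults may
already be fine in your install:

  enabled_apis=ec2,osapi_compute,metadata
  metadata_host=172.16.0.1
  metadata_listen=0.0.0.0
  metadata_listen_port=8775

quantum-l3-agent must point at the same host and port, via metadata_ip and
metadata_port in l3_agent.ini.)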

Regards,

Jean-Baptiste RANSY


On 02/14/2013 11:58 PM, Chathura M. Sarathchandra Magurawalage wrote:
 Hello,

 I followed the folsom basic install instructions
 in 
 http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html

 But now I am not able to ping either the private or the floating ip of
 the instances.

 Can someone please help?

 Instance log:

 [0.00] Initializing cgroup subsys cpuset
 [0.00] Initializing cgroup subsys cpu
 [0.00] Linux version 3.2.0-37-virtual (buildd@allspice) (gcc version 
 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 
 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
 [0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-virtual 
 root=LABEL=cloudimg-rootfs ro console=ttyS0
 [0.00] KERNEL supported cpus:
 [0.00]   Intel GenuineIntel
 [0.00]   AMD AuthenticAMD
 [0.00]   Centaur CentaurHauls
 [0.00] BIOS-provided physical RAM map:
 [0.00]  BIOS-e820:  - 0009bc00 (usable)
 [0.00]  BIOS-e820: 0009bc00 - 000a (reserved)
 [0.00]  BIOS-e820: 000f - 0010 (reserved)
 [0.00]  BIOS-e820: 0010 - 7fffd000 (usable)
 [0.00]  BIOS-e820: 7fffd000 - 8000 (reserved)
 [0.00]  BIOS-e820: feffc000 - ff00 (reserved)
 [0.00]  BIOS-e820: fffc - 0001 (reserved)
 [0.00] NX (Execute Disable) protection: active
 [0.00] DMI 2.4 present.
 [0.00] No AGP bridge found
 [0.00] last_pfn = 0x7fffd max_arch_pfn = 0x4
 [0.00] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
 [0.00] found SMP MP-table at [880fdae0] fdae0
 [0.00] init_memory_mapping: -7fffd000
 [0.00] RAMDISK: 3776c000 - 37bae000
 [0.00] ACPI: RSDP 000fd980 00014 (v00 BOCHS )
 [0.00] ACPI: RSDT 7fffd7b0 00034 (v01 BOCHS  BXPCRSDT 
 0001 BXPC 0001)
 [0.00] ACPI: FACP 7f80 00074 (v01 BOCHS  BXPCFACP 
 0001 BXPC 0001)
 [0.00] ACPI: DSDT 7fffd9b0 02589 (v01   BXPC   BXDSDT 
 0001 INTL 20100528)
 [0.00] ACPI: FACS 7f40 00040
 [0.00] ACPI: SSDT 7fffd910 0009E (v01 BOCHS  BXPCSSDT 
 0001 BXPC 0001)
 [0.00] ACPI: APIC 7fffd830 00072 (v01 BOCHS  BXPCAPIC 
 0001 BXPC 0001)
 [0.00] ACPI: HPET 7fffd7f0 00038 (v01 BOCHS  BXPCHPET 
 0001 BXPC 0001)
 [0.00] No NUMA configuration found
 [0.00] Faking a node at -7fffd000
 [0.00] Initmem setup node 0 -7fffd000
 [0.00]   NODE_DATA [7fff8000 - 7fffcfff]
 [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
 [0.00] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
 [0.00] Zone PFN ranges:
 [0.00]   DMA  0x0010 - 0x1000
 [0.00]   DMA320x1000 - 0x0010
 [0.00]   Normal   empty
 [0.00] Movable zone start PFN for each node
 [0.00] early_node_map[2] active PFN ranges
 [0.00] 0: 0x0010 - 0x009b
 [0.00] 0: 0x0100 - 0x0007fffd
 [0.00] ACPI: PM-Timer IO Port: 0xb008
 [0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
 [0.00] ACPI: IOAPIC (id[0x01] address[0xfec0] gsi_base[0])
 [0.00] IOAPIC[0]: apic_id 1, version 17, address 0xfec0, GSI 0-23
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
 [0.00] Using ACPI (MADT) for SMP configuration information
 [0.00] ACPI: HPET id: 0x8086a201 base: 0xfed0
 [0.00] SMP: Allowing 1 CPUs, 0 hotplug CPUs
 [0.00] PM: Registered nosave memory: 0009b000 - 
 0009c000
 [0.00] PM: Registered nosave memory: 0009c000 - 
 000a
 [0.00] PM: Registered nosave memory: 000a - 
 000f
 [0.00] PM: Registered nosave 

Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Guilherme Russi
Hello guys,

 I got the same problem: I have enabled the SSH and ping policies, but when I
type "sudo ifconfig -a" inside my VM (through VNC) the only interface that
shows an IP is lo.
 What am I missing?

Regards.

Guilherme.


2013/2/15 Jean-Baptiste RANSY jean-baptiste.ra...@alyseo.com

  Hello Chathura,

 It's normal that your compute node has no route to the tenant network:
 Quantum and Open vSwitch provide the Layer 2 link and, as I can see, the VM
 obtains an IP address.
 So we can assume that Quantum and Open vSwitch are set up correctly.

 Same question as JuanFra: have you enabled PING and SSH in 'Access and
 security policies'?

 One other thing:

 cloud-init (in the VM) is unable to retrieve metadata. Is nova-api-metadata
 running on your compute node?
 If yes, check your nova.conf.

 Regards,

 Jean-Baptiste RANSY



 On 02/14/2013 11:58 PM, Chathura M. Sarathchandra Magurawalage wrote:

  Hello,

  I followed the folsom basic install instructions in
 http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html

  But now I am not able to ping either the private or the floating ip of
 the instances.

  Can someone please help?

  Instance log:

  [0.00] Initializing cgroup subsys cpuset
 [0.00] Initializing cgroup subsys cpu
 [0.00] Linux version 3.2.0-37-virtual (buildd@allspice) (gcc version 
 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 
 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
 [0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-virtual 
 root=LABEL=cloudimg-rootfs ro console=ttyS0
 [0.00] KERNEL supported cpus:
 [0.00]   Intel GenuineIntel
 [0.00]   AMD AuthenticAMD
 [0.00]   Centaur CentaurHauls
 [0.00] BIOS-provided physical RAM map:
 [0.00]  BIOS-e820:  - 0009bc00 (usable)
 [0.00]  BIOS-e820: 0009bc00 - 000a (reserved)
 [0.00]  BIOS-e820: 000f - 0010 (reserved)
 [0.00]  BIOS-e820: 0010 - 7fffd000 (usable)
 [0.00]  BIOS-e820: 7fffd000 - 8000 (reserved)
 [0.00]  BIOS-e820: feffc000 - ff00 (reserved)
 [0.00]  BIOS-e820: fffc - 0001 (reserved)
 [0.00] NX (Execute Disable) protection: active
 [0.00] DMI 2.4 present.
 [0.00] No AGP bridge found
 [0.00] last_pfn = 0x7fffd max_arch_pfn = 0x4
 [0.00] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
 [0.00] found SMP MP-table at [880fdae0] fdae0
 [0.00] init_memory_mapping: -7fffd000
 [0.00] RAMDISK: 3776c000 - 37bae000
 [0.00] ACPI: RSDP 000fd980 00014 (v00 BOCHS )
 [0.00] ACPI: RSDT 7fffd7b0 00034 (v01 BOCHS  BXPCRSDT 
 0001 BXPC 0001)
 [0.00] ACPI: FACP 7f80 00074 (v01 BOCHS  BXPCFACP 
 0001 BXPC 0001)
 [0.00] ACPI: DSDT 7fffd9b0 02589 (v01   BXPC   BXDSDT 
 0001 INTL 20100528)
 [0.00] ACPI: FACS 7f40 00040
 [0.00] ACPI: SSDT 7fffd910 0009E (v01 BOCHS  BXPCSSDT 
 0001 BXPC 0001)
 [0.00] ACPI: APIC 7fffd830 00072 (v01 BOCHS  BXPCAPIC 
 0001 BXPC 0001)
 [0.00] ACPI: HPET 7fffd7f0 00038 (v01 BOCHS  BXPCHPET 
 0001 BXPC 0001)
 [0.00] No NUMA configuration found
 [0.00] Faking a node at -7fffd000
 [0.00] Initmem setup node 0 -7fffd000
 [0.00]   NODE_DATA [7fff8000 - 7fffcfff]
 [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
 [0.00] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
 [0.00] Zone PFN ranges:
 [0.00]   DMA  0x0010 - 0x1000
 [0.00]   DMA320x1000 - 0x0010
 [0.00]   Normal   empty
 [0.00] Movable zone start PFN for each node
 [0.00] early_node_map[2] active PFN ranges
 [0.00] 0: 0x0010 - 0x009b
 [0.00] 0: 0x0100 - 0x0007fffd
 [0.00] ACPI: PM-Timer IO Port: 0xb008
 [0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
 [0.00] ACPI: IOAPIC (id[0x01] address[0xfec0] gsi_base[0])
 [0.00] IOAPIC[0]: apic_id 1, version 17, address 0xfec0, GSI 0-23
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
 [0.00] Using ACPI (MADT) for SMP configuration information
 [0.00] ACPI: HPET id: 0x8086a201 base: 0xfed0
 [0.00] SMP: Allowing 1 CPUs, 0 hotplug 

Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread JuanFra Rodriguez Cardoso
Hi Guilherme:

Try issuing 'dhclient eth1' in your VM (from the VNC console). It could be a
problem with the udev net rules.
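
(A hedged aside: on Ubuntu guests the usual culprit is
/etc/udev/rules.d/70-persistent-net.rules pinning the image's old MAC
address to eth0, so the instance's NIC comes up as eth1 with no
configuration attached to it. Removing the file and rebooting, e.g.:

  sudo rm /etc/udev/rules.d/70-persistent-net.rules
  sudo reboot

is safe, since udev regenerates the file on the next boot.)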


Regards,
JuanFra


2013/2/15 Guilherme Russi luisguilherme...@gmail.com

 Hello guys,

  I got the same problem: I have enabled the SSH and ping policies, but when I
 type "sudo ifconfig -a" inside my VM (through VNC) the only interface that
 shows an IP is lo.
  What am I missing?

 Regards.

 Guilherme.



 2013/2/15 Jean-Baptiste RANSY jean-baptiste.ra...@alyseo.com

  Hello Chathura,

 It's normal that your compute node has no route to the tenant network:
 Quantum and Open vSwitch provide the Layer 2 link and, as I can see, the VM
 obtains an IP address.
 So we can assume that Quantum and Open vSwitch are set up correctly.

 Same question as JuanFra: have you enabled PING and SSH in 'Access and
 security policies'?

 One other thing:

 cloud-init (in the VM) is unable to retrieve metadata. Is nova-api-metadata
 running on your compute node?
 If yes, check your nova.conf.

 Regards,

 Jean-Baptiste RANSY



 On 02/14/2013 11:58 PM, Chathura M. Sarathchandra Magurawalage wrote:

  Hello,

  I followed the folsom basic install instructions in
 http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html

  But now I am not able to ping either the private or the floating ip of
 the instances.

  Can someone please help?

  Instance log:

  [0.00] Initializing cgroup subsys cpuset
 [0.00] Initializing cgroup subsys cpu
 [0.00] Linux version 3.2.0-37-virtual (buildd@allspice) (gcc version 
 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan 24 15:48:03 
 UTC 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
 [0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-virtual 
 root=LABEL=cloudimg-rootfs ro console=ttyS0
 [0.00] KERNEL supported cpus:
 [0.00]   Intel GenuineIntel
 [0.00]   AMD AuthenticAMD
 [0.00]   Centaur CentaurHauls
 [0.00] BIOS-provided physical RAM map:
 [0.00]  BIOS-e820:  - 0009bc00 (usable)
 [0.00]  BIOS-e820: 0009bc00 - 000a (reserved)
 [0.00]  BIOS-e820: 000f - 0010 (reserved)
 [0.00]  BIOS-e820: 0010 - 7fffd000 (usable)
 [0.00]  BIOS-e820: 7fffd000 - 8000 (reserved)
 [0.00]  BIOS-e820: feffc000 - ff00 (reserved)
 [0.00]  BIOS-e820: fffc - 0001 (reserved)
 [0.00] NX (Execute Disable) protection: active
 [0.00] DMI 2.4 present.
 [0.00] No AGP bridge found
 [0.00] last_pfn = 0x7fffd max_arch_pfn = 0x4
 [0.00] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
 [0.00] found SMP MP-table at [880fdae0] fdae0
 [0.00] init_memory_mapping: -7fffd000
 [0.00] RAMDISK: 3776c000 - 37bae000
 [0.00] ACPI: RSDP 000fd980 00014 (v00 BOCHS )
 [0.00] ACPI: RSDT 7fffd7b0 00034 (v01 BOCHS  BXPCRSDT 
 0001 BXPC 0001)
 [0.00] ACPI: FACP 7f80 00074 (v01 BOCHS  BXPCFACP 
 0001 BXPC 0001)
 [0.00] ACPI: DSDT 7fffd9b0 02589 (v01   BXPC   BXDSDT 
 0001 INTL 20100528)
 [0.00] ACPI: FACS 7f40 00040
 [0.00] ACPI: SSDT 7fffd910 0009E (v01 BOCHS  BXPCSSDT 
 0001 BXPC 0001)
 [0.00] ACPI: APIC 7fffd830 00072 (v01 BOCHS  BXPCAPIC 
 0001 BXPC 0001)
 [0.00] ACPI: HPET 7fffd7f0 00038 (v01 BOCHS  BXPCHPET 
 0001 BXPC 0001)
 [0.00] No NUMA configuration found
 [0.00] Faking a node at -7fffd000
 [0.00] Initmem setup node 0 -7fffd000
 [0.00]   NODE_DATA [7fff8000 - 7fffcfff]
 [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
 [0.00] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
 [0.00] Zone PFN ranges:
 [0.00]   DMA  0x0010 - 0x1000
 [0.00]   DMA320x1000 - 0x0010
 [0.00]   Normal   empty
 [0.00] Movable zone start PFN for each node
 [0.00] early_node_map[2] active PFN ranges
 [0.00] 0: 0x0010 - 0x009b
 [0.00] 0: 0x0100 - 0x0007fffd
 [0.00] ACPI: PM-Timer IO Port: 0xb008
 [0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
 [0.00] ACPI: IOAPIC (id[0x01] address[0xfec0] gsi_base[0])
 [0.00] IOAPIC[0]: apic_id 1, version 17, address 0xfec0, GSI 0-23
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 

Re: [Openstack] Ability to view Ubuntu boot process in VNC console

2013-02-15 Thread Andrii Loshkovskyi
Hello,

Thank you for your answer.

Actually, I already tried updating GRUB with those parameters. No effect.

It looks to be a libvirt issue. I'm studying our libvirt configs and
templates now. I found that libvirt uses a template with the following
line:

<cmdline>console=ttyS0</cmdline>

I suppose that I need to change it to tty1, as the latter is what is shown
on my VNC console. I'll keep investigating.



On Fri, Feb 15, 2013 at 11:11 AM, JuanFra Rodriguez Cardoso 
juanfra.rodriguez.card...@gmail.com wrote:

 Hi Andrii:

 In this post we talk about it.
 http://openstack.markmail.org/thread/tqza3vv4ap4out2q

 Regards!
 --
 JuanFra


 2013/2/14 Andrii Loshkovskyi loshkovs...@gmail.com

 Hello,

 I tried setting the value with/without brackets and encountered the
 following error:

 Invalid output terminal ttyS0

 As far as I know I can check the kernel boot parameters this way:

 cat /proc/cmdline
 root=/dev/vda console=ttyS0 selinux=0

 The line above is the same every time, even after I apply new
 changes to the GRUB config.

 On Thu, Feb 14, 2013 at 4:37 PM, Joe Breu joseph.b...@rackspace.com wrote:

  Hello Andrii,

  Can you try setting GRUB_TERMINAL to ttyS0, update the grub config,
 and boot the image?

   ---
 Joseph Breu
 Deployment Engineer
 Rackspace Private Cloud
 210-312-3508

  On Feb 14, 2013, at 8:11 AM, Andrii Loshkovskyi wrote:

 Currently, when I boot up an Ubuntu virtual machine, the boot process
 messages do not show up in the VNC console. Everything I see in the
 console is "iPXE Booting from ROM...", a black screen, and a login prompt at
 the end. I am using OpenStack Essex and my VM's image was built from Ubuntu
 Server 12.04 LTS. I tried editing the GRUB config file /etc/default/grub
 and updating the configuration with update-grub afterwards. In particular, I
 made sure there is no "quiet" option in the Linux command line parameters,
 GRUB_TERMINAL=console is uncommented, etc.

 The problem is it looks like Nova does not use this GRUB config at
 all. Any changes applied to the GRUB config are not visible in the boot
 process. I studied documentation and googled a lot trying to understand how
 the VM boot process works in OpenStack, but with no success.

 I need your help on this issue. I would appreciate it if someone could
 share advice on how to view the boot messages while booting up a VM, or point
 me to the proper documentation. Let me know if you need more details on my
 configuration, as it looks to be a rather general issue.

  Thank you for your help.
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





 --
 Kind regards,
 Andrii Loshkovskyi

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





-- 
Kind regards,
Andrii Loshkovskyi
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Sylvain Bauza


On 15/02/2013 11:42, Chathura M. Sarathchandra Magurawalage wrote:


How can I log into the VM from VNC? What are the credentials?



You have multiple ways to get VNC access. The easiest one is through 
Horizon. Another is to look at the KVM command line for the desired 
instance (on the compute node) and check the VNC port in use (assuming 
KVM as the hypervisor).

This is basic knowledge of Nova.
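
(For example, a hedged sketch on the compute node; the domain name is a
placeholder you would take from "virsh list":

  virsh list
  virsh vncdisplay instance-0000000a

or "ps aux | grep kvm" to read the -vnc argument; then point any VNC
client at that display/port on the compute node.)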



nova-api-metadata is running fine on the compute node.



Make sure the metadata port is actually reachable, using telnet or netstat; 
nova-api can be running without listening on the metadata port.
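
(E.g., something along these lines on the node running nova-api, 8775 being
the default metadata port:

  netstat -lnt | grep 8775
  telnet 127.0.0.1 8775

A successful telnet connect is enough to show it is listening.)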




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Anil Vishnoi
If you are using an Ubuntu cloud image then the only way to log in is to
SSH with the public key. For that you have to create an SSH key pair and
download the private key. You can create the key pair using Horizon or the CLI.
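
(A minimal CLI sketch; the key name, flavor, image and address are
placeholders for your own values:

  nova keypair-add mykey > mykey.pem
  chmod 600 mykey.pem
  nova boot --flavor m1.small --image ubuntu-12.04 --key_name mykey testvm
  ssh -i mykey.pem ubuntu@<floating-ip>

"ubuntu" is the default login user on Ubuntu cloud images.)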


On Fri, Feb 15, 2013 at 4:27 PM, Sylvain Bauza
sylvain.ba...@digimind.com wrote:


 On 15/02/2013 11:42, Chathura M. Sarathchandra Magurawalage wrote:


 How can I log into the VM from VNC? What are the credentials?


 You have multiple ways to get VNC access. The easiest one is through Horizon.
 Another is to look at the KVM command line for the desired instance (on
 the compute node) and check the VNC port in use (assuming KVM as the
 hypervisor).
 This is basic knowledge of Nova.



  nova-api-metadata is running fine on the compute node.


 Make sure the metadata port is actually reachable, using telnet or netstat;
 nova-api can be running without listening on the metadata port.




 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




-- 
Thanks & Regards
--Anil Kumar Vishnoi
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Getting the 2012.2.3 release in Ubuntu

2013-02-15 Thread Chuck Short

Hi,

Sorry for not getting back sooner; we are in the middle of getting 
2012.2.3 updated for the Cloud Archive. However, it follows the same 
stable release update process (SRU) as normal updates for 12.04. You 
can read about the process at 
https://wiki.ubuntu.com/StableReleaseUpdates and follow the update 
process at https://bugs.launchpad.net/bugs/1116671.


If you have any questions let me know.

chuck

On 13-02-13 02:42 PM, Martinx - ジェームズ wrote:

+10 for 2012.2.3 @ Ubuntu Cloud Archives!

On 11 February 2013 18:48, Logan McNaughton lo...@bacoosta.com wrote:


Hi,

I've added the Ubuntu Cloud Archive to my apt sources, however it
looks like the latest release there is 2012.2.1.

How can I get the latest 2012.2.3 release in Ubuntu 12.04?

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] UK Trademark issue with Python

2013-02-15 Thread Doug Hellmann
The Python Software Foundation, maintainers of the IP behind the Python
language, are currently fighting a trademark claim against a company in the
UK that is trying to trademark the use of the term Python for all
software, services, and servers apparently as part of branding their cloud
service. If you work for a company with an office in an EU Community member
state, the PSF would appreciate your help documenting the community's prior
claim to the name Python. For details about where to send supporting
evidence, please see the PSF blog post [1].

Thanks,
Doug

[1]
http://pyfound.blogspot.com/2013/02/python-trademark-at-risk-in-europe-we.html
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Sylvain Bauza

The metadata API allows an instance to fetch its SSH credentials when booting
(the public key, I mean). If a VM is unable to reach the metadata service, it
won't be able to get its public key, so you won't be able to connect, unless
you specifically go through password authentication (provided password auth
is enabled in /etc/ssh/sshd_config, which is not the case with Ubuntu cloud
images).
There is also a side effect: the boot process takes longer, as the instance
waits for the curl timeout (60 sec.) before finishing booting.


Re: Quantum, the metadata API is actually DNAT'd from the network node to 
the nova-api node (here 172.16.0.1 as the internal management IP):

Chain quantum-l3-agent-PREROUTING (1 references)
target prot opt source       destination
DNAT   tcp  --  0.0.0.0/0    169.254.169.254  tcp dpt:80
to:172.16.0.1:8775
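
(If that rule is missing, a hand-added equivalent would look something like
the following; the address is a placeholder for your nova-api management
IP:

  iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
      -j DNAT --to-destination 172.16.0.1:8775

Normally, though, quantum-l3-agent installs it for you, driven by
metadata_ip and metadata_port in its configuration.)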



Anyway, the first step is to:
1. grab the console.log
2. access the desired instance through VNC

Troubleshooting will be easier once that is done.

-Sylvain



On 15/02/2013 14:24, Chathura M. Sarathchandra Magurawalage wrote:

Hello Guys,

Not sure if this is the right port but these are the results:

Compute node:

root@computenode:~# netstat -an | grep 8775
tcp    0   0 0.0.0.0:8775    0.0.0.0:*    LISTEN


Controller:

root@controller:~# netstat -an | grep 8775
tcp    0   0 0.0.0.0:8775    0.0.0.0:*    LISTEN


Additionally, I can't curl 169.254.169.254 from the compute node. I am 
not sure if this is related to not being able to ping the VM.


curl -v http://169.254.169.254
* About to connect() to 169.254.169.254 port 80 (#0)
*   Trying 169.254.169.254...

Thanks for your help


-
Chathura Madhusanka Sarathchandra Magurawalage.
1NW.2.1, Desk 2
School of Computer Science and Electronic Engineering
University Of Essex
United Kingdom.

Email: csar...@essex.ac.uk
  chathura.sarathchan...@gmail.com
  77.chath...@gmail.com


On 15 February 2013 11:03, Anil Vishnoi vishnoia...@gmail.com wrote:


If you are using an Ubuntu cloud image then the only way to log in is
to SSH with the public key. For that you have to create an SSH key
pair and download the private key. You can create the key pair using
Horizon or the CLI.


On Fri, Feb 15, 2013 at 4:27 PM, Sylvain Bauza
sylvain.ba...@digimind.com
wrote:


On 15/02/2013 11:42, Chathura M. Sarathchandra Magurawalage
wrote:


How can I log into the VM from VNC? What are the credentials?


You have multiple ways to get VNC access. The easiest one is
through Horizon. Another is to look at the KVM command line for
the desired instance (on the compute node) and check the VNC
port in use (assuming KVM as the hypervisor).
This is basic knowledge of Nova.



nova-api-metadata is running fine on the compute node.


Make sure the metadata port is actually reachable, using telnet or
netstat; nova-api can be running without listening on the metadata
port.




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp




-- 
Thanks & Regards

--Anil Kumar Vishnoi





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Samuel Winchenbach
Sure...  I have undone these settings but I saved a copy:

two hosts:
test1 eth0: 10.21.0.1/16 eth1: 130.x.x.x/24
test2 eth0: 10.21.0.2/16 eth1: 130.x.x.x/24

VIP: 10.21.21.1  (just for testing; later I would add a 130.x.x.x/24 VIP
for the public APIs)

Keystone is bound to 10.21.0.1 on test1 and 10.21.0.2 on test2



in /etc/sysctl.conf:
   net.ipv4.conf.all.arp_ignore = 1
   net.ipv4.conf.eth0.arp_ignore = 1
   net.ipv4.conf.all.arp_announce = 2
   net.ipv4.conf.eth0.arp_announce = 2

root# sysctl -p

in ldirectord.cf:

checktimeout=3
checkinterval=5
autoreload=yes
logfile=/var/log/ldirectord.log
quiescent=no

virtual=10.21.21.1:5000
        real=10.21.0.1:5000 gate
        real=10.21.0.2:5000 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=5000

virtual=10.21.21.1:35357
        real=10.21.0.1:35357 gate
        real=10.21.0.2:35357 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=35357


crm shell:

primitive p_openstack_ip ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" cidr_netmask="16" lvs_support="true"

primitive p_openstack_ip_lo ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" nic="lo" cidr_netmask="32"

primitive p_openstack_lvs ocf:heartbeat:ldirectord \
        op monitor interval="20" timeout="10"

group g_openstack_ip_lvs p_openstack_ip p_openstack_lvs

clone c_openstack_ip_lo p_openstack_ip_lo meta interleave="true"

colocation co_openstack_lo_never_lvs -inf: c_openstack_ip_lo g_openstack_ip_lvs

Thanks for taking a look at this.

Sam





On Fri, Feb 15, 2013 at 3:54 AM, Sébastien Han han.sebast...@gmail.com
wrote:

 Hum, I don't see the problem; it's possible to load-balance VIPs with LVS,
they are just IPs... Can I see your conf?

 --
 Regards,
 Sébastien Han.


 On Thu, Feb 14, 2013 at 8:34 PM, Samuel Winchenbach swinc...@gmail.com
wrote:

 Well, I think I will have to go with one IP per service and forget about
load balancing.  It seems as though with LVS, routing requests internally
through the VIP is difficult (impossible?), at least with LVS-DR.  It seems
like a shame not to be able to distribute the work among the controller
nodes.


 On Thu, Feb 14, 2013 at 9:50 AM, Samuel Winchenbach swinc...@gmail.com
wrote:

 Hi Sébastien,

 I have two hosts with public interfaces and a number (~8) of compute
nodes behind them.   I am trying to set the two public nodes up for HA and
load balancing, and I plan to run all the OpenStack services on these two
nodes in Active/Active where possible.   I currently have MySQL and
RabbitMQ set up in Pacemaker with a DRBD backend.

 That is a quick summary.   If there is anything else I can answer about
my setup please let me know.

 Thanks,
 Sam


 On Thu, Feb 14, 2013 at 9:26 AM, Sébastien Han han.sebast...@gmail.com
wrote:

 Well, I don't know your setup (whether you use a load balancer for the API
services or an active/passive Pacemaker), but in the end it's not that many
IPs, I guess. I dare say that Keepalived sounds outdated to me...

 If you use Pacemaker and want to have the same IP for all the
resources, simply create a resource group with all the OpenStack services
inside it (it's ugly, but if it's what you want :)). Give me more info about
your setup and we can go further in the discussion :).

 --
 Regards,
 Sébastien Han.


 On Thu, Feb 14, 2013 at 3:15 PM, Samuel Winchenbach swinc...@gmail.com
wrote:

 The only real problem is that it would consume a lot of IP addresses
when exposing the public interfaces.   I _think_ I may have the solution in
your blog actually:
http://www.sebastien-han.fr/blog/2012/10/19/highly-available-lvs/
 and
 http://clusterlabs.org/wiki/Using_ldirectord

 I am trying to weigh the pros and cons of this method vs
keepalived/haproxy and just biting the bullet and using one IP per service.


 On Thu, Feb 14, 2013 at 4:17 AM, Sébastien Han 
han.sebast...@gmail.com wrote:

 What's the problem with having one IP per service pool?

 --
 Regards,
 Sébastien Han.


 On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach 
swinc...@gmail.com wrote:

 What if the VIP is created on a different host than the one keystone is
started on?   It seems like you either need to set
net.ipv4.ip_nonlocal_bind = 1 or create a colocation in pacemaker (which
would either require all services to be on the same host, or have an
IP per service).




 On Wed, Feb 13, 2013 at 2:28 PM, Razique Mahroua 
razique.mahr...@gmail.com wrote:

 There we go
 https://review.openstack.org/#/c/21581/

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 On 13 Feb 2013, at 20:15, Razique Mahroua 
razique.mahr...@gmail.com wrote:

 I'm currently updating that part of the documentation - indeed it
states that two IPs are used, but in fact, you end up with only one VIP for
the API service.
 I'll send the patch tonight

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 

[Openstack] Suggestions for shared-storage cluster file system

2013-02-15 Thread Samuel Winchenbach
Hi All,

Can anyone give me a recommendation for a good shared-storage cluster
filesystem?   I am running kvm-libvirt and would like to enable live
migration.
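
(Whichever filesystem it ends up being, the usual pattern for kvm-libvirt
live migration is to share /var/lib/nova/instances across all compute
nodes; with GlusterFS, for instance, something like the following on each
node, where the server and volume names are placeholders:

  mount -t glusterfs gluster1:/nova-instances /var/lib/nova/instances

Live migration then needs no block copying, since both hypervisors see the
same disk files.)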

I have a number of hosts (up to 16) each with 2xTB drives.  These hosts are
also my compute/network/controller nodes.

The three I am considering are:

GlusterFS - I have the most experience with this, and it seems the easiest.

CephFS/RADOS - Interesting because Glance supports the RBD backend.
 Slightly worried because of this, though: "Important:

Mount the CephFS filesystem on the client machine, not the cluster machine."

(I wish it said why...)  And: "CephFS is not quite as stable as the block
device and the object storage gateway."


Lustre - A little hesitant now that Oracle is involved with it.


If anyone has any advice, or can point out another that I should consider
it would be greatly appreciated.

Thanks!

Sam
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Suggestions for shared-storage cluster file system

2013-02-15 Thread JuanFra Rodriguez Cardoso
Another one:

 - MooseFS (
http://docs.openstack.org/trunk/openstack-compute/admin/content/installing-moosefs-as-backend.html
)
 - GlusterFS
 - Ceph
 - Lustre

Regards,
JuanFra


2013/2/15 Samuel Winchenbach swinc...@gmail.com

 Hi All,

 Can anyone give me a recommendation for a good shared-storage cluster
 filesystem?   I am running kvm-libvirt and would like to enable live
 migration.

 I have a number of hosts (up to 16) each with 2xTB drives.  These hosts
 are also my compute/network/controller nodes.

 The three I am considering are:

 GlusterFS - I have the most experience with this, and it seems the easiest.

 CephFS/RADOS - Interesting because Glance supports the RBD backend.
  Slightly worried because of this, though: "Important:
 Mount the CephFS filesystem on the client machine, not the cluster
 machine."
 (I wish it said why...)  And: "CephFS is not quite as stable as the block
 device and the object storage gateway."
  Lustre - A little hesitant now that Oracle is involved with it.


 If anyone has any advice, or can point out another that I should consider
 it would be greatly appreciated.

 Thanks!

 Sam


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Chathura M. Sarathchandra Magurawalage
Thanks for your reply.

First of all, I do not see the following rule in my iptables:

target prot opt source       destination
DNAT   tcp  --  0.0.0.0/0    169.254.169.254  tcp dpt:80 to:x.x.x.x:8775

Please find the console log at the beginning of the post.

Since I am using an Ubuntu cloud image I am not able to log in to it through
the VNC console. I can't even ping it. This is the main problem.

Any help will be greatly appreciated.



On 15 February 2013 14:37, Sylvain Bauza sylvain.ba...@digimind.com wrote:

 The metadata API allows an instance to fetch its SSH credentials when booting
 (the public key, I mean). If a VM is unable to reach the metadata service, it
 won't be able to get its public key, so you won't be able to connect, unless
 you specifically go through password authentication (provided password auth is
 enabled in /etc/ssh/sshd_config, which is not the case with Ubuntu cloud
 images).
 There is also a side effect: the boot process takes longer, as the instance
 waits for the curl timeout (60 sec.) before finishing booting.

 Re: Quantum, the metadata API is actually DNAT'd from the network node to the
 nova-api node (here 172.16.0.1 as the internal management IP):
 Chain quantum-l3-agent-PREROUTING (1 references)

 target prot opt source       destination
 DNAT   tcp  --  0.0.0.0/0    169.254.169.254  tcp dpt:80
 to:172.16.0.1:8775


 Anyway, the first step is to:
 1. grab the console.log
 2. access the desired instance through VNC

 Troubleshooting will be easier once that is done.

 -Sylvain



 On 15/02/2013 14:24, Chathura M. Sarathchandra Magurawalage wrote:

 Hello Guys,

 Not sure if this is the right port but these are the results:

 Compute node:

 root@computenode:~# netstat -an | grep 8775
 tcp    0   0 0.0.0.0:8775    0.0.0.0:*    LISTEN

 Controller:

 root@controller:~# netstat -an | grep 8775
 tcp    0   0 0.0.0.0:8775    0.0.0.0:*    LISTEN

 Additionally, I can't curl 169.254.169.254 from the compute node. I am not
 sure if this is related to not being able to ping the VM.


 curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

 Thanks for your help



 On 15 February 2013 11:03, Anil Vishnoi vishnoia...@gmail.com wrote:

 If you are using an Ubuntu cloud image then the only way to log in is
 to SSH with the public key. For that you have to create an SSH key
 pair and download the private key. You can create the key pair using
 Horizon or the CLI.


 On Fri, Feb 15, 2013 at 4:27 PM, Sylvain Bauza
 sylvain.ba...@digimind.com wrote:


 On 15/02/2013 11:42, Chathura M. Sarathchandra Magurawalage
 wrote:


 How can I log into the VM from VNC? What are the credentials?


 You have multiple ways to get VNC access. The easiest one is
 through Horizon. Another is to look at the KVM command line for
 the desired instance (on the compute node) and check the VNC
 port in use (assuming KVM as the hypervisor).
 This is basic knowledge of Nova.



 nova-api-metadata is running fine on the compute node.


 Make sure the metadata port is actually reachable, using telnet or
 netstat; nova-api can be running without listening on the metadata
 port.




 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




 -- Thanks & Regards
 --Anil Kumar Vishnoi




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Folsom] CPU mode host-model generated in libvirt.xml gives error when booting

2013-02-15 Thread Sylvain Bauza

Hi,

Nova is generating libvirt.xml for each instance with:

  <cpu mode="host-model" match="exact"/>

As a result, virsh (and nova-compute) refuses to start the instance, 
complaining:

error : internal error Cannot find suitable CPU model for given data


Libvirt is 0.9.13-0ubuntu12.2~cloud0 and kvm is qemu-1.3 (from source)

Please find attached my virsh capabilities (virsh_capabilities.txt)


I looked at my previous Essex install (with the same kvm version) and no 
<cpu> tag is given in libvirt.xml.template.
I know that libvirt.xml file generation has been rewritten in Folsom, so 
I can't see what's wrong, nor how to fix it.



Thanks,
-Sylvain
<capabilities>

  <host>
    <uuid>14ac1a75-a391-1724-d3fb-015701849cd4</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Nehalem</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='2' threads='2'/>
      <feature name='rdtscp'/>
      <feature name='pdcm'/>
      <feature name='xtpr'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='monitor'/>
      <feature name='dtes64'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
      <feature name='vme'/>
    </cpu>
    <power_management>
      <suspend_mem/>
      <suspend_disk/>
    </power_management>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <cpus num='4'>
            <cpu id='0'/>
            <cpu id='1'/>
            <cpu id='2'/>
            <cpu id='3'/>
          </cpus>
        </cell>
      </cells>
    </topology>
  </host>

  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/bin/qemu-system-x86_64</emulator>
      <machine>pc-1.3</machine>
      <machine>none</machine>
      <machine canonical='pc-1.3'>pc</machine>
      <machine>pc-1.2</machine>
      <machine>pc-1.1</machine>
      <machine>pc-1.0</machine>
      <machine>pc-0.15</machine>
      <machine>pc-0.14</machine>
      <machine>pc-0.13</machine>
      <machine>pc-0.12</machine>
      <machine>pc-0.11</machine>
      <machine>pc-0.10</machine>
      <machine>isapc</machine>
      <machine canonical='q35-next'>q35</machine>
      <machine>q35-next</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/bin/kvm</emulator>
        <machine>pc-1.3</machine>
        <machine>none</machine>
        <machine canonical='pc-1.3'>pc</machine>
        <machine>pc-1.2</machine>
        <machine>pc-1.1</machine>
        <machine>pc-1.0</machine>
        <machine>pc-0.15</machine>
        <machine>pc-0.14</machine>
        <machine>pc-0.13</machine>
        <machine>pc-0.12</machine>
        <machine>pc-0.11</machine>
        <machine>pc-0.10</machine>
        <machine>isapc</machine>
        <machine canonical='q35-next'>q35</machine>
        <machine>q35-next</machine>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <pae/>
      <nonpae/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/bin/qemu-system-x86_64</emulator>
      <machine>pc-1.3</machine>
      <machine>none</machine>
      <machine canonical='pc-1.3'>pc</machine>
      <machine>pc-1.2</machine>
      <machine>pc-1.1</machine>
      <machine>pc-1.0</machine>
      <machine>pc-0.15</machine>
      <machine>pc-0.14</machine>
      <machine>pc-0.13</machine>
      <machine>pc-0.12</machine>
      <machine>pc-0.11</machine>
      <machine>pc-0.10</machine>
      <machine>isapc</machine>
      <machine canonical='q35-next'>q35</machine>
      <machine>q35-next</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/bin/kvm</emulator>
        <machine>pc-1.3</machine>
        <machine>none</machine>
        <machine canonical='pc-1.3'>pc</machine>
        <machine>pc-1.2</machine>
        <machine>pc-1.1</machine>
        <machine>pc-1.0</machine>
        <machine>pc-0.15</machine>
        <machine>pc-0.14</machine>
        <machine>pc-0.13</machine>
        <machine>pc-0.12</machine>
        <machine>pc-0.11</machine>
        <machine>pc-0.10</machine>
        <machine>isapc</machine>
        <machine canonical='q35-next'>q35</machine>
        <machine>q35-next</machine>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>

</capabilities>


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Errno 111] Connection refused - quantum

2013-02-15 Thread Lukáš Vízner
Hi,

I have a problem with this. I installed OpenStack Folsom
successfully following
https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst

I do not know why, but after a few days, when I call some quantum command,
for example "quantum router-list", it fails with [Errno
111] Connection refused.

Does anybody know why this is?

Thank you

--
Lukáš Vízner

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Errno 111] Connection refused - quantum

2013-02-15 Thread Sylvain Bauza


On 15/02/2013 18:20, Lukáš Vízner wrote:

Hi,

I have a problem. I installed OpenStack Folsom successfully by following
https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst

I do not know why, but after a few days, any command I send to quantum,
for example quantum router-list, fails with [Errno 111] Connection
refused.

Does anybody know why this happens?



Make sure your SERVICE_ENDPOINT value is correct.
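
For instance, with token-based auth the client reads something along these
lines from the environment (host and token here are placeholders, not
values from this thread):

    export SERVICE_ENDPOINT=http://<keystone-host>:35357/v2.0/
    export SERVICE_TOKEN=<admin-token>
    quantum router-list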



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Folsom] CPU mode host-model generated in libvirt.xml gives error when booting

2013-02-15 Thread Vishvananda Ishaya
I seem to recall something similar happening when I built from source. One 
option is to update your /etc/nova/nova.conf to:

libvirt_cpu_mode=host-passthrough

Vish

On Feb 15, 2013, at 9:07 AM, Sylvain Bauza sylvain.ba...@digimind.com wrote:

 Hi,
 
 Nova is generating libvirt.xml for each instance with
 <cpu mode='host-model' match='exact'/>
 As a result, virsh (and nova-compute) refuses to start the instance,
 complaining:
 error : internal error Cannot find suitable CPU model for given data
 
 
 Libvirt is 0.9.13-0ubuntu12.2~cloud0 and kvm is qemu-1.3 (from source)
 
 Please find attached my virsh capabilities (virsh_capabilities.txt)
 
 
 I looked at my previous Essex install (with the same kvm version) and no
 cpu tag is given in libvirt.xml.template.
 I know that libvirt.xml file generation was rewritten in Folsom, so I
 can't see what's wrong, nor how to fix it.
 
 
 Thanks,
 -Sylvain
 virsh_capabilities.txt
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Folsom] CPU mode host-model generated in libvirt.xml gives error when booting

2013-02-15 Thread Sylvain Bauza

You're great. You made my day, it works.
-Sylvain

On 15/02/2013 18:31, Vishvananda Ishaya wrote:

I seem to recall something similar happening when I built from source. One 
option is to update your /etc/nova/nova.conf to:

libvirt_cpu_mode=host-passthrough

Vish

On Feb 15, 2013, at 9:07 AM, Sylvain Bauza sylvain.ba...@digimind.com wrote:


Hi,

Nova is generating libvirt.xml for each instance with
<cpu mode='host-model' match='exact'/>
As a result, virsh (and nova-compute) refuses to start the instance, complaining:
error : internal error Cannot find suitable CPU model for given data


Libvirt is 0.9.13-0ubuntu12.2~cloud0 and kvm is qemu-1.3 (from source)

Please find attached my virsh capabilities (virsh_capabilities.txt)


I looked at my previous Essex install (with the same kvm version) and no cpu
tag is given in libvirt.xml.template.
I know that libvirt.xml file generation was rewritten in Folsom, so I
can't see what's wrong, nor how to fix it.


Thanks,
-Sylvain
virsh_capabilities.txt
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Sébastien Han
Ok but why direct routing instead of NAT? If the public IPs are _only_
on LVS there is no point to use LVS-DR.

LVS has the public IPs and redirects to the private IPs, this _must_ work.

Did you try NAT? Or at least can you give it a shot?
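
For reference, the NAT setup in raw ipvsadm terms would be roughly this
(VIP and real-server addresses taken from your conf below; -m means
masquerading, i.e. NAT):

    ipvsadm -A -t 10.21.21.1:5000 -s wrr
    ipvsadm -a -t 10.21.21.1:5000 -r 10.21.0.1:5000 -m
    ipvsadm -a -t 10.21.21.1:5000 -r 10.21.0.2:5000 -m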
--
Regards,
Sébastien Han.


On Fri, Feb 15, 2013 at 3:55 PM, Samuel Winchenbach swinc...@gmail.com wrote:
 Sure...  I have undone these settings but I saved a copy:

 two hosts:
 test1 eth0: 10.21.0.1/16 eth1: 130.x.x.x/24
 test2 eth0: 10.21.0.2/16 eth1: 130.x.x.x/24

 VIP: 10.21.21.1  (just for testing; later I would add a 130.x.x.x/24 VIP
 for the public APIs)

 keystone is bound to 10.21.0.1 on test1 and 10.21.0.2 on test2

 in /etc/sysctl.conf:

    net.ipv4.conf.all.arp_ignore = 1
    net.ipv4.conf.eth0.arp_ignore = 1
    net.ipv4.conf.all.arp_announce = 2
    net.ipv4.conf.eth0.arp_announce = 2

 root# sysctl -p

 in ldirectord.cf:

    checktimeout=3
    checkinterval=5
    autoreload=yes
    logfile=/var/log/ldirectord.log
    quiescent=no

    virtual=10.21.21.1:5000
        real=10.21.0.1:5000 gate
        real=10.21.0.2:5000 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=5000

    virtual=10.21.21.1:35357
        real=10.21.0.1:35357 gate
        real=10.21.0.2:35357 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=35357

 crm shell:

    primitive p_openstack_ip ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" cidr_netmask="16" lvs_support="true"

    primitive p_openstack_ip_lo ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" nic="lo" cidr_netmask="32"

    primitive p_openstack_lvs ocf:heartbeat:ldirectord \
        op monitor interval="20" timeout="10"

    group g_openstack_ip_lvs p_openstack_ip p_openstack_lvs

    clone c_openstack_ip_lo p_openstack_ip_lo meta interleave="true"

    colocation co_openstack_lo_never_lvs -inf: c_openstack_ip_lo g_openstack_ip_lvs

 Thanks for taking a look at this.

 Sam




 On Fri, Feb 15, 2013 at 3:54 AM, Sébastien Han han.sebast...@gmail.com
 wrote:

 [...]

[Openstack] Use a running ESXi hypervisor

2013-02-15 Thread Logan McNaughton
I'm sorry if this question has been asked before:

Is it possible to add an already running ESXi hypervisor (with live VMs)
into OpenStack?

For instance, if I start a VM on ESX and install and configure nova-compute,
can the running VMs somehow be imported into OpenStack without disruption?
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Use a running ESXi hypervisor

2013-02-15 Thread Jay Pipes
No.

On 02/15/2013 02:15 PM, Logan McNaughton wrote:
 I'm sorry if this question has been asked before:
 
 Is it possible to add an already running ESXi hypervisor (with live
 VM's) into OpenStack?
 
 For instance if I start a VM on ESX and install and configure
 nova-compute, can the running VM's somehow be imported into OpenStack
 without disruption?
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Suggestions for shared-storage cluster file system

2013-02-15 Thread Sébastien Han
Hi,

Important: Mount the CephFS filesystem on the client machine, not the
 cluster machine.


It's just like NFS: if you mount an NFS export on the NFS server, you get
kernel locks.

Unfortunately, even if I love Ceph far more than the others, I won't go with
CephFS, at least not now. But if you are in a hurry and looking for a DFS then
GlusterFS seems to be a good candidate. NFS works pretty well too.

Cheers.

--
Regards,
Sébastien Han.


On Fri, Feb 15, 2013 at 4:49 PM, JuanFra Rodriguez Cardoso 
juanfra.rodriguez.card...@gmail.com wrote:

 Another one:

  - MooseFS (
 http://docs.openstack.org/trunk/openstack-compute/admin/content/installing-moosefs-as-backend.html
 )
  - GlusterFS
  - Ceph
  - Lustre

 Regards,
 JuanFra


 2013/2/15 Samuel Winchenbach swinc...@gmail.com

  Hi All,

 Can anyone give me a recommendation for a good shared-storage cluster
 filesystem?   I am running kvm-libvirt and would like to enable live
 migration.

 I have a number of hosts (up to 16) each with 2xTB drives.  These hosts
 are also my compute/network/controller nodes.

 The three I am considering are:

 GlusterFS - I have the most experience with this, and it seems the
 easiest.

 CephFS/RADOS - Interesting because glance supports the rbd backend.
 Slightly worried because of this, though: "Important: Mount the CephFS
 filesystem on the client machine, not the cluster machine." (I wish it
 said why...) and "CephFS is not quite as stable as the block device and
 the object storage gateway."

 Lustre - A little hesitant now that Oracle is involved with it.


 If anyone has any advice, or can point out another that I should consider
 it would be greatly appreciated.

 Thanks!

 Sam


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Samuel Winchenbach
I didn't give NAT a shot because it didn't seem as well documented.

I will give NAT a shot.  Will I need to enable iptables and add a rule
to the nat table?   None of the documentation mentioned that, but every
time I have done NAT I had to set up a rule like... iptables -t nat -A
POSTROUTING -o eth0 -j MASQUERADE

Thanks for helping me with this.


On Fri, Feb 15, 2013 at 2:07 PM, Sébastien Han han.sebast...@gmail.com wrote:

 Ok but why direct routing instead of NAT? If the public IPs are _only_
 on LVS there is no point to use LVS-DR.

 LVS has the public IPs and redirects to the private IPs, this _must_ work.

 Did you try NAT? Or at least can you give it a shot?
 --
 Regards,
 Sébastien Han.


 On Fri, Feb 15, 2013 at 3:55 PM, Samuel Winchenbach swinc...@gmail.com
 wrote:
  [...]

Re: [Openstack] Suggestions for shared-storage cluster file system

2013-02-15 Thread Samuel Winchenbach
Thanks,

I think I will go with GlusterFS.   MooseFS looks interesting, but
maintaining a package outside the repo/cloud archive is not something I
want to deal with.

Along the same lines...   is it possible to mount a GlusterFS volume in
pacemaker?  I have tried both ocf:heartbeat:Filesystem and
ocf:redhat:netfs.sh without much luck.   I have managed to get the service
started with upstart though.
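
(For the record, what I tried with the heartbeat agent was roughly this --
the volume and mount point names are made up:)

    primitive p_glusterfs ocf:heartbeat:Filesystem \
        op monitor interval="20" timeout="40" \
        params device="test1:/glustervol" directory="/mnt/gluster" \
            fstype="glusterfs"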

Thanks,
Sam


On Fri, Feb 15, 2013 at 2:29 PM, Sébastien Han han.sebast...@gmail.com wrote:

 Hi,


 Important: Mount the CephFS filesystem on the client machine, not the
 cluster machine.


 It's just like NFS: if you mount an NFS export on the NFS server, you get
 kernel locks.

 Unfortunately, even if I love Ceph far more than the others, I won't go with
 CephFS, at least not now. But if you are in a hurry and looking for a DFS then
 GlusterFS seems to be a good candidate. NFS works pretty well too.

 Cheers.

 --
 Regards,
 Sébastien Han.


 [...]



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Sébastien Han
Well if you follow my article, you will get LVS-NAT running. It's fairly
easy, no funky stuff. Yes you will probably need the postrouting rule, as
usual :). Let me know how it goes ;)
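
In other words, on the director, something along these lines (the interface
name is illustrative):

    sysctl -w net.ipv4.ip_forward=1   # LVS-NAT needs the director to forward
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE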

--
Regards,
Sébastien Han.


On Fri, Feb 15, 2013 at 8:51 PM, Samuel Winchenbach swinc...@gmail.com wrote:

 I didn't give NAT a shot because it didn't seem as well documented.

 I will give NAT a shot.  Will I need to enable iptables and add a rule
 to the nat table?   None of the documentation mentioned that, but every
 time I have done NAT I had to set up a rule like... iptables -t nat -A
 POSTROUTING -o eth0 -j MASQUERADE

 Thanks for helping me with this.


 [...]

Re: [Openstack] Suggestions for shared-storage cluster file system

2013-02-15 Thread JR
Is anyone using GPFS (General Parallel File System) from IBM?  It's
high-performance, POSIX-compliant, can do internal replication, etc.

To make it work, would one simply have to modify the nova-volume (or
cinder) code that creates a volume group to use the corresponding GPFS
commands?  Or are there other complexities?  Which code would I look at
to see what's involved?

JR


On 2/15/2013 2:54 PM, Samuel Winchenbach wrote:
 Thanks,
 
 I think I will go with GlusterFS.   MooseFS looks interesting, but
 maintaining a package outside the repo/cloud archive is not something I
 want to deal with.
 
 Along the same lines...   is it possible to mount a GlusterFS volume in
 pacemaker?  I have tried both ocf:heartbeat:Filesystem and
 ocf:redhat:netfs.sh without much luck.   I have managed to get the
 service started with upstart though.
 
 Thanks,
 Sam
 
 
 [...]
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Suggestions for shared-storage cluster file system

2013-02-15 Thread John Griffith
On Fri, Feb 15, 2013 at 1:08 PM, JR botem...@gmail.com wrote:

 Is there anyone using GPFS (General Parallel Filesystem) from IBM.  It's
 high performing, posix compliant, can do internal replication, etc...?

 To make it work, would one simply have to modify the nova-volume (or
 cinder) code that creates a volume-group using the corresponding GPFS
 commands?  Or, are there other complexities?  Which code would I look in
 to see what's involved?

 JR


  [...]

Don't know about folks that might have their own implementations, but
currently there is nothing in Cinder.  The closest thing to look at to get
an idea of the driver work involved is the pending Gluster driver:
https://review.openstack.org/#/c/21342/

John
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Samuel Winchenbach
Hrmmm it isn't going so well:

root@test1# ip a s dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:25:90:10:00:78 brd ff:ff:ff:ff:ff:ff
inet 10.21.0.1/16 brd 10.21.255.255 scope global eth0
inet 10.21.1.1/16 brd 10.21.255.255 scope global secondary eth0
inet 10.21.21.1/16 scope global secondary eth0
inet6 fe80::225:90ff:fe10:78/64 scope link
   valid_lft forever preferred_lft forever


root@test1# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  - RemoteAddress:Port   Forward Weight ActiveConn InActConn
TCP  10.21.21.1:5000 wlc persistent 600
  - 10.21.0.1:5000   Masq1000  1
  - 10.21.0.2:5000   Masq1000  0
TCP  10.21.21.1:35357 wlc persistent 600
  - 10.21.0.1:35357  Masq1000  0
  - 10.21.0.2:35357  Masq1000  0

root@test1# iptables -L -v -tnat
Chain PREROUTING (policy ACCEPT 283 packets, 24902 bytes)
 pkts bytes target prot opt in out source
destination

Chain INPUT (policy ACCEPT 253 packets, 15256 bytes)
 pkts bytes target prot opt in out source
destination

Chain OUTPUT (policy ACCEPT 509 packets, 37182 bytes)
 pkts bytes target prot opt in out source
destination

Chain POSTROUTING (policy ACCEPT 196 packets, 12010 bytes)
 pkts bytes target prot opt in out source
destination
  277 16700 MASQUERADE  all  --  anyeth0anywhere
anywhere

root@test1:~# export OS_AUTH_URL=http://10.21.21.1:5000/v2.0/;
root@test1:~# keystone user-list
No handlers could be found for logger keystoneclient.client
Unable to communicate with identity service: [Errno 113] No route to host.
(HTTP 400)


I still have some debugging to do with tcpdump, but I thought I would post
my initial results.


On Fri, Feb 15, 2013 at 2:56 PM, Sébastien Han han.sebast...@gmail.com wrote:

 Well if you follow my article, you will get LVS-NAT running. It's fairly
 easy, no funky stuff. Yes you will probably need the postrouting rule, as
 usual :). Let me know how it goes ;)

 --
 Regards,
 Sébastien Han.


 [...]

[Openstack] Initial quantum network state broken

2013-02-15 Thread Greg Chavez
Sigh.  So I abandoned RHEL 6.3, rekicked my systems and set up the
scale-ready installation described in these instructions:

https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst

Basically:

(o) controller node on a mgmt and public net
(o) network node (quantum and openvs) on a mgmt, net-config, and public net
(o) compute node is on a mgmt and net-config net

Took me just over an hour and ran into only a few easily-fixed speed bumps.
 But the VM networks are totally non-functioning.  VMs launch but no
network traffic can go in or out.

I'm particularly befuddled by these problems:

(1) This error in nova-compute:

ERROR nova.network.quantumv2 [-] _get_auth_token() failed
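
(For comparison, nova-compute authenticates to quantum with settings along
these lines in nova.conf -- values here are illustrative, not mine:)

    quantum_url=http://<quantum-host>:9696
    quantum_auth_strategy=keystone
    quantum_admin_tenant_name=service
    quantum_admin_username=quantum
    quantum_admin_password=<service-password>
    quantum_admin_auth_url=http://<keystone-host>:35357/v2.0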

(2) No NAT rules on the compute node, which probably explains why the VMs
complain about not finding a network or being able to get metadata from
169.254.169.254.

root@kvm-cs-sn-10i:~# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N nova-api-metadat-OUTPUT
-N nova-api-metadat-POSTROUTING
-N nova-api-metadat-PREROUTING
-N nova-api-metadat-float-snat
-N nova-api-metadat-snat
-N nova-compute-OUTPUT
-N nova-compute-POSTROUTING
-N nova-compute-PREROUTING
-N nova-compute-float-snat
-N nova-compute-snat
-N nova-postrouting-bottom
-A PREROUTING -j nova-api-metadat-PREROUTING
-A PREROUTING -j nova-compute-PREROUTING
-A OUTPUT -j nova-api-metadat-OUTPUT
-A OUTPUT -j nova-compute-OUTPUT
-A POSTROUTING -j nova-api-metadat-POSTROUTING
-A POSTROUTING -j nova-compute-POSTROUTING
-A POSTROUTING -j nova-postrouting-bottom
-A nova-api-metadat-snat -j nova-api-metadat-float-snat
-A nova-compute-snat -j nova-compute-float-snat
-A nova-postrouting-bottom -j nova-api-metadat-snat
-A nova-postrouting-bottom -j nova-compute-snat

(3) And lastly, no default secgroup rules, whose function governs... what
exactly?  Connections to the VM's public or private IPs?  I guess I'm just
not sure if this is relevant to my overall problem of ZERO VM network
connectivity.
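
(For context, the default security group blocks all ingress until rules are
added; the usual ping/SSH pair would be something like the following, with
an illustrative CIDR:)

    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
    nova secgroup-add-rule default tcp 22 22 0.0.0.0/0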

I seek guidance please.  Thanks.


-- 
\*..+.-
--Greg Chavez
+//..;};
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] OpenStack Community Weekly Newsletter (Feb 8 – 15)

2013-02-15 Thread Stefano Maffulli


   Highlights of the week


 Important CLA changes coming in 10 days
 http://markmail.org/message/azrdwianmnt2j5oc

Starting on February 24, 2013 all contributors MUST review and agree to 
the new OpenStack Individual Contributor License Agreement and provide 
updated contact information at 
https://review.openstack.org/#/settings/agreements. On that day the 
Gerrit interface will be changing to present the new CLA text referring 
to the OpenStack Foundation, and will prompt you to agree to it there. 
Any previous agreement with OpenStack LLC will be marked expired at that 
time. The text of the new agreement is available for your convenience 
https://review.openstack.org/static/cla.html (just changes “LLC” to 
“Foundation” and corrects a few typographical errors). You must also 
sign up for an OpenStack Foundation Individual Membership with the same 
E-mail address as used for your Gerrit contact information: 
http://openstack.org/register/.



 OpenStack Object Storage (aka Swift) for new contributors
 http://swiftstack.com/blog/2013/12/03/swift-for-new-contributors/

As a developer, jumping into a mature codebase can be somewhat daunting. 
How is the code structured? What is the request flow? What’s the process 
for getting my changes contributed upstream? Find answers to these 
questions on this post by SwiftStack.



 Evolution of the incubation process
 
http://lists.openstack.org/pipermail/openstack-tc/2013-February/000119.html

The Technical Committee approved a set of changes to the incubation 
process, the process through which a project becomes part of the 
co-ordinated, integrated OpenStack release. One of the visible change is 
the switch from using the term “Core projects” to “Integrated”.



 Upstream University at the OpenStack summit
 http://dachary.org/?p=1846

Upstream University is organizing a session 
http://upstream-university.org/apply/ in advance of the next OpenStack 
summit http://www.openstack.org/summit/portland-2013/, in Portland. If 
you can fly in two days ahead of the event to spend the weekend 
improving your OpenStack contribution skills, please consider submitting 
an application http://upstream-university.org/apply/ to attend the 
workshop.



 Python trademark at risk in Europe: Python Foundation needs your
 help
 
http://pyfound.blogspot.co.uk/2013/02/python-trademark-at-risk-in-europe-we.html?m=1

For anyone who works in a company that has an office in a EU Community 
member state, the Python Software Foundation needs your help. There is a 
company in the UK that is trying to trademark the use of the term 
“Python” for all software, services, servers… pretty much anything 
having to do with a computer. The PSF is asking a letter on company 
letterhead to forward to their EU counsel. More details on the PSF News blog: 
http://pyfound.blogspot.co.uk/2013/02/python-trademark-at-risk-in-europe-we.html?m=1.



 Report of Openstack project on SF State University campus
 http://commons.sfsu.edu/report-openstack-project-campus

Two students (Brandon Lai and Pascal Schuele) under supervision of prof. 
Sameer Verma worked on exploring the cloud computing space in Fall 2012. 
They built a demo/prototype of a private cloud platform on campus and 
presented at the end of the semester. Prof. Verma hopes to continue to 
expand this project in Spring 2013.



   Security Advisories

 * CVE-2013-0247 : Keystone denial of service through invalid token
   requests
   
http://secstack.org/2013/02/cve-2013-0247-keystone-denial-of-service-through-invalid-token-requests/


   Tips and Tricks

 * By Matthias Runge http://www.matthias-runge.de/: How to create a
   custom theme for Horizon
   
http://www.matthias-runge.de/2013/02/15/how-to-create-a-custom-theme-for-horizon/
 * By Julien Danjou http://julien.danjou.info/blog/: Cloud tools for
   Debian http://julien.danjou.info/blog/2013/cloud-init-utils-debian
 * By Derek Higgins http://goodsquishy.com/: Looking for a Fedora 18
   qcow2 image to use on openstack
   http://goodsquishy.com/2013/02/fedora-18-qcow2-image/


   Upcoming Events

 * Second Swiss OpenStack User Group Meeting
   http://www.meetup.com/zhgeeks/events/97648722/ Feb 19, 2013 –
   Zurich, Switzerland Details
   http://www.meetup.com/zhgeeks/events/97648722/
 * SCALE 11x https://www.socallinuxexpo.org/scale11x/ Feb 22 – 24,
   2013 – Los Angeles, CA Details
   https://www.socallinuxexpo.org/scale11x/
 * OpenStack Delhi NCR Meetup
   http://www.meetup.com/Indian-OpenStack-User-Group/events/102301202/ Feb
   22, 2013 – India Details
   http://www.meetup.com/Indian-OpenStack-User-Group/events/102301202/
 * OpenStack in Production at Scale
   http://www.meetup.com/meetup-group-NjZdcegA/events/103564202/ Feb
   28, 2013 – Chicago, IL Details
   http://www.meetup.com/meetup-group-NjZdcegA/events/103564202/
 * Pulse Open Cloud Summit
   

[Openstack] Python-novaclient version 2.11.1

2013-02-15 Thread Vishvananda Ishaya
Hi Everyone,

I pushed another version of python novaclient (2.11.1) to pypi[1]. There was a 
bug[2] with using the gnome keyring that was affecting some users, so the only 
change from 2.11.0 is the inclusion of a fix for the bug.
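
Upgrading from PyPI is the usual:

    pip install --upgrade python-novaclient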

[1] http://pypi.python.org/pypi/python-novaclient/
[2] https://bugs.launchpad.net/python-novaclient/+bug/1116302


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Samuel Winchenbach
Well I got it to work.  I was being stupid, and forgot to change over the
endpoints in keystone.

One thing I find interesting is that if I call keystone user-list from
test1 it _always_ sends the request to test2 and vice versa.

Also I did not need to add the POSTROUTING rule... I am not sure why.
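
For anyone hitting the same thing, re-pointing an endpoint at the VIP looks
roughly like this with the Folsom-era keystone client (service id elided):

    keystone endpoint-create --region RegionOne \
      --service-id <identity-service-id> \
      --publicurl http://10.21.21.1:5000/v2.0 \
      --adminurl http://10.21.21.1:35357/v2.0 \
      --internalurl http://10.21.21.1:5000/v2.0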


On Fri, Feb 15, 2013 at 3:44 PM, Samuel Winchenbach swinc...@gmail.com wrote:

 [...]

Re: [Openstack] Optionally force instances to stay put on resize

2013-02-15 Thread Michael J Fork
Adding general and operators for additional feedback.

Michael J Fork/Rochester/IBM wrote on 02/15/2013 10:59:46 AM:

 From: Michael J Fork/Rochester/IBM
 To: openstack-...@lists.openstack.org,
 Date: 02/15/2013 10:59 AM
 Subject: Optionally force instances to stay put on resize

 The patch for the configurable-resize-placement blueprint (https://
 blueprints.launchpad.net/nova/+spec/configurable-resize-placement)
 has generated a discussion on the review boards and needed to be
 brought to the mailing list for broader feedback.

 tl;dr would others find useful the addition of a new config option
 resize_to_same_host with values allow, require, forbid that
 deprecates allow_resize_to_same_host (functionality equivalent to
 allow and forbid in resize_to_same_host)?  Existing use cases
 and default behaviors are retained unchanged.  The new use case is
 resize_to_same_host = require retains the exact same external API
 semantics and would make it such that no user actions can cause a VM 
 migration (and the network traffic with it).  An administrator can
 still perform a manual migration that would allow a subsequent
 resize to succeed.  This patch would be most useful in environments
 with 1GbE or with large ephemeral disks.
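
 i.e., in nova.conf the proposed option would read:

     # one of: allow, forbid, require
     resize_to_same_host = require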

 Blueprint  Description

  Currently OpenStack has a boolean allow_resize_to_same_host
  config option that constrains
  placement during resize. When this value is false, the
  ignore_hosts option is passed to the scheduler.
  When this value is true, no options are passed to the scheduler
  and the current host can be
  considered. In some use cases - e.g. PowerVM - a third option of
  'require same host' is desirable.
 
  This blueprint will deprecate the allow_resize_to_same_host
  config option and replace it with
  resize_to_same_host that supports 3 values - allow, forbid,
  require. Allow is equivalent to true in the
  current use case (i.e. not scheduler hint, current host is
  considered), forbid to false in current use case
  (i.e. the ignore_hosts scheduler hint is set), and require forces
  the same host through the use of the
  force_hosts scheduler hint.

 To avoid incorrectly paraphrasing others, the review comments
 against the change are below in their entirety followed by my
 comments to those concerns.  The question we are looking to answer -
 would others find this function useful and / or believe that
 OpenStack should have this option?

 Comments from https://review.openstack.org/#/c/21139/:

  I still think this is a bad idea. The only reason the flag was
  there in the first place was so we could
  run tempest on devstack in the gate and test resize. Semantically
  this changes the meaning of resize
  in a way that I don't think should be done.

  I understand what the patch does, and I even think it appears to
  be functionally correct based on
  what the intention appears to be. However, I'm not convinced that
  the option is a useful addition.
 
  First, it really just doesn't seem in the spirit of OpenStack or
  cloud to care this much about where
  the instance goes like this. The existing option was only a hack
  for testing, not something expected
  for admins to care about.
 
  If this really *is* something admins need to care about, I'd like
  to better understand why. Further, if
  that's the case, I'm not sure a global config option is the right
  way to go about it. I think it may make
  more sense to have this be API driven. I'd like to see some
  thoughts from others on this point.

  I completely agree with the spirit of cloud argument. I further
  think that exposing anything via the
  API that would support this (i.e. giving the users control or even
  indication of where their instance lands)
  is a dangerous precedent to set.
 
  I tend to think that this use case is so small and specialized,
  that it belongs in some other sort of policy
  implementation, and definitely not as yet-another-config-option to
  be exposed to the admins. That, or in
  some other project entirely :)

 and my response to those concerns:

  I agree this is not an 80% use case, or probably even that popular
  in the other 20%, but resize today
  is the only user facing API that can trigger the migration of a VM
  to a new machine. In some environments,
  this network traffic is undesirable - especially 1GBe - and may
  want to be explicitly controlled by an
  Administrator. In this implementation, an Admin can still invoke a
  migration manually to allow the resize to
  succeed. I would point to the Island work by Sina as an example; they
  wrote an entire Cinder driver designed to minimize network traffic.
 
  I agree with the point above that exposing this on an end-user API
  is not correct, users should not know
  or care where this goes. However, as the cloud operator, I should
  be able to have that level of control
  and this puts it in their hands.
 
  Obviously this option would need documented to allow
  administrators to decide if they need to change it,
  

Re: [Openstack] [openstack-dev] Optionally force instances to stay put on resize

2013-02-15 Thread Michael Basnight

On Feb 15, 2013, at 9:35 PM, Michael J Fork wrote:

 Adding general and operators for additional feedback.
 
 Michael J Fork/Rochester/IBM wrote on 02/15/2013 10:59:46 AM:
 
  From: Michael J Fork/Rochester/IBM
  To: openstack-...@lists.openstack.org, 
  Date: 02/15/2013 10:59 AM
  Subject: Optionally force instances to stay put on resize
  
   The patch for the configurable-resize-placement blueprint 
   (https://blueprints.launchpad.net/nova/+spec/configurable-resize-placement) 
   has generated a discussion on the review board and needs to be 
   brought to the mailing list for broader feedback.
  
   tl;dr: would others find useful the addition of a new config option 
   resize_to_same_host with values allow, require, and forbid that 
   deprecates allow_resize_to_same_host (whose true/false values are 
   functionally equivalent to allow and forbid)?  Existing use cases 
   and default behaviors are retained unchanged.  The new use case, 
   resize_to_same_host = require, retains the exact same external API 
   semantics and makes it such that no user action can cause a VM 
   migration (and the network traffic that comes with it).  An administrator 
   can still perform a manual migration that would allow a subsequent 
   resize to succeed.  This patch would be most useful in environments 
   with 1GbE or with large ephemeral disks. 
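
   A minimal nova.conf sketch of the proposed setting (the option name and 
   values come from the blueprint; placing it in [DEFAULT] is an assumption):

       [DEFAULT]
       # Old boolean, deprecated by this blueprint:
       # allow_resize_to_same_host = false
       # New three-valued option; 'require' pins resizes to the source host.
       resize_to_same_host = require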
  
   Blueprint & Description
  
    Currently OpenStack has a boolean allow_resize_to_same_host 
    config option that constrains
    placement during resize. When this value is false, the 
    ignore_hosts option is passed to the scheduler. 
    When this value is true, no options are passed to the scheduler 
    and the current host can be
    considered. In some use cases - e.g. PowerVM - a third option of 
    'require same host' is desirable.
  
    This blueprint will deprecate the allow_resize_to_same_host 
    config option and replace it with 
    resize_to_same_host, which supports 3 values - allow, forbid, 
    require. Allow is equivalent to true in the
    current use case (i.e. no scheduler hint, current host is 
    considered), forbid to false in the current use case 
    (i.e. the ignore_hosts scheduler hint is set), and require forces 
    the same host through the use of the
    force_hosts scheduler hint.
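
    A short Python sketch of that allow/forbid/require mapping (a 
    hypothetical helper, not the actual nova code; only the option values 
    and the ignore_hosts/force_hosts hint names come from the blueprint):

        def apply_resize_placement(filter_properties, current_host, mode):
            """Translate resize_to_same_host into scheduler hints."""
            if mode == 'forbid':
                # Old allow_resize_to_same_host=False behavior: the
                # scheduler must never consider the source host.
                filter_properties.setdefault('ignore_hosts', []).append(current_host)
            elif mode == 'require':
                # New behavior: only the source host may be chosen.
                filter_properties['force_hosts'] = [current_host]
            elif mode != 'allow':
                raise ValueError("resize_to_same_host must be allow, forbid or require")
            # 'allow': no hint; any host, including the current one, may be chosen.
            return filter_properties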
  
   To avoid incorrectly paraphrasing others, the review comments 
   against the change are reproduced below in their entirety, followed by my 
   responses to those concerns.  The question we are looking to answer:
   would others find this function useful and/or believe that 
   OpenStack should have this option?
  
  Comments from https://review.openstack.org/#/c/21139/:
  
   I still think this is a bad idea. The only reason the flag was 
   there in the first place was so we could 
   run tempest on devstack in the gate and test resize. Semantically 
   this changes the meaning of resize
   in a way that I don't think should be done.
  
   I understand what the patch does, and I even think it appears to 
   be functionally correct based on
   what the intention appears to be. However, I'm not convinced that 
   the option is a useful addition.
  
   First, it really just doesn't seem in the spirit of OpenStack or 
   cloud to care this much about where 
   the instance goes like this. The existing option was only a hack 
   for testing, not something expected 
   for admins to care about.
  
   If this really *is* something admins need to care about, I'd like 
   to better understand why. Further, if 
   that's the case, I'm not sure a global config option is the right 
   way to go about it. I think it may make 
   more sense to have this be API driven. I'd like to see some 
   thoughts from others on this point.
  
    I completely agree with the "spirit of cloud" argument. I further
    think that exposing anything via the 
    API that would support this (i.e. giving the users control or even
    an indication of where their instance lands) 
    is a dangerous precedent to set.
  
    I tend to think that this use case is so small and specialized 
    that it belongs in some other sort of policy 
   implementation, and definitely not as yet-another-config-option to
   be exposed to the admins. That, or in 
   some other project entirely :)
  
  and my response to those concerns:
  
    I agree this is not an 80% use case, or probably even that popular
    in the other 20%, but resize today 
    is the only user-facing API that can trigger the migration of a VM
    to a new machine. In some environments, 
    this network traffic is undesirable - especially on 1GbE - and may 
    need to be explicitly controlled by an 
    Administrator. In this implementation, an Admin can still invoke a
    migration manually to allow the resize to 
    succeed. I would point to the Island work by Sina as an example: 
    they wrote an entire Cinder driver 
    designed to minimize network traffic.
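
    Sketched with the Grizzly-era python-novaclient bindings (credentials, 
    endpoint and instance name below are placeholders, not from this thread):

        from novaclient.v1_1 import client

        # Placeholder admin credentials for an arbitrary deployment.
        nova = client.Client('admin', 'secret', 'admin-tenant',
                             'http://keystone.example.com:5000/v2.0/')
        server = nova.servers.find(name='busy-vm')  # hypothetical instance
        # An admin-driven cold migration moves the VM off its current host,
        # so a later user resize can succeed even when the deployment sets
        # resize_to_same_host = require.
        nova.servers.migrate(server)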
  
    I agree with the point above that exposing this on an end-user API
    is not correct; users should not know 
    or care where this goes. 

[Openstack-ubuntu-testing-notifications] Build Failure: cloud-archive_grizzly_proposed_deploy #1

2013-02-15 Thread openstack-testing-bot
Title: cloud-archive_grizzly_proposed_deploy
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/cloud-archive_grizzly_proposed_deploy/1/
Project: cloud-archive_grizzly_proposed_deploy
Date of build: Fri, 15 Feb 2013 07:01:14 -0500
Build duration: 20 min
Build cause: Started by user James Page
Built on: master
Health Report: Build stability: All recent builds failed. Score: 0
Changes: No Changes
Build Artifacts: logs/test-11.os.magners.qa.lexington-log.tar.gz
Console Output [...truncated 5519 lines...] (repetitive per-host "Archiving logs" / "Grabbing information" INFO records omitted):
ERROR:root:Coult not create tarball of logs on test-04.os.magners.qa.lexington
ERROR:root:Unable to get information from test-07.os.magners.qa.lexington
ERROR:root:Unable to get information from test-12.os.magners.qa.lexington
ERROR:root:Unable to get information from test-08.os.magners.qa.lexington
ERROR:root:Unable to get information from test-09.os.magners.qa.lexington
ERROR:root:Unable to get information from test-04.os.magners.qa.lexington
ERROR:root:Unable to get information from test-03.os.magners.qa.lexington
ERROR:root:Unable to get information from test-06.os.magners.qa.lexington
ERROR:root:Unable to get information from test-10.os.magners.qa.lexington
ERROR:root:Unable to get information from test-02.os.magners.qa.lexington
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/jenkins-scripts/collate-test-logs.py", line 88, in 
    connections[host]["sftp"].close()
KeyError: 'sftp'
+ [[ 1 != 0 ]]
+ exit 1
Build step 'Execute shell' marked build as failure
Archiving artifacts
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp
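
The recurring KeyError above comes from unconditionally closing a per-host SFTP
session that was never opened. A guarded cleanup along these lines would avoid
it (a sketch only; the 'connections' dict mirrors what the traceback suggests
collate-test-logs.py keeps per host, everything else is assumed):

    def close_connections(connections):
        # Hosts whose setup failed earlier never get an "sftp" entry,
        # which is exactly what raises KeyError: 'sftp' in the log above.
        for host, conn in connections.items():
            sftp = conn.get("sftp")
            if sftp is not None:
                sftp.close()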


[Openstack-ubuntu-testing-notifications] Build Still Failing: cloud-archive_grizzly_proposed_deploy #2

2013-02-15 Thread openstack-testing-bot
Title: cloud-archive_grizzly_proposed_deploy
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/cloud-archive_grizzly_proposed_deploy/2/
Project: cloud-archive_grizzly_proposed_deploy
Date of build: Fri, 15 Feb 2013 08:04:54 -0500
Build duration: 16 min
Build cause: Started by user James Page
Built on: master
Health Report: Build stability: All recent builds failed. Score: 0
Changes: No Changes
Build Artifacts: log tarballs for test-02, test-03, test-04, test-06, test-07, test-08, test-09, test-11 and test-12.os.magners.qa.lexington
Console Output [...truncated 2721 lines...] (repetitive per-host "Archiving logs" / "Grabbing information" INFO records omitted):
WARNING:paramiko.transport:Oops, unhandled type 3
ERROR:root:Coult not create tarball of logs on test-05.os.magners.qa.lexington
ERROR:root:Unable to get information from test-05.os.magners.qa.lexington
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/jenkins-scripts/collate-test-logs.py", line 88, in 
    connections[host]["sftp"].close()
KeyError: 'sftp'
+ [[ 1 != 0 ]]
+ exit 1
Build step 'Execute shell' marked build as failure
Archiving artifacts
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_cinder_trunk #145

2013-02-15 Thread openstack-testing-bot
Title: precise_grizzly_cinder_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_cinder_trunk/145/
Project: precise_grizzly_cinder_trunk
Date of build: Fri, 15 Feb 2013 17:02:11 -0500
Build duration: 4 min 17 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 1 out of the last 5 builds failed. Score: 80
Changes: "Allow create_volume() to retry when exception happened" by zhiteng.huang (edits setup.py and cinder scheduler/volume modules and their tests; adds cinder/scheduler/filters/retry_filter.py)
Console Output [...truncated 5297 lines...] (sbuild summary, full packaging command log and apport excepthook noise omitted):
Distribution: precise-grizzly
Fail-Stage: build
Status: attempted
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a128.ga1deb1c+git201302151702~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a128.ga1deb1c+git201302151702~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_glance_trunk #120

2013-02-15 Thread openstack-testing-bot
Title: precise_grizzly_glance_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_glance_trunk/120/
Project: precise_grizzly_glance_trunk
Date of build: Fri, 15 Feb 2013 19:31:09 -0500
Build duration: 10 min
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 2 out of the last 5 builds failed. Score: 60
Changes:
- "Updated_at not being passed to db in image create" by iccha.sethi (edits glance/db/__init__.py, glance/tests/unit/test_db.py)
- "Add migration.py based on the one in nova." by treinish (adds glance/db/migration.py)
- "Fix issues with migration 012" by review (edits glance/db/sqlalchemy/migrate_repo/versions/012_id_to_uuid.py)
Console Output [...truncated 5781 lines...] (full packaging command log and apport excepthook noise omitted):
ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'glance_2013.1.a98.g19be6cc+git201302151931~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'glance_2013.1.a98.g19be6cc+git201302151931~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Failure: raring_grizzly_glance_trunk #134

2013-02-15 Thread openstack-testing-bot
Title: raring_grizzly_glance_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_glance_trunk/134/
Project: raring_grizzly_glance_trunk
Date of build: Fri, 15 Feb 2013 19:31:09 -0500
Build duration: 15 min
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 3 out of the last 5 builds failed. Score: 40
Changes:
- "Updated_at not being passed to db in image create" by iccha.sethi (edits glance/db/__init__.py, glance/tests/unit/test_db.py)
- "Add migration.py based on the one in nova." by treinish (adds glance/db/migration.py)
- "Fix issues with migration 012" by review (edits glance/db/sqlalchemy/migrate_repo/versions/012_id_to_uuid.py)
Console Output [...truncated 6893 lines...] (full packaging command log and apport excepthook noise omitted):
ERROR:root:Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'glance_2013.1.a98.g19be6cc+git201302151931~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'glance_2013.1.a98.g19be6cc+git201302151931~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Failure: raring_grizzly_deploy #9

2013-02-15 Thread openstack-testing-bot
Title: raring_grizzly_deploy
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_deploy/9/
Project: raring_grizzly_deploy
Date of build: Fri, 15 Feb 2013 20:48:38 -0500
Build duration: 20 min
Build cause: Started by command line by jenkins
Built on: master
Health Report: Build stability: 2 out of the last 5 builds failed. Score: 60
Changes: No Changes
Console Output [...truncated 5637 lines...] (per-host SSH setup, "Archiving logs" and "Grabbing information" INFO records omitted; no errors logged):
+ exit 1
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_glance_trunk #135

2013-02-15 Thread openstack-testing-bot
Title: raring_grizzly_glance_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_glance_trunk/135/
Project: raring_grizzly_glance_trunk
Date of build: Fri, 15 Feb 2013 21:32:12 -0500
Build duration: 17 min
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 3 out of the last 5 builds failed. Score: 40
Changes: "Adding image members in glance v2 api" by iccha.sethi (adds glance/api/v2/image_members.py, glance/tests/unit/v2/test_image_members_resource.py; edits the v2 router, domain, gateway, authorization and related tests)
Console Output [...truncated 6936 lines...] (full packaging command log and apport excepthook noise omitted):
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'glance_2013.1.a100.gea5b2b0+git201302152132~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_glance_trunk #121

2013-02-15 Thread openstack-testing-bot
Title: precise_grizzly_glance_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_glance_trunk/121/
Project: precise_grizzly_glance_trunk
Date of build: Fri, 15 Feb 2013 21:40:17 -0500
Build duration: 11 min
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 2 out of the last 5 builds failed. Score: 60
Changes: "Adding image members in glance v2 api" by iccha.sethi (adds glance/api/v2/image_members.py, glance/tests/unit/v2/test_image_members_resource.py; edits the v2 router, domain, gateway, authorization and related tests)
Console Output [...truncated 5824 lines...] (full packaging command log and apport excepthook noise omitted):
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'glance_2013.1.a100.gea5b2b0+git201302152140~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_deploy #10

2013-02-15 Thread openstack-testing-bot
Title: raring_grizzly_deploy
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_deploy/10/
Project: raring_grizzly_deploy
Date of build: Fri, 15 Feb 2013 21:42:41 -0500
Build duration: 19 min
Build cause: Started by command line by jenkins
Built on: master
Health Report: Build stability: 2 out of the last 5 builds failed. Score: 60
Changes: No Changes
Console Output [...truncated 5219 lines...] (per-host SSH setup, "Archiving logs" and "Grabbing information" INFO records omitted; no errors logged):
+ exit 1
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: folsom_coverage #471

2013-02-15 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/folsom_coverage/471/

--
Started by command line by jenkins
Building on master in workspace http://10.189.74.7:8080/job/folsom_coverage/ws/
No emails were triggered.
[workspace] $ /bin/bash -x /tmp/hudson380615350009477590.sh
+ juju status
2013-02-15 22:02:09,129 INFO Connecting to environment...
2013-02-15 22:02:10,263 INFO Connected to environment.
2013-02-15 22:02:11,327 INFO 'status' command finished successfully
machines:
  0:
    agent-state: running
    dns-name: test-11.os.magners.qa.lexington
    instance-id: /MAAS/api/1.0/nodes/node-a144bb1e-1ddf-11e2-89df-d4bed9a84493/
    instance-state: unknown
  1715:
    agent-state: running
    dns-name: test-12.os.magners.qa.lexington
    instance-id: /MAAS/api/1.0/nodes/node-a22a5e62-1ddf-11e2-80fb-d4bed9a84493/
    instance-state: unknown
  1716:
    agent-state: running
    dns-name: test-04.os.magners.qa.lexington
    instance-id: /MAAS/api/1.0/nodes/node-955ea44a-1ddf-11e2-89df-d4bed9a84493/
    instance-state: unknown
  1717:
    agent-state: running
    dns-name: test-05.os.magners.qa.lexington
    instance-id: /MAAS/api/1.0/nodes/node-95a20aa0-1ddf-11e2-89df-d4bed9a84493/
    instance-state: unknown
  1718:
    agent-state: running
    dns-name: test-02.os.magners.qa.lexington
    instance-id: /MAAS/api/1.0/nodes/node-e2318cde-1dde-11e2-80fb-d4bed9a84493/
    instance-state: unknown
  1719:
    agent-state: running
    dns-name: test-06.os.magners.qa.lexington
    instance-id: /MAAS/api/1.0/nodes/node-993ffdac-1ddf-11e2-80fb-d4bed9a84493/
    instance-state: unknown
  1720:
    agent-state: running
    dns-name: test-07.os.magners.qa.lexington
    instance-id: /MAAS/api/1.0/nodes/node-9b255c8e-1ddf-11e2-89df-d4bed9a84493/
    instance-state: unknown
  1721:
    agent-state: not-started
    dns-name: test-09.os.magners.qa.lexington
    instance-id: /MAAS/api/1.0/nodes/node-9e8dd04a-1ddf-11e2-89df-d4bed9a84493/
    instance-state: unknown
  1722:
    agent-state: not-started
    dns-name: test-08.os.magners.qa.lexington
    instance-id: /MAAS/api/1.0/nodes/node-9d23a6b2-1ddf-11e2-80fb-d4bed9a84493/
    instance-state: unknown
  1723:
    agent-state: not-started
    dns-name: test-03.os.magners.qa.lexington
    instance-id: /MAAS/api/1.0/nodes/node-94fb3cde-1ddf-11e2-80fb-d4bed9a84493/
    instance-state: unknown
services:
  ceph:
    charm: local:raring/ceph-93
    relations:
      mon:
      - ceph
    units:
      ceph/99:
        agent-state: install-error
        machine: 1717
        public-address: test-05.os.magners.qa.lexington
  cinder:
    charm: local:raring/cinder-1003
    relations: {}
    units:
      cinder/95:
        agent-state: pending
        machine: 1722
        public-address: null
  glance:
    charm: local:raring/glance-1004
    relations: {}
    units:
      glance/95:
        agent-state: pending
        machine: 1723
        public-address: null
  keystone:
    charm: local:raring/keystone-1004
    relations: {}
    units:
      keystone/98:
        agent-state: pending
        machine: 1718
        public-address: test-02.os.magners.qa.lexington
  mysql:
    charm: local:raring/mysql-165
    relations: {}
    units:
      mysql/95:
        agent-state: pending
        machine: 1720
        public-address: test-07.os.magners.qa.lexington
  nova-cloud-controller:
    charm: local:raring/nova-cloud-controller-1004
    relations: {}
    units:
      nova-cloud-controller/95:
        agent-state: pending
        machine: 1716
        public-address: test-04.os.magners.qa.lexington
  nova-compute:
    charm: local:raring/nova-compute-1006
    relations:
      compute-peer:
      - nova-compute
    units:
      nova-compute/95:
        agent-state: pending
        machine: 1715
        public-address: test-12.os.magners.qa.lexington
  openstack-dashboard:
    charm: local:raring/openstack-dashboard-1003
    relations: {}
    units:
      openstack-dashboard/95:
        agent-state: pending
        machine: 1721
        public-address: null
  rabbitmq:
    charm: local:raring/rabbitmq-server-37
    relations:
      cluster:
      - rabbitmq
    units:
      rabbitmq/95:
        agent-state: pending
        machine: 1719
        public-address: test-06.os.magners.qa.lexington
+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/inspect_environment.sh
Inspecting deployed environment.
No handlers could be found for logger keystoneclient.client
Authorization Failed: Unable to communicate with identity service: [Errno 111] Connection refused. (HTTP 400)
No handlers could be found for logger keystoneclient.client
Authorization Failed: Unable to communicate with identity service: [Errno 111] Connection refused. (HTTP 400)
No handlers could be found for logger keystoneclient.client
Authorization Failed: Unable to communicate with identity service: [Errno 111] Connection refused. (HTTP 400)
Writing envrc
ERROR:root:Could not setup SSH connection to None

[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_cinder_trunk #146

2013-02-15 Thread openstack-testing-bot
Title: precise_grizzly_cinder_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_cinder_trunk/146/
Project: precise_grizzly_cinder_trunk
Date of build: Fri, 15 Feb 2013 23:31:08 -0500
Build duration: 4 min 29 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 2 out of the last 5 builds failed. Score: 60
Changes:
- "Create a RemoteFsDriver class" by eharney (edits cinder/tests/test_nfs.py, cinder/volume/drivers/nfs.py)
- "Adding support for Coraid AoE SANs Appliances." by jean-baptiste.ransy (edits cinder/volume/volume_types.py; adds cinder/volume/drivers/coraid.py, cinder/tests/test_coraid.py)
Console Output [...truncated 5332 lines...] (sbuild summary, full packaging command log and apport excepthook noise omitted):
Fail-Stage: build
Status: attempted
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a132.g3be9c5c+git201302152331~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a132.g3be9c5c+git201302152331~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_cinder_trunk #147

2013-02-15 Thread openstack-testing-bot
Title: precise_grizzly_cinder_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_cinder_trunk/147/
Project: precise_grizzly_cinder_trunk
Date of build: Sat, 16 Feb 2013 01:31:08 -0500
Build duration: 4 min 20 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 3 out of the last 5 builds failed. Score: 40
Changes: "Add an update option to run_tests.sh" by treinish (edits run_tests.sh)
Console Output [...truncated 5334 lines...] (sbuild summary, full packaging command log and apport excepthook noise omitted):
Fail-Stage: build
Status: attempted
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a134.gee59cc0+git201302160131~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a134.gee59cc0+git201302160131~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_cinder_trunk #149

2013-02-15 Thread openstack-testing-bot
Title: raring_grizzly_cinder_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_cinder_trunk/149/
Project: raring_grizzly_cinder_trunk
Date of build: Sat, 16 Feb 2013 01:31:09 -0500
Build duration: 6 min 14 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 3 out of the last 5 builds failed. Score: 40
Changes: "Add an update option to run_tests.sh" by treinish (edits run_tests.sh)
Console Output [...truncated 6251 lines...] (sbuild summary, full packaging command log and apport excepthook noise omitted):
Fail-Stage: build
Status: attempted
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'cinder_2013.1.a134.gee59cc0+git201302160131~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'cinder_2013.1.a134.gee59cc0+git201302160131~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure