Re: [Openstack] [Quantum] Scalable agents

2012-07-23 Thread Dan Wendlandt
On Sun, Jul 22, 2012 at 5:51 AM, Gary Kotton  wrote:

>
>
> This is an interesting idea. In addition to the creation we will also need
> the update. I would prefer that the agents would have one topic - that is
> for all updates. When an agent connects to the plugin it will register the
> type of operations that are supported on the specific agent. The agent
> operations can be specified as bit masks.
>
> I have implemented something similar in
> https://review.openstack.org/#/c/9591
>
> This can certainly be improved and optimized. What are your thoughts?
>
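To make the bit-mask idea concrete, capability registration could look roughly like this (a sketch with illustrative names, not the actual code in the review):

```python
# Each supported operation gets one bit; an agent registers the OR of the
# operations it can handle when it connects to the plugin.
OP_PORT_CREATE = 1 << 0
OP_PORT_UPDATE = 1 << 1
OP_NET_CREATE = 1 << 2
OP_NET_DELETE = 1 << 3


def supports(agent_ops, op):
    """True if the agent registered support for the given operation."""
    return bool(agent_ops & op)


# e.g. a Linux bridge agent interested only in port updates and network
# deletions would register:
lb_agent_ops = OP_PORT_UPDATE | OP_NET_DELETE
```

The plugin can then consult the mask before publishing, instead of broadcasting every operation to every agent.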

Based on your follow-up emails, I think we're now thinking similarly about
this.  Just to be clear though, for updates I was talking about a different
topic for each entity that has its own UUID (e.g., topic
port-update-f01c8dcb-d9c1-4bd6-9101-1924790b4b45)
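In other words, the update topic is derived from the entity's UUID, so an agent subscribes only to the objects it actually hosts. A sketch (illustrative, not actual Quantum code):

```python
def entity_topic(resource, operation, entity_id):
    """Build a per-entity RPC topic, e.g. 'port-update-<uuid>'."""
    return '%s-%s-%s' % (resource, operation, entity_id)


# An agent hosting a given port would declare a consumer on exactly this
# topic rather than filtering a single shared update topic:
topic = entity_topic('port', 'update',
                     'f01c8dcb-d9c1-4bd6-9101-1924790b4b45')
```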


>
> In addition to this we have a number of issues where the plugin does not
> expose the information via the standard API's - for example the VLAN tag
> (this is being addressed via extensions in the provider networks feature)
>

Agreed.  There are a couple of options here: direct DB access (no polling,
just direct fetching), admin API extensions, or custom RPC calls.  Each has
pluses and minuses.  Perhaps my real goal here would be better described as
"if there's an existing plugin-agnostic way of doing X, our strong bias
should be to use it until presented with concrete evidence to the
contrary".  For example, should a DHCP client create a port for the DHCP
server via the standard API, or via a custom API or direct DB access?  My
strong bias would be toward using the standard API.


> 3. Logging. At the moment the agents do not have a decent logging
> mechanism. This makes debugging the RPC code terribly difficult. This was
> scheduled for F-3. I'll be happy to add this if there are no objections.
>

That sounds valuable.


> 4. We need to discuss the notifications that Yong added and how these two
> methods can interact together. More specifically I think that we need to
> address the configuration files.
>

Agreed.  I think we need to decide on this at monday's IRC meeting, so we
can move forward.  Given F-3 deadlines, I'm well aware that I'll have to be
pragmatic here :)


>
> The RPC code requires that the eventlet monkey patch be set. This caused
> havoc when I was using the events from pyudev for new device creation. At
> the moment I have moved the event-driven support to polling (if anyone who
> reads this is familiar with the issue or has an idea on how to address it,
> any help would be great)
>

Sorry, wish I could help, but I'm probably in the same boat as you on this
one.

I'm going to make sure we have a good chunk of time to discuss this during
the IRC meeting on monday (sorry, I know that's late night for you...).

Dan




>
> Thanks
> Gary
>
>  Dan
>
>
>
>  ~~~
> Dan Wendlandt
> Nicira, Inc: www.nicira.com
> twitter: danwendlandt
> ~~~
>
>
>


-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] [Quantum] Scalable agents

2012-07-23 Thread Salvatore Orlando
On 23 July 2012 09:02, Dan Wendlandt  wrote:

>
>
> On Sun, Jul 22, 2012 at 5:51 AM, Gary Kotton  wrote:
>
>>
>>
>> This is an interesting idea. In addition to the creation we will also
>> need the update. I would prefer that the agents would have one topic - that
>> is for all updates. When an agent connects to the plugin it will register
>> the type of operations that are supported on the specific agent. The agent
>> operations can be specified as bit masks.
>>
>> I have implemented something similar in
>> https://review.openstack.org/#/c/9591
>>
>> This can certainly be improved and optimized. What are your thoughts?
>>
>
> Based on your follow-up emails, I think we're now thinking similarly about
> this.  Just to be clear though, for updates I was talking about a different
> topic for each entity that has its own UUID (e.g., topic
> port-update-f01c8dcb-d9c1-4bd6-9101-1924790b4b45)
>


From my limited experience with RPC, I have never seen per-object topics as
we are proposing here. Nevertheless, I think they're a good idea, and I am
not aware of any reason why this should impact the scalability of the
underlying message queue.


>
>>
>> In addition to this we have a number of issues where the plugin does not
>> expose the information via the standard API's - for example the VLAN tag
>> (this is being addressed via extensions in the provider networks feature)
>>
>
> Agreed.  There are a couple of options here: direct DB access (no polling,
> just direct fetching), admin API extensions, or custom RPC calls.  Each has
> pluses and minuses.  Perhaps my real goal here would be better described as
> "if there's an existing plugin-agnostic way of doing X, our strong bias
> should be to use it until presented with concrete evidence to the
> contrary".  For example, should a DHCP client create a port for the DHCP
> server via the standard API, or via a custom API or direct DB access?  My
> strong bias would be toward using the standard API.
>

I totally agree with this approach. Should we be presented with "evidence
to the contrary", I would then use API extensions first and, only if
necessary, custom RPC calls. If we end up in a situation where we feel we
need direct DB access, I would say we are in a very bad place and need to
go back to the drawing board!


>
>
>> 3. Logging. At the moment the agents do not have a decent logging
>> mechanism. This makes debugging the RPC code terribly difficult. This was
>> scheduled for F-3. I'll be happy to add this if there are no objections.
>>
>
> That sounds valuable.
>
>
>> 4. We need to discuss the notifications that Yong added and how these two
>> methods can interact together. More specifically I think that we need to
>> address the configuration files.
>>
>
> Agreed.  I think we need to decide on this at monday's IRC meeting, so we
> can move forward.  Given F-3 deadlines, I'm well aware that I'll have to be
> pragmatic here :)
>

I believe Yong stated in a different thread (or in the code review
discussion) that his notification mechanism was trying to address a
somewhat different use case. Given the looming deadline, I would discuss in
today's (or tomorrow's, for the non-Euro netstackers') meeting whether
there is any major reason the two patches cannot live together, and then
proceed to merge both. When planning Grizzly we can look back at them and
see if and how these mechanisms could be merged.


>
>>
>> The RPC code requires that the eventlet monkey patch be set. This caused
>> havoc when I was using the events from pyudev for new device creation. At
>> the moment I have moved the event-driven support to polling (if anyone who
>> reads this is familiar with the issue or has an idea on how to address it,
>> any help would be great)
>>
>
> Sorry, wish I could help, but I'm probably in the same boat as you on this
> one.
>

I am afraid I cannot be of great help either, but there's a high chance the
nova+libvirt developers have already faced and solved this issue.


>
> I'm going to make sure we have a good chunk of time to discuss this during
> the IRC meeting on monday (sorry, I know that's late night for you...).
>
> Dan
>
>
>
>
>>
>> Thanks
>> Gary
>>
>>  Dan
>>
>>
>>
>>  ~~~
>> Dan Wendlandt
>> Nicira, Inc: www.nicira.com
>> twitter: danwendlandt
>> ~~~
>>
>>
>>
>
>
> --
> ~~~
> Dan Wendlandt
> Nicira, Inc: www.nicira.com
> twitter: danwendlandt
> ~~~
>
>
> ___
> OpenStack-dev mailing list
> openstack-...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[Openstack] [openstack] nova-compute always dead on RHEL6.1

2012-07-23 Thread 延生 付
 
Dear all,
 
When I deploy nova-compute from the EPEL repository, I find that
openstack-nova-compute is always dead but the pid file exists, and no
log file is generated in /var/log/nova.
The OS is RHEL 6.1. The nova.conf was copied from the controller node.

[root@comp02-r11 nova]# service openstack-nova-compute status
openstack-nova-compute dead but pid file exists
 
Does anybody have clues? Thanks in advance.

Regards,


Will


Re: [Openstack] inter-tenant and VM-to-bare-metal communication policies/restrictions.

2012-07-23 Thread Christian Parpart
On Fri, Jul 6, 2012 at 6:39 AM, romi zhang  wrote:

>  I am also very interested in this, and am also trying to find a way to
> forbid talking between VMs on the same compute+network node. :)
>
>
> Romi
>
> *From:* openstack-bounces+romizhang1968=163@lists.launchpad.net [mailto:
> openstack-bounces+romizhang1968=163@lists.launchpad.net] *on behalf of* Christian
> Parpart
> *Sent:* Thursday, July 5, 2012, 23:48
> *To:* 
> *Subject:* [Openstack] inter-tenant and VM-to-bare-metal communication
> policies/restrictions.
>
>
> Hi all,
>
> I am running multiple compute nodes and a single nova-network node, that
> is to act as a central gateway for the tenant's VMs.
>
> However, since this nova-network node (of course) knows all routes, every
> VM of any tenant can talk to each other, including to the physical nodes,
> which I highly disagree with and would like to restrict. :-)
>
> root@gw1:~# ip route show
>
> default via $UPLINK_IP dev eth1  metric 100 
>
> 10.10.0.0/19 dev eth0  proto kernel  scope link  src 10.10.30.5 
>
> 10.10.40.0/21 dev br100  proto kernel  scope link  src 10.10.40.1 
>
> 10.10.48.0/24 dev br101  proto kernel  scope link  src 10.10.48.1 
>
> 10.10.49.0/24 dev br102  proto kernel  scope link  src 10.10.49.1 
>
> $PUBLIC_NET/28 dev eth1  proto kernel  scope link  src $PUBLIC_IP
>
> 192.168.0.0/16 dev eth0  proto kernel  scope link  src 192.168.2.1
>
>
> - 10.10.0.0/19 is the network for bare metal nodes, switches, PDUs, etc.
> - 10.10.40.0/21 (br100) is the "production" tenant
> - 10.10.48.0/24 (br101) is the "staging" tenant
> - 10.10.49.0/24 (br102) is the "playground" tenant
> - 192.168.0.0/16 is the legacy network (management and VM nodes)
>
> No tenant's VM shall be able to talk to a VM of another tenant.
>
> And ideally no tenant's VM should be able to talk to the management
>
> network either.
>
>
> Unfortunately, since we're migrating a live system, and we also have
> production services on the bare-metal nodes, I had to add special routes
> to allow the legacy installations to communicate with the new "production"
> VMs for the transition phase. I hope I can remove that ASAP.
>
> Now, checking iptables on the nova-network node:
>
>
>
> root@gw1:~# iptables -t filter -vn -L FORWARD
>
> Chain FORWARD (policy ACCEPT 64715 packets, 13M bytes)
>
>  pkts bytes target prot opt in out source
> destination 
>
>   36M   29G nova-filter-top  all  --  *  *   0.0.0.0/0
> 0.0.0.0/0   
>
>   36M   29G nova-network-FORWARD  all  --  *  *   0.0.0.0/0
>  0.0.0.0/0   
>
>
>
> root@gw1:~# iptables -t filter -vn -L nova-filter-top
>
> Chain nova-filter-top (2 references)
>
>  pkts bytes target prot opt in out source
> destination 
>
>   36M   29G nova-network-local  all  --  *  *   0.0.0.0/0
>0.0.0.0/0   
>
>
>
> root@gw1:~# iptables -t filter -vn -L nova-network-local
>
> Chain nova-network-local (1 references)
>
>  pkts bytes target prot opt in out source
> destination   
>
>   
>
> root@gw1:~# iptables -t filter -vn -L nova-network-FORWARD
>
> Chain nova-network-FORWARD (1 references)
>
>  pkts bytes target prot opt in out source
> destination 
>
> 0 0 ACCEPT all  --  br102  *   0.0.0.0/0
> 0.0.0.0/0   
>
> 0 0 ACCEPT all  --  *  br102   0.0.0.0/0
> 0.0.0.0/0   
>
> 0 0 ACCEPT udp  --  *  *   0.0.0.0/0
>  10.10.49.2   udp dpt:1194
>
>   18M   11G ACCEPT all  --  br100  *   0.0.0.0/0
> 0.0.0.0/0   
>
>   18M   18G ACCEPT all  --  *  br100   0.0.0.0/0
> 0.0.0.0/0   
>
> 0 0 ACCEPT udp  --  *  *   0.0.0.0/0
>  10.10.40.2   udp dpt:1194
>
>  106K   14M ACCEPT all  --  br101  *   0.0.0.0/0
> 0.0.0.0/0   
>
> 79895   23M ACCEPT all  --  *  br101   0.0.0.0/0
> 0.0.0.0/0   
>
> 0 0 ACCEPT udp  --  *  *   0.0.0.0/0
>  10.10.48.2   udp dpt:1194
>
>
>
> Now I see that all traffic from tenant "staging" (br101), for example,
> is allowed from/to any destination (-j ACCEPT).
>
> I'd propose to reduce this to the public gateway interface (eth1 in my
> case), and to make that value configurable in the nova.conf file.
>
>
> Is there anything else I might have overlooked to disallow inter-tenant
> communication and tenant-VM-to-bare-metal communication?
>
>
>
> Many thanks in advance,
>
> Christian Parpart.
>
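To make the proposal concrete, the blanket per-bridge ACCEPT rules above could be replaced by something like the following (a sketch only; interface names match the setup above, and a real patch would belong in nova-network):

```python
def forward_rules(bridge, gateway_iface='eth1'):
    """Generate restrictive FORWARD rules for one tenant bridge.

    Instead of the blanket "-i br101 -j ACCEPT" seen above, only allow
    traffic between the tenant bridge and the public gateway interface,
    and drop anything else the bridge tries to forward (e.g. traffic
    toward other tenant bridges or the management network).
    """
    return [
        '-A nova-network-FORWARD -i %s -o %s -j ACCEPT' % (bridge, gateway_iface),
        '-A nova-network-FORWARD -i %s -o %s -j ACCEPT' % (gateway_iface, bridge),
        '-A nova-network-FORWARD -i %s -j DROP' % bridge,
        '-A nova-network-FORWARD -o %s -j DROP' % bridge,
    ]
```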

Am I (almost) the only one interested in disallowing inter-tenant
communication, or am I overlooking something in the docs? :-(

Re: [Openstack] inter-tenant and VM-to-bare-metal communication policies/restrictions.

2012-07-23 Thread Wolfgang Hennerbichler

On 07/23/2012 10:49 AM, Christian Parpart wrote:

> Am I (almost) the only one interested in disallowing inter-tenant
> communication, or am I overlooking something in the docs? :-(

I do have the same need, but I'm still fighting with other issues, so
I've not reached the point to bitch about it :)

In my small world, using VLANs as separators and a (dedicated,
non-openstack-aware) firewall would be ideal.

> Christian.

Wolfgang







--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz

IT-Center
Softwarepark 35
4232 Hagenberg
Austria

Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbich...@risc-software.at
http://www.risc-software.at



Re: [Openstack] When are the IRC meetings for Nova, Glance and Swift?

2012-07-23 Thread Thierry Carrez
Sheng Bo Hou wrote:
> From this link http://wiki.openstack.org/Meetings/, I cannot find when
> the IRC meetings for Nova, Glance and Swift will be.
> Can someone tell me when these meetings take place?

There is no regular meeting specifically scheduled for any of those
projects; they hold exceptional meetings from time to time. Some Nova
subteams organize regular meetings, which are listed on the page.

Note that we get regular progress reports for all core projects during
the weekly project/release meeting.

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] Improving logging

2012-07-23 Thread Thierry Carrez
Michael Still wrote:
> On 21/07/12 00:08, Jay Pipes wrote:
> 
>> Not that I've seen, but I think it would be good to standardize on one.
>> How about just "ops"?
> 
> Works for me.

Added to http://wiki.openstack.org/BugTags and as official tag for all
core projects.

Cheers,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] [Quantum] Scalable agents

2012-07-23 Thread Gary Kotton

On 07/23/2012 11:02 AM, Dan Wendlandt wrote:



On Sun, Jul 22, 2012 at 5:51 AM, Gary Kotton wrote:




This is an interesting idea. In addition to the creation we will
also need the update. I would prefer that the agents would have
one topic - that is for all updates. When an agent connects to the
plugin it will register the type of operations that are supported
on the specific agent. The agent operations can be specified as bit
masks.

I have implemented something similar in
https://review.openstack.org/#/c/9591

This can certainly be improved and optimized. What are your thoughts?


Based on your follow-up emails, I think we're now thinking similarly 
about this.  Just to be clear though, for updates I was talking about 
a different topic for each entity that has its own UUID (e.g., topic 
port-update-f01c8dcb-d9c1-4bd6-9101-1924790b4b45)


Printout from the RabbitMQ queues (this is for the Linux bridge agent,
where at the moment the port update and network deletion are of interest,
unless we decide to change the way in which the agent is implemented):


openstack@openstack:~/devstack$ sudo rabbitmqctl list_queues
Listing queues ...
q-agent-network-update  0
q-agent-network-update.10351797001a4a231279  0
q-plugin  0
q-agent-port-update.10351797001a4a231279  0

10351797001a4a231279 == the IP and MAC of the host



In addition to this we have a number of issues where the plugin
does not expose the information via the standard API's - for
example the VLAN tag (this is being addressed via extensions in
the provider networks feature)


Agreed.  There are a couple of options here: direct DB access (no
polling, just direct fetching), admin API extensions, or custom RPC
calls.  Each has pluses and minuses.  Perhaps my real goal here would
be better described as "if there's an existing plugin-agnostic way of
doing X, our strong bias should be to use it until presented with
concrete evidence to the contrary".  For example, should a DHCP
client create a port for the DHCP server via the standard API, or via
a custom API or direct DB access?  My strong bias would be toward
using the standard API.


Good question. I think that if the standard APIs can be used then we
should go for it. The problem is that these require additional configuration.



3. Logging. At the moment the agents do not have a decent logging
mechanism. This makes debugging the RPC code terribly difficult.
This was scheduled for F-3. I'll be happy to add this if there are
no objections.


That sounds valuable.


Hopefully I'll be able to find some time for this.


4. We need to discuss the notifications that Yong added and how
these two methods can interact together. More specifically I think
that we need to address the configuration files.


Agreed.  I think we need to decide on this at monday's IRC meeting, so 
we can move forward.  Given F-3 deadlines, I'm well aware that I'll 
have to be pragmatic here :)



The RPC code requires that the eventlet monkey patch be set. This
caused havoc when I was using the events from pyudev for new device
creation. At the moment I have moved the event-driven support to
polling (if anyone who reads this is familiar with the issue or
has an idea on how to address it, any help would be great)


Sorry, wish I could help, but I'm probably in the same boat as you on 
this one.


I have a solution that works. In the long term it would be better if 
this was event driven. This all depends on how the discussions above 
play out.
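For the record, the polling fallback amounts to roughly the following (a simplified sketch, not the actual agent code):

```python
import os


def list_tap_devices(sys_net='/sys/class/net'):
    """Snapshot of tap devices, read from sysfs instead of pyudev events."""
    try:
        return set(d for d in os.listdir(sys_net) if d.startswith('tap'))
    except OSError:
        # Path missing (e.g. non-Linux test box): treat as no devices.
        return set()


def diff_devices(previous, current):
    """Return (added, removed) device sets between two snapshots."""
    return current - previous, previous - current

# The agent loop periodically calls list_tap_devices(), diffs against the
# previous snapshot, and treats additions as device-creation events.
```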
I'm going to make sure we have a good chunk of time to discuss this 
during the IRC meeting on monday (sorry, I know that's late night for 
you...).


:). Tomorrow is jet lag day!


Dan



Thanks
Gary


Dan


~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com 
twitter: danwendlandt
~~~






--
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com 
twitter: danwendlandt
~~~





Re: [Openstack] How to use Cloudpipe

2012-07-23 Thread Razique Mahroua
Hi Kevin, I just added the extra instructions for connecting to your
cloudpipe project using an OpenVPN client.
Cheers
Nuage & Co - Razique Mahroua razique.mahr...@gmail.com

On 18 July 2012 at 16:18, Razique Mahroua wrote:

Hi Kevin, just submitted a patch here for the doc:
https://review.openstack.org/#/c/9965/1
It explains how to create an image; the next update will present how to use
it (retrieve the OpenVPN client, connecting to the instance, and so on).
Nuage & Co - Razique Mahroua razique.mahr...@gmail.com

On 17 July 2012 at 12:16, Kevin Jackson wrote:

Hi All,

So I've been looking at CloudPipe and have got to a stage where I can
successfully (I presume) create a CloudPipe image and it launches. But now
what? :)

My understanding is that you now execute, from your desktop, openvpn
against the public IP of the CloudPipe image, which then allows you access
to the previously unrouted private (fixed) network range. But what settings
do you use against this?

Under the old deprecated auth, nova-manage packaged up the relevant config
file as part of the project. Under Keystone there isn't that notion of
projects. As much as I love the much improved docs on creating the image, I
don't see that final step on how to use it once it's running.

I did try to take a template that was buried deep in the Python
dist-packages [using Ubuntu 12.04] area and filled in some details - but it
suggests that this (being a template) should create the final file for a
user to use. This seemed to try to connect and didn't complete [kept
retrying], so I'm not sure if it was the process I used to create the image
or configuration issues that caused my problem.

Cheers,
Kev

On 21 June 2012 17:23, Anne Gentle wrote:
The docs team is on it!

https://bugs.launchpad.net/openstack-manuals/+bug/1015937

I <3 the docs team.

Anne

On Thu, Jun 21, 2012 at 3:00 AM, Sébastien Han  wrote:
> Hi,
>
> The official doc needs to be updated in some places. If you want to make
> this compatible with Ubuntu 12.04, you can check my article
> here: http://www.sebastien-han.fr/blog/2012/06/20/setup-cloud-pipe-vpn-in-openstack/
> and the fork of the mirantis
> repo: https://github.com/leseb/cloudpipe-image-auto-creation/blob/master/cloudpipeconf.sh
>
> I will also try to update the OpenStack wiki ASAP.
>
> Cheers.
>
>
> On Thu, Jun 21, 2012 at 7:01 AM, Atul Jha  wrote:
>>
>> Hi Naveen,
>> 
>> From: openstack-bounces+atul.jha=csscorp@lists.launchpad.net
>> [openstack-bounces+atul.jha=csscorp@lists.launchpad.net] on behalf of
>> Naveen Kuna [naveen.k...@oneconvergence.com]
>> Sent: Thursday, June 21, 2012 8:22 AM
>> To: openstack@lists.launchpad.net
>> Subject: [Openstack] How to use Cloudpipe
>>
>> Hi All,
>>
>> Can anyone help me in making cloudpipe image and how to use cloudpipe
>> image for VPN service ?
>>
>>
>> http://docs.openstack.org/trunk/openstack-compute/admin/content/cloudpipe-per-project-vpns.html

>>
>> Please go through documentation pages next time onwards before asking
>> questions which are already easily available.
>>
>> Thanks in Advance
>>
>> Regards,
>> Naveen
>>
>> Cheers!!
>>
>> Atul
>> http://www.csscorp.com/common/email-disclaimer.php
>>
>
>
>
>

--
Kevin Jackson
@itarchitectkev



Re: [Openstack] [openstack] nova-compute always dead on RHEL6.1

2012-07-23 Thread Pádraig Brady
On 07/23/2012 09:44 AM, 延生 付 wrote:
>  
> Dear all,
>  
> When I deploy nova-compute from the EPEL repository, I find that
> openstack-nova-compute is always dead but the pid file exists, and no
> log file is generated in /var/log/nova.
> The OS is RHEL 6.1. The nova.conf was copied from the controller node.
>  
> [root@comp02-r11 nova]# service openstack-nova-compute status
> openstack-nova-compute dead but pid file exists
>  
> Does anybody have clues? Thanks in advance.

RHEL6.2 is the first version targeted by the EPEL packages,
though others have successfully used 6.1.
One thing to consider is upgrading libvirt.
Strange you don't get anything in the logs.
Perhaps you could run manually to debug:

/usr/bin/nova-compute --config-file /etc/nova/nova.conf

cheers,
Pádraig.



[Openstack] High Available queues in rabbitmq

2012-07-23 Thread Alessandro Tagliapietra
Hi guys,

just an idea: I'm deploying OpenStack and trying to make it HA.
The missing piece is RabbitMQ, which can easily be started in active/active
mode, but the queues need to be declared with an x-ha-policy entry:
http://www.rabbitmq.com/ha.html
It would be nice to add a config entry to be able to declare the queues in
that way.
If someone knows where to edit the OpenStack code, please let me know;
otherwise I'll try to do that myself in the next few weeks.
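As a rough sketch (policy names are from the RabbitMQ HA docs; the helper name and config plumbing are illustrative), the extra declare arguments would be:

```python
def ha_queue_arguments(policy='all', nodes=None):
    """Extra AMQP queue-declare arguments for RabbitMQ mirrored queues.

    Per http://www.rabbitmq.com/ha.html: 'all' mirrors a queue to every
    node in the cluster; 'nodes' mirrors it to an explicit node list.
    """
    if policy == 'all':
        return {'x-ha-policy': 'all'}
    if policy == 'nodes':
        return {'x-ha-policy': 'nodes',
                'x-ha-policy-params': list(nodes or [])}
    raise ValueError('unknown HA policy: %r' % policy)

# These arguments would then be passed wherever the RPC layer declares its
# queues, e.g. kombu's Queue(..., queue_arguments=ha_queue_arguments()),
# ideally driven by a new config option.
```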

Any feedback is appreciated.

Best Regards

Alessandro


Re: [Openstack] [HPC] BoF at SC12

2012-07-23 Thread John Paul Walters
Hi Lorin,

Thanks for the follow-up.  I'm perfectly happy to go the OpenStack-specific
route, but I haven't received much feedback from the OpenStack community.  It
would be helpful if we could get some sense of community interest (and
likelihood of attending) to accompany our submission.  What do others think?
Would others be interested in attending?

JP


On Jul 22, 2012, at 9:12 PM, Lorin Hochstein wrote:

> On Jul 6, 2012, at 1:28 PM, John Paul Walters wrote:
> 
>> I'm strongly considering putting together a proposal for a BoF (birds of a 
>> feather) session at this year's Supercomputing in Salt Lake City.  For those 
>> of you who are likely to attend, is anyone else interested?  It's not a huge 
>> amount of time invested on my end to put together the proposal, but I'd like 
>> to gauge the community interest before doing so.  I would likely broaden 
>> things a bit from being exclusively Openstack and instead turn it into more 
>> of an HPC in the Cloud session so that we could, perhaps, take some input 
>> from other HPC cloud projects.   The submissions are due July 31, so we've 
>> got a little bit of time, but not too much.  Anyone else interested?
>> 
>> best,
>> JP
> 
> 
> JP:
> 
> I think this is a great idea; we were thinking about proposing it if
> nobody else did. I would suggest making it OpenStack-specific, since there
> was  an "HPC in the Cloud" BoF last year 
> (http://sc11.supercomputing.org/schedule/event_detail.php?evid=bof140), and 
> they'll probably re-apply this year as well. I think we can get critical mass 
> for an OpenStack BoF.
> 
> Along these lines: Chris Hoge from U. Oregon gave a talk last week at OSCON 
> about their use of OpenStack on HPC 
> http://www.oscon.com/oscon2012/public/schedule/detail/24261
> 
> (There are some good slides attached to that web page)
> 
> Take care,
> 
> Lorin
> --
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com
> 
> 
> 



Re: [Openstack] Re: Keystone client could not behave well, call for help

2012-07-23 Thread Adam Young

On 07/22/2012 09:12 AM, 延生 付 wrote:

reply: 'HTTP/1.1 503 Service Unavailable\r\n'


This seems to be the main problem.  The error message "string indices
must be integers, not str" seems to be a bug in trying to parse the
error page.
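The bug pattern is likely along these lines (a guess from the message, not the actual keystoneclient code): the 503 body is plain text, but the error handler indexes it as if it were a parsed JSON dict:

```python
import json


def extract_error(body):
    """Mimic error parsing that assumes the body is a parsed JSON dict."""
    return body['error']['message']


# A JSON error response parses to a dict, so this path works:
ok = extract_error(json.loads('{"error": {"message": "boom"}}'))

# But a plain-text 503 page stays a str, and indexing a str with a str
# key raises "TypeError: string indices must be integers":
try:
    extract_error('503 Service Unavailable')
except TypeError:
    pass
```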


Re: [Openstack] Fw:Re: Questions about ceilometer

2012-07-23 Thread Doug Hellmann
On Wed, Jul 18, 2012 at 7:51 AM, 张家龙  wrote:

>
> Hi all,
> Now I have modified the file ceilometer/collector/manager.py as in the
> previous mail sent by John HTran. However, there are still errors in my
> environment.
> The errors follow:
>
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/eventlet/hubs/poll.py", line 97,
> in wait
> readers.get(fileno, noop).cb(fileno)
>   File "/usr/lib/python2.6/site-packages/eventlet/green/select.py", line
> 48, in on_read
> current.switch(([original], [], []))
>
>   File "/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line
> 192, in main
> result = function(*args, **kwargs)
>   File "/usr/lib/python2.6/site-packages/nova/service.py", line 101, in
> run_server
> server.start()
>   File "/usr/lib/python2.6/site-packages/nova/service.py", line 162, in
> start
> self.manager.init_host()
>   File
> "/usr/lib/python2.6/site-packages/ceilometer-0-py2.6.egg/ceilometer/collector/manager.py",
> line 75, in init_host
> self.connection.create_worker(
>   File "/usr/lib/python2.6/site-packages/nova/rpc/amqp.py", line 132, in
> __getattr__
> return getattr(self.connection, key)
> AttributeError: 'Connection' object has no attribute 'create_worker'
> Removing descriptor: 10
>
> Is there any one can help me ? Thanks .
>

Ah, the nova service management code we are importing uses the older RPC
library, which does not have the worker feature I added in Folsom for
ceilometer. We have a ticket open to address this (
https://bugs.launchpad.net/ceilometer/+bug/1024093) by moving that service
code into openstack.common, where we will be able to use it safely.
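The shape of that AttributeError follows from the wrapper's `__getattr__` delegation visible in the traceback — roughly (a simplified sketch, not the actual nova code):

```python
class OldConnection(object):
    """Stand-in for the older RPC Connection: it has no create_worker()."""


class ConnectionContext(object):
    """Wrapper that proxies unknown attributes to the raw connection."""

    def __init__(self, connection):
        self.connection = connection

    def __getattr__(self, key):
        # This is why the traceback blames 'Connection' rather than the
        # wrapper: the attribute lookup falls through to the wrapped object.
        return getattr(self.connection, key)


conn = ConnectionContext(OldConnection())
error = ''
try:
    conn.create_worker
except AttributeError as exc:
    error = str(exc)
```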

Doug


> --
> Best Regards
>
> ZhangJialong
> 
>
> -- Original --
> *From:* "John HTran"
> *Date:* Wed, Jul 18, 2012 01:01 AM
> *To:* "张家龙"
> *Cc:* "openstack"
> *Subject:* Re: [Openstack] Questions about ceilometer
>
> That URL works for me.  Anyhow, here is the patch:
>
>
> https://review.openstack.org/gitweb?p=stackforge/ceilometer.git;a=commitdiff;h=2b41a361b83140c1ebabcd3e15dff7502cbaecb6;hp=5affdd159a08f81b33a595fa51ed0cb63aaa70f2
>
> diff --git a/ceilometer/collector/manager.py b/ceilometer/collector/manager.py
> index 82f164f..0cc220d 100644
> --- a/ceilometer/collector/manager.py
> +++ b/ceilometer/collector/manager.py
> @@ -66,7 +66,7 @@ class CollectorManager(manager.Manager):
>          # invocation protocol (they do not include a "method"
>          # parameter).
>          self.connection.declare_topic_consumer(
> -            topic='%s.info' % flags.FLAGS.notification_topics[0],
> +            topic='%s.info' % cfg.CONF.notification_topics[0],
>              callback=self.compute_handler.notify)
>
> On Mon, Jul 16, 2012 at 7:41 PM, 张家龙  wrote:
>
>> Hi Doug,
>>    It's bad news that the patch (
>> https://bugs.launchpad.net/ceilometer/+bug/1024563) has been removed;
>> the page shows "page not found".
>>    Anyway, thanks for your help.
>>
>> --
>> Best Regards
>>
>> ZhangJialong

Re: [Openstack] best practices for merging common into specific projects

2012-07-23 Thread Doug Hellmann
On Wed, Jul 18, 2012 at 7:00 PM, Thierry Carrez wrote:

> Mark McLoughlin wrote:
> >> Making our multiple projects converge onto consolidated and
> >> well-accepted APIs is a bit painful work, but it is a prerequisite to
> >> turning openstack-common into a proper library (or set of libraries).
> >>
> >> I'd say the whole thing suffers from not having a proper
> >> team/leader/coordinator dedicated to it: relying on existing,
> >> overstretched PTLs to lead that effort might not be the fastest path.
> >
> > While I was on vacation, I read in the weekly newsletter:
> >
> >   "It developed into a request for leadership for openstack-common"
> >
> > and was like "WTF do you call the work that e.g. I, Jason, Russell and
> > Doug have been doing?"
> >
> > But I see your point is a little different - you feel there should be an
> > elected/appointed "PTL without a PPB vote" or whatever to represent the
> > project. I guess that could help clarify things since it's what folks
> > are used to with other projects.
>
> Right. So far we said that openstack-common was driven by "all the
> PTLs", but that didn't prove particularly fast and efficient. Having a
> clear face associated with it, someone specific taking the "lead" on
> this project, will, I think, help a bit in getting to the next step.


Sorry if this rekindles old arguments, but could someone summarize the
reasons for an openstack-common "PTL" without voting rights? I would have
defaulted to giving them a vote *especially* because the code in common is,
well, common to all of the projects.

Doug
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-23 Thread Thierry Carrez
Doug Hellmann wrote:
> 
> On Wed, Jul 18, 2012 at 7:00 PM, Thierry Carrez  > wrote:
> 
> Mark McLoughlin wrote:
> >> Making our multiple projects converge onto consolidated and
> >> well-accepted APIs is a bit painful work, but it is a prerequisite to
> >> turning openstack-common into a proper library (or set of libraries).
> >>
> >> I'd say the whole thing suffers from not having a proper
> >> team/leader/coordinator dedicated to it: relying on existing,
> >> overstretched PTLs to lead that effort might not be the fastest path.
> >
> > While I was on vacation, I read in the weekly newsletter:
> >
> >   "It developed into a request for leadership for openstack-common"
> >
> > and was like "WTF do you call the work that e.g. I, Jason, Russell and
> > Doug have been doing?"
> >
> > But I see your point is a little different - you feel there should
> be an
> > elected/appointed "PTL without a PPB vote" or whatever to
> represent the
> > project. I guess that could help clarify things since it's what folks
> > are used to with other projects.
> 
> Right. So far we said that openstack-common was driven by "all the
> PTLs", but that didn't prove particularly fast and efficient. Having a
> clear face associated with it, someone specific taking the "lead" on
> this project, will, I think, help a bit in getting to the next step.
> 
> 
> Sorry if this rekindles old arguments, but could someone summarize the
> reasons for an openstack-common "PTL" without voting rights? I would
> have defaulted to giving them a vote *especially* because the code in
> common is, well, common to all of the projects.

So far, the PPB considered openstack-common to be driven by "all PTLs",
so it didn't have a specific PTL.

As far as future governance is concerned (technical committee of the
Foundation), openstack-common would technically be considered a
supporting library (rather than a core project) -- those can have leads,
but those do not get granted an automatic TC seat.

[ Avoiding the need to distinguish between "worthy" and "unworthy"
projects leads was one of the many reasons why I preferred the TC to be
completely directly-elected. ]

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] best practices for merging common into specific projects

2012-07-23 Thread Doug Hellmann
On Mon, Jul 23, 2012 at 12:00 PM, Thierry Carrez wrote:

> Doug Hellmann wrote:
> >
> > On Wed, Jul 18, 2012 at 7:00 PM, Thierry Carrez  > > wrote:
> >
> > Mark McLoughlin wrote:
> > >> Making our multiple projects converge onto consolidated and
> > >> well-accepted APIs is a bit painful work, but it is a
> prerequisite to
> > >> turning openstack-common into a proper library (or set of
> libraries).
> > >>
> > >> I'd say the whole thing suffers from not having a proper
> > >> team/leader/coordinator dedicated to it: relying on existing,
> > >> overstretched PTLs to lead that effort might not be the fastest
> path.
> > >
> > > While I was on vacation, I read in the weekly newsletter:
> > >
> > >   "It developed into a request for leadership for openstack-common"
> > >
> > > and was like "WTF do you call the work that e.g. I, Jason, Russell
> and
> > > Doug have been doing?"
> > >
> > > But I see your point is a little different - you feel there should
> > be an
> > > elected/appointed "PTL without a PPB vote" or whatever to
> > represent the
> > > project. I guess that could help clarify things since it's what
> folks
> > > are used to with other projects.
> >
> > Right. So far we said that openstack-common was driven by "all the
> > PTLs", but that didn't prove particularly fast and efficient. Having
> a
> > clear face associated with it, someone specific taking the "lead" on
> > this project, will, I think, help a bit in getting to the next step.
> >
> >
> > Sorry if this rekindles old arguments, but could someone summarize the
> > reasons for an openstack-common "PTL" without voting rights? I would
> > have defaulted to giving them a vote *especially* because the code in
> > common is, well, common to all of the projects.
>
> So far, the PPB considered openstack-common to be driven by "all PTLs",
> so it didn't have a specific PTL.
>
> As far as future governance is concerned (technical committee of the
> Foundation), openstack-common would technically be considered a
> supporting library (rather than a core project) -- those can have leads,
> but those do not get granted an automatic TC seat.
>

OK, I can see the distinction there. I think the project needs an official
leader, even if we don't call them a PTL in the sense meant for other
projects. And I would expect anyone willing to take on the PTL role for
common to be qualified to run for one of the open positions on the new TC,
if they wanted to participate there.


>
> [ Avoiding the need to distinguish between "worthy" and "unworthy"
> projects leads was one of the many reasons why I preferred the TC to be
> completely directly-elected. ]


That does make sense.

Doug


Re: [Openstack] [KeyStone] Requestid, context, notification in Keystone

2012-07-23 Thread Jay Pipes
On 07/21/2012 02:57 AM, Joseph Heck wrote:
> Hey Nachi 
> 
> If by this you mean the idea that a request ID is created at a user request 
> action, and then propagated through all relevant systems and API calls to 
> make tracing the distributed calls easier, I'm totally in favor of the idea. 
> Distributed tracing through the calls has been a real pain in the a... 
> 
> I'm afraid I haven't been watching the other projects closely enough to 
> realize that this was getting implemented - any chance you could point out 
> the relevant change reviews so I could see where/how the other projects have 
> been doing this?

Hey Joe,

Here is a relevant patch for Glance:

https://review.openstack.org/#/c/9545/
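The scheme Joe describes — mint an ID at the edge and carry it through every downstream call — can be sketched roughly like this. The header and attribute names here are hypothetical (the real services use variants such as X-Openstack-Request-Id):

```python
# Illustrative sketch of request-ID propagation: generate once at the
# edge, then reuse the same ID in every downstream call's headers.
import uuid

class RequestContext:
    def __init__(self, request_id=None):
        self.request_id = request_id or "req-" + str(uuid.uuid4())

    def to_headers(self):
        # attach the ID to outgoing HTTP/RPC calls
        return {"X-Request-Id": self.request_id}

edge = RequestContext()  # minted where the user request arrives
# a downstream service rebuilds its context from the incoming header
inner = RequestContext(edge.to_headers()["X-Request-Id"])
assert inner.request_id == edge.request_id  # one ID across the trace
```

With every service logging that ID, grepping the distributed call chain becomes a single search.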

Best,
-jay



Re: [Openstack] [KeyStone] Requestid, context, notification in Keystone

2012-07-23 Thread Joseph Heck
Thanks Jay!

On Jul 23, 2012, at 9:49 AM, Jay Pipes wrote:

> On 07/21/2012 02:57 AM, Joseph Heck wrote:
>> Hey Nachi 
>> 
>> If by this you mean the idea that a request ID is created at a user request 
>> action, and then propagated through all relevant systems and API calls to 
>> make tracing the distributed calls easier, I'm totally in favor of the idea. 
>> Distributed tracing through the calls has been a real pain in the a... 
>> 
>> I'm afraid I haven't been watching the other projects closely enough to 
>> realize that this was getting implemented - any chance you could point out 
>> the relevant change reviews so I could see where/how the other projects have 
>> been doing this?
> 
> Hey Joe,
> 
> Here is a relevant patch for Glance:
> 
> https://review.openstack.org/#/c/9545/
> 
> Best,
> -jay
> 




Re: [Openstack] [KeyStone] Requestid, context, notification in Keystone

2012-07-23 Thread Nachi Ueno
Hi Joe,Jay

> Thanks Jay

> Joe
This is from quantum

- Quantum
Quantum Notification
https://blueprints.launchpad.net/quantum/+spec/quantum-notifications

Improve scalability by eliminating agent DB polling
https://blueprints.launchpad.net/quantum/+spec/scalable-agent-comms
(This blueprint tries to introduce a context parameter in Quantum.)

IMO, it is great if we can provide a common way to handle request
across projects.
We can work for Keystone for F3 :)

Nachi

2012/7/23 Joseph Heck :
> Thanks Jay!
>
> On Jul 23, 2012, at 9:49 AM, Jay Pipes wrote:
>
>> On 07/21/2012 02:57 AM, Joseph Heck wrote:
>>> Hey Nachi
>>>
>>> If by this you mean the idea that a request ID is created at a user request 
>>> action, and then propagated through all relevant systems and API calls to 
>>> make tracing the distributed calls easier, I'm totally in favor of the 
>>> idea. Distributed tracing through the calls has been a real pain in the a...
>>>
>>> I'm afraid I haven't been watching the other projects closely enough to 
>>> realize that this was getting implemented - any chance you could point out 
>>> the relevant change reviews so I could see where/how the other projects 
>>> have been doing this?
>>
>> Hey Joe,
>>
>> Here is a relevant patch for Glance:
>>
>> https://review.openstack.org/#/c/9545/
>>
>> Best,
>> -jay
>>



Re: [Openstack] Incremental Backup of Instances

2012-07-23 Thread Jay Pipes
On 07/22/2012 11:22 PM, Kobagana Kumar wrote:
> Hi All,
> 
> I am working on *delta changes* of an instance. Can you please tell me
> the procedure for taking *incremental backups (delta changes)* of VMs,
> instead of taking a snapshot of the entire instance.

The only non-commercial solution I know of for QEMU/KVM is livebackup:

http://wiki.qemu.org/Features/Livebackup

But AFAIK, no work has been done on integrating this into Nova's libvirt
driver.

Patches always welcome :)

Best,
-jay



[Openstack] next swift release just around the corner

2012-07-23 Thread John Dickinson
The next swift release is scheduled for public release next Monday (July 30). 
That means we've got a little bit of work to do this week to get it ready.

In order to allow Cloud Files QA time to check it, we need to have packages 
built by the middle of the day Wednesday. This means all outstanding reviews 
that should get in to the next release should be merged by the end of the day 
Tuesday (or very early on Wednesday). I think we have a few outstanding reviews 
that could and should make it in.

Overall, this looks like a pretty good set of features to release. Here is my WIP 
changelog for the release: 
https://github.com/notmyname/swift/blob/1.5.1-changelog/CHANGELOG


--John






Re: [Openstack] High Available queues in rabbitmq

2012-07-23 Thread Jay Pipes
On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
> Hi guys,
> 
> just an idea, i'm deploying Openstack trying to make it HA.
> The missing thing is rabbitmq, which can be easily started in
> active/active mode, but it needs to declare the queues adding an
> x-ha-policy entry.
> http://www.rabbitmq.com/ha.html
> It would be nice to add a config entry to be able to declare the queues
> in that way.
> If someone know where to edit the openstack code, else i'll try to do
> that in the next weeks maybe.

https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py

You'll need to add the config options there and the queue is declared
here with the options supplied to the ConsumerBase constructor:

https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py#L114
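For reference, the per-queue HA declaration Alessandro mentions boils down to an extra x-ha-policy entry in the queue arguments. A minimal, self-contained sketch of building those arguments — the helper name and the surrounding options dict are illustrative, not impl_kombu's actual code:

```python
# Illustrative sketch: the extra arguments a mirrored ("HA") RabbitMQ
# queue needs. In impl_kombu these would feed the kombu Queue's
# queue_arguments option at declaration time.
def ha_queue_arguments(enabled=True):
    if not enabled:
        return {}
    # 'all' asks RabbitMQ to mirror the queue on every cluster node
    return {"x-ha-policy": "all"}

queue_opts = {"durable": False, "auto_delete": True}
queue_opts.setdefault("queue_arguments", {}).update(ha_queue_arguments())
print(queue_opts["queue_arguments"])
```

A new config option would simply toggle the `enabled` flag so existing deployments keep declaring plain queues.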

Best,
-jay



[Openstack] [nova] core members

2012-07-23 Thread Vishvananda Ishaya
Hello Everyone, 
Here are the current vote counts from nova-core members on the 4 new core 
member proposals from last week:

Yun Mao: 7
Padraig Brady: 6
Michael Still: 5
Sean Dague: 2

That is enough votes for the first three. If they don't get any -1s by 
Wednesday, they will be joining nova-core. Sean still needs a few votes, so if 
you want to see him added, please vote for him.

Vish




Re: [Openstack] [nova] nova-manage is getting deprecated?

2012-07-23 Thread Joe Gordon
On Fri, Jul 20, 2012 at 11:43 AM, Tong Li  wrote:

>  A while back, there was a comment on a nova-manage defect stating that
> nova-manage is getting deprecated. Can anyone tell me what and when the
> replacement will be? Thanks.
>

Last I heard, python-novaclient will be replacing most of nova-manage.
There will always be a few commands that cannot be run via the API
(python-novaclient), such as db sync, so those will stay in nova-manage.

best,
Joe

>
>
> Tong Li
> Emerging Technologies & Standards
> Building 501/B205
> liton...@us.ibm.com
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


Re: [Openstack] [nova] Proposal for Sean Dague to join nova-core

2012-07-23 Thread Eric Windisch
On Friday, July 20, 2012 at 13:49 PM, Vishvananda Ishaya wrote:
> 
> When I was going through the list of reviewers to see who would be good for 
> nova-core a few days ago, I left one out. Sean has been doing a lot of 
> reviews lately[1] and did the refactor and cleanup of the driver loading 
> code. I think he would also be a great addition to nova-core.
+1.  I've read through the list and gerrit. Sean seems to be doing a great job. 

Regards,
Eric Windisch




Re: [Openstack] High Available queues in rabbitmq

2012-07-23 Thread Eugene Kirpichov
Hi,

I'm working on a RabbitMQ H/A patch right now.

It actually involves more than just using H/A queues (unless you're
willing to add a TCP load balancer on top of your RMQ cluster).
You also need to add support for multiple RabbitMQ's directly to nova.
This is not hard at all, and I have the patch ready and tested in
production.

Alessandro, if you need this urgently, I can send you the patch right
now, before the code-review discussion for inclusion in core nova.

The only problem is, it breaks backward compatibility a bit: my patch
assumes you have a flag "rabbit_addresses" which should look like
"rmq-host1:5672,rmq-host2:5672" instead of the prior rabbit_host and
rabbit_port flags.

Guys, can you advise on a way to do this without being ugly and
without breaking compatibility?
Maybe have "rabbit_host", "rabbit_port" be ListOpt's? But that sounds
weird, as their names are in singular.
Maybe have "rabbit_host", "rabbit_port" and also "rabbit_host2",
"rabbit_port2" (assuming we only have clusters of 2 nodes)?
Something else?

On Mon, Jul 23, 2012 at 11:27 AM, Jay Pipes  wrote:
> On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
>> Hi guys,
>>
>> just an idea, i'm deploying Openstack trying to make it HA.
>> The missing thing is rabbitmq, which can be easily started in
>> active/active mode, but it needs to declare the queues adding an
>> x-ha-policy entry.
>> http://www.rabbitmq.com/ha.html
>> It would be nice to add a config entry to be able to declare the queues
>> in that way.
>> If someone know where to edit the openstack code, else i'll try to do
>> that in the next weeks maybe.
>
> https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py
>
> You'll need to add the config options there and the queue is declared
> here with the options supplied to the ConsumerBase constructor:
>
> https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py#L114
>
> Best,
> -jay
>



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov



Re: [Openstack] [nova] core members

2012-07-23 Thread Jay Pipes
On 07/23/2012 02:31 PM, Vishvananda Ishaya wrote:
> Sean Dague: 2

I'm not nova-core, but I'd recommend Sean as a core committer. He's been
active in both reviews and patches recently.

Best,
-jay



Re: [Openstack] High Available queues in rabbitmq

2012-07-23 Thread Jay Pipes
On 07/23/2012 02:58 PM, Eugene Kirpichov wrote:
> The only problem is, it breaks backward compatibility a bit: my patch
> assumes you have a flag "rabbit_addresses" which should look like
> "rmq-host1:5672,rmq-host2:5672" instead of the prior rabbit_host and
> rabbit_port flags.
> 
> Guys, can you advise on a way to do this without being ugly and
> without breaking compatibility?
> Maybe have "rabbit_host", "rabbit_port" be ListOpt's? But that sounds
> weird, as their names are in singular.
> Maybe have "rabbit_host", "rabbit_port" and also "rabbit_host2",
> "rabbit_port2" (assuming we only have clusters of 2 nodes)?
> Something else?

I think that the "standard" (in Nova at least) is to go with a single
ListOpt flag that is a comma-delimited list of the URIs. We do that for
Glance API servers, for example, in the glance_api_servers flag:

https://github.com/openstack/nova/blob/master/nova/flags.py#L138

So, perhaps you can add a rabbit_ha_servers ListOpt flag that, when
filled, would be used instead of rabbit_host and rabbit_port. That way
you won't break backwards compat?
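A rough sketch of how such a comma-delimited flag could be parsed, with the legacy fallback — the rabbit_ha_servers name is the one proposed above, and the plain dict stands in for nova's flag objects:

```python
# Illustrative sketch: parse a comma-delimited list of host:port pairs,
# falling back to the legacy single rabbit_host/rabbit_port options.
def rabbit_endpoints(conf):
    servers = conf.get("rabbit_ha_servers")
    if servers:
        pairs = []
        for entry in servers.split(","):
            host, _, port = entry.strip().partition(":")
            # default to 5672 when no port is given
            pairs.append((host, int(port or 5672)))
        return pairs
    # legacy path: exactly one broker
    return [(conf.get("rabbit_host", "localhost"),
             int(conf.get("rabbit_port", 5672)))]

print(rabbit_endpoints({"rabbit_ha_servers": "rmq-host1:5672,rmq-host2:5672"}))
print(rabbit_endpoints({"rabbit_host": "10.0.0.5"}))
```

Existing configs hit the legacy branch untouched, which is the backwards-compat property being discussed.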

Best,
-jay

> On Mon, Jul 23, 2012 at 11:27 AM, Jay Pipes  wrote:
>> On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
>>> Hi guys,
>>>
>>> just an idea, i'm deploying Openstack trying to make it HA.
>>> The missing thing is rabbitmq, which can be easily started in
>>> active/active mode, but it needs to declare the queues adding an
>>> x-ha-policy entry.
>>> http://www.rabbitmq.com/ha.html
>>> It would be nice to add a config entry to be able to declare the queues
>>> in that way.
>>> If someone know where to edit the openstack code, else i'll try to do
>>> that in the next weeks maybe.
>>
>> https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py
>>
>> You'll need to add the config options there and the queue is declared
>> here with the options supplied to the ConsumerBase constructor:
>>
>> https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py#L114
>>
>> Best,
>> -jay
>>
> 
> 
> 



Re: [Openstack] High Available queues in rabbitmq

2012-07-23 Thread Eric Windisch
> 
> The only problem is, it breaks backward compatibility a bit: my patch
> assumes you have a flag "rabbit_addresses" which should look like
> "rmq-host1:5672,rmq-host2:5672" instead of the prior rabbit_host and
> rabbit_port flags.
> 
> Guys, can you advise on a way to do this without being ugly and
> without breaking compatibility?
> 
> 

One way would be to use the matchmaker, which I introduced to solve a similar 
problem with the ZeroMQ driver. The matchmaker is a client-side emulation of 
bindings/exchanges for mapping topic keys to an array of topic/host pairs.

You would query the matchmaker with a topic (key) and it would return tuples in 
the form of:
 ("topic", broker_ip)

In the ZeroMQ case, the "broker_ip" is always the peer, but with RabbitMQ, this 
would be one (or more) of your selected brokers.  Generally, you would return 
multiple brokers when you're doing fanout messaging.
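A toy, dictionary-backed version of that lookup — the class and method names are hypothetical, not the openstack-common matchmaker API:

```python
# Illustrative sketch of a client-side matchmaker: map a topic key to
# (topic, broker_ip) tuples, returning all brokers for fanout.
class DictMatchMaker:
    def __init__(self, topics):
        # topics maps a topic key to the broker IPs that serve it
        self.topics = topics

    def queues(self, key, fanout=False):
        hosts = self.topics.get(key, [])
        if fanout:
            # fanout: every broker for the topic gets the message
            return [(key, host) for host in hosts]
        # unicast: pick one broker (first, for determinism here)
        return [(key, hosts[0])] if hosts else []

mm = DictMatchMaker({"compute": ["10.0.0.1", "10.0.0.2"]})
print(mm.queues("compute"))
print(mm.queues("compute", fanout=True))
```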


Regards,
Eric Windisch


Re: [Openstack] [nova] Proposal for Sean Dague to join nova-core

2012-07-23 Thread Johannes Erdfelt
On Fri, Jul 20, 2012, Vishvananda Ishaya  wrote:
> When I was going through the list of reviewers to see who would be good
> for nova-core a few days ago, I left one out. Sean has been doing a lot
> of reviews lately[1] and did the refactor and cleanup of the driver
> loading code. I think he would also be a great addition to nova-core.

+1

JE




Re: [Openstack] High Available queues in rabbitmq

2012-07-23 Thread Eugene Kirpichov
Hi Jay,

Great idea. Thanks. I'll amend and test my patch, and then upload it
to codereview.

On Mon, Jul 23, 2012 at 12:18 PM, Jay Pipes  wrote:
> On 07/23/2012 02:58 PM, Eugene Kirpichov wrote:
>> The only problem is, it breaks backward compatibility a bit: my patch
>> assumes you have a flag "rabbit_addresses" which should look like
>> "rmq-host1:5672,rmq-host2:5672" instead of the prior rabbit_host and
>> rabbit_port flags.
>>
>> Guys, can you advise on a way to do this without being ugly and
>> without breaking compatibility?
>> Maybe have "rabbit_host", "rabbit_port" be ListOpt's? But that sounds
>> weird, as their names are in singular.
>> Maybe have "rabbit_host", "rabbit_port" and also "rabbit_host2",
>> "rabbit_port2" (assuming we only have clusters of 2 nodes)?
>> Something else?
>
> I think that the "standard" (in Nova at least) is to go with a single
> ListOpt flag that is a comma-delimited list of the URIs. We do that for
> Glance APi servers, for example, in the glance_api_servers flag:
>
> https://github.com/openstack/nova/blob/master/nova/flags.py#L138
>
> So, perhaps you can add a rabbit_ha_servers ListOpt flag that, when
> filled, would be used instead of rabbit_host and rabbit_port. That way
> you won't break backwards compat?
>
> Best,
> -jay
>
>> On Mon, Jul 23, 2012 at 11:27 AM, Jay Pipes  wrote:
>>> On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
 Hi guys,

 just an idea, i'm deploying Openstack trying to make it HA.
 The missing thing is rabbitmq, which can be easily started in
 active/active mode, but it needs to declare the queues adding an
 x-ha-policy entry.
 http://www.rabbitmq.com/ha.html
 It would be nice to add a config entry to be able to declare the queues
 in that way.
 If someone know where to edit the openstack code, else i'll try to do
 that in the next weeks maybe.
>>>
>>> https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py
>>>
>>> You'll need to add the config options there and the queue is declared
>>> here with the options supplied to the ConsumerBase constructor:
>>>
>>> https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py#L114
>>>
>>> Best,
>>> -jay
>>>
>>
>>
>>



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov



Re: [Openstack] High Available queues in rabbitmq

2012-07-23 Thread Oleg Gelbukh
Eugene,

I suggest just adding an option 'rabbit_servers' that overrides the
'rabbit_host'/'rabbit_port' pair when present. This won't break anything, in
my understanding.

--
Best regards,
Oleg Gelbukh
Mirantis, Inc.

On Mon, Jul 23, 2012 at 10:58 PM, Eugene Kirpichov wrote:

> Hi,
>
> I'm working on a RabbitMQ H/A patch right now.
>
> It actually involves more than just using H/A queues (unless you're
> willing to add a TCP load balancer on top of your RMQ cluster).
> You also need to add support for multiple RabbitMQ's directly to nova.
> This is not hard at all, and I have the patch ready and tested in
> production.
>
> Alessandro, if you need this urgently, I can send you the patch right
> now before the discussion codereview for inclusion in core nova.
>
> The only problem is, it breaks backward compatibility a bit: my patch
> assumes you have a flag "rabbit_addresses" which should look like
> "rmq-host1:5672,rmq-host2:5672" instead of the prior rabbit_host and
> rabbit_port flags.
>
> Guys, can you advise on a way to do this without being ugly and
> without breaking compatibility?
> Maybe have "rabbit_host", "rabbit_port" be ListOpt's? But that sounds
> weird, as their names are in singular.
> Maybe have "rabbit_host", "rabbit_port" and also "rabbit_host2",
> "rabbit_port2" (assuming we only have clusters of 2 nodes)?
> Something else?
>
> On Mon, Jul 23, 2012 at 11:27 AM, Jay Pipes  wrote:
> > On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
> >> Hi guys,
> >>
> >> just an idea, i'm deploying Openstack trying to make it HA.
> >> The missing thing is rabbitmq, which can be easily started in
> >> active/active mode, but it needs to declare the queues adding an
> >> x-ha-policy entry.
> >> http://www.rabbitmq.com/ha.html
> >> It would be nice to add a config entry to be able to declare the queues
> >> in that way.
> >> If someone know where to edit the openstack code, else i'll try to do
> >> that in the next weeks maybe.
> >
> >
> https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py
> >
> > You'll need to add the config options there and the queue is declared
> > here with the options supplied to the ConsumerBase constructor:
> >
> >
> https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py#L114
> >
> > Best,
> > -jay
> >
>
>
>
> --
> Eugene Kirpichov
> http://www.linkedin.com/in/eugenekirpichov
>
>


Re: [Openstack] High Available queues in rabbitmq

2012-07-23 Thread Eugene Kirpichov
Yup, that's basically the same thing that Jay suggested :) Obvious in
retrospect...

On Mon, Jul 23, 2012 at 12:42 PM, Oleg Gelbukh  wrote:
> Eugene,
>
> I suggest just add option 'rabbit_servers' that will override
> 'rabbit_host'/'rabbit_port' pair, if present. This won't break anything, in
> my understanding.
>
> --
> Best regards,
> Oleg Gelbukh
> Mirantis, Inc.
>
>
> On Mon, Jul 23, 2012 at 10:58 PM, Eugene Kirpichov 
> wrote:
>>
>> Hi,
>>
>> I'm working on a RabbitMQ H/A patch right now.
>>
>> It actually involves more than just using H/A queues (unless you're
>> willing to add a TCP load balancer on top of your RMQ cluster).
>> You also need to add support for multiple RabbitMQ's directly to nova.
>> This is not hard at all, and I have the patch ready and tested in
>> production.
>>
>> Alessandro, if you need this urgently, I can send you the patch right
>> now before the discussion codereview for inclusion in core nova.
>>
>> The only problem is, it breaks backward compatibility a bit: my patch
>> assumes you have a flag "rabbit_addresses" which should look like
>> "rmq-host1:5672,rmq-host2:5672" instead of the prior rabbit_host and
>> rabbit_port flags.
>>
>> Guys, can you advise on a way to do this without being ugly and
>> without breaking compatibility?
>> Maybe have "rabbit_host", "rabbit_port" be ListOpt's? But that sounds
>> weird, as their names are in singular.
>> Maybe have "rabbit_host", "rabbit_port" and also "rabbit_host2",
>> "rabbit_port2" (assuming we only have clusters of 2 nodes)?
>> Something else?
>>
>> On Mon, Jul 23, 2012 at 11:27 AM, Jay Pipes  wrote:
>> > On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
>> >> Hi guys,
>> >>
>> >> just an idea, i'm deploying Openstack trying to make it HA.
>> >> The missing thing is rabbitmq, which can be easily started in
>> >> active/active mode, but it needs to declare the queues adding an
>> >> x-ha-policy entry.
>> >> http://www.rabbitmq.com/ha.html
>> >> It would be nice to add a config entry to be able to declare the queues
>> >> in that way.
>> >> If someone knows where to edit the openstack code, please point me
>> >> there; otherwise I'll maybe try to do that in the next weeks.
>> >
>> >
>> > https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py
>> >
>> > You'll need to add the config options there and the queue is declared
>> > here with the options supplied to the ConsumerBase constructor:
>> >
>> >
>> > https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py#L114
>> >
>> > Best,
>> > -jay
>> >
>>
>>
>>
>> --
>> Eugene Kirpichov
>> http://www.linkedin.com/in/eugenekirpichov
>>
>
>



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov



Re: [Openstack] High Available queues in rabbitmq

2012-07-23 Thread Eugene Kirpichov
+openstack-dev@

To openstack-dev: this is a discussion of an upcoming patch adding
native RabbitMQ H/A support to nova. I'll post the patch for
code review today.
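For reference, the queue mirroring that started this thread is driven purely by declare-time arguments (cf. http://www.rabbitmq.com/ha.html). A sketch of the argument-building step — the helper name and policy handling are illustrative, not the patch itself; with kombu the resulting dict would be passed as the queue's queue_arguments:

```python
def ha_queue_arguments(ha_policy="all", nodes=None):
    """Build the x-ha-policy arguments to pass at queue-declare time."""
    if ha_policy == "all":
        # Mirror the queue on every node of the RabbitMQ cluster.
        return {"x-ha-policy": "all"}
    if ha_policy == "nodes":
        # Mirror only on the named nodes.
        return {"x-ha-policy": "nodes",
                "x-ha-policy-params": list(nodes or [])}
    # Plain, unmirrored queue.
    return {}

print(ha_queue_arguments())  # {'x-ha-policy': 'all'}
```

Making the policy a config option would let deployments opt in without affecting single-node setups.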




-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov



Re: [Openstack] quota question

2012-07-23 Thread Blake Yeager
>
> 
>
> (BTW, I'd like to point out the Boson proposal and thread…)
>

Good point, we also need to think through how distributed quotas will work
across multiple cells.  I can see a lot of overlap between these two use
cases: I want to limit users to a specific quota for a specific flavor and
I want to limit users to a specific quota for a given cell - both of which
would be independent of a user's overall quota.

IMHO, we need to address both of these use cases at the same time.

-Blake


Re: [Openstack] [nova] a proposal to change metadata API data

2012-07-23 Thread Jay Pipes
On 07/21/2012 09:00 PM, Matt Joyce wrote:
> Preamble:
> 
> Until now, all data that is made available by the metadata server has
> been data that cannot be found anywhere else at the time it may be needed.
> 
> In short, an instance can't be passed its instance ID before that ID has
> been allocated, so a user cannot pass it to an instance
> that is being started up.  So whether a user wants to jump through a few
> hoops or not to pass their instance the instance id of itself... they
> simply cannot without metadata api being there to provide it at creation
> time.

This is only due to the asinine EC2 API -- or rather the asinine
implementation in EC2 that doesn't create an instance ID before the
instance is launched.

> This means that the metadata server holds an uneasy place as a necessary
> clearing house ( evil? ) of data that just doesn't have another place to
> be.  It's not secure, it's not authenticated, and it's a little scary
> that it exists at all.

Agreed. I wish people didn't use the EC2 API at all, since it's a
complete bag of fail and a beautiful example of a terribly thought-out
API. That said, the OpenStack Compute API v2 has its share of pockmarks
to be sure.

But... unfortunately, if you're going to use the EC2 API this hard-coded
169.254.169.254 address is what we have to deal with.

> I wish to add some data to the metadata server that can be found
> somewhere else.  That a user could jump through a hoop or two to add to
> their instances.  Esteemed personages are concerned that I would be
> crossing the rubicon in terms of opening up the metadata api for wanton
> abuse.  They are not without a right or reason to be concerned.  And
> that is why I am going to attempt to explicitly classify a new category
> of data that we might wish to allow into the metadata server.  If we can
> be clear about what we are allowing we can avoid abuse.
> 
> I want to provide a uniform ( standardized? ) way for instances in the
> openstack cloud to communicate back to the OpenStack APIs without having
> to be provided data by the users of the cloud services.

Let's be clear here... are you talking about the OpenStack Compute API
or are you talking about the OpenStack Metadata service which is merely
the EC2 Metadata API? We already have the config-drive extension [1]
that allows information and files to be injected into the instance and
loaded as a readonly device. The information in the config-drive can
include things like the Keystone URI for the cell/AZ in which an
instance resides.

> Today the
> mechanism by which this is done is catastrophically difficult for a new
> user.

Are you specifically referring here to the calls that, say, cloud-init
makes to the (assumed to be running) EC2 metadata API service at
http://169.254.169.254/latest/? Or something different? Just want to
make sure I'm understanding what you are referring to as difficult.

> This uniform way for instances to interact with the openstack API that I
> want already sort of exists in the keystone catalog service.  The
> problem is that you need to know where the keystone server is in the
> world to access it.  That of course changes from deployment to
> deployment.  Especially with the way SSL endpoints are being handled.

This can be done using config-drive and the OpenStack community coming
up with a standard file or tool that would be injected into the config
drive. This would be similar to the calls currently executed by
cloud-init that are hard-coded to look for 169.254.169.254. Would that work?
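The cloud-init-style lookup mentioned here is just an HTTP GET against the well-known address. A minimal sketch — the helper names and timeout are illustrative, and the URL layout is the standard EC2-compatible one:

```python
from urllib.request import urlopen

METADATA_BASE = "http://169.254.169.254/latest/meta-data"

def metadata_url(key):
    """Build the URL for one metadata key, e.g. 'instance-id'."""
    return "%s/%s" % (METADATA_BASE, key)

def fetch_metadata(key, timeout=2):
    """Fetch a metadata value from inside an instance.

    Only works on a running instance with a reachable metadata service.
    """
    with urlopen(metadata_url(key), timeout=timeout) as resp:
        return resp.read().decode()

print(metadata_url("instance-id"))
# http://169.254.169.254/latest/meta-data/instance-id
```

A config-drive-based equivalent would replace the GET with reading a file from the mounted read-only device.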

> But the metadata API server is generally known as it uses a default ip
> address value that can be found on any amazon compatible deployment.  In
> fact to my knowledge it is the only known way to query openstack for
> data relevant to interacting with it without user interaction.  And
> that's the key to this whole thing.  I want to direct users or
> automation baked into instances to the keystone api and catalog
> service.  And the only way I know how to do that is the metadata service.

As mentioned above, config-drive extension was built for just this
purpose IIRC. Chris Macgown, who wrote the original extension, cc'd,
should be able to comment on this further.

> This api data can be classified as being first and foremost OpenStack
> infrastructure related.  Additionally it is not available without a user
> providing it anywhere else.  And finally it is a catalog service.
> 
> I'd love some more input on whether this makes sense, or can be improved
> upon as an idea and formalized as a rule for using the metadata api
> without abusing it.

Well, we know we can't change the EC2 Metadata API since we don't own or
have any control over the Amazon APIs. We can however come up with an
OpenStack-centric tool using config-drive and a tool that would query a
Keystone endpoint for a local OpenStack Compute API endpoint and then
use the existing OpenStack Compute API calls for server metadata [2]?

That sounds doable to you?

Be

Re: [Openstack] [nova] nova-manage is getting deprecated?

2012-07-23 Thread Edgar Magana (eperdomo)
Hi,

Quantum CLI  will handle all the networking functionality:
https://review.openstack.org/#/c/9754/

Thanks,

Edgar

From: openstack-bounces+eperdomo=cisco@lists.launchpad.net 
[mailto:openstack-bounces+eperdomo=cisco@lists.launchpad.net] On Behalf Of 
Joe Gordon
Sent: Monday, July 23, 2012 11:33 AM
To: Tong Li
Cc: openstack-bounces+litong01=us.ibm@lists.launchpad.net; Openstack 
(openstack@lists.launchpad.net) (openstack@lists.launchpad.net)
Subject: Re: [Openstack] [nova] nova-manage is getting deprecated?


On Fri, Jul 20, 2012 at 11:43 AM, Tong Li <liton...@us.ibm.com> wrote:

Awhile back, there was a comment on a nova-manage defect stated that 
nova-manage is getting deprecated. Can any one tell what and when the 
replacement will be? Thanks.

Last I heard, python-novaclient will be replacing most of nova-manage.  There 
will always be a few commands that cannot be run via the API 
(python-novaclient), such as db sync, so those will stay in nova-manage.

best,
Joe


Tong Li
Emerging Technologies & Standards
Building 501/B205
liton...@us.ibm.com




Re: [Openstack] [nova] a proposal to change metadata API data

2012-07-23 Thread Matt Joyce
> Agreed. I wish people didn't use the EC2 API at all, since it's a
> complete bag of fail and a beautiful example of a terribly thought-out
> API. That said, the OpenStack Compute API v2 has its share of pockmarks
> to be sure.
>
> But... unfortunately, if you're going to use the EC2 API this hard-coded
> 169.254.169.254 address is what we have to deal with.
>
>
Agreed on all counts.


> > I wish to add some data to the metadata server that can be found
> > somewhere else.  That a user could jump through a hoop or two to add to
> > their instances.  Esteemed personages are concerned that I would be
> > crossing the rubicon in terms of opening up the metadata api for wanton
> > abuse.  They are not without a right or reason to be concerned.  And
> > that is why I am going to attempt to explicitly classify a new category
> > of data that we might wish to allow into the metadata server.  If we can
> > be clear about what we are allowing we can avoid abuse.
> >
> > I want to provide a uniform ( standardized? ) way for instances in the
> > openstack cloud to communicate back to the OpenStack APIs without having
> > to be provided data by the users of the cloud services.
>
> Let's be clear here... are you talking about the OpenStack Compute API
> or are you talking about the OpenStack Metadata service which is merely
> the EC2 Metadata API? We already have the config-drive extension [1]
> that allows information and files to be injected into the instance and
> loaded as a readonly device. The information in the config-drive can
> include things like the Keystone URI for the cell/AZ in which an
> instance resides.
>
>
I mean the OpenStack Metadata service.  The config drive extension does not
as far as I am aware produce a "uniform" path for data like this.  This API
query should be the same from openstack deployment to openstack deployment
to ensure portability of instances relying on this API query to figure out
where the catalog service is.  By "uniform" I mean it has all the love, care,
and backwards-versioning support of a traditional API query.  The
config-drive seems more intended to be user-customized rather than
considered a community-supported API query.


> > Today the
> > mechanism by which this is done is catastrophically difficult for a new
> > user.
>
> Are you specifically referring here to the calls that, say, cloud-init
> makes to the (assumed to be running) EC2 metadata API service at
> http://169.254.169.254/latest/? Or something different? Just want to
> make sure I'm understanding what you are referring to as difficult.
>
>
I am referring to the whole new user experience.  Anything custom to a
deployment of openstack is now outside of our control and is not portable.
Also a new user will not be prepared to inject user data properly.  Going
further, and a bit onto an irate tangent: Horizon has a really roundabout
and completely non-intuitive way of providing users with info on where the
API servers are.  That is, you have to generate an openstack credentials
file, download it, look at it in a text editor, and then know what it is you
are looking at.  To find your tenant_name you have to guess in the dark
that Horizon is referring to your tenant name as a "project".  The whole
thing is insane.  What I am talking about here is a first step in allowing
image builders to integrate into openstack in a uniform way across all
installations ( or most ).  And that will allow people to reduce the
overall pain on new users of cloud at their pleasure.  I am asking for this
based on my experience trying to do this outside of openstack development.


> > This uniform way for instances to interact with the openstack API that I
> > want already sort of exists in the keystone catalog service.  The
> > problem is that you need to know where the keystone server is in the
> > world to access it.  That of course changes from deployment to
> > deployment.  Especially with the way SSL endpoints are being handled.
>
> This can be done using config-drive and the OpenStack community coming
> up with a standard file or tool that would be injected into the config
> drive. This would be similar to the calls currently executed by
> cloud-init that are hard-coded to look for 169.254.169.254. Would that
> work?
>

I don't know.  I'd say maybe.  But I'd prefer it was tracked as an API
call.  It will have in that area legitimate support from the community and
backwards version compatibility requirements.  I think ultimately it
belongs in the API as much as any other query.  While this sort of
solves the issue, and may smooth a few folks' feathers, I
think it's probably the wrong way to handle it and likely to bite us in the
ass down the road when someone starts mangling that file or doesn't realize
config-drive is a dependency for that sort of fundamental query.
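For what's being asked here, the only bootstrap datum an instance needs is the Keystone URL; everything else comes from the service catalog returned by the Identity v2.0 tokens call. A sketch of that request body — the credentials and helper name are placeholders:

```python
import json

def token_request_body(tenant_name, username, password):
    """Build the auth body for POST /v2.0/tokens on a Keystone endpoint.

    The response's 'serviceCatalog' then lists every regional API endpoint,
    which is exactly the discovery step discussed above.
    """
    return {"auth": {
        "tenantName": tenant_name,
        "passwordCredentials": {"username": username,
                                "password": password},
    }}

# Example body; the endpoint it would be POSTed to is deployment-specific.
print(json.dumps(token_request_body("demo", "demo", "secret")))
```

The open question in the thread is only how the instance learns the Keystone URL itself — metadata key vs. config-drive file.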


> > But the metadata API server is generally known as it uses a default ip
> > address value that can be found on any amazon compatible deployment.  

Re: [Openstack] Improving logging

2012-07-23 Thread Michael Still
On 23/07/12 19:54, Thierry Carrez wrote:
> Michael Still wrote:
>> On 21/07/12 00:08, Jay Pipes wrote:
>>
>>> Not that I've seen, but I think it would be good to standardize on one.
>>> How about just "ops"?
>>
>> Works for me.
> 
> Added to http://wiki.openstack.org/BugTags and as official tag for all
> core projects.

Thanks! I didn't even know that wiki page existed...

Mikal




[Openstack] Re: [openstack] nova-compute always dead on RHEL6.1

2012-07-23 Thread 延生 付
Dear Padraig,
 
Thanks for the help. I can see the log from the console; it is really strange
that no log is generated under /var/log/nova, even though the permissions are
open to the nova user.
Another question: Apache Qpid cannot be connected to, even from the same
server. I set it up with the defaults, no further config.
I can see that port 5672 is listening, and I also turned iptables off. Is there
any Qpid log I can refer to?

Regards,

Will 


From: Pádraig Brady
To: 延生 付
Cc: "openstack@lists.launchpad.net"
Date: Monday, 23 July 2012, 8:08 PM
Subject: Re: [Openstack] [openstack] nova-compute always dead on RHEL6.1

On 07/23/2012 09:44 AM, 延生 付 wrote:
>  
> Dear all,
>  
> When I deploy nova-compute based on the EPEL repository, I find
> openstack-nova-compute is always dead, but the pid file exists.
> Also, there is no log file generated in /var/log/nova.
> The OS is RHEL6.1. The nova.conf is copied from controller node.
>  
> [root@comp02-r11 nova]# service openstack-nova-compute status
> openstack-nova-compute dead but pid file exists
>  
> Does anybody have clues? Thanks in advance.

RHEL6.2 is the first version targeted by the EPEL packages,
though others have successfully used 6.1.
One thing to consider is upgrading libvirt.
Strange you don't get anything in the logs.
Perhaps you could run manually to debug:

/usr/bin/nova-compute --config-file /etc/nova/nova.conf

cheers,
Pádraig.


[Openstack] Re: Re: Keystone client could not behave well, call for help

2012-07-23 Thread 延生 付
Dear Adam,
 
Thanks. The error appeared due to a proxy.
I wasn't aware I had set up a proxy on this server.
Issue solved. Thanks.

Regards,

Will


From: Adam Young
To: openstack@lists.launchpad.net
Date: Monday, 23 July 2012, 9:42 PM
Subject: Re: [Openstack] Re: Keystone client could not behave well, call for help


On 07/22/2012 09:12 AM, 延生 付 wrote:

reply: 'HTTP/1.1 503 Service Unavailable\r\n'
This seems to be the main problem.  The error message "string indices must be 
integers, not str" seems to be a bug in trying to parse the error page. 


[Openstack] [qpidd]Could not be connected IO error on RHEL6.1

2012-07-23 Thread 延生 付
Dear all,
 
When I deploy qpidd-cpp-server with the default config on RHEL 6.1 (with only
logging enabled), it boots up normally, as the log below shows.
---
2012-07-24 09:38:45 info Registered replication exchange
2012-07-24 09:38:45 debug Management object (V1) added: 
org.apache.qpid.broker:exchange:
2012-07-24 09:38:45 debug Management object (V1) added: 
org.apache.qpid.broker:exchange:amq.direct
2012-07-24 09:38:45 debug Management object (V1) added: 
org.apache.qpid.broker:exchange:amq.topic
2012-07-24 09:38:45 debug Management object (V1) added: 
org.apache.qpid.broker:exchange:amq.fanout
2012-07-24 09:38:45 debug Management object (V1) added: 
org.apache.qpid.broker:exchange:amq.match
2012-07-24 09:38:45 debug Management object (V1) added: 
org.apache.qpid.broker:exchange:qpid.management
2012-07-24 09:38:45 debug Management object (V1) added: 
org.apache.qpid.broker:exchange:qmf.default.topic
2012-07-24 09:38:45 debug Management object (V1) added: 
org.apache.qpid.broker:exchange:qmf.default.direct
2012-07-24 09:38:45 info SASL enabled
2012-07-24 09:38:45 notice Listening on TCP port 5672
2012-07-24 09:38:45 info Policy file not specified. ACL Disabled, no ACL 
checking being done!
2012-07-24 09:38:45 debug Daemon ready on port: 5672
2012-07-24 09:38:45 notice Broker running
2012-07-24 09:38:55 debug Management agent periodic processing: management 
snapshot: 1 packages, 0 objects (0 deleted), 11 new objects  (0 deleted), 0 
pending deletes
2012-07-24 09:38:55 trace Management agent periodic processing: new objects
   org.apache.qpid.broker:system:7cbd658f-ac3f-40c5-90c3-2e820b559ff7
   org.apache.qpid.broker:broker:amqp-broker
   org.apache.qpid.broker:vhost:org.apache.qpid.broker:broker:amqp-broker,/
   org.apache.qpid.broker:exchange:
   org.apache.qpid.broker:exchange:amq.direct
   org.apache.qpid.broker:exchange:amq.topic
   org.apache.qpid.broker:exchange:amq.fanout
   org.apache.qpid.broker:exchange:amq.match
   org.apache.qpid.broker:exchange:qpid.management
   org.apache.qpid.broker:exchange:qmf.default.topic
   org.apache.qpid.broker:exchange:qmf.default.direct
2012-07-24 09:38:55 trace Changed V1 properties 
org.apache.qpid.broker:broker:amqp-broker len=158
2012-07-24 09:38:55 trace Changed V1 statistics 
org.apache.qpid.broker:broker:amqp-broker len=102
---
 
While on the same server, nova-network/compute/cert/scheduler cannot connect
to this broker.
The log is as below:
---
2012-07-24 09:55:04 INFO nova.rpc.common [-] Reconnecting to AMQP server on 
192.168.11.100:5672
2012-07-24 09:55:04 ERROR nova.rpc.common [-] AMQP server on 
192.168.11.100:5672 is unreachable: Socket closed. Trying again in 30 seconds.
2012-07-24 09:55:04 TRACE nova.rpc.common Traceback (most recent call last):
2012-07-24 09:55:04 TRACE nova.rpc.common   File 
"/usr/lib/python2.6/site-packages/nova/rpc/impl_kombu.py", line 446, in 
reconnect
2012-07-24 09:55:04 TRACE nova.rpc.common self._connect()
2012-07-24 09:55:04 TRACE nova.rpc.common   File 
"/usr/lib/python2.6/site-packages/nova/rpc/impl_kombu.py", line 423, in _connect
2012-07-24 09:55:04 TRACE nova.rpc.common self.connection.connect()
2012-07-24 09:55:04 TRACE nova.rpc.common   File 
"/usr/lib/python2.6/site-packages/kombu/connection.py", line 173, in connect
2012-07-24 09:55:04 TRACE nova.rpc.common return self.connection
2012-07-24 09:55:04 TRACE nova.rpc.common   File 
"/usr/lib/python2.6/site-packages/kombu/connection.py", line 585, in connection
2012-07-24 09:55:04 TRACE nova.rpc.common self._connection = 
self._establish_connection()
2012-07-24 09:55:04 TRACE nova.rpc.common   File 
"/usr/lib/python2.6/site-packages/kombu/connection.py", line 546, in 
_establish_connection
2012-07-24 09:55:04 TRACE nova.rpc.common conn = 
self.transport.establish_connection()
2012-07-24 09:55:04 TRACE nova.rpc.common   File 
"/usr/lib/python2.6/site-packages/kombu/transport/amqplib.py", line 244, in 
establish_connection
2012-07-24 09:55:04 TRACE nova.rpc.common 
connect_timeout=conninfo.connect_timeout)
2012-07-24 09:55:04 TRACE nova.rpc.common   File 
"/usr/lib/python2.6/site-packages/kombu/transport/amqplib.py", line 54, in 
__init__
2012-07-24 09:55:04 TRACE nova.rpc.common super(Connection, 
self).__init__(*args, **kwargs)
2012-07-24 09:55:04 TRACE nova.rpc.common   File 
"/usr/lib/python2.6/site-packages/amqplib/client_0_8/connection.py", line 135, 
in __init__
2012-07-24 09:55:04 TRACE nova.rpc.common (10, 10), # start
2012-07-24 09:55:04 TRACE nova.rpc.common   File 
"/usr/lib/python2.6/site-packages/amqplib/client_0_8/abstract_channel.py", line 
95, in wait
2012-07-24 09:55:04 TRACE nova.rpc.common self.channel_id, allowed_methods)
2012-07-24 09:55:04 TRACE nova.rpc.common   File 
"/usr/lib/python2.6/site-packages/amqplib/client_0_8/connection.py", line 202, 
i

Re: [Openstack] [HPC] BoF at SC12

2012-07-23 Thread Lorin Hochstein
JP:

I suggest you also try asking on the OpenStack Operators mailing list 
 to 
gauge interest.

Take care,

Lorin
--
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com





On Jul 23, 2012, at 9:18 AM, John Paul Walters wrote:

> Hi Lorin,
> 
> Thanks for the followup.  I'm perfectly happy to go the Openstack-specific 
> route, but I haven't received much feedback from the Openstack community.  It 
> would be helpful if we could get some sense of community interest (and 
> likelihood of attending) to accompany our submission.  What do others think?  
> Would others be interested in attending?
> 
> JP
> 
> 
> On Jul 22, 2012, at 9:12 PM, Lorin Hochstein wrote:
> 
>> On Jul 6, 2012, at 1:28 PM, John Paul Walters wrote:
>> 
>>> I'm strongly considering putting together a proposal for a BoF (birds of a 
>>> feather) session at this year's Supercomputing in Salt Lake City.  For 
>>> those of you who are likely to attend, is anyone else interested?  It's not 
>>> a huge amount of time invested on my end to put together the proposal, but 
>>> I'd like to gauge the community interest before doing so.  I would likely 
>>> broaden things a bit from being exclusively Openstack and instead turn it 
>>> into more of an HPC in the Cloud session so that we could, perhaps, take 
>>> some input from other HPC cloud projects.   The submissions are due July 
>>> 31, so we've got a little bit of time, but not too much.  Anyone else 
>>> interested?
>>> 
>>> best,
>>> JP
>> 
>> 
>> JP:
>> 
>> I think this was a great idea, we were thinking about proposing this if 
>> nobody else did. I would suggest making it OpenStack-specific, since there 
>> was  an "HPC in the Cloud" BoF last year 
>> (http://sc11.supercomputing.org/schedule/event_detail.php?evid=bof140), and 
>> they'll probably re-apply this year as well. I think we can get critical 
>> mass for an OpenStack BoF.
>> 
>> Along these lines: Chris Hoge from U. Oregon gave a talk last week at OSCON 
>> about their use of OpenStack on HPC 
>> http://www.oscon.com/oscon2012/public/schedule/detail/24261
>> 
>> (There are some good slides attached to that web page)
>> 
>> Take care,
>> 
>> Lorin
>> --
>> Lorin Hochstein
>> Lead Architect - Cloud Services
>> Nimbis Services, Inc.
>> www.nimbisservices.com
>> 
>> 
>> 
> 



[Openstack] Ceph performance as volume & image store?

2012-07-23 Thread Jonathan Proulx
Hi All,

I've been looking at Ceph as a storage back end.  I'm running a
research cluster and while people need to use it and want it 24x7 I
don't need as many nines as a commercial customer facing service does
so I think I'm OK with the current maturity level as far as that goes,
but I have less of a sense of how far along performance is.

My OpenStack deployment is 768 cores across 64 physical hosts which
I'd like to double in the next 12 months.  What it's used for is
widely varying and hard to classify some uses are hundreds of tiny
nodes others are looking to monopolize the biggest physical system
they can get.  I think most really heavy IO currently goes to our NAS
servers rather than through nova-volumes but that could change.

Anyone using ceph at that scale (or preferably larger)?  Does it keep
up if you keep throwing hardware at it?  My proof of concept ceph
cluster on crappy salvaged hardware has proved the concept to me but
has (unsurprisingly) crappy salvaged performance. Trying to get a
sense of what performance expectations I should have given decent
hardware before I decide if I should buy decent hardware for it...

Thanks,
-Jon



[Openstack] Looking for an openstack developer job

2012-07-23 Thread Hengqing Hu

Hi,

Sorry for the disturbance.

This is Hengqing Hu from Shanghai, China, 29 years old, male,
looking for an openstack developer job.
I prefer to work from home; I am legally allowed
to work in China, and would also accept overseas jobs if offered.
If you are seeking an openstack developer,
have a look at my resume here:
https://www.dropbox.com/s/41gzc974s6ay9uy/ResumeOfHengqingHuDetailed.pdf

I would also appreciate any kind person who would refer me to their employer.


I may not be good at expressing myself, but I am good at solving technical problems.

Best Regards, Hengqing Hu



Re: [Openstack] Incremental Backup of Instances

2012-07-23 Thread Wolfgang Hennerbichler
You could solve something like this with Bacula incremental backups. I'm
doing this (though not with OpenStack) as follows:
make an LVM snapshot (or qcow2 snapshot) of the instance, then write a script
that mounts the filesystem and lets Bacula back it up (incrementally).
Bacula can drive the actual snapshotting and mounting, so you don't need to
coordinate when to create a snapshot and so on. This works very well,
but is hard to configure, too. To restore the machine, in my case one creates
an LV and restores the whole filesystem there. This works very well
for Linux VMs; it might be harder with Windows VMs.
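The snapshot-then-backup flow described above can be sketched as the following sequence of commands. The volume names, snapshot size, and backup target are hypothetical, and with dry_run=True (the default) the commands are only printed, not executed:

```python
import subprocess

def snapshot_backup(vg="vg0", lv="instance-0001", size="1G",
                    mnt="/mnt/snap", dry_run=True):
    """Back up an instance's LV via a temporary copy-on-write snapshot."""
    snap = lv + "-snap"
    steps = [
        # CoW snapshot of the running LV:
        ["lvcreate", "--snapshot", "--size", size,
         "--name", snap, "/dev/%s/%s" % (vg, lv)],
        # Mount read-only for the backup pass:
        ["mount", "-o", "ro", "/dev/%s/%s" % (vg, snap), mnt],
        # Point bacula (or plain tar, as here) at the mounted snapshot:
        ["tar", "-czf", "/backup/%s-incr.tar.gz" % lv, "-C", mnt, "."],
        ["umount", mnt],
        # Drop the snapshot once the backup is done:
        ["lvremove", "-f", "/dev/%s/%s" % (vg, snap)],
    ]
    for cmd in steps:
        if dry_run:
            print("+", " ".join(cmd))
        else:
            subprocess.check_call(cmd)
    return steps

snapshot_backup()  # dry run: prints the five commands
```

Restoring mirrors the last step in reverse: create a fresh LV and unpack the archive onto it.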


Wolfgang

On 07/23/2012 05:22 AM, Kobagana Kumar wrote:

Hi All,

I am working on delta changes of an instance. Can you please tell me
the procedure to take incremental backups (delta changes) of VMs,
instead of taking a snapshot of the entire instance?

And also, please tell me how to apply those delta changes to an instance.

Thanks & Regards,

Bharath Kumar Kobagana








--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz

IT-Center
Softwarepark 35
4232 Hagenberg
Austria

Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbich...@risc-software.at
http://www.risc-software.at
