Re: [openstack-dev] problems with rabbitmq on HA controller failure...anyone seen this?

2013-11-29 Thread Chris Friesen

On 11/29/2013 06:37 PM, David Koo wrote:

> On Nov 29, 02:22:17 PM (Friday), Chris Friesen wrote:
>> We're currently running Grizzly (going to Havana soon) and we're
>> running into an issue where if the active controller is ungracefully
>> killed then nova-compute on the compute node doesn't properly
>> connect to the new rabbitmq server on the newly-active controller
>> node.
>>
>> Interestingly, killing and restarting nova-compute on the compute
>> node seems to work, which implies that the retry code is doing
>> something less effective than the initial startup.
>>
>> Has anyone doing HA controller setups run into something similar?


As a followup, it looks like if I wait for 9 minutes or so I see a 
message in the compute logs:


2013-11-30 00:02:14.756 1246 ERROR nova.openstack.common.rpc.common [-] 
Failed to consume message from queue: Socket closed


It then reconnects to the AMQP server and everything is fine after that. 
However, any instances that I tried to boot during those 9 minutes 
stay stuck in the "BUILD" status.





> So the rabbitmq server and the controller are on the same node?


Yes, they are.

> My guess is that it's related to bug 856764 (RabbitMQ connections
> lack heartbeat or TCP keepalives). The gist of it is that since there
> are no heartbeats between the MQ and nova-compute, if the MQ goes down
> ungracefully then nova-compute has no way of knowing. If the MQ goes
> down gracefully then the MQ clients are notified and so the problem
> doesn't arise.


Sounds about right.


> We got bitten by the same bug a while ago when our controller node
> got hard reset without any warning! It came down to this bug (which,
> unfortunately, doesn't have a fix yet). We worked around it by
> implementing our own crude fix - we wrote a simple app that periodically
> checks whether the MQ is alive (write a short message into the MQ, then
> read it back out). When this fails n times in a row we restart
> nova-compute. Very ugly, but it worked!


Sounds reasonable.

I did notice a kombu heartbeat change that was submitted and then backed 
out again because it was buggy. I guess we're still waiting on the real fix?


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with rabbitmq on HA controller failure...anyone seen this?

2013-11-29 Thread David Koo
On Nov 29, 02:22:17 PM (Friday), Chris Friesen wrote:
> We're currently running Grizzly (going to Havana soon) and we're
> running into an issue where if the active controller is ungracefully
> killed then nova-compute on the compute node doesn't properly
> connect to the new rabbitmq server on the newly-active controller
> node.
> 
> I saw a bugfix in Folsom
> (https://bugs.launchpad.net/nova/+bug/718869) to retry the
> connection to rabbitmq if it's lost, but it doesn't seem to be
> properly handling this case.
> 
> Interestingly, killing and restarting nova-compute on the compute
> node seems to work, which implies that the retry code is doing
> something less effective than the initial startup.
> 
> Has anyone doing HA controller setups run into something similar?

So the rabbitmq server and the controller are on the same node? My
guess is that it's related to bug 856764 (RabbitMQ connections
lack heartbeat or TCP keepalives). The gist of it is that since there
are no heartbeats between the MQ and nova-compute, if the MQ goes down
ungracefully then nova-compute has no way of knowing. If the MQ goes
down gracefully then the MQ clients are notified and so the problem
doesn't arise.
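
For illustration, TCP keepalives of the kind the bug asks for are enabled
per socket; a minimal sketch in plain Python (not the actual nova/kombu
code, and the Linux-specific knobs are assumptions about the platform):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Enable keepalives so a dead peer is eventually detected even on an
    # otherwise idle connection.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-only tuning: probe after 60s idle, every 10s, give up after
    # 5 failed probes (roughly two minutes to detect a dead broker).
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
    sock.connect(('rabbit-host', 5672))

Without something like this (or AMQP-level heartbeats), an ungracefully
killed broker leaves the client blocked on a half-open connection until
the kernel's own timeouts fire, which matches the ~9 minute delay
reported above.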

We got bitten by the same bug a while ago when our controller node
got hard reset without any warning! It came down to this bug (which,
unfortunately, doesn't have a fix yet). We worked around it by
implementing our own crude fix - we wrote a simple app that periodically
checks whether the MQ is alive (write a short message into the MQ, then
read it back out). When this fails n times in a row we restart
nova-compute. Very ugly, but it worked!
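
A minimal sketch of that crude health check, assuming the kombu client
and a sysvinit-style restart command (both assumptions, not Koo's actual
code):

    import subprocess
    import time

    from kombu import Connection

    FAILURE_LIMIT = 3

    def mq_alive():
        # Round-trip a short message through the broker: publish it,
        # then read it back. Any error or timeout counts as a failure.
        try:
            with Connection('amqp://guest:guest@localhost:5672//',
                            connect_timeout=5) as conn:
                queue = conn.SimpleQueue('mq_health_check')
                queue.put('ping')
                queue.get(block=True, timeout=5).ack()
                queue.close()
            return True
        except Exception:
            return False

    failures = 0
    while True:
        failures = 0 if mq_alive() else failures + 1
        if failures >= FAILURE_LIMIT:
            # Assumed restart mechanism; adjust for your init system.
            subprocess.call(['service', 'nova-compute', 'restart'])
            failures = 0
        time.sleep(30)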

--
Koo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 router service integration with Service Type Framework

2013-11-29 Thread Gary Duan
FYI, I pushed a code review for the blueprint. The patch is missing unit
tests and tempest tests; it's only submitted for discussion.

The patch implements a two-step commit process similar to ML2, but it's not
intended to solve all race conditions. Another thing I think is worth
discussing is how to work with the agent scheduler.

Thanks,
Gary


On Thu, Oct 24, 2013 at 11:56 AM, Gary Duan  wrote:

> Hi,
>
> I've registered a BP for L3 router service integration with service
> framework.
>
> https://blueprints.launchpad.net/neutron/+spec/l3-router-service-type
>
> In general, the implementation will align with how LBaaS is integrated
> with the framework. One consideration we heard from several team members is
> to be able to support vendor specific features and extensions in the
> service plugin.
>
> Any comment is welcome.
>
> Thanks,
> Gary
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] problems with rabbitmq on HA controller failure...anyone seen this?

2013-11-29 Thread Chris Friesen

Hi,

We're currently running Grizzly (going to Havana soon) and we're running 
into an issue where if the active controller is ungracefully killed then 
nova-compute on the compute node doesn't properly connect to the new 
rabbitmq server on the newly-active controller node.


I saw a bugfix in Folsom (https://bugs.launchpad.net/nova/+bug/718869) 
to retry the connection to rabbitmq if it's lost, but it doesn't seem to 
be properly handling this case.


Interestingly, killing and restarting nova-compute on the compute node 
seems to work, which implies that the retry code is doing something less 
effective than the initial startup.


Has anyone doing HA controller setups run into something similar?

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Service scoped role definition

2013-11-29 Thread David Chadwick
Hi Arvind

I have added my two-penneth to the latest version.

I look forward to your comments

regards

David


On 26/11/2013 23:07, Tiwari, Arvind wrote:
> Hi David,
> 
> Thanks for your time and valuable comments. I have replied to your comments 
> and tried to explain why I am advocating this BP.
> 
> Let me know your thoughts; please feel free to update the etherpad below:
> https://etherpad.openstack.org/p/service-scoped-role-definition
> 
> Thanks again,
> Arvind
> 
> -Original Message-
> From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk] 
> Sent: Monday, November 25, 2013 12:12 PM
> To: Tiwari, Arvind; OpenStack Development Mailing List
> Cc: Henry Nash; ayo...@redhat.com; dolph.math...@gmail.com; Yee, Guang
> Subject: Re: [openstack-dev] [keystone] Service scoped role definition
> 
> Hi Arvind
> 
> I have just added some comments to your blueprint page
> 
> regards
> 
> David
> 
> 
> On 19/11/2013 00:01, Tiwari, Arvind wrote:
>> Hi,
>>
>>  
>>
>> Based on our discussion at the design summit, I have redone the service_id
>> binding with roles BP.
>> I have added a new BP (link below) along with a detailed use case to
>> support this BP.
>>
>> https://blueprints.launchpad.net/keystone/+spec/service-scoped-role-definition
>>
>> Below etherpad link has some proposals for Role REST representation and
>> pros and cons analysis
>>
>>  
>>
>> https://etherpad.openstack.org/p/service-scoped-role-definition
>>
>>  
>>
>> Please take a look and let me know your thoughts.
>>
>>  
>>
>> It would be awesome if we can discuss it in tomorrow's meeting.
>>
>>  
>>
>> Thanks,
>>
>> Arvind
>>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-11-29 Thread Doug Hellmann
On Fri, Nov 29, 2013 at 2:14 PM, Sandy Walsh wrote:

> So, as I mention in the branch, what about deployments that haven't
> transitioned to the library but would like to cherry pick this feature?
>
> "after it starts moving into a library" can leave a very big gap when the
> functionality isn't available to users.
>

Are those deployments tracking trunk or a stable branch? Because IIUC, we
don't add features like this to stable branches for the main components,
either, and if they are tracking trunk then they will get the new feature
when it ships in a project that uses it. Are you suggesting something in
between?

Doug



>
> -S
>
> 
> From: Eric Windisch [e...@cloudscaling.com]
> Sent: Friday, November 29, 2013 2:47 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [oslo] maintenance policy for code graduating
> from the incubator
>
> > Based on that, I would like to say that we do not add new features to
> > incubated code after it starts moving into a library, and only provide
> > "stable-like" bug fix support until integrated projects are moved over to
> > the graduated library (although even that is up for discussion). After
> all
> > integrated projects that use the code are using the library instead of
> the
> > incubator, we can delete the module(s) from the incubator.
>
> +1
>
> Although never formalized, this is how I had expected we would handle
> the graduation process. It is also how we have been responding to
> patches and blueprints offering improvements and feature requests for
> oslo.messaging.
>
> --
> Regards,
> Eric Windisch
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Pinging people for reviews in IRC

2013-11-29 Thread David Kranz
Folks, I understand that the review latency can be too long. We just 
added two core reviewers and I am sure we can do better still. But 
please, if you feel you must ping some one by name for a review, do so 
in #openstack-qa rather than pinging on a private channel. That way 
other people might see it as well.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-11-29 Thread Sandy Walsh
So, as I mention in the branch, what about deployments that haven't 
transitioned to the library but would like to cherry pick this feature? 

"after it starts moving into a library" can leave a very big gap when the 
functionality isn't available to users.

-S


From: Eric Windisch [e...@cloudscaling.com]
Sent: Friday, November 29, 2013 2:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo] maintenance policy for code graduating from 
the incubator

> Based on that, I would like to say that we do not add new features to
> incubated code after it starts moving into a library, and only provide
> "stable-like" bug fix support until integrated projects are moved over to
> the graduated library (although even that is up for discussion). After all
> integrated projects that use the code are using the library instead of the
> incubator, we can delete the module(s) from the incubator.

+1

Although never formalized, this is how I had expected we would handle
the graduation process. It is also how we have been responding to
patches and blueprints offering improvements and feature requests for
oslo.messaging.

--
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][qa] Punting ceilometer from whitelist

2013-11-29 Thread David Kranz
In preparing to fail builds on log errors I have been trying to make 
things easier for projects by maintaining a whitelist. But these bugs in 
ceilometer are coming in so fast that I can't keep up. So I am just 
putting ".*" in the whitelist for any cases I find before gate failing 
is turned on, hopefully early this week.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-11-29 Thread Boris Pavlovic
Doug,


Based on that, I would like to say that we do not add new features to
incubated code after it starts moving into a library, and only provide
"stable-like" bug fix support until integrated projects are moved over to
the graduated library (although even that is up for discussion). After all
integrated projects that use the code are using the library instead of the
incubator, we can delete the module(s) from the incubator.


Sounds good and right.


Best regards,
Boris Pavlovic



On Fri, Nov 29, 2013 at 10:47 PM, Eric Windisch wrote:

> > Based on that, I would like to say that we do not add new features to
> > incubated code after it starts moving into a library, and only provide
> > "stable-like" bug fix support until integrated projects are moved over to
> > the graduated library (although even that is up for discussion). After
> all
> > integrated projects that use the code are using the library instead of
> the
> > incubator, we can delete the module(s) from the incubator.
>
> +1
>
> Although never formalized, this is how I had expected we would handle
> the graduation process. It is also how we have been responding to
> patches and blueprints offering improvements and feature requests for
> oslo.messaging.
>
> --
> Regards,
> Eric Windisch
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-11-29 Thread Eric Windisch
> Based on that, I would like to say that we do not add new features to
> incubated code after it starts moving into a library, and only provide
> "stable-like" bug fix support until integrated projects are moved over to
> the graduated library (although even that is up for discussion). After all
> integrated projects that use the code are using the library instead of the
> incubator, we can delete the module(s) from the incubator.

+1

Although never formalized, this is how I had expected we would handle
the graduation process. It is also how we have been responding to
patches and blueprints offering improvements and feature requests for
oslo.messaging.

-- 
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-11-29 Thread Doug Hellmann
We have a review up (https://review.openstack.org/#/c/58297/) to add some
features to the notification system in the oslo incubator. The notification
system is being moved into oslo.messaging, and so we have the question of
whether to accept the patch to the incubated version, move it to
oslo.messaging, or carry it in both.

As I say in the review, from a practical standpoint I think we can't really
support continued development in both places. Given the number of times the
topic of "just make everything a library" has come up, I would prefer that
we focus our energy on completing the transition for a given module or
library once the process starts. We also need to avoid feature drift,
and provide a clear incentive for projects to update to the new library.

Based on that, I would like to say that we do not add new features to
incubated code after it starts moving into a library, and only provide
"stable-like" bug fix support until integrated projects are moved over to
the graduated library (although even that is up for discussion). After all
integrated projects that use the code are using the library instead of the
incubator, we can delete the module(s) from the incubator.

Before we make this policy official, I want to solicit feedback from the
rest of the community and the Oslo core team.

Thanks,

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] storage driver testing

2013-11-29 Thread Julien Danjou
On Fri, Nov 29 2013, Sandy Walsh wrote:

> For our purposes we aren't interested in the collector. We're purely
> testing the performance of the storage drivers and the underlying
> databases.

Then the question would map to: with how many SQL connections are you
injecting things into the DB in parallel?

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] storage driver testing

2013-11-29 Thread Boris Pavlovic
Sandy,

Seems like we should think about how we can combine our approaches.
Rally generates load using the python clients (e.g. the ceilometer python
client) with different numbers of users/tenants/active_users/... So it
addresses point #2.

About the profiling part: we actually attempted to build a profiling system
based on tomograph + zipkin, but after we finished working on it we had a
complex and unstable solution. So we took a look at ceilometer, and it seems
like the perfect solution for storing profiling data. We are almost done
with this part. The only thing we need now is a visualization system, which
could be ported from zipkin.


So it would be nice if you were able to join our efforts and help with
testing ceilometer & building an OpenStack profiling system.


Best regards,
Boris Pavlovic



On Fri, Nov 29, 2013 at 9:05 PM, Sandy Walsh wrote:

>
>
> On 11/29/2013 11:32 AM, Nadya Privalova wrote:
> > Hello Sandy,
> >
> > I'm very interested in performance results for Ceilometer. Now we have
> > successfully installed Ceilometer in the HA-lab with 200 computes and 3
> > controllers. Now it works pretty well with MySQL. Our next steps are:
> >
> > 1. Configure alarms
> > 2. Try to use Rally for OpenStack performance with MySQL and MongoDB
> > (https://wiki.openstack.org/wiki/Rally)
> >
> > We are open to any suggestions.
>
> Awesome, as a group we really need to start a similar effort as the
> storage driver tests for ceilometer in general.
>
> I assume you're just pulling Samples via the agent? We're really just
> focused on event storage and retrieval.
>
> There seems to be three levels of load testing required:
> 1. testing through the collectors (either sample or event collection)
> 2. testing load on the CM api
> 3. testing the storage drivers.
>
> Sounds like you're addressing #1, we're addressing #3 and Tempest
> integration tests will be handling #2.
>
> I should also add that we've instrumented the db and ceilometer hosts
> using Diamond to statsd/graphite for tracking load on the hosts while
> the tests are underway. This will help with determining how many
> collectors we need, where the bottlenecks are coming from, etc.
>
> It might be nice to standardize on that so we can compare results?
>
> -S
>
> >
> > Thanks,
> > Nadya
> >
> >
> >
> > On Wed, Nov 27, 2013 at 9:42 PM, Sandy Walsh wrote:
> >
> > Hey!
> >
> > We've ballparked that we need to store a million events per day. To
> > that end, we're flip-flopping between sql and no-sql solutions,
> > hybrid solutions that include elastic search and other schemes.
> > Seems every road we go down has some limitations. So, we've started
> > working on test suite for load testing the ceilometer storage
> > drivers. The intent is to have a common place to record our findings
> > and compare with the efforts of others.
> >
> > There's an etherpad where we're tracking our results [1] and a test
> > suite that we're building out [2]. The test suite works against a
> > fork of ceilometer where we can keep our experimental storage driver
> > tweaks [3].
> >
> > The test suite hits the storage drivers directly, bypassing the api,
> > but still uses the ceilometer models. We've added support for
> > dumping the results to statsd/graphite for charting of performance
> > results in real-time.
> >
> > If you're interested in large scale deployments of ceilometer, we
> > would welcome any assistance.
> >
> > Thanks!
> > -Sandy
> >
> > [1]
> https://etherpad.openstack.org/p/ceilometer-data-store-scale-testing
> > [2] https://github.com/rackerlabs/ceilometer-load-tests
> > [3] https://github.com/rackerlabs/instrumented-ceilometer
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] storage driver testing

2013-11-29 Thread Sandy Walsh


On 11/29/2013 11:32 AM, Nadya Privalova wrote:
> Hello Sandy,
> 
> I'm very interested in performance results for Ceilometer. Now we have
> successfully installed Ceilometer in the HA-lab with 200 computes and 3
> controllers. Now it works pretty well with MySQL. Our next steps are:
> 
> 1. Configure alarms
> 2. Try to use Rally for OpenStack performance with MySQL and MongoDB
> (https://wiki.openstack.org/wiki/Rally)
> 
> We are open to any suggestions.

Awesome, as a group we really need to start a similar effort as the
storage driver tests for ceilometer in general.

I assume you're just pulling Samples via the agent? We're really just
focused on event storage and retrieval.

There seems to be three levels of load testing required:
1. testing through the collectors (either sample or event collection)
2. testing load on the CM api
3. testing the storage drivers.

Sounds like you're addressing #1, we're addressing #3 and Tempest
integration tests will be handling #2.

I should also add that we've instrumented the db and ceilometer hosts
using Diamond to statsd/graphite for tracking load on the hosts while
the tests are underway. This will help with determining how many
collectors we need, where the bottlenecks are coming from, etc.

It might be nice to standardize on that so we can compare results?
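
For reference, dumping timings to statsd needs no special library; the
wire format is a plain UDP line protocol. A minimal, hypothetical emitter
(metric names and ports here are illustrative):

    import socket
    import time

    def report_timer(name, elapsed_ms, host='127.0.0.1', port=8125):
        # statsd line protocol: "<metric>:<value>|ms" is a timer sample.
        payload = '%s:%d|ms' % (name, elapsed_ms)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(payload.encode('utf-8'), (host, port))

    def do_insert():
        time.sleep(0.01)  # stand-in for one storage-driver write

    start = time.time()
    do_insert()
    report_timer('storage.insert', (time.time() - start) * 1000)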

-S

> 
> Thanks,
> Nadya
> 
> 
> 
> On Wed, Nov 27, 2013 at 9:42 PM, Sandy Walsh wrote:
> 
> Hey!
> 
> We've ballparked that we need to store a million events per day. To
> that end, we're flip-flopping between sql and no-sql solutions,
> hybrid solutions that include elastic search and other schemes.
> Seems every road we go down has some limitations. So, we've started
> working on test suite for load testing the ceilometer storage
> drivers. The intent is to have a common place to record our findings
> and compare with the efforts of others.
> 
> There's an etherpad where we're tracking our results [1] and a test
> suite that we're building out [2]. The test suite works against a
> fork of ceilometer where we can keep our experimental storage driver
> tweaks [3].
> 
> The test suite hits the storage drivers directly, bypassing the api,
> but still uses the ceilometer models. We've added support for
> dumping the results to statsd/graphite for charting of performance
> results in real-time.
> 
> If you're interested in large scale deployments of ceilometer, we
> would welcome any assistance.
> 
> Thanks!
> -Sandy
> 
> [1] https://etherpad.openstack.org/p/ceilometer-data-store-scale-testing
> [2] https://github.com/rackerlabs/ceilometer-load-tests
> [3] https://github.com/rackerlabs/instrumented-ceilometer
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] storage driver testing

2013-11-29 Thread Sandy Walsh


On 11/29/2013 11:41 AM, Julien Danjou wrote:
> On Fri, Nov 29 2013, Nadya Privalova wrote:
> 
>> I'm very interested in performance results for Ceilometer. Now we have
>> successfully installed Ceilometer in the HA-lab with 200 computes and 3
>> controllers. Now it works pretty good with MySQL. Our next steps are:
> 
> What I'd like to know in both your and Sandy's tests, is the number of
> collector you are running in parallel.

For our purposes we aren't interested in the collector. We're purely
testing the performance of the storage drivers and the underlying
databases.




> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][metrics] Additional metrics

2013-11-29 Thread Murray, Paul (HP Cloud Services)
Hi Abbass, 

I am in the process of coding some of this now - take a look at 

https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking - now 
has a specification document attached 
https://etherpad.openstack.org/p/IcehouseNovaExtensibleSchedulerMetrics - the 
design summit session on this topic

see what you think and feel free to comment - I think it covers exactly what 
you describe.

Paul.


Paul Murray
HP Cloud Services
+44 117 312 9309

Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England. The contents of this message and any attachments 
to it are confidential and may be legally privileged. If you have received this 
message in error, you should delete it from your system immediately and advise 
the sender. To any recipient of this message within HP, unless otherwise stated 
you should consider this message and attachments as "HP CONFIDENTIAL".



-Original Message-
From: Lu, Lianhao [mailto:lianhao...@intel.com] 
Sent: 22 November 2013 02:03
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][scheduler][metrics] Additional metrics


Abbass MAROUNI wrote on 2013-11-21:
> Hello,
> 
> I'm in the process of writing a new scheduling algorithm for openstack nova.
> I have a set of compute nodes that I'm going to filter and weigh according to 
> some metrics collected from these compute nodes.
> I saw nova.compute.resource_tracker and metrics (ram, disk and cpu) 
> that it collects from compute nodes and updates the rows corresponding to 
> compute nodes in the database.
> 
> I'm planning to write some modules that will collect the new metrics 
> but I'm wondering if I need to modify the database schema by adding 
> more columns in the 'compute_nodes' table for my new metrics. Will 
> this require some modification to the compute model ? Then how can I use 
> these metrics during the scheduling process, do I fetch each compute node row 
> from the database ? Is there any easier way around this problem ?
> 
> Best Regards,

There are currently some effort on this:
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking 

- Lianhao
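
For readers coming to this fresh, a custom weigher against the
Havana-era scheduler interface is quite small. A hypothetical sketch,
modelled on nova's built-in RAMWeigher (the metric used is just an
example):

    from nova.scheduler import weights

    class FreeDiskWeigher(weights.BaseHostWeigher):
        """Prefer hosts with the most free disk space."""

        def _weigh_object(self, host_state, weight_properties):
            # host_state carries the data the resource tracker reports
            # for one compute node; higher weights win.
            return host_state.free_disk_mb

Enabling it would then be a matter of listing the class in nova's
scheduler_weight_classes option rather than adding columns to
compute_nodes by hand.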


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Working group on language packs

2013-11-29 Thread Monty Taylor


On 11/25/2013 12:39 PM, Georgy Okrokvertskhov wrote:
> Hi,
> 
> Just for clarification on Windows images, I think Windows image
> creation is closer to the Docker approach. In order to create a special
> Windows image we use a KVM/QEMU VM with an initial base image, then install
> all necessary components, configure them, run the special tool
> sysprep to remove all machine-specific information like passwords and
> SIDs, and then create a snapshot.
> 
> I got the impression that Docker does the same: it installs an application
> on a running VM and then creates a snapshot.
> 
> It looks like this can be done using Heat + HOT software
> orchestration/deployment tools without any additional services. This
> solution scales very well as all configuration steps are executed inside
> a VM.

Right. This is essentially what diskimage-builder does now. You don't
even need heat for it - it does all of that locally and makes a nice qcow
for you - but it starts with a base cloud image, executes commands in
it, and saves the results.

> 
> On Sat, Nov 23, 2013 at 4:30 PM, Clayton Coleman wrote:
> 
> 
> 
> > On Nov 23, 2013, at 6:48 PM, Robert Collins
> > <robe...@robertcollins.net> wrote:
> >
> > Ok, so no - diskimage-builder builds regular OpenStack full disk
> disk images.
> >
> > Translating that to a filesystem is easy; doing a diff against another
> > filesystem version is also doable, and if the container service for
> > Nova understands such partial container contents you could certainly
> > glue it all in together, but we don't have any specific glue for that
> > today.
> >
> > I think docker is great, and if the goal of solum is to deploy via
> > docker, I'd suggest using docker - no need to make diskimage-builder
> > into a docker clone.
> >
> > OTOH if you're deploying via heat, I think Diskimage-builder is
> > targeted directly at your needs : we wrote it for deploying OpenStack
> > after all.
> 
> I think we're targeting all possible deployment paths, rather than
> just one.  Docker simply represents one emerging direction for
> deployments due to its speed and efficiency (which vms can't match).
> 
> The base concept (images and image like constructs that can be
> started by nova) provides a clean abstraction - how those images are
> created is specific to the ecosystem or organization.  An
> organization that is heavily invested in a particular image creation
> technology already can still take advantage of Solum, because all
> that is necessary for Solum to know about is a thin shim around
> transforming that base image into a deployable image.  The developer
> and administrative support roles can split responsibilities - one
> that maintains a baseline, and one that consumes that baseline.
> 
> >
> > -Rob
> >
> >
> >> On 24 November 2013 12:24, Adrian Otto wrote:
> >>
> >> On Nov 23, 2013, at 2:39 PM, Robert Collins
> >> <robe...@robertcollins.net> wrote:
> >>
> >>> On 24 November 2013 05:42, Clayton Coleman wrote:
> >>>
> > Containers will work fine in diskimage-builder. One only needs
> to hack
> > in the ability to save in the container image format rather
> than qcow2.
> 
>  That's good to know.  Will diskimage-builder be able to break
> those down into multiple layers?
> >>>
> >>> What do you mean?
> >>
> >> Docker images can be layered. You can have a base image on the
> bottom, and then an arbitrary number of deltas on top of that. It
> essentially works like incremental backups do. You can think of it
> as each "layer" has a parent image, and if they all collapse
> together, you get the current state. Keeping track of past layers
> gives you the potential for rolling back to a particular restore
> point, or only distributing incremental changes when you know that
> the previous layer is already on the host.
> >>
> >>>
> >>> -Rob
> >>>
> >>>
> >>> --
> >>> Robert Collins <rbtcoll...@hp.com>
> >>> Distinguished Technologist
> >>> HP Converged Cloud
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> 
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> 
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > --
> > Robert Collins <rbtcoll...@hp.com>
> >

Re: [openstack-dev] request-id in API response

2013-11-29 Thread Sean Dague
On 11/29/2013 10:33 AM, Jay Pipes wrote:
> On 11/28/2013 07:45 AM, Akihiro Motoki wrote:
>> Hi,
>>
>> I am working on adding request-id to API response in Neutron.
>> After I checked what header is used in other projects
>> header name varies project by project.
>> It seems there is no consensus what header is recommended
>> and it is better to have some consensus.
>>
>>nova: x-compute-request-id
>>cinder:   x-compute-request-id
>>glance:   x-openstack-request-id
>>neutron:  x-network-request-id  (under review)
>>
>> request-id is assigned and used inside of each project now,
>> so x-<service>-request-id looks good. On the other hand,
>> if we have a plan to enhance request-id across projects,
>> x-openstack-request-id looks better.
> 
> My vote is for:
> 
> x-openstack-request-id
> 
> With an implementation of "create a request UUID if none exists yet" in
> some standardized WSGI middleware...

Agreed. I don't think I see any value in having these have different
service names, having just x-openstack-request-id across all the
services seems a far better idea, and come back through and fix nova and
cinder to be that as well.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][messaging]: expected delivery guarantees & handling failure

2013-11-29 Thread Gordon Sim
What are the expected delivery guarantees for messages for rpc 
transports in oslo.messaging?


Are failures to be handled by the applications using the rpc interface 
(e.g. by retrying on timeout)? Or is the transport expected to provide 
the necessary reliability of messages? Or is this something that can 
vary by transport (i.e. with different qos for different transports)?


I see that acks seem to be used for consumers with rabbit, but there is no 
means in AMQP 0-9-1 to confirm published messages (Rabbit has an 
extension, but I can't see this being used either). So though the 
delivery from broker to consumers is reliable (assuming durable or 
replicated storage of the message by the broker), the delivery from 
producers to the broker is not.
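
The Rabbit extension referred to here is publisher confirms. For
illustration only (this is not what the driver does today), enabling it
with the pika client looks roughly like:

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = conn.channel()
    channel.queue_declare(queue='demo', durable=True)
    # Put the channel into confirm mode: the broker now acks (or nacks)
    # each publish, closing the producer-to-broker gap described above.
    channel.confirm_delivery()
    channel.basic_publish(
        exchange='',
        routing_key='demo',
        body='hello',
        properties=pika.BasicProperties(delivery_mode=2))  # persistent
    # Depending on the pika version, an unconfirmed publish either
    # returns False or raises an exception at this point.
    conn.close()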


For qpid the queues are always autodelete and no reliability is set for 
consumers (i.e. messages are assumed delivered without waiting for 
acknowledgement). As the reconnect is handled by the driver rather than 
the underlying qpid.messaging client, in-doubt published messages aren't 
replayed either.


I'm very new to the code so I may be missing something, but it seems 
like messages could be lost regardless of broker configuration with 
either the rabbit or qpid drivers and just wanted to confirm this is as 
intended.


--Gordon.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] storage driver testing

2013-11-29 Thread Julien Danjou
On Fri, Nov 29 2013, Nadya Privalova wrote:

> I'm very interested in performance results for Ceilometer. Now we have
> successfully installed Ceilometer in the HA-lab with 200 computes and 3
> controllers. Now it works pretty good with MySQL. Our next steps are:

What I'd like to know in both your and Sandy's tests, is the number of
collector you are running in parallel.

-- 
Julien Danjou
/* Free Software hacker * independent consultant
   http://julien.danjou.info */


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request-id in API response

2013-11-29 Thread Jay Pipes

On 11/28/2013 07:45 AM, Akihiro Motoki wrote:

Hi,

I am working on adding request-id to API response in Neutron.
After I checked what header is used in other projects
header name varies project by project.
It seems there is no consensus what header is recommended
and it is better to have some consensus.

   nova: x-compute-request-id
   cinder:   x-compute-request-id
   glance:   x-openstack-request-id
   neutron:  x-network-request-id  (under review)

request-id is assigned and used inside of each project now,
so x-<service>-request-id looks good. On the other hand,
if we have a plan to enhance request-id across projects,
x-openstack-request-id looks better.


My vote is for:

x-openstack-request-id

With an implementation of "create a request UUID if none exists yet" in 
some standardized WSGI middleware...
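
A minimal sketch of such middleware (hypothetical; the header name
follows the proposal above, and the req- prefix mirrors the request ids
already seen in service logs):

    import uuid

    class RequestIdMiddleware(object):
        """Ensure every request and response carries a request id."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            # Create a request UUID if none exists yet and expose it to
            # the rest of the pipeline via the WSGI environment.
            req_id = environ.get('HTTP_X_OPENSTACK_REQUEST_ID')
            if not req_id:
                req_id = 'req-%s' % uuid.uuid4()
                environ['HTTP_X_OPENSTACK_REQUEST_ID'] = req_id

            def request_id_start_response(status, headers, exc_info=None):
                headers.append(('x-openstack-request-id', req_id))
                return start_response(status, headers, exc_info)

            return self.app(environ, request_id_start_response)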


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] storage driver testing

2013-11-29 Thread Nadya Privalova
Hello Sandy,

I'm very interested in performance results for Ceilometer. Now we have
successfully installed Ceilometer in the HA-lab with 200 computes and 3
controllers. Now it works pretty well with MySQL. Our next steps are:

1. Configure alarms
2. Try to use Rally for OpenStack performance with MySQL and MongoDB (
https://wiki.openstack.org/wiki/Rally)

We are open to any suggestions.

Thanks,
Nadya



On Wed, Nov 27, 2013 at 9:42 PM, Sandy Walsh wrote:

> Hey!
>
> We've ballparked that we need to store a million events per day. To that
> end, we're flip-flopping between sql and no-sql solutions, hybrid solutions
> that include elastic search and other schemes. Seems every road we go down
> has some limitations. So, we've started working on test suite for load
> testing the ceilometer storage drivers. The intent is to have a common
> place to record our findings and compare with the efforts of others.
>
> There's an etherpad where we're tracking our results [1] and a test suite
> that we're building out [2]. The test suite works against a fork of
> ceilometer where we can keep our experimental storage driver tweaks [3].
>
> The test suite hits the storage drivers directly, bypassing the api, but
> still uses the ceilometer models. We've added support for dumping the
> results to statsd/graphite for charting of performance results in real-time.
>
> If you're interested in large scale deployments of ceilometer, we would
> welcome any assistance.
>
> Thanks!
> -Sandy
>
> [1] https://etherpad.openstack.org/p/ceilometer-data-store-scale-testing
> [2] https://github.com/rackerlabs/ceilometer-load-tests
> [3] https://github.com/rackerlabs/instrumented-ceilometer
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Oslo] Future of Key Distribution Server, Trusted Messaging

2013-11-29 Thread Mark McLoughlin
Hey

Anyone got an update on this?

The keystone blueprint for KDS was marked approved on Tuesday:

  https://blueprints.launchpad.net/keystone/+spec/key-distribution-server

and a new keystone review was added on Sunday, but it must be a draft
since I can't access it:

   https://review.openstack.org/58124

Thanks,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-11-29 Thread Thierry Carrez
Robert Collins wrote:
> https://etherpad.openstack.org/p/icehouse-external-scheduler

Just looked into it with release management / TC hat on and I have a
(possibly minor) concern on the deprecation path/timing.

Assuming everything goes well, the separate scheduler will be
fast-tracked through incubation in I, graduate at the end of the I cycle
to be made a fully-integrated project in the J release.

Your deprecation path description mentions that the internal scheduler
will be deprecated in I, although there is no "released" (or
security-supported) alternative to switch to at that point. It's not
until the J release that such an alternative will be made available.

So IMHO for the release/security-oriented users, the switch point is
when they start upgrading to J, and not the final step of their upgrade
to I (as suggested by the "deploy the external scheduler and switch over
before you consider your migration to I complete" wording in the
Etherpad). As the first step towards *switching to J* you would install
the new scheduler before upgrading Nova itself. That works whether
you're a CD user (and start deploying pre-J stuff just after the I
release), or a release user (and wait until J final release to switch to
it).

Maybe we are talking about the same thing (the migration to the separate
scheduler must happen after the I release and, at the latest, when you
switch to the J release) -- but I wanted to make sure we were on the
same page.

I also assume that all the other "scheduler-consuming" projects would
develop the capability to talk to the external scheduler during the J
cycle, so that their own schedulers would be deprecated in the J release and
removed at the start of K. That would be, to me, the condition to
considering the external scheduler as "integrated" with (even if not
mandatory for) the rest of the common release components.

Does that work for you ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Definition feedback

2013-11-29 Thread Jay Pipes

On 11/27/2013 10:15 PM, Adrian Otto wrote:


On Nov 27, 2013, at 11:27 AM, Jay Pipes  wrote:


On 11/27/2013 02:03 PM, Adrian Otto wrote:

Jay,

On Nov 27, 2013, at 10:36 AM, Jay Pipes 
wrote:


On 11/27/2013 06:23 AM, Tom Deckers (tdeckers) wrote:

I understand that an Assembly can be a larger group of
components. However, those together exist to provide a
capability which we want to capture in some catalog so the
capability becomes discoverable. I'm not sure how the
'listing' mechanism works out in practice.  If this can be
used in an enterprise ecosystem to discover services then
that's fine.  We should capture a work item to flesh out
discoverability of both Applications and Assemblies.  I make
that distinction because both scenarios should be provided.
As a service consumer, I should be able to look at the
'Applications' listed in the Openstack environment and
provision them.  In that case, we should also support flavors
of the service.  Depending on the consumer-provider
relationship, we might want to provide different
configuratons of the same Application. (e.g.
gold-silver-bronze tiering).  I believe this is covered by
the 'listing' you mentioned. Once deployed, there should also
be a mechanism to discover the deployed assemblies.  One
example of such deployed Assembly is a persistence service
that can in its turn be used as a Service in another
Assembly.  The specific details of the capability provided by
the Assembly needs to be discoverable in order to allow
successful auto-wiring (I've seen a comment about this
elsewhere in the project - I believe in last meeting).


Another thought around the naming of "Assembly"... there's no
reason why the API cannot just ditch the entire notion of an
assembly, and just use "Component" in a self-referential way.

In other words, an Application (or whatever is agree on for
that resource name) contains one or more Components. Components
may further be composed of one or more (sub)Components, which
themselves may be composed of further (sub)Components.

That way you keep the notion of a Component as generic and
encompassing as possible and allow for an unlimited generic
hierarchy of Component resources to comprise an Application.


As currently proposed, an Assembly (a top level grouping of
Components) requires only one Component, but may contain many.
The question is whether we should even have an Assembly. I admit
that Assembly is a new term, and therefore requires definition,
explanation, and examples. However, I think eliminating it and
just using Components is getting a bit too abstract, and requires
a bit too much explanation.

I consider this subject analogous to the fundamental concepts
of Chemistry. Imagine trying to describe a molecule by only using
the concept of an atom. Each atom can be different, and have more
or less electrons etc. But if we did not have the concept of a
molecule (a top level grouping of atoms), and tried to explain
them as atoms contained within other atoms, Chemistry would get
harder to teach.

We want this API to be understandable to Application Developers.
I am afraid of simplifying matters too much, and making things a
bit too abstract.


Understood, but I actually think that the Component inside
Component approach would work quite well with a simple "component
type" attribute of the Component resource.

In your particle physics example, it would be the equivalent of
saying that an Atom is composed of subatomic particles, with those
subatomic particles having different types (hadrons, baryons,
mesons, etc) and those subatomic particles being composed of zero
or more subatomic particles of various types (neutrons, protons,
fermions, bosons, etc).

In fact, particle physics has the concept of elementary particles
-- those particles whose composition is unknown -- and composite
particles -- those particles that are composed of other particles.
The congruence between the taxonomy of particles and what I'm
proposing is actually remarkable :)

Elementary particle is like a Component with no sub Components
Composite particle is like a Component with sub Components. Each
particle has a type, and each Component would also have a type.


Yes, this is precisely my point. I'm aiming for elementary Chemistry,
and you're aiming for Particle Physics.


LOL. Touché.


Other possibility:

Call an Assembly exactly what it is: ComponentGroup


I'm open to revisiting more possible names for this besides Assembly,
but I do strongly believe that the top level grouping should be it's
own thing, and should not just be a self referential arrangement of
the same type of resources. I'd like it to convey the idea that an
Assembly is the running instance of the complete application, and all
of its various parts. I'm not convinced that componentGroup conveys
that.


Fair enough :)

-jay
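
For concreteness, the self-referential arrangement discussed above could
be modelled along these lines (purely illustrative, not Solum's actual
API):

    class Component(object):
        """A resource that may recursively contain sub-components."""

        def __init__(self, name, component_type, components=None):
            self.name = name
            self.type = component_type          # elementary or composite
            self.components = components or []  # zero or more children

    # An Assembly is then just the top-level composite Component:
    app = Component('my-app', 'composite', [
        Component('web-tier', 'composite', [
            Component('nginx', 'elementary'),
        ]),
        Component('database', 'elementary'),
    ])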

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] jsonschema version constraint

2013-11-29 Thread Parthipan, Loganathan
Hi,

Do we need to update the jsonschema constraint in the requirements.txt? Many 
tests are failing with jsonschema 1.3.0 with the same error.

==
FAIL: 
nova.tests.test_api_validation.AdditionalPropertiesDisableTestCase.test_validate_additionalProperties_disable
--
_StringException: Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/tests/test_api_validation.py", 
line 133, in setUp
@validation.schema(request_body_schema=schema)
  File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/api/validation/__init__.py", line 
36, in schema
schema_validator = _SchemaValidator(request_body_schema)
  File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/api/validation/validators.py", 
line 51, in __init__
validator_cls = jsonschema.validators.extend(self.validator_org,
AttributeError: 'dict' object has no attribute 'extend'
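
For reference, jsonschema.validators.extend() appears only in newer
jsonschema releases (the 2.x series; in 1.x, jsonschema.validators is
still a plain dict, which is exactly the AttributeError above). With a
new enough release the call works as nova expects; a small sketch using a
hypothetical extra keyword:

    import jsonschema
    from jsonschema import ValidationError

    def no_empty_strings(validator, value, instance, schema):
        # Hypothetical keyword: {"noEmpty": true} rejects empty strings.
        if value and instance == '':
            yield ValidationError('empty string not allowed')

    Validator = jsonschema.validators.extend(
        jsonschema.Draft4Validator,
        validators={'noEmpty': no_empty_strings})

    Validator({'type': 'string', 'noEmpty': True}).validate('hi')

If that holds, raising the jsonschema floor in requirements.txt, rather
than changing the code, looks like the fix.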


~parthi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-11-29 Thread Gary Kotton


On 11/28/13 11:34 PM, "Robert Collins"  wrote:

>On 29 November 2013 09:44, Gary Kotton  wrote:
>>
>>
>> The first stage is technical - move Nova scheduling code from A to B.
>> What do we achieve - not much - we actually complicate things - there is
>> always churn in Nova and we will have duplicate code bases. In addition to
>> this, the only service that can actually make use of it is Nova.
>>
>> The second stage is defining an API that other modules can use (we have
>> yet to decide if this will be RPC based or have a interface like Glance,
>> Cinder etc.)
>> We have yet to even talk about the API's.
>> The third stage is adding shiny new features and trying to not have a
>> community tar and feather us.
>
>Yup; I look forward to our tar and feathering overlords. :)
>
>> Prior to copying code we really need to discuss the API's.
>
>I don't think we do: it's clear that we need to come up with them -
>it's necessary, and noone has expressed any doubt about the ability to
>do that. RPC API evolution is fairly well understood - we add a new
>method, and have it do the necessary, then we go to the users and get
>them using it, then we delete the old one.
>
>> This can even
>> be done in parallel if your concern is time and resources. But the point
>> is we need a API to interface with the service. For a start we can just
>> address the Nova use case. We need to at least address:
>> 1. Scheduling interface
>> 2. Statistics updates
>> 3. API's for configuring the scheduling policies
>>
>> Later these will all need to bode well with all of the existing modules
>> that we want to support - Nova, Cinder and Neutron (if I missed on then
>> feel free to kick me whilst I am down)
>
>Ironic perhaps.
>
>> I do not think that we should focus on failure modes, we should plan it
>> and break it up so that it will be usable and functional and most
>> importantly useful in the near future.
>>
>> How about next week we sit together and draw up a wiki of the flows,
>>data
>> structures and interfaces. Lets go from there.
>
>While I disagree about us needing to do it right now, I'm very happy
>to spend some time on it - I don't want to stop folk doing work that
>needs to be done!

I do not think that discussion will prevent any of the work getting done
or not done. It may actually save us a ton of time. I really think that
defining the API and interfaces can save us a lot in the short and long
run. The V1 API should really be very simple and we should not get bogged
down but if we can define an interface that could work with Nova and be
extensible to work with the rest then we will be in a very good state. I
am thinking of having a notion of a 'scheduling domain' that will be
used with the scheduling request. This could be a host aggregate, an AZ, or
the feature that Phil is working on - private hosts. If we can define an
interface around this and have the Nova <-> scheduling interface down then
we are on the way...

Hosts can be added to the domain and the scheduler will be able to get
the stats etc. For the first phase this will be completely RPC based so as
not to get bogged down.

Can we talk about this next week?

>
>-Rob
>
>
>
>-- 
>Robert Collins 
>Distinguished Technologist
>HP Converged Cloud
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Problem connecting from host to router and VM port under DevStack

2013-11-29 Thread Eugene Nikanorov
Hi Paul,

I think you are missing routes on your host. AFAIK devstack adds a route to
the host for the default private network. You might need this as well.

Thanks,
Eugene.


On Thu, Nov 28, 2013 at 1:14 AM, Paul Michali  wrote:

> Hi,
>
> I had this working once, but after rebooting my host and restarting
> devstack, I cannot get it to work. Hoping someone has an idea…
>
> Running latest devstack with OVS and GRE tunnels.
>
> In Openstack, I added a network, subnet, and router, and have opened up
> security groups:
>
> neutron net-create mgmt
> neutron subnet-create --disable-dhcp --name=mgmt-subnet mgmt 192.168.200.0/24
> neutron router-create router2
> neutron router-interface-add router2 mgmt-subnet
> sudo ovs-vsctl add-port br-int my_port tag=2 -- set interface my_port
> type=internal
> sudo ifconfig my_port 192.168.200.3/24 up
> nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
> nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
>
> From the host, I cannot ping the router interface (192.168.200.1), nor can
> I ping the host (192.168.200.3) from the router's namespace. I had created
> a VM with an interface on the subnet (192.168.200.2), and I can ping the
> router from the VM and vice versa.
>
> With the private and public network, I can ping the router from the host
> w/o any issue.
>
> Any idea as to what I'm missing?
>
> Thanks!
>
> PCM (Paul Michali)
>
> MAIL  p...@cisco.com
> IRC  pcm_ (irc.freenode.net)
> TW   @pmichali
> GPG key  4525ECC253E31A83
> Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] ml2 and vxlan configurations, neutron-server fails to start

2013-11-29 Thread Gopi Krishna B
Hi
Thank you for the info. I downgraded sqlalchemy accordingly, but there
were a lot of other dependencies I had to take care of (as below). The
error which still persists in my environment is:
RuntimeError: Unable to load quantum from configuration file
/etc/neutron/api-paste.ini.

Are there any other dependencies to be taken care of to resolve this error?

pip uninstall sqlalchemy (uninstalled -0.8.3)
pip install sqlalchemy==0.7.9

pip install jsonrpclib

pip uninstall eventlet  (uninstalled -0.12.0)
pip install eventlet   (installed -0.14.0)

pip install pyudev

For the error Requirement.parse('amqp>=1.0.10,<1.1.0')):
pip uninstall amqp
pip install amqp  -- but it installs 1.3.3
so, download the source code of 1.0.10
(https://pypi.python.org/pypi/amqp/1.0.10)
python setup.py build
python setup.py install


Regards
Gopi Krishna

Yongsheng Gong gongysh at unitedstack.com

VersionConflict: (SQLAlchemy 0.8.3
(/usr/lib64/python2.7/site-packages),
Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))

it seems your SQLAlchemy is newer than required, so:

pip uninstall sqlalchemy
and then install an older one:
sudo pip install sqlalchemy==0.7.9

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reg : Security groups implementation using openflows in quantum ovs plugin

2013-11-29 Thread Zang MingJie
On Fri, Nov 29, 2013 at 2:25 PM, Jian Wen  wrote:
> I don't think we can implement a stateful firewall[1] now.

I don't think we need a stateful firewall; a stateless one should work
well. If stateful conntrack support lands in the future, we can also
take advantage of it.
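To illustrate, stateless security-group-style rules expressed as OVS flows
could look roughly like this (a hand-written sketch with made-up port
numbers and addresses, not the output of any actual driver):

# allow ARP to/from the VM attached on OVS port 5
ovs-ofctl add-flow br-int "priority=100,arp,in_port=5,actions=normal"
# allow inbound SSH to the VM's fixed IP
ovs-ofctl add-flow br-int "priority=100,tcp,nw_dst=10.0.0.2,tp_dst=22,actions=normal"
# allow all egress from the VM's port (the stateless stand-in for conntrack)
ovs-ofctl add-flow br-int "priority=90,in_port=5,actions=normal"
# drop everything else destined to the VM; a real driver would also need
# rules for DHCP and for return traffic of VM-initiated connections
ovs-ofctl add-flow br-int "priority=50,ip,nw_dst=10.0.0.2,actions=drop"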

>
> Once connection tracking capability[2] is added to the Linux OVS, we
> could start to implement the ovs-firewall-driver blueprint.
>
> [1] http://en.wikipedia.org/wiki/Stateful_firewall
> [2]
> http://wiki.xenproject.org/wiki/Xen_Development_Projects#Add_connection_tracking_capability_to_the_Linux_OVS
>
>
> On Tue, Nov 26, 2013 at 2:23 AM, Mike Wilson  wrote:
>>
>> Adding Jun to this thread since gmail is failing him.
>>
>>
>> On Tue, Nov 19, 2013 at 10:44 AM, Amir Sadoughi
>>  wrote:
>>>
>>> Yes, my work has been on ML2 with neutron-openvswitch-agent.  I’m
>>> interested to see what Jun Park has. I might have something ready before he
>>> is available again, but would like to collaborate regardless.
>>>
>>> Amir
>>>
>>>
>>>
>>> On Nov 19, 2013, at 3:31 AM, Kanthi P  wrote:
>>>
>>> Hi All,
>>>
>>> Thanks for the response!
>>> Amir, Mike: Is your implementation being done according to the ML2 plugin?
>>>
>>> Regards,
>>> Kanthi
>>>
>>>
>>> On Tue, Nov 19, 2013 at 1:43 AM, Mike Wilson 
>>> wrote:

 Hi Kanthi,

 Just to reiterate what Kyle said, we do have an internal implementation
 using flows that looks very similar to security groups. Jun Park was the guy
 that wrote this and is looking to get it upstreamed. I think he'll be back
 in the office late next week. I'll point him to this thread when he's back.

 -Mike


 On Mon, Nov 18, 2013 at 3:39 PM, Kyle Mestery (kmestery)
  wrote:
>
> On Nov 18, 2013, at 4:26 PM, Kanthi P 
> wrote:
> > Hi All,
> >
> > We are planning to implement quantum security groups using openflows
> > for ovs plugin instead of iptables which is the case now.
> >
> > Doing so we can avoid the extra linux bridge which is connected
> > between the vnet device and the ovs bridge, which is given as a
> > workaround since the ovs bridge is not compatible with iptables.
> >
> > We are planning to create a blueprint and work on it. Could you
> > please share your views on this
> >
> Hi Kanthi:
>
> Overall, this idea is interesting and removing those extra bridges
> would certainly be nice. Some people at Bluehost gave a talk at the Summit
> [1] in which they explained they have done something similar, you may want
> to reach out to them since they have code for this internally already.
>
> The OVS plugin is in feature freeze during Icehouse, and will be
> deprecated in favor of ML2 [2] at the end of Icehouse. I would advise you
> to retarget your work at ML2 when running with the OVS agent instead. The
> Neutron team will not accept new features into the OVS plugin anymore.
>
> Thanks,
> Kyle
>
> [1]
> http://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/towards-truly-open-and-commoditized-software-defined-networks-in-openstack
> [2] https://wiki.openstack.org/wiki/Neutron/ML2
>
> > Thanks,
> > Kanthi
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Cheers,
> Jian
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] Automatically post new threads from AskBot to the list

2013-11-29 Thread Jaromir Coufal


On 2013/20/11 01:23, Stefano Maffulli wrote:

On 11/19/2013 08:19 AM, Julie Pichon wrote:

I've been thinking about the AskBot UX website [0] and its lack of
visibility, particularly for new community members.

Indeed, it's one of the drawbacks of splitting groups: information tends
not to flow very well.

Yeah, we will try a weekly summary from the beginning.


I've heard that the UX team will be the first team to dogfood
Storyboard: do you have any idea of any ETA/deadline for when this will
happen?
That's correct. The last information was that it might be around the 
beginning of next year, but there were a lot of variables (mostly 
resources). But yes, I will be very happy to help test and provide 
feedback on that.


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] ml2 and vxlan configurations, neutron-server fails to start

2013-11-29 Thread Gopi Krishna B
Hi Trinath
Please find the server.log and neutron.conf
server.log
-
2013-11-29 11:21:45.276 13505 INFO neutron.common.config [-] Logging enabled!
2013-11-29 11:21:45.277 13505 WARNING neutron.common.legacy [-] Old
class module path in use.  Please change
'quantum.openstack.common.rpc.impl_qpid' to
'neutron.openstack.common.rpc.impl_qpid'.
2013-11-29 11:21:45.277 13505 ERROR neutron.common.legacy [-] Skipping
unknown group key: firewall_driver
2013-11-29 11:21:45.277 13505 DEBUG neutron.service [-]

log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1890
2013-11-29 11:21:45.277 13505 DEBUG neutron.service [-] Configuration
options gathered from: log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1891
2013-11-29 11:21:45.278 13505 DEBUG neutron.service [-] command line
args: ['--config-file', '/usr/share/neutron/neutron-dist.conf',
'--config-file', '/etc/neutron/neutron.conf', '--config-file',
'/etc/neutron/plugin.ini', '--log-file',
'/var/log/neutron/server.log'] log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1892
2013-11-29 11:21:45.278 13505 DEBUG neutron.service [-] config files:
['/usr/share/neutron/neutron-dist.conf', '/etc/neutron/neutron.conf',
'/etc/neutron/plugin.ini'] log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1893
2013-11-29 11:21:45.278 13505 DEBUG neutron.service [-]

log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1894
2013-11-29 11:21:45.278 13505 DEBUG neutron.service [-] allow_bulk
= True log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.278 13505 DEBUG neutron.service [-]
allow_overlapping_ips  = True log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.279 13505 DEBUG neutron.service [-]
allow_pagination   = False log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.279 13505 DEBUG neutron.service [-] allow_sorting
= False log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.279 13505 DEBUG neutron.service [-]
allowed_rpc_exception_modules  =
['neutron.openstack.common.exception', 'nova.exception',
'cinder.exception', 'exceptions'] log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.279 13505 DEBUG neutron.service [-]
api_extensions_path=  log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.280 13505 DEBUG neutron.service [-]
api_paste_config   = /etc/neutron/api-paste.ini
log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.280 13505 DEBUG neutron.service [-] auth_strategy
= keystone log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.280 13505 DEBUG neutron.service [-] backdoor_port
= None log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.280 13505 DEBUG neutron.service [-] backlog
= 4096 log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.280 13505 DEBUG neutron.service [-] base_mac
= fa:16:3e:00:00:00 log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.281 13505 DEBUG neutron.service [-] bind_host
= 0.0.0.0 log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.281 13505 DEBUG neutron.service [-] bind_port
= 9696 log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.281 13505 DEBUG neutron.service [-] config_dir
= None log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.281 13505 DEBUG neutron.service [-] config_file
= ['/usr/share/neutron/neutron-dist.conf',
'/etc/neutron/neutron.conf', '/etc/neutron/plugin.ini'] log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.282 13505 DEBUG neutron.service [-]
control_exchange   = rabbit log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.282 13505 DEBUG neutron.service [-] core_plugin
= neutron.plugins.ml2.plugin.Ml2Plugin log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.282 13505 DEBUG neutron.service [-] debug
= True log_opt_values
/usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.282 13505 DEBUG neutron.service [-]
default_log_levels = ['amqplib=WARN', 'sqlalchemy

Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-11-29 Thread Khanh-Toan Tran
> > The first stage is technical - move Nova scheduling code from A to B.
> > What do we achieve - not much - we actually complicate things - there
> > is always churn in Nova and we will have duplicate code bases. In
> > addition to this the only service that can actually make use of this
> > is Nova.
> >
> > The second stage is defining an API that other modules can use (we
> > have yet to decide if this will be RPC based or have an interface like
> > Glance, Cinder etc.) We have yet to even talk about the API's.
> > The third stage is adding shiny new features and trying to not have a
> > community tar and feather us.
> 
> Yup; I look forward to our tar and feathering overlords. :)
> 
> > Prior to copying code we really need to discuss the API's.
> 
> I don't think we do: it's clear that we need to come up with them - it's
> necessary, and no one has expressed any doubt about the ability to do that.
> RPC API evolution is fairly well understood - we add a new method, and have
> it do the necessary, then we go to the users and get them using it, then we
> delete the old one.
> 
I agree with Robert. I think that nova RPC is sufficient for the new
scheduler right now. Most of the scheduler work focuses on nova anyway, so
starting from there is reasonable and makes the transition rather easy. We
can think about enhancing the API later (perhaps even creating a REST API).

> > This can even
> > be done in parallel if your concern is time and resources. But the
> > point is we need a API to interface with the service. For a start we
> > can just address the Nova use case. We need to at least address:
> > 1. Scheduling interface
> > 2. Statistics updates
> > 3. API's for configuring the scheduling policies
> >
If by "2. Statistics update" you mean the database issue for scheduler
then yes, it
is a  big issue, especially during the transition period when nova still
holds the host state
data. Should scheduler get access to nova's DB for the time being, and
later fork out the
DB to scheduler? According to Boris, Merantis has already studied the
separation of host state
from nova's DB. I think we can benefit from their experience.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FreeBSD hypervisor (bhyve) driver

2013-11-29 Thread Rafał Jaworowski
On Fri, Nov 29, 2013 at 8:24 AM, Roman Bogorodskiy
 wrote:
> Hello,
>
> Yes, libvirt's qemu driver works almost fine currently, except the fact
> that it needs a 'real' bridge driver, so all the networking configuration
> like filtering rules, NAT, etc. could be done automatically, like for
> Linux now, instead of making the user perform all the configuration
> manually.

Networking is actually part of our work for FreeBSD Nova support: we
have a freebsd_net.py driver (equivalent to the linux_net.py), which
manages bridging in the BSD way and we're in the process of bringing
up FlatDHCPManager configuration for nova-network running on the
FreeBSD host.
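For anyone unfamiliar with it, bridging "the BSD way" boils down to
ifconfig(8) calls along these lines (a sketch, not the actual
freebsd_net.py code; interface names and addresses are made up):

ifconfig bridge0 create
ifconfig bridge0 addm em0 up
ifconfig bridge0 inet 10.0.0.1/24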

Rafal


> I've been planning to get to the bhyve driver as well, but probably after
> finishing with the bridge driver (though unfortunately, I don't have a full
> picture of what would be the best way to implement that).
>
>
> On Mon, Nov 25, 2013 at 3:50 PM, Daniel P. Berrange 
> wrote:
>>
>> On Fri, Nov 22, 2013 at 10:46:19AM -0500, Russell Bryant wrote:
>> > On 11/22/2013 10:43 AM, Rafał Jaworowski wrote:
>> > > Russell,
>> > > First, thank you for the whiteboard input regarding the blueprint for
>> > > FreeBSD hypervisor nova driver:
>> > > https://blueprints.launchpad.net/nova/+spec/freebsd-compute-node
>> > >
>> > > We were considering libvirt support for bhyve hypervisor as well, only
>> > > wouldn't want to do this as the first approach for FreeBSD+OpenStack
>> > > integration. We'd rather bring bhyve bindings for libvirt later as
>> > > another integration option.
>> > >
>> > > For FreeBSD host support a native hypervisor driver is important and
>> > > desired long-term and we would like to have it anyways. Among things
>> > > to consider are the following:
>> > > - libvirt package is additional (non-OpenStack), external dependency
>> > > (maintained in the 'ports' collection, not included in base system),
>> > > while native API (via libvmmapi.so library) is integral part of the
>> > > base system.
>> > > - libvirt license is LGPL, which might be an important aspect for some
>> > > users.
>> >
>> > That's perfectly fine if you want to go that route as a first step.
>> > However, that doesn't mean it's appropriate for merging into Nova.
>> > Unless there are strong technical justifications for why this approach
>> > should be taken, I would probably turn down this driver until you were
>> > able to go the libvirt route.
>>
>> The idea of a FreeBSD bhyve driver for libvirt has been mentioned
>> a few times. We've already got a FreeBSD port of libvirt being
>> actively maintained to support QEMU (and possibly Xen, not 100% sure
>> on that one), and we'd be more than happy to see further contributions
>> such as a bhyve driver.
>>
>> I am of course biased, as libvirt project maintainer, but I do agree
>> that supporting bhyve via libvirt would make sense, since it opens up
>> opportunities beyond just OpenStack. There are a bunch of applications
>> built on libvirt that could be used to manage bhyve, and a fair few
>> applications which have plugins using libvirt
>>
>> Taking on maint work for a new OpenStack driver is a non-trivial amount
>> of work in itself. If the burden for OpenStack maintainers can be reduced
>> by, pushing work out to / relying on support from, libvirt, that makes
>> sense from OpenStack/Nova's POV.
>>
>> Regards,
>> Daniel
>> --
>> |: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
>> |: http://libvirt.org  -o-  http://virt-manager.org :|
>> |: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
>> |: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][hadoop][template] Does anyone has a hadoop template

2013-11-29 Thread Dmitry Mescheryakov
Hello Jay,

Just in case you've missed it, there is a project Savanna dedicated to
deploying Hadoop clusters on OpenStack:

https://github.com/openstack/savanna
http://savanna.readthedocs.org/en/0.3/

Dmitry


2013/11/29 Jay Lau 

> Hi,
>
> I'm now trying to deploy a hadoop cluster with heat, just wondering if
> someone who has a heat template which can help me do the work.
>
> Thanks,
>
> Jay
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Vote required for certificate as first level citizen - SSL Termination

2013-11-29 Thread Vijay Venkatachalam

To summarize:
Certificate will be a first-level citizen which can be reused, and
for certificate management nothing sophisticated is required.

Can you please Vote (+1, -1)?

We can move on if there is consensus around this.

> -Original Message-
> From: Stephen Gran [mailto:stephen.g...@guardian.co.uk]
> Sent: Wednesday, November 20, 2013 3:01 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up
> 
> Hi,
> 
> On Wed, 2013-11-20 at 08:24 +, Samuel Bercovici wrote:
> > Hi,
> >
> >
> >
> > Evgeny has outlined the wiki for the proposed change at:
> > https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL which is in line
> > with what was discussed during the summit.
> >
> > The
> >
> https://docs.google.com/document/d/1tFOrIa10lKr0xQyLVGsVfXr29NQBq2n
> YTvMkMJ_inbo/edit discuss in addition Certificate Chains.
> >
> >
> >
> > What would be the benefit of having a certificate that must be
> > connected to VIP vs. embedding it in the VIP?
> 
> You could reuse the same certificate for multiple loadbalancer VIPs.
> This is a fairly common pattern - we have a dev wildcard cert that is
> self-signed, and is used for lots of VIPs.
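[For illustration, a self-signed dev wildcard cert of this sort is
typically generated along these lines - a sketch; the CN is hypothetical:

openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=*.dev.example.com" \
  -keyout wildcard.key -out wildcard.crt
]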
> 
> > When we get a system that can store certificates (ex: Barbican), we
> > will add support to it in the LBaaS model.
> 
> It probably doesn't need anything that complicated, does it?
> 
> Cheers,
> --
> Stephen Gran
> Senior Systems Integrator - The Guardian
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Working with Vagrant and packstack

2013-11-29 Thread Peeyush Gupta
Hi all,

I have been trying to set up an openstack environment using vagrant and 
packstack. I provisioned a Fedora-19 VM through vagrant and used a shell 
script to take care of installation and other things. The first thing that 
shell script does is "yum install -y openstack-packstack" and then "packstack 
--allinone". Now, the issue is that the second command requires me to enter the 
root password explicitly. I mean it doesn't matter if I am running this as 
root or using sudo, I have to enter the password explicitly every time. I tried 
to pass the password to the VM through pipes and other methods, but nothing 
works.

Did anyone face the same problem? Is there any way around this? Or does it mean 
that I can't use puppet/packstack with vagrant?
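The only workaround I can think of would be to pre-seed root's SSH key, on
the theory that packstack prompts only when key-based login fails - something
like this (untested):

ssh-keygen -q -N "" -f /root/.ssh/id_rsa
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
packstack --allinone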

Thanks, 

~Peeyush Gupta

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev