Re: [openstack-dev] [devstack] keystone doesn't restart after ./unstack

2014-11-05 Thread Angelo Matarazzo

Hi Chmouel,
I'll create it in case it is a real bug.
Thank you

Angelo


On 05/11/2014 05:35, JunJie Nan wrote:


I think it's a bug; rejoin should work after unstack. And stack.sh is
needed after clean.sh, not after unstack.sh.


Hi,

If you do ./unstack.sh you probably want to run ./stack.sh again to
restack. ./rejoin-stack.sh is for when your screen session has been
killed and you want to rejoin it without having to run the full
./stack.sh shenanigan again.


Cheers,
Chmouel

On Tue, Nov 4, 2014 at 1:52 PM, Angelo Matarazzo wrote:


Hi all,

I sometimes use devstack (in a VM with Ubuntu installed) and run the
./unstack.sh command to reset my environment.

When I run rejoin-stack.sh, the keystone endpoint doesn't work.
Following the suggestion in
http://www.gossamer-threads.com/lists/openstack/dev/41939,
I checked /etc/apache2/sites-enabled: the symbolic link to
../sites-available/keystone.conf doesn't exist.

If I recreate the symbolic link, keystone works.
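
For reference, a minimal sketch of the workaround described above, assuming
the stock devstack/Apache paths on Ubuntu (run as root; illustrative only,
not part of devstack):

    import os

    ENABLED = '/etc/apache2/sites-enabled/keystone.conf'

    # Recreate the symlink that unstacking removed (roughly what
    # `a2ensite keystone` does), then reload Apache so keystone is
    # served again.
    if not os.path.islink(ENABLED):
        os.symlink('../sites-available/keystone.conf', ENABLED)
        os.system('service apache2 reload')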

What is the correct workflow after I have run ./unstack.sh?
Should I run ./stack.sh, or is this a bug?

Cheers,
Angelo



Re: [openstack-dev] [Neutron]why FIP is integrated into router not as a separated service like XxxaaS?

2014-11-05 Thread Carl Baldwin
I don't think I know the precise answer to your question.  My best guess is
that floating ips were one of the initial core L3 features implemented
before other advanced services existed.  Implementing them in this way may
have been the path of least resistance at the time.

Are you suggesting a change?  What change?  What advantages would your
change bring?  Do you see something fundamentally wrong with the current
approach?  Does it have some deficiency that you can point out?  Basically,
we need a suggested modification with some good justification to spend time
making that modification.

Carl
Hi,

Address translation (FIP, SNAT and DNAT) looks like an advanced service. Why
is it integrated into the L3 router? Actually, this is not how it's done in
practice: these functions are usually provided by a firewall device, not a router.

What's the design concept?

Thanks & Regards,
Germy



Re: [openstack-dev] [Neutron]why FIP is integrated into router not as a separated service like XxxaaS?

2014-11-05 Thread Akilesh K
@Germy Lure,
I cannot give you a direct answer as I am not a developer.

But let me point out that OpenStack can make use of many agents for L3 and
above, not just neutron-l3-agent. You may even create your own agent.

The neutron-l3-agent works that way just to keep things simple. One point
to consider is that tenants may share the same network space, so it becomes
necessary to tie a router belonging to a tenant to that tenant's
security groups. If you try to split the routing and firewall services
apart, you might end up making things too complicated.


On Wed, Nov 5, 2014 at 2:40 PM, Carl Baldwin  wrote:

> I don't think I know the precise answer to your question.  My best guess
> is that floating ips were one of the initial core L3 features implemented
> before other advanced services existed.  Implementing them in this way may
> have been the path of least resistance at the time.
>
> Are you suggesting a change?  What change?  What advantages would your
> change bring?  Do you see something fundamentally wrong with the current
> approach?  Does it have some deficiency that you can point out?  Basically,
> we need a suggested modification with some good justification to spend time
> making that modification.
>
> Carl
> Hi,
>
> Address translation (FIP, SNAT and DNAT) looks like an advanced service.
> Why is it integrated into the L3 router? Actually, this is not how it's done in
> practice: these functions are usually provided by a firewall device, not a router.
>
> What's the design concept?
>
> Thanks&Regards,
> Germy
>


Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-05 Thread Germy Lure
Hi Salvatore,
A startup flag is really a simpler approach. But in which situations should we
set this flag to remove all flows? Upgrade? Manual restart?
Internal fault?

Indeed, we only need to refresh flows when the flows in OVS are inconsistent
(incorrect, unwanted, stale and so on) with the agent's view. But the problem
is: how do we know this? I think a startup flag is too coarse, unless we can
tolerate the inconsistent situation.

Of course, I believe that turning off the reset-flows-on-startup action would
resolve most of the problem; the flows are correct most of the time, after all.
But considering the five-nines availability expected for NFV, I still recommend
a flow synchronization approach.

BR,
Germy
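
To illustrate what such a synchronization pass could look like, here is a
minimal sketch. It assumes the ovs-ofctl CLI and a set of normalized
'match actions=...' specs on the agent side; all names and the normalization
are illustrative only, not actual agent code:

    import re
    import subprocess

    def _normalize(line):
        # Strip volatile stats fields from a dump-flows line so it can be
        # compared with (and replayed as) an add-flow spec.
        return re.sub(r'(cookie|duration|n_packets|n_bytes|idle_age|hard_age)'
                      r'=[^,\s]*,?\s*', '', line).strip()

    def dump_flows(bridge):
        out = subprocess.check_output(['ovs-ofctl', 'dump-flows', bridge],
                                      universal_newlines=True)
        # The first line is the NXST_FLOW header.
        return set(_normalize(l) for l in out.splitlines()[1:] if l.strip())

    def sync_flows(bridge, desired):
        """Add missing flows and delete stray ones; flows that are already
        correct (and the traffic using them) are never touched."""
        actual = dump_flows(bridge)
        for flow in desired - actual:
            subprocess.check_call(['ovs-ofctl', 'add-flow', bridge, flow])
        for flow in actual - desired:
            # --strict lets del-flows match the exact priority.
            match = flow.split('actions=')[0].rstrip(', ')
            subprocess.check_call(['ovs-ofctl', '--strict', 'del-flows',
                                   bridge, match])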

On Wed, Nov 5, 2014 at 3:36 PM, Salvatore Orlando 
wrote:

> From what I gather from this thread and related bug report, the change
> introduced in the OVS agent is causing a data plane outage upon agent
> restart, which is not desirable in most cases.
>
> The rationale for the change that introduced this bug was, I believe,
> cleaning up stale flows on the OVS agent, which also makes some sense.
>
> Unless I'm missing something, I reckon the best way forward is actually
> quite straightforward; we might add a startup flag to reset all flows and
> not reset them by default.
> While I agree the "flow synchronisation" process proposed in the previous
> post is valuable too, I hope we might be able to fix this with a simpler
> approach.
>
> Salvatore
>
> On 5 November 2014 04:43, Germy Lure  wrote:
>
>> Hi,
>>
>> Considering what triggers an agent restart, I think it's nothing but:
>> 1) only restarting the agent
>> 2) rebooting the host the agent is deployed on
>>
>> When the agent starts, OVS may:
>> a. have all correct flows
>> b. have nothing at all
>> c. have partly correct flows, while the others may need to be reprogrammed,
>> deleted or added
>>
>> In any case, I think both users and developers would be happy to see the
>> system recover ASAP after an agent restart. The best is for the agent to
>> push only the incorrect flows but keep the correct ones. This ensures that
>> traffic using correct flows keeps working while the agent starts.
>>
>> So, I suggest two solutions:
>> 1. After restarting, the agent gets all flows from OVS, compares them with
>> its local flows, and corrects only the ones that differ.
>> 2. Adapt both OVS and the agent: the agent just pushes all flows (without
>> removing any) every time, and OVS prepares two flow tables to switch
>> between (like an RCU lock).
>>
>> Option 1 is recommended because of third-party vendors.
>>
>> BR,
>> Germy
>>
>>
>> On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec 
>> wrote:
>>
>>> On 10/29/2014 10:17 AM, Kyle Mestery wrote:
>>> > On Wed, Oct 29, 2014 at 7:25 AM, Hly  wrote:
>>> >>
>>> >>
>>> >> Sent from my iPad
>>> >>
>> On 2014-10-29, at 8:01 PM, Robert van Leeuwen <
>>> robert.vanleeu...@spilgames.com> wrote:
>>> >>
> I find our current design removes all flows and then adds flows entry by
> entry; this will cause every network node to break all tunnels to the
> other network nodes and all compute nodes.
>>>  Perhaps a way around this would be to add a flag on agent startup
>>>  which would have it skip reprogramming flows. This could be used for
>>>  the upgrade case.
>>> >>>
>>> >>> I hit the same issue last week and filed a bug here:
>>> >>> https://bugs.launchpad.net/neutron/+bug/1383674
>>> >>>
>>> From an operator's perspective this is VERY annoying, since you also
>>> cannot push any config changes that require/trigger a restart of the agent.
>>> e.g. something simple like changing a log setting becomes a hassle.
>>> I would prefer the default behaviour to be not to clear the flows, or at
>>> the least a config option to disable it.
>>> >>>
>>> >>
>>> >> +1, we also suffered from this even when a very little patch is done
>>> >>
>>> > I'd really like to get some input from the tripleo folks, because they
>>> > were the ones who filed the original bug here and were hit by the
>>> > agent NOT reprogramming flows on agent restart. It does seem fairly
>>> > obvious that adding an option around this would be a good way forward,
>>> > however.
>>>
>>> Since nobody else has commented, I'll put in my two cents (though I
>>> might be overcharging you ;-).  I've also added the TripleO tag to the
>>> subject, although with Summit coming up I don't know if that will help.
>>>
>>> Anyway, if the bug you're referring to is the one I think, then our
>>> issue was just with the flows not existing.  I don't think we care
>>> whether they get reprogrammed on agent restart or not as long as they
>>> somehow come into existence at some point.
>>>
>>> It's possible I'm wrong about that, and probably the best person to talk
>>> to would be Robert Collins since I think he's the one who actually
>>> tracked down the problem in the first place.
>>>
>>> -Ben
>>>
>>>

Re: [openstack-dev] [devstack] keystone doesn't restart after ./unstack

2014-11-05 Thread Angelo Matarazzo

Hi Dean,
I think that a lot of developers use devstack installed in a VM.
So the right way of working after performing ./unstack.sh is:

1) If you don't want stack.sh to overwrite your local configuration,
set RECLONE=no
in local.conf

2) run ./stack.sh

3) run ./rejoin-stack.sh

Right?

Thank you beforehand

Angelo


On 05/11/2014 08:41, Dean Troyer wrote:
On Tue, Nov 4, 2014 at 10:35 PM, JunJie Nan wrote:


I think it's a bug; rejoin should work after unstack. And stack.sh
is needed after clean.sh, not after unstack.sh.

As Chmouel said, rejoin-stack.sh is meant only to re-create the screen
sessions from the last stack.sh run.  As services are configured to
run under Apache's mod_wsgi, they will not be handled by
rejoin-stack.sh.


dt

--

Dean Troyer
dtro...@gmail.com 




Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-05 Thread Eichberger, German
Hi Jorge,

I am still not convinced that we need to use logging for usage metrics. We can 
also use the haproxy stats interface (which the haproxy team is willing to 
improve based on our input) and/or iptables, as Stephen suggested. That said,
this probably needs more exploration.

From an HP perspective the full logs on the load balancer are mostly
interesting for the user of the load balancer - we only care about aggregates
for our metering. That said, we would be happy to just move them on demand to a
place the user can access.

Thanks,
German


From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Tuesday, November 04, 2014 8:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Susanne,

Thanks for the reply. As Angus pointed out, the one big item that needs to be
addressed with this method is the network I/O of raw logs. One idea to mitigate
this concern is to store the data locally at the operator-configured
granularity, process it, and THEN send it to ceilometer, etc. If we can't
engineer a way to deal with the high network I/O that will inevitably occur, we
may have to move towards a polling approach. Thoughts?

Cheers,
--Jorge
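
One rough way to picture the "aggregate locally, then publish" approach
discussed in this thread is sketched below. The record format, the meter name
and the emit_sample callback are hypothetical placeholders, not an agreed
Octavia interface:

    from collections import defaultdict

    def aggregate(records):
        # records: iterable of (loadbalancer_id, bytes_out) tuples parsed
        # from raw connection logs during one billing interval.
        totals = defaultdict(int)
        for lb_id, nbytes in records:
            totals[lb_id] += nbytes
        return totals

    def publish(totals, emit_sample):
        # Shipping one aggregated sample per load balancer per interval
        # keeps the network I/O bounded, instead of forwarding every raw
        # log line to the metering service.
        for lb_id, nbytes in totals.items():
            emit_sample(resource_id=lb_id,
                        meter='network.services.lb.outgoing.bytes',
                        volume=nbytes)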

From: Susanne Balle
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, November 4, 2014 11:10 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Jorge

I understand your use cases around capturing of metrics, etc.

Today we mine the logs for usage information on our Hadoop cluster. In the 
future we'll capture all the metrics via ceilometer.

IMHO the amphorae should have an interface that allows the logs to be moved
to various backends such as Elasticsearch, Hadoop HDFS, Swift, etc., as well
as, by default (but with the option to disable it), Ceilometer. Ceilometer is
the de facto metering solution for OpenStack, so we need to support it. We
would like the integration with Ceilometer to be based on notifications; I
believe German sent a reference to that in another email. The pre-processing
will need to be optional and the amount of data aggregation configurable.

What you describe below to me is usage gathering/metering. Billing is
independent, since companies with private clouds might not want to bill but
still need usage reports for capacity planning, etc. Billing/charging is just
putting a monetary value on the various forms of usage.

I agree with all points.

> - Capture logs in a scalable way (i.e. capture logs and put them on a
> separate scalable store somewhere so that it doesn't affect the amphora).

> - Every X amount of time (every hour, for example) process the logs and
> send them on their merry way to ceilometer or whatever service an operator
> will be using for billing purposes.

"Keep the logs": This is what we would use log forwarding to either Swift or 
Elastic Search, etc.

>- Keep logs for some configurable amount of time. This could be anything
> from indefinitely to not at all. Rackspace is planning on keeping them for
> a certain period of time for the following reasons:

It looks like we are in agreement, so I am not sure why it sounded like we were
in disagreement on IRC. I am not sure why, but it sounded like you were
talking about something else when you brought up the real-time
processing. If we are just talking about moving the logs to your Hadoop
cluster, or any backend, in a scalable way, we agree.

Susanne


On Thu, Oct 23, 2014 at 6:30 PM, Jorge Miramontes
<jorge.miramon...@rackspace.com> wrote:
Hey German/Susanne,

To continue our conversation from our IRC meeting, could you all provide
more insight into your usage requirements? Also, I'd like to clarify a few
points related to using logging.

I am advocating that logs be used for multiple purposes, including
billing. Billing requirements are different from connection logging
requirements. However, connection logging is a very accurate mechanism to
capture billable metrics and is thus related. My vision for this is
something like the following:

- Capture logs in a scalable way (i.e. capture logs and put them on a
separate scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and
send them on their merry way to ceilometer or whatever service an operator
will be using for billing purposes.
- Keep logs for some configurable amount of time. This could be anything
from indefinitely to not at all. Rackspace is planning on keeping them for
a certain period of time for the following reasons:

A) We have connection logging as a planned feature. If a customer turns
on the connection logging feature for their load balancer it will already

Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-05 Thread Erik Moe

Hi,

I also agree; IMHO we need a flow synchronization method so we can avoid
network downtime and stray flows.

Regards,
Erik


From: Germy Lure [mailto:germy.l...@gmail.com]
Sent: den 5 november 2014 10:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent 
start? why and how avoid?

Hi Salvatore,
A startup flag is really a simpler approach. But in which situations should we
set this flag to remove all flows? Upgrade? Manual restart? Internal fault?

Indeed, we only need to refresh flows when the flows in OVS are inconsistent
(incorrect, unwanted, stale and so on) with the agent's view. But the problem
is: how do we know this? I think a startup flag is too coarse, unless we can
tolerate the inconsistent situation.

Of course, I believe that turning off the reset-flows-on-startup action would
resolve most of the problem; the flows are correct most of the time, after all.
But considering the five-nines availability expected for NFV, I still recommend
a flow synchronization approach.

BR,
Germy

On Wed, Nov 5, 2014 at 3:36 PM, Salvatore Orlando
<sorla...@nicira.com> wrote:
From what I gather from this thread and related bug report, the change 
introduced in the OVS agent is causing a data plane outage upon agent restart, 
which is not desirable in most cases.

The rationale for the change that introduced this bug was, I believe, cleaning 
up stale flows on the OVS agent, which also makes some sense.

Unless I'm missing something, I reckon the best way forward is actually quite 
straightforward; we might add a startup flag to reset all flows and not reset 
them by default.
While I agree the "flow synchronisation" process proposed in the previous post 
is valuable too, I hope we might be able to fix this with a simpler approach.

Salvatore

On 5 November 2014 04:43, Germy Lure
<germy.l...@gmail.com> wrote:
Hi,

Considering what triggers an agent restart, I think it's nothing but:
1) only restarting the agent
2) rebooting the host the agent is deployed on

When the agent starts, OVS may:
a. have all correct flows
b. have nothing at all
c. have partly correct flows, while the others may need to be reprogrammed,
deleted or added

In any case, I think both users and developers would be happy to see the
system recover ASAP after an agent restart. The best is for the agent to push
only the incorrect flows but keep the correct ones. This ensures that traffic
using correct flows keeps working while the agent starts.

So, I suggest two solutions:
1. After restarting, the agent gets all flows from OVS, compares them with its
local flows, and corrects only the ones that differ.
2. Adapt both OVS and the agent: the agent just pushes all flows (without
removing any) every time, and OVS prepares two flow tables to switch between
(like an RCU lock).

Option 1 is recommended because of third-party vendors.

BR,
Germy


On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec
<openst...@nemebean.com> wrote:
On 10/29/2014 10:17 AM, Kyle Mestery wrote:
> On Wed, Oct 29, 2014 at 7:25 AM, Hly
> <henry4...@gmail.com> wrote:
>>
>>
>> Sent from my iPad
>>
>> On 2014-10-29, at 8:01 PM, Robert van Leeuwen
>> <robert.vanleeu...@spilgames.com> wrote:
>>
> I find our current design removes all flows and then adds flows entry by
> entry; this will cause every network node to break all tunnels to the other
> network nodes and all compute nodes.
 Perhaps a way around this would be to add a flag on agent startup
 which would have it skip reprogramming flows. This could be used for
 the upgrade case.
>>>
>>> I hit the same issue last week and filed a bug here:
>>> https://bugs.launchpad.net/neutron/+bug/1383674
>>>
>>> From an operator's perspective this is VERY annoying, since you also cannot
>>> push any config changes that require/trigger a restart of the agent.
>>> e.g. something simple like changing a log setting becomes a hassle.
>>> I would prefer the default behaviour to be not to clear the flows, or at the
>>> least a config option to disable it.
>>>
>>
>> +1, we also suffered from this even when a very little patch is done
>>
> I'd really like to get some input from the tripleo folks, because they
> were the ones who filed the original bug here and were hit by the
> agent NOT reprogramming flows on agent restart. It does seem fairly
> obvious that adding an option around this would be a good way forward,
> however.

Since nobody else has commented, I'll put in my two cents (though I
might be overcharging you ;-).  I've also added the TripleO tag to the
subject, although with Summit coming up I don't know if that will help.

Anyway, if the bug you're referring to is the one I think, then our
issue was just with the flows not existing.  I don't think we care
whether they get reprogrammed on agent restart or not as long as they
somehow come into existence at some point.

It's possible I'm wrong about that, and probably the best person to talk
to would be Robert Collins since I think he's the one who actually
tracked down the problem in the first place.

Re: [openstack-dev] [neutron][lbaas] rescheduling meeting

2014-11-05 Thread Brandon Logan
Any slot but the 1400 UTC one.

On Nov 4, 2014 8:48 AM, Doug Wiegley  wrote:
Hi LBaaS (and others),

We’ve been talking about possibly re-scheduling the LBaaS meeting to a time
that is less crazy early for those in the US.  Alternately, we could also
start alternating times.  For now, let’s see if we can find a slot that
works every week.  Please respond with any time slots that you can NOT
attend:

Monday, 1600UTC
Monday, 1700UTC
Tuesday, 1600UTC (US pacific, 8am)
Tuesday, 1700UTC
Tuesday, 1800UTC
Wednesday, 1600UTC (US pacific, 8am)
Wednesday, 1700UTC
Wednesday, 1800UTC
Thursday, 1400UTC (US pacific, 6am)


Note that many of these slots will require the approval of the
#openstack-meeting-4 channel:

https://review.openstack.org/#/c/132629/

https://review.openstack.org/#/c/132630/


Thanks,
Doug



Re: [openstack-dev] [Policy][Group-based Policy] Audio stream for GBP Design Session in Paris

2014-11-05 Thread Gregory Lebovitz
Mandeep,
thanks a ton for setting it up. I just didn't see the email before I went
to sleep, so I didn't bother to get up for the session. Now I wish I had!

To affirm the attempt, Yi Sun opened up a Google Hangout for me today in
the split meeting. Even as crappy as the audio was from the mic on his
laptop, it was SO helpful to have an audio stream to go along with the
etherpad.

Think that could happen for the advanced services meetup later today at
2:30pm?

Thanks again for going out of your way for me. I really appreciate it!! -
Gregory

On Tue, Nov 4, 2014 at 3:12 AM, Mandeep Dhami 
wrote:

>
> As no one was online, I closed the webex session.
>
> On Tue, Nov 4, 2014 at 10:07 AM, Mandeep Dhami 
> wrote:
>
>> Use this webex meeting for Audio streaming:
>>
>> https://cisco.webex.com/ciscosales/j.php?MTID=m210c77f6f51a6f313a7d130d19ee3e4d
>>
>>
>> Topic: GBP Design Session
>>
>> Date: Tuesday, November 4, 2014
>>
>> Time: 12:15 pm, Europe Time (Amsterdam, GMT+01:00)
>>
>> Meeting Number: 205 658 563
>>
>> Meeting Password: gbp
>>
>> On Mon, Nov 3, 2014 at 5:48 PM, Gregory Lebovitz 
>> wrote:
>>
>>> Hey all,
>>>
>>> I'm participating remotely this session. Any plan for audio stream of
>>> Tuesday's session? I'll happily offer a GoToMeeting, if needed.
>>>
>>> Would someone be willing to scribe discussion in #openstack-gbp channel?
>>>
>>> --
>>> 
>>> Open industry-related email from
>>> Gregory M. Lebovitz
>>>


-- 

Open industry-related email from
Gregory M. Lebovitz


Re: [openstack-dev] [neutron][lbaas] rescheduling meeting

2014-11-05 Thread Eichberger, German
Hi,

I like 16.00 UTC.

German

On Nov 3, 2014 11:42 PM, Doug Wiegley  wrote:
Hi LBaaS (and others),

We’ve been talking about possibly re-scheduling the LBaaS meeting to a time
that is less crazy early for those in the US.  Alternately, we could also
start alternating times.  For now, let’s see if we can find a slot that
works every week.  Please respond with any time slots that you can NOT
attend:

Monday, 1600UTC
Monday, 1700UTC
Tuesday, 1600UTC (US pacific, 8am)
Tuesday, 1700UTC
Tuesday, 1800UTC
Wednesday, 1600UTC (US pacific, 8am)
Wednesday, 1700UTC
Wednesday, 1800UTC
Thursday, 1400UTC (US pacific, 6am)


Note that many of these slots will require the approval of the
#openstack-meeting-4 channel:

https://review.openstack.org/#/c/132629/

https://review.openstack.org/#/c/132630/


Thanks,
Doug



[openstack-dev] [Neutron] Improving dhcp agent scheduling interface

2014-11-05 Thread Eugene Nikanorov
Hi folks,

I'd like to raise a discussion recently held in IRC and in Gerrit:
https://review.openstack.org/#/c/131944/

The intention of the patch is to clean up a particular scheduling
method/interface:
schedule_network.

Let me clarify why I think it needs to be done (besides code API consistency
reasons):
The scheduling process is ultimately just two steps:
1) choosing an appropriate agent for the network
2) adding a binding between the agent and the network
To perform those two steps one doesn't need the network object; network_id is
sufficient.

However, there is a concern that having the full dict (or full network object)
could allow us to do more flexible things in step 1, like deciding whether the
network should be scheduled at all.

See the TODO for the reference:
https://github.com/openstack/neutron/blob/master/neutron/scheduler/dhcp_agent_scheduler.py#L64

However, this just puts an unnecessary (and actually incorrect)
requirement on the caller to provide the network dict, mainly because the
caller doesn't know what content of the dict the callee (the scheduler driver)
expects.
Currently the scheduler is only interested in the ID; if there is another
scheduling driver,
it may require additional parameters (like a list of full subnet dicts)
in the dict, which may or may not be provided by the calling code.
Instead of making assumptions about what is in the dict, it's better to go
with a simpler and clearer interface that allows the scheduling driver to do
whatever makes sense to it. In other words: the caller provides the id, and
the driver fetches everything it
needs using the id. For existing scheduling drivers it's a no-op.
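
To make the proposed shape concrete, here is a toy sketch of the id-based
two-step interface; the class, its injected callables and the random placement
policy are illustrative only, not actual Neutron code:

    import random

    class ToyDhcpScheduler(object):
        # The caller hands over only a network_id.

        def __init__(self, fetch_network, list_live_agents, bind):
            self._fetch_network = fetch_network        # network_id -> dict
            self._list_live_agents = list_live_agents  # () -> [agent dicts]
            self._bind = bind                          # (agent_id, network_id)

        def schedule(self, network_id):
            # Step 1: choose an agent. The driver fetches whatever network
            # state it needs from the id, so the caller never has to guess
            # which dict keys the driver expects.
            network = self._fetch_network(network_id)
            if not network.get('admin_state_up', True):
                return None  # the "should this be scheduled at all?" check
            agents = self._list_live_agents()
            if not agents:
                return None
            agent = random.choice(agents)
            # Step 2: record the binding between the agent and the network.
            self._bind(agent['id'], network_id)
            return agent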

I think L3 scheduling is an example of an interface done in a better
way; to me it looks clearer and more consistent.

Thanks,
Eugene.


Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-05 Thread Salvatore Orlando
I have no opposition to that, and I will be happy to assist in reviewing the
code that will enable flow synchronisation (or, to put it more simply, the
targeted removal of flows unknown to the L2 agent).

In the meanwhile, I hope you won't mind if we go ahead and start making
flow reset optional - so that we stop causing downtime upon agent restart.

Salvatore

On 5 November 2014 11:57, Erik Moe  wrote:

>
>
> Hi,
>
>
>
> I also agree; IMHO we need a flow synchronization method so we can avoid
> network downtime and stray flows.
>
>
>
> Regards,
>
> Erik
>
>
>
>
>
> *From:* Germy Lure [mailto:germy.l...@gmail.com]
> *Sent:* den 5 november 2014 10:46
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron][TripleO] Clear all flows when
> ovs agent start? why and how avoid?
>
>
>
> Hi Salvatore,
>
> A startup flag is really a simpler approach. But in which situations should
> we set this flag to remove all flows? Upgrade? Manual restart? Internal
> fault?
>
> Indeed, we only need to refresh flows when the flows in OVS are inconsistent
> (incorrect, unwanted, stale and so on) with the agent's view. But the problem
> is: how do we know this? I think a startup flag is too coarse, unless we can
> tolerate the inconsistent situation.
>
> Of course, I believe that turning off the reset-flows-on-startup action would
> resolve most of the problem; the flows are correct most of the time, after
> all. But considering the five-nines availability expected for NFV, I still
> recommend a flow synchronization approach.
>
>
>
> BR,
>
> Germy
>
>
>
> On Wed, Nov 5, 2014 at 3:36 PM, Salvatore Orlando 
> wrote:
>
> From what I gather from this thread and related bug report, the change
> introduced in the OVS agent is causing a data plane outage upon agent
> restart, which is not desirable in most cases.
>
>
>
> The rationale for the change that introduced this bug was, I believe,
> cleaning up stale flows on the OVS agent, which also makes some sense.
>
>
>
> Unless I'm missing something, I reckon the best way forward is actually
> quite straightforward; we might add a startup flag to reset all flows and
> not reset them by default.
>
> While I agree the "flow synchronisation" process proposed in the previous
> post is valuable too, I hope we might be able to fix this with a simpler
> approach.
>
>
>
> Salvatore
>
>
>
> On 5 November 2014 04:43, Germy Lure  wrote:
>
> Hi,
>
>
>
> Considering what triggers an agent restart, I think it's nothing but:
>
> 1) only restarting the agent
>
> 2) rebooting the host the agent is deployed on
>
> When the agent starts, OVS may:
>
> a. have all correct flows
>
> b. have nothing at all
>
> c. have partly correct flows, while the others may need to be reprogrammed,
> deleted or added
>
> In any case, I think both users and developers would be happy to see the
> system recover ASAP after an agent restart. The best is for the agent to push
> only the incorrect flows but keep the correct ones. This ensures that traffic
> using correct flows keeps working while the agent starts.
>
> So, I suggest two solutions:
>
> 1. After restarting, the agent gets all flows from OVS, compares them with
> its local flows, and corrects only the ones that differ.
>
> 2. Adapt both OVS and the agent: the agent just pushes all flows (without
> removing any) every time, and OVS prepares two flow tables to switch between
> (like an RCU lock).
>
> Option 1 is recommended because of third-party vendors.
>
>
>
> BR,
>
> Germy
>
>
>
>
>
> On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec 
> wrote:
>
> On 10/29/2014 10:17 AM, Kyle Mestery wrote:
> > On Wed, Oct 29, 2014 at 7:25 AM, Hly  wrote:
> >>
> >>
> >> Sent from my iPad
> >>
> >> On 2014-10-29, at 8:01 PM, Robert van Leeuwen <
> robert.vanleeu...@spilgames.com> wrote:
> >>
> > I find our current design removes all flows and then adds flows entry by
> > entry; this will cause every network node to break all tunnels to the
> > other network nodes and all compute nodes.
>  Perhaps a way around this would be to add a flag on agent startup
>  which would have it skip reprogramming flows. This could be used for
>  the upgrade case.
> >>>
> >>> I hit the same issue last week and filed a bug here:
> >>> https://bugs.launchpad.net/neutron/+bug/1383674
> >>>
> >>> From an operator's perspective this is VERY annoying, since you also
> cannot push any config changes that require/trigger a restart of the agent.
> >>> e.g. something simple like changing a log setting becomes a hassle.
> >>> I would prefer the default behaviour to be not to clear the flows, or
> at the least a config option to disable it.
> >>>
> >>
> >> +1, we also suffered from this even when a very little patch is done
> >>
> > I'd really like to get some input from the tripleo folks, because they
> > were the ones who filed the original bug here and were hit by the
> > agent NOT reprogramming flows on agent restart. It does seem fairly
> > obvious that adding an option around this would be a good way forward,
> > however.
>
> Since nobody else has commented, I'll put in my two cents (though I might
> be overcharging you ;-).

Re: [openstack-dev] [neutron][lbaas] rescheduling meeting

2014-11-05 Thread Gregory Lebovitz
I'm just a lurker, so pls don't optimize for me. FWIW, here's my reply, in
order of pref:

wed 1600 UTC
wed 1800 UTC
wed 1700 UTC

On Mon, Nov 3, 2014 at 11:42 PM, Doug Wiegley  wrote:

> Hi LBaaS (and others),
>
> We’ve been talking about possibly re-scheduling the LBaaS meeting to a time
> that is less crazy early for those in the US.  Alternately, we could also
> start alternating times.  For now, let’s see if we can find a slot that
> works every week.  Please respond with any time slots that you can NOT
> attend:
>
> Monday, 1600UTC
> Monday, 1700UTC
> Tuesday, 1600UTC (US pacific, 8am)
> Tuesday, 1700UTC
> Tuesday, 1800UTC
> Wednesday, 1600UTC (US pacific, 8am)
> Wednesday, 1700UTC
> Wednesday, 1800UTC
> Thursday, 1400UTC (US pacific, 6am)
>
>
> Note that many of these slots will require the approval of the
> #openstack-meeting-4 channel:
>
> https://review.openstack.org/#/c/132629/
>
> https://review.openstack.org/#/c/132630/
>
>
> Thanks,
> Doug
>



-- 

Open industry-related email from
Gregory M. Lebovitz


Re: [openstack-dev] [Nova] questions on object/db usage

2014-11-05 Thread Daniele Casini

Hi All:

I replaced all db.instance_get_all_by_host() calls in nova and tested the
change using tox.
Might it be useful? If so, what is the best way to propose it to the
OpenStack community?
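
For context, a minimal sketch of the substitution in question, assuming the
InstanceList object API in the nova tree at the time (verify the exact
signature against your checkout):

    from nova import objects

    def instances_on_host(context, host):
        # Preferred: go through the versioned objects layer
        # (in a standalone script you may need objects.register_all() first).
        return objects.InstanceList.get_by_host(context, host)
        # ... instead of hitting the DB API directly:
        #   from nova import db
        #   return db.instance_get_all_by_host(context, host)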


Best regards,

Daniele


On 10/23/2014 09:01 PM, Dan Smith wrote:

When I fix some bugs, I find that in some code in
nova/compute/api.py
we sometimes use db and sometimes use objects. Do we have
any criteria for it? I know we can't access the db in compute-layer code;
how about elsewhere? Is object access or direct db access preferred? Thanks.

Prefer objects, and any remaining db.* usage anywhere (other than the
object code itself) is not only a candidate for cleanup, it's much
appreciated :)

--Dan





Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-05 Thread Gariganti, Sudhakar Babu
I guess this blueprint [1] attempted to address the flow synchronization issue
during agent restart.
But I see no progress/updates. It would be helpful to know about the status
there.

[1] https://blueprints.launchpad.net/neutron/+spec/neutron-agent-soft-restart

On a different note, I agree with Salvatore on getting started with the
simplistic approach and improving it further.

Regards,
Sudhakar.

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Wednesday, November 05, 2014 4:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent 
start? why and how avoid?

I have no opposition to that, and I will be happy to assist in reviewing the
code that will enable flow synchronisation (or, to put it more simply, the
targeted removal of flows unknown to the L2 agent).

In the meanwhile, I hope you won't mind if we go ahead and start making flow 
reset optional - so that we stop causing downtime upon agent restart.

Salvatore

On 5 November 2014 11:57, Erik Moe
<erik@ericsson.com> wrote:

Hi,

I also agree; IMHO we need a flow synchronization method so we can avoid
network downtime and stray flows.

Regards,
Erik


From: Germy Lure [mailto:germy.l...@gmail.com]
Sent: den 5 november 2014 10:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent 
start? why and how avoid?

Hi Salvatore,
A startup flag is really a simpler approach. But in which situations should we
set this flag to remove all flows? Upgrade? Manual restart? Internal fault?

Indeed, we only need to refresh flows when the flows in OVS are inconsistent
(incorrect, unwanted, stale and so on) with the agent's view. But the problem
is: how do we know this? I think a startup flag is too coarse, unless we can
tolerate the inconsistent situation.

Of course, I believe that turning off the reset-flows-on-startup action would
resolve most of the problem; the flows are correct most of the time, after all.
But considering the five-nines availability expected for NFV, I still recommend
a flow synchronization approach.

BR,
Germy

On Wed, Nov 5, 2014 at 3:36 PM, Salvatore Orlando
<sorla...@nicira.com> wrote:
From what I gather from this thread and related bug report, the change 
introduced in the OVS agent is causing a data plane outage upon agent restart, 
which is not desirable in most cases.

The rationale for the change that introduced this bug was, I believe, cleaning 
up stale flows on the OVS agent, which also makes some sense.

Unless I'm missing something, I reckon the best way forward is actually quite 
straightforward; we might add a startup flag to reset all flows and not reset 
them by default.
While I agree the "flow synchronisation" process proposed in the previous post 
is valuable too, I hope we might be able to fix this with a simpler approach.

Salvatore

On 5 November 2014 04:43, Germy Lure
<germy.l...@gmail.com> wrote:
Hi,

Considering what triggers an agent restart, I think it's nothing but:
1) only restarting the agent
2) rebooting the host the agent is deployed on

When the agent starts, OVS may:
a. have all correct flows
b. have nothing at all
c. have partly correct flows, while the others may need to be reprogrammed,
deleted or added

In any case, I think both users and developers would be happy to see the
system recover ASAP after an agent restart. The best is for the agent to push
only the incorrect flows but keep the correct ones. This ensures that traffic
using correct flows keeps working while the agent starts.

So, I suggest two solutions:
1. After restarting, the agent gets all flows from OVS, compares them with its
local flows, and corrects only the ones that differ.
2. Adapt both OVS and the agent: the agent just pushes all flows (without
removing any) every time, and OVS prepares two flow tables to switch between
(like an RCU lock).

Option 1 is recommended because of third-party vendors.

BR,
Germy


On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec
<openst...@nemebean.com> wrote:
On 10/29/2014 10:17 AM, Kyle Mestery wrote:
> On Wed, Oct 29, 2014 at 7:25 AM, Hly
> <henry4...@gmail.com> wrote:
>>
>>
>> Sent from my iPad
>>
>> On 2014-10-29, at 8:01 PM, Robert van Leeuwen
>> <robert.vanleeu...@spilgames.com> wrote:
>>
> I find our current design removes all flows and then adds flows entry by
> entry; this will cause every network node to break all tunnels to the other
> network nodes and all compute nodes.
 Perhaps a way around this would be to add a flag on agent startup
 which would have it skip reprogramming flows. This could be used for
 the upgrade case.
>>>
>>> I hit the same issue last week and filed a bug here:
>>> https://bugs.launchpad.net/neutron/+bug/1383674
>>>
>>> From an operator's perspective this is VERY annoying, since you also cannot
>>> push any config changes that require/trigger a restart of the agent.
>>> e.g. something simple like changing a log setting becomes a hassle.

Re: [openstack-dev] [Nova] questions on object/db usage

2014-11-05 Thread Chen CH Ji
I am also working on this and I am glad you can help. See
https://review.openstack.org/#/c/130744/ for some info, though it's still
ongoing.

If you want to know the general rules, I guess
https://wiki.openstack.org/wiki/How_To_Contribute can be a good place to
refer to.

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Daniele Casini 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   11/05/2014 07:22 PM
Subject:Re: [openstack-dev] [Nova] questions on object/db usage



Hi All:

I replaced all db.instance_get_all_by_host() calls in nova and tested the
change using tox.
Might it be useful? If so, what is the best way to propose it to the
OpenStack community?

Best regards,

Daniele


On 10/23/2014 09:01 PM, Dan Smith wrote:
   When I fix some bugs, I find that in some code in
nova/compute/api.py we sometimes use db and sometimes use objects. Do
we have any criteria for it? I know we can't access the db in
compute-layer code; how about elsewhere? Is object access or direct db
access preferred? Thanks.


  Prefer objects, and any remaining db.* usage anywhere (other than the
  object code itself) is not only a candidate for cleanup, it's much
  appreciated :)

  --Dan






Re: [openstack-dev] [ceilometer] unable to collect compute.node.cpu.* data

2014-11-05 Thread Hang H Liu
With the same steps I get the results below.
You may need to debug get_meters() in
ceilometer/storage/impl_sqlalchemy.py to see whether some filters are taking
effect.


localadmin@ostest2:~/devstack$ ceilometer meter-list
+---------------------------------+------------+------+-----------------+---------+------------+
| Name                            | Type       | Unit | Resource ID     | User ID | Project ID |
+---------------------------------+------------+------+-----------------+---------+------------+
| compute.node.cpu.frequency      | gauge      | MHz  | ostest2_ostest2 | None    | None       |
| compute.node.cpu.idle.percent   | gauge      | %    | ostest2_ostest2 | None    | None       |
| compute.node.cpu.idle.time      | cumulative | ns   | ostest2_ostest2 | None    | None       |
| compute.node.cpu.iowait.percent | gauge      | %    | ostest2_ostest2 | None    | None       |
| compute.node.cpu.iowait.time    | cumulative | ns   | ostest2_ostest2 | None    | None       |
| compute.node.cpu.kernel.percent | gauge      | %    | ostest2_ostest2 | None    | None       |
| compute.node.cpu.kernel.time    | cumulative | ns   | ostest2_ostest2 | None    | None       |
| compute.node.cpu.percent        | gauge      | %    | ostest2_ostest2 | None    | None       |
| compute.node.cpu.user.percent   | gauge      | %    | ostest2_ostest2 | None    | None       |
| compute.node.cpu.user.time      | cumulative | ns   | ostest2_ostest2 | None    | None       |
+---------------------------------+------------+------+-----------------+---------+------------+



"Lu, Lianhao"  写于 2014/11/05 15:23:14:

> From: "Lu, Lianhao" 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 2014/11/05 15:31
> Subject: Re: [openstack-dev] [ceilometer] unable to collect
> compute.node.cpu.* data
>
> Hi Frank,
>
> Could you try ‘ceilometer sample-list’ to see if the compute.node.cpu
> samples are there?
>
> -Lianhao
>
> From: Du Jun [mailto:dj199...@gmail.com]
> Sent: Wednesday, November 05, 2014 3:44 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [ceilometer] unable to collect
> compute.node.cpu.* data
>
> Hi all,
>
> I attempt to collect compute.node.cpu as the following link mentions:
>
>
http://docs.openstack.org/developer/ceilometer/measurements.html#compute-nova
>
> I set:
>
> compute_monitors = ComputeDriverCPUMonitor
>
> in /etc/nova/nova.conf and restart nova-compute, nova-scheduler,
> ceilometer-agent-notification, ceilometer-api, ceilometer-collector.
>
> From ceilometer-agent-notification's log, I can see the agent transform
> and publish the compute.node.cpu.* data samples.
>
> What's more, from ceilometer database, I can see all the meters
> compute.node.cpu.*
>
> mysql> select * from meter;
> +----+---------------------------------+------------+------+
> | id | name                            | type       | unit |
> +----+---------------------------------+------------+------+
> | 39 | compute.node.cpu.frequency      | gauge      | MHz  |
> | 41 | compute.node.cpu.idle.percent   | gauge      | %    |
> | 38 | compute.node.cpu.idle.time      | cumulative | ns   |
> | 45 | compute.node.cpu.iowait.percent | gauge      | %    |
> | 42 | compute.node.cpu.iowait.time    | cumulative | ns   |
> | 36 | compute.node.cpu.kernel.percent | gauge      | %    |
> | 44 | compute.node.cpu.kernel.time    | cumulative | ns   |
> | 37 | compute.node.cpu.percent        | gauge      | %    |
> | 43 | compute.node.cpu.user.percent   | gauge      | %    |
> | 40 | compute.node.cpu.user.time      | cumulative | ns   |
> +----+---------------------------------+------------+------+
>
>
> However, when I type
>
> ceilometer meter-list
>
> It shows nothing about compute.node.cpu.*, so I wonder what's wrong
> with my steps.
>
> --
> Regards,
> Frank


[openstack-dev] [oslo][context] oslo.context repository review request

2014-11-05 Thread Davanum Srinivas
Hello all,

At the Design Summit session for Oslo Library Graduation for Kilo [1], we
decided that oslo.context was a high-priority item since oslo.log was
blocked. So here's a git repo [2]; please take a look to see if this
is good enough for us to open up an infra request.
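
For the curious, a quick usage sketch, assuming the RequestContext API
currently in that repo (attribute and helper names may still change):

    from oslo.context import context

    # Build a context the way a service typically would.
    ctxt = context.RequestContext(user='demo', tenant='demo-project')

    print(ctxt.request_id)  # auto-generated 'req-<uuid>' identifier
    print(ctxt.to_dict())   # serializable form for RPC and logging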

thanks,
dims

[1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
[2] https://github.com/dims/oslo.context

-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [Heat] New function: first_nonnull

2014-11-05 Thread Lee, Alexis
I'm considering adding a function which takes a list and returns the first
non-null, non-empty value in that list.

So you could do EG:

some_thing:
  config:
    ControlVIP:
      first_nonnull:
      - {get_param: ControlVIP}
      - {get_attr: [ControlVirtualIP, fixed_ips, 0, ip_address]}

I'm open to other names, EG "some", "or", "fallback_list" etc.
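
For illustration, a hedged sketch of how such an intrinsic could plug into
Heat's function interface (heat.engine.function.Function); the class name and
the null/empty handling are mine, and registering the name against a template
version is elided:

    from heat.engine import function

    class FirstNonnull(function.Function):
        # Return the first argument resolving to a non-null, non-empty value.

        def result(self):
            for arg in self.args:
                value = function.resolve(arg)
                # Keep 0 and False; skip None and empty strings/lists/maps.
                if value not in (None, '', [], {}):
                    return value
            return None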

Steve Hardy suggested building this into get_attr or Fn::Select. My feeling
is that those each do one job well right now; I'm happy to take a steer,
though.

What do you think please?


Alexis (lxsli)


Re: [openstack-dev] [oslo][context] oslo.context repository review request

2014-11-05 Thread Steve Martinelli
FWIW, this looks good to me - looking forward to using it in 
keystonemiddleware

Steve



From:   Davanum Srinivas 
To: openstack-dev@lists.openstack.org
Date:   11/05/2014 09:46 AM
Subject:[openstack-dev] [oslo][context] oslo.context repository 
review  request



Hello all,

At the Design Summit session for Oslo Library Graduation for Kilo [1], we
decided that oslo.context was a high-priority item since oslo.log was
blocked. So here's a git repo [2]; please take a look to see if this
is good enough for us to open up an infra request.

thanks,
dims

[1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
[2] https://github.com/dims/oslo.context

-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [Barbican] Nominating Steve Heyman for barbican-core

2014-11-05 Thread Chad Lung
Greetings,

I would like to nominate Steve Heyman for the barbican-core team.

Steve is very active in Barbican code reviews and has been a regular
contributor of test related change requests as well as documentation.

As a reminder to barbican-core members, we use the voting process outlined
in https://wiki.openstack.org/wiki/Barbican/CoreTeam to add members to our
team.

Thanks,

Chad Lung


Re: [openstack-dev] [Neutron] Improving dhcp agent scheduling interface

2014-11-05 Thread Armando M.
Hi Eugene, thanks for bringing this up for discussion. My comments are inline.
Thanks,
Armando

On 5 November 2014 12:07, Eugene Nikanorov  wrote:

> Hi folks,
>
> I'd like to raise a discussion recently held in IRC and in Gerrit:
> https://review.openstack.org/#/c/131944/
>
> The intention of the patch is to clean up a particular scheduling
> method/interface:
> schedule_network.
>
> Let me clarify why I think it needs to be done (besides code API
> consistency reasons):
> The scheduling process is ultimately just two steps:
> 1) choosing an appropriate agent for the network
> 2) adding a binding between the agent and the network
> To perform those two steps one doesn't need the network object; network_id
> is sufficient.
>

I would argue that it isn't, actually.

You may need to know the state of the network to make that placement
decision. Just passing the id may cause the scheduling logic to issue an
extra DB query that could easily be avoided if the right interface between
the caller of a scheduler and the scheduler itself were in place. For
instance, we cannot fix [1] (as you pointed out) today because the method
only accepts a dict that holds just a partial representation of the
network. If we had the entire DB object we would avoid that, and just
passing the id goes in the opposite direction IMO.


> However, there is a concern that having the full dict (or full network
> object) could allow us to do more flexible things in step 1, like deciding
> whether the network should be scheduled at all.
>

That's the whole point of scheduling, is it not? If you are arguing that we
should split the schedule method into two separate steps
(get_me_available_agent and bind_network_to_agent), and make the caller of
the schedule method carry out the two-step process by itself, I think it
could be worth exploring that, but at this point I don't believe this is
the right refactoring.


> See the TODO for the reference:
>
> https://github.com/openstack/neutron/blob/master/neutron/scheduler/dhcp_agent_scheduler.py#L64

[1]: the TODO linked above.
>
> However, this just puts an unnecessary (and actually incorrect)
> requirement on the caller to provide the network dict, mainly because the
> caller doesn't know what content of the dict the callee (scheduler driver)
> expects.
>

Why is it incorrect? We should move away from dictionaries and toward passing
objects, so that they can be reused where it makes sense without incurring
the overhead of re-fetching the object associated with the uuid when
needed. We can even hide the complexity of refreshing the copy of the
object every time it is accessed, if needed. With information hiding and
encapsulation we can wrap this logic in one place without scattering it
around everywhere in the code base, like it's done today.


> Currently the scheduler is only interested in the ID; if there is another
> scheduling driver,
>

No, the scheduler needs to know about the state of the network to do proper
placement. Being interested only in the ID is a side-effect of the default
scheduling (i.e. random). If we want to do more intelligent placement we need
the state of the network.


> it may require additional parameters (like a list of full subnet dicts)
> in the dict, which may or may not be provided by the calling code.
> Instead of making assumptions about what is in the dict, it's better to go
> with a simpler and clearer interface that allows the scheduling driver to
> do whatever makes sense to it. In other words: the caller provides the id,
> and the driver fetches everything it needs using the id. For existing
> scheduling drivers it's a no-op.
>

Again, the problem lies with the fact that we're passing dictionaries
around.


>
> I think L3 scheduling is an example of an interface done in a better way;
> to me it looks clearer and more consistent.
>

I would argue that the L3 scheduling API is a bad example, for the
above-mentioned reasons.


>
> Thanks,
> Eugene.
>

At this point I am still not convinced by the arguments provided that
patch 131944 should go forward
as it is.


>
>
>


Re: [openstack-dev] [Barbican] Nominating Steve Heyman for barbican-core

2014-11-05 Thread Douglas Mendizabal
+1

Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C

From:  Chad Lung 
Reply-To:  "OpenStack Development Mailing List (not for usage questions)"

Date:  Wednesday, November 5, 2014 at 4:17 PM
To:  "openstack-dev@lists.openstack.org" 
Subject:  [openstack-dev] [Barbican] Nominating Steve Heyman for
barbican-core

Greetings ,

I would like to nominate Steve Heyman for the barbican-core team.

Steve is very active in Barbican code reviews and has been a regular
contributor of test related change requests as well as documentation.

As a reminder to barbican-core members, we use the voting process outlined
in https://wiki.openstack.org/wiki/Barbican/CoreTeam to add members to our
team.

Thanks,

Chad Lung







[openstack-dev] [Barbican] Nominating Juan Antonio Osorio Robles for barbican-core

2014-11-05 Thread Douglas Mendizabal
Hi All,

I would like to nominate Juan Antonio Osorio Robles to the barbican-core
team.

Juan has been consistently giving us very well-thought-out and constructive
reviews for Barbican, python-barbicanclient and barbican-specs.  It’s
obvious from his reviews that he cares deeply about the quality of the
Barbican project, and I think he will be a great addition to the core team.

As a reminder to barbican-core members, we use the voting process outlined
in https://wiki.openstack.org/wiki/Barbican/CoreTeam to add members to our
team.

References:

http://stackalytics.com/report/contribution/barbican-group/90

Thanks,
Douglas


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C






Re: [openstack-dev] [oslo][context] oslo.context repository review request

2014-11-05 Thread Julien Danjou
On Wed, Nov 05 2014, Davanum Srinivas wrote:

Sorry I missed the session (had a talk at that time).

> At the Design Summit session for Oslo Library Graduation for Kilo, we
> decided that oslo.context was a high priority item since oslo.log was
> blocked. So here's a git repo [2], please take a look to see if this
>> is good enough for us to open up an infra request.

A few comments, considering that:

- https://github.com/dims/oslo.context/blob/master/oslo/context/context.py#L28
  should switch to use oslo.utils.uuidutils to generate the UUID.
- The list of dependencies is very short
- oslo.log (will) depends on oslo.utils
- oslo.log is the only user of that (out of the projects themselves)

What about just moving this into oslo.log or oslo.utils?

That would avoid the burden of having yet another lib for a 100-SLOC-long
file.

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info




Re: [openstack-dev] [oslo][context] oslo.context repository review request

2014-11-05 Thread Davanum Srinivas
jd__,

No issues. I followed the process in the wiki for creating a library,
with minimal changes required to get the tests running as documented.
We can do some of these in the openstack git when the project gets
created.

Please see notes from Doug on the etherpad on why leaving it in
oslo.log or oslo.utils was not considered.
https://etherpad.openstack.org/p/kilo-oslo-library-proposals

Yes, I had the same yet-another-oslo-lib concern!

thanks,
dims

On Wed, Nov 5, 2014 at 5:02 PM, Julien Danjou  wrote:
> On Wed, Nov 05 2014, Davanum Srinivas wrote:
>
> Sorry I missed the session (had a talk at that time).
>
>> At the Design Summit session for Oslo Library Graduation for Kilo, we
>> decided that oslo.context was a high priority item since oslo.log was
>> blocked. So here's a git repo [2], please take a look to see if this
>> is good enough for us to open up a infra request.
>
> A few comments, considering that:
>
> - https://github.com/dims/oslo.context/blob/master/oslo/context/context.py#L28
>   should switch to use oslo.utils.uuidutils to generate the UUID.
> - The list of dependency is very short
> - oslo.log (will) depends on oslo.utils
> - oslo.log is the only user of that (out of the projects themselves)
>
> What about just moving this into oslo.log or oslo.utils?
>
> That would avoid the burden of having yet-another-lib for a 100 SLOC
> long file.
>
> --
> Julien Danjou
> # Free Software hacker
> # http://julien.danjou.info



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Policy][Group-based-policy] GBP Juno/Kilo next steps meeting

2014-11-05 Thread Sumit Naiksatam
Hi,

We had a productive design session discussion on Tuesday. However, we
could not get to the point where we discussed all the next steps and
specific action items for the Juno/Kilo GBP releases. We will be meeting
tomorrow (Thursday) morning in the Le Meridien to cover these.

Time: 10 to 11 AM (before the Neutron sessions start)
Location: Round tables (just outside the design session rooms), Floor
-1, Le Meridien.

Thanks,
~Sumit.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] Nominating Juan Antonio Osorio Robles for barbican-core

2014-11-05 Thread Jarret Raim
+1 for me.

From: Douglas Mendizabal 
mailto:douglas.mendiza...@rackspace.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, November 5, 2014 at 4:53 PM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Barbican] Nominating Juan Antonio Osorio Robles for 
barbican-core

Hi All,

I would like to nominate Juan Antonio Osorio Robles to the barbican-core team.

Juan has been consistently giving us very well thought out and constructive 
reviews for Barbican, python-barbicanclient and barbican-specs.  It’s obvious 
from his reviews that he cares deeply for the quality of the Barbican project, 
and I think he will be a great addition to the core team.

As a reminder to barbican-core members, we use the voting process outlined in 
https://wiki.openstack.org/wiki/Barbican/CoreTeam to add members to our team.

References:

http://stackalytics.com/report/contribution/barbican-group/90

Thanks,
Douglas


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5 0CC9 AD14 1F30 2D58 923C
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] Nominating Steve Heyman for barbican-core

2014-11-05 Thread Jarret Raim
+1 for me as well.

From: Douglas Mendizabal 
mailto:douglas.mendiza...@rackspace.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, November 5, 2014 at 4:29 PM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Barbican] Nominating Steve Heyman for 
barbican-core

+1

Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5 0CC9 AD14 1F30 2D58 923C

From: Chad Lung mailto:chad.l...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, November 5, 2014 at 4:17 PM
To: 
"openstack-dev@lists.openstack.org" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Barbican] Nominating Steve Heyman for barbican-core

Greetings,

I would like to nominate Steve Heyman for the barbican-core team.

Steve is very active in Barbican code reviews and has been a regular 
contributor of test related change requests as well as documentation.

As a reminder to barbican-core members, we use the voting process outlined in 
https://wiki.openstack.org/wiki/Barbican/CoreTeam to add members to our team.

Thanks,

Chad Lung

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Power management in Cobbler

2014-11-05 Thread Vladimir Kuklin
I am +1 for using Cobbler for power management until we merge the
Ironic-based stuff. It is also an essential part of our HA and stop
provisioning/deployment mechanisms.

On Tue, Nov 4, 2014 at 1:00 PM, Dmitriy Shulyak 
wrote:

> Not long ago we discussed the necessity of a power management feature in
> Fuel.
>
> What is your opinion on power management support in Cobbler? I took a look
> at the documentation [1] and the templates [2] that we have right now,
> and it actually looks like we can make use of it.
>
> The only issue is that the power address that a Cobbler system is
> configured with is wrong, because the provisioning serializer uses the one
> reported by bootstrap; but that can be easily fixed.
>
> Of course, another question is a separate network for power management,
> but we can live with the admin network for now.
>
> Please share your opinions on this matter. Thanks
>
> [1] http://www.cobblerd.org/manuals/2.6.0/4/5_-_Power_Management.html
> [2] http://paste.openstack.org/show/129063/
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-05 Thread Anton Zemlyanov
Monitoring the Fuel master's disk space is a special case. I really
wonder why the Fuel master has no HA option; disk overflow can be
predicted, but many other failures cannot. HA is the solution to the
'single point of failure' problem.

The current monitoring recommendations (
http://docs.openstack.org/openstack-ops/content/logging_monitoring.html)
are based on analyzing logs and manual checks, which are a rather reactive
way of fixing problems. Zabbix is quite good at preventing failures that
are predictable, but for abrupt problems Zabbix just reports them 'post
mortem'.

The only way to remove the single point of failure is to implement
redundancy/HA

Anton

On Tue, Nov 4, 2014 at 6:26 PM, Przemyslaw Kaminski 
wrote:

> Hello,
>
> In extension to my comment in this bug [1] I'd like to discuss the
> possibility of adding Fuel master node monitoring. As I wrote in the
> comment, when disk is full it might be already too late to perform any
> action since for example Nailgun could be down because DB shut itself down.
> So we should somehow warn the user that disk is running low (in the UI and
> fuel CLI on stderr for example) before it actually happens.
>
> For now the only meaningful value to monitor would be disk usage -- do you
> have other suggestions? If not then probably a simple API endpoint with
> statvfs calls would suffice. If you see other usages of this then maybe it
> would be better to have some daemon collecting the stats we want.
>
> If we opted for a daemon, then I'm aware that the user can optionally
> install Zabbix server although looking at blueprints in [2] I don't see
> anything about monitoring Fuel master itself -- is it possible to do?
> The installation of Zabbix, though, is not mandatory, so it still
> doesn't completely solve the problem.
>
> [1] https://bugs.launchpad.net/fuel/+bug/1371757
> [2] https://blueprints.launchpad.net/fuel/+spec/monitoring-system
>
> Przemek
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Improving dhcp agent scheduling interface

2014-11-05 Thread Eugene Nikanorov
My comments inline:

I would argue that it isn't, actually.
>
> You may need to know the state of the network to make that placement
> decision.
>
Yes, I could agree with that - and that's just a particular scheduling
implementation, not a requirement for the interface.


> Just passing the id may cause the scheduling logic to issue an extra DB
> query that can be easily avoided if the right interface between the caller
> of a scheduler and the scheduler itself was in place.
>
Yes, it may cause the scheduling logic to issue a query, *iff* it needs it.

For instance we cannot fix [1] (as you pointed out) today because the
> method only accepts a dict that holds just a partial representation of the
> network. If we had the entire DB object we would avoid that and just
> passing the id is going in the opposite direction IMO
>
And here is another issue, I think: requiring an object is not quite
clearly defined at this point. If scheduling needs to be aware of subnets,
then a network object is not enough - and that's why I think we only need
to provide ids that allow the scheduling logic to act on its own.


>
>> However, there is a concern, that having full dict (or full network
>> object) could allow us to do more flexible things in step 1 like deciding,
>> whether network should be scheduled at all.
>>
>
> That's the whole point of scheduling, is it not?
>
Right, and we are just arguing about who should prepare the data needed to
make the scheduling decision.
I just think that the scheduling logic may potentially require more than
just the network object.

In my concrete example, I want to schedule a network which my code moves
from a dead agent to some live agent.
I only have a network id during that operation. I'd like to avoid issuing
a DB query as well - just as you do.
My first thought was something like:
self.schedule_network(context, {'id': network_id}) - which is clearly a
dirty hack!
But that's what the interface is forcing me to do. Or, it forces me to
fetch the network, which I'd like to avoid as well.
That's why I want the scheduling logic to decide whether it needs
additional data or not.
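
For illustration, the kind of interface I have in mind (names are made up
for this sketch; this is not the actual Neutron scheduler code):

    class IdBasedDhcpScheduler(object):
        # A trivial policy (e.g. random placement) may not need any
        # network details at all.
        needs_network_details = False

        def schedule_network(self, plugin, context, network_id):
            network = None
            if self.needs_network_details:
                # Only schedulers making state-aware decisions pay for
                # this extra DB query.
                network = plugin.get_network(context, network_id)
            return self._pick_agents(context, network_id, network)

        def _pick_agents(self, context, network_id, network):
            # Placement policy goes here (random, least loaded, etc.).
            raise NotImplementedError

This way the caller always passes just the id, and each scheduling driver
decides for itself whether it needs to fetch more data.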


>
>>
>> https://github.com/openstack/neutron/blob/master/neutron/scheduler/dhcp_agent_scheduler.py#L64
>>
>> However, this just puts an unnecessary (and actually, incorrect)
>> requirement on the caller, to provide the network dict, mainly because
>> caller doesn't know what content of the dict the callee (scheduler driver)
>> expects.
>>
>
> Why is it incorrect? We should move away from dictionaries and passing
> objects
>
Passing objects is for sure a much stronger API contract; however, I think
it leads to the same level of overhead, if not worse.
For instance, will the network object include the collection of its subnet
objects? Will they, in turn, include ipallocations and such?
If the answer is No (and in my opinion it *must* be No), then we don't
win much with the object approach.
If the answer is Yes, we're fetching way too much from the DB to create the
network object; that's a much bigger overhead than an additional DB query
in a scheduling driver.

No, the scheduler needs to know about the state of the network to do proper
> placement. It's a side-effect of the default scheduling (i.e. random). If
> we want to do more intelligent placement we need the state of the network.
>
That's for sure, the question is only about who prepares the data: caller
or the scheduler.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-05 Thread Oleg Gelbukh
Hello,

As far as I can tell, disk space monitoring is pretty useless unless Fuel
provides the user with some means to automatically clean up stored data
(e.g. remove obsolete diagnostic snapshots). Otherwise, it will only be
useful for experienced Fuel developers who know how to properly clean up
the master node.

--
Best regards,
Oleg Gelbukh

On Tue, Nov 4, 2014 at 3:26 PM, Przemyslaw Kaminski 
wrote:

> Hello,
>
> In extension to my comment in this bug [1] I'd like to discuss the
> possibility of adding Fuel master node monitoring. As I wrote in the
> comment, when disk is full it might be already too late to perform any
> action since for example Nailgun could be down because DB shut itself down.
> So we should somehow warn the user that disk is running low (in the UI and
> fuel CLI on stderr for example) before it actually happens.
>
> For now the only meaningful value to monitor would be disk usage -- do you
> have other suggestions? If not then probably a simple API endpoint with
> statvfs calls would suffice. If you see other usages of this then maybe it
> would be better to have some daemon collecting the stats we want.
>
> If we opted for a daemon, then I'm aware that the user can optionally
> install Zabbix server although looking at blueprints in [2] I don't see
> anything about monitoring Fuel master itself -- is it possible to do?
> The installation of Zabbix, though, is not mandatory, so it still
> doesn't completely solve the problem.
>
> [1] https://bugs.launchpad.net/fuel/+bug/1371757
> [2] https://blueprints.launchpad.net/fuel/+spec/monitoring-system
>
> Przemek
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] New function: first_nonnull

2014-11-05 Thread Fox, Kevin M
That would be very useful. It would eliminate a few more places where I've
needed the AWS Fn::If function.

It would be good to keep the get_ prefix for consistency.

I'd vote for a separate function. It's cleaner.

Thanks,
Kevin


From: Lee, Alexis
Sent: Wednesday, November 05, 2014 6:46:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Heat] New function: first_nonnull

I’m considering adding a function which takes a list and returns the first
non-null, non-empty value in that list.

So you could do EG:

some_thing:
  config:
    ControlVIP:
      first_nonnull:
        - {get_param: ControlVIP}
        - {get_attr: [ControlVirtualIP, fixed_ips, 0, ip_address]}

I’m open to other names, EG “some”, “or”, “fallback_list” etc.

Steve Hardy suggested building this into get_attr or Fn::Select. My feeling
is that those each do one job well right now, I’m happy to take a steer
though.

What do you think please?


Alexis (lxsli)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-05 Thread Erik Moe

Ok, I don’t mind starting with the simplistic approach.

Regards,
Erik


From: Gariganti, Sudhakar Babu [mailto:sudhakar-babu.gariga...@hp.com]
Sent: den 5 november 2014 12:14
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent 
start? why and how avoid?

I guess this blueprint [1] attempted to address the flow synchronization
issue during agent restart, but I see no progress/updates. It would be
helpful to know about the progress there.

[1] https://blueprints.launchpad.net/neutron/+spec/neutron-agent-soft-restart

On a different note, I agree with Salvatore on getting started with the 
simplistic approach and improve it further.

Regards,
Sudhakar.

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Wednesday, November 05, 2014 4:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent 
start? why and how avoid?

I have no opposition to that, and I will be happy to assist reviewing the code 
that will enable flow synchronisation  (or to say it in an easier way, punctual 
removal of flows unknown to the l2 agent).

In the meanwhile, I hope you won't mind if we go ahead and start making flow 
reset optional - so that we stop causing downtime upon agent restart.

Salvatore

On 5 November 2014 11:57, Erik Moe 
mailto:erik@ericsson.com>> wrote:

Hi,

I also agree, IMHO we need flow synchronization method so we can avoid network 
downtime and stray flows.

Regards,
Erik


From: Germy Lure [mailto:germy.l...@gmail.com]
Sent: den 5 november 2014 10:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent 
start? why and how avoid?

Hi Salvatore,
A startup flag is really a simpler approach. But in what situations should
we set this flag to remove all flows? An upgrade? A manual restart? An
internal fault?

Indeed, we only need to refresh flows when the flows known to the agent and
those programmed in OVS are inconsistent (incorrect, unwanted, stale and so
on). But the problem is: how do we know this? I think a startup flag is too
rough, unless we can tolerate the inconsistent situation.

Of course, I believe that turning off the startup flow-reset action can
resolve most problems; the flows are correct most of the time, after all.
But considering NFV's five-nines requirement, I still recommend the flow
synchronization approach.

BR,
Germy

On Wed, Nov 5, 2014 at 3:36 PM, Salvatore Orlando 
mailto:sorla...@nicira.com>> wrote:
From what I gather from this thread and related bug report, the change 
introduced in the OVS agent is causing a data plane outage upon agent restart, 
which is not desirable in most cases.

The rationale for the change that introduced this bug was, I believe, cleaning 
up stale flows on the OVS agent, which also makes some sense.

Unless I'm missing something, I reckon the best way forward is actually quite 
straightforward; we might add a startup flag to reset all flows and not reset 
them by default.
While I agree the "flow synchronisation" process proposed in the previous post 
is valuable too, I hope we might be able to fix this with a simpler approach.

Salvatore

On 5 November 2014 04:43, Germy Lure 
mailto:germy.l...@gmail.com>> wrote:
Hi,

Considering what triggers an agent restart, I think it's nothing but:
1) only the agent is restarted
2) the host the agent is deployed on is rebooted

When the agent starts, OVS may:
a. have all correct flows
b. have nothing at all
c. have partly correct flows, while the others may need to be reprogrammed,
deleted or added

In any case, I think both users and developers would be happy to see the
system recover ASAP after an agent restart. The best approach is for the
agent to push only the incorrect flows and keep the correct ones. This
ensures that traffic using correct flows keeps working while the agent
starts.

So, I suggest two solutions:
1. The agent gets all flows from OVS and compares them with its local
flows after restarting, then corrects only the ones that differ.
2. Adapt both OVS and the agent: the agent just pushes all flows (without
removing any) every time, and OVS keeps two flow tables to switch between
(like an RCU lock).

Option 1 is recommended because of third-party vendors.
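
To make option 1 concrete, a rough sketch (illustrative only: a real
implementation would have to normalize the ovs-ofctl output, strip the
per-flow statistics and probably match on cookies):

    import subprocess

    def installed_flows(bridge):
        # The first line of dump-flows output is a reply header; skip it.
        out = subprocess.check_output(
            ['ovs-ofctl', 'dump-flows', bridge]).decode()
        return set(line.strip() for line in out.splitlines()[1:])

    def sync_flows(bridge, desired):
        current = installed_flows(bridge)
        # Delete only the flows the agent does not know about and add
        # only the missing ones; correct flows keep forwarding traffic.
        for flow in current - desired:
            match = flow.split('actions=')[0].strip().rstrip(',')
            subprocess.check_call(['ovs-ofctl', 'del-flows', bridge, match])
        for flow in desired - current:
            subprocess.check_call(['ovs-ofctl', 'add-flow', bridge, flow])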

BR,
Germy


On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec 
mailto:openst...@nemebean.com>> wrote:
On 10/29/2014 10:17 AM, Kyle Mestery wrote:
> On Wed, Oct 29, 2014 at 7:25 AM, Hly 
> mailto:henry4...@gmail.com>> wrote:
>>
>>
>> Sent from my iPad
>>
>> On 2014-10-29, at 下午8:01, Robert van Leeuwen 
>> mailto:robert.vanleeu...@spilgames.com>> 
>> wrote:
>>
> I find our current design is remove all flows then add flow by entry, this
> will cause every network node will break off all tunnels between other
> network node and all compute node.
 Perhaps a way around this would be to add a flag on agent startup
 which would have it skip reprogramming flows.

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-05 Thread Jorge Miramontes
Thanks German,

It looks like the conversation is going towards using the HAProxy stats 
interface and/or iptables. I just wanted to explore logging a bit. That said, 
can you and Stephen share your thoughts on how we might implement that 
approach? I'd like to get a spec out soon because I believe metric gathering 
can be worked on in parallel with the rest of the project. In fact, I was 
hoping to get my hands dirty on this one and contribute some code, but a 
strategy and spec are needed first before I can start that ;)

Cheers,
--Jorge

From: , German 
mailto:german.eichber...@hp.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, November 5, 2014 3:50 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Jorge,

I am still not convinced that we need to use logging for usage metrics. We
can also use the haproxy stats interface (which the haproxy team is willing
to improve based on our input) and/or iptables, as Stephen suggested. That
said, this probably needs more exploration.
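
For what it's worth, polling could look roughly like this, assuming the
amphora's haproxy.cfg exposes an admin socket ("stats socket
/var/run/haproxy.sock"); the column names come from HAProxy's CSV format:

    import csv
    import socket

    def haproxy_stats(sock_path='/var/run/haproxy.sock'):
        # 'show stat' returns CSV; the header line starts with '# '.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock_path)
        s.sendall(b'show stat\n')
        data = b''
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
        s.close()
        return list(csv.DictReader(data.decode().lstrip('# ').splitlines()))

    # e.g. aggregate bytes in/out per listener for metering:
    # for row in haproxy_stats():
    #     if row['svname'] == 'FRONTEND':
    #         print(row['pxname'], row['bin'], row['bout'])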

From an HP perspective, the full logs on the load balancer are mostly
interesting for the user of the load balancer – we only care about
aggregates for our metering. That said, we would be happy to just move them
on demand to a place the user can access.

Thanks,
German


From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Tuesday, November 04, 2014 8:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Susanne,

Thanks for the reply. As Angus pointed out, the one big item that needs to
be addressed with this method is the network I/O of raw logs. One idea to
mitigate this concern is to store the data locally at the
operator-configured granularity, process it and THEN send it to Ceilometer,
etc. If we can't engineer a way to deal with the high network I/O that will
inevitably occur, we may have to move towards a polling approach. Thoughts?

Cheers,
--Jorge

From: Susanne Balle mailto:sleipnir...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, November 4, 2014 11:10 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Jorge

I understand your use cases around capturing of metrics, etc.

Today we mine the logs for usage information on our Hadoop cluster. In the 
future we'll capture all the metrics via ceilometer.

IMHO the amphorae should have an interface that allows the logs to be moved
to various backends such as Elasticsearch, Hadoop HDFS, Swift, etc., as
well as, by default (but with the option to disable it), Ceilometer.
Ceilometer is the de facto metering service for OpenStack, so we need to
support it. We would like the integration with Ceilometer to be based on
notifications; I believe German sent a reference to that in another email.
The pre-processing will need to be optional and the amount of data
aggregation configurable.

What you describe below to me is usage gathering/metering. Billing is
independent, since companies with private clouds might not want to bill but
still need usage reports for capacity planning, etc. Billing/charging is
just putting a monetary value on the various forms of usage.

I agree with all points.

> - Capture logs in a scalable way (i.e. capture logs and put them on a
> separate scalable store somewhere so that it doesn't affect the amphora).

> - Every X amount of time (every hour, for example) process the logs and
> send them on their merry way to cielometer or whatever service an operator
> will be using for billing purposes.

"Keep the logs": This is what we would use log forwarding to either Swift or 
Elastic Search, etc.

>- Keep logs for some configurable amount of time. This could be anything
> from indefinitely to not at all. Rackspace is planning on keeping them for
> a certain period of time for the following reasons:

It looks like we are in agreement, so I am not sure why it sounded like we
were in disagreement on IRC. I am not sure why, but it sounded like you
were talking about something else when you were talking about the real-time
processing. If we are just talking about moving the logs to your Hadoop
cluster or any backend in a scalable way, we agree.

Susanne


On Thu, Oct 23, 2014 at 6:30 PM, Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>> wrote:
Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide
more insight into you usage requirements? Also, I'd like to clarify a few
points related to using logging.

I am advocating that logs be used for multiple 

Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-05 Thread Armando M.
I would be open to making this toggle switch available; however, I feel
that doing it via static configuration can introduce an unnecessary burden
on the operator. Perhaps we could explore a way where the agent can figure
out which state it's supposed to be in based on its reported status?

Armando

On 5 November 2014 12:09, Salvatore Orlando  wrote:

> I have no opposition to that, and I will be happy to assist reviewing the
> code that will enable flow synchronisation  (or to say it in an easier way,
> punctual removal of flows unknown to the l2 agent).
>
> In the meanwhile, I hope you won't mind if we go ahead and start making
> flow reset optional - so that we stop causing downtime upon agent restart.
>
> Salvatore
>
> On 5 November 2014 11:57, Erik Moe  wrote:
>
>>
>>
>> Hi,
>>
>>
>>
>> I also agree, IMHO we need flow synchronization method so we can avoid
>> network downtime and stray flows.
>>
>>
>>
>> Regards,
>>
>> Erik
>>
>>
>>
>>
>>
>> *From:* Germy Lure [mailto:germy.l...@gmail.com]
>> *Sent:* den 5 november 2014 10:46
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [neutron][TripleO] Clear all flows when
>> ovs agent start? why and how avoid?
>>
>>
>>
>> Hi Salvatore,
>>
>> A startup flag is really a simpler approach. But in what situation we
>> should set this flag to remove all flows? upgrade? restart manually?
>> internal fault?
>>
>>
>>
>> Indeed, only at the time that there are inconsistent(incorrect, unwanted,
>> stable and so on) flows between agent and the ovs related, we need refresh
>> flows. But the problem is how we know this? I think a startup flag is too
>> rough, unless we can tolerate the inconsistent situation.
>>
>>
>>
>> Of course, I believe that turn off startup reset flows action can resolve
>> most problem. The flows are correct most time after all. But considering
>> NFV 5 9s, I still recommend flow synchronization approach.
>>
>>
>>
>> BR,
>>
>> Germy
>>
>>
>>
>> On Wed, Nov 5, 2014 at 3:36 PM, Salvatore Orlando 
>> wrote:
>>
>> From what I gather from this thread and related bug report, the change
>> introduced in the OVS agent is causing a data plane outage upon agent
>> restart, which is not desirable in most cases.
>>
>>
>>
>> The rationale for the change that introduced this bug was, I believe,
>> cleaning up stale flows on the OVS agent, which also makes some sense.
>>
>>
>>
>> Unless I'm missing something, I reckon the best way forward is actually
>> quite straightforward; we might add a startup flag to reset all flows and
>> not reset them by default.
>>
>> While I agree the "flow synchronisation" process proposed in the previous
>> post is valuable too, I hope we might be able to fix this with a simpler
>> approach.
>>
>>
>>
>> Salvatore
>>
>>
>>
>> On 5 November 2014 04:43, Germy Lure  wrote:
>>
>> Hi,
>>
>>
>>
>> Consider the triggering of restart agent, I think it's nothing but:
>>
>> 1). only restart agent
>>
>> 2). reboot the host that agent deployed on
>>
>>
>>
>> When the agent started, the ovs may:
>>
>> a.have all correct flows
>>
>> b.have nothing at all
>>
>> c.have partly correct flows, the others may need to be reprogrammed,
>> deleted or added
>>
>>
>>
>> In any case, I think both user and developer would happy to see that the
>> system recovery ASAP after agent restarting. The best is agent only push
>> those incorrect flows, but keep the correct ones. This can ensure those
>> business with correct flows working during agent starting.
>>
>>
>>
>> So, I suggest two solutions:
>>
>> 1.Agent gets all flows from ovs and compare with its local flows after
>> restarting. And agent only corrects the different ones.
>>
>> 2.Adapt ovs and agent. Agent just push all(not remove) flows every time
>> and ovs prepares two tables for flows switch(like RCU lock).
>>
>>
>>
>> 1 is recommended because of the 3rd vendors.
>>
>>
>>
>> BR,
>>
>> Germy
>>
>>
>>
>>
>>
>> On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec 
>> wrote:
>>
>> On 10/29/2014 10:17 AM, Kyle Mestery wrote:
>> > On Wed, Oct 29, 2014 at 7:25 AM, Hly  wrote:
>> >>
>> >>
>> >> Sent from my iPad
>> >>
>> >> On 2014-10-29, at 下午8:01, Robert van Leeuwen <
>> robert.vanleeu...@spilgames.com> wrote:
>> >>
>> > I find our current design is remove all flows then add flow by
>> entry, this
>> > will cause every network node will break off all tunnels between
>> other
>> > network node and all compute node.
>>  Perhaps a way around this would be to add a flag on agent startup
>>  which would have it skip reprogramming flows. This could be used for
>>  the upgrade case.
>> >>>
>> >>> I hit the same issue last week and filed a bug here:
>> >>> https://bugs.launchpad.net/neutron/+bug/1383674
>> >>>
>> >>> From an operators perspective this is VERY annoying since you also
>> cannot push any config changes that requires/triggers a restart of the
>> agent.
>> >>> e.g. something simple like changing a log setting becomes a hassle.
>> >>> I would pref

Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-05 Thread Dmitry Borodaenko
Even one additional hardware node required to host the Fuel master is seen
by many users as excessive. Unless you can come up with an architecture
that adds HA capability to Fuel without increasing its hardware footprint
by 2 more nodes, it's just not worth it.

The only operational aspect of the Fuel master node that you don't want to
lose even for a short while is logging. You'd be better off redirecting
OpenStack environments' logs to a dedicated highly available logging server
(which, of course, you already have in your environment), and dealing with
Fuel master node failures by restoring it from backups.

On Wed, Nov 5, 2014 at 8:26 AM, Anton Zemlyanov 
wrote:

> Monitoring of the Fuel master's disk space is the special case. I really
> wonder why Fuel master have no HA option, disk overflow can be predicted
> but many other failures cannot. HA is a solution of the 'single point of
> failure' problem.
>
> The current monitoring recommendations (
> http://docs.openstack.org/openstack-ops/content/logging_monitoring.html)
> are based on analyzing logs and manual checks, that are rather reactive way
> of fixing problems. Zabbix is quite good for preventing failures that are
> predictable but for the abrupt problems Zabbix just reports them 'post
> mortem'.
>
> The only way to remove the single failure point is to implement
> redundancy/HA
>
> Anton
>
> On Tue, Nov 4, 2014 at 6:26 PM, Przemyslaw Kaminski <
> pkamin...@mirantis.com> wrote:
>
>> Hello,
>>
>> In extension to my comment in this bug [1] I'd like to discuss the
>> possibility of adding Fuel master node monitoring. As I wrote in the
>> comment, when disk is full it might be already too late to perform any
>> action since for example Nailgun could be down because DB shut itself down.
>> So we should somehow warn the user that disk is running low (in the UI and
>> fuel CLI on stderr for example) before it actually happens.
>>
>> For now the only meaningful value to monitor would be disk usage -- do
>> you have other suggestions? If not then probably a simple API endpoint with
>> statvfs calls would suffice. If you see other usages of this then maybe it
>> would be better to have some daemon collecting the stats we want.
>>
>> If we opted for a daemon, then I'm aware that the user can optionally
>> install Zabbix server although looking at blueprints in [2] I don't see
>> anything about monitoring Fuel master itself -- is it possible to do?
>> The installation of Zabbix, though, is not mandatory, so it still
>> doesn't completely solve the problem.
>>
>> [1] https://bugs.launchpad.net/fuel/+bug/1371757
>> [2] https://blueprints.launchpad.net/fuel/+spec/monitoring-system
>>
>> Przemek
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Dmitry Borodaenko
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack-dev] [neutron] [nfv]

2014-11-05 Thread Tidwell, Ryan
Keshava,

This sounds like you're asking how you might do service function chaining with 
Neutron.  Is that a fair way to characterize your thoughts? I think the concept 
of service chain provisioning in Neutron is worth some discussion, keeping in 
mind Neutron is not a fabric controller.

-Ryan

From: A, Keshava
Sent: Tuesday, November 04, 2014 11:28 PM
To: OpenStack Development Mailing List (not for usage questions); Singh, 
Gangadhar S
Subject: [openstack-dev] openstack-dev] [neutron] [nfv]

Hi,
I am thinking out loud here about NFV service VMs and the OpenStack
infrastructure. Please let me know whether the scenario analysis below
makes sense.

NFV service VMs are hosted on a cloud (OpenStack) in which there are 2
tenants with different service orders of execution.
(The service order mentioned here is just an example.)

* Does OpenStack control the order of service execution for every packet?

* Will OpenStack have a different service tag for each service?

* If there are multiple features within a service VM, how is service
execution controlled in that VM?

* After a particular service completes, how will the next service be
invoked?

Will there be pre-configured flows from OpenStack to invoke the next
service for a tagged packet from a service VM?



Thanks & regards,
keshava




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Mid-cycle hack-a-thon

2014-11-05 Thread Adam Harwell
I can probably make it up there to attend.

--Adam

https://keybase.io/rm_you


From: Stephen Balukoff mailto:sbaluk...@bluebox.net>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, November 4, 2014 3:46 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Octavia] Mid-cycle hack-a-thon


Howdy, folks!

We are planning to have a mid-cycle hack-a-thon in Seattle from the 8th through 
the 12th of December. This will be at the HP corporate offices located in the 
Seattle convention center.

During this week we will be concentrating on Octavia code and hope to make 
significant progress toward our v0.5 milestone.

If you are interested in attending, please e-mail me. If you are interested in 
participating but can't travel to Seattle that week, please also let me know, 
and we will see about using other means to collaborate with you in real time.

Thanks!
Stephen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OVF/OVA support

2014-11-05 Thread Bhandaru, Malini K
Please join us on Friday in the Glance track (free-format session) to
discuss supporting OVF/OVA in OpenStack.

Poll:

1)  How interested are you in this feature? 0 - 10

2)  Interested enough to help develop the feature?



Artifacts are ready for use.

We are considering defining an artifact for OVF/OVA.
What should the scope of this work be? Who are our fellow travelers?
Intel is interested in parsing the OVF metadata associated with images - to
ensure that a VM image lands on the most appropriate hardware in the cloud,
for optimal performance.
The goal is to remove the need to manually specify image metadata, allow
the appliance provider to specify HW requirements, and in so doing reduce
human error.
Are any partners interested in writing an OVF/OVA artifact => stack
deployment, along the lines of Heat?
As a first pass, we at Intel could at least do the following (a rough
parsing sketch follows this list):

1)  Define an artifact for OVA: parse the OVF in it, pull out the images
therein, store them in the Glance image database, and attach metadata to
them.

2)  We do not want to imply that OpenStack supports OVA/OVF wholesale --
we need to be clear on this.

3)  An OpenStack user could then create a Heat template using the images
registered in step 1.

4)  OVA to Heat - there may be a loss in translation! Should we attempt
this?

5)  What should we do with multiple volume artifacts?

6)  Are volumes read-only? Or, on cloning, should we make copies of them?
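
To make item 1 concrete, here is a rough sketch of pulling the disk image
references out of an OVA (namespace handling is simplified and real-world
descriptors vary a lot, so treat this as illustrative only):

    import tarfile
    import xml.etree.ElementTree as ET

    OVF_NS = '{http://schemas.dmtf.org/ovf/envelope/1}'

    def ovf_image_files(ova_path):
        # An OVA is just a tar archive whose .ovf member is the XML
        # descriptor; References/File elements point at the disk images
        # packed alongside it.
        with tarfile.open(ova_path) as tar:
            ovf_name = [m.name for m in tar.getmembers()
                        if m.name.endswith('.ovf')][0]
            root = ET.parse(tar.extractfile(ovf_name)).getroot()
        return [f.get(OVF_NS + 'href') for f in root.iter(OVF_NS + 'File')]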
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-novaclient] When do you use "human_id" ?

2014-11-05 Thread 熊谷育朗
Hi,All

 I have a simple question.

According to the commit comment (*1), "human_id" is a human-friendly ID,
a slugified form of the model name.
"Slugified" means something like URL slugify: it replaces the spaces in the
string with hyphens and removes non-alphanumeric characters.
For example, "a b c" becomes "a-b-c", and "a.b.c" becomes "abc".

I already know that it is used for bash completion.
Do you know any other use case? Is that all?
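
For reference, the slugify logic is roughly the following (paraphrased
from memory, not the exact novaclient code):

    import re
    import unicodedata

    def slugify(value):
        # Normalize to ASCII, drop everything that is not alphanumeric,
        # underscore, hyphen or whitespace, then collapse whitespace and
        # hyphens into single hyphens.
        value = unicodedata.normalize('NFKD', value).encode('ascii',
                                                            'ignore')
        value = re.sub(r'[^\w\s-]', '', value.decode()).strip().lower()
        return re.sub(r'[-\s]+', '-', value)

    # slugify(u'a b c') -> 'a-b-c'
    # slugify(u'a.b.c') -> 'abc'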


FYI: (*1)
The commit comment is below.
--
commit b22ec22336def07a0678fd0c548fb87ea48c6eab
Author: Rick Harris 
Date:   Tue Mar 6 00:33:37 2012 +

Add human-friendly ID support.

Allows a user to interact with certain models (image, flavors, and
servers currently) using a human-friendly identifier which is a
slugified form of the model name.

Example:

nova boot --image debian-6-squeeze --flavor 256mb-instance myinst

Change-Id: I43dbedac3493d010c1ec9ba8b8bb1007ff7ac499
--


Thanks

Bit-isle Inc.
R &D Institute
Ikuo Kumagai
mobile: 080-6857-3938
E-mail:i-kuma...@bit-isle.co.jp


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] New function: first_nonnull

2014-11-05 Thread Steve Baker
We have a template usability session at 9am this morning, and we'll be
covering these sorts of utility functions as part of that session. If you
don't make it we can follow up later.


On 05/11/14 15:46, Lee, Alexis wrote:


I’m considering adding a function which takes a list and returns the first
non-null, non-empty value in that list.

So you could do EG:

some_thing:
  config:
    ControlVIP:
      first_nonnull:
        - {get_param: ControlVIP}
        - {get_attr: [ControlVirtualIP, fixed_ips, 0, ip_address]}

I’m open to other names, EG “some”, “or”, “fallback_list” etc.

Steve Hardy suggested building this into get_attr or Fn::Select. My feeling
is that those each do one job well right now, I’m happy to take a steer
though.

What do you think please?

Alexis (lxsli)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] rescheduling meeting

2014-11-05 Thread Samuel Bercovici
For us in Israel, the earlier the better.
The current meeting time is very good for us, although I understand it is
too early for some.

-Sam.

From: Gregory Lebovitz [mailto:gregory.i...@gmail.com]
Sent: Wednesday, November 05, 2014 1:10 PM
To: OpenStack Development Mailing List (not for usage questions); Doug Wiegley
Subject: Re: [openstack-dev] [neutron][lbaas] rescheduling meeting

I'm just a lurker, so pls don't optimize for me. FWIW, here's my reply, in 
order of pref:

wed 1600 UTC
wed 1800 UTC
wed 1700 UTC

On Mon, Nov 3, 2014 at 11:42 PM, Doug Wiegley 
mailto:do...@a10networks.com>> wrote:
Hi LBaaS (and others),

We’ve been talking about possibly re-scheduling the LBaaS meeting to a time
that is less crazy early for those in the US.  Alternately, we could also
start alternating times.  For now, let’s see if we can find a slot that
works every week.  Please respond with any time slots that you can NOT
attend:

Monday, 1600UTC
Monday, 1700UTC
Tuesday, 1600UTC (US pacific, 8am)
Tuesday, 1700UTC
Tuesday, 1800UTC
Wednesday, 1600UTC (US pacific, 8am)
Wednesday, 1700UTC
Wednesday, 1800UTC
Thursday, 1400UTC (US pacific, 6am)


Note that many of these slots will require the approval of the
#openstack-meeting-4 channel:

https://review.openstack.org/#/c/132629/

https://review.openstack.org/#/c/132630/


Thanks,
Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Open industry-related email from
Gregory M. Lebovitz
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] New function: first_nonnull

2014-11-05 Thread Clint Byrum
Excerpts from Lee, Alexis's message of 2014-11-05 15:46:43 +0100:
> I'm considering adding a function which takes a list and returns the first
> non-null, non-empty value in that list.
> 
> So you could do EG:
> 
> some_thing:
>   config:
>     ControlVIP:
>       first_nonnull:
>         - {get_param: ControlVIP}
>         - {get_attr: [ControlVirtualIP, fixed_ips, 0, ip_address]}
> 
> I'm open to other names, EG "some", "or", "fallback_list" etc.
> 
> Steve Hardy suggested building this into get_attr or Fn::Select. My feeling
> is that those each do one job well right now, I'm happy to take a steer
> though.
> 
> What do you think please?
> 

Yes this is super useful for writing responsive, reusable templates.

I'd like to suggest that this be called 'coalesce' as that is what SQL
calls it.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-05 Thread Przemyslaw Kaminski
I think we're missing the point here. What I meant was adding a simple 
monitoring system that informs the user via UI/CLI/email/whatever of 
low resources on the Fuel master node. That's it. HA here is not an option 
-- if, despite the warnings, the user still continues to use Fuel and the 
disk becomes full, it's the user's fault. By adding these warnings we 
have a way of saying "We told you so!" Without warnings we get bugs like 
[1], which I mentioned in the first post.
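
To be concrete, the check itself is trivial -- something like this
(the threshold is illustrative):

    import os

    def disk_usage_warning(path='/', threshold=0.90):
        # f_blocks is the total number of blocks on the filesystem;
        # f_bavail is how many are still available to unprivileged
        # processes.
        st = os.statvfs(path)
        used = 1.0 - float(st.f_bavail) / st.f_blocks
        if used >= threshold:
            return 'Warning: %s is %.0f%% full' % (path, used * 100)
        return None

The open question is just where to hang it (an API endpoint vs. a daemon)
and how to surface the warning in the UI/CLI.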


Of course the user can check disk space by hand, but since we do have a 
full-blown UI, telling the user to periodically log in to the console and 
check disks by hand seems a bit of a burden.


We can even implement such monitoring functionality as a Nailgun plugin 
-- installing it would be optional and at the same time we would grow 
our plugin ecosystem.


P.

On 11/05/2014 08:42 PM, Dmitry Borodaenko wrote:
Even one additional hardware node required to host the Fuel master is 
seen by many users as excessive. Unless you can come up with an 
architecture that adds HA capability to Fuel without increasing its 
hardware footprint by 2 more nodes, it's just not worth it.


The only operational aspect of the Fuel master node that you don't 
want to lose even for a short while is logging. You'd be better off 
redirecting OpenStack environments' logs to a dedicated highly 
available logging server (which, of course, you already have in your 
environment), and deal with Fuel master node failures by restoring it 
from backups.


On Wed, Nov 5, 2014 at 8:26 AM, Anton Zemlyanov 
mailto:azemlya...@mirantis.com>> wrote:


Monitoring of the Fuel master's disk space is the special case. I
really wonder why Fuel master have no HA option, disk overflow can
be predicted but many other failures cannot. HA is a solution of
the 'single point of failure' problem.

The current monitoring recommendations
(http://docs.openstack.org/openstack-ops/content/logging_monitoring.html)
are based on analyzing logs and manual checks, that are rather
reactive way of fixing problems. Zabbix is quite good for
preventing failures that are predictable but for the abrupt
problems Zabbix just reports them 'post mortem'.

The only way to remove the single failure point is to implement
redundancy/HA

Anton

On Tue, Nov 4, 2014 at 6:26 PM, Przemyslaw Kaminski
mailto:pkamin...@mirantis.com>> wrote:

Hello,

In extension to my comment in this bug [1] I'd like to discuss
the possibility of adding Fuel master node monitoring. As I
wrote in the comment, when disk is full it might be already
too late to perform any action since for example Nailgun could
be down because DB shut itself down. So we should somehow warn
the user that disk is running low (in the UI and fuel CLI on
stderr for example) before it actually happens.

For now the only meaningful value to monitor would be disk
usage -- do you have other suggestions? If not then probably a
simple API endpoint with statvfs calls would suffice. If you
see other usages of this then maybe it would be better to have
some daemon collecting the stats we want.

If we opted for a daemon, then I'm aware that the user can
optionally install Zabbix server although looking at
blueprints in [2] I don't see anything about monitoring Fuel
master itself -- is it possible to do? Though the installation
of Zabbix though is not mandatory so it still doesn't
completely solve the problem.

[1] https://bugs.launchpad.net/fuel/+bug/1371757
[2] https://blueprints.launchpad.net/fuel/+spec/monitoring-system

Przemek

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Dmitry Borodaenko


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev