Re: [openstack-dev] [tc] Re: [kolla] A new kolla-salt deliverable

2017-01-22 Thread Daniel Comnea
So what is the conclusion on the kolla-salt deliverable? Has this work started,
and if yes, where is the code?

On Mon, Jan 2, 2017 at 1:12 PM, Thierry Carrez 
wrote:

> Michał Jastrzębski wrote:
> > I agree this would make good PTG discussion for us, and maybe someone
> > from TC could join in. We are somewhat of a special beast, as Kolla itself
> > is just meant to be consumed. Kolla to me is similar to Oslo in that
> > respect. But still, we don't have oslo-compute projects, we have nova,
> > whereas we would have kolla-salt, even though the core teams could
> > potentially be totally different. As Britt said, growing pains.
>
> Happy to join in any discussion at the PTG on that topic.
>
> As far as our governance is concerned, "teams" decide what git
> repositories they vouch for -- the team's opinion being represented by
> the PTL. Each team is of course free to add more complex ways of
> gathering the "team's opinion" :)
>
> Cheers,
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] potential Ansible template issue you may meet

2016-09-13 Thread Daniel Comnea
Thanks for the heads up.
I suggest we add this to the Kolla docs, as it is easy to forget about it.

On Tue, Sep 13, 2016 at 3:15 AM, Jeffrey Zhang 
wrote:

> When using ansible template module, you may see the trailing newline
> is stripped by blocks, like
>
> # template.j2
> a = {% if true %}1{% endif %}
> b = 2
>
> the rendered result will be like
>
> a = 1b = 2
>
> The newline character after `a = 1` is stripped.
>
> The root cause comes from jinja2's trim_blocks feature. Ansible
> enabled this feature. If you want to disable it, just add `#jinja2:
> trim_blocks: False` to the j2 template file. This is a feature in
> Ansible, and I do not think they will fix/change this. But we need to
> take care of this when using the template module.
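(Illustrative sketch, not from the original message: with the override header
in place, the template above keeps its newline.)

#jinja2: trim_blocks: False
a = {% if true %}1{% endif %}
b = 2

which should then render as two separate lines, "a = 1" and "b = 2", instead
of the collapsed "a = 1b = 2".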
>
> For more info, please check [0] and [1].
>
> [0] https://github.com/ansible/ansible/issues/16344
> [1] http://jinja.pocoo.org/docs/dev/api/#jinja2.Environment
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Report from Gerrit User Summit

2015-11-20 Thread Daniel Comnea
Superb report Jim, thanks!

On Thu, Nov 19, 2015 at 10:47 AM, Markus Zoeller 
wrote:

> David Pursehouse  wrote on 11/12/2015 09:22:50
> PM:
>
> > From: David Pursehouse 
> > To: OpenStack Development Mailing List
> 
> > Cc: openstack-in...@lists.openstack.org
> > Date: 11/12/2015 09:27 PM
> > Subject: Re: [openstack-dev] [OpenStack-Infra] Report from Gerrit User
> Summit
> >
> > On Mon, Nov 9, 2015 at 10:40 PM David Pursehouse
>  > > wrote:
> >
> > <...>
> >
> > * As noted in another recent thread by Khai, the hashtags support
> >   (user-defined tags applied to changes) exists but depends on notedb
> >   which is not ready for use yet (targeted for 3.0 which is probably
> >   at least 6 months off).
>
> >
> > We're looking into the possibility of enabling only enough of the
> > notedb to make hashtags work in 2.12.
> >
> >
> >
> > Unfortunately it looks like it's not going to be possible to do this.
>
> That's a great pity. :(
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][dns]What the meaning of"dns_assignment" and "dns_name"?

2015-10-16 Thread Daniel Comnea
Hi,

While #1 does improve things further, it doesn't cover the use case where
you have 2 tenants and you would like to have the VMs spun up in different
DNS domains.

I suspect this was intentionally left out?

Dani

On Wed, Oct 14, 2015 at 11:39 PM, Miguel Lavalle 
wrote:

> Zhi Chang,
>
> You got all the steps correct. A few clarifications:
>
>
>1. Address 104.130.78.191 is the ip address of my devstack VM. When
>you deploy Designate in devstack, it starts an instance of PowerDNS for
>you. Designate then pushes all its zones and records to that PowerDNS
>instance. When I say "dig my-instance.my-example.org @104.130.78.191"
>I am instructing dig to direct the lookup to the DNS server @
>104.130.78.191: in other words, my PowerDNS instance
>2. For you to be able to execute the same steps in your devstack, you
>need:
>   - The code in patchset https://review.openstack.org/#/c/212213/
>   - The modified nova code in nova/network/neutronv2/api.py that I
>   haven't pushed to Gerrit yet
>   - Configure a few parameters in /etc/neutron/neutron.conf
>   - Migrate the Neutron database, because I added columns to a couple
>   of tables
>
> Let me know if you want to try this in your devstack. If the answer is
> yes, I will let you know when I push the nova change to gerrit. At that
> point, I will provide detailed steps to accomplish point 2 above
>
> Best regards
>
>
> Miguel
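
(For reference, the lookup described in point 1 above boils down to something
like this; the address and the expected answer are the ones quoted in this
thread, and the output shown is indicative only.)

dig my-instance.my-example.org @104.130.78.191 +short
172.24.4.3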
>
>
> On Wed, Oct 14, 2015 at 12:53 AM, Zhi Chang 
> wrote:
>
>> Hi, Miguel
>> Thank you so much for your reply. You are so patient!
>> After reading your reply, I still have some questions to ask you. :-)
>> Below, is my opinion about the
>> http://paste.openstack.org/show/476210/, please read it and tell me
>> whether I was right.
>> (1). Define a DNS domain
>> (2). Update a network's "dns_domain" attribute to the DNS domain
>> which defined in the step1
>> (3). Create a VM in this network. The instance's port will get the
>> instance's hostname as its dns_name attribute
>> (4). Create a floating IP for this VM
>> (5). In Designate, a new A record will be generated. This record
>> links the floating IP and dns_name+domain_name, just like your
>> record: deec921d-b630-4479-8932-c5ec7c530820 | A |
>> my-instance.my-example.org. | 172.24.4.3
>> (6). I don't understand where the IP address "104.130.78.191" comes
>> from. I think this address is a public DNS, just like 8.8.8.8. Is that
>> right?
>> (7). I can dig "my-instance.my-example.org." via a public DNS, and the
>> result is the floating IP.
>>
>> Is my understanding right?
>>
>> Hope For Your Reply.
>> Thanks
>> Zhi Chang
>>
>> -- Original --
>> *From: * "Miguel Lavalle";
>> *Date: * Wed, Oct 14, 2015 11:22 AM
>> *To: * "OpenStack Development Mailing List (not for usage questions)"<
>> openstack-dev@lists.openstack.org>;
>> *Subject: * Re: [openstack-dev] [Neutron][dns]What the meaning
>> of"dns_assignment" and "dns_name"?
>>
>> Zhi Chang,
>>
>> Thank you for your questions. We are in the process of integrating
>> Neutron and Nova with an external DNS service, using Designate as the
>> reference implementation. This integration is being achieved in 3 steps.
>> What you are seeing is the result of only the first one. These steps are:
>>
>> 1) Internal DNS integration in Neutron, which merged recently:
>> https://review.openstack.org/#/c/200952/. As you may know, Neutron has
>> an internal DHCP / DNS service based on dnsmasq for each virtual network
>> that you create. Previously, whenever you created a port on a given
>> network, your port would get a default host name in dnsmasq of the form
>> 'host-xx-xx-xx-xx.openstacklocal.', where xx-xx-xx-xx came from the port's
>> fixed ip address "xx.xx.xx.xx" and "openstacklocal" is the default domain
>> used by Neutron. This name was generated by the dhcp agent. In the above
>> mentioned patchset, we are moving the generation of these dns names to the
>> Neutron server, with the intent of allowing the user to specify it. In
>> order to do that, you need to enable it by defining in neutron.conf the
>> 'dns_domain' parameter with a value different to the default
>> 'openstacklocal'. Once you do that, you can create or update a port and
>> assign a value to its 'dns_name' attribute. Why is this useful? Please read
>> on.
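
(A purely illustrative sketch of what step 1 enables, not commands from this
thread; it assumes the client grows a --dns-name option as this work lands,
and the names/IDs are made up.)

# /etc/neutron/neutron.conf
[DEFAULT]
dns_domain = my-example.org.

neutron port-create my-net --dns-name my-instance
neutron port-show <port-id>   # dns_name / dns_assignment should reflect the value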
>>
>> 2) External DNS integration in Neutron. The patchset is being worked now:
>> https://review.openstack.org/#/c/212213/. The functionality implemented
>> here allows Neutron to publish the dns_name associated with a floating ip
>> under a domain in an external dns service. We are using Designate as the
>> reference implementation, but the idea is that in the future other DNS
>> services can be integrated. Where does the dns name and domain of the
>> floating ip come from? It can come from 2 sources. Source number 1 is 

Re: [openstack-dev] I love csv output in openstackclient, please never removed it

2015-10-03 Thread Daniel Comnea
Thanks for sharing, useful indeed.



On Fri, Oct 2, 2015 at 8:51 PM, Ricardo Carrillo Cruz <
ricardo.carrillo.c...@gmail.com> wrote:

> Erm, yeah, I hear your pain with processing output with awk.
>
> Having the option to output CSV is cool :-).
> Thanks for sharing.
>
> Cheers
>
> 2015-10-02 21:49 GMT+02:00 Thomas Goirand :
>
>> Hi,
>>
>> I saw the csv output format of openstackclient going and coming back.
>> Please leave it in, it's super useful, especially when you combine it
>> with "q-text-as-data". Just try to apt-get install q-text-as-data" and
>> try by yourself:
>>
>> openstack endpoint list --long -f csv | \
>> q -d , -H 'SELECT ID FROM - WHERE `Service Name`="cinder"'
>>
>> This is so much better than any awk hacks to get IDs... :)
>> I just wanted to share, hoping it could be useful to someone.
>>
>> Cheers,
>>
>> Thomas Goirand (zigo)
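
(Along the same lines, and only as a hedged sketch using the two columns shown
in the example above, the CSV output also lends itself to quick aggregates:)

openstack endpoint list --long -f csv | \
q -d , -H 'SELECT `Service Name`, COUNT(*) FROM - GROUP BY `Service Name`'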
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][horizon] Nice new Network Topology panel in Horizon

2015-09-25 Thread Daniel Comnea
Great job Henry!

On Fri, Sep 25, 2015 at 6:47 PM, Henry Gessau  wrote:

> It has been about three years in the making but now it is finally here.
> A screenshot doesn't do it justice, so here is a short video overview:
> https://youtu.be/PxFd-lJV0e4
>
> Isn't that neat? I am sure you can see that it is a great improvement,
> especially for larger topologies.
>
> This new view will be part of the Liberty release of Horizon. I encourage
> you to
> take a look at it with your own network topologies, play around with it,
> and
> provide feedback. Please stop by the #openstack-horizon IRC channel if
> there are
> issues you would like addressed.
>
> Thanks to the folks who made this happen.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Neutron] Allowing DNS suffix to be set per subnet (at least per tenant)

2015-09-04 Thread Daniel Comnea
Kevin,

Am I right in saying that the merge above was packaged into Liberty?

Any chance of it being ported to Juno?


Cheers,
Dani



On Fri, Sep 4, 2015 at 12:21 AM, Kevin Benton  wrote:

> Support for that blueprint already merged[1] so it's a little late to
> change it to per-subnet. If that is too fine-grained for your use-case, I
> would file an RFE bug[2] to allow it to be set at the subnet level.
>
>
> 1. https://review.openstack.org/#/c/200952/
> 2.
> http://docs.openstack.org/developer/neutron/policies/blueprints.html#rfe-submission-guidelines
>
> On Thu, Sep 3, 2015 at 1:07 PM, Maish Saidel-Keesing 
> wrote:
>
>> On 09/03/15 20:51, Gal Sagie wrote:
>>
>> I am not sure if this address what you need specifically, but it would be
>> worth checking these
>> two approved liberty specs:
>>
>> 1)
>> https://github.com/openstack/neutron-specs/blob/master/specs/liberty/internal-dns-resolution.rst
>> 2)
>> https://github.com/openstack/neutron-specs/blob/master/specs/liberty/external-dns-resolution.rst
>>
>> Thanks Gal,
>>
>> So I see that from the bp [1] the fqdn will be configurable for each and
>> every port?
>>
>> I think that this does open up a number of interesting possibilities, but
>> I would also think that it would be sufficient to do this on a subnet level?
>>
>> We do already have the option of setting nameservers per subnet - I
>> assume the data model is already implemented - which is interesting  -
>> because I don't see that as part of the information that is sent by dnsmasq
>> so it must be coming from neutron somewhere.
>>
>> The domain suffix - definitely is handled by dnsmasq.
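
(As a hedged illustration of the existing per-subnet nameserver option
mentioned above; the network name, CIDR and DNS servers are made up:)

neutron subnet-create my-net 10.11.22.0/24 --name my-subnet \
  --dns-nameserver 8.8.8.8 --dns-nameserver 8.8.4.4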
>>
>>
>>
>> On Thu, Sep 3, 2015 at 8:37 PM, Steve Wormley 
>> wrote:
>>
>>> As far as I am aware it is not presently built-in to Openstack. You'll
>>> need to add a dnsmasq_config_file option to your dhcp agent configurations
>>> and then populate the file with:
>>> domain=DOMAIN_NAME,CIDR for each network
>>> i.e.
>>> domain=example.com,10.11.22.0/24
>>> ...
>>>
>>> -Steve
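
(A minimal sketch of the workaround Steve describes; file paths and domains
are illustrative, and dnsmasq_config_file is the dhcp agent option he
mentions.)

# /etc/neutron/dhcp_agent.ini
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq-neutron.conf
domain=example.com,10.11.22.0/24
domain=other.example.org,10.11.33.0/24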
>>>
>>>
>>> On Thu, Sep 3, 2015 at 1:04 AM, Maish Saidel-Keesing <
>>> mais...@maishsk.com> wrote:
>>>
 Hello all (cross-posting to openstack-operators as well)

 Today the setting of the dns suffix that is provided to the instance is
 passed through dhcp_agent.

 There is the option of setting different DNS servers per subnet (and
 therefore per tenant), but the domain suffix is something that stays the
 same throughout the whole system.

 I see that this is not a current neutron feature.

 Is this on the roadmap? Are there ways to achieve this today? If so I
 would be very interested in hearing how.

 Thanks
 --
 Best Regards,
 Maish Saidel-Keesing


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best Regards ,
>>
>> The G.
>>
>>
>> --
>> Best Regards,
>> Maish Saidel-Keesing
>>
>> ___
>> OpenStack-operators mailing list
>> openstack-operat...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
>
> --
> Kevin Benton
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Liberty-3 BPs and gerrit topics

2015-08-12 Thread Daniel Comnea
Kyle,

Is the dashboard available only to a limited set of users? Curious by nature,
I tried to access it and it said "Session expired, please login again";
however, I thought I didn't have to log in to Gerrit. Am I missing something?


Cheers,
Dani

On Tue, Aug 11, 2015 at 2:44 PM, Kyle Mestery mest...@mestery.com wrote:

 Folks:

 To make reviewing all approved work for Liberty-3 in Neutron easier, I've
 created a handy dandy gerrit dashboard [1]. What will make this even more
 useful is if everyone makes sure to set their topics to something uniform
 from their approved LP BP found here [2]. The gerrit dashboard includes all
 Essential, High, and Medium priority BPs from that link. If everyone who
 has patches could make sure their gerrit topics for the patches are synced
 to what is in the LP BP, that will help as people use the dashboard to
 review in the final weeks before FF.

 Thanks!
 Kyle

 [1] https://goo.gl/x9bO7i
 [2] https://launchpad.net/neutron/+milestone/liberty-3
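
(For anyone unsure how to line the gerrit topic up with the LP BP, a hedged
example using git-review's topic flag; the blueprint name is made up, and the
bp/ prefix is just the usual convention:)

git review -t bp/my-approved-blueprint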

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-02 Thread Daniel Comnea
From an operator's point of view I'd love to see less technology proliferation
in OpenStack; if you wear the developer hat, please don't be selfish, take
the others into account :)

ZK is a robust technology, but it is a beast like Rabbit; there is a lot to
massage, and across 2 data centers ZK is not very efficient.


On Sat, Aug 1, 2015 at 4:27 AM, Joshua Harlow harlo...@outlook.com wrote:

 Monty Taylor wrote:

 On 08/01/2015 03:40 AM, Mike Perez wrote:

  On Fri, Jul 31, 2015 at 8:56 AM, Joshua Harlow harlo...@outlook.com
 wrote:

 ...random thought here, skip as needed... in all honesty orchestration
 solutions like mesos
 (http://mesos.apache.org/assets/img/documentation/architecture3.jpg),
 map-reduce solutions like hadoop, stream processing systems like apache
 storm (...), are already using zookeeper and I'm not saying we should
 just
 use it cause they are, but the likelihood that they just picked it for
 no
 reason are imho slim.

 I'd really like to see focus cross project. I don't want Ceilometer to
 depend on Zoo Keeper, Cinder to depend on etcd, etc. This is not ideal
 for an operator to have to deploy, learn and maintain each of these
 solutions.

 I think this is difficult when you consider everyone wants options of
 their preferred DLM. If we went this route, we should pick one.

 Regardless, I want to know if we really need a DLM. Does Ceilometer
 really need a DLM? Does Cinder really need a DLM? Can we just use a
 hash ring solution where operators don't even have to know or care
 about deploying a DLM and running multiple instances of Cinder manager
 just works?


 I'd like to take that one step further and say that we should also look
  holistically at the other things that such technologies are often used for
  in distributed systems and see if, in addition to "does Cinder need a
  DLM", we should ask "does Cinder need service discovery" and "does Cinder
  need a distributed KV store" - and does anyone else?

 Adding something like zookeeper or etcd or consul has the potential to
 allow us to design an OpenStack that works better. Adding all of them in
 an ad-hoc and uncoordinated manner is a bit sledgehammery.

 The Java community uses zookeeper a lot
 The container orchestration community seem to all love etcd
 I hear tell that there a bunch of ops people who are in love with consul

 I'd suggest we look at more than lock management.


 Oh I very much agree, but gotta start somewhere :)
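
(Purely as an illustration of what consuming such a DLM looks like from
Python, not something proposed in this thread: the tooz library abstracts
ZooKeeper/etcd/consul behind one API, roughly like this sketch; the backend
URL and the member/lock names are made up.)

from tooz import coordination

# Backend URL and member id are illustrative; tooz speaks ZooKeeper, etcd,
# consul, memcached and more behind the same API.
coordinator = coordination.get_coordinator(
    'zookeeper://127.0.0.1:2181', b'cinder-volume-host-1')
coordinator.start()

lock = coordinator.get_lock(b'volume-12345')
with lock:
    pass  # critical section: only one manager works on this resource at a time

coordinator.stop()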



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-02 Thread Daniel Comnea
I couldn't put it better, nice write up Morgan!! +1

On Sun, Aug 2, 2015 at 10:28 AM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:




  On Aug 2, 2015, at 16:00, Daniel Comnea comnea.d...@gmail.com wrote:
 
  From Operators point of view i'd love to see less technology
 proliferation in OpenStack, if you wear the developer hat please don't be
 selfish, take into account the others :)
 
  ZK is a robust technology but hey is a beast like Rabbit, there is a lot
 to massage and over 2 data centers ZK is not very efficient.
 

  Sure, let's evaluate the more far-reaching benefits of running the new
  service for all openstack deployments. This is not a "hey, neat tech"
  debate, it is a "let's see if this tool solves enough issues that it is
  worth using an 'innovation token' on". I think it is worth it personally,
  but it should be a consistent choice with a strong reason and added value
  beyond a single one-off use case.

 --morgan

 Sent via mobile
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to debug neutron using eclipse pydev?

2015-07-27 Thread Daniel Comnea
+100 on what Sean said

On Mon, Jul 27, 2015 at 9:39 PM, Sean M. Collins s...@coreitpro.com wrote:

 We should have the Wiki page redirect, or link to:

 https://github.com/openstack/neutron/blob/master/TESTING.rst#debugging

  And then update that RST file to add any info we have about
  debugging under IDEs. Generally, I dislike wikis because they go stale
  very quickly and aren't well maintained, compared to files in the code repo
 (hopefully).


 --
 Sean M. Collins

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][security-group] rules for filter mac-addresses

2015-07-15 Thread Daniel Comnea
Can I ask about the use case for that?

What I don't get is how you will know the MAC of a newly created instance
spun up via Heat, so that you can set the MAC-based SG rule at the same time?



On Tue, Jul 14, 2015 at 12:29 PM, yan_xing...@163.com yan_xing...@163.com
wrote:

 Thank you, Kevin.
 I searched for a blueprint about this point on launchpad.net
 and got nothing, so I registered one at:
 https://blueprints.launchpad.net/neutron/+spec/security-group-mac-rule


 --
 Yan Xing'an


 *From:* Kevin Benton blak...@gmail.com
 *Date:* 2015-07-14 18:31
 *To:* OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 *Subject:* Re: [openstack-dev] [neutron][security-group] rules for filter
 mac-addresses
 Unfortunately the security groups API does not have mac-level rules right
 now.

 On Tue, Jul 14, 2015 at 2:17 AM, yan_xing...@163.com yan_xing...@163.com
 wrote:

 Hi, all:

 Here is a requirement: deny/permit incoming packets on VM by mac
 addresses,
 I have tried to find better method than modifying neutron code, but
 failed.
 Any suggesion is grateful. Thank you.

 Yan.

 --
 yan_xing...@163.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]: provider network use case supported?

2015-07-13 Thread Daniel Comnea
Right, thanks a lot Assaf!


On Sun, Jul 12, 2015 at 8:47 PM, Assaf Muller amul...@redhat.com wrote:

 You can mark the network as shared and have it exposed to all of your
 tenants.
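
(Concretely, and only as a sketch built from the net-create command quoted
below, that would mean creating the provider network with the --shared flag:)

neutron net-create --tenant-id f557a3f5303d4e7c9218c5539456eb37 --shared \
  --provider:physical_network=physnet2 --provider:network_type=vlan \
  --provider:segmentation_id=315 ih-lwr-ci-provider-vlan315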

 - Original Message -
  Hi,
 
  I'm running openstack ( IceHouse ) configured for provider networks.
 
  For a tenant i've created the network/ subnet using the below commands
 and
  everything works as expected.
 
  neutron net-create --tenant-id f557a3f5303d4e7c9218c5539456eb37
  --provider:physical_network=physnet2 --provider:network_type=vlan
  --provider:segmentation_id=315 ih-lwr-ci-provider-vlan315

  neutron subnet-create Openstack-External-vlan55 10.82.42.0/24 --name
  Openstack-External-vlan55-subnet --no-gateway --host-route destination=
  0.0.0.0/0,nexthop=10.82.42.1 --allocation-pool
  start=10.82.42.223,end=10.82.42.254
 
  Now my questions/ use cases are:
 
  1. For a 2nd tenant, can i map it to the same subnet created for
 tenant
  1? If the answer is to create same net/ subnet (same segmentation_id
 -
  i.e vlans ) for tenant 2, how will dhcp agent avoid IP duplication.
  2. Is having multiple tenants pointing to same provider network (net/
  subnet / vlan ) is not possible what options do i have? Only to
 create a
  new net/ subnet based on new vlan and allocated to tenant 2?
 
 
 
 
 
  Cheers,
 
  Dani
 
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron]: provider network use case supported?

2015-07-12 Thread Daniel Comnea
Hi,

I'm running openstack (IceHouse) configured for provider networks.

For a tenant i've created the network/ subnet using the below commands and
everything works as expected.

neutron net-create --tenant-id f557a3f5303d4e7c9218c5539456eb37
--provider:physical_network=physnet2 --provider:network_type=vlan
--provider:segmentation_id=315 ih-lwr-ci-provider-vlan315

neutron subnet-create Openstack-External-vlan55 10.82.42.0/24 --name
Openstack-External-vlan55-subnet --no-gateway --host-route destination=
0.0.0.0/0,nexthop=10.82.42.1 --allocation-pool
start=10.82.42.223,end=10.82.42.254

Now my questions/ use cases are:

   1. For a 2nd tenant, can I map it to the same subnet created for tenant
   1? If the answer is to create the same net/subnet (same segmentation_id,
   i.e. VLAN) for tenant 2, how will the dhcp agent avoid IP duplication?
   2. If having multiple tenants pointing to the same provider network (net/
   subnet/VLAN) is not possible, what options do I have? Only to create a
   new net/subnet based on a new VLAN and allocate it to tenant 2?


Cheers,

Dani
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] wrong network for keystone endpoint in 6.1 ?

2015-07-10 Thread Daniel Comnea
I know about the flow, but what I'm questioning is:

the admin endpoint is mapped to the br-mgmt subnet (you do have the HAProxy
config below defined in 6.1; in 6.0 and before there was no HAProxy)

listen keystone-2
  bind 192.168.20.3:35357
  option  httpchk
  option  httplog
  option  httpclose
  server node-17 192.168.20.20:35357   check inter 10s fastinter 2s
downinter 3s rise 3 fall 3
  server node-18 192.168.20.21:35357   check inter 10s fastinter 2s
downinter 3s rise 3 fall 3
  server node-23 192.168.20.26:35357   check inter 10s fastinter 2s
downinter 3s rise 3 fall 3

public endpoint is mapped to br-ex

So with this behavior, are you saying the br-mgmt subnet (which I thought was
only for controller & compute traffic, an isolated network) should be
routable in the same way br-ex is?

Dani

On Thu, Jul 9, 2015 at 11:30 PM, Stanislaw Bogatkin sbogat...@mirantis.com
wrote:

 Hi Daniel,

 the answer is no - actually there is no strong dependency between public and
 internal/admin endpoints. In your case the keystone client asks keystone on
 address 10.52.71.39 (which, I think, was provided by the system
 variable OS_AUTH_URL), authenticates against it, and then keystone gives the
 endpoint list to the client. The client selected the admin endpoint from this
 list (the 192.168.20.3 address) and tried to get the information you asked
 for. It's normal behavior.

 So, in Fuel by default we have 3 different endpoints for keystone - public
 on public VIP, port 5000; internal on management VIP, port 5000; admin on
 management VIP, port 35357.
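
(To make that layout concrete, a sketch of the three endpoints described
above, using the management VIP from the HAProxy snippet earlier in the
thread; the public VIP is left symbolic:)

publicURL   = http://<public VIP>:5000/v2.0      # br-ex
internalURL = http://192.168.20.3:5000/v2.0      # management VIP, br-mgmt
adminURL    = http://192.168.20.3:35357/v2.0     # management VIP, br-mgmt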

 On Thu, Jul 9, 2015 at 4:59 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 Hi,

 I'm running Fuel 6.1 and i've seen an interesting behavior which i think
 match bug [1]

 Basically the adminUrl  publicUrl part of keystone endpoint are
 different

 And the result of that is that you can't run keystone cli - i.e
 create/list tenants etc

 keystone --debug tenant-list
 /usr/local/lib/python2.7/site-packages/keystoneclient/shell.py:65:
 DeprecationWarning: The keystone CLI is deprecated in favor of python-
 openstackclient. For a Python library, continue using python-keys
 toneclient.
   'python-keystoneclient.', DeprecationWarning)
 DEBUG:keystoneclient.auth.identity.v2:Making authentication request to
 http://10.20.71.39:5000/v2.0/tokens
 INFO:requests.packages.urllib3.connectionpool:Starting new HTTP
 connection (1): 10.52.71.39
 DEBUG:requests.packages.urllib3.connectionpool:POST /v2.0/tokens
 HTTP/1.1 200 3709
 DEBUG:keystoneclient.session:REQ: curl -g -i -X GET
 http://192.168.20.3:35357/v2.0/tenants -H User-Agent: python-
 keystoneclient -H Accept: application/json -H X-Auth-Token:
 {SHA1}cc918b89c2dca563edda43e01964b1f1979c552b

 shouldn't adminURL = publicURL = br-ex for keystone?


 Dani


 [1] https://bugs.launchpad.net/fuel/+bug/1441855

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] wrong network for keystone endpoint in 6.1 ?

2015-07-10 Thread Daniel Comnea
Okay Vladimir, thanks for the confirmation!

So then are you happy to stick my sketch proposal (which of course needs
re-wording) into the documentation?

Dani

On Fri, Jul 10, 2015 at 11:31 AM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Daniel

 Yes, if you want to do some administrative stuff you need to have access
  to the management network to be able to work with internal and admin endpoints.

 On Fri, Jul 10, 2015 at 9:58 AM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 I know about the flow but what i'm questioning is:

 admin endpoint is mapped to br-mgmt subnet (you do have the HAproxy as
 below defined in 6.1. In 6.0 and before you had no HAproxy)

 listen keystone-2
   bind 192.168.20.3:35357
   option  httpchk
   option  httplog
   option  httpclose
   server node-17 192.168.20.20:35357   check inter 10s fastinter 2s
 downinter 3s rise 3 fall 3
   server node-18 192.168.20.21:35357   check inter 10s fastinter 2s
 downinter 3s rise 3 fall 3
   server node-23 192.168.20.26:35357   check inter 10s fastinter 2s
 downinter 3s rise 3 fall 3

 public endpoint is mapped to br-ex

 So with this behavior you are saying the bt-mgmt subnet (which i thought
 is only for controller  compute traffic, isolated network) should be
 routable in the same way br-ex is?

 Dani


 On Thu, Jul 9, 2015 at 11:30 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi Daniel,

 answer is no - actually there is no strong dependency between public and
 internal/admin endpoints. In your case keystone client ask keystone on
 address 10.52.71.39 (which, I think, was provided by system
 variable OS_AUTH_URL), auth on it and then keystone give endpoints list to
 client. Client selected admin endpoint from this list (192.168.20.3
 address) and tried to get information you asked. It's a normal behavior.

 So, in Fuel by default we have 3 different endpoints for keystone -
 public on public VIP, port 5000; internal on management VIP, port 5000,
 admin on management VIP, port 35357.

 On Thu, Jul 9, 2015 at 4:59 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 Hi,

 I'm running Fuel 6.1 and i've seen an interesting behavior which i
 think match bug [1]

 Basically the adminUrl  publicUrl part of keystone endpoint are
 different

 And the result of that is that you can't run keystone cli - i.e
 create/list tenants etc

 keystone --debug tenant-list
 /usr/local/lib/python2.7/site-packages/keystoneclient/shell.py:65:
 DeprecationWarning: The keystone CLI is deprecated in favor of python-
 openstackclient. For a Python library, continue using python-keys
 toneclient.
   'python-keystoneclient.', DeprecationWarning)
 DEBUG:keystoneclient.auth.identity.v2:Making authentication request to
 http://10.20.71.39:5000/v2.0/tokens
 INFO:requests.packages.urllib3.connectionpool:Starting new HTTP
 connection (1): 10.52.71.39
 DEBUG:requests.packages.urllib3.connectionpool:POST /v2.0/tokens
 HTTP/1.1 200 3709
 DEBUG:keystoneclient.session:REQ: curl -g -i -X GET
 http://192.168.20.3:35357/v2.0/tenants -H User-Agent: python-
 keystoneclient -H Accept: application/json -H X-Auth-Token:
 {SHA1}cc918b89c2dca563edda43e01964b1f1979c552b

 shouldn't adminURL = publicURL = br-ex for keystone?


 Dani


 [1] https://bugs.launchpad.net/fuel/+bug/1441855


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 35bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] wrong network for keystone endpoint in 6.1 ?

2015-07-09 Thread Daniel Comnea
Hi,

I'm running Fuel 6.1 and i've seen an interesting behavior which i think
match bug [1]

Basically the adminUrl & publicUrl parts of the keystone endpoint are different

And the result of that is that you can't run the keystone CLI - i.e. create/list
tenants etc.

keystone --debug tenant-list
/usr/local/lib/python2.7/site-packages/keystoneclient/shell.py:65:
DeprecationWarning: The keystone CLI is deprecated in favor of python-
openstackclient. For a Python library, continue using python-keys
toneclient.
  'python-keystoneclient.', DeprecationWarning)
DEBUG:keystoneclient.auth.identity.v2:Making authentication request to
http://10.20.71.39:5000/v2.0/tokens
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection
(1): 10.52.71.39
DEBUG:requests.packages.urllib3.connectionpool:POST /v2.0/tokens HTTP/1.1
200 3709
DEBUG:keystoneclient.session:REQ: curl -g -i -X GET
http://192.168.20.3:35357/v2.0/tenants -H User-Agent: python-keystoneclient
-H Accept: application/json -H X-Auth-Token:
{SHA1}cc918b89c2dca563edda43e01964b1f1979c552b

shouldn't adminURL = publicURL = br-ex for keystone?


Dani


[1] https://bugs.launchpad.net/fuel/+bug/1441855
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-operators][neutron[dhcp][dnsmask]: duplicate entries in addn_hosts causing no IP allocation

2015-07-01 Thread Daniel Comnea
Neil, many thanks!!!

Any idea if I can go and only run "apt-get --only-upgrade install
<packagename>", or would that be too crazy?

Cheers,
Dani


On Wed, Jul 1, 2015 at 9:23 AM, Neil Jerram neil.jer...@metaswitch.com
wrote:

 Well, the bug discussion seems to point specifically to this dnsmasq fix:


 http://thekelleys.org.uk/gitweb/?p=dnsmasq.git;a=commit;h=9380ba70d67db6b69f817d8e318de5ba1e990b12

 Neil


 On 01/07/15 07:34, Daniel Comnea wrote:

 Hi,

 sorry for no feedback, i've been doing more and more test and after
 enabled the dnsmasq log i found the error which i'm not longer sure if
 is related to having duplicated entries

 dnsmasq-dhcp[21231]: 0 DHCPRELEASE(tap8ecf66b6-72) 192.168.111.24
 fa:16:3e:72:04:82 unknown lease

 Looking around it seems i'm hitting this bug [1] but not clear from the
 description what was the problem on dnsmasp 2.59 (which comes wiht Fuel
 5.1)

 Any ideas?

 Cheers,
 Dani

 [1] https://bugs.launchpad.net/neutron/+bug/1271344

 On Wed, Jun 10, 2015 at 7:13 AM, Daniel Comnea comnea.d...@gmail.com
 mailto:comnea.d...@gmail.com wrote:

 Thanks a bunch Kevin!

 I'll try this patch and report back.

 Dani


 On Tue, Jun 9, 2015 at 2:50 AM, Kevin Benton blak...@gmail.com
 mailto:blak...@gmail.com wrote:

 Hi Daniel,

 I'm concerned that we are encountered out-of-order port events
 on the DHCP agent side so the delete message is processed before
 the create message. Would you be willing to apply a small patch
 to your dhcp agent to see if it fixes the issue?

 If it does fix the issue, you should see occasional warnings in
 the DHCP agent log that show Received message for port that was
 already deleted. If it doesn't fix the issue, we may be losing
 the delete event entirely. If that's the case, it would be great
 if you can enable debuging on the agent and upload a log of a
 run when it happens.

 Cheers,
 Kevin Benton

 Here is the patch:

 diff --git a/neutron/agent/dhcp_agent.py
 b/neutron/agent/dhcp_agent.py
 index 71c9709..9b9b637 100644
 --- a/neutron/agent/dhcp_agent.py
 +++ b/neutron/agent/dhcp_agent.py
 @@ -71,6 +71,7 @@ class DhcpAgent(manager.Manager):
   self.needs_resync = False
   self.conf = cfg.CONF
   self.cache = NetworkCache()
 +self.deleted_ports = set()
   self.root_helper = config.get_root_helper(self.conf)
   self.dhcp_driver_cls =
 importutils.import_class(self.conf.dhcp_driver)
   ctx = context.get_admin_context_without_session()
 @@ -151,6 +152,7 @@ class DhcpAgent(manager.Manager):
   LOG.info(_('Synchronizing state'))
   pool = eventlet.GreenPool(cfg.CONF.num_sync_threads)
   known_network_ids = set(self.cache.get_network_ids())
 +self.deleted_ports = set()

   try:
   active_networks =
 self.plugin_rpc.get_active_networks_info()
 @@ -302,6 +304,10 @@ class DhcpAgent(manager.Manager):
   @utils.synchronized('dhcp-agent')
   def port_update_end(self, context, payload):
   Handle the port.update.end notification event.
 +if payload['port']['id'] in self.deleted_ports:
 +LOG.warning(_(Received message for port that was 
 +  already deleted: %s),
 payload['port']['id'])
 +return
   updated_port = dhcp.DictModel(payload['port'])
   network =
 self.cache.get_network_by_id(updated_port.network_id)
   if network:
 @@ -315,6 +321,7 @@ class DhcpAgent(manager.Manager):
   def port_delete_end(self, context, payload):
   Handle the port.delete.end notification event.
   port = self.cache.get_port_by_id(payload['port_id'])
 +self.deleted_ports.add(payload['port_id'])
   if port:
   network =
 self.cache.get_network_by_id(port.network_id)
   self.cache.remove_port(port)








 On Mon, Jun 8, 2015 at 8:26 AM, Daniel Comnea
 comnea.d...@gmail.com mailto:comnea.d...@gmail.com wrote:

 Any help, ideas please?

 Thx,
 Dani

 On Mon, Jun 8, 2015 at 9:25 AM, Daniel Comnea
 comnea.d...@gmail.com mailto:comnea.d...@gmail.com wrote:

 + Operators

 Much thanks in advance,
 Dani




 On Sun, Jun 7, 2015 at 6:31 PM, Daniel Comnea
 comnea.d...@gmail.com mailto:comnea.d...@gmail.com
 wrote:

 Hi all,

 I'm running IceHouse

Re: [openstack-dev] [Openstack-operators] [openstack-operators][neutron[dhcp][dnsmask]: duplicate entries in addn_hosts causing no IP allocation

2015-07-01 Thread Daniel Comnea
Indeed Neil, that is the case.
Pasting some info below in case someone faces the same issue.

root@node-9:~# apt-cache madison dnsmasq-base
dnsmasq-base | 2.59-4ubuntu0.1 | http://1.1.1.1/ubuntu/fuelweb/x86_64/
precise/main amd64 Packages
root@node-9:~# dpkg -I dnsmasq-base
dpkg-deb: error: failed to read archive `dnsmasq-base': No such file or
directory
root@node-94:~# dpkg -l | grep dnsmas
ii  dnsmasq-base
2.59-4ubuntu0.1 Small caching DNS proxy
and DHCP/TFTP server
ii  dnsmasq-utils
2.59-4ubuntu0.1 Utilities for
manipulating DHCP leases
root@node-9:~# apt-cache showpkg dnsmasq-base
Package: dnsmasq-base
Versions:
2.59-4ubuntu0.1
(/var/lib/apt/lists/1.1.1.1:8080_ubuntu_fuelweb_x86%5f64_dists_precise_main_binary-amd64_Packages)
(/var/lib/dpkg/status)
 Description Language:
 File: /var/lib/apt/lists/211.210.0.9:8080
_ubuntu_fuelweb_x86%5f64_dists_precise_main_binary-amd64_Packages
  MD5: 1f9c3f0c557ca377bcc6c659e4694437


Reverse Depends:
  neutron-dhcp-agent,dnsmasq-base
  nova-network,dnsmasq-base
  libvirt-bin,dnsmasq-base 2.46-1
Dependencies:
2.59-4ubuntu0.1 - libc6 (2 2.15) libdbus-1-3 (2 1.1.1) libidn11 (2 1.13)
libnetfilter-conntrack3 (2 0.9.1) dnsmasq (3 2.59-4ubuntu0) dnsmasq:i386 (3
2.59-4ubuntu0) dnsmasq (3 2.59-4ubuntu0) dnsmasq:i386 (3 2.59-4ubuntu0)
Provides:
2.59-4ubuntu0.1 -
Reverse Provides:
root@node-9:~#


I will keep you updated on how I solve it (hopefully it will help others).


Dani


On Wed, Jul 1, 2015 at 12:01 PM, Neil Jerram neil.jer...@metaswitch.com
wrote:

 Hi Dani,

 I think that would be fine, if it worked.  The packagename that you want
 is dnsmasq-base, I believe.

 However, I would not expect it to work, on a Fuel 5.1 node, because I
 believe such nodes are set up to use the Fuel master as their package
 repository, and I don't think that a Fuel 5.1 master will have any newer
 dnsmasq packages that what you already have installed.

 I hope that makes sense - happy to explain further if not.

 Neil


 On 01/07/15 10:24, Daniel Comnea wrote:

 Neil, much thanks !!!

 Any idea if i can go and only run apt-get --only-upgrade install
 packagename  or that will be too crazy?

 Cheers,
 Dani


 On Wed, Jul 1, 2015 at 9:23 AM, Neil Jerram neil.jer...@metaswitch.com
 mailto:neil.jer...@metaswitch.com wrote:

 Well, the bug discussion seems to point specifically to this dnsmasq
 fix:


 http://thekelleys.org.uk/gitweb/?p=dnsmasq.git;a=commit;h=9380ba70d67db6b69f817d8e318de5ba1e990b12

  Neil


 On 01/07/15 07:34, Daniel Comnea wrote:

 Hi,

 sorry for no feedback, i've been doing more and more test and
 after
 enabled the dnsmasq log i found the error which i'm not longer
 sure if
 is related to having duplicated entries

 dnsmasq-dhcp[21231]: 0 DHCPRELEASE(tap8ecf66b6-72) 192.168.111.24
 fa:16:3e:72:04:82 unknown lease

 Looking around it seems i'm hitting this bug [1] but not clear
 from the
 description what was the problem on dnsmasp 2.59 (which comes
 wiht Fuel 5.1)

 Any ideas?

 Cheers,
 Dani

 [1] https://bugs.launchpad.net/neutron/+bug/1271344

 On Wed, Jun 10, 2015 at 7:13 AM, Daniel Comnea
 comnea.d...@gmail.com mailto:comnea.d...@gmail.com
 mailto:comnea.d...@gmail.com mailto:comnea.d...@gmail.com
 wrote:

  Thanks a bunch Kevin!

  I'll try this patch and report back.

  Dani


  On Tue, Jun 9, 2015 at 2:50 AM, Kevin Benton
 blak...@gmail.com mailto:blak...@gmail.com
  mailto:blak...@gmail.com mailto:blak...@gmail.com
 wrote:

  Hi Daniel,

  I'm concerned that we are encountered out-of-order port
 events
  on the DHCP agent side so the delete message is
 processed before
  the create message. Would you be willing to apply a
 small patch
  to your dhcp agent to see if it fixes the issue?

  If it does fix the issue, you should see occasional
 warnings in
  the DHCP agent log that show Received message for port
 that was
  already deleted. If it doesn't fix the issue, we may
 be losing
  the delete event entirely. If that's the case, it would
 be great
  if you can enable debuging on the agent and upload a
 log of a
  run when it happens.

  Cheers,
  Kevin Benton

  Here is the patch:

  diff --git a/neutron/agent/dhcp_agent.py
  b/neutron/agent/dhcp_agent.py
  index 71c9709..9b9b637 100644
  --- a/neutron/agent/dhcp_agent.py
  +++ b/neutron/agent

Re: [openstack-dev] [Openstack-operators] [openstack-operators][neutron[dhcp][dnsmask]: duplicate entries in addn_hosts causing no IP allocation

2015-07-01 Thread Daniel Comnea
Hi,

Sorry for the lack of feedback; I've been doing more and more tests, and after
enabling the dnsmasq log I found the error below, which I'm no longer sure is
related to having duplicated entries:

dnsmasq-dhcp[21231]: 0 DHCPRELEASE(tap8ecf66b6-72) 192.168.111.24
fa:16:3e:72:04:82 unknown lease
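
(Side note: dnsmasq logging like the line above can be enabled by handing the
dhcp agent an extra dnsmasq config file via dnsmasq_config_file in
dhcp_agent.ini; a minimal sketch with illustrative paths, not necessarily how
it was captured here:)

# /etc/neutron/dnsmasq-neutron.conf
log-dhcp
log-facility=/var/log/neutron/dnsmasq.log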

Looking around, it seems I'm hitting this bug [1], but it's not clear from the
description what the problem was in dnsmasq 2.59 (which comes with Fuel 5.1).

Any ideas?

Cheers,
Dani

[1] https://bugs.launchpad.net/neutron/+bug/1271344

On Wed, Jun 10, 2015 at 7:13 AM, Daniel Comnea comnea.d...@gmail.com
wrote:

 Thanks a bunch Kevin!

 I'll try this patch and report back.

 Dani


 On Tue, Jun 9, 2015 at 2:50 AM, Kevin Benton blak...@gmail.com wrote:

 Hi Daniel,

 I'm concerned that we are encountered out-of-order port events on the
 DHCP agent side so the delete message is processed before the create
 message. Would you be willing to apply a small patch to your dhcp agent to
 see if it fixes the issue?

 If it does fix the issue, you should see occasional warnings in the DHCP
 agent log that show Received message for port that was already deleted.
 If it doesn't fix the issue, we may be losing the delete event entirely. If
 that's the case, it would be great if you can enable debuging on the agent
 and upload a log of a run when it happens.

 Cheers,
 Kevin Benton

 Here is the patch:

  diff --git a/neutron/agent/dhcp_agent.py b/neutron/agent/dhcp_agent.py
  index 71c9709..9b9b637 100644
  --- a/neutron/agent/dhcp_agent.py
  +++ b/neutron/agent/dhcp_agent.py
  @@ -71,6 +71,7 @@ class DhcpAgent(manager.Manager):
           self.needs_resync = False
           self.conf = cfg.CONF
           self.cache = NetworkCache()
  +        self.deleted_ports = set()
           self.root_helper = config.get_root_helper(self.conf)
           self.dhcp_driver_cls = importutils.import_class(self.conf.dhcp_driver)
           ctx = context.get_admin_context_without_session()
  @@ -151,6 +152,7 @@ class DhcpAgent(manager.Manager):
           LOG.info(_('Synchronizing state'))
           pool = eventlet.GreenPool(cfg.CONF.num_sync_threads)
           known_network_ids = set(self.cache.get_network_ids())
  +        self.deleted_ports = set()
  
           try:
               active_networks = self.plugin_rpc.get_active_networks_info()
  @@ -302,6 +304,10 @@ class DhcpAgent(manager.Manager):
       @utils.synchronized('dhcp-agent')
       def port_update_end(self, context, payload):
           """Handle the port.update.end notification event."""
  +        if payload['port']['id'] in self.deleted_ports:
  +            LOG.warning(_("Received message for port that was "
  +                          "already deleted: %s"), payload['port']['id'])
  +            return
           updated_port = dhcp.DictModel(payload['port'])
           network = self.cache.get_network_by_id(updated_port.network_id)
           if network:
  @@ -315,6 +321,7 @@ class DhcpAgent(manager.Manager):
       def port_delete_end(self, context, payload):
           """Handle the port.delete.end notification event."""
           port = self.cache.get_port_by_id(payload['port_id'])
  +        self.deleted_ports.add(payload['port_id'])
           if port:
               network = self.cache.get_network_by_id(port.network_id)
               self.cache.remove_port(port)








 On Mon, Jun 8, 2015 at 8:26 AM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 Any help, ideas please?

 Thx,
 Dani

 On Mon, Jun 8, 2015 at 9:25 AM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 + Operators

 Much thanks in advance,
 Dani




 On Sun, Jun 7, 2015 at 6:31 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 Hi all,

 I'm running IceHouse (build using Fuel 5.1.1) on Ubuntu where dnsmask
 version 2.59-4.
 I have a very basic network layout where i have a private net which
 has 2 subnets

  2fb7de9d-d6df-481f-acca-2f7860cffa60 | private-net
| e79c3477-d3e5-471c-a728-8d881cf31bee
 192.168.110.0/24 |
 |
 | |
 f48c3223-8507-455c-9c13-8b727ea5f441 192.168.111.0/24 |

 and i'm creating VMs via HEAT.
 What is happening is that sometimes i get duplicated entries in [1]
 and because of that the VM which was spun up doesn't get an ip.
 The Dnsmask processes are running okay [2] and i can't see anything
 special/ wrong in it.

 Any idea why this is happening? Or are you aware of any bugs around
 this area? Do you see a problems with having 2 subnets mapped to 1
 private-net?



 Thanks,
 Dani

 [1]
 /var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
 [2]

 nobody5664 1  0 Jun02 ?00:00:08 dnsmasq --no-hosts
 --no-resolv --strict-order --bind-interfaces --interface=tapc9164734-0c
 --except-interface=lo
 --pid-file=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/pid
 --dhcp-hostsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/host
 --addn-hosts=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
 --dhcp-optsfile=/var

Re: [openstack-dev] [kolla][release] Announcing Liberty-1 release of Kolla

2015-06-30 Thread Daniel Comnea
Ian,

A while ago it was discussed on the operators' mailing list between Kevin,
Steve & co:

http://lists.openstack.org/pipermail/openstack-operators/2015-June/007267.html



On Tue, Jun 30, 2015 at 8:28 PM, Ian Cordasco ian.corda...@rackspace.com
wrote:



 On 6/29/15, 23:59, Steven Dake (stdake) std...@cisco.com wrote:

The Kolla community is pleased to announce the release of the
Kolla Liberty 1 milestone.  This release fixes 56 bugs
and implements 14 blueprints!

Our community developed the following notable features:

* A start at source-based containers

 So how does this now compare to the stackforge/os-ansible-deployment (soon
 to be openstack/openstack-ansible) project?

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Debian already using Python 3.5: please gate on that

2015-06-18 Thread Daniel Comnea
On Thu, Jun 18, 2015 at 7:17 PM, Brian Curtin br...@python.org wrote:



 On Thursday, June 18, 2015, Doug Hellmann d...@doughellmann.com wrote:

 Excerpts from Thomas Goirand's message of 2015-06-18 15:44:17 +0200:
  Hi!
 
  tl;dr: skip this message, the subject line is enough! :)
 
  As per the subject line, we already have Python 3.5 in Debian (AFAICT,
  from Debian Experimental, in version beta 2). As a consequence, we're
  already running (unit) tests using Python 3.5. Some have failures: I
  could see issues in ceilometerclient, keystoneclient, glanceclient and
  more (yes, I am planning to report these issues, and we already started
  doing so). As Python 3.4 is still the default interpreter for
  /usr/bin/python3, that's currently fine, but it soon wont be.
 
  All this to say: if you are currently gating on Python 3, please start
  slowly adding support for 3.5, as we're planning to switch to that for
  Debian 9 (aka Stretch). I believe Ubuntu will follow (as the Python core
  packages are imported from Debian).

 3.5 is still in beta. What's the schedule for an official release from
 the python-dev team?


 3.4 Final is planed for September 13

small typo here - 3.5 as per the link you provided :)

  https://www.python.org/dev/peps/pep-0478/

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][neutron[dhcp][dnsmask]: duplicate entries in addn_hosts causing no IP allocation

2015-06-08 Thread Daniel Comnea
+ Operators

Much thanks in advance,
Dani



On Sun, Jun 7, 2015 at 6:31 PM, Daniel Comnea comnea.d...@gmail.com wrote:

 Hi all,

 I'm running IceHouse (build using Fuel 5.1.1) on Ubuntu where dnsmask
 version 2.59-4.
 I have a very basic network layout where i have a private net which has 2
 subnets

  2fb7de9d-d6df-481f-acca-2f7860cffa60 | private-net
| e79c3477-d3e5-471c-a728-8d881cf31bee 192.168.110.0/24 |
 |
 | |
 f48c3223-8507-455c-9c13-8b727ea5f441 192.168.111.0/24 |

 and i'm creating VMs via HEAT.
 What is happening is that sometimes i get duplicated entries in [1] and
 because of that the VM which was spun up doesn't get an ip.
 The Dnsmask processes are running okay [2] and i can't see anything
 special/ wrong in it.

 Any idea why this is happening? Or are you aware of any bugs around this
 area? Do you see a problems with having 2 subnets mapped to 1 private-net?



 Thanks,
 Dani

 [1] /var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
 [2]

 nobody5664 1  0 Jun02 ?00:00:08 dnsmasq --no-hosts
 --no-resolv --strict-order --bind-interfaces --interface=tapc9164734-0c
 --except-interface=lo
 --pid-file=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/pid
 --dhcp-hostsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/host
 --addn-hosts=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
 --dhcp-optsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/opts
 --leasefile-ro --dhcp-authoritative
 --dhcp-range=set:tag0,192.168.110.0,static,86400s
 --dhcp-range=set:tag1,192.168.111.0,static,86400s --dhcp-lease-max=512
 --conf-file= --server=10.0.0.31 --server=10.0.0.32 --domain=openstacklocal


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][neutron[dhcp][dnsmask]: duplicate entries in addn_hosts causing no IP allocation

2015-06-08 Thread Daniel Comnea
Any help, ideas please?

Thx,
Dani

On Mon, Jun 8, 2015 at 9:25 AM, Daniel Comnea comnea.d...@gmail.com wrote:

 + Operators

 Much thanks in advance,
 Dani




 On Sun, Jun 7, 2015 at 6:31 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 Hi all,

 I'm running IceHouse (built using Fuel 5.1.1) on Ubuntu, where the dnsmasq
 version is 2.59-4.
 I have a very basic network layout where I have a private net which has 2
 subnets

 | 2fb7de9d-d6df-481f-acca-2f7860cffa60 | private-net | e79c3477-d3e5-471c-a728-8d881cf31bee 192.168.110.0/24 |
 |                                      |             | f48c3223-8507-455c-9c13-8b727ea5f441 192.168.111.0/24 |

 and I'm creating VMs via HEAT.
 What is happening is that sometimes I get duplicated entries in [1] and
 because of that the VM which was spun up doesn't get an IP.
 The dnsmasq processes are running okay [2] and I can't see anything
 special/wrong in them.

 Any idea why this is happening? Or are you aware of any bugs around this
 area? Do you see a problem with having 2 subnets mapped to 1 private-net?



 Thanks,
 Dani

 [1] /var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
 [2]

 nobody5664 1  0 Jun02 ?00:00:08 dnsmasq --no-hosts
 --no-resolv --strict-order --bind-interfaces --interface=tapc9164734-0c
 --except-interface=lo
 --pid-file=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/pid
 --dhcp-hostsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/host
 --addn-hosts=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
 --dhcp-optsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/opts
 --leasefile-ro --dhcp-authoritative
 --dhcp-range=set:tag0,192.168.110.0,static,86400s
 --dhcp-range=set:tag1,192.168.111.0,static,86400s --dhcp-lease-max=512
 --conf-file= --server=10.0.0.31 --server=10.0.0.32 --domain=openstacklocal



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][dhcp][dnsmasq]: duplicate entries in addn_hosts causing no IP allocation

2015-06-07 Thread Daniel Comnea
Hi all,

I'm running IceHouse (built using Fuel 5.1.1) on Ubuntu, where the dnsmasq
version is 2.59-4.
I have a very basic network layout where I have a private net which has 2
subnets

| 2fb7de9d-d6df-481f-acca-2f7860cffa60 | private-net | e79c3477-d3e5-471c-a728-8d881cf31bee 192.168.110.0/24 |
|                                      |             | f48c3223-8507-455c-9c13-8b727ea5f441 192.168.111.0/24 |

and I'm creating VMs via HEAT.
What is happening is that sometimes I get duplicated entries in [1] and
because of that the VM which was spun up doesn't get an IP.
The dnsmasq processes are running okay [2] and I can't see anything
special/wrong in them.

Any idea why this is happening? Or are you aware of any bugs around this
area? Do you see a problem with having 2 subnets mapped to 1 private-net?



Thanks,
Dani

[1] /var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
[2]

nobody5664 1  0 Jun02 ?00:00:08 dnsmasq --no-hosts
--no-resolv --strict-order --bind-interfaces --interface=tapc9164734-0c
--except-interface=lo
--pid-file=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/pid
--dhcp-hostsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/host
--addn-hosts=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
--dhcp-optsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/opts
--leasefile-ro --dhcp-authoritative
--dhcp-range=set:tag0,192.168.110.0,static,86400s
--dhcp-range=set:tag1,192.168.111.0,static,86400s --dhcp-lease-max=512
--conf-file= --server=10.0.0.31 --server=10.0.0.32 --domain=openstacklocal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-06-05 Thread Daniel Comnea
+1

On Fri, Jun 5, 2015 at 6:38 PM, Vilobh Meshram 
vilobhmeshram.openst...@gmail.com wrote:

 +1

 On Thu, May 14, 2015 at 3:52 AM, John Garbutt j...@johngarbutt.com
 wrote:

 On 12 May 2015 at 20:33, Sean Dague s...@dague.net wrote:
  On 05/12/2015 01:12 PM, Jeremy Stanley wrote:
  On 2015-05-12 10:04:11 -0700 (-0700), Clint Byrum wrote:
  It's a nice up side. However, as others have pointed out, it's only
  capable of displaying the most basic pieces of the architecture.
 
  For higher level views with more components, I don't think ASCII art
  can provide enough bandwidth to help as much as a vector diagram.
 
  Of course, simply a reminder that just because you have one or two
  complex diagram callouts in a document doesn't mean it's necessary
  to also go back and replace your simpler ASCII art diagrams with
  unintelligible (without rendering) SVG or Postscript or whatever.
  Doing so pointlessly alienates at least some fraction of readers.
 
  Sure, it's all about trade offs.
 
  But I believe that statement implicitly assumes that ascii art diagrams
  do not alienate some fraction of readers. And I think that's a bad
  assumption.
 
  If we all feel alienated every time anyone does anything that's not
  exactly the way we would have done it, it's time to give up and pack it
  in. :) This thread specifically mentioned source based image formats
  that were internationally adopted open standards (w3c SVG, ISO ODG) that
  have free software editors that exist in Windows, Mac, and Linux
  (Inkscape and Open/LibreOffice).

 Some great points make here.

 Lets try decide something, and move forward here.

 Key requirements seem to be:
 * we need something that gives us readable diagrams
 * if its not easy to edit, it will go stale
 * ideally needs to be source based, so it lives happily inside git
 * needs to integrate into our sphinx pipeline
 * ideally have an opensource editor for that format (import and
 export), for most platforms

 ascii art fails on many of these, but it's always a trade off.

 Possible way forward:
 * lets avoid merging large hard to edit bitmap style images
 * nova-core reviewers can apply their judgement on merging source based
 formats
 * however it *must* render correctly in the generated html (see result
 of docs CI job)

 Trying out SVG, and possibly blockdiag, seem like the front runners.
 I don't think we will get consensus without trying them, so lets do that.

 Will that approach work?

 Thanks,
 John

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron Core Reviewer Team

2015-05-29 Thread Daniel Comnea
My vote doesn't count, but as an operator I'd give Assaf +100 for all the
great knowledge he has shared through his blog - an incredible source of
information for everyone who wants to understand, in a simple way, how Neutron
does things.

Dani

On Fri, May 29, 2015 at 4:13 AM, Akihiro Motoki amot...@gmail.com wrote:

 +1

 2015-05-28 22:42 GMT+09:00 Kyle Mestery mest...@mestery.com:

 Folks, I'd like to propose Assaf Muller to be a member of the Neutron
 core reviewer team. Assaf has been a long time contributor in Neutron, and
 he's also recently become my testing Lieutenant. His influence and
 knowledge in testing will be critical to the team in Liberty and beyond. In
 addition to that, he's done some fabulous work for Neutron around L3 HA and
 DVR. Assaf has become a trusted member of our community. His review stats
 place him in the pack with the rest of the Neutron core reviewers.

 I'd also like to take this time to remind everyone that reviewing code is
 a responsibility, in Neutron the same as other projects. And core reviewers
 are especially beholden to this responsibility. I'd also like to point out
 that +1/-1 reviews are very useful, and I encourage everyone to continue
 reviewing code even if you are not a core reviewer.

 Existing Neutron cores, please vote +1/-1 for the addition of Assaf to
 the core reviewer team.

 Thanks!
 Kyle

 [1] http://stackalytics.com/report/contribution/neutron-group/180

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][nova]: anti-affinity policy via heat in IceHouse?

2015-05-25 Thread Daniel Comnea
Thanks a bunch Gents!

Dani

On Mon, May 25, 2015 at 12:30 PM, Dimitri Mazmanov 
dimitri.mazma...@ericsson.com wrote:

  Here’s one way:

 heat_template_version: 2013-05-23
 parameters:
   image:
     type: string
     default: TestVM
   flavor:
     type: string
     default: m1.micro
   network:
     type: string
     default: cirros_net2

 resources:
   serv_1:
     type: OS::Nova::Server
     properties:
       image: { get_param: image }
       flavor: { get_param: flavor }
       networks:
         - network: { get_param: network }
       scheduler_hints: { different_host: { get_resource: serv_2 } }
   serv_2:
     type: OS::Nova::Server
     properties:
       image: { get_param: image }
       flavor: { get_param: flavor }
       networks:
         - network: { get_param: network }
       scheduler_hints: { different_host: { get_resource: serv_1 } }

 Note: in order for the above-mentioned scheduler hints to work, the
 following scheduler filters should be enabled for the nova scheduler:
   SameHostFilter and
   DifferentHostFilter

  There’s another way of doing it using OS::Nova::ServerGroup, but it’s
 available only since Juno.
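
  A minimal sketch of that ServerGroup variant might look like the one below
  (resource and property names are assumed from the Juno-era Heat resource
  reference, so please verify them before relying on this):

  resources:
    anti_affinity_group:
      type: OS::Nova::ServerGroup
      properties:
        name: anti-affinity-group
        policies: [ anti-affinity ]    # assumed policy name, check your release
    serv_1:
      type: OS::Nova::Server
      properties:
        image: { get_param: image }
        flavor: { get_param: flavor }
        networks:
          - network: { get_param: network }
        scheduler_hints: { group: { get_resource: anti_affinity_group } }
    serv_2:
      type: OS::Nova::Server
      properties:
        image: { get_param: image }
        flavor: { get_param: flavor }
        networks:
          - network: { get_param: network }
        scheduler_hints: { group: { get_resource: anti_affinity_group } }

  For the group hint to have any effect, the nova scheduler also needs the
  ServerGroupAntiAffinityFilter enabled, so treat this purely as a sketch.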

  -
 Dimitri

   From: Daniel Comnea comnea.d...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Sunday 24 May 2015 12:24
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [heat][nova]: anti-affinity policy via heat
 in IceHouse?

Thanks Kevin !

  Would you have an example?

  Much appreciated,
  Dani


 On Sun, May 24, 2015 at 12:28 AM, Fox, Kevin M kevin@pnnl.gov wrote:

 It works with heat. You can use a scheduler hint on the instance and the
 server group resource to make a new one.

 Thanks,
 Kevin

 --
 *From:* Daniel Comnea
 *Sent:* Saturday, May 23, 2015 3:17:11 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [heat][nova]: anti-affinity policy via heat
 in IceHouse?

 Hi,

  I'm aware of the anti-affinity policy which you can create via the nova CLI
 and associate instances with it.
  I'm also aware of the default policies in nova.conf

  When creating instances via HEAT, are there any alternatives to create
 instances as part of an anti-affinity group?

  Thx,
  Dani

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][nova]: anti-affinity policy via heat in IceHouse?

2015-05-24 Thread Daniel Comnea
Thanks Kevin !

Would you have an example?

Much appreciated,
Dani


On Sun, May 24, 2015 at 12:28 AM, Fox, Kevin M kevin@pnnl.gov wrote:

  It works with heat. You can use a scheduler hint on the instance and the
 server group resource to make a new one.

 Thanks,
 Kevin

 --
 *From:* Daniel Comnea
 *Sent:* Saturday, May 23, 2015 3:17:11 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [heat][nova]: anti-affinity policy via heat in
 IceHouse?

 Hi,

  I'm aware of the anti-affinity policy which you can create via the nova CLI
 and associate instances with it.
  I'm also aware of the default policies in nova.conf

  When creating instances via HEAT, are there any alternatives to create
 instances as part of an anti-affinity group?

  Thx,
  Dani

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Why need br-int and br-tun in openstack neutron

2015-05-24 Thread Daniel Comnea
On Sun, May 24, 2015 at 5:46 PM, Armando M. arma...@gmail.com wrote:

 On 23 May 2015 at 04:43, Assaf Muller amul...@redhat.com wrote:

 There's no real reason as far as I'm aware, just an implementation
 decision.


 This is inaccurate. There is a reason(s), and this has been asked before:

 http://lists.openstack.org/pipermail/openstack/2014-March/005950.html
 http://lists.openstack.org/pipermail/openstack/2014-April/006865.html

 In a nutshell, the design decision that led to the existing architecture
 is due to the way OVS handles packets and interact with netfilter.

 The fact that we keep asking the same question clearly shows lack of
 documentation, both developer and user facing.

 I'll get this fixed once and for all.

[DC]: and very much appreciate your initiative !!


 Thanks,
 Armando





 On 21 במאי 2015, at 01:48, Na Zhu na...@cn.ibm.com wrote:

 Dear,


 When OVS plugin is used with GRE option in Neutron, I see that each
 compute
 node has br-tun and br-int bridges created.

 I'm trying to understand why we need the additional br-tun bridge here.
 Can't we create tunneling ports in br-int bridge, and have br-int relay
 traffic between VM ports and tunneling ports directly? Why do we have to
 introduce another br-tun bridge?


 Regards,
 Juno Zhu
 Staff Software Engineer, System Networking
 China Systems and Technology Lab (CSTL), IBM Wuxi
 Email: na...@cn.ibm.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org
 ?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Why need br-int and br-tun in openstack neutron

2015-05-24 Thread Daniel Comnea
On Sat, May 23, 2015 at 12:43 PM, Assaf Muller amul...@redhat.com wrote:

 There's no real reason as far as I'm aware, just an implementation
 decision.

[DC]: in this case wouldn't this be seen as a suitable best practice vs
what every blog/manual is suggesting?




 On 21 במאי 2015, at 01:48, Na Zhu na...@cn.ibm.com wrote:

 Dear,


 When OVS plugin is used with GRE option in Neutron, I see that each
 compute
 node has br-tun and br-int bridges created.

 I'm trying to understand why we need the additional br-tun bridge here.
 Can't we create tunneling ports in br-int bridge, and have br-int relay
 traffic between VM ports and tunneling ports directly? Why do we have to
 introduce another br-tun bridge?


 Regards,
 Juno Zhu
 Staff Software Engineer, System Networking
 China Systems and Technology Lab (CSTL), IBM Wuxi
 Email: na...@cn.ibm.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][nova]: anti-affinity policy via heat in IceHouse?

2015-05-23 Thread Daniel Comnea
Hi,

I'm aware of the anti-affinity policy which you can create via the nova CLI and
associate instances with it.
I'm also aware of the default policies in nova.conf

When creating instances via HEAT, are there any alternatives to create instances
as part of an anti-affinity group?

Thx,
Dani
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas] [octavia] [barbican] Relationship between Octavia and Barbican and Octavia 1.0 questions

2015-05-22 Thread Daniel Comnea
My 2 cents:

I echo what Maish said with regards to functionality:
- integration with HEAT is a must from Day -1 (if there is anything like
this :) ), otherwise it will be hard to gain operator traction. Look at it as
the entry point for everyone trying to move from Neutron LB
- white/black listing traffic based on source port/network/IP
- multiple FIPs associated with 1 LB; the use case is: say I have 1 LB open
for ports TCP 80 and UDP 88, listening on 2 FIPs (even registered in DNS),
and 2 different sets of clients consuming these interfaces - frontend and
backend

Dani

On Thu, May 21, 2015 at 10:45 PM, Brandon Logan brandon.lo...@rackspace.com
 wrote:

  Right now it's all or nothing as far as TCP ports go.  It currently does
 not have the functionality of white/black listing traffic originating
 from a specific network.
  --
 *From:* Maish Saidel-Keesing mais...@maishsk.com
 *Sent:* Thursday, May 21, 2015 7:45 AM
 *To:* openstack-dev@lists.openstack.org

 *Subject:* Re: [openstack-dev] [lbaas] [octavia] [barbican] Relationship
 between Octavia and Barbican and Octavia 1.0 questions

  Thanks Brandon

 On 05/20/15 22:58, Brandon Logan wrote:

 Just to add a few things,

 Barbican is not yet implemented in Octavia, though the code is there, we
 just need to spend a few hours hooking it all up and testing it out.


  Also, the security groups are used by octavia right now so that only the
 ports on the listener are accessible.  Basically if a loadbalancer has
 listeners on ports 80 and 443, the vip ports will only allow traffic on
 those ports.  It shouldn't allow other traffic.
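
  For illustration only, the effect is roughly what you would get from a
  security group shaped like the sketch below (OS::Neutron::SecurityGroup
  property names assumed from the Heat resource reference; this is not the
  actual resource Octavia creates):

  vip_sec_group:
    type: OS::Neutron::SecurityGroup
    properties:
      description: allow only the listener ports on the VIP
      rules:
        - protocol: tcp        # listener on port 80
          port_range_min: 80
          port_range_max: 80
        - protocol: tcp        # listener on port 443
          port_range_min: 443
          port_range_max: 443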

 That is great to hear. I assume that if we are using security groups we
 will also be able to define rules regarding which networks the listeners
 are allowed to accept traffic from?

 Is that assumption correct?


  Thanks,

 Brandon
  --
 *From:* Doug Wiegley doug...@parksidesoftware.com
 doug...@parksidesoftware.com
 *Sent:* Thursday, May 21, 2015 12:49 AM
 *To:* maishsk+openst...@maishsk.com; OpenStack Development Mailing List
 (not for usage questions); Maish Saidel-Keesing
 *Subject:* Re: [openstack-dev] [lbaas] [octavia] [barbican] Relationship
 between Octavia and Barbican and Octavia 1.0 questions

  Hi Maish,

  Thanks for the feedback, some answers below.  Please also be aware of
 the lbaas use cases session tomorrow at 9am (yuck, I know),
 https://etherpad.openstack.org/p/YVR-neutron-lbaas-use-cases


  On May 19, 2015, at 12:05 AM, Maish Saidel-Keesing mais...@maishsk.com
 wrote:

  Hello all,

 Going over today's presentation Load Balancing as a Service, Kilo and
 Beyond[1] (great presentation!!) - there are a few questions I have
 regarding the future release:

 For Octavia 1.0:

 1. Can someone explain to me how the flow would work for spinning up a
 new Amphora with regards to interaction between Neutron, LBaaS and Barbican?
 Same question as well regarding how the standby is created and its
 relationship with Barbican.


  The lbaas API runs inside neutron-server.  The general flow is:

  - User interacts with neutron CLI/API or horizon (in liberty), and
 creates an LB.
 - Lbaas plugin in neutron creates logical models, fetches cert data from
 barbican, and calls the backend lbaas driver.
 - The backend driver does what it needs to to instantiate the LB. Today
 this is a synchronous call that waits for the nova boot, but by Liberty, it
 will likely be an async call to the octavia controller to finish the job.

  Once Octavia has control, it is doing:

  - Get REST calls for objects,
 - Talk to nova, spin up an amphora image,
 - Talk to neutron, plumb in the networks,
 - Send the amphora its config.


 2. Will the orchestration (Heat) also be implemented when Octavia 1.0 is
 released or only further down the line?
 If not what would you suggest be the way to orchestrate LBaaS until this
 is ready?


  We need to talk to the Heat folks and coordinate this, which we are
 planning to do soon.


 3. Is there some kind of hook into Security groups also planned for the
 Amphora to also protect the Load Balancer?


  Not at present, but I recorded this in the feature list on the etherpad
 above.


 I think that based on the answers to these questions above - additional
 questions will follow.

 Thanks

 [1] https://www.youtube.com/watch?v=-eAKur8lErU
 --
 Best Regards,
 Maish Saidel-Keesing
  __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Best Regards,
 Maish Saidel-Keesing

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [lbaas] [octavia] [barbican] Relationship between Octavia and Barbican and Octavia 1.0 questions

2015-05-20 Thread Daniel Comnea
+1

On Tue, May 19, 2015 at 8:05 AM, Maish Saidel-Keesing mais...@maishsk.com
wrote:

  Hello all,

 Going over today's presentation Load Balancing as a Service, Kilo and
 Beyond[1] (great presentation!!) - there are a few questions I have
 regarding the future release:

 For Octavia 1.0:

 1. Can someone explain to me how the flow would work for spinning up a
 new Amphora with regards to interaction between Neutron, LBaaS and Barbican?
 Same question as well regarding how the standby is created and its
 relationship with Barbican.

 2. Will the orchestration (Heat) also be implemented when Octavia 1.0 is
 released or only further down the line?
 If not what would you suggest be the way to orchestrate LBaaS until this
 is ready?

 3. Is there some kind of hook into Security groups also planned for the
 Amphora to also protect the Load Balancer?

 I think that based on the answers to these questions above - additional
 questions will follow.

 Thanks

 [1] https://www.youtube.com/watch?v=-eAKur8lErU
 --
 Best Regards,
 Maish Saidel-Keesing

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [kolla] [magnum] [nova] [neutron] Vancouver Summit Operations Containers Session

2015-05-20 Thread Daniel Comnea
Great info, thanks for sharing.

Since I couldn't attend the summit, are there any AIs (action items) which need
to happen/take place and which I can keep an eye on?

thanks,
Dani

On Thu, May 21, 2015 at 4:45 AM, Richard Raseley rich...@raseley.com
wrote:

 I apologize if this message is inappropriately broad, but I wanted to
 share the results of the Vancouver Summit operations containers session[0]
 we held yesterday with all of the projects which were mentioned.

 This was a great discussion with approximately two-dozen individuals in
 attendance. Most of the conversation was captured pretty well on the
 etherpad, which can be found here:

 https://etherpad.openstack.org/p/YVR-ops-containers

 A big thank you to everyone who participated - it was really informative.

 Regards,

 Richard Raseley

 SysOps Engineer @ Puppet Labs

 [0] - http://sched.co/3B45

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] LBaaS in version 5.1

2015-05-06 Thread Daniel Comnea
Hi all,

Recently I used Fuel 5.1 to deploy OpenStack Icehouse in a lab (PoC) and a
request came in for enabling Neutron LBaaS.

I have looked at the Fuel docs to see if this is supported in the version I'm
running but failed to find anything.

Can anyone point me to any docs which mention a) yes, it is supported and
b) how to enable it via Fuel?

Thanks,
Dani
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] LBaaS in version 5.1

2015-05-06 Thread Daniel Comnea
Thanks Stanislaw for the reply.

Sure, I can do that; the only unknown question I have is related to the Fuel
HA controllers. I assume I can easily ignore the controller HA (LBaaS
doesn't support HA :) ) and just go with the standard LBaaS?



On Wed, May 6, 2015 at 2:55 PM, Stanislaw Bogatkin sbogat...@mirantis.com
wrote:

 Hi Daniel,

 Unfortunately, we never supported LBaaS until Fuel 6.0, when the plugin system
 was introduced and the LBaaS plugin was created. So, I think that docs about it
 never existed for 5.1. But as far as I know, you can easily install LBaaS in 5.1
 (it should be shipped in our repos) and configure it in accordance with the
 standard OpenStack cloud administrator guide [1].

 [1]
 http://docs.openstack.org/admin-guide-cloud/content/install_neutron-lbaas-agent.html
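
  For anyone landing on this thread later: once the LBaaS v1 agent is running,
  a minimal load balancer stack can be sketched in Heat roughly as below
  (resource and property names are assumed from the Icehouse-era Heat resource
  reference, and the private_subnet_id parameter and serv_1 server resource are
  made up for illustration -- please verify with heat resource-type-list):

  resources:
    monitor:
      type: OS::Neutron::HealthMonitor
      properties:
        type: TCP
        delay: 5
        max_retries: 3
        timeout: 5
    pool:
      type: OS::Neutron::Pool
      properties:
        protocol: HTTP
        lb_method: ROUND_ROBIN
        subnet_id: { get_param: private_subnet_id }   # assumed parameter
        monitors: [ { get_resource: monitor } ]
        vip:
          protocol_port: 80
    member:
      type: OS::Neutron::PoolMember
      properties:
        pool_id: { get_resource: pool }
        address: { get_attr: [ serv_1, first_address ] }   # assumed server resource
        protocol_port: 80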

 On Wed, May 6, 2015 at 2:12 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 Hi all,

 Recently I used Fuel 5.1 to deploy OpenStack Icehouse in a lab (PoC) and
 a request came in for enabling Neutron LBaaS.

 I have looked at the Fuel docs to see if this is supported in the version
 I'm running but failed to find anything.

 Can anyone point me to any docs which mention a) yes, it is supported
 and b) how to enable it via Fuel?

 Thanks,
 Dani

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia] what are the main differences between the two

2015-05-04 Thread Daniel Comnea
Thanks a bunch Doug, very clear & helpful info.

so with that said those who run IceHouse or Juno are (more or less :) )
dead in the water as the only option is v1 ...hmm

Dani

On Mon, May 4, 2015 at 10:21 PM, Doug Wiegley doug...@parksidesoftware.com
wrote:

 lbaas v1:

 This is the original Neutron LBaaS, and what you see in Horizon or in the
 neutron CLI as “lb-*”.  It has an haproxy backend, and a few vendors
 supporting it. Feature-wise, it’s basically a byte pump.

 lbaas v2:

 This is the “new” Neutron LBaaS, and is in the neutron CLI as “lbaas-*”
 (it’s not yet in Horizon.)  It first shipped in Kilo. It re-organizes the
 objects, and adds TLS termination support, and has L7 plus other new
 goodies planned in Liberty. It similarly has an haproxy reference backend
 with a few vendors supporting it.

 octavia:

 Think of this as a service vm framework that is specific to lbaas, to
 implement lbaas via nova VMs instead of “lbaas agents”. It is expected to
 be the reference backend implementation for neutron lbaasv2 in liberty. It
 could also be used as its own front-end, and/or given drivers to be a load
 balancing framework completely outside neutron/nova, though that is not the
 present direction of development.

 Thanks,
 doug




  On May 4, 2015, at 1:57 PM, Daniel Comnea comnea.d...@gmail.com wrote:
 
  Hi all,
 
  I'm trying to gather more info about the differences between
 
  Neutron LBaaS v1
  Neutron LBaaS v2
  Octavia
 
  I know Octavia is still not marked production, but on the other hand I
 keep hearing inside my organization that Neutron LBaaS is missing a few
 critical pieces, so I'd very much appreciate it if anyone can provide
 detailed info about the differences above.
 
  Thanks,
  Dani
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-ansible-deplpoyment has released Kilo

2015-05-04 Thread Daniel Comnea
Hey Kevin,

Let me add more info:

1) trying to understand if there is any support for baremetal provisioning
(e.g. setting up the UCS manager if using UCS blades etc., dumping the OS on it). I
don't care if it is Ironic or PXE/Kickstart or Foreman etc.
2) deployments on baremetal without the use of LXC containers (stick with
default VM instances)

Dani

On Mon, May 4, 2015 at 3:02 PM, Kevin Carter kevin.car...@rackspace.com
wrote:

 Hey Dani,

 Are you looking for support for Ironic for baremetal provisioning or for
 deployments on baremetal without the use of LXC containers?

 —

 Kevin

  On May 3, 2015, at 06:45, Daniel Comnea comnea.d...@gmail.com wrote:
 
  Great job Kevin & co !!
 
  Are there any plans to support configuring the baremetal as well?
 
  Dani
 
  On Thu, Apr 30, 2015 at 11:46 PM, Liu, Guang Jun (Gene) 
 gene@alcatel-lucent.com wrote:
  cool!
  
  From: Kevin Carter [kevin.car...@rackspace.com]
  Sent: Thursday, April 30, 2015 4:36 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] os-ansible-deplpoyment has released Kilo
 
  Hello Stackers,
 
  The OpenStack Ansible Deployment (OSAD) project is happy to announce our
 stable Kilo release, version 11.0.0. The project has come a very long way
 from initial inception and taken a lot of work to excise our original
 vendor logic from the stack and transform it into a community-driven
 architecture and deployment process. If you haven’t yet looked at the
 `os-ansible-deployment` project on StackForge, we'd love for you to take a
 look now [ https://github.com/stackforge/os-ansible-deployment ]. We
 offer an OpenStack solution orchestrated by Ansible and powered by upstream
 OpenStack source. OSAD is a batteries included OpenStack deployment
 solution that delivers OpenStack as the developers intended it: no
 modifications to nor secret sauce in the services it deploys. This release
 includes 436 commits that brought the project from Rackspace Private Cloud
 technical debt to an OpenStack community deployment solution. I'd like to
 recognize the following people (from Git logs) for all of their hard work
 in making the OSAD project successful:
 
  Andy McCrae
  Matt Thompson
  Jesse Pretorius
  Hugh Saunders
  Darren Birkett
  Nolan Brubaker
  Christopher H. Laco
  Ian Cordasco
  Miguel Grinberg
  Matthew Kassawara
  Steve Lewis
  Matthew Oliver
  git-harry
  Justin Shepherd
  Dave Wilde
  Tom Cameron
  Charles Farquhar
  BjoernT
  Dolph Mathews
  Evan Callicoat
  Jacob Wagner
  James W Thorne
  Sudarshan Acharya
  Jesse P
  Julian Montez
  Sam Yaple
  paul
  Jeremy Stanley
  Jimmy McCrory
  Miguel Alex Cantu
  elextro
 
 
  While Rackspace remains the main proprietor of the project in terms of
 community members and contributions, we're looking forward to more
 community participation especially after our stable Kilo release with a
 community focus. Thank you to everyone that contributed on the project so
 far and we look forward to working with more of you as we march on.
 
  —
 
  Kevin Carter
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas][octavia] what are the main differences between the two

2015-05-04 Thread Daniel Comnea
Hi all,

I'm trying to gather more info about the differences between

Neutron LBaaS v1
Neutron LBaaS v2
Octavia

I know Octavia is still not marked production, but on the other hand I keep
hearing inside my organization that Neutron LBaaS is missing a few critical
pieces, so I'd very much appreciate it if anyone can provide detailed info
about the differences above.

Thanks,
Dani
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStackClient 1.2.0 release

2015-05-03 Thread Daniel Comnea
Dean, quick question:

will I be able to run this new client against Icehouse too?

Thanks,
Dani

On Thu, Apr 30, 2015 at 9:24 PM, Dean Troyer dtro...@gmail.com wrote:

 OpenStackClient 1.2.0 has been released to PyPI.

 python-openstackclient can be installed from the following locations:

 PyPI: https://pypi.python.org/pypi/python-openstackclient
 OpenStack tarball: http://tarballs.openstack.org/python-openstackclient
 /python-openstackclient-1.2.0.tar.gz
 http://tarballs.openstack.org/python-openstackclient/python-openstackclient-1.0.3.tar.gz

 Please report issues through launchpad: https://bugs.launchpad.net/python-
 openstackclient

 1.2.0 (30 Apr 2015)
 ===

 * Fix error in ``security group create`` command when ``--description`` is
 not supplied.
 * Correct ``image list`` pagination handling, all images are now correctly
 returned.
 * Do not require ``--dst-port`` option with ``security group rule create``
 when ``--proto ICMP`` is selected.
 * Correctly pass ``--location`` argument in ``image create`` command.
 * Correctly handle use of ``role`` commands for project admins.  Using IDs
 will work for project admins even when names will not due to non-admin
 constraints.
 * Correctly exit with an error when authentication can not be completed.
 * Fix ``backup create`` to correctly use the ``--container`` value if
 supplied.
 * Document the backward-compatibility-breaking changes in
   :doc:`backwards-incompatible`.
 * Add ``--parent`` option to ``project create`` command.


 dt

 --

 Dean Troyer
 dtro...@gmail.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-ansible-deplpoyment has released Kilo

2015-05-03 Thread Daniel Comnea
Great job Kevin & co !!

Are there any plans to support configuring the baremetal as well?

Dani

On Thu, Apr 30, 2015 at 11:46 PM, Liu, Guang Jun (Gene) 
gene@alcatel-lucent.com wrote:

 cool!
 
 From: Kevin Carter [kevin.car...@rackspace.com]
 Sent: Thursday, April 30, 2015 4:36 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] os-ansible-deplpoyment has released Kilo

 Hello Stackers,

 The OpenStack Ansible Deployment (OSAD) project is happy to announce our
 stable Kilo release, version 11.0.0. The project has come a very long way
 from initial inception and taken a lot of work to excise our original
 vendor logic from the stack and transform it into a community-driven
 architecture and deployment process. If you haven’t yet looked at the
 `os-ansible-deployment` project on StackForge, we'd love for you to take a
 look now [ https://github.com/stackforge/os-ansible-deployment ]. We
 offer an OpenStack solution orchestrated by Ansible and powered by upstream
 OpenStack source. OSAD is a batteries included OpenStack deployment
 solution that delivers OpenStack as the developers intended it: no
 modifications to nor secret sauce in the services it deploys. This release
 includes 436 commits that brought the project from Rackspace Private Cloud
 technical debt to an OpenStack community deployment solution. I'd like to
 recognize the following people (from Git logs) for all of their hard work
 in making the OSAD project successful:

 Andy McCrae
 Matt Thompson
 Jesse Pretorius
 Hugh Saunders
 Darren Birkett
 Nolan Brubaker
 Christopher H. Laco
 Ian Cordasco
 Miguel Grinberg
 Matthew Kassawara
 Steve Lewis
 Matthew Oliver
 git-harry
 Justin Shepherd
 Dave Wilde
 Tom Cameron
 Charles Farquhar
 BjoernT
 Dolph Mathews
 Evan Callicoat
 Jacob Wagner
 James W Thorne
 Sudarshan Acharya
 Jesse P
 Julian Montez
 Sam Yaple
 paul
 Jeremy Stanley
 Jimmy McCrory
 Miguel Alex Cantu
 elextro


 While Rackspace remains the main proprietor of the project in terms of
 community members and contributions, we're looking forward to more
 community participation especially after our stable Kilo release with a
 community focus. Thank you to everyone that contributed on the project so
 far and we look forward to working with more of you as we march on.

 —

 Kevin Carter


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Cross-project coordination

2015-04-23 Thread Daniel Comnea
Good luck Sean!! Hopefully you can sort out the DNS and the Quotas.

Thanks again !

On Wed, Apr 22, 2015 at 11:41 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 04/22/2015 04:33 PM, Kyle Mestery wrote:

 On Wed, Apr 22, 2015 at 3:28 PM, Sean M. Collins s...@coreitpro.com
 mailto:s...@coreitpro.com wrote:

 Hi,

 Kyle Mestery has asked me to serve as a cross-project liaison between
 Neutron and Nova. So, I'd like to say hello, and
 direct everyone towards an etherpad that I have created.

 https://etherpad.openstack.org/p/nova-neutron

 The etherpad can serve as a way to collect items that should be
 discussed between the projects.

 I will be attending the Nova main meetings to field Neutron
 questions, or at least provide who on the Neutron side would know the
 answer to a question.

 This is a big job, and I'd like to see if there is a volunteer on the
 Nova side who would be interested in also helping this effort, who
 could
 do the reverse (Sit in on Neutron meetings, field Nova questions).

 Thank you, and I look forward to working with everyone!

 Sean, thank you so much for volunteering to take on this incredibly
 important job. I'm hoping we can get someone from the nova side to work
 hand-in-hand with you and ensure we continue to drive issues related to
 both nova and neutron to conclusion in a way which benefits our mutual
 users!


 Agreed. I'm hoping that someone in the Nova community -- note, this does
 not need to be a Nova core contributor -- can step up to the plate and
 serve in this critical role.

 Best,
 -jay


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-18 Thread Daniel Comnea
Monty, many thanks for a clear summary, I fully agree with you. I have a
nightmare trying to educate developers (mainly from the client side) in my
group that they need to get used to private nets and not consume all FIPs,
because they are not an unlimited resource.


On Sat, Apr 18, 2015 at 3:53 AM, Monty Taylor mord...@inaugust.com wrote:

 On 04/17/2015 06:48 PM, Rochelle Grober wrote:
  I know the DevStack issue seems to be solved, but I had to
  respond inline.
 
  From: Fox, Kevin M [mailto:kevin@pnnl.gov] Sent: Friday, April
  17, 2015 12:28 To: OpenStack Development Mailing List (not for usage
  questions) Subject: Re: [openstack-dev] [Nova][Neutron] Linuxbridge
  as the default in DevStack [was: Status of the nova-network to
  Neutron migration work]
 
  No, the complaints from ops I have heard even internally, which I
  think is being echo'd here is I understand how linux bridge works, I
  don't openvswitch. and I don't want to be bothered to learn to
  debug openvswitch because I don't think we need it.
 
  If linux bridge had feature parity with openvswitch, then it would be
  a reasonable argument or if the users truly didn't need the extra
  features provided by openvswitch/naas. I still assert though, that
  linux bridge won't get feature parity with openvswitch and the extra
  features are actually critical to users (DVR/NaaS), so it's worth
  switching to openvswitch and learning how to debug it. Linux Bridge
  is a nonsolution at this point.

 I'm sorry, but with all due respect - I believe that sounds very much
 like sticking fingers in ears and not paying attention to the very real
 needs of users.

 Let me tell you some non-features I encounter currently:

 - Needing Floating IPs to get a public address

 This is touted as the right way to do it - but it's actually a
 terrible experience for a user. The clouds I have access to that just
 give me a direct DHCP address are much more useful.

 In fact, we should delete floating ips - they are a non-feature that
 make life harder. Literally no user of a cloud has ever wanted them,
 although we've learned to deal with them.

 - SDN

 I understand this is important for people, so let's keep it around - but
 having software routers essentially means that it's a scaling
 bottleneck. In the cloud Infra uses that has SDN, we have to create
 multiple software routers to handle the scaling issues. On the other
 hand, direct routing / linuxbridge does NOT have this problem, because
 the network packets are routed directly.

 We should not delete SDN like we should delete floating IPs, because
 there are real users who have real uses cases and SDN helps them.
 However, it should be an opt-in feature for a user that is an add on.

 vexxhost is getting this right right now - you automatically get a
 DHCP'd direct routed IP on each VM you provision, but if you decide you
 need fancy, you can opt in to create a private network.

 - DVR

 I'm an end user. I do not care about this at all. DVR is only important
 if you have bought in to software routers. It's a solution to a problem
 that would go away if things worked like networks.



 :/ So is keeping nova-network around
  forever. :/ But other than requiring some more training for ops
  folks, I think Neutron can suit the rest of the use cases these days
  nova-network provided over neutron. The sooner we can put the
  nova-network issue to bed, the better off the ecosystem will be. It
  will take a couple of years for the ecosystem to settle out to
  deprecating it, since a lot of clouds take years to upgrade and
  finally put the issue to bed. Let's do that sooner rather than later
  so a couple of years from now, we're done. :/

 I'm about to deploy a cloud, I'm going to run neutron, and I'm not going
 to run openvswitch because I do not need it. I will run the equiv of
 flatdhcp.

 If neutron doesn't have it, I will write it, because it's that important
 that it exist.

 If you take that ability away from me, you will be removing working
 feature and replacing them with things that make my user experience worse.

 Let's not do that. Let's listen to the people who are using this thing
 as end users. Let's understand their experience and frustration. And
 let's not chase pie-in-the-sky theory of how it should work in the
 face of what a ton of people are asking and even begging for. FlatDHCP
 is perfect for the 80% case. The extra complexity of the additional
 things if you don't actually need them is irresponsible.

 
  [Rockyg] Kevin, the problem is that the extra features *aren't*
  critical to the deployers and/or users of many OpenStack
  deployments.  And since they are not critical, the deployers won't
  *move* to using neutron that requires them to learn all this new
  stuff that thjey don't need.  By not providing a simple path to a
  flatDHCP implementation, you will get existing users refusing to
  upgrade rather than take a bunch of extraneous stuff from Neutron
  because the OpenStack project 

Re: [openstack-dev] Re: [neutron] Neutron scaling datapoints?

2015-04-14 Thread Daniel Comnea
Joshua,

Those are old and have been fixed/documented on the Consul side.
As for ZK, I have nothing against it, just wish you good luck running it in
a multi/cross-DC setup :)

Dani

On Mon, Apr 13, 2015 at 11:37 PM, Joshua Harlow harlo...@outlook.com
wrote:

 Did the following get addressed?

 https://aphyr.com/posts/316-call-me-maybe-etcd-and-consul

 Seems like quite a few things got raised in that post about etcd/consul.

 Maybe they are fixed, idk...

 https://aphyr.com/posts/291-call-me-maybe-zookeeper though worked as
 expected (and without issue)...

 I quote:

 '''
 Recommendations

 Use Zookeeper. It’s mature, well-designed, and battle-tested. Because the
 consequences of its connection model and linearizability properties are
 subtle, you should, wherever possible, take advantage of tested recipes and
 client libraries like Curator, which do their best to correctly handle the
 complex state transitions associated with session and connection loss.
 '''

 Daniel Comnea wrote:

 My 2 cents:

 I like the 3rd party backend idea, however instead of ZK wouldn't Consul [1]
 fit better due to being lighter and having out-of-the-box multi-DC awareness?

 Dani

 [1] Consul - https://www.consul.io/


 On Mon, Apr 13, 2015 at 9:51 AM, Wangbibo wangb...@huawei.com
 mailto:wangb...@huawei.com wrote:

 Hi Kevin,


 Totally agree with you that heartbeat from each agent is something
 that we cannot eliminate currently. Agent status depends on it, and
 further scheduler and HA depends on agent status.


 I proposed a Liberty spec for introducing open framework/pluggable
 agent status drivers.[1][2]  It allows us to use some other 3rd
 party backend to monitor agent status, such as zookeeper, memcached.
 Meanwhile, it guarantees backward compatibility so that users could
 still use db-based status monitoring mechanism as their default
 choice.


 Base on that, we may do further optimization on issues Attila and
 you mentioned. Thanks. 


 [1] BP  -
 https://blueprints.launchpad.net/neutron/+spec/agent-group-
 and-status-drivers

 [2] Liberty Spec proposed - https://review.openstack.org/#
 /c/168921/


 Best,

 Robin


 *From:* Kevin Benton [mailto:blak...@gmail.com
 mailto:blak...@gmail.com]
 *Sent:* April 11, 2015 12:35
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] Neutron scaling datapoints?


 Which periodic updates did you have in mind to eliminate? One of the
 few remaining ones I can think of is sync_routers but it would be
 great if you can enumerate the ones you observed because eliminating
 overhead in agents is something I've been working on as well.


 One of the most common is the heartbeat from each agent. However, I
 don't think we can eliminate them because they are used to
 determine if the agents are still alive for scheduling purposes. Did
 you have something else in mind to determine if an agent is alive?


 On Fri, Apr 10, 2015 at 2:18 AM, Attila Fazekas afaze...@redhat.com
 mailto:afaze...@redhat.com wrote:

 I'm 99.9% sure that for scaling above 100k managed nodes,
 we do not really need to split OpenStack into multiple smaller
 OpenStacks,
 or use a significant number of extra controller machines.

 The problem is openstack using the right tools SQL/AMQP/(zk),
 but in a wrong way.

 For example:
 Periodic updates can be avoided in almost all cases.

 The new data can be pushed to the agent just when it is needed.
 The agent can know when the AMQP connection becomes unreliable (queue
 or connection loss),
 and needs to do a full sync.
 https://bugs.launchpad.net/neutron/+bug/1438159

 Also, when the agents get some notification, they start asking for
 details via the
 AMQP - SQL path. Why do they not know it already, or get it with the
 notification?


 - Original Message -
   From: Neil Jerram neil.jer...@metaswitch.com
 mailto:neil.jer...@metaswitch.com

   To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org

   Sent: Thursday, April 9, 2015 5:01:45 PM
   Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
 
   Hi Joe,
 
   Many thanks for your reply!
 
   On 09/04/15 03:34, joehuang wrote:
Hi, Neil,
   
 From theoretic, Neutron is like a broadcast domain, for
 example,
 enforcement of DVR and security group has to touch each
 regarding host
 where there is VM of this project resides. Even using SDN
 controller, the
touch to regarding host is inevitable. If there are plenty of
 physical

Re: [openstack-dev] Re: [neutron] Neutron scaling datapoints?

2015-04-13 Thread Daniel Comnea
My 2 cents:

I like the 3rd party backend idea, however instead of ZK wouldn't Consul [1] fit
better due to being lighter and having out-of-the-box multi-DC awareness?

Dani

[1] Consul - https://www.consul.io/


On Mon, Apr 13, 2015 at 9:51 AM, Wangbibo wangb...@huawei.com wrote:

  Hi Kevin,



 Totally agree with you that heartbeat from each agent is something that we
 cannot eliminate currently. Agent status depends on it, and further
 scheduler and HA depends on agent status.



 I proposed a Liberty spec for introducing open framework/pluggable agent
 status drivers.[1][2]  It allows us to use some other 3rd party backend
 to monitor agent status, such as zookeeper, memcached. Meanwhile, it
 guarantees backward compatibility so that users could still use db-based
 status monitoring mechanism as their default choice.



 Base on that, we may do further optimization on issues Attila and you
 mentioned. Thanks.



 [1] BP  -
 https://blueprints.launchpad.net/neutron/+spec/agent-group-and-status-drivers

 [2] Liberty Spec proposed - https://review.openstack.org/#/c/168921/



 Best,

 Robin









 *From:* Kevin Benton [mailto:blak...@gmail.com]
 *Sent:* April 11, 2015 12:35
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] Neutron scaling datapoints?



 Which periodic updates did you have in mind to eliminate? One of the few
 remaining ones I can think of is sync_routers but it would be great if you
 can enumerate the ones you observed because eliminating overhead in agents
 is something I've been working on as well.



 One of the most common is the heartbeat from each agent. However, I don't
 think we can eliminate them because they are used to determine if the
 agents are still alive for scheduling purposes. Did you have something else
 in mind to determine if an agent is alive?



 On Fri, Apr 10, 2015 at 2:18 AM, Attila Fazekas afaze...@redhat.com
 wrote:

 I'm 99.9% sure that for scaling above 100k managed nodes,
 we do not really need to split OpenStack into multiple smaller OpenStacks,
 or use a significant number of extra controller machines.

 The problem is openstack using the right tools SQL/AMQP/(zk),
 but in a wrong way.

 For example:
 Periodic updates can be avoided in almost all cases.

 The new data can be pushed to the agent just when it is needed.
 The agent can know when the AMQP connection becomes unreliable (queue or
 connection loss),
 and needs to do a full sync.
 https://bugs.launchpad.net/neutron/+bug/1438159

 Also, when the agents get some notification, they start asking for details
 via the
 AMQP - SQL path. Why do they not know it already, or get it with the
 notification?


 - Original Message -
  From: Neil Jerram neil.jer...@metaswitch.com

  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Thursday, April 9, 2015 5:01:45 PM
  Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
 
  Hi Joe,
 
  Many thanks for your reply!
 
  On 09/04/15 03:34, joehuang wrote:
   Hi, Neil,
  
In theory, Neutron is like a broadcast domain; for example,
enforcement of DVR and security groups has to touch each relevant host
where a VM of this project resides. Even using an SDN controller, the
touch to the relevant hosts is inevitable. If there are plenty of physical
hosts, for example, 10k, inside one Neutron, it's very hard to overcome
the broadcast storm issue under concurrent operation; that's the
bottleneck for scalability of Neutron.
 
  I think I understand that in general terms - but can you be more
  specific about the broadcast storm?  Is there one particular message
  exchange that involves broadcasting?  Is it only from the server to
  agents, or are there 'broadcasts' in other directions as well?
 
  (I presume you are talking about control plane messages here, i.e.
  between Neutron components.  Is that right?  Obviously there can also be
  broadcast storm problems in the data plane - but I don't think that's
  what you are talking about here.)
 
   We need a layered architecture in Neutron to solve the broadcast domain
   bottleneck of scalability. The test report from OpenStack cascading shows
   that, through the layered architecture of Neutron cascading, Neutron can
   support up to million-level ports and 100k-level physical hosts. You can
   find the report here:
  
 http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers
 
  Many thanks, I will take a look at this.
 
   Neutron cascading also brings an extra benefit: one cascading Neutron can
   have many cascaded Neutrons, and different cascaded Neutrons can leverage
   different SDN controllers; maybe one is ODL, the other one is OpenContrail.
  
                ---- Cascading Neutron ----
               /                           \
   --cascaded Neutron--             --cascaded Neutron--

Re: [openstack-dev] [Openstack-operators] [Openstack-dev] resource quotas limit per stacks within a project

2015-04-09 Thread Daniel Comnea
Thanks for your reply Kris.

I'd love to, but we're forced into this by an in-house app we built (in the
same space as Murano, to offer a service catalogue for various services) for
deployment.

It must be a different path to cross the bridge given the circumstances.

Dani

On Wed, Apr 8, 2015 at 3:54 PM, Kris G. Lindgren klindg...@godaddy.com
wrote:

  Why wouldn't you separate your dev/test/production via tenants as well?
 That’s what we encourage our users to do.  This would let you create
 flavors that give dev/test fewer resources under exhaustion conditions and
 production more resources.  You could even pin dev/test to specific
 hypervisors/areas of the cloud and let production have the rest via those
 flavors.
  

 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.

   From: Daniel Comnea comnea.d...@gmail.com
 Date: Wednesday, April 8, 2015 at 3:32 AM
 To: Daniel Comnea comnea.d...@gmail.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org, 
 openstack-operat...@lists.openstack.org 
 openstack-operat...@lists.openstack.org
 Subject: Re: [Openstack-operators] [Openstack-dev] resource quotas limit
 per stacks within a project

+ operators

  Hard to believe nobody is facing this problem; even in small shops you
 end up with multiple stacks as part of the same tenant/project.

  Thanks,
  Dani

 On Wed, Apr 1, 2015 at 8:10 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

   Any ideas/ thoughts please?

  In the VMware world, this is basically the same feature as provided by
 resource pools.


  Thanks,
  Dani

 On Tue, Mar 31, 2015 at 10:37 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

   Hi all,

  I'm trying to understand what options i have for the below use case...

  Having multiple stacks (with varying numbers of instances) deployed within
 one OpenStack project (tenant), how can I guarantee that there will be no
 race for the project's resources?

  E.g - say i have few stacks like

  stack 1 = production
  stack 2 = development
  stack 3 = integration

  I don't want to be in a situation where stack 3 (because of a need to
 run some heavy tests) will use all of the resources for a short while, while
 production suffers from it.

  Any ideas?

  Thanks,
  Dani

  P.S. - I'm aware of the heavy work being put into improving the quotas
 or the CPU pinning; however, that is at the project level.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] resource quotas limit per stacks within a project

2015-04-08 Thread Daniel Comnea
+ operators

Hard to believe nobody is facing this problem; even in small shops you end
up with multiple stacks as part of the same tenant/project.

Thanks,
Dani

On Wed, Apr 1, 2015 at 8:10 PM, Daniel Comnea comnea.d...@gmail.com wrote:

 Any ideas/ thoughts please?

 In the VMware world, this is basically the same feature as provided by
 resource pools.


 Thanks,
 Dani

 On Tue, Mar 31, 2015 at 10:37 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 Hi all,

 I'm trying to understand what options i have for the below use case...

 Having multiple stacks (with varying numbers of instances) deployed within
 one OpenStack project (tenant), how can I guarantee that there will be no
 race for the project's resources?

 E.g - say i have few stacks like

 stack 1 = production
 stack 2 = development
 stack 3 = integration

 I don't want to be in a situation where stack 3 (because of a need to run
 some heavy tests) will use all of the resources for a short while, while
 production suffers from it.

 Any ideas?

 Thanks,
 Dani

 P.S. - I'm aware of the heavy work being put into improving the quotas or
 the CPU pinning; however, that is at the project level.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][ceilometer]: scale up/ down based on number of instances in a group

2015-04-08 Thread Daniel Comnea
Thanks Angus for feedback.

Best,
Dani

On Mon, Apr 6, 2015 at 11:30 PM, Angus Salkeld asalk...@mirantis.com
wrote:


 On Fri, Apr 3, 2015 at 8:51 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 Hi all,

 Does anyone know if the above use case has made it into the convergence
 project, and in which release it was / is going to be merged?


 Hi

 Phase one of convergence should be merged in early L (we have some of the
 patches merged now).
 Phase two is to separate the checking of actual state into a new RPC
 service within Heat.
 Then you *could* run that checker periodically (or receive RPC
 notifications) to learn of changes
 to the stack's state during the lifetime of the stack. That *might* get
 done in L - we will just have to see
 how things go.

 -Angus


 Thanks,
 Dani

 On Tue, Oct 28, 2014 at 5:40 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 Thanks all for reply.

  I have spoken with Qiming and @Shardy (IRC nickname) and they confirmed
 this is not possible as of today. Someone else - sorry, I forgot his nickname
 on IRC - suggested writing a Ceilometer query to count the number of
 instances, but what @ZhiQiang said is true, and this is what we have seen via
 the instance sample.

  *@Clint - *that is the case indeed.

  *@ZhiQiang* - what do you mean by *count of resource should be queried
 from specific service's API*? Is it related to Ceilometer's event
 types configuration?

  *@Mike - *my use case is very simple: I have a group of instances, and
 in case the # of instances reaches the minimum number I set, I would like a
 new instance to be spun up - think of it like a cluster where I want to
 maintain a minimum number of members.

  With regards to the proposal you made -
 https://review.openstack.org/#/c/127884/ - that works, but only in a
 specific use case, hence it is not generic, because the assumption is that my
 instances are hooked behind a LBaaS, which is not always the case.

  Looking forward to seeing 'convergence' in action.


 Cheers,
 Dani

 On Tue, Oct 28, 2014 at 3:06 AM, Mike Spreitzer mspre...@us.ibm.com
 wrote:

 Daniel Comnea comnea.d...@gmail.com wrote on 10/27/2014 07:16:32 AM:

  Yes i did but if you look at this example
 
 
 https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml
 

  the flow is simple:

  CPU alarm in Ceilometer triggers the type: OS::Heat::ScalingPolicy
  which then triggers the type: OS::Heat::AutoScalingGroup

 Actually the ScalingPolicy does not trigger the ASG.  BTW,
 ScalingPolicy is mis-named; it is not a full policy, it is only an action
 (the condition part is missing --- as you noted, that is in the Ceilometer
 alarm).  The so-called ScalingPolicy does the action itself when
 triggered.  But it respects your configured min and max size.

 What are you concerned about making your scaling group smaller than
 your configured minimum?  Just checking here that there is not a
 misunderstanding.

 As Clint noted, there is a large-scale effort underway to make Heat
 maintain what it creates despite deletion of the underlying resources.

 There is also a small-scale effort underway to make ASGs recover from
 members stopping proper functioning for whatever reason.  See
 https://review.openstack.org/#/c/127884/ for a proposed interface and
 initial implementation.

 Regards,
 Mike
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][ceilometer]: scale up/ down based on number of instances in a group

2015-04-03 Thread Daniel Comnea
Hi all,

Does anyone know if the above use case has made it into the convergence
project, and in which release it was / is going to be merged?

Thanks,
Dani

On Tue, Oct 28, 2014 at 5:40 PM, Daniel Comnea comnea.d...@gmail.com
wrote:

 Thanks all for reply.

 I have spoken with Qiming and @Shardy (IRC nickname) and they confirmed
 this is not possible as of today. Someone else - sorry, I forgot his nickname
 on IRC - suggested writing a Ceilometer query to count the number of
 instances, but what @ZhiQiang said is true, and this is what we have seen via
 the instance sample.

 *@Clint - *that is the case indeed.

 *@ZhiQiang* - what do you mean by *count of resource should be queried
 from specific service's API*? Is it related to Ceilometer's event types
 configuration?

 *@Mike - *my use case is very simple: I have a group of instances, and in
 case the # of instances reaches the minimum number I set, I would like a new
 instance to be spun up - think of it like a cluster where I want to maintain
 a minimum number of members.

 With regards to the proposal you made -
 https://review.openstack.org/#/c/127884/ - that works, but only in a
 specific use case, hence it is not generic, because the assumption is that my
 instances are hooked behind a LBaaS, which is not always the case.

 Looking forward to seeing 'convergence' in action.


 Cheers,
 Dani

 On Tue, Oct 28, 2014 at 3:06 AM, Mike Spreitzer mspre...@us.ibm.com
 wrote:

 Daniel Comnea comnea.d...@gmail.com wrote on 10/27/2014 07:16:32 AM:

  Yes i did but if you look at this example
 
 
 https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml
 

  the flow is simple:

  CPU alarm in Ceilometer triggers the type: OS::Heat::ScalingPolicy
  which then triggers the type: OS::Heat::AutoScalingGroup

 Actually the ScalingPolicy does not trigger the ASG.  BTW,
 ScalingPolicy is mis-named; it is not a full policy, it is only an action
 (the condition part is missing --- as you noted, that is in the Ceilometer
 alarm).  The so-called ScalingPolicy does the action itself when
 triggered.  But it respects your configured min and max size.

 What are you concerned about making your scaling group smaller than your
 configured minimum?  Just checking here that there is not a
 misunderstanding.

 As Clint noted, there is a large-scale effort underway to make Heat
 maintain what it creates despite deletion of the underlying resources.

 There is also a small-scale effort underway to make ASGs recover from
 members stopping proper functioning for whatever reason.  See
 https://review.openstack.org/#/c/127884/ for a proposed interface and
 initial implementation.

 Regards,
 Mike
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [heat] Application level HA via Heat

2015-04-03 Thread Daniel Comnea
Sorry to chime in, but I will throw in another use case for Steven, since it
is about HA/auto-scaling and I think it matches what I asked back in
October.

http://lists.openstack.org/pipermail/openstack-dev/2014-October/049375.html

If you need more info let me know

Dani

On Thu, Apr 2, 2015 at 10:59 AM, Huangtianhua huangtian...@huawei.com
wrote:

 If we replace an autoscaling group member, we can't make sure the attached
 resources stay the same. Why not call the evacuate or rebuild API of
 nova,
 just add meters for HA (VM state or host state) in ceilometer, and then
 signal to an HA resource (such as HARestarter)?

 -----Original Message-----
 From: Steven Hardy [mailto:sha...@redhat.com]
 Sent: 23 December 2014 2:21
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [heat] Application level HA via Heat

 Hi all,

 So, lately I've been having various discussions around $subject, and I
 know it's something several folks in our community are interested in, so I
 wanted to get some ideas I've been pondering out there for discussion.

 I'll start with a proposal of how we might replace HARestarter with
 AutoScaling group, then give some initial ideas of how we might evolve that
 into something capable of a sort-of active/active failover.

 1. HARestarter replacement.

 My position on HARestarter has long been that equivalent functionality
 should be available via AutoScalingGroups of size 1.  Turns out that
 shouldn't be too hard to do:

  resources:
   server_group:
     type: OS::Heat::AutoScalingGroup
     properties:
       min_size: 1
       max_size: 1
       resource:
         type: ha_server.yaml

   server_replacement_policy:
     type: OS::Heat::ScalingPolicy
     properties:
       # FIXME: this adjustment_type doesn't exist yet
       adjustment_type: replace_oldest
       auto_scaling_group_id: {get_resource: server_group}
       scaling_adjustment: 1

 So, currently our ScalingPolicy resource can only support three adjustment
 types, all of which change the group capacity.  AutoScalingGroup already
 supports batched replacements for rolling updates, so if we modify the
 interface to allow a signal to trigger replacement of a group member, then
 the snippet above should be logically equivalent to HARestarter AFAICT.

 The steps to do this should be:

  - Standardize the ScalingPolicy-AutoScalingGroup interface, so
 asynchronous adjustments (e.g. signals) between the two resources don't use
 the adjust method.

  - Add an option to replace a member to the signal interface of
 AutoScalingGroup

  - Add the new replace adjustment type to ScalingPolicy

 I posted a patch which implements the first step, and the second will be
 required for TripleO, e.g we should be doing it soon.

 https://review.openstack.org/#/c/143496/
 https://review.openstack.org/#/c/140781/

 2. A possible next step towards active/active HA failover

 The next part is the ability to notify before replacement that a scaling
 action is about to happen (just like we do for LoadBalancer resources
 already) and orchestrate some or all of the following:

 - Attempt to quiesce the currently active node (may be impossible if it's
   in a bad state)

 - Detach resources (e.g volumes primarily?) from the current active node,
   and attach them to the new active node

 - Run some config action to activate the new node (e.g run some config
   script to fsck and mount a volume, then start some application).

 The first step is possible by putting a SoftwareConfig/SoftwareDeployment
 resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the
 node is too bricked to respond, and specifying DELETE action so it only runs
 when we replace the resource).

 The third step is possible either via a script inside the box which polls
 for the volume attachment, or possibly via an update-only software config.
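
 A rough sketch of what such an in-box script could look like (the device
 path, mount point and service name are made up for illustration, not part
 of the proposal):

    import os
    import subprocess
    import time

    DEVICE = "/dev/vdb"         # where the re-attached volume shows up (assumed)
    MOUNT_POINT = "/mnt/data"   # assumed mount point

    while not os.path.exists(DEVICE):
        time.sleep(5)           # poll until the volume attachment appears

    subprocess.check_call(["fsck", "-y", DEVICE])
    subprocess.check_call(["mount", DEVICE, MOUNT_POINT])
    subprocess.check_call(["systemctl", "start", "myapp"])  # assumed service unit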

 The second step is the missing piece AFAICS.

 I've been wondering if we can do something inside a new heat resource,
 which knows what the current active member of an ASG is, and gets
 triggered on a replace signal to orchestrate e.g deleting and creating a
 VolumeAttachment resource to move a volume between servers.

 Something like:

  resources:
   server_group:
     type: OS::Heat::AutoScalingGroup
     properties:
       min_size: 2
       max_size: 2
       resource:
         type: ha_server.yaml

   server_failover_policy:
     type: OS::Heat::FailoverPolicy
     properties:
       auto_scaling_group_id: {get_resource: server_group}
       resource:
         type: OS::Cinder::VolumeAttachment
         properties:
           # FIXME: refs is a ResourceGroup interface not currently
           # available in AutoScalingGroup
           instance_uuid: {get_attr: [server_group, refs, 1]}

   server_replacement_policy:
     type: OS::Heat::ScalingPolicy
     properties:
       # FIXME: this adjustment_type doesn't exist yet
       adjustment_type: replace_oldest
       auto_scaling_policy_id: {get_resource: 

Re: [openstack-dev] [Openstack-operators] Security around enterprise credentials and OpenStack API

2015-04-01 Thread Daniel Comnea
+ developers mailing list, hopefully a developer might be able to chime in.



On Wed, Apr 1, 2015 at 3:58 AM, Marc Heckmann marc.heckm...@ubisoft.com
wrote:

 Hi all,

 I was going to post a similar question this evening, so I decided to just
 bounce on Mathieu’s question. See below inline.

  On Mar 31, 2015, at 8:35 PM, Matt Fischer m...@mattfischer.com wrote:
 
  Mathieu,
 
  We use LDAP (AD) with a fallback to MySQL. This allows us to store service
 accounts (like nova) and team accounts for use in Jenkins/scripts etc. in
 MySQL. We only do Identity via LDAP, and we have a forked copy of this
 driver (https://github.com/SUSE-Cloud/keystone-hybrid-backend) to do
 this. We don't have any permissions to write into LDAP or move people into
 groups, so we keep a copy of users locally for purposes of user-list
 operations. The only interaction between OpenStack and LDAP for us is when
 that driver tries a bind.
 
 
 
  On Tue, Mar 31, 2015 at 6:06 PM, Mathieu Gagné mga...@iweb.com wrote:
  Hi,
 
  Let's say I wish to use an existing enterprise LDAP service to manage my
  OpenStack users so I only have one place to manage users.

  How would you manage authentication and credentials from a security
  point of view? Do you tell your users to use their enterprise
  credentials, or do you use another method/credentials?

 We too have integration with enterprise credentials through LDAP, but as
 you suggest, we certainly don’t want users to use those credentials in
 scripts or store them on instances. Instead we have a custom Web portal
 where they can create separate Keystone credentials for their
 project/tenant which are stored in Keystone’s MySQL database. Our LDAP
 integration actually happens at a level above Keystone. We don’t actually
 let users acquire Keystone tokens using their LDAP accounts.

 We’re not really happy with this solution; it’s a hack, and we are looking
 to revamp it entirely. The problem is that I have never been able to find a
 clear answer on how to do this with Keystone.

 I’m actually quite partial to the way AWS IAM works, especially the
 instance “role” feature. Roles in AWS IAM are similar to TRUSTS in Keystone,
 except that they are integrated into the instance metadata. It’s pretty cool.

 Other than that, RBAC policies in OpenStack get us a good way towards
 IAM-like functionality. We just need a policy editor in Horizon.

 Anyway, the problem is around delegation of credentials which are used
 non-interactively. We need to limit what those users can do (through RBAC
 policy) but also somehow make the credentials ephemeral.
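
 One existing building block for non-interactive delegation is Keystone
 trusts. A hedged sketch with python-keystoneclient follows - the trustee
 user id, role name and expiry are made-up illustration values, and this is
 offered as an assumption about how trusts could be applied here, not as the
 recommended answer we are asking for:

    import datetime

    from keystoneclient.v3 import client

    ks = client.Client(auth_url="https://keystone.example.com:5000/v3",
                       username="alice", password="...",
                       project_name="demo",
                       user_domain_name="Default",
                       project_domain_name="Default")

    # Delegate a limited role to a service user, with an expiry, rather than
    # handing out the enterprise (LDAP) password itself.
    expires = datetime.datetime.utcnow() + datetime.timedelta(hours=12)
    trust = ks.trusts.create(trustor_user=ks.user_id,
                             trustee_user="<jenkins-user-id>",  # assumed value
                             project=ks.project_id,
                             role_names=["_member_"],           # assumed role
                             impersonation=True,
                             expires_at=expires)
    print(trust.id)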

 If someone (Keystone developer?) could point us in the right direction,
 that would be great.

 Thanks in advance.

 
  The reason is that (usually) enterprise credentials also give access to
  a whole lot of systems other than OpenStack itself. And it goes without
  saying that I'm not fond of the idea of storing my password in plain
  text to be used by some scripts I created.
 
  What's your opinion/suggestion? Do you guys have a second credential
  system solely used for OpenStack?
 
  --
  Mathieu
 
  ___
  OpenStack-operators mailing list
  openstack-operat...@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 
  ___
  OpenStack-operators mailing list
  openstack-operat...@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] resource quotas limit per stacks within a project

2015-04-01 Thread Daniel Comnea
Any ideas/ thoughts please?

In the VMware world, this is basically the same feature as provided by resource pools.


Thanks,
Dani

On Tue, Mar 31, 2015 at 10:37 PM, Daniel Comnea comnea.d...@gmail.com
wrote:

 Hi all,

 I'm trying to understand what options i have for the below use case...

 Having multiple stacks (with varying numbers of instances) deployed within
 one OpenStack project (tenant), how can I guarantee that there will be no
 race for the project's resources?

 E.g - say i have few stacks like

 stack 1 = production
 stack 2 = development
 stack 3 = integration

 I don't want to be in a situation where stack 3 (because of a need to run
 some heavy tests) will use all of the resources for a short while, while
 production suffers from it.

 Any ideas?

 Thanks,
 Dani

 P.S. - I'm aware of the heavy work being put into improving the quotas or
 the CPU pinning; however, that is at the project level.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Hierarchical Multitenancy quotas

2015-03-31 Thread Daniel Comnea
I see this spec has been merged; however, can anyone point out whether this
will make it into the final Kilo release?

Thanks,
Dani

On Wed, Jan 7, 2015 at 5:03 PM, Tim Bell tim.b...@cern.ch wrote:

  Are we yet at the point  in the New Year to register requests for
 exceptions ?



 There is strong interest from CERN and Yahoo! in this feature, and there
 are many +1s and no unaddressed -1s.



 Thanks for consideration,



 Tim



  Joe wrote

  ….

 

 Nova's spec deadline has passed, but I think this is a good candidate for
 an exception.  We will announce the process for asking for a formal spec
 exception shortly after new years.

 



 *From:* Tim Bell [mailto:tim.b...@cern.ch]
 *Sent:* 23 December 2014 19:02
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] Hierarchical Multitenancy



 Joe,



 Thanks… there seems to be good agreement on the spec and the matching
 implementation is well advanced with BARC so the risk is not too high.



 Launching HMT with quota in Nova in the same release cycle would also
 provide a more complete end user experience.



 For CERN, this functionality is very interesting as it allows the central
 cloud providers to delegate the allocation of quotas to the LHC
 experiments. Thus, from a central perspective, we are able to allocate N
 thousand cores to an experiment and delegate their resource co-ordinator to
 prioritise the work within the experiment. Currently, we have many manual
 helpdesk tickets with significant latency to adjust the quotas.



 Tim



 *From:* Joe Gordon [mailto:joe.gord...@gmail.com joe.gord...@gmail.com]
 *Sent:* 23 December 2014 17:35
 *To:* OpenStack Development Mailing List
 *Subject:* Re: [openstack-dev] Hierarchical Multitenancy




 On Dec 23, 2014 12:26 AM, Tim Bell tim.b...@cern.ch wrote:
 
 
 
  It would be great if we can get approval for the Hierarchical Quota
 handling in Nova too (https://review.openstack.org/#/c/129420/).

 Nova's spec deadline has passed, but I think this is a good candidate for
 an exception.  We will announce the process for asking for a formal spec
 exception shortly after new years.

 
 
 
  Tim
 
 
 
  From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
  Sent: 23 December 2014 01:22
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] Hierarchical Multitenancy
 
 
 
  Hi Raildo,
 
 
 
  Thanks for putting this post together. I really appreciate all the work
 you guys have done (and continue to do) to get the Hierarchical
 Multitenancy code into Keystone. It’s great to have the base implementation
 merged into Keystone for the K1 milestone. I look forward to seeing the
 rest of the development land during the rest of this cycle and what the
 other OpenStack projects build around the HMT functionality.
 
 
 
  Cheers,
 
  Morgan
 
 
 
 
 
 
 
  On Dec 22, 2014, at 1:49 PM, Raildo Mascena rail...@gmail.com wrote:
 
 
 
  Hello folks, my team and I developed the Hierarchical Multitenancy
 concept for Keystone in Kilo-1. But what is Hierarchical Multitenancy? What
 have we implemented? What are the next steps for Kilo?

  To answer these questions, I created a blog post:
 http://raildo.me/hierarchical-multitenancy-in-openstack/
 
 
 
  Any question, I'm available.
 
 
 
  --
 
  Raildo Mascena
 
  Software Engineer.
 
  Bachelor of Computer Science.
 
  Distributed Systems Laboratory
  Federal University of Campina Grande
  Campina Grande, PB - Brazil
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators][Openstack-dev][all] how to apply security/back-ported release to Icehouse production

2015-03-30 Thread Daniel Comnea
No thoughts?



On Sat, Mar 28, 2015 at 10:35 PM, Daniel Comnea comnea.d...@gmail.com
wrote:

 Hi all,

 Can anyone shed some light on how you upgrade / apply the
 security/back-ported patches?

 E.g. - let's say I already have a production environment running Icehouse
 2014.1 as per the link [1] and I'd like to upgrade it to the latest Icehouse
 release, 2014.1.4.

 Also, do you have to go via a sequential process like

 2014.1 -> 2014.1.1 -> 2014.1.2 -> 2014.1.3 -> 2014.1.4, or can I jump from
 2014.1 to 2014.1.4?

 And the last question is: can I cherry-pick which bug fixes of a project
 to pull? Can I pull only one project - e.g. Heat from the latest release 2014.1.4?


 Thanks,
 Dani

 [1] https://wiki.openstack.org/wiki/Releases

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-operators][Openstack-dev][all] how to apply security/back-ported release to Icehouse production

2015-03-28 Thread Daniel Comnea
Hi all,

Can anyone shed some light on how you upgrade / apply the
security/back-ported patches?

E.g. - let's say I already have a production environment running Icehouse
2014.1 as per the link [1] and I'd like to upgrade it to the latest Icehouse
release, 2014.1.4.

Also, do you have to go via a sequential process like

2014.1 -> 2014.1.1 -> 2014.1.2 -> 2014.1.3 -> 2014.1.4, or can I jump from
2014.1 to 2014.1.4?

And the last question is: can I cherry-pick which bug fixes of a project
to pull? Can I pull only one project - e.g. Heat from the latest release 2014.1.4?


Thanks,
Dani

[1] https://wiki.openstack.org/wiki/Releases
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Info on Currently Supported Data Drivers

2015-03-25 Thread Daniel Comnea
This project looks very interesting and I might give it a go by installing it
standalone. I have a question though: are there any plans to expose the
current output as part of Horizon or any other UI?

Thx,
Dani

On Wed, Mar 25, 2015 at 5:15 AM, Zhou, Zhenzan zhenzan.z...@intel.com
wrote:

  Just found that it is already supported, e.g.



 openstack  congress driver schema show ceilometer



 So here is the way to check data source drivers:



 For supported data source drivers:

 1.   openstack  congress driver list

 2.   openstack  congress driver schema show ds-name



 For enabled data source drivers:

 1.   openstack congress datasource list

 2.   openstack congress datasource schema show ds-name



 BR

 Zhou Zhenzan



 *From:* Zhou, Zhenzan [mailto:zhenzan.z...@intel.com]
 *Sent:* Wednesday, March 25, 2015 12:24 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Congress] Info on Currently Supported
 Data Drivers



 Hi,



 The ‘driver list’ sub-command can list the supported data source drivers,
 but we cannot show a driver's schema if it’s NOT LOADED. So we still have to
 check the code for the schema. My point is that we could support showing
 schemas for any supported data source driver, even if it’s not loaded.



 zhenzan@zhenzan-openstack:~$ openstack congress driver list
 +----------------+---------------------------------------------------------------------------+
 | id             | description                                                               |
 +----------------+---------------------------------------------------------------------------+
 | ceilometer     | Datasource driver that interfaces with ceilometer.                       |
 | plexxi         | Datasource driver that interfaces with PlexxiCore.                       |
 | swift          | Datasource driver that interfaces with swift.                            |
 | neutronv2      | Datasource driver that interfaces with OpenStack Networking aka Neutron. |
 | nova           | Datasource driver that interfaces with OpenStack Compute aka nova.       |
 | murano         | Datasource driver that interfaces with murano                            |
 | keystone       | Datasource driver that interfaces with keystone.                         |
 | cloudfoundryv2 | Datasource driver that interfaces with cloudfoundry                      |
 | ironic         | Datasource driver that interfaces with OpenStack bare metal aka ironic.  |
 | cinder         | Datasource driver that interfaces with OpenStack cinder.                 |
 | vcenter        | Datasource driver that interfaces with vcenter                           |
 | glancev2       | Datasource driver that interfaces with OpenStack Images aka Glance.      |
 +----------------+---------------------------------------------------------------------------+

 zhenzan@zhenzan-openstack:~$ openstack congress datasource schema show ceilometer

 ERROR: openstack Resource ceilometer not found (HTTP 404)





 BR

 Zhou Zhenzan

 *From:* Pierre-Emmanuel Ettori [mailto:pe.ett...@gmail.com
 pe.ett...@gmail.com]
 *Sent:* Wednesday, March 25, 2015 11:54 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Congress] Info on Currently Supported
 Data Drivers



 Hi Zhenzan,
 Actually I believe the command that Tim is looking for is:

 openstack congress datasource list

 Please let us know if you are running into any issue.

 Thanks
 Pierre



 On Tue, Mar 24, 2015 at 8:39 PM Tim Hinrichs thinri...@vmware.com wrote:

  Hi Zhenzan,



 I don't have the CLI in front of me, but check out the 'driver' commands.
 The command you're looking for is something like the following.



 $ openstack congress driver list



 Tim


   --

 *From:* Zhou, Zhenzan zhenzan.z...@intel.com
 *Sent:* Tuesday, March 24, 2015 7:39 PM


 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Congress] Info on Currently Supported
 Data Drivers

 Hi,



 To check LOADED (or ENABLED) data source drivers for a running system, you
 can use the congress CLI. Maybe we could add a command to list SUPPORTED data
 source drivers?



 zhenzan@zhenzan-openstack:~$ openstack congress datasource list
 +--------------------------------------+--------+---------+------+-----------------------------------------------------+
 | id                                   | name   | enabled | type | config                                              |
 +--------------------------------------+--------+---------+------+-----------------------------------------------------+
 | 20a33403-e8d0-440a-b36f-0383bfcfd06f | cinder | True    | None | {u'username': u'admin', u'tenant_name': u'admin', u'password': u'hidden', u'auth_url': 

Re: [openstack-dev] [Kolla] PTL Candidacy

2015-03-18 Thread Daniel Comnea
Congrats Steve!

On Wed, Mar 18, 2015 at 12:51 AM, Daneyon Hansen (danehans) 
daneh...@cisco.com wrote:


  Congratulations Steve!

  Regards,
 Daneyon Hansen
 Software Engineer
 Email: daneh...@cisco.com
 Phone: 303-718-0400
 http://about.me/daneyon_hansen

   From: Angus Salkeld asalk...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, March 17, 2015 at 5:05 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Kolla] PTL Candidacy

There have been no other candidates within the allowed time, so
 congratulations Steve on being the new Kolla PTL.

  Regards
  Angus Salkeld



 On Thu, Mar 12, 2015 at 8:13 PM, Angus Salkeld asalk...@mirantis.com
 wrote:

  Candidacy confirmed.

  -Angus

  On Thu, Mar 12, 2015 at 6:54 PM, Steven Dake (stdake) std...@cisco.com
 wrote:

   I am running for PTL for the Kolla project.  I have been executing in
 an unofficial PTL capacity for the project for the Kilo cycle, but I feel
 it is important for our community to have an elected PTL and have asked
 Angus Salkeld, who has no stake in the outcome of the election, to officiate
 the election [1].

  For the Kilo cycle our community went from zero LOC to a fully working
 implementation of most of the services based upon Kubernetes as the
 backend.  Recently I led the effort to remove Kubernetes as a backend and
 provide container contents, building, and management on bare metal using
 docker-compose which is nearly finished.  At the conclusion of Kilo, it
 should be possible from one shell script to start an AIO full deployment of
 all of the current OpenStack git-namespaced services using containers built
 from RPM packaging.

  For Liberty, I’d like to take our community and code to the next
 level.  Since our containers are fairly solid, I’d like to integrate with
 existing projects such as TripleO, os-ansible-deployment, or Fuel.
 Alternatively the community has shown some interest in creating a
 multi-node HA-ified installation toolchain.

  I am deeply committed to leading the community where the core
 developers want the project to go, wherever that may be.

  I am strongly in favor of adding HA features to our container
 architecture.

  I would like to add .deb package support and from-source support to
 our docker container build system.

  I would like to implement a reference architecture where our
 containers can be used as a building block for deploying a reference
 platform of 3 controller nodes, ~100 compute nodes, and ~10 storage nodes.

  I am open to expanding our scope to address full deployment, but would
 prefer to merge our work with one or more existing upstreams such as
 TripleO, os-ansible-deployment, and Fuel.

  Finally I want to finish the job on functional testing, so all of our
 containers are functionally checked and gated per commit on Fedora, CentOS,
 and Ubuntu.

  I am experienced as a PTL, leading the Heat Orchestration program from
 zero LOC through OpenStack integration for 3 development cycles.  I write
 code as a PTL and was instrumental in getting the Magnum Container Service
 code-base kicked off from zero LOC where Adrian Otto serves as PTL.  My
 past experiences include leading Corosync from zero LOC to a stable
 building block of High Availability in Linux.  Prior to that I was part of
 a team that implemented Carrier Grade Linux.  I have a deep and broad
 understanding of open source, software development, high performance team
 leadership, and distributed computing.

  I would be pleased to serve as PTL for Kolla for the Liberty cycle and
 welcome your vote.

  Regards
 -steve

  [1] https://wiki.openstack.org/wiki/Kolla/PTL_Elections_March_2015


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Flush expired tokens automatically ?

2015-01-27 Thread Daniel Comnea
Thanks Adam, Thierry!

Dani

On Tue, Jan 27, 2015 at 1:43 PM, Adam Young ayo...@redhat.com wrote:

 Short term answers:

 The amount of infrastructure we would have to build to replicate CRON is
 not worth it.

 Figuring out a CRON strategy for nontrivial deployment is part of a larger
 data management scheme.


 Long term answers:

 Tokens should not be persisted.  We have been working toward ephemeral
 tokens for a long time, but the vision of how to get there is not uniformly
 shared among the team.  We spent a lot of time arguing about AE tokens,
 which looked promising, but do not support federation.

 Where we are headed is a split of the data in the token into an ephemeral
 portion and a persisted portion.  The persisted portion would be reused,
 and would represent the delegation of authority. The ephemeral portion will
 represent the time aspects of the token: when issued, when expired, etc.
 The ephemeral portion would refer to the persisted portion.

 The revocation events code  is necessary for PKI tokens, and might be
 required depending on how we do the ephemeral/persisted split. With AE
 tokens it would have been necessary, but with a unified delegation
 mechanism, it would be less so.

 If anyone feels the need for ephemeral tokens strongly enough to
 contribute, please let me know.  We've put a lot of design into where we
 are today, and I would encourage you to learn the issues before jumping in
 to the solutions.  I'm more than willing to guide any new development along
 these lines.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-operators]flush expired tokens and moves deleted instance

2015-01-26 Thread Daniel Comnea
+100

Dani

On Mon, Jan 26, 2015 at 1:10 AM, Tim Bell tim.b...@cern.ch wrote:

  This is often mentioned as one of those items which catches every
 OpenStack cloud operator at some time. It’s not clear to me that there
 could not be a scheduled job built into the system with a default frequency
 (configurable, ideally).



 If we are all configuring this as a cron job, is there a reason that it
 could not be built into the code ?



 Tim



 *From:* Mike Smith [mailto:mism...@overstock.com]
 *Sent:* 24 January 2015 18:08
 *To:* Daniel Comnea
 *Cc:* OpenStack Development Mailing List (not for usage questions);
 openstack-operat...@lists.openstack.org
 *Subject:* Re: [Openstack-operators]
 [openstack-dev][openstack-operators]flush expired tokens and moves deleted
 instance



 It is still mentioned in the Juno installation docs:



 By default, the Identity service stores expired tokens in the database
 indefinitely. The accumulation of expired tokens considerably increases the
 database size and might degrade service performance, particularly in
 environments with limited resources.

 We recommend that you use cron to configure a periodic task that purges
 expired tokens hourly:

 # (crontab -l -u keystone 2>&1 | grep -q token_flush) || \
   echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' \
   >> /var/spool/cron/keystone






 Mike Smith
 Principal Engineer, Website Systems
 Overstock.com



  On Jan 24, 2015, at 10:03 AM, Daniel Comnea comnea.d...@gmail.com
 wrote:



 Hi all,

   I just bumped into Sebastien's blog where he suggested a cron job
 should run in production to tidy up expired tokens - see the blog [1].

 Could you please remind me if this is still required in Icehouse / Juno? (I
 kind of remember I've seen some work being done in this direction, but I
 can't find the emails.)

   Thanks,
 Dani

 [1]
 http://www.sebastien-han.fr/blog/2014/08/18/a-must-have-cron-job-on-your-openstack-cloud/

 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




  --



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-operators]flush expired tokens and moves deleted instance

2015-01-24 Thread Daniel Comnea
Hi all,


I just bumped into Sebastien's blog where he suggested a cron job should
run in production to tidy up expired tokens - see the blog [1].

Could you please remind me if this is still required in Icehouse / Juno? (I
kind of remember I've seen some work being done in this direction, but I
can't find the emails.)


Thanks,
Dani

[1]
http://www.sebastien-han.fr/blog/2014/08/18/a-must-have-cron-job-on-your-openstack-cloud/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][ceilometer]: scale up/ down based on number of instances in a group

2014-10-28 Thread Daniel Comnea
Thanks all for reply.

I have spoken with Qiming and @Shardy (IRC nickname) and they confirmed this
is not possible as of today. Someone else - sorry, I forgot his nickname on
IRC - suggested writing a Ceilometer query to count the number of instances,
but what @ZhiQiang said is true, and this is what we have seen via the
instance sample.

*@Clint - *that is the case indeed.

*@ZhiQiang* - what do you mean by *count of resource should be queried
from specific service's API*? Is it related to Ceilometer's event types
configuration?

*@Mike - *my use case is very simple: I have a group of instances, and in
case the # of instances reaches the minimum number I set, I would like a new
instance to be spun up - think of it like a cluster where I want to maintain
a minimum number of members.

With regards to the proposal you made -
https://review.openstack.org/#/c/127884/ - that works, but only in a specific
use case, hence it is not generic, because the assumption is that my instances
are hooked behind a LBaaS, which is not always the case.

Looking forward to seeing 'convergence' in action.


Cheers,
Dani

On Tue, Oct 28, 2014 at 3:06 AM, Mike Spreitzer mspre...@us.ibm.com wrote:

 Daniel Comnea comnea.d...@gmail.com wrote on 10/27/2014 07:16:32 AM:

  Yes i did but if you look at this example
 
 
 https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml
 

  the flow is simple:

  CPU alarm in Ceilometer triggers the type: OS::Heat::ScalingPolicy
  which then triggers the type: OS::Heat::AutoScalingGroup

 Actually the ScalingPolicy does not trigger the ASG.  BTW,
 ScalingPolicy is mis-named; it is not a full policy, it is only an action
 (the condition part is missing --- as you noted, that is in the Ceilometer
 alarm).  The so-called ScalingPolicy does the action itself when
 triggered.  But it respects your configured min and max size.

 What are you concerned about making your scaling group smaller than your
 configured minimum?  Just checking here that there is not a
 misunderstanding.

 As Clint noted, there is a large-scale effort underway to make Heat
 maintain what it creates despite deletion of the underlying resources.

 There is also a small-scale effort underway to make ASGs recover from
 members stopping proper functioning for whatever reason.  See
 https://review.openstack.org/#/c/127884/ for a proposed interface and
 initial implementation.

 Regards,
 Mike
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][ceilometer]: scale up/ down based on number of instances in a group

2014-10-25 Thread Daniel Comnea
HI all,


Unfortunately I couldn't find any resource - blueprint / document / examples /
presentations - about my use case below, hence the question I'm raising now
(if this is not the best place to ask, please let me know).


Having a group of 5 instances, I'd like to always maintain a minimum of 2
instances by using the Heat autoscaling feature and Ceilometer. Is that possible?

I've seen the WordPress autoscaling examples based on the cpu_util metric, but
my use case is more about the number of instances.


Cheers,
Dani
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev