[openstack-dev] Error while creating an Instance - IceHouse / Ubuntu 14.04

2014-04-09 Thread Martinx - ジェームズ
Guys,

I'm trying to create an instance here in my lab, but I'm seeing the
following error:

command: "nova boot --image dda95a36-71e0-4474-b3e2-4f5ceef79c14 --flavor 2
my_first_vm"

nova-api.log:

---
2014-04-10 03:37:02.250 1743 ERROR nova.api.openstack.wsgi [-] Exception
handling resource: multi() got an unexpected keyword argument 'body'
2014-04-10 03:37:02.250 1743 TRACE nova.api.openstack.wsgi Traceback (most
recent call last):
2014-04-10 03:37:02.250 1743 TRACE nova.api.openstack.wsgi   File
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 983, in
_process_stack
2014-04-10 03:37:02.250 1743 TRACE nova.api.openstack.wsgi
action_result = self.dispatch(meth, request, action_args)
2014-04-10 03:37:02.250 1743 TRACE nova.api.openstack.wsgi   File
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 1070,
in dispatch
2014-04-10 03:37:02.250 1743 TRACE nova.api.openstack.wsgi return
method(req=request, **action_args)
2014-04-10 03:37:02.250 1743 TRACE nova.api.openstack.wsgi TypeError:
multi() got an unexpected keyword argument 'body'
2014-04-10 03:37:02.250 1743 TRACE nova.api.openstack.wsgi
2014-04-10 03:37:02.285 1743 INFO nova.osapi_compute.wsgi.server [-]
2001:1291:2bf:fffa::500 "POST
/c24d0871dbd4461da2c854d493ec7cd7/os-server-external-events HTTP/1.1"
status: 400 len: 274 time: 0.0363338
 ---

and at neutron/server.log:

---
2014-04-10 03:37:02.287 2298 ERROR neutron.notifiers.nova [-] Failed to
notify nova on events: [{'status': 'completed', 'tag':
u'9b1e88f0-cb88-4c89-8a20-bac8ef2e9f9e', 'name': 'network-vif-plugged',
'server_uuid': u'649273f9-e382-4fda-9b9a-40201bdc1684'}]
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova Traceback (most
recent call last):
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova   File
"/usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py", line 187, in
send_events
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova
batched_events)
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova   File
"/usr/lib/python2.7/dist-packages/novaclient/v1_1/contrib/server_external_events.py",
line 39, in create
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova
return_raw=True)
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova   File
"/usr/lib/python2.7/dist-packages/novaclient/base.py", line 152, in _create
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova _resp, body =
self.api.client.post(url, body=body)
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova   File
"/usr/lib/python2.7/dist-packages/novaclient/client.py", line 286, in post
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova return
self._cs_request(url, 'POST', **kwargs)
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova   File
"/usr/lib/python2.7/dist-packages/novaclient/client.py", line 260, in
_cs_request
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova **kwargs)
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova   File
"/usr/lib/python2.7/dist-packages/novaclient/client.py", line 242, in
_time_request
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova resp, body =
self.request(url, method, **kwargs)
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova   File
"/usr/lib/python2.7/dist-packages/novaclient/client.py", line 236, in
request
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova raise
exceptions.from_response(resp, body, url, method)
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova BadRequest: The
server could not comply with the request since it is either malformed or
otherwise incorrect. (HTTP 400)
2014-04-10 03:37:02.287 2298 TRACE neutron.notifiers.nova
---

Anyway, I can see that the qemu process got started, but the instance state
remains "spawning"... A few minutes later, the qemu process died...

nova-compute.log:

---
2014-04-10 03:41:59.461 1431 WARNING nova.virt.libvirt.driver
[req-7dce4196-58c9-4cc2-bfe4-06c3bd710870 6c2d4385df4d40a2804de042bb6b3466
5e0106fa81104c5cbe21e1ccc9eb1a36] Timeout waiting for vif plugging callback
for instance 649273f9-e382-4fda-9b9a-40201bdc1684
---

I'm trying this with Neutron ML2 Flat, the latest packages from Ubuntu 14.04,
with IPv6 for the APIs and endpoints (not trying IPv6 for tenant subnets yet)...
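
To check whether nova-api itself rejects this call, the request the neutron
notifier makes can be reproduced standalone. Here is a minimal sketch with
novaclient's server_external_events contrib extension (the credentials and
auth URL are placeholders; the event values are taken from the log above):

---
from novaclient import extension
from novaclient.v1_1 import client
from novaclient.v1_1.contrib import server_external_events

ext = extension.Extension('server_external_events', server_external_events)
nova = client.Client('admin', 'ADMIN_PASSWORD', 'service',
                     'http://[2001:1291:2bf:fffa::500]:5000/v2.0',
                     extensions=[ext])

# Same payload shape as the failed notification in neutron/server.log.
events = [{'server_uuid': '649273f9-e382-4fda-9b9a-40201bdc1684',
           'name': 'network-vif-plugged',
           'status': 'completed',
           'tag': '9b1e88f0-cb88-4c89-8a20-bac8ef2e9f9e'}]

print(nova.server_external_events.create(events))
---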

Tips?!

Thanks!
Thiago
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API meeting

2014-04-09 Thread Christopher Yeoh
Hi,

Just a reminder that the weekly Nova API meeting is being held tomorrow
Friday UTC . 

We encourage cloud operators and those who use the REST API, such as
SDK developers and others who are interested in the future of the
API, to participate. 

In other timezones the meeting is at:

EST 20:00 (Thu)
Japan 09:00 (Fri)
China 08:00 (Fri)
ACDT 9:30 (Fri)

The proposed agenda and meeting details are here: 

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda. 

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bugs in review page? When I log in and modify Mailing Address it says I'm not a member.

2014-04-09 Thread Chenliang (L)
Thank you very much.

I had only checked https://review.openstack.org/#/settings/, not 
http://www.openstack.org/community/members/.

-Original Message-
From: Anita Kuno [mailto:ante...@anteaya.info] 
Sent: Thursday, April 10, 2014 10:24 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Bugs in review page? When I log in and modify 
Mailing Address it says I'm not a member.

On 04/09/2014 10:06 PM, Chenliang (L) wrote:
> Hi.
> 
> I can't complete git review; it says to set contact info in 
> https://review.openstack.org/#/settings/contact.
> 
> When I log in at https://review.openstack.org/#/settings/contact (I 
> log in with email hs.c...@huawei.com) and modify Mailing Address and 
> Save changes, it raises the following errors:
> Application Error
> Server Error
> The request could not be completed. You may not be a member of the 
> foundation registered under this email address. Before continuing, 
> please make sure you have joined the foundation 
> at http://openstack.org/register/
> 
> In the Profile tab my info is:
> Username LiangChen
> Full Name LiangChen
> Email Address hs.c...@huawei.com
> Registered Jan 22, 2014 2:34 PM
> Account ID 10070
> 
> And I have signed the ICLA:
> Status Name Description Accepted
>  Verified ICLA OpenStack Individual Contributor License Agreement Jan 
> 22, 2014 2:50 PM Jan 22, 2014 2:50 PM
> 
> Could someone please tell me how to solve it?
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Is this you? http://www.openstack.org/community/members/profile/9193

If not, you need to ensure you are signed up as a member of the foundation.

Go here: https://www.openstack.org/join/register/
and fill in the form and register as a foundation member.

Your name needs to appear in the foundation members directory:
http://www.openstack.org/community/members/
before you can be sure that step is complete.

review.openstack.org (Gerrit) will be ensuring you are a foundation member 
before you can submit patches.

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso][neutron] proxying oslo.messaging from management network into tenant network/VMs

2014-04-09 Thread isaku yamahata
> Also, how are you proposing to deal with live migration of VMs ? The
> virtio serial channel can get closed due to QEMU migrating while the
> proxy is in the middle of sending data to the guest VM, potentially
> causing a lost or mangled message in the guest, and the sender won't
> know this if this channel is write-only since there's no ACK.

Basically the agent in the host will take care of it. It will probably
periodically push the necessary information into the guest agent.
In the Neutron case, the existing neutron agents in the host already see
message loss/reorder,
so the situation doesn't get worse with this proposal.


> What sort of things are you expecting the guest agent to do for Neutron?

It depends on what network service the guest agent provides.
For example:
- in the case of a firewall, update iptables, update routing tables and so on
- in the case of a loadbalancer, start/stop haproxy with a new configuration

The neutron server sends messages for configuration changes.
- guest agents update their settings/configuration on a new request
  For example, when a user changes firewall rules, the requested change is
pushed to the agent.

The guest agent sends messages to the neutron server for maintenance.
- status report: update stats/agent liveness
  In the loadbalancer example, the loadbalancer needs to report
its status (active/slave) to the neutron server,
  and then a reaction will be taken.
- periodically get the current configuration to cope with message reorder/loss
  Agents need to poll and resync their configuration in case the
agent's local configuration is out of sync with what the neutron server
thinks.
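
To make the firewall/loadbalancer examples above concrete, here is a minimal
sketch of what such a guest agent loop could look like (assumptions:
newline-delimited JSON frames over a virtio-serial port at a made-up path,
and placeholder handler names; this is not the proposed oslo.messaging
transport itself):

import json

def apply_firewall_rules(rules):
    # Placeholder: a real agent would rewrite the iptables rules here.
    print('applying %d firewall rules' % len(rules))

def reload_haproxy(config):
    # Placeholder: a real agent would write the config and reload haproxy.
    print('reloading haproxy with new configuration')

HANDLERS = {
    'update_firewall_rules': lambda args: apply_firewall_rules(args['rules']),
    'reload_loadbalancer': lambda args: reload_haproxy(args['config']),
}

def serve(device='/dev/virtio-ports/org.openstack.guest-agent'):
    # Read one JSON message per line from the host-side proxy agent and
    # dispatch it; cast-only, so nothing is written back to the host.
    with open(device, 'rb', 0) as channel:
        for line in iter(channel.readline, b''):
            message = json.loads(line.decode('utf-8'))
            handler = HANDLERS.get(message.get('method'))
            if handler:
                handler(message.get('args', {}))

if __name__ == '__main__':
    serve()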

thanks,
Isaku Yamahata


On Thu, Apr 10, 2014 at 2:11 AM, Daniel P. Berrange  wrote:
> On Wed, Apr 09, 2014 at 05:33:49PM +0900, Isaku Yamahata wrote:
>> Hello developers.
>>
>>
>> As discussed many times so far[1], there are many projects that need
>> to propagate RPC messages into VMs running on OpenStack. Neutron in my case.
>>
>> My idea is to relay RPC messages from management network into tenant
>> network over a file-like object. By file-like object, I mean virtio-serial,
>> unix domain socket, unix pipe and so on.
>> I've written some code based on oslo.messaging[2][3] and documentation
>> on use cases.[4][5]
>> Only file-like transport and proxying messages would be in oslo.messaging
>> and agent side code wouldn't be a part of oslo.messaging.
>>
>>
>> use cases:([5] for more figures)
>> file-like object: virtio-serial, unix domain socket, unix pipe
>>
>>   server <-> AMQP <-> agent in host <-virtio serial-> guest agent in VM
>>   per VM
>>
>>   server <-> AMQP <-> agent in host <-unix socket/pipe->
>>  agent in tenant network <-> guest agent in VM
>>
>>
>> So far there are security concerns about forwarding oslo.messaging from the management
>> network into the tenant network. One approach is to allow only cast-RPC from
>> server to guest agent in VM so that guest agent in VM only receives messages
>> and can't send anything to servers. With unix pipe, it's write-only
>> for server, read-only for guest agent.
>>
>> Thoughts? comments?
>
> I'm still somewhat apprehensive about the idea of just proxying arbitrary
> data between host & guest agent at the message bus protocol level.
> I'd tend to be more comfortable with something like that going through the virt
> driver API in the compute node.
>
> Also, how are you proposing to deal with live migration of VMs ? The
> virtio serial channel can get closed due to QEMU migrating while the
> proxy is in the middle of sending data to the guest VM, potentially
> causing a lost or mangled message in the guest, and the sender won't
> know this if this channel is write-only since there's no ACK.
>
>> Details of Neutron NFV use case[6]:
>> Neutron services so far typically run agents in the host; the agent
>> in the host receives RPCs from the neutron server, then executes the necessary
>> operations. Sometimes the agent in the host issues RPCs to the neutron server
>> periodically (e.g. status reports etc.).
>> It's desirable to make such services virtualized as Network Function
>> Virtualization (NFV), i.e. make those features run in VMs. So it's quite a
>> natural approach to propagate those RPC messages to agents in VMs.
>
> What sort of things are you expecting the guest agent to do for Neutron?
> You have to bear in mind that the guest OS is 100% untrusted from the
> host's POV, so anything that Neutron asks the guest agent to do can be
> completely ignored, or manipulated in any way the guest OS decides to.
> Similarly, if there were a feedback channel, any data the Neutron might
> receive back from the guest agent has to be considered untrustworthy,
> so should not be used to make functional decisions in Neutron.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gn

Re: [openstack-dev] Dropping or weakening the 'only import modules' style guideline - H302

2014-04-09 Thread Chris Behrens

On Apr 9, 2014, at 12:50 PM, Dan Smith  wrote:

>>> So I'm a soft -1 on dropping it from hacking.
> 
> Me too.
> 
>> from testtools import matchers
>> ...
>> 
>> Or = matchers.Or
>> LessThan = matchers.LessThan
>> ...
> 
> This is the right way to do it, IMHO, if you have something like
> matchers.Or that needs to be treated like part of the syntax. Otherwise,
> module-only imports massively improves the ability to find where
> something comes from.

+1

My eyes bleed when I open up a python script and find 1 million imports for 
individual functions and classes.
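
To make the trade-off concrete, a trivial side-by-side sketch (testtools is
just the example library at hand):

from testtools import matchers               # module-only import (H302 style)
from testtools.matchers import LessThan      # direct import of a name

def check_module_style(value):
    # The origin of LessThan is obvious at the call site.
    return matchers.LessThan(10).match(value)

def check_direct_style(value):
    # Fine for one or two names, but with dozens of such imports it gets
    # hard to tell where a symbol comes from.
    return LessThan(10).match(value)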

- Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] How to solve the cgit repository browser line number misalignment in Chrome

2014-04-09 Thread Zhongyue Luo
Hi,

Ignore my last comment. Patch submitted.

https://review.openstack.org/#/c/86495


On Thu, Apr 10, 2014 at 8:08 AM, Doug Hellmann
wrote:

> That looks like it. Thanks, Josh!
>
> On Wed, Apr 9, 2014 at 7:08 PM, Joshua Hesketh
>  wrote:
> > Hey,
> >
> > I suspect you're looking for this :
> http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/git/openstack.css
> >
> > Hope that helps!
> >
> > Cheers,
> > Josh
> > 
> > From: Doug Hellmann [doug.hellm...@dreamhost.com]
> > Sent: Thursday, April 10, 2014 12:24 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Infra] How to solve the cgit repository
> browser line number misalignment in Chrome
> >
> > I don't, but someone on the infra team (#openstack-infra) should be
> > able to tell you where the theme is maintained.
> >
> > Doug
> >
> > On Tue, Apr 8, 2014 at 7:26 PM, Zhongyue Luo 
> wrote:
> >> Do you happen to know where the repo for cgit is? I'll submit a patch
> adding
> >> font and font size.
> >>
> >> On Apr 8, 2014 10:24 PM, "Doug Hellmann" 
> >> wrote:
> >>>
> >>> Maybe those changes should be added to our cgit stylesheet?
> >>>
> >>> Doug
> >>>
> >>> On Mon, Apr 7, 2014 at 9:23 PM, Zhongyue Luo 
> >>> wrote:
> >>> > Hi,
> >>> >
> >>> > I know I'm not the only person who had this problem so here's two
> simple
> >>> > steps to get the lines and line numbers aligned.
> >>> >
> >>> > 1. Install the stylebot extension
> >>> >
> >>> >
> >>> >
> https://chrome.google.com/extensions/detail/oiaejidbmkiecgbjeifoejpgmdaleoha
> >>> >
> >>> > 2. Click on the download icon to install the custom style for
> >>> > git.openstack.org
> >>> >
> >>> > http://stylebot.me/styles/5369
> >>> >
> >>> > Thanks!
> >>> >
> >>> > --
> >>> > Intel SSG/STO/DCST/CBE
> >>> > 880 Zixing Road, Zizhu Science Park, Minhang District, 200241,
> Shanghai,
> >>> > China
> >>> > +862161166500
> >>> >
> >>> > ___
> >>> > OpenStack-dev mailing list
> >>> > OpenStack-dev@lists.openstack.org
> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
*Intel SSG/STO/DCST/CBE*
880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
China
+862161166500
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso][neutron] proxying oslo.messaging from management network into tenant network/VMs

2014-04-09 Thread i y
Thanks for the explanation. Now I see how it works.
So the assumption is:
- VMs are required to be on a tenant network connected to a public network so
that they can reach the OpenStack public REST API

Is this a widely acceptable assumption? Acceptable for the NFV use case?
I'm not sure for now; I'd like to hear from others.
So far I've assumed that people may want VMs on a tenant network that is
not connected to a public network (an isolated tenant network).

Thanks,
Isaku Yamahata


On Thu, Apr 10, 2014 at 1:45 AM, Dmitry Mescheryakov <
dmescherya...@mirantis.com> wrote:

> > I agree those arguments.
> > But I don't see how network-based agent approach works with Neutron
> > network for now. Can you please elaborate on it?
>
> Here is the scheme of network-based agent:
>
> server <-> MQ (Marconi) <-> agent
>
> As Doug said, Marconi exposes REST API, just like any other OpenStack
> service. The services it provides are similar to the MQ ones (Rabbit
> MQ, Qpid, etc.). I.e. very simply there are methods:
>  * put_message(queue_name, message_payload)
>  * get_message(queue_name)
>
> Multi-tenancy is provided by the same means as in the other OpenStack
> projects - user supplies Keystone token in the request and it
> determines the tenant used.
>
> As for the network, a networking-based agent requires tcp connection
> to Marconi. I.e. you need an agent running on the VM to be able to
> connect to Marconi, but not vice versa. That does not sound like a
> harsh requirement.
>
> The standard MQ solutions like Rabbit and Qpid actually could be used
> here instead of Marconi with one drawback - it is really hard to
> reliably implement tenant isolation with them.
>
> Thanks,
>
> Dmitry
>
> 2014-04-09 17:38 GMT+04:00 Isaku Yamahata :
> > Hello Dmitry. Thank you for reply.
> >
> > On Wed, Apr 09, 2014 at 03:19:10PM +0400,
> > Dmitry Mescheryakov  wrote:
> >
> >> Hello Isaku,
> >>
> >> Thanks for sharing this! Right now in the Sahara project we are thinking of using
> >> Marconi as a means to communicate with VMs. Seems like you are familiar
> >> with the discussions happened so far. If not, please see links at the
> >> bottom of UnifiedGuestAgent [1] wiki page. In short we see Marconi's
> >> supports for multi-tenancy as a huge advantage over other MQ
> >> solutions. Our agent is network-based, so tenant isolation is a real
> >> issue here. For clarity, here is the overview scheme of network based
> >> agent:
> >>
> >> server <-> MQ (Marconi) <-> agent
> >>
> >> All communication goes over network. I've made a PoC of the Marconi
> >> driver for oslo.messaging, you can find it at [2]
> >
> > I'm not familiar with Marconi, so please enlighten me first.
> > How does MQ (Marconi) communicate with both the management network and
> > the tenant network?
> > Does it work with Neutron network? Not nova-network.
> >
> > Neutron network isolates not only tenant networks from each other,
> > but also the management network at L2. So openstack servers can't send
> > any packets to VMs, and VMs can't send any to openstack servers.
> > This is the reason why neutron introduced the HTTP proxy for instance
> metadata.
> > It is also the reason why I chose to introduce a new agent on the host.
> > If Marconi (or other projects like sahara) already solved those issues,
> > that's great.
> >
> >
> >> We also considered 'hypervisor-dependent' agents (as I called them in
> >> the initial thread) like the one you propose. They also provide tenant
> >> isolation. But the drawback is _much_ bigger development cost and more
> >> fragile and complex deployment.
> >>
> >> In case of network-based agent all the code is
> >>  * Marconi driver for RPC library (oslo.messaging)
> >>  * thin client for server to make calls
> >>  * a guest agent with thin server-side
> >> If you write your agent on python, it will work on any OS with any
> >> host hypervisor.
> >>
> >>
>> For a hypervisor-dependent agent it becomes much more complex. You need
>> one additional component - a proxy-agent running on the Compute host,
> >> which makes deployment harder. You also need to support various
> >> transports for various hypervisors: virtio-serial for KVM, XenStore
> >> for Xen, something for Hyper-V, etc. Moreover guest OS must have
> >> driver for these transports and you will probably need to write
> >> different implementation for different OSes.
> >>
> >> Also you mention that in some cases a second proxy-agent is needed and
> >> again in some cases only cast operations could be used. Using cast
> >> only is not an option for Sahara, as we do need feedback from the
> >> agent and sometimes getting the return value is the main reason to
> >> make an RPC call.
> >>
> >> I didn't see a discussion in Neutron on which approach to use (if it
> >> was, I missed it). I see simplicity of network-based agent as a huge
> >> advantage. Could you please clarify why you've picked design depending
> >> on hypervisor?
> >
> > I agree those arguments.
> > But I don't see how network-based agent approach works with Neutron
> > network for

Re: [openstack-dev] [Infra] How to solve the cgit repository browser line number misalignment in Chrome

2014-04-09 Thread Zhongyue Luo
Apparently the patch needs to be submitted to cgit.

http://git.zx2c4.com/cgit/tree/cgit.css#n277


On Thu, Apr 10, 2014 at 8:08 AM, Doug Hellmann
wrote:

> That looks like it. Thanks, Josh!
>
> On Wed, Apr 9, 2014 at 7:08 PM, Joshua Hesketh
>  wrote:
> > Hey,
> >
> > I suspect you're looking for this :
> http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/git/openstack.css
> >
> > Hope that helps!
> >
> > Cheers,
> > Josh
> > 
> > From: Doug Hellmann [doug.hellm...@dreamhost.com]
> > Sent: Thursday, April 10, 2014 12:24 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Infra] How to solve the cgit repository
> browser line number misalignment in Chrome
> >
> > I don't, but someone on the infra team (#openstack-infra) should be
> > able to tell you where the theme is maintained.
> >
> > Doug
> >
> > On Tue, Apr 8, 2014 at 7:26 PM, Zhongyue Luo 
> wrote:
> >> Do you happen to know where the repo for cgit is? I'll submit a patch
> adding
> >> font and font size.
> >>
> >> On Apr 8, 2014 10:24 PM, "Doug Hellmann" 
> >> wrote:
> >>>
> >>> Maybe those changes should be added to our cgit stylesheet?
> >>>
> >>> Doug
> >>>
> >>> On Mon, Apr 7, 2014 at 9:23 PM, Zhongyue Luo 
> >>> wrote:
> >>> > Hi,
> >>> >
> >>> > I know I'm not the only person who had this problem so here's two
> simple
> >>> > steps to get the lines and line numbers aligned.
> >>> >
> >>> > 1. Install the stylebot extension
> >>> >
> >>> >
> >>> >
> https://chrome.google.com/extensions/detail/oiaejidbmkiecgbjeifoejpgmdaleoha
> >>> >
> >>> > 2. Click on the download icon to install the custom style for
> >>> > git.openstack.org
> >>> >
> >>> > http://stylebot.me/styles/5369
> >>> >
> >>> > Thanks!
> >>> >
> >>> > --
> >>> > Intel SSG/STO/DCST/CBE
> >>> > 880 Zixing Road, Zizhu Science Park, Minhang District, 200241,
> Shanghai,
> >>> > China
> >>> > +862161166500
> >>> >
> >>> > ___
> >>> > OpenStack-dev mailing list
> >>> > OpenStack-dev@lists.openstack.org
> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
*Intel SSG/STO/DCST/CBE*
880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
China
+862161166500
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] How to solve the cgit repository browser line number misalignment in Chrome

2014-04-09 Thread Zhongyue Luo
Thanks! Will submit a patch.


On Thu, Apr 10, 2014 at 8:08 AM, Doug Hellmann
wrote:

> That looks like it. Thanks, Josh!
>
> On Wed, Apr 9, 2014 at 7:08 PM, Joshua Hesketh
>  wrote:
> > Hey,
> >
> > I suspect you're looking for this :
> http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/git/openstack.css
> >
> > Hope that helps!
> >
> > Cheers,
> > Josh
> > 
> > From: Doug Hellmann [doug.hellm...@dreamhost.com]
> > Sent: Thursday, April 10, 2014 12:24 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Infra] How to solve the cgit repository
> browser line number misalignment in Chrome
> >
> > I don't, but someone on the infra team (#openstack-infra) should be
> > able to tell you where the theme is maintained.
> >
> > Doug
> >
> > On Tue, Apr 8, 2014 at 7:26 PM, Zhongyue Luo 
> wrote:
> >> Do you happen to know where the repo for cgit is? I'll submit a patch
> adding
> >> font and font size.
> >>
> >> On Apr 8, 2014 10:24 PM, "Doug Hellmann" 
> >> wrote:
> >>>
> >>> Maybe those changes should be added to our cgit stylesheet?
> >>>
> >>> Doug
> >>>
> >>> On Mon, Apr 7, 2014 at 9:23 PM, Zhongyue Luo 
> >>> wrote:
> >>> > Hi,
> >>> >
> >>> > I know I'm not the only person who had this problem so here's two
> simple
> >>> > steps to get the lines and line numbers aligned.
> >>> >
> >>> > 1. Install the stylebot extension
> >>> >
> >>> >
> >>> >
> https://chrome.google.com/extensions/detail/oiaejidbmkiecgbjeifoejpgmdaleoha
> >>> >
> >>> > 2. Click on the download icon to install the custom style for
> >>> > git.openstack.org
> >>> >
> >>> > http://stylebot.me/styles/5369
> >>> >
> >>> > Thanks!
> >>> >
> >>> > --
> >>> > Intel SSG/STO/DCST/CBE
> >>> > 880 Zixing Road, Zizhu Science Park, Minhang District, 200241,
> Shanghai,
> >>> > China
> >>> > +862161166500
> >>> >
> >>> > ___
> >>> > OpenStack-dev mailing list
> >>> > OpenStack-dev@lists.openstack.org
> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
*Intel SSG/STO/DCST/CBE*
880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
China
+862161166500
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bugs in review page? When I log in and modify Mailing Address it says I'm not a member.

2014-04-09 Thread Anita Kuno
On 04/09/2014 10:06 PM, Chenliang (L) wrote:
> Hi.
> 
> I can't complete git review; it says to set contact info in 
> https://review.openstack.org/#/settings/contact.
> 
> When I log in at https://review.openstack.org/#/settings/contact (I log in 
> with email hs.c...@huawei.com)
> and modify Mailing Address and Save changes, it raises the following errors:
> Application Error
> Server Error
> The request could not be completed. You may not be a member of the foundation 
> registered under this email address. Before continuing, please make sure you 
> have joined the foundation at http://openstack.org/register/
> 
> In the Profile tab my info is:
> Username LiangChen 
> Full Name LiangChen 
> Email Address hs.c...@huawei.com 
> Registered Jan 22, 2014 2:34 PM 
> Account ID 10070
> 
> And I have signed the ICLA:
> Status Name Description Accepted 
>  Verified ICLA OpenStack Individual Contributor License Agreement Jan 22, 
> 2014 2:50 PM
> Jan 22, 2014 2:50 PM
> 
> Could someone please tell me how to solve it?
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Is this you? http://www.openstack.org/community/members/profile/9193

If not, you need to ensure you are signed up as a member of the foundation.

Go here: https://www.openstack.org/join/register/
and fill in the form and register as a foundation member.

Your name needs to appear in the foundation members directory:
http://www.openstack.org/community/members/
before you can be sure that step is complete.

review.openstack.org (Gerrit) will be ensuring you are a foundation
member before you can submit patches.

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Heat] The Neutron API and orchestration

2014-04-09 Thread Ian Wells
On 8 April 2014 10:35, Zane Bitter  wrote:

> To attach a port to a network and give it an IP from a specific subnet
>
>> on that network, you would use the *--fixed-ip subnet_id *option.
>>
>> Otherwise, the create port request will use the first subnet it finds
>> attached to that network to allocate the port an IP address. This is why
>> you are encountering the port-> subnet-> network chain. Subnets provide
>> the addresses. Networks are the actual layer 2 boundaries.
>>
>
> It sounds like maybe Subnets need to be created independently of Networks
> and then passed as a list to the Network when it is created. In Heat
> there's no way to even predict which Subnet will be "first" unless the user
> adds explicit "depends_on" annotations (and even then, a Subnet could have
> been created outside of the template already).
>

A longstanding issue I've had with networks (now fixed, I believe, but
don't hold me to that) is that they don't work without subnets, but they
should - because ports don't work without an address, and yet, again, they
should - because our antispoofing is completely tied up with addresses and
has historically been hard-to-impossible to disable.  In fact, ports have
long been intended to have *one* ipv4 address - no more, which is annoying
for many sorts of IP based failover, and no fewer, which is annoying when
you're not using IP addresses in an obvious fashion (such as Openstack
deployments, if you've ever tried to use Openstack as your testbed for
testing Openstack itself).

Also, subnets seem to be branching out.

In ipv4, subnets are clearly 'here's another chunk of address space for
this network'.  You do need a router attached to be able to *reach* that
additional address space, and that's rather silly - but I've always seen
them as an artifact of ipv4 scarcity.

In ipv6, I believe we're using them, or going to use them, to allow
multiple global addresses on a port.  That's a pretty normal thing in ipv6,
which pretty much starts with the assumption that you have two addresses
per port and works upward from there.

-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Bugs in review page? When I log in and modify Mailing Address it says I'm not a member.

2014-04-09 Thread Chenliang (L)
Hi.

I can't complete git review; it says to set contact info in 
https://review.openstack.org/#/settings/contact.

When I log in at https://review.openstack.org/#/settings/contact (I log in 
with email hs.c...@huawei.com)
and modify Mailing Address and Save changes, it raises the following errors:
Application Error
Server Error
The request could not be completed. You may not be a member of the foundation 
registered under this email address. Before continuing, please make sure you 
have joined the foundation at http://openstack.org/register/

In the Profile tab my info is:
Username LiangChen 
Full Name LiangChen 
Email Address hs.c...@huawei.com 
Registered Jan 22, 2014 2:34 PM 
Account ID 10070

And I have signed the ICLA:
Status Name Description Accepted 
 Verified ICLA OpenStack Individual Contributor License Agreement Jan 22, 2014 
2:50 PM
Jan 22, 2014 2:50 PM

Could someone please tell me how to solve it?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Session suggestions for the Juno Design Summit now open

2014-04-09 Thread Tina TSOU
Dear Thierry,

Thanks for your suggestion.

It is submitted as below.
http://summit.openstack.org/cfp/create

Topic   | Title                                        | Proposer  | Status
Neutron | Scaling Network Performance for Large Clouds | Tina Tsou | Unreviewed

Thank you,
Tina





-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org]
Sent: Wednesday, April 09, 2014 6:36 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Session suggestions for the Juno Design Summit now 
open



Tina TSOU wrote:
> Below is our proposal. Look forward to your feedback.
>
> --
> Description
> This session focuses on how to improve networking performance at large scale
> deployment.
> For example
> - having many VMs, thousands to tens of thousands, in a single data
> center
> - very heavy traffic between VMs of different physical servers
> - large quantities of OpenFlow flow tables causing slow forwarding on
> OVS and high CPU usage on hypervisor
> - VMs belong to various tenants thus requiring traffic isolation and
> security and lots of configuration on OVS mainly overlay encapsulation
> and OpenFlow tables
> - neutron server taking too long time to process requests
>
> We are introducing a solution designed for the above scenario in this area.
> The main idea is to deploy on the hypervisor a new monitor agent which will
> periodically check the CPU usage and network load of the NIC and inform SDN
> controller through plugin/API extension. If the OVS load goes very high, SDN
> controller can reactively off-load the traffic from OVS to TOR with minimum
> interruption. It means that initially, the overlay encapsulation might be
> done on OVS, but some feature rich TORs also provide this functionality which
> makes TOR capable of taking over whenever necessary. The same strategy will
> be applied for OpenFlow flow table. By doing this, OVS will have nothing to
> do other than sending the traffic to TOR. All the time-consuming jobs will be
> taken over by TOR dynamically. This more advanced strategy does require TOR
> to be feature-rich so it might cause more TCO.
>
> We believe this is worth doing for large scale deployment.
> --

You should file it at summit.openstack.org so that it can be considered for
inclusion in the schedule.

Regards,

--
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] create server from a volume snapshot, 180 reties is sufficient?

2014-04-09 Thread Lingxian Kong
2014-04-10 0:33 GMT+08:00 Nikola Đipanov :

> On 04/09/2014 03:54 AM, Lingxian Kong wrote:
> > yes, the bp also makes sense for nova-cinder interaction, may I submit
> > a blueprint about that?
> >
> > Any comments?
> >
>
> I was going to propose that same thing for Nova as well, as well as a
> summit session for Atlanta. Would be good to coordinate the work.
>
> Would you be interested in looking at it from Cinder side?
>
> Thanks,
>
> N.
>
>
Hi Nikola,

Sounds great! I'm very interested in collaborating to make a contribution
towards this effort.

Could you provide your bp and your summit session? I am willing to get
involved in the discussion and/or the design and implementation.


-- 
*-*
*Lingxian Kong*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web ui screen captures

2014-04-09 Thread Fox, Kevin M
I'm not seeing anything here about non-HTTP(S) load balancing.  We're 
interested in load balancing ssh, ftp, and other services too.

Thanks,
Kevin

From: Samuel Bercovici [samu...@radware.com]
Sent: Sunday, April 06, 2014 5:51 AM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org); 
Eugene Nikanorov (enikano...@mirantis.com)
Subject: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web ui 
screen captures

Per the last LBaaS meeting.


1.   Please find a list of use cases.
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?usp=sharing


a)  Please review and see if you have additional ones for the project-user

b)  We can then choose 2-3 use cases to play around with how the CLI, API, 
etc. would look


2.   Please find a document to place screen captures of web UI. I took the 
liberty to place a few links showing ELB.
https://docs.google.com/document/d/10EOCTej5CvDfnusv_es0kFzv5SIYLNl0uHerSq3pLQA/edit?usp=sharing


Regards,
-Sam.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Heat] The Neutron API and orchestration

2014-04-09 Thread Zane Bitter

On 09/04/14 19:20, Kevin Benton wrote:

 >is definitely broken as far as I can tell, because you have to give up
dynamic allocation of IP addresses to use it

What do you mean you have to give up dynamic allocation of IP addresses?
A user is never required to enter an IP address for a port if they don't
want a specific address. Do you mean you want your instances' IP
addresses to change when they are rebooted? The current model allows
multiple DHCP servers from different subnets to operate in the same
broadcast domain because MAC addresses are bound to specific DHCP
servers and they don't answer DHCP requests from arbitrary MAC addresses.


As far as I can tell, you can create multiple Subnets each with DHCP
enabled and their own allocation pools, but only allocation pools from
the first subnet to be created will ever be used. To assign a port to a
different subnet, you need to specify a static IP.

No. I tried to point this out in my first email. When creating a port,
use the subnet_id parameter after --fixed-ip to specify which subnet to
connect to. For example:
*neutron port-create some_network --fixed-ip
subnet_id=a512cdd3-489d-4317-a06b-10cf894cff5d*

That will dynamically allocate it an address from the specified subnet
instead of the first one.


Oh! So you can assign the subnet by passing a fixed_ips section, even 
though you don't want a fixed IP, and just leaving out the ip_address:


  "fixed_ips": [
{
  "subnet_id":"a512cdd3-489d-4317-a06b-10cf894cff5d",
}
  ],

Thanks for pointing this out. I don't know what kind of historical 
process produced that API, but I hope it's obvious to everyone how 
completely bizarre this is - that passing the --fixed-ip option not only 
has this side effect but actually doesn't even necessarily allocate a 
fixed IP.


Something like this would make a lot more sense:

  "subnets": [
{
  "subnet_id": "a512cdd3-489d-4317-a06b-10cf894cff5d",
},
{
  "subnet_id": "08eae331-0402-425a-923c-34f7cfe39c1b",
  "fixed_address": "10.0.0.3"
}
  ],

I rechecked the documentation, and afaict there is not a single example 
anywhere in which "ip_address" doesn't appear in a "fixed_ips" entry. So 
Nachi was right; I guess a docs fix could help here.


Putting 2+2 together (finally!) I realise now that there is probably no 
issue assigning multiple IP addresses (e.g. IPv4 + IPv6) to a port by 
assigning a single Port to multiple Subnets... that's not a phrase I 
could have wrapped my brain around at the beginning of the week :D
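
For anyone else following along, here's roughly what that looks like through
python-neutronclient (a sketch: the credentials and network UUID are
placeholders, and the subnet IDs are just the ones from the examples above):

from neutronclient.v2_0 import client

neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://127.0.0.1:5000/v2.0')

# One port on two subnets (e.g. IPv4 + IPv6), with dynamic allocation on
# both, since neither fixed_ips entry specifies an ip_address.
port = neutron.create_port({'port': {
    'network_id': 'NETWORK_UUID',
    'fixed_ips': [
        {'subnet_id': 'a512cdd3-489d-4317-a06b-10cf894cff5d'},
        {'subnet_id': '08eae331-0402-425a-923c-34f7cfe39c1b'},
    ],
}})
print(port['port']['fixed_ips'])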


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] How to solve the cgit repository browser line number misalignment in Chrome

2014-04-09 Thread Doug Hellmann
That looks like it. Thanks, Josh!

On Wed, Apr 9, 2014 at 7:08 PM, Joshua Hesketh
 wrote:
> Hey,
>
> I suspect you're looking for this : 
> http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/git/openstack.css
>
> Hope that helps!
>
> Cheers,
> Josh
> 
> From: Doug Hellmann [doug.hellm...@dreamhost.com]
> Sent: Thursday, April 10, 2014 12:24 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Infra] How to solve the cgit repository browser 
> line number misalignment in Chrome
>
> I don't, but someone on the infra team (#openstack-infra) should be
> able to tell you where the theme is maintained.
>
> Doug
>
> On Tue, Apr 8, 2014 at 7:26 PM, Zhongyue Luo  wrote:
>> Do you happen to know where the repo for cgit is? I'll submit a patch adding
>> font and font size.
>>
>> On Apr 8, 2014 10:24 PM, "Doug Hellmann" 
>> wrote:
>>>
>>> Maybe those changes should be added to our cgit stylesheet?
>>>
>>> Doug
>>>
>>> On Mon, Apr 7, 2014 at 9:23 PM, Zhongyue Luo 
>>> wrote:
>>> > Hi,
>>> >
>>> > I know I'm not the only person who had this problem so here's two simple
>>> > steps to get the lines and line numbers aligned.
>>> >
>>> > 1. Install the stylebot extension
>>> >
>>> >
>>> > https://chrome.google.com/extensions/detail/oiaejidbmkiecgbjeifoejpgmdaleoha
>>> >
>>> > 2. Click on the download icon to install the custom style for
>>> > git.openstack.org
>>> >
>>> > http://stylebot.me/styles/5369
>>> >
>>> > Thanks!
>>> >
>>> > --
>>> > Intel SSG/STO/DCST/CBE
>>> > 880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
>>> > China
>>> > +862161166500
>>> >
>>> > ___
>>> > OpenStack-dev mailing list
>>> > OpenStack-dev@lists.openstack.org
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Server Groups are not an optional element, bug or feature ?

2014-04-09 Thread Joshua Harlow
+2

It'd be nice to start putting historical data by default into HDFS (via
sahara?) and leave the databases as only what exists 'now'.

Then people can set up pig or other hadoop jobs and analyze their data as
they wish (slice and dice thousands of ways...)

-Original Message-
From: Robert Collins 
Reply-To: "OpenStack Development Mailing List (not for usage questions)"

Date: Wednesday, April 9, 2014 at 7:45 AM
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [nova] Server Groups are not an optional
element, bug or feature ?

>On 10 April 2014 02:32, Chris Friesen  wrote:
>> On 04/09/2014 03:45 AM, Day, Phil wrote:

>>> -Original Message- From: Russell Bryant
>>
>>
>>>> We were thinking that there may be a use for being able to query a
>>>> full list of instances (including the deleted ones) for a group.
>>>> The API just hasn't made it that far yet.  Just hiding them for now
>>>> leaves room to iterate and doesn't prevent either option (exposing
>>>> the deleted instances, or changing to auto- delete them from the
>>>> group).
>>
>>
>>> Maybe it's just me, but I have a natural aversion to anything that
>>> grows forever in the database - over time and at scale this becomes a
>>> real problem.
>>
>>
>> Not just you.  I want my main database to reflect the current active
>>data.
>> Historical data should go somewhere else.
>
>+1. Fastest way to make an OLTP workload crawl is to mix it up with
>warehousing.
>
>-Rob
>
>-- 
>Robert Collins 
>Distinguished Technologist
>HP Converged Cloud
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-09 Thread Devananda van der Veen
On Tue, Apr 8, 2014 at 3:04 AM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Guys, thank you very much for your comments,
>
> I thought a lot about why we need to be so limited in IPA use cases. Now
> it much clearer for me. Indeed, having some kind of agent running inside
> host OS is not what many people want to see. And now I'd rather agree with
> that.
>
> But there are still some questions which are difficult to answer for me.
> 0) There is plenty of old hardware which does not have IPMI/ILO at all.
> How is Ironic supposed to power them off and on? Ssh? But Ironic is not
> supposed to interact with host OS.
>

We can't manage everything... if there's no out-of-band power control,
Ironic can't control the power.


>  1) We agreed that Ironic is that place where we can store hardware info
> ('extra' field in node model). But many modern hardware configurations
> support hot pluggable hard drives, CPUs, and even memory. How will Ironic
> know that the hardware configuration has changed? Does it need to know about
> hardware changes at all? Is it supposed that some monitoring agent (NOT
> ironic agent) will be used for that?
>

while an instance is provisioned, ironic does not need to be made aware of
hardware changes.

It has not been written, but it would be fine for Ironic to re-inventory a
server any time it is deleted and before returning it to the pool. I think
it's unnecessary, but it could be possible, with a config option.


> But if we already have discovering extension in Ironic agent, then it
> sounds rational to use this extension for monitoring as well. Right?
>

real time monitoring? nope.


> 2) When I deal with some kind of hypervisor, I can always use 'virsh list
> --all' command in order to know which nodes are running and which aren't.
> How am I supposed to know which nodes are still alive in case of Ironic?
> IPMI? Again IPMI is not always available. And if IPMI is available, then
> why do we need heartbeat in Ironic agent?
>

Again, if there is no out-of-band power control, Ironic can't control the
power. Period.

As for why there is a heartbeat in the agent: because some operations the
agent performs may take a long time, and so this allows ironic-conductor to
know the agent itself hasn't died (even if the node is still powered on).


Best,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Heat] The Neutron API and orchestration

2014-04-09 Thread Kevin Benton
>is definitely broken as far as I can tell, because you have to give up
dynamic allocation of IP addresses to use it

What do you mean you have to give up dynamic allocation of IP addresses? A
user is never required to enter an IP address for a port if they don't want
a specific address. Do you mean you want your instances' IP addresses to
change when they are rebooted? The current model allows multiple DHCP
servers from different subnets to operate in the same broadcast domain because
MAC addresses are bound to specific DHCP servers and they don't answer DHCP
requests from arbitrary MAC addresses.

>As far as I can tell, you can create multiple Subnets each with DHCP
enabled and their own allocation pools, but only allocation pools from the
first subnet to be created will ever be used. To assign a port to a
different subnet, you need to specify a static IP.

No. I tried to point this out in my first email. When creating a port, use
the subnet_id parameter after --fixed-ip to specify which subnet to connect
to. For example:
*neutron port-create some_network --fixed-ip
subnet_id=a512cdd3-489d-4317-a06b-10cf894cff5d*

That will dynamically allocate it an address from the specified subnet
instead of the first one.

--
Kevin Benton


On Wed, Apr 9, 2014 at 4:02 PM, Zane Bitter  wrote:

> On 07/04/14 21:58, Nachi Ueno wrote:
>
>> Hi Zane
>>
>> Thank you for your very valuable post.
>> We should convert your suggest to multiple bps.
>>
>> 2014-04-07 17:28 GMT-07:00 Zane Bitter :
>>
>>> The Neutron API is a constant cause of pain for us as Heat developers,
>>> but
>>> afaik we've never attempted to bring up the issues we have found in a
>>> cross-project forum. I've recently been doing some more investigation
>>> and I
>>> want to document the exact ways in which the current Neutron API breaks
>>> orchestration, both in the hope that a future version of it might be
>>> better
>>> and as a guide for other API authors.
>>>
>>> BTW it's my contention that an API that is bad for orchestration is also
>>> hard to use for the ordinary user as well. When you're trying to figure
>>> out
>>> the order of operations you need to do, there are two times at which you
>>> could find out you've got it wrong:
>>>
>>> 1) Before you run the command, when you realise you don't have all of the
>>> required data yet; or
>>> 2) After you run the command, when you get a cryptic error message.
>>>
>>> Not only is (1) *mandatory* for a data-driven orchestration system like
>>> Heat, it offers orders-of-magnitude better user experience for everyone.
>>>
>>> I should say at the outset that I know next to nothing about Neutron, and
>>> one of the goals of this message is to find out which parts I am
>>> completely
>>> wrong about. I did know a little bit about traditional networking at one
>>> time, and even remember some of it ;)
>>>
>>>
>>> Neutron has a little documentation on workflow, so let's begin there:
>>> http://docs.openstack.org/api/openstack-network/2.0/content/
>>> Overview-d1e71.html#Theory
>>>
>>> (1) Create a network
>>> Instinctively, I want a Network to be something like a virtual VRF
>>> (VVRF?):
>>> a separate namespace with it's own route table, within which subnet
>>> prefixes
>>> are not overlapping, but which is completely independent of other
>>> Networks
>>> that may contain overlapping subnets. As far as I can tell, this
>>> basically
>>> seems to be the case. The difference, of course, is that instead of
>>> having
>>> to configure a VRF on every switch/router and make sure they're all in
>>> sync
>>> and connected up in the right ways, I just define it in one place
>>> globally
>>> and Neutron does the rest. I call this #winning. Nice work, Neutron.
>>>
>>
>> In Neutron,  "A network is an isolated virtual layer-2 broadcast domain"
>> http://docs.openstack.org/api/openstack-network/2.0/content/
>> Overview-d1e71.html#subnet
>> so the model don't have any L3 stuffs.
>>
>>  (2) Associate a subnet with the network
>>> Slightly odd choice of words, because you're actually creating a new
>>> Subnet
>>> (there's no such thing as a Subnet not associated with a Network), but
>>> this
>>> is probably just a minor documentation nit. Instinctively, I want a
>>> Subnet
>>> to be something like a virtual VLAN (VVLAN?): at its most basic level,
>>> just
>>> a group of ports that share a broadcast domain, but also having other
>>> properties (e.g. if L3 is in use, all IP addresses in the subnet should
>>> be
>>> in the same CIDR). This doesn't seem to be the case, though, it's just a
>>> CIDR prefix, which leaves me wondering how L2 traffic will be treated, as
>>> well as how I would do things like use both IPv4 and IPv6 on a single
>>> port
>>> (by assigning a port to multiple Subnets?). Looking at the docs, there
>>> is a
>>> much bigger emphasis on DHCP client settings than I expected - surely I
>>> might want to want to give two sets of ports in the same Subnet different
>>> DHCP configs? Still, this is not bad - the DHCP conf

Re: [openstack-dev] [Ironic][Agent]

2014-04-09 Thread Devananda van der Veen
On Wed, Apr 9, 2014 at 9:01 AM, Stig Telfer  wrote:

> > -Original Message-
> > From: Matt Wagner [mailto:matt.wag...@redhat.com]
> > Sent: Tuesday, April 08, 2014 6:46 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Ironic][Agent]
> >
> > On 08/04/14 14:04 +0400, Vladimir Kozhukalov wrote:
> > 
> > >0) There are a plenty of old hardware which does not have IPMI/ILO at
> all.
> > >How Ironic is supposed to power them off and on? Ssh? But Ironic is not
> > >supposed to interact with host OS.
> >
> > I'm more accustomed to using PDUs for this type of thing. I.e., a
> > power strip you can ssh into or hit via a web API to toggle power to
> > individual ports.
> >
> > Machines are configured to power up on power restore, plus PXE boot.
> > You have less control than with IPMI -- all you can do is toggle power
> > to the outlet -- but it works well, even for some desktop machines I
> > have in a lab.
> >
> > I don't have a compelling need, but I've often wondered if such a
> > driver would be useful. I can imagine it also being useful if people
> > want to power up non-compute stuff, though that's probably not a top
> > priority right now.
>
> We have developed a driver that might be of interest.  Ironic uses it to
> control the PDUs in our lab cluster through SNMP.  It appears the leading
> brands of PDU implement SNMP interfaces, albeit through vendor-specific
> enterprise MIBs.  As a mechanism for control, I'd suggest that SNMP is
> going to be a better bet than an automated tron for hitting the ssh or web
> interfaces.
>
> Currently our power driver is a point solution for our PDUs, but why not
> make it generalised?  We'd be happy to contribute it.
>
> Best wishes
> Stig Telfer
> Cray Inc.
>
>
A PDU-based power driver has come up several times in past discussions,
and I think it's well within Ironic's scope to support this. An
iBoot driver was proposed, but bit rotted. I'd rather see a generic one,
honestly.

FWIW, there already is an SSH-based power driver, which is primarily used
in test environments (we mock real hardware with VMs to cut down the cost
of developer testing), but this could probably be extended to support
connecting to PDUs.
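
For a sense of how small the SNMP end of a generic PDU driver could be,
here's a rough sketch using pysnmp against an APC-style outlet-control OID
(the OID and the on/off/reboot values are vendor-specific assumptions, and
this isn't a proposed Ironic interface):

from pysnmp.entity.rfc3413.oneliner import cmdgen
from pysnmp.proto import rfc1902

# Assumed APC-style outlet control OID: 1 = on, 2 = off, 3 = reboot.
OUTLET_OID = '1.3.6.1.4.1.318.1.1.4.4.2.1.3.%d'

def set_outlet_state(pdu_host, community, outlet, state):
    # Issue a single SNMP SET against the PDU's outlet-control column.
    error_indication, error_status, _, _ = cmdgen.CommandGenerator().setCmd(
        cmdgen.CommunityData(community),
        cmdgen.UdpTransportTarget((pdu_host, 161)),
        (OUTLET_OID % outlet, rfc1902.Integer(state)))
    if error_indication or error_status:
        raise RuntimeError('SNMP set failed: %s'
                           % (error_indication or error_status.prettyPrint()))

# Example: power-cycle outlet 4 on a lab PDU.
# set_outlet_state('pdu1.example.org', 'private', 4, 3)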

Best,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Clarification in regards to https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1

2014-04-09 Thread Stephen Balukoff
The answers for our organization are generally pretty close to the ones
Jorge gave.  So my response is mostly a big +1 to his, with the
following differences:



On Wed, Apr 9, 2014 at 1:49 PM, Jorge Miramontes <
jorge.miramon...@rackspace.com> wrote:

>   1.   Monitoring Tab:
>
>  a.   Are there users that use load balancing who do not monitor
> members? Can you share the use cases where this makes sense?
>   This is a good question. In our case we supply static ip addresses so
> some users have only one backend node. With one node it isn't necessary.
> Another case I can think of is lbs that are being used for non-critical
> environments (i.e. dev or testing environment). For the most part it would
> make sense to have monitoring.
>

For the case of dev or testing environments: It could be argued that if
you're going to bother to deploy load balancing at all, it's with the
intent of having a functional representation of production. As such,
non-production environments should probably also make use of monitoring. ;)


> 2.   Logging Tab:
>
> a.   What is logging use for?
>   This is specifically connection logging. It allows the user to see all
> of the requests that went through the load balancer. It is mostly used for
> big data and troubleshooting.
>

We tend to only use error logs for troubleshooting in cases where there
might be a problem at the load-balancer level. Most of our customers get
their big data and other troubleshooting logs from the back-end application
servers.


> 6.   L7
>
> a.   Does any cloud provider support L7 switching and L7 content
> modifications?
>
>
We do. (Both L7 switching and L7 content modifications.)  On the switching
side of things, our two most common use cases are:
1. Switching based on URI base path (ex. anything under "/api" goes to a
different pool)
2. Switching based on HTTP/1.1 hostname (ex. "www.example.com" goes to a
different pool than "api.example.com")

We do have a few customers using cookie-based switching.
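
To make the two switching cases above concrete, here is a tiny illustrative
sketch of the kind of rule evaluation an L7-capable balancer performs (pool
names and rules are hypothetical, not our product's or any proposed LBaaS API):

# Hypothetical L7 pool selection for the two switching cases above.
def select_pool(host, path):
    """Pick a backend pool from the HTTP/1.1 Host header and URI path."""
    if host == 'api.example.com':      # case 2: hostname-based switching
        return 'api_pool'
    if path.startswith('/api'):        # case 1: URI base-path switching
        return 'api_pool'
    return 'default_web_pool'

assert select_pool('www.example.com', '/index.html') == 'default_web_pool'
assert select_pool('www.example.com', '/api/v1/users') == 'api_pool'
assert select_pool('api.example.com', '/') == 'api_pool'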

The content modification we allow is also pretty basic. Mostly, we insert
the "X-Fowarded-For" header, and allow our customers to do HSTS at the load
balancer. Most everything else can be done at the application server layer,
so we have our customers do that.


> b.  If so can you please add a tab noting how much such features
> are used?
>
Will do, once I get the data. Might not happen before tomorrow's meeting.


> c.   If not, can anyone attest to whether this feature was
> requested by customers?
>
Yep, these features were requested by our customers and have become
mission-critical for some. We could not transition these customers to
another load balancer product without having this functionality now.

Thanks,
Stephen


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] How to solve the cgit repository browser line number misalignment in Chrome

2014-04-09 Thread Joshua Hesketh
Hey,

I suspect you're looking for this : 
http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/git/openstack.css

Hope that helps!

Cheers,
Josh

From: Doug Hellmann [doug.hellm...@dreamhost.com]
Sent: Thursday, April 10, 2014 12:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Infra] How to solve the cgit repository browser 
line number misalignment in Chrome

I don't, but someone on the infra team (#openstack-infra) should be
able to tell you where the theme is maintained.

Doug

On Tue, Apr 8, 2014 at 7:26 PM, Zhongyue Luo  wrote:
> Do you happen to know where the repo for cgit is? I'll submit a patch adding
> font and font size.
>
> On Apr 8, 2014 10:24 PM, "Doug Hellmann" 
> wrote:
>>
>> Maybe those changes should be added to our cgit stylesheet?
>>
>> Doug
>>
>> On Mon, Apr 7, 2014 at 9:23 PM, Zhongyue Luo 
>> wrote:
>> > Hi,
>> >
>> > I know I'm not the only person who had this problem so here are two simple
>> > steps to get the lines and line numbers aligned.
>> >
>> > 1. Install the stylebot extension
>> >
>> >
>> > https://chrome.google.com/extensions/detail/oiaejidbmkiecgbjeifoejpgmdaleoha
>> >
>> > 2. Click on the download icon to install the custom style for
>> > git.openstack.org
>> >
>> > http://stylebot.me/styles/5369
>> >
>> > Thanks!
>> >
>> > --
>> > Intel SSG/STO/DCST/CBE
>> > 880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
>> > China
>> > +862161166500
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Heat] The Neutron API and orchestration

2014-04-09 Thread Zane Bitter

On 07/04/14 21:58, Nachi Ueno wrote:

Hi Zane

Thank you for your very valuable post.
We should convert your suggestions into multiple bps.

2014-04-07 17:28 GMT-07:00 Zane Bitter :

The Neutron API is a constant cause of pain for us as Heat developers, but
afaik we've never attempted to bring up the issues we have found in a
cross-project forum. I've recently been doing some more investigation and I
want to document the exact ways in which the current Neutron API breaks
orchestration, both in the hope that a future version of it might be better
and as a guide for other API authors.

BTW it's my contention that an API that is bad for orchestration is also
hard for the ordinary user to use. When you're trying to figure out
the order of operations you need to do, there are two times at which you
could find out you've got it wrong:

1) Before you run the command, when you realise you don't have all of the
required data yet; or
2) After you run the command, when you get a cryptic error message.

Not only is (1) *mandatory* for a data-driven orchestration system like
Heat, it offers orders-of-magnitude better user experience for everyone.

I should say at the outset that I know next to nothing about Neutron, and
one of the goals of this message is to find out which parts I am completely
wrong about. I did know a little bit about traditional networking at one
time, and even remember some of it ;)


Neutron has a little documentation on workflow, so let's begin there:
http://docs.openstack.org/api/openstack-network/2.0/content/Overview-d1e71.html#Theory

(1) Create a network
Instinctively, I want a Network to be something like a virtual VRF (VVRF?):
a separate namespace with its own route table, within which subnet prefixes
are not overlapping, but which is completely independent of other Networks
that may contain overlapping subnets. As far as I can tell, this basically
seems to be the case. The difference, of course, is that instead of having
to configure a VRF on every switch/router and make sure they're all in sync
and connected up in the right ways, I just define it in one place globally
and Neutron does the rest. I call this #winning. Nice work, Neutron.


In Neutron,  "A network is an isolated virtual layer-2 broadcast domain"
http://docs.openstack.org/api/openstack-network/2.0/content/Overview-d1e71.html#subnet
so the model doesn't have any L3 stuff.


(2) Associate a subnet with the network
Slightly odd choice of words, because you're actually creating a new Subnet
(there's no such thing as a Subnet not associated with a Network), but this
is probably just a minor documentation nit. Instinctively, I want a Subnet
to be something like a virtual VLAN (VVLAN?): at its most basic level, just
a group of ports that share a broadcast domain, but also having other
properties (e.g. if L3 is in use, all IP addresses in the subnet should be
in the same CIDR). This doesn't seem to be the case, though, it's just a
CIDR prefix, which leaves me wondering how L2 traffic will be treated, as
well as how I would do things like use both IPv4 and IPv6 on a single port
(by assigning a port to multiple Subnets?). Looking at the docs, there is a
much bigger emphasis on DHCP client settings than I expected - surely I
might want to give two sets of ports in the same Subnet different
DHCP configs? Still, this is not bad - the DHCP configuration is done by the
time the Subnet is created, so there's no problem in connecting stuff to it
immediately after.


So, "subnet" has many meanings.
In Neutron, it means
"A subnet represents an IP address block that can be used to assign IP
addresses to virtual instances."
http://docs.openstack.org/api/openstack-network/2.0/content/Overview-d1e71.html#subnet

so "subnet" in your definition is more like "network" in neutron.


Thanks for explaining this :)

I'm trying to think of the possible reasons for wanting to create 
multiple Subnets on one Network:


(a) To provide different DHCP options to different sets of servers.
(b) To allow addresses from multiple families (i.e. IPv4 & IPv6) to be 
assigned on the same Network.

(c) To mix addresses of different prefixes on the same network segment.

(a) is definitely broken as far as I can tell, because you have to give 
up dynamic allocation of IP addresses to use it. The fact that the 
extra-dhcp-opt extension works on a per-port level looks like an 
admission of defeat here. (b) could be broken if Subnets work as has 
been described so far, but it's quite possible there is an exception 
where a port is assigned to multiple Subnets (one per address family, 
rather than just one total). I don't know if that's the case? Finally, I 
always thought that (c) is frowned-upon except as a very temporary measure.


In any event, these are 3 very different use cases (perhaps more 
exist?), and the current Subnet API doesn't seem to quite fit any of them.
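
As a concrete illustration of use case (b), this is roughly what creating one
Network with an IPv4 and an IPv6 Subnet looks like through python-neutronclient
(credentials and CIDRs are placeholders; whether a port then receives one fixed
IP per family is exactly the behaviour I'm unsure about):

# Sketch only: placeholder credentials and CIDRs, no error handling.
from neutronclient.v2_0 import client

neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://keystone.example.com:5000/v2.0')

net = neutron.create_network({'network': {'name': 'dual-stack-net'}})
net_id = net['network']['id']

# Two Subnets on the same Network, one per address family (use case (b)).
neutron.create_subnet({'subnet': {'network_id': net_id,
                                  'ip_version': 4,
                                  'cidr': '192.0.2.0/24'}})
neutron.create_subnet({'subnet': {'network_id': net_id,
                                  'ip_version': 6,
                                  'cidr': '2001:db8::/64'}})

# A port created without an explicit fixed_ips list -- which of the two
# Subnets it draws addresses from is the behaviour in question.
port = neutron.create_port({'port': {'network_id': net_id}})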



(3) Boot a VM and attach it to the network
Here's where you completely lost me. I just creat

Re: [openstack-dev] [Neutron][LBaaS]Clarification in regards to https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1

2014-04-09 Thread Eichberger, German
Comments inline.

German

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: Wednesday, April 09, 2014 2:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]Clarification in regards to 
https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1

See inline, Susanne

On Wed, Apr 9, 2014 at 4:49 PM, Jorge Miramontes <
jorge.miramon...@rackspace.com> wrote:
Answers inlined. Thanks for the questions! They forced me to think about 
certain features.

Cheers,
--Jorge

From: Samuel Bercovici 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, April 9, 2014 6:10 AM
To: "OpenStack Development Mailing List (openstack-dev@lists.openstack.org)" 

Subject: [openstack-dev] [Neutron][LBaaS]Clarification in regards to 
https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1

Hi,

I have looked at 
https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1
 and have a few questions:

1.   Monitoring Tab:

a.   Are there users that use load balancing who do not monitor members? 
Can you share the use cases where this makes sense?
This is a good question. In our case we supply static ip addresses so some 
users have only one backend node. With one node it isn't necessary. Another 
case I can think of is lbs that are being used for non-critical environments 
(i.e. dev or testing environment). For the most part it would make sense to 
have monitoring.

b.  Does it make sense to define the different types of monitors (ex: TCP, 
HTTP, HTTPS)?
Yes it does. Http monitoring, for example, allows you to monitor specific 
URI's. I just put total utilization for all three to get some data out.

c.   Does any existing cloud service besides the current implementation of 
the LBaaS API supports using multiple monitors on the same pool? Is this a 
required feature?
I would think multiple monitors wouldn't make sense as they could potentially 
conflict. How would a decision be made in such a case?

2.   Logging Tab:

a.   What is logging use for?
This is specifically connection logging. It allows the user to see all of the 
requests that went through the load balancer. It is mostly used for big data 
and troubleshooting.

b.  How does the tenant consume the logs?
For our offering, we send their logs in a compressed format to swift. However, 
I am open to discussion on how to handle this in a more flexible manner.

[Susanne] in our case logs are forwarded to a centralized logging system e.g. 
Logstash/Elastic Search/Kibana/etc.
[German] The internal operator logs get to kibana as Susanne described. We also 
offer a way for customers to get their logs uploaded to Swift.

3.   SSL Tab:

a.   Please explain if SSL means passing SSL traffic through the load 
balancer or using the load balancer to terminate certificates.
SSL termination. I updated the tab.

b.  Does it make sense to separate those (SSL termination and non HTTPS 
terminated traffic) as different rows?
Blue Box added a few extra rows. I identified lbs that terminate only secure 
traffic and lbs that allow both secure and insecure traffic.

c.   Can anyone explain the use cases for SSL_MIXED?
A lot of web sites have mixed content. The lb terminates the secure traffic. 
The insecure traffic passes through normally.

4.   HA Tab:

a.   Is this a tenant facing option or is it the way the operator chose to 
implement the service
For us, this is operator implementation. However, since most lbs are considered 
mission critical almost all production users require HA. I could see this being 
a toggleable feature from the tenant side if they wanted to use a lb for testing 
or something non mission critical.

[Susanne] Same for us. It is very important for us as a service provider that 
the LB be resilient so the user doesn't have a choice. It is resilient by 
default.

5.   Content Caching Tab:

a.   Is this a load balancer feature or a CDN like feature.
This is a lb feature. However, depending on the amount of content you'd like to 
cache using a CDN may be overkill. Here is a link that may shed some light: 
http://www.rackspace.com/knowledge_center/article/content-caching-for-cloud-load-balancers

6.   L7

a.   Does any cloud provider support L7 switching and L7 content 
modifications?
We currently do not.
[German] We currently do not have this feature though some customers have 
written small programs which simulate L7 monitoring by reporting the result on 
an arbitrary TCP port on the node. Our LB can monitor any port for system 
health BTW.

[Susanne] we do not have that feature either.

b.  If so c

Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-09 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Does anyone have a flowchart of the cloud build/configure process including 
interactions between the various components/stages of TripleO and Heat?

-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net] 
Sent: Wednesday, April 09, 2014 2:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] config options, defaults, oh my!

On 10 April 2014 08:33, Clint Byrum  wrote:

>
> This is exactly what we're doing. We're just suggesting exposing 
> variations in the Heat templates, rather than in the elements. It is 
> worth noting that Heat has grown the ability to grab a local file and 
> inject it into your template at runtime. I think it would actually 
> make sense to have os-apply-config enhanced to be able to override 
> whole template files based on something like this:
>
> resources:
>   server1:
> metadata:
>   template_overrides:
> "/etc/nova/nova.conf":
>   get_file [ "my_special_nova.conf.template" ]
>
> In that, we achieve what you want, but we can do so without rebuilding 
> the whole image.

This makes me a little nervous: it's much easier to break os-collect-config by 
forcing os-apply-config to break hard this way, than through bad metadata. I 
think I'm ok with the sentiment, but nervous about impl.

-Rob


--
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra]Requesting consideration of httmock package for test-requirements in Juno

2014-04-09 Thread Jamie Lennox


- Original Message -
> From: "Paul Michali (pcm)" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Wednesday, April 9, 2014 6:31:09 AM
> Subject: Re: [openstack-dev] [infra]Requesting consideration of httmock 
> package for test-requirements in Juno
> 
> On Apr 8, 2014, at 3:04 PM, Jamie Lennox < jamielen...@redhat.com > wrote:
> 
> 
> 
> 
> 
> 
> - Original Message -
> 
> 
> From: "Paul Michali (pcm)" < p...@cisco.com >
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org >
> Cc: jamielen...@gmail.com
> Sent: Wednesday, April 9, 2014 12:09:58 AM
> Subject: [openstack-dev] [infra]Requesting consideration of httmock package
> for test-requirements in Juno
> 
> Reposting this, after discussing with Sean Dague…
> 
> For background, I have developed a REST client lib to talk to a H/W device
> with REST server for VPNaaS in Neutron. To support unit testing of this, I
> created a UT module and a mock REST server module and used the httmock
> package. I found it easy to use, and was able to easily create a sub-class
> of my UT to run the same test cases with real H/W, instead of the mock REST
> server. See the original email below, for links of the UT and REST mock to
> see how I used it.
> 
> 
> I created a bug under requirements, to propose adding httmock to the
> test-requirements. Sean mentioned that there is an existing mock package,
> called httpretty (which I found is used in keystone client UTs), and I should
> petition to see if httmock should replace httpretty, since the two appear to
> overlap in functionality.
> 
> I found this link, with a brief comparison of the two:
> http://marekbrzoska.wordpress.com/2013/08/28/mocking-http-requests-in-python/
> 
> So… I’m wondering if the community is interested in adopting this package
> (with the goal of deprecating the httpretty package). Otherwise, I will work
> on reworking the UT code I have to try to use httpretty.
> 
> Would be interested in peoples’ thoughts, especially those who have worked
> with httpretty.
> 
> Thanks in advance!
> 
> So I introduced HTTPretty into the requirements and did the work around
> keystoneclient and am well aware that it has a few warts.
> 
> PCM: Great, I grabbed your name from keystone client logs and was hoping you
> had some knowledge of httpretty.
> 
> 
> 
> 
> 
> 
> At the time we were going through the changeover from httplib to requests and
> httpretty gave a good way to change over the library and ensure that we
> hadn't actually changed the issued requests at all. If we had already been
> on requests i don't know if i'd have made the same choice.
> 
> In general I am in favour of mocking the response layer rather than the
> client layer - whether we do this with httpretty or httmock doesn't bother
> me that much. Honestly I don't think a global patch of the requests Session
> object is that much safer that a global patch of the socket interface, if
> anything requests is under development and so this interface is less
> defined.
> 
> PCM: Not sure that httmock can be considered a global patch. It is a context
> lib that intercepts the call through various decorators where the request
> can be filtered/processed and if not, will fall through and call the actual
> library.
> 
> So, with the context lib, you can define several handlers for the request(s).
> When the call is made, it will try each handler and if they all return None,
> will call the original function, otherwise they return the value of the mock
> routine. Here’s an example from the test cases I created:
> 
> with httmock.HTTMock(csr_request.token, csr_request.put,
>                      csr_request.normal_get):
>     keepalive_info = {'interval': 60, 'retry': 4}
>     self.csr.configure_ike_keepalive(keepalive_info)
>     self.assertEqual(requests.codes.NO_CONTENT, self.csr.status)
>     content = self.csr.get_request('vpn-svc/ike/keepalive')
>     self.assertEqual(requests.codes.OK, self.csr.status)
>     expected = {'periodic': False}
>     expected.update(keepalive_info)
>     self.assertDictContainsSubset(expected, content)
> 
> The client code does a POST with authentication info to get a token, does
> a PUT with the setting, and then a GET to verify the value. The mock module
> defines these handlers:
> 
> @httmock.urlmatch(netloc=r'localhost')
> def token(url, request):
>     if 'auth/token-services' in url.path:
>         return {'status_code': requests.codes.OK,
>                 'content': {'token-id': 'dummy-token'}}
>
>
> @httmock.urlmatch(netloc=r'localhost')
> def normal_get(url, request):
>     if request.method != 'GET':
>         return
>     if not request.headers.get('X-auth-token', None):
>         return {'status_code': requests.codes.UNAUTHORIZED}
>     …
>     if 'vpn-svc/ike/keepalive' in url.path:
>         content = {u'interval': 60,
>                    u'retry': 4,
>                    u'periodic': True}
>         return httmock.response(requests.codes.OK, content=content)
>
> @httmock.urlmatch(netloc=r'localhost')
> def put(url, request):
>     if request.method !

Re: [openstack-dev] [Neutron][LBaaS]Clarification in regards to https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1

2014-04-09 Thread Susanne Balle
See inline, Susanne


On Wed, Apr 9, 2014 at 4:49 PM, Jorge Miramontes <
jorge.miramon...@rackspace.com> wrote:

>   Answers inlined. Thanks for the questions! They forced me to think
> about certain features.
>
>  Cheers,
> --Jorge
>
>   From: Samuel Bercovici 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, April 9, 2014 6:10 AM
> To: "OpenStack Development Mailing List (openstack-dev@lists.openstack.org)"
> 
> Subject: [openstack-dev] [Neutron][LBaaS]Clarification in regards to
> https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1
>
>   Hi,
>
>
>
> I have looked at
> https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1and
>  have a few questions:
>
> 1.   Monitoring Tab:
>
> a.   Are there users that use load balancing who do not monitor
> members? Can you share the use cases where this makes sense?
>   This is a good question. In our case we supply static ip addresses so
> some users have only one backend node. With one node it isn't necessary.
> Another case I can think of is lbs that are being used for non-critical
> environments (i.e. dev or testing environment). For the most part it would
> make sense to have monitoring.
>
>  b.  Does it make sense to define the different types of monitors (ex:
> TCP, HTTP, HTTPS)?
>   Yes it does. Http monitoring, for example, allows you to monitor
> specific URI's. I just put total utilization for all three to get some data
> out.
>
>  c.   Does any existing cloud service besides the current
> implementation of the LBaaS API supports using multiple monitors on the
> same pool? Is this a required feature?
>   I would think multiple monitors wouldn't make sense as they could
> potentially conflict. How would a decision be made in such a case?
>
2.   Logging Tab:
>
> a.   What is logging use for?
>   This is specifically connection logging. It allows the user to see all
> of the requests that went through the load balancer. It is mostly used for
> big data and troubleshooting.
>
>  b.  How does the tenant consume the logs?
>   For our offering, we send their logs in a compressed format to swift.
> However, I am open to discussion on how to handle this in a more flexible
> manner.
>

[Susanne] in our case logs are forwarded to a centralized logging system
e.g. Logstash/Elastic Search/Kibana/etc.

> 3.   SSL Tab:
>
> a.   Please explain if SSL means passing SSL traffic through the load
> balancer or using the load balancer to terminate certificates.
>   SSL termination. I updated the tab.
>
>  b.  Does it make sense to separate those (SSL termination and non
> HTTPS terminated traffic) as different rows?
>   Blue Box added a few extra rows. I identified lbs that terminate only
> secure traffic and lbs that allow both secure and insecure traffic.
>
>  c.   Can anyone explain the use cases for SSL_MIXED?
>   A lot of web sites have mixed content. The lb terminates the secure
> traffic. The insecure traffic passes through normally.
>
>  4.   HA Tab:
>
> a.   Is this a tenant facing option or is it the way the operator
> chose to implement the service
>   For us, this is operator implementation. However, since most lbs are
> considered mission critical almost all production users require HA. I could
> see this being a toggleable feature from the tenant side if they wanted to
> use a lb for testing or something non mission critical.
>

[Susanne] Same for us. It is very important for us as a service provider
 that the LB be resilient so the user doesn't have a choice. It is
resilient by default.

> 5.   Content Caching Tab:
>
> a.   Is this a load balancer feature or a CDN like feature.
>   This is a lb feature. However, depending on the amount of content you'd
> like to cache using a CDN may be overkill. Here is a link that may shed
> some light:
> http://www.rackspace.com/knowledge_center/article/content-caching-for-cloud-load-balancers
>
>  6.   L7
>
> a.   Does any cloud provider support L7 switching and L7 content
> modifications?
>   We currently do not.
>

[Susanne] we do not have that feature either.

> b.  If so can you please add a tab noting how much such features
> are used?
>   N/A - Delegating to someone who actually has data.
>
>  c.   If not, can anyone attest to whether this feature was requested
> by customers?
>   Good question. I can see the use cases but operator data on this would
> be nice for those that have it. We have had a few requests but not enough
> that would warrant development effort at this time. Hence, I would mark
> this priority low unless we can back it up with data.
>
>
>
> Thanks!
>
> -Sam.
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi

[openstack-dev] [Nova] Icehouse RC2 available

2014-04-09 Thread Thierry Carrez
Hello everyone,

Due to various release-critical issues detected in Nova icehouse RC1
(including a security issue), a new release candidate was just
generated. You can find a list of the 12 bugs fixed and a link to the
RC2 source tarball at:

https://launchpad.net/nova/icehouse/icehouse-rc2

Unless new release-critical issues are found that warrant a release
candidate respin, this RC2 will be formally released as the 2014.1 final
version on April 17 next week. You are therefore strongly encouraged to
test and validate this tarball!

Alternatively, you can directly test the milestone-proposed branch at:
https://github.com/openstack/nova/tree/milestone-proposed

If you find an issue that could be considered release-critical and
justify a release candidate respin, please file it at:

https://bugs.launchpad.net/nova/+filebug

and tag it *icehouse-rc-potential* to bring it to the release crew's
attention.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-09 Thread Robert Collins
On 10 April 2014 08:33, Clint Byrum  wrote:

>
> This is exactly what we're doing. We're just suggesting exposing
> variations in the Heat templates, rather than in the elements. It is worth
> noting that Heat has grown the ability to grab a local file and inject
> it into your template at runtime. I think it would actually make sense
> to have os-apply-config enhanced to be able to override whole template
> files based on something like this:
>
> resources:
>   server1:
> metadata:
>   template_overrides:
> "/etc/nova/nova.conf":
>   get_file [ "my_special_nova.conf.template" ]
>
> In that, we achieve what you want, but we can do so without rebuilding
> the whole image.

This makes me a little nervous: it's much easier to break
os-collect-config by forcing os-apply-config to break hard this way,
than through bad metadata. I think I'm ok with the sentiment, but
nervous about impl.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] Team Meeting Thursday at 1500 UTC

2014-04-09 Thread Carl Baldwin
Tomorrow's meeting will be at 1500 UTC in #openstack-meeting-3.  The
current agenda can be found on the subteam meeting page [1].

New on the agenda this week:  Multiple Subnets on External Network

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][QoS] API Design Document v2

2014-04-09 Thread Erik Moe
Hi,

API Design Document
v2


Includes following example:

Response:

{
  "qos": [
    {"id": "1234-5678-1234-5678",
     "description": "Gold level service",
     "type": "ratelimit",
     "policy": {"kbps": "10240"}
    },
    {"id": "1235-5678-1234-5678",
     "description": "Silver level service",
     "type": "dscp",
     "policy": "af32"
    }
  ]
}

It looks like a gold tenant would get ratelimit and a silver tenant would
get dscp policy.

Is there a proposal for how to set both ratelimit and dscp for a tenant?
Would that tenant be both gold and silver (associated with both)?

Regards,
Erik
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Clarification in regards to https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1

2014-04-09 Thread Jorge Miramontes
Answers inlined. Thanks for the questions! They forced me to think about 
certain features.

Cheers,
--Jorge

From: Samuel Bercovici 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, April 9, 2014 6:10 AM
To: "OpenStack Development Mailing List (openstack-dev@lists.openstack.org)" 

Subject: [openstack-dev] [Neutron][LBaaS]Clarification in regards to 
https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1

Hi,

I have looked at 
https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1
 and have a few questions:

1.   Monitoring Tab:

a.   Are there users that use load balancing who do not monitor members? 
Can you share the use cases where this makes sense?

This is a good question. In our case we supply static ip addresses so some 
users have only one backend node. With one node it isn't necessary. Another 
case I can think of is lbs that are being used for non-critical environments 
(i.e. dev or testing environment). For the most part it would make sense to 
have monitoring.

b.  Does it make sense to define the different types of monitors (ex: TCP, 
HTTP, HTTPS)?

Yes it does. Http monitoring, for example, allows you to monitor specific 
URI's. I just put total utilization for all three to get some data out.

c.   Does any existing cloud service besides the current implementation of 
the LBaaS API supports using multiple monitors on the same pool? Is this a 
required feature?

I would think multiple monitors wouldn't make sense as they could potentially 
conflict. How would a decision be made in such a case?

2.   Logging Tab:

a.   What is logging use for?

This is specifically connection logging. It allows the user to see all of the 
requests that went through the load balancer. It is mostly used for big data 
and troubleshooting.

b.  How does the tenant consume the logs?

For our offering, we send their logs in a compressed format to swift. However, 
I am open to discussion on how to handle this in a more flexible manner.

3.   SSL Tab:

a.   Please explain if SSL means passing SSL traffic through the load 
balancer or using the load balancer to terminate certificates.

SSL termination. I updated the tab.

b.  Does it make sense to separate those (SSL termination and non HTTPS 
terminated traffic) as different rows?

Blue Box added a few extra rows. I identified lbs that terminate only secure 
traffic and lbs that allow both secure and insecure traffic.

c.   Can anyone explain the use cases for SSL_MIXED?

A lot of web sites have mixed content. The lb terminates the secure traffic. 
The insecure traffic passes through normally.

4.   HA Tab:

a.   Is this a tenant facing option or is it the way the operator chose to 
implement the service

For us, this is operator implementation. However, since most lbs are considered 
mission critical almost all production users require HA. I could see this being 
a toggable feature from the tenant side if they wanted to use a lb for testing 
or something non mission critical.

5.   Content Caching Tab:

a.   Is this a load balancer feature or a CDN like feature.

This is a lb feature. However, depending on the amount of content you'd like to 
cache using a CDN may be overkill. Here is a link that may shed some light: 
http://www.rackspace.com/knowledge_center/article/content-caching-for-cloud-load-balancers

6.   L7

a.   Does any cloud provider support L7 switching and L7 content 
modifications?

We currently do not.

b.  If so can you please add a tab noting how much such features are used?

N/A – Delegating to someone who actually has data.

c.   If not, can anyone attest to whether this feature was requested by 
customers?

Good question. I can see the use cases but operator data on this would be nice 
for those that have it. We have had a few requests but not enough that would 
warrant development effort at this time. Hence, I would mark this priority low 
unless we can back it up with data.

Thanks!
-Sam.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-09 Thread Clint Byrum
Excerpts from Alexis Lee's message of 2014-04-09 06:44:20 -0700:
> Robert Collins said on Wed, Apr 09, 2014 at 01:58:59AM +1200:
> > I like this - something like
> > 
> > nova:
> >   config:
> > - section: default
> >   values:
> > - option: 'compute_manager'
> >   value: 'ironic.nova.compute.manager.ClusterComputeManager'
> > - section: cells
> >   values:
> > - option: 'driver'
> >   value: nova.cells.rpc_driver.CellsRPCDriver
> > 
> > 
> > should be able to represent most? all (it can handle repeating items)
> > oslo.config settings and render it easily:
> > 
> > {{#config}}
> > {{#comment}} repeats for each section {{/comment}}
> > [{{section}}]
> > {{#values}}
> > {{option}}={{value}}
> > {{/values}}
> > {{/config}}
> 
> Hello,
> 
> I've gone some distance down this road:
>   
> https://review.openstack.org/#/c/83353/6/elements/nova/os-apply-config/etc/nova/log.conf
>   https://review.openstack.org/#/c/83422/6/logstash-source.yaml
> 
> I wouldn't call the result - encoding a complete config file into Heat
> metadata - pretty. And this isn't completely genericised either.
> 

I find your templates pretty easy to read. Mustache was chosen because it
has almost no logic, and thus is _extremely_ easy to follow and predict.
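
To make that concrete, here is a small sketch of how the metadata and
template from Robert's example above would come together. It uses pystache
directly and is only an illustration of the idea, not os-apply-config's
exact code path:

# Sketch: render the nova config metadata quoted above with pystache.
# Illustration only -- not os-apply-config's actual implementation.
import pystache

TEMPLATE = """{{#config}}
[{{section}}]
{{#values}}
{{option}}={{value}}
{{/values}}
{{/config}}"""

metadata = {
    'config': [
        {'section': 'default',
         'values': [{'option': 'compute_manager',
                     'value': 'ironic.nova.compute.manager.ClusterComputeManager'}]},
        {'section': 'cells',
         'values': [{'option': 'driver',
                     'value': 'nova.cells.rpc_driver.CellsRPCDriver'}]},
    ]
}

print(pystache.render(TEMPLATE, metadata))
# Expected output (roughly):
# [default]
# compute_manager=ironic.nova.compute.manager.ClusterComputeManager
# [cells]
# driver=nova.cells.rpc_driver.CellsRPCDriver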

> It'd be much better if TripleO image elements focused on installing and
> starting services and allowed system integrators to define the
> configuration. In one place, in plain text files, the UNIX way. I've
> appended my proposal to Rob's etherpad here:
>   https://etherpad.openstack.org/p/tripleo-config-passthrough
> 

This assumes that we don't want system integrators to contribute to
TripleO, which is the opposite of how things are. We absolutely do,
and in fact, that is part of the point of having a program around
OpenStack deployment. Let's get system integrators into OpenStack's CI
system and let's get a few of the most important scenarios into the gate
of OpenStack.

As a system integrator, do you want to say to your customers that you
start with an unusable set of tools that the community tests individually,
or do you want to say that you start with the deployment that the
community tests directly on every commit, and then enhance based on
individual customer need?

> Soon-to-be outdated copy appended here:
> 
> 
> Hi Rob, I have some serious concerns about the above approaches. For the
> sake of argument, let's suppose we want to write a file that looks like
> a Heat template. How would you write a Mustache template that handles
> that level of nesting? Even if you accomplish that, how readable do you
> think the metadata to fill out that template would look?
> 
> I see the system integration process emerging like this:
> * Figure out what files you want + what you want in them
> * Slice and dice that into metadata
> * Write some fairly complicated templates to reconstitute the metadata
> * Get out more or less what you started with
> 
> I'd like to propose an alternative method where Heat and TripleO barely
> touch the config. The system integrator writes an image element per
> node-flavour, EG "mycorp-compute-config". If they choose, they could
> write more (EG for specific hardware) limited only by their
> devtest-equivalent's ability to allocate those. This element contains a
> 99-os-apply-config directory, the templates from which overwrite any
> templates from normal os-apply-config directories in other elements.
> os-apply-config/install.d/99-install-config-templates will need to be
> patched for this to be possible, but this is very little work in
> comparison to the alternatives. I could also support simply an
> os-apply-config.override directory, if a full numbered set of dirs seems
> overkill, but in this case normal elements would have to be forbidden
> from using it (and people being as they are, someone would). The
> templates in that directory are 99% plain config files, laid out in a
> single filesystem structure exactly as the system integrator wants them.
> The only templated values which need to be supplied by Heat are those
> which vary per-instance.
>

So one reason I'd rather see overrides done generically as heat
parameters/metadata/etc. is that it may not be entirely clear when a
user needs to override from a default, and having to distribute a new
image to do that is not necessarily the best user experience, especially
if one is "tinkering".

> If we do this, tripleo-image-elements should focus on installing and
> starting services. They should only include a minimal viable
> configuration for demo purposes. This should greatly reduce the amount
> of work required to produce a new element. Also the number of Heat
> parameters used by any element (only per-instance would be necessary,
> anything further is a convenience).
>

OpenStack is not for demo purposes, it is for production usage. There
is no reason we

[openstack-dev] [Tripleo] Reminder! Sessions! Summit!

2014-04-09 Thread Robert Collins
Summit time is here - please suggest sessions :)

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dropping or weakening the 'only import modules' style guideline - H302

2014-04-09 Thread Dan Smith
>> So I'm a soft -1 on dropping it from hacking.

Me too.

> from testtools import matchers
> ...
> 
> Or = matchers.Or
> LessThan = matchers.LessThan
> ...

This is the right way to do it, IMHO, if you have something like
matchers.Or that needs to be treated like part of the syntax. Otherwise,
module-only imports massively improves the ability to find where
something comes from.
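
A tiny self-contained sketch of that aliasing pattern, in case it's useful
(names are just illustrative):

# Module-only import keeps H302 happy; local aliases restore the DSL feel.
from testtools import TestCase
from testtools import matchers

Or = matchers.Or
Equals = matchers.Equals


class ExampleTest(TestCase):
    def test_foo_is_one_or_two(self):
        foo = "1"
        self.assertThat(foo, Or(Equals("1"), Equals("2")))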

I also think that machine-enforced style, where appropriate, is very
helpful in keeping our code base readable. Repeated patterns and style
help a lot, and anything that can be easily machine-enforced is a win in
my book.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dropping or weakening the 'only import modules' style guideline - H302

2014-04-09 Thread Duncan Thomas
On 6 August 2013 21:18, Christopher Armstrong
 wrote:

> I think it's really unfortunate that people will block patches based on
> stylistic concerns. The answer, IMO, is to codify in policy that stylistic
> issues *cannot* block a patch from landing.

I think the problems here are:
(a) death by a thousand cuts - code that has many tiny stylistic
differences is harder to read, harder to reason about and harder to
get correct
(b) if the existing code is of highest possible quality, it sets the
bar for incoming code. Requiring new code to be significantly better
than the old code just makes new contributors resentful.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dropping or weakening the 'only import modules' style guideline - H302

2014-04-09 Thread Duncan Thomas
I totally agree with Sean. If you're going to weaken the rule in a
codeable way (e.g. it doesn't apply to tests, or to certain named
modules or whatever), then great, fix up the HACKING tool and make the
code slightly more readable. But the general advantages of having the
check outweigh the costs... -1 on removing it / weakening it unless you
can update the tool to understand the new rule.

On 6 August 2013 12:32, Sean Dague  wrote:
> On 08/05/2013 10:38 PM, Monty Taylor wrote:
>>
>>
>>
>> On 08/05/2013 11:26 PM, Robert Collins wrote:
>>>
>>> I wanted to get a temperature reading from everyone on this style
>>> guideline.
>>>
>>> My view on it is that it's a useful heuristic but shouldn't be a
>>> golden rule applied everywhere. Things like matchers are designed to be
>>> used as a dsl:
>>>  self.assertThat(foo, Or(Equals("1"), Equals("2")))
>>>
>>> rather than what H302 enforces:
>>>  self.assertThat(foo, matchers.Or(matchers.Equals("1"),
>>> matchers.Equals("2")))
>>>
>>> Further, conflicting module names become harder to manage, when one
>>> could import just the thing.
>>>
>>> Some arguments for requiring imports of modules:
>>>   - makes the source of symbols obvious
>>> - Actually, it has no impact on that as the import is still present
>>> and clear in the file. import * would obfuscate things, but I'm not
>>> arguing for that.
>>> - and package/module names can (and are!) still ambiguous. Like
>>> 'test.' - whats that? -> consult the imports.
>>>   - makes mocking more reliable
>>> - This is arguably the case, but it's a mirage: it isn't a complete
>>> solution because modules still need to be mocked at every place they
>>> are dereferenced : only import modules helps to the extent that one
>>> never mocks modules. Either way this failure mode of mocking is
>>> usually very obvious IME : but keeping the rule as a recommendation,
>>> *particularly* when crossing layers to static resources is a good
>>> idea.
>>>   - It's in the Google Python style guide
>>>
>>> (http://google-styleguide.googlecode.com/svn/trunk/pyguide.html?showone=Imports#Imports)
>>> - shrug :)
>>>
>>> What I'd like us to do is weaken it from a MUST to a MAY, unless noone
>>> cares about it at all, in which case lets just turn it off entirely.
>>
>>
>> Enforcing it is hard. The code that does it has to import and then make
>> guesses on failures.
>>
>> Also - I agree with Robert on this. I _like_ writing my code to not
>> import bazillions of things... but I think the hard and fast rule makes
>> things crappy at times.
>
>
> The reason we go hard and fast on certain rules is to reduce review time by
> people. If something is up for debate we get bikeshedding in reviews where
> one reviewer tells someone to do it one way, 2 days later they update their
> review, another reviewer comes in and tells them to do it the other way.
> (This is not theoretical, it happens quite often, if you do a lot of reviews
> you see it all the time.) It also ends up being something reviewers can stop
> caring about, because the machine will pick it up. Giving them the ability
> to focus on higher order issues, and still keeping the code from natural
> entropy.
>
> MUST == computer can do it, less work for core review time (which is
> realistically one of our most constrained resources in OpenStack)
> MAY == humans have to make a judgement call, which means more work for our
> already constrained review teams
>
> I've found H302 to really be useful on reviewing large chunks of code I've
> not been in much before. And get seriously annoyed being in projects that
> don't have it enforced yet (tempest is guilty of that). Being able to
> quickly know what namespace things are out of saves time.
>
> Honestly, after spending the year with the constraint in OpenStack, I'm
> never going to import modules directly in my personal projects, as I think
> the benefits of the explicitness have shown themselves pretty well.
>
> So I'm a soft -1 on dropping it from hacking.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-09 Thread Sylvain Bauza
2014-04-09 18:57 GMT+02:00 Susanne Balle :
>
> Does Gantt work with Devstack? I am assuming the link will give me
> directions on how to test it and contribute to the project.
>
>

https://github.com/openstack/gantt/blob/master/README.rst#disclaimer

Please consider the Gantt repository a no-op repo, without support. At the
moment, it is more of a sandbox than an active project.
As said previously, any changes should happen in Nova, not in Gantt. As a
gantt-core, I'm trying to follow all changes related to the scheduler in
Nova (thanks to any commit msg having 'scheduler' in it) so as to make sure
both Nova and forklift changes are going in the same direction.

-Sylvain



> Susanne
>
>
> On Wed, Apr 9, 2014 at 12:44 PM, Henrique Truta <
> henriquecostatr...@gmail.com> wrote:
>
>> @Oleg, @Sylvain, @Leandro, Thanks. I'll check the Gantt project and the
>> blueprint
>>
>>
>> 2014-04-09 12:59 GMT-03:00 Sylvain Bauza :
>>
>>
>>>
>>>
>>> 2014-04-09 17:47 GMT+02:00 Jay Lau :
>>>
>>> @Oleg, I'm still not sure about the target of Gantt: is it for initial
 placement policy, runtime policy, or both? Can you help clarify?


>>> I don't want to talk on behalf of Oleg, but Gantt is targeted to be the
>>> forklift of the current Nova scheduler. So, a placement decision based on
>>> dynamic metrics would be worth it.
>>> That said, as Gantt is not targeted to be delivered until Juno at least
>>> (with Nova sched deprecated), I think any progress on a BP should target
>>> Nova with respect to the forklift efforts, so it would automatically be
>>> ported to Gantt once the actual fork would happen.
>>>
>>> -Sylvain
>>>
>>> Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> --
>> Ítalo Henrique Costa Truta
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-09 Thread Clint Byrum
Excerpts from Duncan Thomas's message of 2014-04-09 11:11:06 -0700:
> On 8 April 2014 18:25, Clint Byrum  wrote:
> > Excerpts from Jay Dobies's message of 2014-04-08 06:40:07 -0700:
> 
> >> I've always assumed TripleO is very low-level. Put another way,
> >> non-prescriptive. It's not going to push an agenda that says you should
> >> be doing things a certain way, but rather gives you more than enough
> >> rope to hang yourself (just makes it easier).
> >>
> >
> > And I've always looked at TripleO as the opposite. It is a default
> > deployment of OpenStack. That is why we look at having to set a
> > non-default option as a bug. That is why we only currently offer one
> > set of Heat templates.
> >
> > Of course I want to see it more widely configurable, as I understand that
> > there will be whole sections of the OpenStack interested user base that
> > won't want an ovs overlay network. There will be shops that simply refuse
> > to use MySQL, or want to put swift proxies on their own nodes, etc. etc.
> >
> > But if we can't all agree on a widely usable set of defaults, and deploy
> > that, then I think OpenStack, not just TripleO, is forever going to be a
> > framework on which proprietary solutions are built, rather than a whole
> > open source platform.
> 
> I think this is dangerous thinking - the config you want depends so
> hugely on your intended workload and available hardware that trying
> any strong view of what an Openstack deployment should look like into
> the deployment tool forever forces that deployment tool to be a minor,
> niche product that *has* to be replaced by something more expressive
> in order to be widely usable. The config you want for a primary hadoop
> shop is totally different to what you'd want for primary web-host shop
> is somewhat different to what you'd want for a public/generic cloud,
> etc. Things like AZ support, neutron model, cinder back-end choice,
> H/A model etc are dictated by scale and use-cases. If you only want
> your config tool to deal with one deployment type, that tool becomes
> pretty much irrelevant to the totality of the Openstack effort, and
> should be replaced by something more layered/openminded.
> 

I can certainly understand how one might mistake TripleO for "a deployment
tool".

It is no such thing. OpenStack is the deployment suite, with the tools
being Nova, Glance, Neutron, Heat, diskimage-builder, os-*-config, etc.

TripleO is a _program_, in the sense of an effort to gather collaborative
forces, to deploy OpenStack using these tools. So any concern that you
have that these tools will end up being niche tools will in fact affect
all of OpenStack.

The "niche" that we're aiming at is the broadest base of users that
OpenStack has. That would be the ones who we have driven the defaults
toward. If there is no group of users that can use the defaults, then
our first deployment will not have a large uptake beyond OpenStack's
CI itself.

However, there is nothing about this first goal of deploying a default
OpenStack cloud that prevents us from then widening its purpose for more
and more sets of users.

> This isn't to say we must boil the ocean right now and make everything
> available, but rather that decisions should take the long view into
> account.
> 

I don't think any of us suggested sacrificing the long term view for the
short term milestone. What I said is that what we're aiming at now as
a _milestone_ is not a widely configurable cloud, but a default cloud.
And things that are done to support widening the tools' focus slow
progress toward the current milestone.

I get a sense that people are feeling adversarial toward this focus,
rather than collaborative, and that troubles me. It may be my fault,
so I want to make it very clear that I _do_ welcome any and all
contribution. If people are willing to put in the time to get things
done in TripleO, then that is hugely valuable. I'm only suggesting that
if you have a choice in what to do next, I suggest that it be things
that get us closer to our current milestones.

Thanks everyone for your thoughts on this.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday April 10th at 17:00UTC

2014-04-09 Thread Matthew Treinish
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, April 10th at 17:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

13:00 EDT
02:00 JST
02:30 ACST
19:00 CEST
12:00 CDT
10:00 PDT

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-09 Thread Duncan Thomas
On 8 April 2014 18:25, Clint Byrum  wrote:
> Excerpts from Jay Dobies's message of 2014-04-08 06:40:07 -0700:

>> I've always assumed TripleO is very low-level. Put another way,
>> non-prescriptive. It's not going to push an agenda that says you should
>> be doing things a certain way, but rather gives you more than enough
>> rope to hang yourself (just makes it easier).
>>
>
> And I've always looked at TripleO as the opposite. It is a default
> deployment of OpenStack. That is why we look at having to set a
> non-default option as a bug. That is why we only currently offer one
> set of Heat templates.
>
> Of course I want to see it more widely configurable, as I understand that
> there will be whole sections of the OpenStack interested user base that
> won't want an ovs overlay network. There will be shops that simply refuse
> to use MySQL, or want to put swift proxies on their own nodes, etc. etc.
>
> But if we can't all agree on a widely usable set of defaults, and deploy
> that, then I think OpenStack, not just TripleO, is forever going to be a
> framework on which proprietary solutions are built, rather than a whole
> open source platform.

I think this is dangerous thinking - the config you want depends so
hugely on your intended workload and available hardware that baking
any strong view of what an OpenStack deployment should look like into
the deployment tool forever forces that deployment tool to be a minor,
niche product that *has* to be replaced by something more expressive
in order to be widely usable. The config you want for a primary hadoop
shop is totally different to what you'd want for primary web-host shop
is somewhat different to what you'd want for a public/generic cloud,
etc. Things like AZ support, neutron model, cinder back-end choice,
H/A model etc are dictated by scale and use-cases. If you only want
your config tool to deal with one deployment type, that tool becomes
pretty much irrelevant to the totality of the Openstack effort, and
should be replaced by something more layered/openminded.

This isn't to say we must boil the ocean right now and make everything
available, but rather that decisions should take the long view into
account.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra][Ceilometer][MagnetoDB] HBase database in devstack

2014-04-09 Thread Clint Byrum
Excerpts from Ruslan Kamaldinov's message of 2014-04-09 10:24:48 -0700:
> On Tue, Apr 8, 2014 at 8:42 PM, Sean Dague  wrote:
> > I think it's important to understand what we mean by "stable" in the
> > gate. It means that the end point is 99.% available. And that its
> > up-or-down status is largely under our control.
> >
> > Things that are not stable by this definition which we've moved away
> > from for the gate:
> >  * github.com - one of the reasons for git.openstack.org
> >  * pypi.python.org - one of the reasons for our own pypi mirror
> >  * upstream distro mirrors (we use cloud-specific mirrors, which even
> > then do fail sometimes, more than we'd like)
> >
> > Fedora.org is not stable by this measure either. Downloading an iso from
> > fedora.org fails 5% of the time in the gate.
> >
> > I'm sure the Hortonworks folks are good folks, but by our standards of
> > reliability, no one stacks up. And an outage on their behalf means that
> > any project which gates on it will be blocked from merging any code
> > until it's addressed. If Ceilometer wants to take that risk in their
> > check queue (and be potentially blocked) that might be one thing, and we
> > could talk about that. But we definitely can't co-gate and block all of
> > openstack because of a hortonworks outage (which will happen, especially
> > if we download packages from them 600 - 1000 times a day).
> 
> A natural solution for this would be a local-to-infra package mirror for
> HBase, Ceilometer, Mongo and all the dependencies not present in upstream
> Ubuntu. It seems straightforward from the technical point of view. It'll help
> to keep the Gate invulnerable to any outages in third-party mirrors. Of course,
> someone has to sign up to create scripts for that mirror and support it in the
> future.
> 
> But, other concerns were expressed in the past. Let me quote Jeremy Stanley
> (from https://review.openstack.org/#/c/66884/):
> > This will need to be maintained in Ubuntu (and backported to 12.04 in Ubuntu
> > Cloud Archive or if necessary a PPA managed by the same package maintenance
> > team taking care of it in later Ubuntu releases). We don't install test
> > requirements system-wide on our long-running test slaves unless we can be
> > assured of security support from the Linux distribution vendor.
> 
> There is no easy workaround here. Traditionally this kind of software is
> installed from vendor-supported mirrors and distributions, and they're the
> ones who maintain and provide security updates for Hadoop/HBase packages.
> In the case of Ceilometer, I think that having real tests on real
> databases is more important than the requirement for the packages to have
> security support from a Linux distribution.

This is a huge philosophical question for OpenStack in general. Do we
want to recommend things that we won't, ourselves, use in our
infrastructure?

I think for the most part we've taken a middle of the road approach
where we make sure the default backends and drivers are things that
_are_ supported in distros, and are things we're able to use. We also
let in the crazy-sauce backend drivers for those who are willing to run
3rd-party testing for them.

So I think what is needed here is for MagnetoDB to have at least one
backend that _is_ supported in a distro, and legally friendly to OpenStack
users. Unfortunately:

* HBase - not in any distro I could find (removed from Debian actually)
* Cassandra - not in any distro
* MongoDB - let's not have that license discussion again

Now, this is no simple matter. When I attempted to package Cassandra for
Ubuntu 3 years ago, there was zero interest upstream in supporting it
without embedding certain java libraries. The attempts at extracting the
embedded libraries resulted in failed tests, patches, and an endless
series of "why are you doing this?" type questions from upstream.
Basically the support model of the distro isn't compatible with the
support model of these databases.

What I think would have to happen is that infra would need to be
willing to reach out and have a direct relationship with upstream for
Cassandra and HBase, and we would need to be willing to ask OpenStack
users to do the same. Otherwise, I don't think MagnetoDB could ever be
integrated with either of them as the default driver.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso][neutron] proxying oslo.messaging from management network into tenant network/VMs

2014-04-09 Thread Clint Byrum
Excerpts from Isaku Yamahata's message of 2014-04-09 01:33:49 -0700:
> Hello developers.
> 
> 
> As discussed many times so far[1], there are many projects that need
> to propagate RPC messages into VMs running on OpenStack. Neutron in my case.
> 
> My idea is to relay RPC messages from management network into tenant
> network over file-like object. By file-like object, I mean virtio-serial,
> unix domain socket, unix pipe and so on.
> I've written some code based on oslo.messaging[2][3] and documentation
> on use cases.[4][5]
> Only file-like transport and proxying messages would be in oslo.messaging
> and agent side code wouldn't be a part of oslo.messaging.
> 
> 
> use cases:([5] for more figures)
> file-like object: virtio-serial, unix domain socket, unix pipe
> 
>   server <-> AMQP <-> agent in host <-virtio serial-> guest agent in VM
>   per VM
> 
>   server <-> AMQP <-> agent in host <-unix socket/pipe->
>  agent in tenant network <-> guest agent in VM
> 
> 
> So far there are security concerns to forward oslo.messaging from management
> network into tenant network. One approach is to allow only cast-RPC from
> server to guest agent in VM so that guest agent in VM only receives messages
> and can't send anything to servers. With unix pipe, it's write-only
> for server, read-only for guest agent.
> 

Hi Isaku. I like that you are bringing some new energy into this
discussion.

What if we swapped your local socket out for a connection managed by
something similar to the neutron metadata agent that forwards connections
to the EC2 metadata service? I could see a scheme something like this:

- guest boots, agent contacts link-local on port 80 with a REST request
  for a communication channel to service XYZ.
- metadata agent is allocated a port on the network of the agent and
  proxies that port to the intended endpoint.
- guest now communicates directly with that address, still nicely
  confined to the private network without any sort of gateway, but with
  an ability to talk to "under the cloud" services.

I prefer a scheme that uses the network because it will be generically
usable no matter the transport desired (marconi, amqp, 0mq, whatever)
and is directly modeled in Neutron's terms, rather than requiring tight
coupling with Nova and the hypervisors.
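
To make that concrete, a guest-side sketch of the flow above might look like
the following (the endpoint path and response format are entirely
hypothetical - nothing like this exists in the metadata service today; it is
only meant to illustrate the shape of the scheme):

import json
import urllib2   # Python 2, to match the era of this thread

METADATA = 'http://169.254.169.254'

def request_channel(service='XYZ'):
    # Step 1: ask the (hypothetical) metadata-style proxy for a channel
    # to the named under-the-cloud service.
    resp = urllib2.urlopen('%s/openstack/channels/%s' % (METADATA, service))
    channel = json.loads(resp.read())
    # Step 2: the proxy answers with an address on our own tenant network,
    # e.g. {"host": "10.0.0.3", "port": 31337}, which it forwards for us.
    return channel['host'], channel['port']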

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra][Ceilometer][MagnetoDB] HBase database in devstack

2014-04-09 Thread Ruslan Kamaldinov
On Tue, Apr 8, 2014 at 8:42 PM, Sean Dague  wrote:
> I think it's important to understand what we mean by "stable" in the
> gate. It means that the end point is 99.% available. And that its
> up-or-down status is largely under our control.
>
> Things that are not stable by this definition which we've moved away
> from for the gate:
>  * github.com - one of the reasons for git.openstack.org
>  * pypi.python.org - one of the reasons for our own pypi mirror
>  * upstream distro mirrors (we use cloud-specific mirrors, which even
> then do fail sometimes, more than we'd like)
>
> Fedora.org is not stable by this measure either. Downloading an iso from
> fedora.org fails 5% of the time in the gate.
>
> I'm sure the Hortonworks folks are good folks, but by our standards of
> reliability, no one stacks up. And an outage on their behalf means that
> any project which gates on it will be blocked from merging any code
> until it's addressed. If Ceilometer wants to take that risk in their
> check queue (and be potentially blocked) that might be one thing, and we
> could talk about that. But we definitely can't co-gate and block all of
> openstack because of a hortonworks outage (which will happen, especially
> if we download packages from them 600 - 1000 times a day).

A natural solution for this would be a local-to-infra package mirror for
HBase, Ceilometer, Mongo and all the dependencies not present in upstream
Ubuntu. It seems straightforward from the technical point of view. It'll help
to keep the Gate invulnerable to any outages in third-party mirrors. Of course,
someone has to sign up to create scripts for that mirror and support it in the
future.

But, other concerns were expressed in the past. Let me quote Jeremy Stanley
(from https://review.openstack.org/#/c/66884/):
> This will need to be maintained in Ubuntu (and backported to 12.04 in Ubuntu
> Cloud Archive or if necessary a PPA managed by the same package maintenance
> team taking care of it in later Ubuntu releases). We don't install test
> requirements system-wide on our long-running test slaves unless we can be
> assured of security support from the Linux distribution vendor.

There is no easy workaround here. Traditionally this kind of software is
installed from vendor-supported mirrors and distributions, and they're the
ones who maintain and provide security updates for Hadoop/HBase packages.
In the case of Ceilometer, I think that having real tests on real
databases is more important than the requirement for the packages to have
security support from a Linux distribution.

Thanks,
Ruslan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-09 Thread Steve Gordon
- Original Message -
> > -Original Message-
> > From: Chris Friesen [mailto:chris.frie...@windriver.com]
> > Sent: 09 April 2014 15:37
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Nova] Hosts within two Availability Zones :
> > possible or not ?
> > 
> > On 04/09/2014 03:55 AM, Day, Phil wrote:
> > 
> > > I would guess that affinity is more likely to be a soft requirement
> > > than anti-affinity, in that I can see some services just not meeting
> > > their HA goals without anti-affinity but I'm struggling to think of a
> > > use case where affinity is a must for the service.
> > 
> > Maybe something related to latency?  Put a database server and several
> > public-facing servers all on the same host and they can talk to each other
> > with less latency than if they had to go over the wire to another host?
> > 
> I can see that as a high-want, but would you actually rather not start the
> service if you couldn't get it ?  I suspect not, as there are many other
> factors that could affect performance.  On the other hand I could imagine a
> case where I declare it's not worth having a second VM at all if I can't get
> it on a separate server.   Hence affinity feels more "soft" and
> anti-affinity "hard" in terms of requirements.

As the orchestrator, if affinity is important to me and it turns out I can't 
place all of the VMs in the group with affinity, I would likely use the failure 
to place the second (or subsequent) instance as my cue to roll back and destroy 
the original VM(s) as well. I don't think either policy is naturally any more 
hard or soft - it depends on the user and their workloads - this is why I think 
a "soft" implementation of either filter should be in addition to rather than 
instead of the existing ones, though "soft" may make more sense for the 
defaults. 

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso][neutron] proxying oslo.messaging from management network into tenant network/VMs

2014-04-09 Thread Daniel P. Berrange
On Wed, Apr 09, 2014 at 05:33:49PM +0900, Isaku Yamahata wrote:
> Hello developers.
> 
> 
> As discussed many times so far[1], there are many projects that need
> to propagate RPC messages into VMs running on OpenStack. Neutron in my case.
> 
> My idea is to relay RPC messages from management network into tenant
> network over file-like object. By file-like object, I mean virtio-serial,
> unix domain socket, unix pipe and so on.
> I've written some code based on oslo.messaging[2][3] and documentation
> on use cases.[4][5]
> Only file-like transport and proxying messages would be in oslo.messaging
> and agent side code wouldn't be a part of oslo.messaging.
> 
> 
> use cases:([5] for more figures)
> file-like object: virtio-serial, unix domain socket, unix pipe
> 
>   server <-> AMQP <-> agent in host <-virtio serial-> guest agent in VM
>   per VM
> 
>   server <-> AMQP <-> agent in host <-unix socket/pipe->
>  agent in tenant network <-> guest agent in VM
> 
> 
> So far there are security concerns to forward oslo.messaging from management
> network into tenant network. One approach is to allow only cast-RPC from
> server to guest agent in VM so that guest agent in VM only receives messages
> and can't send anything to servers. With unix pipe, it's write-only
> for server, read-only for guest agent.
>
> Thoughts? comments?

I'm still somewhat apprehensive about the idea of just proxying arbitrary
data between host & guest agent at the message bus protocol level.
I'd tend to be more comfortable with something like that going through
the virt driver API in the compute node.

Also, how are you proposing to deal with live migration of VMs ? The
virtio serial channel can get closed due to QEMU migrating while the
proxy is in the middle of sending data to the guest VM, potentially
causing a lost or mangled message in the guest, and the sender won't
know this if the channel is write-only, since there's no ACK.

> Details of Neutron NFV use case[6]:
> Neutron services so far typically run agents on the host; the agent
> on the host receives RPCs from the neutron server, then it executes necessary
> operations. Sometimes the agent on the host issues RPCs to the neutron server
> periodically (e.g. status reports etc.).
> It's desirable to make such services virtualized as Network Function
> Virtualization (NFV), i.e. make those features run in VMs. So it's quite a
> natural approach to propagate those RPC messages into agents in VMs.

What sort of things are you expecting the guest agent to do for Neutron ?
You have to bear in mind that the guest OS is 100% untrusted from the
host's POV, so anything that Neutron asks the guest agent to do can be
completely ignored, or manipulated in any way the guest OS decides to.
Similarly, if there were a feedback channel, any data the Neutron might
receive back from the guest agent has to be considered untrustworthy,
so should not be used to make functional decisions in Neutron.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-09 Thread Day, Phil
> -Original Message-
> From: Chris Friesen [mailto:chris.frie...@windriver.com]
> Sent: 09 April 2014 15:37
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] Hosts within two Availability Zones :
> possible or not ?
> 
> On 04/09/2014 03:55 AM, Day, Phil wrote:
> 
> > I would guess that affinity is more likely to be a soft requirement
> > than anti-affinity, in that I can see some services just not meeting
> > their HA goals without anti-affinity but I'm struggling to think of a
> > use case where affinity is a must for the service.
> 
> Maybe something related to latency?  Put a database server and several
> public-facing servers all on the same host and they can talk to each other
> with less latency than if they had to go over the wire to another host?
> 
I can see that as a high-want, but would you actually rather not start the 
service if you couldn't get it ?  I suspect not, as there are many other 
factors that could affect performance.  On the other hand I could imagine a 
case where I declare it's not worth having a second VM at all if I can't get it 
on a separate server.   Hence affinity feels more "soft" and anti-affinity 
"hard" in terms of requirements.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-09 Thread Susanne Balle
Ditto. I am interested in contributing as well.

Does Gantt work with Devstack? I am assuming the link will give me
directions on how to test it and contribute to the project.

Susanne


On Wed, Apr 9, 2014 at 12:44 PM, Henrique Truta <
henriquecostatr...@gmail.com> wrote:

> @Oleg, @Sylvain, @Leandro, Thanks. I'll check the Gantt project and the
> blueprint
>
>
> 2014-04-09 12:59 GMT-03:00 Sylvain Bauza :
>
>
>>
>>
>> 2014-04-09 17:47 GMT+02:00 Jay Lau :
>>
>> @Oleg, I'm still not sure about the target of Gantt - is it for initial
>>> placement policy, run-time policy, or both? Can you help clarify?
>>>
>>>
>> I don't want to talk on behalf of Oleg, but Gantt is targeted to be the
>> forklift of the current Nova scheduler. So, a placement decision based on
>> dynamic metrics would be worth it.
>> That said, as Gantt is not targeted to be delivered until Juno at least
>> (with Nova sched deprecated), I think any progress on a BP should target
>> Nova with respect to the forklift efforts, so it would automatically be
>> ported to Gantt once the actual fork would happen.
>>
>> -Sylvain
>>
>> Jay
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> --
> Ítalo Henrique Costa Truta
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso][neutron] proxying oslo.messaging from management network into tenant network/VMs

2014-04-09 Thread Dmitry Mescheryakov
> I agree with those arguments.
> But I don't see how the network-based agent approach works with Neutron
> networking at the moment. Can you please elaborate on it?

Here is the scheme of network-based agent:

server <-> MQ (Marconi) <-> agent

As Doug said, Marconi exposes a REST API, just like any other OpenStack
service. The services it provides are similar to those of the MQ solutions
(RabbitMQ, Qpid, etc.). I.e., very simply, there are methods:
 * put_message(queue_name, message_payload)
 * get_message(queue_name)

Multi-tenancy is provided by the same means as in the other OpenStack
projects - user supplies Keystone token in the request and it
determines the tenant used.

As for the network, a network-based agent requires a TCP connection
to Marconi. I.e. the agent running on the VM needs to be able to
connect to Marconi, but not vice versa. That does not sound like a
harsh requirement.

The standard MQ solutions like Rabbit and Qpid actually could be used
here instead of Marconi with one drawback - it is really hard to
reliably implement tenant isolation with them.
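
To make the shape of this concrete, an agent-side loop against that simplified
interface might look like the sketch below. The put_message/get_message names
mirror the simplified methods above, not the real Marconi (or oslo.messaging)
client API, so treat them as placeholders:

import time

def serve_queue(client, queue_name, handler):
    # Poll the per-tenant queue and dispatch each payload to a handler.
    while True:
        message = client.get_message(queue_name)
        if message is None:
            time.sleep(1)              # nothing queued; back off briefly
            continue
        reply = handler(message)       # run the requested operation in the VM
        # Send the result back on a companion queue so the server gets feedback.
        client.put_message(queue_name + '.replies', reply)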

Thanks,

Dmitry

2014-04-09 17:38 GMT+04:00 Isaku Yamahata :
> Hello Dmitry. Thank you for reply.
>
> On Wed, Apr 09, 2014 at 03:19:10PM +0400,
> Dmitry Mescheryakov  wrote:
>
>> Hello Isaku,
>>
>> Thanks for sharing this! Right now in Sahara project we think to use
>> Marconi as a mean to communicate with VM. Seems like you are familiar
>> with the discussions happened so far. If not, please see links at the
>> bottom of UnifiedGuestAgent [1] wiki page. In short we see Marconi's
>> supports for multi-tenancy as a huge advantage over other MQ
>> solutions. Our agent is network-based, so tenant isolation is a real
>> issue here. For clarity, here is the overview scheme of network based
>> agent:
>>
>> server <-> MQ (Marconi) <-> agent
>>
>> All communication goes over network. I've made a PoC of the Marconi
>> driver for oslo.messaging, you can find it at [2]
>
> I'm not familiar with Marconi, so please enlighten me first.
> How does MQ (Marconi) communicate with both the management network and
> the tenant network?
> Does it work with Neutron networking (not nova-network)?
>
> Neutron isolates not only tenant networks from each other,
> but also the management network, at L2. So OpenStack servers can't send
> any packets to VMs, and VMs can't send any to OpenStack servers.
> This is the reason why neutron introduced the HTTP proxy for instance metadata.
> It is also the reason why I chose to introduce a new agent on the host.
> If Marconi (or other projects like Sahara) has already solved those issues,
> that's great.
>
>
>> We also considered 'hypervisor-dependent' agents (as I called them in
>> the initial thread) like the one you propose. They also provide tenant
>> isolation. But the drawback is _much_ bigger development cost and more
>> fragile and complex deployment.
>>
>> In case of network-based agent all the code is
>>  * Marconi driver for RPC library (oslo.messaging)
>>  * thin client for server to make calls
>>  * a guest agent with thin server-side
>> If you write your agent on python, it will work on any OS with any
>> host hypervisor.
>>
>>
>> For a hypervisor-dependent agent it becomes much more complex. You need
>> one more additional component - a proxy-agent running on Compute host,
>> which makes deployment harder. You also need to support various
>> transports for various hypervisors: virtio-serial for KVM, XenStore
>> for Xen, something for Hyper-V, etc. Moreover guest OS must have
>> driver for these transports and you will probably need to write
>> different implementation for different OSes.
>>
>> Also you mention that in some cases a second proxy-agent is needed and
>> again in some cases only cast operations could be used. Using cast
>> only is not an option for Sahara, as we do need feedback from the
>> agent and sometimes getting the return value is the main reason to
>> make an RPC call.
>>
>> I didn't see a discussion in Neutron on which approach to use (if there
>> was one, I missed it). I see the simplicity of the network-based agent as a huge
>> advantage. Could you please clarify why you've picked design depending
>> on hypervisor?
>
> I agree with those arguments.
> But I don't see how the network-based agent approach works with Neutron
> networking at the moment. Can you please elaborate on it?
>
>
> thanks,
>
>
>> Thanks,
>>
>> Dmitry
>>
>>
>> [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
>> [2] https://github.com/dmitrymex/oslo.messaging
>>
>> 2014-04-09 12:33 GMT+04:00 Isaku Yamahata :
>> > Hello developers.
>> >
>> >
>> > As discussed many times so far[1], there are many projects that need
>> > to propagate RPC messages into VMs running on OpenStack. Neutron in my 
>> > case.
>> >
>> > My idea is to relay RPC messages from management network into tenant
>> > network over file-like object. By file-like object, I mean virtio-serial,
>> > unix domain socket, unix pipe and so on.
>> > I've written some code based on oslo.messaging[2][3] and documentation
>> > on use cases.[4][5]
>> > Only file-like

Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-09 Thread Henrique Truta
@Oleg, @Sylvain, @Leandro, Thanks. I'll check the Gantt project and the
blueprint


2014-04-09 12:59 GMT-03:00 Sylvain Bauza :

>
>
>
> 2014-04-09 17:47 GMT+02:00 Jay Lau :
>
> @Oleg, I'm still not sure about the target of Gantt - is it for initial
>> placement policy, run-time policy, or both? Can you help clarify?
>>
>>
> I don't want to talk on behalf of Oleg, but Gantt is targeted to be the
> forklift of the current Nova scheduler. So, a placement decision based on
> dynamic metrics would be worth it.
> That said, as Gantt is not targeted to be delivered until Juno at least
> (with Nova sched deprecated), I think any progress on a BP should target
> Nova with respect to the forklift efforts, so it would automatically be
> ported to Gantt once the actual fork would happen.
>
> -Sylvain
>
> Jay
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
--
Ítalo Henrique Costa Truta
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Support for multiple sort keys and sort directions in REST GET APIs

2014-04-09 Thread Steven Kaufer
I have submitted a session for the Juno summit for this work:
http://summit.openstack.org/cfp/details/265
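
For anyone who hasn't read the blueprints, the end result is a list request
that carries several key/direction pairs, something along these lines
(parameter names and their encoding are illustrative only, not the final API):

import requests

NOVA_ENDPOINT = 'http://nova.example.com:8774/v2/mytenantid'  # placeholder
TOKEN = 'example-keystone-token'                              # placeholder

# Repeat the (hypothetical) sort parameters once per key, in priority order.
resp = requests.get(NOVA_ENDPOINT + '/servers',
                    headers={'X-Auth-Token': TOKEN},
                    params=[('sort_key', 'created_at'), ('sort_dir', 'desc'),
                            ('sort_key', 'display_name'), ('sort_dir', 'asc')])
print resp.json()['servers']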

Thanks,

Steven Kaufer

Duncan Thomas  wrote on 04/06/2014 01:21:57 AM:

> From: Duncan Thomas 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 04/06/2014 01:28 AM
> Subject: Re: [openstack-dev] Support for multiple sort keys and sort
> directions in REST GET APIs
>
> Stephen
>
> Mike is right, it is mostly (possibly only?) extensions that do double
> lookups. Your plan looks sensible, and definitely useful. I guess I'll
> see if I can actually break it once the review is up :-) I mostly
> wanted to give a heads-up - there are people who are way better at
> reviewing this than me.
>
>
>
> On 3 April 2014 19:15, Mike Perez  wrote:
> > Duncan, I think the point you raise could happen even without this change. In
> > the example of listing volumes, you would first query for the list in some
> > multi-key sort. The API extensions for example that add additional response
> > keys will do another lookup on that resource for the appropriate column it's
> > retrieving. There are some extensions that still do this unfortunately, but
> > quite a few got taken care of in Havana in using cache instead of doing these
> > wasteful lookups.
> >
> > Overall Steven, I think this change is useful, especially from one of the
> > Horizon sessions I heard in Hong Kong for filtering/sorting.
> >
> > --
> > Mike Perez
> >
> > On 11:18 Thu 03 Apr , Duncan Thomas wrote:
> >> Some of the cinder APIs do weird database joins and double lookups and
> >> things, making every field sortable might have some serious database
> >> performance impact and open up a DoS attack. Will need more
> >> investigation to be sure.
> >>
> >> On 2 April 2014 19:42, Steven Kaufer  wrote:
> >> > I have proposed blueprints in both nova and cinder for supporting multiple
> >> > sort keys and sort directions for the GET APIs (servers and volumes).  I am
> >> > trying to get feedback from other projects in order to have a more uniform
> >> > API across services.
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Duncan Thomas
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] create server from a volume snapshot, 180 reties is sufficient?

2014-04-09 Thread Nikola Đipanov
On 04/09/2014 03:54 AM, Lingxian Kong wrote:
> yes, the bp also makes sense for nova-cinder interaction; may I submit
> a blueprint for that?
> 
> Any comments? 
> 

I was going to propose that same thing for Nova, along with a
summit session for Atlanta. It would be good to coordinate the work.

Would you be interested in looking at it from Cinder side?

Thanks,

N.

> 
> 
> 2014-04-09 3:58 GMT+08:00 Mike Perez  >:
> 
> On 23:58 Tue 08 Apr , Lingxian Kong wrote:
> > hi there:
> >
> > According to the patch https://review.openstack.org/#/c/80619/, Nova
> > will wait for volume creation for 180s, the config option is rejected by
> > Russell and Nikola. But the reason I raise it up is, we found the server
> > creation failed due to timeout in our deployment, with LVM as Cinder
> > backend.
> >
> > So, I wonder, is 180s really suitable here? Is there some guidance
> > about when we should add an option? But at least, we should not avoid an
> > option just because of the existing overwhelming number of them, right?
> >
> > Thoughts?
> 
> It looks like this was a temporarily accepted solution, and the long-term
> solution is with event callbacks [1].
> 
> [1] -
> https://blueprints.launchpad.net/nova/+spec/admin-event-callback-api
> 
> --
> Mike Perez
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> *---*
> *Lingxian Kong*
> Huawei Technologies Co.,LTD.
> IT Product Line CloudOS PDU
> China, Xi'an
> Mobile: +86-18602962792
> Email: konglingx...@huawei.com ;
> anlin.k...@gmail.com 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] create server from a volume snapshot, 180 reties is sufficient?

2014-04-09 Thread Mike Perez
On 09:54 Wed 09 Apr , Lingxian Kong wrote:
> 
> yes, the bp also makes sense for nova-cinder interaction; may I submit
> a blueprint for that?
> 
> Any comments?

Sounds fine to me!

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] need feedback on steps for adding oslo libs to projects

2014-04-09 Thread Doug Hellmann
I have started writing up some general steps for adding oslo libs to
projects, and I would like some feedback about the results. They can't
go into too much detail about specific changes in a project, because
those will vary by library and project. I would like to know if the
order makes sense and if the instructions for the infra updates are
detailed enough. Also, of course, if you think I'm missing any steps.

https://wiki.openstack.org/wiki/Oslo/UsingALibrary

Thanks,
Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-09 Thread Duncan Thomas
On 9 April 2014 08:35, Deepak Shetty  wrote:

> Alternatively, does this mean we need to make name_id a generic field (not an
> ID) and then use something like uuidutils.is_uuid_like() to determine if it's
> UUID or non-UUID, and then the backend will accordingly map it?

Definitely not, overloading fields is horrible. If we are going to do
a mapping, create a new, explicit field for it.

> Lastly, I said "storage admin will lose track of it" because he would have
> named it "my_vol", and when he asks cinder to manage it using "my_cinder_vol"
> it's not expected that you would rename the volume on the backend :)
> I mean it would be good if we could implement manage_existing without renaming,
> as then it would seem less disruptive :)

I think this leads to a bad kind of thinking. Once you've given a
volume to cinder, the storage admin shouldn't be /trying/ to keep
track of it. It is a cinder volume now, and cinder can and should do
whatever it feels appropriate with that volume (rename it, migrate it
to a new backend, etc etc etc)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-09 Thread Sylvain Bauza
2014-04-09 17:47 GMT+02:00 Jay Lau :

> @Oleg, I'm still not sure about the target of Gantt - is it for initial
> placement policy, run-time policy, or both? Can you help clarify?
>
>
I don't want to talk on behalf of Oleg, but Gantt is targeted to be the
forklift of the current Nova scheduler. So, a placement decision based on
dynamic metrics would be worth it.
That said, as Gantt is not targeted to be delivered until Juno at least
(with Nova sched deprecated), I think any progress on a BP should target
Nova with respect to the forklift efforts, so it would automatically be
ported to Gantt once the actual fork would happen.

-Sylvain

Jay
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-09 Thread Stig Telfer
> -Original Message-
> From: Matt Wagner [mailto:matt.wag...@redhat.com]
> Sent: Tuesday, April 08, 2014 6:46 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Ironic][Agent]
> 
> On 08/04/14 14:04 +0400, Vladimir Kozhukalov wrote:
> 
> >0) There is plenty of old hardware which does not have IPMI/iLO at all.
> >How is Ironic supposed to power it off and on? Ssh? But Ironic is not
> >supposed to interact with the host OS.
> 
> I'm more accustomed to using PDUs for this type of thing. I.e., a
> power strip you can ssh into or hit via a web API to toggle power to
> individual ports.
> 
> Machines are configured to power up on power restore, plus PXE boot.
> You have less control than with IPMI -- all you can do is toggle power
> to the outlet -- but it works well, even for some desktop machines I
> have in a lab.
> 
> I don't have a compelling need, but I've often wondered if such a
> driver would be useful. I can imagine it also being useful if people
> want to power up non-compute stuff, though that's probably not a top
> priority right now.

We have developed a driver that might be of interest.  Ironic uses it to 
control the PDUs in our lab cluster through SNMP.  It appears the leading 
brands of PDU implement SNMP interfaces, albeit through vendor-specific 
enterprise MIBs.  As a mechanism for control, I'd suggest that SNMP is going to 
be a better bet than an automaton for hitting the ssh or web interfaces.

Currently our power driver is a point solution for our PDUs, but why not make 
it generalised?  We'd be happy to contribute it.
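
For a flavour of what such a driver boils down to (heavily simplified), setting
an outlet over SNMP looks something like the sketch below. It assumes the
pysnmp library; the OID shown is an APC-style example only - a real driver
would take the OID and its legal values from the vendor's MIB for the PDU
model in question:

from pysnmp.hlapi import (CommunityData, ContextData, Integer, ObjectIdentity,
                          ObjectType, SnmpEngine, UdpTransportTarget, setCmd)

OUTLET_ON, OUTLET_OFF = 1, 2   # example values from a hypothetical vendor MIB

def set_outlet(pdu_host, outlet, state, community='private'):
    # Illustrative APC-style outlet-control OID, indexed by outlet number.
    oid = '1.3.6.1.4.1.318.1.1.4.4.2.1.3.%d' % outlet
    error_indication, error_status, _, _ = next(setCmd(
        SnmpEngine(),
        CommunityData(community),
        UdpTransportTarget((pdu_host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(oid), Integer(state))))
    if error_indication or error_status:
        raise RuntimeError('SNMP set failed: %s' %
                           (error_indication or error_status))

# e.g. set_outlet('pdu1.example.com', 4, OUTLET_OFF)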

Best wishes
Stig Telfer
Cray Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-09 Thread Sylvain Bauza
2014-04-07 23:11 GMT+02:00 Sylvain Bauza :

> Hi Phil,
>
>
>
> 2014-04-07 18:48 GMT+02:00 Day, Phil :
>
>   Hi Sylvain,
>>
>>
>>
>> There was a similar thread on this recently - which might be worth
>> reviewing:
>> http://lists.openstack.org/pipermail/openstack-dev/2014-March/031006.html
>>
>>
>>
>> Some interesting use cases were posted, and a I don't think a conclusion
>> was reached, which seems to suggest this might be a good case for a session
>> in Atlanta.
>>
>>
>
> The funny fact is that I was already part of this discussion as owner of a
> bug related to it (see the original link I provided).
> That's only when reviewing the code by itself that I found some
> discrepancies and raised the question here, before committing.
>
>
>
>>
>>
>> Personally I'm not sure that selecting more than one AZ really makes a
>> lot of sense - they are generally objects which are few in number and large
>> in scale, so if for example there are 3 AZs and you want to create two
>> servers in different AZs, does it really help if you can do the sequence:
>>
>>
>>
>> -  Create a server in any AZ
>>
>> -  Find the AZ the server is in
>>
>> -  Create a new server in any of the two remaining AZs
>>
>>
>>
>> Rather than just picking two from the list to start with ?
>>
>>
>>
>> If you envisage a system with many AZs, and thereby allow users some
>> pretty find grained choices about where to place their instances, then I
>> think you'll end up with capacity management issues.
>>
>>
>>
>> If the use case is more to get some form of server isolation, then
>> server-groups might be worth looking at, as these are dynamic and per user.
>>
>>
>>
>> I can see a case for allowing more than one set of mutually exclusive
>> host aggregates - at the moment that's a property implemented just for the
>> set of aggregates that are designated as AZs, and generalizing that concept
>> so that there can be other sets (where host overlap is allowed between
>> sets, but not within a set) might be useful.
>>
>>
>>
>> Phil
>>
>>
>>
>
> That's a good point for discussing at the Summit. I don't have yet an
> opinion on this, I'm just trying to stabilize things now :-)
> At the moment, I'm pretty close to submit a change which will fix two
> things :
>  - the decisional will be the same for both adding a server to an
> aggregate and update metadata from an existing aggregate (there was
> duplicate code leading to a few differences)
>  - when checking existing AZs for one host, we will also get the
> aggregates to know if the default AZ is related to an existing aggregate
> with the same name or just something unrelated
>
>
Folks interested in the initial issue can review
https://review.openstack.org/#/c/85961/ for a proposal to fix.

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Enable live migration with one nova compute

2014-04-09 Thread Steve Gordon
- Original Message -
> Steve,
> The problem with the support of live-migrate would still exist even if we
> decide to manage only one cluster from a compute node, unless one is ok with
> only live-migrate functionality between clusters.  The main debate started
> with supporting the live-migrate between the ESX Hosts in the same cluster.
> 
> Thanks,
> Divakar

We actually originally started off with ~7 migration scenarios in this thread 
[1], I'm speaking to what I consider the most problematic one (scenario 1) 
which is migration between clusters managed by the same nova-compute. I think 
it's if you wish to address both this *and* migration between ESX hosts within 
a cluster that we run into problems whereby it's going to require significant 
changes to Nova because you need not one but two additional levels of 
introspection. That is on top of the concerns I noted regarding fault tolerance 
and high availability when you have a single nova-compute managing multiple (or 
even all) ESX clusters in the environment.

-Steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-March/030768.html

> -Original Message-
> From: Steve Gordon [mailto:sgor...@redhat.com]
> Sent: Wednesday, April 09, 2014 8:38 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live
> migration with one nova compute
> Importance: High
> 
> - Original Message -
> > I'm not writing off vCenter or its capabilities. I am arguing that the
> > bar for modifying a fundamental design decision in Nova -- that of
> > being horizontally scalable by having a single nova-compute worker
> > responsible for managing a single provider of compute resources -- was
> > WAY too low, and that this decision should be revisited in the future
> > (and possibly as part of the vmware driver refactoring efforts
> > currently underway by the good folks at RH and VMWare).
> 
> +1, This is my main concern about having more than one ESX cluster under a
> single nova-compute agent as well. Currently it works, but it doesn't seem
> particularly advisable, as on face value such an architecture seems to
> break a number of the Nova design guidelines around high availability and
> fault tolerance. To me it seems like such an architecture effectively
> elevates nova-compute into being part of the control plane where it needs to
> have high availability (when discussing on IRC yesterday it seemed like this
> *may* be possible today but more testing is required to shake out any bugs).
> 
> Now it may well be that the right approach *is* to make some changes to these
> expectations about Nova, but I think it's disingenuous to suggest that what
> is being suggested here isn't a significant re-architecting to resolve
> issues resulting from earlier hacks that allowed this functionality to work
> in the first place. Should be an interesting summit session.
> 
> -Steve
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Steve Gordon, RHCE
Product Manager, Red Hat Enterprise Linux OpenStack Platform
Red Hat Canada (Toronto, Ontario)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Enable live migration with one nova compute

2014-04-09 Thread Jay Lau
@Divakar, exactly, we want to do ESX server-level live migrations with vCenter
(VCDriver) by leveraging the nova scheduler. Thanks.


2014-04-09 23:36 GMT+08:00 Nandavar, Divakar Padiyar <
divakar.padiyar-nanda...@hp.com>:

> Steve,
> The problem with the support of live-migrate would still exist even if we
> decide to manage only one cluster from a compute node, unless one is ok
> with only live-migrate functionality between clusters.  The main debate
> started with supporting the live-migrate between the ESX Hosts in the same
> cluster.
>
> Thanks,
> Divakar
>
> -Original Message-
> From: Steve Gordon [mailto:sgor...@redhat.com]
> Sent: Wednesday, April 09, 2014 8:38 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live
> migration with one nova compute
> Importance: High
>
> - Original Message -
> > I'm not writing off vCenter or its capabilities. I am arguing that the
> > bar for modifying a fundamental design decision in Nova -- that of
> > being horizontally scalable by having a single nova-compute worker
> > responsible for managing a single provider of compute resources -- was
> > WAY too low, and that this decision should be revisited in the future
> > (and possibly as part of the vmware driver refactoring efforts
> > currently underway by the good folks at RH and VMWare).
>
> +1, This is my main concern about having more than one ESX cluster under a
> single nova-compute agent as well. Currently it works, but it doesn't seem
> particularly advisable, as on face value such an architecture seems to
> break a number of the Nova design guidelines around high availability and
> fault tolerance. To me it seems like such an architecture effectively
> elevates nova-compute into being part of the control plane where it needs
> to have high availability (when discussing on IRC yesterday it seemed like
> this *may* be possible today but more testing is required to shake out any
> bugs).
>
> Now it may well be that the right approach *is* to make some changes to these
> expectations about Nova, but I think it's disingenuous to suggest that what
> is being suggested here isn't a significant re-architecting to resolve
> issues resulting from earlier hacks that allowed this functionality to work
> in the first place. Should be an interesting summit session.
>
> -Steve
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-09 Thread Jay Lau
@Oleg, I'm still not sure about the target of Gantt - is it for initial
placement policy, run-time policy, or both? Can you help clarify?

@Henrique, not sure if you know IBM PRS (Platform Resource Scheduler) [1];
we have finished the "dynamic scheduler" in our Icehouse version (PRS 2.2).
It has exactly the same feature as you described, and we are planning a live
demo of this feature at the Atlanta Summit. I'm also writing a document on
run-time policy which will cover more run-time policies for OpenStack, but
it is not finished yet (my apologies for the slow progress). The related
blueprint is [2], and you can also find some discussion at [3].

[1]
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS213-590&appname=USN
[2]
https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
[3] http://markmail.org/~jaylau/OpenStack-DRS

Thanks.
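
As a very rough illustration of the periodic flow Henrique sketches in his
mail (quoted below), the check could look something like this - a toy sketch
only, not PRS code: it assumes already-authenticated python-novaclient and
python-ceilometerclient handles are passed in, and the cpu_util meter and 75%
threshold are arbitrary examples:

def rebalance_host(nova, ceilometer, host, cpu_threshold=75.0):
    # All instances currently on this host (admin-only query).
    servers = nova.servers.list(search_opts={'host': host, 'all_tenants': 1})
    for server in servers:
        query = [{'field': 'resource_id', 'op': 'eq', 'value': server.id}]
        stats = ceilometer.statistics.list(meter_name='cpu_util', q=query)
        if not stats:
            continue                      # no samples yet for this instance
        if stats[-1].avg > cpu_threshold:
            # Overloaded: let the scheduler pick a new target host.
            nova.servers.live_migrate(server, host=None,
                                      block_migration=False,
                                      disk_over_commit=False)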


2014-04-09 23:21 GMT+08:00 Oleg Gelbukh :

> Henrique,
>
> You should check out Gantt project [1], it could be exactly the place to
> implement such features. It is a generic cross-project Scheduler as a
> Service forked from Nova recently.
>
> [1] https://github.com/openstack/gantt
>
> --
> Best regards,
> Oleg Gelbukh
> Mirantis Labs
>
>
> On Wed, Apr 9, 2014 at 6:41 PM, Henrique Truta <
> henriquecostatr...@gmail.com> wrote:
>
>> Hello, everyone!
>>
>> I am currently a graduate student and member of a group of contributors
>> to OpenStack. We believe that a dynamic scheduler could improve the
>> efficiency of an OpenStack cloud, either by rebalancing nodes to maximize
>> performance or to minimize the number of active hosts, in order to minimize
>> energy costs. Therefore, we would like to propose a dynamic scheduling
>> mechanism to Nova. The main idea is using the Ceilometer information (e.g.
>> RAM, CPU, disk usage) through the ceilometer-client and dynamically decide
>> whether an instance should be live-migrated.
>>
>> This might be done as a Nova periodic task, executed at a given interval,
>> or as a new independent project. In both cases, the
>> current Nova scheduler will not be affected, since this new scheduler will
>> be pluggable. We have done a search and found no such initiative in the
>> OpenStack BPs. Outside the community, we found only a recent IBM
>> announcement for a similar feature in one of its cloud products.
>>
>> A possible flow is: In the new scheduler, we periodically make a call to
>> Nova, get the instance list from a specific host and, for each instance, we
>> make a call to the ceilometer-client (e.g. $ ceilometer statistics -m
>> cpu_util -q resource=$INSTANCE_ID) and then, according to some specific
>> parameters configured by the user, analyze the meters and do the proper
>> migrations.
>>
>> Do you have any comments or suggestions?
>>
>> --
>> Ítalo Henrique Costa Truta
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso][neutron] proxying oslo.messaging from management network into tenant network/VMs

2014-04-09 Thread Mark McLoughlin
Hi,

On Wed, 2014-04-09 at 17:33 +0900, Isaku Yamahata wrote:
> Hello developers.
> 
> 
> As discussed many times so far[1], there are many projects that need
> to propagate RPC messages into VMs running on OpenStack. Neutron in my case.
> 
> My idea is to relay RPC messages from management network into tenant
> network over file-like object. By file-like object, I mean virtio-serial,
> unix domain socket, unix pipe and so on.
> I've written some code based on oslo.messaging[2][3] and documentation
> on use cases.[4][5]
> Only file-like transport and proxying messages would be in oslo.messaging
> and agent side code wouldn't be a part of oslo.messaging.
> 
> 
> use cases:([5] for more figures)
> file-like object: virtio-serial, unix domain socket, unix pipe
> 
>   server <-> AMQP <-> agent in host <-virtio serial-> guest agent in VM
>   per VM
> 
>   server <-> AMQP <-> agent in host <-unix socket/pipe->
>  agent in tenant network <-> guest agent in VM
> 
> 
> So far there are security concerns to forward oslo.messaging from management
> network into tenant network. One approach is to allow only cast-RPC from
> server to guest agent in VM so that guest agent in VM only receives messages
> and can't send anything to servers. With unix pipe, it's write-only
> for server, read-only for guest agent.
> 
> 
> Thoughts? comments?

Nice work. This is a pretty gnarly topic, but I think you're doing a
good job thinking through a good solution here.

The advantage this has over Marconi is that it avoids relying on
something which might not be commonplace in OpenStack deployments for a
number of releases yet.

Using vmchannel/virtio-serial to talk to an oslo.messaging proxy server
(which would have a configurable security policy) over a unix socket
oslo.messaging transport in order to allow limited bridging from the
tenant network to management network ... definitely sounds like a
reasonable proposal.
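
(Purely as an illustration of that shape - not Isaku's actual patches - the
host-side relay amounts to something like the following, with the
virtio-serial device path made up for the example:

import json

CHANNEL = '/var/lib/libvirt/qemu/channel/instance-00000001.agent'  # illustrative

def relay_cast_to_guest(message):
    # One-way (cast only): the host writes one JSON blob per line into the
    # per-VM virtio-serial character device; the guest agent just reads.
    with open(CHANNEL, 'w') as chan:
        chan.write(json.dumps(message) + '\n')
        chan.flush()
)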

Looking forward to your session at the summit! I also hope to look at
your patches before then.

Thanks,
Mark.



> 
> 
> Details of Neutron NFV use case[6]:
> Neutron services so far typically run agents on the host; the agent
> on the host receives RPCs from the neutron server, then it executes necessary
> operations. Sometimes the agent on the host issues RPCs to the neutron server
> periodically (e.g. status reports etc.).
> It's desirable to make such services virtualized as Network Function
> Virtualization (NFV), i.e. make those features run in VMs. So it's quite a
> natural approach to propagate those RPC messages into agents in VMs.
> 
> 
> [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
> [2] https://review.openstack.org/#/c/77862/
> [3] https://review.openstack.org/#/c/77863/
> [4] https://blueprints.launchpad.net/oslo.messaging/+spec/message-proxy-server
> [5] https://wiki.openstack.org/wiki/Oslo/blueprints/message-proxy-server
> [6] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Enable live migration with one nova compute

2014-04-09 Thread Nandavar, Divakar Padiyar
Steve,
The problem with the support of live-migrate would still exist even if we 
decide to manage only one cluster from a compute node, unless one is OK with 
live-migrate functionality only between clusters.  The main debate started with 
supporting live-migrate between the ESX hosts in the same cluster.

Thanks,
Divakar

-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com] 
Sent: Wednesday, April 09, 2014 8:38 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live 
migration with one nova compute
Importance: High

- Original Message -
> I'm not writing off vCenter or its capabilities. I am arguing that the 
> bar for modifying a fundamental design decision in Nova -- that of 
> being horizontally scalable by having a single nova-compute worker 
> responsible for managing a single provider of compute resources -- was 
> WAY too low, and that this decision should be revisited in the future 
> (and possibly as part of the vmware driver refactoring efforts 
> currently underway by the good folks at RH and VMWare).

+1, This is my main concern about having more than one ESX cluster under a 
single nova-compute agent as well. Currently it works, but it doesn't seem 
particularly advisable, as on face value such an architecture seems to break 
a number of the Nova design guidelines around high availability and fault 
tolerance. To me it seems like such an architecture effectively elevates 
nova-compute into being part of the control plane where it needs to have high 
availability (when discussing on IRC yesterday it seemed like this *may* be 
possible today but more testing is required to shake out any bugs).

Now it may well be that the right approach *is* to make some changes to these 
expectations about Nova, but I think it's disingenuous to suggest that what is 
being suggested here isn't a significant re-architecting to resolve issues 
resulting from earlier hacks that allowed this functionality to work in the 
first place. Should be an interesting summit session.

-Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases. Data from Operators needed.

2014-04-09 Thread Susanne Balle
Hi



I wasn't able to get % for the spreadsheet but our Product Manager
prioritized the features:



Function                                  Priority (0 = highest)

HTTP+HTTPS on one device                  5
L7 Switching                              2
SSL Offloading                            1
High Availability                         0
IP4 & IPV6 Address Support                6
Server Name Indication (SNI) Support      3
UDP Protocol                              7
Round Robin Algorithm                     4



 Susanne


On Thu, Apr 3, 2014 at 9:32 AM, Vijay Venkatachalam <
vijay.venkatacha...@citrix.com> wrote:

>
>
> The document has a Vendor column; should it instead be Cloud
> Operator?
>
>
>
> Thanks,
>
> Vijay V.
>
>
>
>
>
> *From:* Eugene Nikanorov [mailto:enikano...@mirantis.com]
> *Sent:* Thursday, April 3, 2014 11:23 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
>
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases.
> Data from Operators needed.
>
>
>
> Stephen,
>
>
>
> Agree with you. Basically the page is starting to look like a requirements page.
>
> I think we need to move to a Google spreadsheet, where the table is organized
> more easily.
>
> Here's the doc that may do a better job for us:
>
>
> https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc&usp=sharing
>
>
>
> Thanks,
>
> Eugene.
>
>
>
> On Thu, Apr 3, 2014 at 5:34 AM, Prashanth Hari  wrote:
>
>  More additions to the use cases (
> https://wiki.openstack.org/wiki/Neutron/LBaaS/Usecases).
>
> I have updated some of the features we are interested in.
>
>
>
>
>
>
>
> Thanks,
>
> Prashanth
>
>
>
>
>
> On Wed, Apr 2, 2014 at 8:12 PM, Stephen Balukoff 
> wrote:
>
>  Hi y'all--
>
>
>
> Looking at the data in the page already, it looks more like a feature
> wishlist than actual usage data. I thought we agreed to provide data based
> on percentage usage of a given feature, the end result of the data
> collection being that it would become more obvious which features are the
> most relevant to the most users, and therefore are more worthwhile targets
> for software development.
>
>
>
> Specifically, I was expecting to see something like the following (using
> hypothetical numbers of course, and where technical people from "Company A"
> & etc. fill out the data for their organization):
>
>
>
> == L7 features ==
>
>
>
> "Company A" (Cloud operator serving external customers): 56% of
> load-balancer instances use
>
> "Company B" (Cloud operator serving external customers): 92% of
> load-balancer instances use
>
> "Company C" (Fortune 100 company serving internal customers): 0% of
> load-balancer instances use
>
>
>
> == SSL termination ==
>
>
>
> "Company A" (Cloud operator serving external customers): 95% of
> load-balancer instances use
>
> "Company B" (Cloud operator serving external customers): 20% of
> load-balancer instances use
>
> "Company C" (Fortune 100 company serving internal customers): 50% of
> load-balancer instances use.
>
>
>
> == Racing stripes ==
>
>
>
> "Company A" (Cloud operator serving external customers): 100% of
> load-balancer instances use
>
> "Company B" (Cloud operator serving external customers): 100% of
> load-balancer instances use
>
> "Company C" (Fortune 100 company serving internal customers): 100% of
> load-balancer instances use
>
>
>
>
>
> In my mind, a wish-list of features is only going to be relevant to this
> discussion if (after we agree on what the items under consideration ought
> to be) each technical representative presents a prioritized list for their
> organization. :/ A wish-list is great for brain-storming what ought to be
> added, but is less relevant for prioritization.
>
>
>
> In light of last week's meeting, it seems useful to list the features most
> recently discussed in that meeting and on the mailing list as being points
> on which we want to gather actual usage data (i.e. from what people are
> actually using on the load balancers in their organization right now).
> Should we start a new page that lists actual usage percentages, or just
> re-vamp the one above?  (After all, a wish-list can be useful for discovering
> things we're missing, especially if we get people new to the discussion to
> add their $0.02.)
>
>
>
> Thanks,
>
> Stephen
>
>
>
>
>
>
>
> On Wed, Apr 2, 2014 at 3:46 PM, Jorge Miramontes <
> jorge.miramon...@rackspace.com> wrote:
>
>   Thanks Eugene,
>
>
>
> I added our data onto the requirements page since I was hoping to
> prioritize requirements based on the operator data that gets provided. We
> can move it over to the other page if you think that makes sense. See
> everyone on the weekly meeting tomorrow!
>
>
>
> Cheers,
>
> --Jorge
>
>
>
> *From: *Susanne Balle 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Tuesday, April 1, 2014 4:09 PM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases.
> Data from Operators needed.
>
>
>
> I added two more. I am still working on our HA u

Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-09 Thread Jay Lau
@Divakar, yes, the "Proxy Compute" model is not new, but I'm not sure if
this model can be accepted by the community to manage both VMs and PMs. Anyway, I
will try to file a bp and get more comments then. Thanks.


2014-04-09 22:52 GMT+08:00 Nandavar, Divakar Padiyar <
divakar.padiyar-nanda...@hp.com>:

> Hi Jay,
> Managing multiple clusters using the "Compute Proxy" is not new right?
> Prior to this "nova baremetal" driver has used this model already.   Also
> this "Proxy Compute" model gives flexibility to deploy as many computes
> required based on the requirement.   For example, one can setup one proxy
> compute node to manage a set of clusters and another proxy compute to
> manage a separate set of clusters or launch compute node for each of the
> clusters.
>
> Thanks,
> Divakar
>
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, April 09, 2014 6:23 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live
> migration with one nova compute
> Importance: High
>
> Hi Juan, thanks for your response. Comments inline.
>
> On Mon, 2014-04-07 at 10:22 +0200, Juan Manuel Rey wrote:
> > Hi,
> >
> > I'm fairly new to this list, actually this is my first email sent, and
> > to OpenStack in general, but I'm not new at all to VMware so I'll try
> > to give you my point of view about possible use case here.
> >
> > Jay you are saying that by using Nova to manage ESXi hosts we don't
> > need vCenter because they basically overlap in their capabilities.
>
> Actually, no, this is not my main point. My main point is that Nova should
> not change its architecture to fit the needs of one particular host
> management platform (vCenter).
>
> Nova should, as much as possible, communicate with vCenter to perform some
> operations -- in the same way that Nova communicates with KVM or XenServer
> to perform some operations. But Nova should not be re-architected (and I
> believe that is what has gone on here with the code change to have one
> nova-compute worker talking to multiple vCenter
> clusters) just so that one particular host management scheduler/platform
> (vCenter) can have all of its features exposed to Nova.
>
> >  I agree with you to some extent, Nova may have similar capabilities
> > as vCenter Server but as you know OpenStack as a full cloud solution
> > adds a lot more features that vCenter lacks, like multitenancy just to
> > name one.
>
> Sure, however, my point is that Nova shouldn't need to be re-architected
> just to adhere to one particular host management platform's concepts of an
> atomic provider of compute resources.
>
> > Also in any vSphere environment, managing ESXi hosts individually, that
> > is, without vCenter, is completely out of the question. vCenter is the
> > enabler of many vSphere features. And precisely that is, IMHO, the
> > use case of using Nova to manage vCenter to manage vSphere. Without
> > vCenter we only have a bunch of hypervisors and none of the HA or DRS
> > (dynamic resource balancing) capabilities that a vSphere cluster
> > provides, this in my experience with vSphere users/customers is a no
> > go scenario.
>
> Understood. Still doesn't change my opinion though :)
>
> Best,
> -jay
>
> > I don't know why the decision to manage vCenter with Nova was made but
> > based on the above I understand the reasoning.
> >
> >
> > Best,
> > ---
> > Juan Manuel Rey
> >
> > @jreypo
> >
> >
> > On Mon, Apr 7, 2014 at 7:20 AM, Jay Pipes  wrote:
> > On Sun, 2014-04-06 at 06:59 +, Nandavar, Divakar Padiyar
> > wrote:
> > > >> Well, it seems to me that the problem is the above
> > blueprint and the code it introduced. This is an anti-feature
> > IMO, and probably the best solution would be to remove the
> > above code and go back to having a single >> nova-compute
> > managing a single vCenter cluster, not multiple ones.
> > >
> > > Problem is not introduced by managing multiple clusters from
> > single nova-compute proxy node.
> >
> >
> > I strongly disagree.
> >
> > > Internally this proxy driver is still presenting the
> > "compute-node" for each of the cluster its managing.
> >
> >
> > In what way?
> >
> > >  What we need to think about is applicability of the live
> > migration use case when a "cluster" is modelled as a compute.
> > Since the "cluster" is modelled as a compute, it is assumed
> > that a typical use case of live-move is taken care by the
> > underlying "cluster" itself.   With this there are other
> > use cases which are no-op today like host maintenance mode,
> > live move, setting instance affinity etc., In order to
> > resolve this I was thinking of
> > > "A way to expose operations on individual ESX Hosts like
> > Putting host in maintenance mode,  live move, instance
> >

Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-09 Thread Costantino, Leandro I

Hi Henrique,
This bp [1] may cover the use cases you are proposing (maybe not 
using Ceilometer).
Also, you can take a look at openstack-neat [2] (an outside project), 
which tries to achieve something similar, but it seems to be outdated.


There's another initiative to have an external scheduler (Gantt), so 
maybe there could be some place there for this kind of functionality.


[1] 
https://blueprints.launchpad.net/nova/+spec/resource-optimization-service

[2] http://openstack-neat.org

On 09/04/2014 11:41 a.m., Henrique Truta wrote:


Hello, everyone!


I am currently a graduate student and member of a group of 
contributors to OpenStack. We believe that a dynamic scheduler could 
improve the efficiency of an OpenStack cloud, either by rebalancing 
nodes to maximize performance or to minimize the number of active 
hosts, in order to minimize energy costs. Therefore, we would like to 
propose a dynamic scheduling mechanism to Nova. The main idea is using 
the Ceilometer information (e.g. RAM, CPU, disk usage) through the 
ceilometer-client and dynamically decide whether an instance should be 
live migrated.



This might be done as a Nova periodic task, which will be executed 
at a given interval, or as a new independent project. In both 
cases, the current Nova scheduler will not be affected, since this new 
scheduler will be pluggable. We have done a search and found no such 
initiative in the OpenStack BPs. Outside the community, we found only 
a recent IBM announcement for a similar feature in one of its cloud 
products.



A possible flow is: In the new scheduler, we periodically make a call 
to Nova, get the instance list from a specific host and, for each 
instance, we make a call to the ceilometer-client (e.g. $ ceilometer 
statistics -m cpu_util -q resource=$INSTANCE_ID) and then, according 
to some specific parameters configured by the user, analyze the meters 
and do the proper migrations.



Do you have any comments or suggestions?


--
Ítalo Henrique Costa Truta




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-09 Thread Oleg Gelbukh
Henrique,

You should check out the Gantt project [1]; it could be exactly the place to
implement such features. It is a generic cross-project Scheduler as a
Service that was recently forked from Nova.

[1] https://github.com/openstack/gantt

--
Best regards,
Oleg Gelbukh
Mirantis Labs


On Wed, Apr 9, 2014 at 6:41 PM, Henrique Truta  wrote:

> Hello, everyone!
>
> I am currently a graduate student and member of a group of contributors to
> OpenStack. We believe that a dynamic scheduler could improve the efficiency
> of an OpenStack cloud, either by rebalancing nodes to maximize performance
> or to minimize the number of active hosts, in order to minimize energy
> costs. Therefore, we would like to propose a dynamic scheduling mechanism
> to Nova. The main idea is using the Ceilometer information (e.g. RAM, CPU,
> disk usage) through the ceilometer-client and dynamically decide whether an
> instance should be live migrated.
>
> This might be done as a Nova periodic task, which will be executed at a
> given interval, or as a new independent project. In both cases, the
> current Nova scheduler will not be affected, since this new scheduler will
> be pluggable. We have done a search and found no such initiative in the
> OpenStack BPs. Outside the community, we found only a recent IBM
> announcement for a similar feature in one of its cloud products.
>
> A possible flow is: In the new scheduler, we periodically make a call to
> Nova, get the instance list from a specific host and, for each instance, we
> make a call to the ceilometer-client (e.g. $ ceilometer statistics -m
> cpu_util -q resource=$INSTANCE_ID) and then, according to some specific
> parameters configured by the user, analyze the meters and do the proper
> migrations.
>
> Do you have any comments or suggestions?
>
> --
> Ítalo Henrique Costa Truta
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly subteam meeting April 10 14-00 UTC

2014-04-09 Thread Eugene Nikanorov
Hi folks,

Our next meeting is as usual on Thursday, 14-00 UTC

From the last meeting there were basically two major action items:
1) contribute to deployment scenarios statistics:
https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc

2) Make proposals for 'single-call API' as was discussed on the meeting.
The use cases that need to be addressed are in the document that Sam has
prepared:
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit

In other words, proposals should show a call/sequence of calls that
configure LBaaS for a certain use case.
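
For reference, a purely illustrative sketch of the multi-call sequence that a
single-call API proposal would aim to collapse, using the existing LBaaS v1
calls in python-neutronclient (credentials, IDs and names below are
placeholders, not a recommendation):

    # Illustrative only: the call sequence a 'single-call API' proposal would
    # collapse into one request. IDs, names and credentials are placeholders.
    from neutronclient.v2_0 import client

    SUBNET_ID = 'SUBNET-UUID'
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://keystone:5000/v2.0')

    pool = neutron.create_pool({'pool': {
        'name': 'web-pool', 'protocol': 'HTTP',
        'lb_method': 'ROUND_ROBIN', 'subnet_id': SUBNET_ID}})['pool']

    for addr in ('10.0.0.11', '10.0.0.12'):
        neutron.create_member({'member': {
            'pool_id': pool['id'], 'address': addr, 'protocol_port': 80}})

    monitor = neutron.create_health_monitor({'health_monitor': {
        'type': 'HTTP', 'delay': 5, 'timeout': 3,
        'max_retries': 3}})['health_monitor']
    neutron.associate_health_monitor(
        pool['id'], {'health_monitor': {'id': monitor['id']}})

    neutron.create_vip({'vip': {
        'name': 'web-vip', 'protocol': 'HTTP', 'protocol_port': 80,
        'subnet_id': SUBNET_ID, 'pool_id': pool['id']}})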

Unfortunately we didn't get many replies on the ML.
So please try to present your ideas on the ML+wiki+etherpad+google
docs/whatever, so we have material to review and discuss and not just ideas
off the top of someone's head.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-09 Thread Steve Gordon
- Original Message -
> I'm not writing off vCenter or its capabilities. I am arguing that the
> bar for modifying a fundamental design decision in Nova -- that of being
> horizontally scalable by having a single nova-compute worker responsible
> for managing a single provider of compute resources -- was WAY too low,
> and that this decision should be revisited in the future (and possibly
> as part of the vmware driver refactoring efforts currently underway by
> the good folks at RH and VMWare).

+1. This is my main concern about having more than one ESX cluster under a 
single nova-compute agent as well. Currently it works, but it doesn't seem 
particularly advisable, as on face value such an architecture seems to break 
a number of the Nova design guidelines around high availability and fault 
tolerance. To me it seems like such an architecture effectively elevates 
nova-compute into being part of the control plane where it needs to have high 
availability (when discussing on IRC yesterday it seemed like this *may* be 
possible today but more testing is required to shake out any bugs).

Now it may well be that the right approach *is* to make some changes to these 
expectations about Nova, but I think it's disingenuous to suggest that what is 
being suggested here isn't a significant re-architecting to resolve issues 
resulting from earlier hacks that allowed this functionality to work in the 
first place. Should be an interesting summit session.

-Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-09 Thread Nandavar, Divakar Padiyar
Hi Jay,
Managing multiple clusters using the "Compute Proxy" is not new right?   Prior 
to this "nova baremetal" driver has used this model already.   Also this "Proxy 
Compute" model gives flexibility to deploy as many computes required based on 
the requirement.   For example, one can setup one proxy compute node to manage 
a set of clusters and another proxy compute to manage a separate set of 
clusters or launch compute node for each of the clusters.

Thanks,
Divakar

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Wednesday, April 09, 2014 6:23 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live 
migration with one nova compute
Importance: High

Hi Juan, thanks for your response. Comments inline.

On Mon, 2014-04-07 at 10:22 +0200, Juan Manuel Rey wrote:
> Hi,
> 
> I'm fairly new to this list, actually this is my first email sent, and 
> to OpenStack in general, but I'm not new at all to VMware so I'll try 
> to give you my point of view about possible use case here.
> 
> Jay you are saying that by using Nova to manage ESXi hosts we don't 
> need vCenter because they basically overlap in their capabilities.

Actually, no, this is not my main point. My main point is that Nova should not 
change its architecture to fit the needs of one particular host management 
platform (vCenter).

Nova should, as much as possible, communicate with vCenter to perform some 
operations -- in the same way that Nova communicates with KVM or XenServer to 
perform some operations. But Nova should not be re-architected (and I believe 
that is what has gone on here with the code change to have one nova-compute 
worker talking to multiple vCenter
clusters) just so that one particular host management scheduler/platform
(vCenter) can have all of its features exposed to Nova.

>  I agree with you to some extent, Nova may have similar capabilities 
> as vCenter Server but as you know OpenStack as a full cloud solution 
> adds a lot more features that vCenter lacks, like multitenancy just to 
> name one.

Sure, however, my point is that Nova shouldn't need to be re-architected just 
to adhere to one particular host management platform's concepts of an atomic 
provider of compute resources.

> Also in any vSphere environment, managing ESXi hosts individually, that 
> is, without vCenter, is completely out of the question. vCenter is the 
> enabler of many vSphere features. And precisely that is, IMHO, the 
> use case of using Nova to manage vCenter to manage vSphere. Without 
> vCenter we only have a bunch of hypervisors and none of the HA or DRS 
> (dynamic resource balancing) capabilities that a vSphere cluster 
> provides, this in my experience with vSphere users/customers is a no 
> go scenario.

Understood. Still doesn't change my opinion though :)

Best,
-jay

> I don't know why the decision to manage vCenter with Nova was made but 
> based on the above I understand the reasoning.
> 
> 
> Best,
> ---
> Juan Manuel Rey
> 
> @jreypo
> 
> 
> On Mon, Apr 7, 2014 at 7:20 AM, Jay Pipes  wrote:
> On Sun, 2014-04-06 at 06:59 +, Nandavar, Divakar Padiyar
> wrote:
> > >> Well, it seems to me that the problem is the above
> blueprint and the code it introduced. This is an anti-feature
> IMO, and probably the best solution would be to remove the
> above code and go back to having a single >> nova-compute
> managing a single vCenter cluster, not multiple ones.
> >
> > Problem is not introduced by managing multiple clusters from
> single nova-compute proxy node.
> 
> 
> I strongly disagree.
> 
> > Internally this proxy driver is still presenting the
> "compute-node" for each of the cluster its managing.
> 
> 
> In what way?
> 
> >  What we need to think about is applicability of the live
> migration use case when a "cluster" is modelled as a compute.
> Since the "cluster" is modelled as a compute, it is assumed
> that a typical use case of live-move is taken care by the
> underlying "cluster" itself.   With this there are other
> use cases which are no-op today like host maintenance mode,
> live move, setting instance affinity etc., In order to
> resolve this I was thinking of
> > "A way to expose operations on individual ESX Hosts like
> Putting host in maintenance mode,  live move, instance
> affinity etc., by introducing Parent - Child compute node
> concept.   Scheduling can be restricted to Parent compute node
> and Child compute node can be used for providing more drill
> down on compute and also enable additional compute
> operations".Any thoughts on this?
> 
> 
> The fundamental problem is that hacks were put in place in
>

Re: [openstack-dev] [Horizon] [TripleO] [Tuskar] Demo of current state of "Tuskar-UI"

2014-04-09 Thread Jaromir Coufal

On 2014/09/04 16:31, mar...@redhat.com wrote:

Jarda thanks this was great to watch - seems a lot of things have been
fixed/tweaked in last couple weeks. Is everything running from current
master branches?

marios


Yes, everything you see is currently in the master branch (the last 
changes were merged yesterday evening), so it is actually showing the 
latest state.


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-09 Thread Geraint North
I personally don't like the rename approach (and I implemented it!). 
However, as Avishay says, we don't have that many options.

One thing that we could do is start to use the admin_metadata associated 
with a volume to store a reference to the volume other than the name 
(which is the UUID).  However, this requires that individual drivers 
change to support it - e.g. the Storwize driver could choose to store 
the vdisk ID/UUID in admin_metadata, and use it whenever it needed to 
perform an operation on a volume.  Similarly, the LVM driver could do the 
same, and use that in preference to assuming that the LV was named from 
the volume['name'] if it existed, but these are going to be fairly 
significant changes.
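
A rough sketch of the idea (the existing_ref key, the lookup fallback and the
driver wiring below are illustrative assumptions, not the actual Storwize or
LVM code):

    # Sketch only: keep the backend object's original name and record it in
    # the volume's admin metadata instead of renaming it to 'volume-<uuid>'.
    # The driver contract details are simplified for illustration.
    from cinder import context
    from cinder import db

    def manage_existing(self, volume, existing_ref):
        backend_name = existing_ref['source-name']
        admin_ctxt = context.get_admin_context()
        db.volume_admin_metadata_update(admin_ctxt, volume['id'],
                                        {'backend_ref': backend_name}, False)

    def _get_backend_ref(self, volume):
        # Later operations use the stored reference, falling back to the
        # conventional 'volume-<uuid>' name when none was recorded.
        admin_ctxt = context.get_admin_context()
        meta = db.volume_admin_metadata_get(admin_ctxt, volume['id'])
        return meta.get('backend_ref', volume['name'])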

Thanks,
Geraint.

Geraint North
Storage Virtualization Architect and Master Inventor, Cloud Systems 
Software.
IBM Manchester Lab, UK.



From:   Avishay Traeger 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   09/04/2014 11:23
Subject:Re: [openstack-dev] [Cinder] Regarding manage_existing and 
unmanage



On Wed, Apr 9, 2014 at 8:35 AM, Deepak Shetty  wrote:



On Tue, Apr 8, 2014 at 6:24 PM, Avishay Traeger  
wrote:
On Tue, Apr 8, 2014 at 9:17 AM, Deepak Shetty  wrote:
Hi List,
I had a few questions on the implementation of the manage_existing and 
unmanage API extensions.

1) For the LVM case, it renames the LV. Isn't it better to use name_id (the 
one used during cinder migrate to keep the id the same for a different backend 
name/id) to map the Cinder name/id to the backend name/id and thus avoid 
renaming the backend storage? Renaming isn't good since it changes the 
original name of the storage object, so the storage admin may lose track of 
it. The Storwize driver uses the UID and changes vdisk_name on the backend 
array, which isn't good either. Is renaming a must, and if yes, why?

'name_id' is an ID, like c8b3d8e2-2410-4362-b24b-548a13fa850b.
In migration, both the original and new volumes use the same template for 
volume names, just with a different ID, so name_id works well for that. 
 When importing a volume that wasn't created by Cinder, chances are it 
won't conform to this template, and so name_id won't work (i.e., I can 
call the volume 'my_very_important_db_volume', and name_id can't help with 
that).  When importing, the admin should give the volume a proper name and 
description, and won't lose track of it - it is now being managed by 
Cinder.

Avishay,
thanks for your reply, it did help. Just one more question though...

 >>(i.e., I can call the volume 'my_very_important_db_volume', and name_id 
can't help with that).
That is the name of the volume, but isn't it common for most arrays to 
provide both a name and an ID (which is again a UUID) for a volume on the 
backend? So name_id can still point to the UID which has the name 
'my_very_important_db_volume'.
In fact in Storwize, you are using vdisk_id itself and changing the 
vdisk_name to match what the user gave, and vdisk_id is a UUID and 
matches the name_id format.

Not exactly, it's a number (like '5'), not a UUID like 
c8b3d8e2-2410-4362-b24b-548a13fa850b
 
Alternatively, does this mean we need to make name_id a generic field (not 
an ID) and then use something like uuidutils.is_uuid_like() to determine if 
it is a UUID or not, so the backend can map it accordingly?

Lastly, I said "the storage admin will lose track of it" because he would have 
named it "my_vol", and when he asks Cinder to manage it using 
"my_cinder_vol" it's not expected that you would rename the volume on 
the backend :)
I mean, it would be good if we could implement manage_existing without 
renaming, as that would seem less disruptive :)
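
As a quick illustration of the check mentioned above (the import path is an
assumption; at the time the helper also lived under the project's
openstack/common copy):

    # Illustration of uuidutils.is_uuid_like(); the oslo_utils import path is
    # an assumption (an openstack.common copy existed in the Cinder tree too).
    from oslo_utils import uuidutils

    uuidutils.is_uuid_like('c8b3d8e2-2410-4362-b24b-548a13fa850b')  # True
    uuidutils.is_uuid_like('my_very_important_db_volume')           # False
    # A generic reference field could branch on this check to decide whether
    # the value is a Cinder-style ID or a backend-specific name.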

 I think there are a few trade-offs here - making it less disruptive in 
this sense makes it more disruptive to:
1. Managing the storage over its lifetime.  If we assume that the admin 
will stick with Cinder for managing their volumes, and if they need to 
find the volume on the storage, it should be done uniformly (i.e., go to 
the backend and find the volume named 'volume-%s' % name_id).
2. The code, where a change of this kind could make things messy. 
 Basically the rename approach has a little bit of complexity overhead 
when you do manage_existing, but from then on it's just like any other 
volume.  Otherwise, it's always a special case in different code paths, 
which could be tricky.

If you still feel that rename is wrong and that there is a better 
approach, I encourage you to try, and post code if it works.  I don't mind 
being proved wrong. :)

Thanks,
Avishay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.open

Re: [openstack-dev] [nova] Server Groups are not an optional element, bug or feature ?

2014-04-09 Thread Robert Collins
On 10 April 2014 02:32, Chris Friesen  wrote:
> On 04/09/2014 03:45 AM, Day, Phil wrote:
>>>
>>> -Original Message- From: Russell Bryant
>
>
>>> We were thinking that there may be a use for being able to query a
>>> full list of instances (including the deleted ones) for a group.
>>> The API just hasn't made it that far yet.  Just hiding them for now
>>> leaves room to iterate and doesn't prevent either option (exposing
>>> the deleted instances, or changing to auto- delete them from the
>>> group).
>
>
>> Maybe it's just me, but I have a natural aversion to anything that
>> grows forever in the database - over time and at scale this becomes a
>> real problem.
>
>
> Not just you.  I want my main database to reflect the current active data.
> Historical data should go somewhere else.

+1. Fastest way to make an OLTP workload crawl is to mix it up with warehousing.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Dynamic scheduling

2014-04-09 Thread Henrique Truta
Hello, everyone!

I am currently a graduate student and member of a group of contributors to
OpenStack. We believe that a dynamic scheduler could improve the efficiency
of an OpenStack cloud, either by rebalancing nodes to maximize performance
or to minimize the number of active hosts, in order to minimize energy
costs. Therefore, we would like to propose a dynamic scheduling mechanism
to Nova. The main idea is using the Ceilometer information (e.g. RAM, CPU,
disk usage) through the ceilometer-client and dynamically decide whether an
instance should be live migrated.

This might be done as a Nova periodic task, which will be executed at a
given interval, or as a new independent project. In both cases, the
current Nova scheduler will not be affected, since this new scheduler will
be pluggable. We have done a search and found no such initiative in the
OpenStack BPs. Outside the community, we found only a recent IBM
announcement for a similar feature in one of its cloud products.

A possible flow is: In the new scheduler, we periodically make a call to
Nova, get the instance list from a specific host and, for each instance, we
make a call to the ceilometer-client (e.g. $ ceilometer statistics -m
cpu_util -q resource=$INSTANCE_ID) and then, according to some specific
parameters configured by the user, analyze the meters and do the proper
migrations.
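
For illustration, a rough sketch of one pass of such a periodic task
(credentials, the host name, the threshold and the decision to let the
scheduler pick the target host are assumptions, not an existing
implementation):

    # Rough sketch of one pass of the proposed periodic task. Credentials,
    # the host name, the 80% threshold and the bare live_migrate() call are
    # illustrative assumptions only.
    from ceilometerclient import client as ceilo_client
    from novaclient.v1_1 import client as nova_client

    nova = nova_client.Client('admin', 'secret', 'admin',
                              'http://keystone:5000/v2.0')
    ceilometer = ceilo_client.get_client(
        2,
        os_username='admin',
        os_password='secret',
        os_tenant_name='admin',
        os_auth_url='http://keystone:5000/v2.0')

    for server in nova.servers.list(search_opts={'host': 'compute-1',
                                                 'all_tenants': 1}):
        # Equivalent of: ceilometer statistics -m cpu_util -q resource=<id>
        query = [{'field': 'resource_id', 'op': 'eq', 'value': server.id}]
        stats = ceilometer.statistics.list(meter_name='cpu_util', q=query)
        if stats and stats[-1].avg > 80.0:  # user-configured threshold
            server.live_migrate()           # let the scheduler pick a host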

Do you have any comments or suggestions?

--
Ítalo Henrique Costa Truta
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting April 10 1800 UTC [savanna]

2014-04-09 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Agenda_for_April.2C_10
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140410T18

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-09 Thread Chris Friesen

On 04/09/2014 03:55 AM, Day, Phil wrote:


I would guess that affinity is more likely to be a soft requirement
than anti-affinity, in that I can see some services just not meeting
their HA goals without anti-affinity, but I'm struggling to think of a
use case where affinity is a must for the service.


Maybe something related to latency?  Put a database server and several 
public-facing servers all on the same host and they can talk to each 
other with less latency than if they had to go over the wire to another 
host?


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Server Groups are not an optional element, bug or feature ?

2014-04-09 Thread Chris Friesen

On 04/09/2014 03:45 AM, Day, Phil wrote:

-Original Message- From: Russell Bryant



We were thinking that there may be a use for being able to query a
full list of instances (including the deleted ones) for a group.
The API just hasn't made it that far yet.  Just hiding them for now
leaves room to iterate and doesn't prevent either option (exposing
the deleted instances, or changing to auto- delete them from the
group).



Maybe it's just me, but I have a natural aversion to anything that
grows forever in the database - over time and at scale this becomes a
real problem.


Not just you.  I want my main database to reflect the current active 
data.  Historical data should go somewhere else.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [TripleO] [Tuskar] Demo of current state of "Tuskar-UI"

2014-04-09 Thread mar...@redhat.com
On 09/04/14 16:54, Jaromir Coufal wrote:
> Hello OpenStackers,
> 
> I would like to share with you non-narrated demo of current version of
> 'Tuskar-UI' project, which is very close to Icehouse release (one or two
> more patches to come in).
> 
> Tuskar-UI is a user interface based on TripleO approach which allows
> user to register nodes (currently nova-baremetal -> ironic), define
> hardware profiles (nova-flavors), design OpenStack deployment (Tuskar)
> and based on HW profiles to deploy OpenStack on your baremetal nodes
> (Heat).
> 
> Demo: https://www.youtube.com/watch?v=3_u2PmeF36k
> 
> Juno roadmap - Tuskar planning for J cycle:
> https://wiki.openstack.org/wiki/TripleO/TuskarJunoPlanning
> 
> If you have any questions, we are happy to help you. Just ask here on
> the mailing list, or you can find many folks on following channels:
> #tuskar, #tripleo, #openstack-horizon (UI related channel)
> 
> Cheers
> -- Jarda

Jarda thanks this was great to watch - seems a lot of things have been
fixed/tweaked in last couple weeks. Is everything running from current
master branches?

marios

> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] How to solve the cgit repository browser line number misalignment in Chrome

2014-04-09 Thread Doug Hellmann
I don't, but someone on the infra team (#openstack-infra) should be
able to tell you where the theme is maintained.

Doug

On Tue, Apr 8, 2014 at 7:26 PM, Zhongyue Luo  wrote:
> Do you happen to know where the repo for cgit is? I'll submit a patch adding
> font and font size.
>
> On Apr 8, 2014 10:24 PM, "Doug Hellmann" 
> wrote:
>>
>> Maybe those changes should be added to our cgit stylesheet?
>>
>> Doug
>>
>> On Mon, Apr 7, 2014 at 9:23 PM, Zhongyue Luo 
>> wrote:
>> > Hi,
>> >
>> > I know I'm not the only person who had this problem so here's two simple
>> > steps to get the lines and line numbers aligned.
>> >
>> > 1. Install the stylebot extension
>> >
>> >
>> > https://chrome.google.com/extensions/detail/oiaejidbmkiecgbjeifoejpgmdaleoha
>> >
>> > 2. Click on the download icon to install the custom style for
>> > git.openstack.org
>> >
>> > http://stylebot.me/styles/5369
>> >
>> > Thanks!
>> >
>> > --
>> > Intel SSG/STO/DCST/CBE
>> > 880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
>> > China
>> > +862161166500
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso][neutron] proxying oslo.messaging from management network into tenant network/VMs

2014-04-09 Thread Doug Hellmann
On Wed, Apr 9, 2014 at 9:38 AM, Isaku Yamahata  wrote:
> Hello Dmitry. Thank you for reply.
>
> On Wed, Apr 09, 2014 at 03:19:10PM +0400,
> Dmitry Mescheryakov  wrote:
>
>> Hello Isaku,
>>
>> Thanks for sharing this! Right now in the Sahara project we are thinking of
>> using Marconi as a means to communicate with VMs. Seems like you are familiar
>> with the discussions that have happened so far. If not, please see the links
>> at the bottom of the UnifiedGuestAgent [1] wiki page. In short, we see Marconi's
>> support for multi-tenancy as a huge advantage over other MQ
>> solutions. Our agent is network-based, so tenant isolation is a real
>> issue here. For clarity, here is the overview scheme of a network-based
>> agent:
>>
>> server <-> MQ (Marconi) <-> agent
>>
>> All communication goes over network. I've made a PoC of the Marconi
>> driver for oslo.messaging, you can find it at [2]
>
> I'm not familiar with Marconi, so please enlighten me first.
> How does MQ (Marconi) communicate with both the management network and
> the tenant network?
> Does it work with Neutron networking, not nova-network?
>
> Neutron isolates not only tenant networks from each other,
> but also the management network, at L2. So OpenStack servers can't send
> any packets to VMs, and VMs can't send to OpenStack servers.
> This is the reason why Neutron introduced an HTTP proxy for instance metadata.
> It is also the reason why I chose to introduce a new agent on the host.
> If Marconi (or other projects like Sahara) has already solved those issues,
> that's great.

Marconi has a REST API that runs at the same access levels as other
OpenStack APIs, and all clients interact with Marconi via that API.
Tenants don't need access to private management networks, since all of
the traffic occurs over public shared networks.

Doug
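
For illustration only, a minimal sketch of that interaction over Marconi's
REST API (the endpoint, queue name, token and Client-ID are placeholders, and
the v1 resource paths should be double-checked against the Marconi
documentation):

    # Illustrative only: post and read a message through Marconi's REST API.
    # Endpoint, queue name, token and Client-ID are placeholders; verify the
    # v1 resource paths against the Marconi docs.
    import json
    import uuid

    import requests

    MARCONI = 'http://marconi.example.com:8888/v1'
    HEADERS = {'Client-ID': str(uuid.uuid4()),
               'X-Auth-Token': 'TOKEN',
               'Content-Type': 'application/json'}

    # The server side posts a command for the guest agent...
    requests.put('%s/queues/guest-agent-demo' % MARCONI, headers=HEADERS)
    requests.post('%s/queues/guest-agent-demo/messages' % MARCONI,
                  headers=HEADERS,
                  data=json.dumps([{'ttl': 300,
                                    'body': {'method': 'restart_service',
                                             'args': {'name': 'mysql'}}}]))

    # ...and the agent inside the VM polls the same queue over the public API.
    resp = requests.get('%s/queues/guest-agent-demo/messages?echo=true'
                        % MARCONI, headers=HEADERS)
    if resp.ok and resp.content:
        print(resp.json())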

>
>
>> We also considered 'hypervisor-dependent' agents (as I called them in
>> the initial thread) like the one you propose. They also provide tenant
>> isolation. But the drawback is a _much_ bigger development cost and a more
>> fragile and complex deployment.
>>
>> In the case of a network-based agent, all the code is
>>  * a Marconi driver for the RPC library (oslo.messaging)
>>  * a thin client for the server to make calls
>>  * a guest agent with a thin server side
>> If you write your agent in Python, it will work on any OS with any
>> host hypervisor.
>>
>>
>> For hypervisor dependent-agent it becomes much more complex. You need
>> one more additional component - a proxy-agent running on Compute host,
>> which makes deployment harder. You also need to support various
>> transports for various hypervisors: virtio-serial for KVM, XenStore
>> for Xen, something for Hyper-V, etc. Moreover guest OS must have
>> driver for these transports and you will probably need to write
>> different implementation for different OSes.
>>
>> Also you mention that in some cases a second proxy-agent is needed and
>> again in some cases only cast operations could be used. Using cast
>> only is not an option for Sahara, as we do need feedback from the
>> agent and sometimes getting the return value is the main reason to
>> make an RPC call.
>>
>> I didn't see a discussion in Neutron on which approach to use (if it
>> was, I missed it). I see simplicity of network-based agent as a huge
>> advantage. Could you please clarify why you've picked design depending
>> on hypervisor?
>
> I agree with those arguments.
> But I don't see how the network-based agent approach works with Neutron
> networking for now. Can you please elaborate on it?
>
>
> thanks,
>
>
>> Thanks,
>>
>> Dmitry
>>
>>
>> [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
>> [2] https://github.com/dmitrymex/oslo.messaging
>>
>> 2014-04-09 12:33 GMT+04:00 Isaku Yamahata :
>> > Hello developers.
>> >
>> >
>> > As discussed many times so far[1], there are many projects that need
>> > to propagate RPC messages into VMs running on OpenStack. Neutron in my 
>> > case.
>> >
>> > My idea is to relay RPC messages from management network into tenant
>> > network over file-like object. By file-like object, I mean virtio-serial,
>> > unix domain socket, unix pipe and so on.
>> > I've written some code based on oslo.messaging[2][3] and some documentation
>> > on use cases.[4][5]
>> > Only file-like transport and proxying messages would be in oslo.messaging
>> > and agent side code wouldn't be a part of oslo.messaging.
>> >
>> >
>> > use cases:([5] for more figures)
>> > file-like object: virtio-serial, unix domain socket, unix pipe
>> >
>> >   server <-> AMQP <-> agent in host <-virtio serial-> guest agent in VM
>> >   per VM
>> >
>> >   server <-> AMQP <-> agent in host <-unix socket/pipe->
>> >  agent in tenant network <-> guest agent in VM
>> >
>> >
>> > So far there are security concerns to forward oslo.messaging from 
>> > management
>> > network into tenant network. One approach is to allow only cast-RPC from
>> > server to guest agent in VM so that guest agent in VM only receives 
>> > messages
>> > and can't send anything to servers. With unix p

[openstack-dev] [Horizon] [TripleO] [Tuskar] Demo of current state of "Tuskar-UI"

2014-04-09 Thread Jaromir Coufal

Hello OpenStackers,

I would like to share with you a non-narrated demo of the current version of 
the 'Tuskar-UI' project, which is very close to the Icehouse release (one or 
two more patches to come in).


Tuskar-UI is a user interface based on the TripleO approach which allows 
the user to register nodes (currently nova-baremetal -> ironic), define 
hardware profiles (nova-flavors), design an OpenStack deployment (Tuskar) 
and, based on the HW profiles, deploy OpenStack on your baremetal nodes (Heat).


Demo: https://www.youtube.com/watch?v=3_u2PmeF36k

Juno roadmap - Tuskar planning for J cycle: 
https://wiki.openstack.org/wiki/TripleO/TuskarJunoPlanning


If you have any questions, we are happy to help you. Just ask here on 
the mailing list, or you can find many folks on the following channels:

#tuskar, #tripleo, #openstack-horizon (UI related channel)

Cheers
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-09 Thread Alexis Lee
Robert Collins said on Wed, Apr 09, 2014 at 01:58:59AM +1200:
> I like this - something like
> 
> nova:
>   config:
> - section: default
>   values:
> - option: 'compute_manager'
>   value: 'ironic.nova.compute.manager.ClusterComputeManager'
> - section: cells
>   values:
> - option: 'driver'
>   value: nova.cells.rpc_driver.CellsRPCDriver
> 
> 
> should be able to represent most? all (it can handle repeating items)
> oslo.config settings and render it easily:
> 
> {{#config}}
> {{#comment}} repeats for each section {{/comment}}
> [{{section}}]
> {{#values}}
> {{option}}={{value}}
> {{/values}}
> {{/config}}
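
For illustration, a small sketch of how that data and template combine
(assuming pystache, the Mustache implementation os-apply-config uses, is
available; the context dict below simply mirrors the YAML in the quote):

    # Small sketch: render the quoted nova config data through the quoted
    # Mustache template. Assumes pystache is installed.
    import pystache

    # The same data as the YAML above, expressed as the template context.
    context = {'config': [
        {'section': 'default',
         'values': [{'option': 'compute_manager',
                     'value': 'ironic.nova.compute.manager.'
                              'ClusterComputeManager'}]},
        {'section': 'cells',
         'values': [{'option': 'driver',
                     'value': 'nova.cells.rpc_driver.CellsRPCDriver'}]},
    ]}

    template = ("{{#config}}\n"
                "[{{section}}]\n"
                "{{#values}}\n"
                "{{option}}={{value}}\n"
                "{{/values}}\n"
                "{{/config}}\n")

    print(pystache.render(template, context))
    # Output (approximately):
    # [default]
    # compute_manager=ironic.nova.compute.manager.ClusterComputeManager
    # [cells]
    # driver=nova.cells.rpc_driver.CellsRPCDriver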

Hello,

I've gone some distance down this road:
  
https://review.openstack.org/#/c/83353/6/elements/nova/os-apply-config/etc/nova/log.conf
  https://review.openstack.org/#/c/83422/6/logstash-source.yaml

I wouldn't call the result - encoding a complete config file into Heat
metadata - pretty. And this isn't completely genericised either.

It'd be much better if TripleO image elements focused on installing and
starting services and allowed system integrators to define the
configuration. In one place, in plain text files, the UNIX way. I've
appended my proposal to Rob's etherpad here:
  https://etherpad.openstack.org/p/tripleo-config-passthrough

Soon-to-be outdated copy appended here:


Hi Rob, I have some serious concerns about the above approaches. For the
sake of argument, let's suppose we want to write a file that looks like
a Heat template. How would you write a Mustache template that handles
that level of nesting? Even if you accomplish that, how readable do you
think the metadata to fill out that template would look?

I see the system integration process emerging like this:
* Figure out what files you want + what you want in them
* Slice and dice that into metadata
* Write some fairly complicated templates to reconstitute the metadata
* Get out more or less what you started with

I'd like to propose an alternative method where Heat and TripleO barely
touch the config. The system integrator writes an image element per
node-flavour, EG "mycorp-compute-config". If they choose, they could
write more (EG for specific hardware) limited only by their
devtest-equivalent's ability to allocate those. This element contains a
99-os-apply-config directory, the templates from which overwrite any
templates from normal os-apply-config directories in other elements.
os-apply-config/install.d/99-install-config-templates will need to be
patched for this to be possible, but this is very little work in
comparison to the alternatives. I could also support simply an
os-apply-config.override directory, if a full numbered set of dirs seems
overkill, but in this case normal elements would have to be forbidden
from using it (and people being as they are, someone would). The
templates in that directory are 99% plain config files, laid out in a
single filesystem structure exactly as the system integrator wants them.
The only templated values which need to be supplied by Heat are those
which vary per-instance.

If we do this, tripleo-image-elements should focus on installing and
starting services. They should only include a minimal viable
configuration for demo purposes. This should greatly reduce the amount
of work required to produce a new element. It should also reduce the number of
Heat parameters used by any element (only per-instance values would be
necessary; anything further is a convenience).

Some usecases where this approach is superior:
* Files where order is important, EG squid.conf
* Files which multiple elements want to touch, EG nova.conf
* When the system integrator wants to add config unforeseen by the
  appropriate element or where using an element would be
  heavyweight. EG to configure the MOTD or add a global vimrc.
* Easy to add hardware-specific configuration

Final thoughts - the "mycorp-compute-config" element might need to do a
bit of chmod + chown'ing as well as just providing 99-os-apply-config.

If OpenStack wants to provide a complete off-the-shelf configured
solution, we could provide a system integrator element which expresses
OpenStack opinion on what that solution should look like. In fact we
could provide several, suitable to different scales.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][reviewers] passthru config option - priority

2014-04-09 Thread Alexis Lee
Steve Baker said on Wed, Apr 09, 2014 at 10:56:14AM +1200:
> On 09/04/14 10:09, Robert Collins wrote:
> > https://etherpad.openstack.org/p/tripleo-config-passthrough

Blast, replied to the earlier thread before I saw this one. That'll
teach me. I've also appended my competing solution, what do you think
please?


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso][neutron] proxying oslo.messaging from management network into tenant network/VMs

2014-04-09 Thread Isaku Yamahata
Hello Dmitry. Thank you for reply.

On Wed, Apr 09, 2014 at 03:19:10PM +0400,
Dmitry Mescheryakov  wrote:

> Hello Isaku,
> 
> Thanks for sharing this! Right now in the Sahara project we are thinking of
> using Marconi as a means to communicate with VMs. Seems like you are familiar
> with the discussions that have happened so far. If not, please see the links
> at the bottom of the UnifiedGuestAgent [1] wiki page. In short, we see Marconi's
> support for multi-tenancy as a huge advantage over other MQ
> solutions. Our agent is network-based, so tenant isolation is a real
> issue here. For clarity, here is the overview scheme of a network-based
> agent:
> 
> server <-> MQ (Marconi) <-> agent
> 
> All communication goes over network. I've made a PoC of the Marconi
> driver for oslo.messaging, you can find it at [2]

I'm not familiar with Marconi, so please enlighten me first.
How does MQ (Marconi) communicate with both the management network and
the tenant network?
Does it work with Neutron networking, not nova-network?

Neutron isolates not only tenant networks from each other,
but also the management network, at L2. So OpenStack servers can't send
any packets to VMs, and VMs can't send to OpenStack servers.
This is the reason why Neutron introduced an HTTP proxy for instance metadata.
It is also the reason why I chose to introduce a new agent on the host.
If Marconi (or other projects like Sahara) has already solved those issues,
that's great.


> We also considered 'hypervisor-dependent' agents (as I called them in
> the initial thread) like the one you propose. They also provide tenant
> isolation. But the drawback is a _much_ bigger development cost and a more
> fragile and complex deployment.
> 
> In the case of a network-based agent, all the code is
>  * a Marconi driver for the RPC library (oslo.messaging)
>  * a thin client for the server to make calls
>  * a guest agent with a thin server side
> If you write your agent in Python, it will work on any OS with any
> host hypervisor.
> 
> 
> For hypervisor dependent-agent it becomes much more complex. You need
> one more additional component - a proxy-agent running on Compute host,
> which makes deployment harder. You also need to support various
> transports for various hypervisors: virtio-serial for KVM, XenStore
> for Xen, something for Hyper-V, etc. Moreover guest OS must have
> driver for these transports and you will probably need to write
> different implementation for different OSes.
> 
> Also you mention that in some cases a second proxy-agent is needed and
> again in some cases only cast operations could be used. Using cast
> only is not an option for Sahara, as we do need feedback from the
> agent and sometimes getting the return value is the main reason to
> make an RPC call.
> 
> I didn't see a discussion in Neutron on which approach to use (if it
> was, I missed it). I see simplicity of network-based agent as a huge
> advantage. Could you please clarify why you've picked design depending
> on hypervisor?

I agree with those arguments.
But I don't see how the network-based agent approach works with Neutron
networking for now. Can you please elaborate on it?


thanks,


> Thanks,
> 
> Dmitry
> 
> 
> [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
> [2] https://github.com/dmitrymex/oslo.messaging
> 
> 2014-04-09 12:33 GMT+04:00 Isaku Yamahata :
> > Hello developers.
> >
> >
> > As discussed many times so far[1], there are many projects that need
> > to propagate RPC messages into VMs running on OpenStack. Neutron in my case.
> >
> > My idea is to relay RPC messages from management network into tenant
> > network over file-like object. By file-like object, I mean virtio-serial,
> > unix domain socket, unix pipe and so on.
> > I've written some code based on oslo.messaging[2][3] and some documentation
> > on use cases.[4][5]
> > Only file-like transport and proxying messages would be in oslo.messaging
> > and agent side code wouldn't be a part of oslo.messaging.
> >
> >
> > use cases:([5] for more figures)
> > file-like object: virtio-serial, unix domain socket, unix pipe
> >
> >   server <-> AMQP <-> agent in host <-virtio serial-> guest agent in VM
> >   per VM
> >
> >   server <-> AMQP <-> agent in host <-unix socket/pipe->
> >  agent in tenant network <-> guest agent in VM
> >
> >
> > So far there are security concerns to forward oslo.messaging from management
> > network into tenant network. One approach is to allow only cast-RPC from
> > server to guest agent in VM so that guest agent in VM only receives messages
> > and can't send anything to servers. With unix pipe, it's write-only
> > for server, read-only for guest agent.
> >
> >
> > Thoughts? comments?
> >
> >
> > Details of Neutron NFV use case[6]:
> > Neutron services so far typically runs agents in host, the host agent
> > in host receives RPCs from neutron server, then it executes necessary
> > operations. Sometimes the agent in host issues RPC to neutron server
> > periodically.(e.g. status report etc)
> > It's desirable to make such services virtual

Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-09 Thread Jay Pipes
On Mon, 2014-04-07 at 15:47 +0100, Matthew Booth wrote:
> On 07/04/14 06:20, Jay Pipes wrote:
> > On Sun, 2014-04-06 at 06:59 +, Nandavar, Divakar Padiyar wrote:
>  Well, it seems to me that the problem is the above blueprint and the 
>  code it introduced. This is an anti-feature IMO, and probably the best 
>  solution would be to remove the above code and go back to having a 
>  single >> nova-compute managing a single vCenter cluster, not multiple 
>  ones.
> >>
> >> Problem is not introduced by managing multiple clusters from single 
> >> nova-compute proxy node.  
> > 
> > I strongly disagree.
> > 
> >> Internally this proxy driver is still presenting the "compute-node" for 
> >> each of the cluster its managing.
> > 
> > In what way?
> > 
> >>  What we need to think about is applicability of the live migration use 
> >> case when a "cluster" is modelled as a compute.   Since the "cluster" is 
> >> modelled as a compute, it is assumed that a typical use case of live-move 
> >> is taken care by the underlying "cluster" itself.   With this there 
> >> are other use cases which are no-op today like host maintenance mode, live 
> >> move, setting instance affinity etc., In order to resolve this I was 
> >> thinking of 
> >> "A way to expose operations on individual ESX Hosts like Putting host in 
> >> maintenance mode,  live move, instance affinity etc., by introducing 
> >> Parent - Child compute node concept.   Scheduling can be restricted to 
> >> Parent compute node and Child compute node can be used for providing more 
> >> drill down on compute and also enable additional compute operations".
> >> Any thoughts on this?
> > 
> > The fundamental problem is that hacks were put in place in order to make
> > Nova defer control to vCenter, when the design of Nova and vCenter are
> > not compatible, and we're paying the price for that right now.
> > 
> > All of the operations you describe above -- putting a host in
> > maintenance mode, live-migration of an instance, ensuring a new instance
> > is launched near or not-near another instance -- depend on a fundamental
> > design feature in Nova: that a nova-compute worker fully controls and
> > manages a host that provides a place to put server instances. We have
> > internal driver interfaces for the *hypervisor*, not for the *manager of
> > hypervisors*, because, you know, that's what Nova does.
> 
> I'm going to take you to task here for use of the word 'fundamental'.
> What does Nova do? Apparently: 'OpenStack Nova provides a cloud
> computing fabric controller, supporting a wide variety of virtualization
> technologies, including KVM, Xen, LXC, VMware, and more. In addition to
> its native API, it includes compatibility with the commonly encountered
> Amazon EC2 and S3 APIs.' There's nothing in there about the ratio of
> Nova instances to hypervisors: that's an implementation detail. Now this
> change may or may not sit well with design decisions which have been
> made in the past, but the concept of managing multiple clusters from a
> single Nova instance is certainly not fundamentally wrong. It may not be
> pragmatic; it may require further changes to Nova which were not made,
> but there is nothing about it which is fundamentally at odds with the
> stated goals of the project.
> 
> Why did I bother with that? I think it's in danger of being lost. Nova
> has been around for a while now and it has a lot of code and a lot of
> developers behind it. We need to remember, though, that's it's all for
> nothing if nobody wants to use it. VMware is different, but not wrong.
> Let's stay fresh.

Please see my previous email to Juan about this. I'm not anti-VMWare.
I'm just opposed to changing an important part of the implementation of
Nova just so that certain vCenter operations can be supported.

> > The problem with all of the vCenter stuff is that it is trying to say to
> > Nova "don't worry, I got this" but unfortunately, Nova wants and needs
> > to manage these things, not surrender control to a different system that
> > handles orchestration and scheduling in its own unique way.
> 
> Again, I'll flip that round. Nova *currently* manages these things, and
> working efficiently with a platform which also does these things would
> require rethinking some design above the driver level. It's not
> something we want to do naively, which the VMware driver is suffering
> from in this area. It may take time to get this right, but we shouldn't
> write it off as fundamentally wrong. It's useful to users and not
> fundamentally at odds with the project's goals.

I'm not writing off vCenter or its capabilities. I am arguing that the
bar for modifying a fundamental design decision in Nova -- that of being
horizontally scalable by having a single nova-compute worker responsible
for managing a single provider of compute resources -- was WAY too low,
and that this decision should be revisited in the future (and possibly
as part of the v

Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-09 Thread Jay Pipes
Hi Juan, thanks for your response. Comments inline.

On Mon, 2014-04-07 at 10:22 +0200, Juan Manuel Rey wrote:
> Hi, 
> 
> I'm fairly new to this list, actually this is my first email sent, and
> to OpenStack in general, but I'm not new at all to VMware so I'll try
> to give you my point of view about possible use case here. 
> 
> Jay you are saying that by using Nova to manage ESXi hosts we don't
> need vCenter because they basically overlap in their capabilities.

Actually, no, this is not my main point. My main point is that Nova
should not change its architecture to fit the needs of one particular
host management platform (vCenter).

Nova should, as much as possible, communicate with vCenter to perform
some operations -- in the same way that Nova communicates with KVM or
XenServer to perform some operations. But Nova should not be
re-architected (and I believe that is what has gone on here with the
code change to have one nova-compute worker talking to multiple vCenter
clusters) just so that one particular host management scheduler/platform
(vCenter) can have all of its features exposed to Nova.

>  I agree with you to some extent, Nova may have similar capabilities
> as vCenter Server but as you know OpenStack as a full cloud solution
> adds a lot more features that vCenter lacks, like multitenancy just to
> name one.

Sure, however, my point is that Nova shouldn't need to be re-architected
just to adhere to one particular host management platform's concepts of
an atomic provider of compute resources.

> Also in any vSphere environment, managing ESXi hosts individually, that
> is, without vCenter, is completely out of the question. vCenter is the
> enabler of many vSphere features. And precisely that is, IMHO, the
> use case of using Nova to manage vCenter to manage vSphere. Without
> vCenter we only have a bunch of hypervisors and none of the HA or DRS
> (dynamic resource balancing) capabilities that a vSphere cluster
> provides; this, in my experience with vSphere users/customers, is a no-go
> scenario.

Understood. Still doesn't change my opinion though :)

Best,
-jay

> I don't know why the decision to manage vCenter with Nova was made but
> based on the above I understand the reasoning.
> 
> 
> Best,
> ---
> Juan Manuel Rey
> 
> @jreypo
> 
> 
> On Mon, Apr 7, 2014 at 7:20 AM, Jay Pipes  wrote:
> On Sun, 2014-04-06 at 06:59 +, Nandavar, Divakar Padiyar
> wrote:
> > >> Well, it seems to me that the problem is the above
> blueprint and the code it introduced. This is an anti-feature
> IMO, and probably the best solution would be to remove the
> above code and go back to having a single nova-compute
> managing a single vCenter cluster, not multiple ones.
> >
> > Problem is not introduced by managing multiple clusters from
> single nova-compute proxy node.
> 
> 
> I strongly disagree.
> 
> > Internally this proxy driver is still presenting the
> "compute-node" for each of the cluster its managing.
> 
> 
> In what way?
> 
> >  What we need to think about is applicability of the live
> migration use case when a "cluster" is modelled as a compute.
> Since the "cluster" is modelled as a compute, it is assumed
> that a typical use case of live-move is taken care of by the
> underlying "cluster" itself. With this, there are other
> use cases which are no-op today, like host maintenance mode,
> live move, setting instance affinity, etc. In order to
> resolve this I was thinking of
> > "A way to expose operations on individual ESX Hosts like
> Putting host in maintenance mode,  live move, instance
> affinity etc., by introducing Parent - Child compute node
> concept.   Scheduling can be restricted to Parent compute node
> and Child compute node can be used for providing more drill
> down on compute and also enable additional compute
> operations".Any thoughts on this?
> 
> 
> The fundamental problem is that hacks were put in place in order to make
> Nova defer control to vCenter, when the design of Nova and vCenter are
> not compatible, and we're paying the price for that right now.
> 
> All of the operations you describe above -- putting a host in
> maintenance mode, live-migration of an instance, ensuring a new instance
> is launched near or not-near another instance -- depend on a fundamental
> design feature in Nova: that a nova-compute worker fully controls and
> manages a host that provides a place to put server instances. We have
> internal driver interfaces for the *hypervisor*, not for the *manager of
> hypervisors*, because, you know, that'

[openstack-dev] [Neutron] Icehouse RC2 available

2014-04-09 Thread Thierry Carrez
Hello everyone,

Due to various release-critical issues detected in Neutron icehouse
RC1, a new release candidate was just generated. You can find a list of
the 34 bugs fixed and a link to the RC2 source tarball at:

https://launchpad.net/neutron/icehouse/icehouse-rc2

Unless new release-critical issues are found that warrant a release
candidate respin, this RC2 will be formally released as the 2014.1 final
version on April 17 next week. Given the unusually large number of bugs
fixed in this RC, you are strongly encouraged to test and validate this
tarball !

Alternatively, you can directly test the milestone-proposed branch at:
https://github.com/openstack/neutron/tree/milestone-proposed

If you find an issue that could be considered release-critical and
justify a release candidate respin, please file it at:

https://bugs.launchpad.net/neutron/+filebug

and tag it *icehouse-rc-potential* to bring it to the release crew's
attention.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Problem with kombu version.

2014-04-09 Thread Dmitry Burmistrov
Matthew, the main reason is global-requirements.txt.
It specifies that your app should work with kombu v2.4.8 and higher.

On Wed, Apr 9, 2014 at 3:38 PM, Matthew Mosesohn  wrote:
> Dmitry, I don't think you should drop kombu.five so soon.
> We haven't heard directly from Fuel python team, such as Dmitry
> Pyzhov, what reason we have to lock kombu at version 2.5.14.
> I wrote to him earlier today out of band, so hopefully he will get
> back to this message soon.
>
> On Wed, Apr 9, 2014 at 3:27 PM, Dmitry Teselkin  
> wrote:
>> Hi again,
>>
>> So there is a reply from Dmitry Burmistrov which for some reason was
>> missed in this thread:
>>> Nailgun requires exact version of kombu ( == 2.5.14 ).
>>> This is the only reason why we can't update it.
>>> I think you should talk to Dmitry P. about this version conflict.
>>> I want to take this opportunity to remind everyone that we should
>>> adhere to the global-requirements.txt in order to avoid such
>>> conflicts.
>>
>> Hopefully our developers have decided to get rid of the kombu.five usage, which
>> looks like an easy task.
>>
>> Thanks, everyone.
>>
>>
>>
>> On Mon, Apr 7, 2014 at 8:33 PM, Dmitry Teselkin 
>> wrote:
>>>
>>> Hello,
>>>
>>> I'm working on Murano integration into FUEL-5.0, and have faced the
>>> following problem: our current implementation depends on 'kombu.five'
>>> module, but this module (actually a single file) is available only starting
>>> at kombu 3.0. So this means that murano-api component depends on kombu
>>> >=3.0. This meets the OpenStack global requirements list, where kombu
>>> >=2.4.8 is declared. Unfortunately, this also means that "system-wide"
>>> version upgrade is required.
>>>
>>> So the question is - what is the right way to solve the problem? I see
>>> the following options:
>>> 1. change kombu version requirement to >=3.0 for entire FUEL installation
>>> - it doesn't break global requirements constraint but some other FUEL
>>> components could be affected.
>>> 2. replace calls to functions from 'kombu.five' and use the existing version
>>> - I'm not sure if it's possible, I'm awaiting an answer from our developers.
>>>
>>> Which is the most suitable variant, or are there any other solutions for
>>> the problem?
>>>
>>>
>>> --
>>> Thanks,
>>> Dmitry Teselkin
>>> Deployment Engineer
>>> Mirantis
>>> http://www.mirantis.com
>>
>>
>>
>>
>> --
>> Thanks,
>> Dmitry Teselkin
>> Deployment Engineer
>> Mirantis
>> http://www.mirantis.com
>>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-09 Thread Jay Lau
2014-04-09 19:04 GMT+08:00 Matthew Booth :

> On 09/04/14 07:07, Chen CH Ji wrote:
> > we used to have one compute service corresponding to multiple
> > hypervisors (like host and nodes concept )
> > our major issue on our platform is we can't run nova-compute service on
> > the hypervisor and we need to find another place to run the nova-compute
> > in order to talk to
> > hypervisor management API through REST API
>
> It may not be directly relevant to this discussion, but I'm interested
> to know what constraint prevents you running nova-compute on the
> hypervisor.
>
Actually, VMWare has two drivers, one is ESXDriver and the other is
VCDriver.

When using ESXDriver, one nova-compute can only manage one ESX host, but
ESXDriver does not support some advanced features such as live migration,
resize, etc., and this driver has been deprecated.

We are now talking about VCDriver, which talks to vCenter via its WSDL API.
VCDriver is intended to support all VM operations, but we need some
enhancements to make it work well for some advanced features such
as live migration.


> Matt
>
> --
> Matthew Booth, RHCA, RHCSS
> Red Hat Engineering, Virtualisation Team
>
> GPG ID:  D33C3490
> GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
>



-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Problem with kombu version.

2014-04-09 Thread Matthew Mosesohn
Dmitry, I don't think you should drop kombu.five so soon.
We haven't heard directly from Fuel python team, such as Dmitry
Pyzhov, what reason we have to lock kombu at version 2.5.14.
I wrote to him earlier today out of band, so hopefully he will get
back to this message soon.

On Wed, Apr 9, 2014 at 3:27 PM, Dmitry Teselkin  wrote:
> Hi again,
>
> So there is a reply from Dmitry Burmistrov which for some reason was
> missed in this thread:
>> Nailgun requires exact version of kombu ( == 2.5.14 ).
>> This is the only reason why we can't update it.
>> I think you should talk to Dmitry P. about this version conflict.
>> I want to take this opportunity to remind everyone that we should
>> adhere to the global-requirements.txt in order to avoid such
>> conflicts.
>
> Hopefully our developers have decided to get rid of the kombu.five usage, which
> looks like an easy task.
>
> Thanks, everyone.
>
>
>
> On Mon, Apr 7, 2014 at 8:33 PM, Dmitry Teselkin 
> wrote:
>>
>> Hello,
>>
>> I'm working on Murano integration into FUEL-5.0, and have faced the
>> following problem: our current implementation depends on 'kombu.five'
>> module, but this module (actually a single file) is available only starting
>> at kombu 3.0. So this means that murano-api component depends on kombu
>> >=3.0. This meets the OpenStack global requirements list, where kombu
>> >=2.4.8 is declared. Unfortunately, this also means that "system-wide"
>> version upgrade is required.
>>
>> So the question is - what is the right way to solve the problem? I see
>> the following options:
>> 1. change kombu version requirement to >=3.0 for entire FUEL installation
>> - it doesn't break global requirements constraint but some other FUEL
>> components could be affected.
>> 2. replace calls to functions from 'kombu.five' and use the existing version
>> - I'm not sure if it's possible, I'm awaiting an answer from our developers.
>>
>> Which is the most suitable variant, or are there any other solutions for
>> the problem?
>>
>>
>> --
>> Thanks,
>> Dmitry Teselkin
>> Deployment Engineer
>> Mirantis
>> http://www.mirantis.com
>
>
>
>
> --
> Thanks,
> Dmitry Teselkin
> Deployment Engineer
> Mirantis
> http://www.mirantis.com
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Problem with kombu version.

2014-04-09 Thread Dmitry Teselkin
Hi again,

So there is a reply from Dmitry Burmistrov which for some reason was
missed in this thread:
> Nailgun requires exact version of kombu ( == 2.5.14 ).
> This is the only reason why we can't update it.
> I think you should talk to Dmitry P. about this version conflict.
> I want to take this opportunity to remind everyone that we should
> adhere to the global-requirements.txt in order to avoid such
> conflicts.

Hopefully our developers have decided to get rid of the kombu.five usage, which
looks like an easy task.
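
For illustration, a minimal sketch of what such a change could look like: isolate
the kombu.five usage behind a local compatibility shim, assuming (purely
hypothetically) that the only helper needed from kombu.five is a monotonic clock:

# compat shim: lets the code run on kombu 2.5.x (no kombu.five module)
# as well as kombu >= 3.0.
# Assumption: 'monotonic' is the only kombu.five helper used (hypothetical).
try:
    from kombu.five import monotonic  # kombu >= 3.0
except ImportError:
    try:
        from time import monotonic  # Python >= 3.3
    except ImportError:
        from time import time as monotonic  # coarse fallback on Python 2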

Thanks, everyone.



On Mon, Apr 7, 2014 at 8:33 PM, Dmitry Teselkin wrote:

> Hello,
>
> I'm working on Murano integration into FUEL-5.0, and have faced the
> following problem: our current implementation depends on 'kombu.five'
> module, but this module (actually a single file) is available only starting
> at kombu 3.0. So this means that murano-api component depends on kombu
> >=3.0. This meets the OpenStack global requirements list, where kombu
> >=2.4.8 is declared. Unfortunately, this also means that "system-wide"
> version upgrade is required.
>
> So the question is - what is the right way to solve the problem? I see
> the following options:
> 1. change kombu version requirement to >=3.0 for entire FUEL installation
> - it doesn't break global requirements constraint but some other FUEL
> components could be affected.
> 2. replace calls to functions from 'kombu.five' and use the existing version
> - I'm not sure if it's possible, I'm awaiting an answer from our developers.
>
> Which is the most suitable variant, or are there any other solutions for
> the problem?
>
>
> --
> Thanks,
> Dmitry Teselkin
> Deployment Engineer
> Mirantis
> http://www.mirantis.com
>



-- 
Thanks,
Dmitry Teselkin
Deployment Engineer
Mirantis
http://www.mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso][neutron] proxying oslo.messaging from management network into tenant network/VMs

2014-04-09 Thread Dmitry Mescheryakov
Hello Isaku,

Thanks for sharing this! Right now in the Sahara project we are thinking of
using Marconi as a means to communicate with VMs. It seems like you are familiar
with the discussions that have happened so far. If not, please see the links at
the bottom of the UnifiedGuestAgent [1] wiki page. In short, we see Marconi's
support for multi-tenancy as a huge advantage over other MQ
solutions. Our agent is network-based, so tenant isolation is a real
issue here. For clarity, here is the overview scheme of a network-based
agent:

server <-> MQ (Marconi) <-> agent

All communication goes over the network. I've made a PoC of the Marconi
driver for oslo.messaging; you can find it at [2].
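
For illustration, a rough sketch of how a server-side caller could use such a
driver through the standard oslo.messaging API; the 'marconi://' transport URL
and the per-VM topic are assumptions for the example, not the PoC's documented
interface:

# hedged sketch: RPC call to a guest agent over a Marconi-backed transport
from oslo.config import cfg
from oslo import messaging

# 'marconi://' is an assumed URL scheme for the PoC driver, not a published one
transport = messaging.get_transport(cfg.CONF, url='marconi://controller:8888/')
target = messaging.Target(topic='guest-agent-<vm-uuid>')  # hypothetical per-VM topic
client = messaging.RPCClient(transport, target)

# a blocking call that returns the agent's answer -- the feedback Sahara needs
result = client.call({}, 'get_status')
print(result)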


We also considered 'hypervisor-dependent' agents (as I called them in
the initial thread) like the one you propose. They also provide tenant
isolation. But the drawback is a _much_ bigger development cost and a more
fragile and complex deployment.

In the case of a network-based agent, all the code is:
 * a Marconi driver for the RPC library (oslo.messaging)
 * a thin client for the server to make calls
 * a guest agent with a thin server side
If you write your agent in Python, it will work on any guest OS with any
host hypervisor.


For a hypervisor-dependent agent it becomes much more complex. You need
one additional component - a proxy-agent running on the compute host -
which makes deployment harder. You also need to support various
transports for various hypervisors: virtio-serial for KVM, XenStore
for Xen, something for Hyper-V, etc. Moreover, the guest OS must have
drivers for these transports, and you will probably need to write
different implementations for different OSes.
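
To make that concrete, here is a rough sketch (not code from any of the
referenced reviews) of just the guest side of a virtio-serial transport; the
port name is a made-up example and the message framing is an assumption:

# hedged sketch: guest agent reading newline-delimited JSON messages relayed
# by a proxy-agent on the compute host over virtio-serial.
import json

PORT = '/dev/virtio-ports/org.openstack.agent.0'  # hypothetical port name

with open(PORT, 'rb') as port:
    for line in port:
        try:
            msg = json.loads(line)
        except ValueError:
            continue  # ignore partial or corrupt frames
        # dispatch the relayed RPC; no reply path in the write-only setup
        print('received: %s(%r)' % (msg.get('method'), msg.get('args')))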

Also you mention that in some cases a second proxy-agent is needed and
again in some cases only cast operations could be used. Using cast
only is not an option for Sahara, as we do need feedback from the
agent and sometimes getting the return value is the main reason to
make an RPC call.

I didn't see a discussion in Neutron on which approach to use (if there
was one, I missed it). I see the simplicity of the network-based agent as a huge
advantage. Could you please clarify why you've picked a design that depends
on the hypervisor?

Thanks,

Dmitry


[1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
[2] https://github.com/dmitrymex/oslo.messaging

2014-04-09 12:33 GMT+04:00 Isaku Yamahata :
> Hello developers.
>
>
> As discussed many times so far [1], there are many projects that need
> to propagate RPC messages into VMs running on OpenStack. Neutron in my case.
>
> My idea is to relay RPC messages from management network into tenant
> network over file-like object. By file-like object, I mean virtio-serial,
> unix domain socket, unix pipe and so on.
> I've written some code based on oslo.messaging [2][3] and documentation
> on use cases [4][5].
> Only file-like transport and proxying messages would be in oslo.messaging
> and agent side code wouldn't be a part of oslo.messaging.
>
>
> use cases:([5] for more figures)
> file-like object: virtio-serial, unix domain socket, unix pipe
>
>   server <-> AMQP <-> agent in host <-virtio serial-> guest agent in VM
>   per VM
>
>   server <-> AMQP <-> agent in host <-unix socket/pipe->
>  agent in tenant network <-> guest agent in VM
>
>
> So far there are security concerns to forward oslo.messaging from management
> network into tenant network. One approach is to allow only cast-RPC from
> server to guest agent in VM so that guest agent in VM only receives messages
> and can't send anything to servers. With unix pipe, it's write-only
> for server, read-only for guest agent.
>
>
> Thoughts? comments?
>
>
> Details of Neutron NFV use case[6]:
> Neutron services so far typically run agents on the host; the agent on the
> host receives RPCs from the neutron server, then executes the necessary
> operations. Sometimes the agent on the host issues RPCs to the neutron server
> periodically (e.g. status reports, etc.).
> It's desirable to make such services virtualized as Network Function
> Virtualization (NFV), i.e. make those features run in VMs. So it's quite a
> natural approach to propagate those RPC messages into agents inside VMs.
>
>
> [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
> [2] https://review.openstack.org/#/c/77862/
> [3] https://review.openstack.org/#/c/77863/
> [4] https://blueprints.launchpad.net/oslo.messaging/+spec/message-proxy-server
> [5] https://wiki.openstack.org/wiki/Oslo/blueprints/message-proxy-server
> [6] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
> --
> Isaku Yamahata 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS]Clarification in regards to https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1

2014-04-09 Thread Samuel Bercovici
Hi,

I have looked at 
https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1
 and have a few questions:

1.   Monitoring Tab:

a.   Are there users that use load balancing who do not monitor members? 
Can you share the use cases where this makes sense?

b.  Does it make sense to define the different types of monitors (e.g. TCP, 
HTTP, HTTPS)?

c.   Does any existing cloud service besides the current implementation of 
the LBaaS API support using multiple monitors on the same pool? Is this a 
required feature?

2.   Logging Tab:

a.   What is logging used for?

b.  How does the tenant consume the logs?

3.   SSL Tab:

a.   Please explain if SSL means passing SSL traffic through the load 
balancer or using the load balancer to terminate certificates.

b.  Does it make sense to separate those (SSL termination and non HTTPS 
terminated traffic) as different rows?

c.   Can anyone explain the use cases for SSL_MIXED?

4.   HA Tab:

a.   Is this a tenant-facing option, or is it the way the operator chose to 
implement the service?

5.   Content Caching Tab:

a.   Is this a load balancer feature or a CDN-like feature?

6.   L7

a.   Does any cloud provider support L7 switching and L7 content 
modifications?

b.  If so, can you please add a tab noting how much such features are used?

c.   If not, can anyone attest to whether this feature was requested by 
customers?

Thanks!
-Sam.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-09 Thread Matthew Booth
On 09/04/14 07:07, Chen CH Ji wrote:
> we used to have one compute service corresponding to multiple
> hypervisors (like host and nodes concept )
> our major issue on our platform is we can't run nova-compute service on
> the hypervisor and we need to find another place to run the nova-compute
> in order to talk to
> hypervisor management API through REST API

It may not be directly relevant to this discussion, but I'm interested
to know what constraint prevents you running nova-compute on the hypervisor.

Matt

-- 
Matthew Booth, RHCA, RHCSS
Red Hat Engineering, Virtualisation Team

GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Session suggestions for the Juno Design Summit now open

2014-04-09 Thread Thierry Carrez
Tina TSOU wrote:
> Below is our proposal. Look forward to your feedback.
> 
> --
> Description 
> This session focuses on how to improve networking performance at large scale 
> deployment.
> For example
> - having many VMs, thousands to tens of thousands, in a single data center 
> - very heavy traffic between VMs of different physical servers 
> - large quantities of OpenFlow flow tables causing slow forwarding on OVS and 
> high CPU usage on hypervisor 
> - VMs belong to various tenants thus requiring traffic isolation and security 
> and lots of configuration on OVS mainly overlay encapsulation and OpenFlow 
> tables
> - neutron server taking too long to process requests
> 
> We are introducing a solution designed for the above scenario in this area.
> The main idea is to deploy on the hypervisor a new monitor agent which will 
> periodically check the CPU usage and network load of the NIC and inform SDN 
> controller through plugin/API extension. If the OVS load goes very high, SDN 
> controller can reactively off-load the traffic from OVS to TOR with minimum 
> interruption. It means that initially, the overlay encapsulation might be 
> done on OVS, but some feature rich TORs also provide this functionality which 
> makes TOR capable of taking over whenever necessary. The same strategy will 
> be applied for OpenFlow flow table. By doing this, OVS will have nothing to 
> do other than sending the traffic to TOR. All the time-consuming jobs will be 
> taken over by TOR dynamically. This more advanced strategy does require TOR 
> to be feature-rich so it might increase TCO.
> 
> We believe this is worth doing for large scale deployment. 
> --

You should file it at summit.openstack.org so that it can be considered
for inclusion in the schedule.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Get Keystone user details

2014-04-09 Thread Naveen Kumar.S
source openrc demo demo
keystone user-list
You are not authorized to perform the requested action, admin_required. (HTTP 
403)
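
For what it's worth, a hedged sketch of the admin-side approach (the 403 above
is expected for a plain Member user), assuming keystone's SQL backend, which
flattens the user table's 'extra' JSON into the user resource so extra fields
such as email come back as ordinary attributes. The credentials, endpoint and
user id below are placeholders:

from keystoneclient.v2_0 import client

# admin-scoped client; username/password/tenant/auth_url are placeholders
ks = client.Client(username='admin', password='secret',
                   tenant_name='admin',
                   auth_url='http://controller:5000/v2.0')

user = ks.users.get('USER_ID')            # placeholder user id
print(getattr(user, 'email', None))       # 'email' lives in the extra column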




On Tuesday, April 8, 2014 6:06 PM, Naveen Kumar.S  wrote:
 
For a user with the "Member" role, how do I get the contents of the "extra" column 
from the user table in the keystone DB using the Python keystone 
API? Also, for a user who is already logged in from Horizon, how can this column 
be extracted on the Django side?



Thanks,
Naveen.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] heartbleed

2014-04-09 Thread Thierry Carrez
Aryeh Friedman wrote:
> What components (if any) are vulnerable to heartbleed?

OpenStack in itself is not vulnerable to heartbleed, however OpenStack
makes use of the host SSL library (libssl) and that one should be
properly patched.
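
As a quick sanity check (a rough sketch, not a full audit), you can see which
OpenSSL your Python-based services are linked against; note that distributions
often backport the fix without changing the version string, so also check the
package changelog or security notice:

import ssl

# Upstream, OpenSSL 1.0.1 through 1.0.1f are the heartbleed-affected releases;
# distro packages may carry the fix while still reporting one of these strings.
print(ssl.OPENSSL_VERSION)   # e.g. 'OpenSSL 1.0.1f 6 Jan 2014'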

If you have a production deployment of OpenStack, you should consider
the SSL private keys for your SSL endpoints potentially compromised and
revoke / renew them (primary key material).

Once you've done that, you should warn your users that passwords and
tokens used over that previously-flawed secure connection could have
been compromised and encourage them to change their own passwords and
expire existing tokens (secondary key material).

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-09 Thread Avishay Traeger
On Wed, Apr 9, 2014 at 8:35 AM, Deepak Shetty  wrote:

>
>
>
> On Tue, Apr 8, 2014 at 6:24 PM, Avishay Traeger 
> wrote:
>
>> On Tue, Apr 8, 2014 at 9:17 AM, Deepak Shetty wrote:
>>
>>> Hi List,
>>> I had a few Qs on the implementation of manage_existing and unmanage
>>> API extns
>>>
>>> 1) For the LVM case, it renames the lv.. isn't it better to use name_id (the one
>>> used during cinder migrate to keep the id the same for a different backend name/id) to
>>> map the cinder name/id to the backend name/id and thus avoid renaming the backend
>>> storage? Renaming isn't good, since it changes the original name of the
>>> storage object and hence the storage admin may lose track. The Storwize driver uses the
>>> UID and changes vdisk_name on the backend array, which isn't good either. Is
>>> renaming a must, and if yes, why?
>>>
>>
>> 'name_id' is an ID, like c8b3d8e2-2410-4362-b24b-548a13fa850b.
>> In migration, both the original and new volumes use the same template for
>> volume names, just with a different ID, so name_id works well for that.
>>  When importing a volume that wasn't created by Cinder, chances are it
>> won't conform to this template, and so name_id won't work (i.e., I can call
>> the volume 'my_very_important_db_volume', and name_id can't help with
>> that).  When importing, the admin should give the volume a proper name and
>> description, and won't lose track of it - it is now being managed by Cinder.
>>
>
> Avishay,
> thanks for your reply.. it did help. Just one more Q though...
>
>  >>(i.e., I can call the volume 'my_very_important_db_volume', and name_id
> can't help with that).
> This is the name of the volume, but isn't it common for most arrays to
> provide a name and an ID (which is again a UUID) for a volume on the backend.. so
> name_id can still point to the UID which has the name
> 'my_very_important_db_volume'
> In fact in storwize, you are using vdisk_id itself and changing the
> vdisk_name to match what the user gave.. and vdisk_id is a UUID and matches
> w/ name_id format
>

Not exactly, it's a number (like '5'), not a UUID like
c8b3d8e2-2410-4362-b24b-548a13fa850b


> Alternatively, does this mean we need to make name_id a generic field (not
> an ID) and then use something like uuidutils.is_uuid_like() to determine if
> it's a UUID or non-UUID and then the backend will accordingly map it?
>
> Lastly, I said "storage admin will lose track of it" because he would have
> named it "my_vol", and when he asks cinder to manage it using
> "my_cinder_vol" it's not expected that you would rename the volume's name on the
> backend :)
> I mean it's good if we could implement manage_existing w/o renaming, as then
> it would seem less disruptive :)
>

 I think there are a few trade-offs here - making it less disruptive in
this sense makes it more disruptive to:
1. Managing the storage over its lifetime.  If we assume that the admin
will stick with Cinder for managing their volumes, and if they need to find
the volume on the storage, it should be done uniformly (i.e., go to the
backend and find the volume named 'volume-%s' % name_id; see the sketch below).
2. The code, where a change of this kind could make things messy.
 Basically the rename approach has a little bit of complexity overhead when
you do manage_existing, but from then on it's just like any other volume.
 Otherwise, it's always a special case in different code paths, which could
be tricky.
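
To be concrete, the convention referred to in point 1 is just the standard
volume name template; this is an illustration of the naming, not actual
Cinder driver code:

import uuid

def cinder_volume_name(name_id, template='volume-%s'):
    # the uniform name a Cinder-managed volume carries on the backend
    return template % name_id

name_id = str(uuid.uuid4())
before = 'my_very_important_db_volume'   # admin's original backend name
after = cinder_volume_name(name_id)      # e.g. 'volume-c8b3d8e2-2410-...'
print(before, '->', after)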

If you still feel that rename is wrong and that there is a better approach,
I encourage you to try, and post code if it works.  I don't mind being
proved wrong. :)

Thanks,
Avishay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-09 Thread Day, Phil
> -Original Message-
> From: Chris Friesen [mailto:chris.frie...@windriver.com]
> Sent: 08 April 2014 15:19
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] Hosts within two Availability Zones :
> possible or not ?
> 
> On 04/08/2014 07:25 AM, Jay Pipes wrote:
> > On Tue, 2014-04-08 at 10:49 +, Day, Phil wrote:
> >> On a large cloud you're protected against this to some extent if the
> >> number of servers is >> number of instances in the quota.
> >>
> >> However it does feel that there are a couple of things missing to
> >> really provide some better protection:
> >>
> >> - A quota value on the maximum size of a server group
> >> - A policy setting so that the ability to use service-groups
> >> can be controlled on a per project basis
> >
> > Alternately, we could just have the affinity filters serve as
> > weighting filters instead of returning NoValidHosts.
> >
> > That way, a request containing an affinity hint would cause the
> > scheduler to prefer placing the new VM near (or not-near) other
> > instances in the server group, but if no hosts exist that meet that
> > criteria, the filter simply finds a host with the most (or fewest, in
> > case of anti-affinity) instances that meet the affinity criteria.
> 
> I'd be in favor of this.   I've actually been playing with an internal
> patch to do both of these things, though in my case I was just doing it via
> metadata on the group and a couple hacks in the scheduler and the compute
> node.
> 
> Basically I added a group_size metadata field and a "best_effort" flag to
> indicate whether we should error out or continue on if the policy can't be
> properly met.
> 
I like the idea of the user being able to say if the affinity should be treated 
as a filter or weight.
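
To illustrate the weight-based variant being discussed, a rough sketch,
assuming Nova's host weigher interface and assuming the scheduler exposes the
group's current hosts as weight_properties['group_hosts'] (that key is an
assumption, not a guarantee):

from nova.scheduler import weights


class GroupAffinityWeigher(weights.BaseHostWeigher):
    """Prefer hosts already running members of the server group, but never
    eliminate hosts, so there is no NoValidHosts failure."""

    def _weigh_object(self, host_state, weight_properties):
        group_hosts = weight_properties.get('group_hosts') or []
        # positive weight when the host already holds group members;
        # flip the sign for a best-effort anti-affinity preference
        return 1.0 if host_state.host in group_hosts else 0.0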

In terms of group_size I'd want to be able to impose a limit on that as an 
operator, not just have it in the control of the user (hence the quota idea).


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

