Re: [openstack-dev] [neutron] HA of dhcp agents?

2014-10-20 Thread Noel Burton-Krahn
Thanks for the pointer!

I like how the first google hit for this is:

Add details on dhcp_agents_per_network option for DHCP agent HA
https://bugs.launchpad.net/openstack-manuals/+bug/1370934

:) Seems reasonable to set dhcp_agents_per_network > 1.  What happens when
a DHCP agent dies?  Does the scheduler automatically bind another agent to
that network?
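For reference, the relevant setting is a one-liner in the server's neutron.conf — a hedged sketch, where the value 2 is illustrative:

```ini
# neutron.conf (neutron-server side) -- hedged sketch: ask the scheduler to
# bind each network to two DHCP agents, so a surviving agent keeps answering
# leases if the other dies. The value 2 is illustrative, not a recommendation.
[DEFAULT]
dhcp_agents_per_network = 2
```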

Cheers,
--
Noel



On Mon, Oct 20, 2014 at 9:03 PM, Jian Wen  wrote:

> See dhcp_agents_per_network in neutron.conf.
>
> https://bugs.launchpad.net/neutron/+bug/1174132
>
> 2014-10-21 6:47 GMT+08:00 Noel Burton-Krahn :
>
>> I've been working on failover for dhcp and L3 agents.  I see that in [1],
>> multiple dhcp agents can host the same network.  However, it looks like I
>> have to manually assign networks to multiple dhcp agents, which won't
>> work.  Shouldn't multiple dhcp agents automatically fail over?
>>
>> [1]
>> http://docs.openstack.org/trunk/config-reference/content/multi_agent_demo_configuration.html
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best,
>
> Jian
>


[openstack-dev] [horizon] integration_tests : httplib timed out when running from eclipse

2014-10-20 Thread Wu, Hong-Guang (ES-Best-Shore-Services-China-BJ)

Running the integration tests from the command line is OK:
(.venv)whg@whg-HP:/opt/stack/horizon$ nosetests 
openstack_dashboard.test.integration_tests.tests.test_user_settings
openstack_dashboard.test.integration_tests.tests.test_user_settings.TestUserSettings.test_user_settings_change
 ... ok

--
Ran 1 test in 86.292s
OK


But it always reports a timeout error immediately after I launch the test in
Eclipse.

nosetests 
openstack_dashboard.test.integration_tests.tests.test_login.py:TestLogin.test_login

openstack_dashboard.test.integration_tests.tests.test_login.TestLogin.test_login
 ... ERROR Destroying test database for alias 'default' (':memory:')...
==
ERROR: 
openstack_dashboard.test.integration_tests.tests.test_login.TestLogin.test_login
--
_StringException: Traceback (most recent call last):
  File 
"/opt/stack/horizon/openstack_dashboard/test/integration_tests/helpers.py", 
line 34, in setUp
self.driver = webdriver.Chrome()
  File 
"/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/selenium/webdriver/chrome/webdriver.py",
 line 67, in __init__
self.quit()
  File 
"/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/selenium/webdriver/chrome/webdriver.py",
 line 82, in quit
self.service.stop()
  File 
"/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/selenium/webdriver/chrome/service.py",
 line 97, in stop
url_request.urlopen("http://127.0.0.1:%d/shutdown" % self.port)
  File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 404, in open
response = self._open(req, data)
  File "/usr/lib/python2.7/urllib2.py", line 422, in _open
'_open', req)
  File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 1214, in http_open
return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.7/urllib2.py", line 1187, in do_open
r = h.getresponse(buffering=True)
  File "/usr/lib/python2.7/httplib.py", line 1045, in getresponse
response.begin()
  File "/usr/lib/python2.7/httplib.py", line 409, in begin
version, status, reason = self._read_status()
  File "/usr/lib/python2.7/httplib.py", line 365, in _read_status
line = self.fp.readline(_MAXLINE + 1)
  File "/usr/lib/python2.7/socket.py", line 476, in readline
data = self._sock.recv(self._rbufsize)
timeout: timed out


--
Ran 1 test in 3.332s

FAILED (errors=1)

 





Thanks

Hong-Guang




Re: [openstack-dev] [All] Maintenance mode in OpenStack during patching/upgrades

2014-10-20 Thread Tim Bell
> -Original Message-
> From: Christopher Aedo [mailto:d...@aedo.net]
> Sent: 21 October 2014 04:45
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [All] Maintenance mode in OpenStack during
> patching/upgrades
> 
...
> 
> Also, I would like to see "maintenance mode" for Nova be limited just to
> stopping any further VMs being sent there, and the node reporting that it's in
> maintenance mode.  I think proactive workload migration should be handled
> independently, as I can imagine scenarios where maintenance mode might be
> desired without coupling migration to it.
> 

A typical scenario we have is a non-fatal hardware repair. If a node is 
reporting ECC memory errors, you want to schedule a repair, which
will be disruptive for any VMs running on that host. The users get annoyed when 
you give them their new VM and then immediately tell them the
hardware is going to be repaired.

Putting a host into maintenance should, to my mind, simply mean no new work. I assume that stopping 
the service outright has a negative impact on other functions like Telemetry.
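In practice the no-new-work step maps onto disabling the compute service; a hedged sketch of the workflow discussed in this thread (the host name is illustrative):

```shell
# Hedged sketch of today's maintenance workflow, per this thread.
# "compute-01" is an illustrative host name.
nova service-disable compute-01 nova-compute   # 1. no new VMs scheduled here
nova host-servers-migrate compute-01           # 2. migrate existing VMs off
# ... perform the hardware repair ...
nova service-enable compute-01                 # 3. return the host to service
```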

Tim

> I would love to keep discussing this further - a small session in Paris would 
> be
> great.  But it seems like there's never enough time at the summits, so I don't
> have high hopes for making much progress on this specific topic there.  Just 
> the
> same, if anything gets pulled together, I'll be keeping an eye out for it.
> 
> -Christopher
> 
> On Fri, Oct 17, 2014 at 9:21 PM, Joe Cropper  wrote:
> > I’m glad to see this topic getting some focus once again.  :-)
> >
> > From several of the administrators I talk with, when they think of putting a
> host into maintenance mode, the common requests I hear are:
> >
> > 1. Don’t schedule more VMs to the host
> > 2. Provide an optional way to automatically migrate all (usually active)
> >    VMs off the host so that users’ workloads remain “unaffected” by the
> >    maintenance operation
> >
> > #1 can easily be achieved, as has been mentioned several times, by simply
> disabling the compute service.  However, #2 involves a little more work,
> although certainly possible using all the operations provided by nova today 
> (e.g.,
> live migration, etc.).  I believe these types of discussions have come up 
> several
> times over the past several OpenStack releases—certainly since Grizzly (i.e.,
> when I started watching this space).
> >
> > It seems that the general direction is to have the type of workflow needed 
> > for
> #2 outside of nova (which is certainly a valid stance).  To that end, it 
> would be
> fairly straightforward to build some code that logically sits on top of nova, 
> that
> when entering maintenance:
> >
> > 1. Prevents VMs from being scheduled to the host;
> > 2. Maintains state about the maintenance operation (e.g., not in
> >    maintenance, migrations in progress, in maintenance, or error);
> > 3. Provides mechanisms to, upon entering maintenance, dictate which VMs
> >    (active, all, none) to migrate, with some throttling capabilities to
> >    prevent hundreds of parallel migrations on densely packed hosts (all
> >    done via a REST API).
> >
> > If anyone has additional questions, comments, or would like to discuss some
> options, please let me know.  If interested, upon request, I could even share 
> a
> video of how such cases might work.  :-)  My colleagues and I have given these
> use cases a lot of thought and consideration and I’d love to talk more about
> them (perhaps a small session in Paris would be possible).
> >
> > - Joe
> >
> > On Oct 17, 2014, at 4:18 AM, John Garbutt  wrote:
> >
> >> On 17 October 2014 02:28, Matt Riedemann 
> wrote:
> >>>
> >>>
> >>> On 10/16/2014 7:26 PM, Christopher Aedo wrote:
> 
>  On Tue, Sep 9, 2014 at 2:19 PM, Mike Scherbakov
>   wrote:
> >>
> >> On Tue, Sep 9, 2014 at 6:02 PM, Clint Byrum 
> wrote:
> >
> > The idea is not simply deny or hang requests from clients, but
> > provide them "we are in maintenance mode, retry in X seconds"
> >
> >> You probably would want 'nova host-servers-migrate '
> >
> > yeah for migrations - but as far as I understand, it doesn't help
> > with disabling this host in the scheduler - there can still be a chance
> > that some workloads will be scheduled to the host.
> 
> 
>  Regarding putting a compute host in maintenance mode using "nova
>  host-update --maintenance enable", it looks like the blueprint and
>  associated commits were abandoned a year and a half ago:
>  https://blueprints.launchpad.net/nova/+spec/host-maintenance
> 
>  It seems that "nova service-disable  nova-compute"
>  effectively prevents the scheduler from trying to send new work
>  there.  Is this the best approach to use right now if you want to
>  pull a compute host out of an environment before migrating VMs off?
> 
>  I agree with Tim and Mike that having something respond "down for
>  maintenance" rather than ignore

Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-20 Thread Chris Friesen

On 10/19/2014 09:33 AM, Avishay Traeger wrote:

Hi Preston,
Replies to some of your cinder-related questions:
1. Creating a snapshot isn't usually an I/O intensive operation.  Are
you seeing I/O spike or CPU?  If you're seeing CPU load, I've seen the
CPU usage of cinder-api spike sometimes - not sure why.
2. The 'dd' processes that you see are Cinder wiping the volumes during
deletion.  You can either disable this in cinder.conf, or you can use a
relatively new option to manage the bandwidth used for this.

IMHO, deployments should be optimized to not do very long/intensive
management operations - for example, use backends with efficient
snapshots, use CoW operations wherever possible rather than copying full
volumes/images, disabling wipe on delete, etc.
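The wipe-on-delete behaviour described above is controlled from cinder.conf; a hedged sketch (option names and availability vary by release and backend):

```ini
# cinder.conf -- hedged sketch of the wipe-on-delete knobs mentioned above.
[DEFAULT]
volume_clear = none              # skip the dd zero-fill on volume delete
# volume_clear = zero            # the default: overwrite the volume with zeros
# volume_clear_size = 50         # or only wipe the first 50 MB of the volume
# volume_copy_bps_limit = 104857600  # throttle copy/wipe bandwidth (bytes/s)
```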


In a public-cloud environment I don't think it's reasonable to disable 
wipe-on-delete.


Arguably it would be better to use encryption instead of wipe-on-delete. 
 When done with the backing store, just throw away the key and it'll be 
secure enough for most purposes.
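A toy illustration of that crypto-shredding idea — hand-rolled and for illustration only; a real deployment would use dm-crypt/LUKS or Cinder volume encryption, not this:

```python
# Toy crypto-shredding sketch: encrypt everything written to the backing
# store; "wiping" is then just discarding the key. The keystream below is a
# hand-rolled SHA-256 counter construction -- illustration only, NOT for
# real use.
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random byte stream from the key in counter mode.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

key = os.urandom(32)
plaintext = b"tenant secret data on the backing store"
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# Normal use: key present, data recoverable.
assert xor(ciphertext, keystream(key, len(ciphertext))) == plaintext

# "Wipe": discard the key; the ciphertext left on disk is useless.
del key
```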


Chris




Re: [openstack-dev] [api] Request Validation - Stoplight

2014-10-20 Thread Ken'ichi Ohmichi
Hi Chris,

2014-10-21 13:41 GMT+09:00 Christopher Yeoh :
>
> On Tue, Oct 21, 2014 at 12:15 PM, Kenichi Oomichi  
> wrote:
>>
>> Hi Amit,
>>
>> Thanks for picking this topic up,
>>
>> Honestly, I don't have a strong opinion about validation libraries.
>> Each project is implemented on a different web framework, and
>> the choice of validation library is constrained by that
>> framework. For example, Nova implements its own wsgi framework, and
>> it is difficult to use pecan/wsme there due to its API routing/parameter
>> names. So Nova uses jsonschema for the new API (Nova v2.1) because
>> of its flexibility and portability.
>> From a quick look, Stoplight seems flexible and could cover many
>> cases. That would be nice, but I'm not sure Stoplight is
>> the best choice, because jsonschema is a common, portable library.
>>
>> Related to this topic, I'd like to suggest that we adopt common validation
>> patterns across OpenStack projects. Right now each project contains its own
>> validation patterns for other projects' resources. For example, Nova
>> contains validation patterns for project-id and image-id in its code[1].
>> Ideally, these patterns would be ported/shared from Keystone and Glance,
>> and it would be best to use the same validation patterns across all
>> OpenStack projects for consistent interfaces. We could implement these
>> shared patterns even if projects use different validation libraries.
>>
>
> This sounds good. Would you mind adding it to the wiki here?
>
> https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines
>
> So we don't lose track of it - we don't have the git/gerrit repository up 
> quite yet.


I see, I wrote it as "POST/PUT body validation" on the wiki.

Thanks
Ken Ohmichi



Re: [openstack-dev] [api] Request Validation - Stoplight

2014-10-20 Thread Christopher Yeoh
On Tue, Oct 21, 2014 at 12:15 PM, Kenichi Oomichi  wrote:

> Hi Amit,
>
> Thanks for picking this topic up,
>
> Honestly, I don't have a strong opinion about validation libraries.
> Each project is implemented on a different web framework, and
> the choice of validation library is constrained by that
> framework. For example, Nova implements its own wsgi framework, and
> it is difficult to use pecan/wsme there due to its API routing/parameter
> names. So Nova uses jsonschema for the new API (Nova v2.1) because
> of its flexibility and portability.
> From a quick look, Stoplight seems flexible and could cover many
> cases. That would be nice, but I'm not sure Stoplight is
> the best choice, because jsonschema is a common, portable library.
>
> Related to this topic, I'd like to suggest that we adopt common validation
> patterns across OpenStack projects. Right now each project contains its own
> validation patterns for other projects' resources. For example, Nova
> contains validation patterns for project-id and image-id in its code[1].
> Ideally, these patterns would be ported/shared from Keystone and Glance,
> and it would be best to use the same validation patterns across all
> OpenStack projects for consistent interfaces. We could implement these
> shared patterns even if projects use different validation libraries.
>
>
This sounds good. Would you mind adding it to the wiki here?

https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines

So we don't lose track of it - we don't have the git/gerrit repository up
quite yet.

Regards,

Chris


Re: [openstack-dev] [neutron] HA of dhcp agents?

2014-10-20 Thread Jian Wen
See dhcp_agents_per_network in neutron.conf.

https://bugs.launchpad.net/neutron/+bug/1174132

2014-10-21 6:47 GMT+08:00 Noel Burton-Krahn :

> I've been working on failover for dhcp and L3 agents.  I see that in [1],
> multiple dhcp agents can host the same network.  However, it looks like I
> have to manually assign networks to multiple dhcp agents, which won't
> work.  Shouldn't multiple dhcp agents automatically fail over?
>
> [1]
> http://docs.openstack.org/trunk/config-reference/content/multi_agent_demo_configuration.html
>
>
>


-- 
Best,

Jian


Re: [openstack-dev] [TripleO] Removing nova-bm support within os-cloud-config

2014-10-20 Thread Joe Gordon
On Mon, Oct 20, 2014 at 8:00 PM, Steve Kowalik 
wrote:

> With the move to removing nova-baremetal, I'm concerned that portions
> of os-cloud-config will break once python-novaclient has released with
> the bits of the nova-baremetal gone -- import errors, and such like.
>

Nova won't be removing nova-baremetal support from the client until Juno is
end-of-lifed, as clients aren't part of the integrated release and need to
work with all supported versions.


>
> I'm also concerned about backward compatibility -- in that we can't
> really remove the functionality, because it will break that
> compatibility. A further concern is that because nova-baremetal is no
> longer checked in CI, code paths may bitrot.
>
> Should we pony up and remove support for talking to nova-baremetal in
> os-cloud-config? Or any other suggestions?
>
> --
> Steve
> If it (dieting) was like a real time strategy game, I'd have loaded a
> save game from ten years ago.
>  - Greg, Columbia Internet
>


Re: [openstack-dev] [Neutron] Why doesn't ml2-ovs work when it's "host" != the dhcp agent's host?

2014-10-20 Thread Kevin Benton
The current suggested way for DHCP agent fault tolerance is multiple agents
per network. Is there a reason you don't want to use that option?
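The host-matching rule described in the quoted thread below can be sketched roughly like this — illustrative names only, not actual Neutron code:

```python
# Rough sketch (not real Neutron code) of the rule Bob describes below:
# ML2's openvswitch mechanism driver binds a port only when a live L2 agent
# is registered in agents_db with a host matching the port's binding:host_id.
agents_db = [
    {"host": "compute-1", "agent_type": "Open vSwitch agent", "alive": True},
    {"host": "compute-2", "agent_type": "Open vSwitch agent", "alive": False},
]

def can_bind(port, agents):
    wanted = port["binding:host_id"]
    return any(a["alive"]
               and a["host"] == wanted
               and a["agent_type"] == "Open vSwitch agent"
               for a in agents)

port_ok = {"binding:host_id": "compute-1"}
port_bad = {"binding:host_id": "floating"}  # DHCP agent's made-up host name

print(can_bind(port_ok, agents_db))   # True
print(can_bind(port_bad, agents_db))  # False -> binding:vif_type 'binding_failed'
```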
On Oct 20, 2014 5:13 PM, "Noel Burton-Krahn"  wrote:

> Thanks, Robert.
>
> So, ML2 needs the host attribute to match to bind the port.  My other
> requirement is that the dhcp agent must be able to migrate to a new host on
> failover.  The issue there is that if the dhcp service starts on a new host
> with a new host name, then it will not take over the networks that were
> served by the old host name.  I'm looking for a way to start the dhcp agent
> on a new host using the old host's config.
>
> --
> Noel
>
>
> On Mon, Oct 20, 2014 at 11:10 AM, Robert Kukura 
> wrote:
>
>>  Hi Noel,
>>
>> The ML2 plugin uses the binding:host_id attribute of port to control port
>> binding. For compute ports, nova sets binding:host_id when
>> creating/updating the neutron port, and ML2's openvswitch mechanism driver
>> will look in agents_db to make sure the openvswitch L2 agent is running on
>> that host, and that it has a bridge mapping for any needed physical network
>> or has the appropriate tunnel type enabled. The binding:host_id attribute
>> also gets set on DHCP, L3, and other agents' ports, and must match the host
>> of the openvswitch-agent on that node or ML2 will not be able to bind the
>> port. I suspect your configuration may be resulting in these not matching,
>> and the DHCP port's binding:vif_type attribute being 'binding_failed'.
>>
>> I'd suggest running "neutron port-show" as admin on the DHCP port to see
>> what the values of binding_vif_type and binding:host_id are, and running
>> "neutron agent-list" as admin to make sure there is an L2 agent on that
>> node and maybe "neutron agent-show" as admin to get that agents config
>> details.
>>
>> -Bob
>>
>>
>>
>> On 10/20/14 1:28 PM, Noel Burton-Krahn wrote:
>>
>> I'm running OpenStack Icehouse with Neutron ML2/OVS.  I've configured the
>> ml2-ovs-plugin on all nodes with host = the IP of the host itself.
>> However, my dhcp-agent may float from host to host for failover, so I
>> configured it with host="floating".  That doesn't work.  In this case, the
>> ml2-ovs-plugin creates a namespace and a tap interface for the dhcp agent,
>> but OVS doesn't route any traffic to the dhcp agent.  It *does* work if the
>> dhcp agent's host is the same as the ovs plugin's host, but if my dhcp
>> agent migrates to another host, it loses its configuration since it now has
>> a different host name.
>>
>>  So my question is: what does host mean for the ML2 dhcp agent, and how
>> can I get it to work if the dhcp agent's host != host for the ovs plugin?
>>
>>  Case 1: fails: running with dhcp agent's host = "floating", ovs
>> plugin's host = IP-of-server
>> dhcp agent is running in netns created by ovs-plugin
>> dhcp agent never receives network traffic
>>
>>  Case 2: ok: running with dhcp agent's host = ovs plugin's host =
>> IP-of-server
>>  dhcp agent is running in netns created by ovs-plugin (different tap
>> name than case 1)
>>  dhcp agent works
>>
>>  --
>> Noel
>>
>>
>>
>>
>>
>>


[openstack-dev] [nova][log] output of the x-openstack-request-id

2014-10-20 Thread wpf
When checking nova/api.log, I found the following entry:

2014-10-20 11:26:11.387 3549 INFO nova.osapi_compute.wsgi.server
[req-d7cc3757-f1e3-4af9-8700-a6f7fa096a6b None] 10.104.0.138 "GET
/v2/896bfb02c3f945d8a397c79f0741557a/os-floating-ips HTTP/1.1" status: 200
len: 192 time: 0.1119120

I understand that it's a combination of the x-openstack-request-id, the
request URL, the response status, etc.

Since I want to add a similar log line to other projects, I'd like
to mimic the Nova source code.

But after searching the code for 'x-openstack-request-id', and even
for 'LOG.info', I failed to find it.

Can anyone point me to the code that generates this output?
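For reference, the general pattern is roughly this (illustrative, not Nova's exact code): an oslo-style WSGI middleware generates the id (approximately 'req-' + uuid4) and attaches it to the request, and the log prefix is produced by the logging context formatter (logging_context_format_string) rather than an explicit LOG call — which would explain why grepping for the header string finds nothing. A stand-alone sketch of that middleware pattern:

```python
# Stand-alone sketch of a request-id WSGI middleware -- illustrative names,
# not Nova's exact code. It generates a per-request id, stores it on the
# WSGI environ, and echoes it back in the x-openstack-request-id header.
import uuid

def generate_request_id() -> str:
    return "req-" + str(uuid.uuid4())

class RequestIdMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        req_id = generate_request_id()
        environ["openstack.request_id"] = req_id  # available to log formatters

        def repl_start(status, headers, exc_info=None):
            # Add the id to the response so clients can correlate log lines.
            headers.append(("x-openstack-request-id", req_id))
            return start_response(status, headers, exc_info)

        return self.app(environ, repl_start)
```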

Thanks



-- 

Cheers & Best regards,
Peng Fei Wang (王鹏飞)


Re: [openstack-dev] [api] Request Validation - Stoplight

2014-10-20 Thread Michael McCune


- Original Message -
> However I don't think we should be mandating specific libraries, but we
> can make recommendations (good or bad) based on actual experience. This
> will be especially useful to new projects starting up to benefit from
> the pain other projects have experienced.


+1

i think it's also important to ensure that these recommendations live in a
place that is easy to find. it would be really nice to have example design
recipes that are commonly used across projects, but i'm not sure if that
gets too far into dictating or mandating certain usage patterns.

regards,
mike



[openstack-dev] [TripleO] Removing nova-bm support within os-cloud-config

2014-10-20 Thread Steve Kowalik
With the move to removing nova-baremetal, I'm concerned that portions
of os-cloud-config will break once python-novaclient has released with
the bits of the nova-baremetal gone -- import errors, and such like.

I'm also concerned about backward compatibility -- in that we can't
really remove the functionality, because it will break that
compatibility. A further concern is that because nova-baremetal is no
longer checked in CI, code paths may bitrot.

Should we pony up and remove support for talking to nova-baremetal in
os-cloud-config? Or any other suggestions?

-- 
Steve
If it (dieting) was like a real time strategy game, I'd have loaded a
save game from ten years ago.
 - Greg, Columbia Internet



Re: [openstack-dev] [All] Maintenance mode in OpenStack during patching/upgrades

2014-10-20 Thread Christopher Aedo
I'm glad to see there's more than one interested person here too :)

Regarding the Xen-specific host maintenance mode, if it gets dropped I
would not complain since it's useful only to those running Xen at the
moment.  The issues around when it works and doesn't work are my
bigger concern - as similar limitations exist in the migrate code
today.  They're not xen-specific, but do seem to consider few
deployment scenarios (and don't seem to work if you're using
ceph-backed storage for instance).

As Joe pointed out, there's definitely a need for maintenance mode.
Having a reliable method to pull a compute node out of a cluster would
be incredibly valuable.  This will certainly be a required component
of any full-environment upgrade path.

The scenario Joe outlined is the only working approach I'm aware of
right now, but I'm not a fan of disabling the compute service.  For
one thing, hopefully it will raise an alarm with your monitoring
system.  It also has the potential of interfering with other
operations that are ongoing (and with nova compute disabled, will you
still/always be able to reliably migrate a VM off the host?)

Also, I would like to see "maintenance mode" for Nova be limited just
to stopping any further VMs being sent there, and the node reporting
that it's in maintenance mode.  I think proactive workload migration
should be handled independently, as I can imagine scenarios where
maintenance mode might be desired without coupling migration to it.

I would love to keep discussing this further - a small session in
Paris would be great.  But it seems like there's never enough time at
the summits, so I don't have high hopes for making much progress on
this specific topic there.  Just the same, if anything gets pulled
together, I'll be keeping an eye out for it.

-Christopher

On Fri, Oct 17, 2014 at 9:21 PM, Joe Cropper  wrote:
> I’m glad to see this topic getting some focus once again.  :-)
>
> From several of the administrators I talk with, when they think of putting a 
> host into maintenance mode, the common requests I hear are:
>
> 1. Don’t schedule more VMs to the host
> 2. Provide an optional way to automatically migrate all (usually active) VMs 
> off the host so that users’ workloads remain “unaffected” by the maintenance 
> operation
>
> #1 can easily be achieved, as has been mentioned several times, by simply 
> disabling the compute service.  However, #2 involves a little more work, 
> although certainly possible using all the operations provided by nova today 
> (e.g., live migration, etc.).  I believe these types of discussions have come 
> up several times over the past several OpenStack releases—certainly since 
> Grizzly (i.e., when I started watching this space).
>
> It seems that the general direction is to have the type of workflow needed 
> for #2 outside of nova (which is certainly a valid stance).  To that end, it 
> would be fairly straightforward to build some code that logically sits on top 
> of nova, that when entering maintenance:
>
> 1. Prevents VMs from being scheduled to the host;
> 2. Maintains state about the maintenance operation (e.g., not in maintenance, 
> migrations in progress, in maintenance, or error);
> 3. Provides mechanisms to, upon entering maintenance, dictate which VMs 
> (active, all, none) to migrate and provides some throttling capabilities to 
> prevent hundreds of parallel migrations on densely packed hosts (all done via 
> a REST API).
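A toy sketch of the wrapper described in points 1–3 above — illustrative names only, not real Nova/REST code:

```python
# Toy sketch of a nova-external maintenance wrapper: track per-host state
# and throttle concurrent migrations. Names are illustrative, not real
# Nova/REST code; the migration calls themselves are stubbed out.
from enum import Enum

class MaintState(Enum):
    NOT_IN_MAINTENANCE = "not_in_maintenance"
    MIGRATIONS_IN_PROGRESS = "migrations_in_progress"
    IN_MAINTENANCE = "in_maintenance"
    ERROR = "error"

class MaintenanceController:
    def __init__(self, max_parallel_migrations=2):
        self.state = {}
        self.max_parallel = max_parallel_migrations

    def enter_maintenance(self, host, vms, migrate="active"):
        # 1. stop scheduling (a real wrapper would call nova service-disable)
        self.state[host] = MaintState.MIGRATIONS_IN_PROGRESS
        targets = [v for v in vms
                   if migrate == "all"
                   or (migrate == "active" and v["status"] == "ACTIVE")]
        # 3. throttle: migrate in small batches, never all at once
        batches = [targets[i:i + self.max_parallel]
                   for i in range(0, len(targets), self.max_parallel)]
        migrated = []
        for batch in batches:
            # would kick off len(batch) live migrations in parallel and wait
            migrated.extend(v["id"] for v in batch)
        self.state[host] = MaintState.IN_MAINTENANCE  # 2. state tracking
        return migrated

ctl = MaintenanceController()
vms = [{"id": 1, "status": "ACTIVE"}, {"id": 2, "status": "SHUTOFF"},
       {"id": 3, "status": "ACTIVE"}]
moved = ctl.enter_maintenance("compute-01", vms)
assert moved == [1, 3]  # only ACTIVE VMs migrated under the default policy
```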
>
> If anyone has additional questions, comments, or would like to discuss some 
> options, please let me know.  If interested, upon request, I could even share 
> a video of how such cases might work.  :-)  My colleagues and I have given 
> these use cases a lot of thought and consideration and I’d love to talk more 
> about them (perhaps a small session in Paris would be possible).
>
> - Joe
>
> On Oct 17, 2014, at 4:18 AM, John Garbutt  wrote:
>
>> On 17 October 2014 02:28, Matt Riedemann  wrote:
>>>
>>>
>>> On 10/16/2014 7:26 PM, Christopher Aedo wrote:

 On Tue, Sep 9, 2014 at 2:19 PM, Mike Scherbakov
  wrote:
>>
>> On Tue, Sep 9, 2014 at 6:02 PM, Clint Byrum  wrote:
>
> The idea is not simply deny or hang requests from clients, but provide
> them
> "we are in maintenance mode, retry in X seconds"
>
>> You probably would want 'nova host-servers-migrate '
>
> yeah for migrations - but as far as I understand, it doesn't help with
> disabling this host in the scheduler - there can still be a chance that some
> workloads will be scheduled to the host.


 Regarding putting a compute host in maintenance mode using "nova
 host-update --maintenance enable", it looks like the blueprint and
 associated commits were abandoned a year and a half ago:
 https://blueprints.launchpad.net/nova/+spec/host-maintenance

 It seems that "nova service-disable  nova-compute" effectively
 prevents the scheduler from trying to send new work there.

Re: [openstack-dev] [api] Request Validation - Stoplight

2014-10-20 Thread Kenichi Oomichi
Hi Amit,

Thanks for picking this topic up,

Honestly, I don't have a strong opinion about validation libraries.
Each project is implemented on a different web framework, and
the choice of validation library is constrained by that
framework. For example, Nova implements its own wsgi framework, and
it is difficult to use pecan/wsme there due to its API routing/parameter
names. So Nova uses jsonschema for the new API (Nova v2.1) because
of its flexibility and portability.
From a quick look, Stoplight seems flexible and could cover many
cases. That would be nice, but I'm not sure Stoplight is
the best choice, because jsonschema is a common, portable library.

Related to this topic, I'd like to suggest that we adopt common validation
patterns across OpenStack projects. Right now each project contains its own
validation patterns for other projects' resources. For example, Nova
contains validation patterns for project-id and image-id in its code[1].
Ideally, these patterns would be ported/shared from Keystone and Glance,
and it would be best to use the same validation patterns across all
OpenStack projects for consistent interfaces. We could implement these
shared patterns even if projects use different validation libraries.
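A stdlib-only sketch of the shared-patterns idea — the regexes below are illustrative, not the exact ones in Nova's parameter_types.py:

```python
# Sketch of the idea: keep validation patterns themselves (plain regexes or
# jsonschema fragments) in one shared module, so Nova, Keystone, and Glance
# validate a project-id or image-id identically, whatever validation library
# each project uses. The regexes here are illustrative only.
import re

SHARED_PATTERNS = {
    "project_id": r"^[a-zA-Z0-9-]+$",                              # illustrative
    "image_id": r"^[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}$",    # UUID form
}

def is_valid(kind: str, value: str) -> bool:
    return re.match(SHARED_PATTERNS[kind], value) is not None

print(is_valid("image_id", "896bfb02-c3f9-45d8-a397-c79f0741557a"))  # True
print(is_valid("image_id", "not-a-uuid"))                            # False
```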

Thanks
Ken Ohmichi

---
[1]: 
https://github.com/openstack/nova/blob/master/nova/api/validation/parameter_types.py#L68


> -Original Message-
> From: Amit Gandhi [mailto:amit.gan...@rackspace.com]
> Sent: Saturday, October 18, 2014 2:32 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: r...@ryanpetrello.com
> Subject: [openstack-dev] [api] Request Validation - Stoplight
> 
> Hi API Working Group
> 
> Last night at the Openstack Meetup in Atlanta, a group of us discussed how 
> request validation is being performed across
> various projects and how some teams are using pecan wsme, warlock, 
> jsonschema, etc.
> 
> Each of these libraries has its own pros and cons.  My understanding is 
> that the API working group is in the early
> stages of looking into these various libraries and will likely provide 
> guidance in the near future on this.
> 
> I would like to suggest another library to evaluate when deciding this.  Some 
> of our teams have started to use a library
> named “Stoplight”[1][2] in our projects.  For example, in the Poppy CDN 
> project, we found it worked around some of the
> issues we had with warlock such as validating nested json correctly [3].
> 
> Stoplight is an input validation framework for python.  It can be used to 
> decorate any function (including routes in pecan
> or falcon) to validate its parameters.
> 
> Some good examples can be found here [4] on how to use Stoplight.
> 
> Let us know your thoughts/interest and we would be happy to discuss further 
> on if and how this would be valuable as a
> library for API request validation in Openstack.
> 
> 
> Thanks
> 
> 
> Amit Gandhi
> Senior Manager - Rackspace
> 
> 
> 
> [1] https://pypi.python.org/pypi/stoplight
> [2] https://github.com/painterjd/stoplight
> [3] 
> https://github.com/stackforge/poppy/blob/master/poppy/transport/pecan/controllers/v1/services.py#L108
> [4] 
> https://github.com/painterjd/stoplight/blob/master/stoplight/tests/test_validation.py#L138




Re: [openstack-dev] [api] API recommendation

2014-10-20 Thread Adam Young

On 10/15/2014 11:49 AM, Kevin L. Mitchell wrote:

Now that we have an API working group forming, I'd like to kick off some
discussion over one point I'd really like to see our APIs using (and
I'll probably drop it in to the repo once that gets fully set up): the
difference between synchronous and asynchronous operations.  Using nova
as an example—right now, if you kick off a long-running operation, such
as a server create or a reboot, you watch the resource itself to
determine the status of the operation.  What I'd like to propose is that
future APIs use a separate "operation" resource to track status
information on the particular operation.  For instance, if we were to
rebuild the nova API with this idea in mind, booting a new server would
give you a server handle and an operation handle; querying the server
resource would give you summary information about the state of the
server (running, not running) and pending operations, while querying the
operation would give you detailed information about the status of the
operation.  As another example, issuing a reboot would give you the
operation handle; you'd see the operation in a queue on the server
resource, but the actual state of the operation itself would be listed
on that operation.  As a side effect, this would allow us (not require,
though) to queue up operations on a resource, and allow us to cancel an
operation that has not yet been started.
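As a purely hypothetical illustration of the proposal (resource and field names invented here, not an actual Nova API), booting a server might return two handles, and the operation could then be polled separately:

```
POST /servers
=> 202 Accepted
{
    "server": {"id": "8a9f...", "status": "BUILDING"},
    "operation": {"id": "op-123", "href": "/operations/op-123"}
}

GET /operations/op-123
=> 200 OK
{
    "id": "op-123",
    "resource": "/servers/8a9f...",
    "action": "create",
    "state": "RUNNING",
    "started_at": "2014-10-20T18:00:00Z"
}
```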

Thoughts?
I'd like to couple this approach with a a greater use of Keystone trusts 
for delegation of authority.  Trusts and async calls are designed to 
work together.






Re: [openstack-dev] [Ironic] disambiguating the term "discovery"

2014-10-20 Thread Monty Taylor
On 10/20/2014 07:11 PM, Devananda van der Veen wrote:
> Hi all,
> 
> I was reminded in the Ironic meeting today that the words "hardware
> discovery" are overloaded and used in different ways by different
> people. Since this is something we are going to talk about at the
> summit (again), I'd like to start the discussion by building consensus
> in the language that we're going to use.
> 
> So, I'm starting this thread to explain how I use those two words, and
> some other words that I use to mean something else which is what some
> people mean when they use those words. I'm not saying my words are the
> right words -- they're just the words that make sense to my brain
> right now. If someone else has better words, and those words also make
> sense (or make more sense) then I'm happy to use those instead.
> 
> So, here are rough definitions for the terms I've been using for the
> last six months to disambiguate this:
> 
> "hardware discovery"
> The process or act of identifying hitherto unknown hardware, which is
> addressable by the management system, in order to later make it
> available for provisioning and management.
> 
> "hardware introspection"
> The process or act of gathering information about the properties or
> capabilities of hardware already known by the management system.
> 
> 
> Why is this disambiguation important? At the last midcycle, we agreed
> that "hardware discovery" is out of scope for Ironic -- finding new,
> unmanaged nodes and enrolling them with Ironic is best left to other
> services or processes, at least for the foreseeable future.
> 
> However, "introspection" is definitely within scope for Ironic. Even
> though we couldn't agree on the details during Juno, we are going to
> revisit this at the Kilo summit. This is an important feature for many
> of our current users, and multiple proof of concept implementations of
> this have been done by different parties over the last year.
> 
> It may be entirely possible that no one else in our developer
> community is using the term "introspection" in the way that I've
> defined it above -- if so, that's fine, I can stop calling that
> "introspection", but I don't know a better word for the thing that is
> find-unknown-hardware.
> 
> Suggestions welcome,
> Devananda

I have never landed a meaningful patch to Ironic - but I +1 all of the
above. I _HAVE_ had MANY confusing discussions with product managers and
customers where someone says "does it do discovery" and half the room
thinks one definition and half the room thinks the other.

> P.S.
> 
> For what it's worth, googling for "hardware discovery" yields several
> results related to identifying unknown network-connected devices and
> adding them to inventory systems, which is the way that I'm using the
> term right now, so I don't feel completely off in continuing to say
> "discovery" when I mean "find unknown network devices and add them to
> Ironic".
> 




Re: [openstack-dev] [rally][users]

2014-10-20 Thread Behzad Dastur (bdastur)
Hi Boris,
Does Rally provide any synchronization mechanism between multiple scenarios 
when running in parallel? Rally spawns multiple processes, with each process 
running the scenario.  We need a way to synchronize these so that a perf test 
operation starts at the same time in each.


regards,
Behzad
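A barrier is the usual primitive for this kind of start-line synchronization. The sketch below is illustrative only and is not Rally code: it uses threading.Barrier so it is easy to run, while Rally's worker processes would use multiprocessing.Barrier in the same way.

```python
import threading
import time

def run_synchronized(num_workers=4):
    # One barrier shared by all workers; nobody passes barrier.wait()
    # until all num_workers have reached it.
    barrier = threading.Barrier(num_workers)
    start_times = [None] * num_workers

    def worker(idx):
        time.sleep(0.05 * idx)  # simulate uneven per-worker setup
        barrier.wait()          # block here until everyone is ready
        start_times[idx] = time.monotonic()  # the perf test would start now

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return start_times
```

Without the barrier the start times would be spread out by the setup delays; with it they should land nearly together.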



From: Boris Pavlovic [mailto:bpavlo...@mirantis.com]
Sent: Wednesday, September 24, 2014 11:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [rally][users]

Ajay,

Yes, adding support for benchmarking OpenStack clouds using ordinary user 
accounts that already exist has been one of our major goals for more than half 
a year now. As I said in my previous message, we will finally support it soon.



Btw we have feature request page:
https://github.com/stackforge/rally/tree/master/doc/feature_request
With the list of features that we are working now.


Best regards,
Boris Pavlovic

On Thu, Sep 25, 2014 at 5:30 AM, Ajay Kalambur (akalambu) 
mailto:akala...@cisco.com>> wrote:
Hi Boris
Existing users are one thing, but the Rally page says that admin account 
benchmarking is already supported.

Rally is on its way to support of benchmarking OpenStack clouds using ordinary 
user accounts that already exist. Rally lacked such functionality (it only 
supported benchmarking either from an admin account or from a bunch of 
temporarily created users), which posed a problem since some deployments don't 
allow temporary users creation. There have been 
two 
patches that prepare the code for 
this new functionality. It is going to come very soon - stay tuned.


Ajay

From: Boris Pavlovic mailto:bpavlo...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, September 24, 2014 at 6:13 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [rally][users]

Ajay,

I am working that feature. It's almost ready.
I'll let you know when I finish.


Best regards,
Boris Pavlovic

On Thu, Sep 25, 2014 at 5:02 AM, Ajay Kalambur (akalambu) 
mailto:akala...@cisco.com>> wrote:
Hi
Our default mode of executing Rally allows Rally to create a new user and 
tenant. Is there a way to have Rally use the existing admin tenant and user?
I need to use Rally for some tests which need admin access, so I would like 
Rally to use the existing admin tenant and admin user for those tests.
Ajay






[openstack-dev] [Ironic] disambiguating the term "discovery"

2014-10-20 Thread Devananda van der Veen
Hi all,

I was reminded in the Ironic meeting today that the words "hardware
discovery" are overloaded and used in different ways by different
people. Since this is something we are going to talk about at the
summit (again), I'd like to start the discussion by building consensus
in the language that we're going to use.

So, I'm starting this thread to explain how I use those two words, and
some other words that I use to mean something else which is what some
people mean when they use those words. I'm not saying my words are the
right words -- they're just the words that make sense to my brain
right now. If someone else has better words, and those words also make
sense (or make more sense) then I'm happy to use those instead.

So, here are rough definitions for the terms I've been using for the
last six months to disambiguate this:

"hardware discovery"
The process or act of identifying hitherto unknown hardware, which is
addressable by the management system, in order to later make it
available for provisioning and management.

"hardware introspection"
The process or act of gathering information about the properties or
capabilities of hardware already known by the management system.


Why is this disambiguation important? At the last midcycle, we agreed
that "hardware discovery" is out of scope for Ironic -- finding new,
unmanaged nodes and enrolling them with Ironic is best left to other
services or processes, at least for the foreseeable future.

However, "introspection" is definitely within scope for Ironic. Even
though we couldn't agree on the details during Juno, we are going to
revisit this at the Kilo summit. This is an important feature for many
of our current users, and multiple proof of concept implementations of
this have been done by different parties over the last year.

It may be entirely possible that no one else in our developer
community is using the term "introspection" in the way that I've
defined it above -- if so, that's fine, I can stop calling that
"introspection", but I don't know a better word for the thing that is
find-unknown-hardware.

Suggestions welcome,
Devananda


P.S.

For what it's worth, googling for "hardware discovery" yields several
results related to identifying unknown network-connected devices and
adding them to inventory systems, which is the way that I'm using the
term right now, so I don't feel completely off in continuing to say
"discovery" when I mean "find unknown network devices and add them to
Ironic".



Re: [openstack-dev] [api] Request Validation - Stoplight

2014-10-20 Thread Christopher Yeoh
On Mon, 20 Oct 2014 10:38:58 -0400
Jay Pipes  wrote:
> 
> > For stackers who are interested in different validation frameworks
> > to implement validation, I recommend checking out Stoplight.
> 
> Just my two cents on this particular topic, I think it's more
> important to standardize ways in which our public REST APIs expose
> the payload expectations and response schemas to clients. In other
> words... we need to focus on methods for API discovery. Once you have
> standardized resource URI, request payload, and response schema
> discovery, then any number of validation libraries may be used.
> 

I agree standardising our APIs is more important. However, there is an
advantage to projects using the same libraries where practical. It's
easier for developers to move from project to project, there are fewer
dependencies for the project overall, and when we do run into problems
with libraries there are more people familiar with them.

However I don't think we should be mandating specific libraries, but we
can make recommendations (good or bad) based on actual experience. This
will be especially useful to new projects starting up to benefit from
the pain other projects have experienced.

Regards,

Chris



[openstack-dev] [Ironic] [Triple-O] Openstack Onboarding

2014-10-20 Thread Adam Lawson
I made a similar comment on the Triple-O design summit etherpad in hopes
others have a similar interest in Kilo, but I wanted to share my thoughts
with the community for discussion:

For better or for worse, one thing I've heard over and over is how the
OpenStack community/TC approves/prefers the use of TripleO and Ironic to
deploy OpenStack on bare metal. Cool, but for the majority of users
considering using OpenStack in their organization, the question always goes
back to: if I'm not savvy enough yet to install OpenStack without these
tools, how do I set up TripleO and Ironic? It seems like a chicken-and-egg
thing.

There has not been much discussion (that I've noticed) about making the
deployment process easy to stand up. That should be the easy part, but it's
as confusing as the second part for most who are starting out. Using
OpenStack to deploy OpenStack means the installer method should be
straightforward and itself easy to install for users with a limited
understanding of OpenStack or the tooling methods used by OOO and Ironic.
But the bar to use OpenStack continues to be a relatively high engineering
hurdle. It always has been, and I'd love to see that change in the next
cycle.

Something that comes to mind:

   - Setup Process Definition
   - Quickstart Wizards
   - Tooling

The above may seem to be dumbing down the process, but widespread OpenStack
adoption requires an easy onboarding process and, so far, it simply doesn't
exist.

Thoughts?



*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


Re: [openstack-dev] [nova] APIImpact flag for nova specs

2014-10-20 Thread Christopher Yeoh
On Mon, 20 Oct 2014 14:44:15 -0500
Anne Gentle  wrote:
> >
> >
> I think adding APIImpact will be useful.
> 
> I also want to point to the addition of Compute v2 (haven't yet
> proposed a spec for v2.1) to the nova-specs repo here:
> 
> https://review.openstack.org/#/c/129329/
> 
> The goal is to move information from the compute-api repo into the
> -specs repo. I sent an email to the PTLs in August and then added it
> in the What's Up Doc Oct 7th so hopefully this doesn't take anyone by
> surprise. You'll notice if you review that I don't have the template
> test in place that exist for the other blueprint templates, rather
> the ideal is that a new file would be added into api/v2.1 if a
> blueprint affects the API design that describes the correct
> response/request, error codes, and other relevant info. Also if the
> feature affects faults, limits, links, pagination, and so on, the
> spec review would address that in the api spec.

I think this would be a good thing to do. I've added some comments on
the review.

Also here is the review for adding a requirement for an APIImpact
flag in the commit message:

https://review.openstack.org/#/c/129757/
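If that change merges, a commit message carrying the flag might look like the following (change summary and blueprint name invented for illustration):

```
Add pagination support for server listings

This change modifies the GET /servers response to include
pagination links.

APIImpact
Implements: blueprint example-pagination
```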

Regards,

Chris

> 
> 
> > Regards,
> >
> > Chris
> >




[openstack-dev] [neutron] HA of dhcp agents?

2014-10-20 Thread Noel Burton-Krahn
I've been working on failover for dhcp and L3 agents.  I see that in [1],
multiple dhcp agents can host the same network.  However, it looks like I
have to manually assign networks to multiple dhcp agents, which won't
work.  Shouldn't multiple dhcp agents automatically fail over?

[1]
http://docs.openstack.org/trunk/config-reference/content/multi_agent_demo_configuration.html
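The replies in this thread point at the dhcp_agents_per_network option in neutron.conf; networks can also be assigned to agents by hand. A sketch of the relevant configuration and CLI commands (names from the Icehouse/Juno-era neutron client; verify against your release):

```ini
# neutron.conf on the host running neutron-server
[DEFAULT]
dhcp_agents_per_network = 2

# Manual (re)assignment with the CLI:
#   neutron agent-list
#   neutron dhcp-agent-network-add <dhcp-agent-id> <network>
#   neutron dhcp-agent-list-hosting-net <network>
```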


Re: [openstack-dev] [Nova] Cells conversation starter

2014-10-20 Thread Belmiro Moreira
Hi Andrew,

great that you have started the “cells” discussion.

Looking forward to seeing cells as the default setup in Kilo.



The feature gap is really painful for current cells users.

We have been looking into these features for some time, and the main concern
is really where these concepts should live.



cheers,

Belmiro

On Mon, Oct 20, 2014 at 8:14 PM, Mathieu Gagné  wrote:

> On 2014-10-20 2:00 PM, Andrew Laski wrote:
>
>> One of the big goals for the Kilo cycle by users and developers of the
>> cells functionality within Nova is to get it to a point where it can be
>> considered a first class citizen of Nova.
>>
>
> [...]
>
>  Shortcomings:
>>
>> Flavor syncing
>>  This needs to be addressed now.
>>
>>
>> What does cells do:
>>
>> Schedule an instance to a cell based on flavor slots available.
>>
>
> =)
>
>  Thoughts?
>>
>>
> I'm pleased to see concrete efforts at making Nova cells a first class
> citizen. I'm looking forward to it. Thanks!
>
> --
> Mathieu
>
>


Re: [openstack-dev] [oslo] request_id deprecation strategy question

2014-10-20 Thread Joe Gordon
On Mon, Oct 20, 2014 at 11:12 AM, gordon chung  wrote:

> > The issue I'm highlighting is that those projects using the code now have
> > to update their api-paste.ini files to import from the new location,
> > presumably while giving some warning to operators about the impending
> > removal of the old code.
>
> This was the issue I ran into when trying to switch projects to
> oslo.middleware, where I couldn't get Jenkins to pass -- the Grenade tests
> successfully did their job. We had a discussion on openstack-qa and it was
> suggested to add an upgrade script to Grenade to handle the new reference
> and to document the switch. [1]
>
> if there's any issue with this solution, feel free to let us know.
>

Going down this route means every deployment that wishes to upgrade now has
an extra step, which should be avoided whenever possible. Why not just have a
wrapper in project.openstack.common pointing to the new oslo.middleware
library? If that is not a viable solution, we should give operators one
full cycle where the oslo-incubator version is deprecated and they can
migrate to the new copy outside of the upgrade process itself. Since there
is no deprecation warning in Juno [0], we can deprecate the oslo-incubator
copy in Kilo and remove it in L.
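Concretely, the operator-visible step is an api-paste.ini edit along these lines. The module paths below are illustrative only; check the actual oslo-incubator copy in your project and the oslo.middleware release for the exact factory paths:

```ini
# Before: request_id middleware from the project's oslo-incubator copy
[filter:request_id]
paste.filter_factory = nova.openstack.common.middleware.request_id:RequestIdMiddleware.factory

# After: the same middleware from the oslo.middleware library
[filter:request_id]
paste.filter_factory = oslo.middleware.request_id:RequestId.factory
```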


[0] first email in this thread


>
> [1]
> http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2014-10-10.log
>  (search
> for gordc)
>
> cheers,
> *gord*
>


Re: [openstack-dev] [all] Tentative schedule for Kilo Design Summit in Paris

2014-10-20 Thread Sean Roberts
Any chance we can move the Congress session from Monday 14:30-16:00 to 
co-locate on Tuesday with GBP, either before or after?

~ sean

On Oct 7, 2014, at 1:25 AM, Thierry Carrez  wrote:

> Sylvain Bauza wrote:
>> I only see 3 slots for discussing about other projects. Do you know if
>> it would be possible to get more as there are particular slots around
>> 14:50, 15:40 and 17:20 which are quite empty ?
> 
> There are actually "Other projects" sessions on those slots (the
> placeholders cover multiple 40-min slots), as can be seen at:
> 
> http://kilodesignsummit.sched.org/grid/
> 
> So we currently have 6 "other projects" slots, but I hope we can secure
> a few more.
> 
> Regards,
> 
> -- 
> Thierry Carrez (ttx)
> 


[openstack-dev] [NFV] NFV BoF session for OpenStack Summit Paris

2014-10-20 Thread Steve Gordon
Hi all,

I took an action item in one of the meetings to try and find a date/time/space 
to do another NFV BoF session for Paris to take advantage of the fact that many 
of us will be in attendance for a face to face session.

To try and avoid clashing with the general and design summit sessions I am 
proposing that we meet either before the sessions start one morning, during the 
lunch break, or after the sessions finish for the day. For the lunch sessions 
the meeting would be shorter to ensure people actually have time to grab lunch 
beforehand.

I've put together a form here, please register your preferred date/time if you 
would be interested in attending an NFV BoF session:

http://doodle.com/qchvmn4sw5x39cps

I will try and work out the *where* once we have a clear picture of the 
preferences for the above. We can discuss further in the weekly meeting.

Thanks!

Steve

[1] 
https://openstacksummitnovember2014paris.sched.org/event/f5bcb6033064494390342031e48747e3#.VEWEIOKmhkM



Re: [openstack-dev] [OpenStack] [Barbican] [Cinder] Cinder and Barbican

2014-10-20 Thread Nathan Reller
> is Cinder capable today of using Barbican for encryption?

Yes, Cinder has a KeyManager abstraction, and one of the implementations is
Barbican. Check out cinder/keymgr/barbican.py. We have successfully used
Barbican within Cinder.

I think the python-barbicanclient has recently changed. This change has
temporarily broken the code, but we plan to submit a patch to fix that soon.
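For reference, selecting the Barbican implementation is a one-line change in cinder.conf. The option name and class path below are taken from the Juno-era code; verify them against your release:

```ini
[keymgr]
api_class = cinder.keymgr.barbican.BarbicanKeyManager
```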

-Nate



Re: [openstack-dev] [Neutron] Why doesn't ml2-ovs work when it's "host" != the dhcp agent's host?

2014-10-20 Thread Noel Burton-Krahn
Thanks, Robert.

So, ML2 needs the host attribute to match to bind the port.  My other
requirement is that the dhcp agent must be able to migrate to a new host on
failover.  The issue there is that if the dhcp service starts on a new host
with a new host name, then it will not take over the networks that were
served by the old host name.  I'm looking for a way to start the dhcp agent
on a new host using the old host's config.

--
Noel


On Mon, Oct 20, 2014 at 11:10 AM, Robert Kukura 
wrote:

>  Hi Noel,
>
> The ML2 plugin uses the binding:host_id attribute of port to control port
> binding. For compute ports, nova sets binding:host_id when
> creating/updating the neutron port, and ML2's openvswitch mechanism driver
> will look in agents_db to make sure the openvswitch L2 agent is running on
> that host, and that it has a bridge mapping for any needed physical network
> or has the appropriate tunnel type enabled. The binding:host_id attribute
> also gets set on DHCP, L3, and other agents' ports, and must match the host
> of the openvswitch-agent on that node or ML2 will not be able to bind the
> port. I suspect your configuration may be resulting in these not matching,
> and the DHCP port's binding:vif_type attribute being 'binding_failed'.
>
> I'd suggest running "neutron port-show" as admin on the DHCP port to see
> what the values of binding_vif_type and binding:host_id are, and running
> "neutron agent-list" as admin to make sure there is an L2 agent on that
> node and maybe "neutron agent-show" as admin to get that agents config
> details.
>
> -Bob
>
>
>
> On 10/20/14 1:28 PM, Noel Burton-Krahn wrote:
>
> I'm running OpenStack Icehouse with Neutron ML2/OVS.  I've configured the
> ml2-ovs-plugin on all nodes with host = the IP of the host itself.
> However, my dhcp-agent may float from host to host for failover, so I
> configured it with host="floating".  That doesn't work.  In this case, the
> ml2-ovs-plugin creates a namespace and a tap interface for the dhcp agent,
> but OVS doesn't route any traffic to the dhcp agent.  It *does* work if the
> dhcp agent's host is the same as the ovs plugin's host, but if my dhcp
> agent migrates to another host, it loses its configuration since it now has
> a different host name.
>
>  So my question is, what does host mean for the ML2 dhcp agent, and how
> can I get it to work if the dhcp agent's host != host for the ovs plugin?
>
>  Case 1: fails: running with dhcp agent's host = "floating", ovs plugin's
> host = IP-of-server
> dhcp agent is running in netns created by ovs-plugin
> dhcp agent never receives network traffic
>
>  Case 2: ok: running with dhcp agent's host = ovs plugin's host =
> IP-of-server
>  dhcp agent is running in netns created by ovs-plugin (different tap name
> than case 1)
>  dhcp agent works
>
>  --
> Noel
>
>
>
>
>
>


Re: [openstack-dev] [Ironic][Ceilometer] Proposed Change to Sensor meter naming in Ceilometer

2014-10-20 Thread Chris Dent

On Mon, 20 Oct 2014, Jim Mankovich wrote:

I'll propose something via a spec to Ceilometer for sensor naming which will
include the ability to support the new health sensor information.


Excellent.


Do you happen to know what some of the use cases are for the current
reporting of sensor information?


Sadly, not really. I'm hoping some observers of this thread will chime
in.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [Keystone] external AuthN Identity Backend

2014-10-20 Thread Adam Young

On 10/16/2014 02:58 PM, David Chadwick wrote:

Dave

when federation is used, the user's group is stored in a mapping rule.
So we do have a mechanism for storing group memberships without using
LDAP or creating an entry for the user in the SQL backend. (The only
time this is kinda not true is if we have a specific rule for each
federated user, so that then each mapping rule is equivalent to an entry
for each user). But usually we might expect many users to use the same
mapping rule.

Mapping rules should be usable for Kerberos logins. I dont know if the
current code does have this ability or not, but if it doesn't, then it
should be re-engineered to. (it was always in my design that all remote
logins should have a mapping capability)
It most certainly does, and can be extended to get additional env vars 
from mod_lookup_identity as well.





regards

David

On 16/10/2014 19:15, Dave Walker wrote:

Hi,

Currently we have two ways of doing Identity Auth backends, these are
sql and ldap.

The SQL backend is the default and is for situations where Keystone is
the canonical Identity provider with username / password being
directly compared to the Keystone database.

LDAP is the current option if Keystone isn't the canonical Identity
provider and passes the username and password to an LDAP server for
comparison and retrieves the groups.

For a few releases we have supported External auth (or Kerberos),
where we authenticate the user at the edge and trust the REMOTE_USER
is valid.  In these situations Keystone doesn't require the Username
or Password to be valid.

Particularly in Kerberos situations, no password is used to
successfully authenticate at the edge.  This works well, but LDAP
cannot be used as no password is passed through.  The other option is
SQL, but that then requires a user to be created in Keystone first.

We do not seem to cover the situation where Identity is provided by an
external mechanism.  The only system currently available is Federation
via SAML, which isn't always the best fit.

Therefore, I'd like to suggest the introduction of a third backend.
This would be the external identity provider.  This would seem to be
pretty simple, as the current checks would simply return success (as
we trust auth at the edge), and not store user_id in the database, but
generate it at runtime.

The issue I have, is that this doesn't cover Group membership.

So, am I a:
  - Barking totally up the wrong tree
  - Add support to the current LDAP plugin to support external auth
(but still use LDAP for groups)
  - Write a standalone external plugin, but then do what for Groups?  I
would be reasonably happy to just have 1:1 mapping of users to groups.

Does this make sense?

Thanks

--
Kind Regards,
Daviey Walker



Re: [openstack-dev] [Keystone] external AuthN Identity Backend

2014-10-20 Thread Adam Young

On 10/16/2014 03:18 PM, Dave Walker wrote:

On 16 October 2014 20:07, David Stanek  wrote:


I may be missing something, but can you use the external auth method with
the LDAP backend?


No, as the purpose of the LDAP backend is to validate user/pass
combination are valid.  With the external auth plugin, these are not
provided to keystone (and may not even exist).  If they did exist, we
would be doing auth at the edge and at the backend - which seems
needlessly expensive.

--
Kind Regards,
Daviey Walker


The short of it is that what you are describing is handled by Federation.

I think that there is some confusion in the processing of an authN/authZ 
request, which we call "create a token".


Here's how I would expect it to work in a Kerberos case (the archetype 
for external) before the use of Federation


1.  mod_auth_kerb authenticates the user and sets REMOTE_USER before 
calling the Keystone WSGI app

2.  Keystone accepts REMOTE_USER and looks up the user in LDAP to get groups
3.  Userid and Groups are used to  fetch roles to populate the token

We can also use the OS and mod_lookup_identity to get us groups; see 
this write-up for how to use Federation with SSSD:


http://adam.younglogic.com/2014/05/keystone-federation-via-mod_lookup_identity/

That is old and needs to be updated, but the concepts are the same.


With Federation, you provide a mapping and a bunch of env vars to the 
Keystone server, and there is no need to persist the user in the user table.
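A minimal mapping of that sort, taking REMOTE_USER from the environment, naming the user from it, and attaching a group, might look roughly like the following (the group ID is invented for illustration):

```json
{
    "rules": [
        {
            "local": [
                {"user": {"name": "{0}"}},
                {"group": {"id": "abc123-example-group-id"}}
            ],
            "remote": [
                {"type": "REMOTE_USER"}
            ]
        }
    ]
}
```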





Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-20 Thread Anne Gentle
On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan <
vadivel.openst...@gmail.com> wrote:

> Hi,
>
>
> On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton wrote:
>
> > I think you will probably have to wait until after the summit so we
> > can see the direction that will be taken with the rest of the in-tree
> > drivers/plugins. It seems like we are moving towards removing all of
> > them so we would definitely need a solution to documenting
> > out-of-tree drivers as you suggested.
>
> [Vad] While I'm waiting for the conclusion on this subject, I'm trying
> to set up the third-party CI/Test system and meet its requirements to get my
> mechanism_driver listed in the Kilo documentation, in parallel.
>
> Couple of questions/confirmations before i proceed further on this
> direction...
>
> 1) Is there anything more required other than the third-party CI/Test
> requirements?.. like do I still need to go through the entire
> development process of submit/review/approval of the blueprint and code of
> my ML2 driver, which was already developed and is in use?...
>
>
The neutron PTL Kyle Mestery can answer if there are any additional
requirements.


> 2) Who is the authority to clarify and confirm the above (and how do i
> contact them)?...
>

Elections just completed, and the newly elected PTL is Kyle Mestery,
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html.


>
> Thanks again for your inputs...
>
> Regards,
> Vad
> --
>
> On Tue, Oct 14, 2014 at 3:17 PM, Anne Gentle  wrote:
>
>>
>>
>> On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan <
>> vadivel.openst...@gmail.com> wrote:
>>
>>> Agreed on the requirements of test results to qualify the vendor plugin
>>> to be listed in the upstream docs.
>>> Is there any procedure/infrastructure currently available for this
>>> purpose?..
>>> Pls. fwd any link/pointers on those info.
>>>
>>>
>> Here's a link to the third-party testing setup information.
>>
>> http://ci.openstack.org/third_party.html
>>
>> Feel free to keep asking questions as you dig deeper.
>> Thanks,
>> Anne
>>
>>
>>> Thanks,
>>> Vad
>>> --
>>>
>>> On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki 
>>> wrote:
>>>
 I agree with Kevin and Kyle. Even if we decided to use a separate tree
 for neutron plugins and drivers, they still will be regarded as part of
 the upstream. These plugins/drivers need to prove they are well
 integrated with Neutron master in some way, and gating integration
 proves it is well tested and integrated. I believe it is a reasonable
 assumption and requirement for a vendor plugin/driver to be listed in
 the upstream docs. This is the same kind of question as what vendor
 plugins are tested and worth documenting in the upstream docs. I hope
 you work with the neutron team and meet the third party requirements.

 Thanks,
 Akihiro

 On Tue, Oct 14, 2014 at 10:09 AM, Kyle Mestery 
 wrote:
 > On Mon, Oct 13, 2014 at 6:44 PM, Kevin Benton 
 wrote:
 >>>The OpenStack dev and docs team dont have to worry about
 >>> gating/publishing/maintaining the vendor specific plugins/drivers.
 >>
 >> I disagree about the gating part. If a vendor wants to have a link
 >> that shows they are compatible with openstack, they should be reporting
 >> test results on all patches. A link to a vendor driver in the docs
 >> should signify some form of testing that the community is comfortable
 >> with.
 >>
 > I agree with Kevin here. If you want to play upstream, in whatever
 > form that takes by the end of Kilo, you have to work with the existing
 > third-party requirements and team to take advantage of being a part of
 > things like upstream docs.
 >
 > Thanks,
 > Kyle
 >
 >> On Mon, Oct 13, 2014 at 11:33 AM, Vadivel Poonathan
 >>  wrote:
 >>>
 >>> Hi,
 >>>
 >>> If the plan is to move ALL existing vendor specific plugins/drivers
 >>> out-of-tree, then having a place-holder within the OpenStack domain
 >>> would suffice, where the vendors can list their plugins/drivers along
 >>> with their documentation as how to install and use etc.
 >>>
 >>> The main Openstack Neutron documentation page can explain the plugin
 >>> framework (ml2 type drivers, mechanism drivers, service plugin and so
 >>> on) and its purpose/usage etc, then provide a link to refer to the
 >>> currently supported vendor specific plugins/drivers for more details.
 >>> That way the documentation will be accurate to what is "in-tree" and
 >>> limit the documentation of external plugins/drivers to have just a
 >>> reference link. So its now the vendor's responsibility to keep their
 >>> driver up-to-date and their documentation accurate. The OpenStack dev
 >>> and docs team don't have to worry about gating/publishing/maintaining
 >>> the vendor specific plugins/drivers.

Re: [openstack-dev] [nova] APIImpact flag for nova specs

2014-10-20 Thread Anne Gentle
On Wed, Oct 15, 2014 at 5:52 AM, Christopher Yeoh  wrote:

> On Wed, Oct 15, 2014 at 8:58 PM, Sylvain Bauza  wrote:
>
>>
>> Le 15/10/2014 11:56, Christopher Yeoh a écrit :
>>
>>
>> On Wed, Oct 15, 2014 at 7:31 PM, Alex Xu  wrote:
>>
>>>  On October 15, 2014 14:20, Christopher Yeoh wrote:
>>>
 Hi,

 I was wondering what people thought of having a convention of adding
 an APIImpact flag to proposed nova specs commit messages where the
 Nova API will change? It would make it much easier to find proposed
 specs which affect the API as its not always clear from the gerrit
 summary listing.

>>>  +1, and is there any tool that can be used to search by the flag?
>>>


>>  Can use the message: filter in the gerrit web search interface to
>> search in commit messages, or
>> alternatively use gerritlib to write something custom.
>>
>>
>> IMHO, asking people to put a tag on a commit msg is good but error-prone
>> because there could be some misses.
>> Considering that API changes require new templates, why not asking for
>> people to provide on a separate tpl file the changes they want to provide,
>> and make use of the Gerrit file pattern search like
>> specs/kilo/approved/*.tpl ?
>>
>>
>>
> We don't require new templates as part of nova-specs and api changes don't
> necessarily change the api sample tpl files. We do ask for some jsonschema
> descriptions of the new APIs input but they work pretty well in the spec
> document itself. I agree it could be prone to spelling mistakes etc, though
> just being able to search for 'api' would be sufficient and people who
> review specs could pick up missing or mispelled flags in the commit message
> (and it wouldn't necessarily need to be restricted to just APIImpact as
> possible flags).
>
>
I think adding APIImpact will be useful.
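
To address the misspelling concern raised above, a reviewer-side check is
cheap to write. A toy version (the commit messages are invented; in
practice they would come from git or the Gerrit API):

```python
import re

# Invented commit messages standing in for `git log` / Gerrit output.
commit_messages = [
    "Add servers tag filter\n\nAPIImpact",
    "Refactor scheduler internals",
    "Expose host status\n\napiimpact",  # wrong case: a near-miss
]

EXACT = re.compile(r'^APIImpact\s*$', re.MULTILINE)
LOOSE = re.compile(r'api[\s-]*impact', re.IGNORECASE)

flagged = [m for m in commit_messages if EXACT.search(m)]
# Near-misses let a reviewer ask for the canonical spelling.
near_misses = [m for m in commit_messages
               if LOOSE.search(m) and not EXACT.search(m)]
```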

I also want to point to the addition of Compute v2 (haven't yet proposed a
spec for v2.1) to the nova-specs repo here:

https://review.openstack.org/#/c/129329/

The goal is to move information from the compute-api repo into the -specs
repo. I sent an email to the PTLs in August and then added it in the What's
Up Doc Oct 7th so hopefully this doesn't take anyone by surprise. You'll
notice if you review that I don't have the template test in place that
exist for the other blueprint templates, rather the ideal is that a new
file would be added into api/v2.1 if a blueprint affects the API design
that describes the correct response/request, error codes, and other
relevant info. Also if the feature affects faults, limits, links,
pagination, and so on, the spec review would address that in the api spec.

Let me know your thoughts on the review.
Thanks,
Anne


> Regards,
>
> Chris
>


[openstack-dev] [neutron] [oslo.db] model_query() future and neutron specifics

2014-10-20 Thread Mike Bayer
As I’ve established oslo.db blueprints which will roll out new SQLAlchemy 
connectivity patterns for consuming applications within both API [1] and tests 
[2], one of the next big areas I’m to focus on is that of querying.   If one 
looks at how SQLAlchemy ORM queries are composed across Openstack, the most 
prominent feature one finds is the prevalent use of the model_query() 
initiation function.This is a function that is implemented in a specific 
way for each consuming application; its purpose is to act as a factory for new 
Query objects, starting from the point of acquiring a Session, starting up the 
Query against a selected model, and then augmenting that Query right off with 
criteria derived from the given application context, typically oriented around 
the widespread use of so-called “soft-delete” columns, as well as a few other 
fixed criteria.
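
As a rough sketch of the pattern being described (heavily simplified; real
implementations vary per project and this is not any project's actual code):

```python
from collections import namedtuple

from sqlalchemy import Column, Integer, String, create_engine
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy >= 1.4
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Instance(Base):
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    project_id = Column(String(36))
    deleted = Column(Integer, default=0)  # the "soft-delete" column

# Minimal stand-in for an application context.
Context = namedtuple('Context', ['project_id', 'read_deleted'])

def model_query(context, model, session):
    """Factory for new Query objects: start the Query against the model,
    then augment it with fixed criteria from the application context."""
    query = session.query(model)
    if not context.read_deleted:
        query = query.filter(model.deleted == 0)
    return query.filter(model.project_id == context.project_id)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([
    Instance(id=1, project_id='p1', deleted=0),
    Instance(id=2, project_id='p1', deleted=1),   # soft-deleted
    Instance(id=3, project_id='p2', deleted=0),   # other tenant
])
session.commit()

rows = model_query(Context('p1', False), Instance, session).all()
```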

There’s a few issues with model_query() that I will be looking to solve, 
starting with the proposal of a new blueprint.   Key issues include that it 
will need some changes to interact with my new connectivity specification, it 
may need a big change in how it is invoked in order to work with some new 
querying features I also plan on proposing at some point (see 
https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Baked_Queries), and 
also it’s current form in some cases tends to slightly discourage the 
construction of appropriate queries.

In order to propose a new system for model_query(), I have to do a survey of 
how this function is implemented and used across projects.  Which is why we 
find me talking about Neutron today - Neutron’s model_query() system is a much 
more significant construct compared to that of all other projects.   It is 
interesting because it makes clear some use cases that SQLAlchemy may very well 
be able to help with.  It also seems to me that in its current form it leads to 
SQL queries that are poorly formed - as I see this, on one hand we can blame 
the structure of neutron’s model_query() for how this occurs, but on the other, 
we can blame SQLAlchemy for not providing more tools oriented towards what 
Neutron is trying to do.   The use case Neutron has here is very common 
throughout many Python applications, but as yet I’ve not had the opportunity to 
address this kind of pattern in a comprehensive way.   

I first sketched out my concerns on a Neutron issue 
https://bugs.launchpad.net/neutron/+bug/1380823, however I was encouraged to 
move it over to the mailing list.

Specifically with Neutron’s model_query(), we're talking here about the plugin 
architecture in neutron/db/common_db_mixin.py, where the 
register_model_query_hook() method presents a way of applying modifiers to 
queries. This system appears to be used by: db/external_net_db.py, 
plugins/ml2/plugin.py, db/portbindings_db.py, 
plugins/metaplugin/meta_neutron_plugin.py.

What the use of the hook has in common in these cases is that a LEFT OUTER JOIN 
is applied to the Query early on, in anticipation of either the filter_hook or 
result_filters being applied to the query, but only *possibly*, and then even 
within those hooks as supplied, again only *possibly*. It's these two 
"*possiblies*" that leads to the use of LEFT OUTER JOIN - this extra table is 
present in the query's FROM clause, but if we decide we don't need to filter on 
it, the idea is that it's just a left outer join, which will not change the 
primary result if not added to what’s being filtered. And even, in the case of 
external_net_db.py, maybe we even add a criterion "WHERE <joined column> IS 
NULL", that is, doing a "not contains" off of this left outer join.

The result is that we can get a query like this:

SELECT a.* FROM a LEFT OUTER JOIN b ON a.id=b.aid WHERE b.id IS NOT NULL

this can happen for example if using External_net_db_mixin, the outerjoin to 
ExternalNetwork is created, _network_filter_hook applies 
"expr.or_(ExternalNetwork.network_id != expr.null())", and that's it.

The database will usually have a much easier time if this query is expressed 
correctly [3]:

   SELECT a.* FROM a INNER JOIN b ON a.id=b.aid

the reason this bugs me is because the SQL output is being compromised as a 
result of how the plugin system is organized. Preferable would be a system 
where the plugins are either organized into fewer functions that perform all 
the checking at once, or if the plugin system had more granularity to know that 
it needs to apply an optional JOIN or not.   My thoughts for new 
SQLAlchemy/oslo.db features are being driven largely by Neutron’s use case here.
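
The two query shapes above can be reproduced with a small SQLAlchemy
example (toy models, not Neutron's; the point is only the generated SQL):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy >= 1.4
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Network(Base):
    __tablename__ = 'networks'
    id = Column(Integer, primary_key=True)

class ExternalNetwork(Base):
    __tablename__ = 'externalnetworks'
    id = Column(Integer, primary_key=True)
    network_id = Column(Integer, ForeignKey('networks.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Hook style: a speculative LEFT OUTER JOIN added early, with a filter
# hook later deciding it wants only external networks after all.
q1 = session.query(Network).outerjoin(
    ExternalNetwork, ExternalNetwork.network_id == Network.id)
q1 = q1.filter(ExternalNetwork.network_id != None)  # noqa: E711

# The same intent expressed directly as an INNER JOIN.
q2 = session.query(Network).join(
    ExternalNetwork, ExternalNetwork.network_id == Network.id)

print(str(q1))  # ... LEFT OUTER JOIN ... WHERE ... IS NOT NULL
print(str(q2))  # ... JOIN externalnetworks ...
```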

Towards my goal of proposing a better system of model_query(), along with 
Neutron’s heavy use of generically added criteria, I’ve put some thoughts down 
on a new SQLAlchemy feature which would also be backported to oslo.db. The 
initial sketch is at 
https://bitbucket.org/zzzeek/sqlalchemy/issue/3225/query-heuristic-inspector-event,
 and the main idea is that Query would include a system by which we can ask 
que

Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-20 Thread Vadivel Poonathan
Hi,







On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton wrote:
> I think you will probably have to wait until after the summit so we can
> see the direction that will be taken with the rest of the in-tree
> drivers/plugins. It seems like we are moving towards removing all of them
> so we would definitely need a solution to documenting out-of-tree drivers
> as you suggested.

[Vad] While I'm waiting for the conclusion on this subject, I'm trying to
set up the third-party CI/Test system and meet its requirements to get my
mechanism_driver listed in the Kilo documentation, in parallel.

Couple of questions/confirmations before i proceed further on this
direction...

1) Is there anything more required other than the third-party CI/Test
requirements?.. like do I still need to go through the entire development
process of submit/review/approval of the blueprint and code of my ML2
driver, which was already developed and is in use?...

2) Who is the authority to clarify and confirm the above (and how do i
contact them)?...

Thanks again for your inputs...

Regards,
Vad
--

On Tue, Oct 14, 2014 at 3:17 PM, Anne Gentle  wrote:

>
>
> On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan <
> vadivel.openst...@gmail.com> wrote:
>
>> Agreed on the requirements of test results to qualify the vendor plugin
>> to be listed in the upstream docs.
>> Is there any procedure/infrastructure currently available for this
>> purpose?..
>> Pls. fwd any link/pointers on those info.
>>
>>
> Here's a link to the third-party testing setup information.
>
> http://ci.openstack.org/third_party.html
>
> Feel free to keep asking questions as you dig deeper.
> Thanks,
> Anne
>
>
>> Thanks,
>> Vad
>> --
>>
>> On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki 
>> wrote:
>>
>>> I agree with Kevin and Kyle. Even if we decided to use a separate tree
>>> for neutron plugins and drivers, they still will be regarded as part of
>>> the upstream. These plugins/drivers need to prove they are well
>>> integrated with Neutron master in some way, and gating integration
>>> proves it is well tested and integrated. I believe it is a reasonable
>>> assumption and requirement for a vendor plugin/driver to be listed in
>>> the upstream docs. This is the same kind of question as what vendor
>>> plugins are tested and worth documenting in the upstream docs. I hope
>>> you work with the neutron team and meet the third party requirements.
>>>
>>> Thanks,
>>> Akihiro
>>>
>>> On Tue, Oct 14, 2014 at 10:09 AM, Kyle Mestery 
>>> wrote:
>>> > On Mon, Oct 13, 2014 at 6:44 PM, Kevin Benton 
>>> wrote:
>>> >>>The OpenStack dev and docs team dont have to worry about
>>> >>> gating/publishing/maintaining the vendor specific plugins/drivers.
>>> >>
>>> >> I disagree about the gating part. If a vendor wants to have a link
>>> >> that shows they are compatible with openstack, they should be
>>> >> reporting test results on all patches. A link to a vendor driver in
>>> >> the docs should signify some form of testing that the community is
>>> >> comfortable with.
>>> >>
>>> > I agree with Kevin here. If you want to play upstream, in whatever
>>> > form that takes by the end of Kilo, you have to work with the existing
>>> > third-party requirements and team to take advantage of being a part of
>>> > things like upstream docs.
>>> >
>>> > Thanks,
>>> > Kyle
>>> >
>>> >> On Mon, Oct 13, 2014 at 11:33 AM, Vadivel Poonathan
>>> >>  wrote:
>>> >>>
>>> >>> Hi,
>>> >>>
>>> >>> If the plan is to move ALL existing vendor specific plugins/drivers
>>> >>> out-of-tree, then having a place-holder within the OpenStack domain
>>> >>> would suffice, where the vendors can list their plugins/drivers
>>> >>> along with their documentation as how to install and use etc.
>>> >>>
>>> >>> The main Openstack Neutron documentation page can explain the plugin
>>> >>> framework (ml2 type drivers, mechanism drivers, service plugin and
>>> >>> so on) and its purpose/usage etc, then provide a link to refer to
>>> >>> the currently supported vendor specific plugins/drivers for more
>>> >>> details. That way the documentation will be accurate to what is
>>> >>> "in-tree" and limit the documentation of external plugins/drivers to
>>> >>> have just a reference link. So its now the vendor's responsibility
>>> >>> to keep their driver up-to-date and their documentation accurate.
>>> >>> The OpenStack dev and docs team don't have to worry about
>>> >>> gating/publishing/maintaining the vendor specific plugins/drivers.
>>> >>>
>>> >>> The built-in drivers such as LinuxBridge or OpenVSwitch etc can
>>> >>> continue to be "in-tree" and their documentation will be part of the
>>> >>> main Neutron docs. So Neutron is guaranteed to work with built-in
>>> >>> plugins/drivers as per the documentation, and the user is informed
>>> >>> to refer to the "external vendor plug-in page" for
>>> >>> additional/specific plugins/drivers.

Re: [openstack-dev] [Ironic][Ceilometer] Proposed Change to Sensor meter naming in Ceilometer

2014-10-20 Thread Jim Mankovich

Chris,
Use case point well taken :-)

I'll propose something via a spec to Ceilometer for sensor naming which will
include the ability to support the new health sensor information.

From a use case perspective, I want to provide the health of every platform
so an administrator can be notified when a platform's health drops below 100%.

I also want to provide an administrator the ability to investigate exactly
what components in the platform are not working correctly if health is
reported at less than 100%.

With the current sensor information, the use case I was interested in was
the graphical display of individual platform sensor information.

Do you happen to know what some of the use cases are for the current
reporting of sensor information?

Thanks,
Jim

On 10/20/2014 11:14 AM, Chris Dent wrote:

On Mon, 20 Oct 2014, Jim Mankovich wrote:

On 10/20/2014 6:53 AM, Chris Dent wrote:

On Fri, 17 Oct 2014, Jim Mankovich wrote:
See answers inline. I don't have any concrete answers as to how to deal
with some of the questions you brought up, but I do have some more detail
that may be useful to further the discussion.


That seems like progress to me.


And thanks for keeping it going some more. I'm going to skip your
other (very useful) comments and go (almost) straight (below) to
one thing which goes to the root of the queries I've been making.

Most of the rest of what you said makes sense and we seem to be
mostly in agreement. I suppose the next step would be propose a
spec? https://github.com/openstack/ceilometer-specs


We have 2 use cases,
Get all the sensors within a given platform (based on ironic node id)
Get all the sensors of a given "type/name". independent of platform
Others?


These are not use cases, these are tasks. That's because these say
nothing about the thing you are actually trying to achieve. "Get all
the sensors with a given platform" is a task without a purpose.
You're not just going to stop there are you? If so why did you get
the information in the first place.

A use case could be:

* I want to get all the sensors of a given platform so I can <achieve X>.

Or even better something like:

* I want to <achieve X>.

And the way to do that would just so happen to be getting all the
sensors.

I realize this is perhaps pedantic hair-splitting, but I think it
can be useful at least some of the time. I know that from my own
experience I am very rarely able to get the Ceilometer API to give
me the information that I actually want (e.g. "How many vcpus are
currently in action?"). This feels like the result of data availability
driving the query engine rather than vice versa.





Re: [openstack-dev] FreeBSD host support

2014-10-20 Thread Joe Gordon
On Sat, Oct 18, 2014 at 10:04 AM, Roman Bogorodskiy <
rbogorods...@mirantis.com> wrote:

> Hi,
>
> In discussion of this spec proposal:
> https://review.openstack.org/#/c/127827/ it was suggested by Joe Gordon
> to start a discussion on the mailing list.
>
> So I'll share my thoughts and a long term plan on adding FreeBSD host
> support for OpenStack.
>
> An ultimate goal is to allow using libvirt/bhyve as a compute driver.
> However, I think it would be reasonable to start with libvirt/qemu
> support first as it will allow to prepare the ground.
>

Before diving into the technical details below, I have one question: why?
What is the benefit of this, besides the obvious 'we now support FreeBSD'?
Adding support for a new kernel introduces yet another column in our
support matrix, and will require a long-term commitment to testing and
maintaining OpenStack on FreeBSD.



>
> High level overview of what needs to be done:
>
>  - Nova
>   * linux_net needs to be re-factored to allow to plug in FreeBSD
> support (that's what the spec linked above is about)
>   * nova.virt.disk.mount needs to be extended to support FreeBSD's
> mdconfig(8) in a similar way to Linux's losetup
>  - Glance and Keystone
> These components are fairly free of system specifics. Most likely
> they will require some small fixes like e.g. I made for Glance
> https://review.openstack.org/#/c/94100/
>  - Cinder
> I didn't look close at Cinder from a porting perspective, tbh.
> Obviously, it'll need some backend driver that would work on
> FreeBSD, e.g. ZFS. I've seen some patches floating around for ZFS
> though. Also, I think it'll need an implementation of iSCSI stack
> on FreeBSD, because it has its own stack, not stgt. On the other
> hand, Cinder is not required for a minimal installation and that
> could be done after adding support of the other components.
>
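
For the mdconfig/losetup point above, the platform split could be as small
as choosing the right command (a sketch; check the losetup(8) and
mdconfig(8) man pages for the exact flags in your environment):

```python
import sys

def image_attach_cmd(image_path):
    """Build the command that attaches a disk image on the host OS."""
    if sys.platform.startswith('freebsd'):
        # FreeBSD: create a vnode-backed memory disk from the image file.
        return ['mdconfig', '-a', '-t', 'vnode', '-f', image_path]
    # Linux: attach the first free loop device and print its name.
    return ['losetup', '--find', '--show', image_path]

cmd = image_attach_cmd('/tmp/disk.img')
```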

What about neutron? We are in the process of trying to deprecate
nova-network, so any new thing needs to support neutron.


>
> Also, it's worth to mention that a discussion on this topic already
> happened on this maillist:
>
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/031431.html
>
> Some of the limitations were resolved since then, specifically,
> libvirt/bhyve has no limitation on count of disk and ethernet devices
> anymore.
>
> Roman Bogorodskiy
>


Re: [openstack-dev] [Nova] Cells conversation starter

2014-10-20 Thread Mathieu Gagné

On 2014-10-20 2:00 PM, Andrew Laski wrote:

One of the big goals for the Kilo cycle by users and developers of the
cells functionality within Nova is to get it to a point where it can be
considered a first class citizen of Nova.


[...]


Shortcomings:

Flavor syncing
 This needs to be addressed now.


What does cells do:

Schedule an instance to a cell based on flavor slots available.


=)


Thoughts?



I'm pleased to see concrete efforts at making Nova cells a first class 
citizen. I'm looking forward to it. Thanks!


--
Mathieu



Re: [openstack-dev] [nova] Pulling nova/virt/hardware.py into nova/objects/

2014-10-20 Thread Dan Smith
> OK, so in reviewing Dan B's patch series that refactors the virt
> driver's get_available_resource() method [1], I am stuck between two
> concerns. I like (love even) much of the refactoring work involved in
> Dan's patches. They replace a whole bunch of our nested dicts that are
> used in the resource tracker with real objects -- and this is something
> I've been harping on for months that really hinders developer's
> understanding of Nova's internals.

dict['line1'] = 'Agreed, this is extremely important stuff.'
dict['line2'] = 'The current dict mess that we have there is '
dict['line3'] = 'really obscure and confusing.'
reply = jsonutils.dumps(dict)

> However, all of the object classes that Dan B has introduced have been
> unversioned objects -- i.e. they have not derived from
> nova.objects.base.NovaObject. This means that these objects cannot be
> sent over the wire via an RPC API call. In practical terms, this issue
> has not yet reared its head, because the resource tracker still sends a
> dictified JSON representation of the object's fields directly over the
> wire, in the same format as Icehouse, therefore there have been no
> breakages in RPC API compatibility.

Right, so the blueprint for this work states that it's not to be sent
over the RPC wire or stored in the database. However, it already is in
some cases (at least the ComputeNode object has the unversioned
JSONified version of some of these hardware models in it).

If the modeling is purely for internal-to-compute-node purposes, then
it's all good. However, it surely seems like with the pending scheduler
isolation work, we're in a spot where we are building two parallel model
hierarchies, and I'm not really sure why.

> My proposal is that before we go and approve any BPs or patches that add
> to nova/virt/hardware.py, we first put together a patch series that
> moves the object models in nova/virt/hardware.py to being full-fledged
> objects in nova/objects/*

I'm not sure that just converting them all to NovaObjects is really
necessary here. If it's all stuff that is going to go over the wire
eventually as part of the resource tracker's expansion, then probably
so. If there are bits of the model that only serve to let the resource
tracker do its calculations, then perhaps it doesn't make sense to
require those be NovaObjects.

Regardless, it sounds like we need some discussion on how best to
proceed here. Since it's entirely wrapped up in the scheduler work, we
should definitely try to make sure that what we're doing here fits with
those plans. Last I heard, we weren't sure where we were going to draw
the line between nova bits and scheduler bits, so erring on the side of
"more versioned interfaces" seems safest to me.

--Dan





Re: [openstack-dev] [oslo] request_id deprecation strategy question

2014-10-20 Thread gordon chung
> The issue I'm highlighting is that those projects using the code now have
> to update their api-paste.ini files to import from the new location,
> presumably while giving some warning to operators about the impending
> removal of the old code.
This was the issue I ran into when trying to switch projects to oslo.middleware, 
where I couldn't get Jenkins to pass -- the grenade tests successfully did their 
job. We had a discussion on openstack-qa and it was suggested to add an upgrade 
script to grenade to handle the new reference and document the switch. [1]
If there's any issue with this solution, feel free to let us know.
[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2014-10-10.log
 (search for gordc)
cheers,
gord


Re: [openstack-dev] [Neutron] Why doesn't ml2-ovs work when it's "host" != the dhcp agent's host?

2014-10-20 Thread Robert Kukura

Hi Noel,

The ML2 plugin uses the binding:host_id attribute of port to control 
port binding. For compute ports, nova sets binding:host_id when 
creating/updating the neutron port, and ML2's openvswitch mechanism 
driver will look in agents_db to make sure the openvswitch L2 agent is 
running on that host, and that it has a bridge mapping for any needed 
physical network or has the appropriate tunnel type enabled. The 
binding:host_id attribute also gets set on DHCP, L3, and other agents' 
ports, and must match the host of the openvswitch-agent on that node or 
ML2 will not be able to bind the port. I suspect your configuration may 
be resulting in these not matching, and the DHCP port's binding:vif_type 
attribute being 'binding_failed'.


I'd suggest running "neutron port-show" as admin on the DHCP port to see 
what the values of binding_vif_type and binding:host_id are, and running 
"neutron agent-list" as admin to make sure there is an L2 agent on that 
node and maybe "neutron agent-show" as admin to get that agents config 
details.


-Bob


On 10/20/14 1:28 PM, Noel Burton-Krahn wrote:
I'm running OpenStack Icehouse with Neutron ML2/OVS.  I've configured 
the ml2-ovs-plugin on all nodes with host = the IP of the host 
itself.  However, my dhcp-agent may float from host to host for 
failover, so I configured it with host="floating".  That doesn't 
work.  In this case, the ml2-ovs-plugin creates a namespace and a tap 
interface for the dhcp agent, but OVS doesn't route any traffic to the 
dhcp agent.  It *does* work if the dhcp agent's host is the same as 
the ovs plugin's host, but if my dhcp agent migrates to another host, 
it loses its configuration since it now has a different host name.


So my question is, what does host mean for the ML2 dhcp agent, and how can
I get it to work if the dhcp agent's host != the ovs plugin's host?


Case 1: fails: running with dhcp agent's host = "floating", ovs 
plugin's host = IP-of-server

dhcp agent is running in netns created by ovs-plugin
dhcp agent never receives network traffic

Case 2: ok: running with dhcp agent's host = ovs plugin's host = 
IP-of-server
dhcp agent is running in netns created by ovs-plugin (different tap 
name than case 1)

dhcp agent works

--
Noel







[openstack-dev] [Nova] Cells conversation starter

2014-10-20 Thread Andrew Laski
One of the big goals for the Kilo cycle by users and developers of the 
cells functionality within Nova is to get it to a point where it can be 
considered a first class citizen of Nova.  Ultimately I think this comes 
down to getting it tested by default in Nova jobs, and making it easy 
for developers to work with.  But there's a lot of work to get there.  
In order to raise awareness of this effort, and get the conversation 
started on a few things, I've summarized a little bit about cells and 
this effort below.



Goals:

Testing of a single cell setup in the gate.
Feature parity.
Make cells the default implementation.  Developers write code once and 
it works for  cells.


Ultimately the goal is to improve maintainability of a large feature 
within the Nova code base.



Feature gaps:

Host aggregates
Security groups
Server groups


Shortcomings:

Flavor syncing
This needs to be addressed now.

Cells scheduling/rescheduling
Instances can not currently move between cells
These two won't affect the default one cell setup so they will be 
addressed later.



What does cells do:

Schedule an instance to a cell based on flavor slots available.
Proxy API requests to the proper cell.
Keep a copy of instance data at the global level for quick retrieval.
Sync data up from a child cell to keep the global level up to date.


Simplifying assumptions:

Cells will be treated as a two level tree structure.


Plan:

Fix flavor breakage in child cell which causes boot tests to fail. 
Currently the libvirt driver needs flavor.extra_specs which is not 
synced to the child cell.  Some options are to sync flavor and extra 
specs to child cell db, or pass full data with the request. 
https://review.openstack.org/#/c/126620/1 offers a means of passing full 
data with the request.


Determine proper switches to turn off Tempest tests for features that 
don't work with the goal of getting a voting job.  Once this is in place 
we can move towards feature parity and work on internal refactorings.


Work towards adding parity for host aggregates, security groups, and 
server groups.  They should be made to work in a single cell setup, but 
the solution should not preclude them from being used in multiple 
cells.  There needs to be some discussion as to whether a host aggregate 
or server group is a global concept or per cell concept.


Work towards merging compute/api.py and compute/cells_api.py so that 
developers only need to make changes/additions in once place.  The goal 
is for as much as possible to be hidden by the RPC layer, which will 
determine whether a call goes to a compute/conductor/cell.


For syncing data between cells, look at using objects to handle the 
logic of writing data to the cell/parent and then syncing the data to 
the other.


A potential migration scenario is to consider a non cells setup to be a 
child cell and converting to cells will mean setting up a parent cell 
and linking them.  There are periodic tasks in place to sync data up 
from a child already, but a manual kick off mechanism will need to be added.



Future plans:

Something that has been considered, but is out of scope for now, is that 
the parent/api cell doesn't need the same data model as the child cell.  
Since the majority of what it does is act as a cache for API requests, 
it does not need all the data that a cell needs and what data it does 
need could be stored in a form that's optimized for reads.



Thoughts?
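For anyone unfamiliar with the current setup, a minimal sketch of how a parent/child pair is wired with cells v1 today (the [cells] option names are from the existing code; the values are illustrative):

```ini
# nova.conf on the API (parent) cell
[cells]
enable = true
cell_type = api
name = parent

# nova.conf on the child (compute) cell
[cells]
enable = true
cell_type = compute
name = child01
```

The cells are then linked with `nova-manage cell create` on each side (exact flags vary by release).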



[openstack-dev] [Infra] Meeting Tuesday October 21st at 19:00 UTC

2014-10-20 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday October 21st, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2



[openstack-dev] [nova] Pulling nova/virt/hardware.py into nova/objects/

2014-10-20 Thread Jay Pipes

Hi Dan, Dan, Nikola, all Nova devs,

OK, so in reviewing Dan B's patch series that refactors the virt 
driver's get_available_resource() method [1], I am stuck between two 
concerns. I like (love even) much of the refactoring work involved in 
Dan's patches. They replace a whole bunch of our nested dicts that are 
used in the resource tracker with real objects -- and this is something 
I've been harping on for months that really hinders developer's 
understanding of Nova's internals.


However, all of the object classes that Dan B has introduced have been 
unversioned objects -- i.e. they have not derived from 
nova.objects.base.NovaObject. This means that these objects cannot be 
sent over the wire via an RPC API call. In practical terms, this issue 
has not yet reared its head, because the resource tracker still sends a 
dictified JSON representation of the object's fields directly over the 
wire, in the same format as Icehouse, therefore there have been no 
breakages in RPC API compatibility.


The problems with having all these objects not modelled by deriving from 
nova.objects.base.NovaObject are two-fold:


 * The object's fields/schema cannot be changed -- or rather, cannot be 
changed without introducing upgrade problems.
 * The objects introduce a different way of serializing the object 
contents than is used in nova/objects -- it's not that much different, 
but it's different, and has not caused a problem only because the 
serialization routines are not yet being used to transfer data over the wire.


So, what to do? Clearly, I think the nova/virt/hardware.py objects are 
badly needed. However, one of (the top?) priorities of the Nova project 
is upgradeability, and by not deriving from 
nova.objects.base.NovaObject, these nova.virt.hardware objects are 
putting that mission in jeopardy, IMO.


My proposal is that before we go and approve any BPs or patches that add 
to nova/virt/hardware.py, we first put together a patch series that 
moves the object models in nova/virt/hardware.py to being full-fledged 
objects in nova/objects/*
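For readers outside Nova, the upgrade concern is about versioned serialization. A toy sketch of the pattern (these class and field names are invented for illustration; they are not Nova's actual classes) shows why carrying a version tag with the data matters:

```python
# Minimal stand-in for the versioned-object pattern used by
# nova.objects.base.NovaObject (names are illustrative, not Nova's).
class VersionedObject:
    VERSION = '1.0'
    fields = ()

    def obj_to_primitive(self):
        # The version travels with the data, so an older receiver can
        # detect (and possibly downgrade) a payload it doesn't understand.
        return {
            'versioned_object.version': self.VERSION,
            'versioned_object.data': {f: getattr(self, f) for f in self.fields},
        }


class CPUTopology(VersionedObject):
    VERSION = '1.1'  # bumped when the 'threads' field was added
    fields = ('sockets', 'cores', 'threads')

    def __init__(self, sockets, cores, threads):
        self.sockets, self.cores, self.threads = sockets, cores, threads


topo = CPUTopology(sockets=2, cores=8, threads=2)
primitive = topo.obj_to_primitive()
print(primitive['versioned_object.version'])  # 1.1
```

An unversioned object has no equivalent hook: once its dict hits the wire, the receiver cannot tell which schema it was built against.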


Thoughts?

-jay

[1] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/virt-driver-get-available-resources-object,n,z




[openstack-dev] [Neutron] Why doesn't ml2-ovs work when it's "host" != the dhcp agent's host?

2014-10-20 Thread Noel Burton-Krahn
I'm running OpenStack Icehouse with Neutron ML2/OVS.  I've configured the
ml2-ovs-plugin on all nodes with host = the IP of the host itself.
However, my dhcp-agent may float from host to host for failover, so I
configured it with host="floating".  That doesn't work.  In this case, the
ml2-ovs-plugin creates a namespace and a tap interface for the dhcp agent,
but OVS doesn't route any traffic to the dhcp agent.  It *does* work if the
dhcp agent's host is the same as the ovs plugin's host, but if my dhcp
agent migrates to another host, it loses its configuration since it now has
a different host name.

So my question is, what does host mean for the ML2 dhcp agent and host can
I get it to work if the dhcp agent's host != host for the ovs plugin?

Case 1: fails: running with dhcp agent's host = "floating", ovs plugin's
host = IP-of-server
dhcp agent is running in netns created by ovs-plugin
dhcp agent never receives network traffic

Case 2: ok: running with dhcp agent's host = ovs plugin's host =
IP-of-server
dhcp agent is running in netns created by ovs-plugin (different tap name
than case 1)
dhcp agent works
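For anyone hitting the same thing, the setting in question is the `host` value each agent reports; a sketch of the two cases above (file paths and values are illustrative):

```ini
# /etc/neutron/dhcp_agent.ini (host is also read from neutron.conf)
# Case 2 (works): both agents report the same host, so ports bound by
# the DHCP agent are wired up by the OVS agent on the same node.
[DEFAULT]
host = 10.0.0.5

# Case 1 (fails): the DHCP agent reports host = "floating" while the
# OVS agent reports the node IP, so the L2 agent never treats the tap
# as belonging to its host and never wires it into the flows.
# host = floating
```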

--
Noel


Re: [openstack-dev] [novaclient] E12* rules

2014-10-20 Thread Joe Gordon
On Fri, Oct 17, 2014 at 6:40 AM, Andrey Kurilin 
wrote:

> Hi everyone!
>
> I'm working on enabling E12* PEP8 rules in novaclient(status of my work
> listed below). Imo, PEP8 rules should be ignored only in extreme cases/for
> important reasons and we should decrease a number of ignored rules. This
> helps to keep code in more strict, readable form, which is very important
> when working in community.
>
> While working on rule E126, we started discussion with Joe Gordon about
> demand of these rules. I have no idea about reasons of why they should be
> ignored, so I want to know:
> - Why these rules should be ignored?
> - What do you think about enabling these rules?
>

I found the source of my confusion. See my inline comments in
https://review.openstack.org/#/c/122888/10/tox.ini

Hopefully this patch should clarify things:
https://review.openstack.org/129677
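For context, enabling one of these rules is just a matter of dropping its code from the ignore list in tox.ini (the fragment below is illustrative; the real list differs per project):

```ini
# tox.ini
[flake8]
# Before: the whole E12x family is suppressed repo-wide.
# ignore = E12
# After: only the rules the codebase still can't satisfy remain ignored.
ignore = E124
show-source = True
```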



>
> Please, leave your opinion about E12* rules.
>
> Already enabled rules:
>   E121,E125 - https://review.openstack.org/#/c/122888/
>   E122 - https://review.openstack.org/#/c/123830/
>   E123 - https://review.openstack.org/#/c/123831/
>
> Abandoned rule:
>   E124 - https://review.openstack.org/#/c/123832/
>
> Pending review:
>   E126 - https://review.openstack.org/#/c/123850/
>   E127 - https://review.openstack.org/#/c/123851/
>   E128 - https://review.openstack.org/#/c/127559/
>   E129 - https://review.openstack.org/#/c/123852/
>
>
> --
> Best regards,
> Andrey Kurilin.
>


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-20 Thread Ed Leafe

On 10/16/2014 11:09 PM, Joe Gordon wrote:
> First step in fixing this, put a cap on it: 
> https://review.openstack.org/129125

To get the maximums down to more reasonable levels, I have two patches
that bring the libvirt driver _get_guest_config() from 67 down to 40,
and nova/tests/db/fakes.py stub_out_db_network_api from 57 down to 2.

https://review.openstack.org/#/c/129325/
https://review.openstack.org/#/c/129648/

These are just refactorings that will allow the maximum set in
https://review.openstack.org/#/c/129125/ to be set to something a little
saner than 68!
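For anyone wanting to reproduce the numbers locally, the cap uses flake8's built-in McCabe checker; a sketch of pinning it (the threshold below is illustrative):

```ini
# tox.ini -- enforce a cyclomatic complexity ceiling via flake8's
# bundled McCabe plugin (same check as `flake8 --max-complexity=40`):
[flake8]
max-complexity = 40
```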


-- Ed Leafe



[openstack-dev] [oslo] proposed summit session topics

2014-10-20 Thread Doug Hellmann
After today’s meeting, we have filled our seven session slots. Here’s the 
proposed list, in no particular order. If you think something else needs to be 
on the list, speak up today because I’ll be plugging all of this into the 
scheduling tool in the next day or so.

https://etherpad.openstack.org/p/kilo-oslo-summit-topics

* oslo.messaging
  * need more reviewers
  * what to do about keeping drivers up to date / moving them out of the main 
tree
  * python 3 support

* Graduation schedule

* Python 3
  * what other than oslo.messaging / eventlet should (or can) we be working on?

* Alpha versioning

* Namespace packaging

* Quota management
  * What should the library do?
  * How do we manage database schema info from the incubator or a library if 
the app owns the migration scripts?

* taskflow
  * needs more reviewers
  * removing duplication with other oslo libraries

Doug




Re: [openstack-dev] [Ironic][Ceilometer] Proposed Change to Sensor meter naming in Ceilometer

2014-10-20 Thread Chris Dent

On Mon, 20 Oct 2014, Jim Mankovich wrote:

On 10/20/2014 6:53 AM, Chris Dent wrote:

On Fri, 17 Oct 2014, Jim Mankovich wrote:

See answers inline. I don't have any concrete answers as to how to deal
with some of questions you brought up, but I do have some more detail
that may be useful to further the discussion.


That seems like progress to me.


And thanks for keeping it going some more. I'm going to skip your
other (very useful) comments and go (almost) straight (below) to
one thing which goes to the root of the queries I've been making.

Most of the rest of what you said makes sense and we seem to be
mostly in agreement. I suppose the next step would be propose a
spec? https://github.com/openstack/ceilometer-specs


We have 2 use cases,
Get all the sensors within a given platform (based on ironic node id)
Get all the sensors of a given "type/name", independent of platform
Others?


These are not use cases, these are tasks. That's because these say
nothing about the thing you are actually trying to achieve. "Get all
the sensors with a given platform" is a task without a purpose.
You're not just going to stop there are you? If so why did you get
the information in the first place.

A use case could be:

* I want to get all the sensors of a given platform so I can .

Or even better something like:

* I want to .

And the way to do that would just so happen to be getting all the
sensors.

I realize this is perhaps pedantic hair-splitting, but I think it
can be useful at least some of the time. I know that from my own
experience I am very rarely able to get the Ceilometer API to give
me the information that I actually want (e.g. "How many vcpus are
currently in action?"). This feels like the result of data availability
driving the query engine rather than vice versa.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



[openstack-dev] [mistral] Team meeting minutes - 10/20/2014

2014-10-20 Thread Renat Akhmerov
Thanks for joining us today at #openstack-meeting.

Meeting minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-10-20-16.02.html
 

Full log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-10-20-16.02.log.html
 


The next meeting will be on Oct 27 at the same time.

Renat Akhmerov
@ Mirantis Inc.





[openstack-dev] [Ironic] Summit scheduling

2014-10-20 Thread Devananda van der Veen
As hopefully everyone is aware by now, the format of this summit will
be somewhat different from previous summits. Monday will be dedicated
to an Ops Summit, and Tuesday is dedicated to cross-project
discussions. Wednesday and Thursday are project-specific design
tracks, and on Friday projects have informal meetups for either half
or the whole day.

Ironic will have one slot in the Ops summit, five slots in the Dev
summit, an unconference-style half day on Friday, and a pod for the
week where we can gather to discuss or hack on code, review specs, or
otherwise plan to take over the world [1]. The five slots in the dev
summit will have the most external visibility and so present the
greatest opportunity for us to get feedback from and share our plans
with the wider community of users, driver developers, potential new
developers, and core developers on other OpenStack projects.

We've had about 30 proposals for discussion topics [2], and so we'll
use the meeting time today to go over that. Keeping in mind that I
would like to schedule each topic in a time and space where we will
get the most benefit from engaging with that particular type of
audience, I have prepared a draft schedule, on the second tab of [2].

I have also noted any presentations that I've found in the main
conference on the third tab; please add to it if I missed any.


Regards,
Devananda



[1] https://www.youtube.com/watch?v=MHn0pBePN7I

[2] 
https://docs.google.com/spreadsheets/d/1XBKdeDeGfaRYaThjIIoYRwe_zPensECnxsKUuqdoVmQ



Re: [openstack-dev] [oslo] request_id deprecation strategy question

2014-10-20 Thread Sandy Walsh
Phew :)

Thanks Steve. 

From: Steven Hardy [sha...@redhat.com]
Sent: Monday, October 20, 2014 12:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo] request_id deprecation strategy question

On Mon, Oct 20, 2014 at 02:17:54PM +, Sandy Walsh wrote:
> Does this mean we're losing request-id's?

No, it just means the implementation has moved from oslo-incubator[1] to
oslo.middleware[2].

The issue I'm highlighting is that those projects using the code now have
to update their api-paste.ini files to import from the new location,
presumably while giving some warning to operators about the impending
removal of the old code.

All I'm seeking to clarify is the most operator sensitive way to handle
this transition, given that we seem to have missed the boat on including a
nice deprecation warning for Juno.

Steve

[1] 
https://github.com/openstack/oslo-incubator/blob/stable/juno/openstack/common/middleware/request_id.py#L33
[2] 
https://github.com/openstack/oslo.middleware/blob/master/oslo/middleware/request_id.py



Re: [openstack-dev] [oslo] request_id deprecation strategy question

2014-10-20 Thread Steven Hardy
On Mon, Oct 20, 2014 at 02:17:54PM +, Sandy Walsh wrote:
> Does this mean we're losing request-id's?

No, it just means the implementation has moved from oslo-incubator[1] to
oslo.middleware[2].

The issue I'm highlighting is that those projects using the code now have
to update their api-paste.ini files to import from the new location,
presumably while giving some warning to operators about the impending
removal of the old code.

All I'm seeking to clarify is the most operator sensitive way to handle
this transition, given that we seem to have missed the boat on including a
nice deprecation warning for Juno.

Steve

[1] 
https://github.com/openstack/oslo-incubator/blob/stable/juno/openstack/common/middleware/request_id.py#L33
[2] 
https://github.com/openstack/oslo.middleware/blob/master/oslo/middleware/request_id.py
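Concretely, the api-paste.ini change looks something like this (the factory paths are illustrative and vary slightly per project and release; check the two source files above for the authoritative class names):

```ini
# api-paste.ini
[filter:request_id]
# Old (oslo-incubator copy, to be deprecated):
# paste.filter_factory = nova.openstack.common.middleware.request_id:RequestIdMiddleware.factory
# New (oslo.middleware library):
paste.filter_factory = oslo.middleware.request_id:RequestId.factory
```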



Re: [openstack-dev] [Ironic][Ceilometer] Proposed Change to Sensor meter naming in Ceilometer

2014-10-20 Thread Jim Mankovich


On 10/20/2014 6:53 AM, Chris Dent wrote:

On Fri, 17 Oct 2014, Jim Mankovich wrote:


See answers inline. I don't have any concrete answers as to how to deal
with some of questions you brought up, but I do have some more detail
that may be useful to further the discussion.


That seems like progress to me.


Personally, I would like to see the _(0x##) removed form the Sensor ID
string (by the ipmitool driver) before it returns sensors to the
Ironic conductor. I just don't see any value in this extra info. This
0x## addition only helps if a vendor used the exact same Sensor ID
string for multiple sensors of the same sensor type. i.e. Multiple
sensors of type "Temperature", each with the exact same Sensor ID
string of "CPU" instead of giving each Sensor ID string a unique name
like "CPU 1 ", " CPU 2",...


Is it worthwhile metadata to save, even if it isn't in the meter
name?

Removing the _(0x##) from the sensor name and keeping the _(0x##) in the
metadata Sensor ID string works for me.
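The suffix stripping being discussed is trivial to do before the conductor sees the sensor; a sketch (the exact `_(0x##)` suffix format is assumed from this thread, not verified against the ipmitool driver):

```python
import re

def strip_sensor_suffix(sensor_id):
    """Drop a trailing _(0x##) disambiguator from an IPMI Sensor ID string."""
    return re.sub(r'[ _]?\(0x[0-9A-Fa-f]+\)$', '', sensor_id)

print(strip_sensor_suffix('System Board 1_(0x30)'))  # System Board 1
print(strip_sensor_suffix('CPU'))                    # CPU
```

The stripped suffix could still be kept in the sample metadata, as suggested above.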



In a heterogeneous platform environment, the Sensor ID string is
likely going to be different per vendor, so your question "If
temperate...on any system board...on any hardware, notify the
authorities" is going to be tough because each vendor may name their
"system board" differently. But, I bet that vendors use similar
strings, so worst case, your alarm creation could require 1 alarm
definition per vendor.


The alarm definition I want to make is (as an operator not as a dev):
"My puter's too hot, hlp!"

Making that easy is the proper (to me) endpoint of a conversation
about how to name meters.

I understand your operator example, but it could be the case that every
vendor's puter has a different definition of its "too hot" temperature.
If you are going to act on puters that are too hot, you might believe
there is a heat problem with a puter if you lump everything together,
but I guess that's an operator's choice. It's not really clear to me
that this query makes practical sense even though it seems like a
logical query to want to make.

Note: I'm trying to provide puter health information so an operator can
easily query "platform.health.overall" to determine whether or not a puter
is healthy, and if you really care why, you can dig deeper into individual
standard puter components like "platform.health.fan",
"platform.health.temperature", ...  I think this would enable the kind of
generic query across platforms that you are thinking about.  Health is
generated in a vendor- and platform-specific way by interpretation of all
the different sensors.  Other vendors than HP could provide these meters,
and then the query you are proposing would make both logical and practical
sense.




I see generic naming as somewhat problematic. If you lump all the
temperature sensors for a platform under hardware.temperature the
consumer will always need to query for a specific temperature sensor
that it is interested in, like "system board". The notion of having
different samples from multiple sensors under a single generic name
seems harder to deal with to me. If you have multiple temperature
samples under the same generic meter name, how do you figure out what
all the possible temperature samples actually exist?


I'm not suggesting all temperature sensors under one name
("hardware.temperature"), but all sensors which identify as the same
thing (e.g. "hardware.temperature.system_board") under the same name.


Good.

I'm not very informed about IMPI or hardware sensors, but I do have
some experiencing in using names and identifiers (don't we all!) and
I find that far too often we name things based on where they come
from rather than how we wish to address them after genesis.

I understand wanting to name sensors based on how you want to
address them, but interpretation of them once you've addressed
them is going to be vendor dependent.

Throughout ceilometer I think there are tons of opportunities to
improve the naming of meters and as a result improve the UI for
people who want to do things with the data.

So from my perspective, with regard to naming IPMI (and other hardware
sensor) related samples, I think we need to make a better list of the
use cases which the samples need to satisfy and use that to drive a
naming scheme.


We have 2 use cases,
Get all the sensors within a given platform (based on ironic node id)
Get all the sensors of a given "type/name", independent of platform
Others?






Re: [openstack-dev] [api] Request Validation - Stoplight

2014-10-20 Thread Salvatore Orlando
On 20 October 2014 15:38, Jay Pipes  wrote:

> On 10/20/2014 10:26 AM, Amit Gandhi wrote:
>
>> Thanks for the clarification Sam.
>>
>> Its good to know where the mission of the API working group starts and
>> stops.  During the meetup discussions, my understanding was that the
>> working group would recommend the technologies to use while building apis
>> (e.g. Pecan, validation frameworks, etc) and were in the process of
>> looking into tools such as warlock.
>>
>
> Sorry, did I miss something? What meetup discussions are you referring to?
> I'm not aware of any meetings of the API working group so far...


The original poster was probably referring to conversations held in
openstack meetups around the world.


>
>
> > Hence the recommendation to add
>
>> another library into the mix for evaluation, based on advise by other
>> stackers in the community.
>>
>> Your response clarifies that the aim of the API working group is just to
>> recommend on standardizing the interfaces from various API's (which I am
>> looking forward to) and not the libraries used to implement that
>> interface.
>>
>
> I don't really think the working group has decided yet what it will be
> producing, with regards to recommendations and what topics it may provide
> guidance on. Heck, AFAIK, we still haven't settled on a day of the week and
> time to hold IRC meetings! ;)


Eh... a working group on API standards that can't even achieve consensus on
a day of the week!


>
>
>  For stackers who are interested in different validation frameworks to
>> implement validation, I recommend checking out Stoplight.
>>
>
> Just my two cents on this particular topic, I think it's more important to
> standardize ways in which our public REST APIs expose the payload
> expectations and response schemas to clients. In other words... we need to
> focus on methods for API discovery. Once you have standardized resource
> URI, request payload, and response schema discovery, then any number of
> validation libraries may be used.
>

I completely agree with Jay. The mission has not yet been scoped, but my
feeling is that technologies and/or frameworks won't be something that will
be addressed.
My personal opinion is that this group will need to focus on the API seen
as the way in which consumers interact with Openstack, try and provide a
common experience across Openstack API endpoints, and possibly improve the
consumer experience by reducing nonsense and antipatterns in our API.
How this is implemented - and hence which technologies and frameworks
should be used - is possibly not really in the scope of this group.

Salvatore


> Best,
> -jay
>
>


Re: [openstack-dev] [api] Request Validation - Stoplight

2014-10-20 Thread Amit Gandhi


On 10/20/14, 10:38 AM, "Jay Pipes"  wrote:

>On 10/20/2014 10:26 AM, Amit Gandhi wrote:
>> Thanks for the clarification Sam.
>>
>> Its good to know where the mission of the API working group starts and
>> stops.  During the meetup discussions, my understanding was that the
>> working group would recommend the technologies to use while building
>>apis
>> (e.g. Pecan, validation frameworks, etc) and were in the process of
>> looking into tools such as warlock.
>
>Sorry, did I miss something? What meetup discussions are you referring
>to? I'm not aware of any meetings of the API working group so far...

Sorry, I was referring to the local Atlanta Openstack Meetup that happened
last Thursday (mentioned in my initial email that started this thread).

>
> > Hence the recommendation to add
>> another library into the mix for evaluation, based on advise by other
>> stackers in the community.
>>
>> Your response clarifies that the aim of the API working group is just to
>> recommend on standardizing the interfaces from various API's (which I am
>> looking forward to) and not the libraries used to implement that
>>interface.
>
>I don't really think the working group has decided yet what it will be
>producing, with regards to recommendations and what topics it may
>provide guidance on. Heck, AFAIK, we still haven't settled on a day of
>the week and time to hold IRC meetings! ;)

Okay good to know.  That probably explains why I'm hearing different
things from different people who probably all have different visions of
what the working group is.

>
>> For stackers who are interested in different validation frameworks to
>> implement validation, I recommend checking out Stoplight.
>
>Just my two cents on this particular topic, I think it's more important
>to standardize ways in which our public REST APIs expose the payload
>expectations and response schemas to clients. In other words... we need
>to focus on methods for API discovery. Once you have standardized
>resource URI, request payload, and response schema discovery, then any
>number of validation libraries may be used.

+1

>
>Best,
>-jay
>




Re: [openstack-dev] [api] Request Validation - Stoplight

2014-10-20 Thread Jay Pipes

On 10/20/2014 10:26 AM, Amit Gandhi wrote:

Thanks for the clarification Sam.

Its good to know where the mission of the API working group starts and
stops.  During the meetup discussions, my understanding was that the
working group would recommend the technologies to use while building apis
(e.g. Pecan, validation frameworks, etc) and were in the process of
looking into tools such as warlock.


Sorry, did I miss something? What meetup discussions are you referring 
to? I'm not aware of any meetings of the API working group so far...


> Hence the recommendation to add

another library into the mix for evaluation, based on advise by other
stackers in the community.

Your response clarifies that the aim of the API working group is just to
recommend on standardizing the interfaces from various API's (which I am
looking forward to) and not the libraries used to implement that interface.


I don't really think the working group has decided yet what it will be 
producing, with regards to recommendations and what topics it may 
provide guidance on. Heck, AFAIK, we still haven't settled on a day of 
the week and time to hold IRC meetings! ;)



For stackers who are interested in different validation frameworks to
implement validation, I recommend checking out Stoplight.


Just my two cents on this particular topic, I think it's more important 
to standardize ways in which our public REST APIs expose the payload 
expectations and response schemas to clients. In other words... we need 
to focus on methods for API discovery. Once you have standardized 
resource URI, request payload, and response schema discovery, then any 
number of validation libraries may be used.
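As a concrete illustration of that ordering (standardize discovery of the payload expectations first, then plug in any validation library behind it), here is a stdlib-only sketch; the schema shape below is invented for the example and is not any OpenStack artifact:

```python
# A "discovered" request schema, as an API endpoint might advertise it.
CREATE_SERVER_SCHEMA = {
    'required': ['name', 'flavorRef'],
    'types': {'name': str, 'flavorRef': str, 'metadata': dict},
}

def validate(payload, schema):
    """Return a list of problems; an empty list means the payload passes."""
    errors = [f"missing required field: {f}"
              for f in schema['required'] if f not in payload]
    errors += [f"wrong type for {k}"
               for k, v in payload.items()
               if k in schema['types'] and not isinstance(v, schema['types'][k])]
    return errors

print(validate({'name': 'vm1', 'flavorRef': 'm1.small'}, CREATE_SERVER_SCHEMA))  # []
print(validate({'name': 42}, CREATE_SERVER_SCHEMA))
```

The point is that once the schema itself is discoverable in a standard form, `validate` here could be Stoplight, jsonschema, warlock, or anything else without the API contract changing.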


Best,
-jay



Re: [openstack-dev] Horizon and Keystone: API Versions and Discovery

2014-10-20 Thread Dolph Mathews
On Mon, Oct 20, 2014 at 7:04 AM, Jamie Lennox 
wrote:

>
>
> - Original Message -
> > From: "Dolph Mathews" 
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> > Sent: Tuesday, October 7, 2014 6:56:15 PM
> > Subject: Re: [openstack-dev] Horizon and Keystone: API Versions and
> Discovery
> >
> >
> >
> > On Tuesday, October 7, 2014, Adam Young < ayo...@redhat.com > wrote:
> >
> >
> > Horizon has a config options which says which version of the Keystone
> API it
> > should work against: V2 or V3. I am not certain that there is still any
> > reason for Horizon to go against V2. However, If we defer the decision to
> > Keystone, we come up against the problem of discovery.
> >
> > On the surface it is easy, as the Keystone client supports version
> discovery.
> > The problem is that discovery must be run for each new client creation,
> and
> > Horizon uses a new client per request. That would mean that every
> request to
> > Horizon that talks to Keystone would generate at least one additional
> > request.
> >
> >
> >
> > The response is cacheable.
>
> Not only is it cachable it is cached by default within the Session object
> you use so that the session will only make one discovery request per
> service per session. So horizon can manage how long to cache discovery for
> by how long they hold on to a session object. As the session object doesn't
> contain any personal or sensitive date (that is all restricted to the auth
> plugin) the session object can be persisted between requests.
>

Is there any reason not to cache to disk across sessions? The GET response
is entirely endpoint-specific, not exactly session-based.
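The caching Dolph describes — keyed by endpoint rather than by session — can be sketched as a small max-age cache. This is an illustrative pattern, not keystoneclient's actual implementation; the class and method names are invented for the example:

```python
import time


class DiscoveryCache:
    """Cache version-discovery documents per endpoint URL.

    Honors a max-age in seconds, mimicking HTTP Cache-Control semantics,
    so creating a new client per request does not re-run discovery.
    """

    def __init__(self, max_age=300):
        self.max_age = max_age
        self._cache = {}  # endpoint URL -> (fetched_at, document)

    def get(self, endpoint, fetch):
        """Return the cached document for endpoint, calling fetch() on a miss."""
        entry = self._cache.get(endpoint)
        now = time.time()
        if entry is not None and now - entry[0] < self.max_age:
            return entry[1]
        document = fetch()
        self._cache[endpoint] = (now, document)
        return document
```

Because the cached value is endpoint-specific and contains nothing sensitive, the same idea extends naturally to an on-disk cache shared across sessions.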


>
> Whether or not horizon works that way today - and whether the other
> services work with discovery as well as keystone does i'm not sure.
>
> >
> > Is this significant?
> >
> > It gets a little worse when you start thinking about all of the other
> > services out there. If each new request that has to talk to multiple
> > services needs to run discovery, you can imagine that soon the majority of
> > network chatter would be discovery based.
> >
> >
> > It seems to me that Horizon should somehow cache this data, and share it
> > among clients. Note that I am not talking about user specific data like
> the
> > endpoints from the service catalog for a specific project. But the
> overall
> > service catalog, as well as the supported versions of the API, should be
> > cacheable. We can use the standard HTTP cache management API on the
> Keystone
> > side to specify how long Horizon can trust the data to be current.
> >
> > I think this actually goes for the rest of the endpoints as well: we
> want to
> > get to a much smaller service catalog, and we can do that by making the
> > catalog hold only IDs. The constraints spec for endpoint binding will be
> > endpoint only anyway, and so having the rest of the endpoint data cached
> > will be valuable there as well.
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Summit scheduling - using our time together wisely.

2014-10-20 Thread Ben Nemec
I guess my only concern would be whether either of those things are
contentious (both sound like must-do's at some point) and whether there
is anything on either topic that requires f2f conversation to resolve.
There's a spec out for Cinder HA already
(https://review.openstack.org/#/c/101237/) that seems to have at least
general support from everyone, and it's not clear to me that the L3 one
can be resolved by us.  It sounds like we need Nova and Neutron changes
for that.  Of course, if we can get some Nova and Neutron folks to
commit to attending that session then I could see that being helpful.

In general both of those topics on the etherpad are a little light on
details, so I'd personally like to see some more specifics on what we'd
be talking about.

-Ben

On 10/16/2014 03:14 PM, Clint Byrum wrote:
> The format has changed slightly this summit, to help encourage a more
> collaborative design experience, rather than rapid fire mass-inclusion
> summit sessions. So we have two 40-minute long slots, and one whole day
> of contributor meetup.[1]
> 
> Our etherpad topics page has received quite a few additions now [2], and
> so I'd like to hear thoughts on what things we want to talk about in the
> meetup versus the sessions.
> 
> A few things I think we should stipulate:
> 
> * The scheduled sessions will be heavily attended by the community at
>   large. This often includes those who are just curious, or those who
>   want to make sure that their voice is heard. These sessions should be
>   reserved for those topics which have the most external influence or
>   are the most dependent on other projects.
> 
> * The meetup will be at the end of the week so at the end of it, we
>   can't then go to any other meetups and ask for things / participate
>   in those design activities. This reinforces that scheduled session
>   time should be focused on things that are externally focused so that
>   we can take the result of those discussions into any of the sessions
>   that are after.
> 
> * The Ops Summit is Wednesday/Thursday [3], which overlaps with these
>   sessions. I am keenly interested in gathering more contribution from
>   those already operating and deploying OpenStack. It can go both ways,
>   but I think it might make sense to have more ops-centric topics
>   discussed on Friday, when those participants might not be fully
>   wrapped up in the ops sessions.
> 
> If we can all agree on those points, given the current topics, I think
> our scheduled sessions should target at least (but not limited to):
> 
> * Cinder + CEPH
> * Layer 3 segmentation
> 
> I think those might fit into 40 minutes, as long as we hash some things
> out here on the mailing list first. Cinder + CEPH is really just a
> check-in to make sure we're on track to providing it. Layer 3, I've had
> discussions with Ironic and Neutron people and I think we have a plan,
> but I wanted to present it in the open and discuss the short term goals
> to see if it satisfies what users may want for the Kilo time frame.
> 
> So, I would encourage you all to look at the etherpad, and expand on
> topics or add more, and then reply to this thread with ideas for how
> best to use our precious time together.
> 
> [1] http://kilodesignsummit.sched.org/overview/type/tripleo
> [2] https://etherpad.openstack.org/p/kilo-tripleo-summit-topics
> [3] http://kilodesignsummit.sched.org/overview/type/ops+summit
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Pluggable framework in Fuel: first prototype ready

2014-10-20 Thread Evgeniy L
Hi guys,

*Romans' questions:*

>> I feel like we should not require user to unpack the plugin before
installing it.
>> Moreover, we may chose to distribute plugins in our own format, which we
>> may potentially change later. E.g. "lbaas-v2.0.fp".

I like the idea of putting the plugin installation functionality in the fuel
client, which is installed on the master node.
But in the current version plugin installation requires file operations on
the master, so we can run into problems if the user's fuel-client is
installed in another environment.
What we can do is try to determine where the fuel-client is installed: if
it's the master node, we can perform the installation; if it isn't, we can
show the user a message that remote plugin installation is not supported in
the current version.
In later versions, once we implement a plugin manager (which is a separate
service for plugin management), we will be able to do it remotely.

>> How are we planning to distribute fuel plugin builder and its updates?

Yes, as Mike mentioned, our plan is to release it on PyPI, the Python package
repository, so any developer will be able to run `pip install fpb` and get
the tool.

>> What happens if an error occurs during plugin installation?

The plugin installation process is very simple; our plan is to have some kind
of transaction, to make it atomic.

1. register plugin via API
2. copy the files

In case of an error on the 1st step, we do nothing; in case of an error on
the 2nd step, we remove the files if there are any, delete the plugin via the
REST API, and show the user a message.
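The register-then-copy flow with rollback described above could look roughly like this. The API methods, paths, and metadata format are illustrative, not the actual Fuel code:

```python
import os
import shutil


def install_plugin(api, plugin_meta, src_dir,
                   plugins_root="/var/www/nailgun/plugins"):
    """Register a plugin via the API, then copy its files into place.

    If the copy step fails, roll back: remove any partially copied files
    and delete the registration, keeping the install effectively atomic.
    """
    plugin_id = api.register_plugin(plugin_meta)   # step 1: register via API
    dst_dir = os.path.join(plugins_root, plugin_meta["name"])
    try:
        shutil.copytree(src_dir, dst_dir)          # step 2: copy the files
    except OSError:
        if os.path.isdir(dst_dir):
            shutil.rmtree(dst_dir)                 # undo a partial copy
        api.delete_plugin(plugin_id)               # undo the registration
        raise
    return plugin_id
```

On failure the caller sees the original exception, and the master node is left in the same state as before the install was attempted.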

>> What happens if an error occurs during plugin execution?

In the first iteration we are going to interrupt the deployment if there are
any errors in a plugin's tasks. We are also thinking about how to improve
this; for example, we wanted to provide a special flag for each task, like
fail_deployment_on_error, and fail the deployment on a failed task only if
it's true. But that can be tricky to implement, as it requires changing the
current orchestrator/nailgun error handling logic. So I'm not sure if we can
implement this logic in the first release.

Regarding meaningful error messages: yes, we want to show the user which
plugin caused the error.

>> Shall we consider a separate place in UI (tab) for plugins?

+1 to Mike's answer

>> When are we planning to focus on the 2 plugins which were identified as
must-haves
>> for 6.0? Cinder & LBaaS

For Cinder we are going to implement a plugin which configures GlusterFS as
the cinder backend. So, if the user has a GlusterFS cluster installed, we can
configure our cinder to work with it. I want to mention that we don't install
GlusterFS nodes; we just configure cinder to work with the user's GlusterFS
cluster.
Stanislaw B. already wrote some scripts which configure cinder to work with
GlusterFS, so we are at the testing stage.

Regarding LBaaS, Stanislaw B. did the multinode implementation; the HA
implementation is tricky and requires some additional work, which we are
working on.

Nathan's questions:

Looks like Mike answered UI related questions.

>> Do we offer any kind of validation for settings on plug-ins? Or some way
>> for the developer to ensure that settings that cannot be defaulted or
>> computed get requested for the plug-in?

Yes, each field can have a regexp which is used during validation.
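A field-level regexp check of the kind described might look like this minimal sketch. The schema shape (`regex` with `source` and `error` keys) is an assumption for illustration, not necessarily Fuel's actual format:

```python
import re


def validate_fields(schema, values):
    """Validate each value against its field's regexp from the schema.

    Returns a dict mapping field name to error message for every field
    whose value does not match; an empty dict means validation passed.
    """
    errors = {}
    for name, field in schema.items():
        regex = field.get("regex")
        if regex and not re.match(regex["source"], str(values.get(name, ""))):
            errors[name] = regex.get("error", "invalid value for %s" % name)
    return errors
```

This keeps the validation declarative: the plugin author only supplies the pattern and the message shown when it fails.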

*Mike's questions:*

>> One minor thing from me, which I forgot to mention during the demo:
verbosity of fpb run. I
>> understand it might sound like a bikeshedding now, but I believe if we
develop it right from
>> the very beginning, then we can save some time later. So I would suggest
normal, short INFO
>> output, and verbose one with --debug.

Agree.

Thanks for your feedback,

On Sun, Oct 19, 2014 at 1:11 PM, Mike Scherbakov 
wrote:

> Hi all,
> I moved this conversation to openstack-dev to get a broader audience,
> since we started to discuss technical details.
>
> Raw notes from demo session:
> https://etherpad.openstack.org/p/cinder-neutron-plugins-second-demo.
>
> Let me start answering on a few questions below from Roman & Nathan.
>
>> How are we planning to distribute fuel plugin builder and its updates?
>> Ideally, it should be available externally (outside of master node). I
>> don't want us to repeat the same mistake as we did with Fuel client, which
>> doesn't seem to be usable as an external dependency.
>
> The plan was to have Fuel Plugin Builder (fpb) on PyPI. Ideally it should
> be backward compatible with older Fuel release, i.e. when there is Fuel 7.0
> out, you should be still able to create plugin for Fuel 6.0. If that it is
> going to be overcomplicated - I suggested to produce fpb for every Fuel
> release, and name it like fpb60, fpb61, fpb70, etc. Then it becomes easier
> to support and maintain plugin builders for certain versions of Fuel.
> Speaking about Fuel Client - there is no mistake. It's been discussed
> dozens of times, it's just lack of resources to make it on PyPI as well as
> to fix a few other things. I hope it could be done a

Re: [openstack-dev] [api] Request Validation - Stoplight

2014-10-20 Thread Amit Gandhi
Thanks for the clarification Sam.

It's good to know where the mission of the API working group starts and
stops.  During the meetup discussions, my understanding was that the
working group would recommend the technologies to use while building APIs
(e.g. Pecan, validation frameworks, etc.) and was in the process of
looking into tools such as warlock.  Hence the recommendation to add
another library into the mix for evaluation, based on advice from other
stackers in the community.

Your response clarifies that the aim of the API working group is to
recommend standards for the interfaces of the various APIs (which I am
looking forward to), and not the libraries used to implement those
interfaces.

For stackers who are interested in different validation frameworks to
implement validation, I recommend checking out Stoplight.

Thanks

Amit Gandhi.




On 10/20/14, 9:36 AM, "Michael McCune"  wrote:

>+1
>
>that's a great way to state it Sam.
>
>regards,
>mike
>
>- Original Message -
>> Hi Amit,
>> 
>> Keeping in mind this viewpoint is nothing but my own personal view, my
>> recommendation would be to not mandate the use of a particular
>>validation
>> framework, but to instead define what kind of validation clients should
>> expect the server to perform in general. For example, I would expect a
>> service to return an error code and not perform any action if I called
>> "Create server" but did not include a request body, but the actual
>>manner in
>> which that error is generated within the service does not matter from
>>the
>> client's perspective.
>> 
>> This is not to say the API Working Group wouldn't help you evaluate the
>> potential of Stoplight to meet the needs of a service. To the contrary,
>>by
>> clearly defining the expectations of a service's responses to requests,
>> you'll have a great idea of exactly what to look for in your
>>evaluation, and
>> your final decision would be based on objective results.
>> 
>> Thank you,
>> Sam Harwell
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] request_id deprecation strategy question

2014-10-20 Thread Sandy Walsh
Does this mean we're losing request-id's?

Will they still appear in the Context objects?

And there was the effort to keep consistent request-id's in cross-service 
requests, will this deprecation affect that?

-S


From: Steven Hardy [sha...@redhat.com]
Sent: Monday, October 20, 2014 10:58 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [oslo] request_id deprecation strategy question

Hi all,

I have a question re the deprecation strategy for the request_id module,
which was identified as a candidate for removal in Doug's recent message[1],
as it's moved from oslo-incubator to oslo.middleware.

The problem I see is that oslo-incubator deprecated this in Juno, but
(AFAICS) all projects shipped Juno without the versionutils deprecation
warning sync'd [2]

Thus, we can't remove the local openstack.common.middleware.request_id, or
operators upgrading from Juno to Kilo without changing their api-paste.ini
files will experience breakage without any deprecation warning.

I'm sure I've read and been told that all backwards incompatible config
file changes require a deprecation period of at least one cycle, so does
this mean all projects just sync the Juno oslo-incubator request_id into
their kilo trees, leave it there until kilo releases, while simultaneously
switching their API configs to point to oslo.middleware?
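For reference, the api-paste.ini switch in question is a one-line change along these lines. The exact factory paths vary by project and library version, so treat both entries as illustrative:

```ini
[filter:request_id]
# Deprecated, project-local oslo-incubator copy (project prefix varies):
#paste.filter_factory = heat.openstack.common.middleware.request_id:RequestIdMiddleware.factory
# New location in the oslo.middleware library:
paste.filter_factory = oslo.middleware.request_id:RequestId.factory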

Guidance on how to proceed would be great, if folks have thoughts on how
best to handle this.

Thanks!

Steve


[1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/048303.html
[2] 
https://github.com/openstack/oslo-incubator/blob/stable/juno/openstack/common/middleware/request_id.py#L33

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Ceilometer-Alarm-Not Working

2014-10-20 Thread david jhon
Hi,

Thanks for your response Chris, Ceilometer-api
Please let me know if the /etc/ceilometer/ceilometer.conf file is correct:

[DEFAULT]

#
# Options defined in ceilometer.middleware
#

# Exchanges name to listen for notifications (multi valued)
http_control_exchanges=nova
http_control_exchanges=glance
http_control_exchanges=neutron
http_control_exchanges=cinder


#
# Options defined in ceilometer.pipeline
#

# Configuration file for pipeline definition (string value)
pipeline_cfg_file=pipeline.yaml


#
# Options defined in ceilometer.sample
#

# Source for samples emitted on this instance (string value)
#sample_source=openstack


#
# Options defined in ceilometer.api.app
#

# The strategy to use for auth: noauth or keystone. (string
# value)
auth_strategy=keystone

# Deploy the deprecated v1 API. (boolean value)
#enable_v1_api=true


#
# Options defined in ceilometer.compute.notifications
#

# Exchange name for Nova notifications (string value)
nova_control_exchange=nova


#
# Options defined in ceilometer.compute.pollsters.util
#

# list of metadata prefixes reserved for metering use (list
# value)
reserved_metadata_namespace=metering.

# limit on length of reserved metadata values (integer value)
#reserved_metadata_length=256


#
# Options defined in ceilometer.compute.virt.inspector
#

# Inspector to use for inspecting the hypervisor layer (string
# value)
#hypervisor_inspector=libvirt


#
# Options defined in ceilometer.compute.virt.libvirt.inspector
#

# Libvirt domain type (valid options are: kvm, lxc, qemu, uml,
# xen) (string value)
#libvirt_type=kvm

# Override the default libvirt URI (which is dependent on
# libvirt_type) (string value)
#libvirt_uri=


#
# Options defined in ceilometer.image.notifications
#

# Exchange name for Glance notifications (string value)
glance_control_exchange=glance


#
# Options defined in ceilometer.network.notifications

#

# Exchange name for Neutron notifications (string value)
neutron_control_exchange=neutron


#
# Options defined in ceilometer.objectstore.swift
#

# Swift reseller prefix. Must be on par with reseller_prefix
# in proxy-server.conf. (string value)
#reseller_prefix=AUTH_


#
# Options defined in ceilometer.openstack.common.db.sqlalchemy.session
#

# the filename to use with sqlite (string value)
#sqlite_db=ceilometer.sqlite

# If true, use synchronous mode for sqlite (boolean value)
#sqlite_synchronous=true


#
# Options defined in ceilometer.openstack.common.eventlet_backdoor
#

# Enable eventlet backdoor.  Acceptable values are 0, <port>,
# and <start>:<end>, where 0 results in listening on a random
# tcp port number; <port> results in listening on the
# specified port number (and not enabling backdoor if that
# port is in use); and <start>:<end> results in listening on
# the smallest unused port number within the specified range
# of port numbers.  The chosen port is displayed in the
# service's log file. (string value)
#backdoor_port=<None>


#
# Options defined in ceilometer.openstack.common.lockutils
#

# Whether to disable inter-process locks (boolean value)
#disable_process_locking=false

# Directory to use for lock files. (string value)
#lock_path=


#
# Options defined in ceilometer.openstack.common.log
#

# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
#debug=false

# Print more verbose output (set logging level to INFO instead
# of default WARNING level). (boolean value)
#verbose=false

# Log output to standard error (boolean value)
#use_stderr=true

# format string to use for log messages with context (string
# value)
#logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d
%(levelname)s %(name)s [%(request_id)s %(user)s %(tenant)s]
%(instance)s%(message)s

# format string to use for log messages without context
# (string value)
#logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d
%(levelname)s %(name)s [-] %(instance)s%(message)s

# data to append to log format when level is DEBUG (string
# value)
#logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d

# prefix each line of exception output with this format
# (string value)
#logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE
%(name)s %(instance)s

# list of logger=LEVEL pairs (list value)
#default_log_levels=amqplib=WARN,sqlalchemy=WARN,boto=WARN,suds=INFO,keystone=INFO,eventlet.wsgi.server=WARN

# publish error events (boolean value)
#publish_errors=false

# make deprecations fatal (boolean value)
#fatal_deprecations=false

# If an instance is passed with the log message, format it
# like this (string value)
#instance_format="[instance: %(uuid)s] "

# If an instance UUID is passed with the log message, format
# it like this (string value)
#instance_uuid_format="[instance: %(uuid)s] "

# If this option is specified, the logging configuration file
# specified is used and overrides any other logging options
# specified. Please see the Python logging module
# documentation for details on logging configuration files.
# (string v

[openstack-dev] [oslo] request_id deprecation strategy question

2014-10-20 Thread Steven Hardy
Hi all,

I have a question re the deprecation strategy for the request_id module,
which was identified as a candidate for removal in Doug's recent message[1],
as it's moved from oslo-incubator to oslo.middleware.

The problem I see is that oslo-incubator deprecated this in Juno, but
(AFAICS) all projects shipped Juno without the versionutils deprecation
warning sync'd [2]

Thus, we can't remove the local openstack.common.middleware.request_id, or
operators upgrading from Juno to Kilo without changing their api-paste.ini
files will experience breakage without any deprecation warning.

I'm sure I've read and been told that all backwards incompatible config
file changes require a deprecation period of at least one cycle, so does
this mean all projects just sync the Juno oslo-incubator request_id into
their kilo trees, leave it there until kilo releases, while simultaneously
switching their API configs to point to oslo.middleware?

Guidance on how to proceed would be great, if folks have thoughts on how
best to handle this.

Thanks!

Steve


[1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/048303.html
[2] 
https://github.com/openstack/oslo-incubator/blob/stable/juno/openstack/common/middleware/request_id.py#L33

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Question regarding Service Catalog and Identity entries...

2014-10-20 Thread Ben Meyer
On 10/20/2014 08:12 AM, Jamie Lennox wrote:
> - Original Message -
>> From: "Ben Meyer" 
>> To: openstack-dev@lists.openstack.org
>> Cc: "Jamie Painter" 
>> Sent: Tuesday, October 7, 2014 4:31:16 PM
>> Subject: [openstack-dev] [Keystone] Question regarding Service Catalog and   
>> Identity entries...
>>
>> I am trying to use the Python Keystone client to integration
>> authentication functionality into a project I am contributing to
>> (https://github.com/rackerlabs/deuce-client).
>> However, I ran into a situation where if I do the following:
>>
> c = keystoneclient.v2_0.client.Client(username='username',
>> password='password',
>> auth_url="https://keystone-compatible-service.example.com/v2.0/";)
>> Failed to retrieve management_url from token
>>
>> I traced it through the Python Keystoneclient code and it fails due to
>> not finding the "identity" service entry in the Service Catalog. The
>> authentication otherwise happens in that it has already received a
>> "successful" response and a full Service Catalog, aside from the
>> "missing" identity service. This happens with both version 0.10 and 0.11
>> python keystone clients; I did not try older clients.
>>
>> Talking with the service provider, their version does not include itself
>> in the Service Catalog, and they learned the Keystone itself inserts
>> itself into the Service Catalog.
>> I can certainly understand the reasoning for having the identity service entry be
>> part of the Service Catalog, but for them it is (at least for now) not
>> desirable to do so.
>>
>> Questions:
>> - Is it now a standard that Keystone inserts itself into the Service
>> Catalog?
> It's not a standard that keystone inserts itself into the catalog, the cloud 
> operator should maintain the list of endpoints for their deployment and the 
> 'identity' service should be amongst those endpoints. I'm unclear as to why 
> it would be undesirable to list the identity endpoint in the service catalog. 
> How would this addition change their deployment? 
The argument is that the Service Catalog is too big so they are hesitant
to add new entries to it; and 'identity' in the catalog is redundant
since you have to know the 'identity' end-point to even get the service
catalog in the first place.

Not saying I agree, just that's the argument being made. If it is
"required by Keystone" to be self-referential then they are likely to
add it.

> The problem with the code that you provided is that the token that is being 
> returned from your code is unscoped. Which means that it is not associated 
> with a project and therefore it doesn't have a service catalog because the 
> catalog can be project specific. Thus when you go to perform an operation the 
> client will look for the URL it is supposed to talk to in an empty list and 
> fail to find the identity endpoint. This message really needs to be improved. 
> If you add a project_id or project_name to the client information then you 
> should get back a token with a catalog. 

In my normal case I'm using the project_id field; but have found that it
didn't really matter what was used for the credentials in this case
since they simply don't have the 'identity' end-points in the Service
Catalog.

>> - Or is the Python Keystone Client broken because it is forcing it to be so?
> I wouldn't say that it is broken because having an identity endpoint in your 
> catalog is a required part of a deployment, in the same way that having a 
> 'compute' endpoint is required if you want to talk to nova. I would be 
> surprised by any decision to purposefully omit the 'identity' endpoint from 
> the service catalog. 

See above; but from what you are presenting here it sounds like the
deployment is "broken" so it is in fact "required by Keystone", even if
only "a required part of a deployment".

Thanks

Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Request Validation - Stoplight

2014-10-20 Thread Michael McCune
+1

that's a great way to state it Sam.

regards,
mike

- Original Message -
> Hi Amit,
> 
> Keeping in mind this viewpoint is nothing but my own personal view, my
> recommendation would be to not mandate the use of a particular validation
> framework, but to instead define what kind of validation clients should
> expect the server to perform in general. For example, I would expect a
> service to return an error code and not perform any action if I called
> "Create server" but did not include a request body, but the actual manner in
> which that error is generated within the service does not matter from the
> client's perspective.
> 
> This is not to say the API Working Group wouldn't help you evaluate the
> potential of Stoplight to meet the needs of a service. To the contrary, by
> clearly defining the expectations of a service's responses to requests,
> you'll have a great idea of exactly what to look for in your evaluation, and
> your final decision would be based on objective results.
> 
> Thank you,
> Sam Harwell

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Turbo hipster problems

2014-10-20 Thread Gary Kotton
Thanks. Yes, it is up and running again!

From: Joshua Hesketh 
mailto:joshua.hesk...@rackspace.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, October 20, 2014 at 3:45 AM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Nova] Turbo hipster problems

Hi Gary,

Sorry, we had a mishap over the weekend. The Database CI should be back up and
running now. Let me know if you see any more problems.

Cheers,
Josh

Rackspace Australia

On 10/18/14 1:55 AM, Gary Kotton wrote:
Hi,
Anyone aware why Turbo hipster is failing with:

real-db-upgrade_nova_percona_user_002:th-perconaException:
 [Errno 2] No such file or directory: 
'/var/lib/turbo-hipster/datasets_user_002' in 0s

Thanks
Gary



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][keystone] why is lxml only in test-requirements.txt?

2014-10-20 Thread Xu (Simon) Chen
I am trying to understand why lxml is only in test-requirements.txt... The
default pipelines do contain xml_body and xml_body_v2 filters, which
depend on lxml to function properly.

Since lxml is not in requirements.txt, my packaging system won't include
lxml in the deployment drop. At the same time, my environment involves
using browsers to directly authenticate with keystone - and browsers
(firefox/chrome alike) send "accept: application/xml" in their request
headers, which triggers xml_body to perform json to xml conversion, which
fails because lxml is not there.

My opinion is that if xml_body filters are in the example/default paste.ini
file, lxml should be included in requirements.txt.

Comments?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Ceilometer] Proposed Change to Sensor meter naming in Ceilometer

2014-10-20 Thread Chris Dent

On Fri, 17 Oct 2014, Jim Mankovich wrote:


See answers inline. I don't have any concrete answers as to how to deal
with some of questions you brought up, but I do have some more detail
that may be useful to further the discussion.


That seems like progress to me.


Personally, I would like to see the _(0x##) removed from the Sensor ID
string (by the ipmitool driver) before it returns sensors to the
Ironic conductor. I just don't see any value in this extra info. This
0x## addition only helps if a vendor used the exact same Sensor ID
string for multiple sensors of the same sensor type. i.e. Multiple
sensors of type "Temperature", each with the exact same Sensor ID
string of "CPU" instead of giving each Sensor ID string a unique name
like "CPU 1 ", " CPU 2",...


Is it worthwhile metadata to save, even if it isn't in the meter
name?


In a heterogeneous platform environment, the Sensor ID string is
likely going to be different per vendor, so your question "If
temperature...on any system board...on any hardware, notify the
authorities" is going to be tough because each vendor may name their
"system board" differently. But, I bet that vendors use similar
strings, so worst case, your alarm creation could require 1 alarm
definition per vendor.


The alarm definition I want to make is (as an operator, not as a dev):
"My puter's too hot, hlp!"

Making that easy is the proper (to me) endpoint of a conversation
about how to name meters.


I see generic naming as somewhat problematic. If you lump all the
temperature sensors for a platform under hardware.temperature the
consumer will always need to query for a specific temperature sensor
that it is interested in, like "system board". The notion of having
different samples from multiple sensors under a single generic name
seems harder to deal with to me. If you have multiple temperature
samples under the same generic meter name, how do you figure out what
all the possible temperature samples actual exist?


I'm not suggesting all temperature sensors under one name
("hardware.temperature"), but all sensors which identify as the same
thing (e.g. "hardware.temperature.system_board") under the same name.

I'm not very informed about IMPI or hardware sensors, but I do have
some experience in using names and identifiers (don't we all!) and
I find that far too often we name things based on where they come
from rather than how we wish to address them after genesis.

Throughout ceilometer I think there are tons of opportunities to
improve the naming of meters and as a result improve the UI for
people who want to do things with the data.

So from my perspective, with regard to naming IPMI (and other hardware
sensor) related samples, I think we need to make a better list of the
use cases which the samples need to satisfy and use that to drive a
naming scheme.
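The scheme Chris sketches — one canonical meter name per logical sensor, with the vendor-specific sensor ID kept in metadata rather than in the name — can be illustrated in a few lines (all meter names and vendor strings here are hypothetical, not actual Ceilometer meters):

```python
# Sketch: one canonical meter name per logical sensor; the vendor's raw
# sensor ID travels alongside the sample instead of inside the meter name.
# Names and values are illustrative only.

def canonical_name(raw_sensor_id):
    """Map a vendor sensor string to a canonical meter name."""
    sensor = raw_sensor_id.strip().lower().replace(" ", "_")
    return "hardware.temperature.%s" % sensor

def samples_for(samples, meter):
    """Select samples for one canonical meter, regardless of vendor."""
    return [s for s in samples if s["meter"] == meter]

samples = [
    {"meter": canonical_name("System Board"), "vendor_id": "SysBrd Temp", "value": 41},
    {"meter": canonical_name("system board"), "vendor_id": "BB Temp",     "value": 44},
    {"meter": canonical_name("CPU 1"),        "vendor_id": "CPU1 Temp",   "value": 63},
]

# One alarm definition can now cover every vendor's "system board" sensor.
hot = samples_for(samples, "hardware.temperature.system_board")
print([s["value"] for s in hot])  # -> [41, 44]
```

With this shape, "My puter's too hot, hlp!" becomes a single alarm on one meter name rather than one alarm per vendor string.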

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [novaclient] E12* rules

2014-10-20 Thread Roman Podoliaka
Hi Andrey,

Generally I'm opposed to such changes enabling random PEP8 checks, but
in this particular case I kind of like the fact you fix the mess with
indents in the code.

python-novaclient code base is fairly small, CI nodes are not
overloaded at this point of the release cycle, code looks better
now... FWIW, I'd +1 your patches :)

Thanks,
Roman
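For context, the E12* family covers continuation-line indentation; a minimal sketch of what the enabled rules accept (illustrative, not taken from the novaclient patches themselves):

```python
# E128 flags a continuation line under-indented for visual indent, e.g.:
#     result = some_call(first_argument,
#         second_argument)
#
# Two compliant styles follow.

def some_call(first_argument, second_argument):
    return (first_argument, second_argument)

# Visual indent: align the continuation with the opening delimiter.
result = some_call("a",
                   "b")

# Hanging indent (the E121/E126 cases): nothing after the open paren,
# arguments indented one level.
result2 = some_call(
    "a",
    "b",
)

print(result == result2)  # -> True
```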

On Fri, Oct 17, 2014 at 4:40 PM, Andrey Kurilin  wrote:
> Hi everyone!
>
> I'm working on enabling E12* PEP8 rules in novaclient(status of my work
> listed below). Imo, PEP8 rules should be ignored only in extreme cases/for
> important reasons and we should decrease a number of ignored rules. This
> helps to keep code in more strict, readable form, which is very important
> when working in community.
>
> While working on rule E126, we started discussion with Joe Gordon about
> demand of these rules. I have no idea about reasons of why they should be
> ignored, so I want to know:
> - Why these rules should be ignored?
> - What do you think about enabling these rules?
>
> Please, leave your opinion about E12* rules.
>
> Already enabled rules:
>   E121,E125 - https://review.openstack.org/#/c/122888/
>   E122 - https://review.openstack.org/#/c/123830/
>   E123 - https://review.openstack.org/#/c/123831/
>
> Abandoned rule:
>   E124 - https://review.openstack.org/#/c/123832/
>
> Pending review:
>   E126 - https://review.openstack.org/#/c/123850/
>   E127 - https://review.openstack.org/#/c/123851/
>   E128 - https://review.openstack.org/#/c/127559/
>   E129 - https://review.openstack.org/#/c/123852/
>
>
> --
> Best regards,
> Andrey Kurilin.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][policy][keystone] Better Policy Model and Representing Capabilites

2014-10-20 Thread Jamie Lennox


- Original Message -
> From: "Nathan Kinder" 
> To: openstack-dev@lists.openstack.org
> Sent: Tuesday, October 14, 2014 2:25:35 AM
> Subject: Re: [openstack-dev] [all][policy][keystone] Better Policy Model and 
> Representing Capabilites
> 
> 
> 
> On 10/13/2014 01:17 PM, Morgan Fainberg wrote:
> > Description of the problem: Without attempting an action on an endpoint
> > with a current scoped token, it is impossible to know what actions are
> > available to a user.
> > 
> > 
> > Horizon makes some attempts to solve this issue by sourcing all of the
> > policy files from all of the services to determine what a user can
> > accomplish with a given role. This is highly inefficient as it requires
> > processing the various policy.json files for each request in multiple
> > places and presents a mechanism that is not really scalable to understand
> > what a user can do with the current authorization. Horizon may not be the
> > only service that (in the long term) would want to know what actions a
> > token can take.
> 
> This is also extremely useful for being able to actually support more
> restricted tokens as well.  If I as an end user want to request a token
> that only has the roles required to perform a particular action, I'm
> going to need to have a way of knowing what those roles are.  I think
> that is one of the main things missing to allow the "role-filtered
> tokens" option that I wrote up after the last Summit to be a viable
> approach:
> 
>   https://blog-nkinder.rhcloud.com/?p=101
> 
> > 
> > I would like to start a discussion on how we should improve our policy
> > implementation (OpenStack wide) to help make it easier to know what is
> > possible with a current authorization context (Keystone token). The key
> > feature should be that whatever the implementation is, it doesn’t require
> > another round-trip to a third party service to “enforce” the policy which
> > avoids another scaling point like UUID Keystone token validation.
> > 
> > Here are a couple of ideas that we’ve discussed over the last few
> > development cycles (and none of this changes the requirements to manage
> > scope of authorization, e.g. project, domain, trust, ...):
> > 
> > 1. Keystone is the holder of all policy files. Each service gets its
> > policy file from Keystone and it is possible to validate the policy (by
> > any other service) against a token provided they get the relevant policy
> > file from the authoritative source (Keystone).
> > 
> > Pros: This is nearly completely compatible with the current policy system.
> > The biggest change is that policy files are published to Keystone instead
> > of to a local file on disk. This also could open the door to having
> > keystone build “stacked” policies (user/project/domain/endpoint/service
> > specific) where the deployer could layer policy definitions (layering
> > would allow for stricter enforcement at more specific levels, e.g. users
> > from project X can’t terminate any VMs).
> 
> I think that there are a some additional advantages to centralizing
> policy storage (not enforcement).
> 
> - The ability to centralize management of policy would be very nice.  If
> I want to update the policy for all of my compute nodes, I can do it in
> one location without the need for external configuration management
> solutions.
> 
> - We could piggy-back on Keystone's signing capabilities to allow policy
> to be signed, providing protection against policy tampering on an
> individual endpoint.
> 
> > 
> > Cons: This doesn’t ease up the processing requirement or the need to hold
> > (potentially) a significant number of policy files for each service that
> > wants to evaluate what actions a token can do.
> 
> Are you thinking of there being a call to keystone that answers "what
> can I do with token A against endpoint B"?  This seems similar in
> concept to the LDAP "get effective rights" control.  There would
> definitely be some processing overhead to this though you could set up
> multiple keystone instances and replicate the policy to spread out the
> load.  It also might be possible to index the enforcement points by role
> in an attempt to minimize the processing for this sort of call.
> 
> > 
> > 
> > 2. Each enforcement point in a service is turned into an attribute/role,
> > and the token contains all of the information on what a user can do
> > (effectively shipping the entire policy information with the token).
> > 
> > Pros: It is trivial to know what a token provides access to: the token
> > would contain something like `{“nova”: [“terminate”, “boot”], “keystone”:
> > [“create_user”, “update_user”], ...}`. It would be easily possible to
> > allow glance “get image” nova “boot” capability instead of needing to know
> > the roles for policy.json for both glance and nova work for booting a new
> > VM.
> > 
> > Cons: This would likely require a central registry of all the actions that
> > could be taken (something akin to an IANA port list). Without a
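A capability-carrying token as in option 2 would make the client-side check a plain lookup, with no policy.json processing at all — a minimal sketch using the example payload quoted above (helper name hypothetical):

```python
# Token payload as in Morgan's example: service -> list of permitted actions.
token_caps = {
    "nova": ["terminate", "boot"],
    "keystone": ["create_user", "update_user"],
}

def can(caps, service, action):
    """True if the token grants `action` on `service` -- a plain lookup."""
    return action in caps.get(service, [])

print(can(token_caps, "nova", "boot"))         # -> True
print(can(token_caps, "glance", "get_image"))  # -> False
```

The cost, as noted, is agreeing on a shared registry of action names so that "boot" means the same thing to every consumer.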

Re: [openstack-dev] [Ceilometer] Ceilometer-Alarm-Not Working

2014-10-20 Thread Chris Dent

On Mon, 20 Oct 2014, david jhon wrote:


2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service
CommunicationError: Error communicating with http://193.168.4.121:8777
[Errno 111]$
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service

How do I fix it?


It looks like it may be that either your ceilometer-api service is
not running or is not bound to the correct network interface.

Since you've got a ceilometer-api.log, it may be that the service is
unreachable because of some configuration setting (firewall, alarm
and api service on different networks, that sort of thing).
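Errno 111 is ECONNREFUSED, i.e. nothing accepted the TCP connection, so a quick first check is whether anything is listening on that host/port — for example with a small socket probe run from the alarm evaluator's host (a sketch, not a full diagnosis):

```python
import socket

def port_open(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds (something is listening)."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except socket.error:  # includes ECONNREFUSED (errno 111)
        return False

# From the traceback above, on the evaluator's host:
# port_open("193.168.4.121", 8777)  -> False means the API isn't
# listening there (service down, or bound to a different interface).
```

`curl http://193.168.4.121:8777` from the same host gives the same answer and also exercises any HTTP-level middleware.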

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Question regarding Service Catalog and Identity entries...

2014-10-20 Thread Jamie Lennox


- Original Message -
> From: "Ben Meyer" 
> To: openstack-dev@lists.openstack.org
> Cc: "Jamie Painter" 
> Sent: Tuesday, October 7, 2014 4:31:16 PM
> Subject: [openstack-dev] [Keystone] Question regarding Service Catalog and
> Identity entries...
> 
> I am trying to use the Python Keystone client to integration
> authentication functionality into a project I am contributing to
> (https://github.com/rackerlabs/deuce-client).
> However, I ran into a situation where if I do the following:
> 
> >>> c = keystoneclient.v2_0.client.Client(username='username',
> password='password',
> auth_url="https://keystone-compatible-service.example.com/v2.0/";)
> Failed to retrieve management_url from token
> 
> I traced it through the Python Keystoneclient code and it fails due to
> not finding the "identity" service entry in the Service Catalog. The
> authentication otherwise happens in that it has already received a
> "successful" response and a full Service Catalog, aside from the
> "missing" identity service. This happens with both version 0.10 and 0.11
> python keystone clients; I did not try older clients.
> 
> Talking with the service provider, their version does not include itself
> in the Service Catalog, and they learned that Keystone itself inserts
> itself into the Service Catalog.
> I can certainly understand having the identity service entry be
> part of the Service Catalog, but for them it is (at least for now) not
> desirable to do so.
> 
> Questions:
> - Is it now a standard that Keystone inserts itself into the Service
> Catalog?

It's not a standard that keystone inserts itself into the catalog; the cloud 
operator should maintain the list of endpoints for their deployment and the 
'identity' service should be amongst those endpoints. I'm unclear as to why it 
would be undesirable to list the identity endpoint in the service catalog. How 
would this addition change their deployment? 

The problem with the code that you provided is that the token that is being 
returned from your code is unscoped. Which means that it is not associated with 
a project and therefore it doesn't have a service catalog because the catalog 
can be project specific. Thus when you go to perform an operation the client 
will look for the URL it is supposed to talk to in an empty list and fail to 
find the identity endpoint. This message really needs to be improved. If you 
add a project_id or project_name to the client information then you should get 
back a token with a catalog. 
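A minimal sketch of the difference Jamie describes — the same v2 client call, scoped by also passing a tenant (project) name. Values are placeholders standing in for the original example's deployment:

```python
# Sketch: an unscoped vs. a project-scoped v2 authentication request.
# With the v2 API, supplying tenant_name is what scopes the token, and a
# scoped token is what carries a service catalog.

def client_kwargs(username, password, auth_url, tenant_name=None):
    kwargs = {"username": username, "password": password, "auth_url": auth_url}
    if tenant_name:
        # This is the piece missing from the failing example above.
        kwargs["tenant_name"] = tenant_name
    return kwargs

scoped = client_kwargs(
    "username", "password",
    "https://keystone-compatible-service.example.com/v2.0/",
    tenant_name="demo")  # placeholder project name

# import keystoneclient.v2_0.client
# c = keystoneclient.v2_0.client.Client(**scoped)
# -> the returned token is scoped and its catalog is populated
print("tenant_name" in scoped)  # -> True
```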

> - Or is the Python Keystone Client broken because it is forcing it to be so?

I wouldn't say that it is broken because having an identity endpoint in your 
catalog is a required part of a deployment, in the same way that having a 
'compute' endpoint is required if you want to talk to nova. I would be 
surprised by any decision to purposefully omit the 'identity' endpoint from the 
service catalog. 

> Thanks,
> 
> Benjamen R. Meyer
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Horizon and Keystone: API Versions and Discovery

2014-10-20 Thread Jamie Lennox


- Original Message -
> From: "Dolph Mathews" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Tuesday, October 7, 2014 6:56:15 PM
> Subject: Re: [openstack-dev] Horizon and Keystone: API Versions and Discovery
> 
> 
> 
> On Tuesday, October 7, 2014, Adam Young < ayo...@redhat.com > wrote:
> 
> 
> Horizon has a config options which says which version of the Keystone API it
> should work against: V2 or V3. I am not certain that there is still any
> reason for Horizon to go against V2. However, If we defer the decision to
> Keystone, we come up against the problem of discovery.
> 
> On the surface it is easy, as the Keystone client supports version discovery.
> The problem is that discovery must be run for each new client creation, and
> Horizon uses a new client per request. That would mean that every request to
> Horizon that talks to Keystone would generate at least one additional
> request.
> 
> 
> 
> The response is cacheable.

Not only is it cacheable, it is cached by default within the Session object,
so the session will only make one discovery request per service per session.
Horizon can therefore manage how long to cache discovery by how long it holds
on to a Session object. As the session object doesn't contain any personal or
sensitive data (that is all restricted to the auth plugin), the session
object can be persisted between requests.
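That caching behaviour can be sketched generically — one discovery fetch per endpoint for the lifetime of a shared object, where `fake_fetch` stands in for the real HTTP GET against the version-discovery URL:

```python
class DiscoveryCache(object):
    """Cache version-discovery documents per endpoint, one fetch each."""

    def __init__(self, fetch):
        self._fetch = fetch   # callable: endpoint URL -> discovery document
        self._cache = {}

    def get(self, endpoint):
        if endpoint not in self._cache:
            self._cache[endpoint] = self._fetch(endpoint)
        return self._cache[endpoint]

calls = []
def fake_fetch(url):
    calls.append(url)
    return {"versions": ["v2.0", "v3"]}  # placeholder discovery document

cache = DiscoveryCache(fake_fetch)
cache.get("http://keystone:5000/")
cache.get("http://keystone:5000/")  # served from cache, no second fetch
print(len(calls))  # -> 1
```

Holding one such object per Horizon process (rather than per request) is what avoids the extra discovery round-trip Adam is worried about.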

Whether or not horizon works that way today - and whether the other services
work with discovery as well as Keystone does - I'm not sure.

> 
> Is this significant?
> 
> It gets a little worse when you start thinking about all of the other
> services out there. If each new request that has to talk to multiple
> services needs to run discovery, you can imagine that soon the majority of
> network chatter would be discovery based.
> 
> 
> It seems to me that Horizon should somehow cache this data, and share it
> among clients. Note that I am not talking about user specific data like the
> endpoints from the service catalog for a specific project. But the overall
> service catalog, as well as the supported versions of the API, should be
> cacheable. We can use the standard HTTP cache management API on the Keystone
> side to specify how long Horizon can trust the data to be current.
> 
> I think this actually goes for the rest of the endpoints as well: we want to
> get to a much smaller service catalog, and we can do that by making the
> catalog hold only IDs. The constraints spec for endpoint binding will be
> endpoint only anyway, and so having the rest of the endpoint data cached
> will be valuable there as well.
> 
> 
> __ _
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/ cgi-bin/mailman/listinfo/ openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Murano 2014.2 "Juno" is released

2014-10-20 Thread Ruslan Kamaldinov
I'm glad to announce release of Murano 2014.2 code-named "Juno". This
release includes 39 implemented blueprints and 140 bugfixes. Source
tarballs along with detailed list of features and bugfixes can be
found at the following link:

https://launchpad.net/murano/+milestone/2014.2


Release notes:

https://wiki.openstack.org/wiki/Murano/ReleaseNotes/Juno


Thanks to everyone who participated in development of Murano!

PS: special thanks to Sergey Lukjanov for handling the release process.

--
Ruslan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Ceilometer-Alarm-Not Working

2014-10-20 Thread david jhon
Hi,

Just opened /var/log/ceilometer/ceilometer-alarm-evaluator.log file and
found following errors:

2014-10-20 16:32:15.263 23166 TRACE ceilometer.alarm.service
2014-10-20 16:33:07.854 30437 ERROR ceilometer.alarm.service [-] alarm
evaluation cycle failed
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service Traceback
(most recent call last):
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service   File
"/usr/lib/python2.7/dist-packages/ceilometer/alarm/service.py", line 96, in$
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service alarms =
self._assigned_alarms()
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service   File
"/usr/lib/python2.7/dist-packages/ceilometer/alarm/service.py", line 139, i$
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service 'value':
True}])
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service   File
"/usr/lib/python2.7/dist-packages/ceilometerclient/v2/alarms.py", line 61, $
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service return
self._list(options.build_url(self._path(), q))
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service   File
"/usr/lib/python2.7/dist-packages/ceilometerclient/common/base.py", line 57$
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service resp, body
= self.api.json_request('GET', url)
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service   File
"/usr/lib/python2.7/dist-packages/ceilometerclient/common/http.py", line 18$
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service resp,
body_iter = self._http_request(url, method, **kwargs)
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service   File
"/usr/lib/python2.7/dist-packages/ceilometerclient/common/http.py", line 15$
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service raise
exc.CommunicationError(message=message)
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service
CommunicationError: Error communicating with http://193.168.4.121:8777
[Errno 111]$
2014-10-20 16:33:07.854 30437 TRACE ceilometer.alarm.service

How do I fix it?

On Mon, Oct 20, 2014 at 4:02 PM, david jhon  wrote:

> Hi all,
>
> I am working with Ceilometer, Havana-All-in-one on Ubuntu 12.04.
> Initially, ceilometer configuration installed four ceilometer services:
>
> ceilometer-agent-central
> ceilometer-agent-compute
> ceilometer-api
> ceilometer-collector
>
> but later I came to know that there should be two other services running
> as well: ceilometer-alarm-evaluator and ceilometer-alarm-notifier. I installed
> these services as given on the link:
> http://www.brucemartins.com/2014/03/openstack-havana-ceilometer-alarm.html
>
> I created alarm at a resource by using this command: ceilometer -k
> alarm-threshold-create --name tester_cpu_high --description 'overheating?'
> --meter-name cpu_util --threshold 3.0 --comparison-operator gt --statistic
> avg --period 10  --query resource_id=e0bbdad3-ebdb-4acd-8fb5-cd2e10bb10f4
>
> but whenever I check the status of alarm, it shows me 'insufficient data'
> status.
>
> I tested it by running different applications on my instance but they made
> no change in alarm_status.
>
> Here is the log from /var/log/ceilometer/ceilometer-agent-compute.log:
>
> 2014-10-20 15:37:13.957 23126 WARNING ceilometer.transformer.conversions
> [-] dropping sample with no predecessor: 
> 2014-10-20 15:37:14.023 23126 WARNING ceilometer.transformer.conversions
> [-] dropping sample with no predecessor: 
>
> /var/log/ceilometer/ceilometer-api.log :
>
> 2014-10-20 15:37:10.652 23136 INFO keystoneclient.middleware.auth_token
> [-] Starting keystone auth_token middleware
> 2014-10-20 15:37:10.653 23136 INFO keystoneclient.middleware.auth_token
> [-] Using /tmp/keystone-signing-bGIm6r as cache directory for signing c$
> 2014-10-20 15:37:13.039 23136 INFO keystoneclient.middleware.auth_token
> [-] Starting keystone auth_token middleware
> 2014-10-20 15:37:13.039 23136 INFO keystoneclient.middleware.auth_token
> [-] Using /tmp/keystone-signing-y7L_NW as cache directory for signing c
>
> Please tell me which step is missing or what exact procedure should be
> followed in order to monitor a meter for a resource.
>
> Moreover, what steps should be taken in order to add new meter. How would
> I debug ceilometer source code in /usr/lib/python2.7/ceilometer/* if I
> follow the following link:
> http://docs.openstack.org/developer/ceilometer/contributing/plugins.html
>
> Thank you!
>
>
> Regards,
> Jhon David
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-dev] Detailed Fuel release version definition

2014-10-20 Thread Dmitry Pyzhov
I agree with Dmitry. We can change the version text: use '2014.2-6.0-pre' in
openstack.yaml and '6.0-pre' in PRODUCT_VERSION. Yep, we need to test it
first. We should also think about sorting on the UI and other places where we
compare versions.
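The sorting concern is real: plain string comparison puts '6.0-pre' after '6.0'. A sketch of a sort key that orders stage suffixes before the GA release (the stage names here are assumed, not taken from the thread):

```python
# Order pre-release stages before the bare (GA) version. "" means GA.
STAGE_ORDER = {"pre": 0, "alpha": 1, "beta": 2, "rc": 3, "": 4}

def version_key(version):
    """Sort key for versions like '6.0', '6.0-pre', '6.0-beta'."""
    number, _, stage = version.partition("-")
    numeric = tuple(int(part) for part in number.split("."))
    return numeric + (STAGE_ORDER.get(stage, 4),)

versions = ["6.0", "6.0-pre", "5.1.1", "6.0-beta"]
print(sorted(versions, key=version_key))
# -> ['5.1.1', '6.0-pre', '6.0-beta', '6.0']
```

Anywhere the UI compares versions as plain strings would need a key like this once stage suffixes are embedded in PRODUCT_VERSION.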

On Mon, Oct 20, 2014 at 2:36 PM, Aleksandra Fedorova  wrote:

> Hi, everyone,
>
> NOTE: I am moving this discussion into openstack-dev@ as we plan to
> deprecate fuel-...@lists.launchpad.net mailing list.
>
> Regarding the problem discussed:
>
> could you please add a bit more details about this versioning and how
> should it fit into our build process.
>
> Let's say we have 6.0 iso image. We build it now based on the
> PRODUCT_VERSION parameter from config.mk. How should we create the
> Technical Preview or GA iso images?
>
> Should we build them from scratch in a separate tasks with different set
> of predefined variables or should we choose one of the iso images we have
> and label it as Technical Preview and imprint this tag into iso somehow?
>
>
> On Mon, Oct 20, 2014 at 1:00 PM, Aleksey Kasatkin 
> wrote:
>
>> But we need to distinguish which Fuel was used to deploy this env, don't
>> we? E.g. install a 5.1.1 env with Fuel 6.0 beta.
>> I'd propose to have both then (in version.yaml and in releases).
>>
>>
>> Aleksey Kasatkin
>>
>>
>> On Mon, Oct 20, 2014 at 11:55 AM, Alexander Kislitsky <
>> akislit...@mirantis.com> wrote:
>>
>>> Adding tag into the version will be the best solution - because we have
>>> it already implemented :-)
>>>
>>> On Mon, Oct 20, 2014 at 12:44 PM, Dmitriy Shulyak >> > wrote:
>>>
Maybe we don't need any separate entity to store this difference?
 What if we can introduce tag of release into release version itself, so
 it will look like:
 2014.2.1-6.0-beta or 2014.2.1-6.0.beta
 and change name of the release accordingly - Ubuntu 6.0 Technical
 Preview (beta)

 Is this an option?

 On Mon, Oct 20, 2014 at 11:28 AM, Aleksey Kasatkin <
 akasat...@mirantis.com> wrote:

> AFAIC, it is more like "release_stage" or just "stage" (see
> http://en.wikipedia.org/wiki/Software_release_life_cycle).
>
> Meaning of "release_name" is maybe more like "Icehouse", "Juno" in
> OpenStack.
> I agree with storing it in version.yaml.
>
>
> Aleksey Kasatkin
>
>
> On Mon, Oct 20, 2014 at 11:19 AM, Mike Scherbakov <
> mscherba...@mirantis.com> wrote:
>
>> Any progress on this?
>> I support this idea in general. Alex - maybe you can prepare a POC for
>> this and then we will present it to the crowd?
>>
>> Thanks,
>>
>> On Wed, Oct 15, 2014 at 11:46 AM, Alexander Kislitsky <
>> akislit...@mirantis.com> wrote:
>>
>>> Hi, all!
>>>
>>> For Fuel release 6.0 we are going to have two versions: Technical
>>> Preview and GA.
>>> We need to consider these differences, for example in the statistics
>>> reports. So we need to store information about 'name of Fuel release'. I
>>> propose storing this detailed information in the VERSION section of the
>>> version.yaml file with release info, placed in /etc/fuel/releases/. The
>>> field name can be 'release_name'.
>>>
>>> --
>>> Mailing list: https://launchpad.net/~fuel-dev
>>> Post to : fuel-...@lists.launchpad.net
>>> Unsubscribe : https://launchpad.net/~fuel-dev
>>> More help   : https://help.launchpad.net/ListHelp
>>>
>>>
>>
>>
>> --
>> Mike Scherbakov
>> #mihgen
>>
>>
>> --
>> Mailing list: https://launchpad.net/~fuel-dev
>> Post to : fuel-...@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~fuel-dev
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
> --
> Mailing list: https://launchpad.net/~fuel-dev
> Post to : fuel-...@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~fuel-dev
> More help   : https://help.launchpad.net/ListHelp
>
>

>>>
>>
>> --
>> Mailing list: https://launchpad.net/~fuel-dev
>> Post to : fuel-...@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~fuel-dev
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
>
> --
> Aleksandra Fedorova
> bookwar
>
> --
> You received this message because you are subscribed to the Google Groups
> "fuel-core-team" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to fuel-core-team+unsubscr...@mirantis.com.
> For more options, visit https://groups.google.com/a/mirantis.com/d/optout.
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Ceilometer-Alarm-Not Working

2014-10-20 Thread david jhon
Hi all,

I am working with Ceilometer, Havana-All-in-one on Ubuntu 12.04. Initially,
ceilometer configuration installed four ceilometer services:

ceilometer-agent-central
ceilometer-agent-compute
ceilometer-api
ceilometer-collector

but later I came to know that there should be two other services running as
well: ceilometer-alarm-evaluator and ceilometer-alarm-notifier. I installed
these services as given on the link:
http://www.brucemartins.com/2014/03/openstack-havana-ceilometer-alarm.html

I created alarm at a resource by using this command: ceilometer -k
alarm-threshold-create --name tester_cpu_high --description 'overheating?'
--meter-name cpu_util --threshold 3.0 --comparison-operator gt --statistic
avg --period 10  --query resource_id=e0bbdad3-ebdb-4acd-8fb5-cd2e10bb10f4

but whenever I check the status of alarm, it shows me 'insufficient data'
status.

I tested it by running different applications on my instance but they made
no change in alarm_status.
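One possible cause of a permanent 'insufficient data' state (an assumption worth checking, not a confirmed diagnosis): the alarm's period is 10 seconds, while the compute agent's default pipeline interval samples cpu_util only every 600 seconds, so the evaluation window rarely contains any samples at all:

```python
def samples_in_window(sample_times, now, period):
    """Sample timestamps falling inside the alarm's evaluation window."""
    return [t for t in sample_times if now - period <= t <= now]

polling_interval = 600  # default ceilometer pipeline interval, in seconds
sample_times = range(0, 3600, polling_interval)  # one cpu_util sample / 10 min

# period=10, as in the alarm-threshold-create command above: the window
# almost never overlaps a sample, so evaluation yields "insufficient data".
print(len(samples_in_window(sample_times, now=3599, period=10)))   # -> 0

# period >= polling interval guarantees at least one sample per window.
print(len(samples_in_window(sample_times, now=3599, period=600)))  # -> 1
```

If this is the cause, recreating the alarm with `--period 600` (or raising the pipeline polling frequency) should move it out of 'insufficient data'.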

Here is the log from /var/log/ceilometer/ceilometer-agent-compute.log:

2014-10-20 15:37:13.957 23126 WARNING ceilometer.transformer.conversions
[-] dropping sample with no predecessor: 
2014-10-20 15:37:14.023 23126 WARNING ceilometer.transformer.conversions
[-] dropping sample with no predecessor: 

/var/log/ceilometer/ceilometer-api.log :

2014-10-20 15:37:10.652 23136 INFO keystoneclient.middleware.auth_token
[-] Starting keystone auth_token middleware
2014-10-20 15:37:10.653 23136 INFO keystoneclient.middleware.auth_token
[-] Using /tmp/keystone-signing-bGIm6r as cache directory for signing c$
2014-10-20 15:37:13.039 23136 INFO keystoneclient.middleware.auth_token
[-] Starting keystone auth_token middleware
2014-10-20 15:37:13.039 23136 INFO keystoneclient.middleware.auth_token
[-] Using /tmp/keystone-signing-y7L_NW as cache directory for signing c

Please tell me which step is missing or what exact procedure should be
followed in order to monitor a meter for a resource.

Moreover, what steps should be taken in order to add a new meter? How would
I debug ceilometer source code in /usr/lib/python2.7/ceilometer/* if I
follow the following link:
http://docs.openstack.org/developer/ceilometer/contributing/plugins.html

Thank you!


Regards,
Jhon David
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] BGPVPN implementation discussions

2014-10-20 Thread A, Keshava
Hi,


1. Where will the MPLS traffic be initiated?

2. How will it be mapped?


Regards,
Keshava
From: Damon Wang [mailto:damon.dev...@gmail.com]
Sent: Friday, October 17, 2014 12:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] BGPVPN implementation discussions

Good news, +1

2014-10-17 0:48 GMT+08:00 Mathieu Rohon :
Hi all,

as discussed during today's l3-meeting, we keep on working on BGPVPN
service plugin implementation [1].
MPLS encapsulation is now supported in OVS [2], so we would like to
submit a design to leverage OVS capabilities. A first design proposal,
based on l3agent, can be found here :

https://docs.google.com/drawings/d/1NN4tDgnZlBRr8ZUf5-6zzUcnDOUkWSnSiPm8LuuAkoQ/edit

this solution is based on bagpipe [3], and its capacity to manipulate
OVS, based on advertised and learned routes.

[1]https://blueprints.launchpad.net/neutron/+spec/neutron-bgp-vpn
[2]https://raw.githubusercontent.com/openvswitch/ovs/master/FAQ
[3]https://github.com/Orange-OpenSource/bagpipe-bgp
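For reference, the OVS MPLS support mentioned in [2] is exposed through OpenFlow actions; flow entries take roughly this shape (bridge name, ports and label value are placeholders, not part of the design proposal):

```
# push a label on IP traffic heading toward the core (illustrative)
ip,in_port=1    actions=push_mpls:0x8847,set_field:100->mpls_label,output:2
# pop it on labelled traffic coming back
mpls,in_port=2  actions=pop_mpls:0x0800,output:1
```

In the bagpipe-based design, entries like these would be programmed from the routes the BGP speaker advertises and learns.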


Thanks

Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-dev] Detailed Fuel release version definition

2014-10-20 Thread Aleksandra Fedorova
Hi, everyone,

NOTE: I am moving this discussion into openstack-dev@ as we plan to
deprecate fuel-...@lists.launchpad.net mailing list.

Regarding the problem discussed:

could you please add a bit more details about this versioning and how
should it fit into our build process.

Let's say we have 6.0 iso image. We build it now based on the
PRODUCT_VERSION parameter from config.mk. How should we create the
Technical Preview or GA iso images?

Should we build them from scratch in a separate tasks with different set of
predefined variables or should we choose one of the iso images we have and
label it as Technical Preview and imprint this tag into iso somehow?


On Mon, Oct 20, 2014 at 1:00 PM, Aleksey Kasatkin 
wrote:

> But we need to distinguish which Fuel was used to deploy this env, don't
> we? E.g. install a 5.1.1 env with Fuel 6.0 beta.
> I'd propose to have both then (in version.yaml and in releases).
>
>
> Aleksey Kasatkin
>
>
> On Mon, Oct 20, 2014 at 11:55 AM, Alexander Kislitsky <
> akislit...@mirantis.com> wrote:
>
>> Adding tag into the version will be the best solution - because we have
>> it already implemented :-)
>>
>> On Mon, Oct 20, 2014 at 12:44 PM, Dmitriy Shulyak 
>> wrote:
>>
>>> Maybe we don't need any separate entity to store this difference?
>>> What if we can introduce tag of release into release version itself, so
>>> it will look like:
>>> 2014.2.1-6.0-beta or 2014.2.1-6.0.beta
>>> and change name of the release accordingly - Ubuntu 6.0 Technical
>>> Preview (beta)
>>>
>>> Is this an option?
>>>
>>> On Mon, Oct 20, 2014 at 11:28 AM, Aleksey Kasatkin <
>>> akasat...@mirantis.com> wrote:
>>>
 AFAIC, it is more like "release_stage" or just "stage" (see
 http://en.wikipedia.org/wiki/Software_release_life_cycle).

 Meaning of "release_name" is maybe more like "Icehouse", "Juno" in
 OpenStack.
 I agree with storing it in version.yaml.


 Aleksey Kasatkin


 On Mon, Oct 20, 2014 at 11:19 AM, Mike Scherbakov <
 mscherba...@mirantis.com> wrote:

> Any progress on this?
> I support this idea in general. Alex - maybe you can prepare a POC for
> this and then we will present it to the crowd?
>
> Thanks,
>
> On Wed, Oct 15, 2014 at 11:46 AM, Alexander Kislitsky <
> akislit...@mirantis.com> wrote:
>
>> Hi, all!
>>
>> For Fuel release 6.0 we are going to have two versions: Technical
>> Preview and GA.
>> We need to consider these differences, for example in the statistics
>> reports. So we need to store information about 'name of Fuel release'. I
>> propose storing this detailed information in the VERSION section of the
>> version.yaml file with release info, placed in /etc/fuel/releases/. The
>> field name can be 'release_name'.
>>
>> --
>> Mailing list: https://launchpad.net/~fuel-dev
>> Post to : fuel-...@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~fuel-dev
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
>
> --
> Mike Scherbakov
> #mihgen
>
>
> --
> Mailing list: https://launchpad.net/~fuel-dev
> Post to : fuel-...@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~fuel-dev
> More help   : https://help.launchpad.net/ListHelp
>
>

 --
 Mailing list: https://launchpad.net/~fuel-dev
 Post to : fuel-...@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~fuel-dev
 More help   : https://help.launchpad.net/ListHelp


>>>
>>
>
> --
> Mailing list: https://launchpad.net/~fuel-dev
> Post to : fuel-...@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~fuel-dev
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
Aleksandra Fedorova
bookwar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] projects still using obsolete oslo modules

2014-10-20 Thread Andreas Jaeger
On 10/13/2014 03:20 PM, Doug Hellmann wrote:
> I’ve put together a little script to generate a report of the projects using 
> modules that used to be in the oslo-incubator but that have moved to 
> libraries [1]. These modules have been deleted, and now only exist in the 
> stable/juno branch of the incubator. We do not anticipate back-porting fixes 
> except for serious security concerns, so it is important to update all 
> projects to use the libraries where the modules now live.
> 
> Liaisons, please look through the list below and file bugs against your 
> project for any changes needed to move to the new libraries and start working 
> on the updates. We need to prioritize this work for early in Kilo to ensure 
> that your projects do not fall further out of step. K-1 is the ideal target, 
> with K-2 as an absolute latest date. I anticipate having several more 
> libraries by the time the K-2 milestone arrives.
> 
> Most of the porting work involves adding dependencies and updating import 
> statements, but check the documentation for each library for any special 
> guidance. Also, because the incubator is updated to use our released 
> libraries, you may end up having to port to several libraries *and* sync a 
> copy of any remaining incubator dependencies that have not graduated all in a 
> single patch in order to have a working copy. I suggest giving your review 
> teams a heads-up about what to expect to avoid -2 for the scope of the patch.
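
[Editorial note: the "updating import statements" step mentioned above is mostly mechanical. A hedged sketch of the rewrite follows — the project name and module-to-library mapping are illustrative, not the authoritative list; check each graduated library's documentation for the real module locations.]

```python
# Sketch: rewrite a copied-incubator import to the graduated oslo library.
# The mapping below is an assumption for illustration only.
INCUBATOR_TO_LIB = {
    "timeutils": "oslo.utils",
    "importutils": "oslo.utils",
}

def port_import(line, project="manila"):
    """Rewrite 'from <project>.openstack.common import X' to the
    graduated library import; return other lines unchanged."""
    prefix = "from %s.openstack.common import " % project
    if line.startswith(prefix):
        mod = line[len(prefix):].strip()
        if mod in INCUBATOR_TO_LIB:
            return "from %s import %s" % (INCUBATOR_TO_LIB[mod], mod)
    return line

print(port_import("from manila.openstack.common import timeutils"))
# -> from oslo.utils import timeutils
```

Adding the library to requirements.txt and deleting the now-unused incubator copy completes the mechanical part; any remaining incubator dependencies still need a sync, as Doug notes above.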

I've started on manila and python-manilaclient,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Passing multiple values in table.Column in Horizon DataTable

2014-10-20 Thread Julie Pichon
On 19/10/14 12:52, Rajdeep Dua wrote:
> Hi,
> I need to pass two values in the link generated from the table.Column as
> shown below.
> 
> 
> class DatasourcesTablesTable(tables.DataTable):
>     data_source = tables.Column("column1", verbose_name=_("Column1"))
>     id = tables.Column("id", verbose_name=_("ID"),
>                        link="horizon:admin:path1:path2:rows_table")
> 
> The existing link gets the value of "id" in kwargs.
> I also need to pass the value of "column1".
> 
> Any pointers on how this can be done would be helpful.

You should be able to do this by linking to a function instead of
hardcoding the link directly. That'll give you the flexibility to use
whatever attributes you want for building up the URL. You can see an
example in the EventsTable on the stacks panel [1], on the
logical_resource column.
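
[Editorial note: a minimal sketch of that approach, reusing the path and attribute names from the original question. Horizon's tables.Column accepts a callable for ``link``; the callable receives the row's datum. In Horizon the URL would typically be built with django's reverse(); plain string formatting is used here to keep the example self-contained.]

```python
def rows_table_link(datum):
    """Build the row URL from two attributes of the datum (sketch)."""
    return "/admin/path1/path2/rows_table/%s/%s/" % (
        datum["id"], datum["column1"])

# The column would then be declared as (illustrative):
#   id = tables.Column("id", verbose_name=_("ID"), link=rows_table_link)
row = {"id": "42", "column1": "alpha"}
print(rows_table_link(row))  # -> /admin/path1/path2/rows_table/42/alpha/
```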

Hope this helps,

Julie

[1]
https://github.com/openstack/horizon/blob/2a9349bd67/openstack_dashboard/dashboards/project/stacks/tables.py#L131

> Thanks
> Rajdeep
> 
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Mysql issue

2014-10-20 Thread Rajdeep Dua
Swapnil,
Thanks for your response. I tried again after running ./clean.sh and it
worked on a new installation.
I wonder if it was a local issue I was facing.

Thanks
Rajdeep

On Mon, Oct 20, 2014 at 2:25 PM, Swapnil Kulkarni 
wrote:

> Rajdeep, I did a couple of different devstack runs today and did not run
> into the issue you mentioned.
> Can you give more information about your installation? The localrc
> contents would help.
>
> Best Regards,
> Swapnil Kulkarni
> irc : coolsvap
>
> On Mon, Oct 20, 2014 at 1:03 PM, Rajdeep Dua 
> wrote:
>
>> I'm facing this issue when trying to start a devstack installation.
>> I tried with an existing as well as a new installation.
>>
>> 42201 CRITICAL keystone [-] DBConnectionError: (OperationalError) (2013,
>> 'Lost connection to MySQL server during query') 'ALTER TABLE domain ADD
>> CONSTRAINT ixu_domain_name UNIQUE (name)' ()
>>
>> This happens when keystone is starting
>>
>> 42201 TRACE keystone raise
>> exception.DBConnectionError(operational_error)
>> 42201 TRACE keystone DBConnectionError: (OperationalError) (2013, 'Lost
>> connection to MySQL server during query') 'ALTER TABLE domain ADD
>> CONSTRAINT ixu_domain_name UNIQUE (name)' ()
>>
>>
>> Thanks
>> Rajdeep
>>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Mysql issue

2014-10-20 Thread Swapnil Kulkarni
Rajdeep, I did a couple of different devstack runs today and did not run
into the issue you mentioned.
Can you give more information about your installation? The localrc contents
would help.
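
[Editorial note: for readers unfamiliar with devstack, a minimal localrc along these lines is usually enough to reproduce a run. The variable names are devstack's; the values are placeholders, not recommendations.]

```shell
# Minimal devstack localrc (values are placeholders)
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
SERVICE_TOKEN=a-long-random-token
```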

Best Regards,
Swapnil Kulkarni
irc : coolsvap

On Mon, Oct 20, 2014 at 1:03 PM, Rajdeep Dua  wrote:

> I'm facing this issue when trying to start a devstack installation.
> I tried with an existing as well as a new installation.
>
> 42201 CRITICAL keystone [-] DBConnectionError: (OperationalError) (2013,
> 'Lost connection to MySQL server during query') 'ALTER TABLE domain ADD
> CONSTRAINT ixu_domain_name UNIQUE (name)' ()
>
> This happens when keystone is starting
>
> 42201 TRACE keystone raise
> exception.DBConnectionError(operational_error)
> 42201 TRACE keystone DBConnectionError: (OperationalError) (2013, 'Lost
> connection to MySQL server during query') 'ALTER TABLE domain ADD
> CONSTRAINT ixu_domain_name UNIQUE (name)' ()
>
>
> Thanks
> Rajdeep
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FreeBSD host support

2014-10-20 Thread Daniel P. Berrange
On Sat, Oct 18, 2014 at 09:04:20PM +0400, Roman Bogorodskiy wrote:
> Hi,
> 
> In discussion of this spec proposal:
> https://review.openstack.org/#/c/127827/ it was suggested by Joe Gordon
> to start a discussion on the mailing list.
> 
> So I'll share my thoughts and a long-term plan for adding FreeBSD host
> support to OpenStack.
> 
> The ultimate goal is to allow using libvirt/bhyve as a compute driver.
> However, I think it would be reasonable to start with libvirt/qemu
> support first, as it will prepare the ground.

Agreed, I'd avoid the temptation to try to do everything at once. Taking
an iterative approach of attacking small chunks of work at a time is much
more practical. So by targeting libvirt+qemu you are able to focus on
just identifying the Linux specific bits of the existing libvirt+qemu
support. Once complete, then you can focus on the separate task of porting
to the libvirt+bhyve driver.

> High level overview of what needs to be done:
> 
>  - Nova
>   * linux_net needs to be refactored to allow plugging in FreeBSD
> support (that's what the spec linked above is about)

Yep, this is the biggest piece of Linux-specific code in Nova codepaths
for VM startup, at least. So it makes sense to deal with this.

>   * nova.virt.disk.mount needs to be extended to support FreeBSD's
> mdconfig(8) in a similar way to Linux's losetup

Broken file injection isn't a show-stopper for booting VMs
but is obviously nice to have and shouldn't be too difficult
as we already have a decent abstraction layer here.
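
[Editorial note: a hedged sketch of what the per-platform piece might look like. The command names are real — losetup(8) on Linux, mdconfig(8) on FreeBSD — but the function and the dispatch are illustrative only, not Nova's actual nova.virt.disk.mount API.]

```python
def attach_image_command(image_path, system):
    """Return the command that attaches a disk image to a block device
    on the given host OS (illustrative dispatch, not Nova's API)."""
    if system == "FreeBSD":
        # mdconfig(8) prints the allocated md device name, e.g. "md0"
        return ["mdconfig", "-a", "-t", "vnode", "-f", image_path]
    # Linux: losetup prints the allocated loop device, e.g. "/dev/loop0"
    return ["losetup", "-f", "--show", image_path]

print(attach_image_command("disk.img", "FreeBSD"))
print(attach_image_command("disk.img", "Linux"))
```

In practice the existing abstraction layer would select the implementation by host platform, so the FreeBSD variant slots in beside the losetup-based one.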

>  - Glance and Keystone
> These components are fairly free of system specifics. Most likely
> they will require some small fixes like e.g. I made for Glance
> https://review.openstack.org/#/c/94100/

Glance & Keystone are obviously core things to get working in order
to be able to boot a VM. 

>  - Cinder
> I didn't look closely at Cinder from a porting perspective, tbh.
> Obviously, it'll need some backend driver that would work on
> FreeBSD, e.g. ZFS. I've seen some patches floating around for ZFS
> though. Also, I think it'll need an implementation of iSCSI stack
> on FreeBSD, because it has its own stack, not stgt. On the other
> hand, Cinder is not required for a minimal installation and that
> could be done after adding support of the other components.

I wouldn't worry about doing anything in Cinder until you have the
rest of Nova almost fully functional on FreeBSD.

There are bound to be a number of other things that we can't think of
right now that will appear as you do the work & get to test more and
more functional areas.  I wouldn't bother trying to imagine what these
are right now nor create specs for them. Instead I'd very much recommend
taking an iterative approach to specs + bugs. ie when you come across
new problems wrt porting, just file new specs (for big problems needing
refactoring) or bugs (for minor problems easily fixed) to deal with the
issues as you see fit at the time.

IOW I'd just encourage you to jump right into the networking refactor
work. That mess badly needs cleaning up even if we don't do FreeBSD
work, so is a very worthwhile thing to work on for Kilo regardless.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zaqar] Today's (October 20th) meeting time change

2014-10-20 Thread Flavio Percoco
Greetings,

Since we need to have a final discussion on the design session topics, I'm
moving today's meeting to our second reserved time slot. That is,
instead of being at 15 UTC, the meeting will be at 21 UTC.

The main reason is that I won't be able to join our meeting at 15 UTC
since I'll be traveling. The second reason is that in our second time
slot, we can also have Fei Long joining us.

https://wiki.openstack.org/wiki/Meetings/Zaqar#Next_meeting

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Mysql issue

2014-10-20 Thread Rajdeep Dua
I'm facing this issue when trying to start a devstack installation.
I tried with an existing as well as a new installation.

42201 CRITICAL keystone [-] DBConnectionError: (OperationalError) (2013,
'Lost connection to MySQL server during query') 'ALTER TABLE domain ADD
CONSTRAINT ixu_domain_name UNIQUE (name)' ()

This happens when keystone is starting

42201 TRACE keystone raise
exception.DBConnectionError(operational_error)
42201 TRACE keystone DBConnectionError: (OperationalError) (2013, 'Lost
connection to MySQL server during query') 'ALTER TABLE domain ADD
CONSTRAINT ixu_domain_name UNIQUE (name)' ()


Thanks
Rajdeep
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Propose adding Igor K. to core reviewers for fuel-web projects

2014-10-20 Thread Mike Scherbakov
Igor, congratulations! Will be happy to see even more thorough reviews from
you!

On Tue, Oct 14, 2014 at 11:54 AM, Aleksey Kasatkin 
wrote:

>
> +1
>
> Aleksey Kasatkin
>
>
> Hi everyone!
>>
>> I would like to propose Igor Kalnitsky as a core reviewer on the
>> Fuel-web team. Igor has been working on OpenStack patching,
>> Nailgun, and Fuel upgrade, and has provided a lot of good reviews [1].
>> In addition, he's also very active on IRC and the mailing list.
>>
>> Can the other core team members please reply with your votes
>> if you agree or disagree.
>>
>> Thanks!
>>
>
>> [1]
>>
>> http://stackalytics.com/?project_type=stackforge&release=juno&module=fuel-web
>>
>


-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Boot up 1,000 homogeneous VMs in 1 minute

2014-10-20 Thread John Zhang
Dear all,

We proposed a new blueprint (at https://review.openstack.org/#/c/129116/)
for booting up a large number of homogeneous VMs in a very short period
of time. This feature may be targeted at the Kilo release.

All requirements, suggestions and comments are welcome.

Thank you!
VMThunder Group
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev