[openstack-dev] [neutron-lbaas][tempest] tempest v2 API tests failing with logging_noop driver

2015-04-05 Thread santosh sharma
I am using the latest git version (after the
https://review.openstack.org/#/c/165716/ merge):

There are 18 tests failing (logging_noop driver) with the latest changes.
Attaching the tempest log files.




Thanks
Santosh
stack@devstack:~/neutron-lbaas$ tox -e tempest
tempest develop-inst-nodeps: /opt/stack/neutron-lbaas
tempest runtests: commands[0] | sh tools/pretty_tox.sh
running testr
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} 
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron_lbaas/tests/unit}

{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_create_health_monitor
 [1.427909s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_create_health_monitor_extra_attribute
 [0.027572s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_create_health_monitor_invalid_attribute
 [0.011046s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_create_health_monitor_missing_attribute
 [0.008790s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_delete_health_monitor
 [1.091580s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_get_health_monitor
 [1.644487s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_list_health_monitors_empty
 [0.012580s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_list_health_monitors_one
 [1.239384s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_list_health_monitors_two
 [6.300297s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_udpate_health_monitor_invalid_attribute
 [1.525625s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_update_health_monitor
 [1.822102s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_update_health_monitor_extra_attribute
 [1.471891s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener
 [1.602446s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_empty_admin_state_up
 [0.156564s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_empty_connection_limit
 [0.317751s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_empty_description
 [1.852935s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_empty_load_balancer_id
 [0.117215s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_empty_name
 [1.410016s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_empty_protocol
 [0.148219s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_empty_protocol_port
 [0.133760s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_incorrect_attribute
 [0.236768s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_invalid_admin_state_up
 [0.259135s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_invalid_connection_limit
 [0.117983s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_invalid_description
 ... SKIPPED: Skipped until Bug: 1434717 is resolved.
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_invalid_empty_tenant_id
 [0.156728s] ... FAILED

Captured traceback:
~~~
Traceback (most recent call last):
  File "neutron_lbaas/tests/tempest/v2/api/test_listeners.py", line 313, in 
test_create_listener_invalid_empty_tenant_id
tenant_id="")
  File 
"/opt/stack/neutron-lbaas/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 422, in assertRaises
self.assertThat(our_callable, matcher)
  File 
"/opt/stack/neutron-lbaas/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 433, in assertThat
mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  File 
"/opt/stack/neutron-lbaas/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 483, in _matchHelper
mismatch = matcher.match(matchee)
  File 
"/opt/stack/neutron-lbaas/.tox/tempest/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 108, in match
mismatch = self.exception_matcher.match(exc_info)
  File 
"/opt/s

[openstack-dev] [nova][neutron] bring NIC attributes to OPSK

2015-04-05 Thread Moshe Levi
Hi,
After talking to several customers, we noticed that there is a solid
requirement for choosing a compute node to take into account additional
attributes such as NIC attributes and capabilities (speed, RDMA support,
supported link modes, etc.).
I searched through old blueprints, and was thinking of several options to
implement it:

1.   Nova compute would recognize and report its NICs' capabilities, and
Nova's filter or scheduler would take them into consideration.

(There is a nova blueprint for link-state awareness,
https://review.openstack.org/#/c/87978/3/specs/juno/nic-state-aware-scheduling.rst,
but here the idea is just NIC capabilities without the link state.)

2.   A new Neutron physical topology service (service plugin) that would
retrieve this information (via the L2 agents or an SDN driver) and store it
in Neutron, plus a Nova filter that would query Neutron [1]; a sketch of
such a filter is below.

[1] 
https://review.openstack.org/#/c/91275/19/specs/juno/physical-network-topology.rst
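
Either way this ends in a Nova scheduler filter; a minimal sketch of what
one might look like, following the standard BaseHostFilter pattern. The
nic_capabilities host attribute and the extra_specs key are hypothetical,
nothing like them exists today:

    # Sketch only: a host filter that passes hosts whose reported NIC
    # capabilities cover the features requested via flavor extra_specs.
    from nova.scheduler import filters


    class NICCapabilitiesFilter(filters.BaseHostFilter):

        def host_passes(self, host_state, filter_properties):
            spec = filter_properties.get('request_spec', {})
            flavor = spec.get('instance_type', {})
            wanted = flavor.get('extra_specs', {}).get(
                'capabilities:nic_features')
            if not wanted:
                return True  # this request does not care about NICs
            # assume the compute node reported a set of feature strings,
            # e.g. {'rdma', '40G', 'sriov'} (hypothetical reporting path)
            reported = getattr(host_state, 'nic_capabilities', set())
            return set(wanted.split(',')) <= reported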

Please share your thoughts and feel free to comment.

Best Regards,
Moshe Levi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] PTL Candidacy

2015-04-05 Thread Serg Melikyan
Hi folks,

I'd like to announce my candidacy as PTL for Murano [1].

I have been handling the PTL responsibilities in this release cycle so far,
after Ruslan Kamaldinov (who handled them in the Juno cycle) handed them
over to me at the OpenStack Summit in Paris. I have been working on Murano
since its kick-off two years ago [2][3].

As PTL of Murano I'll continue my work on making Murano better every day
and on making it the project of choice for building Application Catalogs &
Marketplaces for private & public clouds on OpenStack. I will focus on
building a great environment for contributors, and on the relationships
built around the project itself, not only around the features needed by
contributors.

[1] http://wiki.openstack.org/wiki/Murano
[2] http://stackalytics.com/report/contribution/murano/90
[3] 
http://stackalytics.com/?release=kilo&metric=commits&project_type=stackforge&module=murano-group

P.S. I understand that it is strange to see the same application for the
same position just a few weeks after being officially elected, but since
Murano is now part of the Big Tent, and in order to comply with the
existing processes, we are running the Murano PTL election once again for
Liberty.

http://lists.openstack.org/pipermail/openstack-dev/2015-April/060472.html

-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Documentation] PTL Candidacy

2015-04-05 Thread Lana Brindley
Hi everyone,

I am announcing my candidacy for Documentation PTL.

I have been contributing to the OpenStack documentation project since 2013, and 
have been a core contributor since early 2014. During that time I have been 
supporting and promoting the OpenStack documentation community in the southern 
hemisphere, through the implementation and running of regular APAC 
documentation meetings. These are designed to alternate with the US-timed 
meetings, and provide a forum for people in the Asia-Pacific region to be able 
to collaborate during a suitable time slot. I have also been actively 
coordinating Australia-based meetups and documentation swarms. Some of you 
might have met me at the Paris Summit, where Anne and I gave a talk about 
working in an enterprise environment with an upstream [1].

I am employed by Rackspace, and I work from my home in Brisbane, Australia. My 
job title is officially “Senior Manager, Information Development”, but what 
that really means is that I look after a team of fantastic writers in Australia 
and the United States, all of whom are OpenStack ATCs. I have been managing 
documentation teams in some capacity for nearly five years, and I’ve been a 
technical writer for a decade, but (like most writers) I’ve been writing all of 
my life.

Over the past year or so I have been working closely with Anne, seeing the 
amazing things she has done (and continues to do) with this group. If elected 
as PTL, I intend to continue that close relationship, and build on what she has 
already achieved. We need to continue the RST conversion, and move closer to an 
Every Page is Page One-style delivery mechanism. I also want to keep a focus on 
information architecture as a whole, and making sure that we’re delivering 
documentation that our readers can really use. I would also like to work 
towards a more effective collaboration with our corporate contributors: giving 
them greater access to documentation that they can use as a base for their own 
products, and enabling them to more easily give back to our upstream community.

I’d love to have your support for the PTL role. I’m looking forward to 
better serving this community, and to continuing to see it grow and flourish.

Thanks,
Lana

1: https://www.youtube.com/watch?v=9hWdD2t43JY

Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo][clients] Let's speed up start of OpenStack libs and clients by optimizing imports with profimp

2015-04-05 Thread Boris Pavlovic
Brant,

I ran profimp with and without the patch
https://review.openstack.org/#/c/164066/,
and it really works well:

before 170ms:
http://boris-42.github.io/keystone/before.html

after 76ms:
http://boris-42.github.io/keystone/after.html
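
For anyone curious, the core trick is just wrapping __import__; a minimal
sketch of the idea (illustrative only, not profimp's actual implementation;
profimp additionally builds a nested call tree and renders it as HTML):

    # Time every import, including nested ones, by patching __import__.
    import builtins  # on Python 2 use: import __builtin__ as builtins
    import time

    _real_import = builtins.__import__
    _depth = 0


    def _timed_import(name, *args, **kwargs):
        global _depth
        start = time.time()
        _depth += 1
        try:
            return _real_import(name, *args, **kwargs)
        finally:
            _depth -= 1
            # note: a parent's time includes its children's time
            print('%s%s: %.1f ms' % ('  ' * _depth, name,
                                     (time.time() - start) * 1000))


    builtins.__import__ = _timed_import

    import json  # traced; already-cached modules return almost instantly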


Best regards,
Boris Pavlovic


On Fri, Apr 3, 2015 at 2:44 AM, Monty Taylor  wrote:

> On 04/02/2015 06:22 PM, Brant Knudson wrote:
> > On Thu, Apr 2, 2015 at 4:52 PM, Boris Pavlovic 
> wrote:
> >
> >> Hi stackers,
> >>
> >> Recently, I started working on speeding up Rally cli.
> >>
> >> What I understood immediately is that I didn't understand why it takes
> >> 700-800ms just to run the "rally version" command, and it is an
> >> impossibly hard task to find what takes so much time just by reading
> >> the code.
> >>
> >> I started playing with patching __import__ and made a simple but
> >> powerful tool that allows you to trace any imports and get pretty
> >> graphs of nested importing with timings:
> >>
> >> https://github.com/boris-42/profimp
> >>
> >> So now it's simple to understand which imports take most of the time,
> >> just by running:
> >>
> >>   profimp "import " --html
> >>
> >>
> >> Let's optimize OpenStack libs and clients?
> >>
> >>
> >> Best regards,
> >> Boris Pavlovic
> >>
> >>
> > There's a review in python-keystoneclient to do lazy importing of
> > modules here: https://review.openstack.org/#/c/164066/. It would be
> > interesting to know if this improves the initial import time
> > significantly. Also, this can be an example of how to improve other
> > libraries.
>
> Yes please.
>
> Also - for libraries - let's try to not import lots of things.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo] Common Libraries PTL Candidacy

2015-04-05 Thread Davanum Srinivas
Hi Everyone,

After shadowing Doug for a while, I know it's going to be a really hard job
filling his shoes. I am very thankful and very glad we had him for so long
to guide us. I'd like to continue his good work going forward.

We've achieved a lot over the last couple of cycles, and the proof is the
almost-empty oslo-incubator and the numerous libraries that we have
shipped. There is a lot of work yet to be done on adoption of our oslo
libraries in Liberty and beyond. We now have a good handle on the
release process, working with our CI, stable branches etc. We should
figure out how to help other projects pick up lessons we learned as
well. I'd also like to concentrate on getting more participation and
expertise in some of our really critical projects like oslo.messaging,
oslo.db etc. We have some very good solutions like Taskflow which
could use some help with adoption in existing projects like Nova. I'd
like to figure out how to do that as well.

If you would like to have me as your PTL, I'd need your help and
patience to make things happen :)

Thanks,
Dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Issue for backup speed

2015-04-05 Thread Jae Sang Lee
Sending without compression is slower because it sends more data.
In my environment, sending a 10GB disk to swift without compression took
6min 40sec, but with compression it took 4min 13sec.
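
Whether compression wins depends on how fast your CPU compresses versus how
fast your network moves the saved bytes. A small standalone sketch to
estimate the breakeven point on your own data (illustrative only; it times
local zlib compression, not the actual swift upload):

    # Compression wins when the bytes it saves would take longer to send
    # than the CPU time it costs, i.e. when bandwidth < saved / cpu_time.
    import time
    import zlib

    data = b'some volume block data ' * (4 * 1024 * 1024)  # ~92 MB

    for level in (1, 6, 9):
        start = time.time()
        compressed = zlib.compress(data, level)
        cpu_s = time.time() - start
        saved = len(data) - len(compressed)
        print('level=%d ratio=%.2f cpu=%.2fs -> compression wins below '
              '%.0f MB/s' % (level, float(len(compressed)) / len(data),
                             cpu_s, saved / cpu_s / 1e6))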


2015-04-02 23:16 GMT+09:00 Murali Balcha :

>  Just curious. What is the overhead of compression and other backup
> processes?  How much time does it take to upload a simple 50GB file to
> swift compare to backup of 50 GB to swift?
>
>
>
> *From:* Duncan Thomas [mailto:duncan.tho...@gmail.com]
> *Sent:* Wednesday, April 01, 2015 6:13 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [cinder] Issue for backup speed
>
>
>
> This is something we're working on (I work with the author of the patch
> you referenced) but the refactoring of the backup code in this cycle has
> made progress challenging. If you have a patch that works, please submit
> it, even if it needs some cleaning up; we'd be happy to work with you on
> testing, cleaning it up and improving it.
>
>
>
> The basic problem is that backup is CPU bound (compression, ssl) so the
> existing parallelisation techniques used in cinder don't help. Running many
> cinder-backup processes can give you good aggregate throughput if you're
> running many backups at once, but this appears not to be a common case,
> even in a large public cloud.
>
>
>
> On 1 April 2015 at 11:41, Jae Sang Lee  wrote:
>
>  Hi,
>
>
>
> I tested the Swift backup driver in Cinder-backup and its performance
> isn't high.
>
> In our test environment, the average time to back up a 50G volume is 20min.
>
>
>
>
>
> I found a patch for this that adds multi-threading to the swift backup
> driver (https://review.openstack.org/#/c/111314) but it's also too slow. It
> looks like that patch doesn't implement threading properly.
>
>
>
> Is there any way to improve this? I'd appreciate others' thoughts
> on these issues.
>
>
>
> Thanks.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> Duncan Thomas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Issue for backup speed

2015-04-05 Thread Duncan Thomas
It heavily depends on your swift and network setup. We see better
throughput without compression, by about 20% if I remember correctly.
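
If you want to compare both modes in your own environment, the relevant
knob is the backup driver's compression setting; a cinder.conf sketch
(assuming your release exposes backup_compression_algorithm; check the
option name and values against your cinder version):

    [DEFAULT]
    backup_driver = cinder.backup.drivers.swift
    # one of: none, zlib, bz2 (sketch; values assumed)
    backup_compression_algorithm = none
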
On 6 Apr 2015 04:17, "Jae Sang Lee"  wrote:

> Sending without compression is slower because it sends more data.
> In my environment, sending a 10GB disk to swift without compression took
> 6min 40sec, but with compression it took 4min 13sec.
>
>
> 2015-04-02 23:16 GMT+09:00 Murali Balcha :
>
>>  Just curious. What is the overhead of compression and other backup
>> processes?  How much time does it take to upload a simple 50GB file to
>> swift compare to backup of 50 GB to swift?
>>
>>
>>
>> *From:* Duncan Thomas [mailto:duncan.tho...@gmail.com]
>> *Sent:* Wednesday, April 01, 2015 6:13 AM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [cinder] Issue for backup speed
>>
>>
>>
>> This is something we're working on (I work with the author of the patch
>> you referenced) but the refactoring of the backup code in this cycle has
>> made progress challenging. If you have a patch that works, please submit
>> it, even if it needs some cleaning up; we'd be happy to work with you on
>> testing, cleaning it up and improving it.
>>
>>
>>
>> The basic problem is that backup is CPU bound (compression, ssl) so the
>> existing parallelisation techniques used in cinder don't help. Running many
>> cinder-backup processes can give you good aggregate throughput if you're
>> running many backups at once, but this appears not to be a common case,
>> even in a large public cloud.
>>
>>
>>
>> On 1 April 2015 at 11:41, Jae Sang Lee  wrote:
>>
>>  Hi,
>>
>>
>>
>> I tested the Swift backup driver in Cinder-backup and its performance
>> isn't high.
>>
>> In our test environment, the average time to back up a 50G volume is 20min.
>>
>>
>>
>>
>>
>> I found a patch for this that adds multi-threading to the swift backup
>> driver (https://review.openstack.org/#/c/111314) but it's also too slow.
>> It looks like that patch doesn't implement threading properly.
>>
>>
>>
>> Is there any way to improve this? I'd appreciate others' thoughts
>> on these issues.
>>
>>
>>
>> Thanks.
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>
>> --
>>
>> Duncan Thomas
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Host maintenance notification

2015-04-05 Thread Jay Pipes

On 03/30/2015 03:16 AM, Balázs Gibizer wrote:

Hi,

I have the following scenario. I have an application consisting of
multiple VMs on different compute hosts. The admin puts one of the
hosts into maintenance mode (nova-manage service disable ...) because
there will be some maintenance activity on that host in the near
future. Is there a way to get a notification from Nova when a host
is put into maintenance mode? If that is not possible today, would the
nova community support such an addition to Nova?

As a subsequent question is there a way for an external system to
listen to such a notification published on the message bus?


Hi Gibi!

I don't believe there is a notification currently sent when a service is 
disabled. I agree this would be a good (and pretty easy) addition to Nova.


Please feel free to add a blueprint in Launchpad. I don't see this as 
needing a full spec, really. It shouldn't be more than a few lines of 
code to send a new notification message.
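
A minimal sketch of what those few lines might look like, using the
standard oslo.messaging Notifier; the event_type string and payload fields
here are made up, not an agreed interface:

    # Sketch only: emit a notification when a service is disabled.
    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(transport,
                                       driver='messaging',
                                       publisher_id='compute.host1',
                                       topic='notifications')


    def notify_service_disabled(ctxt, host, binary, reason):
        # payload fields are hypothetical; use whatever the blueprint agrees on
        notifier.info(ctxt, 'service.disable',
                      {'host': host, 'binary': binary,
                       'disabled_reason': reason})

And for the second question: yes, an external system can consume these off
the bus with oslo_messaging.get_notification_listener() and an endpoint
object that exposes an info() method.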


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [RFC/FFE] Finishing state machine work for Kilo

2015-04-05 Thread Ramakrishnan G
+1 from me. Since we don't have an ENROLL state in the state machine, I
think it should be MANAGEABLE when we enroll a node. At the least, that
prevents nodes from reaching a ready state before an operator has gotten
hands on them.

One comment on #2. Before we make a new client release with v1.6,
shouldn't the behaviour of the 0.x.x python-ironicclient be that newly
enrolled nodes have provision_state NOSTATE again, instead of AVAILABLE?
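
Also, for 3rdparty apps like discoverd, explicitly pinning the microversion
avoids this whole class of breakage. A rough sketch at the HTTP level (the
endpoint URL, token and driver are placeholders; the version header itself
is the standard one):

    # Sketch: pin the Ironic API microversion per request so a new server
    # default (e.g. the proposed 1.7 MANAGEABLE behaviour) can't surprise us.
    import requests

    resp = requests.post(
        'http://ironic.example.com:6385/v1/nodes',
        headers={'X-Auth-Token': '<token>',
                 'X-OpenStack-Ironic-API-Version': '1.6',
                 'Content-Type': 'application/json'},
        json={'driver': 'pxe_ipmitool'})
    print(resp.json().get('provision_state'))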


On Fri, Apr 3, 2015 at 1:59 PM, Dmitry Tantsur  wrote:

> Hi all!
>
> Today I got an internal email stating that the new ironicclient breaks
> ironic-discoverd. Indeed, after rebasing to the latest ironicclient git
> master, discoverd started receiving "AVAILABLE" state instead of "None" for
> newly enrolled nodes. That is not a valid state for introspection; the valid
> ones are "MANAGEABLE" (discoverd stand-alone usage), "INSPECTING" (discoverd
> via the Ironic driver) and None (Juno + discoverd stand-alone). Looks like
> despite introducing microversions we did manage to break 3rdparty apps
> relying on states... Also we're in a bit of a weird situation where nodes
> appear ready for scheduling, despite us having a special state for managing
> nodes _before_
>
> I find the situation pretty confusing, and I'd like your comments on the
> following proposal to be implemented before RC1:
>
> 1. add new micro-version 1.7. nodes created by API with this version will
> appear in state MANAGEABLE;
> 2. make a client release with current API version set to 1.6 (thus
> excluding change #1 from users of this version);
> 3. set current API version for ironicclient to 1.7 and release
> ironicclient version 2.0.0 to designate behavior changes;
> 4. document the whole thingy properly
>
> #1 should be a small change, but it definitely requires FFE.
> Thoughts?
>
> Dmitry
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Report status of Huawei Volume CI from April 1

2015-04-05 Thread Mike Perez
On 12:47 Sat 04 Apr , liuxinguo wrote:
> Hello Cinder team,
> 
> · Huawei Volume CI has posted to about forty cinder reviews from 
> April 1 to now and will keep reporting stably.
> ·
> · It now runs 304 tests, passed 293 and skipped 11.
> 
> The following are links to the cinder reviews our CI has posted results to, 
> please check it:
> https://review.openstack.org/12

I've tried a few of the links and unfortunately none of the log pages are
loading for me. Is anyone else having luck?

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Report status of Huawei Volume CI from April 1

2015-04-05 Thread Steve Martinelli
Just trying this out for fun (Mike's reply made me curious)

A lot of the links in the initial note weren't working for me, but this one 
did: https://review.openstack.org/#/c/169781/
and its CI reports seem to work as well. They route me to: 
http://182.138.104.29:8088/huawei-18000-fc-dsvm-tempest-full/13/
and http://182.138.104.29:8088/huawei-18000-iscsi-dsvm-tempest-full/13/

I found that the ones that don't work (like this patch 
https://review.openstack.org/#/c/159704/) are attempting to
connect to a different IP address (and port): 
http://182.138.104.27/huawei-18000-iscsi-dsvm-tempest-full/70

Thanks,

Steve Martinelli
OpenStack Keystone Core

Mike Perez  wrote on 04/06/2015 01:05:00 AM:

> From: Mike Perez 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 04/06/2015 01:09 AM
> Subject: Re: [openstack-dev] [cinder] Report status of Huawei Volume
> CI from April 1
> 
> On 12:47 Sat 04 Apr , liuxinguo wrote:
> > Hello Cinder team,
> > 
> > · Huawei Volume CI has posted to about forty cinder 
> reviews from April 1 to now and will keep reporting stably.
> > ·
> > · It now runs 304 tests, passed 293 and skipped 11.
> > 
> > The following are links to the cinder reviews our CI has posted 
> results to, please check it:
> > https://review.openstack.org/12
> 
> I've tried a few of the links and unfortunately none of the log pages 
are
> loading for me. Is anyone else having luck?
> 
> -- 
> Mike Perez
> 
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra][cinder] Could you please re-consider Oracle ZFS/SA Cinder drivers (iSCSI and NFS)

2015-04-05 Thread Mike Perez
On 18:20 Mon 23 Mar , Diem Tran wrote:
> Hello Cinder team,
> 
> Oracle ZFSSA CI has been reporting since March 20th. Below is a link
> to the list of results the CI already posted:
> 
> https://review.openstack.org/#/q/reviewer:%22Oracle+ZFSSA+CI%22,n,z
> 
> Our CI system will be running and reporting results from now on,
> hence I kindly request that you accept our CI results and consider
> re-integrating our drivers back in Kilo RC.
> 
> If there is any concern, please let us know.

Patch is up to revert the removal.

https://review.openstack.org/#/c/170770/

Since this CI only covers the ZFSSANFSDriver and not the ZFSSAISCSIDriver, we
won't be able to add the iSCSI driver back in this release.

https://review.openstack.org/#/c/169283/
http://ec2-54-149-113-183.us-west-2.compute.amazonaws.com/ci_results/refs-changes-83-169283-1-nfs/logs/etc/cinder/cinder.conf

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Host maintenance notification

2015-04-05 Thread Chris Friesen

On 04/05/2015 09:17 PM, Jay Pipes wrote:

On 03/30/2015 03:16 AM, Balázs Gibizer wrote:

Hi,

I have the following scenario. I have an application consisting of
multiple VMs on different compute hosts. The admin puts one of the
hosts into maintenance mode (nova-manage service disable ...) because
there will be some maintenance activity on that host in the near
future. Is there a way to get a notification from Nova when a host
is put into maintenance mode? If that is not possible today, would the
nova community support such an addition to Nova?

As a subsequent question is there a way for an external system to
listen to such a notification published on the message bus?


Hi Gibi!

I don't believe there is a notification currently sent when a service is
disabled. I agree this would be a good (and pretty easy) addition to Nova.

Please feel free to add a blueprint in Launchpad. I don't see this as needing a
full spec, really. It shouldn't be more than a few lines of code to send a new
notification message.


Wouldn't a new notification message count as an API change?  Or are we saying 
that it's such a small API change that any discussion can happen in the blueprint?


(I'm trying to figure out how this relates to 
"http://docs.openstack.org/developer/nova/devref/kilo.blueprints.html"; which 
says that any API change requires a spec.)


Also, if the notification messages are considered part of the API, should they 
be versioned?


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Report status of Huawei Volume CI from April 1

2015-04-05 Thread liuxinguo
Yes! Port 80 is blocked, and if you change it to 8088 it works. For example,
change http://182.138.104.27/huawei-18000-iscsi-dsvm-tempest-full/70
to http://182.138.104.27:8088/huawei-18000-iscsi-dsvm-tempest-full/70
and the logs load.

And I have updated my apache configuration to adjust for this.
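
A minimal sketch of the change, assuming a stock Apache vhost serving the
log directory (the file name and paths are assumptions):

    # e.g. /etc/apache2/sites-available/ci-logs.conf
    Listen 8088
    <VirtualHost *:8088>
        DocumentRoot /srv/ci/logs
        <Directory /srv/ci/logs>
            Options +Indexes
            # Apache 2.4 syntax; on 2.2 use "Allow from all"
            Require all granted
        </Directory>
    </VirtualHost>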

Liu

From: Steve Martinelli [mailto:steve...@ca.ibm.com]
Sent: April 6, 2015 13:42
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] Report status of Huawei Volume CI from April 1

Just trying this out for fun (Mike's reply made me curious)

A lot of the links in the initial note weren't working for me, but this one did: 
https://review.openstack.org/#/c/169781/
and its CI reports seem to work as well. They route me to: 
http://182.138.104.29:8088/huawei-18000-fc-dsvm-tempest-full/13/
and http://182.138.104.29:8088/huawei-18000-iscsi-dsvm-tempest-full/13/

I found that the ones that don't work (like this patch 
https://review.openstack.org/#/c/159704/) are attempting to
connect to a different IP address (and port): 
http://182.138.104.27/huawei-18000-iscsi-dsvm-tempest-full/70

Thanks,

Steve Martinelli
OpenStack Keystone Core

Mike Perez <thin...@gmail.com> wrote on 04/06/2015 01:05:00 AM:

> From: Mike Perez <thin...@gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: 04/06/2015 01:09 AM
> Subject: Re: [openstack-dev] [cinder] Report status of Huawei Volume
> CI from April 1
>
> On 12:47 Sat 04 Apr , liuxinguo wrote:
> > Hello Cinder team,
> >
> > · Huawei Volume CI has posted to about forty cinder
> reviews from April 1 to now and will keep reporting stably.
> > ·
> > · It now runs 304 tests, passed 293 and skipped 11.
> >
> > The following are links to the cinder reviews our CI has posted
> results to, please check it:
> > https://review.openstack.org/12
>
> I've tried a few of the links and unfortunately none of the log pages are
> loading for me. Is anyone else having luck?
>
> --
> Mike Perez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Huawei Volume CI have changed the port of log server from 80 to 8088

2015-04-05 Thread liuxinguo
Hi Cinder team,

Since port 80 on our log server has some issues being accessed from the
public internet, we have changed the port from 80 to 8088.

* So if you opened a link like
http://182.138.104.27/huawei-18000-iscsi-dsvm-tempest-full/70, please change
the access port to 8088, like
http://182.138.104.27:8088/huawei-18000-iscsi-dsvm-tempest-full/70.

* And I apologize for the inconvenience when accessing the CI logs :).

Liu


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev