Re: [openstack-dev] [neutron] Call for help with in-tree tempest scenario test failures

2017-08-17 Thread Anil Venkata
I will work on DVR+HA migration jobs.

Thanks
Anilvenkata

On Fri, Aug 11, 2017 at 2:40 AM, Sławek Kapłoński wrote:

> Hello,
>
> I’m still checking this QoS scenario test and I found something strange,
> IMHO. For example, almost all of the tests that failed in the last 2 days
> were executed on nodes with names like:
> * ubuntu-xenial-2-node-citycloud-YYY- - on those nodes almost all (or
> even all) scenario tests failed because of a failed SSH connection to the
> instance,
> * ubuntu-xenial-2-node-rax-iad- - on those nodes the QoS test failed
> because of a timeout while reading data.
>
> I’m a noob in gate tests and how exactly they work, so my conclusions may
> be completely wrong, but maybe those issues are somehow related to some of
> the cloud providers which provide the infrastructure for the tests?
> Maybe someone more experienced could take a look at that and help me? Thx
> in advance.
>
> —
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl
>
>
>
>
> > Message written by Ihar Hrachyshka on 03.08.2017, at 23:40:
> >
> > Thanks to those who stepped in (Armando and Slawek).
> >
> > We still have quite a few failures that would benefit from initial log
> > triage and fixes. If you feel like you have fewer things to do during
> > this feature freeze period, helping with those scenario failures would
> > be a good way to contribute to the project.
> >
> > Thanks,
> > Ihar
> >
> > On Fri, Jul 28, 2017 at 6:02 AM, Sławek Kapłoński wrote:
> >> Hello,
> >>
> >> I will try to check QoS tests in this job.
> >>
> >> —
> >> Best regards
> >> Slawek Kaplonski
> >> sla...@kaplonski.pl
> >>
> >>
> >>
> >>
> >>> Message written by Jakub Libosvar on 28.07.2017, at 14:49:
> >>>
> >>> Hi all,
> >>>
> >>> as sending out a call for help with our precious jobs was very
> >>> successful last time and we cleared all the Python 3 functional test
> >>> failures from Neutron pretty fast (kudos to the team!), here comes a
> >>> new round of failures.
> >>>
> >>> This time I'm asking for your help with the non-voting
> >>> gate-tempest-dsvm-neutron-dvr-multinode-scenario job. This job has
> >>> been part of the check queue for a while and is very, very unstable.
> >>> It covers scenarios like router dvr/ha/legacy migrations, qos, trunk
> >>> and dvr. I went through the current failures and created an etherpad
> >>> [1] with categorized failures and logstash queries that give you the
> >>> latest failures for each particular test.
> >>>
> >>> If you feel like doing some troubleshooting and sending fixes for the
> >>> gates, please pick one test and write your name down next to it.
> >>>
> >>> Thanks to all who are willing to participate.
> >>>
> >>> Have a great weekend.
> >>> Jakub
> >>>
> >>>
> >>> [1]
> >>> https://etherpad.openstack.org/p/neutron-dvr-multinode-scenario-gate-failures
> >>>
> >>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Call for help with in-tree tempest scenario test failures

2017-08-10 Thread Sławek Kapłoński
Hello,

I’m still checking this QoS scenario test and I found something strange, IMHO.
For example, almost all of the tests that failed in the last 2 days were
executed on nodes with names like:
* ubuntu-xenial-2-node-citycloud-YYY- - on those nodes almost all (or even
all) scenario tests failed because of a failed SSH connection to the instance,
* ubuntu-xenial-2-node-rax-iad- - on those nodes the QoS test failed because
of a timeout while reading data.

I’m a noob in gate tests and how exactly they work, so my conclusions may be
completely wrong, but maybe those issues are somehow related to some of the
cloud providers which provide the infrastructure for the tests?
Maybe someone more experienced could take a look at that and help me? Thx in
advance.
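
If it helps anyone to check this correlation, below is a rough Python sketch
of how I imagine the community Logstash/Elasticsearch service could be queried
for failed runs of this job grouped by node provider. The endpoint URL and the
field names (build_name, build_status, node_provider) are my assumptions based
on how the logstash queries in the etherpad are usually written, so please
verify them against the real logstash.openstack.org setup before trusting the
numbers.

# Rough sketch only: the endpoint URL and field names are assumptions; check
# the real logstash.openstack.org / elastic-recheck configuration before use.
import json
import requests

ES_URL = "http://logstash.openstack.org/elasticsearch/_search"  # assumed endpoint

query = {
    "size": 0,
    "query": {
        "bool": {
            "must": [
                {"term": {"build_name":
                          "gate-tempest-dsvm-neutron-dvr-multinode-scenario"}},
                {"term": {"build_status": "FAILURE"}},
                {"range": {"@timestamp": {"gte": "now-2d"}}},
            ]
        }
    },
    # Count failures per provider to see whether citycloud / rax-iad dominate.
    # The exact field name (node_provider vs. node_provider.raw) may differ.
    "aggs": {"by_provider": {"terms": {"field": "node_provider"}}},
}

resp = requests.post(ES_URL, data=json.dumps(query),
                     headers={"Content-Type": "application/json"})
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["by_provider"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])

If citycloud and rax-iad really dominate the failure counts, that would
support the idea that at least part of this is a provider/infrastructure
problem rather than a Neutron bug.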

—
Best regards
Slawek Kaplonski
sla...@kaplonski.pl




> Message written by Ihar Hrachyshka on 03.08.2017, at 23:40:
> 
> Thanks to those who stepped in (Armando and Slawek).
> 
> We still have quite a few failures that would benefit from initial log
> triage and fixes. If you feel like you have fewer things to do during this
> feature freeze period, helping with those scenario failures would be a
> good way to contribute to the project.
> 
> Thanks,
> Ihar
> 
> On Fri, Jul 28, 2017 at 6:02 AM, Sławek Kapłoński  wrote:
>> Hello,
>> 
>> I will try to check QoS tests in this job.
>> 
>> —
>> Best regards
>> Slawek Kaplonski
>> sla...@kaplonski.pl
>> 
>> 
>> 
>> 
>>> Message written by Jakub Libosvar on 28.07.2017, at 14:49:
>>> 
>>> Hi all,
>>> 
>>> as sending out a call for help with our precious jobs was very
>>> successful last time and we cleared all the Python 3 functional test
>>> failures from Neutron pretty fast (kudos to the team!), here comes a new
>>> round of failures.
>>> 
>>> This time I'm asking for your help with the non-voting
>>> gate-tempest-dsvm-neutron-dvr-multinode-scenario job. This job has been
>>> part of the check queue for a while and is very, very unstable. It
>>> covers scenarios like router dvr/ha/legacy migrations, qos, trunk and
>>> dvr. I went through the current failures and created an etherpad [1]
>>> with categorized failures and logstash queries that give you the latest
>>> failures for each particular test.
>>> 
>>> If you feel like doing some troubleshooting and sending fixes for the
>>> gates, please pick one test and write your name down next to it.
>>> 
>>> Thanks to all who are willing to participate.
>>> 
>>> Have a great weekend.
>>> Jakub
>>> 
>>> 
>>> [1]
>>> https://etherpad.openstack.org/p/neutron-dvr-multinode-scenario-gate-failures
>>> 
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Call for help with in-tree tempest scenario test failures

2017-08-03 Thread Ihar Hrachyshka
Thanks to those who stepped in (Armando and Slawek).

We still have quite a few failures that would benefit from initial log
triage and fixes. If you feel like you have fewer things to do during this
feature freeze period, helping with those scenario failures would be a good
way to contribute to the project.

Thanks,
Ihar

On Fri, Jul 28, 2017 at 6:02 AM, Sławek Kapłoński  wrote:
> Hello,
>
> I will try to check QoS tests in this job.
>
> —
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl
>
>
>
>
>> Message written by Jakub Libosvar on 28.07.2017, at 14:49:
>>
>> Hi all,
>> 
>> as sending out a call for help with our precious jobs was very
>> successful last time and we cleared all the Python 3 functional test
>> failures from Neutron pretty fast (kudos to the team!), here comes a new
>> round of failures.
>> 
>> This time I'm asking for your help with the non-voting
>> gate-tempest-dsvm-neutron-dvr-multinode-scenario job. This job has been
>> part of the check queue for a while and is very, very unstable. It
>> covers scenarios like router dvr/ha/legacy migrations, qos, trunk and
>> dvr. I went through the current failures and created an etherpad [1]
>> with categorized failures and logstash queries that give you the latest
>> failures for each particular test.
>> 
>> If you feel like doing some troubleshooting and sending fixes for the
>> gates, please pick one test and write your name down next to it.
>> 
>> Thanks to all who are willing to participate.
>> 
>> Have a great weekend.
>> Jakub
>> 
>> 
>> [1]
>> https://etherpad.openstack.org/p/neutron-dvr-multinode-scenario-gate-failures
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Call for help with in-tree tempest scenario test failures

2017-07-28 Thread Sławek Kapłoński
Hello,

I will try to check QoS tests in this job.

—
Best regards
Slawek Kaplonski
sla...@kaplonski.pl




> Message written by Jakub Libosvar on 28.07.2017, at 14:49:
> 
> Hi all,
> 
> as sending out a call for help with our precious jobs was very
> successful last time and we cleared all the Python 3 functional test
> failures from Neutron pretty fast (kudos to the team!), here comes a new
> round of failures.
> 
> This time I'm asking for your help with the non-voting
> gate-tempest-dsvm-neutron-dvr-multinode-scenario job. This job has been
> part of the check queue for a while and is very, very unstable. It
> covers scenarios like router dvr/ha/legacy migrations, qos, trunk and
> dvr. I went through the current failures and created an etherpad [1]
> with categorized failures and logstash queries that give you the latest
> failures for each particular test.
> 
> If you feel like doing some troubleshooting and sending fixes for the
> gates, please pick one test and write your name down next to it.
> 
> Thanks to all who are willing to participate.
> 
> Have a great weekend.
> Jakub
> 
> 
> [1]
> https://etherpad.openstack.org/p/neutron-dvr-multinode-scenario-gate-failures
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Call for help with in-tree tempest scenario test failures

2017-07-28 Thread Jakub Libosvar
Hi all,

as sending out a call for help with our precious jobs was very
successful last time and we cleared all the Python 3 functional test
failures from Neutron pretty fast (kudos to the team!), here comes a new
round of failures.

This time I'm asking for your help with the non-voting
gate-tempest-dsvm-neutron-dvr-multinode-scenario job. This job has been
part of the check queue for a while and is very, very unstable. It covers
scenarios like router dvr/ha/legacy migrations, qos, trunk and dvr. I went
through the current failures and created an etherpad [1] with categorized
failures and logstash queries that give you the latest failures for each
particular test.

If you feel like doing some troubleshooting and sending fixes for the
gates, please pick one test and write your name down next to it.
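
If you want to try to reproduce the test you picked locally while you triage
it, something along the lines of the rough Python sketch below might help. It
assumes a devstack host where the in-tree neutron scenario tests are visible
to the tempest CLI; both the test module path and the "tempest run"
invocation are my assumptions, so double-check them against what the gate
job actually runs.

# Rough sketch: re-run one scenario test a few times to try to catch an
# intermittent failure. The test path and invocation are assumptions --
# adjust them to match your local tempest/devstack setup.
import subprocess
import sys

TEST_REGEX = "neutron.tests.tempest.scenario.test_qos"  # assumed module path
ATTEMPTS = 5

for attempt in range(1, ATTEMPTS + 1):
    print("attempt %d/%d: %s" % (attempt, ATTEMPTS, TEST_REGEX))
    result = subprocess.run(["tempest", "run", "--regex", TEST_REGEX])
    if result.returncode != 0:
        print("reproduced a failure on attempt %d" % attempt)
        sys.exit(result.returncode)

print("no failure reproduced in %d attempts" % ATTEMPTS)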

Thanks to all who are willing to participate.

Have a great weekend.
Jakub


[1]
https://etherpad.openstack.org/p/neutron-dvr-multinode-scenario-gate-failures


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev