Re: Automated Testing for Flavors -- Update

2016-03-09 Thread Nicholas Skaggs

On 03/09/2016 05:08 PM, floccul...@gmx.co.uk wrote:

On 09/03/16 21:56, Simon Quigley wrote:

Flocculant,

First of all, I would like to say thank you for providing input on this.

There is no function implemented in the API to update the notice 
board (unless it is really REALLY subtle and I can't see the function).


I actually think Nicholas' idea of having results on the tracker 
would be fine. I think there should be a comment on the submitted 
test case that says something like "this is automated".

I don't agree.
  I'm not the release manager for a flavor, but my guess is, you 
don't just look at the test case completion and say, "oh, it's failing"


No, of course not - we would check what the person failing it has 
reported as the cause - the bug
  and just leave it at that. Plus, if Autopilot tests are failing, 
that's probably bad and should be addressed anyway.


This is completely beside the point at the moment - the tests have 
been failing for over a year now - so I see absolutely no rush to get 
this causing issues now that we're only a few weeks away from release.



  And it would be much easier to hack up a script to do this; I just 
need to fetch the Autopilot results, then submit them. So I'm 
supporting Nicholas' original idea.


Now, if someone, within the next hour or two (while I'm out eating), 
decides to hack together an API function for this and gets an MP out, 
I'm all for flocculant's solution, but for now, we don't have that. :)


Let me know if you have another idea that uses the API and I will be 
glad to do it. :)


Thanks,
Simon Quigley
tsimo...@ubuntu.com
tsimonq2 on Freenode
If we are going to randomly add fails to the tracker for no other 
reason than that it seems like a good idea, then I'll have to ask that 
Xubuntu isn't included - there are enough problems with the tracker 
results without adding completely pointless fail results to it.


When there is actually a sensible reason to add things to the tracker, 
then you'd find me much more accommodating.


Simon, while I do support you hacking on things and doing a proof of 
concept (we can delete the entries you make), I'm not ready to turn 
the beast loose just yet. We have to get it running on the same server, 
for one, which needs a deployment, an MP to check in the code, etc.


We can explore some different ways to display the information, but 
getting a proof of concept done as entries on the tracker is one data 
point to consider. I will defer to the opinions of the release managers 
for flavors on this, as it's intended to make their lives easier. Even 
if we end up posting the results as just another entry, there is no 
point in spamming the tracker until the tests are fixed. We do know they 
are broken at the moment, so there's no value in clogging up the tracker 
with a bunch of failures.


On the notice board updating, I think I prefer that suggestion myself, 
but clearly I'm still easily swayed. So perhaps we need to extend the 
API to support it. But first, the proof of concept.
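
To make that concrete, here is roughly what the client side could look 
like once such a method exists. An untested sketch: "qatracker.notices.add" 
is a hypothetical method name (nothing like it exists in the API today), 
and the endpoint URL and credentials are placeholders to double-check.

import xmlrpc.client

# Placeholder endpoint and credentials -- check both before using.
proxy = xmlrpc.client.ServerProxy(
    "https://bot-user:secret@iso.qa.ubuntu.com/xmlrpc.php")

# Hypothetical method: post a one-line summary of the latest
# automated run to the tracker's notice board.
proxy.qatracker.notices.add(
    "Automated Ubiquity run for today's daily: FAILED. "
    "Full log: http://162.213.34.238:8080/")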


Nicholas



Re: Automated Testing for Flavors -- Update

2016-03-09 Thread flocculant

On 09/03/16 21:56, Simon Quigley wrote:

Flocculant,

First of all, I would like to say thank you for providing input on this.

There is no function implemented in the API to update the notice board (unless 
it is really REALLY subtle and I can't see the function).

I actually think Nicholas' idea of having results on the tracker would be fine. I think 
there should be a comment on the submitted test case that says something like 
"this is automated".

I don't agree.

  I'm not the release manager for a flavor, but my guess is, you don't just look at the 
test case completion and say, "oh, it's failing"


No, of course not - we would check what the person failing it has 
reported as the cause - the bug

  and just leave it at that. Plus, if Autopilot tests are failing, that's 
probably bad and should be addressed anyway.


This is completely beside the point at the moment - the tests have been 
failing for over a year now - so I see absolutely no rush to get this 
causing issues now that we're only a few weeks away from release.




  And it would be much easier to hack up a script to do this; I just need to 
fetch the Autopilot results, then submit them. So I'm supporting Nicholas' 
original idea.

Now, if someone, within the next hour or two (while I'm out eating), decides to 
hack together an API function for this and gets an MP out, I'm all for 
flocculant's solution, but for now, we don't have that. :)

Let me know if you have another idea that uses the API and I will be glad to do 
it. :)

Thanks,
Simon Quigley
tsimo...@ubuntu.com
tsimonq2 on Freenode
If we are going to randomly add fails to the tracker for no other reason 
than that it seems like a good idea, then I'll have to ask that Xubuntu 
isn't included - there are enough problems with the tracker results 
without adding completely pointless fail results to it.


When there is actually a sensible reason to add things to the tracker, 
then you'd find me much more accommodating.




Re: Automated Testing for Flavors -- Update

2016-03-09 Thread Simon Quigley
Flocculant,

First of all, I would like to say thank you for providing input on this.

There is no function implemented in the API to update the notice board (unless 
it is really REALLY subtle and I can't see the function).

I actually think Nicholas' idea of having results on the tracker would be fine. 
I think there should be a comment on the submitted test case that says 
something like "this is automated". I'm not the release manager for a flavor, 
but my guess is, you don't just look at the test case completion and say, "oh, 
it's failing" and just leave it at that. Plus, if Autopilot tests are failing, 
that's probably bad and should be addressed anyway. And it would be much easier 
to hack up a script to do this; I just need to fetch the Autopilot results, 
then submit them. So I'm supporting Nicholas' original idea.
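
Concretely, the fetch half is only a few lines against the Jenkins JSON 
API. Untested sketch - the job name is a placeholder, since I still need 
to check what the Jenkins instance actually calls it:

import requests

JENKINS = "http://162.213.34.238:8080"
JOB = "ubiquity-autopilot"  # placeholder -- check the real job name

# Jenkins exposes each job's state as JSON under /api/json.
build = requests.get(
    "%s/job/%s/lastCompletedBuild/api/json" % (JENKINS, JOB)).json()

result = "Passed" if build["result"] == "SUCCESS" else "Failed"
link = build["url"]  # the run to link to in the tracker comment
print(result, link)
# ...then submit (result, link) to the tracker via the API.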

Now, if someone, within the next hour or two (while I'm out eating), decides to 
hack together an API function for this and gets an MP out, I'm all for 
flocculant's solution, but for now, we don't have that. :)

Let me know if you have another idea that uses the API and I will be glad to do 
it. :)

Thanks,
Simon Quigley
tsimo...@ubuntu.com
tsimonq2 on Freenode



Re: Automated Testing for Flavors -- Update

2016-03-09 Thread Nicholas Skaggs

On 03/09/2016 04:11 PM, floccul...@gmx.co.uk wrote:

On 09/03/16 20:59, Nicholas Skaggs wrote:

On 03/09/2016 03:51 PM, floccul...@gmx.co.uk wrote:

On 09/03/16 20:43, Nicholas Skaggs wrote:
Sure. Each of the test runs is a 1-to-1 copy of a manual test. So, 
in the same way you would add a result, we'll have a bot account 
add a result with a Pass or Fail to the daily image. It should also 
leave a comment linking to the run so you can learn more if you are 
curious. Simon has actually agreed to hack on this, so I hope we'll 
start to see some bot results (though they will be failures!) on 
the tracker soon. We could still use some help with fixing the 
actual tests, however, so they can provide value!



Nicholas

I would really really not want to see anything on the tracker from a 
bot - that's reporting a fail on the test, not the reality :(




[snip]




Hmm, well, any other opinions? You are correct that, at this point, 
it would be showing failures which are not real. However, we want the 
tests to run and show proper pass/fails! So it should be a temporary 
thing. That said, we could hold off posting results until the tests are 
working, but it's certainly possible to have a failed test in the 
future that isn't a real failure.


Nicholas
The trouble with posting any fail (assuming they run properly) from 
these tests to the tracker is that there is absolutely no way of knowing 
what failed - just gobbledygook.


So a flavour QA team would have to run the image to see if the fail is 
real or not, and 'where' it failed - at that point what have we gained?


It won't be adding a bug, will it? I would assume not.

Personally, given the option to grab RSS feeds from Jenkins, people 
interested in whether an image has failed could do that. That's 
what I had intended way back when.


Maybe a separate area on the tracker for them?

I was trying to keep it simple, and integrating it was the simplest way 
I could think of. A post to the notice board? Would that work better?


Nicholas



Re: Automated Testing for Flavors -- Update

2016-03-09 Thread flocculant

On 09/03/16 20:59, Nicholas Skaggs wrote:

On 03/09/2016 03:51 PM, floccul...@gmx.co.uk wrote:

On 09/03/16 20:43, Nicholas Skaggs wrote:
Sure. Each of the test runs is a 1-to-1 copy of a manual test. So, 
in the same way you would add a result, we'll have a bot account add 
a result with a Pass or Fail to the daily image. It should also 
leave a comment linking to the run so you can learn more if you are 
curious. Simon has actually agreed to hack on this, so I hope we'll 
start to see some bot results (though they will be failures!) on the 
tracker soon. We could still use some help with fixing the actual 
tests, however, so they can provide value!



Nicholas

I would really really not want to see anything on the tracker from a 
bot - that's reporting a fail on the test, not the reality :(




[snip]




Hmm, well, any other opinions? You are correct that, at this point, it 
would be showing failures which are not real. However, we want the 
tests to run and show proper pass/fails! So it should be a temporary 
thing. That said, we could hold off posting results until the tests are 
working, but it's certainly possible to have a failed test in the 
future that isn't a real failure.


Nicholas
The trouble with posting any fail (assuming they run properly) from 
these tests to the tracker is that there is absolutely no way of knowing 
what failed - just gobbledygook.


So a flavour QA team would have to run the image to see if the fail is 
real or not, and 'where' it failed - at that point what have we gained?


It won't be adding a bug, will it? I would assume not.

Personally, given the option to grab RSS feeds from Jenkins, people 
interested in whether an image has failed could do that. That's what I 
had intended way back when.
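
Jenkins publishes feeds per job out of the box (rssAll, rssFailed, 
rssLatest), so watching for failed images is tiny. A sketch, assuming 
the feedparser module and guessing at the job name:

import feedparser

# Poll the "failed builds" feed for one job; print each failure
# with a link to the run.
feed = feedparser.parse(
    "http://162.213.34.238:8080/job/ubiquity-autopilot/rssFailed")
for entry in feed.entries:
    print(entry.title, entry.link)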


Maybe a separate area on the tracker for them?



Re: Automated Testing for Flavors -- Update

2016-03-09 Thread Nicholas Skaggs

On 03/09/2016 03:51 PM, floccul...@gmx.co.uk wrote:

On 09/03/16 20:43, Nicholas Skaggs wrote:
Sure. Each of the test runs is a 1-to-1 copy of a manual test. So, 
in the same way you would add a result, we'll have a bot account add 
a result with a Pass or Fail to the daily image. It should also leave 
a comment linking to the run so you can learn more if you are 
curious. Simon has actually agreed to hack on this, so I hope we'll 
start to see some bot results (though they will be failures!) on the 
tracker soon. We could still use some help with fixing the actual 
tests, however, so they can provide value!



Nicholas

I would really really not want to see anything on the tracker from a 
bot - that's reporting a fail on the test, not the reality :(




[snip]




Hmm, well, any other opinions? You are correct that, at this point, it 
would be showing failures which are not real. However, we want the tests 
to run and show proper pass/fails! So it should be a temporary thing. 
That said, we could hold off posting results until the tests are working, 
but it's certainly possible to have a failed test in the future that isn't a 
real failure.


Nicholas



Re: Automated Testing for Flavors -- Update

2016-03-09 Thread flocculant

On 09/03/16 20:43, Nicholas Skaggs wrote:
Sure. Each of the test runs is a 1-to-1 copy of a manual test. So, in 
the same way you would add a result, we'll have a bot account add a 
result with a Pass or Fail to the daily image. It should also leave a 
comment linking to the run so you can learn more if you are curious. 
Simon has actually agreed to hack on this, so I hope we'll start to 
see some bot results (though they will be failures!) on the tracker 
soon. We could still use some help with fixing the actual tests, 
however, so they can provide value!



Nicholas

I would really really not want to see anything on the tracker from a bot 
- that's reporting a fail on the test, not the reality :(




[snip]






Re: Automated Testing for Flavors -- Update

2016-03-09 Thread Nicholas Skaggs
Sure. Each of the test runs is a 1-to-1 copy of a manual test. So, in 
the same way you would add a result, we'll have a bot account add a 
result with a Pass or Fail to the daily image. It should also leave a 
comment linking to the run so you can learn more if you are curious. 
Simon has actually agreed to hack on this, so I hope we'll start to see 
some bot results (though they will be failures!) on the tracker soon. We 
could still use some help with fixing the actual tests, however, so they 
can provide value!
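
In tracker terms the bot does exactly what a person does when filling in 
a result, just scripted. Here is a sketch of the submission half - the 
method name, field names, and credentials are my guesses, so treat them 
as placeholders until they are checked against the tracker code:

import xmlrpc.client

# Placeholder bot credentials and endpoint.
proxy = xmlrpc.client.ServerProxy(
    "https://bot-user:secret@iso.qa.ubuntu.com/xmlrpc.php")

# Guessed method and fields: one result against the daily image,
# mirroring what a human submits through the web form.
proxy.qatracker.results.add({
    "testcase": "Install (entire disk)",  # the manual test this run mirrors
    "result": "Failed",                   # Pass or Fail, from the Jenkins run
    "comment": "This is automated. Run: http://162.213.34.238:8080/",
})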



Nicholas

On 03/04/2016 11:31 PM, Istimsak Abdulbasir wrote:


Could you explain more how the tracker will be able to store test 
results for each daily test?


On Mar 4, 2016 3:44 PM, "Nicholas Skaggs" wrote:


It's been an up and down cycle with mostly failing Autopilot tests
for Ubiquity automated testing. I wanted to reiterate Max's
status updates as to what's going on so people understand what
will exist to help with testing the final images for Xenial.

What's been happening:
Max has been working on various automated tests for images all
cycle. In addition to these tests for Ubiquity, Max is also working
on automated smoke tests, upgrade tests, etc. The AP tests for
Ubiquity are fragile and can often break, especially as Ubiquity
changes. That's been happening for some time, sadly. The goal is
still to have them running for Final Beta, but time is running out.

 * You can see the runs here: http://162.213.34.238:8080/. They
are not nice or pretty, but they do exist!

What we need:

* Folks to come and hack on the tests and fix them! Not just fix
them, but look after them and expand them to keep them running.
Everything you need to know is in the source tree. Branch
lp:ubiquity and look at the README in the autopilot folder. Ask
questions if needed. The tests themselves aren't too difficult to
understand, but gtk2 is painful to automate at times. Dan has done
wonderful work in writing them and stepping in to fix them on occasion,
but he's also writing the wonderful Dekko mail client. It's time
for others to jump in, if they are interested!

* More tests! Some have suggested simple tests such as merely
attempting to boot the image. These tests are trivial to write
since the hard work of getting the environment ready and running
them is already in place. Using Autopilot or not, if you have a
simple test idea it can probably be added and run.

* A nicer way to grok the test output. Simon has recently begun
reviving the API for the tracker, and it would be lovely to
somehow display the results on the tracker. Writing a script to add
a test result submission against the daily test would be simple to do!

* We've now hit issues with scalability of hardware, so I'm
looking to get a dedicated machine for this. I know we turned down
hardware in the past, but it is now becoming a problem. Obviously
it's lower priority as the reality is these tests have proven
harder to keep running than imagined. As such, folks willing to
stick with them would be wonderful!

Nicholas







Re: Automated Testing for Flavors -- Update

2016-03-04 Thread Istimsak Abdulbasir
Could you explain more how the tracker will be able to store test results
for each daily test?
On Mar 4, 2016 3:44 PM, "Nicholas Skaggs" 
wrote:

> It's been an up and down cycle with mostly failing Autopilot tests for
> Ubiquity automated testing. I wanted to reiterate Max's status updates as
> to what's going on so people understand what will exist to help with
> testing the final images for Xenial.
>
> What's been happening:
> Max has been working on various automated tests for images all cycle. In
> addition to these tests for Ubiquity, Max is also working on automated smoke
> tests, upgrade tests, etc. The AP tests for Ubiquity are fragile and can
> often break, especially as Ubiquity changes. That's been happening for some
> time, sadly. The goal is still to have them running for Final Beta, but time
> is running out.
>
>  * You can see the runs here: http://162.213.34.238:8080/. They are not
> nice or pretty, but they do exist!
>
> What we need:
>
> * Folks to come and hack on the tests and fix them! Not just fix them, but
> look after them and expand them to keep them running. Everything you need
> to know is in the source tree. Branch lp:ubiquity and look at the README in
> the autopilot folder. Ask questions if needed. The tests themselves aren't
> too difficult to understand, but gtk2 is painful to automate at times. Dan
> has done wonderful work in writing them and stepping in to fix them on occasion,
> but he's also writing the wonderful Dekko mail client. It's time for others
> to jump in, if they are interested!
>
> * More tests! Some have suggested simple tests such as merely attempting
> to boot the image. These tests are trivial to write since the hard work of
> getting the environment ready and running them is already in place. Using
> Autopilot or not, if you have a simple test idea it can probably be added
> and run.
>
> * A nicer way to grok the test output. Simon has recently begun reviving
> the API for the tracker, and it would be lovely to somehow display the
> results on the tracker. Writing a script to add a test result submission
> against the daily test would be simple to do!
>
> * We've now hit issues with scalability of hardware, so I'm looking to get
> a dedicated machine for this. I know we turned down hardware in the past,
> but it is now becoming a problem. Obviously it's lower priority as the
> reality is these tests have proven harder to keep running than imagined. As
> such, folks willing to stick with them would be wonderful!
>
> Nicholas
>


Automated Testing for Flavors -- Update

2016-03-04 Thread Nicholas Skaggs
It's been an up and down cycle with mostly failing Autopilot tests for 
Ubiquity automated testing. I wanted to reiterate Max's status updates 
as to what's going on so people understand what will exist to help with 
testing the final images for Xenial.


What's been happening:
Max has been working on various automated tests for images all cycle. In 
addition to these tests for Ubiquity, Max is also working on automated 
smoke tests, upgrade tests, etc. The AP tests for Ubiquity are fragile 
and can often break, especially as Ubiquity changes. That's been 
happening for some time, sadly. The goal is still to have them running 
for Final Beta, but time is running out.


 * You can see the runs here: http://162.213.34.238:8080/. They are not 
nice or pretty, but they do exist!


What we need:

* Folks to come and hack on the tests and fix them! Not just fix them, 
but look after them and expand them to keep them running. Everything you 
need to know is in the source tree. Branch lp:ubiquity and look at the 
README in the autopilot folder. Ask questions if needed. The tests 
themselves aren't too difficult to understand, but gtk2 is painful to 
automate at times. Dan has done wonderful work in writing them and 
stepping in to fix them on occasion, but he's also writing the wonderful 
Dekko 
mail client. It's time for others to jump in, if they are interested!


* More tests! Some have suggested simple tests such as merely attempting 
to boot the image. These tests are trivial to write since the hard work 
of getting the environment ready and running them is already in place. 
Using Autopilot or not, if you have a simple test idea it can probably 
be added and run (there's a sketch of one after this list).


* A nicer way to grok the test output. Simon has recently begun reviving 
the API for the tracker, and it would be lovely to somehow display the 
results on the tracker. Writing a script to add a test result submission 
against the daily test would be simple to do!


* We've now hit issues with scalability of hardware, so I'm looking to 
get a dedicated machine for this. I know we turned down hardware in the 
past, but it is now becoming a problem. Obviously it's lower priority as 
the reality is these tests have proven harder to keep running than 
imagined. As such, folks willing to stick with them would be wonderful!
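
As an example of how small the boot test mentioned above could be, here 
is an untested sketch. It assumes QEMU is available on the test machine, 
uses a placeholder image name, and treats "the VM is still alive after 
two minutes" as a stand-in for a real boot check:

import subprocess
import time

ISO = "xubuntu-xenial-desktop-amd64.iso"  # placeholder image path

# Boot the ISO headless in QEMU and give it two minutes.
vm = subprocess.Popen([
    "qemu-system-x86_64", "-m", "1024",
    "-cdrom", ISO, "-boot", "d", "-display", "none",
])
time.sleep(120)
booted = vm.poll() is None  # None means the VM is still running
vm.terminate()
print("boot smoke test:", "Passed" if booted else "Failed")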


Nicholas

--
Ubuntu-quality mailing list
Ubuntu-quality@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-quality