Re: [DISCUSS] - Releases, Project Management & Funding Thereof

2017-07-01 Thread Ron Wheeler

Do all of these combinations need to be fully tested for each release?

What are the combinations that have been tested for the current release?

How many of these combinations are known to be running in production?

How many of these production organizations have test environments, and 
operations staff, that could be used to run the tests?
They will test new releases anyway, so this is mostly a change to the 
timing and to the actual scripts.
They may be able to augment the existing scripts with the test scripts 
they are already using, or work on completing the scripts already planned.


I am unsure what Paul means by "we need hardware to run tests on".
Clearly hardware is required for testing but it would not seem to matter 
where the hardware exists or who owns it as long as it is available.


Is there a list of tests that are missing?
Is the test suite documented so that end-users can actually use the 
tests on their own test systems?
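If the suite were documented, an end-user run might look something like this. This is a sketch only: the config path and test module are illustrative, and the exact nosetests/Marvin flags may differ between CloudStack versions.

```shell
# Install the Marvin test framework shipped in the CloudStack source tree,
# then point it at a config describing the local test environment.
pip install tools/marvin/dist/Marvin-*.tar.gz

# Run one smoke test module against that environment (paths are illustrative).
nosetests --with-marvin \
  --marvin-config=setup/dev/advanced.cfg \
  test/integration/smoke/test_vm_life_cycle.py
```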


This is a bit of a switch in thinking about testing and about the role 
of the users in the release management process, but it has some benefits.
The testing function of the release team shifts to a project 
management role that involves tracking and coaching the testing ecosystem.


Ron




On 30/06/2017 4:57 PM, Paul Angus wrote:

Taken from a talk on CloudStack testing [1]...

There are Many, many, MANY permutations of a CloudStack deployment….
• Basic / Advanced
• Local / shared / mixed storage
• More than 8 common hypervisor types/versions
• 4 or 5 Management server OS possibilities
• That’s 144 combinations, only looking at the basics.

[1] 
https://www.slideshare.net/ShapeBlue/cloudstack-eu-user-group-trillian?qid=74ff2be0-664c-4bca-a3dc-f30d880ca088&v=&b=&from_search=1
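The matrix above multiplies out quickly. A rough sketch of the arithmetic, where the entries in each list are stand-ins (the exact counts, and which versions get included, are assumptions rather than the talk's data):

```python
from itertools import product

# Illustrative stand-ins for the dimensions Paul lists.
zone_types  = ["basic", "advanced"]
storage     = ["local", "shared", "mixed"]
hypervisors = ["hv%d" % i for i in range(8)]   # 8+ hypervisor type/version combos
mgmt_os     = ["os%d" % i for i in range(4)]   # 4-5 management-server OS options

# Every combination of zone type, storage, hypervisor and management OS.
matrix = list(product(zone_types, storage, hypervisors, mgmt_os))
print(len(matrix))  # 192 with these counts; trim any dimension and you land near the quoted 144
```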

Trillian [2] can create any of those, and multiple at the same time, but the 
amount of hardware required to do that means that we have to pick our battles. 
Running the blueorangutan bot command '@blueorangutan matrix' in a PR will run 
the smoke test suite against that PR using 3 environments (one each of KVM, 
XenServer and vSphere) and takes around 8 hours.

But that is only looking for major regressions.  A full component test run 
takes around 5 days to run and is riddled with bugs in the tests.

Ultimately these are still of limited scope; few people are as diligent as, 
say, Mike T in creating practical Marvin tests for their code / features.

[2] https://github.com/shapeblue/Trillian

Therefore we need hardware to run tests on, but more importantly we need the 
tests to exist and work in the first place.  Then we can really do something.



paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue



-Original Message-
From: Alex Hitchins [mailto:a...@alexhitchins.com]
Sent: 30 June 2017 21:34
To: dev@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: RE: [DISCUSS] - Releases, Project Management & Funding Thereof

Consultation with users is something that should definitely be done. Canvass as 
many as possible.

I agree that most people will be running test environments before full rollout 
of any technology. I guess I see it a little through a CTO's eyes - why 
shortlist a technology that doesn't even endorse its own releases?

Hopefully we will get some more replies to this thread from other CloudStack 
enthusiasts to help shape this conversation.

I'm setting up a new development environment now to get my hands mildly soiled. 
Going the Windows route again. Fancy a challenge for the weekend.




Alexander Hitchins

E: a...@alexhitchins.com
W: alexhitchins.com
M: 07788 423 969
T: 01892 523 587

-Original Message-
From: Ron Wheeler [mailto:rwhee...@artifact-software.com]
Sent: 30 June 2017 21:08
To: dev@cloudstack.apache.org
Subject: Re: [DISCUSS] - Releases, Project Management & Funding Thereof


On 30/06/2017 3:28 PM, Alex Hitchins wrote:

We can't validate all scenarios, no - hence suggesting several common setups as 
a reasonable baseline. I like the idea of users posting their hardware and 
versions as a community endeavour.

I strongly feel there should be an established, physical setup that the release 
team have access to in order to validate a release.

This is perhaps something that should be requested from the user community.
I would expect that anyone running Cloudstack in production on a major site 
would have a test setup and might be very happy to have the dev team test on 
their setup.
This would save them a lot of resources at the expense of a bit of time on 
their test environment.


If this was some random cat meme generator on GitHub, I'd accept the risk in 
running an untested version. For something I'd be running my business on, 
however, I'd expect there to be some weight behind the release.

Perhaps I have unrealistic expectations!

Not at all.
Your expectations might be the key to making a pitch to the user community for 
some help from people and organizations that are not interested in writing code 
but have a major interest in testing.
In addition to access to test equipment, this might actually get new people on 
the team with the right skills required to extend the test scripts and test 
procedure documentation.

Does anyone have a list of the configuration specifications that are required 
to test a new release?

Would it help to approach major users of Cloudstack with a direct request for 
use of their test equipment and QA staff in return for early access to new 
releases and testing on their hardware?

Ron

RE: [DISCUSS] - Releases, Project Management & Funding Thereof

2017-07-01 Thread Alex Hitchins
Out of interest, are there any guidelines/rules in place to reject PRs without 
unit tests?





RE: [DISCUSS] - Releases, Project Management & Funding Thereof

2017-07-01 Thread Will Stevens
Yes, we can totally reject PRs until we are happy with the associated
tests.
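A check like that could even be automated against a PR's changed file list. A minimal sketch, assuming the repo's path conventions (Marvin tests under test/integration, *.java as production code) - the function name and paths are mine, not an existing CloudStack tool:

```python
def needs_tests(changed_files):
    """Flag a PR that touches production code without touching any tests.

    The path conventions here (test/integration for Marvin tests, *.java
    for production code) are assumptions about the repo layout.
    """
    touches_code = any(
        f.endswith(".java") and not f.startswith("test/") for f in changed_files
    )
    touches_tests = any(
        f.startswith("test/integration/") or "src/test/" in f for f in changed_files
    )
    return touches_code and not touches_tests

# A PR changing only server code gets flagged; one that also adds a test does not.
print(needs_tests(["server/src/com/cloud/vm/UserVmManagerImpl.java"]))      # True
print(needs_tests(["server/src/com/cloud/vm/UserVmManagerImpl.java",
                   "test/integration/smoke/test_vm_life_cycle.py"]))        # False
```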

On Jul 1, 2017 5:48 PM, "Alex Hitchins"  wrote:

> Out of interest, are there any guidelines/rules in place to reject PR's
> without unit tests?

RE: [DISCUSS] - Releases, Project Management & Funding Thereof

2017-07-01 Thread Will Stevens
Which is part of the reason the RM job is hard and time consuming:
- checking that PRs have the appropriate tests;
- updating the CI to include the new tests;
- running and reporting CI for the PR (with very limited CI infra community-wide);
- chasing PR authors to get their PRs to a point where you are happy they are
not breaking master;
- rinse, repeat for 200+ PRs...
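That per-PR checklist could be tracked with something as simple as the sketch below. The field names and PR numbers are hypothetical bookkeeping of my own, not an existing CloudStack tool:

```python
from dataclasses import dataclass

@dataclass
class PrStatus:
    """One PR's progress through the release manager's checklist."""
    number: int
    has_tests: bool = False      # appropriate tests exist in the PR
    ci_updated: bool = False     # CI includes the new tests
    ci_green: bool = False       # CI has run and passed for the PR
    author_chased: int = 0       # times the author was pinged

    def ready_for_master(self):
        return self.has_tests and self.ci_updated and self.ci_green

# Hypothetical queue: only fully checked-off PRs are ready to merge.
queue = [PrStatus(2101, has_tests=True, ci_updated=True, ci_green=True),
         PrStatus(2102, has_tests=True)]
print([pr.number for pr in queue if pr.ready_for_master()])  # [2101]
```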


RE: [DISCUSS] - Releases, Project Management & Funding Thereof

2017-07-01 Thread Will Stevens
Oh, and if a system VM is touched, then you have to add in a new system VM
build and install into the CI setup prior to testing...
