Angela wrote:
>> I'm not comfortable with the lack of _full_ test automation, including
>> verification of the final installed image. If the final answer is that
>> we must have manual testing, I think QE will need to own that.
> There are some scenarios that can't be covered by automated tests. For
> example, we plan to include an interoperability use case: we will
> install other OSes (Windows, Linux) and then install Solaris to see
> whether it can be installed alongside them.

Angela,

We need to differentiate between the test automation itself and the
hardware/software configurations it's run on.  In this case we'll
essentially be running the same test code (perform an install) on some
systems that contain no other type of OS, and on others that have one
or more other OS types installed.  The same automated test case is used,
and we accomplish the interoperability testing through the
hardware/software combinations it's run on.

Interoperability testing therefore will be automated so long as
whoever is running the tests does so on a mix of systems and
configurations that include the other OS types.
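To make the idea concrete, here is a hypothetical sketch of what driving one automated install test across several configurations might look like. The function name and configuration labels are illustrative only; they are not part of any actual test framework we use:

```shell
#!/bin/sh
# Illustrative only: the same automated install test is run unchanged on
# each host configuration; interoperability coverage comes from the mix
# of configurations, not from separate test code.

run_install_test() {
    # Placeholder for the real automated install test case.
    echo "running install test on config: $1"
}

# Hypothetical configuration labels: hosts with and without other
# OS types pre-installed.
for config in bare windows-preinstalled linux-preinstalled \
              windows-linux-preinstalled; do
    run_install_test "$config"
done
```

The point is that no interoperability-specific test code is needed; the test matrix supplies the coverage.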

> As for the verification of the final installation image, we actually
> have a script to do that. It relies on the STEP framework to control
> the test system through the reboot and post-install verification. This
> will be automated once it is integrated into PIT; currently we can
> only run it manually.

Would it be possible for us to use the STEP framework ourselves for
our test execution?  The new test requirement expects full automation
at putback time, not at some later point when PIT picks the suite up,
so if we could make use of STEP it would help meet that requirement
and improve our efficiency in executing the tests prior to putback.

> We re-arranged the manual test execution, so the test schedule will be
> changed to:
> 11/23-12/01 automated test on virtual machines, manual test on x86 and sparc

Is manual testing on non-virtual machines required solely because of
the reboot issue?

> Do you have any other concerns on these?
>> I think we should have PIT runs.
> Yes, the automated test suite will be integrated into PIT.
> We plan to have two phases:
> In phase 1, we will integrate the automated test suite into STC at the
> same time as project integration, per the new policy.
> In phase 2, we will integrate the suite into PIT.

To clarify for others, we (Solaris QE) are responsible for getting the
test suites integrated into the STC gate using a test framework that
ON-PIT can consume.  We will notify PIT in advance that the test suite
is coming and let them know what types of hardware/software configs
they should run it on to provide good regression testing.  After that
we don't have control over how quickly ON-PIT picks up the suite and
starts running it regularly.

Typically ON-PIT quickly picks up suites that don't have special
hardware requirements.  Install presents some special challenges
for them, and I haven't talked with them on this subject in a while,
so I'll ping them to discuss what's coming out of the Install test
space and get their feedback.  I'll report back on what I learn.

>> Why no memory leak testing?
> This was discussed in our test meeting with Sue. Sue agreed that,
> since the text installer will use Python, there is no need for memory
> leak testing.

Memory leak testing is also of limited interest during an install: so
long as a leak isn't large enough to scupper the install itself, it
doesn't matter, since the machine will be rebooted right afterwards.

Regards,

Andre
