Hi Mary.
On 05/14/12 05:08 PM, Mary Ding wrote:
Jack:
On 05/14/12 11:01 AM, Jack Schwartz wrote:
Hi Mary.
Here are my comments:
README:
line 97: Probably better not to reference a specific bug. However,
if you choose to reference it, please update the bug with a reminder
to remove it from the README once it is fixed.
We need to reference the test suite bugs because otherwise we will not
know why the test counts were inaccurate and why certain tests were
excluded.
OK. I now see there are two bugs referenced.
164-167: slim_regression_test calls slim_regression_test.py. I
suggest updating the README to refer only to slim_regression_test. I
also suggest renaming one of the files so people don't confuse the
two or run the wrong file by mistake.
I have fixed the above.
Thanks.
The webrev has been updated to reflect the changes.
Webrev looks fine. Please update the two bugs mentioned in the README,
to remind their RE to fix the README as part of fixing those bugs, if
you haven't already done so.
Thanks,
Jack
More inline below.
On 05/11/12 02:35 PM, Mary Ding wrote:
Hi:
Can I have a code review for slim_test-related issues for bugid
7167676?
CR: http://monaco.sfbay/detail.jsf?cr=7167676 - Missing slim_test in
tests.nose compare to slim_regression_tests.py and need to update
README
The webrev is here:
https://cr.opensolaris.org/action/browse/caiman/md5645/7167676
This is how testing is done:
1. Before the fix, these are the test counts:
slim_test - 1628 tests executed
slim_regression_tests - 1877 tests executed
After the fix:
slim_test - 1641 tests; the count is 236 lower than the 1877 reported
by slim_regression_tests due to a known issue with the js2ai tests.
The js2ai tests get executed, but the count reported by nosetests is
not correct. The bugid for the js2ai issue is 7057701.
slim_regression_tests - 1877 tests
slim_test_result pointer is here:
/net/osol-bldx/datapool/ding/7167676/slim_source/usr/src/tools/tests/slim_test_result
slim_regression_test result pointer is here:
/net/osol-bldx/datapool/ding/7167676/slim_source/usr/src/tools/tests/slim_regression_test_result
The failures in the two result files are identical, and the following
issues were found:
There are 5 failures in cmd/auto-install/checkpoints/test; the
failures are tracked under bugid 7167702.
I also found that lib/install_logging/test and lib/liberrsvc/test do
not execute any tests at all in either tests.nose or
slim_regression_tests.py.
The following bugs were filed to track the problem:
lib/install_logging/test - 7168025
lib/liberrsvc/test - 7168027
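For what it's worth, one way to confirm that two nose result files
report the same failures is to compare their "FAIL:" header lines. A
minimal sketch of that idea follows; the helper and the bare file
names are illustrative, not how the comparison was actually performed:

#!/usr/bin/python
# Hypothetical helper: compare the failing tests reported in two nose
# result files.  The file names below are illustrative.

def failures(path):
    """Return the set of 'FAIL:' header lines from a nose result file."""
    f = open(path)
    try:
        return set(line.strip() for line in f if line.startswith('FAIL:'))
    finally:
        f.close()

a = failures('slim_test_result')
b = failures('slim_regression_test_result')
if a == b:
    print 'failure sets are identical (%d failures)' % len(a)
else:
    print 'only in slim_test_result:', sorted(a - b)
    print 'only in slim_regression_test_result:', sorted(b - a)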
2. I also created a manual.nose file and will create a Hudson job to
run the manual tests automatically on tcu27-01. (A sketch of what
manual.nose might contain follows the test output below.)
Part of the reason there are manual tests is that those tests may
change the system they are running on. (At least, that was the
criterion for making a test manual when I did tests for Derived
Manifests.) So be careful about automating "manual" tests on a
public system.
For some of those manual tests, the test system needs to have certain
SMF services enabled in order for the tests to work. For example, for
aimdns, I talked to John Fischer and enabled aimdns on the test
machine.
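As an illustration, an automated run could check such prerequisites up
front. Here is a minimal sketch of that idea; the FMRI and the
svcs(1)-based check are my own placeholders, not part of the webrev:

#!/usr/bin/python
# Hypothetical pre-check: refuse to run the manual tests unless the SMF
# service they depend on is online.  The FMRI below is a placeholder.
import subprocess
import sys

FMRI = 'svc:/network/dns/multicast:default'   # placeholder FMRI

def service_online(fmri):
    """Return True if svcs(1) reports the given FMRI as online."""
    p = subprocess.Popen(['/usr/bin/svcs', '-H', '-o', 'state', fmri],
                         stdout=subprocess.PIPE)
    out, _ = p.communicate()
    return p.returncode == 0 and out.strip() == 'online'

if not service_online(FMRI):
    sys.exit('%s is not online; enable it before running the manual '
             'tests' % FMRI)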
For derived manifests, the tests are currently disabled because they
did not work. If the derived manifests tests need to be run on a
regular basis, you will need to provide a README and instructions on
how to get the tests to run.
Also you may want to verify that the starting environment for running
those tests is the same each time those tests get run.
Since I did not write and do not own the tests, it will be up to the
test owners to figure out whether the starting environment is the
same.
Thanks,
Jack
tcu27-01# ./slim_test -c manual.nose
#1644 Test that AImDNS().browse returns True upon finding a service
... ok
#1645 Test AImDNS().find() returns True when service is found and
False ... ok
#1646 Verify a unique service shows up for all interfaces listed ...
FAIL
#1647 Test the AImDNS().register() method ... ok
#1648 test covers when there is an AIService ... ok
#1649 test covers when there is not an AIService ... FAIL
======================================================================
FAIL: Verify a unique service shows up for all interfaces listed
----------------------------------------------------------------------
Traceback (most recent call last):
File
"/export/home/ding/7167676/slim_source/usr/src/cmd/installadm/test/manual/test_aimdns_manual.py",
line 208, in test_interfaces
interface
AssertionError: Unable to find unique service on interface: net0
======================================================================
FAIL: test covers when there is not an AIService
----------------------------------------------------------------------
Traceback (most recent call last):
File
"/export/home/ding/7167676/slim_source/usr/src/cmd/auto-install/test/manual/test_ai_sd.py",
line 105, in test_no_aiservice
assert aisvc.lookup() == -1, 'lookup succeeded'
AssertionError: lookup succeeded
----------------------------------------------------------------------
Ran 6 tests in 148.830s
FAILED (failures=2)
tcu27-01#
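For reference, here is a sketch of what manual.nose might contain. It
assumes the .nose files are ordinary nosetests config files consumed
via the -c option; the option names are real nosetests config options
and the test paths are taken from the tracebacks above, but the actual
file contents may differ:

[nosetests]
# Limit the run to the manual test directories.
tests=cmd/installadm/test/manual,cmd/auto-install/test/manual
# Number each test; the with-id plugin produces the #1644-style prefixes.
with-id=1
verbosity=2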
3. I also created a wrapper script, slim_regression_test, that
executes slim_regression_test.py.
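For illustration, a minimal sketch of such a wrapper; the actual
wrapper in the webrev may differ:

#!/usr/bin/python
# Hypothetical wrapper: re-exec slim_regression_test.py from the same
# directory, forwarding any command-line arguments.  The exit status is
# that of the real script, since execv replaces this process.
import os
import sys

script = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                      'slim_regression_test.py')
os.execv(sys.executable, [sys.executable, script] + sys.argv[1:])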
_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss