On 06/27/2016 02:57 AM, Fraser Tweedale wrote:
On Fri, Jun 24, 2016 at 12:08:24PM +0200, Milan Kubík wrote:
On 06/24/2016 03:42 AM, Fraser Tweedale wrote:
On Tue, Jun 21, 2016 at 05:01:35PM +0200, Milan Kubík wrote:
Hi Fraser and list,

I have made changes to the test plan on the wiki [1] according to the
information in "[Testplan review] Sub CAs" thread.

I also implemented the tests in the test plan:

patch 0038 - CATracker and CA CRUD test
patch 0039 - extension to CA ACL test
patch 0040 - functional test with ACLs and a certificate profile, reusing my
previous S/MIME-based tests. This patch also tests the cert-request
behavior when the profile ID or the CA cn is omitted (see the sketch below).
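
To illustrate the omitted-options cases: with no CA or profile given, the
request should be treated as if the defaults had been named explicitly,
roughly as below. A sketch only; csr and smime_user stand in for the test
setup, and the parameter names are from memory, so check them against the
cert plugin::

      from ipalib import api

      # With --ca / --profile-id omitted, cert-request is expected to fall
      # back to the top-level IPA CA ('ipa') and the caIPAserviceCert
      # profile, i.e. behave the same as naming them explicitly.
      api.Command.cert_request(csr, principal=smime_user)
      api.Command.cert_request(csr, principal=smime_user,
                               cacn=u'ipa', profile_id=u'caIPAserviceCert')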

The tests ATM do not verify the Issuer name in the certificate itself, only
via the IPA entry of the certificate.

The approach you are using::

      assert cert_info['result']['issuer'] == smime_signing_ca.ipasubjectdn

is not quite as you describe (these are virtual attributes, not
attributes of an actual entry); but the approach is valid.
So the issue is just in the wording? The other approach I could have used here
is to retrieve the two certificates and compare the fields manually.
Are these virtual attributes created from the certificate itself?

That's correct.
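
(If you ever do want the belt-and-braces check against the certificate
itself, something like the following would work. A rough sketch only,
assuming a reasonably recent python-cryptography and that the result dict
carries the base64-encoded DER under 'certificate'; adjust to whatever
cert-show actually returns in your tree)::

      import base64

      from cryptography import x509
      from cryptography.hazmat.backends import default_backend
      from ipapython.dn import DN

      # 'certificate' is assumed to hold the base64-encoded DER blob
      # returned by cert-request / cert-show.
      der = base64.b64decode(cert_info['result']['certificate'])
      cert = x509.load_der_x509_certificate(der, default_backend())

      # Normalise both sides to DN objects before comparing, since the
      # plain string representations may differ in formatting.
      assert DN(cert.issuer.rfc4514_string()) == \
          DN(str(smime_signing_ca.ipasubjectdn))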

Fraser, could you please verify my reasoning behind the test cases for
cert-request in the patch 40?

The tests look OK.  With the default CA / default profiles, is there
appropriate isolation between test cases to ensure that if, e.g.,
some other test case adds/modifies CA ACLs such that these
expected-to-fail tests now pass, this does not affect the
TestCertSignMIMEwithSubCA test case?

Thanks,
Fraser
The lifetime of the ACL, the S/MIME CA and the S/MIME profile is constrained
by the class scope enforced by pytest.
The two test cases depend on the behavior documented in the design: that is
what cert-request falls back to when the CA or the profile ID is not provided.
Unless something changes the caIPAserviceCert profile or its affiliated ACL,
the test cases are safe.
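
(To illustrate the lifetime constraint: it is the ordinary class-scoped
fixture mechanism, roughly as below. The helper names are made up for the
example; the real patches go through the tracker objects)::

      import pytest

      @pytest.fixture(scope='class')
      def smime_signing_ca(request):
          # Illustrative only; the actual tests build the sub-CA through
          # CATracker rather than an ad-hoc helper like this one.
          ca = make_smime_subca()            # hypothetical setup helper
          request.addfinalizer(ca.delete)    # torn down when the class ends
          return ca

      class TestCertSignMIMEwithSubCA(object):
          def test_sign_smime_csr(self, smime_signing_ca):
              # The sub-CA (and the ACL and profile fixtures defined the
              # same way) exists only for the duration of this class.
              pass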

If you have thought about possible interference from other tests, I
am happy.

Note another problematic scenario: what if a different (preceding)
test adds a CA ACL that would allow the requests that you expect to
fail?  Just something to think about :)

Thanks,
Fraser
Then the failure would be a problem of the preceding test and we would need to
fix it there. We are already dealing with test side effects in other parts of
the execution...

The test is constructed in a way that isolates it (to a certain degree) by the
mechanisms available in pytest. Of course I cannot make the test future-proof
or guarantee that a bug in some other test will not affect the execution of
other tests, since they all run against one IPA instance. I do not think,
however, that a potentially misbehaving test case that might interfere should
prevent us from implementing this and similar test cases.

If you have spotted any specific issues in the patches, I'm happy to fix them.
I will try to think more about corner cases here.
[1]: http://www.freeipa.org/page/V4/Sub-CAs/Test_Plan

Cheers

--
Milan Kubik

Attaching rebased patches and removing the expected failure from one of the
tests, as ticket 5981 has a fix posted.

--
Milan Kubik



