Looks fine to me.

Xuelei

On 9/30/2016 4:39 PM, Tim Du wrote:
Thanks, Xuelei, for reviewing it.
The latest webrev is available here:
http://cr.openjdk.java.net/~tidu/8164322/webrev.02/ . Please help to
review it. Thanks

Regards
Tim
On 2016/9/29 18:16, Xuelei Fan wrote:
On 9/29/2016 5:36 PM, Tim Du wrote:
Thank you for reviewing the code.
I removed the code that throws an exception, and added code that
reports when the NSS library is not installed on the machine.
The latest code is here:
http://cr.openjdk.java.net/~tidu/8164322/webrev.01/ . Please help to
review it.
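
For illustration, the new behavior is roughly along these lines (only a
sketch; the exact message wording and variable names in webrev.01 may
differ):

    // Sketch: instead of throwing, report why the PKCS11 cases are
    // skipped.  "nssLibDir" stands for the directory resolved from
    // osMap; null means no NSS library was found on this machine.
    if (nssLibDir == null) {
        System.out.println("Warning: NSS library is not installed on "
                + "this machine, skipping the PKCS11 test cases.");
        return;
    }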

Looks fine to me.  A very minor comment:
 303   + ",please initialize ...

Please add a white space between ',' and 'please'.

Thanks,
Xuelei

As for whether to throw an exception for an OS/machine issue, I filed
an enhancement, https://bugs.openjdk.java.net/browse/JDK-8166893 , so
we can keep discussing it there; in the meantime, the test
sun/security/pkcs11/PKCS11Test.java can first be enabled to run on the
ARM platform. Thanks

Regards
Tim
On 2016/9/29 13:05, Xuelei Fan wrote:
On 9/29/2016 9:26 AM, Artem Smotrakov wrote:
Hi Xuelei,

This is not a problem with machine configuration. The actual test
cases are not going to be run (even if there are NSS libraries on a
test machine) unless "osMap" contains an entry for the specific
platform.

OK.  Please adjust the fix for that purpose accordingly.

Tim, the current fix has two parts.  The first part, which throws an
exception when the machine configuration is the problem, is not within
the scope of Artem's concern; please remove it.  The second part,
which adds new items to the osMap, is fine to me.  I think we are on
the same page now.  Please update the webrev accordingly.  I will
approve the second part of the fix.
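
For what it's worth, the second part would look something like the
following (the keys and library paths here are only typical ARM Linux
guesses, not necessarily what the webrev adds):

    // Hypothetical new osMap entries for ARM Linux; the actual webrev
    // may use different keys or library paths.
    osMap.put("Linux-arm-32", new String[] {
            "/usr/lib/arm-linux-gnueabi/nss/",
            "/usr/lib/arm-linux-gnueabihf/nss/" });
    osMap.put("Linux-aarch64-64", new String[] {
            "/usr/lib/aarch64-linux-gnu/",
            "/usr/lib/aarch64-linux-gnu/nss/" });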

Thanks,
Xuelei

Artem


On 09/28/2016 05:52 PM, Xuelei Fan wrote:
Hi Artem,

What do you think about fixing the testing infrastructure (JPRT/Mach5)
with the proper configuration?  I think it is easier than updating a
large bunch of test code.

I understand your concerns, but I don't want to add additional effort
for those who need to run the tests in their own environments.  Some
people run the tests locally before submitting JPRT jobs (and external
people have no access to JPRT systems at all).

Updating the testing infrastructure may address both your concerns and
mine.

I'm not sure whether the @requires tag would work or not.  It would be
great if you could find a solution with that tag.

Thanks,
Xuelei

On 9/29/2016 4:32 AM, Artem Smotrakov wrote:
Hi Xuelei,

I understand your concerns, but I'd prefer to be aware of situations
where a test reports that it passed when it actually did nothing.

How about using the @requires tag? We could try to specify all the
expected platforms. If I understand correctly, jtreg won't run a test
if the specified requirements are not met. In that case, you need to
look at the report to make sure all the tests you are interested in
were actually run.
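
For example, a test's jtreg header could carry something like the tag
below (the platform list is only illustrative and the test class name
is a placeholder):

    /*
     * @test
     * @bug 8164322
     * @requires (os.family == "linux" & os.arch == "amd64") | os.family == "windows"
     * @run main SomeNssTest
     */

jtreg would then simply not select the test on platforms that do not
match the expression.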

Artem


On 09/28/2016 08:00 AM, Xuelei Fan wrote:
On 9/28/2016 9:20 PM, Denis Kononenko wrote:
There are 60+ tests related to PKCS11. For two years they have been
"passing" on 3 unsupported platforms, on hosts where usually no NSS
libraries were installed. How can we rely on these results?
;-) The phrase "unsupported platforms" is very confusing in this mail
thread.

Let's think more about what happens if a test fails.  What will one do
if a test fails?
1. A test failure means a source code problem to a developer.  One
cannot submit a changeset if a test fails, so he has to spend
additional effort analyzing the failure.  It can take a developer a
lot of effort to find the root cause.  I would never like this
unnecessary cost.
2. In order to get the test to pass, he has to install the NSS
libraries, even though NSS has nothing to do with his changeset.  That
may be a very, very hard or even impossible step for him (for example,
because of license issues).

TBH, I do not see much benefit in failing on unsupported platforms.  I
agree that skipping-as-pass is not a good idea, but failing merely to
warn is worse.

I think the root cause is that the "unsupported platforms" are
actually supported platforms, but by accident the NSS libraries are
not installed, or not installed properly.

If one is not interested in NSS, the test gets ignored (passed).  If
one is interested in NSS, he should install the NSS libraries and the
test gets checked.

What do you think about fixing the testing infrastructure so that the
NSS libraries are properly installed?

> The problem is that the tests report they passed, but actually they
> were skipped.  I have no objection against skipping tests, but it
> would be better to give a hint somehow about how many tests were
> skipped and why.
Agreed.  Unfortunately, there are only two options at present, "pass"
or "fail".  It would be nice if there were a grey area.  Any idea how
to produce a summary of skipped tests and the reasons?
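
One low-tech option (just a sketch, not an existing jtreg feature) is
to print a uniform, grep-able marker whenever a test decides to skip,
so the reasons can at least be summarized from the .jtr logs
afterwards:

    // Hypothetical helper; "SKIPPED:" is an arbitrary marker chosen
    // here so that a search over the jtreg output can count skipped
    // tests and list the reasons.
    static void reportSkip(String testName, String reason) {
        System.out.println("SKIPPED: " + testName + ": " + reason);
    }

A grep for "SKIPPED:" over the jtreg work directory would then give
the summary.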

Xuelei



