Hi Mike,

Shane is exploring options for automating tests in VS Code and believes we 
might be able to get away with a pure TypeScript test suite, which would be 
great IMHO.  I'll let him talk more about that if he's willing and has the time.

On the topic of code reviews, I think we should codify our expectations in a 
Wiki page on the GitHub repo.  Maybe we can distill it down to a checklist like 
I've seen here around release time.

To get the ideas flowing, here is a brief (non-exhaustive) list of what I'd 
expect before approving a merge:

CI should handle:
- Linting
- Style checking
- Unit testing
- Basic integration smoke testing
- Cross-platform testing
- Code coverage
- License checking
- Packaging and publishing (if tagged with a version tag)

Human review:
- Has CI run successfully?
- Does the PR describe the change and how to test it?
- Were the tests successful?
- Do the tests address/cover the changes that were made?
- Does the PR completely fix the issue it was intended to fix?
- Does the code conform to best practices (where are these defined?)?
- Is the code idiomatic for the language used?
- Are we introducing unnecessary dependencies?
- Do semantic details, such as function names and purposes, make sense?
- Are we using comments effectively?
- Did we avoid awkward, complex, or confusing constructions to the degree 
possible?
- What is the risk/impact if something in this PR is busted?

The review is also a knowledge transfer, so to the reviewer:
- Do you understand what you've reviewed?
- Do you think you can maintain this code if the author isn't available?

The last bits on knowledge transfer speak to your qualifications as a reviewer. 
Given those questions, for example, I would not (yet) be ideally qualified to 
review Scala PRs.

Hope it helps,
Davin

On 5/24/22, 3:21 PM, "Mike Beckerle" <mbecke...@apache.org> wrote:

    Are there test automation tools available that can be used to drive the
    vscode debugger?

    This would ideally include not only something that drives the UI via
    easily created/maintained scripts, but also things like code coverage
    analysis.

    I am always an advocate of built-in-self-test and of developers putting in
    the test automation for new/changed features.
    This is easy enough for libraries like the main daffodil library, where we
    also have codecov and sonarqube analysis, and where the CI/CD pipeline
    runs a test suite for us.

    The whole subject is harder for UIs where it depends on the ability to
    script testing of the UI without the tests being so fragile that any change
    to the UI breaks too many of them.

    When a PR is created for the Daffodil VSCode IDE, what is the developer's
    expectation of what happens during code review?
    E.g., Are you expecting that at least one reviewer downloads and builds the
    new code base, and then tests the new/changed features?
    Or are you expecting just code-scrutiny for advice on patterns/idioms,
    likely errors, etc.?

    Please let's discuss.

    -mike beckerle


-----------------------------------------------------------------
This message and any files transmitted within are intended
solely for the addressee or its representative and may contain
company proprietary information.  If you are not the intended
recipient, notify the sender immediately and delete this
message.  Publication, reproduction, forwarding, or content
disclosure is prohibited without the consent of the original
sender and may be unlawful.

Concurrent Technologies Corporation and its Affiliates.
www.ctc.com  1-800-282-4392
-----------------------------------------------------------------