Hi,

It might be nice to formalize what needs to be done when reviewing a release candidate. I don't mean this as something that would add bureaucracy and slow us down. Rather, it would be nice to have something as simple as a basic checklist of items that we could volunteer to check. That way, we could avoid duplicating effort, which would speed us up, and we could avoid missing critical checks, which would help validate the integrity of our releases.
Some potential items to check (a couple of rough command sketches follow the list):

1) Entire test suite should pass on OS X, Windows, and Linux.
2) All artifacts and accompanying checksums are present (see https://dist.apache.org/repos/dist/dev/incubator/systemml/0.10.0-incubating-rc1/).
3) All artifacts containing SystemML classes can execute a 'hello world' example.
4) LICENSE and NOTICE files for all the artifacts have been checked.
5) SystemML runs algorithms locally in standalone single-node mode.
6) SystemML runs algorithms on local Hadoop (hadoop jar ...).
7) SystemML runs algorithms on local Spark (spark-submit ...).
8) SystemML runs algorithms on a Hadoop cluster.
9) SystemML runs algorithms on a Spark cluster.
10) SystemML performance suite has been run on a Hadoop cluster.
11) SystemML performance suite has been run on a Spark cluster.
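For item 2, the check could be as simple as downloading each artifact with its signature and hashes and verifying them against the project KEYS file. The file names below are only illustrative; the actual names are whatever appears in the dist directory linked above:

    gpg --import KEYS
    gpg --verify systemml-0.10.0-incubating.tgz.asc systemml-0.10.0-incubating.tgz
    md5sum systemml-0.10.0-incubating.tgz    # compare against the posted .md5 file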
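For item 3 and items 5-7, a tiny DML script could serve as the smoke test. This is only a sketch; the jar and launcher names depend on the artifact being checked (I'm assuming the jar is named SystemML.jar and that the standalone tarball ships a run script):

    echo 'print("hello world");' > hello.dml

    # standalone single-node mode (launcher name per the standalone artifact)
    ./runStandaloneSystemML.sh hello.dml

    # local Hadoop batch mode
    hadoop jar SystemML.jar -f hello.dml

    # local Spark batch mode
    spark-submit SystemML.jar -f hello.dml

The cluster checks (items 8 and 9) would presumably be the same idea, but with real algorithm scripts and whatever master/queue configuration is appropriate for the cluster.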
Would this be too many things to check or too few? Are there any critical items missing?

Deron