> Sounds like we should do both, right?
>
> 1. Test the metrics API without accounting for the various sink types,
> i.e. against the SDK.
Metrics API is a runner-independent SDK concept. I'd imagine we'd want to have runner-independent tests that interact with the API, outside of any specific transform implementation, execute them on all runners, and query the results. Goal: make sure Metrics work. (A rough sketch of such a test is at the end of this message.)

> 2. Have the sink types, or at least some of them, tested as part of
> integration tests, e.g., have an in-memory Graphite server to test Graphite
> metrics and so on.

This is valid too -- this is testing *usage* of the Metrics API in the given IO. If a source/sink, or a transform in general, is exposing a metric, that metric should be tested in its own right as a part of the transform implementation.
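
To make point 1 concrete, here is a rough sketch (not a definitive implementation) of the kind of runner-independent test I have in mind, using the Java SDK's Metrics / MetricsFilter APIs and a JUnit TestPipeline, so the same test can be pointed at different runners through pipeline options. The "smoke" namespace and "elements" counter name are just placeholders:

import java.io.Serializable;
import static org.junit.Assert.assertEquals;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.MetricNameFilter;
import org.apache.beam.sdk.metrics.MetricQueryResults;
import org.apache.beam.sdk.metrics.MetricResult;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.metrics.MetricsFilter;
import org.apache.beam.sdk.testing.TestPipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.junit.Rule;
import org.junit.Test;

public class MetricsSmokeTest implements Serializable {

  // TestPipeline reads the runner from the test pipeline options, so the
  // same test can be executed against every runner (ValidatesRunner-style).
  @Rule public final transient TestPipeline pipeline = TestPipeline.create();

  @Test
  public void counterIsReportedByTheRunner() {
    pipeline
        .apply(Create.of(1, 2, 3, 4, 5))
        .apply(
            "CountElements",
            ParDo.of(
                new DoFn<Integer, Integer>() {
                  private final Counter elements = Metrics.counter("smoke", "elements");

                  @ProcessElement
                  public void processElement(ProcessContext c) {
                    elements.inc();
                    c.output(c.element());
                  }
                }));

    PipelineResult result = pipeline.run();
    result.waitUntilFinish();

    // Query the counter back through the runner-agnostic metrics() API.
    MetricQueryResults query =
        result
            .metrics()
            .queryMetrics(
                MetricsFilter.builder()
                    .addNameFilter(MetricNameFilter.named("smoke", "elements"))
                    .build());

    long total = 0;
    for (MetricResult<Long> counter : query.getCounters()) {
      total += counter.getAttempted();
    }
    assertEquals(5, total);
  }
}

The same run-then-query-by-name pattern is what I'd expect an IO's own tests to use for point 2 when asserting on the metrics that IO exposes; the Graphite-specific integration test would additionally check what actually reached the in-memory Graphite server.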
