I see. Just to make sure I get it right: in (2), by sinks I mean the various metrics backends (e.g., Graphite). So it boils down to having integration tests as part of Beam (runners?) that, beyond testing the SDK layer (i.e., asserting over pipeline.metrics()), also test the specific metrics backend (i.e., asserting over inMemoryGraphite.metrics()), right?
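
For concreteness, here is a rough Java sketch of the SDK-layer assertion I have in mind. Treat it as approximate: it assumes a runner such as the DirectRunner on the classpath, and the metric result accessors (attempted()/committed() vs. getAttempted()/getCommitted()) have shifted between Beam versions.

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.MetricNameFilter;
import org.apache.beam.sdk.metrics.MetricQueryResults;
import org.apache.beam.sdk.metrics.MetricResult;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.metrics.MetricsFilter;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;

public class MetricsSdkLayerSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();
    p.apply(Create.of(1, 2, 3))
        .apply(ParDo.of(new DoFn<Integer, Integer>() {
          // Counter declared through the runner-independent Metrics API.
          private final Counter elements = Metrics.counter("my-namespace", "elements");

          @ProcessElement
          public void processElement(ProcessContext c) {
            elements.inc();
            c.output(c.element());
          }
        }));

    PipelineResult result = p.run();
    result.waitUntilFinish();

    // The "SDK layer" assertion: query the counter back through the
    // runner-agnostic result.metrics() API, whichever runner executed it.
    MetricQueryResults metrics = result.metrics().queryMetrics(
        MetricsFilter.builder()
            .addNameFilter(MetricNameFilter.named("my-namespace", "elements"))
            .build());
    for (MetricResult<Long> counter : metrics.counters()) {
      // Accessor names differ between Beam versions; attempted() shown here.
      System.out.println("elements = " + counter.attempted());
    }
  }
}

Running something like this on every runner and asserting on the queried value is the runner-independent test I read (1) to mean.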
On Mon, Jan 2, 2017 at 7:14 PM Davor Bonaci <[email protected]> wrote:

> Sounds like we should do both, right?
>
> > 1. Test the metrics API without accounting for the various sink types,
> > i.e. against the SDK.
>
> Metrics API is a runner-independent SDK concept. I'd imagine we'd want to
> have runner-independent tests that interact with the API, outside of any
> specific transform implementation, execute them on all runners, and query
> the results. Goal: make sure Metrics work.
>
> > 2. Have the sink types, or at least some of them, tested as part of
> > integration tests, e.g., have an in-memory Graphite server to test Graphite
> > metrics and so on.
>
> This is valid too -- this is testing *usage* of the Metrics API in the given
> IO. If a source/sink, or a transform in general, is exposing a metric, that
> metric should be tested in its own right as a part of the transform
> implementation.
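
P.S. To make (2) concrete, the "in-memory Graphite" I'm imagining could be as small as a test-only TCP listener that records the plaintext-protocol lines ("path value timestamp") a Graphite reporter would push. Nothing like this exists in Beam today, so the class below is purely hypothetical, just to show the shape of the assertion:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

/** Minimal in-memory stand-in for a Graphite server (plaintext protocol). */
public class InMemoryGraphite implements AutoCloseable {
  private final ServerSocket server;
  private final List<String> lines = new CopyOnWriteArrayList<>();
  private final Thread acceptor;

  public InMemoryGraphite(int port) throws IOException {
    server = new ServerSocket(port);
    // Handles a single connection, which is enough for a test.
    acceptor = new Thread(() -> {
      try (Socket socket = server.accept();
           BufferedReader in = new BufferedReader(
               new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {
        String line;
        while ((line = in.readLine()) != null) {
          lines.add(line);  // e.g. "beam.my-namespace.elements 3 1483372440"
        }
      } catch (IOException ignored) {
        // Socket closed; the test is shutting down.
      }
    });
    acceptor.start();
  }

  /** What the integration test asserts over, i.e. inMemoryGraphite.metrics(). */
  public List<String> metrics() {
    return lines;
  }

  @Override
  public void close() throws IOException {
    server.close();
  }
}

An integration test would start this on a free port, run the pipeline with the (hypothetical) Graphite reporting pointed at localhost, and assert that metrics() contains the expected counter lines.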
