Similar to the `validate()` method on Pipeline, which checks the pipeline specification, a dry run would check the pipeline translation and report errors back to the user.

Assuming that Runners throw errors for unsupported features, that would already give users confidence that they will be able to run their pipelines with a specific Runner.
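As a rough illustration of the idea (all names here are hypothetical, not existing Beam API), a dry run could reuse the runner's translation step and collect every unsupported feature instead of failing at job submission:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch only: dryRun() walks the features a pipeline uses and reports
// every one the targeted runner cannot translate, rather than throwing
// on the first error at run time. Feature names are illustrative.
public class DryRunSketch {
    static List<String> dryRun(List<String> pipelineFeatures, Set<String> runnerSupported) {
        List<String> unsupported = new ArrayList<>();
        for (String feature : pipelineFeatures) {
            if (!runnerSupported.contains(feature)) {
                unsupported.add(feature);
            }
        }
        return unsupported; // an empty list means the runner accepts the pipeline
    }

    public static void main(String[] args) {
        List<String> pipeline = Arrays.asList("ParDo", "GroupByKey", "SplittableDoFn");
        Set<String> supported = new HashSet<>(Arrays.asList("ParDo", "GroupByKey"));
        System.out.println(dryRun(pipeline, supported)); // prints [SplittableDoFn]
    }
}
```

The user gets the full list of problems in one pass, before submitting anything to a cluster.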

On 17.10.18 15:28, Robert Bradshaw wrote:
On Wed, Oct 17, 2018 at 3:17 PM Kenneth Knowles <k...@apache.org> wrote:

    On Wed, Oct 17, 2018 at 3:12 AM Maximilian Michels <m...@apache.org> wrote:

        A dry-run feature would be useful, i.e. the user can run an
        inspection on the pipeline to see if it contains any features
        which are not supported by the Runner.


    This seems extremely useful independent of an annotation processor
    (which also seems useful), and pretty easy to get done quickly.


+1, this would be very useful. (It could also be useful for cheaper testing of the Dataflow or other non-local runners.)

        On 17.10.18 00:03, Rui Wang wrote:
         > Sounds like a good idea.
         >
          > Sounds like while coding, the user gets a list showing whether a
          > feature is supported on different runners, and can check the list
          > for the answer. Is my understanding correct? Will this approach
          > become slow as the number of runners grows? (It's just a question,
          > as I am not familiar with the performance of combining a long
          > list, annotations, and the IDE.)
         >
         >
         > -Rui
         >
          > On Sat, Oct 13, 2018 at 11:56 PM Reuven Lax <re...@google.com> wrote:
         >
          >     Sounds like a good idea. I don't think it will work for all
          >     capabilities (e.g. some of them, such as "exactly once",
          >     apply to all of the API surface), but it will be useful for
          >     the ones that we can capture.
         >
          >     On Thu, Oct 4, 2018 at 2:43 AM Etienne Chauchot <echauc...@apache.org> wrote:
         >
          >         Hi guys,
          >         As part of our user experience improvements to attract
          >         new Beam users, I would like to suggest something:
          >
          >         Today we only have the capability matrix to inform users
          >         about feature support among runners. But they might
          >         discover only when the pipeline runs, when they receive
          >         an exception, that a given feature is not supported by
          >         the targeted runner.
          >         I would like to suggest translating the capability
          >         matrix into the API, with annotations for example, so
          >         that, while coding, users could know that a given
          >         feature is not yet supported on the runner they target.
          >
          >         I know that the runner is only specified at pipeline
          >         runtime, and that adding runner-specific code would leak
          >         runner implementation details and go against
          >         portability. So these could be purely informative
          >         annotations, like @Experimental, with no annotation
          >         processor.
          >
          >         WDYT?
          >
          >         Etienne
          >
         >

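Etienne's suggestion quoted above could look roughly like this. The annotation name, its retention, and its runner-name attribute are invented here for illustration and are not part of the Beam API:

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Arrays;

public class CapabilityAnnotationSketch {
    // Purely informative annotation in the spirit of @Experimental: no
    // annotation processor, just a marker an IDE or javadoc can surface.
    // Name and attribute are hypothetical, not existing Beam API.
    @Documented
    @Retention(RetentionPolicy.RUNTIME) // RUNTIME only so the demo below can read it
    @Target({ElementType.METHOD, ElementType.TYPE})
    @interface UnsupportedOnRunner {
        String[] value(); // runner names, taken from the capability matrix
    }

    // Example: a hypothetical feature the capability matrix marks as
    // unsupported on one runner.
    @UnsupportedOnRunner({"SparkRunner"})
    static class SomeStatefulTransform {}

    public static void main(String[] args) {
        UnsupportedOnRunner ann =
                SomeStatefulTransform.class.getAnnotation(UnsupportedOnRunner.class);
        System.out.println(Arrays.toString(ann.value())); // prints [SparkRunner]
    }
}
```

Since nothing enforces the annotation, it stays informative only and does not leak runner logic into user code, which matches the constraint Etienne raises about portability.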