Re: Using user developed source in streaming python

2018-06-28 Thread Sebastien Morand
Hi, I'm completely fine with contributing; let's start with Cloud SQL and Firestore. Let me get this right: I've already written a working writer and sink. So how do I make them native so they will be supported in the streaming pipeline? Just inherit from the native IO? My sinks inherit from iobase…
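For context, the usual shape of a Python write "sink" in Beam is not an iobase subclass but a DoFn with the start_bundle / process / finish_bundle lifecycle, batching records and flushing at bundle end. The sketch below is plain Python with no Beam dependency: the class names and the fake Firestore client are illustrative stand-ins, and the driver loop only mimics what a runner would do for one bundle.

```python
# Conceptual sketch (plain Python, no apache_beam import) of the bundle
# lifecycle a DoFn-based writer follows: buffer in process(), flush in
# finish_bundle(). The client class is a stand-in, not a real API.

class FakeFirestoreClient:
    """Stand-in for a real client; records what was 'written'."""
    def __init__(self):
        self.written = []

    def write_batch(self, batch):
        self.written.extend(batch)

class BatchingWriteFn:
    """Mirrors Beam's DoFn hooks: start_bundle / process / finish_bundle."""
    def __init__(self, client, batch_size=3):
        self.client = client
        self.batch_size = batch_size

    def start_bundle(self):
        self._batch = []

    def process(self, element):
        self._batch.append(element)
        if len(self._batch) >= self.batch_size:
            self._flush()

    def finish_bundle(self):
        self._flush()  # flush any remainder at bundle end

    def _flush(self):
        if self._batch:
            self.client.write_batch(self._batch)
            self._batch = []

# Drive the lifecycle the way a runner would for a single bundle:
client = FakeFirestoreClient()
fn = BatchingWriteFn(client, batch_size=3)
fn.start_bundle()
for record in ["a", "b", "c", "d"]:
    fn.process(record)
fn.finish_bundle()
print(client.written)  # all four records, flushed in two batches
```

The point of the pattern is that finish_bundle guarantees partial batches are not lost when the runner closes a bundle.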

Re: Using user developed source in streaming python

2018-06-27 Thread Ismaël Mejía
It also seems like a nice opportunity to contribute them to the project, in case you are able to work on them. Don't hesitate to contact us or ask for help if needed. On Wed, Jun 27, 2018 at 4:45 PM Ahmet Altay wrote: > …

Re: Using user developed source in streaming python

2018-06-27 Thread Ahmet Altay
Hi Sébastien, Currently there is no work in progress on including write transforms for the destinations you listed. You could develop your own version if interested. Please see the WriteToBigQuery transform [1] for reference. Ahmet [1] https://github.com/apache/beam/blob/375bd3a6a53ba3ba7c965278dc…
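WriteToBigQuery is a composite PTransform: its expand() method takes the input PCollection and wires up the DoFn that performs the writes. The sketch below reproduces only that shape in plain Python; the stub classes and the hypothetical WriteToCloudSQL name are illustrative stand-ins, not Beam's actual base classes.

```python
# Plain-Python sketch of the composite-transform shape WriteToBigQuery
# follows: a PTransform-like wrapper whose expand() applies a write
# step to the incoming collection. All names here are illustrative.

class PCollectionStub(list):
    """Stand-in for a PCollection: just a list of elements."""

class WriteToCloudSQL:  # hypothetical transform name
    def __init__(self, table):
        self.table = table

    def expand(self, pcoll):
        # In Beam this would be: pcoll | beam.ParDo(_WriteFn(self.table)).
        # Here we apply the per-element write directly.
        return PCollectionStub(self._write(el) for el in pcoll)

    def _write(self, element):
        # Pretend row insert; return a write result per element.
        return (self.table, element)

rows = PCollectionStub([{"id": 1}, {"id": 2}])
result = WriteToCloudSQL("users").expand(rows)
print(result)  # [('users', {'id': 1}), ('users', {'id': 2})]
```

Packaging the write as a transform (rather than a bare DoFn) is what lets callers apply it with a single `| WriteToCloudSQL(...)` step, the way WriteToBigQuery is used.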

Re: Using user developed source in streaming python

2018-06-27 Thread Sebastien Morand
Hi, Thanks for your answer. OK, looking forward to this and ready to test an alpha. We actually have use cases for sending data to Cloud SQL, Spanner, Bigtable, or Firestore. As far as I can tell from the documentation, there is no native support for them, so we have already implemented a custom…

Re: Using user developed source in streaming python

2018-06-21 Thread Lukasz Cwik
+dev@beam.apache.org Python streaming custom source support will be available via SplittableDoFn. It is actively being worked on by a few contributors, but to my knowledge there is no roadmap yet for supporting this on Dataflow. On Thu, Jun 21, 2018 at 1:19 AM Sebastien Morand <sebastie…
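The core idea behind SplittableDoFn is that the work of reading a source is described by a restriction (for example an offset range) that a tracker hands out claim-by-claim and can split at a checkpoint, which is what lets a runner parallelize or suspend a custom streaming read. The sketch below illustrates only that idea in plain Python; the class and method names are illustrative, not Beam's actual RestrictionTracker API.

```python
# Plain-Python sketch of the SplittableDoFn idea: an offset-range
# restriction that can be claimed element-by-element and split, so the
# unread remainder can be resumed or handed to another worker.

class OffsetRangeTracker:
    def __init__(self, start, stop):
        self.start, self.stop = start, stop
        self.position = start  # next unclaimed offset

    def try_claim(self, offset):
        """Claim one offset; returns False once the range is exhausted."""
        if offset >= self.stop:
            return False
        self.position = offset + 1
        return True

    def split_remainder(self):
        """Checkpoint: give the unprocessed tail to a new tracker."""
        remainder = OffsetRangeTracker(self.position, self.stop)
        self.stop = self.position  # this tracker keeps only the done part
        return remainder

def read(tracker, source):
    """Emit elements only for offsets the tracker lets us claim."""
    out = []
    offset = tracker.position
    while tracker.try_claim(offset):
        out.append(source[offset])
        offset += 1
    return out

src = list("abcdef")
t = OffsetRangeTracker(0, len(src))
# Claim the first three offsets, then split: the remainder can be
# processed later or elsewhere, which is what enables streaming reads.
for off in range(3):
    t.try_claim(off)
rest = t.split_remainder()
print(read(rest, src))  # ['d', 'e', 'f']
```

Every element is emitted against a successful claim, so after a split neither tracker can double-read or skip an offset.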