[ https://issues.apache.org/jira/browse/BEAM-6347?focusedWorklogId=180966&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-180966 ]

ASF GitHub Bot logged work on BEAM-6347:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 04/Jan/19 03:52
            Start Date: 04/Jan/19 03:52
    Worklog Time Spent: 10m 
      Work Description: chamikaramj commented on pull request #7397: [BEAM-6347] Add website page for developing I/O connectors for Java
URL: https://github.com/apache/beam/pull/7397#discussion_r245197012
 
 

 ##########
 File path: website/src/documentation/io/developing-io-python.md
 ##########
 @@ -17,57 +20,80 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
 -->
-# Creating New Sources and Sinks with the Python SDK
+# Developing I/O connectors for Python
 
-The Apache Beam SDK for Python provides an extensible API that you can use to create new data sources and sinks. This tutorial shows how to create new sources and sinks using [Beam's Source and Sink API](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/iobase.py).
+To connect to a data store that isn’t supported by Beam’s existing I/O
+connectors, you must create a custom I/O connector that usually consists of a
+source and a sink. All Beam sources and sinks are composite transforms; however,
+the implementation of your custom I/O depends on your use case. See the [new
+I/O connector overview]({{ site.baseurl }}/documentation/io/developing-io-overview/)
+for a general overview of developing a new I/O connector.
 
-* Create a new source by extending the `BoundedSource` and `RangeTracker` interfaces.
-* Create a new sink by implementing the `Sink` and `Writer` classes.
+This page describes implementation details for developing sources and sinks
+using [Beam's Source and Sink API](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/iobase.py)
+for Python. The Java SDK offers the same functionality, but uses a slightly
+different API. See [Developing I/O connectors for Java]({{ site.baseurl }}/documentation/io/developing-io-java/)
+for information specific to the Java SDK.
 
+## Implementation options
 
-## Why Create a New Source or Sink
+**Sources**
 
-You'll need to create a new source or sink if you want your pipeline to read data from (or write data to) a storage system for which the Beam SDK for Python does not provide [native support]({{ site.baseurl }}/documentation/programming-guide/#pipeline-io).
+For bounded (batch) sources, there are currently two options for creating a Beam
+source:
 
-In simple cases, you may not need to create a new source or sink. For example, if you need to read data from an SQL database using an arbitrary query, none of the advanced Source API features would benefit you. Likewise, if you'd like to write data to a third-party API via a protocol that lacks deduplication support, the Sink API wouldn't benefit you. In such cases it makes more sense to use a `ParDo`.
+1. Use `ParDo` and `GroupByKey`.  
 
-However, if you'd like to use advanced features such as dynamic splitting and size estimation, you should use Beam's APIs and create a new source or sink.
+2. Extend the `BoundedSource` and `RangeTracker` interfaces.
 
+`ParDo` is the recommended option, as implementing a `Source` can be tricky.
+The [new I/O connector overview]({{ site.baseurl }}/documentation/io/developing-io-overview/)
+covers using `ParDo`, and lists some use cases where you might want to use a
+Source (such as [dynamic work rebalancing]({{ site.baseurl }}/blog/2016/05/18/splitAtFraction-method.html)).
 
-## Basic Code Requirements for New Sources and Sinks {#basic-code-reqs}
+Splittable DoFn is a new sources framework that is under development and will
+replace the other options for developing bounded and unbounded sources. For more
+information, see the
+[roadmap for multi-SDK connector efforts]({{ site.baseurl }}/roadmap/connectors-multi-sdk/).
 
-Services use the classes you provide to read and/or write data using multiple worker instances in parallel. As such, the code you provide for `Source` and `Sink` subclasses must meet some basic requirements:
+**Sinks**
 
-### Serializability
+To create a Beam sink, we recommend that you use a single `ParDo` that writes the
+received records to the data store. However, for file-based sinks, you can use
+the `FileBasedSink` interface.
 
-Your `Source` or `Sink` subclass must be serializable. The service may create multiple instances of your `Source` or `Sink` subclass to be sent to multiple remote workers to facilitate reading or writing in parallel. Note that the *way* the source and sink objects are serialized is runner specific.
 
 Review comment:
   Can we reduce the "Creating a New Sink" section below to only talk about FileBasedSink, since the Sink interface of the Python SDK will be deprecated/removed soon?
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 180966)
    Time Spent: 2h 20m  (was: 2h 10m)

> Add page for developing I/O connectors for Java
> -----------------------------------------------
>
>                 Key: BEAM-6347
>                 URL: https://issues.apache.org/jira/browse/BEAM-6347
>             Project: Beam
>          Issue Type: Bug
>          Components: website
>            Reporter: Melissa Pashniak
>            Assignee: Melissa Pashniak
>            Priority: Minor
>          Time Spent: 2h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
