[ https://issues.apache.org/jira/browse/GRIFFIN-213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16695485#comment-16695485 ]
ASF GitHub Bot commented on GRIFFIN-213:
----------------------------------------

Github user toyboxman commented on a diff in the pull request:

    https://github.com/apache/incubator-griffin/pull/456#discussion_r235585607

    --- Diff: griffin-doc/measure/measure-configuration-guide.md ---
    @@ -188,7 +188,7 @@ Above lists DQ job configure parameters.
     - **sinks**: Whitelisted sink types for this job. Note: no sinks will be used, if empty or omitted.

     ### <a name="data-connector"></a>Data Connector
    -- **type**: Data connector type, "AVRO", "HIVE", "TEXT-DIR" for batch mode, "KAFKA" for streaming mode.
    +- **type**: Data connector type: "AVRO", "HIVE", "TEXT-DIR", "CUSTOM" for batch mode; "KAFKA", "CUSTOM" for streaming mode.
    --- End diff --

    EXTERNAL seems better; it suggests a third-party or vendor's data connector. @guoyuepeng @chemikadze what do you think?

> Support pluggable datasource connectors
> ---------------------------------------
>
>                 Key: GRIFFIN-213
>                 URL: https://issues.apache.org/jira/browse/GRIFFIN-213
>             Project: Griffin (Incubating)
>          Issue Type: Improvement
>            Reporter: Nikolay Sokolov
>            Priority: Minor
>
> As of Griffin 0.3, code modification is required in order to add new data connectors.
> The proposal is to add a new data connector type, CUSTOM, which would allow specifying the class name of the data connector implementation to use. Additional jars with custom connector implementations would be provided in the Spark configuration template.
> The class name would be specified in the "class" config of the data connector. For example:
> {code:json}
> "connectors": [
>   {
>     "type": "CUSTOM",
>     "config": {
>       "class": "org.example.griffin.JDBCConnector"
>       // extra connector-specific parameters
>     }
>   }
> ]
> {code}
> The proposed contract for implementations is based on the current convention:
> - for batch:
> ** the class should be a subclass of BatchDataConnector
> ** it should have a method with the signature:
> {code:java}
> public static BatchDataConnector apply(ctx: BatchDataConnectorContext)
> {code}
> - for streaming:
> ** the class should be a subclass of StreamingDataConnector
> ** it should have a method with the signature:
> {code:java}
> public static StreamingDataConnector apply(ctx: StreamingDataConnectorContext)
> {code}
> Signatures of the context objects:
> {code:scala}
> case class BatchDataConnectorContext(@transient sparkSession: SparkSession,
>                                      dcParam: DataConnectorParam,
>                                      timestampStorage: TimestampStorage)
>
> case class StreamingDataConnectorContext(@transient sparkSession: SparkSession,
>                                          @transient ssc: StreamingContext,
>                                          dcParam: DataConnectorParam,
>                                          timestampStorage: TimestampStorage,
>                                          streamingCacheClientOpt: Option[StreamingCacheClient])
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
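For illustration, the proposed batch contract could be satisfied by a connector shaped roughly like the sketch below. Only BatchDataConnector, BatchDataConnectorContext, DataConnectorParam, TimestampStorage, and the static apply factory come from the proposal above; the class name, the JDBC reading logic, and the assumed members of Griffin's internal types (such as a config map on DataConnectorParam and a data method on the connector trait) are hypothetical and would need to match the actual Griffin 0.3 APIs:

{code:scala}
// Hypothetical sketch of a third-party batch connector under the proposed
// CUSTOM contract. Not a definitive implementation: member names on the
// Griffin types (dcParam.config, the data(...) method) are assumptions.
class JdbcBatchDataConnector(ctx: BatchDataConnectorContext)
  extends BatchDataConnector {

  // Connector-specific parameters would come from the "config" block of
  // the CUSTOM connector, alongside the "class" entry.
  private val jdbcUrl: String = ctx.dcParam.config("url").toString

  // Assumed shape of the batch read hook: produce a DataFrame for one run.
  def data(ms: Long): (Option[DataFrame], TimeRange) = {
    val df = ctx.sparkSession.read
      .format("jdbc")
      .option("url", jdbcUrl)
      .load()
    (Some(df), TimeRange(ms))
  }
}

object JdbcBatchDataConnector {
  // The static factory method the proposal requires Griffin to reflectively
  // invoke when "type" is "CUSTOM" and "class" names this connector.
  def apply(ctx: BatchDataConnectorContext): BatchDataConnector =
    new JdbcBatchDataConnector(ctx)
}
{code}

A streaming connector would follow the same pattern against StreamingDataConnectorContext, additionally receiving the StreamingContext and the optional StreamingCacheClient.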