[
https://issues.apache.org/jira/browse/BAHIR-228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dalongliu updated BAHIR-228:
----------------------------
Description:
Currently, with Flink 1.10.0, we can use a catalog to store our stream tables. There should be a Kudu table sink so that we can register it in a catalog and use Kudu as a table in the SQL environment.
We could use a Kudu table sink like this:
{code:java}
// Connection options: Kudu master addresses and the target table name
KuduOptions options = KuduOptions.builder()
    .setKuduMaster(kuduMaster)
    .setTableName(kuduTable)
    .build();

// Writer options: upsert semantics, with background flushing for throughput
KuduWriterOptions writerOptions = KuduWriterOptions.builder()
    .setWriteMode(KuduWriterMode.UPSERT)
    .setFlushMode(FlushMode.AUTO_FLUSH_BACKGROUND)
    .build();

// Build the sink from the options and the table schema
KuduTableSink tableSink = KuduTableSink.builder()
    .setOptions(options)
    .setWriterOptions(writerOptions)
    .setTableSchema(schema)
    .build();

// Register the sink and write to it from SQL
tEnv.registerTableSink("kudu", tableSink);
tEnv.sqlUpdate("INSERT INTO kudu SELECT * FROM source");
{code}
I have used a Kudu table sink to sync data in my company's production environment; in upsert mode the write throughput was about 50,000 records/s.
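If the sink were also exposed through a catalog, a pure-SQL workflow would become possible. The DDL below is only a sketch of what that could look like; the property keys ({{connector.type}}, {{kudu.masters}}, {{kudu.table}}, {{kudu.write-mode}}) and the table columns are hypothetical and would ultimately be defined by the connector's table factory, not an existing API:
{code:sql}
-- Hypothetical DDL: property keys and columns are illustrative only
CREATE TABLE kudu_sink (
  id BIGINT,
  name STRING
) WITH (
  'connector.type' = 'kudu',
  'kudu.masters' = 'master1:7051',
  'kudu.table' = 'my_table',
  'kudu.write-mode' = 'upsert'
);

INSERT INTO kudu_sink SELECT id, name FROM source;
{code}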
> Flink SQL supports kudu sink
> ----------------------------
>
> Key: BAHIR-228
> URL: https://issues.apache.org/jira/browse/BAHIR-228
> Project: Bahir
> Issue Type: New Feature
> Components: Flink Streaming Connectors
> Reporter: dalongliu
> Priority: Major
--
This message was sent by Atlassian Jira
(v8.3.4#803005)