Use the dev branch.

The config looks like this:


spark {
  spark.app.name = "seatunnel"
  spark.executor.instances = 2
  spark.executor.cores = 1
  spark.executor.memory = "1g"
}

input {
  jdbc {
    # Example options with placeholder values; see the input-plugin docs
    # linked below for the full option list
    driver = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://localhost:3306/test"
    table = "my_table"
    user = "username"
    password = "password"
    result_table_name = "my_table"
  }
}

filter {
  # Leave empty to pass rows through unchanged
}

output {
  kafka {
    # Example options with placeholder values; adjust to your Kafka cluster
    topic = "seatunnel"
    producer.bootstrap.servers = "localhost:9092"
    streaming_output_mode = "Append"
  }
}


For detailed usage, see
https://interestinglab.github.io/seatunnel-docs/#/zh-cn/v1/configuration/input-plugin.
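Once the config file is saved, the job can be submitted to Spark with the start script. This is a sketch: the script name, config path, and `--master` value are assumptions based on a v1-style distribution and may differ on the dev branch.

```shell
# Submit the SeaTunnel job to the Spark engine.
# Script name and config path are placeholders; check your build's bin/ directory.
cd seatunnel
./bin/start-waterdrop.sh \
  --master local[2] \
  --deploy-mode client \
  --config ./config/jdbc-to-kafka.conf
```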
At 2022-01-23 11:14:13, "李洪军" <[email protected]> wrote:
>Hello, I have a problem: how can I use SeaTunnel with a JDBC source and a
>Kafka sink?
>Thanks very much.
