GitHub user tdas commented on a diff in the pull request:

https://github.com/apache/spark/pull/15102#discussion_r81851557

--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -0,0 +1,231 @@
+---
+layout: global
+title: Structured Streaming + Kafka Integration Guide (Kafka broker version 0.10.0 or higher)
+---
+
+Structured Streaming integration for Kafka 0.10 to poll data from Kafka. It provides simple parallelism
+with a 1:1 correspondence between Kafka partitions and Spark partitions. The source caches the Kafka
+consumer on executors and tries its best to schedule the same Kafka topic partition to the same executor.
+
+### Linking
+For Scala/Java applications using SBT/Maven project definitions, link your application with the following artifact:
+
+    groupId = org.apache.spark
+    artifactId = spark-sql-kafka-0-10_{{site.SCALA_BINARY_VERSION}}
+    version = {{site.SPARK_VERSION_SHORT}}
+
+For Python applications, you need to add this library and its dependencies when deploying your
+application. See the [Deploying](#deploying) subsection below.
+
+### Creating a Kafka Source Stream
+
+<div class="codetabs">
+<div data-lang="scala" markdown="1">
+
+    // Subscribe to 1 topic
+    val ds1 = spark
+      .readStream
+      .format("kafka")
+      .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+      .option("subscribe", "topic1")
+      .load()
+    ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+      .as[(String, String)]
+
+    // Subscribe to multiple topics
+    val ds2 = spark
+      .readStream
+      .format("kafka")
+      .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+      .option("subscribe", "topic1,topic2")
+      .load()
+    ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+      .as[(String, String)]
+
+    // Subscribe to a pattern
+    val ds3 = spark
+      .readStream
+      .format("kafka")
+      .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+      .option("subscribePattern", "topic.*")
+      .load()
+    ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+      .as[(String, String)]
+
+</div>
+<div data-lang="java" markdown="1">
+
+    // Subscribe to 1 topic
+    Dataset<Row> ds1 = spark
+      .readStream()
+      .format("kafka")
+      .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+      .option("subscribe", "topic1")
+      .load();
+    ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
+
+    // Subscribe to multiple topics
+    Dataset<Row> ds2 = spark
+      .readStream()
+      .format("kafka")
+      .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+      .option("subscribe", "topic1,topic2")
+      .load();
+    ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
+
+    // Subscribe to a pattern
+    Dataset<Row> ds3 = spark
+      .readStream()
+      .format("kafka")
+      .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+      .option("subscribePattern", "topic.*")
+      .load();
+    ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
+
+</div>
+<div data-lang="python" markdown="1">
+
+    # Subscribe to 1 topic
+    ds1 = spark \
+      .readStream \
+      .format("kafka") \
+      .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
+      .option("subscribe", "topic1") \
+      .load()
+    ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+
+    # Subscribe to multiple topics
+    ds2 = spark \
+      .readStream \
+      .format("kafka") \
+      .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
+      .option("subscribe", "topic1,topic2") \
+      .load()
+    ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+
+    # Subscribe to a pattern
+    ds3 = spark \
+      .readStream \
+      .format("kafka") \
+      .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
+      .option("subscribePattern", "topic.*") \
+      .load()
+    ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+
+</div>
+</div>
"host1:port1,host2:port2") + .option("subscribePattern", "topic.*") + .load() + ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)") + +</div> +</div> + +Each row in the source has the following schema: +<table class="table"> +<tr><th>Column</th><th>Type</th></tr> +<tr> + <td>key</td> + <td>binary</td> +</tr> +<tr> + <td>value</td> + <td>binary</td> +</tr> +<tr> + <td>topic</td> + <td>string</td> +</tr> +<tr> + <td>partition</td> + <td>int</td> +</tr> +<tr> + <td>offset</td> + <td>long</td> +</tr> +<tr> + <td>timestamp</td> + <td>long</td> +</tr> +<tr> + <td>timestampType</td> + <td>int</td> +</tr> +</table> + +Right now, the Kafka source has the following Spark's specific options. + +<table class="table"> +<tr><th>Option</th><th>value</th><th>default</th><th>meaning</th></tr> +<tr> + <td>startingOffset</td> + <td>["earliest", "latest"]</td> + <td>"latest"</td> + <td>The start point when a query is started, either "earliest" which is from the earliest offset, + or "latest" which is just from the latest offset. Note: This only applies when a new Streaming q + uery is started, and that resuming will always pick up from where the query left off.</td> +</tr> +<tr> + <td>failOnCorruptMetadata</td> + <td>[true, false]</td> + <td>true</td> + <td>Whether to fail the query when metadata is corrupt (e.g., topics are deleted, or offsets are + out of range), which may lost data.</td> +</tr> +<tr> + <td>subscribe</td> + <td>A comma-separated list of topics</td> + <td>(none)</td> + <td>The topic list to subscribe. Only one of "subscribe" and "subscribePattern" options can be + specified for Kafka source.</td> +</tr> +<tr> + <td>subscribePattern</td> + <td>Java regex string</td> + <td>(none)</td> + <td>The pattern used to subscribe the topic. Only one of "subscribe" and "subscribePattern" + options can be specified for Kafka source.</td> +</tr> +<tr> + <td>kafka.consumer.poll.timeoutMs</td> + <td>long</td> --- End diff -- nit: can keep this is `int`