You are using the Spark Streaming Kafka package. The correct package name is
"org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0".
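For the versions named in this thread (Scala 2.11, Spark 2.2.0), the corrected launch command would look roughly like the sketch below. The main-script name is a placeholder assumption; `shell.py` is the file mentioned later in the thread.

```shell
# Sketch: swap the streaming-kafka artifact for the SQL Kafka source.
# Artifact coordinates follow the thread (Scala 2.11, Spark 2.2.0).
# your_main_script.py is an illustrative placeholder, not from the thread.
$SPARK_HOME/bin/spark-submit \
  --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0 \
  --py-files shell.py \
  your_main_script.py
```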
On Mon, Nov 20, 2017 at 4:15 PM, salemi wrote:
> Yes, we are using --packages
>
> $SPARK_HOME/bin/spark-submit --packages
> org.apache.spark:spark-streaming-kafka-0-10_2.11:2.2.0 --py-files shell.py
Yes, we are using --packages
$SPARK_HOME/bin/spark-submit --packages
org.apache.spark:spark-streaming-kafka-0-10_2.11:2.2.0 --py-files shell.py
--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
What command did you use to launch your Spark application? The documentation at
https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html#deploying
suggests using spark-submit with the `--packages` flag to include the required
Kafka package, e.g.

./bin/spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0 ...
Hi All,
we are trying to use the DataFrame approach with Kafka 0.10 and PySpark 2.2.0.
We followed the instructions in the documentation at
https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html.
We coded something similar to the code below in Python:
df = spark \
.read \
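The snippet above is cut off in the archive; a minimal sketch of the batch-read pattern from the Kafka integration guide follows. The broker address, topic name, and the `read_kafka_batch` helper are illustrative assumptions, not taken from the thread.

```python
# Sketch of the DataFrame batch read against Kafka, per the
# structured-streaming Kafka integration guide for Spark 2.2.x.
# Requires launching with:
#   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0 ...
KAFKA_PACKAGE = "org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0"

def read_kafka_batch(spark, bootstrap_servers, topic):
    """Return a DataFrame of raw Kafka records (key and value are binary)."""
    return (spark.read
            .format("kafka")
            .option("kafka.bootstrap.servers", bootstrap_servers)
            .option("subscribe", topic)
            .load())

if __name__ == "__main__":
    # Deferred import: pyspark is only needed when actually running the job.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kafka-batch-read").getOrCreate()
    # "host1:9092" and "topic1" are placeholder values.
    df = read_kafka_batch(spark, "host1:9092", "topic1")
    # Kafka keys/values arrive as binary; cast to STRING before inspecting.
    df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)").show()
```

Note that the source must be loaded via `.format("kafka")`, which is only available when the spark-sql-kafka artifact (not spark-streaming-kafka) is on the classpath.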