Cool! Thanks for your input, Jacek and Mark!
From: Mark Hamstra [mailto:m...@clearstorydata.com]
Sent: 13 January 2017 12:59
To: Phadnis, Varun <phad...@sky.optymyze.com>
Cc: user@spark.apache.org
Subject: Re: Spark and Kafka integration
See "API compatibility" in http://
See "API compatibility" in http://spark.apache.org/versioning-policy.html
While code that is annotated as Experimental is still a good-faith effort
to provide a stable and useful API, the fact is that we're not yet
confident enough that we've got the public API in exactly the form that we
want to commit to maintaining long-term, so it may still change in a
future release.
Hi Phadnis,
I found this in
http://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html:
> This version of the integration is marked as experimental, so the API is
> potentially subject to change.
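For reference, the experimental API that page describes is used along these
lines (a minimal sketch based on the 0-10 integration guide; the broker
address, group id, topic name and batch interval are placeholders):

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010._
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

    val conf = new SparkConf().setAppName("kafka-0-10-example")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Standard Kafka 0.10 consumer configuration; the values are placeholders.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "broker1:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example-group",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    // createDirectStream and the strategy classes are among the parts
    // annotated @Experimental.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("topicA"), kafkaParams))

    stream.map(record => record.value).print()
    ssc.start()
    ssc.awaitTermination()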
Best regards,
Jacek Laskowski
https://medium.com/@jaceklaskowski/
Hello,
We are using Spark 2.0 with Kafka 0.10.
As I understand it, much of the API packaged in the following dependency,
which we are targeting, is marked as "@Experimental":
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
        <version>2.0.0</version>
    </dependency>
What are the implications of this being marked as experimental?
Hi,
Some background:
We have a Kafka cluster with ~45 topics. Some of the topics contain logs in
JSON format and some in PSV (pipe-separated value) format. I want to
consume these logs using Spark Streaming and store them in Parquet format
in HDFS.
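For context, the kind of pipeline I have in mind looks roughly like this
(a sketch only, assuming the Kafka 0.8 direct stream available in Spark
1.5; the broker, topic name and output path are hypothetical):

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    val conf = new SparkConf().setAppName("logs-to-parquet")
    val ssc = new StreamingContext(conf, Seconds(60))

    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
    val jsonStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("json-topic"))

    // Each micro-batch of JSON records becomes a DataFrame and is appended
    // as Parquet on HDFS.
    jsonStream.map(_._2).foreachRDD { rdd =>
      if (!rdd.isEmpty()) {
        val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
        sqlContext.read.json(rdd)
          .write.mode("append").parquet("hdfs:///logs/parquet/json-topic")
      }
    }

    // PSV topics would need an explicit split on '|' and a schema before writing.
    ssc.start()
    ssc.awaitTermination()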
Now my questions are:
1. Can we create a
For Q2: the order of the logs within each Kafka partition is guaranteed,
but there is no such thing as a global order across partitions.
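With the direct Kafka stream, each Spark partition maps one-to-one to a
Kafka topic-partition, which is what gives you the per-partition guarantee.
A quick sketch of inspecting that mapping (assuming `stream` is a direct
stream from the 0.8 integration):

    import org.apache.spark.streaming.kafka.HasOffsetRanges

    stream.foreachRDD { rdd =>
      // Must be called on the stream's RDD directly, before any transformations.
      // Each OffsetRange describes one Kafka partition's slice of this batch, in order.
      val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      ranges.foreach { r =>
        println(s"${r.topic} partition ${r.partition}: offsets ${r.fromOffset} to ${r.untilOffset}")
      }
    }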
From: Prashant Bhardwaj [mailto:prashant2006s...@gmail.com]
Sent: Monday, December 07, 2015 5:46 PM
To: user@spark.apache.org
Subject: Spark and Kafka Integration