Repository: spark
Updated Branches:
  refs/heads/master a8567e34d -> a5e651f4c


[SPARK-19206][DOC][DSTREAM] Fix outdated parameter descriptions in kafka010

## What changes were proposed in this pull request?

Fix outdated parameter descriptions in kafka010

## How was this patch tested?

cc koeninger  zsxwing

Author: uncleGen <husty...@gmail.com>

Closes #16569 from uncleGen/SPARK-19206.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a5e651f4
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/a5e651f4
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/a5e651f4

Branch: refs/heads/master
Commit: a5e651f4c6f243b59724fd46237407374017a035
Parents: a8567e3
Author: uncleGen <husty...@gmail.com>
Authored: Sun Jan 15 11:16:49 2017 +0000
Committer: Sean Owen <so...@cloudera.com>
Committed: Sun Jan 15 11:16:49 2017 +0000

----------------------------------------------------------------------
 .../kafka010/DirectKafkaInputDStream.scala      | 11 +++------
 .../spark/streaming/kafka010/KafkaRDD.scala     |  4 +--
 .../spark/streaming/kafka010/KafkaUtils.scala   | 26 ++++++++------------
 3 files changed, 16 insertions(+), 25 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/a5e651f4/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala
----------------------------------------------------------------------
diff --git a/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala b/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala
index 794f53c..6d6983c 100644
--- a/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala
+++ b/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala
@@ -42,15 +42,12 @@ import org.apache.spark.streaming.scheduler.rate.RateEstimator
  * The spark configuration spark.streaming.kafka.maxRatePerPartition gives the maximum number
  *  of messages
  * per second that each '''partition''' will accept.
- * @param locationStrategy In most cases, pass in [[PreferConsistent]],
+ * @param locationStrategy In most cases, pass in [[LocationStrategies.PreferConsistent]],
  *   see [[LocationStrategy]] for more details.
- * @param executorKafkaParams Kafka
- * <a href="http://kafka.apache.org/documentation.html#newconsumerconfigs">
- * configuration parameters</a>.
- *   Requires  "bootstrap.servers" to be set with Kafka broker(s),
- *   NOT zookeeper servers, specified in host1:port1,host2:port2 form.
- * @param consumerStrategy In most cases, pass in [[Subscribe]],
+ * @param consumerStrategy In most cases, pass in [[ConsumerStrategies.Subscribe]],
  *   see [[ConsumerStrategy]] for more details
+ * @param ppc configuration of settings such as max rate on a per-partition basis.
+ *   see [[PerPartitionConfig]] for more details.
  * @tparam K type of Kafka message key
  * @tparam V type of Kafka message value
  */
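To make the corrected parameter descriptions concrete, here is a minimal sketch of how these parameters are supplied in practice. `DirectKafkaInputDStream` is normally obtained through `KafkaUtils.createDirectStream` rather than constructed directly; the topic name, broker addresses, and group id below are hypothetical placeholders.

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

object DirectStreamSketch {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("direct-stream-sketch"), Seconds(5))

    // Hypothetical consumer configuration. "bootstrap.servers" must point at
    // Kafka brokers (NOT zookeeper), in host1:port1,host2:port2 form.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "host1:9092,host2:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example-group"
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent, // the locationStrategy documented above
      ConsumerStrategies.Subscribe[String, String](Seq("example-topic"), kafkaParams)
    )

    stream.map(record => (record.key, record.value)).print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```

`PreferConsistent` distributes partitions evenly across executors; `PreferBrokers` is only worthwhile when executors are co-located with the Kafka brokers.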

http://git-wip-us.apache.org/repos/asf/spark/blob/a5e651f4/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaRDD.scala
----------------------------------------------------------------------
diff --git a/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaRDD.scala b/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaRDD.scala
index 9839425..8f38095 100644
--- a/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaRDD.scala
+++ b/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaRDD.scala
@@ -41,8 +41,8 @@ import org.apache.spark.storage.StorageLevel
  * with Kafka broker(s) specified in host1:port1,host2:port2 form.
  * @param offsetRanges offset ranges that define the Kafka data belonging to this RDD
  * @param preferredHosts map from TopicPartition to preferred host for processing that partition.
- * In most cases, use [[DirectKafkaInputDStream.preferConsistent]]
- * Use [[DirectKafkaInputDStream.preferBrokers]] if your executors are on same nodes as brokers.
+ * In most cases, use [[LocationStrategies.PreferConsistent]]
+ * Use [[LocationStrategies.PreferBrokers]] if your executors are on same nodes as brokers.
  * @param useConsumerCache whether to use a consumer from a per-jvm cache
  * @tparam K type of Kafka message key
  * @tparam V type of Kafka message value
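A `KafkaRDD` with known offset ranges is normally created through `KafkaUtils.createRDD`. The following sketch shows the batch-oriented usage the Scaladoc above describes; the topic, offsets, and broker addresses are hypothetical, and note that this overload takes a `java.util.Map` of Kafka parameters.

```scala
import java.{util => ju}

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.kafka010.{KafkaUtils, LocationStrategies, OffsetRange}

object BatchReadSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("batch-read-sketch"))

    // createRDD takes a java.util.Map of Kafka configuration parameters.
    val kafkaParams = new ju.HashMap[String, Object]()
    kafkaParams.put("bootstrap.servers", "host1:9092,host2:9092")
    kafkaParams.put("key.deserializer", classOf[StringDeserializer])
    kafkaParams.put("value.deserializer", classOf[StringDeserializer])
    kafkaParams.put("group.id", "example-group")

    // Starting and ending offsets are fixed in advance, which is what makes
    // exactly-once batch semantics possible.
    val offsetRanges = Array(
      OffsetRange("example-topic", partition = 0, fromOffset = 0L, untilOffset = 100L)
    )

    val rdd = KafkaUtils.createRDD[String, String](
      sc, kafkaParams, offsetRanges, LocationStrategies.PreferConsistent)

    rdd.foreach(record => println(record.value))
  }
}
```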

http://git-wip-us.apache.org/repos/asf/spark/blob/a5e651f4/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaUtils.scala
----------------------------------------------------------------------
diff --git a/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaUtils.scala b/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaUtils.scala
index c11917f..3704632 100644
--- a/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaUtils.scala
+++ b/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaUtils.scala
@@ -48,7 +48,7 @@ object KafkaUtils extends Logging {
    * configuration parameters</a>. Requires "bootstrap.servers" to be set
    * with Kafka broker(s) specified in host1:port1,host2:port2 form.
    * @param offsetRanges offset ranges that define the Kafka data belonging to this RDD
-   * @param locationStrategy In most cases, pass in LocationStrategies.preferConsistent,
+   * @param locationStrategy In most cases, pass in [[LocationStrategies.PreferConsistent]],
    *   see [[LocationStrategies]] for more details.
    * @tparam K type of Kafka message key
    * @tparam V type of Kafka message value
@@ -80,14 +80,12 @@ object KafkaUtils extends Logging {
    * Java constructor for a batch-oriented interface for consuming from Kafka.
    * Starting and ending offsets are specified in advance,
    * so that you can control exactly-once semantics.
-   * @param keyClass Class of the keys in the Kafka records
-   * @param valueClass Class of the values in the Kafka records
    * @param kafkaParams Kafka
    * <a href="http://kafka.apache.org/documentation.html#newconsumerconfigs">
    * configuration parameters</a>. Requires "bootstrap.servers" to be set
    * with Kafka broker(s) specified in host1:port1,host2:port2 form.
    * @param offsetRanges offset ranges that define the Kafka data belonging to this RDD
-   * @param locationStrategy In most cases, pass in LocationStrategies.preferConsistent,
+   * @param locationStrategy In most cases, pass in [[LocationStrategies.PreferConsistent]],
    *   see [[LocationStrategies]] for more details.
    * @tparam K type of Kafka message key
    * @tparam V type of Kafka message value
@@ -110,9 +108,9 @@ object KafkaUtils extends Logging {
    * The spark configuration spark.streaming.kafka.maxRatePerPartition gives the maximum number
    *  of messages
    * per second that each '''partition''' will accept.
-   * @param locationStrategy In most cases, pass in LocationStrategies.preferConsistent,
+   * @param locationStrategy In most cases, pass in [[LocationStrategies.PreferConsistent]],
    *   see [[LocationStrategies]] for more details.
-   * @param consumerStrategy In most cases, pass in ConsumerStrategies.subscribe,
+   * @param consumerStrategy In most cases, pass in [[ConsumerStrategies.Subscribe]],
    *   see [[ConsumerStrategies]] for more details
    * @tparam K type of Kafka message key
    * @tparam V type of Kafka message value
@@ -131,9 +129,9 @@ object KafkaUtils extends Logging {
    * :: Experimental ::
    * Scala constructor for a DStream where
    * each given Kafka topic/partition corresponds to an RDD partition.
-   * @param locationStrategy In most cases, pass in LocationStrategies.preferConsistent,
+   * @param locationStrategy In most cases, pass in [[LocationStrategies.PreferConsistent]],
    *   see [[LocationStrategies]] for more details.
-   * @param consumerStrategy In most cases, pass in ConsumerStrategies.subscribe,
+   * @param consumerStrategy In most cases, pass in [[ConsumerStrategies.Subscribe]],
    *   see [[ConsumerStrategies]] for more details.
    * @param perPartitionConfig configuration of settings such as max rate on a per-partition basis.
    *   see [[PerPartitionConfig]] for more details.
@@ -154,11 +152,9 @@ object KafkaUtils extends Logging {
    * :: Experimental ::
    * Java constructor for a DStream where
    * each given Kafka topic/partition corresponds to an RDD partition.
-   * @param keyClass Class of the keys in the Kafka records
-   * @param valueClass Class of the values in the Kafka records
-   * @param locationStrategy In most cases, pass in LocationStrategies.preferConsistent,
+   * @param locationStrategy In most cases, pass in [[LocationStrategies.PreferConsistent]],
    *   see [[LocationStrategies]] for more details.
-   * @param consumerStrategy In most cases, pass in ConsumerStrategies.subscribe,
+   * @param consumerStrategy In most cases, pass in [[ConsumerStrategies.Subscribe]],
    *   see [[ConsumerStrategies]] for more details
    * @tparam K type of Kafka message key
    * @tparam V type of Kafka message value
@@ -178,11 +174,9 @@ object KafkaUtils extends Logging {
    * :: Experimental ::
    * Java constructor for a DStream where
    * each given Kafka topic/partition corresponds to an RDD partition.
-   * @param keyClass Class of the keys in the Kafka records
-   * @param valueClass Class of the values in the Kafka records
-   * @param locationStrategy In most cases, pass in LocationStrategies.preferConsistent,
+   * @param locationStrategy In most cases, pass in [[LocationStrategies.PreferConsistent]],
    *   see [[LocationStrategies]] for more details.
-   * @param consumerStrategy In most cases, pass in ConsumerStrategies.subscribe,
+   * @param consumerStrategy In most cases, pass in [[ConsumerStrategies.Subscribe]],
    *   see [[ConsumerStrategies]] for more details
    * @param perPartitionConfig configuration of settings such as max rate on a per-partition basis.
    *   see [[PerPartitionConfig]] for more details.
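The `perPartitionConfig` parameter documented above accepts a subclass of the abstract `PerPartitionConfig` class. A minimal hypothetical sketch, assuming one known hot partition that should be throttled harder than the rest:

```scala
import org.apache.kafka.common.TopicPartition
import org.apache.spark.streaming.kafka010.PerPartitionConfig

// Hypothetical per-partition rate policy: partition 0 of "example-topic"
// is capped at 100 messages/sec, every other partition at 1000.
class HotPartitionConfig extends PerPartitionConfig {
  override def maxRatePerPartition(tp: TopicPartition): Long =
    if (tp.topic == "example-topic" && tp.partition == 0) 100L else 1000L
}
```

An instance of such a class is then passed as the `perPartitionConfig` argument to the `createDirectStream` overloads above, in place of the default behavior driven by `spark.streaming.kafka.maxRatePerPartition`.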

