Thanks Tathagata! You are right: I packaged the contents of the Spark 
shipped example jar into my jar, and it contains several HDFS configuration 
files such as hdfs-default.xml. Thanks!
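In case anyone else hits this: a possible fix is to keep the Hadoop XML files out 
of the fat jar so the cluster's own configuration wins at runtime. A minimal 
build.sbt sketch, assuming the jar is built with sbt-assembly (a Maven shade 
build would use an equivalent excludes filter):

assemblyMergeStrategy in assembly := {
  // Discard the Hadoop config files dragged in from the example jar.
  case "hdfs-default.xml" | "hdfs-site.xml" | "core-site.xml" =>
    MergeStrategy.discard
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}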



bit1...@163.com
 
From: Tathagata Das
Date: 2015-02-24 12:04
To: bit1...@163.com
CC: yuzhihong; silvio.fiorito; user
Subject: Re: Re: Does Spark Streaming depend on Hadoop? (4)
You may have HDFS configuration files on the classpath of the program. The HDFS 
libraries that Spark uses automatically pick those up and start using them.
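For example, a minimal sketch of that pickup, assuming hadoop-hdfs 2.x is on the 
classpath: HdfsConfiguration registers hdfs-default.xml and hdfs-site.xml as 
default resources, so any copy bundled into the application jar is loaded 
automatically.

import org.apache.hadoop.hdfs.HdfsConfiguration

object ShowHdfsDefaults {
  def main(args: Array[String]): Unit = {
    val conf = new HdfsConfiguration()
    // getPropertySources reports which resource a value came from (null if unset).
    val sources = Option(conf.getPropertySources("dfs.replication"))
      .map(_.mkString(", ")).getOrElse("unset")
    println(s"dfs.replication = ${conf.get("dfs.replication")} (from: $sources)")
  }
}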

TD

On Mon, Feb 23, 2015 at 7:47 PM, bit1...@163.com <bit1...@163.com> wrote:
I keep getting mail rejections, so I created a new thread. The SMTP error was:
DOT: 552 spam score (5.7) exceeded threshold 
(FREEMAIL_ENVFROM_END_DIGIT,FREEMAIL_REPLY,HTML_FONT_FACE_BAD,HTML_MESSAGE,RCVD_IN_BL_SPAMCOP_NET,SPF_PASS


Hi Silvio and Ted,
I know there is a configuration parameter that controls writing logs to HDFS, but 
I didn't enable it.
From the stack trace, it looks like HDFS access is triggered by my code, but I 
don't use HDFS anywhere. Here is my code:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object MyKafkaWordCount {
  def main(args: Array[String]) {
    println("Start to run MyKafkaWordCount")
    val conf = new SparkConf().setAppName("MyKafkaWordCount").setMaster("local[20]")
    val ssc = new StreamingContext(conf, Seconds(3))
    val topicMap = Map("topic-p6-r2" -> 1)
    val zkQuorum = "localhost:2181"
    val group = "topic-p6-r2-consumer-group"

    // The Kafka topic has 6 partitions, so create 6 receivers, one per partition.
    val streams = (1 to 6).map { _ =>
      KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)
    }
    // Repartition to 18, i.e. 3 times the number of receivers.
    val partitions = ssc.union(streams).repartition(18).map("DataReceived: " + _)

    partitions.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
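
A quick diagnostic sketch, assuming Hadoop 2.x (this check is hypothetical, not 
part of the job above): print which filesystem the classpath configuration 
resolves. If a bundled core-site.xml or hdfs-site.xml sets fs.defaultFS to an 
hdfs:// URI, that would explain the HDFS access even though the job never 
references HDFS paths.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem

object CheckDefaultFs {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    println("fs.defaultFS = " + conf.get("fs.defaultFS"))
    println("resolved URI = " + FileSystem.get(conf).getUri)
  }
}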



bit1...@163.com
