Question: How does Spark support multiple contexts?

Background: I have a stream of data coming into Spark from Kafka. For each 
record in the stream I want to download some files from HDFS and process the 
file data. I have written code to process the files from HDFS, and I have 
written code to process the stream data from Kafka using the Spark Streaming 
API, but I have not been able to link the two.

Can you please let me know whether it is feasible to create a JavaRDD from an 
HDFS file inside the processing step of a Spark Streaming job?
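To make the question concrete, here is a minimal sketch of the kind of linkage I have in mind. This is only an illustration under assumptions I've made up: the stream is stood in for by `textFileStream` (in my real setup it would come from `KafkaUtils`), and the path `/tmp/paths` and the per-file `count()` processing are placeholders, not my actual code. The idea is that `foreachRDD` runs on the driver, where the `SparkContext` is available, so a new `JavaRDD` for each referenced HDFS file could be created there:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class KafkaHdfsLink {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("KafkaHdfsLink");
        JavaStreamingContext jssc =
                new JavaStreamingContext(conf, Durations.seconds(10));

        // Stand-in for the real Kafka source: a stream whose records are
        // HDFS file paths. In the actual job this would be a DStream built
        // with KafkaUtils from the Spark Streaming Kafka integration.
        JavaDStream<String> paths = jssc.textFileStream("/tmp/paths");

        // foreachRDD executes its body on the driver, so the SparkContext
        // backing the stream can be reused to create new RDDs here.
        paths.foreachRDD(rdd -> {
            JavaSparkContext sc =
                    JavaSparkContext.fromSparkContext(rdd.context());
            for (String path : rdd.collect()) {
                // Create a JavaRDD from the HDFS file named in the record,
                // then hand it to the existing batch-processing code
                // (a simple count() stands in for that processing here).
                JavaRDD<String> fileData = sc.textFile(path);
                System.out.println(path + ": " + fileData.count() + " lines");
            }
        });

        jssc.start();
        jssc.awaitTermination();
    }
}
```

Is this the intended pattern, or is there a supported way to create RDDs inside the stream-processing step itself?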

Thanks,

Rachana
