Re: How to add multiple sequence files from HDFS to a Spark Context to do Batch processing?

2015-07-31 Thread Marcelo Vanzin
The file argument can be a directory (all of its children will be read) or
even a glob (/path/*.ext, for example).
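
For example, a rough sketch along those lines (the paths, key/value types,
object name, and app name below are made up for illustration):

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.spark.{SparkConf, SparkContext}

object SequenceFileBatch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SequenceFileBatch"))

    // Point sequenceFile at a directory: every sequence file under it is read.
    val fromDir = sc.sequenceFile("hdfs:///data/events", classOf[LongWritable], classOf[Text])
      .map { case (k, v) => (k.toString, v.toString) }

    // Or use a glob so that only the matching files are read.
    val fromGlob = sc.sequenceFile("hdfs:///data/events/part-*", classOf[LongWritable], classOf[Text])
      .map { case (k, v) => (k.toString, v.toString) }

    println(s"directory: ${fromDir.count()} records, glob: ${fromGlob.count()} records")
    sc.stop()
  }
}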

On Fri, Jul 31, 2015 at 11:35 AM, swetha <swethakasire...@gmail.com> wrote:

 Hi,

 How do I add multiple sequence files from HDFS to a SparkContext for batch
 processing? I have something like the following in my code. Do I have to pass
 a comma-separated list of sequence file paths to the SparkContext?

  val data = if (args.length > 0 && args(0) != null)
    sc.sequenceFile(file, classOf[LongWritable], classOf[Text]).
      map { case (x, y) => (x.toString, y.toString) }

 Thanks,
 Swetha



-- 
Marcelo


How to add multiple sequence files from HDFS to a Spark Context to do Batch processing?

2015-07-31 Thread swetha
Hi,

How do I add multiple sequence files from HDFS to a SparkContext for batch
processing? I have something like the following in my code. Do I have to pass
a comma-separated list of sequence file paths to the SparkContext?

 val data = if (args.length > 0 && args(0) != null)
   sc.sequenceFile(file, classOf[LongWritable], classOf[Text]).
     map { case (x, y) => (x.toString, y.toString) }
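
Concretely, is something like the following the right approach? Just a sketch:
the object name and hdfs:// paths are placeholders, and I'm assuming each
program argument is one sequence file path.

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.spark.{SparkConf, SparkContext}

object MultiSequenceFileJob {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("MultiSequenceFileJob"))

    // The underlying Hadoop FileInputFormat should accept a comma-separated
    // list of paths, so the individual files are joined into one string here.
    val data =
      if (args.nonEmpty && args(0) != null)
        sc.sequenceFile(args.mkString(","), classOf[LongWritable], classOf[Text])
          .map { case (x, y) => (x.toString, y.toString) }
      else
        sc.emptyRDD[(String, String)]

    println(s"records read: ${data.count()}")
    sc.stop()
  }
}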

Thanks,
Swetha



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-add-multiple-sequence-files-from-HDFS-to-a-Spark-Context-to-do-Batch-processing-tp24102.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
