Hi,
You can use Hadoop's FileInputFormat API together with Spark's newAPIHadoopFile
to read a directory and its sub-directories recursively. For more on the topic, see
http://stackoverflow.com/questions/8114579/using-fileinputformat-addinputpaths-to-recursively-add-hdfs-path
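
If it helps, here is a rough sketch of that approach (untested; the input path
is a placeholder, and the recursive flag assumes a Hadoop 2.x configuration key):

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object RecursiveRead {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("RecursiveRead"))

    // Tell FileInputFormat to descend into sub-directories instead of
    // treating them as an error.
    sc.hadoopConfiguration.set(
      "mapreduce.input.fileinputformat.input.dir.recursive", "true")

    // Read every file under the root directory (placeholder path).
    val lines = sc.newAPIHadoopFile(
        "hdfs:///data/root-dir",
        classOf[TextInputFormat],
        classOf[LongWritable],
        classOf[Text])
      .map(_._2.toString)

    println(lines.count())
    sc.stop()
  }
}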

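Sean's suggestion in the thread below could be sketched roughly like this:
list the sub-directories with the Hadoop FileSystem API and hand sc.textFile a
comma-separated string of paths. Again untested, the path is a placeholder, and
it only goes one level deep, so recurse with listStatus if your tree is deeper:

import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.{SparkConf, SparkContext}

object CommaJoinedDirs {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("CommaJoinedDirs"))

    // List the immediate sub-directories of the root (placeholder path).
    val fs = FileSystem.get(sc.hadoopConfiguration)
    val root = new Path("hdfs:///data/root-dir")
    val dirs = fs.listStatus(root)
      .filter(_.isDirectory)
      .map(_.getPath.toString)

    // Pass the sub-directories to textFile as one comma-separated string.
    // (Each listed directory is assumed to contain files, not further
    // sub-directories.)
    val lines = sc.textFile(dirs.mkString(","))

    println(lines.count())
    sc.stop()
  }
}
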
On Fri, Dec 19, 2014 at 4:50 PM, Sean Owen <so...@cloudera.com> wrote:
>
> How about using the HDFS API to create a list of all the directories
> to read from, and passing them as a comma-joined string to
> sc.textFile?
>
> On Fri, Dec 19, 2014 at 11:13 AM, Hafiz Mujadid
> <hafizmujadi...@gmail.com> wrote:
> > Hi experts!
> >
> > What is an efficient way to read all files from a directory and its
> > sub-directories using Spark? Currently I move all files from the directory
> > and its sub-directories into another, temporary directory and then read
> > them all with sc.textFile. But I want an approach that avoids the cost of
> > moving them to a temporary directory.
> >
> > Thanks
> >
> >
> >

-- 
Regards,
Madhukara Phatak
http://www.madhukaraphatak.com
