Hi Landon,

I had a problem very similar to yours: we had to process around 5 million
relatively small files on NFS. After trying various options, we did
something similar to what Matei suggested.

1) Take the original path, find the subdirectories under it, and parallelize
the resulting list. You can configure how deep to descend before sending the
paths across the cluster (see the usage sketch after the code below).

import java.io.File
import scala.collection.mutable.ListBuffer

// Recursively collect files under srcDir. depth > 0 descends that many more
// levels, depth < 0 descends without limit, and depth == 0 stops descending
// and adds the directory itself to the list.
def getFileList(srcDir: File, depth: Int): List[File] = {
  val list = new ListBuffer[File]()
  if (srcDir.isDirectory()) {
    srcDir.listFiles().foreach { file =>
      if (file.isFile()) {
        list += file
      } else if (depth > 0) {
        list ++= getFileList(file, depth - 1)
      } else if (depth < 0) {
        list ++= getFileList(file, depth)
      } else {
        list += file
      }
    }
  } else {
    list += srcDir
  }
  list.toList
}
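
To illustrate the second half of step 1, here is a minimal sketch of the
parallelize-and-read part (adapted for this reply, not copied from our code).
It assumes an existing SparkContext `sc`, a made-up input path, and a
partition count you would tune for your workload; every worker needs the
same NFS mount.

import java.io.File
import scala.io.Source

val paths = getFileList(new File("/data/input"), 2).map(_.getAbsolutePath)

// Ship the path list across the cluster and read each file on the executors.
// The slice count (100 here) is arbitrary; tune it for your cluster.
val lines = sc.parallelize(paths, numSlices = 100).flatMap { path =>
  val src = Source.fromFile(path)
  try src.getLines().toList   // materialize before closing the source
  finally src.close()
}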




