Re: increase parallelism of reading from hdfs

2014-08-11 Thread Paul Hamilton

increase parallelism of reading from hdfs

2014-08-08 Thread Chen Song
In Spark Streaming, StreamingContext.fileStream gives a FileInputDStream. Within each batch interval, it launches map tasks for the new files detected during that interval. It appears that the way Spark computes the number of map tasks is based on the block size of the files. Below is the quote from
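
A minimal sketch of the setup being discussed, assuming a local checkpoint-free job and a hypothetical input directory `hdfs:///incoming`; since the per-file task count follows HDFS block splits, one common workaround is to repartition the resulting DStream explicitly:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object FileStreamParallelism {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("file-stream-parallelism")
    // 10-second batch interval; new files landing in the directory
    // during each interval become the batch's input.
    val ssc = new StreamingContext(conf, Seconds(10))

    // textFileStream monitors the directory for new files; the number of
    // map tasks per batch is driven by the files' HDFS block splits.
    val lines = ssc.textFileStream("hdfs:///incoming")

    // Hypothetical partition count: redistribute the data so downstream
    // stages run with more parallelism than the block layout provides.
    val repartitioned = lines.repartition(32)

    repartitioned.count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

`repartition` incurs a shuffle, so it helps when the per-record work downstream outweighs the shuffle cost; it does not change how many tasks read the files themselves.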