There is no technical limit that prevents Hadoop from operating in this
fashion; it's simply that the included InputFormat implementations do not
do so. This behavior has been in place for a long time, so it's unlikely
to change soon, as that might break existing applications.

But you can write your own subclass of TextInputFormat or
SequenceFileInputFormat that overrides the getSplits() method to recursively
descend through directories and search for files.
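For illustration, here's a minimal sketch of the recursive descent such an
override would perform. It uses plain java.nio rather than the Hadoop
FileSystem API, and the class and method names are my own invention -- in a
real getSplits() override you'd do the same walk with FileSystem.listStatus()
and feed the resulting files to the split logic:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class RecursiveLister {
    // Recursively collect every regular file under root -- the same descent
    // a custom getSplits() would perform before creating input splits.
    static List<Path> listRecursively(Path root) throws IOException {
        List<Path> files = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(root)) {
            for (Path entry : stream) {
                if (Files.isDirectory(entry)) {
                    // descend into the subdirectory instead of skipping it
                    files.addAll(listRecursively(entry));
                } else {
                    // a regular file: a candidate for splitting
                    files.add(entry);
                }
            }
        }
        return files;
    }

    public static void main(String[] args) throws IOException {
        // Build a small year/month/day layout in a temp directory,
        // then list it recursively.
        Path root = Files.createTempDirectory("input");
        Path day = Files.createDirectories(root.resolve("2009").resolve("06").resolve("02"));
        Files.createFile(day.resolve("part-00000"));
        Files.createFile(root.resolve("toplevel.txt"));

        List<Path> found = listRecursively(root);
        // Both files are found, however deeply nested.
        System.out.println(found.size());
    }
}
```

The same structure works inside an InputFormat subclass: replace the
java.nio calls with FileSystem.listStatus() and recurse whenever a
FileStatus reports isDir().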

- Aaron

On Tue, Jun 2, 2009 at 1:22 PM, David Rosenstrauch <dar...@darose.net> wrote:

> As per a previous list question (
> http://mail-archives.apache.org/mod_mbox/hadoop-core-user/200804.mbox/%3ce75c02ef0804011433x144813e6x2450da7883de3...@mail.gmail.com%3e)
> it looks as though it's not possible for Hadoop to traverse input
> directories recursively in order to discover input files.
>
> Just wondering a) if there's any particular reason why this functionality
> doesn't exist, and b) if not, if there's any workaround/hack to make it
> possible.
>
> Like the OP, I was thinking it would be helpful to partition my input data
> by year, month, and day.  I figured this would enable me to run jobs
> against specific date ranges of input data, and thereby speed up the
> execution of my jobs, since they wouldn't have to process every single
> record.
>
> Any way to make this happen?  (Or am I totally going about this the wrong
> way for what I'm trying to achieve?)
>
> TIA,
>
> DR
>
