Hey Aaron,

I had a similar problem. I have log files arranged in the following fashion:

/logs/<hostname>/<date>.log

I want to analyze a range of dates for all hosts. What I did was write a routine in my driver class that descends through the HDFS file system starting at /logs, builds a list of input files in the requested date range, and feeds that list to the framework.

Example code below.

Brian

    // Walk /logs/<hostname>/<date>.log: the outer loop visits each host
    // directory, the inner loop visits each log file beneath it.
    FileSystem fs = FileSystem.get(conf);
    // Matches names like ...datanode-<host>.log.<yyyy-MM-dd>; the date is group 2.
    Pattern fileNamePattern = Pattern.compile(".*datanode-(.*)\\.log\\.([0-9]+-[0-9]+-[0-9]+)");
    for (FileStatus status : fs.listStatus(base)) {
      Path pathname = status.getPath();
      for (FileStatus logfile : fs.listStatus(pathname)) {
        Path logFilePath = logfile.getPath();
        Matcher m = fileNamePattern.matcher(logFilePath.getName());
        if (m.matches()) {
          String dateString = m.group(2);
          Date logDate = df.parse(dateString);
          // Add the file as a job input only if its date falls in [startDate, endDate).
          if ((logDate.equals(startDate) || logDate.after(startDate)) && logDate.before(endDate)) {
            FileInputFormat.addInputPath(conf, logFilePath);
          } else {
            //System.out.println("Ignoring file: " + logFilePath.getName());
            //System.out.println("Start Date: " + startDate + ", End Date: " + endDate + ", Log date: " + logDate);
          }
        } else {
          System.out.println("Ignoring file: " + logFilePath.getName());
        }
      }
    }
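
The snippet assumes a few variables set up earlier in the driver (conf, base, df, startDate, endDate). A minimal sketch of that setup, with illustrative names and values that are not from the original code:

    // Imports needed by the snippet above (besides the usual driver imports):
    //   java.text.SimpleDateFormat, java.util.Date, java.util.regex.Pattern,
    //   java.util.regex.Matcher, org.apache.hadoop.fs.*, org.apache.hadoop.mapred.*
    JobConf conf = new JobConf(LogAnalysisDriver.class);      // hypothetical driver class
    Path base = new Path("/logs");                            // root of the per-host log tree
    SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd");  // matches the <date>.log naming
    Date startDate = df.parse("2009-05-01");                   // illustrative range:
    Date endDate = df.parse("2009-06-01");                     //   [start, end)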


On Jun 2, 2009, at 6:22 PM, Aaron Kimball wrote:

There is no technical limit that prevents Hadoop from operating in this fashion; it's simply that the included InputFormat implementations do not do so. The behavior has been this way for a long time, so it's unlikely to change soon, as that might break existing applications.

But you can write your own subclass of TextInputFormat or
SequenceFileInputFormat that overrides the getSplits() method to recursively
descend through directories and search for files.
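
For reference, here is a minimal sketch of that approach against the old org.apache.hadoop.mapred API; the class name and the recursion helper are illustrative, not something from the original message. It expands each configured input path into the set of files beneath it, then lets the stock split logic run.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.InputSplit;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.TextInputFormat;

    public class RecursiveTextInputFormat extends TextInputFormat {

      @Override
      public InputSplit[] getSplits(JobConf job, int numSplits) throws IOException {
        // Replace each configured input path with the full set of files found
        // beneath it, then let FileInputFormat compute splits as usual.
        List<Path> files = new ArrayList<Path>();
        for (Path p : FileInputFormat.getInputPaths(job)) {
          collectFiles(p.getFileSystem(job), p, files);
        }
        FileInputFormat.setInputPaths(job, files.toArray(new Path[files.size()]));
        return super.getSplits(job, numSplits);
      }

      // Depth-first descent: directories are recursed into, plain files are kept.
      private void collectFiles(FileSystem fs, Path path, List<Path> files)
          throws IOException {
        FileStatus status = fs.getFileStatus(path);
        if (status.isDir()) {
          for (FileStatus child : fs.listStatus(path)) {
            collectFiles(fs, child.getPath(), files);
          }
        } else {
          files.add(path);
        }
      }
    }

The driver would then point FileInputFormat.addInputPath at the top-level directory (e.g. /logs) and set this class as the input format with conf.setInputFormat(RecursiveTextInputFormat.class).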

- Aaron

On Tue, Jun 2, 2009 at 1:22 PM, David Rosenstrauch <dar...@darose.net> wrote:

As per a previous list question (
http://mail-archives.apache.org/mod_mbox/hadoop-core-user/200804.mbox/%3ce75c02ef0804011433x144813e6x2450da7883de3...@mail.gmail.com%3e)
it looks as though it's not possible for Hadoop to traverse input
directories recursively in order to discover input files.

Just wondering a) whether there's any particular reason why this functionality doesn't exist, and b) if not, whether there's any workaround/hack to make it possible.

Like the OP, I was thinking it would be helpful to partition my input data by year, month, and day. I figured this would enable me to run jobs against specific date ranges of input data, and thereby speed up the execution of my jobs since they wouldn't have to process every single record.

Any way to make this happen? (Or am I totally going about this the wrong
way for what I'm trying to achieve?)

TIA,

DR

