[ https://issues.apache.org/jira/browse/HDFS-708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862412#action_12862412 ]

Konstantin Shvachko commented on HDFS-708:
------------------------------------------

With respect to 14, I found the following solution:
{code}
public DataGenerator(FileSystem fs, Path fn) throws IOException {
  if(!(fs instanceof DistributedFileSystem)) {
    // Not HDFS: no block id is available, so use a sentinel value.
    this.fileId = -1L;
    return;
  }
  DFSDataInputStream in = null;
  try {
    // Open the file and remember the id of its first block.
    in = (DFSDataInputStream) ((DistributedFileSystem)fs).open(fn);
    this.fileId = in.getCurrentBlock().getBlockId();
  } finally {
    if(in != null) in.close();
  }
}
{code}
Right after creating a file for write, you can get the id of the first block of 
the file and store it in {{DataGenerator.fileId}}, a new field. This id does not 
change across renames, so it can be reliably used as a file-specific mix-in 
for the hash in data generation and verification. The data value of a file at a 
specific offset is then calculated as {{hash(fileId, offset)}}.
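
For illustration only, here is a minimal sketch of how a fileId-based mix-in could drive generation and verification. The class name, method names, and the particular mixing formula below are hypothetical and not part of the patch; they just show the {{hash(fileId, offset)}} idea.
{code}
// Hypothetical sketch, not the actual SLive implementation.
public class HashedDataExample {

  // Per-offset value: mixes the file's first block id with the offset.
  // The formula is an assumption for illustration, not SLive's hash.
  static byte valueAt(long fileId, long offset) {
    long h = fileId * 31 + offset;
    h ^= (h >>> 33);   // spread the bits so nearby offsets produce different bytes
    return (byte) h;
  }

  // Generate len bytes of data starting at the given file offset.
  static byte[] generate(long fileId, long offset, int len) {
    byte[] buf = new byte[len];
    for (int i = 0; i < len; i++) {
      buf[i] = valueAt(fileId, offset + i);
    }
    return buf;
  }

  // Verify that bytes read back at the given offset match the expected values.
  static boolean verify(long fileId, long offset, byte[] read) {
    for (int i = 0; i < read.length; i++) {
      if (read[i] != valueAt(fileId, offset + i)) {
        return false;
      }
    }
    return true;
  }
}
{code}
Because the block id survives renames, a reader only needs the file's first block id and the read offset to recompute and check the expected bytes.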

> A stress-test tool for HDFS.
> ----------------------------
>
>                 Key: HDFS-708
>                 URL: https://issues.apache.org/jira/browse/HDFS-708
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: test, tools
>    Affects Versions: 0.22.0
>            Reporter: Konstantin Shvachko
>            Assignee: Joshua Harlow
>             Fix For: 0.22.0
>
>         Attachments: slive.patch, SLiveTest.pdf
>
>
> It would be good to have a tool for automatic stress testing of HDFS, which 
> would provide an IO-intensive load on an HDFS cluster.
> The idea is to start the tool, let it run overnight, and then be able to 
> analyze possible failures.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
