[ https://issues.apache.org/jira/browse/HDFS-1877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13036320#comment-13036320 ]

Tsz Wo (Nicholas), SZE commented on HDFS-1877:
----------------------------------------------

- The variables {{inJunitMode}}, {{BLOCK_SIZE}} and {{dfs}} are not actually 
used.  Please remove them.

- How about making the default {{filenameOption}} equal to {{ROOT_DIR}}?

- You may simply write {{static private Log LOG = 
LogFactory.getLog(TestWriteRead.class);}} instead of:
{code}
+  static private Log LOG;
+
+  @Before
+  public void initJunitModeTest() throws Exception {
+    LOG = LogFactory.getLog(TestWriteRead.class);
{code}
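For illustration, the suggested one-line static initialization could look like the sketch below. It is shown with the JDK's own {{java.util.logging.Logger}} so the snippet is self-contained and runnable; the patch itself uses {{org.apache.commons.logging.LogFactory}}, but the pattern is the same.

```java
import java.util.logging.Logger;

public class TestWriteRead {
  // Initialize the logger once at class-load time instead of
  // re-assigning it inside a @Before method.
  static private final Logger LOG =
      Logger.getLogger(TestWriteRead.class.getName());

  public static void main(String[] args) {
    // The logger is already usable; no per-test setup is needed.
    System.out.println(LOG.getName());
  }
}
```

Making the field {{final}} also documents that the logger is never reassigned.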

- Please remove the following.  The default is already INFO.
{code}
+    ((Log4JLogger) FSNamesystem.LOG).getLogger().setLevel(Level.INFO);
+    ((Log4JLogger) DFSClient.LOG).getLogger().setLevel(Level.INFO);
{code}

- Most public methods should be package private.
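As a hypothetical illustration of the visibility change (the method name below is made up, not taken from the patch): dropping the {{public}} modifier makes a method package-private, so other tests in the same package can still call it while it stays out of the class's public API.

```java
public class TestWriteRead {
  // Package-private (no access modifier): visible within the package,
  // but not part of the class's public surface.
  long writeData(int chunkSize) {   // was: public long writeData(...)
    return chunkSize;
  }

  public static void main(String[] args) {
    System.out.println(new TestWriteRead().writeData(1024));
  }
}
```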

- Please add comments to tell how to use the command options and the default 
values.
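Such documentation could be a class-level comment along these lines (the option name and default shown are purely illustrative placeholders; the real ones should come from the patch):

{code}
/**
 * Usage: TestWriteRead [options]
 *
 *   -option <value>   what the option controls (default: <value>)
 *
 * List every command-line option and its default value here, so the
 * tool can be run standalone without reading the source.
 */
{code}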

> Create a functional test for file read/write
> --------------------------------------------
>
>                 Key: HDFS-1877
>                 URL: https://issues.apache.org/jira/browse/HDFS-1877
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: test
>    Affects Versions: 0.22.0
>            Reporter: CW Chung
>            Priority: Minor
>         Attachments: TestWriteRead.java, TestWriteRead.patch
>
>
> It would be great to have a tool, running on a real grid, to perform 
> functional tests (and, to a certain extent, stress tests) for file 
> operations. The tool would be written in Java and make HDFS API calls to 
> read, write, append, and hflush Hadoop files. The tool would be usable 
> standalone, or as a building block for other regression or stress test 
> suites (written in shell, perl, python, etc).

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
