[jira] [Commented] (HDFS-9355) Support colocation in HDFS.

2015-11-24 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026254#comment-15026254
 ] 

nijel commented on HDFS-9355:
-

The idea looks good.
But co-location is a very broad topic, so I suggest focusing on the favored-nodes 
optimization as part of this JIRA, as you mentioned.

One option is to provide a client API that returns the DNs matching a given 
storage policy.

> Support colocation in HDFS.
> ---
>
> Key: HDFS-9355
> URL: https://issues.apache.org/jira/browse/HDFS-9355
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>
> Through this feature a client can give HDFS a hint to write all of its 
> blocks on the same set of datanodes. Currently we can achieve this through 
> HDFS-2576, which gives an option to hint the namenode about favored nodes, but 
> in a heterogeneous cluster this does not work out. Suppose a client wants to 
> write data into a directory which has the COLD policy, but does not know which 
> DNs have ARCHIVE storage; then it is not able to give a favoredNodes list. 
> *Implementation*
> Colocation can be enabled by setting "dfs.colocation.enable" to true in the 
> client configuration. If colocation is enabled and the favoredNodes list is 
> empty, then {{DataStreamer}} will set the set of datanodes chosen for the 
> first block as favoredNodes, and subsequent blocks will use the same datanodes 
> for writing. Before closing the file, the client can fetch the favoredNodes 
> list and reuse it for writing a new file.
>  
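A client can already approximate this today by reusing the datanodes of an existing file as favored nodes through the HDFS-2576 create() overload. A minimal sketch (paths and values are illustrative; it assumes fs.defaultFS points at HDFS):

{code}
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ColocatedWrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs =
        (DistributedFileSystem) new Path("/").getFileSystem(conf);

    // Datanodes holding the first block of an existing file.
    FileStatus st = dfs.getFileStatus(new Path("/data/file1"));
    BlockLocation[] blocks = dfs.getFileBlockLocations(st, 0, 1);
    String[] names = blocks[0].getNames();               // "host:port" pairs
    InetSocketAddress[] favored = new InetSocketAddress[names.length];
    for (int i = 0; i < names.length; i++) {
      String[] hp = names[i].split(":");
      favored[i] = new InetSocketAddress(hp[0], Integer.parseInt(hp[1]));
    }

    // Hint the namenode to place the new file's blocks on the same datanodes.
    try (FSDataOutputStream out = dfs.create(new Path("/data/file2"),
        FsPermission.getFileDefault(), true, 4096, (short) 3,
        dfs.getDefaultBlockSize(), null, favored)) {
      out.writeBytes("colocated with file1");
    }
  }
}
{code}

As the description notes, this only works when the client can name the datanodes up front, which is exactly what breaks down for storage policies like COLD.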



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9011) Support splitting BlockReport of a storage into multiple RPC

2015-11-09 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998041#comment-14998041
 ] 

nijel commented on HDFS-9011:
-

It looks like a similar discussion happened in 
https://issues.apache.org/jira/browse/HDFS-8574.

> Support splitting BlockReport of a storage into multiple RPC
> 
>
> Key: HDFS-9011
> URL: https://issues.apache.org/jira/browse/HDFS-9011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-9011.000.patch, HDFS-9011.001.patch, 
> HDFS-9011.002.patch
>
>
> Currently if a DataNode has too many blocks (more than 1m by default), it 
> sends multiple RPCs to the NameNode for the block report, where each RPC 
> contains the report for a single storage. However, in practice we've seen that 
> sometimes even a single storage can contain a large number of blocks, and its 
> report can exceed the max RPC data length. It may be helpful to support sending 
> multiple RPCs for the block report of a single storage. 
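For reference, the "1m by default" threshold above is, if I recall correctly, dfs.blockreport.split.threshold, which controls when a DataNode stops sending one combined report and starts sending one RPC per storage:

{code}
<!-- hdfs-site.xml: if the DN has more blocks than this, it sends one
     block-report RPC per storage instead of a single combined RPC. -->
<property>
  <name>dfs.blockreport.split.threshold</name>
  <value>1000000</value>
</property>
{code}

This JIRA proposes going one step further: splitting the report of a single storage across multiple RPCs.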



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9353) Code and comment mismatch in JavaKeyStoreProvider

2015-10-30 Thread nijel (JIRA)
nijel created HDFS-9353:
---

 Summary: Code and comment mismatch in  JavaKeyStoreProvider 
 Key: HDFS-9353
 URL: https://issues.apache.org/jira/browse/HDFS-9353
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: nijel
Priority: Trivial


In

org.apache.hadoop.crypto.key.JavaKeyStoreProvider.JavaKeyStoreProvider(URI uri, 
Configuration conf) throws IOException

The comment says:
{code}
// Get the password file from the conf, if not present from the user's
// environment var
{code}

But the code takes the value from the ENV first.

I think this makes sense, since the user can pass the ENV for a particular run.

My suggestion is to change the comment to match the code.
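For reference, the order the constructor actually follows looks roughly like this (paraphrased from memory, not a verbatim copy of the code):

{code}
// Actual order: the user's environment variable first, then the conf.
char[] password = null;
if (System.getenv().containsKey(KEYSTORE_PASSWORD_ENV_VAR)) {
  password = System.getenv(KEYSTORE_PASSWORD_ENV_VAR).toCharArray();
}
if (password == null) {
  String pwFile = conf.get(KEYSTORE_PASSWORD_FILE_KEY);
  // ... read the password from pwFile ...
}
{code}

So the comment should say: get the password from the user's environment variable, and only if it is not present there, from the password file named in the conf.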



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-14 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957152#comment-14957152
 ] 

nijel commented on HDFS-9157:
-

bq. -1 hdfs tests

The test failures are unrelated to the patch.

Please review.

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch, HDFS-9157_2.patch, HDFS-9157_3.patch, 
> HDFS-9157_4.patch, HDFS-9157_5.patch, HDFS-9157_6.patch
>
>
> In both tools, if "-h" is specified as the only option, the command throws an 
> error because the input and output files are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, parsing happens before the "-h" option is checked.
> We can add code to return right after an initial check:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-14 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9157:

Attachment: HDFS-9157_6.patch

One test failure is due to the patch.
Sorry for the wrong analysis.

Updated the patch.

Thanks.

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch, HDFS-9157_2.patch, HDFS-9157_3.patch, 
> HDFS-9157_4.patch, HDFS-9157_5.patch, HDFS-9157_6.patch
>
>
> In both tools, if "-h" is specified as the only option, the command throws an 
> error because the input and output files are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, parsing happens before the "-h" option is checked.
> We can add code to return right after an initial check:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-13 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9157:

Attachment: HDFS-9157_5.patch

Patch to fix whitespace.

The test failures are not related to this patch.

Thanks.

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch, HDFS-9157_2.patch, HDFS-9157_3.patch, 
> HDFS-9157_4.patch, HDFS-9157_5.patch
>
>
> In both tools, if "-h" is specified as the only option, the command throws an 
> error because the input and output files are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, parsing happens before the "-h" option is checked.
> We can add code to return right after an initial check:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8575) Support User level Quota for space and Name (count)

2015-10-13 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel reassigned HDFS-8575:
---

Assignee: (was: nijel)

Keeping it unassigned as no work is planned.

> Support User level Quota for space and Name (count)
> ---
>
> Key: HDFS-8575
> URL: https://issues.apache.org/jira/browse/HDFS-8575
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: nijel
>
> I would like HDFS to have one feature for quota management at the user 
> level. 
> Background:
> When a customer uses a multi-tenant solution it will have many Hadoop 
> ecosystem components like Hive, HBase, YARN etc. The base folders of these 
> components are different, like /hive for Hive and /hbase for HBase. 
> Now if a user creates some file or table, it will be under the folder 
> specific to the component. If the user name is taken into account it looks like
> {code}
> /hive/user1/table1
> /hive/user2/table1
> /hbase/user1/Htable1
> /hbase/user2/Htable1
>  
> Same for yarn/map-reduce data and logs
> {code}
>  
> In this case, restricting the user to a certain amount of disk/files is 
> very difficult, since the current quota management is at the folder level.
>  
> Requirement: user-level quota for space and name (count). Say user1 can have 
> 100G irrespective of the folder or location used.
>  
> Here the idea is to consider the file owner as the key and attribute the quota 
> to it. The current quota system can then do an initial check of the user 
> quota, if defined, before validating the folder quota.
> Note:
> This needs a change in the fsimage to store the user and quota information.
> Please have a look at this scenario. If it sounds good, I will create the 
> tasks and update the design and prototype.
> Thanks
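A hypothetical command-line shape for the requirement (these dfsadmin options do not exist today; the names are purely illustrative):

{code}
# Hypothetical commands: cap user1 at 100 GB of space and 1M names, keyed
# on file owner, regardless of which folder the files land in.
hdfs dfsadmin -setUserSpaceQuota 100g user1
hdfs dfsadmin -setUserQuota 1000000 user1
{code}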



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9046) Any Error during BPOfferService run can lead to Missing DN.

2015-10-13 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14954510#comment-14954510
 ] 

nijel commented on HDFS-9046:
-

Thanks [~vinayrpet] for your time.

[~cnauroth], please have a look at this change.

> Any Error during BPOfferService run can lead to Missing DN.
> 
>
> Key: HDFS-9046
> URL: https://issues.apache.org/jira/browse/HDFS-9046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9046_1.patch, HDFS-9046_2.patch, HDFS-9046_3.patch
>
>
> The cluster is in HA mode and each DN has only one block pool.
> The issue is that after a switchover, one DN is missing from the current active NN.
> Upon analysis I found that there is one exception in BPOfferService.run():
> {noformat}
> 2015-08-21 09:02:11,190 | WARN  | DataNode: 
> [[[DISK]file:/srv/BigData/hadoop/data5/dn/ 
> [DISK]file:/srv/BigData/hadoop/data4/dn/]]  heartbeating to 
> 160-149-0-114/160.149.0.114:25000 | Unexpected exception in block pool Block 
> pool BP-284203724-160.149.0.114-1438774011693 (Datanode Uuid 
> 15ce1dd7-227f-4fd2-9682-091aa6bc2b89) service to 
> 160-149-0-114/160.149.0.114:25000 | BPServiceActor.java:830
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.execute(FsDatasetAsyncDiskService.java:172)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.deleteAsync(FsDatasetAsyncDiskService.java:221)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:1887)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:669)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:616)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:856)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:671)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> After this, that particular BPOfferService is down for the rest of the runtime,
> and this particular NN will not have the details of this DN.
> Similar issues are discussed in the following JIRAs:
> https://issues.apache.org/jira/browse/HDFS-2882
> https://issues.apache.org/jira/browse/HDFS-7714
> Can we retry in this case as well, with a larger interval, instead of shutting 
> down this BPOfferService?
> I think, since these exceptions can occur randomly in a DN, it is not good to 
> keep the DN running when some NN does not have its info!
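A minimal sketch of the suggested retry behaviour (hypothetical; shouldRun(), offerService() and LOG stand in for the real BPServiceActor members):

{code}
// Hypothetical: on an unexpected error (e.g. the OutOfMemoryError above),
// back off and retry instead of tearing the BPOfferService down.
private static final long ERROR_RETRY_MS = 60_000L;

public void run() {
  while (shouldRun()) {
    try {
      offerService();                  // heartbeats + command processing
    } catch (Throwable t) {
      LOG.warn("Unexpected error in block pool service; retrying in "
          + ERROR_RETRY_MS + " ms", t);
      try {
        Thread.sleep(ERROR_RETRY_MS);  // larger interval before retrying
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        return;
      }
    }
  }
}
{code}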



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-13 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9157:

Attachment: HDFS-9157_4.patch

Thanks [~vinayrpet] for the review.
Updated the patch addressing the comments.

Please have a look.

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch, HDFS-9157_2.patch, HDFS-9157_3.patch, 
> HDFS-9157_4.patch
>
>
> In both tools, if "-h" is specified as the only option, the command throws an 
> error because the input and output files are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, parsing happens before the "-h" option is checked.
> We can add code to return right after an initial check:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-10 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9157:

Attachment: HDFS-9157_3.patch

Patch updated to remove whitespace.
Removed an unwanted file from the patch.

Please review.

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch, HDFS-9157_2.patch, HDFS-9157_3.patch
>
>
> In both tools, if "-h" is specified as the only option, the command throws an 
> error because the input and output files are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, parsing happens before the "-h" option is checked.
> We can add code to return right after an initial check:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-10 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14951794#comment-14951794
 ] 

nijel commented on HDFS-9157:
-

bq. -1 release audit
As per my analysis, this is not related to this patch.

bq. -1 hdfs tests
The skipped tests are not related to this patch.

Thanks.

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch, HDFS-9157_2.patch, HDFS-9157_3.patch
>
>
> In both tools, if "-h" is specified as the only option, the command throws an 
> error because the input and output files are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, parsing happens before the "-h" option is checked.
> We can add code to return right after an initial check:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-09 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9157:

Attachment: HDFS-9157_2.patch

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch, HDFS-9157_2.patch
>
>
> In both tools, if "-h" is specified as the only option, the command throws an 
> error because the input and output files are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, parsing happens before the "-h" option is checked.
> We can add code to return right after an initial check:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-09 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14951579#comment-14951579
 ] 

nijel commented on HDFS-9157:
-

Thanks [~liuml07] for the comments.
Updated the patch addressing the comments.

A new assertion was added in testOfflineImageViewerHelpMessage.



> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch, HDFS-9157_2.patch
>
>
> In both tools, if "-h" is specified as the only option, the command throws an 
> error because the input and output files are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, parsing happens before the "-h" option is checked.
> We can add code to return right after an initial check:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-08 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9157:

Attachment: HDFS-9157_1.patch

Attached the changes.
Please review.

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch
>
>
> In both tools, if "-h" is specified as the only option, the command throws an 
> error because the input and output files are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, parsing happens before the "-h" option is checked.
> We can add code to return right after an initial check:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-08 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9157:

Status: Patch Available  (was: Open)

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch
>
>
> In both tools, if "-h" is specified as the only option, the command throws an 
> error because the input and output files are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, parsing happens before the "-h" option is checked.
> We can add code to return right after an initial check:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-08 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14949897#comment-14949897
 ] 

nijel commented on HDFS-9157:
-

bq. -1 release audit
Not related to this patch.

bq. -1 hdfs tests
As per my analysis, these are not related to the patch.

Thanks.

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch
>
>
> In both tools, if "-h" is specified as the only option, the command throws an 
> error because the input and output files are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, parsing happens before the "-h" option is checked.
> We can add code to return right after an initial check:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8442) Remove ServerLifecycleListener from kms/server.xml.

2015-10-07 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948065#comment-14948065
 ] 

nijel commented on HDFS-8442:
-

Removing this line will impact the MBean registration in Tomcat 6.

So I suggest having two XMLs, one for Tomcat 6 based versions and one for 
Tomcat 7 versions.
Based on the version passed at build time, the build can choose which XML to use.

Any thoughts? 

> Remove ServerLifecycleListener from kms/server.xml.
> ---
>
> Key: HDFS-8442
> URL: https://issues.apache.org/jira/browse/HDFS-8442
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-8442-1.patch
>
>
> Remove ServerLifecycleListener from kms/server.xml.
> From Tomcat 7.0.9 onward, support for ServerLifecycleListener is removed.
> Ref: https://tomcat.apache.org/tomcat-7.0-doc/changelog.html
> "Remove ServerLifecycleListener. This was already removed from server.xml and 
> with the Lifecycle re-factoring is no longer required. (markt)"
> So if the build environment uses a Tomcat later than this, KMS startup fails:
> {code}
> SEVERE: Begin event threw exception
> java.lang.ClassNotFoundException: 
> org.apache.catalina.mbeans.ServerLifecycleListener
> {code}
> Can we remove this listener? 
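The line in question is the standard Tomcat 6 listener declaration in server.xml, for which Tomcat 7 no longer ships a class:

{code}
<Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
{code}

Deleting it (or keeping per-Tomcat-version copies of server.xml, as suggested above) avoids the ClassNotFoundException at startup.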



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9159) [OIV] : return value of the command is not correct if invalid value specified in "-p (processor)" option

2015-10-06 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946184#comment-14946184
 ] 

nijel commented on HDFS-9159:
-

bq. -1 release audit
This comment is not related to this patch.

bq. -1 checkstyle
This is because the indentation is kept the same as the other blocks, for readability.

bq. -1 hdfs tests
The test failures are not related to this patch.

Thanks.


> [OIV] : return value of the command is not correct if invalid value specified 
> in "-p (processor)" option
> 
>
> Key: HDFS-9159
> URL: https://issues.apache.org/jira/browse/HDFS-9159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9159_01.patch, HDFS-9159_02.patch, 
> HDFS-9159_03.patch
>
>
> The return value of the OIV command is not correct if an invalid value is 
> specified in the "-p (processor)" option.
> This needs to return an error to the user.
> The code change will be in the switch statement of
> {code}
>  try (PrintStream out = outputFile.equals("-") ?
> System.out : new PrintStream(outputFile, "UTF-8")) {
>   switch (processor) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9159) [OIV] : return value of the command is not correct if invalid value specified in "-p (processor)" option

2015-10-06 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945000#comment-14945000
 ] 

nijel commented on HDFS-9159:
-

Sorry, the name changed!
Thanks [~vinayrpet] for the review.

> [OIV] : return value of the command is not correct if invalid value specified 
> in "-p (processor)" option
> 
>
> Key: HDFS-9159
> URL: https://issues.apache.org/jira/browse/HDFS-9159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9159_01.patch, HDFS-9159_02.patch, 
> HDFS-9159_03.patch
>
>
> The return value of the OIV command is not correct if an invalid value is 
> specified in the "-p (processor)" option.
> This needs to return an error to the user.
> The code change will be in the switch statement of
> {code}
>  try (PrintStream out = outputFile.equals("-") ?
> System.out : new PrintStream(outputFile, "UTF-8")) {
>   switch (processor) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9159) [OIV] : return value of the command is not correct if invalid value specified in "-p (processor)" option

2015-10-06 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9159:

Attachment: HDFS-9159_03.patch

Thanks [~vshreyas] for your time.
Updated the patch to address the comments.
Please have a look.


> [OIV] : return value of the command is not correct if invalid value specified 
> in "-p (processor)" option
> 
>
> Key: HDFS-9159
> URL: https://issues.apache.org/jira/browse/HDFS-9159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9159_01.patch, HDFS-9159_02.patch, 
> HDFS-9159_03.patch
>
>
> The return value of the OIV command is not correct if an invalid value is 
> specified in the "-p (processor)" option.
> This needs to return an error to the user.
> The code change will be in the switch statement of
> {code}
>  try (PrintStream out = outputFile.equals("-") ?
> System.out : new PrintStream(outputFile, "UTF-8")) {
>   switch (processor) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9201) Namenode Performance Improvement : Using for loop without iterator

2015-10-06 Thread nijel (JIRA)
nijel created HDFS-9201:
---

 Summary: Namenode Performance Improvement : Using for loop without 
iterator
 Key: HDFS-9201
 URL: https://issues.apache.org/jira/browse/HDFS-9201
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: nijel
Assignee: nijel


As discussed in HBASE-12023, the for-each loop syntax creates a few extra 
objects and garbage.

For arrays and Lists we can change to the traditional indexed syntax. 
This can improve the memory footprint and can result in a performance gain.
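For illustration, the transformation in question (a for-each over a List compiles to an iterator() call, allocating an Iterator object each time the loop is entered; the indexed form allocates nothing):

{code}
import java.util.ArrayList;
import java.util.List;

public class LoopStyles {
  public static void main(String[] args) {
    List<String> items = new ArrayList<>();
    items.add("a");
    items.add("b");

    // for-each: javac rewrites this to items.iterator(), allocating an
    // Iterator object every time the loop is entered.
    for (String s : items) {
      System.out.println(s);
    }

    // Traditional indexed form: no iterator allocation.
    for (int i = 0, n = items.size(); i < n; i++) {
      System.out.println(items.get(i));
    }
  }
}
{code}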





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9201) Namenode Performance Improvement : Using for loop without iterator

2015-10-06 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9201:

Attachment: HDFS-9201_draft.patch

Tried NNThroughputBenchmark for the read and write flows on a 40-core 
machine, with a few changes in the core flow.
Config: -threads 200 -files 50 -filesPerDir 100

Results (in ops per second):
||Read: ~5% improvement observed||
|| trial || without change || after the change ||
| trial 1 | 187336 | 198886 |
| trial 2 | 181752 | 200642 |
| trial 3 | 195388 | 200964 |

||Write: no change in the write flow||
| trial 1 | 29585 | 29330 |
| trial 2 | 29670 | 29577 |
| trial 3 | 29584 | 29670 |

Attached the draft patch with the changes used for the test.
Please give your opinion.

> Namenode Performance Improvement : Using for loop without iterator
> --
>
> Key: HDFS-9201
> URL: https://issues.apache.org/jira/browse/HDFS-9201
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>  Labels: namenode, performance
> Attachments: HDFS-9201_draft.patch
>
>
> As discussed in HBASE-12023, the for-each loop syntax creates a few extra 
> objects and garbage.
> For arrays and Lists we can change to the traditional indexed syntax. 
> This can improve the memory footprint and can result in a performance gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9155) OEV should treat .XML files as XML even when the file name extension is uppercase

2015-10-05 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1493#comment-1493
 ] 

nijel commented on HDFS-9155:
-

Thanks [~cmccabe] for the review and commit.

> OEV should treat .XML files as XML even when the file name extension is 
> uppercase
> -
>
> Key: HDFS-9155
> URL: https://issues.apache.org/jira/browse/HDFS-9155
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Fix For: 2.8.0
>
> Attachments: HDFS-9155_01.patch
>
>
> As in the document and help:
> {noformat}
> -i,--inputFile <arg>   edits file to process, xml (*case
>    insensitive*) extension means XML format,
> {noformat}
> But if I give the file with an "XML" extension, it falls back to binary 
> processing.
> This issue is due to the code
> {code}
>  int org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.go()
> .
> boolean xmlInput = inputFileName.endsWith(".xml");
> {code}
> Here we need to check for the xml extension after converting the file name to 
> lower case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9158) [OEV-Doc] : Document does not mention "-f" and "-r" options

2015-10-05 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1494#comment-1494
 ] 

nijel commented on HDFS-9158:
-

Thanks [~templedf] and [~vinayrpet] for the review and commit.

> [OEV-Doc] : Document does not mention "-f" and "-r" options
> -
>
> Key: HDFS-9158
> URL: https://issues.apache.org/jira/browse/HDFS-9158
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Fix For: 2.8.0
>
> Attachments: HDFS-9158.01.patch, HDFS-9158_02.patch, 
> HDFS-9158_03.patch
>
>
> 1. The document does not mention the "-f" and "-r" options.
> Add these options to the document as well:
> {noformat}
> -f,--fix-txids Renumber the transaction IDs in the input,
>so that there are no gaps or invalid  transaction IDs.
> -r,--recover   When reading binary edit logs, use recovery
>mode.  This will give you the chance to skip
>corrupt parts of the edit log.
> {noformat}
> 2. In the help message there are some extra white spaces: 
> {code}
> "so that there are no gaps or invalidtransaction IDs."
> {code}
> Can remove these also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9158) [OEV-Doc] : Document does not mention "-f" and "-r" options

2015-09-28 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9158:

Attachment: HDFS-9158_03.patch

Updated the patch for the help message fix also.
Keeping the indent the same, to follow the lines above.

Thanks.

> [OEV-Doc] : Document does not mention "-f" and "-r" options
> -
>
> Key: HDFS-9158
> URL: https://issues.apache.org/jira/browse/HDFS-9158
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9158.01.patch, HDFS-9158_02.patch, 
> HDFS-9158_03.patch
>
>
> 1. The document does not mention the "-f" and "-r" options.
> Add these options to the document as well:
> {noformat}
> -f,--fix-txids Renumber the transaction IDs in the input,
>so that there are no gaps or invalid  transaction IDs.
> -r,--recover   When reading binary edit logs, use recovery
>mode.  This will give you the chance to skip
>corrupt parts of the edit log.
> {noformat}
> 2. In the help message there are some extra white spaces: 
> {code}
> "so that there are no gaps or invalidtransaction IDs."
> {code}
> Can remove these also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9155) [OEV] : The inputFile does not follow case insensitiveness in case of XML file

2015-09-28 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9155:

Attachment: HDFS-9155_01.patch

Updated the change.
Please review.

> [OEV] : The inputFile does not follow case insensitiveness in case of XML file
> -
>
> Key: HDFS-9155
> URL: https://issues.apache.org/jira/browse/HDFS-9155
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9155_01.patch
>
>
> As in the document and help:
> {noformat}
> -i,--inputFile <arg>   edits file to process, xml (*case
>    insensitive*) extension means XML format,
> {noformat}
> But if I give the file with an "XML" extension, it falls back to binary 
> processing.
> This issue is due to the code
> {code}
>  int org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.go()
> .
> boolean xmlInput = inputFileName.endsWith(".xml");
> {code}
> Here we need to check for the xml extension after converting the file name to 
> lower case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9159) [OIV] : return value of the command is not correct if invalid value specified in "-p (processor)" option

2015-09-28 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9159:

Attachment: HDFS-9159_01.patch

Updated the patch.
Please review.

> [OIV] : return value of the command is not correct if invalid value specified 
> in "-p (processor)" option
> 
>
> Key: HDFS-9159
> URL: https://issues.apache.org/jira/browse/HDFS-9159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9159_01.patch
>
>
> The return value of the OIV command is not correct if an invalid value is 
> specified in the "-p (processor)" option.
> This needs to return an error to the user.
> The code change will be in the switch statement of
> {code}
>  try (PrintStream out = outputFile.equals("-") ?
> System.out : new PrintStream(outputFile, "UTF-8")) {
>   switch (processor) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9159) [OIV] : return value of the command is not correct if invalid value specified in "-p (processor)" option

2015-09-28 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9159:

Status: Patch Available  (was: Open)

> [OIV] : return value of the command is not correct if invalid value specified 
> in "-p (processor)" option
> 
>
> Key: HDFS-9159
> URL: https://issues.apache.org/jira/browse/HDFS-9159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9159_01.patch
>
>
> The return value of the OIV command is not correct if an invalid value is 
> specified in the "-p (processor)" option.
> This needs to return an error to the user.
> The code change will be in the switch statement of
> {code}
>  try (PrintStream out = outputFile.equals("-") ?
> System.out : new PrintStream(outputFile, "UTF-8")) {
>   switch (processor) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9155) [OEV] : The inputFile does not follow case insensitiveness in case of XML file

2015-09-28 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9155:

Status: Patch Available  (was: Open)

> [OEV] : The inputFile does not follow case insensitiveness in case of XML file
> -
>
> Key: HDFS-9155
> URL: https://issues.apache.org/jira/browse/HDFS-9155
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9155_01.patch
>
>
> As in the document and help:
> {noformat}
> -i,--inputFile <arg>   edits file to process, xml (*case
>    insensitive*) extension means XML format,
> {noformat}
> But if I give the file with an "XML" extension, it falls back to binary 
> processing.
> This issue is due to the code
> {code}
>  int org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.go()
> .
> boolean xmlInput = inputFileName.endsWith(".xml");
> {code}
> Here we need to check for the xml extension after converting the file name to 
> lower case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9158) [OEV-Doc] : Document does not mention "-f" and "-r" options

2015-09-28 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9158:

Status: Patch Available  (was: Open)

> [OEV-Doc] : Document does not mention "-f" and "-r" options
> -
>
> Key: HDFS-9158
> URL: https://issues.apache.org/jira/browse/HDFS-9158
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9158.01.patch
>
>
> 1. The document does not mention the "-f" and "-r" options.
> Add these options to the document as well:
> {noformat}
> -f,--fix-txids Renumber the transaction IDs in the input,
>so that there are no gaps or invalid  transaction IDs.
> -r,--recover   When reading binary edit logs, use recovery
>mode.  This will give you the chance to skip
>corrupt parts of the edit log.
> {noformat}
> 2. In the help message there are some extra white spaces: 
> {code}
> "so that there are no gaps or invalidtransaction IDs."
> {code}
> Can remove these also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9158) [OEV-Doc] : Document does not mention "-f" and "-r" options

2015-09-28 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9158:

Attachment: HDFS-9158.01.patch

Added the changes.
Please review.

> [OEV-Doc] : Document does not mention "-f" and "-r" options
> -
>
> Key: HDFS-9158
> URL: https://issues.apache.org/jira/browse/HDFS-9158
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9158.01.patch
>
>
> 1. The document does not mention the "-f" and "-r" options.
> Add these options to the document as well:
> {noformat}
> -f,--fix-txids Renumber the transaction IDs in the input,
>so that there are no gaps or invalid  transaction IDs.
> -r,--recover   When reading binary edit logs, use recovery
>mode.  This will give you the chance to skip
>corrupt parts of the edit log.
> {noformat}
> 2. In the help message there are some extra white spaces: 
> {code}
> "so that there are no gaps or invalidtransaction IDs."
> {code}
> Can remove these also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9158) [OEV-Doc] : Document does not mention "-f" and "-r" options

2015-09-28 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9158:

Attachment: HDFS-9158_02.patch

> [OEV-Doc] : Document does not mention "-f" and "-r" options
> -
>
> Key: HDFS-9158
> URL: https://issues.apache.org/jira/browse/HDFS-9158
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9158.01.patch, HDFS-9158_02.patch
>
>
> 1. The document does not mention the "-f" and "-r" options.
> Add these options to the document as well:
> {noformat}
> -f,--fix-txids Renumber the transaction IDs in the input,
>so that there are no gaps or invalid  transaction IDs.
> -r,--recover   When reading binary edit logs, use recovery
>mode.  This will give you the chance to skip
>corrupt parts of the edit log.
> {noformat}
> 2. In the help message there are some extra white spaces: 
> {code}
> "so that there are no gaps or invalidtransaction IDs."
> {code}
> Can remove these also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9158) [OEV-Doc] : Document does not mention "-f" and "-r" options

2015-09-28 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933523#comment-14933523
 ] 

nijel commented on HDFS-9158:
-

Thanks [~templedf] for the comments.
Sorry for the simple mistake.

Updated the patch.

> [OEV-Doc] : Document does not mention "-f" and "-r" options
> -
>
> Key: HDFS-9158
> URL: https://issues.apache.org/jira/browse/HDFS-9158
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9158.01.patch, HDFS-9158_02.patch
>
>
> 1. The document does not mention the "-f" and "-r" options.
> Add these options to the document as well:
> {noformat}
> -f,--fix-txids Renumber the transaction IDs in the input,
>so that there are no gaps or invalid  transaction IDs.
> -r,--recover   When reading binary edit logs, use recovery
>mode.  This will give you the chance to skip
>corrupt parts of the edit log.
> {noformat}
> 2. In the help message there are some extra white spaces: 
> {code}
> "so that there are no gaps or invalidtransaction IDs."
> {code}
> Can remove these also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9160) [OIV-Doc] : Missing details of "delimited" for processor options

2015-09-28 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9160:

Status: Patch Available  (was: Open)

> [OIV-Doc] : Missing details of "delimited" for processor options
> 
>
> Key: HDFS-9160
> URL: https://issues.apache.org/jira/browse/HDFS-9160
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9160.01.patch
>
>
> Details of the "delimited" processor option are missing:
> {noformat}
> -p|--processor processor  Specify the image processor to apply against 
> the image file. Currently valid options are Web (default), XML and 
> FileDistribution.
> {noformat}
> Add the delimited option here and explain it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9160) [OIV-Doc] : Missing details of "delimited" for processor options

2015-09-28 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9160:

Attachment: HDFS-9160.01.patch

Added the changes.
Please have a look.

> [OIV-Doc] : Missing details of "delimited" for processor options
> 
>
> Key: HDFS-9160
> URL: https://issues.apache.org/jira/browse/HDFS-9160
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9160.01.patch
>
>
> Details of the "delimited" processor option are missing:
> {noformat}
> -p|--processor processor  Specify the image processor to apply against 
> the image file. Currently valid options are Web (default), XML and 
> FileDistribution.
> {noformat}
> Add the delimited option here and explain it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9154) Few improvements/bug fixes in offline image viewer and offline edit viewer

2015-09-28 Thread nijel (JIRA)
nijel created HDFS-9154:
---

 Summary: Few improvements/bug fixes in offline image viewer and 
offline edit viewer
 Key: HDFS-9154
 URL: https://issues.apache.org/jira/browse/HDFS-9154
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: nijel
Assignee: nijel


I was analyzing OEV and OIV for cluster maintenance and issue analysis purposes.
I have seen a few issues and possible usability improvements.

I will raise subtasks under this JIRA and handle them.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9156) [OEV] : The inputFile does not follow case insensitiveness in case of XML file

2015-09-28 Thread nijel (JIRA)
nijel created HDFS-9156:
---

 Summary: [OEV] : The inputFile does not follow case 
insensitiveness in case of XML file
 Key: HDFS-9156
 URL: https://issues.apache.org/jira/browse/HDFS-9156
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: nijel
Assignee: nijel


As in the document and help:
{noformat}
-i,--inputFile <arg>   edits file to process, xml (*case
   insensitive*) extension means XML format,
{noformat}

But if I give the file with an "XML" extension, it falls back to binary processing.
This issue is due to the code
{code}
 int org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.go()
.
boolean xmlInput = inputFileName.endsWith(".xml");

{code}
Here we need to check for the xml extension after converting the file name to lower case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9161) [OIV] : Add a header section in oiv output for "delimited" processor to improve the readability

2015-09-28 Thread nijel (JIRA)
nijel created HDFS-9161:
---

 Summary: [OIV] : Add a header section in oiv output for  
"delmited" processor to improve the redability
 Key: HDFS-9161
 URL: https://issues.apache.org/jira/browse/HDFS-9161
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: nijel
Assignee: nijel


Add a header section in the oiv output for the "delimited" processor to improve 
readability.
Currently the output starts with the file details.
Can add a header like
"filename size   createdtime  replicationfactor ..."
with all the column names.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9162) [OIV and OEV] : The output stream can be created only if the processor value specified is correct, if not return error to user.

2015-09-28 Thread nijel (JIRA)
nijel created HDFS-9162:
---

 Summary: [OIV and OEV] : The output stream can be created only if 
the processor value specified is correct, if not return error to user.
 Key: HDFS-9162
 URL: https://issues.apache.org/jira/browse/HDFS-9162
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: nijel
Assignee: nijel


In both tools, the output file will be overwritten if it exists.

Now if the user passes an invalid processor value, the file content will be lost.

In this case the PrintStream is created before the processor option is checked, 
so the output file will be empty even if the command fails.

The output stream should be created only if the processor value specified is 
correct; if not, return an error to the user.
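A minimal sketch of the suggested ordering (names are illustrative; the point is only that validation happens before the output file is opened and truncated):

{code}
static int run(String processor, String outputFile) throws Exception {
  // Validate the -p value first ...
  switch (processor) {
    case "Web": case "XML": case "FileDistribution": case "Delimited":
      break;
    default:
      System.err.println("Invalid processor specified: " + processor);
      return -1;                 // fail before touching the output file
  }
  // ... and only then create (and possibly overwrite) the output file.
  try (PrintStream out = outputFile.equals("-")
      ? System.out : new PrintStream(outputFile, "UTF-8")) {
    // run the selected processor, writing to out
  }
  return 0;
}
{code}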



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-09-28 Thread nijel (JIRA)
nijel created HDFS-9157:
---

 Summary: [OEV and OIV] : Unnecessary parsing for mandatory 
arguments if "-h" option is specified as the only option
 Key: HDFS-9157
 URL: https://issues.apache.org/jira/browse/HDFS-9157
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: nijel
Assignee: nijel


In both tools, if "-h" is specified as the only option, the command throws an 
error because the input and output files are not specified.
{noformat}
master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
Error parsing command-line options: Missing required options: o, i
Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
{noformat}

In the code, parsing happens before the "-h" option is checked.
We can add code to return right after an initial check:

{code}
if (argv.length == 1 && "-h".equals(argv[0])) {
  printHelp();
  return 0;
}
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-9156) [OEV] : The inputFile does not follow case insensitiveness in case of XML file

2015-09-28 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel resolved HDFS-9156.
-
Resolution: Duplicate
  Assignee: (was: nijel)

> [OEV] : The inputFile does not follow case insensitiveness in case of XML file
> -
>
> Key: HDFS-9156
> URL: https://issues.apache.org/jira/browse/HDFS-9156
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>
> As in the document and help:
> {noformat}
> -i,--inputFile <arg>   edits file to process, xml (*case
>    insensitive*) extension means XML format,
> {noformat}
> But if I give the file with an "XML" extension, it falls back to binary 
> processing.
> This issue is due to the code
> {code}
>  int org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.go()
> .
> boolean xmlInput = inputFileName.endsWith(".xml");
> {code}
> Here we need to check for the xml extension after converting the file name to 
> lower case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9155) [OEV] : The inputFile does not follow case insensitiveness in case of XML file

2015-09-28 Thread nijel (JIRA)
nijel created HDFS-9155:
---

 Summary: [OEV] : The inputFile does not follow case 
insensitiveness in case of XML file
 Key: HDFS-9155
 URL: https://issues.apache.org/jira/browse/HDFS-9155
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: nijel
Assignee: nijel


As in the document and help:
{noformat}
-i,--inputFile <arg>   edits file to process, xml (*case
   insensitive*) extension means XML format,
{noformat}

But if I give the file with an "XML" extension, it falls back to binary processing.
This issue is due to the code
{code}
 int org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.go()
.
boolean xmlInput = inputFileName.endsWith(".xml");

{code}
Here we need to check for the xml extension after converting the file name to lower case.
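The fix would be a one-line change (sketch; a locale-safe helper such as Hadoop's StringUtils.toLowerCase would be preferable to the raw String.toLowerCase):

{code}
boolean xmlInput = inputFileName.toLowerCase().endsWith(".xml");
{code}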



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9158) [OEV-Doc] : Document does not mention "-f" and "-r" options

2015-09-28 Thread nijel (JIRA)
nijel created HDFS-9158:
---

 Summary: [OEV-Doc] : Document does not mention "-f" and "-r" 
options
 Key: HDFS-9158
 URL: https://issues.apache.org/jira/browse/HDFS-9158
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: nijel
Assignee: nijel


1. The document does not mention the "-f" and "-r" options.

Add these options to the document as well:
{noformat}
-f,--fix-txids Renumber the transaction IDs in the input,
   so that there are no gaps or invalid  transaction IDs.
-r,--recover   When reading binary edit logs, use recovery
   mode.  This will give you the chance to skip
   corrupt parts of the edit log.
{noformat}


2. In the help message there are some extra white spaces: 
{code}
"so that there are no gaps or invalidtransaction IDs."
{code}
Can remove these also.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9159) [OIV] : return value of the command is not correct if invalid value specified in "-p (processor)" option

2015-09-28 Thread nijel (JIRA)
nijel created HDFS-9159:
---

 Summary: [OIV] : return value of the command is not correct if 
invalid value specified in "-p (processor)" option
 Key: HDFS-9159
 URL: https://issues.apache.org/jira/browse/HDFS-9159
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: nijel
Assignee: nijel


The return value of the OIV command is not correct if an invalid value is 
specified in the "-p (processor)" option.


This needs to return an error to the user.
The code change will be in the switch statement of

{code}
 try (PrintStream out = outputFile.equals("-") ?
System.out : new PrintStream(outputFile, "UTF-8")) {
  switch (processor) {
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9160) [OIV-Doc] : Missing details of "delimited" for processor options

2015-09-28 Thread nijel (JIRA)
nijel created HDFS-9160:
---

 Summary: [OIV-Doc] : Missing details of "delimited" for processor 
options
 Key: HDFS-9160
 URL: https://issues.apache.org/jira/browse/HDFS-9160
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: nijel
Assignee: nijel


Details of the "delimited" processor option are missing:

{noformat}
-p|--processor processor   Specify the image processor to apply against 
the image file. Currently valid options are Web (default), XML and 
FileDistribution.

{noformat}
Add the delimited option here and explain it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9163) [OIV and OEV] : Avoid stack trace at client side (shell)

2015-09-28 Thread nijel (JIRA)
nijel created HDFS-9163:
---

 Summary: [OIV and OEV] : Avoid stack trace at client side (shell)
 Key: HDFS-9163
 URL: https://issues.apache.org/jira/browse/HDFS-9163
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: nijel
Assignee: nijel


In some error cases, it prints the stack trace to the console.
E.g.:
{code}
master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -i invalidFile -o test
Encountered exception. Exiting: invalidFile (No such file or directory)
java.io.FileNotFoundException: invalidFile (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$FileLog.getInputStream(EditLogFileInputStream.java:390)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:140)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.getVersion(EditLogFileInputStream.java:265)
at org.apache.hadoop.hdfs.tools.offlineEd
{code}

The log can keep the full exception trace; the console can be a bit cleaner with only the error message.
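A sketch of the suggested handling (hypothetical placement; visitor.go() and LOG stand in for the real OEV entry point and logger):

{code}
try {
  visitor.go();
} catch (IOException e) {
  // Console gets only the message; the full trace goes to the log.
  System.err.println("Encountered exception. Exiting: " + e.getMessage());
  LOG.error("Encountered exception while processing the edit log", e);
  return -1;
}
{code}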



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9159) [OIV] : return value of the command is not correct if invalid value specified in "-p (processor)" option

2015-09-28 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9159:

Attachment: HDFS-9159_02.patch

1. Fixed the checkstyle warning

2. As per my analysis, the test failure is unrelated to this patch


> [OIV] : return value of the command is not correct if invalid value specified 
> in "-p (processor)" option
> 
>
> Key: HDFS-9159
> URL: https://issues.apache.org/jira/browse/HDFS-9159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9159_01.patch, HDFS-9159_02.patch
>
>
> Return value of the OIV command is not correct if an invalid value is specified in 
> the "-p (processor)" option.
> This needs to return an error to the user.
> The code change will be in the switch statement of
> {code}
>  try (PrintStream out = outputFile.equals("-") ?
> System.out : new PrintStream(outputFile, "UTF-8")) {
>   switch (processor) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-26 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909182#comment-14909182
 ] 

nijel commented on HDFS-9125:
-

thanks [~templedf] for your time.
Updated the patch as per the comment, with a minor change in the command prefix.
Please check

> Display help if the  command option to "hdfs dfs " is not valid
> ---
>
> Key: HDFS-9125
> URL: https://issues.apache.org/jira/browse/HDFS-9125
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
> Attachments: HDFS-9125_1.patch, HDFS-9125_2.patch
>
>
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
> -mkdirs: Unknown command
> {noformat}
> Better to display the help info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-26 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9125:

Attachment: HDFS-9125_2.patch

> Display help if the  command option to "hdfs dfs " is not valid
> ---
>
> Key: HDFS-9125
> URL: https://issues.apache.org/jira/browse/HDFS-9125
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
> Attachments: HDFS-9125_1.patch, HDFS-9125_2.patch
>
>
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
> -mkdirs: Unknown command
> {noformat}
> Better to display the help info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-26 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9125:

Attachment: HDFS-9125_3.patch

> Display help if the  command option to "hdfs dfs " is not valid
> ---
>
> Key: HDFS-9125
> URL: https://issues.apache.org/jira/browse/HDFS-9125
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
> Attachments: HDFS-9125_1.patch, HDFS-9125_2.patch, HDFS-9125_3.patch
>
>
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
> -mkdirs: Unknown command
> {noformat}
> Better to display the help info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-26 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909224#comment-14909224
 ] 

nijel commented on HDFS-9125:
-

Updated the patch with the checkstyle fix and a fix for the test failure 
"org.apache.hadoop.cli.TestCLI.testAll"

bq. 
org.apache.hadoop.fs.TestLocalFsFCStatistics.testStatisticsThreadLocalDataCleanUp
This failure is unrelated and is passing locally.

please review
thanks

> Display help if the  command option to "hdfs dfs " is not valid
> ---
>
> Key: HDFS-9125
> URL: https://issues.apache.org/jira/browse/HDFS-9125
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
> Attachments: HDFS-9125_1.patch, HDFS-9125_2.patch, HDFS-9125_3.patch
>
>
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
> -mkdirs: Unknown command
> {noformat}
> Better to display the help info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9135) The dfs client will always connect to the old ip which was previously resolved after the domainname of namenode changes to a new ip

2015-09-24 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907629#comment-14907629
 ] 

nijel commented on HDFS-9135:
-

I will look into this. Please feel free to reassign.

> The dfs client will always connect to the old ip which was previously resolved 
> after the domainname of namenode changes to a new ip
> 
>
> Key: HDFS-9135
> URL: https://issues.apache.org/jira/browse/HDFS-9135
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: liujianhui
>Assignee: nijel
>Priority: Minor
>
> when the namenode moves from one machine to another machine while the domainname 
> of the namenode is not modified (for example both are namenode.org), and 
> fs.hdfs.impl.disable.cache in the conf is false as default, the dfs client will 
> always connect to the old machine first, and then connect to the new 
> machine after the IOException response. This procedure wastes a few seconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9135) The dfs client will always connect to the old ip which was previously resolved after the domainname of namenode changes to a new ip

2015-09-24 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel reassigned HDFS-9135:
---

Assignee: nijel

> The dfs client will always connect to the old ip which was previously resolved 
> after the domainname of namenode changes to a new ip
> 
>
> Key: HDFS-9135
> URL: https://issues.apache.org/jira/browse/HDFS-9135
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: liujianhui
>Assignee: nijel
>Priority: Minor
>
> when the namenode moves from one machine to another machine while the domainname 
> of the namenode is not modified (for example both are namenode.org), and 
> fs.hdfs.impl.disable.cache in the conf is false as default, the dfs client will 
> always connect to the old machine first, and then connect to the new 
> machine after the IOException response. This procedure wastes a few seconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-23 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9125:

Status: Patch Available  (was: Open)

> Display help if the  command option to "hdfs dfs " is not valid
> ---
>
> Key: HDFS-9125
> URL: https://issues.apache.org/jira/browse/HDFS-9125
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
> Attachments: HDFS-9125_1.patch
>
>
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
> -mkdirs: Unknown command
> {noformat}
> Better to display the help info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-23 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9125:

Attachment: HDFS-9125_1.patch

Attached the changes.
Please review

> Display help if the  command option to "hdfs dfs " is not valid
> ---
>
> Key: HDFS-9125
> URL: https://issues.apache.org/jira/browse/HDFS-9125
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
> Attachments: HDFS-9125_1.patch
>
>
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
> -mkdirs: Unknown command
> {noformat}
> Better to display the help info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9126) namenode crash in fsimage download/transfer

2015-09-23 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904724#comment-14904724
 ] 

nijel commented on HDFS-9126:
-

[~pingley]
Can you attach the logs or the error messages for more clarity?

> namenode crash in fsimage download/transfer
> ---
>
> Key: HDFS-9126
> URL: https://issues.apache.org/jira/browse/HDFS-9126
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
> Environment: OS:Centos 6.5(final)
> Hadoop:2.6.0
> namenode ha base 5 journalnode
>Reporter: zengyongping
>Priority: Critical
>
> In our production Hadoop cluster, when the active namenode begins to download/transfer 
> the fsimage from the standby namenode, sometimes the zkfc health monitor of the 
> NameNode hits a socket timeout, zkfc judges the active namenode status as 
> SERVICE_NOT_RESPONDING, a hadoop namenode ha failover happens, and the old 
> active namenode is fenced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-22 Thread nijel (JIRA)
nijel created HDFS-9125:
---

 Summary: Display help if the  command option to "hdfs dfs " is not 
valid
 Key: HDFS-9125
 URL: https://issues.apache.org/jira/browse/HDFS-9125
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: nijel
Assignee: nijel
Priority: Minor


{noformat}
master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
-mkdirs: Unknown command
{noformat}

Better to display the help info.
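
A minimal sketch of the proposed behavior, assuming a simple dispatcher (this is not the actual FsShell code; the command list and usage text are illustrative):

{code}
import java.util.Arrays;
import java.util.List;

public class UnknownCommandSketch {
  private static final List<String> COMMANDS = Arrays.asList("-mkdir", "-ls", "-cat");

  public static void main(String[] args) {
    String cmd = args.length > 0 ? args[0] : "";
    if (!COMMANDS.contains(cmd)) {
      System.err.println(cmd + ": Unknown command");
      // print the help instead of exiting with only the one-line error
      System.err.println("Usage: hdfs dfs [generic options]");
      for (String c : COMMANDS) {
        System.err.println("\t[" + c + " ...]");
      }
      System.exit(-1);
    }
    // ... dispatch to the real command here ...
  }
}
{code}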



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9046) Any Error during BPOfferService run can lead to Missing DN.

2015-09-21 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14900496#comment-14900496
 ] 

nijel commented on HDFS-9046:
-

bq. -1  hdfs tests  186m 23s  Tests failed in hadoop-hdfs.
As per my analysis the test failures are not related to this patch. In the previous 
run these passed.

> Any Error during BPOfferService run can lead to Missing DN.
> 
>
> Key: HDFS-9046
> URL: https://issues.apache.org/jira/browse/HDFS-9046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9046_1.patch, HDFS-9046_2.patch, HDFS-9046_3.patch
>
>
> The cluster is in HA mode and each DN has only one block pool.
> The issue is that after a switchover, one DN is missing from the current active NN.
> Upon analysis I found that there is one exception in BPOfferService.run()
> {noformat}
> 2015-08-21 09:02:11,190 | WARN  | DataNode: 
> [[[DISK]file:/srv/BigData/hadoop/data5/dn/ 
> [DISK]file:/srv/BigData/hadoop/data4/dn/]]  heartbeating to 
> 160-149-0-114/160.149.0.114:25000 | Unexpected exception in block pool Block 
> pool BP-284203724-160.149.0.114-1438774011693 (Datanode Uuid 
> 15ce1dd7-227f-4fd2-9682-091aa6bc2b89) service to 
> 160-149-0-114/160.149.0.114:25000 | BPServiceActor.java:830
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.execute(FsDatasetAsyncDiskService.java:172)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.deleteAsync(FsDatasetAsyncDiskService.java:221)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:1887)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:669)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:616)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:856)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:671)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> After this, the particular BPOfferService is down for the rest of the run time,
> and this particular NN will not have the details of this DN.
> Similar issues are discussed in the following JIRAs
> https://issues.apache.org/jira/browse/HDFS-2882
> https://issues.apache.org/jira/browse/HDFS-7714
> Can we retry in this case also with a larger interval instead of shutting 
> down this BPOfferService ?
> I think since these exceptions can occur randomly in a DN, it is not good to keep 
> the DN running when some NN does not have its info !



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9046) Any Error during BPOfferService run can lead to Missing DN.

2015-09-20 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9046:

Attachment: HDFS-9046_3.patch

Fixed the checkstyle and whitespace comments.


> Any Error during BPOfferService run can lead to Missing DN.
> 
>
> Key: HDFS-9046
> URL: https://issues.apache.org/jira/browse/HDFS-9046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9046_1.patch, HDFS-9046_2.patch, HDFS-9046_3.patch
>
>
> The cluster is in HA mode and each DN has only one block pool.
> The issue is that after a switchover, one DN is missing from the current active NN.
> Upon analysis I found that there is one exception in BPOfferService.run()
> {noformat}
> 2015-08-21 09:02:11,190 | WARN  | DataNode: 
> [[[DISK]file:/srv/BigData/hadoop/data5/dn/ 
> [DISK]file:/srv/BigData/hadoop/data4/dn/]]  heartbeating to 
> 160-149-0-114/160.149.0.114:25000 | Unexpected exception in block pool Block 
> pool BP-284203724-160.149.0.114-1438774011693 (Datanode Uuid 
> 15ce1dd7-227f-4fd2-9682-091aa6bc2b89) service to 
> 160-149-0-114/160.149.0.114:25000 | BPServiceActor.java:830
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.execute(FsDatasetAsyncDiskService.java:172)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.deleteAsync(FsDatasetAsyncDiskService.java:221)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:1887)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:669)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:616)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:856)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:671)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> After this, the particular BPOfferService is down for the rest of the run time,
> and this particular NN will not have the details of this DN.
> Similar issues are discussed in the following JIRAs
> https://issues.apache.org/jira/browse/HDFS-2882
> https://issues.apache.org/jira/browse/HDFS-7714
> Can we retry in this case also with a larger interval instead of shutting 
> down this BPOfferService ?
> I think since these exceptions can occur randomly in a DN, it is not good to keep 
> the DN running when some NN does not have its info !



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9046) Any Error during BPOfferService run can lead to Missing DN.

2015-09-16 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9046:

Status: Patch Available  (was: Open)

> Any Error during BPOfferService run can lead to Missing DN.
> 
>
> Key: HDFS-9046
> URL: https://issues.apache.org/jira/browse/HDFS-9046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9046_1.patch, HDFS-9046_2.patch
>
>
> The cluster is in HA mode and each DN has only one block pool.
> The issue is that after a switchover, one DN is missing from the current active NN.
> Upon analysis I found that there is one exception in BPOfferService.run()
> {noformat}
> 2015-08-21 09:02:11,190 | WARN  | DataNode: 
> [[[DISK]file:/srv/BigData/hadoop/data5/dn/ 
> [DISK]file:/srv/BigData/hadoop/data4/dn/]]  heartbeating to 
> 160-149-0-114/160.149.0.114:25000 | Unexpected exception in block pool Block 
> pool BP-284203724-160.149.0.114-1438774011693 (Datanode Uuid 
> 15ce1dd7-227f-4fd2-9682-091aa6bc2b89) service to 
> 160-149-0-114/160.149.0.114:25000 | BPServiceActor.java:830
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.execute(FsDatasetAsyncDiskService.java:172)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.deleteAsync(FsDatasetAsyncDiskService.java:221)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:1887)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:669)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:616)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:856)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:671)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> After this, the particular BPOfferService is down for the rest of the run time,
> and this particular NN will not have the details of this DN.
> Similar issues are discussed in the following JIRAs
> https://issues.apache.org/jira/browse/HDFS-2882
> https://issues.apache.org/jira/browse/HDFS-7714
> Can we retry in this case also with a larger interval instead of shutting 
> down this BPOfferService ?
> I think since these exceptions can occur randomly in a DN, it is not good to keep 
> the DN running when some NN does not have its info !



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9046) Any Error during BPOfferService run can lead to Missing DN.

2015-09-16 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9046:

Attachment: HDFS-9046_2.patch

Updated the patch
Please review

> Any Error during BPOfferService run can lead to Missing DN.
> 
>
> Key: HDFS-9046
> URL: https://issues.apache.org/jira/browse/HDFS-9046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9046_1.patch, HDFS-9046_2.patch
>
>
> The cluster is in HA mode and each DN has only one block pool.
> The issue is that after a switchover, one DN is missing from the current active NN.
> Upon analysis I found that there is one exception in BPOfferService.run()
> {noformat}
> 2015-08-21 09:02:11,190 | WARN  | DataNode: 
> [[[DISK]file:/srv/BigData/hadoop/data5/dn/ 
> [DISK]file:/srv/BigData/hadoop/data4/dn/]]  heartbeating to 
> 160-149-0-114/160.149.0.114:25000 | Unexpected exception in block pool Block 
> pool BP-284203724-160.149.0.114-1438774011693 (Datanode Uuid 
> 15ce1dd7-227f-4fd2-9682-091aa6bc2b89) service to 
> 160-149-0-114/160.149.0.114:25000 | BPServiceActor.java:830
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.execute(FsDatasetAsyncDiskService.java:172)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.deleteAsync(FsDatasetAsyncDiskService.java:221)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:1887)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:669)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:616)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:856)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:671)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> After this, the particular BPOfferService is down for the rest of the run time,
> and this particular NN will not have the details of this DN.
> Similar issues are discussed in the following JIRAs
> https://issues.apache.org/jira/browse/HDFS-2882
> https://issues.apache.org/jira/browse/HDFS-7714
> Can we retry in this case also with a larger interval instead of shutting 
> down this BPOfferService ?
> I think since these exceptions can occur randomly in a DN, it is not good to keep 
> the DN running when some NN does not have its info !



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9046) Any Error during BPOfferService run can lead to Missing DN.

2015-09-10 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9046:

Attachment: HDFS-9046_1.patch

Attached the patch
A similar issue is noticed in the init section also.

It is hard to reproduce. I will try to add a test case.

Please review

> Any Error during BPOfferService run can lead to Missing DN.
> 
>
> Key: HDFS-9046
> URL: https://issues.apache.org/jira/browse/HDFS-9046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9046_1.patch
>
>
> The cluster is in HA mode and each DN has only one block pool.
> The issue is that after a switchover, one DN is missing from the current active NN.
> Upon analysis I found that there is one exception in BPOfferService.run()
> {noformat}
> 2015-08-21 09:02:11,190 | WARN  | DataNode: 
> [[[DISK]file:/srv/BigData/hadoop/data5/dn/ 
> [DISK]file:/srv/BigData/hadoop/data4/dn/]]  heartbeating to 
> 160-149-0-114/160.149.0.114:25000 | Unexpected exception in block pool Block 
> pool BP-284203724-160.149.0.114-1438774011693 (Datanode Uuid 
> 15ce1dd7-227f-4fd2-9682-091aa6bc2b89) service to 
> 160-149-0-114/160.149.0.114:25000 | BPServiceActor.java:830
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.execute(FsDatasetAsyncDiskService.java:172)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.deleteAsync(FsDatasetAsyncDiskService.java:221)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:1887)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:669)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:616)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:856)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:671)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> After this, the particular BPOfferService is down for the rest of the run time,
> and this particular NN will not have the details of this DN.
> Similar issues are discussed in the following JIRAs
> https://issues.apache.org/jira/browse/HDFS-2882
> https://issues.apache.org/jira/browse/HDFS-7714
> Can we retry in this case also with a larger interval instead of shutting 
> down this BPOfferService ?
> I think since these exceptions can occur randomly in a DN, it is not good to keep 
> the DN running when some NN does not have its info !



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9046) Any Error during BPOfferService run can lead to Missing DN.

2015-09-10 Thread nijel (JIRA)
nijel created HDFS-9046:
---

 Summary: Any Error during BPOfferService run can lead to Missing 
DN.
 Key: HDFS-9046
 URL: https://issues.apache.org/jira/browse/HDFS-9046
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: nijel
Assignee: nijel


The cluster is in HA mode and each DN has only one block pool.

The issue is that after a switchover, one DN is missing from the current active NN.
Upon analysis I found that there is one exception in BPOfferService.run()

{noformat}
2015-08-21 09:02:11,190 | WARN  | DataNode: 
[[[DISK]file:/srv/BigData/hadoop/data5/dn/ 
[DISK]file:/srv/BigData/hadoop/data4/dn/]]  heartbeating to 
160-149-0-114/160.149.0.114:25000 | Unexpected exception in block pool Block 
pool BP-284203724-160.149.0.114-1438774011693 (Datanode Uuid 
15ce1dd7-227f-4fd2-9682-091aa6bc2b89) service to 
160-149-0-114/160.149.0.114:25000 | BPServiceActor.java:830
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.execute(FsDatasetAsyncDiskService.java:172)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.deleteAsync(FsDatasetAsyncDiskService.java:221)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:1887)
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:669)
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:616)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:856)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:671)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
at java.lang.Thread.run(Thread.java:745)
{noformat}
After this, the particular BPOfferService is down for the rest of the run time,
and this particular NN will not have the details of this DN.

Similar issues are discussed in the following JIRAs
https://issues.apache.org/jira/browse/HDFS-2882
https://issues.apache.org/jira/browse/HDFS-7714

Can we retry in this case also with a larger interval instead of shutting down 
this BPOfferService ?
I think since these exceptions can occur randomly in a DN, it is not good to keep 
the DN running when some NN does not have its info !
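
A rough sketch of the retry idea, assuming a wrapper around the service loop (names and the back-off value are illustrative, not the actual BPServiceActor code):

{code}
public class RetryLoopSketch {
  private static final long RETRY_INTERVAL_MS = 5 * 60 * 1000L; // assumed larger back-off

  public static void main(String[] args) throws InterruptedException {
    while (true) {
      try {
        offerService();                  // placeholder for the heartbeat/command loop
      } catch (Throwable t) {            // Throwable also covers OutOfMemoryError
        System.err.println("Unexpected exception in block pool service, retrying: " + t);
        Thread.sleep(RETRY_INTERVAL_MS); // back off instead of ending the thread
      }
    }
  }

  private static void offerService() {
    // heartbeats, block reports and command processing would run here
  }
}
{code}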




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5711) Removing memory limitation of the Namenode by persisting Block - Block location mappings to disk.

2015-08-24 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708941#comment-14708941
 ] 

nijel commented on HDFS-5711:
-

It's a long pending issue :)

Recently I analyzed a similar requirement, to improve the NN memory footprint 
and the HDFS cluster startup time.
The initial analysis was to keep the blockmap in a memory cache with 
persistence support, so that only the recent activities are kept in memory.
With HDFS-395 in place, the NN can keep only the recent activities in memory.

Any thoughts ? 
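
A rough sketch of that direction, with the LevelDB calls replaced by a plain Map placeholder (all names here are illustrative):

{code}
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class BlockMapCacheSketch {
  // stands in for the LevelDB-backed store of block -> locations entries
  private final Map<Long, String> persistentStore = new HashMap<>();
  private final int capacity = 1_000_000;

  // access-ordered LinkedHashMap gives simple LRU eviction into the store
  private final LinkedHashMap<Long, String> recent =
      new LinkedHashMap<Long, String>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<Long, String> eldest) {
          if (size() > capacity) {
            persistentStore.put(eldest.getKey(), eldest.getValue()); // spill to disk
            return true;
          }
          return false;
        }
      };

  public String getLocations(long blockId) {
    String locs = recent.get(blockId);
    if (locs == null) {
      locs = persistentStore.get(blockId);  // cache miss -> read from the store
      if (locs != null) {
        recent.put(blockId, locs);          // promote back into memory
      }
    }
    return locs;
  }
}
{code}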

 Removing memory limitation of the Namenode by persisting Block - Block 
 location mappings to disk.
 -

 Key: HDFS-5711
 URL: https://issues.apache.org/jira/browse/HDFS-5711
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Rohan Pasalkar
Assignee: Ajith S
Priority: Minor

 This jira is to track changes to be made to remove HDFS name-node memory 
 limitation to hold block - block location mappings.
 It is a known fact that the single Name-node architecture of HDFS has 
 scalability limits. The HDFS federation project alleviates this problem by 
 using horizontal scaling. This helps increase the throughput of metadata 
 operation and also the amount of data that can be stored in a Hadoop cluster.
 The Name-node stores all the filesystem metadata in memory (even in the 
 federated architecture), the
 Name-node design can be enhanced by persisting part of the metadata onto 
 secondary storage and retaining 
 the popular or recently accessed metadata information in main memory. This 
 design can benefit a HDFS deployment
 which doesn't use federation but needs to store a large number of files or 
 large number of blocks. Lin Xiao from Hortonworks attempted a similar
 project [1] in the Summer of 2013. They used LevelDB to persist the Namespace 
 information (i.e file and directory inode information).
 A patch with this change is yet to be submitted to code base. We also intend 
 to use LevelDB to persist metadata, and plan to 
 provide a complete solution, by not just persisting  the Namespace 
 information but also the Blocks Map onto secondary storage. 
 We did implement the basic prototype which stores the block-block location 
 mapping metadata to the persistent key-value store i.e. levelDB. Prototype 
 also maintains the in-memory cache of the recently used block-block location 
 mappings metadata. 
 References:
 [1] Lin Xiao, Hortonworks, Removing Name-node’s memory limitation, HDFS-5389, 
 http://www.slideshare.net/ydn/hadoop-meetup-hug-august-2013-removing-the-namenodes-memory-limitation.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8574) When block count for a volume exceeds dfs.blockreport.split.threshold, block report causes exception

2015-06-23 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14597253#comment-14597253
 ] 

nijel commented on HDFS-8574:
-

In that case, I think there is no need for any change.
The user can configure this accordingly, since the scenario is to verify the limits.
Thanks

 When block count for a volume exceeds dfs.blockreport.split.threshold, block 
 report causes exception
 

 Key: HDFS-8574
 URL: https://issues.apache.org/jira/browse/HDFS-8574
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Ajith S
Assignee: Ajith S

 This piece of code in 
 {{org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport()}}
 {code}
 // Send one block report per message.
 for (int r = 0; r < reports.length; r++) {
   StorageBlockReport singleReport[] = { reports[r] };
   DatanodeCommand cmd = bpNamenode.blockReport(
       bpRegistration, bpos.getBlockPoolId(), singleReport,
       new BlockReportContext(reports.length, r, reportId));
   numReportsSent++;
   numRPCs++;
   if (cmd != null) {
     cmds.add(cmd);
   }
 }
 {code}
 when a single volume contains many blocks, i.e. more than the threshold, it 
 tries to send the entire block report in one RPC, causing an exception
 {code}
 java.lang.IllegalStateException: 
 com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
 large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
 the size limit.
 at 
 org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:369)
 at 
 org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:347)
 at 
 org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder.getBlockListAsLongs(BlockListAsLongs.java:325)
 at 
 org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:190)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:473)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8629) WebHDFS Improvements: support Missing Features (quota, storagepolicy)

2015-06-18 Thread nijel (JIRA)
nijel created HDFS-8629:
---

 Summary: WebHDFS Improvements: support Missing Features (quota, 
storagepolicy)
 Key: HDFS-8629
 URL: https://issues.apache.org/jira/browse/HDFS-8629
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: nijel


We are focusing on building a Hadoop operation management system based on the REST 
API.
As per our analysis, a few features are missing from the REST API, like quota 
management and storage policy management.
Since these are supported by the filesystem object, the same operations can be 
allowed from the REST API.

Please give your comments. I will raise subtasks for the missing features and other 
improvements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8629) WebHDFS Improvements: support Missing Features (quota, storagepolicy)

2015-06-18 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-8629:

Issue Type: New Feature  (was: Bug)

 WebHDFS Improvements: support Missing Features (quota, storagepolicy)
 -

 Key: HDFS-8629
 URL: https://issues.apache.org/jira/browse/HDFS-8629
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: nijel

 We are focusing on building a Hadoop operation management system based on 
 the REST API.
 As per our analysis, a few features are missing from the REST API, like quota 
 management and storage policy management.
 Since these are supported by the filesystem object, the same operations can be 
 allowed from the REST API.
 Please give your comments. I will raise subtasks for the missing features and 
 other improvements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8631) WebHDFS : Support list/setQuota

2015-06-18 Thread nijel (JIRA)
nijel created HDFS-8631:
---

 Summary: WebHDFS : Support list/setQuota
 Key: HDFS-8631
 URL: https://issues.apache.org/jira/browse/HDFS-8631
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: nijel
Assignee: surendra singh lilhore


The user is able to do quota management from the filesystem object. The same 
operation can be allowed through the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8630) WebHDFS : Support get/setStoragePolicy

2015-06-18 Thread nijel (JIRA)
nijel created HDFS-8630:
---

 Summary: WebHDFS : Support get/setStoragePolicy 
 Key: HDFS-8630
 URL: https://issues.apache.org/jira/browse/HDFS-8630
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: nijel
Assignee: surendra singh lilhore


The user can set and get the storage policy from the filesystem object. The same 
operation can be allowed through the REST API.
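
For illustration, the calls could follow the existing WebHDFS URL shape; the op names below are a suggestion for this proposal, not a committed API:

{noformat}
curl -X PUT "http://<NN>:50070/webhdfs/v1/<PATH>?op=SETSTORAGEPOLICY&storagepolicy=COLD"
curl -X GET "http://<NN>:50070/webhdfs/v1/<PATH>?op=GETSTORAGEPOLICY"
{noformat}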



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8574) When block count for a volume exceeds dfs.blockreport.split.threshold, block report causes exception

2015-06-17 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589701#comment-14589701
 ] 

nijel commented on HDFS-8574:
-

Can we think about making the protobuf size limit configurable? Is it feasible?
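
As a sketch of what "configurable" could mean here, the decode limit can be raised via the protobuf API the exception message points at; reading it from a config key such as ipc.maximum.data.length is an assumption for illustration:

{code}
import com.google.protobuf.CodedInputStream;
import java.io.ByteArrayInputStream;

public class PbSizeLimitSketch {
  public static void main(String[] args) throws Exception {
    int maxDataLength = 128 * 1024 * 1024;  // assumed: read from ipc.maximum.data.length
    CodedInputStream cis =
        CodedInputStream.newInstance(new ByteArrayInputStream(new byte[0]));
    cis.setSizeLimit(maxDataLength);        // as CodedInputStream.setSizeLimit() suggests
  }
}
{code}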

 When block count for a volume exceeds dfs.blockreport.split.threshold, block 
 report causes exception
 

 Key: HDFS-8574
 URL: https://issues.apache.org/jira/browse/HDFS-8574
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Ajith S
Assignee: Ajith S

 This piece of code in 
 {{org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport()}}
 {code}
 // Send one block report per message.
 for (int r = 0; r < reports.length; r++) {
   StorageBlockReport singleReport[] = { reports[r] };
   DatanodeCommand cmd = bpNamenode.blockReport(
       bpRegistration, bpos.getBlockPoolId(), singleReport,
       new BlockReportContext(reports.length, r, reportId));
   numReportsSent++;
   numRPCs++;
   if (cmd != null) {
     cmds.add(cmd);
   }
 }
 {code}
 when a single volume contains many blocks, i.e. more than the threshold, it 
 tries to send the entire block report in one RPC, causing an exception
 {code}
 java.lang.IllegalStateException: 
 com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
 large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
 the size limit.
 at 
 org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:369)
 at 
 org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:347)
 at 
 org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder.getBlockListAsLongs(BlockListAsLongs.java:325)
 at 
 org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:190)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:473)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8575) Support User level Quota for space and Name (count)

2015-06-17 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589573#comment-14589573
 ] 

nijel commented on HDFS-8575:
-

Thanks [~aw] for the comments.
Please find my update below.

bq.Not really. You assign the quota based on the use case and the usage 
pattern. It actually works extremely well.
My point is to have a quota for the user irrespective of the folders. 
As you mentioned, if I need to control the usage of a specific user, I have to 
decide a quota for each folder, which is not easy (nearly impossible) in this 
case. My scenario is to assign a specific amount of quota to the user.

bq.How does this work in a federated name space?
I have not thought of federation; our use case is a non-federated HA cluster.
For federation we can think of supporting this at the namespace level. I will 
think it over and update with a better option.
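
A rough sketch of the proposed check, with all names illustrative (the real change would sit in the namenode's quota verification path):

{code}
import java.util.HashMap;
import java.util.Map;

public class UserQuotaSketch {
  private final Map<String, Long> userSpaceQuota = new HashMap<>();
  private final Map<String, Long> userSpaceUsed  = new HashMap<>();

  // Check the owner's user-level quota, if defined, before the existing
  // folder-quota validation runs.
  void verifyUserQuota(String owner, long deltaBytes) {
    Long quota = userSpaceQuota.get(owner);
    if (quota == null) {
      return;  // no user quota configured, fall through to folder quota
    }
    long used = userSpaceUsed.getOrDefault(owner, 0L);
    if (used + deltaBytes > quota) {
      // stands in for a QuotaExceededException in the real code
      throw new IllegalStateException("User space quota exceeded for " + owner);
    }
  }
}
{code}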

 Support User level Quota for space and Name (count)
 ---

 Key: HDFS-8575
 URL: https://issues.apache.org/jira/browse/HDFS-8575
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: nijel
Assignee: nijel

 I would like to have one feature in HDFS to have quota management at user 
 level. 
 Background :
 When the customer uses a multi tenant solution it will have many Hadoop eco 
 system components like HIVE, HBASE, yarn etc. The base folder of these 
 components are different like /hive - Hive , /hbase -HBase. 
 Now if a user creates some file  or table these will be under the folder 
 specific to component. If the user name is taken into account it looks like
 {code}
 /hive/user1/table1
 /hive/user2/table1
 /hbase/user1/Htable1
 /hbase/user2/Htable1
  
 Same for yarn/map-reduce data and logs
 {code}
  
 In this case restricting the user to use a certain amount of disk/file is 
 very difficult since the current quota management is at folder level.
  
 Requirement: User level Quota for space and Name (count). Say user1 can have 
 100G irrespective of the folder or location used.
  
 Here the idea is to consider the file owner as the key and attribute the quota 
 to it. So the current quota system can have an initial check for the user 
 quota, if defined, before validating the folder quota.
 Note:
 This needs a change in the fsimage to store the user and quota information.
 Please have a look on this scenario. If it sounds good, i will create the 
 tasks and the update the design and prototype.
 Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HDFS-8575) Support User level Quota for space and Name (count)

2015-06-11 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel moved YARN-3796 to HDFS-8575:
---

Key: HDFS-8575  (was: YARN-3796)
Project: Hadoop HDFS  (was: Hadoop YARN)

 Support User level Quota for space and Name (count)
 ---

 Key: HDFS-8575
 URL: https://issues.apache.org/jira/browse/HDFS-8575
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: nijel
Assignee: nijel

 I would like to have one feature in HDFS to have quota management at user 
 level. 
 Background :
 When the customer uses a multi tenant solution it will have many Hadoop eco 
 system components like HIVE, HBASE, yarn etc. The base folder of these 
 components are different like /hive - Hive , /hbase -HBase. 
 Now if a user creates some file  or table these will be under the folder 
 specific to component. If the user name is taken into account it looks like
 {code}
 /hive/user1/table1
 /hive/user2/table1
 /hbase/user1/Htable1
 /hbase/user2/Htable1
  
 Same for yarn/map-reduce data and logs
 {code}
  
 In this case restricting the user to use a certain amount of disk/file is 
 very difficult since the current quota management is at folder level.
  
 Requirement: User level Quota for space and Name (count). Say user1 can have 
 100G irrespective of the folder or location used.
  
 Here the idea is to consider the file owner as the key and attribute the quota 
 to it. So the current quota system can have an initial check for the user 
 quota, if defined, before validating the folder quota.
 Note:
 This needs a change in the fsimage to store the user and quota information.
 Please have a look on this scenario. If it sounds good, i will create the 
 tasks and the update the design and prototype.
 Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8565) Typo in dfshealth.html - Decomissioning

2015-06-09 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-8565:

Attachment: HDFS-8565.patch

Trivial patch with the change.

 Typo in dfshealth.html - Decomissioning
 -

 Key: HDFS-8565
 URL: https://issues.apache.org/jira/browse/HDFS-8565
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: nijel
Assignee: nijel
Priority: Trivial
 Attachments: HDFS-8565.patch


 <div class="page-header"><h1><small>Decomissioning</small></h1></div>
 change to 
 <div class="page-header"><h1><small>Decommissioning</small></h1></div>
 in dfshealth.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8565) Typo in dfshealth.html - Decomissioning

2015-06-09 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-8565:

Status: Patch Available  (was: Open)

 Typo in dfshealth.html - Decomissioning
 -

 Key: HDFS-8565
 URL: https://issues.apache.org/jira/browse/HDFS-8565
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: nijel
Assignee: nijel
Priority: Trivial
 Attachments: HDFS-8565.patch


 <div class="page-header"><h1><small>Decomissioning</small></h1></div>
 change to 
 <div class="page-header"><h1><small>Decommissioning</small></h1></div>
 in dfshealth.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HDFS-8565) Typo in dfshealth.html - Decomissioning

2015-06-09 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel moved HBASE-13872 to HDFS-8565:
-

Key: HDFS-8565  (was: HBASE-13872)
Project: Hadoop HDFS  (was: HBase)

 Typo in dfshealth.html - Decomissioning
 -

 Key: HDFS-8565
 URL: https://issues.apache.org/jira/browse/HDFS-8565
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: nijel
Assignee: nijel
Priority: Trivial

 <div class="page-header"><h1><small>Decomissioning</small></h1></div>
 change to 
 <div class="page-header"><h1><small>Decommissioning</small></h1></div>
 in dfshealth.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8526) final behavior is not honored for YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH since it is a String[]

2015-06-03 Thread nijel (JIRA)
nijel created HDFS-8526:
---

 Summary: final behavior is not honored for 
YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH  since it is a String[]
 Key: HDFS-8526
 URL: https://issues.apache.org/jira/browse/HDFS-8526
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: nijel
Assignee: nijel


I was going through some FindBugs rules. One issue reported there is that

 public static final String[] DEFAULT_YARN_APPLICATION_CLASSPATH = {
and 
  public static final String[] 
DEFAULT_YARN_CROSS_PLATFORM_APPLICATION_CLASSPATH=

are not honoring the final qualifier. The string array contents can be 
reassigned !
Simple test
{code}
public class TestClass {
  static final String[] t = { "1", "2" };
  public static void main(String[] args) {
    System.out.println(12 > 10);
    String[] t1 = { "u" };
    // t = t1;     // this will show a compilation error
    t[0] = t1[0];  // But this works
  }
}
{code}
One option is to use Collections.unmodifiableList

any thoughts ?
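
A short sketch of that option; the classpath entries below are illustrative stand-ins for the real defaults:

{code}
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class FinalClasspathSketch {
  // an unmodifiable List cannot have its contents reassigned, unlike a final String[]
  public static final List<String> DEFAULT_YARN_APPLICATION_CLASSPATH =
      Collections.unmodifiableList(Arrays.asList(
          "$HADOOP_CONF_DIR",
          "$HADOOP_COMMON_HOME/share/hadoop/common/*"));

  public static void main(String[] args) {
    // throws UnsupportedOperationException instead of silently succeeding
    DEFAULT_YARN_APPLICATION_CLASSPATH.set(0, "overwritten");
  }
}
{code}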



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8526) final behavior is not honored for YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH since it is a String[]

2015-06-03 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-8526:

Description: 
I was going through some FindBugs rules. One issue reported there is that

 public static final String[] DEFAULT_YARN_APPLICATION_CLASSPATH = {
and 
  public static final String[] 
DEFAULT_YARN_CROSS_PLATFORM_APPLICATION_CLASSPATH=

are not honoring the final qualifier. The string array contents can be 
reassigned !
Simple test
{code}
public class TestClass {
  static final String[] t = { "1", "2" };
  public static void main(String[] args) {
    System.out.println(12 > 10);
    String[] t1 = { "u" };
    // t = t1;     // this will show a compilation error
    t[0] = t1[0];  // But this works
  }
}
{code}
One option is to use Collections.unmodifiableList

any thoughts ?

  was:
i was going through some find bugs rules. One issue reported in that is 

 public static final String[] DEFAULT_YARN_APPLICATION_CLASSPATH = {
and 
  public static final String[] 
DEFAULT_YARN_CROSS_PLATFORM_APPLICATION_CLASSPATH=

is not honoring the final qualifier. The string array contents can be re 
assigned !
Simple test
{code}
public class TestClass {
  static final String[] t = { "1", "2" };
  public static void main(String[] args) {
    System.out.println(12 > 10);
    String[] t1 = { "u" };
    // t = t1;     // this will show compilation 
    t[0] = t1[0];  // But this works

  }
}
{code}
One option is to use Collections.unmodifiableList

any thoughts ?


 final behavior is not honored for 
 YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH  since it is a String[]
 

 Key: HDFS-8526
 URL: https://issues.apache.org/jira/browse/HDFS-8526
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: nijel
Assignee: nijel

 I was going through some FindBugs rules. One issue reported there is that
  public static final String[] DEFAULT_YARN_APPLICATION_CLASSPATH = {
 and 
   public static final String[] 
 DEFAULT_YARN_CROSS_PLATFORM_APPLICATION_CLASSPATH=
 are not honoring the final qualifier. The string array contents can be 
 reassigned !
 Simple test
 {code}
 public class TestClass {
   static final String[] t = { "1", "2" };
   public static void main(String[] args) {
     System.out.println(12 > 10);
     String[] t1 = { "u" };
     // t = t1;     // this will show a compilation error
     t[0] = t1[0];  // But this works
   }
 }
 {code}
 One option is to use Collections.unmodifiableList
 any thoughts ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8475) Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no length prefix available

2015-05-26 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560374#comment-14560374
 ] 

nijel commented on HDFS-8475:
-

hi [~Vinod08]
Looks like you have only one valid datanode assigned for this block, and the write 
to it failed, so the write as a whole will fail.
bq. There are 1 datanode(s) running and 1 node(s) are excluded in this 
operation.

What are you suspecting as the issue here ? 

 Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no 
 length prefix available
 

 Key: HDFS-8475
 URL: https://issues.apache.org/jira/browse/HDFS-8475
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Vinod Valecha
Priority: Blocker

 Scenario:
 =
 write a file
 corrupt block manually
 Exception stack trace- 
 2015-05-24 02:31:55.291 INFO [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Exception in 
 createBlockOutputStream
 java.io.EOFException: Premature EOF: no length prefix available
 at 
 org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
 [5/24/15 2:31:55:291 UTC] 02027a3b DFSClient I 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer createBlockOutputStream 
 Exception in createBlockOutputStream
  java.io.EOFException: Premature EOF: no 
 length prefix available
 at 
 org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
 2015-05-24 02:31:55.291 INFO [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Abandoning 
 BP-176676314-10.108.106.59-1402620296713:blk_1404621403_330880579
 [5/24/15 2:31:55:291 UTC] 02027a3b DFSClient I 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer nextBlockOutputStream 
 Abandoning BP-176676314-10.108.106.59-1402620296713:blk_1404621403_330880579
 2015-05-24 02:31:55.299 INFO [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Excluding datanode 
 10.108.106.59:50010
 [5/24/15 2:31:55:299 UTC] 02027a3b DFSClient I 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer nextBlockOutputStream 
 Excluding datanode 10.108.106.59:50010
 2015-05-24 02:31:55.300 WARNING [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] DataStreamer Exception
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /var/db/opera/files/B4889CCDA75F9751DDBB488E5AAB433E/BE4DAEF290B7136ED6EF3D4B157441A2/BE4DAEF290B7136ED6EF3D4B157441A2-4.pag
  could only be replicated to 0 nodes instead of minReplication (=1).  There 
 are 1 datanode(s) running and 1 node(s) are excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
 [5/24/15 2:31:55:300 UTC] 02027a3b DFSClient W 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer run DataStreamer Exception
  
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /var/db/opera/files/B4889CCDA75F9751DDBB488E5AAB433E/BE4DAEF290B7136ED6EF3D4B157441A2/BE4DAEF290B7136ED6EF3D4B157441A2-4.pag
  could only be replicated to 0 nodes instead of minReplication (=1).  There 
 are 1 datanode(s) running and 1 node(s) are excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
 
  

[jira] [Commented] (HDFS-8442) Remove ServerLifecycleListener from kms/server.xml.

2015-05-21 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14553903#comment-14553903
 ] 

nijel commented on HDFS-8442:
-

bq.-1   tests included
Tests are not applicable.

Please review patch

 Remove ServerLifecycleListener from kms/server.xml.
 ---

 Key: HDFS-8442
 URL: https://issues.apache.org/jira/browse/HDFS-8442
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: nijel
Assignee: nijel
 Attachments: HDFS-8442-1.patch


 Remove ServerLifecycleListener from kms/server.xml.
 From Tomcat 7.0.9 onwards the support for ServerLifecycleListener is removed
 ref : https://tomcat.apache.org/tomcat-7.0-doc/changelog.html
 Remove ServerLifecycleListener. This was already removed from server.xml and 
 with the Lifecycle re-factoring is no longer required. (markt)
 So if the build env is with tomcat later than this, kms startup is failing
 {code}
 SEVERE: Begin event threw exception
 java.lang.ClassNotFoundException: 
 org.apache.catalina.mbeans.ServerLifecycleListener
 {code}
 can we remove this listener ? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8442) Remove ServerLifecycleListener from kms/server.xml.

2015-05-20 Thread nijel (JIRA)
nijel created HDFS-8442:
---

 Summary: Remove ServerLifecycleListener from kms/server.xml.
 Key: HDFS-8442
 URL: https://issues.apache.org/jira/browse/HDFS-8442
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: nijel
Assignee: nijel


Remove ServerLifecycleListener from kms/server.xml.

From Tomcat 7.0.9 onwards the support for ServerLifecycleListener is removed
ref : https://tomcat.apache.org/tomcat-7.0-doc/changelog.html

Remove ServerLifecycleListener. This was already removed from server.xml and 
with the Lifecycle re-factoring is no longer required. (markt)

So if the build env is with tomcat later than this, kms startup is failing
{code}
SEVERE: Begin event threw exception
java.lang.ClassNotFoundException: 
org.apache.catalina.mbeans.ServerLifecycleListener
{code}
can we remove this listener ? 
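
For reference, this should be the entry to delete from the KMS server.xml (shown from memory of the stock Tomcat file, so the exact attributes may differ):

{code}
<!-- no longer supported since Tomcat 7.0.9; remove this line -->
<Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" debug="0"/>
{code}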



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8442) Remove ServerLifecycleListener from kms/server.xml.

2015-05-20 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-8442:

Status: Patch Available  (was: Open)

 Remove ServerLifecycleListener from kms/server.xml.
 ---

 Key: HDFS-8442
 URL: https://issues.apache.org/jira/browse/HDFS-8442
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: nijel
Assignee: nijel
 Attachments: HDFS-8442-1.patch


 Remove ServerLifecycleListener from kms/server.xml.
 From Tomcat 7.0.9 onwards, support for ServerLifecycleListener is removed.
 Ref: https://tomcat.apache.org/tomcat-7.0-doc/changelog.html
 Remove ServerLifecycleListener. This was already removed from server.xml and 
 with the Lifecycle re-factoring is no longer required. (markt)
 So if the build environment uses a Tomcat later than this, KMS startup fails:
 {code}
 SEVERE: Begin event threw exception
 java.lang.ClassNotFoundException: org.apache.catalina.mbeans.ServerLifecycleListener
 {code}
 Can we remove this listener?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8442) Remove ServerLifecycleListener from kms/server.xml.

2015-05-20 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-8442:

Attachment: HDFS-8442-1.patch

Please find the patch.

 Remove ServerLifecycleListener from kms/server.xml.
 ---

 Key: HDFS-8442
 URL: https://issues.apache.org/jira/browse/HDFS-8442
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: nijel
Assignee: nijel
 Attachments: HDFS-8442-1.patch


 Remove ServerLifecycleListener from kms/server.xml.
 From Tomcat 7.0.9 onwards, support for ServerLifecycleListener is removed.
 Ref: https://tomcat.apache.org/tomcat-7.0-doc/changelog.html
 Remove ServerLifecycleListener. This was already removed from server.xml and 
 with the Lifecycle re-factoring is no longer required. (markt)
 So if the build environment uses a Tomcat later than this, KMS startup fails:
 {code}
 SEVERE: Begin event threw exception
 java.lang.ClassNotFoundException: org.apache.catalina.mbeans.ServerLifecycleListener
 {code}
 Can we remove this listener?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3512) Delay in scanning blocks at DN side when there are huge number of blocks

2015-05-11 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-3512:

Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

 Delay in scanning blocks at DN side when there are huge number of blocks
 

 Key: HDFS-3512
 URL: https://issues.apache.org/jira/browse/HDFS-3512
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.0.0-alpha
Reporter: suja s
Assignee: amith
  Labels: BB2015-05-TBR
 Attachments: HDFS-3512.patch


 The block scanner maintains the full list of blocks at the DN side in a map, with 
 no differentiation between blocks that have already been scanned and the 
 ones not yet scanned. For every check (i.e. every 5 seconds) it picks one block 
 and scans it. There is a chance it chooses a block that is already scanned, 
 which further delays the scanning of blocks that are yet to be scanned.
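
To make the differentiation concrete, a minimal sketch (hypothetical class and names, not the actual DataBlockScanner code) that keeps scanned and not-yet-scanned blocks apart so a scan tick never re-picks an already-scanned block within a cycle:
{code}
import java.util.ArrayDeque;
import java.util.Deque;

class BlockScanQueue {
  // Block IDs waiting to be scanned in the current cycle.
  private final Deque<Long> unscanned = new ArrayDeque<>();
  // Block IDs already scanned in the current cycle.
  private final Deque<Long> scanned = new ArrayDeque<>();

  synchronized void add(long blockId) {
    unscanned.addLast(blockId);
  }

  // Called on every check (e.g. every 5 seconds). Always returns a block
  // that has not been scanned yet; once all are done, the cycle restarts.
  synchronized Long pickNext() {
    Long next = unscanned.pollFirst();
    if (next == null && !scanned.isEmpty()) {
      unscanned.addAll(scanned);   // start a new cycle
      scanned.clear();
      next = unscanned.pollFirst();
    }
    if (next != null) {
      scanned.addLast(next);
    }
    return next;                   // null only if no blocks are registered
  }
}
{code}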



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3512) Delay in scanning blocks at DN side when there are huge number of blocks

2015-05-11 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537864#comment-14537864
 ] 

nijel commented on HDFS-3512:
-

Agree with [~umamaheswararao].
Closing as Not a Problem. Feel free to reopen.

 Delay in scanning blocks at DN side when there are huge number of blocks
 

 Key: HDFS-3512
 URL: https://issues.apache.org/jira/browse/HDFS-3512
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.0.0-alpha
Reporter: suja s
Assignee: amith
  Labels: BB2015-05-TBR
 Attachments: HDFS-3512.patch


 The block scanner maintains the full list of blocks at the DN side in a map, with 
 no differentiation between blocks that have already been scanned and the 
 ones not yet scanned. For every check (i.e. every 5 seconds) it picks one block 
 and scans it. There is a chance it chooses a block that is already scanned, 
 which further delays the scanning of blocks that are yet to be scanned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7998) HDFS Federation : Command mentioned to add a NN to existing federated cluster is wrong

2015-05-08 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533931#comment-14533931
 ] 

nijel commented on HDFS-7998:
-

+1
reviewed

 HDFS Federation : Command mentioned to add a NN to existing federated cluster 
 is wrong 
 ---

 Key: HDFS-7998
 URL: https://issues.apache.org/jira/browse/HDFS-7998
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Ajith S
Assignee: Ajith S
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: HDFS-7998.patch


 The HDFS Federation documentation 
 http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/Federation.html
 has the following command to add a NameNode to an existing cluster:
  $HADOOP_PREFIX_HOME/bin/hdfs dfadmin -refreshNameNode 
  datanode_host_name:datanode_rpc_port
 This command is incorrect; the actual correct command is 
  $HADOOP_PREFIX_HOME/bin/hdfs dfsadmin -refreshNamenodes 
  datanode_host_name:datanode_rpc_port
 The documentation needs to be updated accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8108) Fsck should provide the info on mandatory option to be used along with -blocks , -locations and -racks

2015-05-08 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533928#comment-14533928
 ] 

nijel commented on HDFS-8108:
-

reviewed
+1

 Fsck should provide the info on mandatory option to be used along with 
 -blocks , -locations and -racks
 

 Key: HDFS-8108
 URL: https://issues.apache.org/jira/browse/HDFS-8108
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Trivial
  Labels: BB2015-05-RFC
 Attachments: HDFS-8108.1.patch, HDFS-8108.2.patch


 Fsck usage information should state which options are 
 mandatory to be passed along with -blocks, -locations and -racks, to be in 
 sync with the documentation.
 For example, to get information on:
 1. Blocks (-blocks), option -files should also be used.
 2. Rack information (-racks), options -files and -blocks should also be 
 used.
 {noformat}
 ./hdfs fsck <path> -files -blocks
 ./hdfs fsck <path> -files -blocks -racks
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-348) When a HDFS client fails to read a block (due to server failure) the namenode should log this

2015-05-08 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel resolved HDFS-348.

  Resolution: Not A Problem
Target Version/s: 2.5.2

Agree with [~qwertymaniac].
Closing as Not a Problem.
Feel free to reopen.

 When a HDFS client fails to read a block (due to server failure) the namenode 
 should log this
 -

 Key: HDFS-348
 URL: https://issues.apache.org/jira/browse/HDFS-348
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: eric baldeschwieler
Assignee: Sameer Paranjpye

 Right now only client debugging info is available.  The fact that the client 
 node needed to execute a failure mitigation strategy should be logged 
 centrally so we can do analysis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8148) NPE thrown at Namenode startup.

2015-04-15 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14496111#comment-14496111
 ] 

nijel commented on HDFS-8148:
-

Hi Archana and Surendra,
Looks like the patch on https://issues.apache.org/jira/browse/HDFS- fixed 
the issue.
Please confirm.

  NPE thrown at Namenode startup.
 -

 Key: HDFS-8148
 URL: https://issues.apache.org/jira/browse/HDFS-8148
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Archana T
Assignee: surendra singh lilhore
Priority: Minor

 At NameNode startup, an NPE is thrown when an unsupported config parameter is 
 configured in hdfs-site.xml:
 {code}
 2015-04-15 10:43:59,880 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
 java.lang.NullPointerException
 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1219)
 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.close(FSNamesystem.java:1540)
 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:841)
 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:669)
 {code}
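
The trace shape (close() running after a failed <init>) is the classic half-initialized shutdown problem; a minimal illustration of the usual guard, with hypothetical names since the actual fix landed in the JIRA referenced in the comment above:
{code}
class ServiceHolder {
  // May still be null if initialization aborted before the service started.
  private AutoCloseable activeService;

  void stopActiveServices() throws Exception {
    // The missing null check is what turns an aborted startup into an NPE.
    if (activeService != null) {
      activeService.close();
    }
  }
}
{code}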



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7875) Improve log message when wrong value configured for dfs.datanode.failed.volumes.tolerated

2015-03-24 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-7875:

Attachment: 0004-HDFS-7875.patch

Thanks Harsh for the comments.
Updated the patch with the changes.

 Improve log message when wrong value configured for 
 dfs.datanode.failed.volumes.tolerated 
 --

 Key: HDFS-7875
 URL: https://issues.apache.org/jira/browse/HDFS-7875
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: nijel
Assignee: nijel
Priority: Trivial
 Attachments: 0001-HDFS-7875.patch, 0002-HDFS-7875.patch, 
 0003-HDFS-7875.patch, 0004-HDFS-7875.patch


 By mistake I configured dfs.datanode.failed.volumes.tolerated equal to the 
 number of volumes configured, and got stuck for some time debugging since the 
 log message didn't give much detail.
 The log message could be more detailed. Added a patch with the changed message.
 Please have a look.
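
A sketch of the kind of check and message the patch is after (the configuration key is real; the surrounding variables and the exact wording are illustrative, loosely following the FsDatasetImpl-style validation):
{code}
// conf and dataDirs are assumed to come from the DataNode startup context.
int volFailuresTolerated =
    conf.getInt("dfs.datanode.failed.volumes.tolerated", 0);
int volsConfigured = dataDirs.size();

if (volFailuresTolerated < 0 || volFailuresTolerated >= volsConfigured) {
  // Name the bad value, the valid range, and the volume count so a
  // misconfiguration is obvious without source-level debugging.
  throw new DiskErrorException("Invalid value configured for "
      + "dfs.datanode.failed.volumes.tolerated - " + volFailuresTolerated
      + ". It must be >= 0 and less than the number of configured "
      + "volumes (" + volsConfigured + ").");
}
{code}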



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7875) Improve log message when wrong value configured for dfs.datanode.failed.volumes.tolerated

2015-03-24 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379153#comment-14379153
 ] 

nijel commented on HDFS-7875:
-

Thanks Harsh :)

 Improve log message when wrong value configured for 
 dfs.datanode.failed.volumes.tolerated 
 --

 Key: HDFS-7875
 URL: https://issues.apache.org/jira/browse/HDFS-7875
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.6.0
Reporter: nijel
Assignee: nijel
Priority: Trivial
 Fix For: 2.8.0

 Attachments: 0001-HDFS-7875.patch, 0002-HDFS-7875.patch, 
 0003-HDFS-7875.patch, 0004-HDFS-7875.patch


 By mistake I configured dfs.datanode.failed.volumes.tolerated equal to the 
 number of volumes configured, and got stuck for some time debugging since the 
 log message didn't give much detail.
 The log message could be more detailed. Added a patch with the changed message.
 Please have a look.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7875) Improve log message when wrong value configured for dfs.datanode.failed.volumes.tolerated

2015-03-09 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-7875:

Attachment: 0003-HDFS-7875.patch

 Improve log message when wrong value configured for 
 dfs.datanode.failed.volumes.tolerated 
 --

 Key: HDFS-7875
 URL: https://issues.apache.org/jira/browse/HDFS-7875
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: nijel
Assignee: nijel
Priority: Trivial
 Attachments: 0001-HDFS-7875.patch, 0002-HDFS-7875.patch, 
 0003-HDFS-7875.patch


 By mistake I configured dfs.datanode.failed.volumes.tolerated equal to the 
 number of volumes configured, and got stuck for some time debugging since the 
 log message didn't give much detail.
 The log message could be more detailed. Added a patch with the changed message.
 Please have a look.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7875) Improve log message when wrong value configured for dfs.datanode.failed.volumes.tolerated

2015-03-09 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-7875:

Attachment: (was: 0003-HDFS-7875.patch)

 Improve log message when wrong value configured for 
 dfs.datanode.failed.volumes.tolerated 
 --

 Key: HDFS-7875
 URL: https://issues.apache.org/jira/browse/HDFS-7875
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: nijel
Assignee: nijel
Priority: Trivial
 Attachments: 0001-HDFS-7875.patch, 0002-HDFS-7875.patch


 By mistake I configured dfs.datanode.failed.volumes.tolerated equal to the 
 number of volumes configured, and got stuck for some time debugging since the 
 log message didn't give much detail.
 The log message could be more detailed. Added a patch with the changed message.
 Please have a look.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7875) Improve log message when wrong value configured for dfs.datanode.failed.volumes.tolerated

2015-03-09 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-7875:

Attachment: 0003-HDFS-7875.patch

Updated for the comments.
Sorry for the simple mistake :)

 Improve log message when wrong value configured for 
 dfs.datanode.failed.volumes.tolerated 
 --

 Key: HDFS-7875
 URL: https://issues.apache.org/jira/browse/HDFS-7875
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: nijel
Assignee: nijel
Priority: Trivial
 Attachments: 0001-HDFS-7875.patch, 0002-HDFS-7875.patch


 By mistake I configured dfs.datanode.failed.volumes.tolerated equal to the 
 number of volumes configured, and got stuck for some time debugging since the 
 log message didn't give much detail.
 The log message could be more detailed. Added a patch with the changed message.
 Please have a look.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7875) Improve log message when wrong value configured for dfs.datanode.failed.volumes.tolerated

2015-03-09 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352828#comment-14352828
 ] 

nijel commented on HDFS-7875:
-

The build failure looks unrelated to the patch:

*cp: cannot stat '/home/jenkins/buildSupport/lib/*': No such file or directory*
Checking patch hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java...

 Improve log message when wrong value configured for 
 dfs.datanode.failed.volumes.tolerated 
 --

 Key: HDFS-7875
 URL: https://issues.apache.org/jira/browse/HDFS-7875
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: nijel
Assignee: nijel
Priority: Trivial
 Attachments: 0001-HDFS-7875.patch, 0002-HDFS-7875.patch, 
 0003-HDFS-7875.patch


 By mistake I configured dfs.datanode.failed.volumes.tolerated equal to the 
 number of volumes configured, and got stuck for some time debugging since the 
 log message didn't give much detail.
 The log message could be more detailed. Added a patch with the changed message.
 Please have a look.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7883) Move the Hadoop constants in HTTPServer.java to CommonConfigurationKeys class

2015-03-04 Thread nijel (JIRA)
nijel created HDFS-7883:
---

 Summary: Move the Hadoop constants in HTTPServer.java to 
CommonConfigurationKeys class
 Key: HDFS-7883
 URL: https://issues.apache.org/jira/browse/HDFS-7883
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: nijel
Priority: Minor


These two configuration keys in HttpServer2.java are Hadoop configurations.
{code}
  static final String FILTER_INITIALIZER_PROPERTY
      = "hadoop.http.filter.initializers";
  public static final String HTTP_MAX_THREADS = "hadoop.http.max.threads";
{code}
It is better to keep them inside CommonConfigurationKeys.
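
A sketch of what the move could look like (the key strings are from the description above; the constant names and the exact target layout are illustrative):
{code}
public class CommonConfigurationKeys {
  // Relocated from HttpServer2 so the HTTP keys live alongside the other
  // common Hadoop configuration keys.
  public static final String HADOOP_HTTP_FILTER_INITIALIZERS_KEY =
      "hadoop.http.filter.initializers";
  public static final String HADOOP_HTTP_MAX_THREADS_KEY =
      "hadoop.http.max.threads";
}
{code}
HttpServer2 would then reference these constants instead of declaring its own copies.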







--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7883) Move the Hadoop constants in HTTPServer.java to CommonConfigurationKeys class

2015-03-04 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel reassigned HDFS-7883:
---

Assignee: nijel

 Move the Hadoop constants in HTTPServer.java to CommonConfigurationKeys class
 -

 Key: HDFS-7883
 URL: https://issues.apache.org/jira/browse/HDFS-7883
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: nijel
Assignee: nijel
Priority: Minor

 These two configuration keys in HttpServer2.java are Hadoop configurations.
 {code}
   static final String FILTER_INITIALIZER_PROPERTY
       = "hadoop.http.filter.initializers";
   public static final String HTTP_MAX_THREADS = "hadoop.http.max.threads";
 {code}
 It is better to keep them inside CommonConfigurationKeys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7883) Move the Hadoop constants in HTTPServer.java to CommonConfigurationKeys class

2015-03-04 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-7883:

Attachment: 0001-HDFS-7883.patch

Patch file with the change.
Please have a look.

 Move the Hadoop constants in HTTPServer.java to CommonConfigurationKeys class
 -

 Key: HDFS-7883
 URL: https://issues.apache.org/jira/browse/HDFS-7883
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: nijel
Assignee: nijel
Priority: Minor
 Attachments: 0001-HDFS-7883.patch


 These two configuration keys in HttpServer2.java are Hadoop configurations.
 {code}
   static final String FILTER_INITIALIZER_PROPERTY
       = "hadoop.http.filter.initializers";
   public static final String HTTP_MAX_THREADS = "hadoop.http.max.threads";
 {code}
 It is better to keep them inside CommonConfigurationKeys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7883) Move the Hadoop constants in HTTPServer.java to CommonConfigurationKeys class

2015-03-04 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-7883:

Status: Patch Available  (was: Open)

 Move the Hadoop constants in HTTPServer.java to CommonConfigurationKeys class
 -

 Key: HDFS-7883
 URL: https://issues.apache.org/jira/browse/HDFS-7883
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: nijel
Assignee: nijel
Priority: Minor
 Attachments: 0001-HDFS-7883.patch


 These two configuration keys in HttpServer2.java are Hadoop configurations.
 {code}
   static final String FILTER_INITIALIZER_PROPERTY
       = "hadoop.http.filter.initializers";
   public static final String HTTP_MAX_THREADS = "hadoop.http.max.threads";
 {code}
 It is better to keep them inside CommonConfigurationKeys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7875) Improve log message when wrong value configured for dfs.datanode.failed.volumes.tolerated

2015-03-03 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-7875:

Description: 
By mistake I configured dfs.datanode.failed.volumes.tolerated equal to the 
number of volumes configured, and got stuck for some time debugging since the 
log message didn't give much detail.

The log message could be more detailed. Added a patch with the changed message.
Please have a look.

  was:
By mistake i configured dfs.datanode.failed.volumes.tolerated equal to the 
number of volume configured. Got stuck for some time in debugging since the log 
message didn't give much details.

The log message be more detail. Added a patch with change in message.
Please have a look


 Improve log message when wrong value configured for 
 dfs.datanode.failed.volumes.tolerated 
 --

 Key: HDFS-7875
 URL: https://issues.apache.org/jira/browse/HDFS-7875
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: nijel
Assignee: nijel
Priority: Trivial

 By mistake I configured dfs.datanode.failed.volumes.tolerated equal to the 
 number of volumes configured, and got stuck for some time debugging since the 
 log message didn't give much detail.
 The log message could be more detailed. Added a patch with the changed message.
 Please have a look.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7875) Improve log message when wrong value configured for dfs.datanode.failed.volumes.tolerated

2015-03-03 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-7875:

Attachment: 0001-HDFS-7875.patch

Patch for the log message change.

 Improve log message when wrong value configured for 
 dfs.datanode.failed.volumes.tolerated 
 --

 Key: HDFS-7875
 URL: https://issues.apache.org/jira/browse/HDFS-7875
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: nijel
Assignee: nijel
Priority: Trivial
 Attachments: 0001-HDFS-7875.patch


 By mistake I configured dfs.datanode.failed.volumes.tolerated equal to the 
 number of volumes configured, and got stuck for some time debugging since the 
 log message didn't give much detail.
 The log message could be more detailed. Added a patch with the changed message.
 Please have a look.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

