[jira] [Updated] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-02 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10835:
--
Attachment: HDFS-10835.004.patch

Patch 004:
* Replace the misspelled "top" with "stop"

[~xiaochen], thanks for catching that typo.
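
For reference, with patch 002 ({{kms}} -> {{HttpFS}}) and patch 004 ({{top}} -> 
{{stop}}) applied, the quoted function below would read roughly as follows. 
This is a sketch reconstructed from the patch notes in this thread, not the 
committed diff:

{noformat}
function hadoop_usage
{
  hadoop_add_subcommand "run" "Start HttpFS in the current window"
  hadoop_add_subcommand "run -security" "Start in the current window with security manager"
  hadoop_add_subcommand "start" "Start HttpFS in a separate window"
  hadoop_add_subcommand "start -security" "Start in a separate window with security manager"
  hadoop_add_subcommand "status" "Return the LSB compliant status"
  hadoop_add_subcommand "stop" "Stop HttpFS, waiting up to 5 seconds for the process to end"
  hadoop_add_subcommand "stop n" "Stop HttpFS, waiting up to n seconds for the process to end"
  hadoop_add_subcommand "stop -force" "Stop HttpFS, wait up to 5 seconds and then use kill -KILL if still running"
  hadoop_add_subcommand "stop n -force" "Stop HttpFS, wait up to n seconds and then use kill -KILL if still running"
  hadoop_generate_usage "${MYNAME}" false
}
{noformat}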

> Typo in httpfs.sh hadoop_usage
> --
>
> Key: HDFS-10835
> URL: https://issues.apache.org/jira/browse/HDFS-10835
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.0-alpha1
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Attachments: HDFS-10835.001.patch, HDFS-10835.002.patch, 
> HDFS-10835.003.patch, HDFS-10835.004.patch
>
>
> Typo in method {{hadoop_usage}} of {{httpfs.sh}}. The {{kms}} occurrences 
> should be replaced with {{httpfs}}:
> {noformat}
> function hadoop_usage
> {
>   hadoop_add_subcommand "run" "Start kms in the current window"
>   hadoop_add_subcommand "run -security" "Start in the current window with 
> security manager"
>   hadoop_add_subcommand "start" "Start kms in a separate window"
>   hadoop_add_subcommand "start -security" "Start in a separate window with 
> security manager"
>   hadoop_add_subcommand "status" "Return the LSB compliant status"
>   hadoop_add_subcommand "stop" "Stop kms, waiting up to 5 seconds for the 
> process to end"
>   hadoop_add_subcommand "top n" "Stop kms, waiting up to n seconds for the 
> process to end"
>   hadoop_add_subcommand "stop -force" "Stop kms, wait up to 5 seconds and 
> then use kill -KILL if still running"
>   hadoop_add_subcommand "stop n -force" "Stop kms, wait up to n seconds and 
> then use kill -KILL if still running"
>   hadoop_generate_usage "${MYNAME}" false
> }
> {noformat}






[jira] [Commented] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15460412#comment-15460412
 ] 

Xiao Chen commented on HDFS-10835:
--

{{hadoop_add_subcommand "top n" "Stop $name, waiting up to n seconds for the 
process to end"}}
Let's correct the typo here as well. s/top/stop/g. 
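
For anyone applying the suggested substitution by hand, a minimal illustration 
of that s/top/stop/ fix (the path is the usual location of httpfs.sh in the 
source tree; adjust as needed):

{noformat}
# Rename the mistyped "top n" subcommand to "stop n" in hadoop_usage
sed -i 's/"top n"/"stop n"/' hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
{noformat}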

Otherwise LGTM. I'm okay to keep this just a simple typo fix. Thanks John.

Any comments [~aw]?




[jira] [Updated] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-02 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10835:
--
Attachment: HDFS-10835.003.patch

Patch 003:
* Use {{HttpFS}} in the comment

Thanks [~xiaochen] for the explanation. I have already fixed the versions. As 
for abstracting the helper, I feel it is overkill right now. If another 
component starts using Tomcat Catalina, it would become a good idea.




[jira] [Commented] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15460388#comment-15460388
 ] 

Xiao Chen commented on HDFS-10835:
--

Thanks [~jzhuge] for working on this, and [~aw] for commenting.

To explain Allen's comment, the typo seems to have been introduced by 
HADOOP-12249, which is 3.0 only. I suggest updating the versions accordingly.

- Please also update the comment to be consistent on {{HttpFS}}.
- You probably already know this: kms.sh is pretty much the same. Have we 
considered abstracting the helper function somehow? (A sketch follows below.)
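
A rough sketch of what such a shared helper might look like if the scripts were 
ever refactored; the function name and the {{$name}} parameter are 
hypothetical, not from any attached patch:

{noformat}
# Hypothetical shared helper for the Tomcat Catalina-based services
# (HttpFS, KMS). Each wrapper script would call it with its display name.
function hadoop_catalina_usage
{
  local name=$1  # e.g. "HttpFS" or "KMS"
  hadoop_add_subcommand "run" "Start ${name} in the current window"
  hadoop_add_subcommand "run -security" "Start in the current window with security manager"
  hadoop_add_subcommand "start" "Start ${name} in a separate window"
  hadoop_add_subcommand "start -security" "Start in a separate window with security manager"
  hadoop_add_subcommand "status" "Return the LSB compliant status"
  hadoop_add_subcommand "stop" "Stop ${name}, waiting up to 5 seconds for the process to end"
  hadoop_add_subcommand "stop n" "Stop ${name}, waiting up to n seconds for the process to end"
  hadoop_add_subcommand "stop -force" "Stop ${name}, wait up to 5 seconds and then use kill -KILL if still running"
  hadoop_add_subcommand "stop n -force" "Stop ${name}, wait up to n seconds and then use kill -KILL if still running"
  hadoop_generate_usage "${MYNAME}" false
}

# httpfs.sh would then reduce to:
#   hadoop_catalina_usage "HttpFS"
{noformat}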




[jira] [Commented] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-02 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15460386#comment-15460386
 ] 

John Zhuge commented on HDFS-10835:
---

Fixed. Thanks for catching my mistakes.




[jira] [Updated] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-02 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10835:
--
Target Version/s: 3.0.0-alpha2  (was: 2.8.0)




[jira] [Updated] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-02 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10835:
--
Affects Version/s: 3.0.0-alpha1  (was: 2.6.0)




[jira] [Updated] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-02 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10835:
--
Attachment: HDFS-10835.002.patch

Patch 002:
* Use the official name {{HttpFS}}




[jira] [Commented] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15460311#comment-15460311
 ] 

Allen Wittenauer commented on HDFS-10835:
-

Why is this marked as affecting 2.6.0?  How can this possibly target 2.8.0?




[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15460302#comment-15460302
 ] 

Xiao Chen commented on HDFS-9781:
-

The Findbugs warning is due to HDFS-10745; I left a comment on that jira.

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9781-branch-2.001.patch, HDFS-9781.002.patch, 
> HDFS-9781.003.patch, HDFS-9781.01.patch, HDFS-9781.branch-2.001.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}






[jira] [Commented] (HDFS-10745) Directly resolve paths into INodesInPath

2016-09-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15460300#comment-15460300
 ] 

Xiao Chen commented on HDFS-10745:
--

Thanks for working on this, Daryn and Kihwal.

It seems this change introduced a findbugs warning in branch-2.
{noformat}
Code    Warning
DLS Dead store to src in 
org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSNamesystem, 
String, FSPermissionChecker, String, String, boolean, boolean)
Bug type DLS_DEAD_LOCAL_STORE (click for details) 
In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
In method 
org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSNamesystem, 
String, FSPermissionChecker, String, String, boolean, boolean)
Local variable named src
At FSDirAppendOp.java:[line 92]
{noformat}

> Directly resolve paths into INodesInPath
> 
>
> Key: HDFS-10745
> URL: https://issues.apache.org/jira/browse/HDFS-10745
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10745.2.patch, HDFS-10745.branch-2.patch, 
> HDFS-10745.patch
>
>
> The intermediate resolution to a string, only to be decomposed by 
> {{INodesInPath}} back into a byte[][], can be eliminated by resolving directly 
> to an IIP. The IIP will contain the resolved path if required.






[jira] [Commented] (HDFS-10713) Throttle FsNameSystem lock warnings

2016-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15460191#comment-15460191
 ] 

Hadoop QA commented on HDFS-10713:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 590 unchanged - 1 fixed = 591 total (was 591) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 77m 
36s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10713 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826950/HDFS-10713.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 9db6c4e88dde 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 07650bc |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16627/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16627/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16627/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15460177#comment-15460177
 ] 

Hadoop QA commented on HDFS-9781:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  7m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
| JDK v1.7.0_111 Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HDFS-9781 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826947/HDFS-9781.branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c2a854651d40 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15460174#comment-15460174
 ] 

Manoj Govindassamy commented on HDFS-9781:
--

As [~xiaochen] mentioned, the updated test case 
TestFsDatasetImpl#testRemoveVolumeBeingWritten is failing because HDFS-10830 is 
not fixed yet:
{code}
java.lang.Exception: test timed out after 3 milliseconds
at java.lang.Object.wait(Native Method)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.waitVolumeRemoved(FsVolumeList.java:280)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.removeVolumes(FsDatasetImpl.java:506)
{code}





[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15460168#comment-15460168
 ] 

Hadoop QA commented on HDFS-9781:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m  
8s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
0s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
| JDK v1.7.0_111 Failed junit tests | 
hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HDFS-9781 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826947/HDFS-9781.branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 

[jira] [Updated] (HDFS-10713) Throttle FsNameSystem lock warnings

2016-09-02 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10713:
--
Attachment: HDFS-10713.003.patch

Capturing and logging the longest lock-held interval in each report.
Also added throttling for read-lock warnings.

> Throttle FsNameSystem lock warnings
> ---
>
> Key: HDFS-10713
> URL: https://issues.apache.org/jira/browse/HDFS-10713
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: logging, namenode
>Reporter: Arpit Agarwal
>Assignee: Hanisha Koneru
> Attachments: HDFS-10713.000.patch, HDFS-10713.001.patch, 
> HDFS-10713.002.patch, HDFS-10713.003.patch
>
>
> The NameNode logs a message if the FSNamesystem write lock is held by a 
> thread for over 1 second. These messages can be throttled to at most one 
> per x minutes to avoid potentially filling up NN logs. We can also log the 
> number of suppressed notices since the last log message.






[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459992#comment-15459992
 ] 

Xiao Chen commented on HDFS-9781:
-

Discussed this offline with Manoj.

This jira fixes the NPE, and also enhances the test to expose the locking 
problem when a block report and a volume removal happen at the same time.

HDFS-10830 fixes the locking problem, so we should bring the two together to 
branch-2 to have a passing unit test. Linked HDFS-10830 here.

Thanks for the good work, Manoj!




[jira] [Updated] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9781:

Attachment: HDFS-9781.branch-2.001.patch

Hm, when I ran the test locally it actually timed out.
Attaching a renamed branch-2 patch to trigger Jenkins.




[jira] [Commented] (HDFS-10819) BlockManager fails to store a good block for a datanode storage after it reported a corrupt block — block replication stuck

2016-09-02 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459935#comment-15459935
 ] 

Manoj Govindassamy commented on HDFS-10819:
---

[~andrew.wang],

Thanks for reviewing the patch.

{quote}We need to have a collision between two genstamps of the same 
block.{quote}
More importantly, if the same storage volume in a DN happens to hold a block 
under several genstamps, then without the fix the NN will not "store" the 
replicas with the more recent/higher genstamps.

{quote}Would this also be addressed by having the NN first invalidate the 
corrupt replica before replicating the correct one{quote}
{{BlockManager#markBlockAsCorrupt}} already tries to invalidate the corrupt 
blocks. But block invalidations are postponed if any of the replicas are stale, 
so a corrupt replica might not be invalidated for some time, which delays the 
block from reaching its replication factor.

{quote}Also curious, would invalidation eventually fix this case, or is it 
truly stuck?
{code}
// add block to the datanode
AddBlockResult result = storageInfo.addBlock(storedBlock, reportedBlock);

if (result == AddBlockResult.ADDED) {
.. ..
} else if (result == AddBlockResult.REPLACED) {
.. .. 
} else {
  // if the same block is added again and the replica was corrupt
  // previously because of a wrong gen stamp, remove it from the
  // corrupt block list.
  corruptReplicas.removeFromCorruptReplicasMap(block, node,
  Reason.GENSTAMP_MISMATCH);
  curReplicaDelta = 0;
  blockLog.debug("BLOCK* addStoredBlock: Redundant addStoredBlock request"
  + " received for {} on node {} size {}", storedBlock, node,
  storedBlock.getNumBytes());
}
{code}

As you can see above, there is already code in {{BlockManager#addStoredBlock}} 
to handle the case we are interested in -- a block with the latest GS on the 
same storage volume. Except the caller, 
{{BlockManager#addStoredBlockUnderConstruction}}, is mistakenly skipping the 
block and not allowing that code to handle the case properly. I haven't 
explored the invalidation path fully and am not sure whether it solves the 
problem for testRemoveVolumeBeingWrittenForDatanode. Please let me know if I 
need to explore this path.



> BlockManager fails to store a good block for a datanode storage after it 
> reported a corrupt block — block replication stuck
> ---
>
> Key: HDFS-10819
> URL: https://issues.apache.org/jira/browse/HDFS-10819
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-10819.001.patch
>
>
> TestDataNodeHotSwapVolumes occasionally fails in the unit test 
> testRemoveVolumeBeingWrittenForDatanode.  The data write pipeline can have 
> issues such as timeouts or an unreachable datanode; in this test case the 
> failure is an induced one, as one of the volumes in a datanode is removed 
> while a block write is in progress. Digging further in the logs, when the 
> problem happens in the write pipeline, the error recovery is not happening 
> as expected, leading to block replication never catching up.
> Though this problem has the same signature as HDFS-10780, from the logs it 
> looks like the code paths taken are totally different, and so the root cause 
> could be different as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459931#comment-15459931
 ] 

Xiao Chen commented on HDFS-9781:
-

Branch-2 patch looks good too, committing shortly.

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9781-branch-2.001.patch, HDFS-9781.002.patch, 
> HDFS-9781.003.patch, HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-9781:
-
Attachment: HDFS-9781-branch-2.001.patch

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9781-branch-2.001.patch, HDFS-9781.002.patch, 
> HDFS-9781.003.patch, HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-9781:
-
Attachment: (was: HDFS-9781-branch2.001.patch)

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9781.002.patch, HDFS-9781.003.patch, 
> HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459916#comment-15459916
 ] 

Hadoop QA commented on HDFS-9781:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-9781 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-9781 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826944/HDFS-9781-branch2.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16624/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9781-branch2.001.patch, HDFS-9781.002.patch, 
> HDFS-9781.003.patch, HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}




[jira] [Updated] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-9781:
-
Attachment: HDFS-9781-branch2.001.patch

Attaching the branch-2 patch. 

Removed the test workaround, as branch-2 is not affected by HDFS-10830.

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9781-branch2.001.patch, HDFS-9781.002.patch, 
> HDFS-9781.003.patch, HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459831#comment-15459831
 ] 

Hudson commented on HDFS-9781:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10394 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10394/])
HDFS-9781. FsDatasetImpl#getBlockReports can occasionally throw (xiao: rev 
07650bc37a3c78ecc6566d813778d0954d0b06b0)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9781.002.patch, HDFS-9781.003.patch, 
> HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10475) Adding metrics for long FSNamesystem read and write locks

2016-09-02 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459801#comment-15459801
 ] 

Mingliang Liu commented on HDFS-10475:
--

A complete op->lock-time metric is certainly feasible. To obtain the top n 
paths holding the lock for a long time, we may have to sample in order to 
reduce the overhead of collecting stack traces.
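
As a rough illustration of the sampling idea (the class name, sample rate, and 
threshold below are all invented for this sketch):

{code}
import java.util.concurrent.ThreadLocalRandom;

// Illustrative only: capture a stack trace for a small fraction of long
// lock holds so the common case stays cheap.
class SampledLockTracer {
  private static final double SAMPLE_RATE = 0.01; // assumed 1% sampling
  private static final long THRESHOLD_MS = 1000;  // assumed "long hold" cutoff

  void onLockReleased(long heldMs) {
    if (heldMs >= THRESHOLD_MS
        && ThreadLocalRandom.current().nextDouble() < SAMPLE_RATE) {
      // Stack capture is the expensive part; sampling bounds its cost.
      StackTraceElement[] stack = Thread.currentThread().getStackTrace();
      record(heldMs, stack);
    }
  }

  private void record(long heldMs, StackTraceElement[] stack) {
    // e.g. feed a top-n aggregation keyed by the topmost caller frame
  }
}
{code}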

> Adding metrics for long FSNamesystem read and write locks
> -
>
> Key: HDFS-10475
> URL: https://issues.apache.org/jira/browse/HDFS-10475
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Erik Krogen
>
> This is a follow up of the comment on HADOOP-12916 and 
> [here|https://issues.apache.org/jira/browse/HDFS-9924?focusedCommentId=15310837=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15310837]
>  add more metrics and WARN/DEBUG logs for long FSD/FSN locking operations on 
> namenode similar to what we have for slow write/network WARN/metrics on 
> datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10834) Add concat to libhdfs API

2016-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459784#comment-15459784
 ] 

Hadoop QA commented on HDFS-10834:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 11s{color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_test_libhdfs_threaded_hdfs_static |
|   | test_test_libhdfs_zerocopy_hdfs_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10834 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826935/HDFS-10834.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 79ad081f9051 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f6ea9be |
| Default Java | 1.8.0_101 |
| cc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16623/artifact/patchprocess/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16623/artifact/patchprocess/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16623/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16623/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16623/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add concat to libhdfs API
> -
>
> Key: HDFS-10834
> URL: https://issues.apache.org/jira/browse/HDFS-10834
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, libhdfs
>Reporter: Gary Helmling
> Attachments: HDFS-10834.001.patch
>
>
> libhdfs does not currently provide access to calling FileSystem.concat().  
> Let's add a function for it.



--
This message was sent by Atlassian JIRA

[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459773#comment-15459773
 ] 

Xiao Chen commented on HDFS-9781:
-

I have committed this to trunk.

There are some minor conflicts backporting to branch-2, and I think the 
workaround for HDFS-10830 can be removed since we don't have that issue in 
branch-2. Could you post a branch-2 patch, [~manojg]?

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9781.002.patch, HDFS-9781.003.patch, 
> HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9781:

Fix Version/s: 3.0.0-alpha2

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9781.002.patch, HDFS-9781.003.patch, 
> HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10713) Throttle FsNameSystem lock warnings

2016-09-02 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459771#comment-15459771
 ] 

Erik Krogen commented on HDFS-10713:


In HDFS-10475 we hope to add more detailed metrics for which calls are causing 
long-held locks. We are actively working on this, so it should hopefully 
subsume any useful information that would be gathered by keeping all of the 
stack traces. 

> Throttle FsNameSystem lock warnings
> ---
>
> Key: HDFS-10713
> URL: https://issues.apache.org/jira/browse/HDFS-10713
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: logging, namenode
>Reporter: Arpit Agarwal
>Assignee: Hanisha Koneru
> Attachments: HDFS-10713.000.patch, HDFS-10713.001.patch, 
> HDFS-10713.002.patch
>
>
> The NameNode logs a message if the FSNamesystem write lock is held by a 
> thread for over 1 second. These messages can be throttled to at most one 
> per x minutes to avoid potentially filling up NN logs. We can also log the 
> number of suppressed notices since the last log message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459700#comment-15459700
 ] 

Xiao Chen edited comment on HDFS-9781 at 9/2/16 10:33 PM:
--

[~jojochuang] ping'ed me offline saying this looks fine to him too.
I'll commit this shortly. Will comment on HDFS-10830 in the mean time.


was (Author: xiaochen):
[~jojochuang] ping'ed me offline saying this looks fine to me too.
I'll commit this shortly. Will comment on HDFS-10830 in the mean time.

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9781.002.patch, HDFS-9781.003.patch, 
> HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10819) BlockManager fails to store a good block for a datanode storage after it reported a corrupt block — block replication stuck

2016-09-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459764#comment-15459764
 ] 

Andrew Wang commented on HDFS-10819:


Thanks for working on this Manoj. Great investigation here.

IIUC this is going to be a problem mostly for small clusters, right? We need to 
have a collision between two genstamps of the same block.

Would this also be addressed by having the NN first invalidate the corrupt 
replica before replicating the correct one? I'm wondering if the safer fix is 
to wait for this invalidation by excluding nodes with corrupt replicas when 
doing block placement.

Also curious, would invalidation eventually fix this case, or is it truly 
stuck? That seems like another bug we should address.
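
To make the exclusion suggestion concrete, a tiny sketch with invented names 
and plain strings standing in for datanodes -- not the real 
BlockPlacementPolicy API:

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative: skip candidate datanodes that still hold a corrupt replica
// of the block when choosing replication targets, so the new replica never
// collides with the not-yet-invalidated one.
class PlacementSketch {
  static List<String> chooseTargets(List<String> candidates,
      Set<String> nodesWithCorruptReplica, int needed) {
    return candidates.stream()
        .filter(dn -> !nodesWithCorruptReplica.contains(dn))
        .limit(needed)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<String> candidates = Arrays.asList("dn1", "dn2", "dn3");
    Set<String> corrupt = new HashSet<>(Arrays.asList("dn2"));
    System.out.println(chooseTargets(candidates, corrupt, 2)); // [dn1, dn3]
  }
}
{code}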

> BlockManager fails to store a good block for a datanode storage after it 
> reported a corrupt block — block replication stuck
> ---
>
> Key: HDFS-10819
> URL: https://issues.apache.org/jira/browse/HDFS-10819
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-10819.001.patch
>
>
> TestDataNodeHotSwapVolumes occasionally fails in the unit test 
> testRemoveVolumeBeingWrittenForDatanode.  The data write pipeline can have 
> issues such as timeouts or an unreachable datanode; in this test case the 
> failure is an induced one, as one of the volumes in a datanode is removed 
> while a block write is in progress. Digging further in the logs, when the 
> problem happens in the write pipeline, the error recovery is not happening 
> as expected, leading to block replication never catching up.
> Though this problem has the same signature as HDFS-10780, from the logs it 
> looks like the code paths taken are totally different, and so the root cause 
> could be different as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10713) Throttle FsNameSystem lock warnings

2016-09-02 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459759#comment-15459759
 ] 

Mingliang Liu commented on HDFS-10713:
--

{quote}
If we just capture the longest lock held interval but lose the stack trace 
could that be an acceptable solution. Mingliang Liu, what do you think?
{quote}

That makes sense in order to avoid the overhead of capturing the stack trace 
too frequently. If we find in the future that we really need the stack trace 
of the longest lock hold, we can add it back. Until then, operators can reduce 
the config {{dfs.lock.suppress.warning.interval.ms}} for finer-granularity 
information. Thanks.
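
For concreteness, a minimal sketch of the suppression scheme discussed in this 
thread -- keep only the longest hold plus a suppressed count, no stack traces. 
Everything except the config key it mirrors is invented for the sketch:

{code}
// Illustrative sketch: log at most one warning per suppression interval,
// remembering only the longest lock hold and the number of suppressed
// notices since the last warning.
class ThrottledLockWarning {
  private final long suppressIntervalMs; // mirrors dfs.lock.suppress.warning.interval.ms
  private long lastWarnTimeMs;
  private long longestHeldMs;
  private int suppressedCount;

  ThrottledLockWarning(long suppressIntervalMs) {
    this.suppressIntervalMs = suppressIntervalMs;
  }

  synchronized void onLongHold(long heldMs, long nowMs) {
    longestHeldMs = Math.max(longestHeldMs, heldMs);
    if (nowMs - lastWarnTimeMs >= suppressIntervalMs) {
      System.out.printf(
          "FSNamesystem write lock held for %d ms (longest: %d ms, "
              + "%d warnings suppressed)%n",
          heldMs, longestHeldMs, suppressedCount);
      lastWarnTimeMs = nowMs;
      suppressedCount = 0;
      longestHeldMs = 0;
    } else {
      suppressedCount++;
    }
  }
}
{code}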

> Throttle FsNameSystem lock warnings
> ---
>
> Key: HDFS-10713
> URL: https://issues.apache.org/jira/browse/HDFS-10713
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: logging, namenode
>Reporter: Arpit Agarwal
>Assignee: Hanisha Koneru
> Attachments: HDFS-10713.000.patch, HDFS-10713.001.patch, 
> HDFS-10713.002.patch
>
>
> The NameNode logs a message if the FSNamesystem write lock is held by a 
> thread for over 1 second. These messages can be throttled to at most one 
> per x minutes to avoid potentially filling up NN logs. We can also log the 
> number of suppressed notices since the last log message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459742#comment-15459742
 ] 

Hadoop QA commented on HDFS-10835:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m 
24s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
13s{color} | {color:green} The patch generated 0 new + 74 unchanged - 1 fixed = 
74 total (was 75) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10835 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826934/HDFS-10835.001.patch |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 86f9b74a9f8e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f6ea9be |
| shellcheck | v0.4.4 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16622/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16622/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Typo in httpfs.sh hadoop_usage
> --
>
> Key: HDFS-10835
> URL: https://issues.apache.org/jira/browse/HDFS-10835
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Attachments: HDFS-10835.001.patch
>
>
> Typo in method {{hadoop_usage}} of {{httpfs.sh}}. The {{kms}} words should be 
> replaced with {{httpfs}}:
> {noformat}
> function hadoop_usage
> {
>   hadoop_add_subcommand "run" "Start kms in the current window"
>   hadoop_add_subcommand "run -security" "Start in the current window with 
> security manager"
>   hadoop_add_subcommand "start" "Start kms in a separate window"
>   hadoop_add_subcommand "start -security" "Start in a separate window with 
> security manager"
>   hadoop_add_subcommand "status" "Return the LSB compliant status"
>   hadoop_add_subcommand "stop" "Stop kms, waiting up to 5 seconds for the 
> process to end"
>   hadoop_add_subcommand "top n" "Stop kms, waiting up to n seconds for the 
> process to end"
>   hadoop_add_subcommand "stop -force" "Stop kms, wait up to 5 seconds and 
> then use kill -KILL if still running"
>   hadoop_add_subcommand "stop n -force" "Stop kms, wait up to n seconds and 
> then use kill -KILL if still running"
>   hadoop_generate_usage "${MYNAME}" false
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10833) Fix JSON errors in WebHDFS.md examples

2016-09-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459741#comment-15459741
 ] 

Hudson commented on HDFS-10833:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10393 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10393/])
HDFS-10833. Fix JSON errors in WebHDFS.md examples. (wang: rev 
cbd909ce2a5ac1da258f756fa1f93e84dd20b926)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md


> Fix JSON errors in WebHDFS.md examples
> --
>
> Key: HDFS-10833
> URL: https://issues.apache.org/jira/browse/HDFS-10833
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10833.001.patch
>
>
> Noticed a few JSON errors due to things like commas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10834) Add concat to libhdfs API

2016-09-02 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HDFS-10834:
-
Status: Patch Available  (was: Open)

> Add concat to libhdfs API
> -
>
> Key: HDFS-10834
> URL: https://issues.apache.org/jira/browse/HDFS-10834
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, libhdfs
>Reporter: Gary Helmling
> Attachments: HDFS-10834.001.patch
>
>
> libhdfs does not currently provide a way to call FileSystem.concat(). Let's 
> add a function for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10834) Add concat to libhdfs API

2016-09-02 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HDFS-10834:
-
Attachment: HDFS-10834.001.patch

Here is a patch from [~sunchensamurai] adding hdfsConcat to the libhdfs API and 
a test for it.

Can someone with proper access please assign the issue to him, so that he can 
make any further updates?
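
For context, the Java API the new binding delegates to is {{FileSystem#concat}}. A minimal sketch of the Java-side call (paths are illustrative, and concat is only implemented by HDFS):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch of the Java API an hdfsConcat binding would wrap.
// Paths are illustrative. concat() atomically moves the blocks of each
// source file onto the end of the target and removes the sources.
public class ConcatSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path target = new Path("/data/part-all");
    Path[] sources = { new Path("/data/part-1"), new Path("/data/part-2") };
    fs.concat(target, sources);
  }
}
{code}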

> Add concat to libhdfs API
> -
>
> Key: HDFS-10834
> URL: https://issues.apache.org/jira/browse/HDFS-10834
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, libhdfs
>Reporter: Gary Helmling
> Attachments: HDFS-10834.001.patch
>
>
> libhdfs does not currently provide a way to call FileSystem.concat(). Let's 
> add a function for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10830) FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-02 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459718#comment-15459718
 ] 

Manoj Govindassamy commented on HDFS-10830:
---


Sounds good to me. Thanks a lot [~xiaochen].



> FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when 
> vol being removed is in use
> 
>
> Key: HDFS-10830
> URL: https://issues.apache.org/jira/browse/HDFS-10830
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Arpit Agarwal
> Attachments: HDFS-10830.01.patch
>
>
> {{FsDatasetImpl#removeVolumes()}} operation crashes abruptly with 
> IllegalMonitorStateException whenever the volume being removed is in use 
> concurrently.
> Looks like {{removeVolumes()}} is waiting on a monitor object "this" (that is 
> FsDatasetImpl) which it has never locked, leading to  
> IllegalMonitorStateException. This monitor wait happens only when the volume 
> being removed is in use (reference count > 0). The thread performing the 
> remove-volume operation thus crashes abruptly, and block invalidations for 
> the removed volumes are skipped entirely.
> {code:title=FsDatasetImpl.java|borderStyle=solid}
> @Override
> public void removeVolumes(Set volumesToRemove, boolean clearFailure) {
> ..
> ..
> try (AutoCloseableLock lock = datasetLock.acquire()) {   <== LOCK acquire 
> datasetLock
> for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
>   .. .. ..
>   asyncDiskService.removeVolume(sd.getCurrentDir()); <== volume SD1 remove
>   volumes.removeVolume(absRoot, clearFailure);
>   volumes.waitVolumeRemoved(5000, this); <== WAIT on "this" 
> ?? But, we haven't locked it yet.
>  This will cause 
> IllegalMonitorStateException
>  and crash 
> getBlockReports()/FBR thread!
>   for (String bpid : volumeMap.getBlockPoolList()) {
> List<ReplicaInfo> blocks = new ArrayList<>();
> for (Iterator<ReplicaInfo> it = volumeMap.replicas(bpid).iterator();
>  it.hasNext(); ) {
> .. .. .. 
> it.remove(); <== volumeMap removal
>   }
> blkToInvalidate.put(bpid, blocks);
>   }
>  .. ..
> }<== LOCK release 
> datasetLock   
> // Call this outside the lock.
> for (Map.Entry<String, List<ReplicaInfo>> entry :
> blkToInvalidate.entrySet()) {
>  ..
>  for (ReplicaInfo block : blocks) {
>   invalidate(bpid, block);   <== Notify NN of 
> Block removal
>  }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-02 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10835:
--
Status: Patch Available  (was: Open)

> Typo in httpfs.sh hadoop_usage
> --
>
> Key: HDFS-10835
> URL: https://issues.apache.org/jira/browse/HDFS-10835
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Attachments: HDFS-10835.001.patch
>
>
> Typo in method {{hadoop_usage}} of {{httpfs.sh}}. The {{kms}} words should be 
> replaced with {{httpfs}}:
> {noformat}
> function hadoop_usage
> {
>   hadoop_add_subcommand "run" "Start kms in the current window"
>   hadoop_add_subcommand "run -security" "Start in the current window with 
> security manager"
>   hadoop_add_subcommand "start" "Start kms in a separate window"
>   hadoop_add_subcommand "start -security" "Start in a separate window with 
> security manager"
>   hadoop_add_subcommand "status" "Return the LSB compliant status"
>   hadoop_add_subcommand "stop" "Stop kms, waiting up to 5 seconds for the 
> process to end"
>   hadoop_add_subcommand "top n" "Stop kms, waiting up to n seconds for the 
> process to end"
>   hadoop_add_subcommand "stop -force" "Stop kms, wait up to 5 seconds and 
> then use kill -KILL if still running"
>   hadoop_add_subcommand "stop n -force" "Stop kms, wait up to n seconds and 
> then use kill -KILL if still running"
>   hadoop_generate_usage "${MYNAME}" false
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-02 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10835:
--
Attachment: HDFS-10835.001.patch

Patch 001:
* Replace the word {{kms}} with {{httpfs}}
* No unit test for startup script

Manual test output:
{noformat}
$ sbin/httpfs.sh -h
Usage: httpfs.sh [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]

  OPTIONS is none or any of:

--config dir   Hadoop config directory
--debugturn on shell script debug mode
--help usage information

  SUBCOMMAND is one of:

run -security Start in the current window with security manager
run   Start httpfs in the current window
start -security   Start in a separate window with security manager
start Start httpfs in a separate window
statusReturn the LSB compliant status
stop -force   Stop httpfs, wait up to 5 seconds and then use kill -KILL if 
still running
stop n -force Stop httpfs, wait up to n seconds and then use kill -KILL if 
still running
stop  Stop httpfs, waiting up to 5 seconds for the process to end
top n Stop httpfs, waiting up to n seconds for the process to end

SUBCOMMAND may print help when invoked w/o parameters or with -h.
{noformat}

> Typo in httpfs.sh hadoop_usage
> --
>
> Key: HDFS-10835
> URL: https://issues.apache.org/jira/browse/HDFS-10835
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Attachments: HDFS-10835.001.patch
>
>
> Typo in method {{hadoop_usage}} of {{httpfs.sh}}. The {{kms}} words should be 
> replaced with {{httpfs}}:
> {noformat}
> function hadoop_usage
> {
>   hadoop_add_subcommand "run" "Start kms in the current window"
>   hadoop_add_subcommand "run -security" "Start in the current window with 
> security manager"
>   hadoop_add_subcommand "start" "Start kms in a separate window"
>   hadoop_add_subcommand "start -security" "Start in a separate window with 
> security manager"
>   hadoop_add_subcommand "status" "Return the LSB compliant status"
>   hadoop_add_subcommand "stop" "Stop kms, waiting up to 5 seconds for the 
> process to end"
>   hadoop_add_subcommand "top n" "Stop kms, waiting up to n seconds for the 
> process to end"
>   hadoop_add_subcommand "stop -force" "Stop kms, wait up to 5 seconds and 
> then use kill -KILL if still running"
>   hadoop_add_subcommand "stop n -force" "Stop kms, wait up to n seconds and 
> then use kill -KILL if still running"
>   hadoop_generate_usage "${MYNAME}" false
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10830) FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459708#comment-15459708
 ] 

Xiao Chen commented on HDFS-10830:
--

Thanks for working on this [~arpitagarwal] and [~manojg].
Just letting you know that I'm about to commit HDFS-9781, which has a 
workaround in unit test of this issue. Please update to trunk and remove that 
workaround in this patch. (I think it's good to decouple these 2 jiras, so the 
unit test workaround is fine).

> FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when 
> vol being removed is in use
> 
>
> Key: HDFS-10830
> URL: https://issues.apache.org/jira/browse/HDFS-10830
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Arpit Agarwal
> Attachments: HDFS-10830.01.patch
>
>
> {{FsDatasetImpl#removeVolumes()}} operation crashes abruptly with 
> IllegalMonitorStateException whenever the volume being removed is in use 
> concurrently.
> Looks like {{removeVolumes()}} is waiting on a monitor object "this" (that is 
> FsDatasetImpl) which it has never locked, leading to  
> IllegalMonitorStateException. This monitor wait happens only when the volume 
> being removed is in use (reference count > 0). The thread performing the 
> remove-volume operation thus crashes abruptly, and block invalidations for 
> the removed volumes are skipped entirely.
> {code:title=FsDatasetImpl.java|borderStyle=solid}
> @Override
> public void removeVolumes(Set volumesToRemove, boolean clearFailure) {
> ..
> ..
> try (AutoCloseableLock lock = datasetLock.acquire()) {   <== LOCK acquire 
> datasetLock
> for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
>   .. .. ..
>   asyncDiskService.removeVolume(sd.getCurrentDir()); <== volume SD1 remove
>   volumes.removeVolume(absRoot, clearFailure);
>   volumes.waitVolumeRemoved(5000, this); <== WAIT on "this" 
> ?? But, we haven't locked it yet.
>  This will cause 
> IllegalMonitorStateException
>  and crash 
> getBlockReports()/FBR thread!
>   for (String bpid : volumeMap.getBlockPoolList()) {
> List<ReplicaInfo> blocks = new ArrayList<>();
> for (Iterator<ReplicaInfo> it = volumeMap.replicas(bpid).iterator();
>  it.hasNext(); ) {
> .. .. .. 
> it.remove(); <== volumeMap removal
>   }
> blkToInvalidate.put(bpid, blocks);
>   }
>  .. ..
> }<== LOCK release 
> datasetLock   
> // Call this outside the lock.
> for (Map.Entry<String, List<ReplicaInfo>> entry :
> blkToInvalidate.entrySet()) {
>  ..
>  for (ReplicaInfo block : blocks) {
>   invalidate(bpid, block);   <== Notify NN of 
> Block removal
>  }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459700#comment-15459700
 ] 

Xiao Chen commented on HDFS-9781:
-

[~jojochuang] pinged me offline saying this looks fine to him too.
I'll commit this shortly and will comment on HDFS-10830 in the meantime.

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9781.002.patch, HDFS-9781.003.patch, 
> HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}.
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}
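
For reference, the assertion style the description suggests looks roughly like this (the wrapper method and the expected message text are illustrative, not the committed test):

{code:java}
import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.Assert;

// Sketch of the suggested pattern: fail if no exception is thrown, and
// verify the message of the one that is, instead of letting the test
// time out. The expected message text here is illustrative.
public class ExpectedExceptionSketch {
  static void assertGetBlockReportsFails(FsDatasetImpl dataset, String bpid) {
    try {
      dataset.getBlockReports(bpid);
      Assert.fail("expected getBlockReports to throw");
    } catch (Exception e) {
      GenericTestUtils.assertExceptionContains("Failed to move meta file", e);
    }
  }
}
{code}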



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10833) Fix JSON errors in WebHDFS.md examples

2016-09-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10833:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Thanks for reviewing, Xiao. Committed to trunk, branch-2, and branch-2.8.

> Fix JSON errors in WebHDFS.md examples
> --
>
> Key: HDFS-10833
> URL: https://issues.apache.org/jira/browse/HDFS-10833
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10833.001.patch
>
>
> Noticed a few JSON errors due to things like commas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10822) Log DataNodes in the write pipeline

2016-09-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10822:
---
Fix Version/s: (was: 3.0.0-beta1)
   3.0.0-alpha2

> Log DataNodes in the write pipeline
> ---
>
> Key: HDFS-10822
> URL: https://issues.apache.org/jira/browse/HDFS-10822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>  Labels: supportability
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-10822.001.patch
>
>
> I was trying to diagnose a slow HDFS flush, taking longer than 10 seconds, but 
> did not know which DNs were involved in the write pipeline. Of course, I could 
> search the NN log for the list of DNs, but that is not always possible or 
> convenient.
> Propose to add a DEBUG trace to DataStreamer#setPipeline to print the list of 
> DNs in the pipeline.
> BTW, we do print the list of DNs during pipeline recovery.
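
A minimal sketch of the proposed trace (the surrounding DataStreamer code is abbreviated and the log format is illustrative, not the committed patch):

{code:java}
import java.util.Arrays;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch of the proposed DEBUG trace: log the DNs whenever the write
// pipeline is set, guarded so the string is only built at DEBUG level.
public class PipelineTraceSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(PipelineTraceSketch.class);
  private DatanodeInfo[] nodes;

  void setPipeline(DatanodeInfo[] nodes) {
    this.nodes = nodes;
    if (LOG.isDebugEnabled()) {
      LOG.debug("Write pipeline: {}", Arrays.toString(nodes));
    }
  }
}
{code}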



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10813) DiskBalancer: Add the getNodeList method in Command

2016-09-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459670#comment-15459670
 ] 

Andrew Wang commented on HDFS-10813:


Gentle reminder to include the appropriate 3.0.0 fix version when committing to 
trunk, thanks!

> DiskBalancer: Add the getNodeList method in Command
> ---
>
> Key: HDFS-10813
> URL: https://issues.apache.org/jira/browse/HDFS-10813
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-10813.001.patch
>
>
> The method {{Command#getNodeList}} in DiskBalancer was added in HDFS-9545, 
> but it's never used. We can improve that in the following ways:
> 1. Change {{private}} to {{protected}} so that subclasses can use the 
> method in the future.
> 2. Reuse the method {{Command#getNodeList}} to construct a new method 
> like {{List getNodes(String listArg)}} (see the sketch below). This 
> method can be used for getting multiple nodes in the future. For example, if 
> we want to use {{hdfs diskbalancer -report -node}} or {{hdfs diskbalancer 
> -plan}} with multiple specified nodes, that method can be used; currently 
> these commands only support one node.
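
A rough sketch of point 2 (all names below are assumptions for illustration, not the committed code):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Rough sketch: split a comma-separated -node argument and resolve each
// entry through the existing single-node lookup. The Resolver interface
// stands in for the current getNodeList logic; all names are assumptions.
public class NodeListSketch {
  interface Resolver {
    String resolve(String name);
  }

  static List<String> getNodes(String listArg, Resolver resolver) {
    List<String> nodes = new ArrayList<>();
    if (listArg == null || listArg.isEmpty()) {
      return nodes;
    }
    for (String name : listArg.split(",")) {
      String node = resolver.resolve(name.trim());
      if (node != null) {
        nodes.add(node);
      }
    }
    return nodes;
  }
}
{code}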



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10813) DiskBalancer: Add the getNodeList method in Command

2016-09-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10813:
---
Fix Version/s: 3.0.0-alpha2

> DiskBalancer: Add the getNodeList method in Command
> ---
>
> Key: HDFS-10813
> URL: https://issues.apache.org/jira/browse/HDFS-10813
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-10813.001.patch
>
>
> The method {{Command#getNodeList}} in DiskBalancer was added in HDFS-9545, 
> but it's never used. We can improve that in the following ways:
> 1. Change {{private}} to {{protected}} so that subclasses can use the 
> method in the future.
> 2. Reuse the method {{Command#getNodeList}} to construct a new method 
> like {{List getNodes(String listArg)}}. This 
> method can be used for getting multiple nodes in the future. For example, if 
> we want to use {{hdfs diskbalancer -report -node}} or {{hdfs diskbalancer 
> -plan}} with multiple specified nodes, that method can be used; currently 
> these commands only support one node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9392) Admins support for maintenance state

2016-09-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9392:
--
Fix Version/s: 3.0.0-alpha2

> Admins support for maintenance state
> 
>
> Key: HDFS-9392
> URL: https://issues.apache.org/jira/browse/HDFS-9392
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-9392-2.patch, HDFS-9392-3.patch, HDFS-9392-4.patch, 
> HDFS-9392.patch
>
>
> This is to allow admins to put nodes into maintenance state with an optional 
> timeout value, as well as take nodes out of maintenance state. Likely we will 
> leverage what we come up with in https://issues.apache.org/jira/browse/HDFS-9005.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10830) FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-02 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459659#comment-15459659
 ] 

Arpit Agarwal commented on HDFS-10830:
--

[~manojg], you're right. findbugs caught it too. I'll post an updated patch 
later today. Thank you for the review!

> FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when 
> vol being removed is in use
> 
>
> Key: HDFS-10830
> URL: https://issues.apache.org/jira/browse/HDFS-10830
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Arpit Agarwal
> Attachments: HDFS-10830.01.patch
>
>
> {{FsDatasetImpl#removeVolumes()}} operation crashes abruptly with 
> IllegalMonitorStateException whenever the volume being removed is in use 
> concurrently.
> Looks like {{removeVolumes()}} is waiting on a monitor object "this" (that is 
> FsDatasetImpl) which it has never locked, leading to  
> IllegalMonitorStateException. This monitor wait happens only when the volume 
> being removed is in use (reference count > 0). The thread performing the 
> remove-volume operation thus crashes abruptly, and block invalidations for 
> the removed volumes are skipped entirely.
> {code:title=FsDatasetImpl.java|borderStyle=solid}
> @Override
> public void removeVolumes(Set volumesToRemove, boolean clearFailure) {
> ..
> ..
> try (AutoCloseableLock lock = datasetLock.acquire()) {   <== LOCK acquire 
> datasetLock
> for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
>   .. .. ..
>   asyncDiskService.removeVolume(sd.getCurrentDir()); <== volume SD1 remove
>   volumes.removeVolume(absRoot, clearFailure);
>   volumes.waitVolumeRemoved(5000, this); <== WAIT on "this" 
> ?? But, we haven't locked it yet.
>  This will cause 
> IllegalMonitorStateException
>  and crash 
> getBlockReports()/FBR thread!
>   for (String bpid : volumeMap.getBlockPoolList()) {
> List<ReplicaInfo> blocks = new ArrayList<>();
> for (Iterator<ReplicaInfo> it = volumeMap.replicas(bpid).iterator();
>  it.hasNext(); ) {
> .. .. .. 
> it.remove(); <== volumeMap removal
>   }
> blkToInvalidate.put(bpid, blocks);
>   }
>  .. ..
> }<== LOCK release 
> datasetLock   
> // Call this outside the lock.
> for (Map.Entry<String, List<ReplicaInfo>> entry :
> blkToInvalidate.entrySet()) {
>  ..
>  for (ReplicaInfo block : blocks) {
>   invalidate(bpid, block);   <== Notify NN of 
> Block removal
>  }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10836) libhdfs++: Add a NO_TOOLS and NO_EXAMPLES flag to cmake

2016-09-02 Thread Bob Hansen (JIRA)
Bob Hansen created HDFS-10836:
-

 Summary: libhdfs++: Add a NO_TOOLS and NO_EXAMPLES flag to cmake
 Key: HDFS-10836
 URL: https://issues.apache.org/jira/browse/HDFS-10836
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bob Hansen


In some cases, our consumers will want just the library and will not want to 
compile (and figure out linking for) the tools and examples. Let's add a 
CMake flag to turn those off.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459653#comment-15459653
 ] 

Xiao Chen commented on HDFS-9781:
-

Patch 3 LGTM, +1. Will wait to see if there are any comments from others.

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9781.002.patch, HDFS-9781.003.patch, 
> HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}.
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10834) Add concat to libhdfs API

2016-09-02 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HDFS-10834:
-
Component/s: hdfs

> Add concat to libhdfs API
> -
>
> Key: HDFS-10834
> URL: https://issues.apache.org/jira/browse/HDFS-10834
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, libhdfs
>Reporter: Gary Helmling
>
> libhdfs does not currently provide a way to call FileSystem.concat(). Let's 
> add a function for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10830) FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-02 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459633#comment-15459633
 ] 

Manoj Govindassamy edited comment on HDFS-10830 at 9/2/16 9:25 PM:
---

[~arpitagarwal],

{code}
  void waitVolumeRemoved(int sleepMillis, Condition condition) {
  .. .. ..
  try {
condition.wait(sleepMillis);  <==
  } catch (InterruptedException e) {
FsDatasetImpl.LOG.info("Thread interrupted when waiting for "
+ "volume reference to be released.");
Thread.currentThread().interrupt();
  }
{code}

# By calling the Object monitor method wait() on a {{Condition}} variable, the 
Condition is treated like a normal Object, and its intrinsic monitor is used 
instead of the Condition's wait-set instance (here {{datasetLockCondition}}). 
Copying below the note from the Java doc on Condition.

[Ref|https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/Condition.html]
{color:blue}
??Note that Condition instances are just normal objects and can themselves be 
used as the target in a synchronized statement, and can have their own monitor 
wait and notification methods invoked. Acquiring the monitor lock of a 
Condition instance, or using its monitor methods, has no specified relationship 
with acquiring the Lock associated with that Condition or the use of its 
waiting and signalling methods. It is recommended that to avoid confusion you 
never use Condition instances in this way, except perhaps within their own 
implementation.??
{color}

# So the use of condition.wait() here expects the object's intrinsic monitor to 
be locked via synchronized, and since it is not locked, we will again get 
IllegalMonitorStateException. Please let me know if my understanding is wrong.


was (Author: manojg):
[~arpitagarwal],

{quote}
  void waitVolumeRemoved(int sleepMillis, Condition condition) {
  .. .. ..
  try {
condition.wait(sleepMillis);  <==
  } catch (InterruptedException e) {
FsDatasetImpl.LOG.info("Thread interrupted when waiting for "
+ "volume reference to be released.");
Thread.currentThread().interrupt();
  }
{quote}

# By calling the Object monitor method wait() on a {{Condition}} variable, the 
Condition is treated like a normal Object, and its intrinsic monitor is used 
instead of the Condition's wait-set instance (here {{datasetLockCondition}}). 
Copying below the note from the Java doc on Condition.

[Ref|https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/Condition.html]
{color:blue}
??Note that Condition instances are just normal objects and can themselves be 
used as the target in a synchronized statement, and can have their own monitor 
wait and notification methods invoked. Acquiring the monitor lock of a 
Condition instance, or using its monitor methods, has no specified relationship 
with acquiring the Lock associated with that Condition or the use of its 
waiting and signalling methods. It is recommended that to avoid confusion you 
never use Condition instances in this way, except perhaps within their own 
implementation.??
{color}

# So the use of condition.wait() here expects the object's intrinsic monitor to 
be locked via synchronized, and since it is not locked, we will again get 
IllegalMonitorStateException. Please let me know if my understanding is wrong.

> FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when 
> vol being removed is in use
> 
>
> Key: HDFS-10830
> URL: https://issues.apache.org/jira/browse/HDFS-10830
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Arpit Agarwal
> Attachments: HDFS-10830.01.patch
>
>
> {{FsDatasetImpl#removeVolumes()}} operation crashes abruptly with 
> IllegalMonitorStateException whenever the volume being removed is in use 
> concurrently.
> Looks like {{removeVolumes()}} is waiting on a monitor object "this" (that is 
> FsDatasetImpl) which it has never locked, leading to  
> IllegalMonitorStateException. This monitor wait happens only when the volume 
> being removed is in use (reference count > 0). The thread performing the 
> remove-volume operation thus crashes abruptly, and block invalidations for 
> the removed volumes are skipped entirely.
> {code:title=FsDatasetImpl.java|borderStyle=solid}
> @Override
> public void removeVolumes(Set volumesToRemove, boolean clearFailure) {
> ..
> ..
> try (AutoCloseableLock lock = datasetLock.acquire()) {   <== LOCK acquire 
> datasetLock
> for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
>   .. .. ..
>   asyncDiskService.removeVolume(sd.getCurrentDir()); <== 

[jira] [Commented] (HDFS-10830) FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-02 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459633#comment-15459633
 ] 

Manoj Govindassamy commented on HDFS-10830:
---

[~arpitagarwal],

{quote}
  void waitVolumeRemoved(int sleepMillis, Condition condition) {
  .. .. ..
  try {
condition.wait(sleepMillis);  <==
  } catch (InterruptedException e) {
FsDatasetImpl.LOG.info("Thread interrupted when waiting for "
+ "volume reference to be released.");
Thread.currentThread().interrupt();
  }
{quote}

# By calling the Object monitor method wait() on a {{Condition}} variable, the 
Condition is treated like a normal Object, and its intrinsic monitor is used 
instead of the Condition's wait-set instance (here {{datasetLockCondition}}). 
Copying below the note from the Java doc on Condition.

[Ref|https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/Condition.html]
{color:blue}
??Note that Condition instances are just normal objects and can themselves be 
used as the target in a synchronized statement, and can have their own monitor 
wait and notification methods invoked. Acquiring the monitor lock of a 
Condition instance, or using its monitor methods, has no specified relationship 
with acquiring the Lock associated with that Condition or the use of its 
waiting and signalling methods. It is recommended that to avoid confusion you 
never use Condition instances in this way, except perhaps within their own 
implementation.??
{color}

# So the use of condition.wait() here expects the object's intrinsic monitor to 
be locked via synchronized, and since it is not locked, we will again get 
IllegalMonitorStateException (see the sketch below). Please let me know if my 
understanding is wrong.
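
To make the distinction concrete, here is a minimal sketch of the two patterns (lock and condition names are illustrative):

{code:java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch (names illustrative). Condition.await() must be called
// while holding the associated Lock and uses the Condition's wait-set.
// Object.wait() on the same instance targets its intrinsic monitor and
// throws IllegalMonitorStateException unless the caller also synchronizes
// on the Condition object itself, which bypasses the Lock entirely.
public class ConditionWaitSketch {
  private final ReentrantLock lock = new ReentrantLock();
  private final Condition volumeRemoved = lock.newCondition();

  void waitVolumeRemoved(long sleepMillis) throws InterruptedException {
    lock.lock();
    try {
      volumeRemoved.await(sleepMillis, TimeUnit.MILLISECONDS); // correct
    } finally {
      lock.unlock();
    }
    // Broken variant: volumeRemoved.wait(sleepMillis) would require
    // synchronized (volumeRemoved) and still would not interact with
    // the lock/volumeRemoved signalling at all.
  }
}
{code}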

> FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when 
> vol being removed is in use
> 
>
> Key: HDFS-10830
> URL: https://issues.apache.org/jira/browse/HDFS-10830
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Arpit Agarwal
> Attachments: HDFS-10830.01.patch
>
>
> {{FsDatasetImpl#removeVolumes()}} operation crashes abruptly with 
> IllegalMonitorStateException whenever the volume being removed is in use 
> concurrently.
> Looks like {{removeVolumes()}} is waiting on a monitor object "this" (that is 
> FsDatasetImpl) which it has never locked, leading to  
> IllegalMonitorStateException. This monitor wait happens only when the volume 
> being removed is in use (reference count > 0). The thread performing the 
> remove-volume operation thus crashes abruptly, and block invalidations for 
> the removed volumes are skipped entirely.
> {code:title=FsDatasetImpl.java|borderStyle=solid}
> @Override
> public void removeVolumes(Set volumesToRemove, boolean clearFailure) {
> ..
> ..
> try (AutoCloseableLock lock = datasetLock.acquire()) {   <== LOCK acquire 
> datasetLock
> for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
>   .. .. ..
>   asyncDiskService.removeVolume(sd.getCurrentDir()); <== volume SD1 remove
>   volumes.removeVolume(absRoot, clearFailure);
>   volumes.waitVolumeRemoved(5000, this); <== WAIT on "this" 
> ?? But, we haven't locked it yet.
>  This will cause 
> IllegalMonitorStateException
>  and crash 
> getBlockReports()/FBR thread!
>   for (String bpid : volumeMap.getBlockPoolList()) {
> List<ReplicaInfo> blocks = new ArrayList<>();
> for (Iterator<ReplicaInfo> it = volumeMap.replicas(bpid).iterator();
>  it.hasNext(); ) {
> .. .. .. 
> it.remove(); <== volumeMap removal
>   }
> blkToInvalidate.put(bpid, blocks);
>   }
>  .. ..
> }<== LOCK release 
> datasetLock   
> // Call this outside the lock.
> for (Map.Entry<String, List<ReplicaInfo>> entry :
> blkToInvalidate.entrySet()) {
>  ..
>  for (ReplicaInfo block : blocks) {
>   invalidate(bpid, block);   <== Notify NN of 
> Block removal
>  }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459622#comment-15459622
 ] 

Hadoop QA commented on HDFS-9781:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m 
24s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9781 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826921/HDFS-9781.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6286d41afc65 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 378f624 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16621/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16621/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9781.002.patch, HDFS-9781.003.patch, 
> HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws 

[jira] [Commented] (HDFS-10787) libhdfs++: hdfs_configuration and configuration_loader should be accessible from our public API

2016-09-02 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459611#comment-15459611
 ] 

Bob Hansen commented on HDFS-10787:
---

For the sake of simplicity, I think we should introduce a public facade for 
them that only exposes two functions: LoadConfigs() and 
LoadConfigsFromDirectory(const char * dir), each returning an Options object.

Do we have a compelling use case for more than that? Anything that wants to 
poke values in can poke them into the returned Options rather than the XML, I 
think.

> libhdfs++: hdfs_configuration and configuration_loader should be accessible 
> from our public API
> ---
>
> Key: HDFS-10787
> URL: https://issues.apache.org/jira/browse/HDFS-10787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>
> Currently, libhdfspp examples and tools all have this:
> #include "hdfspp/hdfspp.h"
> #include "common/hdfs_configuration.h"
> #include "common/configuration_loader.h"
> This is done in order to read configs and connect. We want  
> hdfs_configuration and configuration_loader to be accessible just by 
> including our hdfspp.h. One way to achieve that would be to create a builder 
> that includes the above libraries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-02 Thread John Zhuge (JIRA)
John Zhuge created HDFS-10835:
-

 Summary: Typo in httpfs.sh hadoop_usage
 Key: HDFS-10835
 URL: https://issues.apache.org/jira/browse/HDFS-10835
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: httpfs
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Trivial


Typo in method {{hadoop_usage}} of {{httpfs.sh}}. The {{kms}} words should be 
replaced with {{httpfs}}:
{noformat}
function hadoop_usage
{
  hadoop_add_subcommand "run" "Start kms in the current window"
  hadoop_add_subcommand "run -security" "Start in the current window with 
security manager"
  hadoop_add_subcommand "start" "Start kms in a separate window"
  hadoop_add_subcommand "start -security" "Start in a separate window with 
security manager"
  hadoop_add_subcommand "status" "Return the LSB compliant status"
  hadoop_add_subcommand "stop" "Stop kms, waiting up to 5 seconds for the 
process to end"
  hadoop_add_subcommand "top n" "Stop kms, waiting up to n seconds for the 
process to end"
  hadoop_add_subcommand "stop -force" "Stop kms, wait up to 5 seconds and then 
use kill -KILL if still running"
  hadoop_add_subcommand "stop n -force" "Stop kms, wait up to n seconds and 
then use kill -KILL if still running"
  hadoop_generate_usage "${MYNAME}" false
}
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10832) HttpFS does not use the ephemeral ACL bit introduced in HDFS-6326

2016-09-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459593#comment-15459593
 ] 

Xiao Chen commented on HDFS-10832:
--

Good find Andrew! Patch looks great too.

Some comments:
- HttpFSFileSystem: I think we can just pass in the json object to 
{{toFsPermission}} to make it more encapsulated (see the sketch below).
- FSOperations: We can drop the explicit type arguments in map initialization.
- Test: maybe we can add a test case to cover encbit too?
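
As a rough illustration of the first point (the method shape below is hypothetical, not the committed patch):

{code:java}
import org.apache.hadoop.fs.permission.FsPermission;
import org.json.simple.JSONObject;

// Hypothetical shape for the HttpFSFileSystem comment above: pass the whole
// JSON status object so the octal permission string and the ephemeral
// aclBit/encBit flags are parsed in one place.
final class PermissionParseSketch {
  static FsPermission toFsPermission(JSONObject json) {
    short perm = Short.parseShort((String) json.get("permission"), 8);
    Boolean aclBit = (Boolean) json.get("aclBit"); // absent when ACLs are off
    Boolean encBit = (Boolean) json.get("encBit"); // absent when unencrypted
    // The real fix would return a permission object that carries these
    // ephemeral bits; plain FsPermission is used here for brevity.
    return new FsPermission(perm);
  }
}
{code}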


> HttpFS does not use the ephemeral ACL bit introduced in HDFS-6326
> -
>
> Key: HDFS-10832
> URL: https://issues.apache.org/jira/browse/HDFS-10832
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HDFS-10823.001.patch
>
>
> HDFS-6326 introduced an ephemeral ACL bit in FSPermission to avoid doing 
> extra getAclStatus calls during listStatus.
> Parsing this extra bit was not carried over to HttpFS. Currently, it tries to 
> detect ACLs being disabled by catching exceptions (somewhat brittle). When 
> ACLs are on, it will do a getAclStatus per FileStatus object. This could have 
> severe performance implications.
> Snippet from FSOperations:
> {code}
>   /*
>* For each FileStatus, attempt to acquire an AclStatus.  If the
>* getAclStatus throws an exception, we assume that ACLs are turned
>* off entirely and abandon the attempt.
>*/
>   boolean useAcls = true;   // Assume ACLs work until proven otherwise
>   for (int i = 0; i < fileStatuses.length; i++) {
> if (useAcls) {
>   try {
> aclStatus = fs.getAclStatus(fileStatuses[i].getPath());
>   } catch (AclException e) {
> /* Almost certainly due to an "ACLs not enabled" exception */
> aclStatus = null;
> useAcls = false;
>   } catch (UnsupportedOperationException e) {
> /* Ditto above - this is the case for a local file system */
> aclStatus = null;
> useAcls = false;
>   }
> }
> statusPairs[i] = new StatusPair(fileStatuses[i], aclStatus);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10475) Adding metrics for long FSNamesystem read and write locks

2016-09-02 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459579#comment-15459579
 ] 

Zhe Zhang commented on HDFS-10475:
--

Thanks for confirming this, Xiaoyu.

[~xkrogen] has started some work in this direction. Now we have INFO-level 
logging for write locks held for over 1 second and read locks held for over 5 
seconds (both configurable). We also have a metric for the lock queue length.

I think we can use this JIRA to discuss what other metrics to add
# A complete op -> aggregate lock time map?
# Top _n_ types of RPC calls with longest read / write lock?
# Top _n_ paths leading to longest aggregate read / write lock?

A similar idea was [mentioned | 
https://issues.apache.org/jira/browse/HDFS-10713?focusedCommentId=15453607=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15453607]
 under HDFS-10713. So pinging [~jingzhao] [~liuml07] [~arpitagarwal] and 
[~hanishakoneru] for opinions.
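
For illustration, the hold-time logging described above looks roughly like this (names and the threshold are illustrative, not the actual FSNamesystem code):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Rough illustration of the existing hold-time logging: record the acquire
// time and log when the write lock was held past a configured threshold.
public class LockTimingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(LockTimingSketch.class);
  private final long writeLockWarnMs = 1000; // configurable in the proposal

  private long writeLockAcquiredAtNanos;

  void writeLock() {
    // ... acquire the underlying write lock ...
    writeLockAcquiredAtNanos = System.nanoTime();
  }

  void writeUnlock(String op) {
    // ... release the underlying write lock ...
    long heldMs = (System.nanoTime() - writeLockAcquiredAtNanos) / 1_000_000;
    if (heldMs >= writeLockWarnMs) {
      LOG.info("FSNamesystem write lock held for {} ms by {}", heldMs, op);
    }
  }
}
{code}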

> Adding metrics for long FSNamesystem read and write locks
> -
>
> Key: HDFS-10475
> URL: https://issues.apache.org/jira/browse/HDFS-10475
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Erik Krogen
>
> This is a follow-up to the comments on HADOOP-12916 and 
> [here|https://issues.apache.org/jira/browse/HDFS-9924?focusedCommentId=15310837=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15310837]: 
> add more metrics and WARN/DEBUG logs for long FSD/FSN locking operations on 
> the namenode, similar to the slow write/network WARN logs/metrics we have on 
> the datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10475) Adding metrics for long FSNamesystem read and write locks

2016-09-02 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10475:
-
Summary: Adding metrics for long FSNamesystem read and write locks  (was: 
Adding metrics for long FSD read and write locks)

> Adding metrics for long FSNamesystem read and write locks
> -
>
> Key: HDFS-10475
> URL: https://issues.apache.org/jira/browse/HDFS-10475
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Erik Krogen
>
> This is a follow-up to the comments on HADOOP-12916 and 
> [here|https://issues.apache.org/jira/browse/HDFS-9924?focusedCommentId=15310837=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15310837]: 
> add more metrics and WARN/DEBUG logs for long FSD/FSN locking operations on 
> the namenode, similar to the slow write/network WARN logs/metrics we have on 
> the datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10684) WebHDFS DataNode calls fail without parameter createparent

2016-09-02 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459574#comment-15459574
 ] 

John Zhuge commented on HDFS-10684:
---

Thanks [~andrew.wang] for the comment. I am looking into unit testing to ensure 
compatibility.

> WebHDFS DataNode calls fail without parameter createparent
> --
>
> Key: HDFS-10684
> URL: https://issues.apache.org/jira/browse/HDFS-10684
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Samuel Low
>Assignee: John Zhuge
>Priority: Blocker
>  Labels: compatibility, webhdfs
> Attachments: HDFS-10684.001-branch-2.patch
>
>
> Optional boolean parameters that are not provided in the URL cause the 
> WebHDFS create file command to fail.
> curl -i -X PUT 
> "http://hadoop-primarynamenode:50070/webhdfs/v1/tmp/test1234?op=CREATE=false;
> Response:
> HTTP/1.1 307 TEMPORARY_REDIRECT
> Cache-Control: no-cache
> Expires: Fri, 15 Jul 2016 04:10:13 GMT
> Date: Fri, 15 Jul 2016 04:10:13 GMT
> Pragma: no-cache
> Expires: Fri, 15 Jul 2016 04:10:13 GMT
> Date: Fri, 15 Jul 2016 04:10:13 GMT
> Pragma: no-cache
> Content-Type: application/octet-stream
> Location: 
> http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE=hadoop-primarynamenode:8020=false
> Content-Length: 0
> Server: Jetty(6.1.26)
> Following the redirect:
> curl -i -X PUT -T MYFILE 
> "http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE=hadoop-primarynamenode:8020=false;
> Response:
> HTTP/1.1 100 Continue
> HTTP/1.1 400 Bad Request
> Content-Type: application/json; charset=utf-8
> Content-Length: 162
> Connection: close
> 
> {"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"Failed
>  to parse \"null\" to Boolean."}}
> The problem can be circumvented by providing both "createparent" and 
> "overwrite" parameters.
> However, this is not possible when I have no control over the WebHDFS calls, 
> e.g. Ambari and Hue have errors due to this.
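For illustration, a hedged sketch of the defaulting behavior the DataNode side 
appears to be missing; the helper and its name are hypothetical, not the actual 
WebHDFS parameter-parsing code:

{code}
class BooleanParamSketch {
  // Hypothetical helper: an absent optional boolean parameter should fall
  // back to its documented default instead of failing with
  // 'Failed to parse "null" to Boolean'.
  static boolean parseBooleanParam(String rawValue, boolean defaultValue) {
    if (rawValue == null || rawValue.isEmpty() || "null".equals(rawValue)) {
      return defaultValue;
    }
    if (!"true".equalsIgnoreCase(rawValue)
        && !"false".equalsIgnoreCase(rawValue)) {
      throw new IllegalArgumentException(
          "Failed to parse \"" + rawValue + "\" to Boolean.");
    }
    return Boolean.parseBoolean(rawValue);
  }
}
{code}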



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10475) Adding metrics for long FSD read and write locks

2016-09-02 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10475:
-
Summary: Adding metrics for long FSD read and write locks  (was: Adding 
metrics for long FSD lock)

> Adding metrics for long FSD read and write locks
> 
>
> Key: HDFS-10475
> URL: https://issues.apache.org/jira/browse/HDFS-10475
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Erik Krogen
>
> This is a follow-up to the comments on HADOOP-12916 and 
> [here|https://issues.apache.org/jira/browse/HDFS-9924?focusedCommentId=15310837&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15310837]: 
> add more metrics and WARN/DEBUG logs for long FSD/FSN locking operations on the 
> namenode, similar to what we have for slow write/network WARNs/metrics on the 
> datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10475) Adding metrics for long FSD lock

2016-09-02 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10475:
-
Assignee: Erik Krogen  (was: Xiaoyu Yao)

> Adding metrics for long FSD lock
> 
>
> Key: HDFS-10475
> URL: https://issues.apache.org/jira/browse/HDFS-10475
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Erik Krogen
>
> This is a follow-up to the comments on HADOOP-12916 and 
> [here|https://issues.apache.org/jira/browse/HDFS-9924?focusedCommentId=15310837&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15310837]: 
> add more metrics and WARN/DEBUG logs for long FSD/FSN locking operations on the 
> namenode, similar to what we have for slow write/network WARNs/metrics on the 
> datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-9781:
-
Attachment: HDFS-9781.003.patch

Attaching v003 patch with review comments incorporated.

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9781.002.patch, HDFS-9781.003.patch, 
> HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}
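For reference, a minimal sketch of the suggested assertion pattern; the 
operation under test and the expected message text are placeholders:

{code}
import static org.junit.Assert.fail;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.Test;

public class ExpectedExceptionSketch {
  @Test
  public void testExpectedException() throws Exception {
    try {
      doOperationUnderTest();  // placeholder for the call expected to throw
      fail("Expected an exception");
    } catch (Exception e) {
      // Placeholder text; the real test would assert the actual error message.
      GenericTestUtils.assertExceptionContains("Failed to move meta file", e);
    }
  }

  private void doOperationUnderTest() throws Exception {
    // Stand-in for the code path that is expected to throw.
    throw new java.io.IOException("Failed to move meta file for ...");
  }
}
{code}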



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10450) libhdfs++: Implement Cyrus SASL implementation in sasl_engine.cc

2016-09-02 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10450:
--
Attachment: HDFS-10450.HDFS-8707.000.patch

> libhdfs++: Implement Cyrus SASL implementation in sasl_engine.cc
> 
>
> Key: HDFS-10450
> URL: https://issues.apache.org/jira/browse/HDFS-10450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
> Attachments: HDFS-10450.HDFS-8707.000.patch
>
>
> The current sasl_engine implementation was proven out using GSASL, which does 
> not have an ASF-approved license.  It included a framework to use Cyrus SASL 
> (libsasl2.so) instead; we should complete that implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10705) libhdfs++: FileSystem should have a convenience no-args ctor

2016-09-02 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459400#comment-15459400
 ] 

Bob Hansen commented on HDFS-10705:
---

+1

> libhdfs++: FileSystem should have a convenience no-args ctor
> 
>
> Key: HDFS-10705
> URL: https://issues.apache.org/jira/browse/HDFS-10705
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: James Clampffer
> Attachments: HDFS-10705.HDFS-8707.000.patch, 
> HDFS-10705.HDFS-8707.001.patch
>
>
> Our examples demonstrate that the common use case is "use default options, 
> default username, and default IOService."  Let's make a FileSystem::New() 
> that helps users with that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10705) libhdfs++: FileSystem should have a convenience no-args ctor

2016-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459291#comment-15459291
 ] 

Hadoop QA commented on HDFS-10705:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 2s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
9s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
5s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
59s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
3s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
46s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Issue | HDFS-10705 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826897/HDFS-10705.HDFS-8707.001.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux c537f017c789 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 06316b7 |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111 |
| JDK v1.7.0_111  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16620/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16620/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: FileSystem should have a convenience no-args ctor
> 
>
> Key: HDFS-10705
> URL: https://issues.apache.org/jira/browse/HDFS-10705
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: James Clampffer
> 

[jira] [Commented] (HDFS-10833) Fix JSON errors in WebHDFS.md examples

2016-09-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459277#comment-15459277
 ] 

Xiao Chen commented on HDFS-10833:
--

+1.
Thanks for taking care of this, Andrew.

> Fix JSON errors in WebHDFS.md examples
> --
>
> Key: HDFS-10833
> URL: https://issues.apache.org/jira/browse/HDFS-10833
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Attachments: HDFS-10833.001.patch
>
>
> Noticed a few JSON errors due to things like commas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator

2016-09-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459264#comment-15459264
 ] 

Andrew Wang commented on HDFS-10823:


I asked Tucu about this offline, and he told me it's because HttpFS was 
designed to front any arbitrary FileSystem (while exposing the WebHDFS API). I 
haven't heard of anyone using it except as an HDFS gateway though, so I think 
this is an uncommon use case. There's also an API impedance mismatch, since 
WebHDFS was designed for REST access to HDFS, not as a REST API for generic 
FileSystem access.

But without a large overhaul of the HttpFS codebase, I don't think it's 
something we're going to address here. I'll try the subclassing hack to at 
least make the new FileSystem method protected rather than public. Better 
alternatives welcomed.

> Implement HttpFSFileSystem#listStatusIterator
> -
>
> Key: HDFS-10823
> URL: https://issues.apache.org/jira/browse/HDFS-10823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>
> Let's expose the same functionality added in HDFS-10784 for WebHDFS in HttpFS 
> too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10822) Log DataNodes in the write pipeline

2016-09-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459231#comment-15459231
 ] 

Hudson commented on HDFS-10822:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10391/])
HDFS-10822. Log DataNodes in the write pipeline. John Zhuge via Lei Xu (lei: 
rev 5a8c5064d1a1d596b1f5c385299a86ec6ab9ad1e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java


> Log DataNodes in the write pipeline
> ---
>
> Key: HDFS-10822
> URL: https://issues.apache.org/jira/browse/HDFS-10822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>  Labels: supportability
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-10822.001.patch
>
>
> While trying to diagnose a slow HDFS flush, taking longer than 10 seconds, I 
> did not know which DNs were involved in the write pipeline. Of course, I could 
> search the NN log for the list of DNs, but that is not always possible or 
> convenient.
> Propose to add a DEBUG trace to DataStreamer#setPipeline to print the list of 
> DNs in the pipeline.
> BTW, we do print the list of DNs during pipeline recovery.
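For illustration, a minimal sketch of the proposed trace; the class and 
parameter types are hypothetical stand-ins for DataStreamer#setPipeline:

{code}
import java.util.Arrays;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class PipelineTraceSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(PipelineTraceSketch.class);

  // Hypothetical stand-in: log the DNs chosen for the write pipeline at
  // DEBUG so slow flushes can be correlated with specific nodes.
  void setPipeline(String block, String[] datanodes) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Write pipeline for block {}: {}", block,
          Arrays.toString(datanodes));
    }
  }
}
{code}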



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10822) Log DataNodes in the write pipeline

2016-09-02 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459222#comment-15459222
 ] 

John Zhuge commented on HDFS-10822:
---

Thanks [~eddyxu] for the review and commit.

> Log DataNodes in the write pipeline
> ---
>
> Key: HDFS-10822
> URL: https://issues.apache.org/jira/browse/HDFS-10822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>  Labels: supportability
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-10822.001.patch
>
>
> While trying to diagnose a slow HDFS flush, taking longer than 10 seconds, I 
> did not know which DNs were involved in the write pipeline. Of course, I could 
> search the NN log for the list of DNs, but that is not always possible or 
> convenient.
> Propose to add a DEBUG trace to DataStreamer#setPipeline to print the list of 
> DNs in the pipeline.
> BTW, we do print the list of DNs during pipeline recovery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10822) Log DataNodes in the write pipeline

2016-09-02 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-10822:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

> Log DataNodes in the write pipeline
> ---
>
> Key: HDFS-10822
> URL: https://issues.apache.org/jira/browse/HDFS-10822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>  Labels: supportability
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-10822.001.patch
>
>
> While trying to diagnose a slow HDFS flush, taking longer than 10 seconds, I 
> did not know which DNs were involved in the write pipeline. Of course, I could 
> search the NN log for the list of DNs, but that is not always possible or 
> convenient.
> Propose to add a DEBUG trace to DataStreamer#setPipeline to print the list of 
> DNs in the pipeline.
> BTW, we do print the list of DNs during pipeline recovery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10822) Log DataNodes in the write pipeline

2016-09-02 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459198#comment-15459198
 ] 

Lei (Eddy) Xu commented on HDFS-10822:
--

+1.  Thanks for adding more debug information, [~jzhuge].

This patch does not include a test because it only adds debug information.

Committed to trunk and branch-2.

> Log DataNodes in the write pipeline
> ---
>
> Key: HDFS-10822
> URL: https://issues.apache.org/jira/browse/HDFS-10822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>  Labels: supportability
> Attachments: HDFS-10822.001.patch
>
>
> While trying to diagnose a slow HDFS flush, taking longer than 10 seconds, I 
> did not know which DNs were involved in the write pipeline. Of course, I could 
> search the NN log for the list of DNs, but that is not always possible or 
> convenient.
> Propose to add a DEBUG trace to DataStreamer#setPipeline to print the list of 
> DNs in the pipeline.
> BTW, we do print the list of DNs during pipeline recovery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10789) Route webhdfs through the RPC call queue

2016-09-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459176#comment-15459176
 ] 

Kihwal Lee commented on HDFS-10789:
---

What happens when the call queue is full and backoff is on? What does the http 
client get, and what does {{WebHdfsFileSystem}} do in terms of retry? Do the 
connections go away, or can they still pile up when the webhdfs load is high?

> Route webhdfs through the RPC call queue
> 
>
> Key: HDFS-10789
> URL: https://issues.apache.org/jira/browse/HDFS-10789
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc, webhdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10789.patch
>
>
> Webhdfs is extremely expensive under load and is not subject to the QoS 
> benefits of the RPC call queue.  HADOOP-13537 provides the basis for routing 
> webhdfs through the call queue to provide unified QoS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10705) libhdfs++: FileSystem should have a convenience no-args ctor

2016-09-02 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-10705:
---
Attachment: HDFS-10705.HDFS-8707.001.patch

Thanks for the review [~bobhansen].  Updated the comment to match your 
suggestion; I agree that that's much clearer for the external API.

New patch attached.

> libhdfs++: FileSystem should have a convenience no-args ctor
> 
>
> Key: HDFS-10705
> URL: https://issues.apache.org/jira/browse/HDFS-10705
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: James Clampffer
> Attachments: HDFS-10705.HDFS-8707.000.patch, 
> HDFS-10705.HDFS-8707.001.patch
>
>
> Our examples demonstrate that the common use case is "use default options, 
> default username, and default IOService."  Let's make a FileSystem::New() 
> that helps users with that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10772) Reduce byte/string conversions for get listing

2016-09-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459119#comment-15459119
 ] 

Kihwal Lee commented on HDFS-10772:
---

Found its way to branch-2.8.

> Reduce byte/string conversions for get listing
> --
>
> Key: HDFS-10772
> URL: https://issues.apache.org/jira/browse/HDFS-10772
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10772.patch
>
>
> {{FSDirectory.getListingInt}} does a byte/string conversion for the byte[] 
> startAfter just to determine if it should be resolved as an inode path.  This 
> is not the common case but rather for NFS support, so it should be avoided.  
> When the resolution is necessary, the conversions may be reduced.
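For illustration, a hedged sketch of the idea with a hypothetical helper: an 
inode path must begin with '/', so the common case (a plain file name carried 
over from the previous listing batch) can skip the conversion entirely:

{code}
class StartAfterSketch {
  // Hypothetical helper: only fall back to the byte[] -> String conversion
  // and full inode-path resolution when startAfter can actually be a path.
  static boolean couldBeInodePath(byte[] startAfter) {
    return startAfter.length > 0 && startAfter[0] == '/';
  }
}
{code}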



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10772) Reduce byte/string conversions for get listing

2016-09-02 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10772:
--
Fix Version/s: (was: 2.9.0)
   2.8.0

> Reduce byte/string conversions for get listing
> --
>
> Key: HDFS-10772
> URL: https://issues.apache.org/jira/browse/HDFS-10772
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10772.patch
>
>
> {{FSDirectory.getListingInt}} does a byte/string conversion for the byte[] 
> startAfter just to determine if it should be resolved as an inode path.  This 
> is not the common case but rather for NFS support, so it should be avoided.  
> When the resolution is necessary, the conversions may be reduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10768) Optimize mkdir ops

2016-09-02 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10768:
--
Fix Version/s: (was: 2.9.0)
   2.8.0

> Optimize mkdir ops
> --
>
> Key: HDFS-10768
> URL: https://issues.apache.org/jira/browse/HDFS-10768
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10768.1.patch, HDFS-10768.patch
>
>
> Directory creation causes excessive object allocation: ex. an immutable list 
> builder, containing the string of components converted from the IIP's 
> byte[]s, sublist views of the string list, iterable, followed by string to 
> byte[] conversion.  This can all be eliminated by accessing the component's 
> byte[] in the IIP.
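For illustration, a hedged sketch (hypothetical helper) of the kind of 
allocation-free check this enables, comparing a path component directly as 
bytes instead of round-tripping through String:

{code}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

class ComponentCompareSketch {
  private static final byte[] DOT_SNAPSHOT =
      ".snapshot".getBytes(StandardCharsets.UTF_8);

  // Hypothetical helper: compare an IIP component against a known name with
  // no byte[] -> String -> byte[] conversions and no intermediate objects.
  static boolean isDotSnapshot(byte[] component) {
    return Arrays.equals(component, DOT_SNAPSHOT);
  }
}
{code}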



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10768) Optimize mkdir ops

2016-09-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459109#comment-15459109
 ] 

Kihwal Lee commented on HDFS-10768:
---

Picked to 2.8.

> Optimize mkdir ops
> --
>
> Key: HDFS-10768
> URL: https://issues.apache.org/jira/browse/HDFS-10768
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-10768.1.patch, HDFS-10768.patch
>
>
> Directory creation causes excessive object allocation: ex. an immutable list 
> builder, containing the string of components converted from the IIP's 
> byte[]s, sublist views of the string list, iterable, followed by string to 
> byte[] conversion.  This can all be eliminated by accessing the component's 
> byte[] in the IIP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10762) Pass IIP for file status related methods

2016-09-02 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10762:
--
Fix Version/s: (was: 2.9.0)
   2.8.0

> Pass IIP for file status related methods
> 
>
> Key: HDFS-10762
> URL: https://issues.apache.org/jira/browse/HDFS-10762
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10762.1.patch, HDFS-10762.patch
>
>
> The frequently called file status methods will not require path re-resolves 
> if the IIP is passed down the call stack.  The code can be simplified further 
> if the IIP tracks if the original path was a reserved raw path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10762) Pass IIP for file status related methods

2016-09-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459093#comment-15459093
 ] 

Kihwal Lee commented on HDFS-10762:
---

Committed this to branch-2.8.

> Pass IIP for file status related methods
> 
>
> Key: HDFS-10762
> URL: https://issues.apache.org/jira/browse/HDFS-10762
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-10762.1.patch, HDFS-10762.patch
>
>
> The frequently called file status methods will not require path re-resolves 
> if the IIP is passed down the call stack.  The code can be simplified further 
> if the IIP tracks if the original path was a reserved raw path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10762) Pass IIP for file status related methods

2016-09-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459087#comment-15459087
 ] 

Kihwal Lee commented on HDFS-10762:
---

Cherry-picked HDFS-9621 to branch-2.8, which moves the target into the IIP.

> Pass IIP for file status related methods
> 
>
> Key: HDFS-10762
> URL: https://issues.apache.org/jira/browse/HDFS-10762
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-10762.1.patch, HDFS-10762.patch
>
>
> The frequently called file status methods will not require path re-resolves 
> if the IIP is passed down the call stack.  The code can be simplified further 
> if the IIP tracks if the original path was a reserved raw path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9621) getListing wrongly associates Erasure Coding policy to pre-existing replicated files under an EC directory

2016-09-02 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-9621:
-
Fix Version/s: (was: 2.9.0)
   2.8.0

> getListing wrongly associates Erasure Coding policy to pre-existing 
> replicated files under an EC directory  
> 
>
> Key: HDFS-9621
> URL: https://issues.apache.org/jira/browse/HDFS-9621
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Sushmitha Sreenivasan
>Assignee: Jing Zhao
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-9621.000.patch, HDFS-9621.001.patch, 
> HDFS-9621.002.branch-2.patch, HDFS-9621.002.patch
>
>
> This is reported by [~ssreenivasan]:
> If we set an Erasure Coding policy on a directory that contains some files 
> with replicated blocks, those files will later be reported as EC files when 
> listing the directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9621) getListing wrongly associates Erasure Coding policy to pre-existing replicated files under an EC directory

2016-09-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459084#comment-15459084
 ] 

Kihwal Lee commented on HDFS-9621:
--

Picked to branch-2.8. Although it does not have the same bug, this is a good 
improvement and makes merging easier.

> getListing wrongly associates Erasure Coding policy to pre-existing 
> replicated files under an EC directory  
> 
>
> Key: HDFS-9621
> URL: https://issues.apache.org/jira/browse/HDFS-9621
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Sushmitha Sreenivasan
>Assignee: Jing Zhao
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-9621.000.patch, HDFS-9621.001.patch, 
> HDFS-9621.002.branch-2.patch, HDFS-9621.002.patch
>
>
> This is reported by [~ssreenivasan]:
> If we set an Erasure Coding policy on a directory that contains some files 
> with replicated blocks, those files will later be reported as EC files when 
> listing the directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8901) Use ByteBuffer in striping positional read

2016-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459076#comment-15459076
 ] 

Hadoop QA commented on HDFS-8901:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 36s{color} | {color:orange} root: The patch generated 1 new + 317 unchanged 
- 18 fixed = 318 total (was 335) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
47s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-8901 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826857/HDFS-8901-v19.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 03a151f01ca7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 05f5c0f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16619/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 

[jira] [Commented] (HDFS-8901) Use ByteBuffer in striping positional read

2016-09-02 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459053#comment-15459053
 ] 

Zhe Zhang commented on HDFS-8901:
-

Thanks Sammi. I think all my previous comments have been addressed. But I see 
some issues / questions in the changes from v17 to v18/19:
# Why do we need to flip? In the previous code, {{arraycopy}} just starts 
copying from the current position of {{result}}.
{code}
-System.arraycopy(result.array(), result.position(), buf, offset,
-len);
+result.flip();
{code}
# Could you also explain why {{TestPread}} and {{TestSnapshotFileLength}} 
failed for v17 and v18 patches respectively?
# Need to be removed:
{code}
+  //buffer.position(buffer.position() + bytesToRead);
{code}

Thanks for adding the ByteBuffer version of readAll. Due to the size of the 
current patch, can we create a separate JIRA just to add this API? It will go 
through quickly and we can focus on the main logic on this JIRA.

Thanks for the great effort here.
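On the flip question above, for reference, a standalone sketch of the 
ByteBuffer semantics involved (hypothetical buffer contents): after relative 
writes, {{flip()}} sets the limit to the bytes written and resets the position 
to 0, preparing the buffer for relative reads:

{code}
import java.nio.ByteBuffer;

public class FlipDemo {
  public static void main(String[] args) {
    ByteBuffer result = ByteBuffer.allocate(64);
    result.put(new byte[] {1, 2, 3}); // position advances to 3 while writing
    result.flip();                    // limit = 3, position = 0: ready to read
    byte[] dest = new byte[result.remaining()];
    result.get(dest);                 // drains bytes 0..2 into dest
    System.out.println(java.util.Arrays.toString(dest)); // [1, 2, 3]
  }
}
{code}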

> Use ByteBuffer in striping positional read
> --
>
> Key: HDFS-8901
> URL: https://issues.apache.org/jira/browse/HDFS-8901
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: SammiChen
> Attachments: HDFS-8901-v10.patch, HDFS-8901-v17.patch, 
> HDFS-8901-v18.patch, HDFS-8901-v19.patch, HDFS-8901-v2.patch, 
> HDFS-8901-v3.patch, HDFS-8901-v4.patch, HDFS-8901-v5.patch, 
> HDFS-8901-v6.patch, HDFS-8901-v7.patch, HDFS-8901-v8.patch, 
> HDFS-8901-v9.patch, HDFS-8901.v11.patch, HDFS-8901.v12.patch, 
> HDFS-8901.v13.patch, HDFS-8901.v14.patch, HDFS-8901.v15.patch, 
> HDFS-8901.v16.patch, initial-poc.patch
>
>
> Native erasure coder prefers to direct ByteBuffer for performance 
> consideration. To prepare for it, this change uses ByteBuffer through the 
> codes in implementing striping position read. It will also fix avoiding 
> unnecessary data copying between striping read chunk buffers and decode input 
> buffers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10830) FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-02 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459047#comment-15459047
 ] 

Manoj Govindassamy commented on HDFS-10830:
---

Sure, let's track the wait/signal improvements with a new JIRA. Give me some 
time; I will take one more look at this patch and get back to you.

> FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when 
> vol being removed is in use
> 
>
> Key: HDFS-10830
> URL: https://issues.apache.org/jira/browse/HDFS-10830
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Arpit Agarwal
> Attachments: HDFS-10830.01.patch
>
>
> {{FsDatasetImpl#removeVolumes()}} operation crashes abruptly with 
> IllegalMonitorStateException whenever the volume being removed is in use 
> concurrently.
> Looks like {{removeVolumes()}} is waiting on a monitor object "this" (that is, 
> the FsDatasetImpl) which it has never locked, leading to an 
> IllegalMonitorStateException. This monitor wait happens only when the volume 
> being removed is in use (reference count > 0). The thread performing the remove 
> volume operation thus crashes abruptly, and block invalidations for the removed 
> volumes are skipped entirely.
> {code:title=FsDatasetImpl.java|borderStyle=solid}
> @Override
> public void removeVolumes(Set volumesToRemove, boolean clearFailure) {
> ..
> ..
> try (AutoCloseableLock lock = datasetLock.acquire()) {   <== LOCK acquire 
> datasetLock
> for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
>   .. .. ..
>   asyncDiskService.removeVolume(sd.getCurrentDir()); <== volume SD1 remove
>   volumes.removeVolume(absRoot, clearFailure);
>   volumes.waitVolumeRemoved(5000, this); <== WAIT on "this" 
> ?? But, we haven't locked it yet.
>  This will cause 
> IllegalMonitorStateException
>  and crash 
> getBlockReports()/FBR thread!
>   for (String bpid : volumeMap.getBlockPoolList()) {
> List<ReplicaInfo> blocks = new ArrayList<>();
> for (Iterator<ReplicaInfo> it = volumeMap.replicas(bpid).iterator();
>  it.hasNext(); ) {
> .. .. .. 
> it.remove(); <== volumeMap removal
>   }
> blkToInvalidate.put(bpid, blocks);
>   }
>  .. ..
> }<== LOCK release 
> datasetLock   
> // Call this outside the lock.
> for (Map.Entry<String, List<ReplicaInfo>> entry :
> blkToInvalidate.entrySet()) {
>  ..
>  for (ReplicaInfo block : blocks) {
>   invalidate(bpid, block);   <== Notify NN of 
> Block removal
>  }
> }
> {code}
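For illustration, a minimal sketch of the monitor discipline the quoted code 
violates; the monitor object and the removal predicate here are hypothetical. 
{{wait()}} may only be called by a thread that holds the monitor, and should 
sit in a loop that re-checks the condition:

{code}
import java.util.function.BooleanSupplier;

class MonitorWaitSketch {
  // Hypothetical sketch: unlike the unsynchronized
  // volumes.waitVolumeRemoved(5000, this) call above, the monitor is held
  // across the wait, and the condition is re-checked in a loop.
  static void awaitRemoval(Object monitor, BooleanSupplier removed,
      long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    synchronized (monitor) {
      while (!removed.getAsBoolean()) {
        long remaining = deadline - System.currentTimeMillis();
        if (remaining <= 0) {
          break;  // timed out; the caller decides whether to abort or retry
        }
        monitor.wait(remaining);
      }
    }
  }
}
{code}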



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10705) libhdfs++: FileSystem should have a convenience no-args ctor

2016-09-02 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459023#comment-15459023
 ] 

Bob Hansen commented on HDFS-10705:
---

One change, if we could: remove "convenience factory" from the comment on 
New().  Just make it "Returns a new instance with default user and option, with 
the default IOService" or somesuch.

> libhdfs++: FileSystem should have a convenience no-args ctor
> 
>
> Key: HDFS-10705
> URL: https://issues.apache.org/jira/browse/HDFS-10705
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: James Clampffer
> Attachments: HDFS-10705.HDFS-8707.000.patch
>
>
> Our examples demonstrate that the common use case is "use default options, 
> default username, and default IOService."  Let's make a FileSystem::New() 
> that helps users with that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10745) Directly resolve paths into INodesInPath

2016-09-02 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10745:
--
Fix Version/s: (was: 2.9.0)
   2.8.0

> Directly resolve paths into INodesInPath
> 
>
> Key: HDFS-10745
> URL: https://issues.apache.org/jira/browse/HDFS-10745
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10745.2.patch, HDFS-10745.branch-2.patch, 
> HDFS-10745.patch
>
>
> The intermediate resolution to a string, only to be decomposed by 
> {{INodesInPath}} back into a byte[][] can be eliminated by resolving directly 
> to an IIP.  The IIP will contain the resolved path if required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10745) Directly resolve paths into INodesInPath

2016-09-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15459012#comment-15459012
 ] 

Kihwal Lee commented on HDFS-10745:
---

Committed to branch-2.8.

> Directly resolve paths into INodesInPath
> 
>
> Key: HDFS-10745
> URL: https://issues.apache.org/jira/browse/HDFS-10745
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10745.2.patch, HDFS-10745.branch-2.patch, 
> HDFS-10745.patch
>
>
> The intermediate resolution to a string, only to be decomposed by 
> {{INodesInPath}} back into a byte[][] can be eliminated by resolving directly 
> to an IIP.  The IIP will contain the resolved path if required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10744) Internally optimize path component resolution

2016-09-02 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10744:
--
Fix Version/s: (was: 2.9.0)
   2.8.0

> Internally optimize path component resolution
> -
>
> Key: HDFS-10744
> URL: https://issues.apache.org/jira/browse/HDFS-10744
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10744.patch
>
>
> {{FSDirectory}}'s path resolution currently uses a mixture of string & 
> byte[][]  conversions, back to string, back to byte[][] for {{INodesInPath}}. 
>  Internally all path component resolution should be byte[][]-based as the 
> precursor to instantiating an {{INodesInPath}} w/o the last 2 unnecessary 
> conversions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10744) Internally optimize path component resolution

2016-09-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15458990#comment-15458990
 ] 

Kihwal Lee commented on HDFS-10744:
---

Committed to branch-2.8.

> Internally optimize path component resolution
> -
>
> Key: HDFS-10744
> URL: https://issues.apache.org/jira/browse/HDFS-10744
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10744.patch
>
>
> {{FSDirectory}}'s path resolution currently uses a mixture of string & 
> byte[][]  conversions, back to string, back to byte[][] for {{INodesInPath}}. 
>  Internally all path component resolution should be byte[][]-based as the 
> precursor to instantiating an {{INodesInPath}} w/o the last 2 unnecessary 
> conversions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10711) Optimize FSPermissionChecker group membership check

2016-09-02 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10711:
--
Fix Version/s: (was: 2.9.0)
   2.8.0

> Optimize FSPermissionChecker group membership check
> ---
>
> Key: HDFS-10711
> URL: https://issues.apache.org/jira/browse/HDFS-10711
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10711.1.patch, HDFS-10711.patch
>
>
> HADOOP-13442 obviates the need for multiple group-related object allocations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10711) Optimize FSPermissionChecker group membership check

2016-09-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15458975#comment-15458975
 ] 

Kihwal Lee commented on HDFS-10711:
---

Committed to 2.8.

> Optimize FSPermissionChecker group membership check
> ---
>
> Key: HDFS-10711
> URL: https://issues.apache.org/jira/browse/HDFS-10711
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10711.1.patch, HDFS-10711.patch
>
>
> HADOOP-13442 obviates the need for multiple group-related object allocations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10834) Add concat to libhdfs API

2016-09-02 Thread Gary Helmling (JIRA)
Gary Helmling created HDFS-10834:


 Summary: Add concat to libhdfs API
 Key: HDFS-10834
 URL: https://issues.apache.org/jira/browse/HDFS-10834
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs
Reporter: Gary Helmling


libhdfs does not currently expose FileSystem.concat().  Let's add a function 
for it.
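For context, a hedged sketch of the underlying Java call that such a libhdfs 
function would wrap; the paths are placeholders. {{FileSystem#concat}} moves 
the blocks of the source files onto the end of the target file:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConcatExample {
  public static void main(String[] args) throws IOException {
    // Placeholder paths: merge part files into a single target file.
    FileSystem fs = FileSystem.get(new Configuration());
    Path target = new Path("/data/output/part-00000");
    Path[] sources = { new Path("/data/output/part-00001"),
                       new Path("/data/output/part-00002") };
    fs.concat(target, sources); // the source files are removed on success
  }
}
{code}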



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10830) FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-02 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15458960#comment-15458960
 ] 

Arpit Agarwal commented on HDFS-10830:
--

bq. Wouldn't it be better to go for the following: wait and signal model 
compared to polling
I completely agree, but that may be a more complex change. Let's fix the 
immediate problem first and address the signaling improvement later. Sound fair?

I assigned it to myself.

> FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when 
> vol being removed is in use
> 
>
> Key: HDFS-10830
> URL: https://issues.apache.org/jira/browse/HDFS-10830
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Arpit Agarwal
> Attachments: HDFS-10830.01.patch
>
>
> {{FsDatasetImpl#removeVolumes()}} operation crashes abruptly with 
> IllegalMonitorStateException whenever the volume being removed is in use 
> concurrently.
> Looks like {{removeVolumes()}} is waiting on a monitor object "this" (that is, 
> the FsDatasetImpl) which it has never locked, leading to an 
> IllegalMonitorStateException. This monitor wait happens only when the volume 
> being removed is in use (reference count > 0). The thread performing the remove 
> volume operation thus crashes abruptly, and block invalidations for the removed 
> volumes are skipped entirely.
> {code:title=FsDatasetImpl.java|borderStyle=solid}
> @Override
> public void removeVolumes(Set volumesToRemove, boolean clearFailure) {
> ..
> ..
> try (AutoCloseableLock lock = datasetLock.acquire()) {   <== LOCK acquire 
> datasetLock
> for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
>   .. .. ..
>   asyncDiskService.removeVolume(sd.getCurrentDir()); <== volume SD1 remove
>   volumes.removeVolume(absRoot, clearFailure);
>   volumes.waitVolumeRemoved(5000, this); <== WAIT on "this" 
> ?? But, we haven't locked it yet.
>  This will cause 
> IllegalMonitorStateException
>  and crash 
> getBlockReports()/FBR thread!
>   for (String bpid : volumeMap.getBlockPoolList()) {
> List<ReplicaInfo> blocks = new ArrayList<>();
> for (Iterator<ReplicaInfo> it = volumeMap.replicas(bpid).iterator();
>  it.hasNext(); ) {
> .. .. .. 
> it.remove(); <== volumeMap removal
>   }
> blkToInvalidate.put(bpid, blocks);
>   }
>  .. ..
> }<== LOCK release 
> datasetLock   
> // Call this outside the lock.
> for (Map.Entry<String, List<ReplicaInfo>> entry :
> blkToInvalidate.entrySet()) {
>  ..
>  for (ReplicaInfo block : blocks) {
>   invalidate(bpid, block);   <== Notify NN of 
> Block removal
>  }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10830) FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-02 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDFS-10830:


Assignee: Arpit Agarwal  (was: Manoj Govindassamy)

> FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when 
> vol being removed is in use
> 
>
> Key: HDFS-10830
> URL: https://issues.apache.org/jira/browse/HDFS-10830
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Arpit Agarwal
> Attachments: HDFS-10830.01.patch
>
>
> The {{FsDatasetImpl#removeVolumes()}} operation crashes abruptly with an 
> IllegalMonitorStateException whenever the volume being removed is in 
> concurrent use.
> Looks like {{removeVolumes()}} is waiting on a monitor object "this" (that 
> is, the FsDatasetImpl) which it has never locked, leading to 
> IllegalMonitorStateException. This monitor wait happens only when the volume 
> being removed is in use (reference count > 0). The thread performing the 
> remove-volume operation thus crashes abruptly, and block invalidations for 
> the removed volumes are skipped entirely. 
> {code:title=FsDatasetImpl.java|borderStyle=solid}
> @Override
> public void removeVolumes(Set<File> volumesToRemove, boolean clearFailure) {
>   ..
>   ..
>   try (AutoCloseableLock lock = datasetLock.acquire()) {    <== LOCK acquire datasetLock
>     for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
>       .. .. ..
>       asyncDiskService.removeVolume(sd.getCurrentDir());    <== volume SD1 remove
>       volumes.removeVolume(absRoot, clearFailure);
>       volumes.waitVolumeRemoved(5000, this);                <== WAIT on "this"?? But we haven't
>                                                                 locked it yet. This will cause
>                                                                 IllegalMonitorStateException and
>                                                                 crash getBlockReports()/FBR thread!
>       for (String bpid : volumeMap.getBlockPoolList()) {
>         List<ReplicaInfo> blocks = new ArrayList<>();
>         for (Iterator<ReplicaInfo> it = volumeMap.replicas(bpid).iterator();
>              it.hasNext(); ) {
>           .. .. ..
>           it.remove();                                      <== volumeMap removal
>         }
>         blkToInvalidate.put(bpid, blocks);
>       }
>       .. ..
>   }                                                         <== LOCK release datasetLock
>   // Call this outside the lock.
>   for (Map.Entry<String, List<ReplicaInfo>> entry : blkToInvalidate.entrySet()) {
>     ..
>     for (ReplicaInfo block : blocks) {
>       invalidate(bpid, block);                              <== Notify NN of block removal
>     }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10830) FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-02 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15458948#comment-15458948
 ] 

Manoj Govindassamy commented on HDFS-10830:
---

Thanks for working on this [~arpitagarwal] and submitting a patch. Would you be 
willing to own this bug? If so, please feel free to assign it to yourself.

Had a quick look at the patch. I believe the submitted patch will not throw 
IllegalMonitorStateException, but it looks like the patch also retains the old 
"polling" model, that is, wait and check in a loop. Wouldn't it be better to go 
for the following (see the sketch after this list)?
* a wait-and-signal model instead of polling
* an overall timeout to give up and abort the remove-volume operation when a 
concurrent user takes too long to finish and the reference count does not drop 
to zero

Please let me know your thoughts on the above.
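
Something along these lines is what I mean; the class and member names 
({{VolumeRemovalWaiter}}, {{releaseReference()}}) are made up for illustration, 
not taken from the patch. The key points are that the waiter must call 
{{await()}} (not {{wait()}}) on the {{Condition}} while holding its lock, and 
that the overall timeout bounds the total wait across wakeups:

{code:title=Sketch: Condition-based wait/signal with an overall timeout|borderStyle=solid}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative only -- not the actual patch.
class VolumeRemovalWaiter {
  private final ReentrantLock lock = new ReentrantLock();
  private final Condition removed = lock.newCondition();
  private int referenceCount = 1;   // > 0 while the volume is still in use

  // Waiter: block until the reference count drops to zero, giving up
  // once the overall timeout expires instead of polling in a sleep loop.
  boolean awaitVolumeRemoved(long timeout, TimeUnit unit)
      throws InterruptedException {
    long nanosLeft = unit.toNanos(timeout);
    lock.lock();
    try {
      while (referenceCount > 0) {
        if (nanosLeft <= 0L) {
          return false;             // timed out; caller can abort the removal
        }
        nanosLeft = removed.awaitNanos(nanosLeft);
      }
      return true;
    } finally {
      lock.unlock();
    }
  }

  // Signaler: called by a concurrent user when it releases its reference.
  void releaseReference() {
    lock.lock();
    try {
      if (--referenceCount == 0) {
        removed.signalAll();        // wake waiters exactly when done
      }
    } finally {
      lock.unlock();
    }
  }
}
{code}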

> FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when 
> vol being removed is in use
> 
>
> Key: HDFS-10830
> URL: https://issues.apache.org/jira/browse/HDFS-10830
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-10830.01.patch
>
>
> The {{FsDatasetImpl#removeVolumes()}} operation crashes abruptly with an 
> IllegalMonitorStateException whenever the volume being removed is in 
> concurrent use.
> Looks like {{removeVolumes()}} is waiting on a monitor object "this" (that 
> is, the FsDatasetImpl) which it has never locked, leading to 
> IllegalMonitorStateException. This monitor wait happens only when the volume 
> being removed is in use (reference count > 0). The thread performing the 
> remove-volume operation thus crashes abruptly, and block invalidations for 
> the removed volumes are skipped entirely. 
> {code:title=FsDatasetImpl.java|borderStyle=solid}
> @Override
> public void removeVolumes(Set<File> volumesToRemove, boolean clearFailure) {
>   ..
>   ..
>   try (AutoCloseableLock lock = datasetLock.acquire()) {    <== LOCK acquire datasetLock
>     for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
>       .. .. ..
>       asyncDiskService.removeVolume(sd.getCurrentDir());    <== volume SD1 remove
>       volumes.removeVolume(absRoot, clearFailure);
>       volumes.waitVolumeRemoved(5000, this);                <== WAIT on "this"?? But we haven't
>                                                                 locked it yet. This will cause
>                                                                 IllegalMonitorStateException and
>                                                                 crash getBlockReports()/FBR thread!
>       for (String bpid : volumeMap.getBlockPoolList()) {
>         List<ReplicaInfo> blocks = new ArrayList<>();
>         for (Iterator<ReplicaInfo> it = volumeMap.replicas(bpid).iterator();
>              it.hasNext(); ) {
>           .. .. ..
>           it.remove();                                      <== volumeMap removal
>         }
>         blkToInvalidate.put(bpid, blocks);
>       }
>       .. ..
>   }                                                         <== LOCK release datasetLock
>   // Call this outside the lock.
>   for (Map.Entry<String, List<ReplicaInfo>> entry : blkToInvalidate.entrySet()) {
>     ..
>     for (ReplicaInfo block : blocks) {
>       invalidate(bpid, block);                              <== Notify NN of block removal
>     }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10828) Fix usage of FsDatasetImpl object lock in ReplicaMap

2016-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15458906#comment-15458906
 ] 

Hadoop QA commented on HDFS-10828:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 213 unchanged - 0 fixed = 214 total (was 213) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 63m 
53s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10828 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826853/HDFS-10828.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ddff152d3e00 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 05f5c0f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16618/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16618/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16618/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix usage of FsDatasetImpl object lock in ReplicaMap
> 
>
> Key: HDFS-10828
> URL: https://issues.apache.org/jira/browse/HDFS-10828
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> 

[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-02 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15458891#comment-15458891
 ] 

Manoj Govindassamy commented on HDFS-9781:
--

Thanks for the review [~jojochuang].

* Sure, will add more details like the block id to the log. Thinking more about 
this, I wonder if a ton of log output will be emitted, since the volume being 
removed can hold a lot of replicas. So, should I also change this to debug-only 
logging? Please advise.

* Sure, will catch only the expected InterruptedException rather than all 
exceptions, so that we do not mask other issues. Thanks. (A quick sketch of 
both points follows below.)
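
Here is roughly what I have in mind for both points; the logger and method 
names are illustrative, not the actual patch:

{code:title=Sketch: debug-only logging and narrow exception handling|borderStyle=solid}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only -- not the actual patch.
class RemovalSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(RemovalSketch.class);

  // (1) A removed volume can hold many replicas, so keep the per-block
  // message at debug level to avoid flooding the log.
  static void logBlockRemoval(long blockId, String volume) {
    LOG.debug("Removed block {} from volume {}", blockId, volume);
  }

  // (2) Catch only the expected InterruptedException and restore the
  // interrupt status, instead of swallowing every exception.
  static void sleepQuietly(long millis) {
    try {
      Thread.sleep(millis);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
{code}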

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9781.002.patch, HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10830) FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15458843#comment-15458843
 ] 

Hadoop QA commented on HDFS-10830:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 27s{color} | {color:orange} root: The patch generated 1 new + 114 unchanged 
- 0 fixed = 115 total (was 114) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
0s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-common-project_hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
47s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Monitor wait() called on a Condition in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.waitVolumeRemoved(int,
 Condition)  At FsVolumeList.java:Condition in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.waitVolumeRemoved(int,
 Condition)  At FsVolumeList.java:[line 285] |
|  |  Calling wait rather than await in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.waitVolumeRemoved(int,
 Condition)  At FsVolumeList.java:in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.waitVolumeRemoved(int,
 Condition)  At FsVolumeList.java:[line 285] |
| Failed junit tests | hadoop.hdfs.server.datanode.TestBlockScanner |
\\
\\
|| Subsystem || Report/Notes ||
| 
