[jira] [Comment Edited] (HDFS-13360) Ozone: The configuration of implement of DataNodeServicePlugin should obtain from datanode instance

2018-03-28 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418504#comment-16418504
 ] 

Nanda kumar edited comment on HDFS-13360 at 3/29/18 6:39 AM:
-

Thanks [~Deng FEI] for reporting and working on this.
We cannot start {{HdslDatanodeService}} (DatanodeStateMachine to be exact) 
before starting {{ObjectStoreRestPlugin}} as we might miss setting 
{{ozoneRestPort}} in some race condition. Please refer to 
[this|https://issues.apache.org/jira/browse/HDFS-13300?focusedCommentId=16409726&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16409726]
 comment for more details.
We should also create the {{DatanodeDetails}} instance in {{HdslDatanodeService}} 
before starting {{ObjectStoreRestPlugin}}, since {{DatanodeDetails}} is shared 
between {{HdslDatanodeService}} and {{ObjectStoreRestPlugin}}. This was the 
main reason for moving the creation of {{DatanodeDetails}} to the 
constructor of {{HdslDatanodeService}}.
More details 
[here|https://issues.apache.org/jira/browse/HDFS-13300?focusedCommentId=16413946&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16413946].

As you have mentioned, the existing implementation breaks 
MiniOzoneClassicCluster. We should think of a cleaner way to address this issue.
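
To make the ordering constraint concrete, here is a minimal, self-contained 
sketch. The classes below are simplified stand-ins invented for illustration, 
not the actual HDSL/Ozone APIs: the shared details object is created in the 
service constructor, so it already exists when the REST plugin starts and 
publishes its port.
{code:java}
// Simplified stand-ins for illustration only; not the real HDSL/Ozone classes.
final class DatanodeDetailsStub {
  volatile int ozoneRestPort = -1;   // published later by the REST plugin
}

final class RestPluginStub {
  private final DatanodeDetailsStub details;
  RestPluginStub(DatanodeDetailsStub details) { this.details = details; }
  void start(int port) { details.ozoneRestPort = port; }  // publishes the port
}

final class DatanodeServiceStub {
  private final DatanodeDetailsStub details;
  DatanodeServiceStub() {
    // Creating the shared details in the constructor guarantees it exists
    // before any plugin that needs it is started.
    details = new DatanodeDetailsStub();
  }
  DatanodeDetailsStub getDatanodeDetails() { return details; }
  void startStateMachine() {
    // Started only after the REST plugin has published ozoneRestPort,
    // so the port can never be missed because of a startup race.
    if (details.ozoneRestPort <= 0) {
      throw new IllegalStateException("REST port not yet published");
    }
  }
}

public class StartupOrderSketch {
  public static void main(String[] args) {
    DatanodeServiceStub service = new DatanodeServiceStub();
    RestPluginStub rest = new RestPluginStub(service.getDatanodeDetails());
    rest.start(9880);              // 1. REST plugin publishes its port
    service.startStateMachine();   // 2. state machine starts afterwards
  }
}
{code}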


was (Author: nandakumar131):
Thanks [~Deng FEI] for reporting and working on this.
We cannot start {{HdslDatanodeService}} (DatanodeStateMachine to be exact) 
before starting {{ObjectStoreRestPlugin}} as we might miss setting 
{{ozoneRestPort}} in some race condition. Please refer to this comment for more 
details.
We should also create the {{DatanodeDetails}} instance in {{HdslDatanodeService}} 
before starting {{ObjectStoreRestPlugin}}, since {{DatanodeDetails}} is shared 
between {{HdslDatanodeService}} and {{ObjectStoreRestPlugin}}. This was the 
main reason for moving the creation of {{DatanodeDetails}} to the 
constructor of {{HdslDatanodeService}}.
More details here.

As you have mentioned, the existing implementation breaks 
MiniOzoneClassicCluster. We should think of a cleaner way to address this issue.

> Ozone: The configuration of  implement of DataNodeServicePlugin should obtain 
> from datanode instance
> 
>
> Key: HDFS-13360
> URL: https://issues.apache.org/jira/browse/HDFS-13360
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: DENG FEI
>Priority: Blocker
> Attachments: HDFS-13360-HDFS-7240.000.patch
>
>
> MiniOzoneClassicCluster configures Ozone as enabled, but ObjectStoreRestPlugin & 
> HdslDatanodeService load their configuration from resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13360) Ozone: The configuration of implement of DataNodeServicePlugin should obtain from datanode instance

2018-03-28 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418504#comment-16418504
 ] 

Nanda kumar commented on HDFS-13360:


Thanks [~Deng FEI] for reporting and working on this.
We cannot start {{HdslDatanodeService}} (DatanodeStateMachine to be exact) 
before starting {{ObjectStoreRestPlugin}} as we might miss setting 
{{ozoneRestPort}} in some race condition. Please refer to this comment for more 
details.
We should also create the {{DatanodeDetails}} instance in {{HdslDatanodeService}} 
before starting {{ObjectStoreRestPlugin}}, since {{DatanodeDetails}} is shared 
between {{HdslDatanodeService}} and {{ObjectStoreRestPlugin}}. This was the 
main reason for moving the creation of {{DatanodeDetails}} to the 
constructor of {{HdslDatanodeService}}.
More details here.

As you have mentioned, the existing implementation breaks 
MiniOzoneClassicCluster. We should think of a cleaner way to address this issue.

> Ozone: The configuration of  implement of DataNodeServicePlugin should obtain 
> from datanode instance
> 
>
> Key: HDFS-13360
> URL: https://issues.apache.org/jira/browse/HDFS-13360
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: DENG FEI
>Priority: Blocker
> Attachments: HDFS-13360-HDFS-7240.000.patch
>
>
> MiniOzoneClassicCluster configures Ozone as enabled, but ObjectStoreRestPlugin & 
> HdslDatanodeService load their configuration from resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.

2018-03-28 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418441#comment-16418441
 ] 

Xiao Chen commented on HDFS-13281:
--

Makes sense, so this patch only applies if step #1 does a {{startFile}} with 
/.reserved/raw, in which case there will be no step #3, and it will be assumed 
that encrypted bytes are streamed to the DN. And step #5 also only applies to 
files created with /.reserved/raw.

Could you rebase the patch? It doesn't apply to trunk.

Code comments:
 * The test doesn't verify that no intermediate EDEK was consumed and set on 
the file. You probably need to set a custom provider with some counters to 
make sure the startFile didn't go through it (see the sketch below).
 * Because reading /.reserved/raw is supposed to return the raw bytes, 
shouldn't {{encryptedReservedStream}} be different from {{unEncryptedBytes}}? 
 * Can you add a distcp test as you mentioned before? Best if it's between two 
zones with different keys, so the test can verify the decryption with the 
correct key.

Please link this to the webhdfs umbrella jira.
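
As a rough illustration of the counter idea from the first code comment above, 
here is a self-contained sketch. {{EdekGenerator}} is a hypothetical stand-in 
interface, not the real KeyProvider API; the actual test would wrap whatever 
provider the cluster is configured with.
{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical stand-in for the provider interface the test would wrap;
// the real test would delegate to the configured key provider instead.
interface EdekGenerator {
  byte[] generateEdek(String keyName);
}

// Counts EDEK generations so the test can assert the count stayed at zero
// for a startFile issued through /.reserved/raw.
final class CountingEdekGenerator implements EdekGenerator {
  private final EdekGenerator delegate;
  private final AtomicLong calls = new AtomicLong();

  CountingEdekGenerator(EdekGenerator delegate) {
    this.delegate = delegate;
  }

  @Override
  public byte[] generateEdek(String keyName) {
    calls.incrementAndGet();          // record every EDEK generation
    return delegate.generateEdek(keyName);
  }

  long edeksGenerated() {
    return calls.get();
  }
}
{code}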

> Namenode#createFile should be /.reserved/raw/ aware.
> 
>
> Key: HDFS-13281
> URL: https://issues.apache.org/jira/browse/HDFS-13281
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-13281.001.patch
>
>
> If I want to write to /.reserved/raw/ and if that directory happens to 
> be in EZ, then namenode *should not* create edek and just copy the raw bytes 
> from the source.
>  Namenode#startFileInt should be /.reserved/raw/ aware.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13360) Ozone: The configuration of implement of DataNodeServicePlugin should obtain from datanode instance

2018-03-28 Thread DENG FEI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DENG FEI updated HDFS-13360:

Attachment: HDFS-13360-HDFS-7240.000.patch

> Ozone: The configuration of  implement of DataNodeServicePlugin should obtain 
> from datanode instance
> 
>
> Key: HDFS-13360
> URL: https://issues.apache.org/jira/browse/HDFS-13360
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: DENG FEI
>Priority: Blocker
> Attachments: HDFS-13360-HDFS-7240.000.patch
>
>
> MiniOzoneClassicCluster configures Ozone as enabled, but ObjectStoreRestPlugin & 
> HdslDatanodeService load their configuration from resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13367) Ozone: ObjectStoreRestPlugin initialization depend on HdslDatanodeService

2018-03-28 Thread DENG FEI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DENG FEI updated HDFS-13367:

Attachment: (was: HDFS-13360-HDFS-7240.000.patch)

> Ozone: ObjectStoreRestPlugin initialization depend on HdslDatanodeService
> -
>
> Key: HDFS-13367
> URL: https://issues.apache.org/jira/browse/HDFS-13367
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ozone
>Affects Versions: HDFS-7240
> Environment: ObjectStoreRestPlugin obtains DatanodeDetails from 
> HdslDatanodeService, so it should be initialized after HdslDatanodeService; 
> if the order is not followed, a warning should be issued
>Reporter: DENG FEI
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13367) Ozone: ObjectStoreRestPlugin initialization depend on HdslDatanodeService

2018-03-28 Thread DENG FEI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DENG FEI updated HDFS-13367:

Attachment: HDFS-13360-HDFS-7240.000.patch

> Ozone: ObjectStoreRestPlugin initialization depend on HdslDatanodeService
> -
>
> Key: HDFS-13367
> URL: https://issues.apache.org/jira/browse/HDFS-13367
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ozone
>Affects Versions: HDFS-7240
> Environment: ObjectStoreRestPlugin obtains DatanodeDetails from 
> HdslDatanodeService, so it should be initialized after HdslDatanodeService; 
> if the order is not followed, a warning should be issued
>Reporter: DENG FEI
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13360) Ozone: The configuration of implement of DataNodeServicePlugin should obtain from datanode instance

2018-03-28 Thread DENG FEI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DENG FEI updated HDFS-13360:

Attachment: (was: HDFS-13360-HDFS-7240.000.patch)

> Ozone: The configuration of  implement of DataNodeServicePlugin should obtain 
> from datanode instance
> 
>
> Key: HDFS-13360
> URL: https://issues.apache.org/jira/browse/HDFS-13360
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: DENG FEI
>Priority: Blocker
>
> MiniOzoneClassicCluster configures Ozone as enabled, but ObjectStoreRestPlugin & 
> HdslDatanodeService load their configuration from resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13367) Ozone: ObjectStoreRestPlugin initialization depend on HdslDatanodeService

2018-03-28 Thread DENG FEI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DENG FEI resolved HDFS-13367.
-
Resolution: Invalid

> Ozone: ObjectStoreRestPlugin initialization depend on HdslDatanodeService
> -
>
> Key: HDFS-13367
> URL: https://issues.apache.org/jira/browse/HDFS-13367
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ozone
>Affects Versions: HDFS-7240
> Environment: ObjectStoreRestPlugin obtains DatanodeDetails from 
> HdslDatanodeService, so it should be initialized after HdslDatanodeService; 
> if the order is not followed, a warning should be issued
>Reporter: DENG FEI
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13359) DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream

2018-03-28 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418378#comment-16418378
 ] 

Yiqun Lin commented on HDFS-13359:
--

Thanks [~jojochuang] for the comment.
{quote}Could you shed a little more light on why changing from an object lock 
to a ReentrantLock improves locking? Is it because it is a fair lock?
{quote}
When lock contention increases, {{ReentrantLock}} generally performs better 
than a {{synchronized}} lock. By default, {{ReentrantLock}} uses a non-fair 
strategy, the same as {{synchronized}}, so fairness is not the real reason for 
the change here, :).
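
For illustration, here is a small self-contained sketch of the 
try-with-resources locking pattern being discussed, using only JDK types; the 
actual patch reuses the {{AutoCloseableLock}} already defined for 
{{ReplicaMap}}, and the class below is just a stand-in for it.
{code:java}
import java.util.concurrent.locks.ReentrantLock;

// Stand-in for the AutoCloseableLock idea: acquire() takes the lock and the
// try-with-resources block releases it on exit.
final class CloseableLock implements AutoCloseable {
  private final ReentrantLock lock = new ReentrantLock();

  CloseableLock acquire() {
    lock.lock();
    return this;
  }

  @Override
  public void close() {
    lock.unlock();
  }
}

class ReplicaLookupSketch {
  private final CloseableLock datasetLock = new CloseableLock();
  private volatile String replicaInfo = "replica-info";  // stands in for volumeMap state

  String getReplicaInfo() {
    // Replaces synchronized(this): only the shared-state lookup runs under the
    // lock, and the lock is released automatically when the block exits.
    try (CloseableLock l = datasetLock.acquire()) {
      return replicaInfo;
    }
  }
}
{code}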

> DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream
> -
>
> Key: HDFS-13359
> URL: https://issues.apache.org/jira/browse/HDFS-13359
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13359.001.patch, stack.jpg
>
>
> DataXceiver hung due to the lock taken by 
>  {{FsDatasetImpl#getBlockInputStream}} (stack trace attached).
> {code:java}
>   @Override // FsDatasetSpi
>   public InputStream getBlockInputStream(ExtendedBlock b,
>   long seekOffset) throws IOException {
> ReplicaInfo info;
> synchronized(this) {
>   info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
> }
> ...
>   }
> {code}
> The {{synchronized(this)}} lock used here is expensive; there is already an 
> {{AutoCloseableLock}}-type lock defined for {{ReplicaMap}}. We can use it 
> instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13367) Ozone: ObjectStoreRestPlugin initialization depend on HdslDatanodeService

2018-03-28 Thread DENG FEI (JIRA)
DENG FEI created HDFS-13367:
---

 Summary: Ozone: ObjectStoreRestPlugin initialization depend on 
HdslDatanodeService
 Key: HDFS-13367
 URL: https://issues.apache.org/jira/browse/HDFS-13367
 Project: Hadoop HDFS
  Issue Type: Task
  Components: ozone
Affects Versions: HDFS-7240
 Environment: ObjectStoreRestPlugin obtains DatanodeDetails from 
HdslDatanodeService, so it should be initialized after HdslDatanodeService; if 
the order is not followed, a warning should be issued
Reporter: DENG FEI






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13360) Ozone: The configuration of implement of DataNodeServicePlugin should obtain from datanode instance

2018-03-28 Thread DENG FEI (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418369#comment-16418369
 ] 

DENG FEI commented on HDFS-13360:
-

[~anu] uploaded the first patch, please review it.

Thanks

> Ozone: The configuration of  implement of DataNodeServicePlugin should obtain 
> from datanode instance
> 
>
> Key: HDFS-13360
> URL: https://issues.apache.org/jira/browse/HDFS-13360
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: DENG FEI
>Priority: Blocker
> Attachments: HDFS-13360-HDFS-7240.000.patch
>
>
> MiniOzoneClassicCluster configures Ozone as enabled, but ObjectStoreRestPlugin & 
> HdslDatanodeService load their configuration from resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13360) Ozone: The configuration of implement of DataNodeServicePlugin should obtain from datanode instance

2018-03-28 Thread DENG FEI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DENG FEI updated HDFS-13360:

Attachment: HDFS-13360-HDFS-7240.000.patch

> Ozone: The configuration of  implement of DataNodeServicePlugin should obtain 
> from datanode instance
> 
>
> Key: HDFS-13360
> URL: https://issues.apache.org/jira/browse/HDFS-13360
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: DENG FEI
>Priority: Blocker
> Attachments: HDFS-13360-HDFS-7240.000.patch
>
>
> MiniOzoneClassicCluster configures Ozone as enabled, but ObjectStoreRestPlugin & 
> HdslDatanodeService load their configuration from resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13365) RBF: Adding trace support

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418366#comment-16418366
 ] 

genericqa commented on HDFS-13365:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m  1s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterAllResolver |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13365 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916707/HDFS-13365.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8ed9e60945b3 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3d185d6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23710/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23710/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt

[jira] [Commented] (HDFS-13310) [WRITE] The DatanodeProtocol should be have DNA_BACKUP to backup blocks

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418359#comment-16418359
 ] 

genericqa commented on HDFS-13310:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 10s{color} | {color:orange} hadoop-hdfs-project: The patch generated 89 new 
+ 844 unchanged - 1 fixed = 933 total (was 845) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
43s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
27s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}200m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  
org.apache.hadoop.hdfs.server.protocol.SyncTaskExecutionResult.getResult() may 
expose internal representation by returning SyncTaskExecutionResult.result  At 
SyncTaskExecutionResult.java:by returning SyncTaskExecutionResult.result  At 
SyncTaskExecutionResult.java:[line 36] |
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hd

[jira] [Commented] (HDFS-13364) RBF: Support NamenodeProtocol in the Router

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418340#comment-16418340
 ] 

genericqa commented on HDFS-13364:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-rbf generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 21s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13364 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916714/HDFS-13364.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ebe315aa35d2 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3d185d6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23709/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23709/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23709/tes

[jira] [Updated] (HDFS-13360) Ozone: The configuration of implement of DataNodeServicePlugin should obtain from datanode instance

2018-03-28 Thread DENG FEI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DENG FEI updated HDFS-13360:

Summary: Ozone: The configuration of  implement of DataNodeServicePlugin 
should obtain from datanode instance  (was: Ozone: The configuration of  
implement of DtaNodeServicePlugin should obtain from datanode instance)

> Ozone: The configuration of  implement of DataNodeServicePlugin should obtain 
> from datanode instance
> 
>
> Key: HDFS-13360
> URL: https://issues.apache.org/jira/browse/HDFS-13360
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: DENG FEI
>Priority: Blocker
>
> MiniOzoneClassicCluster configures Ozone as enabled, but ObjectStoreRestPlugin & 
> HdslDatanodeService load their configuration from resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13365) RBF: Adding trace support

2018-03-28 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13365:
---
Status: Patch Available  (was: Open)

> RBF: Adding trace support
> -
>
> Key: HDFS-13365
> URL: https://issues.apache.org/jira/browse/HDFS-13365
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13365.000.patch
>
>
> We should support HTrace and add spans.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13364) RBF: Support NamenodeProtocol in the Router

2018-03-28 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13364:
---
Attachment: HDFS-13364.001.patch

> RBF: Support NamenodeProtocol in the Router
> ---
>
> Key: HDFS-13364
> URL: https://issues.apache.org/jira/browse/HDFS-13364
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13364.000.patch, HDFS-13364.001.patch
>
>
> The Router should support the NamenodeProtocol to get blocks, versions, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13364) RBF: Support NamenodeProtocol in the Router

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418284#comment-16418284
 ] 

genericqa commented on HDFS-13364:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 22s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-rbf generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13364 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916705/HDFS-13364.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0395d787ac08 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a991e89 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23707/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23707/artifact/out/patch-compile-hadoop-hdfs-project_hadoop

[jira] [Updated] (HDFS-13365) RBF: Adding trace support

2018-03-28 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13365:
---
Attachment: HDFS-13365.000.patch

> RBF: Adding trace support
> -
>
> Key: HDFS-13365
> URL: https://issues.apache.org/jira/browse/HDFS-13365
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13365.000.patch
>
>
> We should support HTrace and add spans.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13364) RBF: Support NamenodeProtocol in the Router

2018-03-28 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418242#comment-16418242
 ] 

Íñigo Goiri commented on HDFS-13364:


Attached the first implementation without unit tests yet.
Internally, we had services checking getBlocks() and we needed this interface.

> RBF: Support NamenodeProtocol in the Router
> ---
>
> Key: HDFS-13364
> URL: https://issues.apache.org/jira/browse/HDFS-13364
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13364.000.patch
>
>
> The Router should support the NamenodeProtocol to get blocks, versions, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13364) RBF: Support NamenodeProtocol in the Router

2018-03-28 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13364:
---
Status: Patch Available  (was: Open)

> RBF: Support NamenodeProtocol in the Router
> ---
>
> Key: HDFS-13364
> URL: https://issues.apache.org/jira/browse/HDFS-13364
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13364.000.patch
>
>
> The Router should support the NamenodeProtocol to get blocks, versions, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13364) RBF: Support NamenodeProtocol in the Router

2018-03-28 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13364:
---
Attachment: HDFS-13364.000.patch

> RBF: Support NamenodeProtocol in the Router
> ---
>
> Key: HDFS-13364
> URL: https://issues.apache.org/jira/browse/HDFS-13364
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13364.000.patch
>
>
> The Router should support the NamenodeProtocol to get blocks, versions, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13310) [WRITE] The DatanodeProtocol should be have DNA_BACKUP to backup blocks

2018-03-28 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418235#comment-16418235
 ] 

Ewan Higgs commented on HDFS-13310:
---

Submitting an updated patch (002) that adds offset and length to the 
MULTIPART_PUT_PART command. This is so the blocks in HDFS don't need to be tied 
1-1 to the parts being written to the backing store.

There is only 1 offset and 1 length despite taking a list of LocatedBlocks 
because we assume the part is contiguous.
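
To illustrate the point about a single offset and length, a tiny 
self-contained sketch follows; {{BlockRange}} is a made-up stand-in for the 
located blocks, not a real HDFS type.
{code:java}
import java.util.Arrays;
import java.util.List;

// Illustration only: for a contiguous run of blocks, one (offset, length)
// pair describes the whole part being written to the backing store.
public class ContiguousPartSketch {
  static final class BlockRange {
    final long startOffset;
    final long length;
    BlockRange(long startOffset, long length) {
      this.startOffset = startOffset;
      this.length = length;
    }
  }

  static long[] partExtent(List<BlockRange> blocks) {
    long offset = blocks.get(0).startOffset;
    long length = 0;
    for (BlockRange b : blocks) {
      length += b.length;   // valid only because the blocks are contiguous
    }
    return new long[] { offset, length };
  }

  public static void main(String[] args) {
    List<BlockRange> part = Arrays.asList(
        new BlockRange(0L, 128L << 20), new BlockRange(128L << 20, 64L << 20));
    long[] extent = partExtent(part);
    System.out.println("offset=" + extent[0] + " length=" + extent[1]);
  }
}
{code}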

> [WRITE] The DatanodeProtocol should be have DNA_BACKUP to backup blocks
> ---
>
> Key: HDFS-13310
> URL: https://issues.apache.org/jira/browse/HDFS-13310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13310-HDFS-12090.001.patch, 
> HDFS-13310-HDFS-12090.002.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands 
> in the heartbeat response that instructs it to backup a block.
> This should take the form of two sub commands: PUT_FILE (when the file is <=1 
> block in size) and MULTIPART_PUT_PART when part of a Multipart Upload (see 
> HDFS-13186).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13310) [WRITE] The DatanodeProtocol should be have DNA_BACKUP to backup blocks

2018-03-28 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13310:
--
Status: Patch Available  (was: Open)

> [WRITE] The DatanodeProtocol should be have DNA_BACKUP to backup blocks
> ---
>
> Key: HDFS-13310
> URL: https://issues.apache.org/jira/browse/HDFS-13310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13310-HDFS-12090.001.patch, 
> HDFS-13310-HDFS-12090.002.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands 
> in the heartbeat response that instructs it to backup a block.
> This should take the form of two sub commands: PUT_FILE (when the file is <=1 
> block in size) and MULTIPART_PUT_PART when part of a Multipart Upload (see 
> HDFS-13186).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13310) [WRITE] The DatanodeProtocol should be have DNA_BACKUP to backup blocks

2018-03-28 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13310:
--
Status: Open  (was: Patch Available)

> [WRITE] The DatanodeProtocol should be have DNA_BACKUP to backup blocks
> ---
>
> Key: HDFS-13310
> URL: https://issues.apache.org/jira/browse/HDFS-13310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13310-HDFS-12090.001.patch, 
> HDFS-13310-HDFS-12090.002.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands 
> in the heartbeat response that instructs it to backup a block.
> This should take the form of two sub commands: PUT_FILE (when the file is <=1 
> block in size) and MULTIPART_PUT_PART when part of a Multipart Upload (see 
> HDFS-13186).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13310) [WRITE] The DatanodeProtocol should be have DNA_BACKUP to backup blocks

2018-03-28 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13310:
--
Attachment: HDFS-13310-HDFS-12090.002.patch

> [WRITE] The DatanodeProtocol should be have DNA_BACKUP to backup blocks
> ---
>
> Key: HDFS-13310
> URL: https://issues.apache.org/jira/browse/HDFS-13310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13310-HDFS-12090.001.patch, 
> HDFS-13310-HDFS-12090.002.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands 
> in the heartbeat response that instructs it to backup a block.
> This should take the form of two sub commands: PUT_FILE (when the file is <=1 
> block in size) and MULTIPART_PUT_PART when part of a Multipart Upload (see 
> HDFS-13186).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13330) ShortCircuitCache#fetchOrCreate never retries

2018-03-28 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418209#comment-16418209
 ] 

Chris Douglas commented on HDFS-13330:
--

bq. Could you also fix the findbugs error?
In this case, the findbugs warning is an artifact of the infinite loop. It 
infers (correctly) that the only way to exit the loop is for {{info}} to be 
assigned a non-null value. As you point out, it should also exit the loop 
before assigning {{info}} when {{replicaInfoMap.get(key)}} returns null, as 
[above|https://issues.apache.org/jira/browse/HDFS-13330?focusedCommentId=16412164&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16412164].

Since this retry code has never been enabled, this functionality is either 
unused, reliable "enough", or some other layer implements the retry logic. 
Changing this to a {{for}} loop with a low, fixed number of retries (i.e., 2, 3 
at the most) is probably sufficient. We'd like to know if/how it's used in 
applications, but absent that analysis, fixing it to work "as designed" is 
probably the best we're going to do.
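
A sketch of that bounded-retry shape, written against the fields used in the 
snippet quoted in the issue description below (so it is a fragment rather than 
a standalone program, and the retry count of 3 is just the kind of low fixed 
bound mentioned above):
{code:java}
// Fragment only: reuses the names from the snippet in the description.
ShortCircuitReplicaInfo info = null;
final int maxRetries = 3;                       // low, fixed bound
for (int attempt = 0; attempt < maxRetries && info == null; attempt++) {
  if (closed) {
    LOG.trace("{}: can't fetchOrCreate {} because the cache is closed.",
        this, key);
    return null;
  }
  Waitable<ShortCircuitReplicaInfo> waitable = replicaInfoMap.get(key);
  if (waitable == null) {
    break;        // nothing to wait on: exit the loop before assigning info
  }
  try {
    info = fetch(key, waitable);
  } catch (RetriableException e) {
    LOG.debug("{}: retrying {}", this, e.getMessage());
  }
}
{code}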

bq. In addition, a test case for this method is greatly appreciated.
+1 The retry semantics of the committed code were clearly untested.

> ShortCircuitCache#fetchOrCreate never retries
> -
>
> Key: HDFS-13330
> URL: https://issues.apache.org/jira/browse/HDFS-13330
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HDFS-13330.001.patch, HDFS-13330.002.patch
>
>
> The following do .. while(false) loop seems useless to me. The code was 
> intended to retry, but it never did. Let's fix it.
> {code:java:title=ShortCircuitCache#fetchOrCreate}
> ShortCircuitReplicaInfo info = null;
> do {
>   if (closed) {
> LOG.trace("{}: can't fethchOrCreate {} because the cache is closed.",
> this, key);
> return null;
>   }
>   Waitable<ShortCircuitReplicaInfo> waitable = replicaInfoMap.get(key);
>   if (waitable != null) {
> try {
>   info = fetch(key, waitable);
> } catch (RetriableException e) {
>   LOG.debug("{}: retrying {}", this, e.getMessage());
> }
>   }
> } while (false);{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13366) Deprecate all existing password fields in hdfs configuration

2018-03-28 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-13366:
--

 Summary: Deprecate all existing password fields in hdfs 
configuration
 Key: HDFS-13366
 URL: https://issues.apache.org/jira/browse/HDFS-13366
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


In HADOOP-15325, [~shv] suggests we should mark all password fields in the 
configuration file as deprecated.

Raising this Jira to track this work on the HDFS side.
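
For reference, a small hedged sketch of the mechanism Hadoop already offers 
for steering users off a key, {{Configuration.addDeprecation}}; the key names 
below are invented for illustration, and whether this Jira uses this mechanism 
or documentation-only deprecation is still open.
{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustration only: the key names are hypothetical, not the actual
// password keys this Jira will cover.
public class DeprecatePasswordKeySketch {
  static {
    Configuration.addDeprecation(
        "dfs.example.password",                     // hypothetical old key
        "dfs.example.credential.provider.path",     // hypothetical replacement
        "Plain-text password keys are deprecated; use a credential provider.");
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Setting the deprecated key logs a deprecation warning and the value is
    // readable through the replacement key.
    conf.set("dfs.example.password", "secret");
    System.out.println(conf.get("dfs.example.credential.provider.path"));
  }
}
{code}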



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.

2018-03-28 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418094#comment-16418094
 ] 

Rushabh S Shah commented on HDFS-13281:
---

bq. How does HDFS-12597 use it?
The use case is:
1. An EZ-aware webhdfs client (i.e. one that adds the header 
X-Hadoop-Accept-EZ) will send a {{createFile}} request to the namenode.
2. If the client supports webhdfs and the file is in an EZ, then the namenode 
will return {{FeInfo}} in the response via a header and append 
"/.reserved/raw" to the redirect path.
3. The client will encrypt the data with {{FeInfo}} and stream the encrypted 
bytes to the datanode it is redirected to.
4. Since the path is prepended with {{/.reserved/raw}}, the datanode will not 
encrypt again.
5. At the end, the client will issue {{setXAttr}} on the path to the namenode.
6. According to HDFS-13035, we will allow the owner of the file to do 
{{setXAttr}} _only if it is not already set_. If the namenode does 
{{setXAttr}} even on {{/.reserved/raw}}, then the webhdfs client will fail to 
{{setXAttr}}.
Hope it makes sense.
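
A hedged client-side sketch of steps 1-2 above. The host, port, path, and the 
header value are made up for illustration (the comment only names the header, 
not its value), and a real client would use the WebHdfsFileSystem API rather 
than raw HTTP.
{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

// Illustration of steps 1-2 only; endpoint, path and header value are guesses.
public class WebHdfsEzCreateSketch {
  public static void main(String[] args) throws Exception {
    URL create = new URL(
        "http://nn.example.com:9870/webhdfs/v1/ez/file.txt?op=CREATE");
    HttpURLConnection conn = (HttpURLConnection) create.openConnection();
    conn.setRequestMethod("PUT");
    conn.setInstanceFollowRedirects(false);
    // Step 1: an EZ-aware client advertises that it handles encryption itself.
    conn.setRequestProperty("X-Hadoop-Accept-EZ", "true");
    conn.connect();

    // Step 2: per the flow above, the redirect location would point at a
    // /.reserved/raw/... path and FeInfo would come back in a response header.
    System.out.println("redirect: " + conn.getHeaderField("Location"));
    conn.disconnect();
  }
}
{code}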

> Namenode#createFile should be /.reserved/raw/ aware.
> 
>
> Key: HDFS-13281
> URL: https://issues.apache.org/jira/browse/HDFS-13281
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-13281.001.patch
>
>
> If I want to write to /.reserved/raw/ and if that directory happens to 
> be in EZ, then namenode *should not* create edek and just copy the raw bytes 
> from the source.
>  Namenode#startFileInt should be /.reserved/raw/ aware.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13314) NameNode should optionally exit if it detects FsImage corruption

2018-03-28 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13314:
-
Fix Version/s: 2.9.1

> NameNode should optionally exit if it detects FsImage corruption
> 
>
> Key: HDFS-13314
> URL: https://issues.apache.org/jira/browse/HDFS-13314
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2
>
> Attachments: HDFS-13314.01.patch, HDFS-13314.02.patch, 
> HDFS-13314.03.patch, HDFS-13314.04.patch, HDFS-13314.05.patch
>
>
> The NameNode should optionally exit after writing an FsImage if it detects 
> the following kinds of corruptions:
> # INodeReference pointing to non-existent INode
> # Duplicate entries in snapshot deleted diff list.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13314) NameNode should optionally exit if it detects FsImage corruption

2018-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418042#comment-16418042
 ] 

Hudson commented on HDFS-13314:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13896 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13896/])
HDFS-13314. NameNode should optionally exit if it detects FsImage (arp: rev 
a991e899fb9f98d2089f37ac9ac7c485d3bbb959)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java


> NameNode should optionally exit if it detects FsImage corruption
> 
>
> Key: HDFS-13314
> URL: https://issues.apache.org/jira/browse/HDFS-13314
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.2
>
> Attachments: HDFS-13314.01.patch, HDFS-13314.02.patch, 
> HDFS-13314.03.patch, HDFS-13314.04.patch, HDFS-13314.05.patch
>
>
> The NameNode should optionally exit after writing an FsImage if it detects 
> the following kinds of corruptions:
> # INodeReference pointing to non-existent INode
> # Duplicate entries in snapshot deleted diff list.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13314) NameNode should optionally exit if it detects FsImage corruption

2018-03-28 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13314:
-
      Resolution: Fixed
   Fix Version/s: 3.0.2
                  2.10.0
                  3.1.0
Target Version/s:   (was: 2.10.0, 3.2.0)
          Status: Resolved  (was: Patch Available)

I've committed this. Thanks all for the reviews and comments.

Rushabh, let me know if you have any follow up comments.

> NameNode should optionally exit if it detects FsImage corruption
> 
>
> Key: HDFS-13314
> URL: https://issues.apache.org/jira/browse/HDFS-13314
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.2
>
> Attachments: HDFS-13314.01.patch, HDFS-13314.02.patch, 
> HDFS-13314.03.patch, HDFS-13314.04.patch, HDFS-13314.05.patch
>
>
> The NameNode should optionally exit after writing an FsImage if it detects 
> the following kinds of corruptions:
> # INodeReference pointing to non-existent INode
> # Duplicate entries in snapshot deleted diff list.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13362) add a flag to skip the libhdfs++ build

2018-03-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417981#comment-16417981
 ] 

Jason Lowe commented on HDFS-13362:
---

The maven plugin for cmake automatically detects the number of processors in 
the machine and uses that for the build parallelism.  From CompileMojo#runMake:
{code}
  public void runMake() throws MojoExecutionException {
    List<String> cmd = new LinkedList<String>();
    cmd.add("make");
    cmd.add("-j");
    cmd.add(String.valueOf(availableProcessors));
    cmd.add("VERBOSE=1");
{code}

I'm not an expert on the maven cmake plugin, but I do know the hadoop-common 
native builds and hadoop-yarn-server-nodemanager native builds use it.  See the 
cmake-compile goal definitions in their respective pom files for examples of 
how to build with cmake and run cetest for unit tests.  Fixing the dependencies 
for parallel builds will be a prerequisite since the maven cmake plugin always 
builds in parallel.

As for avoiding the build via a maven-level flag rather than a cmake flag, we 
should be able to leverage the {{activation}} portion of the profile 
configuration in the pom to disable the native build without invoking cmake at 
all.  HADOOP-13999 did something very similar for the skipShade flag to avoid 
the expensive shaded hadoop-client build.

> add a flag to skip the libhdfs++ build
> --
>
> Key: HDFS-13362
> URL: https://issues.apache.org/jira/browse/HDFS-13362
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Priority: Minor
> Attachments: HDFS-13362.000.patch
>
>
> libhdfs++ has significantly increased clean build times for the native client 
> on trunk.  This covers adding a flag that would let people build libhdfs 
> without all of libhdfs++ if they don't need it; it should be built by default 
> to maintain compatibility with as many environments as possible.
> Some thoughts:
> -The increase in compile time only impacts clean builds.  Incremental 
> rebuilds aren't significantly more expensive than they used to be if the code 
> hasn't changed.
> -Compile times for libhdfs++ can most likely be reduced but that's a longer 
> term project.  boost::asio and tr1::optional are header-only libraries that 
> are heavily templated so every compilation unit that includes them has to do 
> a lot of parsing.
> Is it common to do completely clean builds frequently for interactive users?  
> Are there opinions on what would be an acceptable compilation time?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13365) RBF: Adding trace support

2018-03-28 Thread JIRA
Íñigo Goiri created HDFS-13365:
--

 Summary: RBF: Adding trace support
 Key: HDFS-13365
 URL: https://issues.apache.org/jira/browse/HDFS-13365
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri


We should support HTrace and add spans.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13364) RBF: Support NamenodeProtocol in the Router

2018-03-28 Thread JIRA
Íñigo Goiri created HDFS-13364:
--

 Summary: RBF: Support NamenodeProtocol in the Router
 Key: HDFS-13364
 URL: https://issues.apache.org/jira/browse/HDFS-13364
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri


The Router should support the NamenodeProtocol to get blocks, versions, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports

2018-03-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417936#comment-16417936
 ] 

Íñigo Goiri commented on HDFS-13347:


The conflicts were related to erasure coding and a minor difference between 
java7/8.
I added [^HDFS-13347-branch-2.000.patch] with the actual diff for completeness 
and did the cherry-pick for branch-2 and branch-2.9.
Thanks [~linyiqun] for committing the rest and [~giovanni.fumarola], 
[~linyiqun], and [~shahrs87] for the review.
[~virajith], I'll take care of your comments in a follow up JIRA.

> RBF: Cache datanode reports
> ---
>
> Key: HDFS-13347
> URL: https://issues.apache.org/jira/browse/HDFS-13347
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 2.10.0, 2.9.1, 3.0.2, 3.2.0, 3.1.1
>
> Attachments: HDFS-13347-branch-2.000.patch, HDFS-13347.000.patch, 
> HDFS-13347.001.patch, HDFS-13347.002.patch, HDFS-13347.003.patch, 
> HDFS-13347.004.patch, HDFS-13347.005.patch, HDFS-13347.006.patch
>
>
> Getting the datanode reports is an expensive operation and can be executed 
> very frequently by the UI and watchdogs. We should cache this information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-03-28 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HDFS-13363:
-

Assignee: Gabor Bota

> Record file path when FSDirAclOp throws AclException
> 
>
> Key: HDFS-13363
> URL: https://issues.apache.org/jira/browse/HDFS-13363
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>
> When AclTransformation methods throw an AclException, they do not record the 
> file path involved. Therefore, even when an exception is thrown, we never 
> know which file has the invalid ACLs.
>  
> These AclTransformation methods are invoked from FSDirAclOp methods, which know 
> the file path. The FSDirAclOp methods can catch the AclException and add 
> the file path to the error message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13347) RBF: Cache datanode reports

2018-03-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13347:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.1
               2.10.0
       Status: Resolved  (was: Patch Available)

> RBF: Cache datanode reports
> ---
>
> Key: HDFS-13347
> URL: https://issues.apache.org/jira/browse/HDFS-13347
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 2.10.0, 2.9.1, 3.0.2, 3.2.0, 3.1.1
>
> Attachments: HDFS-13347-branch-2.000.patch, HDFS-13347.000.patch, 
> HDFS-13347.001.patch, HDFS-13347.002.patch, HDFS-13347.003.patch, 
> HDFS-13347.004.patch, HDFS-13347.005.patch, HDFS-13347.006.patch
>
>
> Getting the datanode reports is an expensive operation and can be executed 
> very frequently by the UI and watchdogs. We should cache this information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13347) RBF: Cache datanode reports

2018-03-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13347:
---
Attachment: HDFS-13347-branch-2.000.patch

> RBF: Cache datanode reports
> ---
>
> Key: HDFS-13347
> URL: https://issues.apache.org/jira/browse/HDFS-13347
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 2.10.0, 2.9.1, 3.0.2, 3.2.0, 3.1.1
>
> Attachments: HDFS-13347-branch-2.000.patch, HDFS-13347.000.patch, 
> HDFS-13347.001.patch, HDFS-13347.002.patch, HDFS-13347.003.patch, 
> HDFS-13347.004.patch, HDFS-13347.005.patch, HDFS-13347.006.patch
>
>
> Getting the datanode reports is an expensive operation and can be executed 
> very frequently by the UI and watchdogs. We should cache this information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13292) Crypto command should give proper exception when key is already exist for zone directory

2018-03-28 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417925#comment-16417925
 ] 

Surendra Singh Lilhore commented on HDFS-13292:
---

Thanks for the patch [~RANith] and thanks [~shahrs87] for the discussion.
{quote}The only downside of throwing {{not an empty directory}} exception first 
is some user might delete the contents of directory and not realize it is 
already in an EZ.
{quote}
I am thinking of the same case. I feel we should fix this: the client should get 
clear information from the exception.
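
To make that concrete, here is a sketch of the kind of reordering being discussed in {{EncryptionZoneManager#createEncryptionZone}} (written from memory, so treat the field and method names as assumptions; this is not the attached patch): surface the "already an encryption zone" error before the non-empty-directory one.

{code:java}
// Sketch only: report the more specific condition first.
final INode srcINode = srcIIP.getLastINode();
if (hasCreatedEncryptionZone()
    && encryptionZones.get(srcINode.getId()) != null) {
  throw new IOException(
      "Directory " + src + " is already an encryption zone.");
}
if (dir.isNonEmptyDirectory(srcIIP)) {
  throw new IOException(
      "Attempt to create an encryption zone for a non-empty directory.");
}
{code}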

> Crypto command should give proper exception when key is already exist for 
> zone directory
> 
>
> Key: HDFS-13292
> URL: https://issues.apache.org/jira/browse/HDFS-13292
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, kms
>Affects Versions: 2.8.3
>Reporter: Harshakiran Reddy
>Priority: Major
> Attachments: HDFS-13292.001.patch
>
>
> {{Scenario:}}
>  # Create a Dir
>  # Create EZ for the above dir with Key1
>  # Again you can try to create ZONE for same directory with Diff Key i.e Key2
> {noformat}
> hadoopclient> hadoop key list
> Listing keys for KeyProvider: 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@152aa092
> key2
> key1
> hadoopclient> hdfs dfs -mkdir /kms
> hadoopclient> hdfs dfs -put bigdata_env /kms/file1
> hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
> RemoteException: Attempt to create an encryption zone for a non-empty 
> directory.
> hadoopclient> hdfs dfs -rmr /kms/file1
> rmr: DEPRECATED: Please use '-rm -r' instead.
> Deleted /kms/file1
> hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
> Added encryption zone /kms
> hadoopclient> hdfs crypto -createZone -keyName key2 -path /kms
> RemoteException: Attempt to create an encryption zone for a non-empty 
> directory.
> hadoopclient>
>  {noformat}
> Actual Output:
> ===
> {{RemoteException: Attempt to create an encryption zone for a non-empty 
> directory}}
> Expected Output:
> =
> {{An exception stating that the directory already has an encryption zone, so a 
> new zone cannot be created on it}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-03-28 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13363:
---
Description: 
When AclTransformation methods throw an AclException, they do not record the file 
path involved. Therefore, even when an exception is thrown, we never know which 
file has the invalid ACLs.

 

These AclTransformation methods are invoked from FSDirAclOp methods, which know 
the file path. The FSDirAclOp methods can catch the AclException and add 
the file path to the error message.

  was:
When AclTransformation methods throws AclException, it does not record the file 
path that has the exception. These AclTransformation methods are invoked in 
FSDirAclOp methods, which know the file path. Therefore even if it throws an 
exception, we would never know which file has those invalid ACLs.

 

These FSDirAclOp methods can catch AclException, and then add the file path in 
the error message.


> Record file path when FSDirAclOp throws AclException
> 
>
> Key: HDFS-13363
> URL: https://issues.apache.org/jira/browse/HDFS-13363
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>
> When AclTransformation methods throw an AclException, they do not record the 
> file path involved. Therefore, even when an exception is thrown, we never 
> know which file has the invalid ACLs.
>  
> These AclTransformation methods are invoked from FSDirAclOp methods, which know 
> the file path. The FSDirAclOp methods can catch the AclException and add 
> the file path to the error message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-03-28 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-13363:
--

 Summary: Record file path when FSDirAclOp throws AclException
 Key: HDFS-13363
 URL: https://issues.apache.org/jira/browse/HDFS-13363
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


When AclTransformation methods throw an AclException, they do not record the file 
path involved. These AclTransformation methods are invoked from FSDirAclOp 
methods, which know the file path. Therefore, even when an exception is thrown, 
we never know which file has the invalid ACLs.

 

The FSDirAclOp methods can catch the AclException and add the file path to 
the error message.
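
A rough sketch of that wrapping idea (just an illustration of the proposal above, not an actual patch; the variable names are illustrative):

{code:java}
// Inside an FSDirAclOp-style method that already knows the path 'src':
try {
  List<AclEntry> newAcl = AclTransformation.mergeAclEntries(existingAcl, aclSpec);
  ...
} catch (AclException e) {
  // Re-throw with the path so operators can find the offending file.
  throw new AclException("Invalid ACL on path " + src + ": " + e.getMessage());
}
{code}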



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13357) Improve AclException message "Invalid ACL: only directories may have a default ACL."

2018-03-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417903#comment-16417903
 ] 

Wei-Chiu Chuang commented on HDFS-13357:


+1.

I also found a few other places where the file path would be useful when there are 
AclExceptions. I will file jiras for those.

> Improve AclException message "Invalid ACL: only directories may have a 
> default ACL."
> 
>
> Key: HDFS-13357
> URL: https://issues.apache.org/jira/browse/HDFS-13357
> Project: Hadoop HDFS
>  Issue Type: Improvement
> Environment: CDH 5.10.1, Kerberos, KMS, encryption at rest, Sentry, 
> Hive
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-13357.001.patch
>
>
> I found this warning message in a HDFS cluster
> {noformat}
> 2018-03-27 19:15:28,841 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 90 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setAcl from 
> 10.0.0.1:39508 Call#79376996
> Retry#0: org.apache.hadoop.hdfs.protocol.AclException: Invalid ACL: only 
> directories may have a default ACL.
> 2018-03-27 19:15:28,841 WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hive/host1.example@example.com (auth:KERBE
> ROS) cause:org.apache.hadoop.hdfs.protocol.AclException: Invalid ACL: only 
> directories may have a default ACL.
> {noformat}
> However it doesn't tell me which file had this invalid ACL.
> This cluster has Sentry enabled, so it is possible this invalid ACL doesn't 
> come from HDFS, but from Sentry.
> Filing this Jira to improve the message and add the file name to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13330) ShortCircuitCache#fetchOrCreate never retries

2018-03-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417880#comment-16417880
 ] 

Wei-Chiu Chuang commented on HDFS-13330:


Hi [~gabor.bota], thanks for the patch. I think this bug is more involved than 
it seems.

An infinite loop looks even scarier than a loop that never retries. Suppose 
there's a bug somewhere else and replicaInfoMap.get(key) returns null; this loop 
will never end, because it runs inside a lock and no one else will be able to 
update the map.

Could you also fix the findbugs error? In addition, a test case for this method 
is greatly appreciated.
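
For the record, one possible shape for a bounded retry (an illustration only, assuming a retry limit is acceptable; {{MAX_FETCH_ATTEMPTS}} is a hypothetical constant, and this is not the attached patch):

{code:java}
// Retry a few times instead of zero times or forever, and bail out if the map
// has no entry so we can never loop endlessly while holding the cache lock.
ShortCircuitReplicaInfo info = null;
for (int attempt = 0; attempt < MAX_FETCH_ATTEMPTS; attempt++) {
  if (closed) {
    LOG.trace("{}: can't fetchOrCreate {} because the cache is closed.",
        this, key);
    return null;
  }
  Waitable<ShortCircuitReplicaInfo> waitable = replicaInfoMap.get(key);
  if (waitable == null) {
    break;  // nothing to wait on; fall through to the create path below
  }
  try {
    info = fetch(key, waitable);
    break;
  } catch (RetriableException e) {
    LOG.debug("{}: retrying {}", this, e.getMessage());
  }
}
{code}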

> ShortCircuitCache#fetchOrCreate never retries
> -
>
> Key: HDFS-13330
> URL: https://issues.apache.org/jira/browse/HDFS-13330
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13330.001.patch, HDFS-13330.002.patch
>
>
> The following do .. while(false) loop seems useless to me. The code intended to 
> retry but it never worked. Let's fix it.
> {code:java:title=ShortCircuitCache#fetchOrCreate}
> ShortCircuitReplicaInfo info = null;
> do {
>   if (closed) {
>     LOG.trace("{}: can't fethchOrCreate {} because the cache is closed.",
>         this, key);
>     return null;
>   }
>   Waitable<ShortCircuitReplicaInfo> waitable = replicaInfoMap.get(key);
>   if (waitable != null) {
>     try {
>       info = fetch(key, waitable);
>     } catch (RetriableException e) {
>       LOG.debug("{}: retrying {}", this, e.getMessage());
>     }
>   }
> } while (false);{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13330) ShortCircuitCache#fetchOrCreate never retries

2018-03-28 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13330:
---
Priority: Major  (was: Minor)

> ShortCircuitCache#fetchOrCreate never retries
> -
>
> Key: HDFS-13330
> URL: https://issues.apache.org/jira/browse/HDFS-13330
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HDFS-13330.001.patch, HDFS-13330.002.patch
>
>
> The following do .. while(false) loop seems useless to me. The code intended to 
> retry but it never worked. Let's fix it.
> {code:java:title=ShortCircuitCache#fetchOrCreate}
> ShortCircuitReplicaInfo info = null;
> do {
>   if (closed) {
>     LOG.trace("{}: can't fethchOrCreate {} because the cache is closed.",
>         this, key);
>     return null;
>   }
>   Waitable<ShortCircuitReplicaInfo> waitable = replicaInfoMap.get(key);
>   if (waitable != null) {
>     try {
>       info = fetch(key, waitable);
>     } catch (RetriableException e) {
>       LOG.debug("{}: retrying {}", this, e.getMessage());
>     }
>   }
> } while (false);{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13330) ShortCircuitCache#fetchOrCreate never retries

2018-03-28 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13330:
---
Issue Type: Bug  (was: Improvement)

> ShortCircuitCache#fetchOrCreate never retries
> -
>
> Key: HDFS-13330
> URL: https://issues.apache.org/jira/browse/HDFS-13330
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HDFS-13330.001.patch, HDFS-13330.002.patch
>
>
> The following do .. while(false) loop seems useless to me. The code intended to 
> retry but it never worked. Let's fix it.
> {code:java:title=ShortCircuitCache#fetchOrCreate}
> ShortCircuitReplicaInfo info = null;
> do {
>   if (closed) {
>     LOG.trace("{}: can't fethchOrCreate {} because the cache is closed.",
>         this, key);
>     return null;
>   }
>   Waitable<ShortCircuitReplicaInfo> waitable = replicaInfoMap.get(key);
>   if (waitable != null) {
>     try {
>       info = fetch(key, waitable);
>     } catch (RetriableException e) {
>       LOG.debug("{}: retrying {}", this, e.getMessage());
>     }
>   }
> } while (false);{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13362) add a flag to skip the libhdfs++ build

2018-03-28 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417842#comment-16417842
 ] 

James Clampffer commented on HDFS-13362:


Attached a patch to turn the build off by passing 
-Dnative_cmake_args="-DSKIP_LIBHDFSPP_BUILD=TRUE" to maven.  Someone who knows 
the maven/ant side of things could most likely add a more explicit maven flag 
pretty easily.

[~jlowe] I think the current build method wasn't a deliberate choice as much as 
it was something that worked well enough so it never got much attention.  
libhdfs++ used to support parallel compilation but since that was never 
integrated into the maven build some dependency issues crept in.  Getting 
parallel builds going again would be really nice and might make the option to 
skip the build unnecessary.  I hadn't attempted that because I wasn't sure 
where to get a default number of build threads to use.  I can take a look at 
fixing the dependency declarations.

> add a flag to skip the libhdfs++ build
> --
>
> Key: HDFS-13362
> URL: https://issues.apache.org/jira/browse/HDFS-13362
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Priority: Minor
> Attachments: HDFS-13362.000.patch
>
>
> libhdfs++ has significantly increased clean build times for the native client 
> on trunk.  This covers adding a flag that would let people build libhdfs 
> without all of libhdfs++ if they don't need it; it should be built by default 
> to maintain compatibility with as many environments as possible.
> Some thoughts:
> -The increase in compile time only impacts clean builds.  Incremental 
> rebuilds aren't significantly more expensive than they used to be if the code 
> hasn't changed.
> -Compile times for libhdfs++ can most likely be reduced but that's a longer 
> term project.  boost::asio and tr1::optional are header-only libraries that 
> are heavily templated so every compilation unit that includes them has to do 
> a lot of parsing.
> Is it common to do completely clean builds frequently for interactive users?  
> Are there opinions on what would be an acceptable compilation time?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13359) DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream

2018-03-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417828#comment-16417828
 ] 

Wei-Chiu Chuang commented on HDFS-13359:


Hi [~linyiqun] thanks for the patch!

Could you shed a little more light on why changing from an object lock to a 
ReentrantLock improves locking? Is it because it is a fair lock?

Thank you
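
For context, the description below proposes reusing the {{ReplicaMap}} lock; a minimal sketch of that change, assuming the map's {{AutoCloseableLock}} is reachable from {{FsDatasetImpl}} (the {{replicaMapLock}} field name is a placeholder, and this is not the attached patch):

{code:java}
@Override // FsDatasetSpi
public InputStream getBlockInputStream(ExtendedBlock b,
    long seekOffset) throws IOException {
  ReplicaInfo info;
  // Placeholder name for the ReplicaMap's existing AutoCloseableLock.
  try (AutoCloseableLock lock = replicaMapLock.acquire()) {
    info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
  }
  ...
}
{code}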

> DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream
> -
>
> Key: HDFS-13359
> URL: https://issues.apache.org/jira/browse/HDFS-13359
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13359.001.patch, stack.jpg
>
>
> DataXceiver hung due to the lock acquired by 
>  {{FsDatasetImpl#getBlockInputStream}} (stack trace attached).
> {code:java}
>   @Override // FsDatasetSpi
>   public InputStream getBlockInputStream(ExtendedBlock b,
>       long seekOffset) throws IOException {
>     ReplicaInfo info;
>     synchronized(this) {
>       info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
>     }
>     ...
>   }
> {code}
> The {{synchronized(this)}} lock used here is expensive; there is already an 
> {{AutoCloseableLock}} defined for {{ReplicaMap}}. We can use it 
> instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13362) add a flag to skip the libhdfs++ build

2018-03-28 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-13362:
---
Attachment: HDFS-13362.000.patch

> add a flag to skip the libhdfs++ build
> --
>
> Key: HDFS-13362
> URL: https://issues.apache.org/jira/browse/HDFS-13362
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Priority: Minor
> Attachments: HDFS-13362.000.patch
>
>
> libhdfs++ has significantly increased clean build times for the native client 
> on trunk.  This covers adding a flag that would let people build libhdfs 
> without all of libhdfs++ if they don't need it; it should be built by default 
> to maintain compatibility with as many environments as possible.
> Some thoughts:
> -The increase in compile time only impacts clean builds.  Incremental 
> rebuilds aren't significantly more expensive than they used to be if the code 
> hasn't changed.
> -Compile times for libhdfs++ can most likely be reduced but that's a longer 
> term project.  boost::asio and tr1::optional are header-only libraries that 
> are heavily templated so every compilation unit that includes them has to do 
> a lot of parsing.
> Is it common to do completely clean builds frequently for interactive users?  
> Are there opinions on what would be an acceptable compilation time?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12749) DN may not send block report to NN after NN restart

2018-03-28 Thread He Xiaoqiao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417802#comment-16417802
 ] 

He Xiaoqiao commented on HDFS-12749:


Thanks [~kihwal] for your detailed comment.
{quote}You can keep your change in register() and simply add the same logic to 
the processCommand()'s catch block. I.e. crack open the RemoteException and 
stop the actor thread if it is one of the terminal exceptions.{quote}
I think catching and cracking open {{RemoteException}} in #processCommand may not 
resolve this issue, since #processCommand throws an {{IOException}} that wraps a 
{{SocketTimeoutException}}, as [~tanyuxin] mentioned in the description:
{quote}java.io.IOException: Failed on local exception: java.io.IOException: 
java.net.SocketTimeoutException: 6 millis timeout while waiting for channel 
to be ready for read. ch : java.nio.channels.SocketChannel{quote}
Following your suggestion, would it be better to create a new issue for stopping 
the actor thread when it meets a fatal or terminal exception?
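
To sketch the shape Kihwal describes (an illustration only; the exception classes listed are just examples of what might count as terminal, and {{stopActor()}} is a placeholder, not an existing method):

{code:java}
try {
  processCommand(cmds);
} catch (RemoteException re) {
  // Crack open the RemoteException and stop the actor only for terminal errors.
  IOException unwrapped = re.unwrapRemoteException(
      DisallowedDatanodeException.class,
      UnregisteredNodeException.class);
  if (unwrapped != re) {
    stopActor();  // placeholder: terminate this BPServiceActor
  } else {
    LOG.warn("Error processing datanode Command", re);
  }
} catch (IOException ioe) {
  // e.g. a SocketTimeoutException wrapped in a plain IOException: keep the actor alive.
  LOG.warn("Error processing datanode Command", ioe);
}
{code}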

> DN may not send block report to NN after NN restart
> ---
>
> Key: HDFS-12749
> URL: https://issues.apache.org/jira/browse/HDFS-12749
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1, 2.8.3, 2.7.5, 3.0.0, 2.9.1
>Reporter: TanYuxin
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-12749-branch-2.7.002.patch, 
> HDFS-12749-trunk.003.patch, HDFS-12749.001.patch
>
>
> Our cluster now has thousands of DNs and millions of files and blocks. When the 
> NN restarts, its load is very high.
> After the NN restarts, the DN calls the BPServiceActor#reRegister method to register. 
> But the register RPC gets an IOException since the NN is busy dealing with block 
> reports.  The exception is caught at BPServiceActor#processCommand.
> The caught IOException is:
> {code:java}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Error processing 
> datanode Command
> java.io.IOException: Failed on local exception: java.io.IOException: 
> java.net.SocketTimeoutException: 6 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/DataNode_IP:Port remote=NameNode_Host/IP:Port]; Host Details : local 
> host is: "DataNode_Host/Datanode_IP"; destination host is: 
> "NameNode_Host":Port;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
> at org.apache.hadoop.ipc.Client.call(Client.java:1474)
> at org.apache.hadoop.ipc.Client.call(Client.java:1407)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy13.registerDatanode(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.registerDatanode(DatanodeProtocolClientSideTranslatorPB.java:126)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:793)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.reRegister(BPServiceActor.java:926)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:604)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:711)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:864)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The uncaught IOException breaks BPServiceActor#register, and the block 
> report cannot be sent immediately. 
> {code}
>   /**
>    * Register one bp with the corresponding NameNode
>    *
>    * The bpDatanode needs to register with the namenode on startup in order
>    * 1) to report which storage it is serving now and
>    * 2) to receive a registrationID
>    *
>    * issued by the namenode to recognize registered datanodes.
>    *
>    * @param nsInfo current NamespaceInfo
>    * @see FSNamesystem#registerDatanode(DatanodeRegistration)
>    * @throws IOException
>    */
>   void register(NamespaceInfo nsInfo) throws IOException {
>     // The handshake() phase loaded the block pool storage
>     // off disk - so update the bpRegistration object from that info
>     DatanodeRegistration newBpRegistration = bpos.createRegistration();
>     LOG.info(this + " beginning handshake with NN");
>     while (shouldRun()) {
>       try {
>         // Use returned registration from namenode with updated fields
>         newBpRegistration = bpNamenode.registerDatanode(newBpRegistration);
>         newBpRegistration.setNamespaceInfo(nsInfo);
>         bpRegistratio

[jira] [Updated] (HDFS-13358) RBF: Support for Delegation Token

2018-03-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13358:
---
Description: HDFS Router should support issuing / managing HDFS delegation 
tokens.

> RBF: Support for Delegation Token
> -
>
> Key: HDFS-13358
> URL: https://issues.apache.org/jira/browse/HDFS-13358
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Sherwood Zheng
>Assignee: Sherwood Zheng
>Priority: Major
>
> HDFS Router should support issuing / managing HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13289) RBF: TestConnectionManager#testCleanup() test case need correction

2018-03-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417794#comment-16417794
 ] 

Íñigo Goiri commented on HDFS-13289:


The issue with TestRouterWebHDFSContractAppend is similar to the one with 
TestRouterWebHDFSContractCreate which is tracked in HDFS-13353; so we are good 
here.
Regarding the TestConnectionManager error, the negative case was triggered, so 
we have some issue here.
In {{fail("User is not present.");}}, we should track the user that is failing 
too.
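
In other words, something along these lines (a sketch of the correction, not the attached patch):

{code:java}
// Use the same user for the pool id and for the check.
poolMap.put(new ConnectionPoolId(TEST_USER3, TEST_NN_ADDRESS), pool3);
connManager.cleanup(pool3);
checkPoolConnections(TEST_USER3, 2, 0);
{code}

and, inside {{checkPoolConnections}}, when no pool entry matches the given user, fail with the user in the message, e.g. {{fail("User " + ugi.getUserName() + " not found in the connection pool")}}.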

> RBF: TestConnectionManager#testCleanup() test case need correction
> --
>
> Key: HDFS-13289
> URL: https://issues.apache.org/jira/browse/HDFS-13289
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13289.001.patch, HDFS-13289.002.patch
>
>
> In TestConnectionManager#testCleanup() 
>  
> {code:java}
> // Make sure the number of connections doesn't go below minSize
> ConnectionPool pool3 = new ConnectionPool(
> conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10);
> addConnectionsToPool(pool3, 10, 0);
> poolMap.put(new ConnectionPoolId(TEST_USER2, TEST_NN_ADDRESS), pool3);
> connManager.cleanup(pool3);
> checkPoolConnections(TEST_USER3, 2, 0);
> {code}
> this part needs correction.
> Here the new ConnectionPoolId is created with TEST_USER2, but checkPoolConnections 
> is called with TEST_USER3. 
> The checkPoolConnections method validates numOfConns and numOfActiveConns only when 
> {code:java}
> if (e.getKey().getUgi() == ugi)
> {code}
> holds. In this case the *if* condition returns *false* for TEST_USER3, so the 
> test case will pass no matter what values are passed to checkPoolConnections.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client

2018-03-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417737#comment-16417737
 ] 

Íñigo Goiri commented on HDFS-13248:


[~ajayydv], I agree.
I tried to go with the token approach but the tokens weren't sent at all.
I think we need some more groundwork, like HDFS-13358, for that.
If you have background on that, feel free to give it a try.

Thanks [~linyiqun], we can move the read/write logic into the NN side.
Let's see if we can do it with tokens though.

For the test, I tried to do something with the MiniDFSCluster but all the 
addresses are the same.
I tried to force addresses like 127.0.0.2 and 127.0.0.3 for the Router and the 
Client but it requires more wiring to set the origin IP in the client.
There is some logic to set it for the DNs but not for the user.

> RBF: Namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, 
> clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg
>
>
> When a put operation is executed via the router, the NameNode will choose the block 
> location for the router, not for the real client. This will affect the file's 
> locality.
> I think on both NameNode and Router, we should add a new addBlock method, or 
> add a parameter for the current addBlock method, to pass the real client 
> information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-03-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417724#comment-16417724
 ] 

Íñigo Goiri commented on HDFS-13353:


The last run for HDFS-13289 shows an error with 
TestRouterWebHDFSContractAppend: 
[report|https://builds.apache.org/job/PreCommit-HDFS-Build/23706/testReport/org.apache.hadoop.fs.contract.router.web/TestRouterWebHDFSContractAppend/testRenameFileBeingAppended/].
I think this might be related.

> RBF: TestRouterWebHDFSContractCreate failed
> ---
>
> Key: HDFS-13353
> URL: https://issues.apache.org/jira/browse/HDFS-13353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13353.1.patch
>
>
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.685 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testCreatedFileIsVisibleOnFlush(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.147 s  <<< ERROR!
> java.io.FileNotFoundException: expected path to be visible before file 
> closed: not found 
> webhdfs://0.0.0.0:43796/test/testCreatedFileIsVisibleOnFlush in 
> webhdfs://0.0.0.0:43796/test
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:936)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:914)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreatedFileIsVisibleOnFlush(AbstractContractCreateTest.java:254)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /test/testCreatedFileIsVisibleOnFlush
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:877)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:843)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:676)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1074)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1085)
>   at 
> org.apache.hado

[jira] [Commented] (HDFS-13289) RBF: TestConnectionManager#testCleanup() test case need correction

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417684#comment-16417684
 ] 

genericqa commented on HDFS-13289:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 21s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.fs.contract.router.web.TestRouterWebHDFSContractAppend |
|   | hadoop.hdfs.server.federation.router.TestConnectionManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13289 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916621/HDFS-13289.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 08d71c5d988c 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 411993f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23706/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23706/testReport/ |
| Max. process+thread count | 945 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/P

[jira] [Commented] (HDFS-13362) add a flag to skip the libhdfs++ build

2018-03-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417665#comment-16417665
 ] 

Jason Lowe commented on HDFS-13362:
---

Part of the problem with the excessive build time is that the build isn't being 
performed in parallel.  I noticed we're not using the cmake plugin but rather 
invoking cmake and make directly via the ant plugin.  Is there a good reason 
not to use the cmake plugin like all the other native builds in the project do?  
Doing so would automatically leverage parallel builds.

I tried forcing a parallel build manually by specifying -Dnative_make_args=-j4 
but it failed with a missing ClientNamenodeProtocol.pb.h.  Looks like the 
dependencies aren't fully specified in the makefile, which may explain why we 
can't use the cmake plugin.  I think fixing automatic parallel builds would 
significantly improve native build time on most setups.


> add a flag to skip the libhdfs++ build
> --
>
> Key: HDFS-13362
> URL: https://issues.apache.org/jira/browse/HDFS-13362
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Priority: Minor
>
> libhdfs++ has significantly increased clean build times for the native client 
> on trunk.  This covers adding a flag that would let people build libhdfs 
> without all of libhdfs++ if they don't need it; it should be built by default 
> to maintain compatibility with as many environments as possible.
> Some thoughts:
> -The increase in compile time only impacts clean builds.  Incremental 
> rebuilds aren't significantly more expensive than they used to be if the code 
> hasn't changed.
> -Compile times for libhdfs++ can most likely be reduced but that's a longer 
> term project.  boost::asio and tr1::optional are header-only libraries that 
> are heavily templated so every compilation unit that includes them has to do 
> a lot of parsing.
> Is it common to do completely clean builds frequently for interactive users?  
> Are there opinions on what would be an acceptable compilation time?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13362) add a flag to skip the libhdfs++ build

2018-03-28 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-13362:
---
Description: 
libhdfs++ has significantly increased clean build times for the native client 
on trunk.  This covers adding a flag that would let people build libhdfs 
without all of libhdfs++ if they don't need it; it should be built by default 
to maintain compatibility with as many environments as possible.

Some thoughts:
-The increase in compile time only impacts clean builds.  Incremental rebuilds 
aren't significantly more expensive than they used to be if the code hasn't 
changed.
-Compile times for libhdfs++ can most likely be reduced but that's a longer 
term project.  boost::asio and tr1::optional are header-only libraries that are 
heavily templated so every compilation unit that includes them has to do a lot 
of parsing.

Is it common to do completely clean builds frequently for interactive users?  
Are there opinions on what would be an acceptable compilation time?

  was:
libhdfs++ has significantly increased build times for the native client on 
trunk.  This covers adding a flag that would let people build libhdfs without 
all of libhdfs++ if they don't need it; it should be built by default to 
maintain compatibility with as many environments as possible.

Some thoughts:
-The increase in compile time only impacts clean builds.  Incremental rebuilds 
aren't significantly more expensive than they used to be if the code hasn't 
changed.
-Compile times for libhdfs++ can most likely be reduced but that's a longer 
term project.  boost::asio and tr1::optional are header-only libraries that are 
heavily templated so every compilation unit that includes them has to do a lot 
of parsing.

Is it common to do completely clean builds frequently for interactive users?  
Are there opinions on what would be an acceptable compilation time?


> add a flag to skip the libhdfs++ build
> --
>
> Key: HDFS-13362
> URL: https://issues.apache.org/jira/browse/HDFS-13362
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Priority: Minor
>
> libhdfs++ has significantly increased clean build times for the native client 
> on trunk.  This covers adding a flag that would let people build libhdfs 
> without all of libhdfs++ if they don't need it; it should be built by default 
> to maintain compatibility with as many environments as possible.
> Some thoughts:
> -The increase in compile time only impacts clean builds.  Incremental 
> rebuilds aren't significantly more expensive than they used to be if the code 
> hasn't changed.
> -Compile times for libhdfs++ can most likely be reduced but that's a longer 
> term project.  boost::asio and tr1::optional are header-only libraries that 
> are heavily templated so every compilation unit that includes them has to do 
> a lot of parsing.
> Is it common to do completely clean builds frequently for interactive users?  
> Are there opinions on what would be an acceptable compilation time?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13362) add a flag to skip the libhdfs++ build

2018-03-28 Thread James Clampffer (JIRA)
James Clampffer created HDFS-13362:
--

 Summary: add a flag to skip the libhdfs++ build
 Key: HDFS-13362
 URL: https://issues.apache.org/jira/browse/HDFS-13362
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: James Clampffer


libhdfs++ has significantly increased build times for the native client on 
trunk.  This covers adding a flag that would let people build libhdfs without 
all of libhdfs++ if they don't need it; it should be built by default to 
maintain compatibility with as many environments as possible.

Some thoughts:
-The increase in compile time only impacts clean builds.  Incremental rebuilds 
aren't significantly more expensive than they used to be if the code hasn't 
changed.
-Compile times for libhdfs++ can most likely be reduced but that's a longer 
term project.  boost::asio and tr1::optional are header-only libraries that are 
heavily templated so every compilation unit that includes them has to do a lot 
of parsing.

Is it common to do completely clean builds frequently for interactive users?  
Are there opinions on what would be an acceptable compilation time?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-03-28 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417535#comment-16417535
 ] 

Wei Yan commented on HDFS-13353:


Thanks for the patch [~tasanuma0829]. Given that webhdfs doesn't support 
hflush/hsync, maybe we can disable this set of testcases to avoid flaky 
failures, including both testCreatedFileIsImmediatelyVisible and 
testCreatedFileIsVisibleOnFlush.

I tried something yesterday, and I think the current NN WebHDFS tests 
(TestWebHdfsFileSystemContract.java) also don't cover this kind of testcase.
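
One simple way to do that in {{TestRouterWebHDFSContractCreate}} would be to override the affected cases and skip them (a sketch of the idea, not the attached patch):

{code:java}
@Override
public void testCreatedFileIsVisibleOnFlush() throws Throwable {
  // WebHDFS does not support hflush/hsync, so flush visibility cannot be asserted.
  ContractTestUtils.skip("WebHDFS does not support hflush/hsync");
}
{code}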

> RBF: TestRouterWebHDFSContractCreate failed
> ---
>
> Key: HDFS-13353
> URL: https://issues.apache.org/jira/browse/HDFS-13353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13353.1.patch
>
>
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.685 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testCreatedFileIsVisibleOnFlush(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.147 s  <<< ERROR!
> java.io.FileNotFoundException: expected path to be visible before file 
> closed: not found 
> webhdfs://0.0.0.0:43796/test/testCreatedFileIsVisibleOnFlush in 
> webhdfs://0.0.0.0:43796/test
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:936)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:914)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreatedFileIsVisibleOnFlush(AbstractContractCreateTest.java:254)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /test/testCreatedFileIsVisibleOnFlush
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:877)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:843)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:676)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1074)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsF

[jira] [Commented] (HDFS-13289) RBF: TestConnectionManager#testCleanup() test case needs correction

2018-03-28 Thread Dibyendu Karmakar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417516#comment-16417516
 ] 

Dibyendu Karmakar commented on HDFS-13289:
--

Thanks [~elgoiri] for reviewing. I have updated the patch: fixed the checkstyle 
issue and, as per your suggestion, handled the negative case.
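
For illustration only (the attached patch may differ), the core of the correction is 
simply to key the pool and the assertion on the same user:
{code:java}
// Hypothetical sketch, not the actual patch: register pool3 under TEST_USER3 so that
// checkPoolConnections(TEST_USER3, ...) really validates the pool it is checking.
ConnectionPool pool3 = new ConnectionPool(
    conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10);
addConnectionsToPool(pool3, 10, 0);
poolMap.put(new ConnectionPoolId(TEST_USER3, TEST_NN_ADDRESS), pool3);
connManager.cleanup(pool3);
// With matching users, the numOfConns/numOfActiveConns assertions are now exercised.
checkPoolConnections(TEST_USER3, 2, 0);
{code}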

> RBF: TestConnectionManager#testCleanup() test case needs correction
> --
>
> Key: HDFS-13289
> URL: https://issues.apache.org/jira/browse/HDFS-13289
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13289.001.patch, HDFS-13289.002.patch
>
>
> In TestConnectionManager#testCleanup() 
>  
> {code:java}
> // Make sure the number of connections doesn't go below minSize
> ConnectionPool pool3 = new ConnectionPool(
> conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10);
> addConnectionsToPool(pool3, 10, 0);
> poolMap.put(new ConnectionPoolId(TEST_USER2, TEST_NN_ADDRESS), pool3);
> connManager.cleanup(pool3);
> checkPoolConnections(TEST_USER3, 2, 0);
> {code}
> this part needs correction.
> Here the new ConnectionPoolId is created with TEST_USER2, but checkPoolConnections 
> is done using TEST_USER3. 
> In the checkPoolConnections method 
> {code:java}
> if (e.getKey().getUgi() == ugi)
> {code}
> only when this condition holds does it validate numOfConns and numOfActiveConns. In 
> this case the *if* condition returns *false* for TEST_USER3, so whatever values you 
> pass to the checkPoolConnections method, the test case will pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13289) RBF: TestConnectionManager#testCleanup() test case needs correction

2018-03-28 Thread Dibyendu Karmakar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dibyendu Karmakar updated HDFS-13289:
-
Status: Patch Available  (was: Open)

> RBF: TestConnectionManager#testCleanup() test case needs correction
> --
>
> Key: HDFS-13289
> URL: https://issues.apache.org/jira/browse/HDFS-13289
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13289.001.patch, HDFS-13289.002.patch
>
>
> In TestConnectionManager#testCleanup() 
>  
> {code:java}
> // Make sure the number of connections doesn't go below minSize
> ConnectionPool pool3 = new ConnectionPool(
> conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10);
> addConnectionsToPool(pool3, 10, 0);
> poolMap.put(new ConnectionPoolId(TEST_USER2, TEST_NN_ADDRESS), pool3);
> connManager.cleanup(pool3);
> checkPoolConnections(TEST_USER3, 2, 0);
> {code}
> this part needs correction.
> Here the new ConnectionPoolId is created with TEST_USER2, but checkPoolConnections 
> is done using TEST_USER3. 
> In the checkPoolConnections method 
> {code:java}
> if (e.getKey().getUgi() == ugi)
> {code}
> only when this condition holds does it validate numOfConns and numOfActiveConns. In 
> this case the *if* condition returns *false* for TEST_USER3, so whatever values you 
> pass to the checkPoolConnections method, the test case will pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13289) RBF: TestConnectionManager#testCleanup() test case needs correction

2018-03-28 Thread Dibyendu Karmakar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dibyendu Karmakar updated HDFS-13289:
-
Status: Open  (was: Patch Available)

> RBF: TestConnectionManager#testCleanup() test case needs correction
> --
>
> Key: HDFS-13289
> URL: https://issues.apache.org/jira/browse/HDFS-13289
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13289.001.patch, HDFS-13289.002.patch
>
>
> In TestConnectionManager#testCleanup() 
>  
> {code:java}
> // Make sure the number of connections doesn't go below minSize
> ConnectionPool pool3 = new ConnectionPool(
> conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10);
> addConnectionsToPool(pool3, 10, 0);
> poolMap.put(new ConnectionPoolId(TEST_USER2, TEST_NN_ADDRESS), pool3);
> connManager.cleanup(pool3);
> checkPoolConnections(TEST_USER3, 2, 0);
> {code}
> this part needs correction.
> Here the new ConnectionPoolId is created with TEST_USER2, but checkPoolConnections 
> is done using TEST_USER3. 
> In the checkPoolConnections method 
> {code:java}
> if (e.getKey().getUgi() == ugi)
> {code}
> only when this condition holds does it validate numOfConns and numOfActiveConns. In 
> this case the *if* condition returns *false* for TEST_USER3, so whatever values you 
> pass to the checkPoolConnections method, the test case will pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13289) RBF: TestConnectionManager#testCleanup() test case needs correction

2018-03-28 Thread Dibyendu Karmakar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dibyendu Karmakar updated HDFS-13289:
-
Attachment: HDFS-13289.002.patch

> RBF: TestConnectionManager#testCleanup() test case needs correction
> --
>
> Key: HDFS-13289
> URL: https://issues.apache.org/jira/browse/HDFS-13289
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13289.001.patch, HDFS-13289.002.patch
>
>
> In TestConnectionManager#testCleanup() 
>  
> {code:java}
> // Make sure the number of connections doesn't go below minSize
> ConnectionPool pool3 = new ConnectionPool(
> conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10);
> addConnectionsToPool(pool3, 10, 0);
> poolMap.put(new ConnectionPoolId(TEST_USER2, TEST_NN_ADDRESS), pool3);
> connManager.cleanup(pool3);
> checkPoolConnections(TEST_USER3, 2, 0);
> {code}
> this part needs correction.
> Here the new ConnectionPoolId is created with TEST_USER2, but checkPoolConnections 
> is done using TEST_USER3. 
> In the checkPoolConnections method 
> {code:java}
> if (e.getKey().getUgi() == ugi)
> {code}
> only when this condition holds does it validate numOfConns and numOfActiveConns. In 
> this case the *if* condition returns *false* for TEST_USER3, so whatever values you 
> pass to the checkPoolConnections method, the test case will pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13341) Ozone: Move ozone specific ServiceRuntimeInfo utility to the framework

2018-03-28 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-13341:

Target Version/s: HDFS-7240

> Ozone: Move ozone specific ServiceRuntimeInfo utility to the framework
> --
>
> Key: HDFS-13341
> URL: https://issues.apache.org/jira/browse/HDFS-13341
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-13341-HDFS-7240.001.patch
>
>
> ServiceRuntimeInfo is a generic interface to provide common information via 
> JMX beans (such as build version, compile info, started time). 
> Currently it is used only by KSM/SCM; I suggest moving it from hadoop-commons 
> to the hadoop-hdsl/framework project.
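
As a rough illustration of the kind of information involved, a JMX-facing interface of 
this sort might look like the sketch below (method names here are assumptions based on 
the description above, not necessarily the actual ServiceRuntimeInfo interface):
{code:java}
// Illustrative only: an MXBean-style view of common service runtime information.
public interface ServiceRuntimeInfoSketch {

  /** Version of the running software, e.g. taken from the build. */
  String getSoftwareVersion();

  /** Compilation details such as date, user and branch. */
  String getCompileInfo();

  /** Time the service was started, in milliseconds since the epoch. */
  long getStartedTimeInMillis();
}
{code}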



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13360) Ozone: The configuration of implement of DataNodeServicePlugin should obtain from datanode instance

2018-03-28 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-13360:

Issue Type: Sub-task  (was: Improvement)
Parent: HDFS-7240

> Ozone: The configuration of  implement of DataNodeServicePlugin should obtain 
> from datanode instance
> ---
>
> Key: HDFS-13360
> URL: https://issues.apache.org/jira/browse/HDFS-13360
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: DENG FEI
>Priority: Blocker
>
> MiniOzoneClassicCluster configures Ozone as enabled, but ObjectStoreRestPlugin & 
> HdslDatanodeService load their configuration from resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13360) Ozone: The configuration of implement of DataNodeServicePlugin should obtain from datanode instance

2018-03-28 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-13360:

Summary: Ozone: The configuration of  implement of DataNodeServicePlugin 
should obtain from datanode instance  (was: The configuration of  implement of 
DataNodeServicePlugin should obtain from datanode instance)

> Ozone: The configuration of  implement of DataNodeServicePlugin should obtain 
> from datanode instance
> ---
>
> Key: HDFS-13360
> URL: https://issues.apache.org/jira/browse/HDFS-13360
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: DENG FEI
>Priority: Blocker
>
> MiniOzoneClassicCluster configures Ozone as enabled, but ObjectStoreRestPlugin & 
> HdslDatanodeService load their configuration from resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13360) Ozone: The configuration of implement of DataNodeServicePlugin should obtain from datanode instance

2018-03-28 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417380#comment-16417380
 ] 

Anu Engineer commented on HDFS-13360:
-

[~shahrs87] Thanks for the comment. Done.

> Ozone: The configuration of  implement of DataNodeServicePlugin should obtain 
> from datanode instance
> ---
>
> Key: HDFS-13360
> URL: https://issues.apache.org/jira/browse/HDFS-13360
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: DENG FEI
>Priority: Blocker
>
> MiniOzoneClassicCluster configures Ozone as enabled, but ObjectStoreRestPlugin & 
> HdslDatanodeService load their configuration from resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13357) Improve AclException message "Invalid ACL: only directories may have a default ACL."

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417362#comment-16417362
 ] 

genericqa commented on HDFS-13357:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13357 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916599/HDFS-13357.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4e2d0b99b60d 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 
12:52:38 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 411993f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23703/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23703/t

[jira] [Commented] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417359#comment-16417359
 ] 

genericqa commented on HDFS-13353:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m  5s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
|   | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13353 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916601/HDFS-13353.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7357083c5eed 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 411993f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23705/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23705/testReport/ |
| Max. process+thread count | 1431 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23705/console |
| Powered by

[jira] [Commented] (HDFS-13360) The configuration of implement of DataNodeServicePlugin should obtain from datanode instance

2018-03-28 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417339#comment-16417339
 ] 

Rushabh S Shah commented on HDFS-13360:
---

Can you please prepend the title of the jira with {{Ozone: }} ?

> The configuration of  implement of DataNodeServicePlugin should obtain from 
> datanode instance
> 
>
> Key: HDFS-13360
> URL: https://issues.apache.org/jira/browse/HDFS-13360
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: DENG FEI
>Priority: Blocker
>
> MiniOzoneClassicCluster configures Ozone as enabled, but ObjectStoreRestPlugin & 
> HdslDatanodeService load their configuration from resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417288#comment-16417288
 ] 

genericqa commented on HDFS-13243:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m  
4s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m  4s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  7s{color} | {color:orange} hadoop-hdfs-project: The patch generated 114 new 
+ 853 unchanged - 2 fixed = 967 total (was 855) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
16s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 9 new + 1 
unchanged - 0 fixed = 10 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13243 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916598/HDFS-13243-v4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b890fdc614d5 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 411993f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |

[jira] [Created] (HDFS-13361) Ozone: Remove commands from command queue when the datanode is declared dead

2018-03-28 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDFS-13361:
--

 Summary: Ozone: Remove commands from command queue when the 
datanode is declared dead
 Key: HDFS-13361
 URL: https://issues.apache.org/jira/browse/HDFS-13361
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: HDFS-7240


SCM can queue commands for Datanodes to pick up. However, a dead datanode may 
never pick up the commands. The command queue needs to be cleaned for the 
datanode once it is declared dead.
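
A rough sketch of the idea (all names below are hypothetical, not the SCM code):
{code:java}
import java.util.Map;
import java.util.Queue;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative only: a per-datanode command queue that is dropped as soon as the
// node-state handling declares the datanode dead, so stale commands are not retained.
public class CommandQueueSketch {

  private final Map<UUID, Queue<String>> commandQueues = new ConcurrentHashMap<>();

  /** Queue a command for a datanode to pick up on its next heartbeat. */
  public void addCommand(UUID datanodeId, String command) {
    commandQueues.computeIfAbsent(datanodeId, id -> new ConcurrentLinkedQueue<>())
        .add(command);
  }

  /** Called when the datanode is declared dead: discard anything still queued. */
  public void onDatanodeDead(UUID datanodeId) {
    commandQueues.remove(datanodeId);
  }
}
{code}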



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13360) The configuration of implement of DataNodeServicePlugin should obtain from datanode instance

2018-03-28 Thread DENG FEI (JIRA)
DENG FEI created HDFS-13360:
---

 Summary: The configuration of  implement of DataNodeServicePlugin 
should obtain from datanode instance
 Key: HDFS-13360
 URL: https://issues.apache.org/jira/browse/HDFS-13360
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ozone
Affects Versions: HDFS-7240
Reporter: DENG FEI


MiniOzoneClassicCluster configures Ozone as enabled, but ObjectStoreRestPlugin & 
HdslDatanodeService load their configuration from resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13359) DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417225#comment-16417225
 ] 

genericqa commented on HDFS-13359:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 35s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13359 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916572/HDFS-13359.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5458367062db 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a71656c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23698/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23698/testR

[jira] [Comment Edited] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-28 Thread Zsolt Venczel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417215#comment-16417215
 ] 

Zsolt Venczel edited comment on HDFS-13176 at 3/28/18 11:38 AM:


I've executed
{code:java}
dev-support/bin/test-patch --run-tests HDFS-13176-branch-2.04.patch > 
HDFS-13176-branch-2_yetus.log 2>&1{code}
on my local system and uploaded the [^HDFS-13176-branch-2_yetus.log]

The failing unit tests seem to be unrelated.


was (Author: zvenczel):
I've executed
{code:java}
dev-support/bin/test-patch --run-tests HDFS-13176-branch-2.04.patch > 
HDFS-13176-branch-2_yetus.log 2>&1{code}
on my local system and uploaded the [^HDFS-13176-branch-2_yetus.log]

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13176-branch-2.01.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.03.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.03.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.04.patch, 
> HDFS-13176-branch-2_yetus.log, HDFS-13176.01.patch, HDFS-13176.02.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Find attached a patch having a test case that tries to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-03-28 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-13353:

Status: Patch Available  (was: Open)

> RBF: TestRouterWebHDFSContractCreate failed
> ---
>
> Key: HDFS-13353
> URL: https://issues.apache.org/jira/browse/HDFS-13353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13353.1.patch
>
>
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.685 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testCreatedFileIsVisibleOnFlush(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.147 s  <<< ERROR!
> java.io.FileNotFoundException: expected path to be visible before file 
> closed: not found 
> webhdfs://0.0.0.0:43796/test/testCreatedFileIsVisibleOnFlush in 
> webhdfs://0.0.0.0:43796/test
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:936)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:914)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreatedFileIsVisibleOnFlush(AbstractContractCreateTest.java:254)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /test/testCreatedFileIsVisibleOnFlush
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:877)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:843)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:676)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1074)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1085)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:930)
>   ... 15 more
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> does not exist: /test/testCreatedFileIsVisibleOnFlush
>   at 
> org.apache.hadoop.hdfs.web.JsonUtilClient

[jira] [Commented] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-03-28 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417218#comment-16417218
 ] 

Takanobu Asanuma commented on HDFS-13353:
-

Uploaded the 1st patch addressing my last comment. I think we should use hflush 
instead of flush here.

> RBF: TestRouterWebHDFSContractCreate failed
> ---
>
> Key: HDFS-13353
> URL: https://issues.apache.org/jira/browse/HDFS-13353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13353.1.patch
>
>
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.685 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testCreatedFileIsVisibleOnFlush(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.147 s  <<< ERROR!
> java.io.FileNotFoundException: expected path to be visible before file 
> closed: not found 
> webhdfs://0.0.0.0:43796/test/testCreatedFileIsVisibleOnFlush in 
> webhdfs://0.0.0.0:43796/test
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:936)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:914)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreatedFileIsVisibleOnFlush(AbstractContractCreateTest.java:254)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /test/testCreatedFileIsVisibleOnFlush
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:877)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:843)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:676)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1074)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1085)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:930)
>   ... 15 more
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundExceptio

[jira] [Comment Edited] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-28 Thread Zsolt Venczel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417215#comment-16417215
 ] 

Zsolt Venczel edited comment on HDFS-13176 at 3/28/18 11:37 AM:


I've executed
{code:java}
dev-support/bin/test-patch --run-tests HDFS-13176-branch-2.04.patch > 
HDFS-13176-branch-2_yetus.log 2>&1{code}
on my local system and uploaded the [^HDFS-13176-branch-2_yetus.log]


was (Author: zvenczel):
I've executed
{code:java}
dev-support/bin/test-patch --run-tests HDFS-13176-branch-2.04.patch > 
HDFS-13176-branch-2_yetus.log 2>&1{code}
on my local system and uploaded the 
[result|https://issues.apache.org/jira/secure/attachment/12916600/HDFS-13176-branch-2_yetus.log]

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13176-branch-2.01.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.03.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.03.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.04.patch, 
> HDFS-13176-branch-2_yetus.log, HDFS-13176.01.patch, HDFS-13176.02.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Find attached a patch having a test case that tries to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-28 Thread Zsolt Venczel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417215#comment-16417215
 ] 

Zsolt Venczel commented on HDFS-13176:
--

I've executed
{code:java}
dev-support/bin/test-patch --run-tests HDFS-13176-branch-2.04.patch > 
HDFS-13176-branch-2_yetus.log 2>&1{code}
on my local system and uploaded the 
[result|https://issues.apache.org/jira/secure/attachment/12916600/HDFS-13176-branch-2_yetus.log]

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13176-branch-2.01.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.03.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.03.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.04.patch, 
> HDFS-13176-branch-2_yetus.log, HDFS-13176.01.patch, HDFS-13176.02.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Find attached a patch having a test case that tries to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-03-28 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417213#comment-16417213
 ] 

Takanobu Asanuma commented on HDFS-13353:
-

Thanks for the comments and the confirmation, [~elgoiri] and [~ywskycn].

Since WebHDFS doesn't support hflush/hsync now (HDFS-9020), I think a small 
delay is needed after the {{OutputStream}} calls flush.
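
Purely as an illustration of the kind of delay meant here (the attached patch may take 
a different approach), a bounded poll after the flush would avoid both the flakiness 
and an arbitrary fixed sleep:
{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper, not the attached patch: after flushing, poll until the path
// becomes visible or a timeout expires, instead of asserting visibility immediately.
public final class AwaitVisibility {

  private AwaitVisibility() {
  }

  public static void awaitVisible(FileSystem fs, Path path, long timeoutMs)
      throws IOException, InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!fs.exists(path)) {
      if (System.currentTimeMillis() > deadline) {
        throw new IOException("Path still not visible after flush: " + path);
      }
      Thread.sleep(100); // the "small delay" between checks
    }
  }
}
{code}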

> RBF: TestRouterWebHDFSContractCreate failed
> ---
>
> Key: HDFS-13353
> URL: https://issues.apache.org/jira/browse/HDFS-13353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13353.1.patch
>
>
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.685 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testCreatedFileIsVisibleOnFlush(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.147 s  <<< ERROR!
> java.io.FileNotFoundException: expected path to be visible before file 
> closed: not found 
> webhdfs://0.0.0.0:43796/test/testCreatedFileIsVisibleOnFlush in 
> webhdfs://0.0.0.0:43796/test
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:936)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:914)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreatedFileIsVisibleOnFlush(AbstractContractCreateTest.java:254)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /test/testCreatedFileIsVisibleOnFlush
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:877)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:843)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:676)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1074)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1085)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:930)
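For context, the contract check that fails here amounts to: create a file over WebHDFS, hflush() it, and assert the path is already visible before close(). A minimal sketch against the generic FileSystem API (illustrative only; the endpoint URI and class name below are assumptions, not the actual AbstractContractCreateTest code):

{code:java}
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FlushVisibilityCheck {
  public static void main(String[] args) throws Exception {
    // Illustrative only: the URI would point at the router's WebHDFS endpoint.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(
        java.net.URI.create("webhdfs://localhost:50070"), conf);
    Path path = new Path("/test/testCreatedFileIsVisibleOnFlush");

    try (FSDataOutputStream out = fs.create(path, true)) {
      out.write("data".getBytes(StandardCharsets.UTF_8));
      out.hflush();
      // The contract expects the path to be visible here, before close();
      // the failure above is a FileNotFoundException at exactly this point.
      if (!fs.exists(path)) {
        throw new AssertionError(
            "expected path to be visible before file closed: " + path);
      }
    }
  }
}
{code}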

[jira] [Updated] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-03-28 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-13353:

Attachment: HDFS-13353.1.patch

> RBF: TestRouterWebHDFSContractCreate failed
> ---
>
> Key: HDFS-13353
> URL: https://issues.apache.org/jira/browse/HDFS-13353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13353.1.patch
>
>
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.685 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testCreatedFileIsVisibleOnFlush(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.147 s  <<< ERROR!
> java.io.FileNotFoundException: expected path to be visible before file 
> closed: not found 
> webhdfs://0.0.0.0:43796/test/testCreatedFileIsVisibleOnFlush in 
> webhdfs://0.0.0.0:43796/test
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:936)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:914)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreatedFileIsVisibleOnFlush(AbstractContractCreateTest.java:254)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /test/testCreatedFileIsVisibleOnFlush
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:877)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:843)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:676)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1074)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1085)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:930)
>   ... 15 more
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> does not exist: /test/testCreatedFileIsVisibleOnFlush
>   at 
> org.apache.hadoop.hdfs.web.JsonUtilClient.toRem

[jira] [Comment Edited] (HDFS-13351) Revert HDFS-11156 from branch-2/branch-2.8

2018-03-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417212#comment-16417212
 ] 

Weiwei Yang edited comment on HDFS-13351 at 3/28/18 11:36 AM:
--

Is branch-2 jenkins healthy? I checked the 
[https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-HDFS-Build/23679/|https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-HDFS-Build/23679/consoleFull]
{noformat}
/opt/maven/bin/mvn 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-branch-2-patch-1 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.snappy -Drequire.openssl 
-Drequire.fuse -Drequire.test.libhadoop -Pyarn-ui clean test -fae > 
/testptch/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt 2>&1
Build timed out (after 300 minutes). Marking the build as aborted.
Build was aborted
Performing Post build task...
Match found for :. : True
Logical operation result is TRUE
Running script  : #!/bin/bash
{noformat}
It indicates the job timed out after 5 hours.


was (Author: cheersyang):
Is branch-2 jenkins healthy? I checked the log

{noformat}
/opt/maven/bin/mvn 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-branch-2-patch-1 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.snappy -Drequire.openssl 
-Drequire.fuse -Drequire.test.libhadoop -Pyarn-ui clean test -fae > 
/testptch/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt 2>&1
Build timed out (after 300 minutes). Marking the build as aborted.
Build was aborted
Performing Post build task...
Match found for :. : True
Logical operation result is TRUE
Running script  : #!/bin/bash
{noformat}

It indicates the job timed out after 5 hours.

> Revert HDFS-11156 from branch-2/branch-2.8
> --
>
> Key: HDFS-13351
> URL: https://issues.apache.org/jira/browse/HDFS-13351
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: webhdfs
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: HDFS-13351-branch-2.001.patch
>
>
> Per discussion in HDFS-11156, let's revert the change from branch-2 and 
> branch-2.8. The new patch can be tracked in HDFS-12459.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-28 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-13176:
-
Attachment: HDFS-13176-branch-2_yetus.log

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13176-branch-2.01.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.03.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.03.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.04.patch, 
> HDFS-13176-branch-2_yetus.log, HDFS-13176.01.patch, HDFS-13176.02.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Attached is a patch with a test case that tries to reproduce the problem.
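For illustration only (this is not the attached patch; the host name and helper logic below are assumptions): an unescaped ';' in a URL path can be parsed as a matrix-parameter delimiter, so the path gets cut at the semicolon unless each segment is percent-encoded before the WebHDFS URL is built.

{code:java}
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SemicolonPathDemo {
  public static void main(String[] args) throws Exception {
    String path = "/user/test/file;with;semicolons.txt";

    // Unescaped, a server-side parser may treat everything after ';' as a
    // matrix parameter and effectively only see "/user/test/file".
    String naiveUrl = "http://namenode:50070/webhdfs/v1" + path + "?op=GETFILESTATUS";

    // Percent-encoding each segment keeps the ';' inside the path.
    StringBuilder encoded = new StringBuilder();
    for (String segment : path.split("/")) {
      if (segment.isEmpty()) {
        continue;
      }
      encoded.append('/')
             .append(URLEncoder.encode(segment, StandardCharsets.UTF_8.name())
                               .replace("+", "%20"));
    }
    String safeUrl = "http://namenode:50070/webhdfs/v1" + encoded + "?op=GETFILESTATUS";

    System.out.println(naiveUrl);  // ...file;with;semicolons.txt?op=...
    System.out.println(safeUrl);   // ...file%3Bwith%3Bsemicolons.txt?op=...
  }
}
{code}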



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13351) Revert HDFS-11156 from branch-2/branch-2.8

2018-03-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417212#comment-16417212
 ] 

Weiwei Yang commented on HDFS-13351:


Is branch-2 jenkins healthy? I checked the log

{noformat}
/opt/maven/bin/mvn 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-branch-2-patch-1 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.snappy -Drequire.openssl 
-Drequire.fuse -Drequire.test.libhadoop -Pyarn-ui clean test -fae > 
/testptch/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt 2>&1
Build timed out (after 300 minutes). Marking the build as aborted.
Build was aborted
Performing Post build task...
Match found for :. : True
Logical operation result is TRUE
Running script  : #!/bin/bash
{noformat}

It indicates the job timed out after 5 hours.

> Revert HDFS-11156 from branch-2/branch-2.8
> --
>
> Key: HDFS-13351
> URL: https://issues.apache.org/jira/browse/HDFS-13351
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: webhdfs
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: HDFS-13351-branch-2.001.patch
>
>
> Per discussion in HDFS-11156, let's revert the change from branch-2 and 
> branch-2.8. The new patch can be tracked in HDFS-12459.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13357) Improve AclException message "Invalid ACL: only directories may have a default ACL."

2018-03-28 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HDFS-13357:
--
Attachment: HDFS-13357.001.patch

> Improve AclException message "Invalid ACL: only directories may have a 
> default ACL."
> 
>
> Key: HDFS-13357
> URL: https://issues.apache.org/jira/browse/HDFS-13357
> Project: Hadoop HDFS
>  Issue Type: Improvement
> Environment: CDH 5.10.1, Kerberos, KMS, encryption at rest, Sentry, 
> Hive
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-13357.001.patch
>
>
> I found this warning message in an HDFS cluster
> {noformat}
> 2018-03-27 19:15:28,841 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 90 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setAcl from 
> 10.0.0.1:39508 Call#79376996
> Retry#0: org.apache.hadoop.hdfs.protocol.AclException: Invalid ACL: only 
> directories may have a default ACL.
> 2018-03-27 19:15:28,841 WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hive/host1.example@example.com (auth:KERBE
> ROS) cause:org.apache.hadoop.hdfs.protocol.AclException: Invalid ACL: only 
> directories may have a default ACL.
> {noformat}
> However, it doesn't tell me which file had this invalid ACL.
> This cluster has Sentry enabled, so it is possible this invalid ACL doesn't 
> come from HDFS, but from Sentry.
> Filing this Jira to improve the message and add the file name to it.
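A minimal sketch of the kind of message improvement being requested, with the offending path included in the exception text (the method and parameter names below are assumptions, not the actual patch):

{code:java}
import java.util.List;

import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.hdfs.protocol.AclException;

public class AclValidation {
  /**
   * Illustrative check only: reject default ACL entries on non-directories
   * and name the offending path in the message, so operators can tell which
   * file (or which caller, e.g. Sentry) produced the invalid ACL.
   */
  static void checkDefaultAclAllowed(String src, boolean isDirectory,
      List<AclEntry> aclSpec) throws AclException {
    for (AclEntry entry : aclSpec) {
      if (entry.getScope() == AclEntryScope.DEFAULT && !isDirectory) {
        throw new AclException("Invalid ACL: only directories may have a"
            + " default ACL. Path: " + src);
      }
    }
  }
}
{code}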



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13357) Improve AclException message "Invalid ACL: only directories may have a default ACL."

2018-03-28 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HDFS-13357:
--
Status: Patch Available  (was: Open)

> Improve AclException message "Invalid ACL: only directories may have a 
> default ACL."
> 
>
> Key: HDFS-13357
> URL: https://issues.apache.org/jira/browse/HDFS-13357
> Project: Hadoop HDFS
>  Issue Type: Improvement
> Environment: CDH 5.10.1, Kerberos, KMS, encryption at rest, Sentry, 
> Hive
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-13357.001.patch
>
>
> I found this warning message in an HDFS cluster
> {noformat}
> 2018-03-27 19:15:28,841 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 90 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setAcl from 
> 10.0.0.1:39508 Call#79376996
> Retry#0: org.apache.hadoop.hdfs.protocol.AclException: Invalid ACL: only 
> directories may have a default ACL.
> 2018-03-27 19:15:28,841 WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hive/host1.example@example.com (auth:KERBE
> ROS) cause:org.apache.hadoop.hdfs.protocol.AclException: Invalid ACL: only 
> directories may have a default ACL.
> {noformat}
> However, it doesn't tell me which file had this invalid ACL.
> This cluster has Sentry enabled, so it is possible this invalid ACL doesn't 
> come from HDFS, but from Sentry.
> Filing this Jira to improve the message and add the file name to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-03-28 Thread Zephyr Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417198#comment-16417198
 ] 

Zephyr Guo commented on HDFS-13243:
---

Rebased patch-v3 and attached v4.

> Get CorruptBlock because of calling close and sync in same time
> ---
>
> Key: HDFS-13243
> URL: https://issues.apache.org/jira/browse/HDFS-13243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Critical
> Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch, 
> HDFS-13243-v3.patch, HDFS-13243-v4.patch
>
>
> An HDFS file might get broken because of corrupt block(s) that can be produced 
> by calling close and sync at the same time.
> When the close call is not successful, the UC block status changes to 
> COMMITTED, and if a sync request gets popped from the queue and processed, the 
> sync operation changes the last block length.
> After that, the DataNode reports all received blocks to the NameNode, and the 
> block length of all COMMITTED blocks is checked. But the block length already 
> differs between what is recorded in NameNode memory and what is reported by the 
> DataNode, and consequently, the last block is marked as corrupt because of the 
> inconsistent length.
>  
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  for 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
>  for DFSClient_NONMAPREDUCE_1077513762_1
> 2018-03-05 04:05:39,761 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.0.0.220:50010 is added to 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  size 2054413
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.219:50010 by 
> hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com/10.0.0.219 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:39,762 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.218:50010 by 
> hb-j5e517al6xib80rkb-004.hbase.rds.aliyuncs.com/10.0.0.218 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:40,162 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 3 >= minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,1

[jira] [Updated] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-03-28 Thread Zephyr Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zephyr Guo updated HDFS-13243:
--
Attachment: HDFS-13243-v4.patch

> Get CorruptBlock because of calling close and sync in same time
> ---
>
> Key: HDFS-13243
> URL: https://issues.apache.org/jira/browse/HDFS-13243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Critical
> Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch, 
> HDFS-13243-v3.patch, HDFS-13243-v4.patch
>
>
> An HDFS file might get broken because of corrupt block(s) that can be produced 
> by calling close and sync at the same time.
> When the close call is not successful, the UC block status changes to 
> COMMITTED, and if a sync request gets popped from the queue and processed, the 
> sync operation changes the last block length.
> After that, the DataNode reports all received blocks to the NameNode, and the 
> block length of all COMMITTED blocks is checked. But the block length already 
> differs between what is recorded in NameNode memory and what is reported by the 
> DataNode, and consequently, the last block is marked as corrupt because of the 
> inconsistent length.
>  
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  for 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
>  for DFSClient_NONMAPREDUCE_1077513762_1
> 2018-03-05 04:05:39,761 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.0.0.220:50010 is added to 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  size 2054413
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.219:50010 by 
> hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com/10.0.0.219 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:39,762 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.218:50010 by 
> hb-j5e517al6xib80rkb-004.hbase.rds.aliyuncs.com/10.0.0.218 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:40,162 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 3 >= minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.

[jira] [Commented] (HDFS-13330) ShortCircuitCache#fetchOrCreate never retries

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417169#comment-16417169
 ] 

genericqa commented on HDFS-13330:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 37s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
34s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Redundant nullcheck of info, which is known to be non-null in 
org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ExtendedBlockId,
 ShortCircuitCache$ShortCircuitReplicaCreator)  Redundant null check at 
ShortCircuitCache.java:is known to be non-null in 
org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ExtendedBlockId,
 ShortCircuitCache$ShortCircuitReplicaCreator)  Redundant null check at 
ShortCircuitCache.java:[line 701] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13330 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916583/HDFS-13330.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7210b6b488e7 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/preco

[jira] [Commented] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-03-28 Thread Zephyr Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417167#comment-16417167
 ] 

Zephyr Guo commented on HDFS-13243:
---

Hi, [~jojochuang]
I attached patch-v3. I moved the RPC call into the synchronized code block, and 
I tried my best to make the mock code clear.
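As a toy model of the underlying race and of the serialization the v3 change described above enforces (plain illustrative Java with hypothetical names; this is not the HDFS code or the attached patch): the commit done by close and the length update done by sync must take the same lock, so a late sync can no longer change the length of an already COMMITTED block.

{code:java}
public class CloseSyncRace {
  enum State { UNDER_CONSTRUCTION, COMMITTED }

  private State state = State.UNDER_CONSTRUCTION;
  private long committedLength = -1;
  private long currentLength = 0;

  // Both the commit and the length update are guarded by the same monitor,
  // so a sync that loses the race cannot change a COMMITTED block's length.
  synchronized void close() {
    state = State.COMMITTED;
    committedLength = currentLength;
  }

  synchronized void sync(long newLength) {
    if (state == State.COMMITTED) {
      return; // too late: the committed length is frozen
    }
    currentLength = newLength;
  }

  // Stands in for the NameNode-side check that marks the replica corrupt
  // when the reported length does not match the committed length.
  synchronized boolean lengthsConsistent(long reportedLength) {
    return state != State.COMMITTED || reportedLength == committedLength;
  }
}
{code}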

> Get CorruptBlock because of calling close and sync in same time
> ---
>
> Key: HDFS-13243
> URL: https://issues.apache.org/jira/browse/HDFS-13243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Critical
> Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch, 
> HDFS-13243-v3.patch
>
>
> An HDFS file might get broken because of corrupt block(s) that can be produced 
> by calling close and sync at the same time.
> When the close call is not successful, the UC block status changes to 
> COMMITTED, and if a sync request gets popped from the queue and processed, the 
> sync operation changes the last block length.
> After that, the DataNode reports all received blocks to the NameNode, and the 
> block length of all COMMITTED blocks is checked. But the block length already 
> differs between what is recorded in NameNode memory and what is reported by the 
> DataNode, and consequently, the last block is marked as corrupt because of the 
> inconsistent length.
>  
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  for 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
>  for DFSClient_NONMAPREDUCE_1077513762_1
> 2018-03-05 04:05:39,761 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.0.0.220:50010 is added to 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  size 2054413
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.219:50010 by 
> hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com/10.0.0.219 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:39,762 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.218:50010 by 
> hb-j5e517al6xib80rkb-004.hbase.rds.aliyuncs.com/10.0.0.218 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:40,162 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 3 >= minimum = 2

[jira] [Commented] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417165#comment-16417165
 ] 

genericqa commented on HDFS-13243:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-13243 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13243 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916595/HDFS-13243-v3.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23701/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Get CorruptBlock because of calling close and sync in same time
> ---
>
> Key: HDFS-13243
> URL: https://issues.apache.org/jira/browse/HDFS-13243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Critical
> Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch, 
> HDFS-13243-v3.patch
>
>
> An HDFS file might get broken because of corrupt block(s) that can be produced 
> by calling close and sync at the same time.
> When the close call is not successful, the UC block status changes to 
> COMMITTED, and if a sync request gets popped from the queue and processed, the 
> sync operation changes the last block length.
> After that, the DataNode reports all received blocks to the NameNode, and the 
> block length of all COMMITTED blocks is checked. But the block length already 
> differs between what is recorded in NameNode memory and what is reported by the 
> DataNode, and consequently, the last block is marked as corrupt because of the 
> inconsistent length.
>  
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  for 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
>  for DFSClient_NONMAPREDUCE_1077513762_1
> 2018-03-05 04:05:39,761 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.0.0.220:50010 is added to 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  size 2054413
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.219:50010 by 
> hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com/10.0.0.219 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:39,762 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.218:50010 by 
> hb-j

[jira] [Updated] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-03-28 Thread Zephyr Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zephyr Guo updated HDFS-13243:
--
Attachment: HDFS-13243-v3.patch

> Get CorruptBlock because of calling close and sync in same time
> ---
>
> Key: HDFS-13243
> URL: https://issues.apache.org/jira/browse/HDFS-13243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Critical
> Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch, 
> HDFS-13243-v3.patch
>
>
> An HDFS file might get broken because of corrupt block(s) that can be produced 
> by calling close and sync at the same time.
> When the close call is not successful, the UC block status changes to 
> COMMITTED, and if a sync request gets popped from the queue and processed, the 
> sync operation changes the last block length.
> After that, the DataNode reports all received blocks to the NameNode, and the 
> block length of all COMMITTED blocks is checked. But the block length already 
> differs between what is recorded in NameNode memory and what is reported by the 
> DataNode, and consequently, the last block is marked as corrupt because of the 
> inconsistent length.
>  
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  for 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
>  for DFSClient_NONMAPREDUCE_1077513762_1
> 2018-03-05 04:05:39,761 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.0.0.220:50010 is added to 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  size 2054413
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.219:50010 by 
> hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com/10.0.0.219 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:39,762 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.218:50010 by 
> hb-j5e517al6xib80rkb-004.hbase.rds.aliyuncs.com/10.0.0.218 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:40,162 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 3 >= minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16

[jira] [Commented] (HDFS-13087) Fix: Snapshots On encryption zones get incorrect EZ settings when encryption zone changes

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417157#comment-16417157
 ] 

genericqa commented on HDFS-13087:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13087 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916565/HDFS-13087.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f4695977b51d 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a71656c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23697/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23697/testReport/ |
| Max. process+thread count | 3188 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit

[jira] [Updated] (HDFS-13330) ShortCircuitCache#fetchOrCreate never retries

2018-03-28 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HDFS-13330:
--
Status: Patch Available  (was: Open)

> ShortCircuitCache#fetchOrCreate never retries
> -
>
> Key: HDFS-13330
> URL: https://issues.apache.org/jira/browse/HDFS-13330
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13330.001.patch, HDFS-13330.002.patch
>
>
> The following do .. while(false) loop seems useless to me. The code intended to 
> retry, but it never worked. Let's fix it.
> {code:java:title=ShortCircuitCache#fetchOrCreate}
> ShortCircuitReplicaInfo info = null;
> do {
>   if (closed) {
> LOG.trace("{}: can't fethchOrCreate {} because the cache is closed.",
> this, key);
> return null;
>   }
>   Waitable waitable = replicaInfoMap.get(key);
>   if (waitable != null) {
> try {
>   info = fetch(key, waitable);
> } catch (RetriableException e) {
>   LOG.debug("{}: retrying {}", this, e.getMessage());
> }
>   }
> } while (false);{code}
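One possible shape of a loop that actually retries, sketched with hypothetical helpers (MAX_RETRIES, fetchOnce, and a stand-in for RetriableException); this is an illustration of the retry pattern, not the attached patch:

{code:java}
import java.io.IOException;

public class RetryLoopSketch {
  private static final int MAX_RETRIES = 3;   // hypothetical bound, not from the patch

  /** Marker for "try again", standing in for o.a.h.ipc.RetriableException. */
  static class RetriableException extends IOException {
    RetriableException(String msg) { super(msg); }
  }

  interface Fetcher<T> {
    T fetchOnce() throws IOException;
  }

  /**
   * Keeps retrying while the fetch says "retriable", unlike a
   * do { ... } while (false) body that can only ever run once.
   */
  static <T> T fetchWithRetry(Fetcher<T> fetcher) throws IOException {
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
      try {
        return fetcher.fetchOnce();
      } catch (RetriableException e) {
        // Log and loop; a real implementation might also back off here.
        System.out.println("retrying after: " + e.getMessage());
      }
    }
    throw new IOException("giving up after " + MAX_RETRIES + " attempts");
  }
}
{code}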



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13330) ShortCircuitCache#fetchOrCreate never retries

2018-03-28 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HDFS-13330:
--
Status: Open  (was: Patch Available)

> ShortCircuitCache#fetchOrCreate never retries
> -
>
> Key: HDFS-13330
> URL: https://issues.apache.org/jira/browse/HDFS-13330
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13330.001.patch, HDFS-13330.002.patch
>
>
> The following do .. while(false) loop seems useless to me. The code intended to 
> retry, but it never worked. Let's fix it.
> {code:java:title=ShortCircuitCache#fetchOrCreate}
> ShortCircuitReplicaInfo info = null;
> do {
>   if (closed) {
> LOG.trace("{}: can't fethchOrCreate {} because the cache is closed.",
> this, key);
> return null;
>   }
>   Waitable waitable = replicaInfoMap.get(key);
>   if (waitable != null) {
> try {
>   info = fetch(key, waitable);
> } catch (RetriableException e) {
>   LOG.debug("{}: retrying {}", this, e.getMessage());
> }
>   }
> } while (false);{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13330) ShortCircuitCache#fetchOrCreate never retries

2018-03-28 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HDFS-13330:
--
Attachment: HDFS-13330.002.patch

> ShortCircuitCache#fetchOrCreate never retries
> -
>
> Key: HDFS-13330
> URL: https://issues.apache.org/jira/browse/HDFS-13330
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13330.001.patch, HDFS-13330.002.patch
>
>
> The following do .. while(false) loop seems useless to me. The code intended to 
> retry, but it never worked. Let's fix it.
> {code:java:title=ShortCircuitCache#fetchOrCreate}
> ShortCircuitReplicaInfo info = null;
> do {
>   if (closed) {
> LOG.trace("{}: can't fethchOrCreate {} because the cache is closed.",
> this, key);
> return null;
>   }
>   Waitable waitable = replicaInfoMap.get(key);
>   if (waitable != null) {
> try {
>   info = fetch(key, waitable);
> } catch (RetriableException e) {
>   LOG.debug("{}: retrying {}", this, e.getMessage());
> }
>   }
> } while (false);{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13359) DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream

2018-03-28 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13359:
-
Description: 
DataXceiver hung due to the lock taken by 
 {{FsDatasetImpl#getBlockInputStream}} (stack trace attached).
{code:java}
  @Override // FsDatasetSpi
  public InputStream getBlockInputStream(ExtendedBlock b,
  long seekOffset) throws IOException {

ReplicaInfo info;
synchronized(this) {
  info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
}
...
  }
{code}
The lock {{synchronized(this)}} used here is expensive; there is already an 
{{AutoCloseableLock}}-type lock defined for {{ReplicaMap}}. We can use it 
instead.

  was:
DataXceiver hungs due to the lock that locked by 
 {{FsDatasetImpl#getBlockInputStream}} (have attached stack).
{code:java}
  @Override // FsDatasetSpi
  public InputStream getBlockInputStream(ExtendedBlock b,
  long seekOffset) throws IOException {

ReplicaInfo info;
synchronized(this) {
  info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
}
...
  }
{code}
The lock {{synchronized(this)}} used here is expensive, there is already one 
{{AutoCloseableLock}} type lock defined for {{ReplicaMap}}. We can use it 
instead.


> DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream
> -
>
> Key: HDFS-13359
> URL: https://issues.apache.org/jira/browse/HDFS-13359
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13359.001.patch, stack.jpg
>
>
> DataXceiver hung due to the lock taken by 
>  {{FsDatasetImpl#getBlockInputStream}} (stack trace attached).
> {code:java}
>   @Override // FsDatasetSpi
>   public InputStream getBlockInputStream(ExtendedBlock b,
>   long seekOffset) throws IOException {
> ReplicaInfo info;
> synchronized(this) {
>   info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
> }
> ...
>   }
> {code}
> The lock {{synchronized(this)}} used here is expensive; there is already an 
> {{AutoCloseableLock}}-type lock defined for {{ReplicaMap}}. We can use it 
> instead.
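A minimal sketch of the try-with-resources locking pattern being suggested, using a tiny stand-in for an {{AutoCloseableLock}}-style wrapper (the names and structure here are assumptions for illustration, not the attached patch):

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

public class AutoCloseableLockSketch {
  /** Tiny stand-in for a Hadoop-style AutoCloseableLock wrapper. */
  static class AutoCloseableLock implements AutoCloseable {
    private final ReentrantLock lock = new ReentrantLock();

    AutoCloseableLock acquire() {
      lock.lock();
      return this;
    }

    @Override
    public void close() {
      lock.unlock();
    }
  }

  private final AutoCloseableLock datasetLock = new AutoCloseableLock();
  private final Map<String, String> volumeMap = new HashMap<>();

  /**
   * Instead of synchronized(this), the lookup is guarded by a dedicated lock
   * object, so it does not contend with every other synchronized method on
   * the dataset instance.
   */
  String lookup(String blockId) {
    try (AutoCloseableLock l = datasetLock.acquire()) {
      return volumeMap.get(blockId);
    }
  }
}
{code}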



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13359) DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream

2018-03-28 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13359:
-
Summary: DataXceiver hung due to the lock in 
FsDatasetImpl#getBlockInputStream  (was: DataXceiver hungs due to the lock in 
FsDatasetImpl#getBlockInputStream)

> DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream
> -
>
> Key: HDFS-13359
> URL: https://issues.apache.org/jira/browse/HDFS-13359
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13359.001.patch, stack.jpg
>
>
> DataXceiver hungs due to the lock taken by 
>  {{FsDatasetImpl#getBlockInputStream}} (stack trace attached).
> {code:java}
>   @Override // FsDatasetSpi
>   public InputStream getBlockInputStream(ExtendedBlock b,
>   long seekOffset) throws IOException {
> ReplicaInfo info;
> synchronized(this) {
>   info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
> }
> ...
>   }
> {code}
> The lock {{synchronized(this)}} used here is expensive; there is already an 
> {{AutoCloseableLock}}-type lock defined for {{ReplicaMap}}. We can use it 
> instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13359) DataXceiver hungs due to the lock in FsDatasetImpl#getBlockInputStream

2018-03-28 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13359:
-
Description: 
DataXceiver hungs due to the lock taken by 
 {{FsDatasetImpl#getBlockInputStream}} (stack trace attached).
{code:java}
  @Override // FsDatasetSpi
  public InputStream getBlockInputStream(ExtendedBlock b,
  long seekOffset) throws IOException {

ReplicaInfo info;
synchronized(this) {
  info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
}
...
  }
{code}
The lock {{synchronized(this)}} used here is expensive; there is already an 
{{AutoCloseableLock}}-type lock defined for {{ReplicaMap}}. We can use it 
instead.

  was:
DataXceiver hungs due to the lock that locked by 
{{FsDatasetImpl#getBlockInputStream}}.
 !stack.jpg! 

{code:java}
  @Override // FsDatasetSpi
  public InputStream getBlockInputStream(ExtendedBlock b,
  long seekOffset) throws IOException {

ReplicaInfo info;
synchronized(this) {
  info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
}
...
  }
{code}
The lock {{synchronized(this)}} used here is expensive, there is already one 
{{AutoCloseableLock}} type lock defined for {{ReplicaMap}}. We can use it 
instead.


> DataXceiver hungs due to the lock in FsDatasetImpl#getBlockInputStream
> --
>
> Key: HDFS-13359
> URL: https://issues.apache.org/jira/browse/HDFS-13359
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13359.001.patch, stack.jpg
>
>
> DataXceiver hungs due to the lock taken by 
>  {{FsDatasetImpl#getBlockInputStream}} (stack trace attached).
> {code:java}
>   @Override // FsDatasetSpi
>   public InputStream getBlockInputStream(ExtendedBlock b,
>   long seekOffset) throws IOException {
> ReplicaInfo info;
> synchronized(this) {
>   info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
> }
> ...
>   }
> {code}
> The lock {{synchronized(this)}} used here is expensive; there is already an 
> {{AutoCloseableLock}}-type lock defined for {{ReplicaMap}}. We can use it 
> instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


