[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-11 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287231#comment-16287231
 ] 

Xiao Chen commented on HDFS-12910:
--

Thanks for posting a new patch [~nandakumar131], and for the discussion to move 
this forward. It's coming along nicely.

Good point on the log file ownership, Stephen! That's almost certainly the 
reason for the existing {{System.err}} usages. I don't think we can get around 
that easily. Rethrowing the exception feels like the way to go.

A few comments on the patch:
- {code:title=SecureDataNodeStarter#bind0}
if (backlog < 1) {
  // This default value is picked from java.net.ServerSocket
  backlog = 50;
}
{code}
I don't think this is correct here, for 2 reasons. First, we should not 
explicitly override a value based on the current behavior of {{ServerSocket}}. 
Second, if someone intentionally sets {{ipc.server.listen.queue.size}} to a 
non-positive number, it will be changed by this patch. In other words, this 
could introduce an incompatible behavior.
Currently the ipc and httpserver bindings call 2 different APIs, and I find it 
difficult to have a shared method at the caller to unify them. Maybe a cleaner 
way to do this is to catch {{BindException}} at the end of 
{{getSecureResources}}?

- in the new method, we catch several exceptions when constructing the new 
BindException, and rethrow the original one. We should log these failures so 
that, in case they happen, there is a way to debug (see the sketch below).
- in the test: I think the canonical way is to call {{bind}} inside the try 
block. {{close}} will be a no-op if the socket was not created. See the ctor of 
{{ServerSocket}} for reference.
- nit: I'd rename {{testInfoSocAddrBindException}} to something like 
{{testWebServerAddrBindException}}.
- there are some extra line breaks in both classes, please fix them.
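
To illustrate the first two points, a rough sketch (not the actual patch; the 
socket and address names only approximate what {{SecureDataNodeStarter}} uses):
{code:java}
// Hedged sketch -- names are approximations, not the real patch.
// Catch BindException once near the end of getSecureResources and rethrow
// a copy that names the addresses involved, so the failure is actionable
// even when running under jsvc.
try {
  ss.bind(streamingAddr, backlog);          // streaming (data transfer) port
  httpChannel.socket().bind(infoSocAddr);   // web server port
} catch (BindException e) {
  BindException wrapped = new BindException("Failed to bind "
      + streamingAddr + " or " + infoSocAddr + ": " + e.getMessage());
  wrapped.initCause(e);
  throw wrapped;
}

// Canonical test pattern: bind inside the try block; close() is safe to
// call even if bind() failed, mirroring the ServerSocket constructor.
ServerSocket socket = new ServerSocket();
try {
  socket.bind(new InetSocketAddress("localhost", port));
} finally {
  socket.close();
}
{code}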

Also a quick note to [~nandakumar131]: thanks for contributing to Hadoop; we 
appreciate your contributions! When reviewing, it is great to articulate 
ideas and encourage the assignee to work on an issue. It is also fine to check 
with the assignee about their availability, and to offer to take over a jira if 
they agree and you're interested. We usually do not post a patch directly to a 
jira that someone else is actively working on. I understand posting a patch can 
sometimes be the easiest way to express an idea, and I think we're fine here. But 
please try to collaborate via discussions first in the future. :)

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch, HDFS-12910.002.patch
>
>
> When running a secure data node, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports, causing the DN to fail 
> to start (e.g. the nfs service can use random ports under 1024).
> When this happens, an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to. For example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened streaming server at /0.0.0.0:1004
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at 

[jira] [Commented] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287220#comment-16287220
 ] 

genericqa commented on HDFS-9806:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 28 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 53s{color} 
| {color:red} root generated 1 new + 1236 unchanged - 0 fixed = 1237 total (was 
1236) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 19s{color} | {color:orange} root: The patch generated 90 new + 2123 
unchanged - 15 fixed = 2213 total (was 2138) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 37s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools hadoop-tools/hadoop-tools-dist {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
55s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
55s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
48s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}146m  

[jira] [Updated] (HDFS-12862) CacheDirective may invalidata,when NN restart or make a transition to Active.

2017-12-11 Thread Wang XL (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wang XL updated HDFS-12862:
---
  Labels: patch  (was: )
   Fix Version/s: 2.7.1
Target Version/s: 2.7.1
  Status: Patch Available  (was: Open)

> CacheDirective may invalidata,when NN restart or make a transition to Active.
> -
>
> Key: HDFS-12862
> URL: https://issues.apache.org/jira/browse/HDFS-12862
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching, hdfs
>Affects Versions: 2.7.1
> Environment: 
>Reporter: Wang XL
>  Labels: patch
> Fix For: 2.7.1
>
> Attachments: HDFS-12862-branch-2.7.1.001.patch
>
>
> The logic in FSNDNCacheOp#modifyCacheDirective is not correct. When modifying a 
> cacheDirective, the expiration in the directive may be a relative expiryTime, and 
> the EditLog will serialize the relative expiry time.
> {code:java}
> // Some comments here
> static void modifyCacheDirective(
>   FSNamesystem fsn, CacheManager cacheManager, CacheDirectiveInfo 
> directive,
>   EnumSet<CacheFlag> flags, boolean logRetryCache) throws IOException {
> final FSPermissionChecker pc = getFsPermissionChecker(fsn);
> cacheManager.modifyDirective(directive, pc, flags);
> fsn.getEditLog().logModifyCacheDirectiveInfo(directive, logRetryCache);
>   }
> {code}
> But when the SBN replays the log, it will invoke 
> FSImageSerialization#readCacheDirectiveInfo, which reads it as an absolute 
> expiryTime. This will result in an inconsistency.
> {code:java}
>   public static CacheDirectiveInfo readCacheDirectiveInfo(DataInput in)
>   throws IOException {
> CacheDirectiveInfo.Builder builder =
> new CacheDirectiveInfo.Builder();
> builder.setId(readLong(in));
> int flags = in.readInt();
> if ((flags & 0x1) != 0) {
>   builder.setPath(new Path(readString(in)));
> }
> if ((flags & 0x2) != 0) {
>   builder.setReplication(readShort(in));
> }
> if ((flags & 0x4) != 0) {
>   builder.setPool(readString(in));
> }
> if ((flags & 0x8) != 0) {
>   builder.setExpiration(
>   CacheDirectiveInfo.Expiration.newAbsolute(readLong(in)));
> }
> if ((flags & ~0xF) != 0) {
>   throw new IOException("unknown flags set in " +
>   "ModifyCacheDirectiveInfoOp: " + flags);
> }
> return builder.build();
>   }
> {code}
> In other words, fsn.getEditLog().logModifyCacheDirectiveInfo(directive, 
> logRetryCache) may serialize a relative expiry time, but 
> builder.setExpiration(CacheDirectiveInfo.Expiration.newAbsolute(readLong(in))) 
> reads it as an absolute expiryTime, as illustrated below.
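
To make the mismatch concrete, a small illustration (hypothetical values; 
{{newRelative}}, {{newAbsolute}} and {{getMillis}} are the real 
{{CacheDirectiveInfo.Expiration}} factory/accessor methods):
{code:java}
// A client modifies a directive with a relative expiry of 5 minutes.
// The edit log stores the raw millis (300000). On replay, the SBN calls
// newAbsolute(300000), i.e. an absolute expiry at 1970-01-01T00:05:00Z,
// which is already in the past, so the directive expires immediately.
CacheDirectiveInfo.Expiration written =
    CacheDirectiveInfo.Expiration.newRelative(5 * 60 * 1000L);
CacheDirectiveInfo.Expiration misread =
    CacheDirectiveInfo.Expiration.newAbsolute(written.getMillis());
{code}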






[jira] [Updated] (HDFS-12862) CacheDirective may invalidata,when NN restart or make a transition to Active.

2017-12-11 Thread Wang XL (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wang XL updated HDFS-12862:
---
Attachment: HDFS-12862-branch-2.7.1.001.patch

> CacheDirective may invalidata,when NN restart or make a transition to Active.
> -
>
> Key: HDFS-12862
> URL: https://issues.apache.org/jira/browse/HDFS-12862
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching, hdfs
>Affects Versions: 2.7.1
> Environment: 
>Reporter: Wang XL
> Attachments: HDFS-12862-branch-2.7.1.001.patch
>
>
> The logic in FSNDNCacheOp#modifyCacheDirective is not correct. When modifying a 
> cacheDirective, the expiration in the directive may be a relative expiryTime, and 
> the EditLog will serialize the relative expiry time.
> {code:java}
> // Some comments here
> static void modifyCacheDirective(
>   FSNamesystem fsn, CacheManager cacheManager, CacheDirectiveInfo 
> directive,
>   EnumSet<CacheFlag> flags, boolean logRetryCache) throws IOException {
> final FSPermissionChecker pc = getFsPermissionChecker(fsn);
> cacheManager.modifyDirective(directive, pc, flags);
> fsn.getEditLog().logModifyCacheDirectiveInfo(directive, logRetryCache);
>   }
> {code}
> But when the SBN replays the log, it will invoke 
> FSImageSerialization#readCacheDirectiveInfo, which reads it as an absolute 
> expiryTime. This will result in an inconsistency.
> {code:java}
>   public static CacheDirectiveInfo readCacheDirectiveInfo(DataInput in)
>   throws IOException {
> CacheDirectiveInfo.Builder builder =
> new CacheDirectiveInfo.Builder();
> builder.setId(readLong(in));
> int flags = in.readInt();
> if ((flags & 0x1) != 0) {
>   builder.setPath(new Path(readString(in)));
> }
> if ((flags & 0x2) != 0) {
>   builder.setReplication(readShort(in));
> }
> if ((flags & 0x4) != 0) {
>   builder.setPool(readString(in));
> }
> if ((flags & 0x8) != 0) {
>   builder.setExpiration(
>   CacheDirectiveInfo.Expiration.newAbsolute(readLong(in)));
> }
> if ((flags & ~0xF) != 0) {
>   throw new IOException("unknown flags set in " +
>   "ModifyCacheDirectiveInfoOp: " + flags);
> }
> return builder.build();
>   }
> {code}
> In other words, fsn.getEditLog().logModifyCacheDirectiveInfo(directive, 
> logRetryCache) may serialize a relative expiry time, but 
> builder.setExpiration(CacheDirectiveInfo.Expiration.newAbsolute(readLong(in))) 
> reads it as an absolute expiryTime.






[jira] [Updated] (HDFS-10453) ReplicationMonitor thread could stuck for long time due to the race between replication and delete of same file in a large cluster.

2017-12-11 Thread He Xiaoqiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-10453:
---
Attachment: HDFS-10453-branch-2.7.005.patch

[~Octivian] thanks for your suggestions, we do need to update 
{{priorityToReplIdx}} when removing a block/blocks from {{neededReplications}}. I 
just uploaded a new patch for branch-2.7, please let me know if I am wrong.
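
A minimal sketch of the idea, assuming the branch-2.7 {{UnderReplicatedBlocks}} 
API ({{remove}} returns whether the block was present at that priority):
{code:java}
// When a block is dropped from neededReplications, walk the per-priority
// bookmark back one slot so the iterator does not skip the element that
// shifted into the removed block's position.
if (neededReplications.remove(block, priority)) {
  neededReplications.decrementReplicationIndex(priority);
}
{code}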

> ReplicationMonitor thread could stuck for long time due to the race between 
> replication and delete of same file in a large cluster.
> ---
>
> Key: HDFS-10453
> URL: https://issues.apache.org/jira/browse/HDFS-10453
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.1, 2.5.2, 2.7.1, 2.6.4
>Reporter: He Xiaoqiao
> Attachments: HDFS-10453-branch-2.001.patch, 
> HDFS-10453-branch-2.003.patch, HDFS-10453-branch-2.7.004.patch, 
> HDFS-10453-branch-2.7.005.patch, HDFS-10453.001.patch
>
>
> The ReplicationMonitor thread could get stuck for a long time and lose data with 
> low probability. Consider the typical scenario:
> (1) create and close a file with the default replicas (3);
> (2) increase the replication (to 10) of the file;
> (3) delete the file while the ReplicationMonitor is scheduling blocks belonging 
> to that file for replication.
> If the ReplicationMonitor stall reappears, the NameNode will print logs like:
> {code:xml}
> 2016-04-19 10:20:48,083 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> ..
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough 
> replicas: expected size is 7 but only 0 storage types can be selected 
> (replication=10, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK, 
> DISK, DISK, DISK, DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) All required storage types are unavailable:  
> unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> This is because 2 threads (#NameNodeRpcServer and #ReplicationMonitor) 
> process the same block at the same moment.
> (1) ReplicationMonitor#computeReplicationWorkForBlocks gets blocks to 
> replicate and leaves the global lock.
> (2) FSNamesystem#delete is invoked to delete the blocks, then clears the 
> references in blocksmap, needReplications, etc. The block's NumBytes is set to 
> NO_ACK (Long.MAX_VALUE), which is used to indicate that the block deletion does 
> not need an explicit ACK from the node. 
> (3) ReplicationMonitor#computeReplicationWorkForBlocks continues to 
> chooseTargets for the same blocks, and no node will be selected after traversing 
> the whole cluster, because no node choice satisfies the goodness criteria 
> (the remaining space must reach the required size Long.MAX_VALUE). 
> During stage (3) the ReplicationMonitor is stuck for a long time, especially in a 
> large cluster. invalidateBlocks & neededReplications continue to grow with no 
> consumers; at worst it will lose data.
> This can mostly be avoided by skipping chooseTarget for BlockCommand.NO_ACK 
> blocks and removing them from neededReplications, as sketched below.
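
A sketch of the proposed guard inside {{computeReplicationWorkForBlocks}} 
(illustrative only; {{rw}} is the scheduled replication work item):
{code:java}
// A block whose size was set to BlockCommand.NO_ACK has been deleted, so
// chooseTarget can never find a node with Long.MAX_VALUE remaining space.
// Skip it and drop it from neededReplications instead of spinning.
if (rw.block.getNumBytes() == BlockCommand.NO_ACK) {
  neededReplications.remove(rw.block, rw.priority);
  continue;
}
{code}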






[jira] [Updated] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12912:
--
Status: Patch Available  (was: Open)

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch, 
> HDFS-12912-HDFS-9806.002.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from a leveldb-based alias map 
> created by {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these alias maps must be specified using local 
> paths and not as URIs, as currently shown in the documentation 
> ({{HdfsProvidedStorage.md}}).
> This JIRA is to fix these issues. 
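
For the first issue, a minimal sketch of the kind of fix implied, assuming the 
leveldbjni API used elsewhere in Hadoop (illustrative, not the patch itself; 
{{levelDbPath}} stands for the configured location):
{code:java}
// Create the directory before opening the store: createIfMissing only
// creates the DB itself, not missing parent directories.
File dir = new File(levelDbPath);
if (!dir.exists() && !dir.mkdirs()) {
  throw new IOException("Unable to create directory " + dir);
}
Options options = new Options();
options.createIfMissing(true);
DB db = JniDBFactory.factory.open(dir, options);
{code}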






[jira] [Updated] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12912:
--
Status: Open  (was: Patch Available)

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch, 
> HDFS-12912-HDFS-9806.002.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from a leveldb-based alias map 
> created by {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these alias maps must be specified using local 
> paths and not as URIs, as currently shown in the documentation 
> ({{HdfsProvidedStorage.md}}).
> This JIRA is to fix these issues. 






[jira] [Commented] (HDFS-12802) RBF: Control MountTableResolver cache size

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287174#comment-16287174
 ] 

genericqa commented on HDFS-12802:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 23s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}134m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}179m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.server.federation.resolver.TestMountTableResolver |
|   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
|   | hadoop.hdfs.web.TestWebHDFSForHA |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287166#comment-16287166
 ] 

genericqa commented on HDFS-12895:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12895 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901602/HDFS-12895.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux a19e03f3016a 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 55fc2d6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22361/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| javadoc | 

[jira] [Commented] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287148#comment-16287148
 ] 

genericqa commented on HDFS-12912:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
16s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
21s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 33s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}145m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
46s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}237m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDecommissionWithStriped |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestDefaultBlockPlacementPolicy |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.TestSecurityTokenEditLog |
|   | hadoop.hdfs.TestDFSStorageStateRecovery |
|   | hadoop.hdfs.tools.TestDebugAdmin |
|   | 

[jira] [Comment Edited] (HDFS-12833) Distcp : Update the usage of delete option for dependency with update and overwrite option

2017-12-11 Thread usharani (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287141#comment-16287141
 ] 

usharani edited comment on HDFS-12833 at 12/12/17 5:31 AM:
---

[~surendrasingh] Attached a patch for branch-2.


was (Author: peruguusha):
[~surendrasingh] Thanks for reporting. Uploaded a patch for branch-2.

> Distcp : Update the usage of delete option for dependency with update and 
> overwrite option
> --
>
> Key: HDFS-12833
> URL: https://issues.apache.org/jira/browse/HDFS-12833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Attachments: HDFS-12833-branch-2.001.patch, HDFS-12833.001.patch, 
> HDFS-12833.patch
>
>
> Basically, the delete option is applicable only with the update or overwrite 
> options. I tried as per the usage message and am getting the below exception.
> {noformat}
> bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
> 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
> java.lang.IllegalArgumentException: Delete missing is applicable only with 
> update or overwrite options
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
> at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> Invalid arguments: Delete missing is applicable only with update or overwrite 
> options
> usage: distcp OPTIONS [source_path...] 
>   OPTIONS
>  -append   Reuse existing data in target files and
>append new data to them if possible
>  -asyncShould distcp execution be blocking
>  -atomic   Commit all changes or none
>  -bandwidth   Specify bandwidth per map in MB, accepts
>bandwidth as a fraction.
>  -blocksperchunk  If set to a positive value, files with more
>blocks than this value will be split into
>chunks of  blocks to be
>transferred in parallel, and reassembled on
>the destination. By default,
> is 0 and the files will be
>transmitted in their entirety without
>splitting. This switch is only applicable
>when the source file system implements
>getBlockLocations method and the target
>file system implements concat method
>  -copybuffersize  Size of the copy buffer to use. By default
> is 8192B.
>  -delete   Delete from target, files missing in source
>  -diffUse snapshot diff report to identify the
>difference between source and target
> {noformat}
> Even the documentation is not updated with the proper usage.
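
For reference, the exception itself points at the valid combinations; -delete 
is accepted together with -update or -overwrite, e.g.:
{noformat}
./hadoop distcp -update -delete /Dir1/distcpdir /Dir/distcpdir5
{noformat}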






[jira] [Commented] (HDFS-12833) Distcp : Update the usage of delete option for dependency with update and overwrite option

2017-12-11 Thread usharani (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287141#comment-16287141
 ] 

usharani commented on HDFS-12833:
-

[~surendrasingh] Thanks for reporting. Uploaded a patch for branch-2.

> Distcp : Update the usage of delete option for dependency with update and 
> overwrite option
> --
>
> Key: HDFS-12833
> URL: https://issues.apache.org/jira/browse/HDFS-12833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Attachments: HDFS-12833-branch-2.001.patch, HDFS-12833.001.patch, 
> HDFS-12833.patch
>
>
> Basically, the delete option is applicable only with the update or overwrite 
> options. I tried as per the usage message and am getting the below exception.
> {noformat}
> bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
> 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
> java.lang.IllegalArgumentException: Delete missing is applicable only with 
> update or overwrite options
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
> at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> Invalid arguments: Delete missing is applicable only with update or overwrite 
> options
> usage: distcp OPTIONS [source_path...] 
>   OPTIONS
>  -append   Reuse existing data in target files and
>append new data to them if possible
>  -asyncShould distcp execution be blocking
>  -atomic   Commit all changes or none
>  -bandwidth   Specify bandwidth per map in MB, accepts
>bandwidth as a fraction.
>  -blocksperchunk  If set to a positive value, files with more
>blocks than this value will be split into
>chunks of  blocks to be
>transferred in parallel, and reassembled on
>the destination. By default,
> is 0 and the files will be
>transmitted in their entirety without
>splitting. This switch is only applicable
>when the source file system implements
>getBlockLocations method and the target
>file system implements concat method
>  -copybuffersize  Size of the copy buffer to use. By default
> is 8192B.
>  -delete   Delete from target, files missing in source
>  -diffUse snapshot diff report to identify the
>difference between source and target
> {noformat}
> Even the documentation is not updated with the proper usage.






[jira] [Updated] (HDFS-12833) Distcp : Update the usage of delete option for dependency with update and overwrite option

2017-12-11 Thread usharani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

usharani updated HDFS-12833:

Attachment: HDFS-12833-branch-2.001.patch

> Distcp : Update the usage of delete option for dependency with update and 
> overwrite option
> --
>
> Key: HDFS-12833
> URL: https://issues.apache.org/jira/browse/HDFS-12833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Attachments: HDFS-12833-branch-2.001.patch, HDFS-12833.001.patch, 
> HDFS-12833.patch
>
>
> Basically, the delete option is applicable only with the update or overwrite 
> options. I tried as per the usage message and am getting the below exception.
> {noformat}
> bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
> 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
> java.lang.IllegalArgumentException: Delete missing is applicable only with 
> update or overwrite options
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
> at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> Invalid arguments: Delete missing is applicable only with update or overwrite 
> options
> usage: distcp OPTIONS [source_path...] 
>   OPTIONS
>  -append   Reuse existing data in target files and
>append new data to them if possible
>  -asyncShould distcp execution be blocking
>  -atomic   Commit all changes or none
>  -bandwidth   Specify bandwidth per map in MB, accepts
>bandwidth as a fraction.
>  -blocksperchunk  If set to a positive value, files with more
>blocks than this value will be split into
>chunks of  blocks to be
>transferred in parallel, and reassembled on
>the destination. By default,
> is 0 and the files will be
>transmitted in their entirety without
>splitting. This switch is only applicable
>when the source file system implements
>getBlockLocations method and the target
>file system implements concat method
>  -copybuffersize  Size of the copy buffer to use. By default
> is 8192B.
>  -delete   Delete from target, files missing in source
>  -diffUse snapshot diff report to identify the
>difference between source and target
> {noformat}
> Even the documentation is not updated with the proper usage.






[jira] [Commented] (HDFS-10453) ReplicationMonitor thread could stuck for long time due to the race between replication and delete of same file in a large cluster.

2017-12-11 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287059#comment-16287059
 ] 

Xiang Li commented on HDFS-10453:
-

[~hexiaoqiao], thanks for the patch and quick update!
Do we need to call 
{{neededReplications.decrementReplicationIndex(priority)}} after 
{{neededReplications.remove(rw.block, rw.priority)}}
to make it like
{code}
if (rw.block.getNumBytes() == BlockCommand.NO_ACK) {
  // remove from neededReplications because the block has been deleted
  neededReplications.remove(rw.block, rw.priority);
  neededReplications.decrementReplicationIndex(rw.priority); // <-- here
}
{code}
I am not quite familiar with this code, please advise.


> ReplicationMonitor thread could stuck for long time due to the race between 
> replication and delete of same file in a large cluster.
> ---
>
> Key: HDFS-10453
> URL: https://issues.apache.org/jira/browse/HDFS-10453
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.1, 2.5.2, 2.7.1, 2.6.4
>Reporter: He Xiaoqiao
> Attachments: HDFS-10453-branch-2.001.patch, 
> HDFS-10453-branch-2.003.patch, HDFS-10453-branch-2.7.004.patch, 
> HDFS-10453.001.patch
>
>
> The ReplicationMonitor thread could get stuck for a long time and lose data with 
> low probability. Consider the typical scenario:
> (1) create and close a file with the default replicas (3);
> (2) increase the replication (to 10) of the file;
> (3) delete the file while the ReplicationMonitor is scheduling blocks belonging 
> to that file for replication.
> If the ReplicationMonitor stall reappears, the NameNode will print logs like:
> {code:xml}
> 2016-04-19 10:20:48,083 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> ..
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough 
> replicas: expected size is 7 but only 0 storage types can be selected 
> (replication=10, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK, 
> DISK, DISK, DISK, DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) All required storage types are unavailable:  
> unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> This is because 2 threads (#NameNodeRpcServer and #ReplicationMonitor) 
> process the same block at the same moment.
> (1) ReplicationMonitor#computeReplicationWorkForBlocks gets blocks to 
> replicate and leaves the global lock.
> (2) FSNamesystem#delete is invoked to delete the blocks, then clears the 
> references in blocksmap, needReplications, etc. The block's NumBytes is set to 
> NO_ACK (Long.MAX_VALUE), which is used to indicate that the block deletion does 
> not need an explicit ACK from the node. 
> (3) ReplicationMonitor#computeReplicationWorkForBlocks continues to 
> chooseTargets for the same blocks, and no node will be selected after traversing 
> the whole cluster, because no node choice satisfies the goodness criteria 
> (the remaining space must reach the required size Long.MAX_VALUE). 
> During stage (3) the ReplicationMonitor is stuck for a long time, especially in a 
> large cluster. invalidateBlocks & neededReplications continue to grow with no 
> consumers; at worst it will lose data.
> This can mostly be avoided by skipping chooseTarget for BlockCommand.NO_ACK 
> blocks and removing them from neededReplications.





[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-12-11 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287093#comment-16287093
 ] 

Anoop Sam John commented on HDFS-10285:
---

As an HBase developer (HDFS user), I see SPS not as a new feature but as an attempt 
to fix some of the existing limitations/issues in the HSM feature. So as a user, IMHO, 
asking the user to run a new process to fix the issue would be too much. Again, I 
cannot speak to the HDFS implementation or theory.

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-10285-consolidated-merge-patch-02.patch, 
> HDFS-10285-consolidated-merge-patch-03.patch, 
> HDFS-SPS-TestReport-20170708.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf, 
> Storage-Policy-Satisfier-in-HDFS-Oct-26-2017.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policy. These 
> policies can be set on a directory/file to specify the user's preference for 
> where to store the physical blocks. When the user sets the storage policy before 
> writing data, the blocks can take advantage of the storage policy preferences 
> and the physical blocks are stored accordingly. 
> If the user sets the storage policy after writing and completing the file, then 
> the blocks will have been written with the default storage policy (nothing but 
> DISK). The user has to run the ‘Mover tool’ explicitly, specifying all such 
> file names as a list. In some distributed system scenarios (ex: HBase) it 
> would be difficult to collect all the files and run the tool, as different 
> nodes can write files separately and files can have different paths.
> Another scenario is when the user renames a file from a directory with an 
> effective storage policy (inherited from the parent directory) to a directory 
> with a different storage policy: the rename will not copy the inherited storage 
> policy from the source, so the file takes effect from the destination file/dir 
> parent storage policy. This rename operation is just a metadata change in the 
> Namenode; the physical blocks still remain with the source storage policy.
> So, tracking all such business-logic-based file names from distributed 
> nodes (ex: region servers) and running the Mover tool could be difficult for 
> admins. Here the proposal is to provide an API in the Namenode itself to trigger 
> storage policy satisfaction. A daemon thread inside the Namenode should track 
> such calls and send movement commands to the DNs. 
> Will post the detailed design thoughts document soon.






[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-11 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287039#comment-16287039
 ] 

Yiqun Lin commented on HDFS-12895:
--

Thanks for the review, [~elgoiri]. I agree with your suggestions.
Attached the updated patch. Please have a look.

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table entry has its owner, group name and permissions.
> For the mount table permissions (FsPermission), we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command for the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add   
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755*.
> If we want to update the ACL info of a specified mount table, just execute the 
> add command again. This command not only adds a new mount table but also 
> updates an existing one once it finds the given mount table exists.
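
A sketch of the kind of check this implies, using the real {{FsPermission}} and 
{{FsAction}} classes ({{checkMountTableAccess}} and its arguments are 
hypothetical, not from the patch):
{code:java}
import java.util.Arrays;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

// Hypothetical helper: the owner is checked against the user bits, the
// group against the group bits, everyone else against the "other" bits.
static boolean checkMountTableAccess(String user, String[] userGroups,
    String owner, String group, FsPermission perm, FsAction access) {
  if (user.equals(owner)) {
    return perm.getUserAction().implies(access);
  }
  if (Arrays.asList(userGroups).contains(group)) {
    return perm.getGroupAction().implies(access);
  }
  return perm.getOtherAction().implies(access);
}
{code}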






[jira] [Updated] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-11 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12895:
-
Attachment: HDFS-12895.003.patch

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch
>
>
> Adding ACL support for Mount Table management. Following is the initial 
> design of ACL control for mount table management.
> Each mount table entry has an owner, a group name and permissions.
> For the mount table permissions, we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: this won't be used.
> The mount table add command will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755*.
> If we want to update the ACL info of a specified mount table entry, we just 
> execute the add command again. The command not only adds a new mount table 
> entry but also updates an existing entry once it finds the given mount 
> table already exists.






[jira] [Updated] (HDFS-12882) Support full open(PathHandle) contract in HDFS

2017-12-11 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12882:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

Thanks for the reviews, [~elgoiri]. I committed this.

> Support full open(PathHandle) contract in HDFS
> --
>
> Key: HDFS-12882
> URL: https://issues.apache.org/jira/browse/HDFS-12882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Fix For: 3.1.0
>
> Attachments: HDFS-12882.00.patch, HDFS-12882.00.salient.txt, 
> HDFS-12882.01.patch, HDFS-12882.02.patch, HDFS-12882.03.patch, 
> HDFS-12882.04.patch, HDFS-12882.05.patch, HDFS-12882.05.patch
>
>
> HDFS-7878 added support for {{open(PathHandle)}}, but it only partially 
> implemented the semantics specified in the contract (i.e., open-by-inodeID). 
> HDFS should implement all permutations of the default options for 
> {{PathHandle}}.
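
For reference, a small sketch of the client-side contract this completes, 
assuming the public {{FileSystem}} PathHandle API; the path is an example only:

{code:java}
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathHandle;

public class PathHandleSketch {
  static void reopen(FileSystem fs) throws Exception {
    FileStatus stat = fs.getFileStatus(new Path("/data/file1"));
    // Obtain a handle; Options.HandleOpt permutations control whether a
    // renamed or modified file must still resolve, or the open must fail
    // rather than silently return different content.
    PathHandle handle = fs.getPathHandle(stat);
    try (FSDataInputStream in = fs.open(handle)) {
      in.read();  // read as with a normal stream
    }
  }
}
{code}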






[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-11 Thread Stephen O'Donnell (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286300#comment-16286300
 ] 

Stephen O'Donnell commented on HDFS-12910:
--

I would really like the messages that go to System.err in this issue to get 
into the DN role log, as from a support perspective, users don't tend to go 
looking for the jsvc.err file and hence cannot find this issue easily when it 
occurs. However, I am not sure that is feasible here. 

When jsvc runs the methods in "SecureDataNodeStarter", they run as root, which 
allows binding to ports under 1024. Then, when the DN proper starts, the user 
is switched to hdfs.

So while we could use the usual log4j logger for these messages, the role log 
would initially be created as root, and the DN running as the hdfs user would 
then not be able to write to it. I guess that is why the pattern of writing 
messages to System.err is already used in SecureDataNodeStarter - to avoid bad 
ownership on the role log.

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch, HDFS-12910.002.patch
>
>
> When running a secure data node, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports, causing the DN to fail 
> to start (e.g. the NFS service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to, for example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened streaming server at /0.0.0.0:1004
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> We should catch the BindException, log the problem address:port, and then 
> re-throw the exception to make the problem clearer.
> I will upload a patch for this.
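
For illustration, a minimal sketch of what this could look like; the helper 
name and message wording here are assumptions, not the actual patch:

{code:java}
import java.io.IOException;
import java.net.BindException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class BindSketch {
  static ServerSocketChannel openAndBind(InetSocketAddress addr, int backlog)
      throws IOException {
    ServerSocketChannel channel = ServerSocketChannel.open();
    try {
      channel.socket().bind(addr, backlog);
    } catch (BindException e) {
      channel.close();
      // Re-throw with the failing address:port so the jsvc error output
      // makes the conflicting port obvious.
      BindException enriched = new BindException(
          "Address already in use: could not bind to " + addr);
      enriched.initCause(e);
      throw enriched;
    }
    return channel;
  }
}
{code}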






[jira] [Updated] (HDFS-12917) Fix description errors in testErasureCodingConf.xml

2017-12-11 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-12917:
---
Attachment: HADOOP-12917.patch

> Fix description errors in testErasureCodingConf.xml
> ---
>
> Key: HDFS-12917
> URL: https://issues.apache.org/jira/browse/HDFS-12917
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
> Attachments: HADOOP-12917.patch
>
>
> In testErasureCodingConf.xml, there are two test cases whose description 
> may be "getPolicy : get EC policy information at specified path, whick have 
> an EC Policy".






[jira] [Created] (HDFS-12917) Fix description errors in testErasureCodingConf.xml

2017-12-11 Thread chencan (JIRA)
chencan created HDFS-12917:
--

 Summary: Fix description errors in testErasureCodingConf.xml
 Key: HDFS-12917
 URL: https://issues.apache.org/jira/browse/HDFS-12917
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: chencan


In testErasureCodingConf.xml, there are two test cases whose description may be 
"getPolicy : get EC policy information at specified path, whick have an EC Policy".






[jira] [Commented] (HDFS-12914) Block report leases cause missing blocks until next report

2017-12-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286285#comment-16286285
 ] 

Daryn Sharp commented on HDFS-12914:


Had a cluster with a job causing unusually heavy IO.  DNs became moderately 
congested with commands.  Eventually one was declared dead.  Upon rejoining a 
few mins later, the FBR was rejected with "because the DN is not in the pending 
set".  The replication storm in conjunction with the bad job caused nodes to go 
dead like dominoes.  Some that rejoined had their FBR rejected with "is not 
valid for unknown datanode" in addition to "because the DN is not in the 
pending set".

On a 2400 node cluster, ~400 nodes were temporarily dead.  304 had their FBRs 
rejected when rejoining.  80k blocks were missing.  Had to force FBRs to bring 
the blocks back.

I have no clue why a rejected report clears the storage/node's stale state!

> Block report leases cause missing blocks until next report
> --
>
> Key: HDFS-12914
> URL: https://issues.apache.org/jira/browse/HDFS-12914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> {{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for 
> conditions such as "unknown datanode", "not in pending set", "lease has 
> expired", wrong lease id, etc.  Lease rejection does not throw an exception; 
> it returns false, which bubbles up to {{NameNodeRpcServer#blockReport}} and 
> is interpreted as {{noStaleStorages}}.
> A re-registering node whose FBR is rejected due to an invalid lease becomes 
> active with _no blocks_.  A replication storm ensues, possibly causing DNs to 
> temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
> re-registration.  The cluster will have many "missing blocks" until the DN's 
> next FBR is sent and/or forced.
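
A condensed, hypothetical view of the flow described above (the real code in 
{{BlockManager}} and {{NameNodeRpcServer}} differs in detail; names here are 
illustrative):

{code:java}
// Fragment, not runnable on its own: a condensed view of the reported flow.
boolean noStaleStorages = false;
if (blockReportLeaseManager.checkLease(node, now, leaseId)) {
  // Lease accepted: the FBR is applied and storage state is updated.
  noStaleStorages = processReport(node, storage, reportedBlocks);
}
// Lease rejected: checkLease() merely returned false, no exception thrown.
// The caller sees noStaleStorages == false, the report has been dropped, and
// the re-registered DN stays active with no blocks until its next FBR.
{code}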






[jira] [Commented] (HDFS-12833) Distcp : Update the usage of delete option for dependency with update and overwrite option

2017-12-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286431#comment-16286431
 ] 

Hudson commented on HDFS-12833:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13354 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13354/])
HDFS-12833. Distcp : Update the usage of delete option for dependency 
(surendralilhore: rev 00129c5314dcd9bafa8138dbbcd51a173edbf098)
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
* (edit) hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm


> Distcp : Update the usage of delete option for dependency with update and 
> overwrite option
> --
>
> Key: HDFS-12833
> URL: https://issues.apache.org/jira/browse/HDFS-12833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Attachments: HDFS-12833.001.patch, HDFS-12833.patch
>
>
> Basically, the -delete option is applicable only with the -update or 
> -overwrite options. When I tried it as per the usage message, I got the 
> below exception.
> {noformat}
> bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
> 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
> java.lang.IllegalArgumentException: Delete missing is applicable only with 
> update or overwrite options
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
> at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> Invalid arguments: Delete missing is applicable only with update or overwrite 
> options
> usage: distcp OPTIONS [source_path...] <target_path>
>   OPTIONS
>  -append                Reuse existing data in target files and
>                         append new data to them if possible
>  -async                 Should distcp execution be blocking
>  -atomic                Commit all changes or none
>  -bandwidth <arg>       Specify bandwidth per map in MB, accepts
>                         bandwidth as a fraction.
>  -blocksperchunk <arg>  If set to a positive value, files with more
>                         blocks than this value will be split into
>                         chunks of <blocksperchunk> blocks to be
>                         transferred in parallel, and reassembled on
>                         the destination. By default,
>                         <blocksperchunk> is 0 and the files will be
>                         transmitted in their entirety without
>                         splitting. This switch is only applicable
>                         when the source file system implements
>                         getBlockLocations method and the target
>                         file system implements concat method
>  -copybuffersize <arg>  Size of the copy buffer to use. By default,
>                         <copybuffersize> is 8192B.
>  -delete                Delete from target, files missing in source
>  -diff <arg>            Use snapshot diff report to identify the
>                         difference between source and target
> {noformat}
> Even the documentation does not show the proper usage.
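
The check behind that message presumably looks something like this (a 
hypothetical reconstruction of {{DistCpOptions.Builder#validate}} from the 
stack trace above; the field names are assumptions):

{code:java}
// Fragment of a validate() method, for illustration only.
if (deleteMissing && !(syncFolder || overwrite)) {
  throw new IllegalArgumentException(
      "Delete missing is applicable only with update or overwrite options");
}
{code}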






[jira] [Commented] (HDFS-12855) Fsck violates namesystem locking

2017-12-11 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286398#comment-16286398
 ] 

Xiao Chen commented on HDFS-12855:
--

Thanks for creating the jira and discussions folks. Makes sense to me.

I recall [~daryn] mentioned Yahoo runs fsck on / daily on their clusters - 
impressive, but surprising if this has never been run into... Daryn, is there 
an existing patch to take care of this? If so, could you please share it to 
benefit us all? :)

> Fsck violates namesystem locking 
> -
>
> Key: HDFS-12855
> URL: https://issues.apache.org/jira/browse/HDFS-12855
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.4
>Reporter: Konstantin Shvachko
>Assignee: Manoj Govindassamy
>
> {{NamenodeFsck}} accesses {{FSNamesystem}} structures, such as INodes and 
> BlockInfo, without holding a lock. See e.g. {{NamenodeFsck.blockIdCK()}}.
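
A minimal sketch of the locking pattern a fix would presumably follow; the 
lock scope and granularity are exactly the open questions here:

{code:java}
// Fragment, for illustration: wrap the inspection in the namesystem read lock.
namesystem.readLock();
try {
  // Look up INodes / BlockInfo, as NamenodeFsck.blockIdCK() currently does
  // without holding the lock.
} finally {
  namesystem.readUnlock();
}
{code}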






[jira] [Updated] (HDFS-12802) RBF: Control MountTableResolver cache size

2017-12-11 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12802:
---
Attachment: HDFS-12802.001.patch

> RBF: Control MountTableResolver cache size
> --
>
> Key: HDFS-12802
> URL: https://issues.apache.org/jira/browse/HDFS-12802
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12802.000.patch, HDFS-12802.001.patch
>
>
> Currently, the {{MountTableResolver}} caches the resolutions for the 
> {{PathLocation}}. However, this cache can grow without limit if there are a 
> lot of unique paths, and some of these cached resolutions might never be 
> used again.
> The {{MountTableResolver}} should clean the {{locationCache}} periodically.
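
One possible shape of the fix, as a sketch only: a size-bounded Guava cache. 
The limit, the eviction policy and the {{maxCacheSize}} parameter are 
placeholders, not the actual patch:

{code:java}
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.hadoop.hdfs.server.federation.resolver.PathLocation;

public class LocationCacheSketch {
  static Cache<String, PathLocation> build(long maxCacheSize) {
    return CacheBuilder.newBuilder()
        .maximumSize(maxCacheSize)                // cap the number of unique paths
        .expireAfterAccess(10, TimeUnit.MINUTES)  // drop resolutions not used recently
        .build();
  }
}
{code}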






[jira] [Commented] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287009#comment-16287009
 ] 

Andrew Wang commented on HDFS-12907:


The xattr change makes sense given the scope of this change.

We should also still validate that users who don't have read access cannot 
access the raw xattrs, if we aren't doing so already.

> Allow read-only access to reserved raw for non-superusers
> -
>
> Key: HDFS-12907
> URL: https://issues.apache.org/jira/browse/HDFS-12907
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Rushabh S Shah
> Attachments: HDFS-12907.001.patch, HDFS-12907.002.patch, 
> HDFS-12907.patch
>
>
> HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
> contents of EZ files.  In the simplest sense it doesn't return the FE info in 
> the {{LocatedBlocks}} so the dfs client doesn't try to decrypt the data.  
> This facilitates allowing tools like distcp to copy raw bytes.
> Access to the raw hierarchy is restricted to superusers.  This seems like an 
> overly broad restriction designed to prevent non-admins from munging the EZ 
> related xattrs.  I believe we should relax the restriction to allow 
> non-admins to perform read-only operations.  Allowing non-superusers to 
> easily read the raw bytes will be extremely useful for regular users, esp. 
> for enabling webhdfs client-side encryption.
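
For context, what a read-only raw access looks like from a client 
(illustrative only; the path is made up, and today this requires superuser):

{code:java}
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class RawReadSketch {
  static void copyRaw(FileSystem fs) throws Exception {
    // Reads the still-encrypted bytes; no FE info is returned for this path,
    // so the client does not attempt decryption.
    Path raw = new Path("/.reserved/raw/zone1/file1");
    try (FSDataInputStream in = fs.open(raw)) {
      IOUtils.copyBytes(in, System.out, 4096, false);
    }
  }
}
{code}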






[jira] [Commented] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286988#comment-16286988
 ] 

genericqa commented on HDFS-12000:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m  
2s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-client generated 4 new 
+ 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m  
1s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 39 new + 1 
unchanged - 0 fixed = 40 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
44s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}138m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}203m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.cblock.TestBufferManager |
|   | 

[jira] [Commented] (HDFS-12802) RBF: Control MountTableResolver cache size

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286952#comment-16286952
 ] 

genericqa commented on HDFS-12802:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
9s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestSetrepIncreasing |
|   | hadoop.hdfs.server.federation.resolver.TestMountTableResolver |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.TestReplication |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestHdfsAdmin |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestGetBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestRenameWhileOpen |
|   | hadoop.hdfs.TestClientReportBadBlock |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | 

[jira] [Commented] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286935#comment-16286935
 ] 

genericqa commented on HDFS-12818:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
5s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 343 unchanged - 10 fixed = 343 total (was 353) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.tools.TestDFSHAAdminMiniCluster |
|   | hadoop.hdfs.TestAbandonBlock |
|   | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.qjournal.client.TestQJMWithFaults |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.TestFileAppend2 |
|   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
|   | hadoop.hdfs.TestQuota |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestListFilesInFileContext |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
|   | 

[jira] [Commented] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286923#comment-16286923
 ] 

genericqa commented on HDFS-12881:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
49s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
23s{color} | {color:green} root generated 0 new + 1232 unchanged - 4 fixed = 
1232 total (was 1236) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 24s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}133m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 27s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 26s{color} 
| {color:red} hadoop-mapreduce-client-app in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}267m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.net.TestNetworkTopology |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | 

[jira] [Commented] (HDFS-12891) Do not invalidate blocks if toInvalidate is empty

2017-12-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286906#comment-16286906
 ] 

Hudson commented on HDFS-12891:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13357 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13357/])
HDFS-12891. Do not invalidate blocks if toInvalidate is empty. (weichiu: rev 
55fc2d6485702a99c6d4bb261a720d1f0498af2b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java
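
Roughly, the shape of the fix (a paraphrase for illustration; the commit above 
is authoritative):

{code:java}
// Fragment, for illustration: skip scheduling when there is nothing to
// invalidate, which avoids tripping the assertion in
// DatanodeDescriptor#addBlocksToBeInvalidated.
if (toInvalidate.isEmpty()) {
  return null;
}
dn.addBlocksToBeInvalidated(toInvalidate);
{code}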


> Do not invalidate blocks if toInvalidate is empty
> -
>
> Key: HDFS-12891
> URL: https://issues.apache.org/jira/browse/HDFS-12891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12891.01.patch, HDFS-12891.02.patch
>
>
> {code:java}
> java.lang.AssertionError: Test resulted in an unexpected exit
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:147)
> :
> :
> 2017-10-19 21:39:40,068 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> 2017-10-19 21:39:40,068 [main] FATAL hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1968)) - Test resulted in an unexpected exit
> 1: java.lang.AssertionError
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4437)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlocksToBeInvalidated(DatanodeDescriptor.java:641)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.invalidateWork(InvalidateBlocks.java:299)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.invalidateWorkForOneNode(BlockManager.java:4246)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeInvalidateWork(BlockManager.java:1736)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4418)
>   ... 1 more
> {code}






[jira] [Commented] (HDFS-12626) Ozone : delete open key entries that will no longer be closed

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286908#comment-16286908
 ] 

genericqa commented on HDFS-12626:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
4s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
41s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12626 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901564/HDFS-12626-HDFS-7240.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux dce138f04118 

[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Attachment: HDFS-9806.001.patch

Posting a consolidated patch for the changes in the apache/HDFS-9806 branch.

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.






[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Status: Patch Available  (was: Open)

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.






[jira] [Updated] (HDFS-12891) Do not invalidate blocks if toInvalidate is empty

2017-12-11 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12891:
---
   Resolution: Fixed
Fix Version/s: 3.0.1
   Status: Resolved  (was: Patch Available)

Committed patch 02 to trunk (3.1.0) and branch-3.0 (3.0.1).
Thanks [~zvenczel] for identifying and fixing the issue!

Note: I updated the jira summary. This is a bug in the code that was caught by 
a test, so I wanted the summary to reflect that fact.

> Do not invalidate blocks if toInvalidate is empty
> -
>
> Key: HDFS-12891
> URL: https://issues.apache.org/jira/browse/HDFS-12891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12891.01.patch, HDFS-12891.02.patch
>
>
> {code:java}
> java.lang.AssertionError: Test resulted in an unexpected exit
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:147)
> :
> :
> 2017-10-19 21:39:40,068 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> 2017-10-19 21:39:40,068 [main] FATAL hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1968)) - Test resulted in an unexpected exit
> 1: java.lang.AssertionError
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4437)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlocksToBeInvalidated(DatanodeDescriptor.java:641)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.invalidateWork(InvalidateBlocks.java:299)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.invalidateWorkForOneNode(BlockManager.java:4246)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeInvalidateWork(BlockManager.java:1736)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4418)
>   ... 1 more
> {code}






[jira] [Comment Edited] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-11 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286871#comment-16286871
 ] 

Bharat Viswanadham edited comment on HDFS-12916 at 12/12/17 12:42 AM:
--

After adding below jars to classpath, commands started working

commons-logging-1.1.3.jar
htrace-core4-4.1.0-incubating.jar
slf4j-api-1.7.25.jar
slf4j-log4j12-1.7.25.jar
log4j-1.2.17.jar



was (Author: bharatviswa):
After adding below jars, commands started working

commons-logging-1.1.3.jar
htrace-core4-4.1.0-incubating.jar
slf4j-api-1.7.25.jar
slf4j-log4j12-1.7.25.jar
log4j-1.2.17.jar


> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]






[jira] [Updated] (HDFS-12891) Do not invalidate blocks if toInvalidate is empty

2017-12-11 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12891:
---
Fix Version/s: 3.1.0

> Do not invalidate blocks if toInvalidate is empty
> -
>
> Key: HDFS-12891
> URL: https://issues.apache.org/jira/browse/HDFS-12891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Fix For: 3.1.0
>
> Attachments: HDFS-12891.01.patch, HDFS-12891.02.patch
>
>
> {code:java}
> java.lang.AssertionError: Test resulted in an unexpected exit
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:147)
> :
> :
> 2017-10-19 21:39:40,068 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> 2017-10-19 21:39:40,068 [main] FATAL hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1968)) - Test resulted in an unexpected exit
> 1: java.lang.AssertionError
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4437)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlocksToBeInvalidated(DatanodeDescriptor.java:641)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.invalidateWork(InvalidateBlocks.java:299)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.invalidateWorkForOneNode(BlockManager.java:4246)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeInvalidateWork(BlockManager.java:1736)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4418)
>   ... 1 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-11 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286871#comment-16286871
 ] 

Bharat Viswanadham edited comment on HDFS-12916 at 12/12/17 12:42 AM:
--

After adding the below jars to the classpath, the commands started working:

commons-logging-1.1.3.jar
htrace-core4-4.1.0-incubating.jar
slf4j-api-1.7.25.jar
slf4j-log4j12-1.7.25.jar
log4j-1.2.17.jar


cc [~busbey]


was (Author: bharatviswa):
After adding below jars to classpath, commands started working

commons-logging-1.1.3.jar
htrace-core4-4.1.0-incubating.jar
slf4j-api-1.7.25.jar
slf4j-log4j12-1.7.25.jar
log4j-1.2.17.jar


> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12912:
--
Attachment: HDFS-12912-HDFS-9806.002.patch

Posting a slightly modified patch fixing the findbugs warning, the checkstyle issues, and the failed test.
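
For context, the findbugs warning flagged the ignored return value of 
java.io.File#mkdirs() in {{LevelDBFileRegionAliasMap#createDB}}. A minimal 
sketch of the usual shape of such a fix, with {{dbPath}} assumed as the store 
location; illustrative only, not the actual patch:
{code:java}
// Hypothetical sketch: fail loudly instead of ignoring mkdirs()
File dbDir = new File(dbPath);
if (!dbDir.exists() && !dbDir.mkdirs()) {
  throw new IOException("Unable to create directory for leveldb store: " + dbDir);
}
{code}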

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch, 
> HDFS-12912-HDFS-9806.002.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from leveldb-based alias map 
> created from {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these aliasmaps must be specified using local 
> paths and not as URIs as currently shown in the documentation 
> ({{HdfsProvidedStorage.md}}).
> This JIRA is to fix these issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-11 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286871#comment-16286871
 ] 

Bharat Viswanadham edited comment on HDFS-12916 at 12/12/17 12:54 AM:
--

After adding the below jars to the classpath, the commands started working:

commons-logging-1.1.3.jar
htrace-core4-4.1.0-incubating.jar
slf4j-api-1.7.25.jar
slf4j-log4j12-1.7.25.jar
log4j-1.2.17.jar

So, do we need to add the above jars and shade them into the hadoop-client-runtime jar?

cc [~busbey]


was (Author: bharatviswa):
After adding below jars to classpath, commands started working

commons-logging-1.1.3.jar
htrace-core4-4.1.0-incubating.jar
slf4j-api-1.7.25.jar
slf4j-log4j12-1.7.25.jar
log4j-1.2.17.jar


cc [~busbey]

> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12912:
--
Status: Patch Available  (was: Open)

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch, 
> HDFS-12912-HDFS-9806.002.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from leveldb-based alias map 
> created from {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these aliasmaps must be specified using local 
> paths and not as URIs as currently shown in the documentation 
> ({{HdfsProvidedStorage.md}}).
> This JIRA is to fix these issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12912:
--
Status: Open  (was: Patch Available)

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch, 
> HDFS-12912-HDFS-9806.002.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from leveldb-based alias map 
> created from {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these aliasmaps must be specified using local 
> paths and not as URIs as currently shown in the documentation 
> ({{HdfsProvidedStorage.md}}).
> This JIRA is to fix these issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12891) Do not invalidate blocks if toInvalidate is empty

2017-12-11 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12891:
---
Summary: Do not invalidate blocks if toInvalidate is empty  (was: 
TestDNFencingWithReplication.testFencingStress: java.lang.AssertionError)

> Do not invalidate blocks if toInvalidate is empty
> -
>
> Key: HDFS-12891
> URL: https://issues.apache.org/jira/browse/HDFS-12891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Attachments: HDFS-12891.01.patch, HDFS-12891.02.patch
>
>
> {code:java}
> java.lang.AssertionError: Test resulted in an unexpected exit
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:147)
> :
> :
> 2017-10-19 21:39:40,068 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> 2017-10-19 21:39:40,068 [main] FATAL hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1968)) - Test resulted in an unexpected exit
> 1: java.lang.AssertionError
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4437)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlocksToBeInvalidated(DatanodeDescriptor.java:641)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.invalidateWork(InvalidateBlocks.java:299)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.invalidateWorkForOneNode(BlockManager.java:4246)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeInvalidateWork(BlockManager.java:1736)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4418)
>   ... 1 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-11 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286871#comment-16286871
 ] 

Bharat Viswanadham commented on HDFS-12916:
---

After adding the below jars, the commands started working:

commons-logging-1.1.3.jar
htrace-core4-4.1.0-incubating.jar
slf4j-api-1.7.25.jar
slf4j-log4j12-1.7.25.jar
log4j-1.2.17.jar


> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12875) RBF: Complete logic for -readonly option of dfsrouteradmin add command

2017-12-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286754#comment-16286754
 ] 

Hudson commented on HDFS-12875:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13356 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13356/])
HDFS-12875. RBF: Complete logic for -readonly option of dfsrouteradmin 
(inigoiri: rev 5cd1056ad77a2ebb0466e7bc597b57f6fe30)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/RouterDFSCluster.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSRouterFederation.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java


> RBF: Complete logic for -readonly option of dfsrouteradmin add command
> --
>
> Key: HDFS-12875
> URL: https://issues.apache.org/jira/browse/HDFS-12875
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Íñigo Goiri
>  Labels: RBF
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-12875.001.patch, HDFS-12875.002.patch, 
> HDFS-12875.003.patch, HDFS-12875.004.patch, HDFS-12875.005.patch, 
> HDFS-12875.006.patch, HDFS-12875.007.patch, HDFS-12875.008.patch
>
>
> The dfsrouteradmin has an option for readonly mount points but this is not 
> implemented. We should add a special mount point which allows reading but 
> not writing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12891) TestDNFencingWithReplication.testFencingStress: java.lang.AssertionError

2017-12-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286803#comment-16286803
 ] 

Wei-Chiu Chuang commented on HDFS-12891:


Filed HDFS-12915 for the findbugs warning that was found. Will commit the patch soon.

> TestDNFencingWithReplication.testFencingStress: java.lang.AssertionError
> 
>
> Key: HDFS-12891
> URL: https://issues.apache.org/jira/browse/HDFS-12891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Attachments: HDFS-12891.01.patch, HDFS-12891.02.patch
>
>
> {code:java}
> java.lang.AssertionError: Test resulted in an unexpected exit
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:147)
> :
> :
> 2017-10-19 21:39:40,068 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> 2017-10-19 21:39:40,068 [main] FATAL hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1968)) - Test resulted in an unexpected exit
> 1: java.lang.AssertionError
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4437)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlocksToBeInvalidated(DatanodeDescriptor.java:641)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.invalidateWork(InvalidateBlocks.java:299)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.invalidateWorkForOneNode(BlockManager.java:4246)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeInvalidateWork(BlockManager.java:1736)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4418)
>   ... 1 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12875) RBF: Complete logic for -readonly option of dfsrouteradmin add command

2017-12-11 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12875:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.1
   2.9.1
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

> RBF: Complete logic for -readonly option of dfsrouteradmin add command
> --
>
> Key: HDFS-12875
> URL: https://issues.apache.org/jira/browse/HDFS-12875
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Íñigo Goiri
>  Labels: RBF
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-12875.001.patch, HDFS-12875.002.patch, 
> HDFS-12875.003.patch, HDFS-12875.004.patch, HDFS-12875.005.patch, 
> HDFS-12875.006.patch, HDFS-12875.007.patch, HDFS-12875.008.patch
>
>
> The dfsrouteradmin has an option for readonly mount points but this is not 
> implemented. We should add a special mount point which allows reading but 
> not writing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286556#comment-16286556
 ] 

genericqa commented on HDFS-12912:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
53s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
55s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  3s{color} | {color:orange} root: The patch generated 3 new + 20 unchanged - 
0 fixed = 23 total (was 20) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 57s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
7s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
35s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Exceptional return value of java.io.File.mkdirs() ignored in 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap.createDB(String,
 boolean, String)  At LevelDBFileRegionAliasMap.java:ignored in 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap.createDB(String,
 boolean, String)  At LevelDBFileRegionAliasMap.java:[line 117] |
| Failed junit tests | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | 

[jira] [Commented] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286744#comment-16286744
 ] 

Daryn Sharp commented on HDFS-12907:


The xattr test change brings up an interesting case.  Unless [~andrew.wang] 
objects, I think it's also ok to allow users to see raw xattrs if they have 
read access.  That gets us halfway to supporting distcp in a 
backwards-compatible manner.  The test should actually verify not just that it 
passes, but that it actually returned the expected xattr.
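
A minimal sketch of the stronger assertion being suggested, with the 
filesystem handle, path, and xattr name/value as placeholders:
{code:java}
// Hypothetical: verify the returned raw xattr value, not just that the call passes
byte[] actual = userFs.getXAttr(rawPath, "user.a1");
assertArrayEquals(expectedValue, actual);
{code}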

The switch/case on the same indent level is still bothering me...

> Allow read-only access to reserved raw for non-superusers
> -
>
> Key: HDFS-12907
> URL: https://issues.apache.org/jira/browse/HDFS-12907
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Rushabh S Shah
> Attachments: HDFS-12907.001.patch, HDFS-12907.002.patch, 
> HDFS-12907.patch
>
>
> HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
> contents of EZ files.  In the simplest sense it doesn't return the FE info in 
> the {{LocatedBlocks}} so the dfs client doesn't try to decrypt the data.  
> This facilitates allowing tools like distcp to copy raw bytes.
> Access to the raw hierarchy is restricted to superusers.  This seems like an 
> overly broad restriction designed to prevent non-admins from munging the EZ 
> related xattrs.  I believe we should relax the restriction to allow 
> non-admins to perform read-only operations.  Allowing non-superusers to 
> easily read the raw bytes will be extremely useful for regular users, esp. 
> for enabling webhdfs client-side encryption.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12882) Support full open(PathHandle) contract in HDFS

2017-12-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286735#comment-16286735
 ] 

Íñigo Goiri commented on HDFS-12882:


Thanks [~chris.douglas] for checking the unit tests.
+1 on [^HDFS-12882.05.patch].

> Support full open(PathHandle) contract in HDFS
> --
>
> Key: HDFS-12882
> URL: https://issues.apache.org/jira/browse/HDFS-12882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Attachments: HDFS-12882.00.patch, HDFS-12882.00.salient.txt, 
> HDFS-12882.01.patch, HDFS-12882.02.patch, HDFS-12882.03.patch, 
> HDFS-12882.04.patch, HDFS-12882.05.patch, HDFS-12882.05.patch
>
>
> HDFS-7878 added support for {{open(PathHandle)}}, but it only partially 
> implemented the semantics specified in the contract (i.e., open-by-inodeID). 
> HDFS should implement all permutations of the default options for 
> {{PathHandle}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12875) RBF: Complete logic for -readonly option of dfsrouteradmin add command

2017-12-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286730#comment-16286730
 ] 

Íñigo Goiri commented on HDFS-12875:


Committed to {{trunk}}, {{branch-3.0}}, {{branch-2}}, and {{branch-2.9}}.
Thanks for the review [~linyiqun] and [~lukmajercak].

> RBF: Complete logic for -readonly option of dfsrouteradmin add command
> --
>
> Key: HDFS-12875
> URL: https://issues.apache.org/jira/browse/HDFS-12875
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Íñigo Goiri
>  Labels: RBF
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-12875.001.patch, HDFS-12875.002.patch, 
> HDFS-12875.003.patch, HDFS-12875.004.patch, HDFS-12875.005.patch, 
> HDFS-12875.006.patch, HDFS-12875.007.patch, HDFS-12875.008.patch
>
>
> The dfsrouteradmin has an option for readonly mount points but this is not 
> implemented. We should add a special mount point which allows reading but 
> not writing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12802) RBF: Control MountTableResolver cache size

2017-12-11 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12802:
---
Status: Patch Available  (was: Open)

> RBF: Control MountTableResolver cache size
> --
>
> Key: HDFS-12802
> URL: https://issues.apache.org/jira/browse/HDFS-12802
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12802.000.patch
>
>
> Currently, the {{MountTableResolver}} caches the resolutions for the 
> {{PathLocation}}. However, this cache can grow with no limits if there are a 
> lot of unique paths. Some of these cached resolutions might not be used at 
> all.
> The {{MountTableResolver}} should clean the {{locationCache}} periodically.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-11 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286716#comment-16286716
 ] 

Rushabh S Shah edited comment on HDFS-12907 at 12/11/17 11:13 PM:
--

The difference between the v001 and v002 patches is only in test code, namely 
{{TestReservedRawPaths}} and {{FSXAttrBaseTest}}.
Both of those tests passed.
So I think none of the failures are related to the latest patch.
Many of them failed with {{unable to create new native thread}}.
[~daryn], please review.


was (Author: shahrs87):
Difference between v001 and v002 patch is only in test code namely 
{{TestReservedRawPaths, FSXAttrBaseTest}}.
Both of the tests passed.
So I think all the failures are not related to the latest patch.
[~daryn] please review.

> Allow read-only access to reserved raw for non-superusers
> -
>
> Key: HDFS-12907
> URL: https://issues.apache.org/jira/browse/HDFS-12907
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Rushabh S Shah
> Attachments: HDFS-12907.001.patch, HDFS-12907.002.patch, 
> HDFS-12907.patch
>
>
> HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
> contents of EZ files.  In the simplest sense it doesn't return the FE info in 
> the {{LocatedBlocks}} so the dfs client doesn't try to decrypt the data.  
> This facilitates allowing tools like distcp to copy raw bytes.
> Access to the raw hierarchy is restricted to superusers.  This seems like an 
> overly broad restriction designed to prevent non-admins from munging the EZ 
> related xattrs.  I believe we should relax the restriction to allow 
> non-admins to perform read-only operations.  Allowing non-superusers to 
> easily read the raw bytes will be extremely useful for regular users, esp. 
> for enabling webhdfs client-side encryption.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-11 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286716#comment-16286716
 ] 

Rushabh S Shah commented on HDFS-12907:
---

The difference between the v001 and v002 patches is only in test code, namely 
{{TestReservedRawPaths}} and {{FSXAttrBaseTest}}.
Both of those tests passed.
So I think none of the failures are related to the latest patch.
[~daryn], please review.

> Allow read-only access to reserved raw for non-superusers
> -
>
> Key: HDFS-12907
> URL: https://issues.apache.org/jira/browse/HDFS-12907
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Rushabh S Shah
> Attachments: HDFS-12907.001.patch, HDFS-12907.002.patch, 
> HDFS-12907.patch
>
>
> HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
> contents of EZ files.  In the simplest sense it doesn't return the FE info in 
> the {{LocatedBlocks}} so the dfs client doesn't try to decrypt the data.  
> This facilitates allowing tools like distcp to copy raw bytes.
> Access to the raw hierarchy is restricted to superusers.  This seems like an 
> overly broad restriction designed to prevent non-admins from munging the EZ 
> related xattrs.  I believe we should relax the restriction to allow 
> non-admins to perform read-only operations.  Allowing non-superusers to 
> easily read the raw bytes will be extremely useful for regular users, esp. 
> for enabling webhdfs client-side encryption.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12875) RBF: Complete logic for -readonly option of dfsrouteradmin add command

2017-12-11 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286715#comment-16286715
 ] 

Lukas Majercak commented on HDFS-12875:
---

Last patch LGTM.

> RBF: Complete logic for -readonly option of dfsrouteradmin add command
> --
>
> Key: HDFS-12875
> URL: https://issues.apache.org/jira/browse/HDFS-12875
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Íñigo Goiri
>  Labels: RBF
> Attachments: HDFS-12875.001.patch, HDFS-12875.002.patch, 
> HDFS-12875.003.patch, HDFS-12875.004.patch, HDFS-12875.005.patch, 
> HDFS-12875.006.patch, HDFS-12875.007.patch, HDFS-12875.008.patch
>
>
> The dfsrouteradmin has an option for readonly mount points but this is not 
> implemented. We should add a special mount point which allows reading but 
> not writing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-12-11 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12000:
--
Attachment: HDFS-12000-HDFS-7240.010.patch

Seems Jenkins failed on the v009 patch due to some issue unrelated to the patch; 
resubmitting the v009 patch as v010 to trigger another build.

> Ozone: Container : Add key versioning support-1
> ---
>
> Key: HDFS-12000
> URL: https://issues.apache.org/jira/browse/HDFS-12000
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>  Labels: OzonePostMerge
> Attachments: HDFS-12000-HDFS-7240.001.patch, 
> HDFS-12000-HDFS-7240.002.patch, HDFS-12000-HDFS-7240.003.patch, 
> HDFS-12000-HDFS-7240.004.patch, HDFS-12000-HDFS-7240.005.patch, 
> HDFS-12000-HDFS-7240.007.patch, HDFS-12000-HDFS-7240.008.patch, 
> HDFS-12000-HDFS-7240.009.patch, HDFS-12000-HDFS-7240.010.patch, 
> OzoneVersion.001.pdf
>
>
> The rest interface of ozone supports versioning of keys. This support comes 
> from the containers and how chunks are managed to support this feature. This 
> JIRA tracks that feature. Will post a detailed design doc so that we can talk 
> about this feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-11 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12818:
---
Attachment: HDFS-12818.006.patch

Argh, still a few outstanding tests to be fixed. Think I got them all this 
time. v006.

> Support multiple storages in DataNodeCluster / SimulatedFSDataset
> -
>
> Key: HDFS-12818
> URL: https://issues.apache.org/jira/browse/HDFS-12818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-12818.000.patch, HDFS-12818.001.patch, 
> HDFS-12818.002.patch, HDFS-12818.003.patch, HDFS-12818.004.patch, 
> HDFS-12818.005.patch, HDFS-12818.006.patch
>
>
> Currently {{SimulatedFSDataset}} (and thus, {{DataNodeCluster}} with 
> {{-simulated}}) only supports a single storage per {{DataNode}}. Given that 
> the number of storages can have important implications on the performance of 
> block report processing, it would be useful for these classes to support a 
> multiple storage configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-11 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12916:
--
Description: 
[root@n001 hadoop]# bin/hdfs dfs -rm /
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/htrace/core/Tracer$Builder
at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
Caused by: java.lang.ClassNotFoundException: 
org.apache.htrace.core.Tracer$Builder
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 4 more

cc [~busbey]

  was:
[root@n001 hadoop]# bin/hdfs dfs -rm /
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/htrace/core/Tracer$Builder
at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
Caused by: java.lang.ClassNotFoundException: 
org.apache.htrace.core.Tracer$Builder
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 4 more


> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-11 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDFS-12916:
-

Assignee: Bharat Viswanadham

> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-11 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-12916:
-

 Summary: HDFS commands throws error, when only shaded clients in 
classpath
 Key: HDFS-12916
 URL: https://issues.apache.org/jira/browse/HDFS-12916
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham


[root@n001 hadoop]# bin/hdfs dfs -rm /
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/htrace/core/Tracer$Builder
at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
Caused by: java.lang.ClassNotFoundException: 
org.apache.htrace.core.Tracer$Builder
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 4 more



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12626) Ozone : delete open key entries that will no longer be closed

2017-12-11 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12626:
--
Attachment: HDFS-12626-HDFS-7240.005.patch

Thanks [~xyao] for the review and the comments! All fixed in the v005 patch, except:

bq. Line 201: Do we update the keyInfo with the modification time when the 
block is written to the container as well?

If I understand this question correctly, then I believe writing a block to a 
container is a process between the client and the containers; KSM is not 
involved in this process, so it does not know when a particular block write is done.

> Ozone : delete open key entries that will no longer be closed
> -
>
> Key: HDFS-12626
> URL: https://issues.apache.org/jira/browse/HDFS-12626
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12626-HDFS-7240.001.patch, 
> HDFS-12626-HDFS-7240.002.patch, HDFS-12626-HDFS-7240.003.patch, 
> HDFS-12626-HDFS-7240.004.patch, HDFS-12626-HDFS-7240.005.patch
>
>
> HDFS-12543 introduced the notion of an "open key": when a key is opened, an 
> open key entry gets persisted, and only after the client calls close will this 
> entry be made visible. One issue is that if the client does not call close 
> (e.g. it failed), then that open key entry will never be deleted from the 
> metadata. This JIRA tracks this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286567#comment-16286567
 ] 

genericqa commented on HDFS-12907:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}135m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
46s{color} | {color:red} The patch generated 42 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}207m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.TestDFSStorageStateRecovery |
|   | hadoop.hdfs.crypto.TestHdfsCryptoStreams |
|   | hadoop.hdfs.TestParallelShortCircuitLegacyRead |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | 
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks |
|   | hadoop.hdfs.TestQuota |
|   | hadoop.hdfs.TestDFSStartupVersions |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestWriteRead |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSInputStream |
|   | hadoop.hdfs.TestHDFSTrash |
|   | hadoop.hdfs.TestReplication |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | 

[jira] [Commented] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-11 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286349#comment-16286349
 ] 

Virajith Jalaparti commented on HDFS-12912:
---

Hi [~daryn], leveldb is used only as part of a custom implementation (which is 
configurable) of the mapping we store from HDFS blocks to (segments of) files 
in the remote storage system. Other implementations could be used for 
maintaining this mapping. It is not part of the common code path.
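
To illustrate the abstraction (hypothetical shape, not the actual API), the 
alias map is just a pluggable mapping along the lines of:
{code:java}
// Illustration only: resolve an HDFS block to a region of a file in remote storage
interface BlockToFileRegionMap {
  FileRegion resolve(Block block) throws IOException;
}
{code}
Leveldb is merely one backing store for such a mapping.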

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from leveldb-based alias map 
> created from {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these aliasmaps must be specified using local 
> paths and not as URIs as currently shown in the documentation 
> ({{HdfsProvidedStorage.md}}).
> This JIRA is to fix these issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti reassigned HDFS-12912:
-

Assignee: Virajith Jalaparti

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from leveldb-based alias map 
> created from {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these aliasmaps must be specified using local 
> paths and not as URIs as currently shown in the documentation 
> ({{HdfsProvidedStorage.md}}).
> This JIRA is to fix these issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12882) Support full open(PathHandle) contract in HDFS

2017-12-11 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286644#comment-16286644
 ] 

Chris Douglas commented on HDFS-12882:
--

The unit test failures appear to be spurious. Some are due to resource 
exhaustion at Jenkins. All pass when run locally.

> Support full open(PathHandle) contract in HDFS
> --
>
> Key: HDFS-12882
> URL: https://issues.apache.org/jira/browse/HDFS-12882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Attachments: HDFS-12882.00.patch, HDFS-12882.00.salient.txt, 
> HDFS-12882.01.patch, HDFS-12882.02.patch, HDFS-12882.03.patch, 
> HDFS-12882.04.patch, HDFS-12882.05.patch, HDFS-12882.05.patch
>
>
> HDFS-7878 added support for {{open(PathHandle)}}, but it only partially 
> implemented the semantics specified in the contract (i.e., open-by-inodeID). 
> HDFS should implement all permutations of the default options for 
> {{PathHandle}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12855) Fsck violates namesystem locking

2017-12-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286621#comment-16286621
 ] 

Daryn Sharp commented on HDFS-12855:


That's completely broken.  This code path isn't for a full fsck; it's for a 
specific block, which isn't a common operation unless debugging.  We haven't 
hit it, so there's no patch, sorry.
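
For readers, a minimal sketch of the locking discipline the report is about, 
with names taken from the description below; hypothetical, not a patch:
{code:java}
// Hypothetical: blockIdCK-style lookups should inspect namesystem
// structures only while holding the namesystem read lock.
namesystem.readLock();
try {
  BlockInfo blockInfo = blockManager.getStoredBlock(block);
  // ... examine the BlockInfo / INodes while the lock is held ...
} finally {
  namesystem.readUnlock();
}
{code}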

> Fsck violates namesystem locking 
> -
>
> Key: HDFS-12855
> URL: https://issues.apache.org/jira/browse/HDFS-12855
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.4
>Reporter: Konstantin Shvachko
>Assignee: Manoj Govindassamy
>
> {{NamenodeFsck}} access {{FSNamesystem}} structures, such as INodes, 
> BlockInfo without holding a lock. See e.g. {{NamenodeFsck.blockIdCK()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12891) TestDNFencingWithReplication.testFencingStress: java.lang.AssertionError

2017-12-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286515#comment-16286515
 ] 

Wei-Chiu Chuang commented on HDFS-12891:


+1. The patch does not call addBlocksToBeInvalidated() when toInvalidate is 
empty, thereby avoiding the assert violation in addBlocksToBeInvalidated(). 
The fix looks reasonable to me.

The findbugs warning is due to an unrelated change, probably from HDFS-12480.
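
In other words, the fix is a simple guard. A self-contained sketch (names 
simplified from the stack trace; run with -ea to enable the assert):
{code:java}
import java.util.Collections;
import java.util.List;

public class InvalidateGuardSketch {
  // Stand-in for DatanodeDescriptor#addBlocksToBeInvalidated, which
  // asserts that it is never handed an empty list.
  static void addBlocksToBeInvalidated(List<Long> blocks) {
    assert !blocks.isEmpty() : "empty invalidate list";
    // ... queue the blocks for deletion on the datanode ...
  }

  // The fix: only call into the descriptor when there is actual work,
  // so the assert above can never trip on an empty toInvalidate list.
  static void invalidateWork(List<Long> toInvalidate) {
    if (!toInvalidate.isEmpty()) {
      addBlocksToBeInvalidated(toInvalidate);
    }
  }

  public static void main(String[] args) {
    invalidateWork(Collections.emptyList()); // no-op, no assert violation
  }
}
{code}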



> TestDNFencingWithReplication.testFencingStress: java.lang.AssertionError
> 
>
> Key: HDFS-12891
> URL: https://issues.apache.org/jira/browse/HDFS-12891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Attachments: HDFS-12891.01.patch, HDFS-12891.02.patch
>
>
> {code:java}
> java.lang.AssertionError: Test resulted in an unexpected exit
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:147)
> :
> :
> 2017-10-19 21:39:40,068 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> 2017-10-19 21:39:40,068 [main] FATAL hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1968)) - Test resulted in an unexpected exit
> 1: java.lang.AssertionError
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4437)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlocksToBeInvalidated(DatanodeDescriptor.java:641)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.invalidateWork(InvalidateBlocks.java:299)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.invalidateWorkForOneNode(BlockManager.java:4246)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeInvalidateWork(BlockManager.java:1736)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4418)
>   ... 1 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-11 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12881:
--
Attachment: HDFS-12881.003.patch

Updating the patch to address the checkstyle issue. The test failures in 
hadoop-yarn and hadoop-mapreduce occur without the patch as well.
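
For anyone skimming the thread, a minimal sketch of the suppressing pattern 
versus the try-with-resources replacement (simplified; not the actual HDFS 
call sites):
{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CloseSketch {
  static void write(OutputStream out, byte[] data) throws IOException {
    // Before: IOUtils.cleanupWithLogger(LOG, out) in a finally block logs
    // and swallows any IOException from close(), so a failed flush can
    // silently truncate the output.
    //
    // After: try-with-resources propagates the close() exception just like
    // a write() exception.
    try (OutputStream o = out) {
      o.write(data);
    }
  }

  public static void main(String[] args) throws IOException {
    write(Files.newOutputStream(Paths.get("close-sketch.out")),
        "hello".getBytes());
  }
}
{code}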

> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Attachments: HDFS-12881.001.patch, HDFS-12881.002.patch, 
> HDFS-12881.003.patch
>
>
> There are a few places in HDFS code that are closing an output stream with 
> IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286304#comment-16286304
 ] 

Íñigo Goiri commented on HDFS-12895:


For the {{RouterPermissionChecker}}, it might be a good idea to make it extend 
{{FsPermissionChecker}} to track the users, groups, etc. We are not saving much, 
but it might be worth it just to avoid having a pretty much repeated constructor 
(ignoring {{attributeProvider}}, which could just be null). This might require 
changing the visibility of a couple of things in {{FsPermissionChecker}}, 
though. I leave the decision to you; fine either way.

We should have a short constant for the 00755.
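
Something like this would do (a minimal sketch; the constant name and location 
are hypothetical):
{code:java}
import org.apache.hadoop.fs.permission.FsPermission;

public interface MountTableDefaults {
  // Hypothetical constant for the default mount table mode (octal 0755);
  // the actual name and home are up to the patch.
  FsPermission MOUNT_TABLE_DEFAULT_MODE = new FsPermission((short) 0755);
}
{code}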

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table entry has its owner, group name and permission.
> For the mount table permissions (FsPermission), we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: This won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755*.
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. The add command not only adds a new mount table entry 
> but also updates an existing entry once it finds the given mount table exists. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286569#comment-16286569
 ] 

genericqa commented on HDFS-12910:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
49s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12910 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901354/HDFS-12910.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b93d2444a825 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 312ceeb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22352/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22352/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22352/testReport/ |
| Max. process+thread count | 3978 (vs. ulimit 

[jira] [Created] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-11 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12915:
--

 Summary: Fix findbugs warning in 
INodeFile$HeaderFormat.getBlockLayoutRedundancy
 Key: HDFS-12915
 URL: https://issues.apache.org/jira/browse/HDFS-12915
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Wei-Chiu Chuang


It seems HDFS-12840 creates a new findbugs warning.

Possible null pointer dereference of replication in 
org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
 Short, Byte)
Bug type NP_NULL_ON_SOME_PATH (click for details) 
In class org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat
In method 
org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
 Short, Byte)
Value loaded from replication
Dereferenced at INodeFile.java:[line 210]
Known null at INodeFile.java:[line 207]

From a quick look at the patch, it seems bogus though. [~eddyxu] [~Sammi] would 
you please double check?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286505#comment-16286505
 ] 

genericqa commented on HDFS-12818:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 339 unchanged - 8 fixed = 339 total (was 347) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}122m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12818 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901529/HDFS-12818.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d48007f825f3 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 312ceeb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-11 Thread Stephen O'Donnell (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286379#comment-16286379
 ] 

Stephen O'Donnell commented on HDFS-12910:
--

[~nandakumar131] Thanks for the second patch version that includes the test 
case. While this version adds a bit more code than we strictly need here, it 
leads to a cleaner solution, with just the exception and stack trace logged. 
Note that the stack trace is ultimately printed to System.err, probably by 
org.apache.commons.daemon.support.DaemonLoader.

I tried running the tests and they worked for me.

I will let some others review the 002 patch, and if anyone has ideas on 
getting messages into the actual DN role log without affecting the log 
ownership, it would be good to hear them.
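
For reference, a minimal sketch of the catch-and-rethrow idea from the 
description (simplified; the real patch decorates the actual bind calls in 
SecureDataNodeStarter):
{code:java}
import java.io.IOException;
import java.net.BindException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindSketch {
  // Bind, and on failure rethrow a BindException that names the
  // address/port that could not be bound.
  static ServerSocket bind(InetSocketAddress addr, int backlog)
      throws IOException {
    ServerSocket socket = new ServerSocket();
    try {
      socket.bind(addr, backlog);
    } catch (BindException e) {
      socket.close();
      BindException wrapped = new BindException(
          "Problem binding to " + addr + ": " + e.getMessage());
      wrapped.initCause(e);
      throw wrapped;
    }
    return socket;
  }

  public static void main(String[] args) throws IOException {
    try (ServerSocket first = bind(new InetSocketAddress(0), 50)) {
      // Binding the same port again fails with the decorated message.
      bind(new InetSocketAddress(first.getLocalPort()), 50);
    }
  }
}
{code}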

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch, HDFS-12910.002.patch
>
>
> When running a secure data node, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports causing the DN to fail 
> to start (eg the nfs service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to, for example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened streaming server at /0.0.0.0:1004
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> We should catch the BindException, log the problem address:port, and then 
> re-throw the exception to make the problem clearer.
> I will upload a patch for this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12833) Distcp : Update the usage of delete option for dependency with update and overwrite option

2017-12-11 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286419#comment-16286419
 ] 

Surendra Singh Lilhore commented on HDFS-12833:
---

Committed to trunk. [~usharani], can you attach the patch for branch-2?

> Distcp : Update the usage of delete option for dependency with update and 
> overwrite option
> --
>
> Key: HDFS-12833
> URL: https://issues.apache.org/jira/browse/HDFS-12833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Attachments: HDFS-12833.001.patch, HDFS-12833.patch
>
>
> Basically, the Delete option is applicable only with the update or overwrite 
> options. I tried it as per the usage message and got the below exception.
> {noformat}
> bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
> 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
> java.lang.IllegalArgumentException: Delete missing is applicable only with 
> update or overwrite options
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
> at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> Invalid arguments: Delete missing is applicable only with update or overwrite 
> options
> usage: distcp OPTIONS [source_path...] <target_path>
>   OPTIONS
>  -append                  Reuse existing data in target files and
>                           append new data to them if possible
>  -async                   Should distcp execution be blocking
>  -atomic                  Commit all changes or none
>  -bandwidth <arg>         Specify bandwidth per map in MB, accepts
>                           bandwidth as a fraction.
>  -blocksperchunk <arg>    If set to a positive value, files with more
>                           blocks than this value will be split into
>                           chunks of <arg> blocks to be
>                           transferred in parallel, and reassembled on
>                           the destination. By default, <arg>
>                           is 0 and the files will be
>                           transmitted in their entirety without
>                           splitting. This switch is only applicable
>                           when the source file system implements
>                           getBlockLocations method and the target
>                           file system implements concat method
>  -copybuffersize <arg>    Size of the copy buffer to use. By default
>                           <arg> is 8192B.
>  -delete                  Delete from target, files missing in source
>  -diff <arg>              Use snapshot diff report to identify the
>                           difference between source and target
> {noformat}
> Even the documentation does not describe the proper usage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12833) Distcp : Update the usage of delete option for dependency with update and overwrite option

2017-12-11 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-12833:
--
Summary: Distcp : Update the usage of delete option for dependency with 
update and overwrite option  (was: In Distcp, Delete option not having the 
proper usage message.)

> Distcp : Update the usage of delete option for dependency with update and 
> overwrite option
> --
>
> Key: HDFS-12833
> URL: https://issues.apache.org/jira/browse/HDFS-12833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Attachments: HDFS-12833.001.patch, HDFS-12833.patch
>
>
> Basically, the Delete option is applicable only with the update or overwrite 
> options. I tried it as per the usage message and got the below exception.
> {noformat}
> bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
> 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
> java.lang.IllegalArgumentException: Delete missing is applicable only with 
> update or overwrite options
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
> at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> Invalid arguments: Delete missing is applicable only with update or overwrite 
> options
> usage: distcp OPTIONS [source_path...] <target_path>
>   OPTIONS
>  -append                  Reuse existing data in target files and
>                           append new data to them if possible
>  -async                   Should distcp execution be blocking
>  -atomic                  Commit all changes or none
>  -bandwidth <arg>         Specify bandwidth per map in MB, accepts
>                           bandwidth as a fraction.
>  -blocksperchunk <arg>    If set to a positive value, files with more
>                           blocks than this value will be split into
>                           chunks of <arg> blocks to be
>                           transferred in parallel, and reassembled on
>                           the destination. By default, <arg>
>                           is 0 and the files will be
>                           transmitted in their entirety without
>                           splitting. This switch is only applicable
>                           when the source file system implements
>                           getBlockLocations method and the target
>                           file system implements concat method
>  -copybuffersize <arg>    Size of the copy buffer to use. By default
>                           <arg> is 8192B.
>  -delete                  Delete from target, files missing in source
>  -diff <arg>              Use snapshot diff report to identify the
>                           difference between source and target
> {noformat}
> Even the documentation does not describe the proper usage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-11 Thread Stephen O'Donnell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-12910:
-
Status: Patch Available  (was: Open)

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch, HDFS-12910.002.patch
>
>
> When running a secure data node, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports causing the DN to fail 
> to start (eg the nfs service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to, for example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened streaming server at /0.0.0.0:1004
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> We should catch the BindException, log the problem address:port, and then 
> re-throw the exception to make the problem clearer.
> I will upload a patch for this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12912:
--
Status: Patch Available  (was: Open)

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from a leveldb-based alias map 
> created by {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these aliasmaps must be specified using local 
> paths and not as URIs as currently shown in the documentation 
> ({{HdfsProvidedStorage.md}}).
> This JIRA is to fix these issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286310#comment-16286310
 ] 

Daryn Sharp commented on HDFS-12912:


Haven't been following the umbrella.  Is leveldb split off in a custom impl or 
part of the common code path?

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from a leveldb-based alias map 
> created by {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these aliasmaps must be specified using local 
> paths and not as URIs as currently shown in the documentation 
> ({{HdfsProvidedStorage.md}}).
> This JIRA is to fix these issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-11 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12818:
---
Attachment: HDFS-12818.005.patch

The v005 patch fixes the last test errors; because {{SimulatedFSDataset}} now 
bases the number of storages on the configuration it is supplied with, some 
tests needed to be changed to expect two storages (the default) rather than 
just one. Should be good to go now.
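
As a rough illustration of the config-driven behavior (a sketch only; the 
property name below is hypothetical, not the key used by the patch):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class StorageCountSketch {
  // Hypothetical key for illustration; the point is that the storage count
  // is read from whatever configuration SimulatedFSDataset is handed.
  static final String NUM_STORAGES_KEY = "dfs.datanode.simulated.num.storages";

  static int numStorages(Configuration conf) {
    return conf.getInt(NUM_STORAGES_KEY, 2); // two storages by default
  }
}
{code}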

> Support multiple storages in DataNodeCluster / SimulatedFSDataset
> -
>
> Key: HDFS-12818
> URL: https://issues.apache.org/jira/browse/HDFS-12818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-12818.000.patch, HDFS-12818.001.patch, 
> HDFS-12818.002.patch, HDFS-12818.003.patch, HDFS-12818.004.patch, 
> HDFS-12818.005.patch
>
>
> Currently {{SimulatedFSDataset}} (and thus, {{DataNodeCluster}} with 
> {{-simulated}}) only supports a single storage per {{DataNode}}. Given that 
> the number of storages can have important implications on the performance of 
> block report processing, it would be useful for these classes to support a 
> multiple storage configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-11 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12907:
--
Attachment: HDFS-12907.002.patch

Thanks [~daryn] for the reviews.
All the comments are addressed in patch v002 except one:
bq. The case statements should be indented within the switch block. It's like 
writing an if or while w/o indenting the body.
Didn't I do the same thing?
Checkstyle also didn't complain.

> Allow read-only access to reserved raw for non-superusers
> -
>
> Key: HDFS-12907
> URL: https://issues.apache.org/jira/browse/HDFS-12907
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Rushabh S Shah
> Attachments: HDFS-12907.001.patch, HDFS-12907.002.patch, 
> HDFS-12907.patch
>
>
> HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
> contents of EZ files.  In the simplest sense, it doesn't return the FE info in 
> the {{LocatedBlocks}}, so the dfs client doesn't try to decrypt the data.  
> This allows tools like distcp to copy the raw bytes.
> Access to the raw hierarchy is restricted to superusers.  This seems like an 
> overly broad restriction designed to prevent non-admins from munging the EZ 
> related xattrs.  I believe we should relax the restriction to allow 
> non-admins to perform read-only operations.  Allowing non-superusers to 
> easily read the raw bytes will be extremely useful for regular users, esp. 
> for enabling webhdfs client-side encryption.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12914) Block report leases cause missing blocks until next report

2017-12-11 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-12914:
--

 Summary: Block report leases cause missing blocks until next report
 Key: HDFS-12914
 URL: https://issues.apache.org/jira/browse/HDFS-12914
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.8.0
Reporter: Daryn Sharp
Priority: Critical


{{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for conditions 
such as "unknown datanode", "not in pending set", "lease has expired", wrong 
lease id, etc.  Lease rejection does not throw an exception.  It returns false, 
which bubbles up to {{NameNodeRpcServer#blockReport}} and is interpreted as 
{{noStaleStorages}}.

A re-registering node whose FBR is rejected due to an invalid lease becomes 
active with _no blocks_.  A replication storm ensues, possibly causing DNs to 
temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
re-registration.  The cluster will have many "missing blocks" until the DN's 
next FBR is sent and/or forced.
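
To illustrate the shape of the problem (a simplified sketch, not the actual 
code):
{code:java}
public class LeaseSketch {
  // Simplified: the lease check rejects a report by returning false rather
  // than throwing.
  static boolean checkLease(boolean leaseValid) {
    return leaseValid;
  }

  // Simplified: the boolean result of the lease check is reused as the
  // noStaleStorages flag, so "lease rejected, report dropped" is
  // indistinguishable from an ordinary report result.
  static void blockReport(boolean leaseValid) {
    boolean noStaleStorages = checkLease(leaseValid);
    System.out.println("noStaleStorages=" + noStaleStorages
        + " (even though the report may have been dropped)");
  }

  public static void main(String[] args) {
    blockReport(false); // a rejected lease looks like a normal report result
  }
}
{code}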



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12911) [SPS]: Fix review comments from discussions in HDFS-10285

2017-12-11 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-12911:
---
Description: 
This is the JIRA for tracking the possible improvements or issues discussed in 
the main JIRA.

So far, the comments to handle:

Daryn:
 # The lock should not be kept while executing the placement policy.
 # While starting up the NN, the SPS Xattr checks happen even if the feature is 
disabled. This could potentially impact the startup speed. 

UMA:
# I am adding one more possible improvement to reduce Xattr objects 
significantly.
 The SPS Xattr is a constant object, so we can create one deduplicated Xattr 
object statically and reuse the same object reference whenever the SPS Xattr 
needs to be added to an Inode. The additional bytes required to store the SPS 
Xattr then shrink to a single object reference (i.e. 4 bytes on 32-bit), so the 
Xattr overhead should come down significantly IMO. Let's explore the 
feasibility of this option.
 The Xattr list Feature will not be specially created for SPS; that list would 
already have been created by SetStoragePolicy on the same directory. So there 
is no extra Feature creation because of SPS alone.
# Currently SPS puts Long id objects in a queue for tracking the Inodes on 
which SPS was called. Each is additionally created, and its size would be 
(obj ref + value) = (8 + 8) bytes [ignoring alignment for the time being].
 The possible improvement here is, instead of creating a new Long object, to 
keep the existing Inode object for tracking. The advantage is that the Inode 
object is already maintained in the NN, so no new object creation is needed; 
we just need to maintain one object reference. The above two points should 
significantly reduce the memory requirements of SPS. So, per SPS call: 8 bytes 
for called-inode tracking + 8 bytes for the Xattr ref.



  was:
This is the JIRA for tracking the possible improvements or issues discussed in 
main JIRA

So, far from Daryn:
  1. Lock should not kept while executing placement policy.
   2. While starting up the NN, SPS Xattrs checks happen even if feature 
disabled. This could potentially impact the startup speed. 

I am adding one more possible improvement to reduce Xattr objects significantly.
 SPS Xattr is constant object. So, we create one Xattr deduplication object 
once statically and use the same object reference when required to add SPS 
Xattr to Inode. So, here additional bytes required for storing SPS Xattr would 
turn to same as single object ref ( i.e 4 bytes in 32 bit). So Xattr overhead 
should come down significantly IMO. Lets explore the feasibility on this option.

Xattr list Future will not be specially created for SPS, that list would have 
been created by SetStoragePolicy already on the same directory. So, no extra 
Future creation because of SPS alone.


> [SPS]: Fix review comments from discussions in HDFS-10285
> -
>
> Key: HDFS-12911
> URL: https://issues.apache.org/jira/browse/HDFS-12911
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Rakesh R
>
> This is the JIRA for tracking the possible improvements or issues discussed 
> in the main JIRA.
> So far, the comments to handle:
> Daryn:
>  # The lock should not be kept while executing the placement policy.
>  # While starting up the NN, the SPS Xattr checks happen even if the feature 
> is disabled. This could potentially impact the startup speed. 
> UMA:
> # I am adding one more possible improvement to reduce Xattr objects 
> significantly.
>  The SPS Xattr is a constant object, so we can create one deduplicated Xattr 
> object statically and reuse the same object reference whenever the SPS Xattr 
> needs to be added to an Inode. The additional bytes required to store the SPS 
> Xattr then shrink to a single object reference (i.e. 4 bytes on 32-bit), so 
> the Xattr overhead should come down significantly IMO. Let's explore the 
> feasibility of this option.
>  The Xattr list Feature will not be specially created for SPS; that list 
> would already have been created by SetStoragePolicy on the same directory. So 
> there is no extra Feature creation because of SPS alone.
> # Currently SPS puts Long id objects in a queue for tracking the Inodes on 
> which SPS was called. Each is additionally created, and its size would be 
> (obj ref + value) = (8 + 8) bytes [ignoring alignment for the time being].
>  The possible improvement here is, instead of creating a new Long object, to 
> keep the existing Inode object for tracking. The advantage is that the Inode 
> object is already maintained in the NN, so no new object creation is needed; 
> we just need to maintain one object reference. The above two points should 
> significantly reduce the memory requirements of SPS. So, per SPS call: 8 
> bytes for called-inode tracking + 8 bytes for the Xattr ref.
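
A minimal sketch of the deduplication idea in the first point (a pure-Java 
illustration; the real object would be Hadoop's {{XAttr}}, and the xattr name 
shown is hypothetical):
{code:java}
public class SpsXAttrSketch {
  // Stand-in for an immutable xattr; the real type is
  // org.apache.hadoop.fs.XAttr.
  static final class XAttrLike {
    final String name;
    XAttrLike(String name) { this.name = name; }
  }

  // One statically created, shared instance (name illustrative). Every inode
  // that needs the SPS xattr stores only a reference to this object, so the
  // per-inode overhead is a single object reference.
  static final XAttrLike SPS_XATTR = new XAttrLike("system.sps");

  static XAttrLike spsXAttrForInode() {
    return SPS_XATTR; // no per-inode allocation
  }

  public static void main(String[] args) {
    System.out.println(spsXAttrForInode() == spsXAttrForInode()); // true
  }
}
{code}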



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-12-11 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286040#comment-16286040
 ] 

Uma Maheswara Rao G commented on HDFS-10285:


Hi [~chris.douglas], 
{quote}
Have any benchmarks been run, particularly with the SPS disabled?
{quote}

I tried to benchmark startup times with the trunk code and the SPS branch, 
with SPS disabled, to see whether it really impacts the startup time.

Here are the data points: 


Total Inodes created for test: INFO 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 26598566 
INodes.

*Restart times with trunk code:*
Run1: 2017-12-11 06:23:30,658 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage 
in *81153 msecs*
Run2: 2017-12-11 06:27:15,313 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage 
in *83717 msecs*
Run3: 2017-12-11 06:29:18,574 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage 
in *82620 msecs*

*Restart times with SPS branch:*
Added a log to indicate the SPS flag: INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: spsEnabled = false
And when checking Xattrs in addToInodeMap, it checks whether SPS is enabled or 
not:
{code}
if (getBlockManager().isSPSEnabled()) {
  addStoragePolicySatisfier((INodeWithAdditionalFields) inode, xaf);
}
{code}

Run1: 2017-12-11 06:38:49,209 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage 
in *83874 msecs*
Run2: 2017-12-11 06:42:57,803 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage 
in *81013 msecs*
Run3: 2017-12-11 06:45:33,288 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage 
in *81817 msecs*

*So this clearly shows that, with SPS disabled, there is no impact on the NN.*





> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-10285-consolidated-merge-patch-02.patch, 
> HDFS-10285-consolidated-merge-patch-03.patch, 
> HDFS-SPS-TestReport-20170708.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf, 
> Storage-Policy-Satisfier-in-HDFS-Oct-26-2017.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policy. These 
> policies can be set on a directory/file to specify the user preference for 
> where to store the physical blocks. When the user sets the storage policy 
> before writing data, the blocks can take advantage of the storage policy 
> preferences and the physical blocks are stored accordingly. 
> If the user sets the storage policy after writing and completing the file, 
> the blocks would have been written with the default storage policy (nothing 
> but DISK). The user has to run the ‘Mover tool’ explicitly by specifying all 
> such file names as a list. In some distributed system scenarios (ex: HBase) 
> it would be difficult to collect all the files and run the tool, as different 
> nodes can write files separately and files can have different paths.
> Another scenario is, when the user renames files from a directory with an 
> effective storage policy (inherited from the parent directory) to a directory 
> with another storage policy, the inherited storage policy is not copied from 
> the source; the destination file/dir parent storage policy takes effect. This 
> rename operation is just a metadata change in the Namenode. The physical 
> blocks still remain with the source storage policy.
> So, tracking all such business-logic-based file names could be difficult for 
> admins across distributed nodes (ex: region servers) before running the Mover 
> tool. The proposal here is to provide an API from the Namenode itself to 
> trigger storage policy satisfaction. A daemon thread inside the Namenode 
> should track such calls and process them into DN movement commands. 
> Will post the detailed design thoughts document soon. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285950#comment-16285950
 ] 

genericqa commented on HDFS-12895:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
42s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 25s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
51s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 3 new + 1 
unchanged - 0 fixed = 4 total (was 1) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}124m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Write to static field 
org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.isPermissionEnabled
 from instance method new 
org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer(Configuration,
 Router)  At RouterAdminServer.java:from instance method new 
org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer(Configuration,
 Router)  At RouterAdminServer.java:[line 117] |
|  |  Write to static field 
org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.routerOwner 
from instance method new 
org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer(Configuration,
 Router)  At RouterAdminServer.java:from instance method new 
org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer(Configuration,
 Router)  At RouterAdminServer.java:[line 114] |
|  |  Write to static field 

[jira] [Commented] (HDFS-12913) TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285938#comment-16285938
 ] 

genericqa commented on HDFS-12913:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12913 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901485/HDFS-12913.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cd67744475c6 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a2edc4c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22348/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 

[jira] [Commented] (HDFS-12891) TestDNFencingWithReplication.testFencingStress: java.lang.AssertionError

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285935#comment-16285935
 ] 

genericqa commented on HDFS-12891:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
14s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}113m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12891 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901477/HDFS-12891.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2ffd31137fcc 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a2edc4c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22346/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22346/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-12913) TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred

2017-12-11 Thread Zsolt Venczel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285905#comment-16285905
 ] 

Zsolt Venczel commented on HDFS-12913:
--

* Have removed the first read operation check as it's redundant and its 
failure is unhandled.
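
For context on the failure mode quoted below: the {{java.lang.RuntimeException: 
Deferred}} comes from the way {{MultithreadedTestUtil}} surfaces failures from 
worker threads, which is why the trace points at the test's {{stop()}} call 
rather than the failing thread. A paraphrased sketch of that pattern (field and 
method names are illustrative, not the actual Hadoop source):

{code:java}
// Paraphrased sketch of the deferred-exception pattern behind
// "java.lang.RuntimeException: Deferred".
class TestContextSketch {
  private Throwable deferred;

  // A failing worker thread records its exception instead of dying silently.
  synchronized void threadFailed(Throwable t) {
    deferred = t;
    notifyAll();
  }

  // The main thread calls this when stopping the test; the recorded failure
  // surfaces here, wrapped, so the stack trace shows the stop path.
  synchronized void checkException() {
    if (deferred != null) {
      throw new RuntimeException("Deferred", deferred);
    }
  }
}
{code}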

> TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred
> -
>
> Key: HDFS-12913
> URL: https://issues.apache.org/jira/browse/HDFS-12913
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Attachments: HDFS-12913.01.patch
>
>
> Once in every 5000 test runs the following issue happens:
> {code}
> 2017-12-11 10:33:09 [INFO] 
> 2017-12-11 10:33:09 [INFO] 
> ---
> 2017-12-11 10:33:09 [INFO]  T E S T S
> 2017-12-11 10:33:09 [INFO] 
> ---
> 2017-12-11 10:33:09 [INFO] Running 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
> 2017-12-11 10:37:32 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 262.641 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
> 2017-12-11 10:37:32 [ERROR] 
> testFencingStress(org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication)
>   Time elapsed: 262.477 s  <<< ERROR!
> 2017-12-11 10:37:32 java.lang.RuntimeException: Deferred
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:137)
> 2017-12-11 10:37:32   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> 2017-12-11 10:37:32   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2017-12-11 10:37:32   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2017-12-11 10:37:32   at java.lang.reflect.Method.invoke(Method.java:498)
> 2017-12-11 10:37:32   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> 2017-12-11 10:37:32   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2017-12-11 10:37:32   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> 2017-12-11 10:37:32   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> 2017-12-11 10:37:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> 2017-12-11 10:37:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
> 2017-12-11 10:37:32 Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby. Visit 
> https://s.apache.org/sbnn-error
> 2017-12-11 10:37:32   at 

[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285818#comment-16285818
 ] 

genericqa commented on HDFS-12895:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
12s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
33s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}182m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Write to static field 
org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.permissionChecker
 from instance method 
org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.setPermissionChecker(RouterPermissionChecker)
  At RouterAdminServer.java:from instance method 
org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.setPermissionChecker(RouterPermissionChecker)
  At RouterAdminServer.java:[line 228] |
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12895 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901464/HDFS-12895.002.patch |
| 
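
For reference, the new FindBugs warning listed above ({{Write to static field 
... from instance method}}) corresponds to the ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD 
bug pattern. A minimal illustration with hypothetical names (not the patch code):

{code:java}
// FindBugs flags an instance method mutating static state, because two
// instances can silently overwrite each other's configuration.
class Flagged {
  private static Object checker;

  void setChecker(Object c) {
    checker = c;  // write to static field from instance method: flagged
  }
}

// Making the field per-instance (or the method static) removes the warning.
class NotFlagged {
  private Object checker;

  void setChecker(Object c) {
    this.checker = c;
  }
}
{code}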

[jira] [Updated] (HDFS-12913) TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred

2017-12-11 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-12913:
-
Status: Patch Available  (was: In Progress)

> TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred
> -
>
> Key: HDFS-12913
> URL: https://issues.apache.org/jira/browse/HDFS-12913
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Attachments: HDFS-12913.01.patch
>
>
> Once in every 5000 test runs the following issue happens:
> {code}
> 2017-12-11 10:33:09 [INFO] 
> 2017-12-11 10:33:09 [INFO] 
> ---
> 2017-12-11 10:33:09 [INFO]  T E S T S
> 2017-12-11 10:33:09 [INFO] 
> ---
> 2017-12-11 10:33:09 [INFO] Running 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
> 2017-12-11 10:37:32 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 262.641 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
> 2017-12-11 10:37:32 [ERROR] 
> testFencingStress(org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication)
>   Time elapsed: 262.477 s  <<< ERROR!
> 2017-12-11 10:37:32 java.lang.RuntimeException: Deferred
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:137)
> 2017-12-11 10:37:32   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> 2017-12-11 10:37:32   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2017-12-11 10:37:32   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2017-12-11 10:37:32   at java.lang.reflect.Method.invoke(Method.java:498)
> 2017-12-11 10:37:32   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> 2017-12-11 10:37:32   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2017-12-11 10:37:32   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> 2017-12-11 10:37:32   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> 2017-12-11 10:37:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> 2017-12-11 10:37:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
> 2017-12-11 10:37:32 Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby. Visit 
> https://s.apache.org/sbnn-error
> 2017-12-11 10:37:32   at 
> 

[jira] [Updated] (HDFS-12913) TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred

2017-12-11 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-12913:
-
Attachment: HDFS-12913.01.patch

> TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred
> -
>
> Key: HDFS-12913
> URL: https://issues.apache.org/jira/browse/HDFS-12913
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Attachments: HDFS-12913.01.patch
>
>
> Once in every 5000 test runs the following issue happens:
> {code}
> 2017-12-11 10:33:09 [INFO] 
> 2017-12-11 10:33:09 [INFO] 
> ---
> 2017-12-11 10:33:09 [INFO]  T E S T S
> 2017-12-11 10:33:09 [INFO] 
> ---
> 2017-12-11 10:33:09 [INFO] Running 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
> 2017-12-11 10:37:32 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 262.641 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
> 2017-12-11 10:37:32 [ERROR] 
> testFencingStress(org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication)
>   Time elapsed: 262.477 s  <<< ERROR!
> 2017-12-11 10:37:32 java.lang.RuntimeException: Deferred
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:137)
> 2017-12-11 10:37:32   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> 2017-12-11 10:37:32   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2017-12-11 10:37:32   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2017-12-11 10:37:32   at java.lang.reflect.Method.invoke(Method.java:498)
> 2017-12-11 10:37:32   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> 2017-12-11 10:37:32   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2017-12-11 10:37:32   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> 2017-12-11 10:37:32   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> 2017-12-11 10:37:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> 2017-12-11 10:37:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
> 2017-12-11 10:37:32 Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby. Visit 
> https://s.apache.org/sbnn-error
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
> 

[jira] [Comment Edited] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-11 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285770#comment-16285770
 ] 

Yiqun Lin edited comment on HDFS-12895 at 12/11/17 11:12 AM:
-

Re-attached the v2 patch, fixing the bug of the permission checker. The 
permission checker should be created during each RPC call instead of reusing 
the same instance, because the remote user will change between calls.


was (Author: linyiqun):
Re-attached the v2 patch, fixing the bug of the permission checker.
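
A minimal sketch of the per-call construction described above, assuming a 
hypothetical factory method (this is not the patch code):

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

class PerCallCheckerSketch {
  private final UserGroupInformation caller;

  private PerCallCheckerSketch(UserGroupInformation caller) {
    this.caller = caller;
  }

  // Built at the start of each RPC instead of caching one shared instance,
  // since consecutive calls can arrive from different remote users.
  static PerCallCheckerSketch newInstance() throws IOException {
    return new PerCallCheckerSketch(UserGroupInformation.getCurrentUser());
  }
}
{code}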

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table has its owner, group name and permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is the UNIX-style permission for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755.*
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. This command not only adds a new mount table entry but 
> also updates an existing entry once it finds the given mount table already exists. 
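
A minimal sketch of the READ/WRITE check the description outlines, using 
{{FsPermission}}'s octal form; the helper method and the caller-classification 
flags are hypothetical, not the patch code:

{code:java}
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

class MountTableAclSketch {
  // Pick the owner, group, or other bits based on who the caller is, then
  // test whether they imply the requested action (READ for viewing the
  // entry, WRITE for add/remove/update).
  static boolean allowed(FsPermission mode, boolean isOwner, boolean inGroup,
      FsAction requested) {
    FsAction granted = isOwner ? mode.getUserAction()
        : inGroup ? mode.getGroupAction()
        : mode.getOtherAction();
    return granted.implies(requested);
  }

  public static void main(String[] args) {
    FsPermission mode = new FsPermission((short) 0755);  // the default above
    System.out.println(allowed(mode, false, true, FsAction.READ));   // true
    System.out.println(allowed(mode, false, true, FsAction.WRITE));  // false
  }
}
{code}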



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12913) TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred

2017-12-11 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-12913:
-
Target Version/s: 3.0.0

> TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred
> -
>
> Key: HDFS-12913
> URL: https://issues.apache.org/jira/browse/HDFS-12913
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
>
> Once in every 5000 test runs the following issue happens:
> {code}
> 2017-12-11 10:33:09 [INFO] 
> 2017-12-11 10:33:09 [INFO] 
> ---
> 2017-12-11 10:33:09 [INFO]  T E S T S
> 2017-12-11 10:33:09 [INFO] 
> ---
> 2017-12-11 10:33:09 [INFO] Running 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
> 2017-12-11 10:37:32 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 262.641 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
> 2017-12-11 10:37:32 [ERROR] 
> testFencingStress(org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication)
>   Time elapsed: 262.477 s  <<< ERROR!
> 2017-12-11 10:37:32 java.lang.RuntimeException: Deferred
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:137)
> 2017-12-11 10:37:32   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> 2017-12-11 10:37:32   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2017-12-11 10:37:32   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2017-12-11 10:37:32   at java.lang.reflect.Method.invoke(Method.java:498)
> 2017-12-11 10:37:32   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> 2017-12-11 10:37:32   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2017-12-11 10:37:32   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> 2017-12-11 10:37:32   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> 2017-12-11 10:37:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> 2017-12-11 10:37:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
> 2017-12-11 10:37:32 Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby. Visit 
> https://s.apache.org/sbnn-error
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
> 2017-12-11 10:37:32   at 
> 

[jira] [Comment Edited] (HDFS-12891) TestDNFencingWithReplication.testFencingStress: java.lang.AssertionError

2017-12-11 Thread Zsolt Venczel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285771#comment-16285771
 ] 

Zsolt Venczel edited comment on HDFS-12891 at 12/11/17 11:10 AM:
-

[~jojochuang] thanks for taking a look!
I have removed the unrelated change and created a separate jira (HDFS-12913) to 
cover the issue.


was (Author: zvenczel):
[~jojochuang] thanks for taking a look!
I have removed the unrelated change and created a separate jira 
(https://issues.apache.org/jira/browse/HDFS-12913) to cover the issue.

> TestDNFencingWithReplication.testFencingStress: java.lang.AssertionError
> 
>
> Key: HDFS-12891
> URL: https://issues.apache.org/jira/browse/HDFS-12891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Attachments: HDFS-12891.01.patch, HDFS-12891.02.patch
>
>
> {code:java}
> java.lang.AssertionError: Test resulted in an unexpected exit
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:147)
> :
> :
> 2017-10-19 21:39:40,068 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> 2017-10-19 21:39:40,068 [main] FATAL hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1968)) - Test resulted in an unexpected exit
> 1: java.lang.AssertionError
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4437)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlocksToBeInvalidated(DatanodeDescriptor.java:641)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.invalidateWork(InvalidateBlocks.java:299)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.invalidateWorkForOneNode(BlockManager.java:4246)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeInvalidateWork(BlockManager.java:1736)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4418)
>   ... 1 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12891) TestDNFencingWithReplication.testFencingStress: java.lang.AssertionError

2017-12-11 Thread Zsolt Venczel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285771#comment-16285771
 ] 

Zsolt Venczel commented on HDFS-12891:
--

[~jojochuang] thanks for taking a look!
I have removed the unrelated change and created a separate jira 
(https://issues.apache.org/jira/browse/HDFS-12913) to cover the issue.

> TestDNFencingWithReplication.testFencingStress: java.lang.AssertionError
> 
>
> Key: HDFS-12891
> URL: https://issues.apache.org/jira/browse/HDFS-12891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Attachments: HDFS-12891.01.patch, HDFS-12891.02.patch
>
>
> {code:java}
> java.lang.AssertionError: Test resulted in an unexpected exit
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:147)
> :
> :
> 2017-10-19 21:39:40,068 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> 2017-10-19 21:39:40,068 [main] FATAL hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1968)) - Test resulted in an unexpected exit
> 1: java.lang.AssertionError
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4437)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlocksToBeInvalidated(DatanodeDescriptor.java:641)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.invalidateWork(InvalidateBlocks.java:299)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.invalidateWorkForOneNode(BlockManager.java:4246)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeInvalidateWork(BlockManager.java:1736)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4418)
>   ... 1 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-11 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12895:
-
Attachment: (was: HDFS-12895.002.patch)

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table has its owner, group name and permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is the UNIX-style permission for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755.*
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. This command not only adds a new mount table entry but 
> also updates an existing entry once it finds the given mount table already exists. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-11 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12895:
-
Attachment: HDFS-12895.002.patch

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table has its owner, group name and permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is the UNIX-style permission for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755.*
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. This command not only adds a new mount table entry but 
> also updates an existing entry once it finds the given mount table already exists. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-11 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285770#comment-16285770
 ] 

Yiqun Lin commented on HDFS-12895:
--

Re-attached the v2 patch, fixing the bug of the permission checker.

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table has its owner, group name and permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is the UNIX-style permission for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755.*
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. This command not only adds a new mount table entry but 
> also updates an existing entry once it finds the given mount table already exists. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-12913) TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred

2017-12-11 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12913 started by Zsolt Venczel.

> TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred
> -
>
> Key: HDFS-12913
> URL: https://issues.apache.org/jira/browse/HDFS-12913
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
>
> Once in every 5000 test runs the following issue happens:
> {code}
> 2017-12-11 10:33:09 [INFO] 
> 2017-12-11 10:33:09 [INFO] 
> ---
> 2017-12-11 10:33:09 [INFO]  T E S T S
> 2017-12-11 10:33:09 [INFO] 
> ---
> 2017-12-11 10:33:09 [INFO] Running 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
> 2017-12-11 10:37:32 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 262.641 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
> 2017-12-11 10:37:32 [ERROR] 
> testFencingStress(org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication)
>   Time elapsed: 262.477 s  <<< ERROR!
> 2017-12-11 10:37:32 java.lang.RuntimeException: Deferred
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:137)
> 2017-12-11 10:37:32   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> 2017-12-11 10:37:32   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2017-12-11 10:37:32   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2017-12-11 10:37:32   at java.lang.reflect.Method.invoke(Method.java:498)
> 2017-12-11 10:37:32   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> 2017-12-11 10:37:32   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2017-12-11 10:37:32   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> 2017-12-11 10:37:32   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> 2017-12-11 10:37:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> 2017-12-11 10:37:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> 2017-12-11 10:37:32   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
> 2017-12-11 10:37:32   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
> 2017-12-11 10:37:32 Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby. Visit 
> https://s.apache.org/sbnn-error
> 2017-12-11 10:37:32   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
> 2017-12-11 10:37:32   at 
> 

[jira] [Created] (HDFS-12913) TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred

2017-12-11 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-12913:


 Summary: TestDNFencingWithReplication.testFencingStress:137 ? 
Runtime Deferred
 Key: HDFS-12913
 URL: https://issues.apache.org/jira/browse/HDFS-12913
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel


Once in every 5000 test runs the following issue happens:
{code}
2017-12-11 10:33:09 [INFO] 
2017-12-11 10:33:09 [INFO] 
---
2017-12-11 10:33:09 [INFO]  T E S T S
2017-12-11 10:33:09 [INFO] 
---
2017-12-11 10:33:09 [INFO] Running 
org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
2017-12-11 10:37:32 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, 
Time elapsed: 262.641 s <<< FAILURE! - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
2017-12-11 10:37:32 [ERROR] 
testFencingStress(org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication)
  Time elapsed: 262.477 s  <<< ERROR!
2017-12-11 10:37:32 java.lang.RuntimeException: Deferred
2017-12-11 10:37:32 at 
org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
2017-12-11 10:37:32 at 
org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
2017-12-11 10:37:32 at 
org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:137)
2017-12-11 10:37:32 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
2017-12-11 10:37:32 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2017-12-11 10:37:32 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2017-12-11 10:37:32 at java.lang.reflect.Method.invoke(Method.java:498)
2017-12-11 10:37:32 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
2017-12-11 10:37:32 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2017-12-11 10:37:32 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
2017-12-11 10:37:32 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2017-12-11 10:37:32 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
2017-12-11 10:37:32 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
2017-12-11 10:37:32 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
2017-12-11 10:37:32 at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
2017-12-11 10:37:32 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
2017-12-11 10:37:32 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
2017-12-11 10:37:32 at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
2017-12-11 10:37:32 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
2017-12-11 10:37:32 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
2017-12-11 10:37:32 Caused by: java.lang.RuntimeException: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category READ is not supported in state standby. Visit 
https://s.apache.org/sbnn-error
2017-12-11 10:37:32 at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
2017-12-11 10:37:32 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
2017-12-11 10:37:32 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1421)
2017-12-11 10:37:32 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1862)
2017-12-11 10:37:32 at 

[jira] [Updated] (HDFS-12891) TestDNFencingWithReplication.testFencingStress: java.lang.AssertionError

2017-12-11 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-12891:
-
Attachment: HDFS-12891.02.patch

> TestDNFencingWithReplication.testFencingStress: java.lang.AssertionError
> 
>
> Key: HDFS-12891
> URL: https://issues.apache.org/jira/browse/HDFS-12891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Attachments: HDFS-12891.01.patch, HDFS-12891.02.patch
>
>
> {code:java}
> java.lang.AssertionError: Test resulted in an unexpected exit
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:147)
> :
> :
> 2017-10-19 21:39:40,068 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> 2017-10-19 21:39:40,068 [main] FATAL hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1968)) - Test resulted in an unexpected exit
> 1: java.lang.AssertionError
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4437)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlocksToBeInvalidated(DatanodeDescriptor.java:641)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.invalidateWork(InvalidateBlocks.java:299)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.invalidateWorkForOneNode(BlockManager.java:4246)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeInvalidateWork(BlockManager.java:1736)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4418)
>   ... 1 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


