[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-29 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306648#comment-16306648
 ] 

Íñigo Goiri commented on HDFS-12895:


I was thinking that we could actually use the EXECUTE permissions. When a 
client tries to access a path, we could check the x ACL of the mount point and 
throw an exception. This would allow RBF to block some users from accessing 
some mount points. I see a couple of issues:
* Are the semantics clear or a little convoluted?
* What happens with sub mount points? 

Is this worth opening a JIRA? 
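
For illustration, a minimal sketch of the check (hypothetical names; the mount 
table accessors and the call site are assumptions, not the actual Router API):
{code:title=Execute-check sketch (hypothetical)|borderStyle=solid}
// Hypothetical check at mount-point resolution time: reject the call if the
// caller has no EXECUTE bit on the mount entry. Assumed imports:
// org.apache.hadoop.fs.permission.*, org.apache.hadoop.security.*, java.util.Arrays.
void checkExecute(MountTable entry, UserGroupInformation ugi)
    throws AccessControlException {
  FsPermission mode = entry.getMode();
  FsAction allowed;
  if (ugi.getShortUserName().equals(entry.getOwnerName())) {
    allowed = mode.getUserAction();
  } else if (Arrays.asList(ugi.getGroupNames()).contains(entry.getGroupName())) {
    allowed = mode.getGroupAction();
  } else {
    allowed = mode.getOtherAction();
  }
  if (!allowed.implies(FsAction.EXECUTE)) {
    throw new AccessControlException("No EXECUTE permission on mount point "
        + entry.getSourcePath() + " for user " + ugi.getShortUserName());
  }
}
{code}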

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF, incompatible
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-12895-branch-2.001.patch, HDFS-12895.001.patch, 
> HDFS-12895.002.patch, HDFS-12895.003.patch, HDFS-12895.004.patch, 
> HDFS-12895.005.patch, HDFS-12895.006.patch, HDFS-12895.007.patch
>
>
> Adding ACL support for the Mount Table management. The following is the 
> initial design of ACL control for the mount table management.
> Each mount table entry has its owner, group name, and permission.
> For the mount table permissions, we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove, or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode>* is the UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to *0755*.
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. This command not only adds a new mount table entry but 
> also updates an existing entry once it finds that the given mount table 
> already exists. 






[jira] [Commented] (HDFS-11847) Enhance dfsadmin listOpenFiles command to list files blocking datanode decommissioning

2017-12-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306635#comment-16306635
 ] 

genericqa commented on HDFS-11847:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 22m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
810 unchanged - 1 fixed = 812 total (was 811) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}113m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}194m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-11847 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904047/HDFS-11847.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux ed5c2efac78c 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personal

[jira] [Updated] (HDFS-11847) Enhance dfsadmin listOpenFiles command to list files blocking datanode decommissioning

2017-12-29 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11847:
--
Attachment: HDFS-11847.05.patch

Thanks [~xiaochen] for the review. Attached the v05 patch to address the 
following; please take a look at the latest patch. 
1. HDFS-12969 is tracking the enhancements needed for the {{dfsAdmin 
-listOpenFiles}} command.
2. Restored the old API in the client packages. 
3. {{FSN#getFilesBlockingDecom}} now returns a batched list honoring 
{{maxListOpenFilesResponses}} (see the sketch below). 
4. Restored the old reporting format.
5. Surprisingly, I didn't see this change in the IDE. I was able to get this 
unnecessary change removed after a fresh pull. 
Also updated the test case to cover the batched response for listing open files 
by type. 
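
For point 3, a minimal sketch of the batching idea (illustrative names, not the 
actual patch; {{openFilesBlockingDecom}} and {{maxListOpenFilesResponses}} are 
assumed to be in scope):
{code:title=Batching sketch (illustrative)|borderStyle=solid}
// Illustrative only: stop collecting entries once the configured maximum batch
// size is reached; the caller issues a follow-up call starting after the last
// returned entry. Assumed imports: java.util.*, o.a.h.hdfs.protocol.OpenFileEntry.
List<OpenFileEntry> batch = new ArrayList<>();
for (OpenFileEntry entry : openFilesBlockingDecom) {
  if (batch.size() >= maxListOpenFilesResponses) {
    break;
  }
  batch.add(entry);
}
{code}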

> Enhance dfsadmin listOpenFiles command to list files blocking datanode 
> decommissioning
> --
>
> Key: HDFS-11847
> URL: https://issues.apache.org/jira/browse/HDFS-11847
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11847.01.patch, HDFS-11847.02.patch, 
> HDFS-11847.03.patch, HDFS-11847.04.patch, HDFS-11847.05.patch
>
>
> HDFS-10480 added a {{listOpenFiles}} option to the {{dfsadmin}} command to 
> list all the open files in the system.
> Additionally, it would be very useful to list only the open files that are 
> blocking DataNode decommissioning. With thousand-plus node clusters, where 
> machines may be added and removed regularly for maintenance, any option to 
> monitor and debug decommissioning status is very helpful. The proposal here 
> is to add suboptions to {{listOpenFiles}} for the above case.






[jira] [Updated] (HDFS-12970) HdfsFileStatus#getPath returning null.

2017-12-29 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12970:
--
Description: 
After HDFS-12681, HdfsFileStatus#getPath() returns null.
I don't think this is expected.

Both implementations of {{HdfsFileStatus}} set the path to null.
{code:title=HdfsNamedFileStatus.java|borderStyle=solid}
  HdfsNamedFileStatus(long length, boolean isdir, int replication,
      long blocksize, long mtime, long atime,
      FsPermission permission, Set<Flags> flags,
      String owner, String group,
      byte[] symlink, byte[] path, long fileId,
      int childrenNum, FileEncryptionInfo feInfo,
      byte storagePolicy, ErasureCodingPolicy ecPolicy) {
    super(length, isdir, replication, blocksize, mtime, atime,
        HdfsFileStatus.convert(isdir, symlink != null, permission, flags),
        owner, group, null, null,   // <-- the last null is for path
        HdfsFileStatus.convert(flags));
{code}


{code:title=HdfsLocatedFileStatus.java|borderStyle=solid}
  HdfsLocatedFileStatus(long length, boolean isdir, int replication,
      long blocksize, long mtime, long atime,
      FsPermission permission, EnumSet<Flags> flags,
      String owner, String group,
      byte[] symlink, byte[] path, long fileId,
      int childrenNum, FileEncryptionInfo feInfo,
      byte storagePolicy, ErasureCodingPolicy ecPolicy,
      LocatedBlocks hdfsloc) {
    super(length, isdir, replication, blocksize, mtime, atime,
        HdfsFileStatus.convert(isdir, symlink != null, permission, flags),
        owner, group, null, null,   // <-- the last null is for path
        HdfsFileStatus.convert(flags),
        null);
{code}
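
For illustration, a minimal repro sketch (a hypothetical test, assuming a 
{{MiniDFSCluster}}; not taken from the actual report):
{code:title=Repro sketch (hypothetical)|borderStyle=solid}
// Hypothetical repro, not from the report: create a file, fetch its
// HdfsFileStatus through the DFSClient, and inspect getPath().
Configuration conf = new HdfsConfiguration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
try {
  DistributedFileSystem fs = cluster.getFileSystem();
  DFSTestUtil.createFile(fs, new Path("/repro"), 1024L, (short) 1, 0L);
  HdfsFileStatus status = fs.getClient().getFileInfo("/repro");
  // Expected: a non-null path; observed after HDFS-12681: null.
  System.out.println("getPath() = " + status.getPath());
} finally {
  cluster.shutdown();
}
{code}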


  was:
After HDFS-12681, HdfsFileStatus#getPath() returns null.
I don't think this is expected.

Both implementations of {{HdfsFileStatus}} set it to null.
{code:title=HdfsNamedFileStatus.java|borderStyle=solid}
  HdfsNamedFileStatus(long length, boolean isdir, int replication,
      long blocksize, long mtime, long atime,
      FsPermission permission, Set<Flags> flags,
      String owner, String group,
      byte[] symlink, byte[] path, long fileId,
      int childrenNum, FileEncryptionInfo feInfo,
      byte storagePolicy, ErasureCodingPolicy ecPolicy) {
    super(length, isdir, replication, blocksize, mtime, atime,
        HdfsFileStatus.convert(isdir, symlink != null, permission, flags),
        owner, group, null, null,   // <-- the last null is for path
        HdfsFileStatus.convert(flags));
{code}


{code:title=HdfsLocatedFileStatus.java|borderStyle=solid}
  HdfsLocatedFileStatus(long length, boolean isdir, int replication,
      long blocksize, long mtime, long atime,
      FsPermission permission, EnumSet<Flags> flags,
      String owner, String group,
      byte[] symlink, byte[] path, long fileId,
      int childrenNum, FileEncryptionInfo feInfo,
      byte storagePolicy, ErasureCodingPolicy ecPolicy,
      LocatedBlocks hdfsloc) {
    super(length, isdir, replication, blocksize, mtime, atime,
        HdfsFileStatus.convert(isdir, symlink != null, permission, flags),
        owner, group, null, null,   // <-- the last null is for path
        HdfsFileStatus.convert(flags),
        null);
{code}



> HdfsFileStatus#getPath returning null.
> --
>
> Key: HDFS-12970
> URL: https://issues.apache.org/jira/browse/HDFS-12970
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0
>Reporter: Rushabh S Shah
>Priority: Critical
>
> After HDFS-12681, HdfsFileStatus#getPath() returns null.
> I don't think this is expected.
> Both implementations of {{HdfsFileStatus}} set the path to null.
> {code:title=HdfsNamedFileStatus.java|borderStyle=solid}
>   HdfsNamedFileStatus(long length, boolean isdir, int replication,
>   long blocksize, long mtime, long atime,
>   FsPermission permission, Set<Flags> flags,
>   String owner, String group,
>   byte[] symlink, byte[] path, long fileId,
>   int childrenNum, FileEncryptionInfo feInfo,
>   byte storagePolicy, ErasureCodingPolicy ecPolicy) {
> super(length, isdir, replication, blocksize, mtime, atime,
> HdfsFileStatus.convert(isdir, symlink != null, permission, flags),
> owner, group, null, null,   -- The last null 

[jira] [Updated] (HDFS-12970) HdfsFileStatus#getPath returning null.

2017-12-29 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12970:
--
Description: 
After HDFS-12681, HdfsFileStatus#getPath() returns null.
I don't think this is expected.

Both implementations of {{HdfsFileStatus}} set it to null.
{code:title=HdfsNamedFileStatus.java|borderStyle=solid}
  HdfsNamedFileStatus(long length, boolean isdir, int replication,
      long blocksize, long mtime, long atime,
      FsPermission permission, Set<Flags> flags,
      String owner, String group,
      byte[] symlink, byte[] path, long fileId,
      int childrenNum, FileEncryptionInfo feInfo,
      byte storagePolicy, ErasureCodingPolicy ecPolicy) {
    super(length, isdir, replication, blocksize, mtime, atime,
        HdfsFileStatus.convert(isdir, symlink != null, permission, flags),
        owner, group, null, null,   // <-- the last null is for path
        HdfsFileStatus.convert(flags));
{code}


{code:title=HdfsLocatedFileStatus.java|borderStyle=solid}
  HdfsLocatedFileStatus(long length, boolean isdir, int replication,
      long blocksize, long mtime, long atime,
      FsPermission permission, EnumSet<Flags> flags,
      String owner, String group,
      byte[] symlink, byte[] path, long fileId,
      int childrenNum, FileEncryptionInfo feInfo,
      byte storagePolicy, ErasureCodingPolicy ecPolicy,
      LocatedBlocks hdfsloc) {
    super(length, isdir, replication, blocksize, mtime, atime,
        HdfsFileStatus.convert(isdir, symlink != null, permission, flags),
        owner, group, null, null,   // <-- the last null is for path
        HdfsFileStatus.convert(flags),
        null);
{code}


  was:
After HDFS-12681, HdfsFileStatus#getPath() returns null.
I don't think this is expected.

Relevant code chunk:
Both implementations of {{HdfsFileStatus}} set it to null.
{code:title=HdfsNamedFileStatus.java|borderStyle=solid}
  HdfsNamedFileStatus(long length, boolean isdir, int replication,
      long blocksize, long mtime, long atime,
      FsPermission permission, Set<Flags> flags,
      String owner, String group,
      byte[] symlink, byte[] path, long fileId,
      int childrenNum, FileEncryptionInfo feInfo,
      byte storagePolicy, ErasureCodingPolicy ecPolicy) {
    super(length, isdir, replication, blocksize, mtime, atime,
        HdfsFileStatus.convert(isdir, symlink != null, permission, flags),
        owner, group, null, null,   // <-- the last null is for path
        HdfsFileStatus.convert(flags));
{code}


{code:title=HdfsLocatedFileStatus.java|borderStyle=solid}
  HdfsLocatedFileStatus(long length, boolean isdir, int replication,
      long blocksize, long mtime, long atime,
      FsPermission permission, EnumSet<Flags> flags,
      String owner, String group,
      byte[] symlink, byte[] path, long fileId,
      int childrenNum, FileEncryptionInfo feInfo,
      byte storagePolicy, ErasureCodingPolicy ecPolicy,
      LocatedBlocks hdfsloc) {
    super(length, isdir, replication, blocksize, mtime, atime,
        HdfsFileStatus.convert(isdir, symlink != null, permission, flags),
        owner, group, null, null,   // <-- the last null is for path
        HdfsFileStatus.convert(flags),
        null);
{code}



> HdfsFileStatus#getPath returning null.
> --
>
> Key: HDFS-12970
> URL: https://issues.apache.org/jira/browse/HDFS-12970
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0
>Reporter: Rushabh S Shah
>Priority: Critical
>
> After HDFS-12681, HdfsFileStatus#getPath() returns null.
> I don't think this is expected.
> Both implementations of {{HdfsFileStatus}} set it to null.
> {code:title=HdfsNamedFileStatus.java|borderStyle=solid}
>   HdfsNamedFileStatus(long length, boolean isdir, int replication,
>   long blocksize, long mtime, long atime,
>   FsPermission permission, Set<Flags> flags,
>   String owner, String group,
>   byte[] symlink, byte[] path, long fileId,
>   int childrenNum, FileEncryptionInfo feInfo,
>   byte storagePolicy, ErasureCodingPolicy ecPolicy) {
> super(length, isdir, replication, blocksize, mtime, atime,
> HdfsFileStatus.convert(isdir, symlink != null, permission, flags),
> owner, group, null, null,   -- The la

[jira] [Created] (HDFS-12970) HdfsFileStatus#getPath returning null.

2017-12-29 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-12970:
-

 Summary: HdfsFileStatus#getPath returning null.
 Key: HDFS-12970
 URL: https://issues.apache.org/jira/browse/HDFS-12970
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.1.0
Reporter: Rushabh S Shah
Priority: Critical


After HDFS-12681, HdfsFileStatus#getPath() returns null.
I don't think this is expected.

Relevant code chunk:
Both implementations of {{HdfsFileStatus}} set it to null.
{code:title=HdfsNamedFileStatus.java|borderStyle=solid}
  HdfsNamedFileStatus(long length, boolean isdir, int replication,
      long blocksize, long mtime, long atime,
      FsPermission permission, Set<Flags> flags,
      String owner, String group,
      byte[] symlink, byte[] path, long fileId,
      int childrenNum, FileEncryptionInfo feInfo,
      byte storagePolicy, ErasureCodingPolicy ecPolicy) {
    super(length, isdir, replication, blocksize, mtime, atime,
        HdfsFileStatus.convert(isdir, symlink != null, permission, flags),
        owner, group, null, null,   // <-- the last null is for path
        HdfsFileStatus.convert(flags));
{code}


{code:title=HdfsLocatedFileStatus.java|borderStyle=solid}
  HdfsLocatedFileStatus(long length, boolean isdir, int replication,
      long blocksize, long mtime, long atime,
      FsPermission permission, EnumSet<Flags> flags,
      String owner, String group,
      byte[] symlink, byte[] path, long fileId,
      int childrenNum, FileEncryptionInfo feInfo,
      byte storagePolicy, ErasureCodingPolicy ecPolicy,
      LocatedBlocks hdfsloc) {
    super(length, isdir, replication, blocksize, mtime, atime,
        HdfsFileStatus.convert(isdir, symlink != null, permission, flags),
        owner, group, null, null,   // <-- the last null is for path
        HdfsFileStatus.convert(flags),
        null);
{code}







[jira] [Commented] (HDFS-12934) RBF: Federation supports global quota

2017-12-29 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306560#comment-16306560
 ] 

Íñigo Goiri commented on HDFS-12934:


* I've checked {{MountTableResolver}} and it might make sense to use a similar 
{{TreeMap}} for the map in {{RouterQuotaLocalCache}}; it would simplify the 
recursive search. I would go for a read/write lock or, even better, an atomic 
update of the full map (see the sketch after this list). Not sure. 
* In {{Router}}, I would rename {{getChildrenPaths(String path)}} into 
{{getQuotaUsage(String path)}}, and {{getQuotaUsageCache()}} into 
{{getQuotaUsage()}}.
* Typo {{NamaService}} in {{RouterRpcServer}}, and {{loc.getNameserviceId()}} 
should be extracted.
* Update {{TestMountTable}} to include the new field for the quota.
* In {{TestRouterQuota}}, I think we shouldn't use the wait for breaking the 
quota, but actually create the first 3 successfully and then fail on the next 
one.
* Do you mind opening the JIRA for the doc and the UI?
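
For the first point, a minimal sketch of the atomic full-map update (names are 
hypothetical, not the actual {{RouterQuotaLocalCache}} API):
{code:title=Atomic map swap sketch (hypothetical)|borderStyle=solid}
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical names. Readers always see a consistent snapshot; the refresher
// builds a new TreeMap off to the side and swaps it in with one atomic operation.
class QuotaCacheSketch {
  private final AtomicReference<TreeMap<String, Long>> cache =
      new AtomicReference<>(new TreeMap<>());

  Long getQuotaUsage(String path) {
    // A TreeMap also allows floor/ceiling lookups, which simplifies the
    // recursive search over mount-point paths.
    return cache.get().get(path);
  }

  void refresh(Map<String, Long> latest) {
    cache.set(new TreeMap<>(latest)); // atomic swap of the full map
  }
}
{code}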

> RBF: Federation supports global quota
> -
>
> Key: HDFS-12934
> URL: https://issues.apache.org/jira/browse/HDFS-12934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12934.001.patch, HDFS-12934.002.patch, 
> HDFS-12934.003.patch, RBF support  global quota.pdf
>
>
> Currently federation doesn't support setting a global quota for each folder. 
> The quota is applied to each subcluster under the specified folder via RPC 
> calls.
> It would be very useful for users if federation supported setting a global 
> quota and exposed a command for it.
> In a federated environment, a folder can be spread across multiple 
> subclusters. For this reason, we plan to solve this in the following way:
> # Set a global quota across the subclusters. We don't allow any subcluster to 
> exceed the maximum quota value.
> # Construct a cache map for storing the summed quota usage of the 
> subclusters under a federated folder. Every time we want to do a WRITE 
> operation under a specified folder, we get its quota usage from the cache and 
> verify its quota. If the quota is exceeded, throw an exception; otherwise, 
> update its quota usage in the cache when the operation finishes.
> The quota will be set on the mount table as a new field. The set/unset 
> commands will be like:
> {noformat}
>  hdfs dfsrouteradmin -setQuota -ns <nsQuota> -ss <ssQuota> <path>
>  hdfs dfsrouteradmin -clrQuota <path>
> {noformat}






[jira] [Comment Edited] (HDFS-12618) fsck -includeSnapshots reports wrong amount of total blocks

2017-12-29 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306556#comment-16306556
 ] 

Xiao Chen edited comment on HDFS-12618 at 12/29/17 9:39 PM:


Sorry for my month-long delay on reviewing; I finally locked myself to the 
chair and reviewed the latest patch and comments before the end of the year. 

Good to see we're improving, and happy to see the many added test cases. Thanks 
for the continued work [~wchevreuil], and [~daryn] for the reviews.

The problem is pretty hard, but the direction looks good. Some comments on patch 4:
- {{validate()}} then catch {{AssertionError}} should be changed, for the 
reasons Daryn mentioned, plus the fact that assertions could be disabled at run 
time. See 
https://docs.oracle.com/javase/8/docs/technotes/guides/language/assert.html#enable-disable
 . 
- I'm not sure the current {{getLastINode()==null}} check is enough for 
{{INodeReference}}s. What if the blocks changed in the middle of the snapshots? 
For example, say file 1 has blocks 1 & 2. Then the following happened: snapshot 
s1, truncate so the file has only block 1, snapshot s2, append so the file has 
blocks 1 & 3, snapshot s3. Would we be able to tell the difference with {{fsck 
-includeSnapshots}} now?
- Because locks are reacquired during fsck, it's theoretically possible that 
snapshots are created / deleted during the scan. I think the current behavior 
is that we're not aware of new snapshots, and skip the deleted snapshots (since 
{{snapshottableDirs}} is populated before the {{check}} call). Possible to add 
a fault-injected test to make sure we don't NPE on deleted snapshots?
- Sadly, {{NamenodeFsck}} also has other block counts like 
{{numMinReplicatedBlocks}}. The current code only takes care of total blocks, 
which IMO is the most important. This also seems to be the goal of this jira as 
suggested by the title and description, so it's okay to split that into another 
jira.

Trivial ones:
- I see the variable name {{checkDir}} is changed to {{filePath}}, which is 
not accurate. I'd prefer to keep the old name {{path}}.
- {{checkFilesInSnapshotOnly}}: suggest handling {{inode==null}} in its own 
block, so we don't have to worry about that for non-{{INodeFile}} code paths. 
(FYI, null is not instanceof anything, so the patch 4 code didn't have to 
check. Need to be careful after changing to {{isFile}}, as (correctly) 
suggested by Daryn.)
- {{lastSnapshotId = -1}} should use {{Snapshot.NO_SNAPSHOT_ID}} rather than -1.
- {{inodeFile.getFileWithSnapshotFeature().getDiffs()}} can never be null 
judging from {{FileWithSnapshotFeature}}, so there is no need for a nullity 
check.
- Please format the code you changed. There are many space inconsistencies 
around brackets.
- Tests should add timeouts. Perhaps better to just use a {{Rule}} on the 
class, to safeguard all cases by default with something like 3 minutes (see 
the sketch after this comment).
- Feels to me the "HEALTHY" check at the beginning of each test case is not 
necessary.
- Could use {{GenericTestUtils.waitFor()}} for the waits.
- Optional - {{TestFsck}} is already 2.4k+ lines long. Maybe better to create 
a new test class specifically for the snapshot block count. In that class the 
name of each test would be shorter and more readable.

Also a curiosity question to [~daryn]:
bq. try - lock - finally unlock v.s. lock - try - finally unlock
Understood and completely agree with the advice. The curiosity comes from: in 
current HDFS, it looks like only {{FSN#writeLockInterruptibly}} can throw a 
checked exception. Of course there could also be unchecked exceptions - is 
this coding advice or something you have run into in practice? Care to share 
the fun details? :)
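
For the timeout and wait points above, a minimal sketch (an assumed test 
skeleton, not part of the patch):
{code:title=Timeout rule and waitFor sketch (illustrative)|borderStyle=solid}
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

// Assumed skeleton, illustrative only; the condition method is a placeholder.
public class TestFsckSnapshotBlockCount {
  // Safeguard every test case in the class with a default timeout.
  @Rule
  public Timeout globalTimeout = new Timeout(180_000); // 3 minutes

  @Test
  public void testSnapshotBlockCount() throws Exception {
    // Poll instead of sleeping: check every 100 ms, give up after 10 s.
    GenericTestUtils.waitFor(() -> expectedBlockCountReached(), 100, 10000);
  }

  private boolean expectedBlockCountReached() {
    return true; // placeholder for the real assertion condition
  }
}
{code}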


was (Author: xiaochen):
Sorry for my month-long delay on reviewing; I finally locked myself to the 
chair and reviewed the latest patch and comments before the end of the year. 

Good to see we're improving, and happy to see the many added test cases. Thanks 
for the continued work [~wchevreuil], and [~daryn] for the reviews.

The problem is pretty hard, but the direction looks good. Some comments on patch 4:
- {{validate()}} then catch {{AssertionError}} should be changed, for the 
reasons Daryn mentioned, plus the fact that assertions could be disabled at run 
time. See 
https://docs.oracle.com/javase/8/docs/technotes/guides/language/assert.html#enable-disable
 . 
- I'm not sure the current {{getLastINode()==null}} check is enough for 
{{INodeReference}}s. What if the blocks changed in the middle of the snapshots? 
For example, say file 1 has blocks 1 & 2. Then the following happened: snapshot 
s1, truncate so the file has only block 1, snapshot s2, append so the file has 
blocks 1 & 3, snapshot s3. Would we be able to tell the difference with {{fsck 
-includeSnapshots}} now?
- Because locks are reacquired during fsck, it's theoretically possible that 
snapshots are created / deleted during the scan. I think the current behavior 
is that we're not awa

[jira] [Commented] (HDFS-12618) fsck -includeSnapshots reports wrong amount of total blocks

2017-12-29 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306556#comment-16306556
 ] 

Xiao Chen commented on HDFS-12618:
--

Sorry for my month-long delay on reviewing; I finally locked myself to the 
chair and reviewed the latest patch and comments before the end of the year. 

Good to see we're improving, and happy to see the many added test cases. Thanks 
for the continued work [~wchevreuil], and [~daryn] for the reviews.

The problem is pretty hard, but the direction looks good. Some comments on patch 4:
- {{validate()}} then catch {{AssertionError}} should be changed, for the 
reasons Daryn mentioned, plus the fact that assertions could be disabled at run 
time. See 
https://docs.oracle.com/javase/8/docs/technotes/guides/language/assert.html#enable-disable
 . 
- I'm not sure the current {{getLastINode()==null}} check is enough for 
{{INodeReference}}s. What if the blocks changed in the middle of the snapshots? 
For example, say file 1 has blocks 1 & 2. Then the following happened: snapshot 
s1, truncate so the file has only block 1, snapshot s2, append so the file has 
blocks 1 & 3, snapshot s3. Would we be able to tell the difference with {{fsck 
-includeSnapshots}} now?
- Because locks are reacquired during fsck, it's theoretically possible that 
snapshots are created / deleted during the scan. I think the current behavior 
is that we're not aware of new snapshots, and skip the deleted snapshots (since 
{{snapshottableDirs}} is populated before the {{check}} call). Possible to add 
a fault-injected test to make sure we don't NPE on deleted snapshots?
- Sadly, {{NamenodeFsck}} also has other block counts like 
{{numMinReplicatedBlocks}}. The current code only takes care of total blocks, 
which IMO is the most important. This also seems to be the goal of this jira as 
suggested by the title and description, so it's okay to split that into another 
jira.

Trivial ones:
- I see the variable name {{checkDir}} is changed to {{filePath}}, which is 
not accurate. I'd prefer to keep the old name {{path}}.
- {{checkFilesInSnapshotOnly}}: suggest handling {{inode==null}} in its own 
block, so we don't have to worry about that for non-{{INodeFile}} code paths. 
(FYI, null is not instanceof anything, so the patch 4 code didn't have to 
check. Need to be careful after changing to {{isFile}}, as (correctly) 
suggested by Daryn.)
- {{lastSnapshotId = -1}} should use {{Snapshot.NO_SNAPSHOT_ID}} rather than -1.
- {{inodeFile.getFileWithSnapshotFeature().getDiffs()}} can never be null 
judging from {{FileWithSnapshotFeature}}, so there is no need for a nullity 
check.
- Please format the code you changed. There are many space inconsistencies 
around brackets.
- Tests should add timeouts. Perhaps better to just use a {{Rule}} on the 
class, to safeguard all cases by default with something like 3 minutes.
- Feels to me the "HEALTHY" check at the beginning of each test case is not 
necessary.
- Could use {{GenericTestUtils.waitFor()}} for the waits.
- Optional - {{TestFsck}} is already 2.4k+ lines long. Maybe better to create 
a new test class specifically for the snapshot block count. In that class the 
name of each test would be shorter and more readable.

Also a curiosity question to [~daryn]:
bq. try - lock - finally unlock v.s. lock - try - finally unlock
Understood and completely agree with the advice. The curiosity comes from: in 
current HDFS, it looks like only {{FSN#writeLockInterruptibly}} can throw a 
checked exception. Of course there could also be unchecked exceptions - is 
this coding advice or something you have run into in practice? Care to share 
the fun details? :)

> fsck -includeSnapshots reports wrong amount of total blocks
> ---
>
> Key: HDFS-12618
> URL: https://issues.apache.org/jira/browse/HDFS-12618
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-121618.initial, HDFS-12618.001.patch, 
> HDFS-12618.002.patch, HDFS-12618.003.patch, HDFS-12618.004.patch
>
>
> When snapshots are enabled, if a file is deleted but is contained by a 
> snapshot, *fsck* will not report blocks for such a file, showing a different 
> number of *total blocks* than what is exposed in the Web UI. 
> This should be fine, as *fsck* provides the *-includeSnapshots* option. The 
> problem is that the *-includeSnapshots* option causes *fsck* to count blocks 
> for every occurrence of a file in snapshots, which is wrong because these 
> blocks should be counted only once (for instance, if a 100MB file is present 
> in 3 snapshots, it would still map to only one block in hdfs). This causes 
> fsck to report many more blocks than actually exist in hdfs and are reported 
> in the Web UI.
> Here's an example:
> 1) HDFS has two files of 2 blocks each:
> {noforma

[jira] [Created] (HDFS-12969) DfsAdmin listOpenFiles should report files by type

2017-12-29 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-12969:
-

 Summary: DfsAdmin listOpenFiles should report files by type
 Key: HDFS-12969
 URL: https://issues.apache.org/jira/browse/HDFS-12969
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.1.0
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy


HDFS-11847 introduced a new option, {{-blockingDecommission}}, to the existing 
command {{dfsadmin -listOpenFiles}}. But the reporting done by the command 
doesn't differentiate the files based on their type (like blocking 
decommission). In order to change the reporting style, the proto format used 
for the base command has to be updated to carry additional fields, and that is 
better done in a new jira outside of HDFS-11847. This jira is to track the 
end-to-end enhancements needed for the {{dfsadmin -listOpenFiles}} console 
output.
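
For reference, the invocation whose reporting this jira wants to restructure 
(per HDFS-11847):
{noformat}
$ hdfs dfsadmin -listOpenFiles -blockingDecommission
{noformat}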







[jira] [Commented] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306529#comment-16306529
 ] 

Hudson commented on HDFS-12915:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13425 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13425/])
HDFS-12915. Fix findbugs warning in (lei: rev 
6e3e1b8cde737e4c03b0f5279cab0239e7069a72)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java


> Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy
> ---
>
> Key: HDFS-12915
> URL: https://issues.apache.org/jira/browse/HDFS-12915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Chris Douglas
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12915.00.patch, HDFS-12915.01.patch, 
> HDFS-12915.02.patch
>
>
> It seems HDFS-12840 creates a new findbugs warning.
> Possible null pointer dereference of replication in 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Bug type NP_NULL_ON_SOME_PATH (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat
> In method 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Value loaded from replication
> Dereferenced at INodeFile.java:[line 210]
> Known null at INodeFile.java:[line 207]
> From a quick look at the patch, it seems bogus though. [~eddyxu][~Sammi] 
> would you please double check?






[jira] [Commented] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-29 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306519#comment-16306519
 ] 

Chris Douglas commented on HDFS-12915:
--

Thanks, [~eddyxu]

> Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy
> ---
>
> Key: HDFS-12915
> URL: https://issues.apache.org/jira/browse/HDFS-12915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Chris Douglas
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12915.00.patch, HDFS-12915.01.patch, 
> HDFS-12915.02.patch
>
>
> It seems HDFS-12840 creates a new findbugs warning.
> Possible null pointer dereference of replication in 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Bug type NP_NULL_ON_SOME_PATH (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat
> In method 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Value loaded from replication
> Dereferenced at INodeFile.java:[line 210]
> Known null at INodeFile.java:[line 207]
> From a quick look at the patch, it seems bogus though. [~eddyxu][~Sammi] 
> would you please double check?






[jira] [Updated] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-29 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12915:
-
   Resolution: Fixed
 Assignee: Chris Douglas
Fix Version/s: 3.0.1
   3.1.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-3.0

> Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy
> ---
>
> Key: HDFS-12915
> URL: https://issues.apache.org/jira/browse/HDFS-12915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Chris Douglas
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12915.00.patch, HDFS-12915.01.patch, 
> HDFS-12915.02.patch
>
>
> It seems HDFS-12840 creates a new findbugs warning.
> Possible null pointer dereference of replication in 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Bug type NP_NULL_ON_SOME_PATH (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat
> In method 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Value loaded from replication
> Dereferenced at INodeFile.java:[line 210]
> Known null at INodeFile.java:[line 207]
> From a quick look at the patch, it seems bogus though. [~eddyxu][~Sammi] 
> would you please double check?






[jira] [Commented] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-29 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306500#comment-16306500
 ] 

Lei (Eddy) Xu commented on HDFS-12915:
--

[~chris.douglas] Thanks for the patch! +1 on the last patch. 

{{blockType}} is widely used in HDFS today to differentiate whether a file is 
EC or not. I don't think that should block this patch from going in. 

I will commit the latest patch soon. Thanks for taking care of this, 
[~chris.douglas], and thanks for the reviews, [~Sammi]
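
For context, an illustrative pattern for the NP_NULL_ON_SOME_PATH warning (not 
necessarily the committed fix):
{code:title=Null-guard sketch (illustrative)|borderStyle=solid}
// Illustrative only, not necessarily the committed fix: make the null
// contract explicit before unboxing the Short, since the boxed replication
// may legitimately be null for striped (EC) blocks.
static long toLayoutRedundancy(BlockType blockType, Short replication) {
  if (replication == null) {
    throw new IllegalArgumentException(
        "Replication must be set for " + blockType + " blocks");
  }
  return replication; // safe unboxing after the explicit null check
}
{code}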

> Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy
> ---
>
> Key: HDFS-12915
> URL: https://issues.apache.org/jira/browse/HDFS-12915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
> Attachments: HDFS-12915.00.patch, HDFS-12915.01.patch, 
> HDFS-12915.02.patch
>
>
> It seems HDFS-12840 creates a new findbugs warning.
> Possible null pointer dereference of replication in 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Bug type NP_NULL_ON_SOME_PATH (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat
> In method 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Value loaded from replication
> Dereferenced at INodeFile.java:[line 210]
> Known null at INodeFile.java:[line 207]
> From a quick look at the patch, it seems bogus though. [~eddyxu][~Sammi] 
> would you please double check?






[jira] [Commented] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-29 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306491#comment-16306491
 ] 

Chris Douglas commented on HDFS-12915:
--

[~eddyxu], [~Sammi], could you take a look at the patch? The findbugs warning 
has been flagged in every patch build for a few weeks. Please feel free to take 
this over, but we should commit a solution soon.

> Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy
> ---
>
> Key: HDFS-12915
> URL: https://issues.apache.org/jira/browse/HDFS-12915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
> Attachments: HDFS-12915.00.patch, HDFS-12915.01.patch, 
> HDFS-12915.02.patch
>
>
> It seems HDFS-12840 creates a new findbugs warning.
> Possible null pointer dereference of replication in 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Bug type NP_NULL_ON_SOME_PATH (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat
> In method 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Value loaded from replication
> Dereferenced at INodeFile.java:[line 210]
> Known null at INodeFile.java:[line 207]
> From a quick look at the patch, it seems bogus though. [~eddyxu][~Sammi] 
> would you please double check?






[jira] [Commented] (HDFS-12934) RBF: Federation supports global quota

2017-12-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306259#comment-16306259
 ] 

genericqa commented on HDFS-12934:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
53s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 431 unchanged - 0 fixed = 432 total (was 431) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12934 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904006/HDFS-12934.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  xml  |
| uname | Linux eb85fddb323e 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a55884c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22521/artifact/out/branch-findbug

[jira] [Commented] (HDFS-12795) Ozone: SCM: Support for Container LifeCycleState PENDING_CLOSE and LifeCycleEvent FULL_CONTAINER

2017-12-29 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306199#comment-16306199
 ] 

Shashikant Banerjee commented on HDFS-12795:


I see the TestSCMCli tests failing every time in my local box. It seems the 
tests are failing because the state transitions are no longer going directly 
from OPEN to CLOSED. 
HDFS-12968 has been opened to track this.

[~nandakumar131], can you please confirm this?
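
For reference, the lifecycle from the issue description, expressed as a tiny 
state machine (illustrative only, not the actual SCM code):
{code:title=Lifecycle sketch (illustrative)|borderStyle=solid}
// Illustrative only: OPEN --FULL_CONTAINER--> PENDING_CLOSE --CLOSE--> CLOSED
enum LifeCycleState { OPEN, PENDING_CLOSE, CLOSED }
enum LifeCycleEvent { FULL_CONTAINER, CLOSE }

static LifeCycleState transition(LifeCycleState s, LifeCycleEvent e) {
  if (s == LifeCycleState.OPEN && e == LifeCycleEvent.FULL_CONTAINER) {
    return LifeCycleState.PENDING_CLOSE;
  }
  if (s == LifeCycleState.PENDING_CLOSE && e == LifeCycleEvent.CLOSE) {
    return LifeCycleState.CLOSED;
  }
  throw new IllegalStateException("Illegal transition: " + s + " on " + e);
}
{code}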

> Ozone: SCM: Support for Container LifeCycleState PENDING_CLOSE and 
> LifeCycleEvent FULL_CONTAINER
> 
>
> Key: HDFS-12795
> URL: https://issues.apache.org/jira/browse/HDFS-12795
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12795-HDFS-7240.000.patch
>
>
> To bring in support for close container, SCM has to have Container 
> LifeCycleState PENDING_CLOSE and LifeCycleEvent FULL_CONTAINER.
> {noformat}
> States:  OPEN ----------------> PENDING_CLOSE ----------> [CLOSED]
> Events:       (FULL_CONTAINER)                  (CLOSE)
> {noformat}






[jira] [Updated] (HDFS-12934) RBF: Federation supports global quota

2017-12-29 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12934:
-
Attachment: HDFS-12934.003.patch

Thanks for the review, [~elgoiri]. Addressed the initial comments and attached 
the updated patch.

> RBF: Federation supports global quota
> -
>
> Key: HDFS-12934
> URL: https://issues.apache.org/jira/browse/HDFS-12934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12934.001.patch, HDFS-12934.002.patch, 
> HDFS-12934.003.patch, RBF support  global quota.pdf
>
>
> Currently federation doesn't support setting a global quota for each folder. 
> The quota is applied to each subcluster under the specified folder via RPC 
> calls.
> It would be very useful for users if federation supported setting a global 
> quota and exposed a command for it.
> In a federated environment, a folder can be spread across multiple 
> subclusters. For this reason, we plan to solve this in the following way:
> # Set a global quota across the subclusters. We don't allow any subcluster to 
> exceed the maximum quota value.
> # Construct a cache map for storing the summed quota usage of the 
> subclusters under a federated folder. Every time we want to do a WRITE 
> operation under a specified folder, we get its quota usage from the cache and 
> verify its quota. If the quota is exceeded, throw an exception; otherwise, 
> update its quota usage in the cache when the operation finishes.
> The quota will be set on the mount table as a new field. The set/unset 
> commands will be like:
> {noformat}
>  hdfs dfsrouteradmin -setQuota -ns <nsQuota> -ss <ssQuota> <path>
>  hdfs dfsrouteradmin -clrQuota <path>
> {noformat}






[jira] [Commented] (HDFS-12966) Ozone: owner name should be set properly when the container allocation happens

2017-12-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306151#comment-16306151
 ] 

genericqa commented on HDFS-12966:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
9s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
9s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
42s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project: The patch generated 6 new + 
5 unchanged - 0 fixed = 11 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
42s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}128m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}199m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.cli.TestErasureCodingCLI |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.ksm.TestKeySpaceManager |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.ksm.TestKsmBlockVersioning |
|   | hadoop.ozone.scm.TestSCMCli |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.scm.container.TestContainerStateManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.

[jira] [Assigned] (HDFS-12636) Ozone: OzoneFileSystem: both rest/rpc backend should be supported using unified OzoneClient client

2017-12-29 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDFS-12636:


Assignee: Lokesh Jain  (was: Mukul Kumar Singh)

> Ozone: OzoneFileSystem: both rest/rpc backend should be supported using 
> unified OzoneClient client
> --
>
> Key: HDFS-12636
> URL: https://issues.apache.org/jira/browse/HDFS-12636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
> Fix For: HDFS-7240
>
>
> The OzoneClient library provides methods to invoke both RPC- and REST-based 
> operations against Ozone. This API will help improve both the performance 
> and the interface management in OzoneFileSystem.
> This jira will be used to convert the REST-based calls to use this new 
> unified client.


