[jira] [Commented] (HDFS-12051) Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly those denoting file/directory names) to save memory
[ https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329913#comment-16329913 ] Misha Dmitriev commented on HDFS-12051: --- [~szetszwo] regarding the patch name: I believe your comments are not very constructive, because you repeatedly complain that the summary is misleading, but don't explain in more detail what you would like to change. The summary cannot cover the details of all the things I changed in the code. If it was crucial for you that it mention "NameCache" (the change that you've just made), you could have said so explicitly and/or made this change yourself right away. That would have saved both of us a lot of time. Regarding the numbers: I would really appreciate it if you spent some time reading the beginning of this thread, where I gave the numbers indicating the significance of the problem (how much memory was wasted by duplicate byte[] arrays despite the presence of the old NameCache) and how much memory my new NameCache saved. But since you insist that I do it once again, I am copying them here for your convenience. "Analyzing one heap dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays result in 6.5% memory overhead, and most of these arrays are referenced by {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}} and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}" "What makes this case special is that the number of byte[] arrays is very high (~100M total arrays, ~25M unique arrays), but the average duplication factor is not very high (~4). Some byte[] arrays are replicated an extremely high number of times; e.g., per the jxray report there are 3.5M copies of one 17-element array, and so on. But that means that the vast majority of arrays actually don't have any duplicates." "I've redesigned the new NameCache so that its size adjusts depending on the size of the input data, within user-specified limits. 
It was tested using a synthetic workload simulating that of a big Hadoop installation. The result is an 8.5% reduction in the overhead due to duplicate byte[] arrays. Here are the results of the jxray analysis of the respective heap dumps. Before:
{code:java}
19. DUPLICATE PRIMITIVE ARRAYS
Types of duplicate objects:
     Ovhd             Num objs   Num unique objs   Class name
346,198K (12.6%)      12097893   3714559           byte[]
...
Total arrays: 12,101,111  Unique arrays: 3,716,791  Duplicate values: 371,424  Overhead: 346,322K (12.6%)
{code}
After:
{code:java}
19. DUPLICATE PRIMITIVE ARRAYS
Types of duplicate objects:
     Ovhd             Num objs   Num unique objs   Class name
100,440K (3.9%)       6208877    3855398           byte[]
...
Total arrays: 6,212,104  Unique arrays: 3,857,624  Duplicate values: 727,662  Overhead: 100,566K (3.9%)
{code}
" I hope very much that you will now spend some time and really read these numbers. As for "reasons to hurry": this is not a hurry; this is a change that is desperately behind schedule. I made it in August 2017, and now it is January 2018. > Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly > those denoting file/directory names) to save memory > - > > Key: HDFS-12051 > URL: https://issues.apache.org/jira/browse/HDFS-12051 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev >Priority: Major > Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, > HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch, > HDFS-12051.06.patch > > > When snapshot diff operation is performed in a NameNode that manages several > million HDFS files/directories, NN needs a lot of memory. Analyzing one heap > dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays > result in 6.5% memory overhead, and most of these arrays are referenced by > {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}} > and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}: > {code:java} > 19. 
DUPLICATE PRIMITIVE ARRAYS > Types of duplicate objects: > Ovhd Num objs Num unique objs Class name > 3,220,272K (6.5%) 104749528 25760871 byte[] > > 1,841,485K (3.7%), 53194037 dup arrays (13158094 unique) > 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 > of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, > 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, > 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), > 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...
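The interning approach discussed in this thread can be illustrated with a minimal sketch. Everything here (the NameCacheSketch class, its fixed-capacity direct-mapped table, its replace-on-miss behaviour) is an illustrative assumption, not the actual NameCache from the HDFS-12051 patch:

```java
import java.util.Arrays;

/**
 * Minimal sketch of interning duplicate byte[] names: a direct-mapped cache
 * that returns a canonical array when an equal one was seen recently.
 * Illustrative only; not the real HDFS-12051 implementation.
 */
public class NameCacheSketch {
    private final byte[][] slots;

    public NameCacheSketch(int capacity) {
        slots = new byte[capacity][];
    }

    /**
     * Returns a previously cached array equal to {@code name}, so the caller
     * can drop its own copy; otherwise caches {@code name} and returns it.
     */
    public byte[] intern(byte[] name) {
        int slot = (Arrays.hashCode(name) & 0x7fffffff) % slots.length;
        byte[] cached = slots[slot];
        if (cached != null && Arrays.equals(cached, name)) {
            return cached; // duplicate: hand back the canonical copy
        }
        slots[slot] = name; // miss: this array becomes the canonical copy
        return name;
    }
}
```

With a cache of this shape, heavily replicated names (like the 3.5M copies of one 17-element array above) collapse onto a single canonical array, while the majority of names, which have no duplicates, simply occupy a slot until evicted.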
[jira] [Updated] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up
[ https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jianfei Jiang updated HDFS-12935: - Attachment: HDFS-12935.009.patch > Get ambiguous result for DFSAdmin command in HA mode when only one namenode > is up > - > > Key: HDFS-12935 > URL: https://issues.apache.org/jira/browse/HDFS-12935 > Project: Hadoop HDFS > Issue Type: Bug > Components: tools >Affects Versions: 2.9.0, 3.0.0-beta1, 3.0.0 >Reporter: Jianfei Jiang >Assignee: Jianfei Jiang >Priority: Major > Attachments: HDFS-12935.002.patch, HDFS-12935.003.patch, > HDFS-12935.004.patch, HDFS-12935.005.patch, HDFS-12935.006-branch.2.patch, > HDFS-12935.006.patch, HDFS-12935.007-branch.2.patch, HDFS-12935.007.patch, > HDFS-12935.008.patch, HDFS-12935.009.patch, HDFS_12935.001.patch > > > In HA mode, if one namenode is down, most functions still work. Consider > the following two cases: > (1) nn1 up and nn2 down > (2) nn1 down and nn2 up > These two cases should be equivalent. However, some DFSAdmin commands give > ambiguous results. The commands can be sent successfully to the namenode that > is up, but they are actually effective only when nn1 is up, regardless of the > exception (an IOException when connecting to the down namenode nn2). If only > nn2 is up, the commands have no effect at all and only the exception from > connecting to nn1 is reported. > Take the command "hdfs dfsadmin -setBalancerBandwidth", which aims to set > the balancer bandwidth value for datanodes, as an example. It works, and all > the datanodes get the setting, only when nn1 is up. If only nn2 is up, the > command throws an exception directly and no datanode gets the bandwidth > setting. Approximately ten DFSAdmin commands share this logic and may be > ambiguous. 
> [root@jiangjianfei01 ~]# hdfs haadmin -getServiceState nn1 > active > [root@jiangjianfei01 ~]# hdfs dfsadmin -setBalancerBandwidth 12345 > *Balancer bandwidth is set to 12345 for jiangjianfei01/172.17.0.14:9820* > setBalancerBandwidth: Call From jiangjianfei01/172.17.0.14 to > jiangjianfei02:9820 failed on connection exception: > java.net.ConnectException: Connection refused; For more details see: > http://wiki.apache.org/hadoop/ConnectionRefused > [root@jiangjianfei01 ~]# hdfs haadmin -getServiceState nn2 > active > [root@jiangjianfei01 ~]# hdfs dfsadmin -setBalancerBandwidth 1234 > setBalancerBandwidth: Call From jiangjianfei01/172.17.0.14 to > jiangjianfei01:9820 failed on connection exception: > java.net.ConnectException: Connection refused; For more details see: > http://wiki.apache.org/hadoop/ConnectionRefused > [root@jiangjianfei01 ~]# -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
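The fix direction the report implies, namely trying the operation on every configured namenode and reporting per-namenode failures instead of aborting on the first connection exception, can be sketched as follows. The NameNodeProxy interface and the setOnAll helper are hypothetical names for illustration, not the actual DFSAdmin API:

```java
import java.io.IOException;
import java.util.List;

/**
 * Sketch only: apply an admin operation (e.g. setBalancerBandwidth) to every
 * configured namenode, collecting failures instead of stopping at the first
 * unreachable one. NameNodeProxy and setOnAll are hypothetical names.
 */
public class HaAdminSketch {
    public interface NameNodeProxy {
        void setBalancerBandwidth(long bandwidth) throws IOException;
        String getAddress();
    }

    /** Returns the number of namenodes that accepted the command. */
    public static int setOnAll(List<NameNodeProxy> proxies, long bandwidth) {
        int successes = 0;
        for (NameNodeProxy proxy : proxies) {
            try {
                proxy.setBalancerBandwidth(bandwidth);
                successes++;
            } catch (IOException e) {
                // Report the failure but keep going, so the reachable
                // namenode still receives the setting.
                System.err.println("setBalancerBandwidth failed on "
                    + proxy.getAddress() + ": " + e.getMessage());
            }
        }
        return successes;
    }
}
```

With logic of this shape, the nn1-down/nn2-up case behaves the same as nn1-up/nn2-down: the reachable namenode gets the setting and the unreachable one is merely reported.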
[jira] [Updated] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up
[ https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jianfei Jiang updated HDFS-12935: - Status: Patch Available (was: In Progress) Patch 009: Fix the checkstyle.
[jira] [Commented] (HDFS-12973) RBF: Document global quota supporting in federation
[ https://issues.apache.org/jira/browse/HDFS-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329907#comment-16329907 ] Yiqun Lin commented on HDFS-12973: -- Attaching a new patch to fix the checkstyle warnings. > RBF: Document global quota supporting in federation > --- > > Key: HDFS-12973 > URL: https://issues.apache.org/jira/browse/HDFS-12973 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Major > Attachments: HDFS-12973.001.patch, HDFS-12973.002.patch, > HDFS-12973.003.patch > > > Document global quota supporting in federation
[jira] [Updated] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up
[ https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jianfei Jiang updated HDFS-12935: - Status: In Progress (was: Patch Available)
[jira] [Updated] (HDFS-12973) RBF: Document global quota supporting in federation
[ https://issues.apache.org/jira/browse/HDFS-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12973: - Attachment: HDFS-12973.003.patch
[jira] [Updated] (HDFS-13021) Incorrect storage policy of snapshot file was returned by getStoragePolicy command
[ https://issues.apache.org/jira/browse/HDFS-13021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] LiXin Ge updated HDFS-13021: Summary: Incorrect storage policy of snapshot file was returned by getStoragePolicy command (was: Incorrect storage policy of snapshott file was returned by getStoragePolicy command) > Incorrect storage policy of snapshot file was returned by getStoragePolicy > command > -- > > Key: HDFS-13021 > URL: https://issues.apache.org/jira/browse/HDFS-13021 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, snapshots >Affects Versions: 3.1.0 >Reporter: LiXin Ge >Assignee: LiXin Ge >Priority: Major > > Snapshots are supposed to be immutable and read only, so the file status seen > through a snapshot path shouldn't follow changes to the original file. > The storage policy behaviour for snapshot paths is currently buggy. > --- > Reproduction: operations on the snapshottable dir {{/storagePolicy}} > *before making the snapshot:* > {code:java} > [bin]# hdfs storagepolicies -setStoragePolicy -path /storagePolicy -policy > PROVIDED > Set storage policy PROVIDED on /storagePolicy > [bin]# hadoop fs -put /home/file /storagePolicy/file_PROVIDED > [bin]# hdfs storagepolicies -getStoragePolicy -path > /storagePolicy/file_PROVIDED > The storage policy of /storagePolicy/file_PROVIDED: > BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], > replicationFallbacks=[ARCHIVE]} > {code} > *make the snapshot and check:* > {code:java} > [bin]# hdfs dfs -createSnapshot /storagePolicy s3_PROVIDED > Created snapshot /storagePolicy/.snapshot/s3_PROVIDED > [bin]# hdfs storagepolicies -getStoragePolicy -path > /storagePolicy/.snapshot/s3_PROVIDED/file_PROVIDED > The storage policy of /storagePolicy/.snapshot/s3_PROVIDED/file_PROVIDED: > BlockStoragePolicy{PROVIDED:1, storageTypes=[PROVIDED, DISK], > creationFallbacks=[PROVIDED, DISK], replicationFallbacks=[PROVIDED, DISK]} > {code} > *change the StoragePolicy and check again:* > {code:java} > [bin]# hdfs 
storagepolicies -setStoragePolicy -path /storagePolicy -policy HOT > Set storage policy HOT on /storagePolicy > [bin]# hdfs storagepolicies -getStoragePolicy -path > /storagePolicy/.snapshot/s3_PROVIDED/file_PROVIDED > The storage policy of /storagePolicy/.snapshot/s3_PROVIDED/file_PROVIDED: > BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], > replicationFallbacks=[ARCHIVE]} It shouldn't be HOT > {code}
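The behaviour the reporter expects, a snapshot path reporting the policy frozen at snapshot time rather than the live inode's current policy, can be captured in a toy model. FileNode, getStoragePolicy, and the policy ids below are illustrative stand-ins, not HDFS's real inode classes:

```java
/**
 * Toy model of the expected snapshot semantics: reading a file's storage
 * policy through a snapshot path should return the value captured when the
 * snapshot was taken. Illustrative only; not HDFS's actual inode code.
 */
public class SnapshotPolicySketch {
    public static final byte PROVIDED = 1; // ids mirror the report above
    public static final byte HOT = 7;

    /** A file whose live policy may change after a snapshot was taken. */
    public static class FileNode {
        byte livePolicy;
        Byte snapshotPolicy; // null until a snapshot captures the value

        void takeSnapshot() { snapshotPolicy = livePolicy; }
    }

    public static byte getStoragePolicy(FileNode file, boolean viaSnapshotPath) {
        if (viaSnapshotPath && file.snapshotPolicy != null) {
            return file.snapshotPolicy; // immutable view: policy at snapshot time
        }
        return file.livePolicy; // live view follows later changes
    }
}
```

Under this model, setting the live policy back to HOT after the snapshot leaves the `.snapshot` path reporting PROVIDED, which is what the transcript above says it expected.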
[jira] [Commented] (HDFS-12990) Change default NameNode RPC port back to 8020
[ https://issues.apache.org/jira/browse/HDFS-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329897#comment-16329897 ] Tsz Wo Nicholas Sze commented on HDFS-12990: Question: If the downstream projects have hard-coded 8020, is it the case that they will not work if a user has set the port to a different number in the conf? > Change default NameNode RPC port back to 8020 > - > > Key: HDFS-12990 > URL: https://issues.apache.org/jira/browse/HDFS-12990 > Project: Hadoop HDFS > Issue Type: Task > Components: namenode >Affects Versions: 3.0.0 >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Critical > Attachments: HDFS-12990.01.patch > > > In HDFS-9427 (HDFS should not default to ephemeral ports), we moved all > default ports out of the ephemeral range, which is much appreciated by > admins. As part of that change, we also modified the NN RPC port from the > famous 8020 to 9820, to be closer to the other ports changed there. > With more integration going on, it appears that all the other port changes > are fine, but the NN RPC port change is painful for downstream projects > migrating to Hadoop 3. Some examples include: > # Hive table locations pointing to hdfs://nn:port/dir > # Downstream minicluster unit tests that assumed 8020 > # Oozie workflows / downstream scripts that used 8020 > This isn't a problem for HA URLs, since those do not include the port > number. But considering the downstream impact, instead of requiring all of > them to change their code, it would be a far better experience to leave the > NN port unchanged. This will benefit Hadoop 3 adoption and ease unnecessary > upgrade burdens. > It is of course incompatible, but given 3.0.0 is just out, IMO it is worth > switching the port back.
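Whichever default wins, downstream setups can insulate themselves by pinning the NN RPC port explicitly in hdfs-site.xml instead of relying on the release default. The nameservice id, namenode id, and hostname below are placeholders:

```xml
<!-- hdfs-site.xml: pin the NameNode RPC port explicitly so clients do not
     depend on the release default (8020 in 2.x, 9820 in early 3.0 releases).
     "mycluster", "nn1", and the hostname are placeholders. -->
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
```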
[jira] [Commented] (HDFS-12051) Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly those denoting file/directory names) to save memory
[ https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329891#comment-16329891 ] Tsz Wo Nicholas Sze commented on HDFS-12051: > I would like to commit by tomorrow if there is no objection. ... Is there a reason to hurry? > Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly > those denoting file/directory names) to save memory > - > > Key: HDFS-12051 > URL: https://issues.apache.org/jira/browse/HDFS-12051 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev >Priority: Major > Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, > HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch, > HDFS-12051.06.patch > > > When snapshot diff operation is performed in a NameNode that manages several > million HDFS files/directories, NN needs a lot of memory. Analyzing one heap > dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays > result in 6.5% memory overhead, and most of these arrays are referenced by > {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}} > and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}: > {code:java} > 19. 
DUPLICATE PRIMITIVE ARRAYS > Types of duplicate objects: > Ovhd Num objs Num unique objs Class name > 3,220,272K (6.5%) 104749528 25760871 byte[] > > 1,841,485K (3.7%), 53194037 dup arrays (13158094 unique) > 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 > of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, > 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, > 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), > 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 > of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, > 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, > 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...) > ... and 45902395 more arrays, of which 13158084 are unique > <-- > org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name > <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode > <-- {j.u.ArrayList} <-- > org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- > org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs > <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- > org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 > elements) ... 
<-- > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0 > <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java > Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER > 409,830K (0.8%), 13482787 dup arrays (13260241 unique) > 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...) > ... and 13479257 more arrays, of which 13260231 are unique > <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- > org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0 > <-- j.l.Thread[] <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- > org.apache.hadoop.hdfs.server.blockmanagement.Blo
[jira] [Commented] (HDFS-12051) Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly those denoting file/directory names) to save memory
[ https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329889#comment-16329889 ] Tsz Wo Nicholas Sze commented on HDFS-12051: > ... Otherwise, as I am really afraid based on the current experience > interacting with you, we may spend a lot more time just me suggesting new > names and you rejecting them. [~mi...@cloudera.com], I have clearly [commented on 05/Jan/18|https://issues.apache.org/jira/browse/HDFS-12051?focusedCommentId=16314331&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16314331] that the Summary and Description of this JIRA are misleading. They were not fixed until yesterday. I also asked [~yzhangal] 6 days ago [a question|https://issues.apache.org/jira/browse/HDFS-12051?focusedCommentId=16321743&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16321743] but got no answer. These are probably the reasons that we have spent a lot of time. > ... I have already provided you the numbers that you asked for. ... Where are those numbers? Sorry that I was not able to find them. Thanks.
[jira] [Updated] (HDFS-12051) Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly those denoting file/directory names) to save memory
[ https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-12051: --- Summary: Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly those denoting file/directory names) to save memory (was: Intern duplicate byte[] arrays (mainly those denoting file/directory names) to save memory) > Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly > those denoting file/directory names) to save memory > - > > Key: HDFS-12051 > URL: https://issues.apache.org/jira/browse/HDFS-12051 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev >Priority: Major > Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, > HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch, > HDFS-12051.06.patch > > > When snapshot diff operation is performed in a NameNode that manages several > million HDFS files/directories, NN needs a lot of memory. Analyzing one heap > dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays > result in 6.5% memory overhead, and most of these arrays are referenced by > {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}} > and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}: > {code:java} > 19. 
DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>   Ovhd               Num objs   Num unique objs  Class name
>   3,220,272K (6.5%)  104749528  25760871         byte[]
>
> 1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name
>  <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode
>  <-- {j.u.ArrayList}
>  <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs
>  <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs
>  <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[]
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.features
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc
>  <-- ... (1 elements) ...
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[]
>  <-- j.l.ThreadGroup.threads
>  <-- j.l.Thread.group
>  <-- Java Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>
> 409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc
>  <-- org.apache.hadoop.util.LightWeightGSet$LinkedElement[]
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[]
>  <-- org.apache.hadoop.hdfs.server.block
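The remedy this issue pursues is interning: keeping one canonical copy of each distinct byte[] name and pointing duplicates at it. The following is only an illustrative sketch of that technique, not the actual HDFS NameCache (the class and method names here are invented, and the real cache adapts its size within user-specified limits rather than using a fixed cutoff):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

/** Illustrative byte[] interning cache; not the real HDFS NameCache. */
public class ByteArrayInterner {

    /** Wrapper giving byte[] value-based equals/hashCode so it can key a map. */
    private static final class Key {
        final byte[] bytes;
        Key(byte[] bytes) { this.bytes = bytes; }
        @Override public boolean equals(Object o) {
            return o instanceof Key && Arrays.equals(bytes, ((Key) o).bytes);
        }
        @Override public int hashCode() { return Arrays.hashCode(bytes); }
    }

    private final Map<Key, byte[]> cache = new HashMap<>();
    private final int maxEntries;

    public ByteArrayInterner(int maxEntries) { this.maxEntries = maxEntries; }

    /** Returns a canonical array equal to {@code name}; duplicates share it. */
    public synchronized byte[] intern(byte[] name) {
        Key k = new Key(name);
        byte[] canonical = cache.get(k);
        if (canonical != null) {
            return canonical;      // duplicate: reuse the cached array
        }
        if (cache.size() < maxEntries) {
            cache.put(k, name);    // first occurrence: remember it
        }
        return name;               // cache full: hand back the input unchanged
    }

    public static void main(String[] args) {
        ByteArrayInterner interner = new ByteArrayInterner(1024);
        byte[] a = "part-m-000".getBytes();
        byte[] b = "part-m-000".getBytes();
        // equal contents collapse to one canonical array
        System.out.println(interner.intern(a) == interner.intern(b)); // prints "true"
    }
}
```

As the jxray numbers above suggest (average duplication factor ~4, but a few arrays with millions of copies), the payoff of such a cache depends heavily on its size and eviction policy; a production version needs something smarter than the first-come cutoff used in this sketch.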
[jira] [Commented] (HDFS-12942) Synchronization issue in FSDataSetImpl#moveBlock
[ https://issues.apache.org/jira/browse/HDFS-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329872#comment-16329872 ] genericqa commented on HDFS-12942:
--
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 15m 13s | trunk passed |
| +1 | compile | 0m 52s | trunk passed |
| +1 | checkstyle | 0m 38s | trunk passed |
| +1 | mvnsite | 0m 56s | trunk passed |
| +1 | shadedclient | 10m 36s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 43s | trunk passed |
| +1 | javadoc | 0m 53s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 52s | the patch passed |
| +1 | compile | 0m 47s | the patch passed |
| +1 | javac | 0m 47s | the patch passed |
| +1 | checkstyle | 0m 32s | the patch passed |
| +1 | mvnsite | 0m 49s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 19s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 49s | the patch passed |
| +1 | javadoc | 0m 48s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 128m 59s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
| | | 175m 20s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.hdfs.server.datanode.TestDataNodeMXBean |
| | hadoop.hdfs.TestDFSRemove |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
| | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
| | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
| | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12942 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906499/HDFS-12942.005.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 4a3d4a2d5b1b 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e42d05 |
| maven | version: Apache Maven 3.3.9 |
| Default
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329858#comment-16329858 ] genericqa commented on HDFS-12574:
--
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 15m 56s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 10s | Maven dependency ordering for branch |
| +1 | mvninstall | 18m 5s | trunk passed |
| +1 | compile | 1m 40s | trunk passed |
| +1 | checkstyle | 0m 49s | trunk passed |
| +1 | mvnsite | 1m 46s | trunk passed |
| +1 | shadedclient | 13m 18s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 30s | trunk passed |
| +1 | javadoc | 1m 22s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 10s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 53s | the patch passed |
| +1 | compile | 1m 43s | the patch passed |
| +1 | javac | 1m 43s | the patch passed |
| -0 | checkstyle | 0m 47s | hadoop-hdfs-project: The patch generated 2 new + 354 unchanged - 2 fixed = 356 total (was 356) |
| +1 | mvnsite | 1m 40s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 58s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 34s | the patch passed |
| +1 | javadoc | 1m 11s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 1m 26s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 116m 40s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 22s | The patch does not generate ASF License warnings. |
| | | 196m 5s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
| | hadoop.hdfs.TestPread |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
| | hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12574 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906495/HDFS-12574.010.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux e6f14ec1a356 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_6
[jira] [Commented] (HDFS-12792) RBF: Test Router-based federation using HDFSContract
[ https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329823#comment-16329823 ] Íñigo Goiri commented on HDFS-12792:
None of the failed unit tests are related. However, I cannot see the new unit tests being executed: https://builds.apache.org/job/PreCommit-HDFS-Build/22685/testReport/ Any idea why it didn't run any of {{org.apache.hadoop.fs.contract.router}}?

> RBF: Test Router-based federation using HDFSContract
>
> Key: HDFS-12792
> URL: https://issues.apache.org/jira/browse/HDFS-12792
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Íñigo Goiri
> Assignee: Íñigo Goiri
> Priority: Major
> Labels: RBF
> Attachments: HDFS-12615.000.patch, HDFS-12792.001.patch, HDFS-12792.002.patch, HDFS-12792.003.patch, HDFS-12792.004.patch
>
> Router-based federation should support HDFSContract.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13028) RBF: Fix spurious TestRouterRpc#testProxyGetStats
[ https://issues.apache.org/jira/browse/HDFS-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329817#comment-16329817 ] Íñigo Goiri commented on HDFS-13028:
{{testProxyGetStats}} worked fine, but I am not sure how to validate that it is no longer failing. The current report: https://builds.apache.org/job/PreCommit-HDFS-Build/22686/testReport/org.apache.hadoop.hdfs.server.federation.router/TestRouterRpc/testProxyGetStats/

> RBF: Fix spurious TestRouterRpc#testProxyGetStats
> -
>
> Key: HDFS-13028
> URL: https://issues.apache.org/jira/browse/HDFS-13028
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Íñigo Goiri
> Assignee: Íñigo Goiri
> Priority: Minor
> Labels: RBF
> Attachments: HDFS-13028.000.patch
>
> TestRouterRpc#testProxyGetStats is failing frequently.
[jira] [Commented] (HDFS-12792) RBF: Test Router-based federation using HDFSContract
[ https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329768#comment-16329768 ] genericqa commented on HDFS-12792:
--
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 22 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 16m 51s | trunk passed |
| +1 | compile | 0m 54s | trunk passed |
| +1 | checkstyle | 0m 38s | trunk passed |
| +1 | mvnsite | 1m 1s | trunk passed |
| +1 | shadedclient | 11m 33s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 56s | trunk passed |
| +1 | javadoc | 0m 50s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 57s | the patch passed |
| +1 | compile | 0m 53s | the patch passed |
| +1 | javac | 0m 53s | the patch passed |
| +1 | checkstyle | 0m 35s | the patch passed |
| +1 | mvnsite | 0m 58s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 53s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 5s | the patch passed |
| +1 | javadoc | 0m 49s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 128m 41s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 27s | The patch does not generate ASF License warnings. |
| | | 180m 11s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
| | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
| | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks |
| | hadoop.hdfs.TestPersistBlocks |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
| | hadoop.hdfs.TestParallelShortCircuitReadUnCached |
| | hadoop.hdfs.TestWriteRead |
| | hadoop.hdfs.tools.TestDFSAdminWithHA |
| | hadoop.hdfs.TestErasureCodingMultipleRacks |
| | hadoop.hdfs.client.impl.TestBlockReaderLocal |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
| | hadoop.hdfs.TestEncryptionZones |
| | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12792 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906482/HDFS-12792.004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 4ef67934a9ab 3.13.0-135-generic #184-Ubuntu
[jira] [Commented] (HDFS-13028) RBF: Fix spurious TestRouterRpc#testProxyGetStats
[ https://issues.apache.org/jira/browse/HDFS-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329738#comment-16329738 ] genericqa commented on HDFS-13028:
--
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 10m 39s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 18m 53s | trunk passed |
| +1 | compile | 0m 55s | trunk passed |
| +1 | checkstyle | 0m 40s | trunk passed |
| +1 | mvnsite | 1m 2s | trunk passed |
| +1 | shadedclient | 11m 48s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 6s | trunk passed |
| +1 | javadoc | 0m 53s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 58s | the patch passed |
| +1 | compile | 0m 55s | the patch passed |
| +1 | javac | 0m 55s | the patch passed |
| +1 | checkstyle | 0m 37s | the patch passed |
| +1 | mvnsite | 1m 2s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 12s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 8s | the patch passed |
| +1 | javadoc | 0m 51s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 90m 10s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 25s | The patch does not generate ASF License warnings. |
| | | 155m 3s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestErasureCodingMultipleRacks |
| | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13028 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906486/HDFS-13028.000.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux c90d24e6d20e 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e42d05 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22686/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/22686/testReport/ |
| Max. process+thread count | 4263 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/22686/console |
| Powered b
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329724#comment-16329724 ] genericqa commented on HDFS-12574:
--
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 11s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 1s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
|| || || || branch-2.8 Compile Tests ||
| 0 | mvndep | 0m 12s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 27s | branch-2.8 passed |
| +1 | compile | 1m 34s | branch-2.8 passed |
| +1 | checkstyle | 0m 28s | branch-2.8 passed |
| +1 | mvnsite | 1m 17s | branch-2.8 passed |
| +1 | findbugs | 3m 11s | branch-2.8 passed |
| +1 | javadoc | 1m 21s | branch-2.8 passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 7s | Maven dependency ordering for patch |
| -1 | mvninstall | 0m 19s | hadoop-hdfs-client in the patch failed. |
| -1 | mvninstall | 0m 22s | hadoop-hdfs in the patch failed. |
| -1 | compile | 0m 19s | hadoop-hdfs-project in the patch failed. |
| -1 | javac | 0m 19s | hadoop-hdfs-project in the patch failed. |
| -0 | checkstyle | 0m 22s | hadoop-hdfs-project: The patch generated 2 new + 239 unchanged - 3 fixed = 241 total (was 242) |
| -1 | mvnsite | 0m 21s | hadoop-hdfs-client in the patch failed. |
| -1 | mvnsite | 0m 24s | hadoop-hdfs in the patch failed. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| -1 | findbugs | 0m 22s | hadoop-hdfs-client in the patch failed. |
| -1 | findbugs | 0m 25s | hadoop-hdfs in the patch failed. |
| -1 | javadoc | 0m 22s | hadoop-hdfs-project_hadoop-hdfs-client generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| -1 | javadoc | 1m 0s | hadoop-hdfs-project_hadoop-hdfs generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) |
|| || || || Other Tests ||
| -1 | unit | 0m 26s | hadoop-hdfs-client in the patch failed. |
| -1 | unit | 0m 25s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 22m 10s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:c2d96dd |
| JIRA Issue | HDFS-12574 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906508/HDFS-12574.010.branch-2.8.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname |
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329723#comment-16329723 ] genericqa commented on HDFS-12574:
--
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 24m 50s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
|| || || || branch-2 Compile Tests ||
| 0 | mvndep | 0m 12s | Maven dependency ordering for branch |
| +1 | mvninstall | 9m 23s | branch-2 passed |
| +1 | compile | 1m 27s | branch-2 passed |
| +1 | checkstyle | 0m 34s | branch-2 passed |
| +1 | mvnsite | 1m 31s | branch-2 passed |
| +1 | findbugs | 3m 46s | branch-2 passed |
| +1 | javadoc | 1m 31s | branch-2 passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| -1 | mvninstall | 0m 22s | hadoop-hdfs-client in the patch failed. |
| -1 | mvninstall | 0m 27s | hadoop-hdfs in the patch failed. |
| -1 | compile | 0m 23s | hadoop-hdfs-project in the patch failed. |
| -1 | javac | 0m 23s | hadoop-hdfs-project in the patch failed. |
| -0 | checkstyle | 0m 32s | hadoop-hdfs-project: The patch generated 2 new + 364 unchanged - 2 fixed = 366 total (was 366) |
| -1 | mvnsite | 0m 23s | hadoop-hdfs-client in the patch failed. |
| -1 | mvnsite | 0m 27s | hadoop-hdfs in the patch failed. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| -1 | findbugs | 0m 20s | hadoop-hdfs-client in the patch failed. |
| -1 | findbugs | 0m 26s | hadoop-hdfs in the patch failed. |
| -1 | javadoc | 0m 23s | hadoop-hdfs-project_hadoop-hdfs-client generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| -1 | javadoc | 1m 2s | hadoop-hdfs-project_hadoop-hdfs generated 2 new + 9 unchanged - 0 fixed = 11 total (was 9) |
|| || || || Other Tests ||
| -1 | unit | 0m 24s | hadoop-hdfs-client in the patch failed. |
| -1 | unit | 0m 28s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 18s | The patch does not generate ASF License warnings. |
| | | 50m 58s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | HDFS-12574 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906505/HDFS-12574.010.branch-2.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 60bbaa25e
[jira] [Commented] (HDFS-12990) Change default NameNode RPC port back to 8020
[ https://issues.apache.org/jira/browse/HDFS-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329704#comment-16329704 ] Eric Yang commented on HDFS-12990: -- [~chris.douglas] {quote} That would involve creating a 3.0.1 release with this change and voting on it. {quote} I agree the proposal is the right solution to address the concerns regarding this change. It would be good to list known problems, as Nicholas suggested, to assist undecided voters. > Change default NameNode RPC port back to 8020 > - > > Key: HDFS-12990 > URL: https://issues.apache.org/jira/browse/HDFS-12990 > Project: Hadoop HDFS > Issue Type: Task > Components: namenode >Affects Versions: 3.0.0 >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Critical > Attachments: HDFS-12990.01.patch > > > In HDFS-9427 (HDFS should not default to ephemeral ports), we changed all > default ports to ephemeral ports, which is much appreciated by admins. As part > of that change, we also modified the NN RPC port from the famous 8020 to > 9820, to be closer to the other ports changed there. > With more integration going on, it appears that all the other ephemeral port > changes are fine, but the NN RPC port change is painful for downstream projects > migrating to Hadoop 3. Some examples include: > # Hive table locations pointing to hdfs://nn:port/dir > # Downstream minicluster unit tests that assumed 8020 > # Oozie workflows / downstream scripts that used 8020 > This isn't a problem for HA URLs, since those do not include the port > number. But considering the downstream impact, instead of requiring all of > them to change their setups, it would be a far better experience to leave the NN > port unchanged. This will benefit Hadoop 3 adoption and avoid unnecessary > upgrade burdens. > It is of course incompatible, but given that 3.0.0 is just out, IMO it is worth > switching the port back. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12911) [SPS]: Modularize the SPS code and expose necessary interfaces for external/internal implementations.
[ https://issues.apache.org/jira/browse/HDFS-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329699#comment-16329699 ] Uma Maheswara Rao G commented on HDFS-12911: Thanks [~rakeshr] for your review. I have updated another patch fixing the comments. > [SPS]: Modularize the SPS code and expose necessary interfaces for > external/internal implementations. > - > > Key: HDFS-12911 > URL: https://issues.apache.org/jira/browse/HDFS-12911 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G >Priority: Major > Attachments: HDFS-12911-HDFS-10285-01.patch, > HDFS-12911-HDFS-10285-02.patch, HDFS-12911-HDFS-10285-03.patch, > HDFS-12911-HDFS-10285-04.patch, HDFS-12911.00.patch > > > One of the key comments from the discussions was to modularize the SPS code > so we can easily plug in external/internal implementations. This JIRA is for > doing the necessary refactoring. > Other comments to handle: > Daryn: > # The lock should not be kept while executing the placement policy. > - handled by HDFS-12982 > # While starting up the NN, the SPS Xattr checks happen even if the feature is > disabled. This could potentially impact startup speed.
[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12574: -- Attachment: HDFS-12574.010.branch-2.8.patch > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer
[ https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329669#comment-16329669 ] Jason Lowe commented on HDFS-12919: --- branch-3 should not exist at all (yet). It was accidentally created by a committer recently. branch-3 will eventually track 3.x releases similar to how branch-2 tracks 2.x releases. But for now trunk is already tracking 3.x releases, so we do not need a branch-3. branch-3 should be created when trunk moves to 4.0.0-SNAPSHOT, but in the meantime I'm in the process of asking for the removal of branch-3. > RBF: Support erasure coding methods in RouterRpcServer > -- > > Key: HDFS-12919 > URL: https://issues.apache.org/jira/browse/HDFS-12919 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Critical > Labels: RBF > Fix For: 3.1.0, 3.0.1 > > Attachments: HDFS-12919-branch-3.001.patch, > HDFS-12919-branch-3.002.patch, HDFS-12919-branch-3.003.patch, > HDFS-12919.000.patch, HDFS-12919.001.patch, HDFS-12919.002.patch, > HDFS-12919.003.patch, HDFS-12919.004.patch, HDFS-12919.005.patch, > HDFS-12919.006.patch, HDFS-12919.007.patch, HDFS-12919.008.patch, > HDFS-12919.009.patch, HDFS-12919.010.patch, HDFS-12919.011.patch, > HDFS-12919.012.patch, HDFS-12919.013.patch, HDFS-12919.013.patch, > HDFS-12919.014.patch, HDFS-12919.015.patch, HDFS-12919.016.patch, > HDFS-12919.017.patch, HDFS-12919.018.patch, HDFS-12919.019.patch, > HDFS-12919.020.patch, HDFS-12919.021.patch, HDFS-12919.022.patch, > HDFS-12919.023.patch > > > MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws: > {code} > 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area > /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002 > org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException): > Operation "setErasureCodingPolicy" is not supported > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368) > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
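The {{UnsupportedOperationException}} in the trace above comes from a guard that runs before any work is dispatched: per the stack trace, {{RouterRpcServer.checkOperation}} rejects calls the Router has not implemented yet. A simplified, self-contained sketch of that guard pattern (the enum, field, and return values here are illustrative, not the actual Hadoop code):

```java
import java.util.EnumSet;

public class RpcGuardSketch {
    enum OperationCategory { READ, WRITE, UNCHECKED }

    // Categories this (hypothetical) server actually implements.
    private final EnumSet<OperationCategory> supported =
        EnumSet.of(OperationCategory.READ, OperationCategory.UNCHECKED);

    // Fail fast, before any state is touched, if the operation's
    // category is not implemented -- mirroring checkOperation() above.
    void checkOperation(OperationCategory category, String name) {
        if (!supported.contains(category)) {
            throw new UnsupportedOperationException(
                "Operation \"" + name + "\" is not supported");
        }
    }

    public String setErasureCodingPolicy(String path, String policy) {
        checkOperation(OperationCategory.WRITE, "setErasureCodingPolicy");
        return "applied " + policy + " to " + path; // placeholder body
    }

    public static void main(String[] args) {
        RpcGuardSketch server = new RpcGuardSketch();
        try {
            server.setErasureCodingPolicy("/tmp/staging", "XOR-2-1-1024k");
            System.out.println("supported");
        } catch (UnsupportedOperationException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Adding EC support then amounts to moving WRITE-category methods past the guard, which is what the attached patches do for the real server.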
[jira] [Updated] (HDFS-13023) Journal Sync does not work on a secure cluster
[ https://issues.apache.org/jira/browse/HDFS-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-13023: - Summary: Journal Sync does not work on a secure cluster (was: Journal Sync not working on a secure cluster) > Journal Sync does not work on a secure cluster > -- > > Key: HDFS-13023 > URL: https://issues.apache.org/jira/browse/HDFS-13023 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13023.00.patch > > > Fails with the following exception. > {code} > 2018-01-10 01:15:40,517 INFO server.JournalNodeSyncer > (JournalNodeSyncer.java:syncWithJournalAtIndex(235)) - Syncing Journal > /0.0.0.0:8485 with xxx, journal id: mycluster > 2018-01-10 01:15:40,583 ERROR server.JournalNodeSyncer > (JournalNodeSyncer.java:syncWithJournalAtIndex(259)) - Could not sync with > Journal at xxx/xxx:8485 > com.google.protobuf.ServiceException: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): > User nn/xxx (auth:PROXY) via jn/xxx (auth:KERBEROS) is not authorized for > protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: > this service is only accessible by nn/x...@example.com > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:242) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) > at com.sun.proxy.$Proxy16.getEditLogManifest(Unknown Source) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.syncWithJournalAtIndex(JournalNodeSyncer.java:254) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.syncJournals(JournalNodeSyncer.java:230) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.lambda$startSyncJournalsDaemon$0(JournalNodeSyncer.java:190) > at java.lang.Thread.run(Thread.java:748) > Caused by: > 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): > User nn/xxx (auth:PROXY) via jn/xxx (auth:KERBEROS) is not authorized for > protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: > this service is only accessible by nn/xxx > at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491) > at org.apache.hadoop.ipc.Client.call(Client.java:1437) > at org.apache.hadoop.ipc.Client.call(Client.java:1347) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) > ... 6 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13023) Journal Sync not working on a secure cluster
[ https://issues.apache.org/jira/browse/HDFS-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-13023: - Summary: Journal Sync not working on a secure cluster (was: Journal Sync not working on a kerberos cluster) > Journal Sync not working on a secure cluster > > > Key: HDFS-13023 > URL: https://issues.apache.org/jira/browse/HDFS-13023 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13023.00.patch > > > Fails with the following exception. > {code} > 2018-01-10 01:15:40,517 INFO server.JournalNodeSyncer > (JournalNodeSyncer.java:syncWithJournalAtIndex(235)) - Syncing Journal > /0.0.0.0:8485 with xxx, journal id: mycluster > 2018-01-10 01:15:40,583 ERROR server.JournalNodeSyncer > (JournalNodeSyncer.java:syncWithJournalAtIndex(259)) - Could not sync with > Journal at xxx/xxx:8485 > com.google.protobuf.ServiceException: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): > User nn/xxx (auth:PROXY) via jn/xxx (auth:KERBEROS) is not authorized for > protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: > this service is only accessible by nn/x...@example.com > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:242) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) > at com.sun.proxy.$Proxy16.getEditLogManifest(Unknown Source) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.syncWithJournalAtIndex(JournalNodeSyncer.java:254) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.syncJournals(JournalNodeSyncer.java:230) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.lambda$startSyncJournalsDaemon$0(JournalNodeSyncer.java:190) > at java.lang.Thread.run(Thread.java:748) > Caused by: > 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): > User nn/xxx (auth:PROXY) via jn/xxx (auth:KERBEROS) is not authorized for > protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: > this service is only accessible by nn/xxx > at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491) > at org.apache.hadoop.ipc.Client.call(Client.java:1437) > at org.apache.hadoop.ipc.Client.call(Client.java:1347) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) > ... 6 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12574: -- Attachment: HDFS-12574.010.branch-2.patch > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13023) Journal Sync not working on a kerberos cluster
[ https://issues.apache.org/jira/browse/HDFS-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329652#comment-16329652 ] Hanisha Koneru commented on HDFS-13023: --- Thanks for working on this [~bharatviswa]. The patch LGTM. I have a few comments: * Can you please add a short description for the new InterQJournal protocol classes (refer to the corresponding QJournal classes). * The {{getEditLogFromJournal}} method name suggests that we are getting the edit logs from the journal, whereas we are only getting the edit log manifest. Can you please rename this and the corresponding methods to reflect that. * In {{JournalNodeSyncer}}, can you remove all the references to QJournalProtocol, since we are not using it anymore. I see one method, {{convertJournalId()}}, which still uses QJournalProtocol, but this method is not used and can be removed. > Journal Sync not working on a kerberos cluster > -- > > Key: HDFS-13023 > URL: https://issues.apache.org/jira/browse/HDFS-13023 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13023.00.patch > > > Fails with the following exception. 
> {code} > 2018-01-10 01:15:40,517 INFO server.JournalNodeSyncer > (JournalNodeSyncer.java:syncWithJournalAtIndex(235)) - Syncing Journal > /0.0.0.0:8485 with xxx, journal id: mycluster > 2018-01-10 01:15:40,583 ERROR server.JournalNodeSyncer > (JournalNodeSyncer.java:syncWithJournalAtIndex(259)) - Could not sync with > Journal at xxx/xxx:8485 > com.google.protobuf.ServiceException: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): > User nn/xxx (auth:PROXY) via jn/xxx (auth:KERBEROS) is not authorized for > protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: > this service is only accessible by nn/x...@example.com > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:242) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) > at com.sun.proxy.$Proxy16.getEditLogManifest(Unknown Source) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.syncWithJournalAtIndex(JournalNodeSyncer.java:254) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.syncJournals(JournalNodeSyncer.java:230) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.lambda$startSyncJournalsDaemon$0(JournalNodeSyncer.java:190) > at java.lang.Thread.run(Thread.java:748) > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): > User nn/xxx (auth:PROXY) via jn/xxx (auth:KERBEROS) is not authorized for > protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: > this service is only accessible by nn/xxx > at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491) > at org.apache.hadoop.ipc.Client.call(Client.java:1437) > at org.apache.hadoop.ipc.Client.call(Client.java:1347) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) > ... 
6 more > {code}
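The fix direction in the comments above — a dedicated inter-JournalNode protocol, so JNs call each other with their own Kerberos principal instead of proxying as the NN (which the QJournalProtocol ACL rejects) — can be sketched as an interface. All names below are modeled on the review comments, not copied from the actual patch:

```java
// Hypothetical sketch of a JN-to-JN protocol. The method name carries
// "Manifest" because, as noted in the review, only the edit-log segment
// manifest is fetched, not the logs themselves.
public class InterJournalSketch {
    interface InterQJournalProtocol {
        String getEditLogManifestFromJournal(String journalId, long sinceTxId);
    }

    // Stub server: a real implementation would be ACL'd for the JN
    // principal, unlike QJournalProtocol, which is NN-only.
    static class StubJournal implements InterQJournalProtocol {
        public String getEditLogManifestFromJournal(String journalId,
                                                    long sinceTxId) {
            return "manifest[" + journalId + ", since=" + sinceTxId + "]";
        }
    }

    public static void main(String[] args) {
        InterQJournalProtocol jn = new StubJournal();
        System.out.println(jn.getEditLogManifestFromJournal("mycluster", 100L));
    }
}
```

Keeping this protocol separate means the existing NN-only authorization rule on QJournalProtocol stays untouched.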
[jira] [Commented] (HDFS-13029) /.reserved/raw/.reserved/.inodes/ is not resolvable.
[ https://issues.apache.org/jira/browse/HDFS-13029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329646#comment-16329646 ] Daryn Sharp commented on HDFS-13029: I'll take this but won't get to it soon. I (accidentally) fixed this years ago with optimizations to path string/byte conversions. It was an internal patch to improve upon my external patch, but I forgot to push it back out. > /.reserved/raw/.reserved/.inodes/ is not resolvable. > -- > > Key: HDFS-13029 > URL: https://issues.apache.org/jira/browse/HDFS-13029 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Rushabh S Shah >Priority: Major > > Namenode cannot resolve {{/.reserved/raw/.reserved/.inodes/}} path. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12942) Synchronization issue in FSDataSetImpl#moveBlock
[ https://issues.apache.org/jira/browse/HDFS-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12942: -- Attachment: HDFS-12942.005.patch > Synchronization issue in FSDataSetImpl#moveBlock > > > Key: HDFS-12942 > URL: https://issues.apache.org/jira/browse/HDFS-12942 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-12942.001.patch, HDFS-12942.002.patch, > HDFS-12942.003.patch, HDFS-12942.004.patch, HDFS-12942.005.patch > > > FSDataSetImpl#moveBlock works in following following 3 steps: > # first creates a new replicaInfo object > # calls finalizeReplica to finalize it. > # Calls removeOldReplica to remove oldReplica. > A client can potentially append to the old replica between step 1 and 2. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
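The race described above — a client appending between replica creation (step 1) and finalization (step 2) — is a check-then-act window. A minimal, hypothetical sketch of closing it by holding one lock across both steps (this is not the actual FsDatasetImpl code, which has more state to protect):

```java
import java.util.concurrent.locks.ReentrantLock;

public class MoveBlockSketch {
    private final ReentrantLock datasetLock = new ReentrantLock();
    private boolean finalized = false;

    // Steps 1 and 2 run under one lock, so no append can slip in
    // between creating the new replica and finalizing it.
    public void moveBlock() {
        datasetLock.lock();
        try {
            Object newReplica = createReplicaCopy(); // step 1
            finalizeReplica(newReplica);             // step 2
        } finally {
            datasetLock.unlock();
        }
        removeOldReplica(); // step 3: old replica no longer appendable
    }

    public void append() {
        datasetLock.lock();
        try {
            if (finalized) {
                throw new IllegalStateException("cannot append: finalized");
            }
            // ... write bytes to the old replica ...
        } finally {
            datasetLock.unlock();
        }
    }

    private Object createReplicaCopy() { return new Object(); }
    private void finalizeReplica(Object r) { finalized = true; }
    private void removeOldReplica() { }
}
```

An append that races with moveBlock now either completes entirely before step 1 or observes the finalized flag and fails cleanly.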
[jira] [Assigned] (HDFS-13029) /.reserved/raw/.reserved/.inodes/ is not resolvable.
[ https://issues.apache.org/jira/browse/HDFS-13029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daryn Sharp reassigned HDFS-13029: -- Assignee: Daryn Sharp > /.reserved/raw/.reserved/.inodes/ is not resolvable. > -- > > Key: HDFS-13029 > URL: https://issues.apache.org/jira/browse/HDFS-13029 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Rushabh S Shah >Assignee: Daryn Sharp >Priority: Major > > Namenode cannot resolve {{/.reserved/raw/.reserved/.inodes/}} path. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329613#comment-16329613 ] Rushabh S Shah edited comment on HDFS-12574 at 1/17/18 10:35 PM: - Thanks [~daryn] for the review. {quote}Only substantive comment is I think you can revert the changes in NamenodeWebHdfsMethods#redirectURI. Instead of passing in a ResponseBuilder and FileStatus just for the sole purpose of letting OPEN set a header, push the logic up into the open call. That will also avoid introducing a new unnecessary getFileInfo for creates. {quote} In the coming patches for supporting EZ for webhdfs create, append, I will use the {{ResponseBuilder}} in {{redirectUri}}. I have made the changes for doing getFileInfo just for {{Open, Append, GETFILECHECKSUM}}. {quote}Very trivial comment is instead of donotFollowRedirect, perhaps use followRedirects to match the name of the HttpURLConnection method name. It's a bit clumsy to read logic that negates a negative. {quote} Changed in the latest patch. bq. Also, please add client tests for reading a reserved inode path in an EZ, with and w/o the reserved raw prefix. Today we don't support resolving of {{/.reserved/raw/.reserved/.inodes/}}. So we can't read from EZ directory with {{/.reserved/.inodes}} using webhdfs. Created HDFS-13029 for that. was (Author: shahrs87): Thanks [~daryn] for the review. {quote} Only substantive comment is I think you can revert the changes in NamenodeWebHdfsMethods#redirectURI. Instead of passing in a ResponseBuilder and FileStatus just for the sole purpose of letting OPEN set a header, push the logic up into the open call. That will also avoid introducing a new unnecessary getFileInfo for creates. {quote} In the coming patches for supporting EZ for webhdfs create, append, I will use the {{ResponseBuilder}} in {{redirectUri}}. I have made the changes for doing getFileInfo just for {{Open, Append, GETFILECHECKSUM}}. 
{quote} Very trivial comment is instead of donotFollowRedirect, perhaps use followRedirects to match the name of the HttpURLConnection method name. It's a bit clumsy to read logic that negates a negative. {quote} Changed in the latest patch. > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
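The naming suggestion above refers to {{java.net.HttpURLConnection}}, whose redirect switch is a positive flag. A small sketch of configuring a connection so the client can read the redirect {{Location}} itself, as a WebHDFS client does on the namenode-to-datanode hop (the URL is illustrative only; no connection is actually opened here):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class RedirectFlagSketch {
    // Returns a connection configured to NOT follow redirects, so the
    // caller can inspect the 307 Location header itself.
    static HttpURLConnection openNoRedirect(String url) throws Exception {
        HttpURLConnection conn =
            (HttpURLConnection) new URL(url).openConnection();
        // Positive flag name mirrors HttpURLConnection's own API,
        // avoiding the double negative of "donotFollowRedirect".
        conn.setInstanceFollowRedirects(false);
        return conn;
    }

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = openNoRedirect(
            "http://namenode.example:9870/webhdfs/v1/f?op=OPEN");
        System.out.println(conn.getInstanceFollowRedirects()); // false
    }
}
```

A boolean named followRedirects then reads the same way as the call site that consumes it.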
[jira] [Commented] (HDFS-12990) Change default NameNode RPC port back to 8020
[ https://issues.apache.org/jira/browse/HDFS-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329625#comment-16329625 ] Chris Douglas commented on HDFS-12990: -- bq. I believe a vote would make people think about this problem and respond according to their preferences That would involve creating a 3.0.1 release with this change and voting on it. > Change default NameNode RPC port back to 8020 > - > > Key: HDFS-12990 > URL: https://issues.apache.org/jira/browse/HDFS-12990 > Project: Hadoop HDFS > Issue Type: Task > Components: namenode >Affects Versions: 3.0.0 >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Critical > Attachments: HDFS-12990.01.patch > > > In HDFS-9427 (HDFS should not default to ephemeral ports), we changed all > default ports to ephemeral ports, which is much appreciated by admins. As part > of that change, we also modified the NN RPC port from the famous 8020 to > 9820, to be closer to the other ports changed there. > With more integration going on, it appears that all the other ephemeral port > changes are fine, but the NN RPC port change is painful for downstream projects > migrating to Hadoop 3. Some examples include: > # Hive table locations pointing to hdfs://nn:port/dir > # Downstream minicluster unit tests that assumed 8020 > # Oozie workflows / downstream scripts that used 8020 > This isn't a problem for HA URLs, since those do not include the port > number. But considering the downstream impact, instead of requiring all of > them to change their setups, it would be a far better experience to leave the NN > port unchanged. This will benefit Hadoop 3 adoption and avoid unnecessary > upgrade burdens. > It is of course incompatible, but given that 3.0.0 is just out, IMO it is worth > switching the port back.
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329613#comment-16329613 ] Rushabh S Shah commented on HDFS-12574: --- Thanks [~daryn] for the review. {quote} Only substantive comment is I think you can revert the changes in NamenodeWebHdfsMethods#redirectURI. Instead of passing in a ResponseBuilder and FileStatus just for the sole purpose of letting OPEN set a header, push the logic up into the open call. That will also avoid introducing a new unnecessary getFileInfo for creates. {quote} In the coming patches for supporting EZ for webhdfs create, append, I will use the {{ResponseBuilder}} in {{redirectUri}}. I have made the changes for doing getFileInfo just for {{Open, Append, GETFILECHECKSUM}}. {quote} Very trivial comment is instead of donotFollowRedirect, perhaps use followRedirects to match the name of the HttpURLConnection method name. It's a bit clumsy to read logic that negates a negative. {quote} Changed in the latest patch. > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12574: -- Attachment: HDFS-12574.010.patch > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13029) /.reserved/raw/.reserved/.inodes/ is not resolvable.
[ https://issues.apache.org/jira/browse/HDFS-13029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-13029: -- Description: Namenode cannot resolve {{/.reserved/raw/.reserved/.inodes/}} path. > /.reserved/raw/.reserved/.inodes/ is not resolvable. > -- > > Key: HDFS-13029 > URL: https://issues.apache.org/jira/browse/HDFS-13029 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs > Environment: Namenode cannot resolve > {{/.reserved/raw/.reserved/.inodes/}} path. >Reporter: Rushabh S Shah >Priority: Major > > Namenode cannot resolve {{/.reserved/raw/.reserved/.inodes/}} path. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13029) /.reserved/raw/.reserved/.inodes/ is not resolvable.
Rushabh S Shah created HDFS-13029: - Summary: /.reserved/raw/.reserved/.inodes/ is not resolvable. Key: HDFS-13029 URL: https://issues.apache.org/jira/browse/HDFS-13029 Project: Hadoop HDFS Issue Type: Bug Components: hdfs Environment: Namenode cannot resolve {{/.reserved/raw/.reserved/.inodes/}} path. Reporter: Rushabh S Shah -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13029) /.reserved/raw/.reserved/.inodes/ is not resolvable.
[ https://issues.apache.org/jira/browse/HDFS-13029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-13029: -- Environment: (was: Namenode cannot resolve {{/.reserved/raw/.reserved/.inodes/}} path.) > /.reserved/raw/.reserved/.inodes/ is not resolvable. > -- > > Key: HDFS-13029 > URL: https://issues.apache.org/jira/browse/HDFS-13029 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Rushabh S Shah >Priority: Major > > Namenode cannot resolve {{/.reserved/raw/.reserved/.inodes/}} path. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13028) RBF: Fix spurious TestRouterRpc#testProxyGetStats
[ https://issues.apache.org/jira/browse/HDFS-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13028: --- Attachment: HDFS-13028.000.patch > RBF: Fix spurious TestRouterRpc#testProxyGetStats > - > > Key: HDFS-13028 > URL: https://issues.apache.org/jira/browse/HDFS-13028 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Priority: Minor > Labels: RBF > Attachments: HDFS-13028.000.patch > > > TestRouterRpc#testProxyGetStats is failing frequently.
[jira] [Updated] (HDFS-13028) RBF: Fix spurious TestRouterRpc#testProxyGetStats
[ https://issues.apache.org/jira/browse/HDFS-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13028: --- Assignee: Íñigo Goiri Status: Patch Available (was: Open) > RBF: Fix spurious TestRouterRpc#testProxyGetStats > - > > Key: HDFS-13028 > URL: https://issues.apache.org/jira/browse/HDFS-13028 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF > Attachments: HDFS-13028.000.patch > > > TestRouterRpc#testProxyGetStats is failing frequently.
[jira] [Commented] (HDFS-13028) RBF: Fix spurious TestRouterRpc#testProxyGetStats
[ https://issues.apache.org/jira/browse/HDFS-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329479#comment-16329479 ] Íñigo Goiri commented on HDFS-13028: It might be a race condition updating the {{GET_STATS_USED_IDX}}. One solution is to keep checking until the values match. > RBF: Fix spurious TestRouterRpc#testProxyGetStats > - > > Key: HDFS-13028 > URL: https://issues.apache.org/jira/browse/HDFS-13028 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Priority: Minor > Labels: RBF > > TestRouterRpc#testProxyGetStats is failing frequently.
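The "keep checking" approach suggested above amounts to polling the two stat values with a bounded retry instead of asserting once. A minimal sketch of that idea, where the helper name, timeout, and interval are illustrative and not taken from any HDFS-13028 patch:

```java
// Sketch: poll until two long-valued stats agree, instead of a one-shot
// assertEquals that can race with an in-flight update.
// retryUntilEqual is a hypothetical helper, not code from HDFS-13028.
public final class RetryCheck {
  interface LongSupplierEx { long get() throws Exception; }

  static boolean retryUntilEqual(LongSupplierEx expected, LongSupplierEx actual,
                                 long timeoutMs, long intervalMs) throws Exception {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (expected.get() == actual.get()) {
        return true;            // values converged
      }
      Thread.sleep(intervalMs); // stats may still be propagating; retry
    }
    return expected.get() == actual.get(); // one last check at the deadline
  }

  public static void main(String[] args) throws Exception {
    // Simulate a stat that converges on the second read, like the
    // 2212141 != 2261293 mismatch from the test report.
    final long[] reads = {2212141L, 2261293L};
    final int[] i = {0};
    boolean ok = retryUntilEqual(
        () -> 2261293L,
        () -> reads[Math.min(i[0]++, 1)],
        1000, 10);
    System.out.println(ok); // true
  }
}
```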
[jira] [Updated] (HDFS-12792) RBF: Test Router-based federation using HDFSContract
[ https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12792: --- Labels: RBF (was: ) > RBF: Test Router-based federation using HDFSContract > > > Key: HDFS-12792 > URL: https://issues.apache.org/jira/browse/HDFS-12792 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Labels: RBF > Attachments: HDFS-12615.000.patch, HDFS-12792.001.patch, > HDFS-12792.002.patch, HDFS-12792.003.patch, HDFS-12792.004.patch > > > Router-based federation should support HDFSContract.
[jira] [Updated] (HDFS-13028) RBF: Fix spurious TestRouterRpc#testProxyGetStats
[ https://issues.apache.org/jira/browse/HDFS-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13028: --- Labels: RBF (was: ) > RBF: Fix spurious TestRouterRpc#testProxyGetStats > - > > Key: HDFS-13028 > URL: https://issues.apache.org/jira/browse/HDFS-13028 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Priority: Minor > Labels: RBF > > TestRouterRpc#testProxyGetStats is failing frequently.
[jira] [Updated] (HDFS-13028) RBF: Fix spurious TestRouterRpc#testProxyGetStats
[ https://issues.apache.org/jira/browse/HDFS-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13028: --- Priority: Minor (was: Major) > RBF: Fix spurious TestRouterRpc#testProxyGetStats > - > > Key: HDFS-13028 > URL: https://issues.apache.org/jira/browse/HDFS-13028 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Priority: Minor > Labels: RBF > > TestRouterRpc#testProxyGetStats is failing frequently.
[jira] [Commented] (HDFS-13028) RBF: Fix spurious TestRouterRpc#testProxyGetStats
[ https://issues.apache.org/jira/browse/HDFS-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329470#comment-16329470 ] Íñigo Goiri commented on HDFS-13028: An example from HDFS-12792: https://builds.apache.org/job/PreCommit-HDFS-Build/22683/testReport/org.apache.hadoop.hdfs.server.federation.router/TestRouterRpc/testProxyGetStats/ {code} Stats for 1 don't match: 2212141!=2261293 expected:<2212141> but was:<2261293> java.lang.AssertionError: Stats for 1 don't match: 2212141!=2261293 expected:<2212141> but was:<2261293> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.apache.hadoop.hdfs.server.federation.router.TestRouterRpc.testProxyGetStats(TestRouterRpc.java:524) {code} > RBF: Fix spurious TestRouterRpc#testProxyGetStats > - > > Key: HDFS-13028 > URL: https://issues.apache.org/jira/browse/HDFS-13028 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Priority: Major > > TestRouterRpc#testProxyGetStats is failing frequently.
[jira] [Commented] (HDFS-12792) RBF: Test Router-based federation using HDFSContract
[ https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329467#comment-16329467 ] Íñigo Goiri commented on HDFS-12792: {{TestRouterRpc}} is not broken by this JIRA but it seems to fail very often lately. I created HDFS-13028 to track it. > RBF: Test Router-based federation using HDFSContract > > > Key: HDFS-12792 > URL: https://issues.apache.org/jira/browse/HDFS-12792 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-12615.000.patch, HDFS-12792.001.patch, > HDFS-12792.002.patch, HDFS-12792.003.patch, HDFS-12792.004.patch > > > Router-based federation should support HDFSContract.
[jira] [Updated] (HDFS-13028) RBF: Fix spurious TestRouterRpc#testProxyGetStats
[ https://issues.apache.org/jira/browse/HDFS-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13028: --- Description: TestRouterRpc#testProxyGetStats is failing frequently. > RBF: Fix spurious TestRouterRpc#testProxyGetStats > - > > Key: HDFS-13028 > URL: https://issues.apache.org/jira/browse/HDFS-13028 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Priority: Major > > TestRouterRpc#testProxyGetStats is failing frequently.
[jira] [Created] (HDFS-13028) RBF: Fix spurious TestRouterRpc#testProxyGetStats
Íñigo Goiri created HDFS-13028: -- Summary: RBF: Fix spurious TestRouterRpc#testProxyGetStats Key: HDFS-13028 URL: https://issues.apache.org/jira/browse/HDFS-13028 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Íñigo Goiri
[jira] [Updated] (HDFS-12792) RBF: Test Router-based federation using HDFSContract
[ https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12792: --- Attachment: HDFS-12792.004.patch > RBF: Test Router-based federation using HDFSContract > > > Key: HDFS-12792 > URL: https://issues.apache.org/jira/browse/HDFS-12792 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-12615.000.patch, HDFS-12792.001.patch, > HDFS-12792.002.patch, HDFS-12792.003.patch, HDFS-12792.004.patch > > > Router-based federation should support HDFSContract.
[jira] [Commented] (HDFS-12792) RBF: Test Router-based federation using HDFSContract
[ https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329406#comment-16329406 ] genericqa commented on HDFS-12792: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 22 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 10s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 2s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}173m 23s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.federation.router.TestRouterRpc | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12792 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906425/HDFS-12792.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 60847b20db16 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6e42d05 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/22683/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22683/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/
[jira] [Updated] (HDFS-12911) [SPS]: Modularize the SPS code and expose necessary interfaces for external/internal implementations.
[ https://issues.apache.org/jira/browse/HDFS-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G updated HDFS-12911: --- Attachment: HDFS-12911-HDFS-10285-04.patch > [SPS]: Modularize the SPS code and expose necessary interfaces for > external/internal implementations. > - > > Key: HDFS-12911 > URL: https://issues.apache.org/jira/browse/HDFS-12911 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G >Priority: Major > Attachments: HDFS-12911-HDFS-10285-01.patch, > HDFS-12911-HDFS-10285-02.patch, HDFS-12911-HDFS-10285-03.patch, > HDFS-12911-HDFS-10285-04.patch, HDFS-12911.00.patch > > > One of the key comments from the discussions was to modularize the SPS code, > so we can easily plug in the external/internal implementations. This JIRA is > for doing the necessary refactoring. > Other comments to handle > Daryn: > # Lock should not be kept while executing placement policy. > - handled by HDFS-12982 > # While starting up the NN, SPS Xattrs checks happen even if the feature is > disabled. This could potentially impact the startup speed.
[jira] [Commented] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer
[ https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329344#comment-16329344 ] Íñigo Goiri commented on HDFS-12919: Thanks [~jlowe], so branch-3 is 3.1 and trunk is 3.1+? > RBF: Support erasure coding methods in RouterRpcServer > -- > > Key: HDFS-12919 > URL: https://issues.apache.org/jira/browse/HDFS-12919 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Critical > Labels: RBF > Fix For: 3.1.0, 3.0.1 > > Attachments: HDFS-12919-branch-3.001.patch, > HDFS-12919-branch-3.002.patch, HDFS-12919-branch-3.003.patch, > HDFS-12919.000.patch, HDFS-12919.001.patch, HDFS-12919.002.patch, > HDFS-12919.003.patch, HDFS-12919.004.patch, HDFS-12919.005.patch, > HDFS-12919.006.patch, HDFS-12919.007.patch, HDFS-12919.008.patch, > HDFS-12919.009.patch, HDFS-12919.010.patch, HDFS-12919.011.patch, > HDFS-12919.012.patch, HDFS-12919.013.patch, HDFS-12919.013.patch, > HDFS-12919.014.patch, HDFS-12919.015.patch, HDFS-12919.016.patch, > HDFS-12919.017.patch, HDFS-12919.018.patch, HDFS-12919.019.patch, > HDFS-12919.020.patch, HDFS-12919.021.patch, HDFS-12919.022.patch, > HDFS-12919.023.patch > > > MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws: > {code} > 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area > /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002 > org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException): > Operation "setErasureCodingPolicy" is not supported > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368) > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805) > {code}
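The {{UnsupportedOperationException}} in the quoted trace comes from a fail-fast guard at the top of the RPC method. A toy sketch of that guard pattern is below; the class name, method name, and operation set are illustrative, not the actual RouterRpcServer code:

```java
import java.util.Set;

// Sketch of a fail-fast operation guard: before doing any work, an RPC
// method checks whether the operation is supported by this server and
// throws UnsupportedOperationException otherwise. All names here are
// illustrative, not taken from the Hadoop Router implementation.
public final class OperationGuardSketch {
  // Operations this toy server proxies; setErasureCodingPolicy is absent.
  private static final Set<String> SUPPORTED =
      Set.of("getFileInfo", "mkdirs", "rename");

  static void checkOperation(String op) {
    if (!SUPPORTED.contains(op)) {
      throw new UnsupportedOperationException(
          "Operation \"" + op + "\" is not supported");
    }
  }

  public static void main(String[] args) {
    checkOperation("mkdirs"); // passes silently
    try {
      checkOperation("setErasureCodingPolicy");
    } catch (UnsupportedOperationException e) {
      System.out.println(e.getMessage());
    }
  }
}
```

The benefit of the guard is that unsupported calls fail with a clear message at the server boundary instead of partially executing.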
[jira] [Commented] (HDFS-13004) TestLeaseRecoveryStriped#testLeaseRecovery is failing when safeLength is 0MB or larger than the test file
[ https://issues.apache.org/jira/browse/HDFS-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329311#comment-16329311 ] Zsolt Venczel commented on HDFS-13004: -- Thanks a lot [~jlowe]! > TestLeaseRecoveryStriped#testLeaseRecovery is failing when safeLength is 0MB > or larger than the test file > - > > Key: HDFS-13004 > URL: https://issues.apache.org/jira/browse/HDFS-13004 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Zsolt Venczel >Assignee: Zsolt Venczel >Priority: Major > Labels: flaky-test > Fix For: 3.1.0, 3.0.1 > > Attachments: HDFS-13004.01.patch, HDFS-13004.02.patch, > HDFS-13004.03.patch > > > {code} > Error: > failed testCase at i=1, > blockLengths=org.apache.hadoop.hdfs.TestLeaseRecoveryStriped$BlockLengths@5a4c638d[blockLengths= > {4194304,4194304,4194304,1048576,4194304,4194304,2097152,1048576,4194304},safeLength=25165824] > java.lang.AssertionError: File length should be the same expected:<25165824> > but was:<18874368> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.hdfs.StripedFileTestUtil.verifyLength(StripedFileTestUtil.java:79) > at > org.apache.hadoop.hdfs.StripedFileTestUtil.checkData(StripedFileTestUtil.java:362) > at > org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runTest(TestLeaseRecoveryStriped.java:198) > at > org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.testLeaseRecovery(TestLeaseRecoveryStriped.java:182) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323) > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143) > Stack: > java.lang.AssertionError: > failed testCase at i=1, > 
blockLengths=org.apache.hadoop.hdfs.TestLeaseRecoveryStriped$BlockLengths@5a4c638d[blockLengths={4194304,4194304,4194304,1048576,4194304,4194304,2097152,1048576,4194304} > ,safeLength=25165824] > java.lang.AssertionError: File length should be the same expected:<25165824> > but was:<18874368> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.hdfs.StripedFileTestUtil.verifyLength(StripedFileTestUtil.java:79) > at > org.apache.hadoop.hdfs.StripedFileTestUtil.checkData(StripedFileTestUtil.java:362) > at > org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runTest(TestLeaseRecoveryStriped.java:198) > at > org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.testLeaseRecovery(TestLease
[jira] [Updated] (HDFS-12843) Ozone: Client: TestOzoneRpcClient#testPutKeyRatisThreeNodes is failing
[ https://issues.apache.org/jira/browse/HDFS-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12843: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) [~msingh] Thank you for the contribution. I have committed this to the feature branch. > Ozone: Client: TestOzoneRpcClient#testPutKeyRatisThreeNodes is failing > -- > > Key: HDFS-12843 > URL: https://issues.apache.org/jira/browse/HDFS-12843 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Nanda kumar >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-12843-HDFS-7240.001.patch > > > {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} is failing with below error > {noformat} > java.io.IOException: Create key failed, error:INTERNAL_ERROR > at > org.apache.hadoop.ozone.ksm.protocolPB.KeySpaceManagerProtocolClientSideTranslatorPB.openKey(KeySpaceManagerProtocolClientSideTranslatorPB.java:538) > at > org.apache.hadoop.ozone.client.rpc.RpcClient.createKey(RpcClient.java:455) > at > org.apache.hadoop.ozone.client.OzoneBucket.createKey(OzoneBucket.java:255) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClient.testPutKeyRatisThreeNodes(TestOzoneRpcClient.java:487) > {noformat}
[jira] [Commented] (HDFS-9049) Make Datanode Netty reverse proxy port to be configurable
[ https://issues.apache.org/jira/browse/HDFS-9049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329229#comment-16329229 ] Jason Lowe commented on HDFS-9049: -- This was committed accidentally to branch-3 instead of branch-3.0, so I picked this change over to branch-3.0 for its inclusion in the 3.0.1 release. > Make Datanode Netty reverse proxy port to be configurable > - > > Key: HDFS-9049 > URL: https://issues.apache.org/jira/browse/HDFS-9049 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Vinayakumar B >Assignee: Vinayakumar B >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-9049-01.patch, HDFS-9049-02.patch, > HDFS-9049-03.patch, HDFS-9049-04.patch > > > In DatanodeHttpServer.java Netty is used as reverse proxy. But it uses a random > port to start with, binding to localhost. This port can be made configurable > for better deployments. > {code} > HttpServer2.Builder builder = new HttpServer2.Builder() > .setName("datanode") > .setConf(confForInfoServer) > .setACL(new AccessControlList(conf.get(DFS_ADMIN, " "))) > .hostName(getHostnameForSpnegoPrincipal(confForInfoServer)) > .addEndpoint(URI.create("http://localhost:0")) > .setFindPort(true); > {code}
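The quoted snippet hard-codes {{http://localhost:0}}, i.e. an OS-assigned ephemeral port. A minimal sketch of the proposed direction is to read the port from configuration, with 0 as the default so the old behaviour is preserved. The property name {{dfs.datanode.netty.proxy.port}} below is illustrative, not necessarily what the committed HDFS-9049 patch defines:

```java
import java.util.Properties;

// Sketch: resolve the DataNode reverse-proxy port from configuration.
// A value of 0 keeps the previous behaviour (ephemeral port chosen by
// the OS). The property key is a hypothetical stand-in, not the actual
// key from the HDFS-9049 patch.
public final class ProxyPortSketch {
  static int resolveProxyPort(Properties conf) {
    return Integer.parseInt(
        conf.getProperty("dfs.datanode.netty.proxy.port", "0"));
  }

  public static void main(String[] args) {
    Properties conf = new Properties();
    conf.setProperty("dfs.datanode.netty.proxy.port", "18080");
    // The resolved port would then feed the endpoint URI, e.g.
    // URI.create("http://localhost:" + port), in place of the ":0" literal.
    System.out.println(resolveProxyPort(conf)); // 18080
  }
}
```

Defaulting to 0 matters for rolling upgrades: deployments that never set the new key see no behaviour change.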
[jira] [Commented] (HDFS-12522) Ozone: Remove the Priority Queues used in the Container State Manager
[ https://issues.apache.org/jira/browse/HDFS-12522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329228#comment-16329228 ] Nanda kumar commented on HDFS-12522: Thanks [~anu] for working on this, overall the patch looks good to me. Please find my comments below.

Package naming: {{ContainerStates}} should be {{containerstates}}

*ContainerStateManager.java*
* Line:119 Instead of storing {{ContainerInfo}} in {{lastUsedMap}} we can just store {{ContainerID}}.
* Line:310 Do we need to handle the integer overflow case for {{containerCount}}?
* Line:313 If the {{containers.createContainer(containerInfo)}} call fails, should we revert {{containerCount}}? (If we want to reuse the containerCount value, we need to hold the lock. As of now, {{ContainerStateManager#allocateContainer}} is called only from {{ContainerMapping#allocateContainer}}, which already holds the lock.)
* Line:354 Javadoc to be updated stating that {{getMatchingContainer}} will return {{null}} if there are no matching containers available.
* Line:363 {{containers.getMatchingContainerIDs}} can return null, so {{if (matchingSet.size() == 0)}} has to be {{if (matchingSet == null || matchingSet.size() == 0)}}
* Line:369 We should store {{lastUsedContainer}} for each combination of owner, type and factor instead of just owner. Consider the following case:
{noformat}
[Container - Id: 1, owner: OZONE, type: ratis, factor: 3]
[Container - Id: 2, owner: OZONE, type: ratis, factor: 3]
[Container - Id: 3, owner: OZONE, type: ratis, factor: 3]
[Container - Id: 4, owner: OZONE, type: ratis, factor: 3]
[Container - Id: 5, owner: OZONE, type: ratis, factor: 1]
[Container - Id: 6, owner: OZONE, type: ratis, factor: 3]
[Container - Id: 7, owner: OZONE, type: ratis, factor: 3]
[Container - Id: 8, owner: OZONE, type: ratis, factor: 3]
[Container - Id: 9, owner: OZONE, type: ratis, factor: 3]

Request 1: getMatchingContainer(10, OZONE, RATIS, 3, OPEN)
Will return: Container - Id: 2 //The first time, we will skip the first container.
Request 2: getMatchingContainer(10, OZONE, RATIS, 1, OPEN)
Will return: Container - Id: 5
Request 3: getMatchingContainer(10, OZONE, RATIS, 3, OPEN)
Will return: Container - Id: 6
Request 4: getMatchingContainer(10, OZONE, RATIS, 1, OPEN)
Will return: Container - Id: 5
Request 5: getMatchingContainer(10, OZONE, RATIS, 3, OPEN)
Will return: Container - Id: 6
Request 6: getMatchingContainer(10, OZONE, RATIS, 1, OPEN)
Will return: Container - Id: 5
Request 7: getMatchingContainer(10, OZONE, RATIS, 3, OPEN)
Will return: Container - Id: 6
{noformat}
We will only be using containers 5 and 6; the other containers will not be used.
* Line:424 Can we rename {{getMatchingContainers}} to {{getMatchingContainerIDs}}, as it returns a {{NavigableSet}} of IDs?

*ContainerStateMap.java*
* Line:61 Typo {{Contianer}} -> {{Container}}
* Line:85 {{emptySet}} can be static.
* Line:115 Rename {{createContainer}} to {{addContainer}}, as it does not create any container but adds it to {{ContainerStateMap}}.
* Line:118, 162, 183, 234, 248, 262, 299 The lock instance doesn't need to be assigned to a variable.
* Line:141 Do we need {{getCurrentInfo(ContainerInfo info)}}? We can always use {{getContainerInfo(int containerID)}}.
* Line:151 {{getContainerInfo}} can directly take a {{ContainerID}} object instead of creating a new ContainerID object for each call (unnecessary object creation).
* Line:200 We should not use the {{ContainerInfo}} that is passed as an argument; it will contain a stale {{allocatedBytes}} value, as it's read from the metadata db (the allocatedBytes value is updated in the db only during SCM shutdown). We should always do {{containerMap.get}}, update the state and then {{containerMap.put}}.
* {{updateState}}: We don't need {{ContainerInfo}} as an argument, we can just have {{containerID}}.
* Rename suggestions:
** {{getContainersByOwner}} -> {{getContainerIDsByOwner}}
** {{getContainersByType}} -> {{getContainerIDsByType}}
** {{getContainersByFactor}} -> {{getContainerIDsByFactor}}
** {{getContainersByState}} -> {{getContainerIDsByState}}

*ContainerAttribute.java*
* Line:98 Incorrect comment: {{we inserted the value since it was existing in the set.}} --> {{we inserted the value as it doesn't exist in the set.}}

*ContainerID.java*
* Suggestion: we can add a factory method for ContainerID construction - {{public static ContainerID valueOf(int id)}}

*ContainerMapping.java*
* Line:354 - 355 Stray "+" & "*" in the javadoc.
* Line: 443 - 444 This can be removed.

*ContainerInfo.java*
* Since we have removed the PriorityQueue from ContainerStateManager, we don't need the {{lastUsed}} time in {{ContainerInfo}}. We will be maintaining that information in {{ContainerStateManager#lastUsedMap}}. We can also remove the {{ContainerInfo#compare}} and {{ContainerInfo#compareTo}} methods.
* {{setState}}: Having a setter method in ContainerInfo, this one is just a con
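The Line:369 suggestion above, keying the last-used bookkeeping by (owner, type, factor) instead of owner alone, could look like the following sketch, so factor-1 and factor-3 requests keep independent round-robin cursors. All class and field names here are illustrative, not from the HDFS-12522 patch:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Sketch: track the last-used container per (owner, type, factor)
// combination, so allocations for one replication factor do not reset
// the cursor used by another. Names are illustrative only.
public final class LastUsedKeySketch {
  static final class Key {
    final String owner, type;
    final int factor;
    Key(String owner, String type, int factor) {
      this.owner = owner; this.type = type; this.factor = factor;
    }
    @Override public boolean equals(Object o) {
      if (!(o instanceof Key)) return false;
      Key k = (Key) o;
      return factor == k.factor && owner.equals(k.owner) && type.equals(k.type);
    }
    @Override public int hashCode() { return Objects.hash(owner, type, factor); }
  }

  private final Map<Key, Integer> lastUsedMap = new HashMap<>();

  void recordLastUsed(String owner, String type, int factor, int containerId) {
    lastUsedMap.put(new Key(owner, type, factor), containerId);
  }

  Integer lastUsed(String owner, String type, int factor) {
    return lastUsedMap.get(new Key(owner, type, factor));
  }

  public static void main(String[] args) {
    LastUsedKeySketch m = new LastUsedKeySketch();
    m.recordLastUsed("OZONE", "RATIS", 3, 6);
    m.recordLastUsed("OZONE", "RATIS", 1, 5);
    // The factor-1 and factor-3 cursors no longer clobber each other.
    System.out.println(m.lastUsed("OZONE", "RATIS", 3)); // 6
    System.out.println(m.lastUsed("OZONE", "RATIS", 1)); // 5
  }
}
```

Note this also supports the Line:119 comment: the map stores only the container ID, not the whole {{ContainerInfo}}.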
[jira] [Commented] (HDFS-13004) TestLeaseRecoveryStriped#testLeaseRecovery is failing when safeLength is 0MB or larger than the test file
[ https://issues.apache.org/jira/browse/HDFS-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329208#comment-16329208 ] Jason Lowe commented on HDFS-13004: --- This was committed accidentally to branch-3 instead of branch-3.0, so I picked this change over to branch-3.0 for its inclusion in the 3.0.1 release. > TestLeaseRecoveryStriped#testLeaseRecovery is failing when safeLength is 0MB > or larger than the test file > - > > Key: HDFS-13004 > URL: https://issues.apache.org/jira/browse/HDFS-13004 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Zsolt Venczel >Assignee: Zsolt Venczel >Priority: Major > Labels: flaky-test > Fix For: 3.1.0, 3.0.1 > > Attachments: HDFS-13004.01.patch, HDFS-13004.02.patch, > HDFS-13004.03.patch > > > {code} > Error: > failed testCase at i=1, > blockLengths=org.apache.hadoop.hdfs.TestLeaseRecoveryStriped$BlockLengths@5a4c638d[blockLengths= > {4194304,4194304,4194304,1048576,4194304,4194304,2097152,1048576,4194304},safeLength=25165824] > java.lang.AssertionError: File length should be the same expected:<25165824> > but was:<18874368> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.hdfs.StripedFileTestUtil.verifyLength(StripedFileTestUtil.java:79) > at > org.apache.hadoop.hdfs.StripedFileTestUtil.checkData(StripedFileTestUtil.java:362) > at > org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runTest(TestLeaseRecoveryStriped.java:198) > at > org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.testLeaseRecovery(TestLeaseRecoveryStriped.java:182) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323) > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143) > Stack: > 
java.lang.AssertionError: > failed testCase at i=1, > blockLengths=org.apache.hadoop.hdfs.TestLeaseRecoveryStriped$BlockLengths@5a4c638d[blockLengths={4194304,4194304,4194304,1048576,4194304,4194304,2097152,1048576,4194304} > ,safeLength=25165824] > java.lang.AssertionError: File length should be the same expected:<25165824> > but was:<18874368> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.hdfs.StripedFileTestUtil.verifyLength(StripedFileTestUtil.java:79) > at > org.apache.hadoop.hdfs.StripedFileTestUtil.checkData(StripedFileTestUtil.java:362) > at > org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runT
[jira] [Commented] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer
[ https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329201#comment-16329201 ] Jason Lowe commented on HDFS-12919: --- This was committed accidentally to branch-3 instead of branch-3.0, so I picked this change over to branch-3.0 for its inclusion in the 3.0.1 release. > RBF: Support erasure coding methods in RouterRpcServer > -- > > Key: HDFS-12919 > URL: https://issues.apache.org/jira/browse/HDFS-12919 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Critical > Labels: RBF > Fix For: 3.1.0, 3.0.1 > > Attachments: HDFS-12919-branch-3.001.patch, > HDFS-12919-branch-3.002.patch, HDFS-12919-branch-3.003.patch, > HDFS-12919.000.patch, HDFS-12919.001.patch, HDFS-12919.002.patch, > HDFS-12919.003.patch, HDFS-12919.004.patch, HDFS-12919.005.patch, > HDFS-12919.006.patch, HDFS-12919.007.patch, HDFS-12919.008.patch, > HDFS-12919.009.patch, HDFS-12919.010.patch, HDFS-12919.011.patch, > HDFS-12919.012.patch, HDFS-12919.013.patch, HDFS-12919.013.patch, > HDFS-12919.014.patch, HDFS-12919.015.patch, HDFS-12919.016.patch, > HDFS-12919.017.patch, HDFS-12919.018.patch, HDFS-12919.019.patch, > HDFS-12919.020.patch, HDFS-12919.021.patch, HDFS-12919.022.patch, > HDFS-12919.023.patch > > > MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws: > {code} > 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area > /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002 > org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException): > Operation "setErasureCodingPolicy" is not supported > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368) > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
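The failure above comes from a fail-fast guard: every router RPC first checks whether the method is implemented. A minimal, self-contained sketch of that pattern (illustrative names only, not the actual {{RouterRpcServer}} code):

```java
import java.util.Set;

/**
 * Minimal sketch (illustrative names, not the actual RouterRpcServer code)
 * of the fail-fast guard behind the stack trace above: every RPC first
 * checks whether the router implements it.
 */
public class OperationGuard {

    private final Set<String> supported;

    public OperationGuard(Set<String> supported) {
        this.supported = supported;
    }

    public void checkOperation(String op) {
        if (!supported.contains(op)) {
            // Mirrors the error reported in the job submission log above.
            throw new UnsupportedOperationException(
                "Operation \"" + op + "\" is not supported");
        }
    }

    public static void main(String[] args) {
        // "Supporting" a method amounts to adding it to the implemented set
        // and delegating it to the right subcluster, so the guard passes.
        OperationGuard router = new OperationGuard(Set.of("setErasureCodingPolicy"));
        router.checkOperation("setErasureCodingPolicy"); // no longer throws
    }
}
```

The fix tracked in this issue effectively moves {{setErasureCodingPolicy}} (and the other EC methods) from the unsupported to the implemented side of such a check.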
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329197#comment-16329197 ] Daryn Sharp commented on HDFS-12574: Also, please add client tests for reading a reserved inode path in an EZ, with and w/o the reserved raw prefix. > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch > >
[jira] [Commented] (HDFS-11848) Enhance dfsadmin listOpenFiles command to list files under a given path
[ https://issues.apache.org/jira/browse/HDFS-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329195#comment-16329195 ] Jason Lowe commented on HDFS-11848: --- This was committed accidentally to branch-3 instead of branch-3.0, so I picked this change over to branch-3.0 for its inclusion in the 3.0.1 release. > Enhance dfsadmin listOpenFiles command to list files under a given path > --- > > Key: HDFS-11848 > URL: https://issues.apache.org/jira/browse/HDFS-11848 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha1 >Reporter: Manoj Govindassamy >Assignee: Yiqun Lin >Priority: Major > Fix For: 3.1.0, 3.0.1 > > Attachments: HDFS-11848.001.patch, HDFS-11848.002.patch, > HDFS-11848.003.patch, HDFS-11848.004.patch > > > HDFS-10480 adds a {{listOpenFiles}} option to the {{dfsadmin}} command to list > all the open files in the system. > It would also be useful to filter the output by a given > path or DataNode. Use case: an admin might already know a stale file by path > (perhaps from fsck's -openforwrite), and wants to figure out who the lease > holder is. The proposal here is to add suboptions to {{listOpenFiles}} to list files > filtered by path. > {{LeaseManager#getINodeWithLeases(INodeDirectory)}} can be used to get the > open file list for any given ancestor directory.
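The proposed path filter can be sketched in plain Java. The helper below is hypothetical (the real change would walk INodes via {{LeaseManager#getINodeWithLeases(INodeDirectory)}}), but it shows the intended ancestor-matching semantics:

```java
import java.util.List;
import java.util.stream.Collectors;

/**
 * Sketch of the proposed path filter for listOpenFiles. Hypothetical
 * helper, not the actual LeaseManager API: keep only open files that
 * live under the given ancestor directory.
 */
public class OpenFilePathFilter {

    public static List<String> filterByAncestor(List<String> openFiles, String ancestor) {
        // Normalize so "/a/b" matches "/a/b/f" but not "/a/bc/f".
        final String prefix = ancestor.endsWith("/") ? ancestor : ancestor + "/";
        return openFiles.stream()
                .filter(p -> p.startsWith(prefix))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> open = List.of("/a/b/f1", "/a/bc/f2", "/x/f3");
        System.out.println(filterByAncestor(open, "/a/b")); // [/a/b/f1]
    }
}
```

The trailing-slash normalization matters: a naive {{startsWith("/a/b")}} would wrongly match {{/a/bc/f2}}.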
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329149#comment-16329149 ] Daryn Sharp commented on HDFS-12574: I think it generally looks good. ReadRunner is getting pretty complex but simplifying that is beyond the scope of this feature. My only substantive comment: I think you can revert the changes in {{NamenodeWebHdfsMethods#redirectURI}}. Instead of passing in a {{ResponseBuilder}} and {{FileStatus}} just for the sole purpose of letting OPEN set a header, push the logic up into the open call. That will also avoid introducing a new unnecessary {{getFileInfo}} for creates. A very trivial comment: instead of {{donotFollowRedirect}}, perhaps use {{followRedirects}} to match the {{HttpURLConnection}} method name. It's a bit clumsy to read logic that negates a negative. -- [~andrew.wang], what's your thought on the approach? The main compatibility case is supporting sites that allow DNs to stream back unencrypted data (DNs are KMS proxy users). Current/old webhdfs clients will continue to rely on that behavior. New webhdfs clients will request end-to-end encryption by: # EZ-aware webhdfs client sends header to indicate EZ support # If client indicates support, NN will add FE info header into OPEN response # If client indicates support, NN will prefix the redirect path with /.reserved/raw so DNs will stream the encrypted bytes. Supports RU when there's a mix of old/new DNs. # Webhdfs client wraps a crypto stream using the FE info. > Add CryptoInputStream to WebHdfsFileSystem read call. 
> - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch > >
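Step 3 of the negotiation described above, prefixing the redirect with {{/.reserved/raw}} so DataNodes stream ciphertext, reduces to a small rewrite rule. A hedged sketch with invented names (not the actual {{NamenodeWebHdfsMethods}} code):

```java
/**
 * Sketch of step 3 of the proposed negotiation: if the client advertised
 * EZ support and the file is in an encryption zone, the NameNode prefixes
 * the redirect with /.reserved/raw so the DataNode streams ciphertext and
 * the client decrypts locally with the FE info. Hypothetical helper, not
 * the actual NamenodeWebHdfsMethods logic.
 */
public class EzRedirect {

    static final String RAW_PREFIX = "/.reserved/raw";

    public static String redirectPath(String path, boolean clientSupportsEz, boolean inEncryptionZone) {
        if (clientSupportsEz && inEncryptionZone && !path.startsWith(RAW_PREFIX)) {
            return RAW_PREFIX + path; // DN returns encrypted bytes end-to-end
        }
        return path; // old clients keep the DN-decrypts behavior
    }

    public static void main(String[] args) {
        System.out.println(redirectPath("/ez/file", true, true));  // /.reserved/raw/ez/file
        System.out.println(redirectPath("/ez/file", false, true)); // /ez/file
    }
}
```

Because the rewrite is conditional on the client header, old clients fall through unchanged, which is what keeps the scheme compatible during a rolling upgrade with a mix of old/new DNs.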
[jira] [Commented] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up
[ https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329134#comment-16329134 ] genericqa commented on HDFS-12935: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 53s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 32s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 374 unchanged - 0 fixed = 375 total (was 374) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 1s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}182m 20s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.server.namenode.TestNameNodeMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12935 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906413/HDFS-12935.008.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b78544de8819 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6e42d05 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/22682/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22682/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/22682/testReport/
[jira] [Commented] (HDFS-12843) Ozone: Client: TestOzoneRpcClient#testPutKeyRatisThreeNodes is failing
[ https://issues.apache.org/jira/browse/HDFS-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329129#comment-16329129 ] Anu Engineer commented on HDFS-12843: - +1, I will commit this shortly. > Ozone: Client: TestOzoneRpcClient#testPutKeyRatisThreeNodes is failing > -- > > Key: HDFS-12843 > URL: https://issues.apache.org/jira/browse/HDFS-12843 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Nanda kumar >Assignee: Mukul Kumar Singh >Priority: Major > Attachments: HDFS-12843-HDFS-7240.001.patch > > > {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} is failing with below error > {noformat} > java.io.IOException: Create key failed, error:INTERNAL_ERROR > at > org.apache.hadoop.ozone.ksm.protocolPB.KeySpaceManagerProtocolClientSideTranslatorPB.openKey(KeySpaceManagerProtocolClientSideTranslatorPB.java:538) > at > org.apache.hadoop.ozone.client.rpc.RpcClient.createKey(RpcClient.java:455) > at > org.apache.hadoop.ozone.client.OzoneBucket.createKey(OzoneBucket.java:255) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClient.testPutKeyRatisThreeNodes(TestOzoneRpcClient.java:487) > {noformat}
[jira] [Commented] (HDFS-12051) Intern duplicate byte[] arrays (mainly those denoting file/directory names) to save memory
[ https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329084#comment-16329084 ] Yongjun Zhang commented on HDFS-12051: -- Hi [~szetszwo], I would like to commit by tomorrow if there is no objection. Thanks. > Intern duplicate byte[] arrays (mainly those denoting file/directory names) > to save memory > -- > > Key: HDFS-12051 > URL: https://issues.apache.org/jira/browse/HDFS-12051 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev >Priority: Major > Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, > HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch, > HDFS-12051.06.patch > > > When snapshot diff operation is performed in a NameNode that manages several > million HDFS files/directories, NN needs a lot of memory. Analyzing one heap > dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays > result in 6.5% memory overhead, and most of these arrays are referenced by > {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}} > and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}: > {code:java} > 19. 
DUPLICATE PRIMITIVE ARRAYS > Types of duplicate objects: > Ovhd Num objs Num unique objs Class name > 3,220,272K (6.5%) 104749528 25760871 byte[] > > 1,841,485K (3.7%), 53194037 dup arrays (13158094 unique) > 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 > of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, > 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, > 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), > 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 > of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, > 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, > 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...) > ... and 45902395 more arrays, of which 13158084 are unique > <-- > org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name > <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode > <-- {j.u.ArrayList} <-- > org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- > org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs > <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- > org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 > elements) ... 
<-- > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0 > <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java > Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER > 409,830K (0.8%), 13482787 dup arrays (13260241 unique) > 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of > byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...) > ... and 13479257 more arrays, of which 13260231 are unique > <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- > org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0 > <-- j.l.Thread[] <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap
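The shape of this data (~100M arrays of which only ~25M are unique) is what a content-keyed interning cache exploits: return one canonical array per distinct byte sequence so duplicates can be garbage-collected. A minimal stand-alone sketch of the idea (not the actual NameCache implementation):

```java
import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Minimal stand-alone sketch of content-based byte[] interning (not the
 * actual NameCache implementation): one canonical array is kept per
 * distinct byte sequence, so duplicate copies can be dropped.
 */
public class ByteArrayInterner {

    /** Wrapper giving byte[] value-based equals/hashCode for map lookup. */
    private static final class Key {
        final byte[] bytes;
        Key(byte[] b) { bytes = b; }
        @Override public boolean equals(Object o) {
            return o instanceof Key && Arrays.equals(bytes, ((Key) o).bytes);
        }
        @Override public int hashCode() { return Arrays.hashCode(bytes); }
    }

    private final ConcurrentHashMap<Key, byte[]> cache = new ConcurrentHashMap<>();

    /** Returns the canonical array with the same contents as {@code name}. */
    public byte[] intern(byte[] name) {
        byte[] existing = cache.putIfAbsent(new Key(name), name);
        return existing != null ? existing : name;
    }

    public int size() { return cache.size(); }

    public static void main(String[] args) {
        ByteArrayInterner interner = new ByteArrayInterner();
        byte[] a = "part-m-000".getBytes();
        byte[] b = "part-m-000".getBytes();
        System.out.println(interner.intern(a) == a); // true: first copy becomes canonical
        System.out.println(interner.intern(b) == a); // true: duplicate collapses onto it
    }
}
```

Unlike this unbounded map, a production cache has to bound its footprint; per the discussion on this issue, the new NameCache sizes itself to the input data within user-specified limits, precisely because most arrays here have no duplicates at all.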
[jira] [Commented] (HDFS-11847) Enhance dfsadmin listOpenFiles command to list files blocking datanode decommissioning
[ https://issues.apache.org/jira/browse/HDFS-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329062#comment-16329062 ] Manoj Govindassamy commented on HDFS-11847: --- [~jlowe], My bad. My intention was only to use branch-3.0 and not create a new branch. Will check my scripts. Thanks for spotting this. Please let me know the corrective actions and I will follow them. > Enhance dfsadmin listOpenFiles command to list files blocking datanode > decommissioning > -- > > Key: HDFS-11847 > URL: https://issues.apache.org/jira/browse/HDFS-11847 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-alpha1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy >Priority: Major > Fix For: 3.1.0, 3.0.1 > > Attachments: HDFS-11847.01.patch, HDFS-11847.02.patch, > HDFS-11847.03.patch, HDFS-11847.04.patch, HDFS-11847.05.patch > > > HDFS-10480 adds a {{listOpenFiles}} option to the {{dfsadmin}} command to list > all the open files in the system. > Additionally, it would be very useful to only list open files that are > blocking DataNode decommissioning. With thousand+ node clusters, where > machines might be added and removed regularly for maintenance, any > option to monitor and debug decommissioning status is very helpful. The proposal > here is to add suboptions to {{listOpenFiles}} for the above case.
[jira] [Updated] (HDFS-12792) RBF: Test Router-based federation using HDFSContract
[ https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12792: --- Attachment: HDFS-12792.003.patch > RBF: Test Router-based federation using HDFSContract > > > Key: HDFS-12792 > URL: https://issues.apache.org/jira/browse/HDFS-12792 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-12615.000.patch, HDFS-12792.001.patch, > HDFS-12792.002.patch, HDFS-12792.003.patch > > > Router-based federation should support HDFSContract.
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328990#comment-16328990 ] Rushabh S Shah commented on HDFS-12574: --- Based on offline discussions with [~daryn], he had the following suggestions. +WebHdfsFileSystem.java+ # Replace \{{redirectUrl}} with \{{resolvedUrl}}. +EncryptableInputStream.java+ # Remove this class altogether. +DFSClient.java+ # Move \{{DFSInputStream#createWrappedInputStream}} to {{DFSClient}} +NamenodeWebHdfsMethods.java+ # Previous patch was not applying cleanly due to some change. Rebased the patch. Addressed all the comments in the latest patch. Unit Test Failures. {noformat} [INFO] Running org.apache.hadoop.hdfs.server.balancer.TestBalancer [INFO] Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 382.018 s - in org.apache.hadoop.hdfs.server.balancer.TestBalancer [INFO] Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 116.024 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure [ERROR] testUnderReplicationAfterVolFailure(org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure) Time elapsed: 93.742 s <<< ERROR! 
java.util.concurrent.TimeoutException: --> tracked by HDFS-11398 [WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 267.471 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 [INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 [WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 263.43 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 [INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 [WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 267.398 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 [INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 [WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 266.834 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 [INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 [WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 265.416 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 [INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 [INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy [WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 12, Time elapsed: 243.934 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy [INFO] Running org.apache.hadoop.hdfs.TestErasureCodingPolicies [INFO] Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 119.921 s - in org.apache.hadoop.hdfs.TestErasureCodingPolicies [INFO] Running org.apache.hadoop.hdfs.TestMaintenanceState [INFO] Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 374.135 s - in org.apache.hadoop.hdfs.TestMaintenanceState [INFO] Running org.apache.hadoop.hdfs.TestReplication [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 53.958 s - in org.apache.hadoop.hdfs.TestReplication [INFO] Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.164 s - in org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts [ERROR] Errors: [ERROR] TestDataNodeVolumeFailure.testUnderReplicationAfterVolFailure:412 » Timeout Ti... [INFO] [ERROR] Tests run: 391, Failures: 0, Errors: 1, Skipped: 42{noformat} Re-ran all the failed tests to ensure none of them is failing due to my change. All were passing except {{TestDataNodeVolumeFailure#testUnderReplicationAfterVolFailure}} which is being tracked by {{HDFS-11398}}. [~daryn]: can you please provide feedback on the latest patch. > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch > >
[jira] [Commented] (HDFS-11847) Enhance dfsadmin listOpenFiles command to list files blocking datanode decommissioning
[ https://issues.apache.org/jira/browse/HDFS-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328942#comment-16328942 ] Jason Lowe commented on HDFS-11847: --- I recently noticed the new {{branch-3}} branch and tracked it back to here. branch-3.0 is for tracking 3.0.x releases, currently 3.0.1-SNAPSHOT. Was the creation of {{branch-3}} intentional, and if so, how does it differ from branch-3.0? > Enhance dfsadmin listOpenFiles command to list files blocking datanode > decommissioning > -- > > Key: HDFS-11847 > URL: https://issues.apache.org/jira/browse/HDFS-11847 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-alpha1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy >Priority: Major > Fix For: 3.1.0, 3.0.1 > > Attachments: HDFS-11847.01.patch, HDFS-11847.02.patch, > HDFS-11847.03.patch, HDFS-11847.04.patch, HDFS-11847.05.patch > > > HDFS-10480 adds a {{listOpenFiles}} option to the {{dfsadmin}} command to list > all the open files in the system. > Additionally, it would be very useful to only list open files that are > blocking DataNode decommissioning. With thousand+ node clusters, where > machines might be added and removed regularly for maintenance, any > option to monitor and debug decommissioning status is very helpful. The proposal > here is to add suboptions to {{listOpenFiles}} for the above case.
[jira] [Comment Edited] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up
[ https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328857#comment-16328857 ] Jianfei Jiang edited comment on HDFS-12935 at 1/17/18 3:01 PM: --- Patch 008: Rebase trunk. was (Author: jiangjianfei): Rebase trunk. > Get ambiguous result for DFSAdmin command in HA mode when only one namenode > is up > - > > Key: HDFS-12935 > URL: https://issues.apache.org/jira/browse/HDFS-12935 > Project: Hadoop HDFS > Issue Type: Bug > Components: tools >Affects Versions: 2.9.0, 3.0.0-beta1, 3.0.0 >Reporter: Jianfei Jiang >Assignee: Jianfei Jiang >Priority: Major > Attachments: HDFS-12935.002.patch, HDFS-12935.003.patch, > HDFS-12935.004.patch, HDFS-12935.005.patch, HDFS-12935.006-branch.2.patch, > HDFS-12935.006.patch, HDFS-12935.007-branch.2.patch, HDFS-12935.007.patch, > HDFS-12935.008.patch, HDFS_12935.001.patch > > > In HA mode, if one namenode is down, most functions can still work. > Consider the following two situations: > (1)nn1 up and nn2 down > (2)nn1 down and nn2 up > These two situations should be equivalent. However, some of the DFSAdmin > commands will have ambiguous results. The commands can be sent successfully > to the up namenode and are functionally useful only when nn1 is up, > regardless of the exception (IOException when connecting to the down namenode > nn2). If only nn2 is up, the commands have no effect at all and only an exception > connecting to nn1 can be found. > See the following command "hdfs dfsadmin setBalancerBandwidth", which aims to > set the balancer bandwidth value for datanodes, as an example. It works and all > the datanodes get the setting values only when nn1 is up. If only nn2 is > up, the command throws an exception directly and no datanode gets the bandwidth > setting. Approximately ten DFSAdmin commands use a similar logical process > and may be ambiguous. 
> [root@jiangjianfei01 ~]# hdfs haadmin -getServiceState nn1
> active
> [root@jiangjianfei01 ~]# hdfs dfsadmin -setBalancerBandwidth 12345
> *Balancer bandwidth is set to 12345 for jiangjianfei01/172.17.0.14:9820*
> setBalancerBandwidth: Call From jiangjianfei01/172.17.0.14 to jiangjianfei02:9820 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
> [root@jiangjianfei01 ~]# hdfs haadmin -getServiceState nn2
> active
> [root@jiangjianfei01 ~]# hdfs dfsadmin -setBalancerBandwidth 1234
> setBalancerBandwidth: Call From jiangjianfei01/172.17.0.14 to jiangjianfei01:9820 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
> [root@jiangjianfei01 ~]#
--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
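The ambiguity in the transcript above comes from commands that send the RPC to each namenode in order and abort on the first connection failure, so the outcome depends on which namenode happens to be listed first. A minimal sketch of the more robust pattern discussed in this issue — try every namenode, collect failures, and succeed if any namenode accepted the command. All class and method names here are illustrative stand-ins, not the actual Hadoop DFSAdmin API:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: apply an admin operation to every configured
// namenode, collecting per-node failures instead of aborting on the
// first connection exception.
public class HaAdminSketch {

    // Stand-in for a per-namenode RPC call (illustrative interface).
    interface NamenodeOp {
        void run(String namenodeAddress) throws IOException;
    }

    /**
     * Runs the operation against each namenode. The command counts as
     * successful (exit code 0) if at least one namenode accepted it;
     * exceptions from unreachable namenodes are reported to stderr but
     * do not mask that success.
     */
    static int runOnAllNamenodes(List<String> namenodes, NamenodeOp op) {
        List<String> failures = new ArrayList<>();
        int successes = 0;
        for (String nn : namenodes) {
            try {
                op.run(nn);
                successes++;
            } catch (IOException e) {
                // Record, but keep going: another namenode may be up.
                failures.add(nn + ": " + e.getMessage());
            }
        }
        for (String f : failures) {
            System.err.println("Failed on " + f);
        }
        return successes > 0 ? 0 : -1; // fail only if no namenode worked
    }

    public static void main(String[] args) {
        // Simulated run: nn1 is down, nn2 accepts the command.
        int exit = runOnAllNamenodes(List.of("nn1:9820", "nn2:9820"), addr -> {
            if (addr.startsWith("nn1")) {
                throw new IOException("Connection refused");
            }
        });
        System.out.println("exit=" + exit);
    }
}
```

With this shape, the "nn1 down, nn2 up" and "nn1 up, nn2 down" cases behave symmetrically, which is the equivalence the issue description asks for.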
[jira] [Updated] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up
[ https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jianfei Jiang updated HDFS-12935:
-
Status: Patch Available (was: In Progress)

Rebase trunk.
[jira] [Updated] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up
[ https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jianfei Jiang updated HDFS-12935:
-
Attachment: HDFS-12935.008.patch
[jira] [Updated] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up
[ https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jianfei Jiang updated HDFS-12935:
-
Attachment: (was: HDFS-12935.008.patch)
[jira] [Updated] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up
[ https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jianfei Jiang updated HDFS-12935:
-
Status: In Progress (was: Patch Available)
[jira] [Commented] (HDFS-13027) Handle NPE due to deleted blocks in race condition
[ https://issues.apache.org/jira/browse/HDFS-13027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328778#comment-16328778 ] genericqa commented on HDFS-13027:
--
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 14s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 17m 56s | trunk passed |
| +1 | compile | 0m 56s | trunk passed |
| +1 | checkstyle | 0m 44s | trunk passed |
| +1 | mvnsite | 1m 2s | trunk passed |
| +1 | shadedclient | 11m 49s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 15s | trunk passed |
| +1 | javadoc | 0m 57s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 15s | the patch passed |
| +1 | compile | 1m 7s | the patch passed |
| +1 | javac | 1m 7s | the patch passed |
| +1 | checkstyle | 0m 45s | the patch passed |
| +1 | mvnsite | 1m 8s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 37s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 32s | the patch passed |
| +1 | javadoc | 0m 58s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 138m 22s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
| | | 195m 1s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
| | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
| | hadoop.hdfs.TestFetchImage |
| | hadoop.hdfs.TestDFSStartupVersions |
| | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.hdfs.TestErasureCodingMultipleRacks |
| | hadoop.hdfs.TestDFSStripedOutputStream |
| | hadoop.hdfs.server.federation.router.TestRouterRpc |
| | hadoop.hdfs.TestHDFSTrash |
| | hadoop.hdfs.TestReplication |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13027 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906379/HDFS-13027-01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 15b56e686c7e 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patc
[jira] [Commented] (HDFS-13026) Ozone: TestContainerPersistence is failing because of container data mismatch
[ https://issues.apache.org/jira/browse/HDFS-13026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328767#comment-16328767 ] genericqa commented on HDFS-13026:
--
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 25s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || HDFS-7240 Compile Tests ||
| +1 | mvninstall | 18m 58s | HDFS-7240 passed |
| +1 | compile | 1m 4s | HDFS-7240 passed |
| +1 | checkstyle | 0m 42s | HDFS-7240 passed |
| +1 | mvnsite | 1m 8s | HDFS-7240 passed |
| +1 | shadedclient | 13m 40s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 28s | HDFS-7240 passed |
| +1 | javadoc | 1m 9s | HDFS-7240 passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 16s | the patch passed |
| +1 | compile | 1m 3s | the patch passed |
| +1 | javac | 1m 3s | the patch passed |
| +1 | checkstyle | 0m 39s | the patch passed |
| +1 | mvnsite | 1m 7s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 49s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 27s | the patch passed |
| +1 | javadoc | 1m 6s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 148m 47s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 53s | The patch does not generate ASF License warnings. |
| | | 209m 27s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
| | hadoop.ozone.ozShell.TestOzoneShell |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
| | hadoop.ozone.TestOzoneConfigurationFields |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
| | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.cblock.TestBufferManager |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |
| | hadoop.ozone.container.replication.TestContainerReplicationManager |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.hdfs.TestReconstructStripedFile |
| | hadoop.ozone.ksm.TestKeySpaceManager |
| | hadoop.cblock.TestCBlockReadWrite |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
| | hadoop.ozone.scm.TestSCMCli |
| | hadoop.ozone.web.client.TestKeys |
| | hadoop.hdfs.TestEncryptionZonesWithKMS |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-13026 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906374/HDFS-13026-HDFS-7240.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 4c
[jira] [Comment Edited] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up
[ https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328339#comment-16328339 ] Jianfei Jiang edited comment on HDFS-12935 at 1/17/18 12:31 PM:
Thanks [~brahmareddy] for pointing this out. I will update the patch to handle {{listOpenFiles}}.

was (Author: jiangjianfei): Thanks [~brahmareddy] for pointing this out. I have updated the patch to handle {{listOpenFiles}}. Please review if available.
[jira] [Updated] (HDFS-13027) Handle NPE due to deleted blocks in race condition
[ https://issues.apache.org/jira/browse/HDFS-13027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B updated HDFS-13027:
-
Status: Patch Available (was: Open)

> Handle NPE due to deleted blocks in race condition
> -
>
> Key: HDFS-13027
> URL: https://issues.apache.org/jira/browse/HDFS-13027
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Reporter: Vinayakumar B
> Assignee: Vinayakumar B
> Priority: Major
> Attachments: HDFS-13027-01.patch
>
> Since file deletions and block removal from the BlocksMap are done under separate locks, calls to {{blockManager.getBlockCollection(block)}} can return null and cause NPEs. Handle all possible NPEs arising from this race.
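The race described above means a block can be removed from the BlocksMap between the moment work involving it is scheduled and the moment it is processed. A minimal sketch of the defensive null-check pattern this issue calls for — the types and names below are simplified illustrative stand-ins, not the actual BlockManager API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-ins for the NameNode structures involved in the race.
public class BlockLookupSketch {

    static class BlockCollection {
        final String name;
        BlockCollection(String name) { this.name = name; }
    }

    // Plays the role of BlocksMap: entries may be removed concurrently
    // by file deletion, which runs under a different lock.
    static final Map<Long, BlockCollection> blocksMap = new ConcurrentHashMap<>();

    /**
     * Defensive lookup: treat a null result as "block already deleted"
     * and skip it, instead of dereferencing null and hitting an NPE.
     */
    static String describeBlock(long blockId) {
        BlockCollection bc = blocksMap.get(blockId);
        if (bc == null) {
            // The block was removed between scheduling and processing.
            return "block " + blockId + " no longer exists, skipping";
        }
        return "block " + blockId + " belongs to " + bc.name;
    }

    public static void main(String[] args) {
        blocksMap.put(1L, new BlockCollection("/user/foo/file"));
        System.out.println(describeBlock(1L));
        blocksMap.remove(1L); // simulates the concurrent file deletion
        System.out.println(describeBlock(1L));
    }
}
```

Every call site that uses the result of such a lookup needs this check (or must avoid the lookup entirely, as the related HDFS-12638 fix did), because holding the namesystem lock alone does not guarantee the block still exists.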
[jira] [Updated] (HDFS-13027) Handle NPE due to deleted blocks in race condition
[ https://issues.apache.org/jira/browse/HDFS-13027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B updated HDFS-13027:
-
Attachment: HDFS-13027-01.patch
[jira] [Commented] (HDFS-13027) Handle NPE due to deleted blocks in race condition
[ https://issues.apache.org/jira/browse/HDFS-13027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328596#comment-16328596 ] Vinayakumar B commented on HDFS-13027:
--
Attached the patch for trunk.
[jira] [Commented] (HDFS-13027) Handle NPE due to deleted blocks in race condition
[ https://issues.apache.org/jira/browse/HDFS-13027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328594#comment-16328594 ] Vinayakumar B commented on HDFS-13027: -- One similar issue fixed recently was HDFS-12638, which avoided calls to {{getBlockCollection()}} itself. > Handle NPE due to deleted blocks in race condition > -- > > Key: HDFS-13027 > URL: https://issues.apache.org/jira/browse/HDFS-13027 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Vinayakumar B >Assignee: Vinayakumar B >Priority: Major > > Since File deletions and Block removal from BlocksMap done in separate locks, > there are possibilities of NPE due to calls of > {{blockManager.getBlockCollection(block)}} returning null. > Handle all possibilities of NPEs due to this. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-13027) Handle NPE due to deleted blocks in race condition
[ https://issues.apache.org/jira/browse/HDFS-13027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B reassigned HDFS-13027: Assignee: Vinayakumar B > Handle NPE due to deleted blocks in race condition > -- > > Key: HDFS-13027 > URL: https://issues.apache.org/jira/browse/HDFS-13027 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Vinayakumar B >Assignee: Vinayakumar B >Priority: Major > > Since File deletions and Block removal from BlocksMap done in separate locks, > there are possibilities of NPE due to calls of > {{blockManager.getBlockCollection(block)}} returning null. > Handle all possibilities of NPEs due to this. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13027) Handle NPE due to deleted blocks in race condition
Vinayakumar B created HDFS-13027: Summary: Handle NPE due to deleted blocks in race condition Key: HDFS-13027 URL: https://issues.apache.org/jira/browse/HDFS-13027 Project: Hadoop HDFS Issue Type: Bug Components: namenode Reporter: Vinayakumar B Since File deletions and Block removal from BlocksMap done in separate locks, there are possibilities of NPE due to calls of {{blockManager.getBlockCollection(block)}} returning null. Handle all possibilities of NPEs due to this. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
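The race described above — the block-to-collection mapping cleaned up under a different lock than the file deletion itself — can be sketched outside Hadoop with plain Java. The `BlocksMapSketch` class and its method names below are illustrative, not actual HDFS code; the point is the null guard a caller needs when the mapping may vanish between operations, which is what {{blockManager.getBlockCollection(block)}} returning null demands.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch (not HDFS code): a block-to-collection map that a
// concurrent deletion may have already cleaned up. Any lookup must
// therefore tolerate a null result instead of dereferencing blindly.
public class BlocksMapSketch {
    private final Map<Long, String> blocksMap = new ConcurrentHashMap<>();

    public void addBlock(long blockId, String collection) {
        blocksMap.put(blockId, collection);
    }

    public void deleteBlock(long blockId) {
        blocksMap.remove(blockId); // may race with readers
    }

    // The defensive pattern: return null rather than throw NPE,
    // letting callers skip work for already-deleted blocks.
    public String getCollectionName(long blockId) {
        String collection = blocksMap.get(blockId);
        if (collection == null) {
            return null; // block deleted concurrently; caller must handle
        }
        return collection.toUpperCase();
    }
}
```

Handling the null at every call site, as the issue proposes, is the conservative fix; HDFS-12638 took the complementary route of removing some of the lookups entirely.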
[jira] [Commented] (HDFS-13003) Access time on dir changed via setTimes() should be stored in fsimage
[ https://issues.apache.org/jira/browse/HDFS-13003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328580#comment-16328580 ] Sreejith MV commented on HDFS-13003: Thanks for the comments [~brahmareddy]. As discussed, I have added one testcase for the scenario. Would you please review the patch? > Access time on dir changed via setTimes() should be stored in fsimage > - > > Key: HDFS-13003 > URL: https://issues.apache.org/jira/browse/HDFS-13003 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Sreejith MV >Assignee: Sreejith MV >Priority: Major > Attachments: HDFS-13003.002.patch, HDFS-13003.patch > > > Access time for a directory can be modified with > DistributedFileSystem.setTimes(). > But this changed access time is not stored in the fsimage. > After restart of namenode, it will be lost and reset as zero. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13003) Access time on dir changed via setTimes() should be stored in fsimage
[ https://issues.apache.org/jira/browse/HDFS-13003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sreejith MV updated HDFS-13003: --- Attachment: HDFS-13003.002.patch > Access time on dir changed via setTimes() should be stored in fsimage > - > > Key: HDFS-13003 > URL: https://issues.apache.org/jira/browse/HDFS-13003 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Sreejith MV >Assignee: Sreejith MV >Priority: Major > Attachments: HDFS-13003.002.patch, HDFS-13003.patch > > > Access time for a directory can be modified with > DistributedFileSystem.setTimes(). > But this changed access time is not stored in the fsimage. > After restart of namenode, it will be lost and reset as zero. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13003) Access time on dir changed via setTimes() should be stored in fsimage
[ https://issues.apache.org/jira/browse/HDFS-13003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sreejith MV updated HDFS-13003: --- Priority: Major (was: Minor) > Access time on dir changed via setTimes() should be stored in fsimage > - > > Key: HDFS-13003 > URL: https://issues.apache.org/jira/browse/HDFS-13003 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Sreejith MV >Assignee: Sreejith MV >Priority: Major > Attachments: HDFS-13003.patch > > > Access time for a directory can be modified with > DistributedFileSystem.setTimes(). > But this changed access time is not stored in the fsimage. > After restart of namenode, it will be lost and reset as zero. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
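The bug's shape — an attribute that exists in memory but is dropped by the persisted image, so a restart resets it to zero — can be sketched with plain Java. `DirAttrSketch` is illustrative only, not the fsimage code; it just contrasts a save path that omits the access time with one that persists it.

```java
// Minimal sketch (not HDFS code) of the reported bug: a directory
// node whose saved "image" omits the access time, so a reload after
// restart resets it to zero. The fix is to persist the field too.
public class DirAttrSketch {
    static final class DirNode {
        long modTime;
        long accessTime;
    }

    // Buggy save: drops accessTime, like the fsimage before the patch.
    static long[] saveWithoutAtime(DirNode d) {
        return new long[] { d.modTime };
    }

    static DirNode loadWithoutAtime(long[] image) {
        DirNode d = new DirNode();
        d.modTime = image[0];
        d.accessTime = 0L; // lost across restart
        return d;
    }

    // Fixed save: persists both timestamps.
    static long[] saveWithAtime(DirNode d) {
        return new long[] { d.modTime, d.accessTime };
    }

    static DirNode loadWithAtime(long[] image) {
        DirNode d = new DirNode();
        d.modTime = image[0];
        d.accessTime = image[1];
        return d;
    }
}
```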
[jira] [Commented] (HDFS-12843) Ozone: Client: TestOzoneRpcClient#testPutKeyRatisThreeNodes is failing
[ https://issues.apache.org/jira/browse/HDFS-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328572#comment-16328572 ] genericqa commented on HDFS-12843: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 46s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 18s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 8s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}130m 53s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}192m 14s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.web.client.TestKeysRatis | | | hadoop.ozone.TestOzoneConfigurationFields | | | hadoop.ozone.container.common.impl.TestContainerPersistence | | | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks | | | hadoop.ozone.ksm.TestKeySpaceManager | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.ozone.scm.TestSCMCli | | | hadoop.ozone.web.client.TestKeys | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b | | JIRA Issue | HDFS-12843 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906350/HDFS-12843-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a7a0f8beee4a 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 18f9fea | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22678/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-
[jira] [Updated] (HDFS-13026) Ozone: TestContainerPersistence is failing because of container data mismatch

[ https://issues.apache.org/jira/browse/HDFS-13026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-13026: - Status: Patch Available (was: Open) > Ozone: TestContainerPersistence is failing becaue of container data mismatch > > > Key: HDFS-13026 > URL: https://issues.apache.org/jira/browse/HDFS-13026 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13026-HDFS-7240.001.patch > > > TestContainerPersistence fails because of the following error. > {code} > [INFO] Running > org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence > [ERROR] Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: > 10.386 s <<< FAILURE! - in > org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence > [ERROR] > testMultipleWriteSingleRead(org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence) > Time elapsed: 2.294 s <<< FAILURE! 
org.junit.ComparisonFailure: > expected:<[76626052b877b37503fdb052dfd8f73398643a02f9f7432b4e3a7f8b69b85915]> > but was:<[940c05649be7c86ab3993aa708f3d30311ebc0b68723465ac9c90460a23e6e58]> > at org.junit.Assert.assertEquals(Assert.java:115) at > org.junit.Assert.assertEquals(Assert.java:144) at > org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence.testMultipleWriteSingleRead(TestContainerPersistence.java:586) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at > org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13026) Ozone: TestContainerPersistence is failing because of container data mismatch
[ https://issues.apache.org/jira/browse/HDFS-13026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-13026: - Environment: (was: TestContainerPersistence fails because of the following error. {code} [INFO] Running org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence [ERROR] Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.386 s <<< FAILURE! - in org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence [ERROR] testMultipleWriteSingleRead(org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence) Time elapsed: 2.294 s <<< FAILURE! org.junit.ComparisonFailure: expected:<[76626052b877b37503fdb052dfd8f73398643a02f9f7432b4e3a7f8b69b85915]> but was:<[940c05649be7c86ab3993aa708f3d30311ebc0b68723465ac9c90460a23e6e58]> at org.junit.Assert.assertEquals(Assert.java:115) at org.junit.Assert.assertEquals(Assert.java:144) at org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence.testMultipleWriteSingleRead(TestContainerPersistence.java:586) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168) at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code}) > Ozone: TestContainerPersistence is failing becaue of container data mismatch > > > Key: HDFS-13026 > URL: https://issues.apache.org/jira/browse/HDFS-13026 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13026-HDFS-7240.001.patch > > > TestContainerPersistence fails because of the following error. > {code} > [INFO] Running > org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence > [ERROR] Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: > 10.386 s <<< FAILURE! - in > org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence > [ERROR] > testMultipleWriteSingleRead(org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence) > Time elapsed: 2.294 s <<< FAILURE! org.junit.ComparisonFailure: > expected:<[76626052b877b37503fdb052dfd8f73398643a02f9f7432b4e3a7f8b69b85915]> > but was:<[940c05649be7c86ab3993aa708f3d30311ebc0b68723465ac9c90460a23e6e58]> > at org.junit.Assert.assertEquals(Assert.java:115) at > org.junit.Assert.assertEquals(Assert.java:144) at > org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence.testMultipleWriteSingleRead(TestContainerPersistence.java:586) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at > org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsu
[jira] [Updated] (HDFS-13026) Ozone: TestContainerPersistence is failing because of container data mismatch
[ https://issues.apache.org/jira/browse/HDFS-13026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-13026: - Description: TestContainerPersistence fails because of the following error. {code} [INFO] Running org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence [ERROR] Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.386 s <<< FAILURE! - in org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence [ERROR] testMultipleWriteSingleRead(org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence) Time elapsed: 2.294 s <<< FAILURE! org.junit.ComparisonFailure: expected:<[76626052b877b37503fdb052dfd8f73398643a02f9f7432b4e3a7f8b69b85915]> but was:<[940c05649be7c86ab3993aa708f3d30311ebc0b68723465ac9c90460a23e6e58]> at org.junit.Assert.assertEquals(Assert.java:115) at org.junit.Assert.assertEquals(Assert.java:144) at org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence.testMultipleWriteSingleRead(TestContainerPersistence.java:586) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168) at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} > Ozone: TestContainerPersistence is failing becaue of container data mismatch > > > Key: HDFS-13026 > URL: https://issues.apache.org/jira/browse/HDFS-13026 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 > Environment: TestContainerPersistence fails because of the following > error. > {code} > [INFO] Running > org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence > [ERROR] Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: > 10.386 s <<< FAILURE! - in > org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence > [ERROR] > testMultipleWriteSingleRead(org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence) > Time elapsed: 2.294 s <<< FAILURE! > org.junit.ComparisonFailure: > expected:<[76626052b877b37503fdb052dfd8f73398643a02f9f7432b4e3a7f8b69b85915]> > but was:<[940c05649be7c86ab3993aa708f3d30311ebc0b68723465ac9c90460a23e6e58]> > at org.junit.Assert.assertEquals(Assert.java:115) > at org.junit.Assert.assertEquals(Assert.java:144) > at > org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence.testMultipleWriteSingleRead(TestContainerPersistence.java:586) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at > org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {code} >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13026-HDFS-7240.001.patch > > > TestContainerPersistence fails because of the following error. > {code} > [INFO] Running > org.apache.hadoop.ozone.container.common.impl.TestContain
[jira] [Updated] (HDFS-13026) Ozone: TestContainerPersistence is failing because of container data mismatch
[ https://issues.apache.org/jira/browse/HDFS-13026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-13026: - Attachment: HDFS-13026-HDFS-7240.001.patch > Ozone: TestContainerPersistence is failing becaue of container data mismatch > > > Key: HDFS-13026 > URL: https://issues.apache.org/jira/browse/HDFS-13026 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 > Environment: TestContainerPersistence fails because of the following > error. > {code} > [INFO] Running > org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence > [ERROR] Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: > 10.386 s <<< FAILURE! - in > org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence > [ERROR] > testMultipleWriteSingleRead(org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence) > Time elapsed: 2.294 s <<< FAILURE! > org.junit.ComparisonFailure: > expected:<[76626052b877b37503fdb052dfd8f73398643a02f9f7432b4e3a7f8b69b85915]> > but was:<[940c05649be7c86ab3993aa708f3d30311ebc0b68723465ac9c90460a23e6e58]> > at org.junit.Assert.assertEquals(Assert.java:115) > at org.junit.Assert.assertEquals(Assert.java:144) > at > org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence.testMultipleWriteSingleRead(TestContainerPersistence.java:586) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at > org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {code} >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13026-HDFS-7240.001.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13026) Ozone: TestContainerPersistence is failing because of container data mismatch
Mukul Kumar Singh created HDFS-13026: Summary: Ozone: TestContainerPersistence is failing becaue of container data mismatch Key: HDFS-13026 URL: https://issues.apache.org/jira/browse/HDFS-13026 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Affects Versions: HDFS-7240 Environment: TestContainerPersistence fails because of the following error. {code} [INFO] Running org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence [ERROR] Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.386 s <<< FAILURE! - in org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence [ERROR] testMultipleWriteSingleRead(org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence) Time elapsed: 2.294 s <<< FAILURE! org.junit.ComparisonFailure: expected:<[76626052b877b37503fdb052dfd8f73398643a02f9f7432b4e3a7f8b69b85915]> but was:<[940c05649be7c86ab3993aa708f3d30311ebc0b68723465ac9c90460a23e6e58]> at org.junit.Assert.assertEquals(Assert.java:115) at org.junit.Assert.assertEquals(Assert.java:144) at org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence.testMultipleWriteSingleRead(TestContainerPersistence.java:586) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} Reporter: Mukul Kumar Singh Assignee: Mukul Kumar Singh Fix For: HDFS-7240 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-9049) Make Datanode Netty reverse proxy port to be configurable
[ https://issues.apache.org/jira/browse/HDFS-9049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328527#comment-16328527 ] Hudson commented on HDFS-9049: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13509 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13509/]) HDFS-9049. Make Datanode Netty reverse proxy port to be configurable. (vinayakumarb: rev 09efdfe9e13c9695867ce4034aa6ec970c2032f1) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java > Make Datanode Netty reverse proxy port to be configurable > - > > Key: HDFS-9049 > URL: https://issues.apache.org/jira/browse/HDFS-9049 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Vinayakumar B >Assignee: Vinayakumar B >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-9049-01.patch, HDFS-9049-02.patch, > HDFS-9049-03.patch, HDFS-9049-04.patch > > > In DatanodeHttpServer.java Netty is used as reverse proxy. But uses random > port to start with binding to localhost. This port can be made configurable > for better deployments. > {code} > HttpServer2.Builder builder = new HttpServer2.Builder() > .setName("datanode") > .setConf(confForInfoServer) > .setACL(new AccessControlList(conf.get(DFS_ADMIN, " "))) > .hostName(getHostnameForSpnegoPrincipal(confForInfoServer)) > .addEndpoint(URI.create("http://localhost:0";)) > .setFindPort(true); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
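The shape of the change — replacing the hard-coded `http://localhost:0` (port 0 means "pick any free ephemeral port") with a configurable value defaulting to 0 for backward compatibility — can be sketched as follows. The configuration key name and `ProxyPortSketch` class below are illustrative assumptions, not necessarily what the committed patch adds to {{DFSConfigKeys}}.

```java
import java.net.URI;
import java.util.Map;

// Sketch of the change's shape (not the actual patch): read the
// reverse-proxy port from configuration instead of hard-coding 0,
// keeping 0 as the backward-compatible "any free port" default.
public class ProxyPortSketch {
    // Hypothetical key name for illustration only.
    static final String PROXY_PORT_KEY = "dfs.datanode.netty.proxy.port";
    static final int PROXY_PORT_DEFAULT = 0; // 0 = bind an ephemeral port

    static URI proxyEndpoint(Map<String, String> conf) {
        String v = conf.getOrDefault(PROXY_PORT_KEY,
                String.valueOf(PROXY_PORT_DEFAULT));
        int port = Integer.parseInt(v);
        // This URI is what would be passed to addEndpoint(...) in the
        // HttpServer2.Builder chain quoted above.
        return URI.create("http://localhost:" + port);
    }
}
```

A fixed port makes the proxy predictable for firewall rules and monitoring, which is the "better deployments" motivation stated in the issue.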
[jira] [Updated] (HDFS-8693) refreshNamenodes does not support adding a new standby to a running DN
[ https://issues.apache.org/jira/browse/HDFS-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-8693: --- Fix Version/s: 3.0.1 2.9.1 2.10.0 3.1.0 > refreshNamenodes does not support adding a new standby to a running DN > -- > > Key: HDFS-8693 > URL: https://issues.apache.org/jira/browse/HDFS-8693 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, ha >Affects Versions: 2.6.0 >Reporter: Jian Fang >Assignee: Ajith S >Priority: Critical > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-8693.02.patch, HDFS-8693.03.patch, HDFS-8693.1.patch > > > I tried to run the following command on a Hadoop 2.6.0 cluster with HA > support > $ hdfs dfsadmin -refreshNamenodes datanode-host:port > to refresh name nodes on data nodes after I replaced one name node with a new > one so that I don't need to restart the data nodes. However, I got the > following error: > refreshNamenodes: HA does not currently support adding a new standby to a > running DN. Please do a rolling restart of DNs to reconfigure the list of NNs. > I checked the 2.6.0 code and the error was thrown by the following code > snippet, which led me to this JIRA. > void refreshNNList(ArrayList addrs) throws IOException { > Set oldAddrs = Sets.newHashSet(); > for (BPServiceActor actor : bpServices) > { oldAddrs.add(actor.getNNSocketAddress()); } > Set newAddrs = Sets.newHashSet(addrs); > if (!Sets.symmetricDifference(oldAddrs, newAddrs).isEmpty()) > { // Keep things simple for now -- we can implement this at a later date. > throw new IOException( "HA does not currently support adding a new standby to > a running DN. " + "Please do a rolling restart of DNs to reconfigure the list > of NNs."); } > } > Looks like this the refreshNameNodes command is an uncompleted feature. > Unfortunately, the new name node on a replacement is critical for auto > provisioning a hadoop cluster with HDFS HA support. 
> Without this support, the HA feature cannot really be used. I also observed
> that the new standby name node on the replacement instance could get stuck in
> safe mode because no data nodes check in with it. Even with a rolling
> restart, it may take quite some time to restart all data nodes on a big
> cluster, for example one with 4000 data nodes; moreover, restarting DNs is
> far too intrusive and not a preferable operation in production. It also
> increases the chance of a double failure, because the standby name node is
> not really ready for a failover in the case that the current active name
> node fails.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
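The symmetric-difference gate quoted above can be shown as a standalone sketch (plain `java.util` sets instead of Guava, and strings instead of `InetSocketAddress`; `RefreshGate` is an illustrative name, not HDFS code). Any change at all between the old and new NN address sets — additions or removals — makes the refresh fail:

```java
import java.util.HashSet;
import java.util.Set;

// Standalone sketch of the check in refreshNNList: a refresh is only
// accepted when the old and new NN address sets are identical, i.e.
// their symmetric difference is empty.
public class RefreshGate {
    static boolean isRefreshAccepted(Set<String> oldAddrs, Set<String> newAddrs) {
        Set<String> removed = new HashSet<>(oldAddrs);
        removed.removeAll(newAddrs);        // addresses the refresh would drop
        Set<String> added = new HashSet<>(newAddrs);
        added.removeAll(oldAddrs);          // addresses the refresh would add
        removed.addAll(added);              // symmetric difference of the two sets
        return removed.isEmpty();           // any change at all is rejected
    }

    public static void main(String[] args) {
        Set<String> running = Set.of("nn1:8020", "nn2:8020");
        // Identical set: the no-op refresh passes.
        System.out.println(isRefreshAccepted(running, Set.of("nn1:8020", "nn2:8020")));
        // Replacing nn2 with nn3: non-empty symmetric difference, so the DN
        // would throw the "HA does not currently support..." IOException.
        System.out.println(isRefreshAccepted(running, Set.of("nn1:8020", "nn3:8020")));
    }
}
```

This is why swapping a single standby NN trips the check even though the rest of the list is unchanged: the replacement address shows up on the "added" side and the old one on the "removed" side.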
[jira] [Updated] (HDFS-9049) Make Datanode Netty reverse proxy port to be configurable
[ https://issues.apache.org/jira/browse/HDFS-9049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinayakumar B updated HDFS-9049:
--------------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 2.8.4
                   3.0.1
                   2.9.1
                   2.10.0
                   3.1.0
           Status: Resolved (was: Patch Available)

Thanks [~brahmareddy] for the reviews. Committed to trunk, branch-3, branch-2, branch-2.9 and branch-2.8.

> Make Datanode Netty reverse proxy port to be configurable
> ---------------------------------------------------------
>
>                 Key: HDFS-9049
>                 URL: https://issues.apache.org/jira/browse/HDFS-9049
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Vinayakumar B
>            Assignee: Vinayakumar B
>            Priority: Major
>             Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4
>
>         Attachments: HDFS-9049-01.patch, HDFS-9049-02.patch,
> HDFS-9049-03.patch, HDFS-9049-04.patch
>
> In DatanodeHttpServer.java, Netty is used as a reverse proxy, but it binds
> to localhost on a random port. This port can be made configurable for better
> deployments.
> {code}
> HttpServer2.Builder builder = new HttpServer2.Builder()
>     .setName("datanode")
>     .setConf(confForInfoServer)
>     .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
>     .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
>     .addEndpoint(URI.create("http://localhost:0"))
>     .setFindPort(true);
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
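The shape of the fix — read a port from configuration, defaulting to `0` so the old pick-an-ephemeral-port behavior is preserved — can be sketched as follows. This is not the committed HDFS-9049 patch: the configuration key name and the plain `Map`-backed config here are assumptions for illustration, and the committed key may differ.

```java
import java.util.Map;

// Sketch: resolve the DN internal-proxy endpoint from configuration.
// A default of 0 keeps the previous behavior (the OS picks a free port);
// setting the key pins the proxy to a fixed port for firewalled deployments.
public class ProxyPortConfig {
    // Assumed key name for illustration, not necessarily the committed one.
    static final String PROXY_PORT_KEY = "dfs.datanode.http.internal-proxy.port";
    static final int PROXY_PORT_DEFAULT = 0; // 0 = ephemeral port, old behavior

    static String proxyEndpoint(Map<String, String> conf) {
        int port = Integer.parseInt(
            conf.getOrDefault(PROXY_PORT_KEY, String.valueOf(PROXY_PORT_DEFAULT)));
        return "http://localhost:" + port;
    }

    public static void main(String[] args) {
        System.out.println(proxyEndpoint(Map.of()));                  // default: port 0
        System.out.println(proxyEndpoint(Map.of(PROXY_PORT_KEY, "50077")));
    }
}
```

In the real DatanodeHttpServer the resolved endpoint would feed `addEndpoint(URI.create(...))` in the builder chain quoted above, with `setFindPort(true)` still allowing fallback when the configured port is taken.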
[jira] [Commented] (HDFS-13024) Ozone: ContainerStateMachine should synchronize operations between createContainer op and writeChunk
[ https://issues.apache.org/jira/browse/HDFS-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328509#comment-16328509 ]

genericqa commented on HDFS-13024:
----------------------------------

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 10m 56s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || HDFS-7240 Compile Tests ||
| +1 | mvninstall | 24m 11s | HDFS-7240 passed |
| +1 | compile | 1m 6s | HDFS-7240 passed |
| +1 | checkstyle | 0m 46s | HDFS-7240 passed |
| +1 | mvnsite | 1m 15s | HDFS-7240 passed |
| +1 | shadedclient | 14m 24s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 1s | HDFS-7240 passed |
| +1 | javadoc | 1m 16s | HDFS-7240 passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 22s | the patch passed |
| +1 | compile | 1m 23s | the patch passed |
| +1 | javac | 1m 23s | the patch passed |
| +1 | checkstyle | 0m 51s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 0 unchanged - 2 fixed = 0 total (was 2) |
| +1 | mvnsite | 1m 29s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 32s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 56s | the patch passed |
| +1 | javadoc | 1m 14s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 57m 27s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 25s | The patch does not generate ASF License warnings. |
| | | 137m 21s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
| | hadoop.cblock.TestCBlockReadWrite |
| | hadoop.hdfs.TestListFilesInDFS |
| | hadoop.cblock.TestBufferManager |
| | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
| | hadoop.hdfs.server.balancer.TestBalancer |
| | hadoop.hdfs.TestClientReportBadBlock |
| | hadoop.ozone.container.common.impl.TestContainerPersistence |
| | hadoop.ozone.scm.TestSCMCli |
| | hadoop.hdfs.TestLeaseRecovery |
| | hadoop.hdfs.TestDFSClientRetries |
| | hadoop.hdfs.TestFileAppend3 |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |
| | hadoop.fs.viewfs.TestViewFileSystemLinkFallback |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-13024 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906348/HDFS-13024-HDFS-7240.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 8fda4a635474 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | mave
[jira] [Commented] (HDFS-12911) [SPS]: Modularize the SPS code and expose necessary interfaces for external/internal implementations.
[ https://issues.apache.org/jira/browse/HDFS-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328428#comment-16328428 ]

Rakesh R commented on HDFS-12911:
---------------------------------
Thanks [~umamaheswararao] for the patch. Adding a few comments:
# Please update the javadoc of the {{StoragePolicySatisfier}} class; it now describes only the internal case. It would be good to make it generic for both cases.
{code}
Here Namenode
 * will pick the file blocks which are expecting to change its storages
{code}
# Make spsPaths final - {{private final SPSPathIds spsPaths;}}. Also, can you keep this attribute after {{private boolean spsEnabled;}}? This will help in rebasing. Thanks!
# How about keeping the block move logic inside the IntraSPSNameNodeBlockMoveTaskHandler class instead of BlockManager.java? Just a thought, to keep the changes in BlockManager minimal.
{code}
/**
 * Assigns the block movement task to the target datanode.
 */
@Override
public void submitMoveTask(BlockMovingInfo blkMovingInfo,
    BlockMovementListener blockMoveCompletionListener) throws IOException {
  namesystem.readLock();
  try {
    DatanodeDescriptor dn = datanodeManager
        .getDatanode(blkMovingInfo.getTarget().getDatanodeUuid());
    if (dn == null) {
      throw new IOException("Failed to schedule block movement task: "
          + blkMovingInfo + " as target datanode: "
          + blkMovingInfo.getTarget() + " doesn't exist");
    }
    dn.addBlocksToMoveStorage(blkMovingInfo);
    dn.incrementBlocksScheduled(blkMovingInfo.getTargetStorageType());
  } finally {
    namesystem.readUnlock();
  }
}
{code}
# Add {{@InterfaceAudience.Private}} to BlockMovementListener, SPSPathIds and ItemInfo.
# Typo: {{pathIDProcesor}} => {{pathIDProcessor}}

> [SPS]: Modularize the SPS code and expose necessary interfaces for
> external/internal implementations.
> ------------------------------------------------------------------
>
>                 Key: HDFS-12911
>                 URL: https://issues.apache.org/jira/browse/HDFS-12911
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, namenode
>            Reporter: Uma Maheswara Rao G
>            Assignee: Uma Maheswara Rao G
>            Priority: Major
>         Attachments: HDFS-12911-HDFS-10285-01.patch,
> HDFS-12911-HDFS-10285-02.patch, HDFS-12911-HDFS-10285-03.patch,
> HDFS-12911.00.patch
>
> One of the key comments from the discussions was to modularize the SPS code,
> so we can easily plug in external/internal implementations. This JIRA is for
> doing the necessary refactoring.
> Other comments to handle, from Daryn:
> # The lock should not be kept while executing the placement policy.
> - handled by HDFS-12982
> # While starting up the NN, the SPS Xattrs checks happen even if the feature
> is disabled. This could potentially impact the startup speed.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
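The lock discipline in the suggested submitMoveTask snippet — validate the target and schedule under the read lock, and release the lock in a finally block so it is never leaked on the error path — can be shown outside HDFS with a plain ReentrantReadWriteLock. All class and method names here are illustrative, not HDFS APIs:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the readLock/try/finally pattern from the review comment:
// the lock is released on every exit path, including the IOException
// thrown when the target datanode is unknown.
public class MoveTaskSketch {
    private final ReentrantReadWriteLock namesystemLock = new ReentrantReadWriteLock();
    private final Set<String> liveDatanodes = new HashSet<>();
    private final List<String> scheduled = new ArrayList<>();

    void registerDatanode(String uuid) {
        liveDatanodes.add(uuid);
    }

    void submitMoveTask(String targetUuid, String block) throws IOException {
        namesystemLock.readLock().lock();
        try {
            if (!liveDatanodes.contains(targetUuid)) {
                // Mirrors the "target datanode ... doesn't exist" failure above.
                throw new IOException("Failed to schedule block movement task: "
                    + block + ", target datanode " + targetUuid + " doesn't exist");
            }
            scheduled.add(block + "->" + targetUuid);
        } finally {
            // Released even when the validation above throws.
            namesystemLock.readLock().unlock();
        }
    }

    int scheduledCount() {
        return scheduled.size();
    }
}
```

Keeping this logic in a dedicated task-handler class, as the comment suggests, also keeps the lock scope local to the handler rather than spreading it through BlockManager.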
[jira] [Assigned] (HDFS-12911) [SPS]: Modularize the SPS code and expose necessary interfaces for external/internal implementations.
[ https://issues.apache.org/jira/browse/HDFS-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rakesh R reassigned HDFS-12911:
-------------------------------
    Assignee: Uma Maheswara Rao G (was: Rakesh R)

> [SPS]: Modularize the SPS code and expose necessary interfaces for
> external/internal implementations.
> ------------------------------------------------------------------
>
>                 Key: HDFS-12911
>                 URL: https://issues.apache.org/jira/browse/HDFS-12911
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, namenode
>            Reporter: Uma Maheswara Rao G
>            Assignee: Uma Maheswara Rao G
>            Priority: Major
>         Attachments: HDFS-12911-HDFS-10285-01.patch,
> HDFS-12911-HDFS-10285-02.patch, HDFS-12911-HDFS-10285-03.patch,
> HDFS-12911.00.patch
>
> One of the key comments from the discussions was to modularize the SPS code,
> so we can easily plug in external/internal implementations. This JIRA is for
> doing the necessary refactoring.
> Other comments to handle, from Daryn:
> # The lock should not be kept while executing the placement policy.
> - handled by HDFS-12982
> # While starting up the NN, the SPS Xattrs checks happen even if the feature
> is disabled. This could potentially impact the startup speed.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org