[jira] [Commented] (HDFS-14150) RBF: Quotas of the sub-cluster should be removed when removing the mount point
[ https://issues.apache.org/jira/browse/HDFS-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16721011#comment-16721011 ]

Takanobu Asanuma commented on HDFS-14150:
-----------------------------------------

Thanks for the comment, [~brahmareddy] and [~ayushtkn]. Indeed, it is safer.
* "hdfs dfsrouteradmin -rm" should not remove the quota
* "hdfs dfsrouteradmin -add" should sync the existing quota.

[~linyiqun] How does that look?

> RBF: Quotas of the sub-cluster should be removed when removing the mount point
> ------------------------------------------------------------------------------
>
>             Key: HDFS-14150
>             URL: https://issues.apache.org/jira/browse/HDFS-14150
>         Project: Hadoop HDFS
>      Issue Type: Improvement
>        Reporter: Takanobu Asanuma
>        Assignee: Takanobu Asanuma
>        Priority: Major
>          Labels: RBF
>
> From HDFS-14143
> {noformat}
> $ hdfs dfsrouteradmin -add /ns1_data ns1 /data
> $ hdfs dfsrouteradmin -setQuota /ns1_data -nsQuota 10 -ssQuota 10
> $ hdfs dfsrouteradmin -ls /ns1_data
> Source       Destinations   Owner      Group   Mode        Quota/Usage
> /ns1_data    ns1->/data     tasanuma   users   rwxr-xr-x   [NsQuota: 10/1, SsQuota: 10 B/0 B]
> $ hdfs dfsrouteradmin -rm /ns1_data
> $ hdfs dfsrouteradmin -add /ns1_data ns1 /data
> $ hdfs dfsrouteradmin -ls /ns1_data
> Source       Destinations   Owner      Group   Mode        Quota/Usage
> /ns1_data    ns1->/data     tasanuma   users   rwxr-xr-x   [NsQuota: -/-, SsQuota: -/-]
> $ hadoop fs -put file1 /ns1_data/file1
> put: The DiskSpace quota of /data is exceeded: quota = 10 B = 10 B but diskspace consumed = 402653184 B = 384 MB
> {noformat}
> This is because the quotas of the subclusters still remain after "hdfs dfsrouteradmin -rm",
> and "hdfs dfsrouteradmin -add" doesn't reflect the existing quotas.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
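[Editorial note] The two bullets above ("-rm" retains the sub-cluster quota, "-add" syncs it back into the mount table) can be sketched as a toy model. Plain Java maps stand in for the mount table and the sub-cluster quota state; every class and method name here is illustrative, not the actual RBF Router code.

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model of the proposed quota handling; names are illustrative, not real Router classes. */
public class MountQuotaSketch {
    static final long UNSET = -1; // stands in for an "no quota set" marker

    // quota actually set on the sub-cluster directory, keyed by "ns:path"
    final Map<String, Long> subClusterNsQuota = new HashMap<>();
    // the mount table's view of the quota, keyed by mount point
    final Map<String, Long> mountNsQuota = new HashMap<>();

    void setQuota(String mount, String dest, long nsQuota) {
        mountNsQuota.put(mount, nsQuota);
        subClusterNsQuota.put(dest, nsQuota); // the quota is pushed down to the sub-cluster
    }

    /** Proposed: -rm drops the mount entry but retains the sub-cluster quota. */
    void rm(String mount) {
        mountNsQuota.remove(mount);
    }

    /** Proposed: -add syncs any quota already present on the destination into the mount table. */
    void add(String mount, String dest) {
        Long existing = subClusterNsQuota.get(dest);
        mountNsQuota.put(mount, existing != null ? existing : UNSET);
    }

    public static void main(String[] args) {
        MountQuotaSketch r = new MountQuotaSketch();
        r.add("/ns1_data", "ns1:/data");
        r.setQuota("/ns1_data", "ns1:/data", 10L);
        r.rm("/ns1_data");
        r.add("/ns1_data", "ns1:/data");
        // instead of the misleading "NsQuota: -/-" from the bug report, the
        // re-added mount point reflects the quota still enforced by the sub-cluster
        System.out.println(r.mountNsQuota.get("/ns1_data")); // prints 10
    }
}
```

With this behavior the failed `-put` in the report above would at least be explicable from `-ls` output, since the surviving sub-cluster quota is visible again after re-adding the mount point.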
[jira] [Commented] (HDFS-14150) RBF: Quotas of the sub-cluster should be removed when removing the mount point
[ https://issues.apache.org/jira/browse/HDFS-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720999#comment-16720999 ]

Ayush Saxena commented on HDFS-14150:
-------------------------------------

Thanks [~tasanuma0829] for taking this up.

From the discussion at HDFS-14143:
bq. Just unset sub-filesystem's quota when removing mount tables.
That is certainly a probable solution. But consider the case where the directory we are mounting already has a quota set: shouldn't we sync up with that too? Perhaps while creating the entry we could check the quotas of the destination and copy them in. I guess that would solve our existing use case too.

> RBF: Quotas of the sub-cluster should be removed when removing the mount point
> ------------------------------------------------------------------------------
>
>             Key: HDFS-14150
>             URL: https://issues.apache.org/jira/browse/HDFS-14150
>         Project: Hadoop HDFS
>      Issue Type: Improvement
>        Reporter: Takanobu Asanuma
>        Assignee: Takanobu Asanuma
>        Priority: Major
>          Labels: RBF
>
> From HDFS-14143
> {noformat}
> $ hdfs dfsrouteradmin -add /ns1_data ns1 /data
> $ hdfs dfsrouteradmin -setQuota /ns1_data -nsQuota 10 -ssQuota 10
> $ hdfs dfsrouteradmin -ls /ns1_data
> Source       Destinations   Owner      Group   Mode        Quota/Usage
> /ns1_data    ns1->/data     tasanuma   users   rwxr-xr-x   [NsQuota: 10/1, SsQuota: 10 B/0 B]
> $ hdfs dfsrouteradmin -rm /ns1_data
> $ hdfs dfsrouteradmin -add /ns1_data ns1 /data
> $ hdfs dfsrouteradmin -ls /ns1_data
> Source       Destinations   Owner      Group   Mode        Quota/Usage
> /ns1_data    ns1->/data     tasanuma   users   rwxr-xr-x   [NsQuota: -/-, SsQuota: -/-]
> $ hadoop fs -put file1 /ns1_data/file1
> put: The DiskSpace quota of /data is exceeded: quota = 10 B = 10 B but diskspace consumed = 402653184 B = 384 MB
> {noformat}
> This is because the quotas of the subclusters still remain after "hdfs dfsrouteradmin -rm",
> and "hdfs dfsrouteradmin -add" doesn't reflect the existing quotas.
[jira] [Commented] (HDFS-13839) RBF: Add order information in dfsrouteradmin "-ls" command
[ https://issues.apache.org/jira/browse/HDFS-13839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16721005#comment-16721005 ]

Ayush Saxena commented on HDFS-13839:
-------------------------------------

[~ramkumar] Please take a look at the discussion at HDFS-13843.

> RBF: Add order information in dfsrouteradmin "-ls" command
> ----------------------------------------------------------
>
>             Key: HDFS-13839
>             URL: https://issues.apache.org/jira/browse/HDFS-13839
>         Project: Hadoop HDFS
>      Issue Type: Bug
>      Components: federation
>        Reporter: Soumyapn
>        Assignee: venkata ram kumar ch
>        Priority: Major
>          Labels: RBF
>     Attachments: HDFS-13839-001.patch
>
> Scenario:
> If we execute the hdfs dfsrouteradmin -ls command, the order information is not present.
> Example:
> ./hdfs dfsrouteradmin -ls /apps1
> With the above command, the Source, Destinations, Owner, Group, Mode, and Quota/Usage information is displayed, but there is no "order" information displayed with the "ls" command.
>
> Expected:
> The order information should be displayed with the -ls command so that users know which order is set.
[jira] [Comment Edited] (HDFS-14138) Description errors in the comparison logic of transaction ID
[ https://issues.apache.org/jira/browse/HDFS-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720993#comment-16720993 ]

xiangheng edited comment on HDFS-14138 at 12/14/18 7:01 AM:
------------------------------------------------------------

Hi [~csun], I'm sorry. I created this JIRA, HDFS-14138, three days ago, but it overlaps a little with your patch HDFS-14146. If you would like, I sincerely hope we can resolve the duplication together. Is that OK? I'm sorry, my English is not very good. Thank you very much.

was (Author: xiangheng):
hi,[~csun],I'm sorry,I created this JIRA HDFS-14138 three days ago,But have a little repetitions with your patch HDFS-14146,If you would like,I sincerely hope you can change the repetition with me ,Is that ok? I'm sorry,thank you very much.

> Description errors in the comparison logic of transaction ID
> -------------------------------------------------------------
>
>             Key: HDFS-14138
>             URL: https://issues.apache.org/jira/browse/HDFS-14138
>         Project: Hadoop HDFS
>      Issue Type: Bug
> Affects Versions: HDFS-12943
>        Reporter: xiangheng
>        Priority: Minor
>     Attachments: HDFS-14138-HDFS-12943.000.patch
>
> The call processing should be postponed until the client call's state id is aligned (<=) with the server state id, not >=.
[jira] [Commented] (HDFS-14150) RBF: Quotas of the sub-cluster should be removed when removing the mount point
[ https://issues.apache.org/jira/browse/HDFS-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720998#comment-16720998 ]

Brahma Reddy Battula commented on HDFS-14150:
---------------------------------------------

IMO, if this is a real use case, retaining the old values might be better?

> RBF: Quotas of the sub-cluster should be removed when removing the mount point
> ------------------------------------------------------------------------------
>
>             Key: HDFS-14150
>             URL: https://issues.apache.org/jira/browse/HDFS-14150
>         Project: Hadoop HDFS
>      Issue Type: Improvement
>        Reporter: Takanobu Asanuma
>        Assignee: Takanobu Asanuma
>        Priority: Major
>          Labels: RBF
>
> From HDFS-14143
> {noformat}
> $ hdfs dfsrouteradmin -add /ns1_data ns1 /data
> $ hdfs dfsrouteradmin -setQuota /ns1_data -nsQuota 10 -ssQuota 10
> $ hdfs dfsrouteradmin -ls /ns1_data
> Source       Destinations   Owner      Group   Mode        Quota/Usage
> /ns1_data    ns1->/data     tasanuma   users   rwxr-xr-x   [NsQuota: 10/1, SsQuota: 10 B/0 B]
> $ hdfs dfsrouteradmin -rm /ns1_data
> $ hdfs dfsrouteradmin -add /ns1_data ns1 /data
> $ hdfs dfsrouteradmin -ls /ns1_data
> Source       Destinations   Owner      Group   Mode        Quota/Usage
> /ns1_data    ns1->/data     tasanuma   users   rwxr-xr-x   [NsQuota: -/-, SsQuota: -/-]
> $ hadoop fs -put file1 /ns1_data/file1
> put: The DiskSpace quota of /data is exceeded: quota = 10 B = 10 B but diskspace consumed = 402653184 B = 384 MB
> {noformat}
> This is because the quotas of the subclusters still remain after "hdfs dfsrouteradmin -rm",
> and "hdfs dfsrouteradmin -add" doesn't reflect the existing quotas.
[jira] [Commented] (HDFS-14138) Description errors in the comparison logic of transaction ID
[ https://issues.apache.org/jira/browse/HDFS-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720993#comment-16720993 ]

xiangheng commented on HDFS-14138:
----------------------------------

Hi [~csun], I'm sorry. I created this JIRA three days ago, but it overlaps a little with your patch HDFS-14146. If you would like, I sincerely hope we can resolve the duplication together. Is that OK? Thank you very much.

> Description errors in the comparison logic of transaction ID
> -------------------------------------------------------------
>
>             Key: HDFS-14138
>             URL: https://issues.apache.org/jira/browse/HDFS-14138
>         Project: Hadoop HDFS
>      Issue Type: Bug
> Affects Versions: HDFS-12943
>        Reporter: xiangheng
>        Priority: Minor
>     Attachments: HDFS-14138-HDFS-12943.000.patch
>
> The call processing should be postponed until the client call's state id is aligned (<=) with the server state id, not >=.
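[Editorial note] The fix the description asks for is the direction of the comparison: a call should be processed only once the client's state id is <= the server's, i.e. once the server has caught up. A minimal sketch of that alignment check, with illustrative names rather than the actual Hadoop RPC classes:

```java
/** Illustrative sketch of client/server state-id alignment; not the actual Hadoop RPC code. */
public class StateIdAlignment {

    /** The call may proceed once the server has reached the client's observed state id. */
    static boolean isAligned(long clientStateId, long serverStateId) {
        // processing is postponed while clientStateId > serverStateId
        return clientStateId <= serverStateId;
    }

    public static void main(String[] args) {
        System.out.println(isAligned(100, 99));  // false: server is behind, postpone the call
        System.out.println(isAligned(100, 100)); // true: aligned, process the call
        System.out.println(isAligned(100, 101)); // true: server is ahead, also safe to process
    }
}
```

Reversing the operator (>=) would do the opposite of what consistent reads need: it would let a call through while the server is still behind the state the client has already observed.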
[jira] [Created] (HDFS-14150) RBF: Quotas of the sub-cluster should be removed when removing the mount point
Takanobu Asanuma created HDFS-14150:
---------------------------------------

             Summary: RBF: Quotas of the sub-cluster should be removed when removing the mount point
                 Key: HDFS-14150
                 URL: https://issues.apache.org/jira/browse/HDFS-14150
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Takanobu Asanuma
            Assignee: Takanobu Asanuma

From HDFS-14143
{noformat}
$ hdfs dfsrouteradmin -add /ns1_data ns1 /data
$ hdfs dfsrouteradmin -setQuota /ns1_data -nsQuota 10 -ssQuota 10
$ hdfs dfsrouteradmin -ls /ns1_data
Source       Destinations   Owner      Group   Mode        Quota/Usage
/ns1_data    ns1->/data     tasanuma   users   rwxr-xr-x   [NsQuota: 10/1, SsQuota: 10 B/0 B]
$ hdfs dfsrouteradmin -rm /ns1_data
$ hdfs dfsrouteradmin -add /ns1_data ns1 /data
$ hdfs dfsrouteradmin -ls /ns1_data
Source       Destinations   Owner      Group   Mode        Quota/Usage
/ns1_data    ns1->/data     tasanuma   users   rwxr-xr-x   [NsQuota: -/-, SsQuota: -/-]
$ hadoop fs -put file1 /ns1_data/file1
put: The DiskSpace quota of /data is exceeded: quota = 10 B = 10 B but diskspace consumed = 402653184 B = 384 MB
{noformat}
This is because the quotas of the subclusters still remain after "hdfs dfsrouteradmin -rm",
and "hdfs dfsrouteradmin -add" doesn't reflect the existing quotas.
[jira] [Commented] (HDFS-14143) RBF: After clrQuota mount point is not allowing to create new files
[ https://issues.apache.org/jira/browse/HDFS-14143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720989#comment-16720989 ]

Takanobu Asanuma commented on HDFS-14143:
-----------------------------------------

I filed it as HDFS-14150.

> RBF: After clrQuota mount point is not allowing to create new files
> --------------------------------------------------------------------
>
>             Key: HDFS-14143
>             URL: https://issues.apache.org/jira/browse/HDFS-14143
>         Project: Hadoop HDFS
>      Issue Type: Bug
> Affects Versions: 3.1.1
>        Reporter: Harshakiran Reddy
>        Assignee: Takanobu Asanuma
>        Priority: Major
>          Labels: RBF
>
> {noformat}
> bin> ./hdfs dfsrouteradmin -setQuota /src10 -nsQuota 3
> Successfully set quota for mount point /src10
> bin> ./hdfs dfsrouteradmin -clrQuota /src10
> Successfully clear quota for mount point /src10
> bin> ./hdfs dfs -put harsha /dest10/file1
> bin> ./hdfs dfs -put harsha /dest10/file2
> bin> ./hdfs dfs -put harsha /dest10/file3
> put: The NameSpace quota (directories and files) of directory /dest10 is exceeded: quota=3 file count=4
> bin> ./hdfs dfsrouteradmin -ls /src10
> Mount Table Entries:
> Source    Destinations         Owner   Group    Mode        Quota/Usage
> /src10    hacluster->/dest10   hdfs    hadoop   rwxr-xr-x   [NsQuota: -/-, SsQuota: -/-]
> bin>
> {noformat}
[jira] [Updated] (HDFS-14116) ObserverReadProxyProvider should work with protocols other than ClientProtocol
[ https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Sun updated HDFS-14116:
----------------------------

Summary: ObserverReadProxyProvider should work with protocols other than ClientProtocol (was: Fix a potential class cast error in ObserverReadProxyProvider)

> ObserverReadProxyProvider should work with protocols other than ClientProtocol
> ------------------------------------------------------------------------------
>
>             Key: HDFS-14116
>             URL: https://issues.apache.org/jira/browse/HDFS-14116
>         Project: Hadoop HDFS
>      Issue Type: Sub-task
>      Components: hdfs-client
>        Reporter: Chen Liang
>        Assignee: Chao Sun
>        Priority: Major
>     Attachments: HDFS-14116-HDFS-12943.000.patch, HDFS-14116-HDFS-12943.001.patch, HDFS-14116-HDFS-12943.002.patch, HDFS-14116-HDFS-12943.003.patch, HDFS-14116-HDFS-12943.004.patch
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause a failure, because it is possible that the factory cannot be cast here. Specifically, {{NameNodeProxiesClient.createFailoverProxyProvider}} is where the constructor will be called, and there are two paths that could call into this:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}}. This happens when, for example, running NNThroughputBenchmark. To fix this we can at least:
> 1. introduce setAlignmentContext to HAProxyFactory, which is the parent of both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only setAlignmentContext when it is ClientHAProxyFactory by, say, adding an if check with reflection,
> depending on whether it makes sense to have an alignment context for the code paths in case (1).
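[Editorial note] Of the two options listed in the description, option 2 (guard the cast) is the smaller change. A sketch with stand-in classes mirroring the hierarchy described above; these are illustrative types, not the real Hadoop factories, and this does not claim to be the fix that was ultimately committed:

```java
/** Stand-in hierarchy mirroring option 2 from the issue description; illustrative only. */
public class ProxyFactorySketch {
    interface HAProxyFactory {} // parent of both factories

    static class ClientHAProxyFactory implements HAProxyFactory {
        Object alignmentContext;
        void setAlignmentContext(Object ctx) { this.alignmentContext = ctx; }
    }

    static class NameNodeHAProxyFactory implements HAProxyFactory {} // has no setter

    /** Option 2: only set the context when the factory actually supports it. */
    static void maybeSetAlignmentContext(HAProxyFactory factory, Object ctx) {
        if (factory instanceof ClientHAProxyFactory) {
            ((ClientHAProxyFactory) factory).setAlignmentContext(ctx);
        }
        // a NameNodeHAProxyFactory simply skips the call instead of
        // throwing ClassCastException, as the unconditional cast did
    }

    public static void main(String[] args) {
        Object ctx = new Object();
        maybeSetAlignmentContext(new ClientHAProxyFactory(), ctx);   // context is set
        maybeSetAlignmentContext(new NameNodeHAProxyFactory(), ctx); // no longer throws
        System.out.println("no ClassCastException");
    }
}
```

Option 1 (moving setAlignmentContext up to the HAProxyFactory parent) removes the cast entirely and is arguably cleaner, at the cost of widening the parent interface for a factory that may not need the context.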
[jira] [Commented] (HDFS-14143) RBF: After clrQuota mount point is not allowing to create new files
[ https://issues.apache.org/jira/browse/HDFS-14143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720977#comment-16720977 ]

Takanobu Asanuma commented on HDFS-14143:
-----------------------------------------

[~linyiqun] Sure, thanks for your kindness! I'll create another JIRA.

> RBF: After clrQuota mount point is not allowing to create new files
> --------------------------------------------------------------------
>
>             Key: HDFS-14143
>             URL: https://issues.apache.org/jira/browse/HDFS-14143
>         Project: Hadoop HDFS
>      Issue Type: Bug
> Affects Versions: 3.1.1
>        Reporter: Harshakiran Reddy
>        Assignee: Takanobu Asanuma
>        Priority: Major
>          Labels: RBF
>
> {noformat}
> bin> ./hdfs dfsrouteradmin -setQuota /src10 -nsQuota 3
> Successfully set quota for mount point /src10
> bin> ./hdfs dfsrouteradmin -clrQuota /src10
> Successfully clear quota for mount point /src10
> bin> ./hdfs dfs -put harsha /dest10/file1
> bin> ./hdfs dfs -put harsha /dest10/file2
> bin> ./hdfs dfs -put harsha /dest10/file3
> put: The NameSpace quota (directories and files) of directory /dest10 is exceeded: quota=3 file count=4
> bin> ./hdfs dfsrouteradmin -ls /src10
> Mount Table Entries:
> Source    Destinations         Owner   Group    Mode        Quota/Usage
> /src10    hacluster->/dest10   hdfs    hadoop   rwxr-xr-x   [NsQuota: -/-, SsQuota: -/-]
> bin>
> {noformat}
[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.
[ https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720971#comment-16720971 ]

Yiqun Lin commented on HDFS-13443:
----------------------------------

{quote}After expireAfterWrite time cache entries are marked removed but actually not removed. Scheduler is added to forcefully remove entries after expiry time so that connection can be closed.
{quote}
Expired entries are not only removed by manually calling the cleanup function; they can also be evicted on each cache modification or on occasional cache accesses. Please see the javadoc of {{CacheBuilder}}:
{quote}If expireAfterWrite or expireAfterAccess is requested entries may be evicted on each cache modification, on occasional cache accesses, or on calls to Cache.cleanUp. Expired entries may be counted in Cache.size, but will never be visible to read or write operations.
{quote}

> RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.
> -------------------------------------------------------------------------------------------------
>
>             Key: HDFS-13443
>             URL: https://issues.apache.org/jira/browse/HDFS-13443
>         Project: Hadoop HDFS
>      Issue Type: Sub-task
>      Components: fs
>        Reporter: Mohammad Arshad
>        Assignee: Mohammad Arshad
>        Priority: Major
>          Labels: RBF
>     Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, HDFS-13443-017.patch, HDFS-13443-HDFS-13891-001.patch, HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, HDFS-13443.010.patch, HDFS-13443.011.patch
>
> Currently the mount table cache is updated periodically; by default the cache is updated every minute. After a change in the mount table, user operations may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
> * *Add a refresh API in MountTableManager which will update the mount table cache.*
> * *When there is a change in mount table entries, the router admin server can update its cache and ask other routers to update their caches*. For example, if there are three routers R1, R2, R3 in a cluster, then the add-mount-table-entry API, at the admin server side, will perform the following sequence of actions:
> ## user submits an add mount table entry request on R1
> ## R1 adds the mount table entry in the state store
> ## R1 calls the refresh API on R2
> ## R1 calls the refresh API on R3
> ## R1 directly refreshes its own cache
> ## the add mount table entry response is sent back to the user.
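[Editorial note] The point made in the CacheBuilder javadoc quoted above is that expiry is lazy: an expired entry becomes invisible to reads immediately, but the entry (and whatever it holds, such as a connection) lingers until some cache activity or an explicit cleanUp evicts it. That is why a scheduler that forces cleanup was needed. The same lazy-vs-forced distinction can be shown in plain Java (no Guava dependency; all names are illustrative):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

/** Toy expire-after-write cache illustrating lazy expiry vs forced eviction. */
public class LazyExpiryCache<K, V> {
    private final long ttlMillis;
    private final Map<K, Long> writeTime = new HashMap<>();
    private final Map<K, V> data = new HashMap<>();
    private long now = 0; // fake clock so the example is deterministic

    LazyExpiryCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    void tick(long millis) { now += millis; }

    void put(K k, V v) { data.put(k, v); writeTime.put(k, now); }

    /** Reads hide expired entries, but the entry still occupies memory. */
    V get(K k) {
        Long t = writeTime.get(k);
        if (t == null || now - t >= ttlMillis) return null;
        return data.get(k);
    }

    int rawSize() { return data.size(); } // counts expired-but-not-yet-evicted entries

    /** What a scheduled cleanup forces: actually drop the expired entries. */
    void cleanUp() {
        for (Iterator<Map.Entry<K, Long>> it = writeTime.entrySet().iterator(); it.hasNext();) {
            Map.Entry<K, Long> e = it.next();
            if (now - e.getValue() >= ttlMillis) { data.remove(e.getKey()); it.remove(); }
        }
    }

    public static void main(String[] args) {
        LazyExpiryCache<String, String> c = new LazyExpiryCache<>(10);
        c.put("conn", "open");
        c.tick(20);                        // the entry is now expired...
        System.out.println(c.get("conn")); // null: invisible to reads
        System.out.println(c.rawSize());   // 1: but still held (delayed removal)
        c.cleanUp();                       // forced eviction, as the scheduler does
        System.out.println(c.rawSize());   // 0
    }
}
```

In Guava's terms, `get` here plays the role of the "never visible to read or write operations" guarantee, while `cleanUp` plays the role of `Cache.cleanUp` invoked from the scheduler so that held connections are closed promptly rather than whenever eviction happens to run.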
[jira] [Comment Edited] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.
[ https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720971#comment-16720971 ]

Yiqun Lin edited comment on HDFS-13443 at 12/14/18 6:10 AM:
------------------------------------------------------------

{quote}After expireAfterWrite time cache entries are marked removed but actually not removed. Scheduler is added to forcefully remove entries after expiry time so that connection can be closed.
{quote}
Expired entries are not only removed by manually calling the cleanup function; they can also be evicted on each cache modification or on occasional cache accesses. Please see the javadoc of {{CacheBuilder}}:
{quote}If expireAfterWrite or expireAfterAccess is requested entries may be evicted on each cache modification, on occasional cache accesses, or on calls to Cache.cleanUp. Expired entries may be counted in Cache.size, but will never be visible to read or write operations.
{quote}
So this will be a delayed removal.

was (Author: linyiqun):
{quote}After expireAfterWrite time cache entries are marked removed but actually not removed. Scheduler is added to forcefully remove entries after expiry time so that connection can be closed.
{quote}
Expired entries can not only be removed by manually calling cleanup function but also can be evicted on each cache modification, on occasional cache accesses. Please see the javadoc of {{CacheBuilder}}.
{quote}If expireAfterWrite or expireAfterAccess is requested entries may be evicted on each cache modification, on occasional cache accesses, or on calls to Cache.cleanUp. Expired entries may be counted in Cache.size, but will never be visible to read or write operations.
{quote}

> RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.
> -------------------------------------------------------------------------------------------------
>
>             Key: HDFS-13443
>             URL: https://issues.apache.org/jira/browse/HDFS-13443
>         Project: Hadoop HDFS
>      Issue Type: Sub-task
>      Components: fs
>        Reporter: Mohammad Arshad
>        Assignee: Mohammad Arshad
>        Priority: Major
>          Labels: RBF
>     Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, HDFS-13443-017.patch, HDFS-13443-HDFS-13891-001.patch, HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, HDFS-13443.010.patch, HDFS-13443.011.patch
>
> Currently the mount table cache is updated periodically; by default the cache is updated every minute. After a change in the mount table, user operations may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
> * *Add a refresh API in MountTableManager which will update the mount table cache.*
> * *When there is a change in mount table entries, the router admin server can update its cache and ask other routers to update their caches*. For example, if there are three routers R1, R2, R3 in a cluster, then the add-mount-table-entry API, at the admin server side, will perform the following sequence of actions:
> ## user submits an add mount table entry request on R1
> ## R1 adds the mount table entry in the state store
> ## R1 calls the refresh API on R2
> ## R1 calls the refresh API on R3
> ## R1 directly refreshes its own cache
> ## the add mount table entry response is sent back to the user.
[jira] [Commented] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720949#comment-16720949 ]

Chao Sun commented on HDFS-14116:
---------------------------------

Thanks [~vagarychen] for verifying the failed tests. Attached patch v4 to address the checkstyle issue.

> Fix a potential class cast error in ObserverReadProxyProvider
> --------------------------------------------------------------
>
>             Key: HDFS-14116
>             URL: https://issues.apache.org/jira/browse/HDFS-14116
>         Project: Hadoop HDFS
>      Issue Type: Sub-task
>      Components: hdfs-client
>        Reporter: Chen Liang
>        Assignee: Chao Sun
>        Priority: Major
>     Attachments: HDFS-14116-HDFS-12943.000.patch, HDFS-14116-HDFS-12943.001.patch, HDFS-14116-HDFS-12943.002.patch, HDFS-14116-HDFS-12943.003.patch, HDFS-14116-HDFS-12943.004.patch
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause a failure, because it is possible that the factory cannot be cast here. Specifically, {{NameNodeProxiesClient.createFailoverProxyProvider}} is where the constructor will be called, and there are two paths that could call into this:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}}. This happens when, for example, running NNThroughputBenchmark. To fix this we can at least:
> 1. introduce setAlignmentContext to HAProxyFactory, which is the parent of both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only setAlignmentContext when it is ClientHAProxyFactory by, say, adding an if check with reflection,
> depending on whether it makes sense to have an alignment context for the code paths in case (1).
[jira] [Commented] (HDFS-13970) Use MultiMap for CacheManager Directives to simplify the code
[ https://issues.apache.org/jira/browse/HDFS-13970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720964#comment-16720964 ] Hudson commented on HDFS-13970: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15606 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15606/]) HDFS-13970. Use MultiMap for CacheManager Directives to simplify the (aajisaka: rev ca379e1c43fd733a34f3ece6172c96d74c890422) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java > Use MultiMap for CacheManager Directives to simplify the code > - > > Key: HDFS-13970 > URL: https://issues.apache.org/jira/browse/HDFS-13970 > Project: Hadoop HDFS > Issue Type: Improvement > Components: caching, hdfs >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Fix For: 3.3.0 > > Attachments: HDFS-13970.1.patch, HDFS-13970.2.patch > > > # Use Guava Multimap to simplify code > ## Currently, code uses a mix of LinkedList and ArrayList - just pick one > ## Currently, {{directivesByPath}} structure is sorted but never used in a > sorted way, it only performs remove and add operations, no iteration - use a > {{Set}} instead of a {{List}} for values to support faster remove operation. > Use a {{HashSet}} instead of a {{TreeSet}} for keys since it doesn't appear > that order really matters. > # The {{CacheDirective}} class needs a better hashcode implementation since > it will be used in a Set. Do not instantiate a {{HashBuilder}} object every > time {{hashcode}} is called. Ouch. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
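[Editorial note] The two changes the committed description calls for, a path-keyed multimap with Set values so removal is O(1), and a hashCode that does not allocate a builder object on every call, can be sketched without Guava. The classes below are illustrative stand-ins for CacheManager's directive map and CacheDirective, not the committed code; `Objects.hash` stands in for the "don't instantiate a builder each time" idea.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

/** Illustrative stand-in for CacheDirective with hash-based lookup support. */
public class DirectiveSketch {
    final long id;
    final String path;

    DirectiveSketch(long id, String path) { this.id = id; this.path = path; }

    // computed directly from the fields; no builder object created per call
    @Override public int hashCode() { return Objects.hash(id, path); }

    @Override public boolean equals(Object o) {
        return o instanceof DirectiveSketch
            && ((DirectiveSketch) o).id == id
            && ((DirectiveSketch) o).path.equals(path);
    }

    public static void main(String[] args) {
        // HashMap<path, HashSet<directive>> plays the multimap role: unsorted keys
        // (order was never used) and Set values for fast remove instead of a List scan
        Map<String, Set<DirectiveSketch>> directivesByPath = new HashMap<>();
        DirectiveSketch d = new DirectiveSketch(1, "/cached");
        directivesByPath.computeIfAbsent(d.path, p -> new HashSet<>()).add(d);
        directivesByPath.get(d.path).remove(d); // O(1) remove, relying on hashCode/equals
        System.out.println(directivesByPath.get("/cached").isEmpty()); // prints true
    }
}
```

The consistent hashCode/equals pair is the part the description stresses: once directives live in a HashSet, a hashCode that is cheap and stable across calls is what makes the Set-based removal both correct and fast.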
[jira] [Commented] (HDFS-14143) RBF: After clrQuota mount point is not allowing to create new files
[ https://issues.apache.org/jira/browse/HDFS-14143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720958#comment-16720958 ]

Yiqun Lin commented on HDFS-14143:
----------------------------------

{quote}
This is because the quotas of the subclusters still remain after "hdfs dfsrouteradmin -rm". And "hdfs dfsrouteradmin -add" doesn't reflect the existing quotas.
{quote}
[~tasanuma0829], this is a good catch. Are you interested in improving this (just unset the sub-filesystem's quota when removing mount tables)? I can help with the review :).

> RBF: After clrQuota mount point is not allowing to create new files
> --------------------------------------------------------------------
>
>             Key: HDFS-14143
>             URL: https://issues.apache.org/jira/browse/HDFS-14143
>         Project: Hadoop HDFS
>      Issue Type: Bug
> Affects Versions: 3.1.1
>        Reporter: Harshakiran Reddy
>        Assignee: Takanobu Asanuma
>        Priority: Major
>          Labels: RBF
>
> {noformat}
> bin> ./hdfs dfsrouteradmin -setQuota /src10 -nsQuota 3
> Successfully set quota for mount point /src10
> bin> ./hdfs dfsrouteradmin -clrQuota /src10
> Successfully clear quota for mount point /src10
> bin> ./hdfs dfs -put harsha /dest10/file1
> bin> ./hdfs dfs -put harsha /dest10/file2
> bin> ./hdfs dfs -put harsha /dest10/file3
> put: The NameSpace quota (directories and files) of directory /dest10 is exceeded: quota=3 file count=4
> bin> ./hdfs dfsrouteradmin -ls /src10
> Mount Table Entries:
> Source    Destinations         Owner   Group    Mode        Quota/Usage
> /src10    hacluster->/dest10   hdfs    hadoop   rwxr-xr-x   [NsQuota: -/-, SsQuota: -/-]
> bin>
> {noformat}
[jira] [Commented] (HDFS-14143) RBF: After clrQuota mount point is not allowing to create new files
[ https://issues.apache.org/jira/browse/HDFS-14143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720947#comment-16720947 ]

Takanobu Asanuma commented on HDFS-14143:
-----------------------------------------

[~ayushtkn] Yes, I waited for a while after mounting the subcluster. But the existing quota of the subcluster didn't show in the output of "dfsrouteradmin -ls".

> RBF: After clrQuota mount point is not allowing to create new files
> --------------------------------------------------------------------
>
>             Key: HDFS-14143
>             URL: https://issues.apache.org/jira/browse/HDFS-14143
>         Project: Hadoop HDFS
>      Issue Type: Bug
> Affects Versions: 3.1.1
>        Reporter: Harshakiran Reddy
>        Assignee: Takanobu Asanuma
>        Priority: Major
>          Labels: RBF
>
> {noformat}
> bin> ./hdfs dfsrouteradmin -setQuota /src10 -nsQuota 3
> Successfully set quota for mount point /src10
> bin> ./hdfs dfsrouteradmin -clrQuota /src10
> Successfully clear quota for mount point /src10
> bin> ./hdfs dfs -put harsha /dest10/file1
> bin> ./hdfs dfs -put harsha /dest10/file2
> bin> ./hdfs dfs -put harsha /dest10/file3
> put: The NameSpace quota (directories and files) of directory /dest10 is exceeded: quota=3 file count=4
> bin> ./hdfs dfsrouteradmin -ls /src10
> Mount Table Entries:
> Source    Destinations         Owner   Group    Mode        Quota/Usage
> /src10    hacluster->/dest10   hdfs    hadoop   rwxr-xr-x   [NsQuota: -/-, SsQuota: -/-]
> bin>
> {noformat}
[jira] [Commented] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk
[ https://issues.apache.org/jira/browse/HDFS-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720952#comment-16720952 ] Hadoop QA commented on HDFS-14135: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 14s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 38s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}152m 36s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestModTime | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy | | | hadoop.hdfs.TestMultiThreadedHflush | | | hadoop.hdfs.TestFileChecksumCompositeCrc | | | hadoop.hdfs.tools.TestDFSAdmin | | | hadoop.hdfs.TestErasureCodingExerciseAPIs | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshot | | | hadoop.hdfs.TestFSOutputSummer | | | hadoop.hdfs.TestSetrepIncreasing | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.TestDecommission | | | hadoop.hdfs.TestCrcCorruption | | | hadoop.hdfs.TestEncryptionZones | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDFS-14135 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12951762/HDFS-14135-08.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 95a7bb2c1cb7 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
[jira] [Commented] (HDFS-14132) Add BlockLocation.isStriped() to determine if block is replicated or Striped
[ https://issues.apache.org/jira/browse/HDFS-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720954#comment-16720954 ] Hadoop QA commented on HDFS-14132: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 1s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 45s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 6s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 16s{color} | {color:orange} root: The patch generated 3 new + 32 unchanged - 0 fixed = 35 total (was 32) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 26s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 9s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 47s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 23s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 41s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}206m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStream | | | hadoop.hdfs.client.impl.TestBlockReaderLocal | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead | | | hadoop.hdfs.TestErasureCodingPolicies | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.TestEncryptedTransfer | | | hadoop.hdfs.TestDFSStripedInputStream | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy | | | hadoop.hdfs.TestDatanodeDeath | | | hadoop.hdfs.TestDecommissionWithStriped | |
[jira] [Updated] (HDFS-13970) Use MultiMap for CacheManager Directives to simplify the code
[ https://issues.apache.org/jira/browse/HDFS-13970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-13970: - Resolution: Fixed Fix Version/s: 3.3.0 Status: Resolved (was: Patch Available) Committed this to trunk. Thanks [~belugabehr] for the contribution! > Use MultiMap for CacheManager Directives to simplify the code > - > > Key: HDFS-13970 > URL: https://issues.apache.org/jira/browse/HDFS-13970 > Project: Hadoop HDFS > Issue Type: Improvement > Components: caching, hdfs >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Fix For: 3.3.0 > > Attachments: HDFS-13970.1.patch, HDFS-13970.2.patch > > > # Use Guava Multimap to simplify code > ## Currently, code uses a mix of LinkedList and ArrayList - just pick one > ## Currently, {{directivesByPath}} structure is sorted but never used in a > sorted way, it only performs remove and add operations, no iteration - use a > {{Set}} instead of a {{List}} for values to support faster remove operation. > Use a {{HashSet}} instead of a {{TreeSet}} for keys since it doesn't appear > that order really matters. > # The {{CacheDirective}} class needs a better hashcode implementation since > it will be used in a Set. Do not instantiate a {{HashBuilder}} object every > time {{hashcode}} is called. Ouch. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
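The refactoring described in this issue can be sketched roughly as follows. The `Directive`/`DirectiveIndex` names are hypothetical and the sketch uses plain `java.util` maps in place of Guava's `Multimap`; it is not the actual CacheManager code, only an illustration of the two ideas: hash-based sets for O(1) add/remove, and a cheap `hashCode` with no per-call helper object.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;

// A directive with a cheap hashCode: the id is final, so no helper object
// (e.g. a builder) needs to be allocated on every call.
class Directive {
    final long id;
    final String path;

    Directive(long id, String path) {
        this.id = id;
        this.path = path;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof Directive && ((Directive) o).id == id;
    }

    @Override
    public int hashCode() {
        return Long.hashCode(id);
    }
}

// Path -> set of directives. A HashSet of values gives O(1) add/remove,
// which is all the structure is used for; no sorted iteration is needed.
class DirectiveIndex {
    private final Map<String, HashSet<Directive>> byPath = new HashMap<>();

    void add(Directive d) {
        byPath.computeIfAbsent(d.path, k -> new HashSet<>()).add(d);
    }

    boolean remove(Directive d) {
        HashSet<Directive> set = byPath.get(d.path);
        if (set == null) {
            return false;
        }
        boolean removed = set.remove(d);
        if (set.isEmpty()) {
            byPath.remove(d.path); // drop empty buckets eagerly
        }
        return removed;
    }

    int count(String path) {
        HashSet<Directive> set = byPath.get(path);
        return set == null ? 0 : set.size();
    }
}
```

With Guava available, `HashMultimap.create()` would collapse the bucket bookkeeping into single `put`/`remove` calls; the behavior is the same.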
[jira] [Updated] (HDFS-14149) Adjust annotations on new interfaces/classes for SBN reads.
[ https://issues.apache.org/jira/browse/HDFS-14149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HDFS-14149: Status: Patch Available (was: Open) > Adjust annotations on new interfaces/classes for SBN reads. > --- > > Key: HDFS-14149 > URL: https://issues.apache.org/jira/browse/HDFS-14149 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Konstantin Shvachko >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-14149-HDFS-12943.000.patch > > > Let's make sure that all new classes and interfaces > # do have annotations, as some of them don't, like > {{ObserverReadProxyProvider}} > # that they are annotated as {{Private}} and {{Evolving}}, to allow room for > changes -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14149) Adjust annotations on new interfaces/classes for SBN reads.
[ https://issues.apache.org/jira/browse/HDFS-14149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HDFS-14149: Attachment: HDFS-14149-HDFS-12943.000.patch > Adjust annotations on new interfaces/classes for SBN reads. > --- > > Key: HDFS-14149 > URL: https://issues.apache.org/jira/browse/HDFS-14149 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Konstantin Shvachko >Priority: Major > Attachments: HDFS-14149-HDFS-12943.000.patch > > > Let's make sure that all new classes and interfaces > # do have annotations, as some of them don't, like > {{ObserverReadProxyProvider}} > # that they are annotated as {{Private}} and {{Evolving}}, to allow room for > changes -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-14149) Adjust annotations on new interfaces/classes for SBN reads.
[ https://issues.apache.org/jira/browse/HDFS-14149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun reassigned HDFS-14149: --- Assignee: Chao Sun > Adjust annotations on new interfaces/classes for SBN reads. > --- > > Key: HDFS-14149 > URL: https://issues.apache.org/jira/browse/HDFS-14149 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Konstantin Shvachko >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-14149-HDFS-12943.000.patch > > > Let's make sure that all new classes and interfaces > # do have annotations, as some of them don't, like > {{ObserverReadProxyProvider}} > # that they are annotated as {{Private}} and {{Evolving}}, to allow room for > changes -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14149) Adjust annotations on new interfaces/classes for SBN reads.
[ https://issues.apache.org/jira/browse/HDFS-14149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720944#comment-16720944 ] Chao Sun commented on HDFS-14149: - Attached patch v0. > Adjust annotations on new interfaces/classes for SBN reads. > --- > > Key: HDFS-14149 > URL: https://issues.apache.org/jira/browse/HDFS-14149 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Konstantin Shvachko >Priority: Major > Attachments: HDFS-14149-HDFS-12943.000.patch > > > Let's make sure that all new classes and interfaces > # do have annotations, as some of them don't, like > {{ObserverReadProxyProvider}} > # that they are annotated as {{Private}} and {{Evolving}}, to allow room for > changes -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HDFS-14116: Attachment: HDFS-14116-HDFS-12943.004.patch > Fix a potential class cast error in ObserverReadProxyProvider > - > > Key: HDFS-14116 > URL: https://issues.apache.org/jira/browse/HDFS-14116 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Chen Liang >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-14116-HDFS-12943.000.patch, > HDFS-14116-HDFS-12943.001.patch, HDFS-14116-HDFS-12943.002.patch, > HDFS-14116-HDFS-12943.003.patch, HDFS-14116-HDFS-12943.004.patch > > > Currently in {{ObserverReadProxyProvider}} constructor there is this line > {code} > ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext); > {code} > This could potentially cause failure, because it is possible that factory can > not be casted here. Specifically, > {{NameNodeProxiesClient.createFailoverProxyProvider}} is where the > constructor will be called, and there are two paths that could call into this: > (1).{{NameNodeProxies.createProxy}} > (2).{{NameNodeProxiesClient.createFailoverProxyProvider}} > (2) works fine because it always uses {{ClientHAProxyFactory}} but (1) uses > {{NameNodeHAProxyFactory}} which can not be casted to > {{ClientHAProxyFactory}}, this happens when, for example, running > NNThroughputBenmarck. To fix this we can at least: > 1. introduce setAlignmentContext to HAProxyFactory which is the parent of > both ClientHAProxyFactory and NameNodeHAProxyFactory OR > 2. only setAlignmentContext when it is ClientHAProxyFactory by, say, having a > if check with reflection. > Depending on whether it make sense to have alignment context for the case (1) > calling code paths. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
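Fix option 2 from the description above might look roughly like this. The interfaces are minimal hypothetical stand-ins for the real Hadoop factory classes, kept only to show the shape of the guarded cast:

```java
// Minimal stand-ins for the real Hadoop factory classes, for illustration.
interface HAProxyFactory { }

class ClientHAProxyFactory implements HAProxyFactory {
    Object alignmentContext;

    void setAlignmentContext(Object ctx) {
        this.alignmentContext = ctx;
    }
}

class NameNodeHAProxyFactory implements HAProxyFactory { }

class AlignmentContextApplier {
    // Only push the alignment context down when the factory supports it,
    // instead of an unconditional cast that fails for NameNodeHAProxyFactory
    // (the code path exercised by NNThroughputBenchmark).
    static boolean apply(HAProxyFactory factory, Object ctx) {
        if (factory instanceof ClientHAProxyFactory) {
            ((ClientHAProxyFactory) factory).setAlignmentContext(ctx);
            return true;
        }
        return false;
    }
}
```

Fix option 1 (declaring `setAlignmentContext` on the parent `HAProxyFactory`) would remove the `instanceof` check entirely, at the cost of a no-op method on factories that have no use for the context.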
[jira] [Updated] (HDFS-13970) Use MultiMap for CacheManager Directives to simplify the code
[ https://issues.apache.org/jira/browse/HDFS-13970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-13970: - Flags: (was: Patch) Hadoop Flags: Reviewed Summary: Use MultiMap for CacheManager Directives to simplify the code (was: CacheManager Directives Map) > Use MultiMap for CacheManager Directives to simplify the code > - > > Key: HDFS-13970 > URL: https://issues.apache.org/jira/browse/HDFS-13970 > Project: Hadoop HDFS > Issue Type: Improvement > Components: caching, hdfs >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HDFS-13970.1.patch, HDFS-13970.2.patch > > > # Use Guava Multimap to simplify code > ## Currently, code uses a mix of LinkedList and ArrayList - just pick one > ## Currently, {{directivesByPath}} structure is sorted but never used in a > sorted way, it only performs remove and add operations, no iteration - use a > {{Set}} instead of a {{List}} for values to support faster remove operation. > Use a {{HashSet}} instead of a {{TreeSet}} for keys since it doesn't appear > that order really matters. > # The {{CacheDirective}} class needs a better hashcode implementation since > it will be used in a Set. Do not instantiate a {{HashBuilder}} object every > time {{hashcode}} is called. Ouch. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13661) Ls command with e option fails when the filesystem is not HDFS
[ https://issues.apache.org/jira/browse/HDFS-13661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720942#comment-16720942 ] Ayush Saxena commented on HDFS-13661: - Sorry for coming late, [~tasanuma0829]. I too verified this. I guess we can't use this logic there, so we can go ahead with just the existing logic implemented here to keep "- ". bq. The patch also improves the display of the command like below. Not sure we can do this improvement, as changes in the CLI would be considered incompatible. Just a thought: can we handle the exception from contentSummary.getErasureCodingPolicy() in a way that throws an unsupported exception in such an occurrence? If that call throws an exception, it means that EC isn't supported. If that doesn't sound good, I am good with just the "-" thing. > Ls command with e option fails when the filesystem is not HDFS > -- > > Key: HDFS-13661 > URL: https://issues.apache.org/jira/browse/HDFS-13661 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding, tools >Affects Versions: 3.1.0, 3.0.3 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Attachments: HDFS-13661.1.patch > > > {noformat} > $ hadoop fs -ls -e file:// > Found 10 items > -ls: Fatal internal error > java.lang.NullPointerException > at org.apache.hadoop.fs.shell.Ls.adjustColumnWidths(Ls.java:308) > at org.apache.hadoop.fs.shell.Ls.processPaths(Ls.java:242) > at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:387) > at org.apache.hadoop.fs.shell.Ls.processPathArgument(Ls.java:226) > at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285) > at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269) > at > org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120) > at org.apache.hadoop.fs.shell.Command.run(Command.java:176) > at org.apache.hadoop.fs.FsShell.run(FsShell.java:328) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at 
org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at org.apache.hadoop.fs.FsShell.main(FsShell.java:391) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
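The "-" fallback being discussed could look roughly like this hypothetical helper (not the actual Ls.java change): a non-HDFS filesystem yields no erasure-coding policy, so the column prints a placeholder instead of letting adjustColumnWidths dereference null.

```java
// Hypothetical helper for the "-" column fallback; not the actual Ls.java.
class EcColumnSketch {
    static String ecPolicyColumn(String policyName) {
        // A filesystem without EC support yields no policy; print "-" rather
        // than dereferencing null in adjustColumnWidths.
        return policyName == null ? "-" : policyName;
    }
}
```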
[jira] [Commented] (HDFS-13970) CacheManager Directives Map
[ https://issues.apache.org/jira/browse/HDFS-13970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720939#comment-16720939 ] Akira Ajisaka commented on HDFS-13970: -- +1, committing this. > CacheManager Directives Map > --- > > Key: HDFS-13970 > URL: https://issues.apache.org/jira/browse/HDFS-13970 > Project: Hadoop HDFS > Issue Type: Improvement > Components: caching, hdfs >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HDFS-13970.1.patch, HDFS-13970.2.patch > > > # Use Guava Multimap to simplify code > ## Currently, code uses a mix of LinkedList and ArrayList - just pick one > ## Currently, {{directivesByPath}} structure is sorted but never used in a > sorted way, it only performs remove and add operations, no iteration - use a > {{Set}} instead of a {{List}} for values to support faster remove operation. > Use a {{HashSet}} instead of a {{TreeSet}} for keys since it doesn't appear > that order really matters. > # The {{CacheDirective}} class needs a better hashcode implementation since > it will be used in a Set. Do not instantiate a {{HashBuilder}} object every > time {{hashcode}} is called. Ouch. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13970) CacheManager Directives Map
[ https://issues.apache.org/jira/browse/HDFS-13970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720939#comment-16720939 ] Akira Ajisaka edited comment on HDFS-13970 at 12/14/18 5:24 AM: The test failure is not related to the patch. +1, committing this. was (Author: ajisakaa): +1, committing this. > CacheManager Directives Map > --- > > Key: HDFS-13970 > URL: https://issues.apache.org/jira/browse/HDFS-13970 > Project: Hadoop HDFS > Issue Type: Improvement > Components: caching, hdfs >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HDFS-13970.1.patch, HDFS-13970.2.patch > > > # Use Guava Multimap to simplify code > ## Currently, code uses a mix of LinkedList and ArrayList - just pick one > ## Currently, {{directivesByPath}} structure is sorted but never used in a > sorted way, it only performs remove and add operations, no iteration - use a > {{Set}} instead of a {{List}} for values to support faster remove operation. > Use a {{HashSet}} instead of a {{TreeSet}} for keys since it doesn't appear > that order really matters. > # The {{CacheDirective}} class needs a better hashcode implementation since > it will be used in a Set. Do not instantiate a {{HashBuilder}} object every > time {{hashcode}} is called. Ouch. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.
[ https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720937#comment-16720937 ] Mohammad Arshad commented on HDFS-13443: {quote}This will be done inside LoadingCache since we have set expireAfterWrite for this.{quote} After the expireAfterWrite time, cache entries are marked as removed but are not actually removed. A scheduler is added to forcefully remove the entries after the expiry time so that the connection can be closed. > RBF: Update mount table cache immediately after changing (add/update/remove) > mount table entries. > - > > Key: HDFS-13443 > URL: https://issues.apache.org/jira/browse/HDFS-13443 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Mohammad Arshad >Assignee: Mohammad Arshad >Priority: Major > Labels: RBF > Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, > HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, > HDFS-13443-017.patch, HDFS-13443-HDFS-13891-001.patch, > HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, > HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, > HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, > HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, > HDFS-13443.010.patch, HDFS-13443.011.patch > > > Currently the mount table cache is updated periodically; by default the cache is > updated every minute. After a change in the mount table, user operations may still > use the old mount table. This is a bit wrong. > To update the mount table cache, maybe we can do the following > * *Add refresh API in MountTableManager which will update mount table cache.* > * *When there is a change in mount table entries, router admin server can > update its cache and ask other routers to update their cache*. 
For example, if > there are three routers R1, R2, R3 in a cluster, then the add mount table entry API, > at the admin server side, will perform the following sequence of actions > ## The user submits an add mount table entry request on R1 > ## R1 adds the mount table entry to the state store > ## R1 calls the refresh API on R2 > ## R1 calls the refresh API on R3 > ## R1 directly refreshes its own cache > ## The add mount table entry response is sent back to the user. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
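The proposed refresh sequence can be sketched as follows. The class and method names are hypothetical, not the real Router/MountTableManager API; the point is only the ordering: persist first, fan the refresh out to every router, then acknowledge the caller.

```java
import java.util.ArrayList;
import java.util.List;

// A router whose mount-table cache can be refreshed remotely.
class RouterSketch {
    boolean cacheFresh = false;

    void refreshMountTableCache() {
        cacheFresh = true;
    }
}

// Admin-side add: persist the entry first, then fan the refresh out to every
// router (including the local one) before acknowledging the caller.
class MountTableAdminSketch {
    private final List<String> stateStore = new ArrayList<>();

    int addEntry(String entry, List<RouterSketch> allRouters) {
        stateStore.add(entry); // steps 1-2: write the entry to the state store
        int refreshed = 0;
        for (RouterSketch r : allRouters) {
            r.refreshMountTableCache(); // steps 3-5: refresh each cache
            refreshed++;
        }
        return refreshed; // step 6: now the response can go back to the user
    }
}
```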
[jira] [Resolved] (HDFS-14143) RBF: After clrQuota mount point is not allowing to create new files
[ https://issues.apache.org/jira/browse/HDFS-14143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harshakiran Reddy resolved HDFS-14143. -- Resolution: Duplicate > RBF: After clrQuota mount point is not allowing to create new files > > > Key: HDFS-14143 > URL: https://issues.apache.org/jira/browse/HDFS-14143 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.1 >Reporter: Harshakiran Reddy >Assignee: Takanobu Asanuma >Priority: Major > Labels: RBF > > {noformat} > bin> ./hdfs dfsrouteradmin -setQuota /src10 -nsQuota 3 > Successfully set quota for mount point /src10 > bin> ./hdfs dfsrouteradmin -clrQuota /src10 > Successfully clear quota for mount point /src10 > bin> ./hdfs dfs -put harsha /dest10/file1 > bin> ./hdfs dfs -put harsha /dest10/file2 > bin> ./hdfs dfs -put harsha /dest10/file3 > put: The NameSpace quota (directories and files) of directory /dest10 is > exceeded: quota=3 file count=4 > bin> ./hdfs dfsrouteradmin -ls /src10 > Mount Table Entries: > SourceDestinations Owner > Group Mode Quota/Usage > /src10hacluster->/dest10hdfs > hadooprwxr-xr-x [NsQuota: -/-, SsQuota: > -/-] > bin> > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14143) RBF: After clrQuota mount point is not allowing to create new files
[ https://issues.apache.org/jira/browse/HDFS-14143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720924#comment-16720924 ] Ayush Saxena commented on HDFS-14143: - [~tasanuma0829] Did you wait until "dfs.federation.router.quota-cache.update.interval" had elapsed? > RBF: After clrQuota mount point is not allowing to create new files > > > Key: HDFS-14143 > URL: https://issues.apache.org/jira/browse/HDFS-14143 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.1 >Reporter: Harshakiran Reddy >Assignee: Takanobu Asanuma >Priority: Major > Labels: RBF > > {noformat} > bin> ./hdfs dfsrouteradmin -setQuota /src10 -nsQuota 3 > Successfully set quota for mount point /src10 > bin> ./hdfs dfsrouteradmin -clrQuota /src10 > Successfully clear quota for mount point /src10 > bin> ./hdfs dfs -put harsha /dest10/file1 > bin> ./hdfs dfs -put harsha /dest10/file2 > bin> ./hdfs dfs -put harsha /dest10/file3 > put: The NameSpace quota (directories and files) of directory /dest10 is > exceeded: quota=3 file count=4 > bin> ./hdfs dfsrouteradmin -ls /src10 > Mount Table Entries: > SourceDestinations Owner > Group Mode Quota/Usage > /src10hacluster->/dest10hdfs > hadooprwxr-xr-x [NsQuota: -/-, SsQuota: > -/-] > bin> > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14143) RBF: After clrQuota mount point is not allowing to create new files
[ https://issues.apache.org/jira/browse/HDFS-14143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720923#comment-16720923 ] Takanobu Asanuma commented on HDFS-14143: - Hi, [~Harsha1206]. As [~linyiqun] and [~brahmareddy] said, it seems HDFS-13583 fixed the issue. If that is your case, please close this jira. > RBF: After clrQuota mount point is not allowing to create new files > > > Key: HDFS-14143 > URL: https://issues.apache.org/jira/browse/HDFS-14143 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.1 >Reporter: Harshakiran Reddy >Assignee: Takanobu Asanuma >Priority: Major > Labels: RBF > > {noformat} > bin> ./hdfs dfsrouteradmin -setQuota /src10 -nsQuota 3 > Successfully set quota for mount point /src10 > bin> ./hdfs dfsrouteradmin -clrQuota /src10 > Successfully clear quota for mount point /src10 > bin> ./hdfs dfs -put harsha /dest10/file1 > bin> ./hdfs dfs -put harsha /dest10/file2 > bin> ./hdfs dfs -put harsha /dest10/file3 > put: The NameSpace quota (directories and files) of directory /dest10 is > exceeded: quota=3 file count=4 > bin> ./hdfs dfsrouteradmin -ls /src10 > Mount Table Entries: > SourceDestinations Owner > Group Mode Quota/Usage > /src10hacluster->/dest10hdfs > hadooprwxr-xr-x [NsQuota: -/-, SsQuota: > -/-] > bin> > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14143) RBF: After clrQuota mount point is not allowing to create new files
[ https://issues.apache.org/jira/browse/HDFS-14143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720921#comment-16720921 ] Takanobu Asanuma commented on HDFS-14143: - Actually, the problem I saw is different. Seems {{clrQuota}} works well. Sorry for the confusion. I didn't use {{clrQuota}}. I just removed and added the mount point. {noformat}
$ hdfs dfsrouteradmin -add /ns1_data ns1 /data
$ hdfs dfsrouteradmin -setQuota /ns1_data -nsQuota 10 -ssQuota 10
$ hdfs dfsrouteradmin -ls /ns1_data
Source       Destinations    Owner       Group    Mode         Quota/Usage
/ns1_data    ns1->/data      tasanuma    users    rwxr-xr-x    [NsQuota: 10/1, SsQuota: 10 B/0 B]
$ hdfs dfsrouteradmin -rm /ns1_data
$ hdfs dfsrouteradmin -add /ns1_data ns1 /data
$ hdfs dfsrouteradmin -ls /ns1_data
Source       Destinations    Owner       Group    Mode         Quota/Usage
/ns1_data    ns1->/data      tasanuma    users    rwxr-xr-x    [NsQuota: -/-, SsQuota: -/-]
$ hadoop fs -put file1 /ns1_data/file1
put: The DiskSpace quota of /data is exceeded: quota = 10 B = 10 B but diskspace consumed = 402653184 B = 384 MB
{noformat} This is because the quotas of the subclusters still remain after "{{hdfs dfsrouteradmin -rm}}". And "{{hdfs dfsrouteradmin -add}}" doesn't reflect the existing quotas.
> RBF: After clrQuota mount point is not allowing to create new files > > > Key: HDFS-14143 > URL: https://issues.apache.org/jira/browse/HDFS-14143 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.1 >Reporter: Harshakiran Reddy >Assignee: Takanobu Asanuma >Priority: Major > Labels: RBF > > {noformat} > bin> ./hdfs dfsrouteradmin -setQuota /src10 -nsQuota 3 > Successfully set quota for mount point /src10 > bin> ./hdfs dfsrouteradmin -clrQuota /src10 > Successfully clear quota for mount point /src10 > bin> ./hdfs dfs -put harsha /dest10/file1 > bin> ./hdfs dfs -put harsha /dest10/file2 > bin> ./hdfs dfs -put harsha /dest10/file3 > put: The NameSpace quota (directories and files) of directory /dest10 is > exceeded: quota=3 file count=4 > bin> ./hdfs dfsrouteradmin -ls /src10 > Mount Table Entries: > SourceDestinations Owner > Group Mode Quota/Usage > /src10hacluster->/dest10hdfs > hadooprwxr-xr-x [NsQuota: -/-, SsQuota: > -/-] > bin> > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
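The remove/add sequence above, together with the safer behavior proposed in HDFS-14150 ("-rm" should not remove the subcluster quota, "-add" should sync the existing quota into the new mount entry), can be sketched with a small JDK-only model. {{QuotaSyncModel}} and its maps are hypothetical stand-ins for illustration, not the actual Router quota code:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical, simplified model of mount entries vs. subcluster quotas. */
class QuotaSyncModel {
  /** Quota actually set on the destination directory in the subcluster. */
  final Map<String, Long> subclusterNsQuota = new HashMap<>();
  /** NsQuota recorded on the mount table entry (null means "-"). */
  final Map<String, Long> mountEntryNsQuota = new HashMap<>();

  void setQuota(String mount, long nsQuota) {
    mountEntryNsQuota.put(mount, nsQuota);
    subclusterNsQuota.put(mount, nsQuota); // the Router pushes it down
  }

  /** "-rm" removes only the mount entry; the subcluster quota stays. */
  void removeMount(String mount) {
    mountEntryNsQuota.remove(mount);
  }

  /** "-add" syncs any quota that already exists on the destination. */
  void addMount(String mount) {
    Long existing = subclusterNsQuota.get(mount);
    mountEntryNsQuota.put(mount, existing); // null if no quota was ever set
  }
}
```

Under such a model, re-adding /ns1_data would list the still-enforced quota again instead of the misleading [NsQuota: -/-].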
[jira] [Comment Edited] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.
[ https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720910#comment-16720910 ] Yiqun Lin edited comment on HDFS-13443 at 12/14/18 4:53 AM: Thanks [~arshad.mohammad] for updating the patch! Almost looks good to me. Minor comments: {code}+ } else { + LOG.warn("Service {} not enabled: depenendent services not enabled.", + MountTableRefresherService.class.getSimpleName()); + } {code} I prefer to log which depenendent service here like {{router admin service or state store service is not enabled.}} {code} +/* + * When cleanUp() method is called, expired RouterClient will be removed and + * closed. + */ +clientCacheCleanerScheduler.scheduleWithFixedDelay( +() -> routerClientsCache.cleanUp(), routerClientMaxLiveTime, +routerClientMaxLiveTime, TimeUnit.MILLISECONDS); {code} I don't think we really need a scheduler thread to clean up the expired router client. This will be done inside LoadingCache since we have set {{expireAfterWrite}} for this. was (Author: linyiqun): Thanks [~arshad.mohammad] for updating the patch! Almost looks good to me. minor comments: {quote}+ } else { + LOG.warn("Service {} not enabled: depenendent services not enabled.", + MountTableRefresherService.class.getSimpleName()); + } {quote} I prefer to log which depenendent service here like {{router admin service or state store service is not enabled.}} {quote} +/* + * When cleanUp() method is called, expired RouterClient will be removed and + * closed. + */ +clientCacheCleanerScheduler.scheduleWithFixedDelay( +() -> routerClientsCache.cleanUp(), routerClientMaxLiveTime, +routerClientMaxLiveTime, TimeUnit.MILLISECONDS); {quote} I don't think we really need a scheduler thread to clean up the expired router client. This will be done inside LoadingCache since we have set {{expireAfterWrite}} for this. > RBF: Update mount table cache immediately after changing (add/update/remove) > mount table entries. 
> - > > Key: HDFS-13443 > URL: https://issues.apache.org/jira/browse/HDFS-13443 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Mohammad Arshad >Assignee: Mohammad Arshad >Priority: Major > Labels: RBF > Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, > HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, > HDFS-13443-017.patch, HDFS-13443-HDFS-13891-001.patch, > HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, > HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, > HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, > HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, > HDFS-13443.010.patch, HDFS-13443.011.patch > > > Currently mount table cache is updated periodically, by default cache is > updated every minute. After change in mount table, user operations may still > use old mount table. This is bit wrong. > To update mount table cache, maybe we can do following > * *Add refresh API in MountTableManager which will update mount table cache.* > * *When there is a change in mount table entries, router admin server can > update its cache and ask other routers to update their cache*. For example if > there are three routers R1,R2,R3 in a cluster then add mount table entry API, > at admin server side, will perform following sequence of action > ## user submit add mount table entry request on R1 > ## R1 adds the mount table entry in state store > ## R1 call refresh API on R2 > ## R1 calls refresh API on R3 > ## R1 directly freshest its cache > ## Add mount table entry response send back to user. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
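Yiqun's point above, that a cache built with {{expireAfterWrite}} evicts stale entries on its own maintenance path and therefore needs no dedicated cleaner thread, can be illustrated with a JDK-only stand-in for Guava's LoadingCache. {{ExpiringClientCache}} and its explicit time parameter are illustrative assumptions, not the Router's actual client cache:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

/** Minimal stand-in for a cache with expireAfterWrite semantics. */
class ExpiringClientCache<K, V> {
  private static class Entry<V> {
    final V value; final long writtenAt;
    Entry(V v, long t) { value = v; writtenAt = t; }
  }

  private final Map<K, Entry<V>> cache = new HashMap<>();
  private final long ttlMillis;

  ExpiringClientCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

  void put(K key, V value, long nowMillis) {
    cache.put(key, new Entry<>(value, nowMillis));
  }

  /** Eviction happens as a side effect of normal access; no cleaner thread. */
  V getIfPresent(K key, long nowMillis) {
    cleanUp(nowMillis);
    Entry<V> e = cache.get(key);
    return e == null ? null : e.value;
  }

  private void cleanUp(long nowMillis) {
    Iterator<Entry<V>> it = cache.values().iterator();
    while (it.hasNext()) {
      if (nowMillis - it.next().writtenAt >= ttlMillis) {
        it.remove(); // a removal listener would close the RouterClient here
      }
    }
  }
}
```

Passing the clock in explicitly keeps the sketch deterministic; Guava instead reads a Ticker and fires a RemovalListener on expiry, which is where the expired client would actually be closed.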
[jira] [Comment Edited] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.
[ https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720910#comment-16720910 ] Yiqun Lin edited comment on HDFS-13443 at 12/14/18 4:51 AM: Thanks [~arshad.mohammad] for updating the patch! Almost looks good to me. minor comments: {quote}+ } else { + LOG.warn("Service {} not enabled: depenendent services not enabled.", + MountTableRefresherService.class.getSimpleName()); + } {quote} I prefer to log which depenendent service here like {{router admin service or state store service is not enabled.}} {quote} +/* + * When cleanUp() method is called, expired RouterClient will be removed and + * closed. + */ +clientCacheCleanerScheduler.scheduleWithFixedDelay( +() -> routerClientsCache.cleanUp(), routerClientMaxLiveTime, +routerClientMaxLiveTime, TimeUnit.MILLISECONDS); {quote} I don't think we really need a scheduler thread to clean up the expired router client. This will be done inside LoadingCache since we have set {{expireAfterWrite}} for this. was (Author: linyiqun): Thanks [~arshad.mohammad] for updating the patch! Almost looks good to me. minor comments: {quote} + } else { +LOG.warn("Service {} not enabled: depenendent services not enabled.", +MountTableRefresherService.class.getSimpleName()); + } {quote} I prefer to log the which depenendent service here like {{router admin service or state store service is not enabled.}} > RBF: Update mount table cache immediately after changing (add/update/remove) > mount table entries. 
> - > > Key: HDFS-13443 > URL: https://issues.apache.org/jira/browse/HDFS-13443 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Mohammad Arshad >Assignee: Mohammad Arshad >Priority: Major > Labels: RBF > Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, > HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, > HDFS-13443-017.patch, HDFS-13443-HDFS-13891-001.patch, > HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, > HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, > HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, > HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, > HDFS-13443.010.patch, HDFS-13443.011.patch > > > Currently mount table cache is updated periodically, by default cache is > updated every minute. After change in mount table, user operations may still > use old mount table. This is bit wrong. > To update mount table cache, maybe we can do following > * *Add refresh API in MountTableManager which will update mount table cache.* > * *When there is a change in mount table entries, router admin server can > update its cache and ask other routers to update their cache*. For example if > there are three routers R1,R2,R3 in a cluster then add mount table entry API, > at admin server side, will perform following sequence of action > ## user submit add mount table entry request on R1 > ## R1 adds the mount table entry in state store > ## R1 call refresh API on R2 > ## R1 calls refresh API on R3 > ## R1 directly freshest its cache > ## Add mount table entry response send back to user. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12943) Consistent Reads from Standby Node
[ https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720914#comment-16720914 ] Brahma Reddy Battula commented on HDFS-12943: - Thanks all for great work here. I think,write requests can be degraded..? As they also contains some read requests like getFileinfo(),getServerDefaults() ...(getHAServiceState() is newly added) . Just I had checked for mkdir perf,it's like below. * i) getHAServiceState() took 2+ sec ( 3 getHAServiceState() + 2 getFileInfo() + 1 mkdirs = 6 calls) * ii) Every second request is getting timedout[1] and rpc call is getting skipped from observer.( 7 getHAServiceState() + 4 getFileInfo() + 1 mkdirs = 12 calls).Here two getFileInfo() skipped from observer hence it's success with Active. {noformat} time hdfs --loglevel debug dfs -Ddfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider -mkdir /TestsORF1 real 0m4.314s user 0m3.668s sys 0m0.272s time hdfs --loglevel debug dfs -Ddfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider -mkdir /TestsORF2 real 0m22.238s user 0m3.800s sys 0m0.248s {noformat} without ObserverReadProxyProvider ( 2 getFileInfo() + 1 mkdirs() = 3 Calls) {noformat} time ./hdfs --loglevel debug dfs -mkdir /TestsCFP real 0m2.105s user 0m3.768s sys 0m0.592s {noformat} *Please correct me if I am missing anyting.* timedout[1],Every second write request I am getting following, did I miss something here,these calls are skipped from observer. {noformat} 2018-12-14 11:21:45,312 DEBUG ipc.Client: closing ipc connection to vm1/10.*.*.*:65110: 1 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.*.*.*:58409 remote=vm1/10.*.*.*:65110] java.net.SocketTimeoutException: 1 millis timeout while waiting for channel to be ready for read. 
ch : java.nio.channels.SocketChannel[connected local=/10.*.*.*:58409 remote=vm1/10.*.*.*:65110] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.FilterInputStream.read(FilterInputStream.java:133) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read(BufferedInputStream.java:265) at java.io.FilterInputStream.read(FilterInputStream.java:83) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:567) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1849) at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1183) at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) 2018-12-14 11:21:45,313 DEBUG ipc.Client: IPC Client (1006094903) connection to vm1/10.*.*.*:65110 from brahma: closed{noformat} > Consistent Reads from Standby Node > -- > > Key: HDFS-12943 > URL: https://issues.apache.org/jira/browse/HDFS-12943 > Project: Hadoop HDFS > Issue Type: New Feature > Components: hdfs >Reporter: Konstantin Shvachko >Priority: Major > Attachments: ConsistentReadsFromStandbyNode.pdf, > ConsistentReadsFromStandbyNode.pdf, HDFS-12943-001.patch, > TestPlan-ConsistentReadsFromStandbyNode.pdf > > > StandbyNode in HDFS is a replica of the active NameNode. The states of the > NameNodes are coordinated via the journal. It is natural to consider > StandbyNode as a read-only replica. As with any replicated distributed system > the problem of stale reads should be resolved. Our main goal is to provide > reads from standby in a consistent way in order to enable a wide range of > existing applications running on top of HDFS. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
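For reference, the failover proxy provider passed via {{-D}} in the commands above can also be set in hdfs-site.xml. The key below uses the {{hacluster}} nameservice ID from this comment; substitute your own nameservice:

```xml
<property>
  <name>dfs.client.failover.proxy.provider.hacluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider</value>
</property>
```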
[jira] [Comment Edited] (HDFS-12943) Consistent Reads from Standby Node
[ https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720914#comment-16720914 ] Brahma Reddy Battula edited comment on HDFS-12943 at 12/14/18 4:33 AM: --- Thanks all for great work here. I think,write requests can be degraded..? As they also contains some read requests like getFileinfo(),getServerDefaults() ...(getHAServiceState() is newly added) . Just I had checked for mkdir perf,it's like below. * i) getHAServiceState() took 2+ sec ( 3 getHAServiceState() + 2 getFileInfo() + 1 mkdirs = 6 calls) * ii) Every second request is getting timedout[1] and rpc call is getting skipped from observer.( 7 getHAServiceState() + 4 getFileInfo() + 1 mkdirs = 12 calls).Here two getFileInfo() skipped from observer hence it's success with Active. {noformat} time hdfs --loglevel debug dfs -Ddfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider -mkdir /TestsORF1 real 0m4.314s user 0m3.668s sys 0m0.272s time hdfs --loglevel debug dfs -Ddfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider -mkdir /TestsORF2 real 0m22.238s user 0m3.800s sys 0m0.248s {noformat} *without ObserverReadProxyProvider ( 2 getFileInfo() + 1 mkdirs() = 3 Calls)* {noformat} time ./hdfs --loglevel debug dfs -mkdir /TestsCFP real 0m2.105s user 0m3.768s sys 0m0.592s {noformat} *Please correct me if I am missing anyting.* timedout[1],Every second write request I am getting following, did I miss something here,these calls are skipped from observer. {noformat} 2018-12-14 11:21:45,312 DEBUG ipc.Client: closing ipc connection to vm1/10.*.*.*:65110: 1 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.*.*.*:58409 remote=vm1/10.*.*.*:65110] java.net.SocketTimeoutException: 1 millis timeout while waiting for channel to be ready for read. 
ch : java.nio.channels.SocketChannel[connected local=/10.*.*.*:58409 remote=vm1/10.*.*.*:65110] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.FilterInputStream.read(FilterInputStream.java:133) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read(BufferedInputStream.java:265) at java.io.FilterInputStream.read(FilterInputStream.java:83) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:567) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1849) at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1183) at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) 2018-12-14 11:21:45,313 DEBUG ipc.Client: IPC Client (1006094903) connection to vm1/10.*.*.*:65110 from brahma: closed{noformat} was (Author: brahmareddy): Thanks all for great work here. I think,write requests can be degraded..? As they also contains some read requests like getFileinfo(),getServerDefaults() ...(getHAServiceState() is newly added) . Just I had checked for mkdir perf,it's like below. * i) getHAServiceState() took 2+ sec ( 3 getHAServiceState() + 2 getFileInfo() + 1 mkdirs = 6 calls) * ii) Every second request is getting timedout[1] and rpc call is getting skipped from observer.( 7 getHAServiceState() + 4 getFileInfo() + 1 mkdirs = 12 calls).Here two getFileInfo() skipped from observer hence it's success with Active. 
{noformat} time hdfs --loglevel debug dfs -Ddfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider -mkdir /TestsORF1 real 0m4.314s user 0m3.668s sys 0m0.272s time hdfs --loglevel debug dfs -Ddfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider -mkdir /TestsORF2 real 0m22.238s user 0m3.800s sys 0m0.248s {noformat} without ObserverReadProxyProvider ( 2 getFileInfo() + 1 mkdirs() = 3 Calls) {noformat} time ./hdfs --loglevel debug dfs -mkdir /TestsCFP real 0m2.105s user 0m3.768s sys 0m0.592s {noformat} *Please correct me if I am missing anyting.* timedout[1],Every second write request I am getting following, did I miss something here,these calls are skipped from observer. {noformat} 2018-12-14 11:21:45,312 DEBUG ipc.Client: closing ipc connection to vm1/10.*.*.*:65110: 1 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.*.*.*:58409 remote=vm1/10.*.*.*:65110] java.net.SocketTimeoutException: 1
[jira] [Comment Edited] (HDFS-14138) Description errors in the comparison logic of transaction ID
[ https://issues.apache.org/jira/browse/HDFS-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720822#comment-16720822 ] xiangheng edited comment on HDFS-14138 at 12/14/18 4:33 AM: Hi [~vagarychen], thanks for your suggestions. I'm sorry, I'm a newcomer to the Hadoop community. I created this JIRA three days ago. Please support the newcomers; wish you all the best ;), thank you very much. Anyway, I will follow your advice :D. was (Author: xiangheng): Thanks for your suggestions. I'm sorry, I'm a newcomer to the Hadoop community. I created this JIRA three days ago. Please support the newcomers; wish you all the best ;), thank you very much. Anyway, I will follow your advice :D. > Description errors in the comparison logic of transaction ID > > > Key: HDFS-14138 > URL: https://issues.apache.org/jira/browse/HDFS-14138 > Project: Hadoop HDFS > Issue Type: Bug > Affects Versions: HDFS-12943 > Reporter: xiangheng > Priority: Minor > Attachments: HDFS-14138-HDFS-12943.000.patch > > > The call processing should be postponed until the client call's state id is > aligned (<=) with the server state id, not >=. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-14138) Description errors in the comparison logic of transaction ID
[ https://issues.apache.org/jira/browse/HDFS-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720822#comment-16720822 ] xiangheng edited comment on HDFS-14138 at 12/14/18 4:33 AM: Hi [~vagarychen], thanks for your suggestions. I'm sorry, I'm a newcomer to the Hadoop community. I created this JIRA three days ago. Please support the newcomers; wish you all the best ;), thank you very much. was (Author: xiangheng): Hi [~vagarychen], thanks for your suggestions. I'm sorry, I'm a newcomer to the Hadoop community. I created this JIRA three days ago. Please support the newcomers; wish you all the best ;), thank you very much. Anyway, I will follow your advice :D. > Description errors in the comparison logic of transaction ID > > > Key: HDFS-14138 > URL: https://issues.apache.org/jira/browse/HDFS-14138 > Project: Hadoop HDFS > Issue Type: Bug > Affects Versions: HDFS-12943 > Reporter: xiangheng > Priority: Minor > Attachments: HDFS-14138-HDFS-12943.000.patch > > > The call processing should be postponed until the client call's state id is > aligned (<=) with the server state id, not >=. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12943) Consistent Reads from Standby Node
[ https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720914#comment-16720914 ] Brahma Reddy Battula edited comment on HDFS-12943 at 12/14/18 4:33 AM: --- Thanks all for great work here. I think,write requests can be degraded..? As they also contains some read requests like getFileinfo(),getServerDefaults() ...(getHAServiceState() is newly added) . Just I had checked for mkdir perf,it's like below. * i) getHAServiceState() took 2+ sec ( 3 getHAServiceState() + 2 getFileInfo() + 1 mkdirs = 6 calls) * ii) Every second request is getting timedout[1] and rpc call is getting skipped from observer.( 7 getHAServiceState() + 4 getFileInfo() + 1 mkdirs = 12 calls).Here two getFileInfo() skipped from observer hence it's success with Active. {noformat} time hdfs --loglevel debug dfs -Ddfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider -mkdir /TestsORF1 real 0m4.314s user 0m3.668s sys 0m0.272s time hdfs --loglevel debug dfs -Ddfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider -mkdir /TestsORF2 real 0m22.238s user 0m3.800s sys 0m0.248s {noformat} *without ObserverReadProxyProvider ( 2 getFileInfo() + 1 mkdirs() = 3 Calls)* {noformat} time ./hdfs --loglevel debug dfs -mkdir /TestsCFP real 0m2.105s user 0m3.768s sys 0m0.592s {noformat} *Please correct me if I am missing anyting.* timedout[1],Every second write request I am getting following, did I miss something here,these calls are skipped from observer. {noformat} 2018-12-14 11:21:45,312 DEBUG ipc.Client: closing ipc connection to vm1/10.*.*.*:65110: 1 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.*.*.*:58409 remote=vm1/10.*.*.*:65110] java.net.SocketTimeoutException: 1 millis timeout while waiting for channel to be ready for read. 
ch : java.nio.channels.SocketChannel[connected local=/10.*.*.*:58409 remote=vm1/10.*.*.*:65110] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.FilterInputStream.read(FilterInputStream.java:133) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read(BufferedInputStream.java:265) at java.io.FilterInputStream.read(FilterInputStream.java:83) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:567) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1849) at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1183) at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) 2018-12-14 11:21:45,313 DEBUG ipc.Client: IPC Client (1006094903) connection to vm1/10.*.*.*:65110 from brahma: closed{noformat} was (Author: brahmareddy): Thanks all for great work here. I think,write requests can be degraded..? As they also contains some read requests like getFileinfo(),getServerDefaults() ...(getHAServiceState() is newly added) . Just I had checked for mkdir perf,it's like below. * i) getHAServiceState() took 2+ sec ( 3 getHAServiceState() + 2 getFileInfo() + 1 mkdirs = 6 calls) * ii) Every second request is getting timedout[1] and rpc call is getting skipped from observer.( 7 getHAServiceState() + 4 getFileInfo() + 1 mkdirs = 12 calls).Here two getFileInfo() skipped from observer hence it's success with Active. 
{noformat} time hdfs --loglevel debug dfs -Ddfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider -mkdir /TestsORF1 real 0m4.314s user 0m3.668s sys 0m0.272s time hdfs --loglevel debug dfs -Ddfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider -mkdir /TestsORF2 real 0m22.238s user 0m3.800s sys 0m0.248s {noformat} *without ObserverReadProxyProvider ( 2 getFileInfo() + 1 mkdirs() = 3 Calls)* {noformat} time ./hdfs --loglevel debug dfs -mkdir /TestsCFP real 0m2.105s user 0m3.768s sys 0m0.592s {noformat} *Please correct me if I am missing anyting.* timedout[1],Every second write request I am getting following, did I miss something here,these calls are skipped from observer. {noformat} 2018-12-14 11:21:45,312 DEBUG ipc.Client: closing ipc connection to vm1/10.*.*.*:65110: 1 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.*.*.*:58409 remote=vm1/10.*.*.*:65110] java.net.SocketTimeoutException: 1
[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.
[ https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720910#comment-16720910 ] Yiqun Lin commented on HDFS-13443: -- Thanks [~arshad.mohammad] for updating the patch! Almost looks good to me. Minor comments:
{quote}
+ } else {
+   LOG.warn("Service {} not enabled: depenendent services not enabled.",
+       MountTableRefresherService.class.getSimpleName());
+ }
{quote}
I prefer to log which dependent service here, like {{router admin service or state store service is not enabled.}} > RBF: Update mount table cache immediately after changing (add/update/remove) > mount table entries. > - > > Key: HDFS-13443 > URL: https://issues.apache.org/jira/browse/HDFS-13443 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs > Reporter: Mohammad Arshad > Assignee: Mohammad Arshad > Priority: Major > Labels: RBF > Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, > HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, > HDFS-13443-017.patch, HDFS-13443-HDFS-13891-001.patch, > HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, > HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, > HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, > HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, > HDFS-13443.010.patch, HDFS-13443.011.patch > > > Currently mount table cache is updated periodically, by default cache is > updated every minute. After change in mount table, user operations may still > use old mount table. This is bit wrong. > To update mount table cache, maybe we can do following > * *Add refresh API in MountTableManager which will update mount table cache.* > * *When there is a change in mount table entries, router admin server can > update its cache and ask other routers to update their cache*.
> For example if there are three routers R1, R2, R3 in a cluster then the add mount table entry API, at the admin server side, will perform the following sequence of actions:
> ## user submits an add mount table entry request on R1
> ## R1 adds the mount table entry in the state store
> ## R1 calls refresh API on R2
> ## R1 calls refresh API on R3
> ## R1 directly refreshes its own cache
> ## the add mount table entry response is sent back to the user
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
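The R1/R2/R3 sequence described above can be sketched as follows; {{RefreshModel}}, {{StateStore}}, and {{Router}} are simplified stand-ins for illustration, not the actual MountTableManager/MountTableRefresherService classes:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Simplified model of the proposed immediate mount-table refresh. */
class RefreshModel {
  static class StateStore {
    final Map<String, String> entries = new HashMap<>();
  }

  static class Router {
    final Map<String, String> cache = new HashMap<>();
    /** The proposed refresh API: reload the cache from the state store. */
    void refresh(StateStore store) {
      cache.clear();
      cache.putAll(store.entries);
    }
  }

  final StateStore store = new StateStore();
  final List<Router> routers = new ArrayList<>();

  /** Admin-side addEntry on router r1: persist, fan out, refresh locally. */
  void addEntry(Router r1, String src, String dest) {
    store.entries.put(src, dest);        // 2. write entry to the state store
    for (Router r : routers) {
      if (r != r1) r.refresh(store);     // 3-4. refresh the other routers
    }
    r1.refresh(store);                   // 5. refresh r1's own cache
  }
}
```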
[jira] [Updated] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk
[ https://issues.apache.org/jira/browse/HDFS-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-14135: Attachment: HDFS-14135-08.patch > TestWebHdfsTimeouts Fails intermittently in trunk > - > > Key: HDFS-14135 > URL: https://issues.apache.org/jira/browse/HDFS-14135 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14135-01.patch, HDFS-14135-02.patch, > HDFS-14135-03.patch, HDFS-14135-04.patch, HDFS-14135-05.patch, > HDFS-14135-06.patch, HDFS-14135-07.patch, HDFS-14135-08.patch > > > Reference to failure > https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/982/testReport/junit/org.apache.hadoop.hdfs.web/TestWebHdfsTimeouts/ -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14145) TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded fails sporadically in Trunk
[ https://issues.apache.org/jira/browse/HDFS-14145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720863#comment-16720863 ] Ayush Saxena commented on HDFS-14145: - Thanx [~elgoiri] for reviewing. The failure is due to: {code:java} java.lang.OutOfMemoryError: Java heap space {code} I have verified the results at local.It passes successfully. {noformat} [INFO] --- [INFO] T E S T S [INFO] --- [INFO] Running org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA [INFO] Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 135.954 s - in org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA [INFO] [INFO] Results: [INFO] [INFO] Tests run: 46, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- maven-antrun-plugin:1.7:run (hdfs-test-bats-driver) @ hadoop-hdfs --- [INFO] Executing tasks [INFO] Executed tasks [INFO] [INFO] BUILD SUCCESS [INFO] {noformat} > TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded > fails sporadically in Trunk > > > Key: HDFS-14145 > URL: https://issues.apache.org/jira/browse/HDFS-14145 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14145-01.patch, HDFS-14145-02.patch > > > Reference : > https://builds.apache.org/job/PreCommit-HDFS-Build/25739/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25746/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25768/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: 
hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-14149) Adjust annotations on new interfaces/classes for SBN reads.
Konstantin Shvachko created HDFS-14149: -- Summary: Adjust annotations on new interfaces/classes for SBN reads. Key: HDFS-14149 URL: https://issues.apache.org/jira/browse/HDFS-14149 Project: Hadoop HDFS Issue Type: Sub-task Affects Versions: HDFS-12943 Reporter: Konstantin Shvachko Let's make sure that all new classes and interfaces # do have annotations, as some of them don't, like {{ObserverReadProxyProvider}} # that they are annotated as {{Private}} and {{Evolving}}, to allow room for changes -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
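The convention the ticket asks for can be sketched in a self-contained way. The two annotations below are local stand-ins for Hadoop's real markers, org.apache.hadoop.classification.InterfaceAudience.Private and org.apache.hadoop.classification.InterfaceStability.Evolving; the annotated class name is illustrative only.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Self-contained sketch of the annotation convention requested above.
// The stand-in annotations mirror Hadoop's InterfaceAudience.Private and
// InterfaceStability.Evolving markers (internal audience, API may change).
public class AnnotationSketch {
    @Retention(RetentionPolicy.RUNTIME) @interface Private { }
    @Retention(RetentionPolicy.RUNTIME) @interface Evolving { }

    // New SBN-read classes such as ObserverReadProxyProvider would carry
    // both markers: private audience, with room for incompatible changes.
    @Private
    @Evolving
    static class ObserverReadProxyProviderSketch { }

    public static void main(String[] args) {
        Class<?> c = ObserverReadProxyProviderSketch.class;
        System.out.println(c.isAnnotationPresent(Private.class));  // true
        System.out.println(c.isAnnotationPresent(Evolving.class)); // true
    }
}
```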
[jira] [Commented] (HDDS-881) Encapsulate all client to OM requests into one request type
[ https://issues.apache.org/jira/browse/HDDS-881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720853#comment-16720853 ] Hadoop QA commented on HDDS-881: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 35s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 20s{color} | {color:orange} hadoop-ozone: The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) with tabs. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s{color} | {color:green} root generated 0 new + 13 unchanged - 1 fixed = 13 total (was 14) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 6s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 8s{color} | {color:green} hadoop-hdds in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 5s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 34s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion | | | hadoop.ozone.web.TestOzoneRestWithMiniCluster | | | hadoop.ozone.client.rpc.TestOzoneRpcClient | | | hadoop.ozone.TestMiniOzoneCluster | | | hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline | | | hadoop.ozone.client.rpc.TestContainerStateMachineFailures | | | hadoop.ozone.ozShell.TestOzoneShell | | | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient | | | hadoop.ozone.client.rpc.TestFailureHandlingByClient | | | hadoop.ozone.client.rpc.TestBCSID | | | hadoop.ozone.web.client.TestVolume | | | hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler | | | hadoop.ozone.web.client.TestKeysRatis | | | hadoop.ozone.om.TestContainerReportWithKeys | | | hadoop.ozone.web.client.TestBuckets | | | hadoop.ozone.om.TestOzoneManager | | | hadoop.ozone.web.client.TestKeys | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDDS-881 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12951750/HDDS-881.005.patch | | Optional Tests | asflicense javac javadoc unit findbugs checkstyle | | uname | Linux 8d5e574f3e20 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh | | git revision | trunk / 4aa0609 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | |
[jira] [Updated] (HDFS-14132) Add BlockLocation.isStriped() to determine if block is replicated or Striped
[ https://issues.apache.org/jira/browse/HDFS-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shweta updated HDFS-14132: -- Attachment: HDFS-14132.001.patch > Add BlockLocation.isStriped() to determine if block is replicated or Striped > > > Key: HDFS-14132 > URL: https://issues.apache.org/jira/browse/HDFS-14132 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Shweta >Assignee: Shweta >Priority: Major > > Impala uses FileSystem#getBlockLocation to get block locations. We can add > an isStriped() method to make it easier to determine whether a block belongs > to a replicated file or a striped file. > In HDFS, this isStriped information is already available in > HdfsBlockLocation#LocatedBlock#isStriped(), so adding this method to > BlockLocation does not introduce space overhead. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14132) Add BlockLocation.isStriped() to determine if block is replicated or Striped
[ https://issues.apache.org/jira/browse/HDFS-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shweta updated HDFS-14132: -- Attachment: HDFS-14132.001.patch Status: Patch Available (was: Open) > Add BlockLocation.isStriped() to determine if block is replicated or Striped > > > Key: HDFS-14132 > URL: https://issues.apache.org/jira/browse/HDFS-14132 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Shweta >Assignee: Shweta >Priority: Major > Attachments: HDFS-14132.001.patch > > > Impala uses FileSystem#getBlockLocation to get block locations. We can add > an isStriped() method to make it easier to determine whether a block belongs > to a replicated file or a striped file. > In HDFS, this isStriped information is already available in > HdfsBlockLocation#LocatedBlock#isStriped(), so adding this method to > BlockLocation does not introduce space overhead. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14132) Add BlockLocation.isStriped() to determine if block is replicated or Striped
[ https://issues.apache.org/jira/browse/HDFS-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shweta updated HDFS-14132: -- Attachment: (was: HDFS-14132.001.patch) > Add BlockLocation.isStriped() to determine if block is replicated or Striped > > > Key: HDFS-14132 > URL: https://issues.apache.org/jira/browse/HDFS-14132 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Shweta >Assignee: Shweta >Priority: Major > > Impala uses FileSystem#getBlockLocation to get block locations. We can add > an isStriped() method to make it easier to determine whether a block belongs > to a replicated file or a striped file. > In HDFS, this isStriped information is already available in > HdfsBlockLocation#LocatedBlock#isStriped(), so adding this method to > BlockLocation does not introduce space overhead. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
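The accessor described in HDFS-14132 can be sketched with a minimal, self-contained stand-in for org.apache.hadoop.fs.BlockLocation (the class name and field layout here are illustrative, not the actual Hadoop class): isStriped() just surfaces a flag carried over from the underlying LocatedBlock.

```java
// Minimal stand-in for org.apache.hadoop.fs.BlockLocation, showing how an
// isStriped() accessor lets callers like Impala tell an erasure-coded
// (striped) block apart from a replicated one. Hypothetical sketch only.
public class BlockLocationSketch {
    private final String[] hosts;   // datanodes holding this block
    private final long offset;      // offset of the block within the file
    private final long length;      // block length in bytes
    private final boolean striped;  // true for erasure-coded (striped) blocks

    public BlockLocationSketch(String[] hosts, long offset, long length,
                               boolean striped) {
        this.hosts = hosts;
        this.offset = offset;
        this.length = length;
        this.striped = striped;
    }

    /** Returns true if this block belongs to an erasure-coded (striped) file. */
    public boolean isStriped() {
        return striped;
    }

    public static void main(String[] args) {
        BlockLocationSketch replicated = new BlockLocationSketch(
            new String[]{"dn1", "dn2", "dn3"}, 0L, 128L << 20, false);
        BlockLocationSketch striped = new BlockLocationSketch(
            new String[]{"dn1", "dn2"}, 0L, 128L << 20, true);
        System.out.println(replicated.isStriped()); // false
        System.out.println(striped.isStriped());    // true
    }
}
```

Since the flag is a single boolean already present on the located block, exposing it this way adds no per-location space overhead, which matches the description above.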
[jira] [Updated] (HDDS-881) Encapsulate all client to OM requests into one request type
[ https://issues.apache.org/jira/browse/HDDS-881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDDS-881: Summary: Encapsulate all client to OM requests into one request type (was: Add support to transform client requests to OM into Ratis requests) > Encapsulate all client to OM requests into one request type > --- > > Key: HDDS-881 > URL: https://issues.apache.org/jira/browse/HDDS-881 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Attachments: HDDS-881.001.patch, HDDS-881.002.patch, > HDDS-881.003.patch, HDDS-881.004.patch, HDDS-881.005.patch > > > When OM receives a request, we need to transform the request into Ratis > server compatible request so that the OM's Ratis server can process that > request. > In this Jira, we just add the support to convert a client request received by > OM into a RaftClient request. This transformed request would later be passed > onto the OM's Ratis server. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-881) Add support to transform client requests to OM into Ratis requests
[ https://issues.apache.org/jira/browse/HDDS-881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720826#comment-16720826 ] Hanisha Koneru commented on HDDS-881: - Thank you [~msingh] for the offline discussion. Updated patch v05 to have a single rpc call - submitRequest to OM. All client requests such as createVolume, deleteBucket etc. will be encapsulated in OMRequest. > Add support to transform client requests to OM into Ratis requests > -- > > Key: HDDS-881 > URL: https://issues.apache.org/jira/browse/HDDS-881 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Attachments: HDDS-881.001.patch, HDDS-881.002.patch, > HDDS-881.003.patch, HDDS-881.004.patch, HDDS-881.005.patch > > > When OM receives a request, we need to transform the request into Ratis > server compatible request so that the OM's Ratis server can process that > request. > In this Jira, we just add the support to convert a client request received by > OM into a RaftClient request. This transformed request would later be passed > onto the OM's Ratis server. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-881) Add support to transform client requests to OM into Ratis requests
[ https://issues.apache.org/jira/browse/HDDS-881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDDS-881: Attachment: HDDS-881.005.patch > Add support to transform client requests to OM into Ratis requests > -- > > Key: HDDS-881 > URL: https://issues.apache.org/jira/browse/HDDS-881 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Attachments: HDDS-881.001.patch, HDDS-881.002.patch, > HDDS-881.003.patch, HDDS-881.004.patch, HDDS-881.005.patch > > > When OM receives a request, we need to transform the request into Ratis > server compatible request so that the OM's Ratis server can process that > request. > In this Jira, we just add the support to convert a client request received by > OM into a RaftClient request. This transformed request would later be passed > onto the OM's Ratis server. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
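The "single submitRequest RPC" shape that patch v05 describes can be sketched as one wrapper carrying a command type plus per-command fields. The real change defines protobuf messages (an OMRequest with a command type); the plain classes and names below are illustrative stand-ins, not the actual Ozone API.

```java
import java.util.Map;

// Sketch of encapsulating all client-to-OM calls in one request type that
// flows through a single submitRequest entry point, instead of one RPC per
// operation (createVolume, deleteBucket, ...). Illustrative stand-in only.
public class OmRequestSketch {
    enum CmdType { CREATE_VOLUME, DELETE_BUCKET, COMMIT_KEY }

    final CmdType cmdType;
    final Map<String, String> payload; // per-command fields, flattened for the sketch

    OmRequestSketch(CmdType cmdType, Map<String, String> payload) {
        this.cmdType = cmdType;
        this.payload = payload;
    }

    // Single RPC entry point: every client call arrives here and is
    // dispatched on the command type.
    static String submitRequest(OmRequestSketch req) {
        switch (req.cmdType) {
            case CREATE_VOLUME: return "created volume " + req.payload.get("volume");
            case DELETE_BUCKET: return "deleted bucket " + req.payload.get("bucket");
            case COMMIT_KEY:    return "committed key " + req.payload.get("key");
            default: throw new IllegalArgumentException("unknown command");
        }
    }

    public static void main(String[] args) {
        System.out.println(submitRequest(new OmRequestSketch(
            CmdType.CREATE_VOLUME, Map.of("volume", "vol1"))));
        System.out.println(submitRequest(new OmRequestSketch(
            CmdType.DELETE_BUCKET, Map.of("bucket", "b1"))));
    }
}
```

Funnelling every call through one envelope is what makes it easy to hand the request, unchanged, to the OM's Ratis server for replication.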
[jira] [Commented] (HDFS-14138) Description errors in the comparison logic of transaction ID
[ https://issues.apache.org/jira/browse/HDFS-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720822#comment-16720822 ] xiangheng commented on HDFS-14138: -- Thanks for your suggestions. I'm sorry, I'm a newcomer to the Hadoop community. I created this JIRA three days ago. Please bear with the newcomers; wish you all the best ;), thank you very much. Anyway, I will follow your advice :D. > Description errors in the comparison logic of transaction ID > > > Key: HDFS-14138 > URL: https://issues.apache.org/jira/browse/HDFS-14138 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: HDFS-12943 >Reporter: xiangheng >Priority: Minor > Attachments: HDFS-14138-HDFS-12943.000.patch > > > The call processing should be postponed until the client call's state id is > aligned (<=) with the server state id, not >=. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14138) Description errors in the comparison logic of transaction ID
[ https://issues.apache.org/jira/browse/HDFS-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720798#comment-16720798 ] Chen Liang commented on HDFS-14138: --- Hey [~xiangheng] thanks for looking through the code! I missed this Jira; you can file it as a subtask of HDFS-12943, which would make it a lot easier for us to notice and track :). This is indeed a typo by me, but I think the ongoing HDFS-14146 is fixing this as well. > Description errors in the comparison logic of transaction ID > > > Key: HDFS-14138 > URL: https://issues.apache.org/jira/browse/HDFS-14138 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: HDFS-12943 >Reporter: xiangheng >Priority: Minor > Attachments: HDFS-14138-HDFS-12943.000.patch > > > The call processing should be postponed until the client call's state id is > aligned (<=) with the server state id, not >=. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14138) Description errors in the comparison logic of transaction ID
[ https://issues.apache.org/jira/browse/HDFS-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720731#comment-16720731 ] xiangheng commented on HDFS-14138: -- [~xkrogen], [~csun], [~vagarychen], please give some review suggestions. Thanks, all. > Description errors in the comparison logic of transaction ID > > > Key: HDFS-14138 > URL: https://issues.apache.org/jira/browse/HDFS-14138 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: HDFS-12943 >Reporter: xiangheng >Priority: Minor > Attachments: HDFS-14138-HDFS-12943.000.patch > > > The call processing should be postponed until the client call's state id is > aligned (<=) with the server state id, not >=. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
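The semantics the HDFS-14138 description corrects can be sketched in isolation: a call is processed only once the client's last-seen state id is aligned (<=) with the server's, and postponed while the server is still behind. The method name below is hypothetical, not the actual ipc/FSNamesystem code.

```java
// Sketch of the state-id alignment check discussed above. A client read
// call must wait until the server has caught up with the client's
// last-seen transaction id, i.e. it may proceed only when
// clientStateId <= serverStateId. Method name is hypothetical.
public class StateIdAlignment {
    /** True when the call must wait: the client has seen newer state than the server. */
    static boolean shouldPostpone(long clientStateId, long serverStateId) {
        return clientStateId > serverStateId;
    }

    public static void main(String[] args) {
        // Server is behind the client: postpone the call.
        System.out.println(shouldPostpone(105L, 100L)); // true
        // Server has caught up (or is ahead): process the call.
        System.out.println(shouldPostpone(100L, 100L)); // false
        System.out.println(shouldPostpone(90L, 100L));  // false
    }
}
```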
[jira] [Commented] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720720#comment-16720720 ] Chen Liang commented on HDFS-14116: --- Just one more trivial thing: can we fix at least the third of the checkstyle warnings? I think the fix is just adding 'private' to the service proxy variable. +1 with this fixed. I've also run the failed tests locally and they all passed. > Fix a potential class cast error in ObserverReadProxyProvider > - > > Key: HDFS-14116 > URL: https://issues.apache.org/jira/browse/HDFS-14116 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Chen Liang >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-14116-HDFS-12943.000.patch, > HDFS-14116-HDFS-12943.001.patch, HDFS-14116-HDFS-12943.002.patch, > HDFS-14116-HDFS-12943.003.patch > > > Currently in the {{ObserverReadProxyProvider}} constructor there is this line > {code} > ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext); > {code} > This could potentially cause a failure, because it is possible that the factory > cannot be cast here. Specifically, > {{NameNodeProxiesClient.createFailoverProxyProvider}} is where the > constructor will be called, and there are two paths that could call into this: > (1).{{NameNodeProxies.createProxy}} > (2).{{NameNodeProxiesClient.createFailoverProxyProvider}} > (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses > {{NameNodeHAProxyFactory}} which cannot be cast to > {{ClientHAProxyFactory}}; this happens when, for example, running > NNThroughputBenchmark. To fix this we can at least: > 1. introduce setAlignmentContext to HAProxyFactory, which is the parent of > both ClientHAProxyFactory and NameNodeHAProxyFactory, OR > 2. only setAlignmentContext when it is ClientHAProxyFactory by, say, having > an if check with reflection. > Depending on whether it makes sense to have an alignment context for the case (1) > calling code paths. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
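Option 2 from the description (guard the cast with a type check) can be sketched with minimal stand-in classes. The real types are HAProxyFactory, ClientHAProxyFactory, and NameNodeHAProxyFactory in hadoop-hdfs-client; the helper method name here is illustrative.

```java
// Minimal stand-ins for the proxy-factory hierarchy discussed above,
// showing option 2: only call setAlignmentContext when the factory really
// is a ClientHAProxyFactory, so the NameNodeHAProxyFactory path (e.g.
// NNThroughputBenchmark) no longer hits a ClassCastException.
public class ProxyFactorySketch {
    static class AlignmentContext { }
    interface HAProxyFactory { }
    static class ClientHAProxyFactory implements HAProxyFactory {
        AlignmentContext alignmentContext;
        void setAlignmentContext(AlignmentContext ctx) { this.alignmentContext = ctx; }
    }
    static class NameNodeHAProxyFactory implements HAProxyFactory { }

    /** Returns true if the context was applied; false when the factory has no use for it. */
    static boolean applyAlignmentContext(HAProxyFactory factory, AlignmentContext ctx) {
        if (factory instanceof ClientHAProxyFactory) {
            ((ClientHAProxyFactory) factory).setAlignmentContext(ctx);
            return true;
        }
        return false; // NameNodeHAProxyFactory path: skip instead of throwing
    }

    public static void main(String[] args) {
        AlignmentContext ctx = new AlignmentContext();
        System.out.println(applyAlignmentContext(new ClientHAProxyFactory(), ctx));   // true
        System.out.println(applyAlignmentContext(new NameNodeHAProxyFactory(), ctx)); // false
    }
}
```

Option 1 (pulling setAlignmentContext up into HAProxyFactory) removes the check entirely at the cost of widening the parent interface; the choice hinges on whether the (1) code path ever needs an alignment context.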
[jira] [Commented] (HDFS-14145) TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded fails sporadically in Trunk
[ https://issues.apache.org/jira/browse/HDFS-14145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720705#comment-16720705 ] Íñigo Goiri commented on HDFS-14145: {{TestDFSAdminWithHA}} must be unrelated; [~ayushtkn] can you confirm? {{TestWebHdfsTimeouts}} is well known. +1 on [^HDFS-14145-02.patch]. > TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded > fails sporadically in Trunk > > > Key: HDFS-14145 > URL: https://issues.apache.org/jira/browse/HDFS-14145 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14145-01.patch, HDFS-14145-02.patch > > > Reference : > https://builds.apache.org/job/PreCommit-HDFS-Build/25739/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25746/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25768/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14148) HDFS OIV ReverseXML snapshot issue: Found unknown XML keys in : dir
[ https://issues.apache.org/jira/browse/HDFS-14148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-14148: -- Summary: HDFS OIV ReverseXML snapshot issue: Found unknown XML keys in : dir (was: HDFS OIV ReverseXML doesn't support snapshot well) > HDFS OIV ReverseXML snapshot issue: Found unknown XML keys in : dir > > > Key: HDFS-14148 > URL: https://issues.apache.org/jira/browse/HDFS-14148 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Siyao Meng >Priority: Major > > The current HDFS OIV tool doesn't seem to support snapshot well when > reversing XML back to binary. > {code:bash} > $ hdfs oiv -i fsimage_0026542.xml -o reverse.bin -p ReverseXML > OfflineImageReconstructor failed: Found unknown XML keys in : dir > java.io.IOException: Found unknown XML keys in : dir > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$Node.verifyNoRemainingKeys(OfflineImageReconstructor.java:324) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$SnapshotSectionProcessor.process(OfflineImageReconstructor.java:1357) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1785) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1840) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:193) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:136) > $ grep -n "" fsimage_0026542.xml > 228:222049220495 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14145) TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded fails sporadically in Trunk
[ https://issues.apache.org/jira/browse/HDFS-14145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720685#comment-16720685 ] Hadoop QA commented on HDFS-14145: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 9s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 47s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}131m 22s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.tools.TestDFSAdminWithHA | | | hadoop.hdfs.web.TestWebHdfsTimeouts | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDFS-14145 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12951728/HDFS-14145-02.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 77d352ee6019 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4aa0609 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/25802/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/25802/testReport/ | | Max. process+thread count | 4361 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/25802/console | | Powered by | Apache Yetus 0.8.0
[jira] [Updated] (HDFS-14148) HDFS OIV ReverseXML doesn't support snapshot well
[ https://issues.apache.org/jira/browse/HDFS-14148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-14148: -- Description: The current HDFS OIV tool doesn't seem to support snapshot well when reversing XML back to binary. {code:bash} $ hdfs oiv -i fsimage_0026542.xml -o reverse.bin -p ReverseXML OfflineImageReconstructor failed: Found unknown XML keys in : dir java.io.IOException: Found unknown XML keys in : dir at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$Node.verifyNoRemainingKeys(OfflineImageReconstructor.java:324) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$SnapshotSectionProcessor.process(OfflineImageReconstructor.java:1357) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1785) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1840) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:193) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:136) $ grep -n "" fsimage_0026542.xml 228:222049220495 {code} was: The current HDFS OIV tool doesn't seem to support snapshot when reversing XML back to binary. 
{code:bash} $ hdfs oiv -i fsimage_0026542.xml -o reverse.bin -p ReverseXML OfflineImageReconstructor failed: Found unknown XML keys in : dir java.io.IOException: Found unknown XML keys in : dir at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$Node.verifyNoRemainingKeys(OfflineImageReconstructor.java:324) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$SnapshotSectionProcessor.process(OfflineImageReconstructor.java:1357) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1785) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1840) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:193) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:136) $ grep -n "" fsimage_0026542.xml 228:222049220495 {code} > HDFS OIV ReverseXML doesn't support snapshot well > - > > Key: HDFS-14148 > URL: https://issues.apache.org/jira/browse/HDFS-14148 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Siyao Meng >Priority: Major > > The current HDFS OIV tool doesn't seem to support snapshot well when > reversing XML back to binary. 
> {code:bash} > $ hdfs oiv -i fsimage_0026542.xml -o reverse.bin -p ReverseXML > OfflineImageReconstructor failed: Found unknown XML keys in : dir > java.io.IOException: Found unknown XML keys in : dir > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$Node.verifyNoRemainingKeys(OfflineImageReconstructor.java:324) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$SnapshotSectionProcessor.process(OfflineImageReconstructor.java:1357) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1785) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1840) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:193) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:136) > $ grep -n "" fsimage_0026542.xml > 228:222049220495 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14148) HDFS OIV ReverseXML doesn't support snapshot well
[ https://issues.apache.org/jira/browse/HDFS-14148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-14148: -- Summary: HDFS OIV ReverseXML doesn't support snapshot well (was: HDFS OIV ReverseXML: Support Snapshot) > HDFS OIV ReverseXML doesn't support snapshot well > - > > Key: HDFS-14148 > URL: https://issues.apache.org/jira/browse/HDFS-14148 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Siyao Meng >Priority: Major > > The current HDFS OIV tool doesn't seem to support snapshot when reversing XML > back to binary. > {code:bash} > $ hdfs oiv -i fsimage_0026542.xml -o reverse.bin -p ReverseXML > OfflineImageReconstructor failed: Found unknown XML keys in : dir > java.io.IOException: Found unknown XML keys in : dir > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$Node.verifyNoRemainingKeys(OfflineImageReconstructor.java:324) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$SnapshotSectionProcessor.process(OfflineImageReconstructor.java:1357) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1785) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1840) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:193) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:136) > $ grep -n "" fsimage_0026542.xml > 228:222049220495 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14147) Backport of HDFS-13056 to the 2.9 branch
[ https://issues.apache.org/jira/browse/HDFS-14147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720676#comment-16720676 ] Giovanni Matteo Fumarola commented on HDFS-14147: - cc. [~dennishuo] , [~xiaochen] , [~ste...@apache.org] > Backport of HDFS-13056 to the 2.9 branch > > > Key: HDFS-14147 > URL: https://issues.apache.org/jira/browse/HDFS-14147 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode, distcp, hdfs >Affects Versions: 2.9.0, 2.9.1, 2.9.2 >Reporter: Yan >Priority: Major > Attachments: HDFS-14147-branch-2.9.v1.patch, HDFS-14147.pdf > > > HDFS-13056, Expose file-level composite CRCs in HDFS which are comparable > across different instances/layouts, is a significant feature for storage > agnostic CRC comparisons between HDFS and cloud object stores such as S3 and > GCS. With the extensively installed base of Hadoop 2, it should make a lot of > sense to have the feature in Hadoop 2. > The plan is to start with the backporting to 2.9, followed by 2.8 and 2.7 in > that order. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
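For context (a sketch, not part of the backport patch itself): the HDFS-13056 feature is switched on through a checksum-mode configuration key, so a minimal hdfs-site.xml fragment enabling composite CRCs might look like:

```xml
<!-- Illustrative fragment: selects COMPOSITE_CRC file checksums (HDFS-13056).
     The default MD5-of-MD5-of-CRC mode depends on block/chunk layout;
     COMPOSITE_CRC is comparable across layouts and storage systems. -->
<property>
  <name>dfs.checksum.combine.mode</name>
  <value>COMPOSITE_CRC</value>
</property>
```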
[jira] [Commented] (HDFS-14146) Handle exception from internalQueueCall
[ https://issues.apache.org/jira/browse/HDFS-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720672#comment-16720672 ] Hadoop QA commented on HDFS-14146: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-12943 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 5s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 21s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 47s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 0s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 29s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 53s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 50s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s{color} | {color:green} HDFS-12943 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 19s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 54s{color} | {color:orange} root: The patch generated 2 new + 204 unchanged - 0 fixed = 206 total (was 204) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 8s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 54s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 15s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}184m 34s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.balancer.TestBalancer | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDFS-14146 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12951719/HDFS-14146-HDFS-12943.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 05f8e1947b08 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-12943 / e87e797 | | maven | version:
[jira] [Updated] (HDFS-14148) HDFS OIV ReverseXML: Support Snapshot
[ https://issues.apache.org/jira/browse/HDFS-14148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-14148: -- Description: The current HDFS OIV tool doesn't seem to support snapshot when reversing XML back to binary. {code:bash} $ hdfs oiv -i fsimage_0026542.xml -o reverse.bin -p ReverseXML OfflineImageReconstructor failed: Found unknown XML keys in : dir java.io.IOException: Found unknown XML keys in : dir at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$Node.verifyNoRemainingKeys(OfflineImageReconstructor.java:324) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$SnapshotSectionProcessor.process(OfflineImageReconstructor.java:1357) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1785) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1840) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:193) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:136) $ grep -n "" fsimage_0026542.xml 228:222049220495 {code} was: The current HDFS OIV tool doesn't seem to support snapshot when reversing XML back to binary. 
{code:bash} $ hdfs oiv -i fsimage_0026542.xml -o reverse.bin -p ReverseXML OfflineImageReconstructor failed: Found unknown XML keys in : dir java.io.IOException: Found unknown XML keys in : dir at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$Node.verifyNoRemainingKeys(OfflineImageReconstructor.java:324) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$SnapshotSectionProcessor.process(OfflineImageReconstructor.java:1357) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1785) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1840) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:193) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:136) {code} > HDFS OIV ReverseXML: Support Snapshot > - > > Key: HDFS-14148 > URL: https://issues.apache.org/jira/browse/HDFS-14148 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Siyao Meng >Priority: Major > > The current HDFS OIV tool doesn't seem to support snapshot when reversing XML > back to binary. 
> {code:bash} > $ hdfs oiv -i fsimage_0026542.xml -o reverse.bin -p ReverseXML > OfflineImageReconstructor failed: Found unknown XML keys in : dir > java.io.IOException: Found unknown XML keys in : dir > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$Node.verifyNoRemainingKeys(OfflineImageReconstructor.java:324) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$SnapshotSectionProcessor.process(OfflineImageReconstructor.java:1357) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1785) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1840) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:193) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:136) > $ grep -n "" fsimage_0026542.xml > 228:222049220495 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13873) ObserverNode should reject read requests when it is too far behind.
[ https://issues.apache.org/jira/browse/HDFS-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-13873: --- Fix Version/s: HDFS-12943 > ObserverNode should reject read requests when it is too far behind. > --- > > Key: HDFS-13873 > URL: https://issues.apache.org/jira/browse/HDFS-13873 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client, namenode >Affects Versions: HDFS-12943 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Fix For: HDFS-12943 > > Attachments: HDFS-13873-HDFS-12943.001.patch, > HDFS-13873-HDFS-12943.002.patch, HDFS-13873-HDFS-12943.003.patch, > HDFS-13873-HDFS-12943.004.patch, HDFS-13873-HDFS-12943.005.patch > > > Add a server-side threshold for ObserverNode to reject read requests when it > is too far behind. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
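The rejection logic being added can be sketched roughly as follows. This is an illustrative Python sketch only, not the actual Hadoop implementation; the function and exception names are hypothetical, and the real patch works in terms of NameNode transaction IDs and the client's requested state ID:

```python
class StandbyException(Exception):
    """Hypothetical stand-in for the exception a too-stale observer would raise."""
    pass

def check_observer_read(applied_tx_id, client_state_id, lag_threshold):
    """Reject a read if the observer's applied transaction ID lags the
    client's requested state ID by more than lag_threshold transactions."""
    lag = client_state_id - applied_tx_id
    if lag > lag_threshold:
        raise StandbyException(
            f"Observer is {lag} transactions behind (threshold {lag_threshold})")
    # Within the threshold: the read is allowed to proceed.
```

A client that hits this exception would fail over to another NameNode rather than reading arbitrarily stale data.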
[jira] [Created] (HDFS-14148) HDFS OIV ReverseXML: Support Snapshot
Siyao Meng created HDFS-14148: - Summary: HDFS OIV ReverseXML: Support Snapshot Key: HDFS-14148 URL: https://issues.apache.org/jira/browse/HDFS-14148 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs Affects Versions: 3.0.0 Reporter: Siyao Meng The current HDFS OIV tool doesn't seem to support snapshot when reversing XML back to binary. {code:bash} $ hdfs oiv -i fsimage_0026542.xml -o reverse.bin -p ReverseXML OfflineImageReconstructor failed: Found unknown XML keys in : dir java.io.IOException: Found unknown XML keys in : dir at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$Node.verifyNoRemainingKeys(OfflineImageReconstructor.java:324) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$SnapshotSectionProcessor.process(OfflineImageReconstructor.java:1357) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1785) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1840) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:193) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:136) {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13873) ObserverNode should reject read requests when it is too far behind.
[ https://issues.apache.org/jira/browse/HDFS-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-13873: --- Status: Open (was: Patch Available) More clarification in documentation. JavaDoc only change. > ObserverNode should reject read requests when it is too far behind. > --- > > Key: HDFS-13873 > URL: https://issues.apache.org/jira/browse/HDFS-13873 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client, namenode >Affects Versions: HDFS-12943 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Attachments: HDFS-13873-HDFS-12943.001.patch, > HDFS-13873-HDFS-12943.002.patch, HDFS-13873-HDFS-12943.003.patch, > HDFS-13873-HDFS-12943.004.patch, HDFS-13873-HDFS-12943.005.patch > > > Add a server-side threshold for ObserverNode to reject read requests when it > is too far behind. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13873) ObserverNode should reject read requests when it is too far behind.
[ https://issues.apache.org/jira/browse/HDFS-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720660#comment-16720660 ] Erik Krogen commented on HDFS-13873: +1 LGTM. Just committed based off of v004 Jenkins run given that it was a Javadoc only from v004 -> v005. > ObserverNode should reject read requests when it is too far behind. > --- > > Key: HDFS-13873 > URL: https://issues.apache.org/jira/browse/HDFS-13873 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client, namenode >Affects Versions: HDFS-12943 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Fix For: HDFS-12943 > > Attachments: HDFS-13873-HDFS-12943.001.patch, > HDFS-13873-HDFS-12943.002.patch, HDFS-13873-HDFS-12943.003.patch, > HDFS-13873-HDFS-12943.004.patch, HDFS-13873-HDFS-12943.005.patch > > > Add a server-side threshold for ObserverNode to reject read requests when it > is too far behind. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-13873) ObserverNode should reject read requests when it is too far behind.
[ https://issues.apache.org/jira/browse/HDFS-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen resolved HDFS-13873. Resolution: Fixed > ObserverNode should reject read requests when it is too far behind. > --- > > Key: HDFS-13873 > URL: https://issues.apache.org/jira/browse/HDFS-13873 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client, namenode >Affects Versions: HDFS-12943 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Fix For: HDFS-12943 > > Attachments: HDFS-13873-HDFS-12943.001.patch, > HDFS-13873-HDFS-12943.002.patch, HDFS-13873-HDFS-12943.003.patch, > HDFS-13873-HDFS-12943.004.patch, HDFS-13873-HDFS-12943.005.patch > > > Add a server-side threshold for ObserverNode to reject read requests when it > is too far behind. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13873) ObserverNode should reject read requests when it is too far behind.
[ https://issues.apache.org/jira/browse/HDFS-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-13873: --- Attachment: HDFS-13873-HDFS-12943.005.patch > ObserverNode should reject read requests when it is too far behind. > --- > > Key: HDFS-13873 > URL: https://issues.apache.org/jira/browse/HDFS-13873 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client, namenode >Affects Versions: HDFS-12943 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Attachments: HDFS-13873-HDFS-12943.001.patch, > HDFS-13873-HDFS-12943.002.patch, HDFS-13873-HDFS-12943.003.patch, > HDFS-13873-HDFS-12943.004.patch, HDFS-13873-HDFS-12943.005.patch > > > Add a server-side threshold for ObserverNode to reject read requests when it > is too far behind. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720641#comment-16720641 ] Hadoop QA commented on HDFS-14116: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} HDFS-12943 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 6m 36s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 31s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 5s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 0s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 55s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 16s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 25s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s{color} | {color:green} HDFS-12943 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 34s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 2s{color} | {color:orange} root: The patch generated 3 new + 254 unchanged - 3 fixed = 257 total (was 257) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 13s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 42s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 54s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 47s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}200m 33s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestSequentialBlockGroupId | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDistributedFileSystemWithECFile | | | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy | | | hadoop.hdfs.server.diskbalancer.TestDiskBalancerWithMockMover | | | hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks | | | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDFS-14116 | | JIRA Patch URL |
[jira] [Created] (HDDS-927) Add negative test case for grpc mTLS auth
Ajay Kumar created HDDS-927: --- Summary: Add negative test case for grpc mTLS auth Key: HDDS-927 URL: https://issues.apache.org/jira/browse/HDDS-927 Project: Hadoop Distributed Data Store Issue Type: New Feature Reporter: Ajay Kumar Add negative test case for grpc mTLS auth -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-115) GRPC: Support secure gRPC endpoint with mTLS
[ https://issues.apache.org/jira/browse/HDDS-115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-115: Resolution: Fixed Status: Resolved (was: Patch Available) [~xyao] thanks for contribution. HDDS-927 to track negative test case. > GRPC: Support secure gRPC endpoint with mTLS > - > > Key: HDDS-115 > URL: https://issues.apache.org/jira/browse/HDDS-115 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Attachments: HDDS-115-HDDS-4.001.patch, HDDS-115-HDDS-4.002.patch, > HDDS-115-HDDS-4.003.patch, HDDS-115-HDDS-4.004.patch, > HDDS-115-HDDS-4.005.patch, HDDS-115-HDDS-4.006.patch, > HDDS-115-HDDS-4.008.patch, HDDS-115-HDDS-4.009.patch, > HDDS-115-HDDS-4.010.patch, HDDS-115-HDDS-4.011.patch, > HDDS-115-HDDS-4.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14147) Backport of HDFS-13056 to the 2.9 branch
[ https://issues.apache.org/jira/browse/HDFS-14147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yan updated HDFS-14147: --- Attachment: (was: HDFS-14147.pdf) > Backport of HDFS-13056 to the 2.9 branch > > > Key: HDFS-14147 > URL: https://issues.apache.org/jira/browse/HDFS-14147 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode, distcp, hdfs >Affects Versions: 2.9.0, 2.9.1, 2.9.2 >Reporter: Yan >Priority: Major > Attachments: HDFS-14147-branch-2.9.v1.patch, HDFS-14147.pdf > > > HDFS-13056, Expose file-level composite CRCs in HDFS which are comparable > across different instances/layouts, is a significant feature for storage > agnostic CRC comparisons between HDFS and cloud object stores such as S3 and > GCS. With the extensively installed base of Hadoop 2, it should make a lot of > sense to have the feature in Hadoop 2. > The plan is to start with the backporting to 2.9, followed by 2.8 and 2.7 in > that order. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14147) Backport of HDFS-13056 to the 2.9 branch
[ https://issues.apache.org/jira/browse/HDFS-14147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yan updated HDFS-14147: --- Attachment: HDFS-14147.pdf > Backport of HDFS-13056 to the 2.9 branch > > > Key: HDFS-14147 > URL: https://issues.apache.org/jira/browse/HDFS-14147 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode, distcp, hdfs >Affects Versions: 2.9.0, 2.9.1, 2.9.2 >Reporter: Yan >Priority: Major > Attachments: HDFS-14147-branch-2.9.v1.patch, HDFS-14147.pdf > > > HDFS-13056, Expose file-level composite CRCs in HDFS which are comparable > across different instances/layouts, is a significant feature for storage > agnostic CRC comparisons between HDFS and cloud object stores such as S3 and > GCS. With the extensively installed base of Hadoop 2, it should make a lot of > sense to have the feature in Hadoop 2. > The plan is to start with the backporting to 2.9, followed by 2.8 and 2.7 in > that order. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14146) Handle exception from internalQueueCall
[ https://issues.apache.org/jira/browse/HDFS-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720615#comment-16720615 ] Hadoop QA commented on HDFS-14146: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-12943 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 19s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 28s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 33s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 57s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 31s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s{color} | {color:green} HDFS-12943 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 7s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 52s{color} | {color:orange} root: The patch generated 2 new + 204 unchanged - 0 fixed = 206 total (was 204) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 28s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 25s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}184m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.TestFileAppend2 | | | hadoop.hdfs.TestFileChecksumCompositeCrc | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy | | | hadoop.hdfs.TestPread | | | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.TestReadStripedFileWithDNFailure | | | hadoop.hdfs.server.balancer.TestBalancer | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue
[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient
[ https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720611#comment-16720611 ] Hadoop QA commented on HDFS-14084: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 1s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 43s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 13s{color} | {color:red} root generated 1 new + 1490 unchanged - 0 fixed = 1491 total (was 1490) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 49s{color} | {color:orange} root: The patch generated 19 new + 133 unchanged - 1 fixed = 152 total (was 134) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 29s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 25s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}187m 23s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.web.TestWebHDFS | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDFS-14084 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12951712/HDFS-14084.007.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 9c2c2445a24c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Commented] (HDDS-910) Expose OMMetrics
[ https://issues.apache.org/jira/browse/HDDS-910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720607#comment-16720607 ] Bharat Viswanadham commented on HDDS-910: - Thank you [~elek] for the info. It helped me understand how these things work. Yes, I have also tried with a file sink, and I am able to see metrics. I have one question: I have created HDDS-917 for exposing NodeManagerMXBean. I think for that we need to implement this MetricsSource interface, correct? Since we don't have any @Metrics annotations. > Expose OMMetrics > > > Key: HDDS-910 > URL: https://issues.apache.org/jira/browse/HDDS-910 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDDS-910.00.patch > > > Implement the MetricsSource interface, so that external metrics systems can collect the > OMMetrics. > > From *MetricsSource.java:* > It registers with \{@link MetricsSystem}, which periodically polls it to > collect \{@link MetricsRecord} and passes it to \{@link MetricsSink}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
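The MetricsSource pattern discussed above can be sketched in miniature. The interfaces below are simplified stand-ins for `org.apache.hadoop.metrics2.MetricsSource` and its collector (the real ones carry record builders, tags, and more context); `NodeMetricsSource` and its gauge names are hypothetical, shown only to illustrate how a source without `@Metric` annotations publishes values when polled.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for the metrics2 types; the real interfaces live in
// org.apache.hadoop.metrics2 and are richer than this sketch.
interface MetricsCollector {
    void addGauge(String name, long value);
}

interface MetricsSource {
    // Called periodically by the metrics system to snapshot current values.
    void getMetrics(MetricsCollector collector, boolean all);
}

// A hypothetical source that publishes gauges without @Metric annotations,
// in the style suggested for exposing NodeManagerMXBean.
class NodeMetricsSource implements MetricsSource {
    private final long liveNodes = 3;
    private final long deadNodes = 1;

    @Override
    public void getMetrics(MetricsCollector collector, boolean all) {
        collector.addGauge("LiveNodes", liveNodes);
        collector.addGauge("DeadNodes", deadNodes);
    }
}

public class MetricsSourceSketch {
    public static void main(String[] args) {
        List<String> out = new ArrayList<>();
        MetricsCollector collector = (name, value) -> out.add(name + "=" + value);
        new NodeMetricsSource().getMetrics(collector, true);
        System.out.println(out);
    }
}
```

When registered with the real metrics system, `getMetrics` would be invoked on each poll and the snapshot forwarded to every configured sink (such as the file sink mentioned above).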
[jira] [Commented] (HDFS-14145) TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded fails sporadically in Trunk
[ https://issues.apache.org/jira/browse/HDFS-14145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720603#comment-16720603 ] Ayush Saxena commented on HDFS-14145: - Thanks [~elgoiri] for the agreement. I checked the other tests too. It seems that, barring one which actually has a use case expecting the item to be put in the retry queue, none of them start this thread. I have uploaded v2 with the fix. Please review :) > TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded > fails sporadically in Trunk > > > Key: HDFS-14145 > URL: https://issues.apache.org/jira/browse/HDFS-14145 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14145-01.patch, HDFS-14145-02.patch > > > Reference : > https://builds.apache.org/job/PreCommit-HDFS-Build/25739/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25746/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25768/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/
[jira] [Updated] (HDFS-14145) TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded fails sporadically in Trunk
[ https://issues.apache.org/jira/browse/HDFS-14145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-14145: Attachment: HDFS-14145-02.patch > TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded > fails sporadically in Trunk > > > Key: HDFS-14145 > URL: https://issues.apache.org/jira/browse/HDFS-14145 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14145-01.patch, HDFS-14145-02.patch > > > Reference : > https://builds.apache.org/job/PreCommit-HDFS-Build/25739/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25746/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25768/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14145) TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded fails sporadically in Trunk
[ https://issues.apache.org/jira/browse/HDFS-14145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720588#comment-16720588 ] Íñigo Goiri commented on HDFS-14145: I like that better. Another option would be to spy the monitor thread and check its status. > TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded > fails sporadically in Trunk > > > Key: HDFS-14145 > URL: https://issues.apache.org/jira/browse/HDFS-14145 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14145-01.patch > > > Reference : > https://builds.apache.org/job/PreCommit-HDFS-Build/25739/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25746/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25768/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14134) Idempotent operations throwing RemoteException should not be retried by the client
[ https://issues.apache.org/jira/browse/HDFS-14134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720572#comment-16720572 ] Kitti Nanasi commented on HDFS-14134: - The relevant part is the following: {quote}in FailoverOnNetworkExceptionRetry#shouldRetry we don't fail-over and retry if we're making a non-idempotent call and there's an IOException or SocketException that's not Connect, NoRouteToHost, UnknownHost, or Standby. The rationale of course is that the operation may have reached the server and retrying elsewhere could leave us in an insconsistent state. This means if a client doing a create/delete which gets a SocketTimeoutException (which is an IOE) or an EOF SocketException the exception will be thrown all the way up to the caller of FileSystem/FileContext. That's reasonable because only the user of the API at this level has sufficient knoweldge of how to handle the failure, eg if they get such an exception after issuing a delete they can check if the file still exists and if so re-issue the delete (however they may also not want to do this, and FileContext doesn't know which). {quote} > Idempotent operations throwing RemoteException should not be retried by the > client > -- > > Key: HDFS-14134 > URL: https://issues.apache.org/jira/browse/HDFS-14134 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, hdfs-client, ipc >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Attachments: HDFS-14134.001.patch, HDFS-14134.002.patch, > HDFS-14134.003.patch, HDFS-14134.004.patch, HDFS-14134.005.patch, > HDFS-14134_retrypolicy_change_proposal.pdf > > > Currently, some operations that throw IOException on the NameNode are > evaluated by RetryPolicy as FAILOVER_AND_RETRY, but they should just fail > fast. > For example, when calling getXAttr("user.some_attr", file") where the file > does not have the attribute, NN throws an IOException with message "could not > find attr". 
The current client retry policy determines the action for that to > be FAILOVER_AND_RETRY. The client then fails over and retries until it > reaches the maximum number of retries. Supposedly, the client should be able > to tell that this exception is normal and fail fast. > Moreover, even if the action was FAIL, the RetryInvocationHandler looks at > all the retry actions from all requests, and FAILOVER_AND_RETRY takes > precedence over FAIL action. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-99) Adding SCM Audit log
[ https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720570#comment-16720570 ] Dinesh Chitlangia commented on HDDS-99: --- [~ajayydv] sure, once all audit logging is in, we can look at making improvements as needed. > Adding SCM Audit log > > > Key: HDDS-99 > URL: https://issues.apache.org/jira/browse/HDDS-99 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Xiaoyu Yao >Assignee: Dinesh Chitlangia >Priority: Major > Labels: alpha2 > Attachments: HDDS-99.001.patch, HDDS-99.002.patch > > > This ticket is opened to add SCM audit log.
[jira] [Commented] (HDFS-14146) Handle exception from internalQueueCall
[ https://issues.apache.org/jira/browse/HDFS-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720569#comment-16720569 ] Erik Krogen commented on HDFS-14146: +1 from me pending Jenkins. Thanks [~csun]! > Handle exception from internalQueueCall > --- > > Key: HDFS-14146 > URL: https://issues.apache.org/jira/browse/HDFS-14146 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ipc >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Critical > Attachments: HDFS-14146-HDFS-12943.000.patch, > HDFS-14146-HDFS-12943.001.patch, HDFS-14146-HDFS-12943.002.patch > > > When we re-queue RPC call, the {{internalQueueCall}} will potentially throw > exceptions (e.g., RPC backoff), which is then swallowed. This will cause the > RPC to be silently discarded without response to the client, which is not > good. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14134) Idempotent operations throwing RemoteException should not be retried by the client
[ https://issues.apache.org/jira/browse/HDFS-14134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720564#comment-16720564 ] Lukas Majercak commented on HDFS-14134: --- I'll go through that discussion. > Idempotent operations throwing RemoteException should not be retried by the > client > -- > > Key: HDFS-14134 > URL: https://issues.apache.org/jira/browse/HDFS-14134 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, hdfs-client, ipc >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Attachments: HDFS-14134.001.patch, HDFS-14134.002.patch, > HDFS-14134.003.patch, HDFS-14134.004.patch, HDFS-14134.005.patch, > HDFS-14134_retrypolicy_change_proposal.pdf > > > Currently, some operations that throw IOException on the NameNode are > evaluated by RetryPolicy as FAILOVER_AND_RETRY, but they should just fail > fast. > For example, when calling getXAttr("user.some_attr", file") where the file > does not have the attribute, NN throws an IOException with message "could not > find attr". The current client retry policy determines the action for that to > be FAILOVER_AND_RETRY. The client then fails over and retries until it > reaches the maximum number of retries. Supposedly, the client should be able > to tell that this exception is normal and fail fast. > Moreover, even if the action was FAIL, the RetryInvocationHandler looks at > all the retry actions from all requests, and FAILOVER_AND_RETRY takes > precedence over FAIL action. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14134) Idempotent operations throwing RemoteException should not be retried by the client
[ https://issues.apache.org/jira/browse/HDFS-14134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720563#comment-16720563 ] Kitti Nanasi commented on HDFS-14134: - Yes, this change covers that, I just wanted to understand why you changed it like that, but we're pretty much on the same page now. I have only one concern, which is the case of non-remote IOExceptions on non-idempotent operations, I'm not sure if retrying those will cause any problems. For reference there is a discussion on [HADOOP-7380|https://issues.apache.org/jira/browse/HADOOP-7380] on why it was introduced. Other than that patch v5 looks good. > Idempotent operations throwing RemoteException should not be retried by the > client > -- > > Key: HDFS-14134 > URL: https://issues.apache.org/jira/browse/HDFS-14134 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, hdfs-client, ipc >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Attachments: HDFS-14134.001.patch, HDFS-14134.002.patch, > HDFS-14134.003.patch, HDFS-14134.004.patch, HDFS-14134.005.patch, > HDFS-14134_retrypolicy_change_proposal.pdf > > > Currently, some operations that throw IOException on the NameNode are > evaluated by RetryPolicy as FAILOVER_AND_RETRY, but they should just fail > fast. > For example, when calling getXAttr("user.some_attr", file") where the file > does not have the attribute, NN throws an IOException with message "could not > find attr". The current client retry policy determines the action for that to > be FAILOVER_AND_RETRY. The client then fails over and retries until it > reaches the maximum number of retries. Supposedly, the client should be able > to tell that this exception is normal and fail fast. > Moreover, even if the action was FAIL, the RetryInvocationHandler looks at > all the retry actions from all requests, and FAILOVER_AND_RETRY takes > precedence over FAIL action. 
[jira] [Comment Edited] (HDFS-14145) TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded fails sporadically in Trunk
[ https://issues.apache.org/jira/browse/HDFS-14145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720511#comment-16720511 ] Ayush Saxena edited comment on HDFS-14145 at 12/13/18 8:02 PM: --- [~elgoiri] In the test, at: {code:java} bsmAttemptedItems.add(0L, 0L, 0L, blocksMap, 0); {code} two operations are performed. 1--> {code:java} synchronized (storageMovementAttemptedItems) { storageMovementAttemptedItems.add(itemInfo); } {code} and 2--> {code:java} synchronized (scheduledBlkLocs) { scheduledBlkLocs.putAll(assignedBlocks); } {code} In parallel there is the BlocksStorageMovementAttemptMonitor thread, which in blocksStorageMovementUnReportedItemsCheck() does ---> {code:java} synchronized (storageMovementAttemptedItems) { Iterator iter = storageMovementAttemptedItems.iterator(); ... blockStorageMovementNeeded.add(candidate); iter.remove(); {code} If this thread runs just after (1), it removes the item we added and puts it in blockStorageMovementNeeded. That is why, when we check the assertion: {code:java} assertEquals("Item doesn't exist in the attempted list", 1, bsmAttemptedItems.getAttemptedItemsCount()); {code} we get 0 instead of 1. The monitor thread then comes back after a 1-minute interval. To avoid this, I added the sleep before we enter the add into storageMovementAttemptedItems, so that the thread finishes its first round and doesn't interfere with our process. If waiting doesn't seem like a good alternative, we can remove: {code:java} bsmAttemptedItems.start(); {code} This will not start the monitor thread which is interfering. Observed in the test just above, testAddReportedMoveAttemptFinishedBlocks(): it does something similar and doesn't start it. 
was (Author: ayushtkn): [~elgoiri] In the test the at : {code:java} bsmAttemptedItems.add(0L, 0L, 0L, blocksMap, 0); {code} There are two operations that are performed 1--> {code:java} synchronized (*storageMovementAttemptedItems*) { storageMovementAttemptedItems.add(itemInfo); } {code} and 2--> {code:java} synchronized (scheduledBlkLocs) { scheduledBlkLocs.putAll(assignedBlocks); } {code} And in parallel there is the Thread BlocksStorageMovementAttemptMonitor In which in the blocksStorageMovementUnReportedItemsCheck() ---> {code:java} synchronized (*storageMovementAttemptedItems*) { Iterator iter = storageMovementAttemptedItems .iterator(); . . . blockStorageMovementNeeded.add(candidate); *iter.remove();* {code} If this thread just hits after (1) is performed .This removes the item we added and puts it in blockStorageMovementNeeded.That is why when we check in the assertion : {code:java} assertEquals("Item doesn't exist in the attempted list", 1, bsmAttemptedItems.getAttemptedItemsCount()); {code} We get 0 instead of 1. This thread comes back after 1 minute of interval.To outsmart this move.I added the sleep before we going to enter our process of add into storageMovementAttemptedItems.So that this thread goes up its first round and doesn't interfere in our process. 
> TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded > fails sporadically in Trunk > > > Key: HDFS-14145 > URL: https://issues.apache.org/jira/browse/HDFS-14145 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14145-01.patch > > > Reference : > https://builds.apache.org/job/PreCommit-HDFS-Build/25739/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25746/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25768/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/
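The interleaving walked through above can be reproduced with a stripped-down model. The class below is a hypothetical stand-in for `BlockStorageMovementAttemptedItems`, not the actual SPS code; it only shows why not calling `start()` (the v2 fix) makes the count assertion deterministic, since the monitor thread is what nondeterministically drains the list.

```java
import java.util.ArrayList;
import java.util.List;

// Stripped-down model of the race: a monitor thread may drain the attempted
// list between the test's add() and its assertion. Names are illustrative.
class AttemptedItems {
    private final List<Long> storageMovementAttemptedItems = new ArrayList<>();

    void add(long item) {
        synchronized (storageMovementAttemptedItems) {
            storageMovementAttemptedItems.add(item);
        }
    }

    int getAttemptedItemsCount() {
        synchronized (storageMovementAttemptedItems) {
            return storageMovementAttemptedItems.size();
        }
    }

    // Starting the monitor introduces the nondeterminism; the fix is simply
    // to not start it in tests that only assert on the pending count.
    void start() {
        new Thread(() -> {
            synchronized (storageMovementAttemptedItems) {
                // simulates blocksStorageMovementUnReportedItemsCheck()
                // moving items to the "needed" queue and removing them here
                storageMovementAttemptedItems.clear();
            }
        }).start();
    }
}

public class SpsRaceSketch {
    public static void main(String[] args) {
        AttemptedItems items = new AttemptedItems();
        items.add(0L); // without start(), nothing can drain the list
        System.out.println(items.getAttemptedItemsCount());
    }
}
```

Had `start()` been called before `add()`, the count could be 0 or 1 depending on thread scheduling, which is exactly the sporadic failure.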
[jira] [Commented] (HDFS-13101) Yet another fsimage corruption related to snapshot
[ https://issues.apache.org/jira/browse/HDFS-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720537#comment-16720537 ] Siyao Meng commented on HDFS-13101: --- Hi [~arpitagarwal], sorry for the long delay. We've asked the customer if they are willing to share the image but still haven't received any response so far. > Yet another fsimage corruption related to snapshot > -- > > Key: HDFS-13101 > URL: https://issues.apache.org/jira/browse/HDFS-13101 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang >Priority: Major > > Lately we saw a case similar to HDFS-9406; even though the HDFS-9406 fix is > present, it's likely another case not covered by the fix. We are currently > trying to collect a good fsimage + editlogs to replay to reproduce and > investigate it.
[jira] [Commented] (HDDS-115) GRPC: Support secure gRPC endpoint with mTLS
[ https://issues.apache.org/jira/browse/HDDS-115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720533#comment-16720533 ] Ajay Kumar commented on HDDS-115: - [~xyao] thanks for updating the patch, +1 . > GRPC: Support secure gRPC endpoint with mTLS > - > > Key: HDDS-115 > URL: https://issues.apache.org/jira/browse/HDDS-115 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Attachments: HDDS-115-HDDS-4.001.patch, HDDS-115-HDDS-4.002.patch, > HDDS-115-HDDS-4.003.patch, HDDS-115-HDDS-4.004.patch, > HDDS-115-HDDS-4.005.patch, HDDS-115-HDDS-4.006.patch, > HDDS-115-HDDS-4.008.patch, HDDS-115-HDDS-4.009.patch, > HDDS-115-HDDS-4.010.patch, HDDS-115-HDDS-4.011.patch, > HDDS-115-HDDS-4.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14146) Handle exception from internalQueueCall
[ https://issues.apache.org/jira/browse/HDFS-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720518#comment-16720518 ] Chao Sun commented on HDFS-14146: - Thanks [~xkrogen]. Attached patch v2 to address the comments. > Handle exception from internalQueueCall > --- > > Key: HDFS-14146 > URL: https://issues.apache.org/jira/browse/HDFS-14146 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ipc >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Critical > Attachments: HDFS-14146-HDFS-12943.000.patch, > HDFS-14146-HDFS-12943.001.patch, HDFS-14146-HDFS-12943.002.patch > > > When we re-queue RPC call, the {{internalQueueCall}} will potentially throw > exceptions (e.g., RPC backoff), which is then swallowed. This will cause the > RPC to be silently discarded without response to the client, which is not > good. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
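A sketch of the fix under review, assuming it follows the usual pattern: catch the exception thrown by the internal queueing step and turn it into a client-visible error response instead of swallowing it. All names below are illustrative stand-ins, not the actual `ipc.Server` code.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical backoff exception standing in for the RPC backoff case.
class RpcBackoffException extends RuntimeException {
    RpcBackoffException(String msg) { super(msg); }
}

// Minimal model of a bounded call queue whose internalQueueCall can throw.
class CallQueue {
    private final Deque<String> queue = new ArrayDeque<>();
    private final int capacity;

    CallQueue(int capacity) { this.capacity = capacity; }

    void internalQueueCall(String call) {
        if (queue.size() >= capacity) {
            throw new RpcBackoffException("queue full, backing off: " + call);
        }
        queue.add(call);
    }

    // The fix: when re-queueing fails, surface the failure to the caller
    // as an error response rather than silently dropping the call.
    String requeueCall(String call) {
        try {
            internalQueueCall(call);
            return "queued";
        } catch (RpcBackoffException e) {
            return "error:" + e.getMessage(); // would become an RPC error response
        }
    }
}

public class RequeueSketch {
    public static void main(String[] args) {
        CallQueue q = new CallQueue(1);
        System.out.println(q.requeueCall("call-1"));
        System.out.println(q.requeueCall("call-2"));
    }
}
```

Without the try/catch, the second call would vanish with no response, which is the silent-discard problem described in the issue.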
[jira] [Updated] (HDFS-14146) Handle exception from internalQueueCall
[ https://issues.apache.org/jira/browse/HDFS-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HDFS-14146: Attachment: HDFS-14146-HDFS-12943.002.patch > Handle exception from internalQueueCall > --- > > Key: HDFS-14146 > URL: https://issues.apache.org/jira/browse/HDFS-14146 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ipc >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Critical > Attachments: HDFS-14146-HDFS-12943.000.patch, > HDFS-14146-HDFS-12943.001.patch, HDFS-14146-HDFS-12943.002.patch > > > When we re-queue RPC call, the {{internalQueueCall}} will potentially throw > exceptions (e.g., RPC backoff), which is then swallowed. This will cause the > RPC to be silently discarded without response to the client, which is not > good. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14145) TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded fails sporadically in Trunk
[ https://issues.apache.org/jira/browse/HDFS-14145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720511#comment-16720511 ] Ayush Saxena commented on HDFS-14145: - [~elgoiri] In the test the at : {code:java} bsmAttemptedItems.add(0L, 0L, 0L, blocksMap, 0); {code} There are two operations that are performed 1--> {code:java} synchronized (*storageMovementAttemptedItems*) { storageMovementAttemptedItems.add(itemInfo); } {code} and 2--> {code:java} synchronized (scheduledBlkLocs) { scheduledBlkLocs.putAll(assignedBlocks); } {code} And in parallel there is the Thread BlocksStorageMovementAttemptMonitor In which in the blocksStorageMovementUnReportedItemsCheck() ---> {code:java} synchronized (*storageMovementAttemptedItems*) { Iterator iter = storageMovementAttemptedItems .iterator(); . . . blockStorageMovementNeeded.add(candidate); *iter.remove();* {code} If this thread just hits after (1) is performed .This removes the item we added and puts it in blockStorageMovementNeeded.That is why when we check in the assertion : {code:java} assertEquals("Item doesn't exist in the attempted list", 1, bsmAttemptedItems.getAttemptedItemsCount()); {code} We get 0 instead of 1. This thread comes back after 1 minute of interval.To outsmart this move.I added the sleep before we going to enter our process of add into storageMovementAttemptedItems.So that this thread goes up its first round and doesn't interfere in our process. 
> TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded > fails sporadically in Trunk > > > Key: HDFS-14145 > URL: https://issues.apache.org/jira/browse/HDFS-14145 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14145-01.patch > > > Reference : > https://builds.apache.org/job/PreCommit-HDFS-Build/25739/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25746/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25768/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14134) Idempotent operations throwing RemoteException should not be retried by the client
[ https://issues.apache.org/jira/browse/HDFS-14134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720506#comment-16720506 ] Lukas Majercak commented on HDFS-14134: --- I agree non-remote IOExceptions could be network related, but this is covered right? Non-remote IOExceptions are retried with this change, no matter whether the operation is idempotent. > Idempotent operations throwing RemoteException should not be retried by the > client > -- > > Key: HDFS-14134 > URL: https://issues.apache.org/jira/browse/HDFS-14134 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, hdfs-client, ipc >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Attachments: HDFS-14134.001.patch, HDFS-14134.002.patch, > HDFS-14134.003.patch, HDFS-14134.004.patch, HDFS-14134.005.patch, > HDFS-14134_retrypolicy_change_proposal.pdf > > > Currently, some operations that throw IOException on the NameNode are > evaluated by RetryPolicy as FAILOVER_AND_RETRY, but they should just fail > fast. > For example, when calling getXAttr("user.some_attr", file") where the file > does not have the attribute, NN throws an IOException with message "could not > find attr". The current client retry policy determines the action for that to > be FAILOVER_AND_RETRY. The client then fails over and retries until it > reaches the maximum number of retries. Supposedly, the client should be able > to tell that this exception is normal and fail fast. > Moreover, even if the action was FAIL, the RetryInvocationHandler looks at > all the retry actions from all requests, and FAILOVER_AND_RETRY takes > precedence over FAIL action. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14134) Idempotent operations throwing RemoteException should not be retried by the client
[ https://issues.apache.org/jira/browse/HDFS-14134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720507#comment-16720507 ] Lukas Majercak commented on HDFS-14134: --- I'd argue that this change is even safer, because previously the retry action would be FAIL for: * SocketExceptions (non-idempotent) * non-remote IOExceptions (non-idempotent) > Idempotent operations throwing RemoteException should not be retried by the > client > -- > > Key: HDFS-14134 > URL: https://issues.apache.org/jira/browse/HDFS-14134 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, hdfs-client, ipc >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Attachments: HDFS-14134.001.patch, HDFS-14134.002.patch, > HDFS-14134.003.patch, HDFS-14134.004.patch, HDFS-14134.005.patch, > HDFS-14134_retrypolicy_change_proposal.pdf > > > Currently, some operations that throw IOException on the NameNode are > evaluated by RetryPolicy as FAILOVER_AND_RETRY, but they should just fail > fast. > For example, when calling getXAttr("user.some_attr", "file") where the file > does not have the attribute, NN throws an IOException with message "could not > find attr". The current client retry policy determines the action for that to > be FAILOVER_AND_RETRY. The client then fails over and retries until it > reaches the maximum number of retries. Supposedly, the client should be able > to tell that this exception is normal and fail fast. > Moreover, even if the action was FAIL, the RetryInvocationHandler looks at > all the retry actions from all requests, and FAILOVER_AND_RETRY takes > precedence over FAIL action.
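The decision table being debated in this thread can be summarized in a small sketch. This simplifies `FailoverOnNetworkExceptionRetry#shouldRetry`, modeling the remote/local distinction as a boolean parameter rather than a `RemoteException` type check; it is a sketch of the proposal, not the actual patch.

```java
import java.io.IOException;
import java.net.ConnectException;

// Possible outcomes, mirroring the vocabulary used in the discussion.
enum RetryAction { FAIL, FAILOVER_AND_RETRY }

public class RetrySketch {
    // isRemote = true means the request definitely reached the server and
    // was evaluated there, so retrying elsewhere cannot change the answer.
    static RetryAction shouldRetry(IOException e, boolean isRemote, boolean idempotent) {
        if (e instanceof ConnectException) {
            // Never reached the server: safe to fail over regardless of idempotency.
            return RetryAction.FAILOVER_AND_RETRY;
        }
        if (isRemote) {
            // Server-side exception (e.g. "could not find attr"): fail fast
            // instead of failing over through every NameNode.
            return RetryAction.FAIL;
        }
        // Local/network IOException of unknown delivery status: under the
        // proposed change, retried even for non-idempotent operations.
        return RetryAction.FAILOVER_AND_RETRY;
    }

    public static void main(String[] args) {
        System.out.println(shouldRetry(new IOException("could not find attr"), true, true));
        System.out.println(shouldRetry(new ConnectException("refused"), false, true));
    }
}
```

This captures both points above: remote exceptions fail fast, while the previously-FAIL cases (local SocketExceptions and non-remote IOExceptions on non-idempotent calls) now fail over and retry.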
[jira] [Commented] (HDDS-911) Make TestOzoneManager unit tests independent
[ https://issues.apache.org/jira/browse/HDDS-911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720489#comment-16720489 ] Bharat Viswanadham commented on HDDS-911: - +1. (As for the timeout, a new Jira has been opened.) If there are no more comments, I will commit this by EOD. > Make TestOzoneManager unit tests independent > > > Key: HDDS-911 > URL: https://issues.apache.org/jira/browse/HDDS-911 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Affects Versions: 0.3.0 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Attachments: HDDS-911.002.patch > > > In the latest pre-commit builds, TestOzoneManager.testListVolumes has failed. > Locally it passes. > Sometimes it fails locally if I execute all the tests in > TestOzoneManager. > TestOzoneManager initializes the MiniOzoneCluster in @BeforeClass instead of > @Before. It's faster, but there is a higher chance that the unit tests > affect each other. > I propose to initialize a new MiniOzoneCluster for each test; even if it's > slower, it's safer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
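The @BeforeClass-vs-@Before trade-off behind this issue can be shown with a toy fixture (this is not the MiniOzoneCluster API; ToyCluster and its methods are invented for illustration): a fixture shared across tests accumulates state, so a later test's observations depend on execution order, while a fresh fixture per test keeps tests independent.

```java
// Toy illustration of why a test fixture shared via @BeforeClass lets tests
// leak state into each other, while a fresh fixture per test (@Before) does not.
// ToyCluster is hypothetical; it stands in for an expensive MiniOzoneCluster.
import java.util.ArrayList;
import java.util.List;

public class FixtureIsolationSketch {
    static class ToyCluster {
        final List<String> volumes = new ArrayList<>();
        void createVolume(String name) { volumes.add(name); }
        int listVolumes() { return volumes.size(); }
    }

    public static void main(String[] args) {
        // Shared fixture: "test 2" sees the volume created by "test 1".
        ToyCluster shared = new ToyCluster();
        shared.createVolume("vol-from-test1");    // test 1
        shared.createVolume("vol-from-test2");    // test 2
        System.out.println(shared.listVolumes()); // 2 -- depends on test order

        // Fresh fixture per test: each test observes only its own state.
        ToyCluster fresh = new ToyCluster();
        fresh.createVolume("vol-from-test2");
        System.out.println(fresh.listVolumes());  // 1 -- order-independent
    }
}
```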
[jira] [Commented] (HDDS-99) Adding SCM Audit log
[ https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720487#comment-16720487 ] Ajay Kumar commented on HDDS-99: [~dineshchitlangia] thanks for working on this. Patch looks good to me. I wanted to suggest wrapping all audit-related functionality in a boolean check. We don't have to go through any audit code if audit itself is disabled. Maybe we can do the same for the OzoneManager audit in another jira. > Adding SCM Audit log > > > Key: HDDS-99 > URL: https://issues.apache.org/jira/browse/HDDS-99 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Xiaoyu Yao >Assignee: Dinesh Chitlangia >Priority: Major > Labels: alpha2 > Attachments: HDDS-99.001.patch, HDDS-99.002.patch > > > This ticket is opened to add SCM audit log. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
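The boolean guard suggested above can be sketched as follows (a minimal toy, not the actual SCM audit API; class, field, and method names are invented): check a single flag before touching any audit code, so a disabled audit log costs only one branch per operation instead of message formatting and I/O.

```java
// Minimal sketch of gating audit work behind a boolean, as suggested in the
// comment above. AuditGuardSketch and its members are hypothetical names.
public class AuditGuardSketch {
    final boolean auditEnabled;
    int auditCalls = 0; // counts how often audit work actually ran

    AuditGuardSketch(boolean auditEnabled) { this.auditEnabled = auditEnabled; }

    void audit(String op) {
        auditCalls++; // message formatting + log write would happen here
    }

    void allocateBlock() {
        // ... real SCM work would go here ...
        if (auditEnabled) {        // skip all audit code when auditing is off
            audit("allocateBlock");
        }
    }

    public static void main(String[] args) {
        AuditGuardSketch on = new AuditGuardSketch(true);
        AuditGuardSketch off = new AuditGuardSketch(false);
        on.allocateBlock();
        off.allocateBlock();
        System.out.println(on.auditCalls + " " + off.auditCalls); // 1 0
    }
}
```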
[jira] [Commented] (HDFS-14146) Handle exception from internalQueueCall
[ https://issues.apache.org/jira/browse/HDFS-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720468#comment-16720468 ] Erik Krogen commented on HDFS-14146: Nice job [~csun], this looks great! I have a few very minor style nits: * The comment about blocking within {{internalQueueCall()}} is on the wrong method ({{add()}} instead of {{put()}}) * For {{Server#requeueCall()}}, the {{throws}} should be on the same line as the exceptions? * You're missing a space before the curly brace on Server L2758 The test is clever! > Handle exception from internalQueueCall > --- > > Key: HDFS-14146 > URL: https://issues.apache.org/jira/browse/HDFS-14146 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ipc >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Critical > Attachments: HDFS-14146-HDFS-12943.000.patch, > HDFS-14146-HDFS-12943.001.patch > > > When we re-queue RPC call, the {{internalQueueCall}} will potentially throw > exceptions (e.g., RPC backoff), which is then swallowed. This will cause the > RPC to be silently discarded without response to the client, which is not > good. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
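The shape of the fix under review can be sketched in miniature (hedged: queue, exception, and method names here are illustrative, not Hadoop's actual Server internals): when re-queuing a call fails, for example because the call queue applies backoff, the exception must be turned into an error response to the client rather than swallowed.

```java
// Toy model of HDFS-14146: internalQueueCall may throw on backoff, and
// requeueCall must surface that to the client instead of swallowing it.
// All names are hypothetical stand-ins for the real ipc.Server code.
import java.util.ArrayDeque;
import java.util.Deque;

public class RequeueSketch {
    static class BackoffException extends RuntimeException {}

    final Deque<String> queue = new ArrayDeque<>();
    final int capacity;
    String lastErrorSentToClient = null; // stands in for an RPC error response

    RequeueSketch(int capacity) { this.capacity = capacity; }

    void internalQueueCall(String call) {
        if (queue.size() >= capacity) {
            throw new BackoffException(); // queue full -> client should back off
        }
        queue.add(call);
    }

    void requeueCall(String call) {
        try {
            internalQueueCall(call);
        } catch (BackoffException e) {
            // The important part: do not discard the call silently --
            // answer the client with the failure.
            lastErrorSentToClient = "backoff:" + call;
        }
    }

    public static void main(String[] args) {
        RequeueSketch server = new RequeueSketch(1);
        server.requeueCall("call-1");  // fits in the queue, no error
        server.requeueCall("call-2");  // queue full -> error response recorded
        System.out.println(server.lastErrorSentToClient); // backoff:call-2
    }
}
```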
[jira] [Commented] (HDFS-14145) TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded fails sporadically in Trunk
[ https://issues.apache.org/jira/browse/HDFS-14145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720479#comment-16720479 ] Íñigo Goiri commented on HDFS-14145: I have to say that is a weird place to put a sleep. I would expect the sleep to be before the first assert. Could we do a waitFor for the assert condition? I think I'm missing the race condition; what is the main thread doing? > TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded > fails sporadically in Trunk > > > Key: HDFS-14145 > URL: https://issues.apache.org/jira/browse/HDFS-14145 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14145-01.patch > > > Reference : > https://builds.apache.org/job/PreCommit-HDFS-Build/25739/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25746/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ > https://builds.apache.org/job/PreCommit-HDFS-Build/25768/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/ -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
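The "waitFor" idea suggested above is a polling helper of this general shape (a sketch, not Hadoop's actual GenericTestUtils.waitFor): instead of guessing a single sleep duration, poll the assert condition until it holds or a timeout expires, which makes flaky timing-dependent tests both faster and more robust.

```java
// Generic polling helper of the kind suggested for the flaky test above.
// This is an illustrative sketch, not the Hadoop test-utils implementation.
import java.util.function.BooleanSupplier;

public class WaitForSketch {
    // Polls `condition` every `intervalMs` until it is true or `timeoutMs`
    // elapses; returns whether the condition ever became true.
    static boolean waitFor(BooleanSupplier condition, long intervalMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;          // condition satisfied before the deadline
            }
            Thread.sleep(intervalMs); // poll instead of one fixed sleep
        }
        return condition.getAsBoolean(); // one final check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // The condition becomes true ~50 ms in; waitFor returns soon after.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start >= 50, 10, 1000);
        System.out.println(ok); // true
    }
}
```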
[jira] [Commented] (HDFS-14088) RequestHedgingProxyProvider can throw NullPointerException when failover due to no lock on currentUsedProxy
[ https://issues.apache.org/jira/browse/HDFS-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720473#comment-16720473 ] Íñigo Goiri commented on HDFS-14088: +1 on [^HDFS-14088.006.patch]. [~LiJinglun] can you double check? > RequestHedgingProxyProvider can throw NullPointerException when failover due > to no lock on currentUsedProxy > --- > > Key: HDFS-14088 > URL: https://issues.apache.org/jira/browse/HDFS-14088 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Yuxuan Wang >Assignee: Yuxuan Wang >Priority: Major > Attachments: HDFS-14088.001.patch, HDFS-14088.002.patch, > HDFS-14088.003.patch, HDFS-14088.004.patch, HDFS-14088.005.patch, > HDFS-14088.006.patch > > > {code:java} > if (currentUsedProxy != null) { > try { > Object retVal = method.invoke(currentUsedProxy.proxy, args); > LOG.debug("Invocation successful on [{}]", > currentUsedProxy.proxyInfo); > {code} > If a thread runs the try block and then another thread triggers a failover by calling the > method > {code:java} > @Override > public synchronized void performFailover(T currentProxy) { > toIgnore = this.currentUsedProxy.proxyInfo; > this.currentUsedProxy = null; > } > {code} > it will set currentUsedProxy to null, and the first thread can throw a > NullPointerException. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
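The check-then-use race described in this issue has a standard remedy that can be sketched as follows (hedged: this toy does not claim to be the patch's actual change; class and field names are illustrative): read the shared reference once into a local variable, then check and use that same snapshot, so a concurrent performFailover() nulling the field cannot cause an NPE between the null check and the dereference.

```java
// Sketch of the NPE race and the local-snapshot fix. Names are hypothetical
// stand-ins for RequestHedgingProxyProvider's currentUsedProxy handling.
public class ProxySnapshotSketch {
    static class ProxyInfo {
        final String name;
        ProxyInfo(String name) { this.name = name; }
    }

    private volatile ProxyInfo currentUsedProxy = new ProxyInfo("nn0");

    synchronized void performFailover() {
        currentUsedProxy = null; // another thread may do this at any moment
    }

    String invoke() {
        // Snapshot first; then check and use the SAME reference. Reading the
        // field twice (check, then dereference) is what allows the NPE.
        ProxyInfo proxy = currentUsedProxy;
        if (proxy != null) {
            return "invoked on " + proxy.name;
        }
        return "no current proxy";
    }

    public static void main(String[] args) {
        ProxySnapshotSketch provider = new ProxySnapshotSketch();
        System.out.println(provider.invoke()); // invoked on nn0
        provider.performFailover();
        System.out.println(provider.invoke()); // no current proxy
    }
}
```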