[ https://issues.apache.org/jira/browse/YARN-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831901#comment-17831901 ]
ASF GitHub Bot commented on YARN-11664:
---------------------------------------

steveloughran commented on PR #6631:
URL: https://github.com/apache/hadoop/pull/6631#issuecomment-2025705847

Waiting to see what the HDFS people say; mentioned internally. Now, there is a way to do this with a smaller diff: move the IOStreamPair class into hadoop-common *but keep the same package name*. Something to seriously consider, as it would reduce the risk of breaking any code elsewhere that makes explicit use of the class.

> Remove HDFS Binaries/Jars Dependency From YARN
> ----------------------------------------------
>
>                 Key: YARN-11664
>                 URL: https://issues.apache.org/jira/browse/YARN-11664
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: yarn
>            Reporter: Syed Shameerur Rahman
>            Assignee: Syed Shameerur Rahman
>            Priority: Major
>              Labels: pull-request-available
>
> In principle, Hadoop YARN is independent of HDFS: it can work with any
> filesystem. Currently, some YARN code depends on HDFS, and this dependency
> requires YARN to bring some of the HDFS binaries/jars onto its classpath.
> The idea behind this Jira is to remove this dependency so that YARN can run
> without HDFS binaries/jars.
>
> *Scope*
> 1. Non-test classes are considered.
> 2. Some test classes that come in as transitive dependencies are considered.
>
> *Out of scope*
> 1. Test classes in the YARN module are not considered.
>
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> A quick search in the YARN module revealed the following HDFS dependencies:
>
> 1. Constants
> {code:java}
> import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
> import org.apache.hadoop.hdfs.DFSConfigKeys;
> {code}
>
> 2. Exception
> {code:java}
> import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
> {code}
>
> 3. Utility
> {code:java}
> import org.apache.hadoop.hdfs.protocol.datatransfer.IOStreamPair;
> {code}
>
> Both YARN and HDFS depend on the *hadoop-common* module, so:
> * Constants and utility classes can be moved to *hadoop-common*.
> * Instead of DSQuotaExceededException, use the parent exception
> ClusterStorageCapacityExceededException.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
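The exception change proposed above relies on ordinary Java catch-clause subtyping: code that catches the parent type from hadoop-common still handles the HDFS subclass, without compiling against hadoop-hdfs. A minimal self-contained sketch of this pattern follows; it uses stand-in classes (the class `QuotaCatchSketch` and method `handleUpload` are hypothetical), mirroring the relationship between org.apache.hadoop.fs.ClusterStorageCapacityExceededException and the HDFS-side DSQuotaExceededException.

```java
import java.io.IOException;

public class QuotaCatchSketch {
    // Stand-in for org.apache.hadoop.fs.ClusterStorageCapacityExceededException
    // (lives in hadoop-common).
    static class ClusterStorageCapacityExceededException extends IOException {
        ClusterStorageCapacityExceededException(String msg) { super(msg); }
    }

    // Stand-in for org.apache.hadoop.hdfs.protocol.DSQuotaExceededException
    // (lives in hadoop-hdfs and extends the parent above).
    static class DSQuotaExceededException extends ClusterStorageCapacityExceededException {
        DSQuotaExceededException(String msg) { super(msg); }
    }

    // Hypothetical handler compiled only against the parent type: it never
    // names the HDFS subclass, yet still catches it via subtyping.
    static String handleUpload(IOException thrown) {
        try {
            throw thrown;
        } catch (ClusterStorageCapacityExceededException e) {
            return "capacity-exceeded: " + e.getMessage();
        } catch (IOException e) {
            return "other-io: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // The HDFS-specific subclass is routed to the capacity-exceeded branch.
        System.out.println(handleUpload(new DSQuotaExceededException("disk space quota")));
        // Unrelated IO failures take the generic branch.
        System.out.println(handleUpload(new IOException("unrelated")));
    }
}
```

This is why swapping the catch target to the parent exception removes the hdfs jar from YARN's compile-time classpath while preserving runtime behavior when HDFS is actually the backing filesystem.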