[ https://issues.apache.org/jira/browse/SPARK-30272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean R. Owen resolved SPARK-30272.
----------------------------------
    Fix Version/s: 3.0.0
       Resolution: Fixed

Issue resolved by pull request 26911
[https://github.com/apache/spark/pull/26911]

> Remove usage of Guava that breaks in Guava 27
> ---------------------------------------------
>
>                 Key: SPARK-30272
>                 URL: https://issues.apache.org/jira/browse/SPARK-30272
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, SQL
>    Affects Versions: 3.0.0
>          Reporter: Sean R. Owen
>          Assignee: Sean R. Owen
>          Priority: Major
>           Fix For: 3.0.0
>
>
> Background:
> https://issues.apache.org/jira/browse/SPARK-29250
> https://github.com/apache/spark/pull/25932
>
> Hadoop 3.2.1 will update Guava from 11 to 27. A number of methods changed
> between those releases, typically just a rename, which means one body of
> code cannot work with both versions, while Spark needs to work with both
> Hadoop 2.x and 3.x. Among them:
> - Objects.toStringHelper was moved to MoreObjects; we can use the
>   Commons Lang3 equivalent instead
> - Objects.hashCode etc. were renamed; use the java.util.Objects equivalents
> - MoreExecutors.sameThreadExecutor() became directExecutor(); for
>   same-thread execution we can use a trivial implementation of
>   ExecutorService / Executor
> - TypeToken.isAssignableFrom became isSupertypeOf; work around it with
>   reflection
>
> There is probably more to the Guava issue than just this change, but it
> will make Spark itself work with more Guava versions and reduce our
> exposure to Guava along the way.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
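The java.util.Objects replacement for the renamed Guava helpers might look like the following sketch. The Point class is illustrative, not Spark code; the point is that java.util.Objects.hash and plain equality checks cover what com.google.common.base.Objects.hashCode/equal provided, with no Guava version dependency at all.

```java
import java.util.Objects;

// Illustrative class (not from Spark) showing JDK replacements for the
// Guava helpers that were renamed between Guava 11 and 27.
class Point {
    final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public int hashCode() {
        // Replaces com.google.common.base.Objects.hashCode(x, y),
        // which was renamed in later Guava releases.
        return Objects.hash(x, y);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        // Replaces Guava's Objects.equal(a, b) for reference fields;
        // primitives need only ==.
        return x == p.x && y == p.y;
    }
}
```

Because these are JDK methods (Java 7+), they behave identically whichever Guava version Hadoop puts on the classpath.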
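The "trivial implementation of Executor" mentioned for the sameThreadExecutor()/directExecutor() rename could be sketched as below. The class name SameThreadExecutor is hypothetical, not Spark's actual class; the idea is simply that running the Runnable on the calling thread reproduces directExecutor() semantics without referencing either Guava method name.

```java
import java.util.concurrent.Executor;

// Hypothetical same-thread Executor (name is illustrative, not Spark's).
// Running the task directly on the caller's thread matches the behavior of
// Guava's MoreExecutors.sameThreadExecutor() (Guava 11) and
// MoreExecutors.directExecutor() (Guava 18+), without compiling against
// either name.
class SameThreadExecutor implements Executor {
    @Override
    public void execute(Runnable command) {
        command.run(); // synchronous, on the calling thread
    }
}
```

A caller sees the task complete before execute() returns, which is exactly the same-thread contract the renamed Guava methods provide.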
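The reflection workaround for the TypeToken.isAssignableFrom → isSupertypeOf rename could follow the pattern sketched here. Guava is not on the classpath in this sketch, so OldApi and NewApi are stand-ins modeling the two method names (the real TypeToken methods take a Type/TypeToken argument, not Object); TypeTokenCompat is a hypothetical helper, not Spark's actual code. The pattern is: try the new name, fall back to the old one, so one compiled artifact works against either Guava version.

```java
import java.lang.reflect.Method;

// Stand-in for Guava 11's TypeToken, which exposed isAssignableFrom.
class OldApi {
    public boolean isAssignableFrom(Object o) { return true; }
}

// Stand-in for Guava 19+'s TypeToken, where the method became isSupertypeOf.
class NewApi {
    public boolean isSupertypeOf(Object o) { return true; }
}

// Hypothetical compat shim: look up whichever method name this Guava
// version provides and invoke it reflectively.
class TypeTokenCompat {
    static boolean isSupertypeOf(Object token, Object arg) {
        for (String name : new String[] {"isSupertypeOf", "isAssignableFrom"}) {
            try {
                Method m = token.getClass().getMethod(name, Object.class);
                return (Boolean) m.invoke(token, arg);
            } catch (NoSuchMethodException e) {
                // This Guava version lacks this name; try the other one.
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException(e);
            }
        }
        throw new IllegalStateException("no compatible TypeToken method found");
    }
}
```

Reflection trades a little per-call overhead for not having to compile against a method name that only exists in one of the two Guava versions.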