[ https://issues.apache.org/jira/browse/SPARK-30272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17069996#comment-17069996 ]

Sean R. Owen commented on SPARK-30272:
--------------------------------------

Spark still uses Guava 14 in order to retain compatibility with Hadoop <= 
3.2.0. However, this patch tries to ensure Spark doesn't use anything that 
breaks in Guava 27.

Spark doesn't use that class directly, nor do I remember seeing an error like 
this. However, see for example: 
https://github.com/google/guava/blob/master/futures/failureaccess/src/com/google/common/util/concurrent/internal/InternalFutureFailureAccess.java#L26
 which suggests this class is part of a separate JAR that _should_ be included 
automatically as a transitive dependency.

How are you building and running? Are you using Hadoop 3.2.1? 
This doesn't seem to make our PR builders fail, but that doesn't mean there 
isn't some subtle issue in there.

> Remove usage of Guava that breaks in Guava 27
> ---------------------------------------------
>
>                 Key: SPARK-30272
>                 URL: https://issues.apache.org/jira/browse/SPARK-30272
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, SQL
>    Affects Versions: 3.0.0
>            Reporter: Sean R. Owen
>            Assignee: Sean R. Owen
>            Priority: Major
>             Fix For: 3.0.0
>
>
> Background:
> https://issues.apache.org/jira/browse/SPARK-29250
> https://github.com/apache/spark/pull/25932
> Hadoop 3.2.1 will update Guava from 11 to 27. A number of methods changed 
> between those releases, typically just a rename, but that means one set of 
> code can't work with both, while we want to work with both Hadoop 2.x and 
> 3.x. Among them (sketches of possible workarounds appear after this 
> description):
> - Objects.toStringHelper was moved to MoreObjects; we can just use the 
> Commons Lang3 equivalent
> - Objects.hashCode etc were renamed; use java.util.Objects equivalents
> - MoreExecutors.sameThreadExecutor() became directExecutor(); for same-thread 
> execution we can use a dummy implementation of ExecutorService / Executor
> - TypeToken.isAssignableFrom became isSupertypeOf; work around with reflection
> There is probably more to the Guava issue than just this change, but it will 
> make Spark itself work with more Guava versions and reduce our exposure to 
> Guava along the way.
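
For the first two items, here is a minimal sketch of what the replacement looks like; the class and field names are made up for illustration and are not actual Spark code. Commons Lang3's ToStringBuilder stands in for Guava's Objects.toStringHelper / MoreObjects.toStringHelper, and java.util.Objects.hash / Objects.equals stand in for Guava's Objects.hashCode / Objects.equal, so the same code compiles against Guava 11, 14 and 27.

import java.util.Objects;

import org.apache.commons.lang3.builder.ToStringBuilder;
import org.apache.commons.lang3.builder.ToStringStyle;

// Hypothetical class for illustration only; not taken from the Spark codebase.
public class BlockInfo {
  private final String name;
  private final int index;

  public BlockInfo(String name, int index) {
    this.name = name;
    this.index = index;
  }

  @Override
  public String toString() {
    // Commons Lang3 replacement for Guava's (More)Objects.toStringHelper.
    return new ToStringBuilder(this, ToStringStyle.SHORT_PREFIX_STYLE)
        .append("name", name)
        .append("index", index)
        .toString();
  }

  @Override
  public int hashCode() {
    // java.util.Objects.hash replaces Guava's Objects.hashCode(Object...).
    return Objects.hash(name, index);
  }

  @Override
  public boolean equals(Object other) {
    if (!(other instanceof BlockInfo)) {
      return false;
    }
    BlockInfo o = (BlockInfo) other;
    // java.util.Objects.equals replaces Guava's Objects.equal.
    return Objects.equals(name, o.name) && index == o.index;
  }
}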
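
For same-thread execution, a sketch of the kind of dummy ExecutorService meant above: it simply runs every task on the calling thread, so it needs neither MoreExecutors.sameThreadExecutor() (Guava 14) nor directExecutor() / newDirectExecutorService() (Guava 18+). The class name is hypothetical and may not match what Spark actually uses.

import java.util.Collections;
import java.util.List;
import java.util.concurrent.AbstractExecutorService;
import java.util.concurrent.TimeUnit;

// Runs each submitted task inline on the caller's thread.
public final class SameThreadExecutorService extends AbstractExecutorService {
  private volatile boolean shutdown = false;

  @Override
  public void execute(Runnable command) {
    command.run();  // no thread pool; execute directly
  }

  @Override
  public void shutdown() {
    shutdown = true;
  }

  @Override
  public List<Runnable> shutdownNow() {
    shutdown = true;
    return Collections.emptyList();  // nothing is ever queued
  }

  @Override
  public boolean isShutdown() {
    return shutdown;
  }

  @Override
  public boolean isTerminated() {
    return shutdown;
  }

  @Override
  public boolean awaitTermination(long timeout, TimeUnit unit) {
    return shutdown;  // tasks finish synchronously, so there is nothing to wait for
  }
}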
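
For the TypeToken rename, a sketch of the reflection workaround idea: look up isSupertypeOf if it exists (Guava 19+), fall back to isAssignableFrom otherwise (Guava <= 18), then invoke whichever was found. The helper class name is made up; the actual change in Spark may be structured differently.

import java.lang.reflect.Method;
import java.lang.reflect.Type;

import com.google.common.reflect.TypeToken;

// Works with both old and new Guava by resolving the method name at runtime.
public final class TypeTokenCompat {
  public static boolean isSupertypeOf(TypeToken<?> token, Type subtype) {
    Method m;
    try {
      m = TypeToken.class.getMethod("isSupertypeOf", Type.class);      // Guava 19+
    } catch (NoSuchMethodException e) {
      try {
        m = TypeToken.class.getMethod("isAssignableFrom", Type.class); // Guava <= 18
      } catch (NoSuchMethodException e2) {
        throw new IllegalStateException("Unsupported Guava version", e2);
      }
    }
    try {
      return (Boolean) m.invoke(token, subtype);
    } catch (ReflectiveOperationException e) {
      throw new IllegalStateException(e);
    }
  }
}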


