[ https://issues.apache.org/jira/browse/SPARK-30272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070255#comment-17070255 ]

Jorge Machado commented on SPARK-30272:
---------------------------------------

So I was able to fix it. I built with the hadoop-3.2 profile, but after the 
build the hadoop-azure jar was missing, so I added it manually to my container 
and now it seems to load. 

I was trying to bring in Guava 28 and remove 14, but that is a lot of work... 
why do we use such an old Guava version? 

> Remove usage of Guava that breaks in Guava 27
> ---------------------------------------------
>
>                 Key: SPARK-30272
>                 URL: https://issues.apache.org/jira/browse/SPARK-30272
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, SQL
>    Affects Versions: 3.0.0
>            Reporter: Sean R. Owen
>            Assignee: Sean R. Owen
>            Priority: Major
>             Fix For: 3.0.0
>
>
> Background:
> https://issues.apache.org/jira/browse/SPARK-29250
> https://github.com/apache/spark/pull/25932
> Hadoop 3.2.1 will update Guava from 11 to 27. A number of methods changed 
> between those releases, typically just a rename, but it means one set of 
> code can't work with both, while we want to work with both Hadoop 2.x and 
> 3.x. Among them (version-neutral sketches for each follow below):
> - Objects.toStringHelper was moved to MoreObjects; we can just use the 
> Commons Lang3 equivalent
> - Objects.hashCode etc. were renamed; use the java.util.Objects equivalents
> - MoreExecutors.sameThreadExecutor() became directExecutor(); for same-thread 
> execution we can use a dummy implementation of ExecutorService / Executor
> - TypeToken.isAssignableFrom became isSupertypeOf; work around with reflection
> There is probably more to the Guava issue than just this change, but it will 
> make Spark itself work with more Guava versions and reduce our exposure to 
> Guava along the way.
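
To make the first two items concrete, here is a minimal sketch of a 
version-neutral toString/hashCode. The BlockInfo class and its fields are 
hypothetical, for illustration only, and not taken from the actual Spark 
patch:

{code:java}
import java.util.Objects;

import org.apache.commons.lang3.builder.ToStringBuilder;
import org.apache.commons.lang3.builder.ToStringStyle;

// Hypothetical example class; not from the Spark patch itself.
public class BlockInfo {
  private final String name;
  private final int size;

  public BlockInfo(String name, int size) {
    this.name = name;
    this.size = size;
  }

  @Override
  public String toString() {
    // Guava 11's Objects.toStringHelper(this) moved to
    // MoreObjects.toStringHelper(this) in later releases; the Commons Lang3
    // builder works no matter which Guava is on the classpath.
    return new ToStringBuilder(this, ToStringStyle.SHORT_PREFIX_STYLE)
        .append("name", name)
        .append("size", size)
        .toString();
  }

  @Override
  public int hashCode() {
    // com.google.common.base.Objects.hashCode was renamed across Guava
    // releases; java.util.Objects.hash is version-independent.
    return Objects.hash(name, size);
  }

  @Override
  public boolean equals(Object other) {
    if (!(other instanceof BlockInfo)) {
      return false;
    }
    BlockInfo that = (BlockInfo) other;
    return size == that.size && Objects.equals(name, that.name);
  }
}
{code}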
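
For the executor item, the dummy same-thread implementation is only a few 
lines. This sketch assumes callers only need the Executor interface rather 
than a full ExecutorService:

{code:java}
import java.util.concurrent.Executor;

// Runs each task on the calling thread, like Guava's
// MoreExecutors.sameThreadExecutor() (pre-27) / directExecutor() (27+),
// but with no Guava dependency at all.
public final class SameThreadExecutor implements Executor {
  @Override
  public void execute(Runnable command) {
    command.run();
  }
}
{code}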
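
And for the TypeToken rename, the reflection workaround could look like the 
following sketch. The method names are the real Guava ones, but the wrapper 
class itself is hypothetical:

{code:java}
import java.lang.reflect.Method;

import com.google.common.reflect.TypeToken;

// Resolves isSupertypeOf where it exists (Guava 19+) and falls back to the
// older isAssignableFrom, so the same bytecode works against both old and
// new Guava.
public final class TypeTokenCompat {
  private static final Method IS_SUPERTYPE_OF = lookup();

  private TypeTokenCompat() {}

  private static Method lookup() {
    try {
      return TypeToken.class.getMethod("isSupertypeOf", TypeToken.class);
    } catch (NoSuchMethodException e) {
      try {
        return TypeToken.class.getMethod("isAssignableFrom", TypeToken.class);
      } catch (NoSuchMethodException e2) {
        throw new IllegalStateException("Unsupported Guava version", e2);
      }
    }
  }

  public static boolean isSupertypeOf(TypeToken<?> a, TypeToken<?> b) {
    try {
      return (Boolean) IS_SUPERTYPE_OF.invoke(a, b);
    } catch (ReflectiveOperationException e) {
      throw new IllegalStateException(e);
    }
  }
}
{code}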


