LuciferYang commented on PR #40847:
URL: https://github.com/apache/spark/pull/40847#issuecomment-1521071604
> Interesting, thanks for the detailed analysis @LuciferYang !
>
> > Using hadoop 3.2.x can't build the hadoop-cloud module either
>
> This is Hadoop 3.2.2? I remember at some poi
LuciferYang commented on PR #40847:
URL: https://github.com/apache/spark/pull/40847#issuecomment-1520363712
More
1. The conclusions for hadoop 3.0.x and hadoop 3.1.x are the same
2. Using hadoop 3.2.x can't build the `hadoop-cloud` module either
3. Currently, only hadoop 3.3.x can build all
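As a sketch of how such a per-version build check can be run: Spark's standard build supports pinning the Hadoop version and enabling the `hadoop-cloud` module via Maven profiles. The exact version number below is an arbitrary example, not one confirmed in this thread.

```shell
# Illustrative invocation (not taken from this PR): build only the
# hadoop-cloud module against a pinned Hadoop version.
#   -Phadoop-cloud     enables the optional hadoop-cloud module
#   -Dhadoop.version   overrides the default Hadoop client version
#   -pl ... -am        restricts the build to that module and its deps
./build/mvn -Phadoop-cloud -Dhadoop.version=3.2.4 -DskipTests \
  -pl hadoop-cloud -am clean package
```

Repeating this with `-Dhadoop.version` set to a 3.0.x, 3.1.x, 3.2.x, or 3.3.x release is one way to reproduce the per-version results summarized above.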
LuciferYang commented on PR #40847:
URL: https://github.com/apache/spark/pull/40847#issuecomment-1520218583
@xkrogen @sunchao @pan3793 Sharing my experimental results:
1. Before building, we need to add the following content to
`resource-managers/yarn/pom.xml`, referring to
https://git
LuciferYang commented on PR #40847:
URL: https://github.com/apache/spark/pull/40847#issuecomment-1519102114
Converted to draft to avoid accidental merging.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to
LuciferYang commented on PR #40847:
URL: https://github.com/apache/spark/pull/40847#issuecomment-1518931685
> > So if there is a way to build and test Hadoop 3.0/3.1 successfully
before this PR, but that ability is lost after this PR, I think we should stop this work
because Apache Spark has not previ
LuciferYang commented on PR #40847:
URL: https://github.com/apache/spark/pull/40847#issuecomment-1517170281
@xkrogen @sunchao @pan3793 I would like to clarify: dropping support for
compiling against Hadoop 3.0/3.1 is not the original intention of
this PR.
So if there is a
LuciferYang commented on PR #40847:
URL: https://github.com/apache/spark/pull/40847#issuecomment-1515842174
> ~Does it mean we drop support for building against vanilla Hadoop3 client?~
>
>
https://github.com/apache/spark/blob/09a43531d30346bb7c8d213822513dc35c70f82e/sql/hive/src/main