[ https://issues.apache.org/jira/browse/HIVE-27087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17690522#comment-17690522 ]

Vihang Karajgaonkar commented on HIVE-27087:
--------------------------------------------

I understand what you are saying [~amanraj2520], but if the netty upgrade has 
broken a feature we should also consider that. I looked into whether upgrading 
Spark would solve the problem. Unfortunately, even if we upgrade Spark to 2.4 
it would still depend on netty 4.1.47, as seen 
[here|https://github.com/apache/spark/blob/branch-2.4/pom.xml#L634]; since 
4.1.47 predates the removal of the field in 4.1.52.Final, Spark 2.4's 
reflection code would still fail against 4.1.69, so that is not a solution 
either. Is there a way to declare the dependency only for spark-client to 
limit the exposure to it? Hive-on-Spark no longer exists on the master branch, 
so the upgrade doesn't affect master in this context, and hence the goal of 
keeping branch-3 closer to master doesn't make much sense here; the branch-3 
and master branches are significantly different.
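
For the spark-client-only idea, here is a minimal, untested sketch, assuming 
that a version declared directly in the spark-client module's pom overrides 
the version managed in the root pom (standard Maven behavior); the exact 
artifactId may differ from what branch-3 actually uses:

{code:xml}
<!-- spark-client/pom.xml (hypothetical): pin the pre-removal netty only in
     this module so the rest of Hive can stay on 4.1.69.Final. -->
<dependencies>
  <dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <!-- DEFAULT_TINY_CACHE_SIZE still exists up to 4.1.51.Final -->
    <version>4.1.17.Final</version>
  </dependency>
</dependencies>
{code}

A version set directly on a dependency takes precedence over the parent's 
dependencyManagement, which is what would limit the exposure to spark-client.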

> Fix TestMiniSparkOnYarnCliDriver test failures on branch-3
> ----------------------------------------------------------
>
>                 Key: HIVE-27087
>                 URL: https://issues.apache.org/jira/browse/HIVE-27087
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Vihang Karajgaonkar
>            Assignee: Vihang Karajgaonkar
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The TestMiniSparkOnYarnCliDriver tests are failing with the error below:
> [ERROR] 2023-02-16 14:13:08.991 [Driver] SparkContext - Error initializing SparkContext.
> java.lang.RuntimeException: java.lang.NoSuchFieldException: DEFAULT_TINY_CACHE_SIZE
> at org.apache.spark.network.util.NettyUtils.getPrivateStaticField(NettyUtils.java:131) ~[spark-network-common_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.network.util.NettyUtils.createPooledByteBufAllocator(NettyUtils.java:118) ~[spark-network-common_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.network.server.TransportServer.init(TransportServer.java:94) ~[spark-network-common_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.network.server.TransportServer.<init>(TransportServer.java:73) ~[spark-network-common_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.network.TransportContext.createServer(TransportContext.java:114) ~[spark-network-common_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.rpc.netty.NettyRpcEnv.startServer(NettyRpcEnv.scala:119) ~[spark-core_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.rpc.netty.NettyRpcEnvFactory$$anonfun$4.apply(NettyRpcEnv.scala:465) ~[spark-core_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.rpc.netty.NettyRpcEnvFactory$$anonfun$4.apply(NettyRpcEnv.scala:464) ~[spark-core_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:2271) ~[spark-core_2.11-2.3.0.jar:2.3.0]
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) ~[scala-library-2.11.8.jar:?]
> at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:2263) ~[spark-core_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.rpc.netty.NettyRpcEnvFactory.create(NettyRpcEnv.scala:469) ~[spark-core_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.rpc.RpcEnv$.create(RpcEnv.scala:57) ~[spark-core_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.SparkEnv$.create(SparkEnv.scala:249) ~[spark-core_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:175) ~[spark-core_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:256) [spark-core_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.SparkContext.<init>(SparkContext.scala:423) [spark-core_2.11-2.3.0.jar:2.3.0]
> at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58) [spark-core_2.11-2.3.0.jar:2.3.0]
> at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:161) [hive-exec-3.2.0-SNAPSHOT.jar:3.2.0-SNAPSHOT]
> at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:536) [hive-exec-3.2.0-SNAPSHOT.jar:3.2.0-SNAPSHOT]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_322]
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_322]
> The root cause of the problem is that we upgraded the netty library from 
> 4.1.17.Final to 4.1.69.Final. The upgraded library no longer has the 
> `DEFAULT_TINY_CACHE_SIZE` field, which Spark's NettyUtils reads via 
> reflection; the field is still present in 
> [4.1.51.Final|https://github.com/netty/netty/blob/netty-4.1.51.Final/buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java#L46]
>  but was removed in 4.1.52.Final.
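>
> To make the failure mode concrete, here is a small standalone sketch of the
> reflection lookup that Spark's NettyUtils.getPrivateStaticField performs
> (the class name TinyCacheFieldCheck is made up for illustration):
>
> {code:java}
> import io.netty.buffer.PooledByteBufAllocator;
> import java.lang.reflect.Field;
>
> public class TinyCacheFieldCheck {
>     public static void main(String[] args) throws Exception {
>         try {
>             // Spark looks up the private static field by name via reflection.
>             Field f = PooledByteBufAllocator.class
>                     .getDeclaredField("DEFAULT_TINY_CACHE_SIZE");
>             f.setAccessible(true);
>             System.out.println("DEFAULT_TINY_CACHE_SIZE = " + f.get(null));
>         } catch (NoSuchFieldException e) {
>             // On netty 4.1.52.Final and later the field is gone, so this is
>             // the exception that surfaces in the stack trace above.
>             System.out.println("field removed in this netty version: " + e);
>         }
>     }
> }
> {code}
> Against netty 4.1.17.Final this prints the cache size; against 4.1.69.Final
> it takes the NoSuchFieldException branch, matching the test failure.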



