@Bjørn Jørgensen
I did some investigation into upgrading Guava after Spark drops Hadoop 2 support,
but unfortunately Hive still depends on it. Worse, Guava's classes are marked
as shared in IsolatedClientLoader [1], which means Spark cannot upgrade Guava
even after that change.
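For context, the "shared" mechanism works by prefix-matching class names: classes whose names match a shared prefix are loaded from Spark's own classloader rather than the isolated Hive one, so both sides must agree on one Guava version. A minimal, hypothetical sketch of that check (the object name and the exact prefix list here are assumptions for illustration, not Spark's actual list):

```scala
// Hypothetical sketch of the shared-class check, modeled loosely on
// Spark's IsolatedClientLoader. Classes matching a shared prefix are
// delegated to Spark's classloader instead of the isolated Hive one,
// which is why Guava's version is pinned across both.
object SharedClassCheck {
  // Illustrative prefix list (assumed, not Spark's real one).
  val sharedPrefixes: Seq[String] =
    Seq("java.", "scala.", "com.google.", "org.slf4j.")

  def isShared(name: String): Boolean =
    sharedPrefixes.exists(p => name.startsWith(p))

  def main(args: Array[String]): Unit = {
    // Guava classes fall under "com.google.", so they are shared.
    println(SharedClassCheck.isShared("com.google.common.collect.ImmutableList")) // true
    // Hive's own classes stay inside the isolated loader.
    println(SharedClassCheck.isShared("org.apache.hadoop.hive.ql.Driver")) // false
  }
}
```

Because shared classes bypass isolation, bumping Guava on the Spark side would change the version Hive's client sees as well, which is why the dependency cannot be upgraded independently.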
@Dongjoon Hyun Thank you.
I have two points to discuss.
First, we are currently conducting tests with Python versions 3.8 and 3.9.
Should we consider replacing 3.9 with 3.11?
Secondly, I'd like to know the status of Google Guava.
With Hadoop version 2 no longer being supported, is there any plan to upgrade it?
Hi folks,
Thanks a lot for the help from Hyukjin! We've created
https://github.com/apache/spark-connect-go as the first contrib repository
for Spark Connect under the Apache Spark project. We will move the
development of the Golang client to this repository and make that very clear
in the README.
I think it makes sense to split this discussion into two pieces. On the
contribution side, my personal perspective is that these new clients should be
explicitly marked as experimental and unsupported until we deem them mature
enough to be supported via the standard release process, etc. However, the
I don't know whether it is related, but Scala 2.12.17 is fine for the Spark
3 family (compile and run). I spent a day compiling Spark 3.4.0 code
against Scala 2.13.8 with Maven and was getting all sorts of weird and
wonderful errors at runtime.
HTH
Mich Talebzadeh,
Lead Solutions