I have a few problems, and I would like to know whether there is currently no
solution for them (due to the current implementation) or whether there is a way
that I am simply not aware of.
1)
Currently, we can enable and configure dynamic resource allocation based on the
documentation below:
https://spark.apache.org/docs/late
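For reference, a minimal sketch of what enabling dynamic allocation can look like when building a SparkSession (the executor counts and application name are illustrative, not taken from the original message; in practice these properties are often set via spark-submit or spark-defaults.conf instead):

import org.apache.spark.sql.SparkSession

// Sketch: enable dynamic resource allocation with illustrative limits.
val spark = SparkSession.builder()
  .appName("dynamic-allocation-example")                       // hypothetical name
  .config("spark.dynamicAllocation.enabled", "true")
  // Either an external shuffle service or shuffle tracking (Spark 3.x)
  // is needed so executors can be released safely.
  .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
  .config("spark.dynamicAllocation.minExecutors", "1")          // illustrative value
  .config("spark.dynamicAllocation.maxExecutors", "20")         // illustrative value
  .getOrCreate()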
Are the new Spark clients capable of connecting to a Hadoop 2.x cluster?
(In a simple test, a Spark 3.2.1 client had no problem with a Hadoop 2.7
cluster, but we wanted to know whether there is any guarantee from Spark.)
Thank you very much in advance
Amin Borjian
Hello all,
We use Apache Spark 3.2.0, and our data is stored on Apache Hadoop in Parquet
format.
One of the advantages of the Parquet format is its predicate pushdown filter
feature, which allows only the necessary data to be read. This feature is well
supported by Spark.
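For example, a minimal sketch of a query whose filter can be pushed down to the Parquet reader (the dataset path and column names below are hypothetical, not from the original message):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().appName("pushdown-example").getOrCreate()

// Hypothetical dataset path and column name, for illustration only.
val df = spark.read.parquet("hdfs:///data/events")
  .filter(col("eventDate") === "2021-11-24")   // filter eligible for pushdown

// The extended plan shows which predicates were pushed down:
// look for "PushedFilters" in the FileScan parquet node.
df.explain(true)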
From: Sean Owen <sro...@gmail.com>
Sent: Wednesday, November 24, 2021 10:48 PM
To: Amin Borjian <borjianami...@outlook.com>
Cc: user@spark.apache.org
Subject: Re: [Spark] Does Spark support backward and forward compatibility?
I think
From: Sean Owen <sro...@gmail.com>
Sent: Wednesday, November 24, 2021 5:38 PM
To: Amin Borjian <borjianami...@outlook.com>
Cc: user@spark.apache.org
Subject: Re: [Spark] Does Spark support backward and forward compatibility?
Can you mix different Spark versions on the driver and executors?
I have a simple question about using Spark. Although most tools usually answer
this kind of question explicitly (in prominent text, such as a dedicated
section or a separate page), I did not find it anywhere. Maybe my search was
not thorough enough, but I thought it would be good to ask this question on the
mailing list.
be more restrictive, and it was strange to us that the simple query described
in the email needed such a long run time (because with such an execution time,
heavier queries would take even longer).
Does it make sense for the query to take this long? Is there something we need
to pay attention to, or something we can change to improve it?
Thanks,
Amin Borjian