The problem you describe is the motivation for developing Spark on MR3.
From the blog article (https://www.datamonad.com/post/2021-08-18-spark-mr3/):
*The main motivation for developing Spark on MR3 is to allow multiple Spark
applications to share compute resources such as Yarn containers or …*
Hi,
Spark dynamic resource allocation cannot solve my problem, because resources
in our production environment are limited. Under that constraint, I hope to
reserve resources so that jobs from different groups can still be scheduled
in time.
Thank you,
Bowen Song
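For what it's worth, if the different groups' jobs run inside a single Spark application, FAIR scheduler pools with a minShare can reserve capacity per group. Below is a sketch of a fairscheduler.xml under that assumption (pool names and numbers are illustrative; if the groups are separate applications, YARN queue capacities would be the lever instead):

```xml
<?xml version="1.0"?>
<!-- Illustrative fairscheduler.xml: reserve a minimum number of CPU cores
     (minShare) per pool so each group's tasks can be scheduled promptly. -->
<allocations>
  <pool name="short-jobs">
    <schedulingMode>FAIR</schedulingMode>
    <weight>2</weight>
    <minShare>4</minShare>
  </pool>
  <pool name="long-jobs">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>2</minShare>
  </pool>
</allocations>
```

Enable it with spark.scheduler.mode=FAIR and spark.scheduler.allocation.file, then select a pool per job with sc.setLocalProperty("spark.scheduler.pool", "short-jobs").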
Hi. I think you need Spark dynamic resource allocation. Please refer to
https://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation.
And if you use Spark SQL, AQE may help:
https://spark.apache.org/docs/latest/sql-performance-tuning.html#adaptive-query-execution
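For reference, enabling both suggestions might look like this spark-defaults.conf fragment (a sketch from the linked docs; the shuffle-tracking line applies where there is no external shuffle service, e.g. Spark on Kubernetes, while on YARN spark.shuffle.service.enabled=true is the usual companion):

```
spark.dynamicAllocation.enabled                  true
spark.dynamicAllocation.shuffleTracking.enabled  true
spark.dynamicAllocation.minExecutors             1
spark.dynamicAllocation.maxExecutors             20
spark.sql.adaptive.enabled                       true
```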
Bowen
I don't think that is standard SQL. What are you trying to do, and why not
do it outside SQL?
On Tue, May 17, 2022 at 6:03 PM K. N. Ramachandran
wrote:
Gentle ping. Any info here would be great.
Regards,
Ram
On Sun, May 15, 2022 at 5:16 PM K. N. Ramachandran
wrote:
> Hello Spark Users Group,
>
> I've just recently started working on tools that use Apache Spark.
> When I try WAITFOR in the spark-sql command line, I just get:
>
> Error: Error
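For context: WAITFOR is a T-SQL (SQL Server) statement, which is why spark-sql rejects it. If the goal is simply to pause before a statement, the delay can live on the driver side instead. A minimal PySpark-style sketch (the helper name is hypothetical, not a Spark API):

```python
import time


def run_after_delay(spark, query, delay_seconds):
    """Hypothetical helper: sleep on the driver, then submit the query.

    Stands in for a T-SQL WAITFOR DELAY, which Spark SQL does not support.
    """
    time.sleep(delay_seconds)
    return spark.sql(query)
```

For example, run_after_delay(spark, "SELECT 1", 5) instead of WAITFOR DELAY '00:00:05'.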
Yes, it should be possible. Any interest in working on this together? Need
more hands to add more features here :)
On Tue, May 17, 2022 at 2:06 PM Holden Karau wrote:
Could we make it do the same sort of history server fallback approach?
On Tue, May 17, 2022 at 10:41 PM bo yang wrote:
It is like Web Application Proxy in YARN (
https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html),
to provide easy access for Spark UI when the Spark application is running.
When running Spark on Kubernetes with S3, there is no YARN. The reverse
proxy here is
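For readers unfamiliar with the setup, a reverse proxy of this kind essentially maps a stable external path to each running driver's UI service. A hypothetical path-mapping helper (names and URL scheme are illustrative, not taken from the actual project):

```python
def resolve_spark_ui_target(path, driver_services):
    """Map an incoming proxy path like /sparkui/<app>/... to the driver UI.

    driver_services: dict of app name -> driver UI base URL, e.g.
    {"my-app": "http://my-app-driver-svc:4040"}.
    Returns (backend_base_url, remaining_path), or None if no match.
    """
    prefix, _, rest = path.lstrip("/").partition("/")
    if prefix != "sparkui":
        return None
    app, _, tail = rest.partition("/")
    base = driver_services.get(app)
    if base is None:
        return None
    return base, "/" + tail
```

The proxy would then forward the request to backend_base_url plus the remaining path, so users reach every application's UI through one stable hostname.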
Thanks Holden :)
On Mon, May 16, 2022 at 11:12 PM Holden Karau wrote:
> Oh that’s rad
Hi all,
I find Spark performance unstable in this scenario: we divided the jobs into
two groups according to job completion time. One group of jobs had an
execution time of less than 10s, and the other group had an execution time
from 10s to 300s. The reason for the difference is
What's the advantage of using a reverse proxy for the Spark UI?
Thanks
On Tue, May 17, 2022 at 1:47 PM bo yang wrote:
Oh that’s rad
On Tue, May 17, 2022 at 7:47 AM bo yang wrote:
> Hi Spark Folks,
>
> I built a web reverse proxy to access Spark UI on Kubernetes (working
> together with https://github.com/GoogleCloudPlatform/spark-on-k8s-operator).
> Want to share here in case other people have similar need.