Hi,

just trying to understand:
1. Are you using JDBC to consume data from Hive?
2. Or are you reading the data directly from S3, and using the Hive
Metastore in Spark only to find out where the table is stored and its
metadata?
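
To illustrate the difference, a rough sketch of the two patterns (the
JDBC URL and the table names below are just placeholders):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("demo")
      .enableHiveSupport()   // needed for pattern 2, so spark.table() resolves via the Metastore
      .getOrCreate()

    // Pattern 1: rows are pulled through a HiveServer2 JDBC endpoint
    val viaJdbc = spark.read
      .format("jdbc")
      .option("url", "jdbc:hive2://hive-server:10000/default")
      .option("dbtable", "events")
      .load()

    // Pattern 2: the Metastore supplies only the location and schema;
    // Spark reads the underlying files (e.g. on S3) directly
    val viaMetastore = spark.table("db.events")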

Regards,
Gourav Sengupta

On Thu, Dec 23, 2021 at 2:13 PM Arthur Li <lianyou1...@126.com> wrote:

> Dear experts,
>
> Recently there have been some OOM issues in my demo jobs, which consume
> data from the Hive database. I know I can increase the executor memory
> size to eliminate the OOM errors, but I don't know how to assess the
> required executor memory, or how to automatically adapt it to the size
> of the data.
>
> Any advice is appreciated.
> Arthur Li
>
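P.S. On the executor memory question: executor memory is fixed when the
application starts, so Spark will not grow it with the input by itself.
A rough sketch of where it is set (the 4g/1g values are placeholders,
not a sizing recommendation):

    import org.apache.spark.sql.SparkSession

    // Executor memory is fixed for the lifetime of the application and
    // must be chosen before the executors launch. Placeholder values.
    val spark = SparkSession.builder()
      .appName("demo")
      .config("spark.executor.memory", "4g")          // JVM heap per executor
      .config("spark.executor.memoryOverhead", "1g")  // off-heap / native overhead
      .getOrCreate()

Dynamic allocation (spark.dynamicAllocation.enabled) scales the number
of executors with the workload, but each executor keeps the same memory,
so the per-executor size still has to be estimated from the input.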
