Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-11 Thread Mich Talebzadeh
Hi Patrick,

There is nothing wrong with on-premise Hive; it is the best data warehouse
there is.

Hive handles both the ORC and Parquet formats well. They are both columnar
implementations of the relational model. What you are seeing is the Spark API
to Hive, which prefers Parquet; I found this out a few years ago.

From your point of view, I suggest you stick to the Parquet format for Hive
tables used from Spark. As far as I know you don't have a fully independent
Hive DB yet.

Anyway, stick with Hive for now, as you never know what issues you may face
when moving to Delta Lake.

You can also enable compression:

STORED AS PARQUET
TBLPROPERTIES ("parquet.compression"="SNAPPY")

Also, compute column statistics:

ANALYZE TABLE <table_name> COMPUTE STATISTICS FOR COLUMNS
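
To make this concrete, here is a minimal sketch of both suggestions applied to
one table, reusing the MemberEnrollment columns from the DDL quoted further
down this thread (note that when run through Spark SQL, the column-statistics
form needs either an explicit column list or FOR ALL COLUMNS):

CREATE TABLE MemberEnrollment
(
    ID INT
    , MemberID VARCHAR(50)
    , StartDate DATE
    , EndDate DATE
)
STORED AS PARQUET
TBLPROPERTIES ("parquet.compression"="SNAPPY");

ANALYZE TABLE MemberEnrollment COMPUTE STATISTICS FOR ALL COLUMNS;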

HTH

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


   view my Linkedin profile



 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Fri, 11 Aug 2023 at 11:26, Patrick Tucci  wrote:

> Thanks for the reply Stephen and Mich.
>
> Stephen, you're right, it feels like Spark is waiting for something, but
> I'm not sure what. I'm the only user on the cluster and there are plenty of
> resources (+60 cores, +250GB RAM). I even tried restarting Hadoop, Spark
> and the host servers to make sure nothing was lingering in the background.
>
> Mich, thank you so much, your suggestion worked. Storing the tables as
> Parquet solves the issue.
>
> Interestingly, I found that only the MemberEnrollment table needs to be
> Parquet. The ID field in MemberEnrollment is an int calculated during load
> by a ROW_NUMBER() function. Further testing found that if I hard code a 0
> as MemberEnrollment.ID instead of using the ROW_NUMBER() function, the
> query works without issue even if both tables are ORC.
>
> Should I infer from this issue that the Hive components prefer Parquet
> over ORC? Furthermore, should I consider using a different table storage
> framework, like Delta Lake, instead of the Hive components? Given this
> issue and other issues I've had with Hive, I'm starting to think a
> different solution might be more robust and stable. The main condition is
> that my application operates solely through Thrift server, so I need to be
> able to connect to Spark through Thrift server and have it write tables
> using Delta Lake instead of Hive. From this StackOverflow question, it
> looks like this is possible:
> https://stackoverflow.com/questions/69862388/how-to-run-spark-sql-thrift-server-in-local-mode-and-connect-to-delta-using-jdbc
>
> Thanks again to everyone who replied for their help.
>
> Patrick
>
>
> On Fri, Aug 11, 2023 at 2:14 AM Mich Talebzadeh 
> wrote:
>
>> Steve may have a valid point. You raised an issue with concurrent writes
>> before, if I recall correctly; this limitation may be due to the Hive
>> metastore. By default Spark uses Apache Derby for its database
>> persistence. *However, it is limited to only one Spark session at any
>> time for the purposes of metadata storage.* That may be the cause here
>> as well. Does this happen if the underlying tables are created as PARQUET
>> as opposed to ORC?
>>
>> HTH
>>
>> Mich Talebzadeh,
>> Solutions Architect/Engineering Lead
>> London
>> United Kingdom
>>
>>
>>view my Linkedin profile
>> 
>>
>>
>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Fri, 11 Aug 2023 at 01:33, Stephen Coy 
>> wrote:
>>
>>> Hi Patrick,
>>>
>>> When this has happened to me in the past (admittedly via spark-submit)
>>> it has been because another job was still running and had already claimed
>>> some of the resources (cores and memory).
>>>
>>> I think this can also happen if your configuration tries to claim
>>> resources that will never be available.
>>>
>>> Cheers,
>>>
>>> SteveC
>>>
>>>
>>> On 11 Aug 2023, at 3:36 am, Patrick Tucci 
>>> wrote:
>>>
>>> Hello,
>>>
>>> I'm attempting to run a query on Spark 3.4.0 through the Spark
>>> ThriftServer. The cluster has 64 cores, 250GB RAM, and operates in
>>> standalone mode using HDFS for storage.
>>>
>>> The query is as follows:
>>>
>>> SELECT ME.*, MB.BenefitID
>>> FROM MemberEnrollment ME
>>> JOIN MemberBenefits MB
>>> ON ME.ID  = MB.EnrollmentID
>>> WHERE MB.BenefitID 

Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-11 Thread Patrick Tucci
Thanks for the reply Stephen and Mich.

Stephen, you're right, it feels like Spark is waiting for something, but
I'm not sure what. I'm the only user on the cluster and there are plenty of
resources (+60 cores, +250GB RAM). I even tried restarting Hadoop, Spark
and the host servers to make sure nothing was lingering in the background.

Mich, thank you so much, your suggestion worked. Storing the tables as
Parquet solves the issue.

Interestingly, I found that only the MemberEnrollment table needs to be
Parquet. The ID field in MemberEnrollment is an int calculated during load
by a ROW_NUMBER() function. Further testing found that if I hard code a 0
as MemberEnrollment.ID instead of using the ROW_NUMBER() function, the
query works without issue even if both tables are ORC.
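
For context, a rough sketch of the two load variants (the staging table and
the ORDER BY column are illustrative; the real load script isn't reproduced
here):

-- Load variant associated with the hang: ID generated via ROW_NUMBER()
INSERT INTO MemberEnrollment
SELECT ROW_NUMBER() OVER (ORDER BY MemberID) AS ID, MemberID, StartDate, EndDate
FROM MemberEnrollment_Staging;

-- Load variant that avoids it: ID hard-coded to 0 (the join query then runs
-- fine, even with both tables stored as ORC)
INSERT INTO MemberEnrollment
SELECT 0 AS ID, MemberID, StartDate, EndDate
FROM MemberEnrollment_Staging;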

Should I infer from this issue that the Hive components prefer Parquet over
ORC? Furthermore, should I consider using a different table storage
framework, like Delta Lake, instead of the Hive components? Given this
issue and other issues I've had with Hive, I'm starting to think a
different solution might be more robust and stable. The main condition is
that my application operates solely through Thrift server, so I need to be
able to connect to Spark through Thrift server and have it write tables
using Delta Lake instead of Hive. From this StackOverflow question, it
looks like this is possible:
https://stackoverflow.com/questions/69862388/how-to-run-spark-sql-thrift-server-in-local-mode-and-connect-to-delta-using-jdbc
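
If I understand it correctly, the idea is to launch the Thrift server with the
Delta Lake extension and catalog configured, roughly like this (the package
version is illustrative and I haven't tested this yet):

./sbin/start-thriftserver.sh \
  --packages io.delta:delta-core_2.12:2.4.0 \
  --conf spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension \
  --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog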

Thanks again to everyone who replied for their help.

Patrick


On Fri, Aug 11, 2023 at 2:14 AM Mich Talebzadeh 
wrote:

> Steve may have a valid point. You raised an issue with concurrent writes
> before, if I recall correctly; this limitation may be due to the Hive
> metastore. By default Spark uses Apache Derby for its database
> persistence. *However, it is limited to only one Spark session at any time
> for the purposes of metadata storage.* That may be the cause here as
> well. Does this happen if the underlying tables are created as PARQUET as
> opposed to ORC?
>
> HTH
>
> Mich Talebzadeh,
> Solutions Architect/Engineering Lead
> London
> United Kingdom
>
>
>view my Linkedin profile
> 
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Fri, 11 Aug 2023 at 01:33, Stephen Coy 
> wrote:
>
>> Hi Patrick,
>>
>> When this has happened to me in the past (admittedly via spark-submit) it
>> has been because another job was still running and had already claimed some
>> of the resources (cores and memory).
>>
>> I think this can also happen if your configuration tries to claim
>> resources that will never be available.
>>
>> Cheers,
>>
>> SteveC
>>
>>
>> On 11 Aug 2023, at 3:36 am, Patrick Tucci 
>> wrote:
>>
>> Hello,
>>
>> I'm attempting to run a query on Spark 3.4.0 through the Spark
>> ThriftServer. The cluster has 64 cores, 250GB RAM, and operates in
>> standalone mode using HDFS for storage.
>>
>> The query is as follows:
>>
>> SELECT ME.*, MB.BenefitID
>> FROM MemberEnrollment ME
>> JOIN MemberBenefits MB
>> ON ME.ID  = MB.EnrollmentID
>> WHERE MB.BenefitID = 5
>> LIMIT 10
>>
>> The tables are defined as follows:
>>
>> -- Contains about 3M rows
>> CREATE TABLE MemberEnrollment
>> (
>> ID INT
>> , MemberID VARCHAR(50)
>> , StartDate DATE
>> , EndDate DATE
>> -- Other columns, but these are the most important
>> ) STORED AS ORC;
>>
>> -- Contains about 25m rows
>> CREATE TABLE MemberBenefits
>> (
>> EnrollmentID INT
>> , BenefitID INT
>> ) STORED AS ORC;
>>
>> When I execute the query, it runs a single broadcast exchange stage,
>> which completes after a few seconds. Then everything just hangs. The
>> JDBC/ODBC tab in the UI shows the query state as COMPILED, but no stages or
>> tasks are executing or pending:
>>
>> 
>>
>> I've let the query run for as long as 30 minutes with no additional
>> stages, progress, or errors. I'm not sure where to start troubleshooting.
>>
>> Thanks for your help,
>>
>> Patrick
>>
>>
>> This email contains confidential information of and is the copyright of
>> Infomedia. It must not be forwarded, amended or disclosed without consent
>> of the sender. If you received this message by mistake, please advise the
>> sender and delete all copies. Security of transmission on the internet
>> cannot be guaranteed, could be infected, intercepted, or corrupted and you
>> should ensure you have suitable antivirus protection in place. By sending
>> us your or any third party personal details, you consent to (or confirm you
>> have obtained 

Re: unsubscribe

2023-08-11 Thread Mich Talebzadeh
To unsubscribe e-mail: user-unsubscr...@spark.apache.org


Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


   view my Linkedin profile



 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Fri, 11 Aug 2023 at 10:33, Yifan LI  wrote:

> unsubscribe
>


unsubscribe

2023-08-11 Thread Yifan LI
unsubscribe


Re: Extracting Logical Plan

2023-08-11 Thread Vibhatha Abeykoon
Hello Winston,

I looked into the suggested code snippet, but I am getting the following
error:

```
value listenerManager is not a member of org.apache.spark.sql.SparkSession
```

However, I can see it is available in the API:

https://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql/SparkSession.html
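
(For what it's worth, this is against a Spark Connect session rather than a
classic SparkSession, as mentioned further down the thread. A rough stopgap
there is printing the plans as text, e.g.:

val df = spark.range(0, 10)
df.explain("extended")  // prints the parsed, analyzed, optimized and physical plans

but that yields a text dump rather than a LogicalPlan object I can pass to my
function.)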

With Regards,
Vibhatha Abeykoon, PhD


On Wed, Aug 2, 2023 at 4:22 PM Vibhatha Abeykoon  wrote:

> Hello Winston,
>
> Thanks again for this response, I will check this one out.
>
> On Wed, Aug 2, 2023 at 3:50 PM Winston Lai  wrote:
>
>>
>> Hi Vibhatha,
>>
>> I helped you post this question to another community. There is one answer
>> by someone else for your reference.
>>
>> To access the logical plan or optimized plan, you can register a custom
>> QueryExecutionListener and retrieve the plans during the query execution
>> process. Here's an example of how to do it in Scala:
>>
>> > import org.apache.spark.sql.SparkSession
>> > import org.apache.spark.sql.execution.QueryExecution
>> > import org.apache.spark.sql.util.QueryExecutionListener
>> >
>> > // Create a custom QueryExecutionListener
>> > class CustomQueryExecutionListener extends QueryExecutionListener {
>> >   override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit = {
>> >     // Retrieve the logical plan
>> >     val logicalPlan = qe.logical
>> >
>> >     // Retrieve the optimized plan
>> >     val optimizedPlan = qe.optimizedPlan
>> >
>> >     // Process the plans with your custom function (defined elsewhere)
>> >     processPlans(logicalPlan, optimizedPlan)
>> >   }
>> >
>> >   override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit = {}
>> > }
>> >
>> > // Create a SparkSession
>> > val spark = SparkSession.builder()
>> >   .appName("Example")
>> >   .getOrCreate()
>> >
>> > // Register the custom QueryExecutionListener
>> > spark.listenerManager.register(new CustomQueryExecutionListener)
>> >
>> > // Perform your DataFrame operations
>> > val df = spark.read.csv("path/to/file.csv")
>> > val filteredDF = df.filter(df("column") > 10)
>> > val resultDF = filteredDF.select("column1", "column2")
>> >
>> > // Trigger the execution of the DataFrame to invoke the listener
>> > resultDF.show()
>>
>> Thank You & Best Regards
>> Winston Lai
>> --
>> *From:* Vibhatha Abeykoon 
>> *Sent:* Wednesday, August 2, 2023 5:03:15 PM
>> *To:* Ruifeng Zheng 
>> *Cc:* Winston Lai ; user@spark.apache.org <
>> user@spark.apache.org>
>> *Subject:* Re: Extracting Logical Plan
>>
>> I understand. I sort of drew the same conclusion. But I wasn’t sure.
>> Thanks everyone for taking time on this.
>>
>> On Wed, Aug 2, 2023 at 2:29 PM Ruifeng Zheng  wrote:
>>
>> In Spark Connect, I think the only API to show optimized plan is
>> `df.explain("extended")` as Winston mentioned, but it is not a LogicalPlan
>> object.
>>
>> On Wed, Aug 2, 2023 at 4:36 PM Vibhatha Abeykoon 
>> wrote:
>>
>> Hello Ruifeng,
>>
>> Thank you for these pointers. Would it be different if I use the Spark
>> connect? I am not using the regular SparkSession. I am pretty new to these
>> APIs. Appreciate your thoughts.
>>
>> On Wed, Aug 2, 2023 at 2:00 PM Ruifeng Zheng  wrote:
>>
>> Hi Vibhatha,
>>    I think those APIs are still available?
>>
>>
>>
>> ```
>> Welcome to
>>       ____              __
>>      / __/__  ___ _____/ /__
>>     _\ \/ _ \/ _ `/ __/  '_/
>>    /___/ .__/\_,_/_/ /_/\_\   version 3.4.1
>>       /_/
>>
>> Using Scala version 2.12.17 (OpenJDK 64-Bit Server VM, Java 11.0.19)
>> Type in expressions to have them evaluated.
>> Type :help for more information.
>>
>> scala> val df = spark.range(0, 10)
>> df: org.apache.spark.sql.Dataset[Long] = [id: bigint]
>>
>> scala> df.queryExecution
>> res0: org.apache.spark.sql.execution.QueryExecution =
>> == Parsed Logical Plan ==
>> Range (0, 10, step=1, splits=Some(12))
>>
>> == Analyzed Logical Plan ==
>> id: bigint
>> Range (0, 10, step=1, splits=Some(12))
>>
>> == Optimized Logical Plan ==
>> Range (0, 10, step=1, splits=Some(12))
>>
>> == Physical Plan ==
>> *(1) Range (0, 10, step=1, splits=12)
>>
>> scala> df.queryExecution.optimizedPlan
>> res1: org.apache.spark.sql.catalyst.plans.logical.LogicalPlan =
>> Range (0, 10, step=1, splits=Some(12))
>> ```
>>
>>
>>
>> On Wed, Aug 2, 2023 at 3:58 PM Vibhatha Abeykoon 
>> wrote:
>>
>> Hi Winston,
>>
>> I need to use the LogicalPlan object and process it with another function
>> I have written. In earlier Spark versions we can access that via the
>> dataframe object. So if it can be accessed via the UI, is there an API to
>> access the object?
>>
>> On Wed, Aug 2, 2023 at 1:24 PM Winston Lai  wrote:
>>
>> Hi Vibhatha,
>>
>> How about reading the logical plan from Spark UI, do you have access to
>> the Spark UI? I am not sure what infra you run your Spark jobs on. Usually
>> you should be able to view the logical and physical plan under Spark UI in
>> text version at least. It is independent from the language (e.g.,
>> scala/Python/R) that you use to run Spark.
>>
>>
>> On Wednesday, 

Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-11 Thread Mich Talebzadeh
Steve may have a valid point. You raised an issue with concurrent writes
before, if I recall correctly; this limitation may be due to the Hive
metastore. By default Spark uses Apache Derby for its database
persistence. *However, it is limited to only one Spark session at any time
for the purposes of metadata storage.* That may be the cause here as well.
Does this happen if the underlying tables are created as PARQUET as opposed
to ORC?
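
As an aside, if the embedded Derby metastore turns out to be the culprit, the
usual workaround is to point the metastore at an external database via a
hive-site.xml on Spark's classpath (typically under $SPARK_HOME/conf). A rough
sketch; the host, database name and credentials below are obviously
illustrative:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:postgresql://metastore-host:5432/hive_metastore</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.postgresql.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>changeme</value>
</property>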

HTH

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


   view my Linkedin profile



 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Fri, 11 Aug 2023 at 01:33, Stephen Coy 
wrote:

> Hi Patrick,
>
> When this has happened to me in the past (admittedly via spark-submit) it
> has been because another job was still running and had already claimed some
> of the resources (cores and memory).
>
> I think this can also happen if your configuration tries to claim
> resources that will never be available.
>
> Cheers,
>
> SteveC
>
>
> On 11 Aug 2023, at 3:36 am, Patrick Tucci  wrote:
>
> Hello,
>
> I'm attempting to run a query on Spark 3.4.0 through the Spark
> ThriftServer. The cluster has 64 cores, 250GB RAM, and operates in
> standalone mode using HDFS for storage.
>
> The query is as follows:
>
> SELECT ME.*, MB.BenefitID
> FROM MemberEnrollment ME
> JOIN MemberBenefits MB
> ON ME.ID  = MB.EnrollmentID
> WHERE MB.BenefitID = 5
> LIMIT 10
>
> The tables are defined as follows:
>
> -- Contains about 3M rows
> CREATE TABLE MemberEnrollment
> (
> ID INT
> , MemberID VARCHAR(50)
> , StartDate DATE
> , EndDate DATE
> -- Other columns, but these are the most important
> ) STORED AS ORC;
>
> -- Contains about 25m rows
> CREATE TABLE MemberBenefits
> (
> EnrollmentID INT
> , BenefitID INT
> ) STORED AS ORC;
>
> When I execute the query, it runs a single broadcast exchange stage, which
> completes after a few seconds. Then everything just hangs. The JDBC/ODBC
> tab in the UI shows the query state as COMPILED, but no stages or tasks are
> executing or pending:
>
> 
>
> I've let the query run for as long as 30 minutes with no additional
> stages, progress, or errors. I'm not sure where to start troubleshooting.
>
> Thanks for your help,
>
> Patrick
>
>
> This email contains confidential information of and is the copyright of
> Infomedia. It must not be forwarded, amended or disclosed without consent
> of the sender. If you received this message by mistake, please advise the
> sender and delete all copies. Security of transmission on the internet
> cannot be guaranteed, could be infected, intercepted, or corrupted and you
> should ensure you have suitable antivirus protection in place. By sending
> us your or any third party personal details, you consent to (or confirm you
> have obtained consent from such third parties) to Infomedia’s privacy
> policy. http://www.infomedia.com.au/privacy-policy/
>