[ https://issues.apache.org/jira/browse/SPARK-37254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17442046#comment-17442046 ]

Hyukjin Kwon commented on SPARK-37254:
--------------------------------------

It would be much easier to investigate this issue if there were reproducible steps.

> 100% CPU usage on Spark Thrift Server.
> --------------------------------------
>
>                 Key: SPARK-37254
>                 URL: https://issues.apache.org/jira/browse/SPARK-37254
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 3.1.2
>            Reporter: ramakrishna chilaka
>            Priority: Major
>
> We are trying to use Spark Thrift Server as a distributed SQL query engine. 
> Queries work when the resident memory occupied by the Spark Thrift Server 
> process (as reported by htop) is less than the configured driver memory. The 
> same queries drive all cores to 100% CPU usage and get stuck when the 
> resident memory grows beyond the configured driver memory (usually by about 
> 10%), and the server keeps running at 100% CPU usage. I have set incremental 
> collect to false because I need fast responses for exploratory queries. I am 
> trying to understand the following points:
>  * Why doesn't the Spark Thrift Server release memory back when there are no 
> running queries?
>  * What causes the Spark Thrift Server to run at 100% CPU usage on all cores 
> when its resident memory exceeds the configured driver memory, and why do 
> queries get stuck?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
