A-little-bit-of-data opened a new issue, #6855: URL: https://github.com/apache/kyuubi/issues/6855
### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

### Search before asking

- [X] I have searched in the [issues](https://github.com/apache/kyuubi/issues?q=is%3Aissue) and found no similar issues.

### Describe the bug

When using Kyuubi to run Spark SQL engines on Kubernetes, the number of internal sessions keeps growing. The documentation describes the parameter `kyuubi.engine.user.isolated.spark.session.idle.timeout` (default PT6H). After setting it to PT30M, some sessions still remain open for more than 30 minutes, even though their SQL has already finished executing.

How are these sessions created? Is one created per SQL statement, or in some other way? My cluster does not run that many SQL statements, yet there are many sessions, and I do not understand where they come from.

### Affects Version(s)

1.9.1

### Kyuubi Server Log Output

_No response_

### Kyuubi Engine Log Output

_No response_

### Kyuubi Server Configurations

_No response_

### Kyuubi Engine Configurations

_No response_

### Additional context

_No response_

### Are you willing to submit PR?

- [ ] Yes. I would be willing to submit a PR with guidance from the Kyuubi community to fix.
- [ ] No. I cannot submit a PR at this time.
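For reference, a minimal sketch of how the setting mentioned above would typically be placed in `kyuubi-defaults.conf`. The inclusion of `kyuubi.session.idle.timeout` alongside it is an assumption for illustration; it is a separate server-side knob and is not taken from this report:

```properties
# kyuubi-defaults.conf (sketch, not a confirmed fix)

# Idle timeout for the isolated Spark session held per user inside the engine.
# Only takes effect when isolated user Spark sessions are enabled.
kyuubi.engine.user.isolated.spark.session.idle.timeout=PT30M

# Assumption: the Kyuubi server keeps its own session idle timeout
# (default PT6H), which is independent of the engine-side setting above.
kyuubi.session.idle.timeout=PT30M
```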
