Re: [PR] [SPARK-45439][SQL][UI] Reduce memory usage of LiveStageMetrics.accumIdsToMetricType [spark]

2023-10-06 Thread via GitHub


beliefer commented on code in PR #43250:
URL: https://github.com/apache/spark/pull/43250#discussion_r1349475794


##
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLAppStatusListener.scala:
##
@@ -490,7 +490,12 @@ private class LiveExecutionData(val executionId: Long) extends LiveEntity {
   var details: String = null
   var physicalPlanDescription: String = null
   var modifiedConfigs: Map[String, String] = _
-  var metrics = collection.Seq[SQLPlanMetric]()
+  private var _metrics = collection.Seq[SQLPlanMetric]()
+  def metrics: collection.Seq[SQLPlanMetric] = _metrics
+  // This mapping is shared across all LiveStageMetrics instances associated with
+  // this LiveExecutionData, helping to reduce memory overhead by avoiding waste
+  // from separate immutable maps with largely overlapping sets of entries.
+  val metricAccumulatorIdToMetricType = new ConcurrentHashMap[Long, String]()

Review Comment:
   AFAIK, `SQLAppStatusListener` only works in a single thread. Should we just use a plain `Map` here? Please let me know the details if I am missing something.
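   To illustrate the memory idea behind the patch, here is a minimal sketch (in Java for illustration; the real code is Scala, and the class names below are hypothetical stand-ins for `LiveExecutionData` and `LiveStageMetrics`): each per-stage view keeps a reference to one map owned by the execution, instead of its own largely overlapping immutable copy. A plain `HashMap` suffices under the single-threaded assumption raised above.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Hypothetical stand-in for LiveExecutionData: owns the single shared map.
   class ExecutionData {
       final Map<Long, String> accumIdToMetricType = new HashMap<>();
   }

   // Hypothetical stand-in for LiveStageMetrics: references the shared map
   // rather than building a per-stage copy of the same entries.
   class StageMetrics {
       final Map<Long, String> accumIdToMetricType;
       StageMetrics(ExecutionData exec) {
           this.accumIdToMetricType = exec.accumIdToMetricType;
       }
   }

   public class SharedMapSketch {
       public static void main(String[] args) {
           ExecutionData exec = new ExecutionData();
           exec.accumIdToMetricType.put(1L, "sum");

           StageMetrics s1 = new StageMetrics(exec);
           StageMetrics s2 = new StageMetrics(exec);

           // Both stages share the same map instance: no duplicated entries.
           System.out.println(s1.accumIdToMetricType == s2.accumIdToMetricType); // true
           System.out.println(s2.accumIdToMetricType.get(1L)); // sum
       }
   }
   ```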



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





Re: [PR] [SPARK-45439][SQL][UI] Reduce memory usage of LiveStageMetrics.accumIdsToMetricType [spark]

2023-10-11 Thread via GitHub


JoshRosen commented on code in PR #43250:
URL: https://github.com/apache/spark/pull/43250#discussion_r1355983329


##
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLAppStatusListener.scala:
##
@@ -490,7 +490,12 @@ private class LiveExecutionData(val executionId: Long) extends LiveEntity {
   var details: String = null
   var physicalPlanDescription: String = null
   var modifiedConfigs: Map[String, String] = _
-  var metrics = collection.Seq[SQLPlanMetric]()
+  private var _metrics = collection.Seq[SQLPlanMetric]()
+  def metrics: collection.Seq[SQLPlanMetric] = _metrics
+  // This mapping is shared across all LiveStageMetrics instances associated with
+  // this LiveExecutionData, helping to reduce memory overhead by avoiding waste
+  // from separate immutable maps with largely overlapping sets of entries.
+  val metricAccumulatorIdToMetricType = new ConcurrentHashMap[Long, String]()

Review Comment:
   Our Spark fork contains a codepath that performs multi-threaded access to this data structure, so that's why I used a thread-safe data structure here.
   
   If you would prefer that I avoid leaking those proprietary thread-safety concerns into OSS Apache Spark, then I would be glad to refactor this to use a `mutable.Map` instead and keep the thread-safe version in our fork.
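   For context on why `ConcurrentHashMap` matters under multi-threaded access, here is a small sketch (plain Java, illustrative only): two writer threads populate disjoint key ranges concurrently without any external locking, which a plain `HashMap` does not guarantee to survive.

   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;

   public class ConcurrentMapSketch {
       public static void main(String[] args) throws InterruptedException {
           // ConcurrentHashMap tolerates concurrent writers; a plain HashMap
           // would need synchronization (or could be corrupted).
           Map<Long, String> accumIdToMetricType = new ConcurrentHashMap<>();

           Thread t1 = new Thread(() -> {
               for (long i = 0; i < 1000; i++) accumIdToMetricType.put(i, "sum");
           });
           Thread t2 = new Thread(() -> {
               for (long i = 1000; i < 2000; i++) accumIdToMetricType.put(i, "size");
           });
           t1.start(); t2.start();
           t1.join(); t2.join();

           // All 2000 disjoint entries survive the concurrent writes.
           System.out.println(accumIdToMetricType.size()); // 2000
       }
   }
   ```

   When the listener is single-threaded, as in upstream Spark, this safety buys nothing, which is why a `mutable.Map` is the simpler choice there.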






Re: [PR] [SPARK-45439][SQL][UI] Reduce memory usage of LiveStageMetrics.accumIdsToMetricType [spark]

2023-10-12 Thread via GitHub


mridulm commented on code in PR #43250:
URL: https://github.com/apache/spark/pull/43250#discussion_r1357386637


##
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLAppStatusListener.scala:
##
@@ -490,7 +490,12 @@ private class LiveExecutionData(val executionId: Long) extends LiveEntity {
   var details: String = null
   var physicalPlanDescription: String = null
   var modifiedConfigs: Map[String, String] = _
-  var metrics = collection.Seq[SQLPlanMetric]()
+  private var _metrics = collection.Seq[SQLPlanMetric]()
+  def metrics: collection.Seq[SQLPlanMetric] = _metrics
+  // This mapping is shared across all LiveStageMetrics instances associated with
+  // this LiveExecutionData, helping to reduce memory overhead by avoiding waste
+  // from separate immutable maps with largely overlapping sets of entries.
+  val metricAccumulatorIdToMetricType = new ConcurrentHashMap[Long, String]()

Review Comment:
   Curious why this is diverging? How is it used? Is it something that might be relevant to Apache Spark as a functional enhancement?






Re: [PR] [SPARK-45439][SQL][UI] Reduce memory usage of LiveStageMetrics.accumIdsToMetricType [spark]

2023-10-12 Thread via GitHub


mridulm commented on code in PR #43250:
URL: https://github.com/apache/spark/pull/43250#discussion_r1357389742


##
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLAppStatusListener.scala:
##
@@ -589,7 +600,7 @@ private class LiveStageMetrics(
 val metricValues = taskMetrics.computeIfAbsent(acc.id, _ => new Array(numTasks))
 metricValues(taskIdx) = value
 
-if (SQLMetrics.metricNeedsMax(accumIdsToMetricType(acc.id))) {
+if (SQLMetrics.metricNeedsMax(Option(accumIdsToMetricType.get(acc.id)).get)) {

Review Comment:
   QQ: If we are always expecting `acc.id` to be present, can we avoid the `Option().get`?
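   The point is visible with the plain JDK API (sketched in Java rather than Scala): wrapping `map.get(k)` in an option and calling `get` only converts a `null` into an exception, so when the key is guaranteed present the direct lookup is equivalent and simpler.

   ```java
   import java.util.NoSuchElementException;
   import java.util.Optional;
   import java.util.concurrent.ConcurrentHashMap;

   public class OptionGetSketch {
       public static void main(String[] args) {
           ConcurrentHashMap<Long, String> accumIdsToMetricType = new ConcurrentHashMap<>();
           accumIdsToMetricType.put(7L, "sum");

           // For a present key, the wrapped and direct lookups are equivalent.
           String viaOptional = Optional.ofNullable(accumIdsToMetricType.get(7L)).get();
           String direct = accumIdsToMetricType.get(7L);
           System.out.println(viaOptional.equals(direct)); // true

           // For an absent key, ConcurrentHashMap.get returns null, and the
           // Optional wrapper merely turns that null into an exception.
           try {
               Optional.ofNullable(accumIdsToMetricType.get(999L)).get();
           } catch (NoSuchElementException e) {
               System.out.println("absent key -> NoSuchElementException");
           }
       }
   }
   ```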






Re: [PR] [SPARK-45439][SQL][UI] Reduce memory usage of LiveStageMetrics.accumIdsToMetricType [spark]

2023-10-12 Thread via GitHub


JoshRosen commented on code in PR #43250:
URL: https://github.com/apache/spark/pull/43250#discussion_r1357445322


##
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLAppStatusListener.scala:
##
@@ -490,7 +490,12 @@ private class LiveExecutionData(val executionId: Long) extends LiveEntity {
   var details: String = null
   var physicalPlanDescription: String = null
   var modifiedConfigs: Map[String, String] = _
-  var metrics = collection.Seq[SQLPlanMetric]()
+  private var _metrics = collection.Seq[SQLPlanMetric]()
+  def metrics: collection.Seq[SQLPlanMetric] = _metrics
+  // This mapping is shared across all LiveStageMetrics instances associated with
+  // this LiveExecutionData, helping to reduce memory overhead by avoiding waste
+  // from separate immutable maps with largely overlapping sets of entries.
+  val metricAccumulatorIdToMetricType = new ConcurrentHashMap[Long, String]()

Review Comment:
   The codepath in question isn't of general relevance and, as it turns out, might be deprecated soon. Given this, I have updated this PR to use a `mutable.HashMap` (which simplifies the rest of the code and addresses your other review comment).






Re: [PR] [SPARK-45439][SQL][UI] Reduce memory usage of LiveStageMetrics.accumIdsToMetricType [spark]

2023-10-13 Thread via GitHub


JoshRosen commented on PR #43250:
URL: https://github.com/apache/spark/pull/43250#issuecomment-1762420388

   Hmm, it looks like the `OracleIntegrationSuite` tests are flaky, but I don't think that's related to this PR's changes:
   
   ```
   [info] OracleIntegrationSuite:
   [info] org.apache.spark.sql.jdbc.OracleIntegrationSuite *** ABORTED *** (7 minutes, 38 seconds)
   [info]   The code passed to eventually never returned normally. Attempted 429 times over 7.00309507997 minutes. Last failure message: ORA-12514: Cannot connect to database. Service freepdb1 is not registered with the listener at host 10.1.0.126 port 45139. (CONNECTION_ID=CC2wkBm6SPGoMF7IghzCeQ==). (DockerJDBCIntegrationSuite.scala:166)
   [info]   org.scalatest.exceptions.TestFailedDueToTimeoutException:
   [info]   at org.scalatest.enablers.Retrying$$anon$4.tryTryAgain$2(Retrying.scala:219)
   [info]   at org.scalatest.enablers.Retrying$$anon$4.retry(Retrying.scala:226)
   [info]   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:313)
   [info]   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:312)
   [info]   at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.eventually(DockerJDBCIntegrationSuite.scala:95)
   ```





Re: [PR] [SPARK-45439][SQL][UI] Reduce memory usage of LiveStageMetrics.accumIdsToMetricType [spark]

2023-10-14 Thread via GitHub


mridulm commented on PR #43250:
URL: https://github.com/apache/spark/pull/43250#issuecomment-1762699766

   The test failures are not related; unfortunately, the reattempt did not fix them.
   Merging to master.
   Thanks for fixing this, @JoshRosen!
   Thanks for the reviews @jiangxb1987, @beliefer, @Ngone51 :-)





Re: [PR] [SPARK-45439][SQL][UI] Reduce memory usage of LiveStageMetrics.accumIdsToMetricType [spark]

2023-10-14 Thread via GitHub


mridulm closed pull request #43250: [SPARK-45439][SQL][UI] Reduce memory usage of LiveStageMetrics.accumIdsToMetricType
URL: https://github.com/apache/spark/pull/43250

