Tartarus0zm commented on code in PR #2105:
URL: https://github.com/apache/auron/pull/2105#discussion_r2958218423


##########
auron-core/src/main/java/org/apache/auron/configuration/AuronConfiguration.java:
##########
@@ -39,6 +39,31 @@ public abstract class AuronConfiguration {
             .withDescription("Log level for native execution.")
             .withDefaultValue("info");
 
+    public static final ConfigOption<Integer> TOKIO_WORKER_THREADS_PER_CPU = new ConfigOption<>(Integer.class)
+            .withKey("auron.tokio.worker.threads.per.cpu")
+            .withCategory("Runtime Configuration")
+            .withDescription(
+                    "Number of Tokio worker threads to create per CPU core (spark.task.cpus). Set to 0 for automatic detection "
+                            + "based on available CPU cores. This setting controls the thread pool size for Tokio-based asynchronous operations.")
+            .withDefaultValue(0);
+
+    public static final ConfigOption<Integer> SUGGESTED_BATCH_MEM_SIZE = new ConfigOption<>(Integer.class)
+            .withKey("auron.suggested.batch.memSize")
+            .withCategory("Runtime Configuration")
+            .withDescription(
+                    "Suggested memory size in bytes for record batches. This setting controls the target memory allocation "
+                            + "for individual data batches to optimize memory usage and processing efficiency. Default is 8MB (8,388,608 bytes).")
+            .withDefaultValue(8388608);
+
+    public static final ConfigOption<Integer> TASK_CPUS = new ConfigOption<>(Integer.class)
+            .withKey("task.cpus")
+            .withCategory("Runtime Configuration")
+            .withDescription(
+                    "Number of CPU cores allocated per Spark task. This setting determines the parallelism level "
+                            + "for individual tasks and affects resource allocation and task scheduling. "
+                            + "Defaults to spark.task.cpus.")

Review Comment:
   The description of this option was not updated. In the Spark engine, the value of `spark.task.cpus` is indeed retrieved from the SparkEnv; the default value of 1 is used only when that key is not set.
   Maybe we should remove `Defaults to spark.task.cpus.`?
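   For reference, a minimal sketch of the fallback behavior described above (the class and helper names are hypothetical, and a plain `Map` stands in for the Spark configuration; this is not Auron's or Spark's actual code):

   ```java
   import java.util.Map;

   /**
    * Illustrative only: read spark.task.cpus from a configuration map,
    * falling back to 1 solely when the key is absent.
    */
   public class TaskCpusLookup {
       static int taskCpus(Map<String, String> sparkConf) {
           // The default of 1 applies only when spark.task.cpus is not set.
           return Integer.parseInt(sparkConf.getOrDefault("spark.task.cpus", "1"));
       }

       public static void main(String[] args) {
           System.out.println(taskCpus(Map.of("spark.task.cpus", "4"))); // 4
           System.out.println(taskCpus(Map.of()));                       // 1
       }
   }
   ```

   Under that reading, the option's own default never takes effect when Spark is the engine, which is why the `Defaults to spark.task.cpus.` sentence is misleading.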



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
