Github user zzcclp commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1455#discussion_r148997065
  
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/LoadTableCommand.scala ---
    @@ -84,6 +84,32 @@ case class LoadTableCommand(
     
         val carbonProperty: CarbonProperties = CarbonProperties.getInstance()
         carbonProperty.addProperty("zookeeper.enable.lock", "false")
    +
    +    val numCoresLoading =
    +      try {
    +        Integer.parseInt(CarbonProperties.getInstance()
    +            .getProperty(CarbonCommonConstants.NUM_CORES_LOADING,
    +                CarbonCommonConstants.NUM_CORES_MAX_VAL.toString()))
    +      } catch {
    +        case exc: NumberFormatException =>
    +          LOGGER.error("Configured value for property " + CarbonCommonConstants.NUM_CORES_LOADING
    +              + " is wrong. ")
    +          CarbonCommonConstants.NUM_CORES_MAX_VAL
    +      }
    +    // Take the minimum of 'spark.executor.cores' and NUM_CORES_LOADING:
    +    // if the user sets NUM_CORES_LOADING, it can't exceed the value of 'spark.executor.cores';
    +    // if the user doesn't set NUM_CORES_LOADING, the value of 'spark.executor.cores' is used,
    +    // but it can't exceed NUM_CORES_MAX_VAL,
    +    // since NUM_CORES_LOADING's default value is NUM_CORES_MAX_VAL.
    +    val newNumCoresLoading =
    +      Math.min(
    +          sparkSession.sparkContext.conf.getInt("spark.executor.cores", 1),
    +          numCoresLoading
    +      )
    +    // update the property with new value
    +    carbonProperty.addProperty(CarbonCommonConstants.NUM_CORES_LOADING,
    +        newNumCoresLoading.toString())
    +
    --- End diff --
    
    I think it is unnecessary. If we do so, it will affect other jobs and reduce the task parallelism, right?
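
    For reference, here is a minimal standalone sketch (not the CarbonData source; the helper name `resolveLoadingCores` is illustrative) of the clamping logic this hunk implements, falling back to the configured maximum when the value is not a valid integer:

        // Hedged sketch: clamp a configured loading-core count to the executor's cores,
        // using maxVal as the fallback when parsing fails.
        def resolveLoadingCores(configured: String, executorCores: Int, maxVal: Int): Int = {
          val parsed =
            try {
              Integer.parseInt(configured)
            } catch {
              case _: NumberFormatException => maxVal
            }
          Math.min(executorCores, parsed)
        }

        // Usage examples:
        // resolveLoadingCores("6", 4, 32)    == 4  (configured value capped by executor cores)
        // resolveLoadingCores("oops", 4, 32) == 4  (invalid value falls back to maxVal, then capped)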

