Github user zzcclp commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1455#discussion_r148963119
  
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/LoadTableCommand.scala ---
    @@ -84,6 +84,10 @@ case class LoadTableCommand(
     
         val carbonProperty: CarbonProperties = CarbonProperties.getInstance()
         carbonProperty.addProperty("zookeeper.enable.lock", "false")
    +    carbonProperty.addProperty(CarbonCommonConstants.NUM_CORES_LOADING,
    +        carbonProperty.getProperty(CarbonCommonConstants.NUM_CORES_LOADING,
    +            Math.min(sparkSession.sparkContext.conf.getInt("spark.executor.cores", 1),
    +                CarbonCommonConstants.NUM_CORES_MAX_VAL).toString()))
    --- End diff ---
    
    We can't change *NUM_CORES_DEFAULT_VAL* to 32 directly, because other places still use *NUM_CORES_DEFAULT_VAL*, for example in org.apache.carbondata.core.datastore.BlockIndexStore.getAll:
    ```java
    try {
      numberOfCores = Integer.parseInt(CarbonProperties.getInstance()
          .getProperty(CarbonCommonConstants.NUM_CORES,
              CarbonCommonConstants.NUM_CORES_DEFAULT_VAL));
    } catch (NumberFormatException e) {
      numberOfCores = Integer.parseInt(CarbonCommonConstants.NUM_CORES_DEFAULT_VAL);
    }
    ```
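    To make the concern concrete, here is a minimal, self-contained sketch of the two patterns under discussion: the parse-with-fallback that BlockIndexStore uses, and the Math.min cap that the diff applies for loading. The class and constant values are stand-ins for illustration, not the real CarbonData API; it only shows why raising a shared default constant affects every call site that falls back to it.

    ```java
    // Hypothetical stand-ins for the CarbonData constants discussed above.
    public class NumCoresDemo {
        // stand-in for CarbonCommonConstants.NUM_CORES_DEFAULT_VAL
        static final String NUM_CORES_DEFAULT_VAL = "2";
        // stand-in for CarbonCommonConstants.NUM_CORES_MAX_VAL
        static final int NUM_CORES_MAX_VAL = 32;

        /** Parse a configured core count, falling back to the shared default
         *  constant on any malformed (or missing) value, as BlockIndexStore does. */
        static int resolveNumCores(String configured) {
            try {
                return Integer.parseInt(configured);
            } catch (NumberFormatException e) {
                return Integer.parseInt(NUM_CORES_DEFAULT_VAL);
            }
        }

        /** Cap the executor core count at the maximum, as the diff does for loading. */
        static int cappedLoadingCores(int executorCores) {
            return Math.min(executorCores, NUM_CORES_MAX_VAL);
        }

        public static void main(String[] args) {
            System.out.println(resolveNumCores("abc")); // malformed value: falls back to the default, 2
            System.out.println(cappedLoadingCores(48)); // capped at 32
        }
    }
    ```

    Because resolveNumCores falls back to the same constant everywhere, bumping that default to 32 would silently change the concurrency of every code path that parses it, not just the load path the diff targets.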
    
    A default of 32 is too big for those code paths.
    
