GitHub user dhatchayani commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2654#discussion_r218669311
  
    --- Diff: integration/spark2/src/main/scala/org/apache/carbondata/datamap/IndexDataMapRebuildRDD.scala ---
    @@ -264,8 +264,17 @@ class RawBytesReadSupport(segmentProperties: SegmentProperties, indexColumns: Ar
           rtn(i) = if (indexCol2IdxInDictArray.contains(col.getColName)) {
             surrogatKeys(indexCol2IdxInDictArray(col.getColName)).toInt.asInstanceOf[Integer]
           } else if (indexCol2IdxInNoDictArray.contains(col.getColName)) {
    -        data(0).asInstanceOf[ByteArrayWrapper].getNoDictionaryKeyByIndex(
    +        val bytes = data(0).asInstanceOf[ByteArrayWrapper].getNoDictionaryKeyByIndex(
               indexCol2IdxInNoDictArray(col.getColName))
    +        // no dictionary primitive columns are expected to be in original data while loading,
    +        // so convert it to original data
    +        if (DataTypeUtil.isPrimitiveColumn(col.getDataType)) {
    +          val dataFromBytes = DataTypeUtil
    +            .getDataBasedOnDataTypeForNoDictionaryColumn(bytes, col.getDataType)
    +          dataFromBytes
    --- End diff --
    
    I think measure nulls and no-dictionary nulls are represented differently. Can you please give a scenario that falls into the no-dictionary null case?
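
    For context, the conversion the added lines perform can be sketched roughly as below. This is a simplified, hypothetical stand-in for `DataTypeUtil.getDataBasedOnDataTypeForNoDictionaryColumn` (the real CarbonData implementation handles more data types and its own encodings); it only illustrates why the raw no-dictionary sort-key bytes must be decoded back to a primitive value, and how an empty byte array can be read as null:

    ```java
    import java.nio.ByteBuffer;

    public class NoDictDecode {
        // Hypothetical, simplified stand-in for CarbonData's
        // DataTypeUtil.getDataBasedOnDataTypeForNoDictionaryColumn:
        // a no-dictionary primitive column stores the value as raw
        // big-endian bytes, so the bytes must be decoded back to the
        // original typed value before the index datamap is rebuilt.
        static Object decode(byte[] bytes, String dataType) {
            if (bytes == null || bytes.length == 0) {
                // assumption for this sketch: empty bytes represent null
                return null;
            }
            switch (dataType) {
                case "INT":  return ByteBuffer.wrap(bytes).getInt();
                case "LONG": return ByteBuffer.wrap(bytes).getLong();
                default:     return new String(bytes); // fall back to string
            }
        }

        public static void main(String[] args) {
            byte[] intBytes = ByteBuffer.allocate(4).putInt(42).array();
            System.out.println(decode(intBytes, "INT"));    // 42
            System.out.println(decode(new byte[0], "INT")); // null
        }
    }
    ```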
