akashrn5 commented on a change in pull request #4180:
URL: https://github.com/apache/carbondata/pull/4180#discussion_r693653217



##########
File path: docs/ddl-of-carbondata.md
##########
@@ -866,6 +866,12 @@ Users can specify which columns to include and exclude for local dictionary gene
      ALTER TABLE test_db.carbon CHANGE oldArray newArray array<int>
      ```
 
+     Example 7: Change column name in column: mapField map\<int, int> from mapField to mapField1.

Review comment:
       ```suggestion
        Example 7: Change column name in column: oldMapCol map\<int, int> from oldMapCol to newMapCol.
       ```

##########
File path: core/src/main/java/org/apache/carbondata/core/scan/executor/util/RestructureUtil.java
##########
@@ -119,12 +122,15 @@
         for (CarbonDimension tableDimension : tableComplexDimension) {
          if (isColumnMatches(isTransactionalTable, queryDimension.getDimension(),
               tableDimension)) {
-            ProjectionDimension currentBlockDimension = null;
+            ProjectionDimension currentBlockDimension;
            // If projection dimension is child of struct field and contains Parent Ordinal
            if (null != queryDimension.getDimension().getComplexParentDimension()) {
              currentBlockDimension = new ProjectionDimension(queryDimension.getDimension());
             } else {
               currentBlockDimension = new ProjectionDimension(tableDimension);
+              newComplexChildrenDatatypes = new HashMap<>();
+              fillNewDatatypesForComplexChildren(queryDimension.getDimension());
+              compareDatatypes(currentBlockDimension.getDimension());

Review comment:
       Here, instead of creating the temporary map and having two functions to compare and set, can you do it in a single method using Java functional APIs? I feel that would look cleaner.

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -81,6 +81,7 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
   // stores mapping of altered column names: old-column-name -> new-column-name.
   // Including both parent/table and children columns
   val alteredColumnNamesMap = collection.mutable.LinkedHashMap.empty[String, String]
+  val alteredDatatypesMap = collection.mutable.LinkedHashMap.empty[String, DataTypeInfo]

Review comment:
       Please add a description of the variable.
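       For example, something along these lines would do; the wording is only a suggestion, mirroring the comment style used for `alteredColumnNamesMap` above:

      ```scala
      // stores mapping of altered column datatypes: old-column-name -> new DataTypeInfo
      val alteredDatatypesMap = collection.mutable.LinkedHashMap.empty[String, DataTypeInfo]
      ```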

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchemaCommon.scala
##########
@@ -263,18 +258,49 @@ class AlterTableColumnSchemaGenerator(
       isVarcharColumn(childField.name.getOrElse(childField.column)))
   }
 
+  def addComplexChildCols(currField: Field,
+      currColumnSchema: ColumnSchema,
+      newCols: mutable.Buffer[ColumnSchema],
+      allColumns: mutable.Buffer[ColumnSchema],
+      longStringCols: mutable.Buffer[ColumnSchema],
+      currentSchemaOrdinal: Int): Unit = {
+    if (currField.children.get == null || currField.children.size == 0) {
+      return
+    }
+    if (DataTypes.isArrayType(currColumnSchema.getDataType) ||
+        DataTypes.isStructType(currColumnSchema.getDataType) ||
+        DataTypes.isMapType(currColumnSchema.getDataType)) {
+      val noOfChildren = currField.children.get.size
+      currColumnSchema.setNumberOfChild(noOfChildren)
+      for (i <- 0 to noOfChildren - 1) {

Review comment:
       Can you use `foreach` instead of `for`?
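       For instance, a minimal sketch of the shape (the per-child body is elided, and `childField` is only an illustrative name for the element at index `i`):

      ```scala
      // iterate the children directly instead of indexing through a for-comprehension
      currField.children.get.zipWithIndex.foreach { case (childField, i) =>
        // ... existing per-child handling, unchanged ...
      }
      ```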

##########
File path: docs/ddl-of-carbondata.md
##########
@@ -866,6 +866,12 @@ Users can specify which columns to include and exclude for local dictionary gene
      ALTER TABLE test_db.carbon CHANGE oldArray newArray array<int>
      ```
 
+     Example 7: Change column name in column: mapField map\<int, int> from mapField to mapField1.
+
+     ```
+     ALTER TABLE test_db.carbon CHANGE mapField mapField1 map<int, int>

Review comment:
       ```suggestion
        ALTER TABLE test_db.carbon CHANGE oldMapCol newMapCol map<int, int>
       ```

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchemaCommon.scala
##########
@@ -263,18 +258,49 @@ class AlterTableColumnSchemaGenerator(
       isVarcharColumn(childField.name.getOrElse(childField.column)))
   }
 
+  def addComplexChildCols(currField: Field,
+      currColumnSchema: ColumnSchema,
+      newCols: mutable.Buffer[ColumnSchema],
+      allColumns: mutable.Buffer[ColumnSchema],
+      longStringCols: mutable.Buffer[ColumnSchema],
+      currentSchemaOrdinal: Int): Unit = {
+    if (currField.children.get == null || currField.children.size == 0) {

Review comment:
      ```suggestion
      if (currField.children.get == null || currField.children.isEmpty) {
      ```

##########
File path: integration/spark/src/main/scala/org/apache/spark/util/AlterTableUtil.scala
##########
@@ -1105,24 +1106,33 @@ object AlterTableUtil {
         val new_column_name = newDimensionInfo
           .columnName.split(CarbonCommonConstants.POINT.toCharArray).last
         val new_column_datatype = newDimensionInfo.dataType
+
+        // check if column datatypes are altered. If altered, validate them
         if (!old_column_datatype.equalsIgnoreCase(new_column_datatype)) {
-          // datatypes of complex children cannot be altered. So throwing exception for now.
-          throw new UnsupportedOperationException(
-            "Altering datatypes of any child column is not supported")
+          this.validateColumnDataType(newDimensionInfo, oldDimensionInfo)
+          alteredDatatypesMap += (oldDimensionInfo.getColName -> newDimensionInfo)

Review comment:
       Here, the key for both `alteredDatatypesMap` and `alteredColumnNamesMap` is a string, the column name. One map holds the new column name as its value (which we can get from `newDimensionInfo`), and the other holds the `newDimensionInfo` itself. So can we maintain a single map and handle both scenarios? If possible, it is better to do that.
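       A rough sketch of the single-map idea, assuming `DataTypeInfo` carries the new column name in `columnName` as used above (`alteredColumnsMap` is only an illustrative name):

      ```scala
      // one map: old-column-name -> new DataTypeInfo, which carries both the
      // new column name and the new datatype
      val alteredColumnsMap = collection.mutable.LinkedHashMap.empty[String, DataTypeInfo]

      // rename handling can derive the new name from the DataTypeInfo value
      alteredColumnsMap.foreach { case (oldColumnName, newDimensionInfo) =>
        val newColumnName = newDimensionInfo.columnName
          .split(CarbonCommonConstants.POINT.toCharArray).last
        // ... existing rename/datatype handling using oldColumnName and newColumnName ...
      }
      ```

       That way both scenarios read from the same entry instead of keeping two parallel maps in sync.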




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

