Fokko commented on code in PR #5419:
URL: https://github.com/apache/iceberg/pull/5419#discussion_r979749088


##########
api/src/main/java/org/apache/iceberg/PartitionField.java:
##########
@@ -28,12 +28,14 @@ public class PartitionField implements Serializable {
   private final int fieldId;
   private final String name;
   private final Transform<?, ?> transform;
+  private final int schemaId;
 
-  PartitionField(int sourceId, int fieldId, String name, Transform<?, ?> transform) {

Review Comment:
   I think we want to keep this constructor, and add another one to avoid breaking the API.
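
   One way to do that is to keep the old signature and have it delegate to the new one with a default schema id. A minimal standalone sketch of the overload pattern (the class name, the `UNKNOWN_SCHEMA_ID` sentinel, and the trimmed-down fields are illustrative, not the actual Iceberg code):

   ```java
   // Hypothetical stand-in for PartitionField, showing only the overload pattern.
   class PartitionFieldSketch {
     // assumed sentinel meaning "schema id not recorded"; not part of the real API
     static final int UNKNOWN_SCHEMA_ID = -1;

     private final int sourceId;
     private final int fieldId;
     private final String name;
     private final int schemaId;

     // existing signature, kept so callers compiled against the old API keep working
     PartitionFieldSketch(int sourceId, int fieldId, String name) {
       this(sourceId, fieldId, name, UNKNOWN_SCHEMA_ID);
     }

     // new constructor that also records the schema id
     PartitionFieldSketch(int sourceId, int fieldId, String name, int schemaId) {
       this.sourceId = sourceId;
       this.fieldId = fieldId;
       this.name = name;
       this.schemaId = schemaId;
     }

     int schemaId() {
       return schemaId;
     }
   }
   ```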



##########
api/src/main/java/org/apache/iceberg/PartitionSpec.java:
##########
@@ -536,14 +576,49 @@ Builder add(int sourceId, String name, Transform<?, ?> transform) {
 
     Builder add(int sourceId, int fieldId, String name, Transform<?, ?> transform) {
       checkAndAddPartitionName(name, sourceId);
-      fields.add(new PartitionField(sourceId, fieldId, name, transform));
+      fields.add(new PartitionField(sourceId, fieldId, name, transform, schema.schemaId()));
+      lastAssignedFieldId.getAndAccumulate(fieldId, Math::max);
+      return this;
+    }
+
+    Builder addExistingWithNewTransform(PartitionField field, Transform<?, ?> transform) {
+      return addFrom(field.sourceId(), field.fieldId(), field.name(), transform, field.schemaId());
+    }
+
+    // to add an existing UnboundPartitionField
+    Builder addExisting(UnboundPartitionSpec.UnboundPartitionField field) {
+      if (field.partitionId() != null) {
+        return addFrom(
+            field.sourceId(),
+            field.partitionId(),
+            field.name(),
+            field.transform(),
+            field.schemaId());
+      }
+      return addFrom(
+          field.sourceId(), nextFieldId(), field.name(), field.transform(), field.schemaId());
+    }
+
+    // to add an existing PartitionField
+    Builder addExisting(PartitionField field) {
+      return addFrom(
+          field.sourceId(), field.fieldId(), field.name(), field.transform(), field.schemaId());
+    }
+
+    // to add an existing field with minor or no modifications
+    Builder addFrom(
+        int sourceId, int fieldId, String name, Transform<?, ?> transform, int schemaId) {
+      checkAndAddPartitionName(name, sourceId);
+      fields.add(new PartitionField(sourceId, fieldId, name, transform, schemaId));
       lastAssignedFieldId.getAndAccumulate(fieldId, Math::max);
       return this;
     }
 
     public PartitionSpec build() {
       PartitionSpec spec = buildUnchecked();
-      checkCompatibility(spec, schema);
+      // TODO: fields in a spec can be mapped to more than one schema. Hence, we cannot check
+      // against a single schema.
+      //      checkCompatibility(spec, schema);

Review Comment:
   Without this check we can't really tell if it works.
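
   One way the check could come back is to group the spec's fields by the schema they were bound against and validate each group against its own schema. A self-contained sketch under assumed, simplified types (`FieldRef`, `SchemaInfo`, and this `checkCompatibility` are illustrative stand-ins, not Iceberg's API):

   ```java
   import java.util.ArrayList;
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;
   import java.util.Set;

   // Hypothetical sketch: validate a spec whose fields may reference several schemas.
   class MultiSchemaCheckSketch {
     // a partition field reduced to its source column id and the schema it was bound to
     record FieldRef(int sourceId, int schemaId) {}

     // a schema reduced to its id and the set of column ids it contains
     record SchemaInfo(int schemaId, Set<Integer> columnIds) {}

     static void checkCompatibility(List<FieldRef> specFields, Map<Integer, SchemaInfo> schemasById) {
       // group the spec's fields by the schema id each one was bound against
       Map<Integer, List<FieldRef>> bySchema = new HashMap<>();
       for (FieldRef field : specFields) {
         bySchema.computeIfAbsent(field.schemaId(), k -> new ArrayList<>()).add(field);
       }
       // check each group against its own schema instead of a single schema
       for (Map.Entry<Integer, List<FieldRef>> entry : bySchema.entrySet()) {
         SchemaInfo schema = schemasById.get(entry.getKey());
         if (schema == null) {
           throw new IllegalStateException("Spec references unknown schema id: " + entry.getKey());
         }
         for (FieldRef field : entry.getValue()) {
           if (!schema.columnIds().contains(field.sourceId())) {
             throw new IllegalStateException(
                 "Source column " + field.sourceId() + " not found in schema " + schema.schemaId());
           }
         }
       }
     }
   }
   ```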



##########
spark/v3.3/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestAlterTablePartitionFields.java:
##########
@@ -436,4 +441,20 @@ private SparkTable sparkTable() throws Exception {
     Identifier identifier = Identifier.of(tableIdent.namespace().levels(), tableIdent.name());
     return (SparkTable) catalog.loadTable(identifier);
   }
+
+  @Test
+  public void testDropColumnAfterDropPartition() {
+    Assume.assumeTrue(catalogName.equals(SparkCatalogConfig.HADOOP.catalogName()));
+    sql(
+        "CREATE TABLE %s (id bigint, data string, category string) USING iceberg PARTITIONED BY (category) TBLPROPERTIES('format-version' = '2')",
+        tableName);
+    sql("ALTER TABLE %s DROP PARTITION FIELD category", tableName);
+    sql("ALTER TABLE %s DROP COLUMN category", tableName);
+    Set<String> fieldNames =
+        validationCatalog.loadTable(tableIdent).schema().asStruct().fields().stream()
+            .map(Types.NestedField::name)
+            .collect(Collectors.toSet());
+    Assert.assertEquals("Should only have two columns", 2, fieldNames.size());
+    Assert.assertFalse("Should not contain the dropped column", fieldNames.contains("category"));
+  }

Review Comment:
   I think we want to test selecting rows here as well.
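
   A hedged sketch of what that extra coverage could look like, as a fragment appended after the assertions above. It relies on the suite's `sql` and `scalarSql` helpers and on made-up row values, so it is not standalone:

   ```java
   // hypothetical fragment: after dropping the partition field and the column,
   // verify the table still accepts writes and that rows read back correctly
   sql("INSERT INTO %s VALUES (1L, 'a')", tableName);
   Assert.assertEquals(
       "Should read back the inserted row",
       1L, scalarSql("SELECT count(*) FROM %s", tableName));
   ```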



##########
api/src/main/java/org/apache/iceberg/UnboundPartitionSpec.java:
##########
@@ -53,11 +53,7 @@ private PartitionSpec.Builder copyToBuilder(Schema schema) {
     PartitionSpec.Builder builder = PartitionSpec.builderFor(schema).withSpecId(specId);
 
     for (UnboundPartitionField field : fields) {
-      if (field.partitionId != null) {

Review Comment:
   I think this is there for backward compatibility with V1, where partition field IDs were not written to the metadata.
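
   For context on that V1 path: specs read from older metadata can have a null `partitionId`, and the ids are then re-derived by position starting at 1000, Iceberg's partition data id start. A standalone sketch of that fallback with simplified types (this is an illustration of the convention, not the Iceberg implementation):

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Hedged sketch: when a partition field id is missing (null), as in specs read
   // from older V1 metadata, fall back to assigning ids by position starting at
   // 1000, matching Iceberg's partition data id start convention.
   class V1FieldIdSketch {
     static final int PARTITION_DATA_ID_START = 1000;

     // partitionIds holds the ids read from metadata; null means "not recorded" (V1)
     static List<Integer> assignFieldIds(List<Integer> partitionIds) {
       List<Integer> assigned = new ArrayList<>();
       for (int pos = 0; pos < partitionIds.size(); pos++) {
         Integer explicit = partitionIds.get(pos);
         // keep an explicit id when present; otherwise assign positionally
         assigned.add(explicit != null ? explicit : PARTITION_DATA_ID_START + pos);
       }
       return assigned;
     }
   }
   ```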



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

