chenjunjiedada commented on a change in pull request #374: Migrate spark table to iceberg table
URL: https://github.com/apache/incubator-iceberg/pull/374#discussion_r315939963
########## File path: spark/src/main/scala/org/apache/iceberg/spark/SparkTableUtil.scala ##########
@@ -19,18 +19,22 @@ package org.apache.iceberg.spark
+import com.google.common.collect.ImmutableMap
 import com.google.common.collect.Maps
 import java.nio.ByteBuffer
 import java.util
+import java.util.UUID
 import org.apache.hadoop.conf.Configuration
 import org.apache.hadoop.fs.{Path, PathFilter}
-import org.apache.iceberg.{DataFile, DataFiles, Metrics, MetricsConfig, PartitionSpec}
-import org.apache.iceberg.hadoop.HadoopInputFile
+import org.apache.iceberg._

Review comment:
   It looks like we need to import nine entities, which exceeds the 100-character line length limit. Spark's Scala coding style prefers a wildcard import when more than six entities are imported from the same package. Just want to confirm which we prefer here: breaking the import across lines, or following Spark's Scala coding style?
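   For reference, the two alternatives look roughly like the sketch below. Only the five entities visible in the removed diff line (DataFile, DataFiles, Metrics, MetricsConfig, PartitionSpec) are listed explicitly; the remaining entities among the nine are not shown in the diff above, so they are omitted rather than guessed.

   // Option 1: keep explicit imports and wrap the selector list across lines
   // so each line stays under the 100-character limit.
   import org.apache.iceberg.{DataFile, DataFiles, Metrics,
     MetricsConfig, PartitionSpec}

   // Option 2: follow the Spark Scala style, which prefers a wildcard import
   // once more than six entities come from the same package.
   import org.apache.iceberg._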