Github user viirya commented on the pull request: https://github.com/apache/spark/pull/4826#issuecomment-76519402

I wouldn't like to say this, but @liancheng, @yhuai, I think you should show more respect for others' contributions.

In #4782 you made changes to `ParquetConversions` that are almost the same as what I did in #4729. Take point 4 in this PR:

> 4. When generating a new Parquet table, we always set nullable/containsNull/valueContainsNull to true. So, we will not face situations where we cannot append data because containsNull/valueContainsNull in an Array/Map column of the existing table has already been set to false. This change makes the whole data pipeline more robust.

This is exactly what I did in #4729 (a rough sketch of the idea appears below). I remember you said it was a bad idea and that we would not use it. I can't see why #4729 couldn't have been modified and merged to do the same thing. It is important to respect others' contributions and the time spent on them.
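For readers following along, here is a minimal sketch of what point 4 describes: before writing out a new Parquet table, relax the schema so that every field is nullable and every Array/Map element may contain nulls, so a later append can never conflict with a stricter schema recorded for the existing table. The `ForceNullable` helper below is a hypothetical illustration, not the actual code in #4729 or #4782; it only uses the public Spark SQL type API.

```scala
import org.apache.spark.sql.types._

// Hypothetical helper: rewrite a schema so that all fields are nullable and
// all Array/Map element types allow nulls, mirroring the idea in point 4.
object ForceNullable {
  def apply(schema: StructType): StructType =
    StructType(schema.fields.map { f =>
      f.copy(dataType = relax(f.dataType), nullable = true)
    })

  private def relax(dt: DataType): DataType = dt match {
    case st: StructType     => apply(st)
    case ArrayType(et, _)   => ArrayType(relax(et), containsNull = true)
    case MapType(kt, vt, _) => MapType(relax(kt), relax(vt), valueContainsNull = true)
    case other              => other
  }
}
```

With a schema relaxed this way, appending a DataFrame whose columns happen to be non-nullable cannot fail merely because the existing table's metadata marked a column's containsNull/valueContainsNull as false.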