Github user ueshin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19246#discussion_r139871986
  
    --- Diff: python/pyspark/sql/types.py ---
    @@ -410,6 +410,24 @@ def __init__(self, name, dataType, nullable=True, 
metadata=None):
             self.dataType = dataType
             self.nullable = nullable
             self.metadata = metadata or {}
    +        self.needConversion = dataType.needConversion
    +        self.toInternal = dataType.toInternal
    +        self.fromInternal = dataType.fromInternal
    +
    +    def __getstate__(self):
    +        """Return state values to be pickled."""
    +        return (self.name, self.dataType, self.nullable, self.metadata)
    +
    +    def __setstate__(self, state):
    +        """Restore state from the unpickled state values."""
    +        name, dataType, nullable, metadata = state
    +        self.name = name
    +        self.dataType = dataType
    +        self.nullable = nullable
    +        self.metadata = metadata
    +        self.needConversion = dataType.needConversion
    --- End diff --
    
    What's the difference between your benchmark and @maver1ck's? Why are the improvements so different?
    If the improvement is not significant, we shouldn't take this patch because, as you said, it confuses developers.
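
    For readers following along, the pattern in the diff can be sketched standalone (with simplified, hypothetical class names standing in for the real pyspark ones): caching the data type's conversion method as an instance attribute avoids a repeated attribute lookup on hot paths, while `__getstate__`/`__setstate__` keep the cached bound method out of the pickled state (bound methods were not picklable in Python 2) and rebuild it after unpickling.

    ```python
    import pickle

    class DataType:
        """Stand-in for pyspark's DataType (hypothetical, simplified)."""
        def needConversion(self):
            return False

    class StructField:
        def __init__(self, name, dataType):
            self.name = name
            self.dataType = dataType
            # Cache the bound method so callers skip one attribute lookup.
            self.needConversion = dataType.needConversion

        def __getstate__(self):
            # Exclude the cached bound method from the pickled state.
            return (self.name, self.dataType)

        def __setstate__(self, state):
            name, dataType = state
            self.name = name
            self.dataType = dataType
            # Re-cache the bound method after unpickling.
            self.needConversion = dataType.needConversion

    f = StructField("a", DataType())
    g = pickle.loads(pickle.dumps(f))
    assert g.needConversion() is False
    ```

    The benchmark question above is essentially about whether saving that one lookup per call is worth this extra pickling machinery.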


---
