Github user BryanCutler commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21546#discussion_r204868891
  
    --- Diff: python/pyspark/serializers.py ---
    @@ -184,27 +184,67 @@ def loads(self, obj):
             raise NotImplementedError
     
     
    -class ArrowSerializer(FramedSerializer):
    +class BatchOrderSerializer(Serializer):
    --- End diff --
    
    Yeah, I could separate this out, but is there anything I can do to alleviate 
your concern?  I'm not sure I'll have time to make another PR before the 2.4.0 
code freeze, and I think this is a really useful memory optimization to help 
prevent OOMs in the driver JVM.  Also, I might need to rerun the benchmarks 
here, just to be thorough, since the previous ones were from quite a while ago.
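
For readers following along, here is a minimal sketch of the idea behind a 
batch-order serializer as discussed above: batches stream to Python as they 
arrive, possibly out of partition order, and a trailing list of indices lets 
the Python side restore the original order so the driver JVM does not have to 
buffer every batch first. This is not the code in the PR; the class name 
`BatchOrderSketch`, the `_read_int`/`_write_int` helpers, and the toy wire 
format are assumptions for illustration only.

    import struct
    from io import BytesIO


    def _write_int(value, stream):
        # Hypothetical helper: big-endian 4-byte int framing.
        stream.write(struct.pack("!i", value))


    def _read_int(stream):
        return struct.unpack("!i", stream.read(4))[0]


    class BatchOrderSketch(object):
        """Wrap an inner batch loader whose batches may arrive out of
        partition order, then read a trailing list of indices recording
        which partition each batch came from, so ordering is restored in
        Python rather than by buffering everything in the JVM."""

        def __init__(self, load_batches):
            self.load_batches = load_batches
            self.batch_order = None

        def load_stream(self, stream):
            # Yield batches as they stream in, possibly out of order.
            for batch in self.load_batches(stream):
                yield batch
            # The stream ends with the count of indices, then the indices.
            count = _read_int(stream)
            self.batch_order = [_read_int(stream) for _ in range(count)]

        def arrival_order(self):
            assert self.batch_order is not None, "consume load_stream() first"
            return self.batch_order


    # Toy round trip: three "batches" written out of order, reordered in Python.
    if __name__ == "__main__":
        buf = BytesIO()
        out_of_order = [(2, b"batch-2"), (0, b"batch-0"), (1, b"batch-1")]
        for _, payload in out_of_order:
            _write_int(len(payload), buf)
            buf.write(payload)
        _write_int(len(out_of_order), buf)
        for index, _ in out_of_order:
            _write_int(index, buf)
        buf.seek(0)

        def load_batches(stream):
            for _ in range(len(out_of_order)):
                length = _read_int(stream)
                yield stream.read(length)

        ser = BatchOrderSketch(load_batches)
        batches = list(ser.load_stream(buf))
        ordered = [b for _, b in sorted(zip(ser.arrival_order(), batches))]
        print(ordered)  # [b'batch-0', b'batch-1', b'batch-2']

The sketch only captures the order-tracking part; in the diff above the real 
class lives in python/pyspark/serializers.py and wraps the existing Arrow 
batch deserialization rather than a toy loader.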

