GitHub user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19226#discussion_r139112030
  
    --- Diff: python/pyspark/serializers.py ---
    @@ -343,6 +343,8 @@ def _load_stream_without_unbatching(self, stream):
             key_batch_stream = self.key_ser._load_stream_without_unbatching(stream)
             val_batch_stream = self.val_ser._load_stream_without_unbatching(stream)
             for (key_batch, val_batch) in zip(key_batch_stream, val_batch_stream):
    +            key_batch = key_batch if hasattr(key_batch, '__len__') else list(key_batch)
    --- End diff ---
    
    Could we add a small comment that this is required because
    `_load_stream_without_unbatching` could return an iterator of iterators in
    this case?
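    
    For instance, a sketch of the kind of inline comment meant here (the wording
    is only illustrative and assumes the nested serializer can itself be another
    PairDeserializer/CartesianDeserializer, as the surrounding code suggests):
    
            # The nested serializer's _load_stream_without_unbatching may yield an
            # iterator of iterators (e.g. when key_ser/val_ser is itself a
            # PairDeserializer or CartesianDeserializer), which has no __len__,
            # so materialize it into a list before the length check below.
            key_batch = key_batch if hasattr(key_batch, '__len__') else list(key_batch)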

