GitHub user staple commented on the pull request:

    https://github.com/apache/spark/pull/1592#issuecomment-50577338
  
    Sure, I’m fine with reworking this based on the other changes (it looks
    like some merge conflicts have already cropped up in master since I
    submitted my PR last week). My change set is a little simpler than the
    one you linked to, though, so would it make sense for me to wait until
    that one goes in?
    
    I also thought I’d add a couple of notes on what I had in mind with
    this patch:
    
    1) I added a new Row serialization pathway between Python and Java, based
    on JList[Array[Byte]] rather than the existing RDD[Array[Byte]]. I wasn’t
    overjoyed about doing this, but I noticed that some QueryPlans implement
    optimizations in executeCollect(), which returns an Array[Row] rather
    than the typical RDD[Row] that can be shipped to Python using the
    existing serialization code. To me it made sense to ship the Array[Row]
    to Python directly instead of converting it back to an RDD[Row] just for
    the purpose of sending the Rows through the existing serialization code.
    But let me know if you have any thoughts about this.
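
    To make the intent concrete, here is a minimal sketch of the Python side
    of that pathway (the helper name is hypothetical, and it assumes each
    byte array in the list holds a pickled batch of rows, mirroring the
    batching used by the existing RDD[Array[Byte]] code):

    ```python
    import pickle

    def _collect_rows(java_byte_list):
        """Deserialize a java.util.List[Array[Byte]] returned through Py4J."""
        rows = []
        # Py4J exposes java.util.List as a Python sequence; each Java byte[]
        # element arrives as a bytearray.
        for chunk in java_byte_list:
            rows.extend(pickle.loads(bytes(chunk)))
        return rows
    ```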
    
    2) I moved JavaStackTrace from rdd.py to context.py. This made sense to
    me since JavaStackTrace is all about configuring a context attribute, and
    the _extract_concise_traceback function it depends on was already being
    called from context.py even though it was a ‘private’ function of rdd.py.
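
    For reference, a rough sketch of the class after the move (condensed; the
    traceback fields follow the existing _extract_concise_traceback helper,
    and minor details may differ from the actual patch):

    ```python
    class JavaStackTrace(object):
        """Context manager that records the Python call site on the
        SparkContext, so JVM-side operations are tagged with where they
        originated in user code."""

        def __init__(self, sc):
            tb = _extract_concise_traceback()  # now a sibling function in context.py
            if tb is not None:
                self._traceback = "%s at %s:%s" % (tb.function, tb.file, tb.linenum)
            else:
                self._traceback = "Error! Could not extract traceback info"
            self._context = sc

        def __enter__(self):
            # Tag subsequent JVM operations with the originating call site.
            self._context._jsc.setCallSite(self._traceback)

        def __exit__(self, type, value, tb):
            self._context._jsc.setCallSite(None)
    ```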

