GitHub user marmbrus commented on the pull request:

    https://github.com/apache/spark/pull/4521#issuecomment-74394386
  
    The Parquet write path needs to assume the data matches the schema;
    otherwise we'll slow down all writing of data to Parquet. Instead, I
    suggest we check at the JVM side of the Python/JVM boundary and
    convert ints to longs.
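
    As a minimal sketch of that boundary check (the object and method
    names here are hypothetical, assuming the org.apache.spark.sql.types
    API), the coercion could look something like:

        import org.apache.spark.sql.types.{DataType, LongType}

        object PythonBoundaryCoercion {
          // Python ints arrive boxed as java.lang.Integer even when the
          // schema declares LongType, so widen them once at the Py4J
          // boundary instead of re-checking on every Parquet write.
          def coerceToSchema(value: Any, dataType: DataType): Any =
            (value, dataType) match {
              case (i: java.lang.Integer, LongType) => i.longValue()
              case _ => value // pass everything else through unchanged
            }
        }

    Doing the conversion once per value as rows cross the boundary keeps
    the Parquet write path itself free of per-row type checks.
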
    On Feb 14, 2015 11:09 AM, "Don Drake" <notificati...@github.com> wrote:
    
    > I'm struggling with how to handle this. I would prefer that
    > saveAsParquet() handled converting the value to a long for me.
    > However, I could update the test to store a long datatype, but again
    > that means if I update a SchemaRDD long value in Python, I have to
    > guarantee it is a long. Not very Pythonic, IMO.
    >
    > Thoughts?
    >
    > —
    > Reply to this email directly or view it on GitHub
    > <https://github.com/apache/spark/pull/4521#issuecomment-74387840>.
    >


