GitHub user aarondav commented on the pull request:

    https://github.com/apache/spark/pull/3527#issuecomment-65028176
  
    I believe it is only 1 bit, not 1 byte, per block. Further, I would
    estimate compression on largely uniform data to be at least around 10x,
    so your example would ideally use only around 1.2 MB.
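    
    To make the arithmetic concrete, here is a back-of-the-envelope sketch
    (not anything from this PR); it assumes the earlier example used roughly
    10,000 x 10,000 partitions and a flat 10x compression ratio, both of
    which are illustrative guesses:
    
        // Rough estimator: one bit per (map, reduce) block, then ~10x
        // compression. Partition counts and ratio are assumptions, not
        // values taken from the PR under review.
        def estimatedStatusBytes(maps: Long, reducers: Long,
                                 compressionRatio: Double = 10.0): Double =
          (maps * reducers / 8.0) / compressionRatio
    
        estimatedStatusBytes(10000L, 10000L)  // ~1.25e6 bytes, about 1.2 MB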
    
    Anyway, you can arbitrarily multiply the number of partitions to
    demonstrate the issue. 1 million by 1 million is still a tough cookie to
    crack, but we don't really want users to have to meddle with frame sizes.
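    
    Running the same hypothetical estimator for that case shows why no
    reasonable frame-size setting would save the user here:
    
        // 1e12 blocks at 1 bit each is 125 GB raw; even at 10x compression
        // it dwarfs any sane frame size (spark.akka.frameSize defaulted to
        // 10 MB around this time).
        estimatedStatusBytes(1000000L, 1000000L)  // ~1.25e10 bytes, ~12.5 GB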
    
    Having this check is fine, of course, regardless of whether users should
    ever have to change the frame size.

