Github user srowen commented on the pull request:

    https://github.com/apache/spark/pull/397#issuecomment-40285054
  
    Sure, I myself wasn't suggesting that we make them throw exceptions. If one really wanted to prohibit their use, throwing would be a way to do so even when subclassing, but I'm not suggesting they must be prohibited.
    
    To me, the methods aren't broken, and the resulting class can be reused for the purpose @rxin has in mind in this PR if these methods remain available.
    
    Agree that the max size of such a buffer could be configurable. However, the limit can't be more than `Integer.MAX_VALUE`, simply because that is the largest an array can be. I was just pointing this out in regard to `var newCapacity: Int = oldCapacity << 1`, which overflows to a negative value once `oldCapacity` exceeds half of `Integer.MAX_VALUE`.
    
    (There's another tangent here about whether the buffer class should even 
enforce a limit -- what does it do, fail with `IOException`? -- because the 
caller has to manage it either way.)
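    (Just to make that tangent concrete, here is a throwaway sketch of what enforcement might look like if the stream did it itself; the class name and the `maxCapacity` parameter are hypothetical.)

```scala
import java.io.{ByteArrayOutputStream, IOException}

// Hypothetical: a stream that enforces a cap itself. The class name
// and `maxCapacity` parameter are made up for illustration.
class CappedByteArrayOutputStream(maxCapacity: Int)
    extends ByteArrayOutputStream {
  // `count` is the protected write position inherited from
  // ByteArrayOutputStream; compare in Long to avoid overflow.
  private def checkLimit(extra: Int): Unit = {
    if (count.toLong + extra > maxCapacity) {
      throw new IOException(s"Buffer limit of $maxCapacity bytes exceeded")
    }
  }
  override def write(b: Int): Unit = { checkLimit(1); super.write(b) }
  override def write(b: Array[Byte], off: Int, len: Int): Unit = {
    checkLimit(len)
    super.write(b, off, len)
  }
}
```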
    
    The gist of my strawman suggestion is: what if we had a simple subclass of `ByteArrayOutputStream` that exposes a `ByteBuffer` (roughly the sketch below)? I argue that is a basis for removing some long-standing array copies in various parts of the code, which is @rxin's purpose. I think it then suits your purpose too, except for the compaction logic, though I wonder whether that is even needed. (Maybe I should take this to the JIRA issue you opened about your work?)
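    (For concreteness, a minimal sketch of what I mean; the class name is made up, and it relies only on the protected `buf` and `count` fields that `ByteArrayOutputStream` already exposes to subclasses.)

```scala
import java.io.ByteArrayOutputStream
import java.nio.ByteBuffer

// Sketch only; the class name is made up. ByteArrayOutputStream keeps
// its data in the protected fields `buf` and `count`, so a subclass
// can hand out a ByteBuffer view without the copy that toByteArray()
// performs.
class ExposingByteArrayOutputStream(initialSize: Int)
    extends ByteArrayOutputStream(initialSize) {
  // Wraps the backing array directly: no copy, but the buffer shares
  // state with the stream, so don't write to the stream while reading.
  def toByteBuffer: ByteBuffer = ByteBuffer.wrap(buf, 0, count)
}
```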

