GitHub user ala commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21206#discussion_r185493309
  
    --- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/WritableColumnVector.java ---
    @@ -92,17 +92,22 @@ public void reserve(int requiredCapacity) {
           } else {
             throwUnsupportedException(requiredCapacity, null);
           }
    +    } else if (requiredCapacity < 0) {
    --- End diff ---
    
    It is definitely possible. In fact, if the overflowed value lands between 0 and `MAX_CAPACITY`, this check will not detect the error.
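    To make the two failure modes concrete, here is a minimal standalone sketch (plain Java, not the Spark code; the `MAX_CAPACITY` value and the operand sizes are made up for illustration):

```java
public class OverflowSketch {
  // Hypothetical cap, analogous in spirit to the vector's capacity limit.
  static final int MAX_CAPACITY = Integer.MAX_VALUE - 15;

  public static void main(String[] args) {
    // Case 1: the product wraps into the negative range.
    // 1_500_000_000 * 2 = 3_000_000_000 exceeds Integer.MAX_VALUE and
    // wraps to -1_294_967_296, so a `requiredCapacity < 0` check fires.
    int required1 = 1_500_000_000 * 2;
    System.out.println(required1 + " -> caught: " + (required1 < 0));

    // Case 2: a "big overflow" wraps past zero back into [0, MAX_CAPACITY].
    // 1_500_000_000 * 3 = 4_500_000_000 wraps to 205_032_704, a positive
    // value below MAX_CAPACITY, so the sign check cannot detect it.
    int required2 = 1_500_000_000 * 3;
    System.out.println(required2 + " -> caught: " + (required2 < 0));
  }
}
```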
    
    However, covering those "big overflow" cases would be more complex: we'd have to use 64-bit integers or other safeguards in multiple places, instead of one simple `if`. I think it would be worth implementing in the future, but for now this simple check should help Spark users in the majority of cases. A sketch of what the 64-bit variant could look like follows.
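    For reference, one shape the future fix could take (just a sketch of the 64-bit-safeguard idea under assumed names, not a proposed patch):

```java
// Sketch: compute the requirement in 64-bit space first, then validate,
// so both negative wraps and "big overflows" past zero are rejected.
static void checkCapacity(long requiredCapacity) {
  if (requiredCapacity < 0 || requiredCapacity > MAX_CAPACITY) {
    throw new RuntimeException(
        "Required capacity overflows or exceeds MAX_CAPACITY: " + requiredCapacity);
  }
}

// Callers widen before multiplying, e.g.:
//   long required = (long) numRows * bytesPerRow;
//   checkCapacity(required);
```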


---
