Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/20184
Can one of the admins verify this patch?
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail:
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/20184
cc @jiangxb1987
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/20184
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/20184
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/20184
Can one of the admins verify this patch?
---
Github user liutang123 commented on the issue:
https://github.com/apache/spark/pull/20184
Hi @jerryshao, I tried to lazily allocate all the InputStreams and byte arrays in
UnsafeSorterSpillReader.
Would you please take a look at this when you have time?
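The lazy allocation described above — deferring the stream open and buffer allocation until the first read — can be sketched roughly like this (class and method names are illustrative, not the actual Spark change):

```java
import java.io.*;

// Illustrative sketch of lazy allocation: the reader does not open the file
// or allocate its buffer until the first read, so idle spill readers hold
// no buffer memory. Names are hypothetical, not Spark's actual code.
class LazySpillReader implements Closeable {
    private final File file;
    private final int bufferSize;
    private BufferedInputStream in; // created on first use, not in the constructor

    LazySpillReader(File file, int bufferSize) {
        this.file = file;
        this.bufferSize = bufferSize;
    }

    private BufferedInputStream stream() throws IOException {
        if (in == null) {
            in = new BufferedInputStream(new FileInputStream(file), bufferSize);
        }
        return in;
    }

    int read(byte[] dst) throws IOException {
        return stream().read(dst);
    }

    @Override
    public void close() throws IOException {
        if (in != null) in.close();
    }
}
```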
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20184
> I think that a lazy buffer allocation cannot thoroughly solve this
problem because UnsafeSorterSpillReader has a BufferedFileInputStream which will
allocate off-heap memory.
Can you
Github user liutang123 commented on the issue:
https://github.com/apache/spark/pull/20184
Jenkins, retest this please
---
Github user liutang123 commented on the issue:
https://github.com/apache/spark/pull/20184
I think that a lazy buffer allocation cannot thoroughly solve this problem
because UnsafeSorterSpillReader has a BufferedFileInputStream which will allocate
off-heap memory.
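For context, the off-heap allocation referred to here typically comes from buffering file reads through a direct ByteBuffer rather than a heap byte array; a rough sketch of that pattern (illustrative names, not Spark's actual stream class):

```java
import java.io.*;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.*;

// Rough sketch of a buffered reader backed by a direct (off-heap) ByteBuffer:
// each open reader pins off-heap memory for its buffer regardless of heap
// settings. Illustrative code, not Spark's implementation.
class DirectBufferedReader implements Closeable {
    private final FileChannel channel;
    private final ByteBuffer buf; // allocated outside the Java heap

    DirectBufferedReader(Path path, int bufferSize) throws IOException {
        this.channel = FileChannel.open(path, StandardOpenOption.READ);
        this.buf = ByteBuffer.allocateDirect(bufferSize);
        this.buf.flip(); // start with an empty buffer
    }

    // Returns the next unsigned byte, or -1 at end of file.
    int read() throws IOException {
        if (!buf.hasRemaining()) {
            buf.clear();
            if (channel.read(buf) <= 0) return -1;
            buf.flip();
        }
        return buf.get() & 0xff;
    }

    @Override
    public void close() throws IOException {
        channel.close();
    }
}
```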
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20184
The code here should be fine for the normal case. The problem is that there are
so many spill files, which requires maintaining lots of handlers' buffers. A
lazy buffer allocation could solve this
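The point above — per-reader buffers multiplied across many spill files — can be made concrete with a toy calculation (the sizes are illustrative, not Spark's actual defaults):

```java
// Toy calculation: total buffer memory grows linearly with the number of
// open spill readers. Sizes are illustrative, not Spark's actual defaults.
public class SpillBufferMath {
    public static long totalBufferBytes(int numSpillFiles, int bufferBytesPerReader) {
        return (long) numSpillFiles * bufferBytesPerReader;
    }

    public static void main(String[] args) {
        // 4096 spill files x 1 MiB per reader = 4 GiB pinned before any data is read.
        long total = totalBufferBytes(4096, 1 << 20);
        System.out.println(total / (1L << 30) + " GiB"); // prints "4 GiB"
    }
}
```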
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20184
Thanks, let me try to reproduce it locally.
---
Github user liutang123 commented on the issue:
https://github.com/apache/spark/pull/20184
Hi @jerryshao, we can reproduce this issue as follows:
```
$ bin/spark-shell --master local --conf spark.sql.windowExec.buffer.spill.threshold=1 --driver-memory 1G
scala> sc.range(1,
```
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20184
@liutang123, can you please tell us how to reproduce your issue easily?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/20184
Can one of the admins verify this patch?
---