Hi Copon,

In the worker, run python3 to determine the version; it may return python3.4 on some nodes. Could you check the python3 results?

Best wishes,
Jinxin
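A minimal way to perform the check suggested above on a given node (assuming `python3` is on the PATH there) is:

```shell
# Print the interpreter version that a worker on this node would pick up
python3 --version
```

Running this on each worker node would show whether the versions differ, as Jinxin suspects.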
xiaoxingstack <xiaoxingst...@gmail.com>

On 04/23/2020 01:02, Odon Copon wrote:
Hi,
Something is happening to me that I don't quite
will no longer increase, and after a few minutes, the shell will report this error.

Best regards,
maqy

From: Tang Jinxin
Sent: April 22, 2020, 23:16
To: maqy
Cc: user@spark.apache.org
Subject: Re: [Spark SQL] [Beginner] Dataset[Row] collect to driver throw java.io.EOFException: Premature EOF: no length
Hi maqy,

Thanks for your question. After some consideration, I have a few ideas: first, avoid collecting to the driver if it is not necessary; instead, send the data from the executors (for example with foreachPartition). Second, if you are not already using a high-performance serializer/deserializer such as Kryo, it is worth a try. As a summary, I

Maybe the datanode stopped the data transfer due to a timeout. Could you please provide the exception stack?
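For reference, the Kryo suggestion above corresponds to Spark's standard serializer setting; one way to enable it is via a line in spark-defaults.conf (it can equally be set on a SparkConf at application startup):

```
spark.serializer org.apache.spark.serializer.KryoSerializer
```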
On 04/22/2020 19:53, maqy wrote:
Today I met the same problem using rdd.collect(); the type of the rdd is Tuple2[Int, Int]. And this problem will

maybe you could try something like foreachPartition in foreachRDD, which will not bring the data together to the driver and incur extra cost.

On 04/22/2020 21:02, Andrew Melo wrote:
Hi Maqy,

On Wed, Apr 22, 2020 at 3:24 AM maqy <454618...@qq.com> wrote:
>
> I
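The foreachPartition-inside-foreachRDD advice from the thread can be sketched roughly as follows. This is an illustrative sketch only, assuming a hypothetical DStream of (Int, Int) pairs named `stream` and a running StreamingContext; it is not code from the thread:

```scala
// Instead of stream.foreachRDD(rdd => rdd.collect()), which funnels every
// record to the driver, process each partition directly on the executors.
stream.foreachRDD { rdd =>
  rdd.foreachPartition { partition: Iterator[(Int, Int)] =>
    // This closure runs on an executor. Write records to an external
    // sink (database, message queue, file) instead of collecting them.
    partition.foreach { case (k, v) =>
      // e.g. sink.write(k, v)  -- hypothetical sink, shown for illustration
    }
  }
}
```

Because nothing is returned to the driver, this avoids both the driver-side memory pressure and the long transfer that appears to trigger the Premature EOF error discussed above.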