Thank you,
  I have resolved this issue by changing the Spark configuration and using two fields as DICTIONARY_INCLUDE.
  The test data (30G) was loaded 8 times, with each load taking about 1.5 minutes to complete.
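
For reference, a minimal sketch of the kind of change described above. The table name, column names, store path, and session setup are assumptions for illustration only; the two columns listed in DICTIONARY_INCLUDE stand in for the actual fields:

  // Sketch for a spark-shell with CarbonData 1.x on the classpath (assumed setup)
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.CarbonSession._

  // hypothetical store path
  val carbon = SparkSession.builder().getOrCreateCarbonSession("hdfs://namenode/carbonstore")

  // DICTIONARY_INCLUDE forces dictionary encoding on the two listed columns
  carbon.sql(
    """CREATE TABLE IF NOT EXISTS demo_table (
      |  col_a STRING,
      |  col_b STRING,
      |  col_c DOUBLE)
      |STORED BY 'carbondata'
      |TBLPROPERTIES ('DICTIONARY_INCLUDE'='col_a,col_b')""".stripMargin)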

 I am currently testing another, larger dataset and hope it will succeed. Thank you
very much for the help!
=========================
Liu feng


-----Original Message-----
From: manishgupta88 [mailto:tomanishgupt...@gmail.com]
Sent: September 19, 2017 13:27
To: dev@carbondata.apache.org
Subject: Re: insert carbondata table failed

Hi Feng,

You can also refer to the links below, where Spark users have tried to
resolve this issue by making changes in the configuration. This might help
you.

https://stackoverflow.com/questions/28901123/why-do-spark-jobs-fail-with-org-apache-spark-shuffle-metadatafetchfailedexceptio

https://stackoverflow.com/questions/29850784/what-are-the-likely-causes-of-org-apache-spark-shuffle-metadatafetchfailedexcept
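
The settings discussed in those threads are mostly shuffle and memory related. A hedged sketch of where such overrides would go; the specific keys and values below are illustrative assumptions, not tested recommendations:

  import org.apache.spark.sql.SparkSession

  // Illustrative values only; tune for the actual cluster and data size
  val spark = SparkSession.builder()
    .config("spark.executor.memory", "8g")                // more heap per executor
    .config("spark.yarn.executor.memoryOverhead", "2048") // extra off-heap headroom (MB)
    .config("spark.network.timeout", "600s")              // tolerate slow shuffle fetches
    .config("spark.sql.shuffle.partitions", "400")        // spread shuffle data across more partitions
    .getOrCreate()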

Regards
Manish Gupta



--
Sent from:
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/



