Try switching on trace logging. Is your ES cluster running behind Docker? It's
possible that your Spark cluster can't communicate using Docker-internal IPs.
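
If Docker networking is the cause, elasticsearch-hadoop's documented `es.nodes.wan.only` setting is the usual workaround: it makes the connector talk only to the declared nodes instead of the container-internal addresses it discovers. A minimal sketch, assuming spark-shell with the elasticsearch-hadoop connector on the classpath; the node address and index name below are placeholders, not values from this thread:

```scala
// Sketch only. Trace logging for the connector's REST layer can be
// enabled in log4j.properties with:
//   log4j.logger.org.elasticsearch.hadoop.rest=TRACE
val df = spark.read
  .format("org.elasticsearch.spark.sql")
  .option("es.nodes", "10.0.1.8:9200")    // placeholder: a node reachable from Spark
  .option("es.nodes.wan.only", "true")    // use only es.nodes; skip node discovery
  .load("myindex")                        // placeholder index name
df.show()
```

With `es.nodes.wan.only` set, the connector disables node discovery, so Spark executors never try to connect to Docker-internal IPs.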

Regards
Rohit

On May 15, 2017, at 4:55 PM, Nick Pentreath
<nick.pentre...@gmail.com> wrote:

It may be best to ask on the elasticsearch-hadoop GitHub project.

On Mon, 15 May 2017 at 13:19, nayan sharma
<nayansharm...@gmail.com> wrote:
Hi All,

ERROR:-

Caused by: org.apache.spark.util.TaskCompletionListenerException: Connection 
error (check network and/or proxy settings)- all nodes failed; tried 
[[10.0.1.8*:9200, 10.0.1.**:9200, 10.0.1.***:9200]]

I am getting this error while calling show() on the DataFrame.

df.count = 5190767 and df.printSchema both work fine.
The DataFrame has 329 columns.

Does anyone have any idea regarding this? Please help me fix it.


Thanks,
Nayan



