Hi,

I am running a Spark Streaming application that processes data files on a 3-node
cluster. However, it does not process any file containing 0.4 million entries;
files with any other number of entries are processed fine. When running in local
mode, even a file with 0.4 million entries is processed without problems.
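For reference, my driver program looks roughly like the sketch below. The master
URL, batch interval, and input directory are placeholders, not my actual values:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object FileStreamRepro {
      def main(args: Array[String]): Unit = {
        // "spark://master:7077" and the 10-second batch interval are
        // placeholders; the real values differ in my deployment.
        val conf = new SparkConf()
          .setAppName("FileStreamRepro")
          .setMaster("spark://master:7077")
        val ssc = new StreamingContext(conf, Seconds(10))

        // Monitor a directory for newly arriving files;
        // "/data/incoming" is a hypothetical path.
        val lines = ssc.textFileStream("/data/incoming")

        // Count entries per batch, so a silently skipped file
        // shows up as a zero (or missing) count.
        lines.count().print()

        ssc.start()
        ssc.awaitTermination()
      }
    }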

I've tried several different files that each contain that many entries, but none
of them are processed.

Any ideas why such a weird issue could occur?

Thanks


