When you create a stream with sc.fileStream(), Spark only processes files
whose modification timestamp is newer than the time the stream started, so
data already in the HDFS directory should not be processed again. You may
have another problem, though: Spark will not process files that were moved
into your HDFS folder between application restarts. To avoid this, use
checkpointing as described here:
https://spark.apache.org/docs/latest/streaming-programming-guide.html#failure-of-the-driver-node
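For reference, here is a minimal sketch of the checkpoint-based recovery
pattern from that guide. The checkpoint directory, input directory, batch
interval, and app name below are all hypothetical placeholders; adjust them
to your cluster:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object FileStreamWithCheckpoint {
      // Hypothetical paths -- replace with real locations on your cluster.
      val checkpointDir = "hdfs:///user/spark/checkpoints/filestream-app"
      val inputDir      = "hdfs:///user/spark/incoming"

      // Builds a fresh context; only called when no checkpoint exists yet.
      def createContext(): StreamingContext = {
        val conf = new SparkConf().setAppName("FileStreamWithCheckpoint")
        val ssc  = new StreamingContext(conf, Seconds(30))
        ssc.checkpoint(checkpointDir)   // enable metadata checkpointing

        val lines = ssc.textFileStream(inputDir)
        lines.count().print()           // stand-in for your real processing
        ssc
      }

      def main(args: Array[String]): Unit = {
        // On restart this recovers the context (and its progress metadata)
        // from the checkpoint instead of building a brand-new stream.
        val ssc = StreamingContext.getOrCreate(checkpointDir, () => createContext())
        ssc.start()
        ssc.awaitTermination()
      }
    }

Note that after recovery the batch that was in flight at failure time can be
replayed, so your output operations should be idempotent if exact-once
output matters to you.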


akso wrote
> When streaming from HDFS through either sc.fileStream() or
> sc.textFileStream(), how can state info be saved so that it won't process
> duplicate data?
> When the app is stopped and restarted, all data from HDFS is processed
> again.
