[ https://issues.apache.org/jira/browse/SPARK-15919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen resolved SPARK-15919.
-------------------------------
    Resolution: Not A Problem

Look at the implementation of DStream.saveAsTextFiles -- about all it does is call foreachRDD as I described. You can make this do whatever you like to name the file in your own code, but you have to do something like this to achieve what you want. This JIRA should not be reopened.

> DStream "saveAsTextFile" doesn't update the prefix after each checkpoint
> ------------------------------------------------------------------------
>
>                 Key: SPARK-15919
>                 URL: https://issues.apache.org/jira/browse/SPARK-15919
>             Project: Spark
>          Issue Type: Bug
>          Components: Java API
>    Affects Versions: 1.6.1
>         Environment: Amazon EMR
>            Reporter: Aamir Abbas
>
> I have a Spark streaming job that reads a data stream and saves it as a text file after a predefined time interval, using:
>
>     stream.dstream().repartition(1).saveAsTextFiles(getOutputPath(), "");
>
> The function getOutputPath() generates a new path every time it is called, depending on the current system time.
> However, the output path prefix remains the same for all batches, which effectively means the function is not called again for the next batch of the stream, although the files are being saved after each checkpoint interval.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
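A short sketch of the behavior behind the resolution. The prefix argument to saveAsTextFiles is evaluated once, when the DStream graph is built, so getOutputPath() is never re-invoked per batch; with foreachRDD the path can be recomputed inside the per-batch callback. The class name, bucket path, and the outputPathFor helper below are hypothetical stand-ins for the reporter's getOutputPath(); the Spark wiring is shown only in comments, since this fragment is plain Java:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class PerBatchOutputPath {
    // Hypothetical stand-in for the reporter's getOutputPath(): derives the
    // path from a batch timestamp rather than the driver's wall clock.
    static final DateTimeFormatter FMT =
        DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss").withZone(ZoneOffset.UTC);

    static String outputPathFor(long batchTimeMs) {
        return "s3://my-bucket/output/" + FMT.format(Instant.ofEpochMilli(batchTimeMs));
    }

    public static void main(String[] args) {
        // saveAsTextFiles evaluates its prefix argument ONCE, at graph
        // construction time -- every batch then reuses this frozen value:
        String fixedPrefix = outputPathFor(System.currentTimeMillis());

        // foreachRDD runs its body once per batch, so the path is recomputed
        // each time. In a real streaming job this would look roughly like
        // (Spark API sketch, not compiled here):
        //
        //   stream.dstream().repartition(1).foreachRDD((rdd, time) ->
        //       rdd.saveAsTextFile(outputPathFor(time.milliseconds())));

        for (long t : new long[]{0L, 300_000L, 600_000L}) {
            System.out.println("batch at " + t + "ms -> " + outputPathFor(t));
        }
        System.out.println("frozen prefix stays: " + fixedPrefix);
    }
}
```

The loop stands in for successive batches: each simulated batch time yields a distinct path, while the value captured up front never changes, which is exactly the difference between passing a prefix to saveAsTextFiles and computing the path inside foreachRDD.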