cchighman commented on a change in pull request #28841:
URL: https://github.com/apache/spark/pull/28841#discussion_r450583791



##########
File path: docs/sql-data-sources-generic-options.md
##########
@@ -119,3 +119,31 @@ To load all files recursively, you can use:
 {% include_example recursive_file_lookup r/RSparkSQLExample.R %}
 </div>
 </div>
+
+### Modification Date Filter
+
+`modifiedDateFilter` is an option used to only load files after a specified modification

Review comment:
       @gengliangwang 
   This is out of date after recent comments from @HeartSaVioR. I updated the PR commit and title above based on his feedback. I'm currently working on these additions, as follows:
   
   **Example Usages**
   _Load all CSV files modified after a date:_
   
   `spark.read.format("csv").option("modifiedAfter", "2020-06-15T05:00:00").load()`
   
   _Load all CSV files modified before a date:_
   
   `spark.read.format("csv").option("modifiedBefore", "2020-06-15T05:00:00").load()`
   
   _Load all CSV files modified between two dates:_
   
   `spark.read.format("csv").option("modifiedAfter", "2019-01-15T05:00:00").option("modifiedBefore", "2020-06-15T05:00:00").load()`
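   
   Conceptually, both options prune the candidate file listing by filesystem modification time before any data is read. A minimal plain-Python sketch of that filtering, with no Spark involved (`filter_by_mtime` is a hypothetical helper for illustration, not a Spark API):

```python
import os
import tempfile
from datetime import datetime

def filter_by_mtime(paths, modified_after=None, modified_before=None):
    """Keep only paths whose modification time falls inside the given bounds,
    mirroring the effect of the modifiedAfter/modifiedBefore read options."""
    kept = []
    for p in paths:
        mtime = datetime.fromtimestamp(os.path.getmtime(p))
        if modified_after is not None and mtime <= modified_after:
            continue  # not strictly newer than the lower bound -> skip
        if modified_before is not None and mtime >= modified_before:
            continue  # not strictly older than the upper bound -> skip
        kept.append(p)
    return kept

# Demo with temporary files whose mtimes are set explicitly.
with tempfile.TemporaryDirectory() as d:
    old = os.path.join(d, "old.csv")
    new = os.path.join(d, "new.csv")
    for p in (old, new):
        open(p, "w").close()
    os.utime(old, (0, datetime(2019, 1, 1).timestamp()))
    os.utime(new, (0, datetime(2021, 1, 1).timestamp()))
    # Keep only files modified after 2020-06-15T05:00:00.
    after = filter_by_mtime([old, new], modified_after=datetime(2020, 6, 15, 5))
    print([os.path.basename(p) for p in after])  # -> ['new.csv']
```

   Note the timestamps in the Spark examples above parse as ISO-8601 date-times; in this sketch `datetime.fromisoformat("2020-06-15T05:00:00")` would produce the same bound.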





