This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
     new 2fa5c3b9635  [MINOR][DOCS] Rename Global to Glob
2fa5c3b9635 is described below

commit 2fa5c3b9635ebf9e3470073e27b76a3037ccc3c5
Author: Khaled Hammouda <khal...@gmail.com>
AuthorDate: Sun Aug 14 14:03:42 2022 -0500

    [MINOR][DOCS] Rename Global to Glob

    This section is about the path **glob** filter. The word **global** seems to be a mistake.

    ### What changes were proposed in this pull request?
    Just a typo fix.

    ### Why are the changes needed?
    To be less confusing in what the section is about.

    ### Does this PR introduce _any_ user-facing change?
    Apart from docs, no.

    ### How was this patch tested?
    No testing needed.

    Closes #37484 from khaledh/patch-1.

    Authored-by: Khaled Hammouda <khal...@gmail.com>
    Signed-off-by: Sean Owen <sro...@gmail.com>
---
 docs/sql-data-sources-generic-options.md | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/docs/sql-data-sources-generic-options.md b/docs/sql-data-sources-generic-options.md
index 7835371ec43..49896eba25f 100644
--- a/docs/sql-data-sources-generic-options.md
+++ b/docs/sql-data-sources-generic-options.md
@@ -69,11 +69,10 @@ from files. Here, missing file really means the deleted file under directory aft
 `DataFrame`. When set to true, the Spark jobs will continue to run when encountering missing files
 and the contents that have been read will still be returned.
 
-### Path Global Filter
+### Path Glob Filter
 
-`pathGlobFilter` is used to only include files with file names matching the pattern.
-The syntax follows <code>org.apache.hadoop.fs.GlobFilter</code>.
-It does not change the behavior of partition discovery.
+`pathGlobFilter` is used to only include files with file names matching the pattern. The syntax follows
+<code>org.apache.hadoop.fs.GlobFilter</code>. It does not change the behavior of partition discovery.
 
 To load files with paths matching a given glob pattern while keeping the behavior
 of partition discovery, you can use:
@@ -155,4 +154,4 @@ To load files with paths matching a given modified time range, you can use:
 <div data-lang="r" markdown="1">
   {% include_example load_with_modified_time_filter r/RSparkSQLExample.R %}
 </div>
-</div>
\ No newline at end of file
+</div>

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
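For context on the option this doc change renames: `pathGlobFilter` keeps only input files whose names match a glob pattern. As a rough, Spark-free sketch of those glob semantics (the file names and the `glob_filter` helper below are illustrative, not part of the patch), Python's `fnmatch` can mimic which files a pattern such as `*.parquet` would admit:

```python
from fnmatch import fnmatch

def glob_filter(file_names, pattern):
    """Keep only file names matching the glob pattern, loosely mimicking
    what pathGlobFilter / org.apache.hadoop.fs.GlobFilter does for a listing.
    (Illustrative helper; not Spark's actual implementation.)"""
    return [name for name in file_names if fnmatch(name, pattern)]

# Hypothetical directory listing: mixed Parquet output, marker, and JSON files.
files = ["part-0000.parquet", "part-0001.parquet", "_SUCCESS", "data.json"]
print(glob_filter(files, "*.parquet"))  # → ['part-0000.parquet', 'part-0001.parquet']
```

In Spark itself the documented usage is along the lines of `spark.read.load(path, format="parquet", pathGlobFilter="*.parquet")` (path hypothetical); as the patched section notes, the filter does not change partition discovery.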