[ 
https://issues.apache.org/jira/browse/SPARK-22366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon updated SPARK-22366:
---------------------------------
    Description: 
There's an existing flag "spark.sql.files.ignoreCorruptFiles" that quietly 
ignores reads from files that have been corrupted, but queries still fail 
when a file is missing. Being able to ignore missing files too is useful in 
some replication scenarios.

We should add a "spark.sql.files.ignoreMissingFiles" flag to fill out the 
functionality.
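
A minimal sketch of how the two flags would be used together once this lands 
(the app name and input path below are hypothetical placeholders):

{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("ignore-missing-files-demo") // hypothetical app name
  .getOrCreate()

// Existing behavior: silently skip files whose contents are corrupted.
spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")

// Proposed flag: also skip files that are missing at read time,
// e.g. removed by a replication process after they were listed.
spark.conf.set("spark.sql.files.ignoreMissingFiles", "true")

// With both flags set, the query returns rows from the files that are
// still present and readable instead of failing outright.
// "/data/replicated/events" is a made-up example path.
val df = spark.read.parquet("/data/replicated/events")
df.count()
{code}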

> Support ignoreMissingFiles flag parallel to ignoreCorruptFiles
> --------------------------------------------------------------
>
>                 Key: SPARK-22366
>                 URL: https://issues.apache.org/jira/browse/SPARK-22366
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Jose Torres
>            Assignee: Jose Torres
>            Priority: Minor
>             Fix For: 2.3.0
>
>
> There's an existing flag "spark.sql.files.ignoreCorruptFiles" that quietly 
> ignores reads from files that have been corrupted, but queries still fail 
> when a file is missing. Being able to ignore missing files too is useful in 
> some replication scenarios.
> We should add a "spark.sql.files.ignoreMissingFiles" flag to fill out the 
> functionality.


