[jira] [Updated] (SPARK-29621) Querying internal corrupt record column should not be allowed in filter operation

2019-10-29 Thread Hyukjin Kwon (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-29621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-29621:
-
Description: 
As per 
*https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala#L119-L126*,
_"Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the 
referenced columns only include the internal corrupt record column"_

However, such a query is still allowed when only the internal corrupt record 
column is referenced in a *filter* operation.

{code}
from pyspark.sql.types import *

schema = StructType([
    StructField("_corrupt_record", StringType(), False),
    StructField("Name", StringType(), False),
    StructField("Colour", StringType(), True),
    StructField("Price", IntegerType(), True),
    StructField("Quantity", IntegerType(), True)])
df = spark.read.csv("fruit.csv", schema=schema, mode="PERMISSIVE")
df.filter(df._corrupt_record.isNotNull()).show()  # Allowed
{code}
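
For comparison, here is a minimal sketch of the asymmetry (it reuses the illustrative fruit.csv file, schema, and interactive spark session from above): projecting only the corrupt record column is rejected with an AnalysisException, while the same column used alone in a *filter* goes through. Caching the parsed results first, as the exception message itself suggests, lifts the restriction.

{code}
# Sketch only: fruit.csv, the schema, and the spark session are the
# illustrative ones from the report above.
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
from pyspark.sql.utils import AnalysisException

schema = StructType([
    StructField("_corrupt_record", StringType(), True),
    StructField("Name", StringType(), True),
    StructField("Colour", StringType(), True),
    StructField("Price", IntegerType(), True),
    StructField("Quantity", IntegerType(), True)])

df = spark.read.csv("fruit.csv", schema=schema, mode="PERMISSIVE")

try:
    # Referencing only the corrupt record column in a projection is rejected
    # by the Spark 2.3+ check quoted above.
    df.select("_corrupt_record").show()
except AnalysisException as e:
    print("select rejected:", e)

# ...but the same column used alone in a filter is currently allowed,
# which is the inconsistency this issue reports.
df.filter(df._corrupt_record.isNotNull()).show()

# Workaround mentioned in the AnalysisException text: cache (or save) the
# parsed results, then the corrupt record column can be queried directly.
df.cache()
df.select("_corrupt_record").show()
{code}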


  was:
As per 
*https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala#L119-L126*,
_"Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the 
referenced columns only include the internal corrupt record column"_

However, such a query is still allowed when only the internal corrupt record 
column is referenced in a *filter* operation.

{code}
from pyspark.sql.types import *

schema = StructType([
StructField("_corrupt_record",StringType(),False),
StructField("Name",StringType(),False),
StructField("Colour",StringType(),True),
StructField("Price",IntegerType(),True),
StructField("Quantity",IntegerType(),True)])
df = spark.read.csv("fruit.csv",schema=schema,mode="PERMISSIVE")
df.filter(df._corrupt_record.isNotNull()).show()   # Allowed
{code}



> Querying internal corrupt record column should not be allowed in filter 
> operation
> -
>
> Key: SPARK-29621
> URL: https://issues.apache.org/jira/browse/SPARK-29621
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 2.3.0
>Reporter: Suchintak Patnaik
>Priority: Major
>  Labels: PySpark, SparkSQL
>
> As per 
> *https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala#L119-L126*,
> _"Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when 
> the referenced columns only include the internal corrupt record column"_
> However, such a query is still allowed when only the internal corrupt record 
> column is referenced in a *filter* operation.
> {code}
> from pyspark.sql.types import *
> schema = StructType([
> StructField("_corrupt_record", StringType(), False),
> StructField("Name", StringType(), False),
> StructField("Colour", StringType(), True),
> StructField("Price", IntegerType(), True),
> StructField("Quantity", IntegerType(), True)])
> df = spark.read.csv("fruit.csv", schema=schema, mode="PERMISSIVE")
> df.filter(df._corrupt_record.isNotNull()).show()  # Allowed
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-29621) Querying internal corrupt record column should not be allowed in filter operation

2019-10-29 Thread Hyukjin Kwon (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-29621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-29621:
-
Description: 
As per 
*https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala#L119-L126*,
_"Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the 
referenced columns only include the internal corrupt record column"_

However, such a query is still allowed when only the internal corrupt record 
column is referenced in a *filter* operation.

{code}
from pyspark.sql.types import *

schema = StructType([
StructField("_corrupt_record",StringType(),False),
StructField("Name",StringType(),False),
StructField("Colour",StringType(),True),
StructField("Price",IntegerType(),True),
StructField("Quantity",IntegerType(),True)])
df = spark.read.csv("fruit.csv",schema=schema,mode="PERMISSIVE")
df.filter(df._corrupt_record.isNotNull()).show()   # Allowed
{code}


  was:
As per 
*https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala#L119-L126*,
_"Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the 
referenced columns only include the internal corrupt record column"_

However, such a query is still allowed when only the internal corrupt record 
column is referenced in a *filter* operation.

{code}
from pyspark.sql.types import *

schema = StructType([
    StructField("_corrupt_record", StringType(), False),
    StructField("Name", StringType(), False),
    StructField("Colour", StringType(), True),
    StructField("Price", IntegerType(), True),
    StructField("Quantity", IntegerType(), True)])

df = spark.read.csv("fruit.csv", schema=schema, mode="PERMISSIVE")

df.filter(df._corrupt_record.isNotNull()).show()   # Allowed
{code}


> Querying internal corrupt record column should not be allowed in filter 
> operation
> -
>
> Key: SPARK-29621
> URL: https://issues.apache.org/jira/browse/SPARK-29621
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 2.3.0
>Reporter: Suchintak Patnaik
>Priority: Major
>  Labels: PySpark, SparkSQL
>
> As per 
> *https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala#L119-L126*,
> _"Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when 
> the referenced columns only include the internal corrupt record column"_
> However, such a query is still allowed when only the internal corrupt record 
> column is referenced in a *filter* operation.
> {code}
> from pyspark.sql.types import *
> schema = StructType([
> StructField("_corrupt_record",StringType(),False),
> StructField("Name",StringType(),False),
> StructField("Colour",StringType(),True),
> StructField("Price",IntegerType(),True),
> StructField("Quantity",IntegerType(),True)])
> df = spark.read.csv("fruit.csv",schema=schema,mode="PERMISSIVE")
> df.filter(df._corrupt_record.isNotNull()).show()   # Allowed
> {code}






[jira] [Updated] (SPARK-29621) Querying internal corrupt record column should not be allowed in filter operation

2019-10-28 Thread Suchintak Patnaik (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-29621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suchintak Patnaik updated SPARK-29621:
--
Labels: PySpark SparkSQL  (was: )

> Querying internal corrupt record column should not be allowed in filter 
> operation
> -
>
> Key: SPARK-29621
> URL: https://issues.apache.org/jira/browse/SPARK-29621
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 2.3.0
>Reporter: Suchintak Patnaik
>Priority: Major
>  Labels: PySpark, SparkSQL
>
> As per 
> *https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala#L119-L126*,
> _"Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when 
> the referenced columns only include the internal corrupt record column"_
> However, such a query is still allowed when only the internal corrupt record 
> column is referenced in a *filter* operation.
> {code}
> from pyspark.sql.types import *
> schema = StructType([
>     StructField("_corrupt_record", StringType(), False),
>     StructField("Name", StringType(), False),
>     StructField("Colour", StringType(), True),
>     StructField("Price", IntegerType(), True),
>     StructField("Quantity", IntegerType(), True)])
> df = spark.read.csv("fruit.csv", schema=schema, mode="PERMISSIVE")
> df.filter(df._corrupt_record.isNotNull()).show()   # Allowed
> {code}


