[ https://issues.apache.org/jira/browse/SPARK-28338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jayadevan M updated SPARK-28338:
--------------------------------
    Description: 
The csv input file

+cat sample.csv+ 
 Name,Lastname,Age
 abc,,32
 pqr,xxx,30

 

+spark-shell+

spark.read.format("csv").option("header", 
"true").load("/media/ub_share/projects/*.csv").head(3)
 res14: Array[org.apache.spark.sql.Row] = Array([abc,null,32], [pqr,xxx,30])

 

scala> spark.read.format("csv").option("header", "true").option("nullValue", "?").load("/media/ub_share/projects/*.csv").head(3)
 res15: Array[org.apache.spark.sql.Row] = Array([abc,null,32], [pqr,xxx,30])

 

The empty string gets converted to null. It works as expected (the empty string is preserved) if the CSV file has quotes around the column values.
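
For comparison, a minimal sketch of the quoted-file case and a possible post-read workaround. This is not part of the original report: the sample_quoted.csv path is hypothetical, and na.fill is only a suggested workaround, not a fix for the parser behaviour.

// Hypothetical file sample_quoted.csv, same rows but with the empty field quoted:
//   Name,Lastname,Age
//   abc,"",32
//   pqr,xxx,30
// Reading it keeps the empty string instead of producing null (per the report above).
val quoted = spark.read.format("csv")
  .option("header", "true")
  .load("/media/ub_share/projects/sample_quoted.csv")
quoted.head(3)

// Possible workaround for the unquoted file: turn the nulls back into "" after reading.
val unquoted = spark.read.format("csv")
  .option("header", "true")
  .load("/media/ub_share/projects/*.csv")
unquoted.na.fill("", Seq("Lastname")).head(3)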

  was:
The csv input file

cat sample.csv 
Name,Lastname,Age
abc,,32
pqr,xxx,30

 

spark-shell

spark.read.format("csv").option("header", 
"true").load("/media/ub_share/projects/*.csv").head(3)
res14: Array[org.apache.spark.sql.Row] = Array([abc,null,32], [pqr,xxx,30])

 

scala> spark.read.format("csv").option("header", "true").option("nullValue", "?").load("/media/ub_share/projects/*.csv").head(3)
res15: Array[org.apache.spark.sql.Row] = Array([abc,null,32], [pqr,xxx,30])


> spark.read.format("csv") treats empty strings as null if the csv file doesn't have quotes in the data
> -----------------------------------------------------------------------------------------
>
>                 Key: SPARK-28338
>                 URL: https://issues.apache.org/jira/browse/SPARK-28338
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.3
>            Reporter: Jayadevan M
>            Priority: Major
>
> The csv input file
> +cat sample.csv+ 
>  Name,Lastname,Age
>  abc,,32
>  pqr,xxx,30
>  
> +spark-shell+
> scala> spark.read.format("csv").option("header", "true").load("/media/ub_share/projects/*.csv").head(3)
>  res14: Array[org.apache.spark.sql.Row] = Array([abc,null,32], [pqr,xxx,30])
>  
> scala> spark.read.format("csv").option("header", "true").option("nullValue", "?").load("/media/ub_share/projects/*.csv").head(3)
>  res15: Array[org.apache.spark.sql.Row] = Array([abc,null,32], [pqr,xxx,30])
>  
> The empty string gets converted to null. It works as expected (the empty string is preserved) if the CSV file has quotes around the column values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
