[ 
https://issues.apache.org/jira/browse/SPARK-26631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sathish updated SPARK-26631:
----------------------------
    Description: 
While reading a Parquet file from a Hadoop Archive (.har) file, Spark fails with the exception below:

 
{code:java}
scala> val hardf = sqlContext.read.parquet("har:///tmp/testarchive.har/userdata1.parquet")
org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$9.apply(DataSource.scala:208)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$9.apply(DataSource.scala:208)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:207)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:393)
  at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
  at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:622)
  at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:606)
  ... 49 elided
{code}
 

The same Parquet file can be read directly from HDFS without any issues:
{code:java}
scala> val df = sqlContext.read.parquet("hdfs:///tmp/testparquet/userdata1.parquet")

df: org.apache.spark.sql.DataFrame = [registration_dttm: timestamp, id: int ... 11 more fields]
{code}
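Since the same file reads fine over hdfs://, one way to narrow the problem down is to check whether Hadoop's HarFileSystem itself can resolve the file from inside the spark shell. This is only a diagnostic sketch (it assumes the archive created in the steps below exists); if it prints true, the archive layer is working and the failure is in Spark's file listing / schema inference:

```scala
// Diagnostic sketch: ask Hadoop directly whether the har:// path resolves.
import org.apache.hadoop.fs.{FileSystem, Path}

val harPath = new Path("har:///tmp/testarchive.har/userdata1.parquet")
val fs = harPath.getFileSystem(sc.hadoopConfiguration)
println(fs.exists(harPath))                // true if HarFileSystem sees the file
println(fs.getFileStatus(harPath).getLen)  // file length as seen through the archive
```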
 

+Here are the steps to reproduce the issue+

 

a) hadoop fs -mkdir /tmp/testparquet

b) Get sample parquet data and rename the file to userdata1.parquet

wget [https://github.com/Teradata/kylo/blob/master/samples/sample-data/parquet/userdata1.parquet?raw=true]

c) hadoop fs -put userdata1.parquet /tmp/testparquet

d) hadoop archive -archiveName testarchive.har -p /tmp/testparquet /tmp

e) Verify the file is visible inside the har archive

hadoop fs -ls har:///tmp/testarchive.har

f) Launch spark2 / spark shell

g)
{code:java}
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val df = sqlContext.read.parquet("har:///tmp/testarchive.har/userdata1.parquet")
{code}
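The exception message suggests specifying the schema manually. As a possible workaround (a sketch only; I have not confirmed that it sidesteps the har:// listing problem), the schema reported by the successful hdfs:// read can be supplied explicitly:

```scala
// Workaround sketch: supply the schema instead of relying on inference.
// The field list is abbreviated; the remaining fields are the ones shown
// by the successful hdfs:// read above.
import org.apache.spark.sql.types._

val schema = StructType(Seq(
  StructField("registration_dttm", TimestampType),
  StructField("id", IntegerType)
  // ... the remaining 11 fields from the hdfs:// DataFrame
))
val hardf = sqlContext.read.schema(schema).parquet("har:///tmp/testarchive.har/userdata1.parquet")
```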

Is there anything I am missing here?

 



> Issue while reading Parquet data from Hadoop Archive files (.har)
> -----------------------------------------------------------------
>
>                 Key: SPARK-26631
>                 URL: https://issues.apache.org/jira/browse/SPARK-26631
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.2.0
>            Reporter: Sathish
>            Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
