Any suggestions?
On 25 May 2016 17:25, "vaibhav srivastava" <vaibhavcs...@gmail.com> wrote:

> Hi,
> I am using Spark 1.2.1. I get this error when I try to read a Parquet file
> using sqlContext.parquetFile("path to file"). The Parquet file uses
> ParquetHiveSerDe, and its input format is MapredParquetInputFormat.
>
> Thanks
> Vaibhav.
> On 25 May 2016 17:03, "Takeshi Yamamuro" <linguin....@gmail.com> wrote:
>
>> Hi,
>>
>> You need to give more detail so that others can understand the problem:
>> which version of Spark are you using, and what query are you running?
>>
>> // maropu
>>
>>
>> On Wed, May 25, 2016 at 8:27 PM, vaibhav srivastava <
>> vaibhavcs...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> I am getting the stack trace below while reading data from a Parquet file:
>>>
>>> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 7
>>>         at parquet.bytes.BytesUtils.bytesToLong(BytesUtils.java:247)
>>>         at parquet.column.statistics.LongStatistics.setMinMaxFromBytes(LongStatistics.java:47)
>>>         at parquet.format.converter.ParquetMetadataConverter.fromParquetStatistics(ParquetMetadataConverter.java:249)
>>>         at parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:543)
>>>         at parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:520)
>>>         at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:426)
>>>         at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:389)
>>>         at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$readMetaData$3.apply(ParquetTypes.scala:457)
>>>         at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$readMetaData$3.apply(ParquetTypes.scala:457)
>>>         at scala.Option.map(Option.scala:145)
>>>         at org.apache.spark.sql.parquet.ParquetTypesConverter$.readMetaData(ParquetTypes.scala:457)
>>>         at org.apache.spark.sql.parquet.ParquetTypesConverter$.readSchemaFromFile(ParquetTypes.scala:477)
>>>         at org.apache.spark.sql.parquet.ParquetRelation.<init>(ParquetRelation.scala:65)
>>>         at org.apache.spark.sql.SQLContext.parquetFile(SQLContext.scala:165)
>>>
>>> Please suggest a fix. It looks like it is not able to convert some of the data.
>>>
>>
>>
>>
>> --
>> ---
>> Takeshi Yamamuro
>>
>
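
The top frame of that trace, parquet.bytes.BytesUtils.bytesToLong, decodes an 8-byte little-endian long from a column's min/max statistics in the file footer. An ArrayIndexOutOfBoundsException at index 7 suggests the statistics byte array holds fewer than 8 bytes, e.g. because the footer statistics were written for a narrower type than the LongStatistics the reader builds from the inferred schema. A minimal sketch of that decoding step (a simplified, hypothetical re-implementation for illustration, not the actual parquet-mr source):

```java
public class BytesToLongDemo {

    // Assembles a long from a little-endian byte array, reading exactly
    // 8 bytes. A shorter array (e.g. statistics written for a 4-byte
    // INT32 column) fails with ArrayIndexOutOfBoundsException at index 7.
    static long bytesToLong(byte[] bytes) {
        long value = 0;
        for (int i = 7; i >= 0; i--) {
            value = (value << 8) | (bytes[i] & 0xFFL);
        }
        return value;
    }

    public static void main(String[] args) {
        byte[] eightBytes = {1, 0, 0, 0, 0, 0, 0, 0};
        System.out.println(bytesToLong(eightBytes)); // prints 1

        byte[] fourBytes = {1, 0, 0, 0}; // int32-sized statistics
        try {
            bytesToLong(fourBytes);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("too few bytes for a long");
        }
    }
}
```

If that is what is happening here, the footer statistics disagree with the long type Spark infers for the column, which points at a metadata/schema mismatch in the file rather than corruption of the data pages themselves.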
