[ https://issues.apache.org/jira/browse/SPARK-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jakob Odersky updated SPARK-13581:
--
Description:
When running an action on a DataFrame obtained by reading a libsvm file, a
MatchError is thrown; however, running the same action on a cached DataFrame works fine.
{code}
val df = sqlContext.read.format("libsvm").load("../data/mllib/sample_libsvm_data.txt") // file is in the Spark repository
df.select(df("features")).show() // MatchError
df.cache()
df.select(df("features")).show() // OK
{code}
The exception stack trace is the following:
{code}
scala.MatchError: 1.0 (of class java.lang.Double)
[info] at org.apache.spark.mllib.linalg.VectorUDT.serialize(Vectors.scala:207)
[info] at org.apache.spark.mllib.linalg.VectorUDT.serialize(Vectors.scala:192)
[info] at org.apache.spark.sql.catalyst.CatalystTypeConverters$UDTConverter.toCatalystImpl(CatalystTypeConverters.scala:142)
[info] at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
[info] at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
[info] at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:59)
[info] at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:56)
{code}
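For context, the trace looks like {{VectorUDT.serialize}} being handed a plain {{Double}} where it expects a vector type, so none of its match cases apply. A minimal, self-contained sketch of that failure mode (hypothetical, simplified types, not the actual MLlib implementation) is:
{code}
// Hypothetical, simplified sketch of the failure mode seen in the trace above:
// a serializer that pattern-matches only on vector types throws a
// scala.MatchError when it is handed a raw Double instead of a Vector.
object MatchErrorSketch {
  sealed trait Vector
  case class DenseVector(values: Array[Double]) extends Vector
  case class SparseVector(size: Int, indices: Array[Int], values: Array[Double]) extends Vector

  def serialize(obj: Any): String = obj match {
    case SparseVector(size, _, _) => s"sparse vector of size $size"
    case DenseVector(values)      => s"dense vector of size ${values.length}"
    // no case for Double, so a scalar value falls through
  }

  def main(args: Array[String]): Unit = {
    println(serialize(DenseVector(Array(1.0, 0.0)))) // fine
    println(serialize(1.0)) // scala.MatchError: 1.0 (of class java.lang.Double)
  }
}
{code}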
This issue first appeared in commit {{1dac964c1}}, in PR
[#9595|https://github.com/apache/spark/pull/9595] fixing SPARK-11622.
[~jeffzhang], do you have any insight into what could be going on?
cc [~iyounus]
was:
When running an action on a DataFrame obtained by reading a libsvm file, a
MatchError is thrown; however, running the same action on a cached DataFrame works fine.
{code}
val df = sqlContext.read.format("libsvm").load("../data/mllib/sample_libsvm_data.txt")
//file is
df.select(df("features")).show() //MatchError
df.cache()
df.select(df("features")).show() //OK
{code}
The exception stack trace is the following:
{code}
scala.MatchError: 1.0 (of class java.lang.Double)
[info] at org.apache.spark.mllib.linalg.VectorUDT.serialize(Vectors.scala:207)
[info] at org.apache.spark.mllib.linalg.VectorUDT.serialize(Vectors.scala:192)
[info] at org.apache.spark.sql.catalyst.CatalystTypeConverters$UDTConverter.toCatalystImpl(CatalystTypeConverters.scala:142)
[info] at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
[info] at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
[info] at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:59)
[info] at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:56)
{code}
This issue first appeared in commit {{1dac964c1}}, in PR
[#9595|https://github.com/apache/spark/pull/9595] fixing SPARK-11622.
[~jeffzhang], do you have any insight into what could be going on?
cc [~iyounus]
> LibSVM throws MatchError
>
>
> Key: SPARK-13581
> URL: https://issues.apache.org/jira/browse/SPARK-13581
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.0.0
> Reporter: Jakob Odersky
> Assignee: Jeff Zhang
> Priority: Minor
>
> When running an action on a DataFrame obtained by reading a libsvm file,
> a MatchError is thrown; however, running the same action on a cached DataFrame
> works fine.
> {code}
> val df = sqlContext.read.format("libsvm").load("../data/mllib/sample_libsvm_data.txt") // file is in the Spark repository
> df.select(df("features")).show() // MatchError
> df.cache()
> df.select(df("features")).show() // OK
> {code}
> The exception stack trace is the following:
> {code}
> scala.MatchError: 1.0 (of class java.lang.Double)
> [info] at org.apache.spark.mllib.linalg.VectorUDT.serialize(Vectors.scala:207)
> [info] at org.apache.spark.mllib.linalg.VectorUDT.serialize(Vectors.scala:192)
> [info] at org.apache.spark.sql.catalyst.CatalystTypeConverters$UDTConverter.toCatalystImpl(CatalystTypeConverters.scala:142)
> [info] at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
> [info] at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
> [info] at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:59)
> [info] at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:56)
> {code}