[jira] [Assigned] (SPARK-33894) Word2VecSuite failed for Scala 2.13

2021-01-04 Thread Dongjoon Hyun (Jira)


[ https://issues.apache.org/jira/browse/SPARK-33894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun reassigned SPARK-33894:
-

Assignee: koert kuipers  (was: Darcy Shen)

> Word2VecSuite failed for Scala 2.13
> ---
>
> Key: SPARK-33894
> URL: https://issues.apache.org/jira/browse/SPARK-33894
> Project: Spark
>  Issue Type: Sub-task
>  Components: MLlib
>Affects Versions: 3.2.0
>Reporter: Darcy Shen
>Assignee: koert kuipers
>Priority: Major
> Fix For: 3.1.0
>
>
> This may be the first failed build:
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-hadoop-2.7-scala-2.13/52/
> h2. Possible Workaround
> Move
> case class Data(word: String, vector: Array[Float])
> out of the class Word2VecModel, so the definition is no longer nested inside the model.
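> A minimal sketch of the shape of that change, in plain Scala (illustrative only, not the actual Word2Vec.scala source; Word2VecModelAfter and rows are made-up names):
> // Before (roughly the reported shape): Data is a member of the model class, so
> // every Data instance carries a hidden reference to its enclosing Word2VecModel.
> class Word2VecModel {
>   case class Data(word: String, vector: Array[Float])
> }
> // After (the proposed workaround): Data is a top-level definition with no tie to
> // any enclosing instance; the model code keeps constructing Data the same way.
> case class Data(word: String, vector: Array[Float])
> class Word2VecModelAfter {
>   def rows: Seq[Data] = Seq(Data("hello", Array(0.1f, 0.2f)))
> }
> In the real writer the rows would presumably be built exactly as before; only the location of the definition changes.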
> h2. Attempts to git bisect
> Manual "bisect" on the master branch; all of the commits below fail:
> cc23581e2645c91fa8d6e6c81dc87b4221718bb1 fail
> 3d0323401f7a3e4369a3d3f4ff98f15d19e8a643 fail
> 9d9d4a8e122cf1137edeca857e925f7e76c1ace2 fail
> f5d2165c95fe83f24be9841807613950c1d5d6d0 fail (2020-12-01)
> h2. Attached Stack Trace
> To reproduce it on master:
> ./dev/change-scala-version.sh 2.13
> sbt -Pscala-2.13
> Then, in the sbt shell:
> > project mllib
> > testOnly org.apache.spark.ml.feature.Word2VecSuite
> [info] Word2VecSuite:
> [info] - params (45 milliseconds)
> [info] - Word2Vec (5 seconds, 768 milliseconds)
> [info] - getVectors (549 milliseconds)
> [info] - findSynonyms (222 milliseconds)
> [info] - window size (382 milliseconds)
> [info] - Word2Vec read/write numPartitions calculation (1 millisecond)
> [info] - Word2Vec read/write (669 milliseconds)
> [info] - Word2VecModel read/write *** FAILED *** (423 milliseconds)
> [info]   org.apache.spark.SparkException: Job aborted.
> [info]   at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:231)
> [info]   at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:188)
> [info]   at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
> [info]   at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
> [info]   at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
> [info]   at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
> [info]   at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
> [info]   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> [info]   at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
> [info]   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
> [info]   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:132)
> [info]   at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:131)
> [info]   at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
> [info]   at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
> [info]   at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
> [info]   at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
> [info]   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
> [info]   at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
> [info]   at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
> [info]   at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
> [info]   at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415)
> [info]   at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293)
> [info]   at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:874)
> [info]   at org.apache.spark.ml.feature.Word2VecModel$Word2VecModelWriter.saveImpl(Word2Vec.scala:368)
> [info]   at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
> [info]   at org.apache.spark.ml.util.MLWritable.save(ReadWrite.scala:287)
> [info]   at org.apache.spark.ml.util.MLWritable.save$(ReadWrite.scala:287)
> [info]   at org.apache.spark.ml.feature.Word2VecModel.save(Word2Vec.scala:207)
> [info]   at org.apache.spark.ml.util.DefaultReadWriteTest.testDefaultReadWrite(DefaultReadWriteTest.scala:51)
> [info]   at org.apache.spark.ml.util.DefaultReadWriteTest.testDefaultReadWrite$(DefaultReadWriteTest.scala:42)
> [info]   at org.apache.spark.ml.feature.Word2VecSuite.testDefaultReadWrite(Word2VecSuite.scala:28)
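> For context, the self-contained sketch below (my own illustration, not Spark source; Holder, NestedData, TopLevelData and OuterRefDemo are made-up names) shows why a case class nested inside another class is a serialization hazard: each instance carries a hidden $outer reference to the enclosing object. Whether that is the exact mechanism behind the aborted job above is not established by this trace, but it is the kind of coupling the proposed "move Data out of Word2VecModel" workaround removes.
> import java.io.{ByteArrayOutputStream, ObjectOutputStream}
>
> // Top-level case class: no outer reference, serializes on its own.
> case class TopLevelData(word: String, vector: Array[Float])
>
> class Holder { // not Serializable, standing in for an enclosing model class
>   // Member case class: instances keep an $outer pointer back to this Holder.
>   case class NestedData(word: String, vector: Array[Float])
>   def sample: NestedData = NestedData("spark", Array(1.0f))
> }
>
> object OuterRefDemo {
>   private def roundTrip(value: AnyRef): Unit = {
>     val out = new ObjectOutputStream(new ByteArrayOutputStream())
>     out.writeObject(value) // throws if anything reachable is not Serializable
>     out.close()
>   }
>   def main(args: Array[String]): Unit = {
>     roundTrip(TopLevelData("spark", Array(1.0f)))  // succeeds
>     try roundTrip(new Holder().sample)             // NotSerializableException: Holder
>     catch { case e: java.io.NotSerializableException => println(e) }
>   }
> }
> Under this reading, lifting Data to the top level removes the hidden outer field entirely, which would make the writer behave the same under Scala 2.12 and 2.13.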

[jira] [Assigned] (SPARK-33894) Word2VecSuite failed for Scala 2.13

2021-01-04 Thread Dongjoon Hyun (Jira)


[ https://issues.apache.org/jira/browse/SPARK-33894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun reassigned SPARK-33894:
-

Assignee: Darcy Shen

> Word2VecSuite failed for Scala 2.13
> ---
>
> Key: SPARK-33894
> URL: https://issues.apache.org/jira/browse/SPARK-33894
> Project: Spark
>  Issue Type: Sub-task
>  Components: MLlib
>Affects Versions: 3.2.0
>Reporter: Darcy Shen
>Assignee: Darcy Shen
>Priority: Major
>

[jira] [Assigned] (SPARK-33894) Word2VecSuite failed for Scala 2.13

2021-01-04 Thread Apache Spark (Jira)


[ https://issues.apache.org/jira/browse/SPARK-33894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-33894:


Assignee: (was: Apache Spark)

> Word2VecSuite failed for Scala 2.13
> ---
>
> Key: SPARK-33894
> URL: https://issues.apache.org/jira/browse/SPARK-33894
> Project: Spark
>  Issue Type: Sub-task
>  Components: MLlib
>Affects Versions: 3.2.0
>Reporter: Darcy Shen
>Priority: Major
>

[jira] [Assigned] (SPARK-33894) Word2VecSuite failed for Scala 2.13

2021-01-04 Thread Apache Spark (Jira)


[ https://issues.apache.org/jira/browse/SPARK-33894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-33894:


Assignee: Apache Spark

> Word2VecSuite failed for Scala 2.13
> ---
>
> Key: SPARK-33894
> URL: https://issues.apache.org/jira/browse/SPARK-33894
> Project: Spark
>  Issue Type: Sub-task
>  Components: MLlib
>Affects Versions: 3.2.0
>Reporter: Darcy Shen
>Assignee: Apache Spark
>Priority: Major
>