[ https://issues.apache.org/jira/browse/SPARK-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418547#comment-15418547 ]

胡振宇 edited comment on SPARK-14850 at 8/12/16 9:02 AM:
------------------------------------------------------

/* code is for Spark 1.6.1 */
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.mllib.linalg.Vectors

object Example {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Example")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val count = sqlContext.sparkContext.parallelize(0 until 1e4.toInt, 1).map {
      i => (i, Vectors.dense(Array.fill(1e6.toInt)(1.0)))
    }.toDF().rdd.count()   // at this step toDF can be used on Spark 1.6.1
  }
}

So I am not able to test the simple serialization example.
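A minimal sketch of checking the vector serialization round trip directly, without going through toDF(); it assumes the public VectorUDT class and the serialize/deserialize methods of UserDefinedType as they exist in Spark 1.6.1, and is not code from this issue:

import org.apache.spark.mllib.linalg.{Vector, Vectors, VectorUDT}

object UdtRoundTrip {
  def main(args: Array[String]): Unit = {
    val udt = new VectorUDT()
    val v: Vector = Vectors.dense(Array.fill(1e6.toInt)(1.0))
    // serialize to the catalyst representation and back again
    val roundTripped: Vector = udt.deserialize(udt.serialize(v))
    println(roundTripped.size == v.size)
  }
}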




> VectorUDT/MatrixUDT should take primitive arrays without boxing
> ---------------------------------------------------------------
>
>                 Key: SPARK-14850
>                 URL: https://issues.apache.org/jira/browse/SPARK-14850
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML, SQL
>    Affects Versions: 1.5.2, 1.6.1, 2.0.0
>            Reporter: Xiangrui Meng
>            Assignee: Wenchen Fan
>            Priority: Critical
>             Fix For: 2.0.0
>
>
> In SPARK-9390, we switched to using GenericArrayData to store indices and 
> values in vector/matrix UDTs. However, GenericArrayData is not specialized 
> for primitive types. This might hurt MLlib performance badly. We should 
> consider either specializing GenericArrayData or using a different container.
> cc: [~cloud_fan] [~yhuai]
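
For reference, a minimal sketch of the boxing the description refers to; the GenericArrayData constructor taking Array[Any] is an assumption about the catalyst API around 1.6, not code from this issue:

import org.apache.spark.sql.catalyst.util.GenericArrayData

object BoxingSketch {
  def main(args: Array[String]): Unit = {
    val values: Array[Double] = Array.fill(1e6.toInt)(1.0)
    // GenericArrayData holds Array[Any], so each Double is boxed to
    // java.lang.Double before it is stored -- the per-element overhead
    // described above.
    val boxed = new GenericArrayData(values.map(v => v: Any))
    println(boxed.numElements())
  }
}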


