[ https://issues.apache.org/jira/browse/SPARK-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962085#comment-14962085 ]
Narine Kokhlikyan commented on SPARK-11057:
-------------------------------------------

Thank you for your quick response [~rxin]. I have one more question :)

Since my goal is to compute the correlation and covariance for column-pair combinations, and those are independent of each other, I think it is better to do it in parallel. After exploring the APIs in Spark I came up with something like this:

1st, sequential example: let's assume these are my combinations and that, for now, all my columns are numerical:

    combs
    res214: Array[(String, String)] = Array((rating,rating), (rating,income), (rating,age), (income,rating), (income,income), (income,age), (age,rating), (age,income), (age,age))

This is how I compute the covariances, and it works perfectly:

    combs.map(x => peopleDF.stat.cov(x._1, x._2)).foreach(println)

2nd - now I want to compute my covariances in parallel:

    val parcombs = sc.parallelize(combs)
    parcombs.map(x => peopleDF.stat.cov(x._1, x._2)).foreach(println)

The example above fails with a NullPointerException. I'm new to this, so I'm probably doing something unexpected; if you could point it out to me, that would be great! Thanks!

Caused by: java.lang.NullPointerException
        at org.apache.spark.sql.DataFrame.schema(DataFrame.scala:290)
        at org.apache.spark.sql.execution.stat.StatFunctions$$anonfun$collectStatisticalData$2.apply(StatFunctions.scala:80)
        at org.apache.spark.sql.execution.stat.StatFunctions$$anonfun$collectStatisticalData$2.apply(StatFunctions.scala:80)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)

> SQL: corr and cov for many columns
> ----------------------------------
>
>          Key: SPARK-11057
>          URL: https://issues.apache.org/jira/browse/SPARK-11057
>      Project: Spark
>   Issue Type: New Feature
>   Components: SQL
>     Reporter: Narine Kokhlikyan
>
> Hi there,
>
> As we know, R has the option to calculate the correlation and covariance for all columns of a dataframe, or between the columns of two dataframes.
>
> If we look at the Apache commons-math package, we can see that they have that too:
> http://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math3/stat/correlation/PearsonsCorrelation.html#computeCorrelationMatrix%28org.apache.commons.math3.linear.RealMatrix%29
>
> In case we have only one DataFrame as input:
> ------------------------------------------------------
> for correlation:
> cor[i,j] = cor[j,i]
> and for the main diagonal we can have 1s.
> ---------------------
> for covariance:
> cov[i,j] = cov[j,i]
> and for the main diagonal we can compute the variance of that specific column.
> See:
> http://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math3/stat/correlation/Covariance.html#computeCovarianceMatrix%28org.apache.commons.math3.linear.RealMatrix%29
>
> Let me know what you think.
> I'm working on this and will make a pull request soon.
>
> Thanks,
> Narine
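For reference, a minimal sketch of the matrix-building idea described above, assuming the peopleDF DataFrame and the column names ("rating", "income", "age") from the comment. It issues the pairwise df.stat.cov / df.stat.corr calls from the driver (each call is itself a distributed Spark job) and uses the symmetry and diagonal properties noted in the description. This is only an illustration of the approach, not the proposed implementation; the helper name pairwiseMatrix is hypothetical.

    import org.apache.spark.sql.DataFrame

    // Sketch (assumption): build a symmetric covariance or correlation matrix
    // from pairwise DataFrameStatFunctions calls made on the driver.
    def pairwiseMatrix(df: DataFrame, cols: Seq[String], useCorr: Boolean): Array[Array[Double]] = {
      val n = cols.length
      val m = Array.ofDim[Double](n, n)
      for (i <- 0 until n; j <- i until n) {
        val v =
          if (useCorr && i == j) 1.0                        // main diagonal of the correlation matrix is 1
          else if (useCorr) df.stat.corr(cols(i), cols(j))  // cor[i,j] == cor[j,i]
          else df.stat.cov(cols(i), cols(j))                // cov(c, c) is the variance of column c
        m(i)(j) = v                                         // fill both halves: the matrix is symmetric
        m(j)(i) = v
      }
      m
    }

    // Usage with the columns from the example above (hypothetical driver code):
    // val covMatrix = pairwiseMatrix(peopleDF, Seq("rating", "income", "age"), useCorr = false)
    // val corMatrix = pairwiseMatrix(peopleDF, Seq("rating", "income", "age"), useCorr = true)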