[ https://issues.apache.org/jira/browse/SPARK-8520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Xiangrui Meng updated SPARK-8520:
---------------------------------
    Issue Type: Brainstorming  (was: Improvement)

> Improve GLM's scalability on number of features
> -----------------------------------------------
>
>                 Key: SPARK-8520
>                 URL: https://issues.apache.org/jira/browse/SPARK-8520
>             Project: Spark
>          Issue Type: Brainstorming
>          Components: ML
>    Affects Versions: 1.4.0
>            Reporter: Xiangrui Meng
>            Assignee: Xiangrui Meng
>            Priority: Critical
>              Labels: advanced
>
> MLlib's GLM implementation uses the driver to collect gradient updates.
> When there are many features (>20 million), the driver becomes the
> performance bottleneck. In practice, problems with a large feature
> dimension are common, often resulting from hashing or other feature
> transformations, so it is important to improve MLlib's scalability in the
> number of features.
> There are a couple of possible solutions:
> 1. Still use the driver to collect updates, but reduce the amount of data
> it collects at each iteration.
> 2. Apply 2D partitioning to the training data and store the model
> coefficients distributively (e.g., vector-free L-BFGS).
> 3. Use a parameter server.
> 4. ...

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
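To make option (2) concrete, here is a minimal single-process sketch of the 2D-partitioning idea for a least-squares gradient, X^T (X w - y). The data matrix is split into row blocks and column blocks, and the coefficient vector stays partitioned by column block, so no single place ever materializes the full d-dimensional model. This is purely illustrative: the names (`grad_2d`, `mat_vec`, `mat_t_vec`) and the plain-list representation are assumptions for the sketch, not Spark or MLlib APIs, and a real implementation would express the two reduces as distributed aggregations.

```python
def mat_vec(block, v):
    """Dense block times vector: one partial prediction per row of the block."""
    return [sum(a * b for a, b in zip(row, v)) for row in block]

def mat_t_vec(block, v):
    """Block transpose times vector: one partial gradient entry per column."""
    n_cols = len(block[0])
    return [sum(block[i][j] * v[i] for i in range(len(block)))
            for j in range(n_cols)]

def grad_2d(row_blocks, y_blocks, w_blocks):
    """Least-squares gradient X^T (X w - y), computed block by block.

    row_blocks[i][c] is the (i, c) block of X, y_blocks[i] the i-th slice
    of the labels, and w_blocks[c] the c-th slice of the model. The result
    is returned still partitioned by column block."""
    # Row-wise reduce: combine partial products across column blocks to get
    # the residual for each row block (in Spark terms, a reduce keyed by i).
    residuals = []
    for i, y_i in enumerate(y_blocks):
        pred = [0.0] * len(y_i)
        for c, w_c in enumerate(w_blocks):
            part = mat_vec(row_blocks[i][c], w_c)
            pred = [p + q for p, q in zip(pred, part)]
        residuals.append([p - t for p, t in zip(pred, y_i)])
    # Column-wise reduce: each column block accumulates its gradient slice
    # from all row blocks (a reduce keyed by c); the full gradient vector is
    # never assembled in one place.
    grads = []
    for c, w_c in enumerate(w_blocks):
        g = [0.0] * len(w_c)
        for i, r_i in enumerate(residuals):
            part = mat_t_vec(row_blocks[i][c], r_i)
            g = [a + b for a, b in zip(g, part)]
        grads.append(g)
    return grads
```

The point of the block structure is that both the model slices `w_blocks[c]` and the gradient slices `grads[c]` can live on different nodes, which is what vector-free L-BFGS needs; only the small per-row-block residual vectors are shuffled between the two reduces.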