I am trying to build Spark after cloning the GitHub repo. I am executing:
./sbt/sbt -Dhadoop.version=2.4.0 -Pyarn assembly
I am getting the following error:
[warn] ^
[error]
[error] while compiling:
/home/m3.sharma/installSrc/spark/spark/sql/core/src/main/scala
Thanks, it works now :)
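For anyone who hits the same stale-build error, the sequence that worked for
me was the clean suggested below, followed by the same assembly command:

    sbt/sbt clean
    sbt/sbt -Dhadoop.version=2.4.0 -Pyarn assembly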
On Wed, Jul 23, 2014 at 11:45 AM, Xiangrui Meng wrote:
try `sbt/sbt clean` first? -Xiangrui
On Wed, Jul 23, 2014 at 11:23 AM, m3.sharma wrote:
Thanks Nick, the real-time suggestion is a good one; we will see if we can add
that to our deployment strategy. And you are correct, we may not need
recommendations for every user.
We will also try adding more resources and the suggestion of broadcasting the
item features, since currently they don't seem to be huge. As users and
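In case it helps, here is a rough sketch of what the broadcast approach could
look like (names such as userFeatures/itemFeatures are placeholders, not our
actual code, and this assumes the 10K item vectors fit in memory on each
executor):

    import org.apache.spark.SparkContext
    import org.apache.spark.rdd.RDD

    def scoreAllPairs(sc: SparkContext,
                      userFeatures: RDD[(Int, Array[Double])],
                      itemFeatures: Array[(Int, Array[Double])]): RDD[(Int, Int, Double)] = {
      // Ship the small item table to every executor once instead of joining
      val itemsBc = sc.broadcast(itemFeatures)
      userFeatures.flatMap { case (userId, uVec) =>
        itemsBc.value.map { case (itemId, iVec) =>
          // Plain dot product; wrap in a sigmoid for classification-style scores
          val score = uVec.zip(iVec).map { case (a, b) => a * b }.sum
          (userId, itemId, score)
        }
      }
    }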
Christopher, that's really a great idea: searching in the latent factor space
rather than computing each entry of the matrix reduces the complexity of the
problem drastically from the naive O(n*m). Since our data is not that huge, I
will try exact neighborhood search first and fall back to approximate search
if that
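Concretely, the exact-search version I plan to try first would look roughly
like this (a sketch only, with placeholder names; it still scores all 10K
items per user, but keeps only the top k, so the full n*m matrix is never
materialized):

    import org.apache.spark.SparkContext._
    import org.apache.spark.rdd.RDD

    def topKPerUser(userFactors: RDD[(Int, Array[Double])],
                    itemFactors: Array[(Int, Array[Double])],
                    k: Int): RDD[(Int, Array[(Int, Double)])] = {
      val itemsBc = userFactors.sparkContext.broadcast(itemFactors)
      userFactors.mapValues { uVec =>
        itemsBc.value
          .map { case (itemId, iVec) =>
            (itemId, uVec.zip(iVec).map { case (a, b) => a * b }.sum)
          }
          .sortBy { case (_, score) => -score }  // exact: rank every item
          .take(k)                               // keep only the best k per user
      }
    }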
Hi,
I am trying to develop a recommender system for about 1 million users and 10
thousand items. Currently it's a simple regression-based model: for every
(user, item) pair in the dataset we generate some features and learn a model
from them. Till training and evaluation everything is fine; the
We are using the RegressionModels that come with the *mllib* package in Spark.
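For reference, the training side is just the standard mllib API; a minimal
sketch (LinearRegressionWithSGD stands in here for whichever of the regression
models we actually use, and the feature generation is elided):

    import org.apache.spark.rdd.RDD
    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.regression.{LabeledPoint, LinearRegressionModel, LinearRegressionWithSGD}

    // one (label, featureArray) row per observed (user, item) pair
    def trainModel(pairFeatures: RDD[(Double, Array[Double])]): LinearRegressionModel = {
      val training = pairFeatures.map { case (label, features) =>
        LabeledPoint(label, Vectors.dense(features))
      }
      LinearRegressionWithSGD.train(training, 100)  // 100 SGD iterations
    }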
Yes, that's what prediction should be doing: taking a dot product, or applying
a sigmoid function, for each (user, item) pair. For 1 million users and 10K
items there are 10 billion pairs.
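Roughly, the naive scoring we started from looks like this (placeholder names,
not our exact code); the cartesian product is what materializes the 10 billion
pairs:

    import org.apache.spark.rdd.RDD

    // users: ~1e6 rows, items: ~1e4 rows => cartesian() yields ~1e10 pairs
    def naiveScores(users: RDD[(Int, Array[Double])],
                    items: RDD[(Int, Array[Double])]): RDD[(Int, Int, Double)] =
      users.cartesian(items).map { case ((u, uVec), (i, iVec)) =>
        val dot = uVec.zip(iVec).map { case (a, b) => a * b }.sum
        (u, i, 1.0 / (1.0 + math.exp(-dot)))  // sigmoid variant of the score
      }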