I have updated my Spark source code to 1.3.1.
The checkpoint works well.
BUT the shuffle data still cannot be deleted automatically… the disk usage is
still 30 TB…
I have set spark.cleaner.referenceTracking.blocking.shuffle to true.
Do you know how to solve my problem?
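For reference, a sketch of the cleaner-related Spark 1.x properties involved, written as a spark-defaults.conf fragment. The first two are the settings discussed in this thread; the ttl-based cleaner at the bottom is an assumption of mine as a possible workaround, not something anyone in the thread confirmed:

```properties
# Reference-tracking cleaner (the one discussed here): shuffle files are
# removed only when the owning RDD is garbage-collected on the driver, so
# shuffle data can linger while the driver still holds old RDD references.
spark.cleaner.referenceTracking                   true
spark.cleaner.referenceTracking.blocking.shuffle  true

# Spark 1.x alternative (assumption, not discussed in this thread): periodic
# time-based cleanup of metadata and shuffle data older than this many seconds.
spark.cleaner.ttl  3600
```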
Sendong Li
-- Original Message --
*From:* lisendong lisend...@163.com
*Sent:* Tuesday, March 31, 2015, 3:47 PM
*To:* Xiangrui Meng men...@gmail.com
*Cc:* Xiangrui Meng m...@databricks.com; user user@spark.apache.org;
Sean Owen so...@cloudera.com; GuoQiang Li wi...@qq.com
*Subject:* Re: different result from implicit ALS with explicit ALS
I believe that's right, and is what I was getting at. Yes, the implicit
formulation ends up implicitly including every possible interaction in
its loss function, even unobserved ones. That could be the difference.
This is mostly an academic question though. In practice, you have
click-like data
Oh my god, I think I understood...
In my case, there are three kinds of user-item pairs:
display-and-click pairs (positive pairs)
display-but-no-click pairs (negative pairs)
no-display pairs (unobserved pairs)
Explicit ALS only considers the first and the second kinds,
but implicit ALS considers all three kinds.
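The difference between the two loss functions can be sketched with a toy computation. This is plain Python, not Spark's actual implementation; the confidence form c = 1 + alpha * r follows the Hu/Koren implicit-feedback formulation that MLlib's trainImplicit is based on, and the numbers are made up for illustration:

```python
# Toy illustration of why explicit and implicit ALS can disagree:
# the explicit loss sums only over observed pairs, while the implicit loss
# sums over EVERY user-item cell, pulling unobserved pairs toward 0.

# Observed interactions: (user, item) -> raw score r.
# 1.0 = display-and-click (positive), 0.0 = display-but-no-click (negative).
observed = {(0, 0): 1.0, (0, 1): 0.0, (1, 1): 1.0}
n_users, n_items = 2, 2
alpha = 40.0  # confidence scaling, as in the implicit-feedback formulation

def explicit_loss(pred):
    # Squared error over OBSERVED cells only (pair kinds 1 and 2).
    return sum((r - pred[u][i]) ** 2 for (u, i), r in observed.items())

def implicit_loss(pred):
    # Confidence-weighted squared error over ALL cells (all three kinds).
    # Preference p = 1 if r > 0 else 0; confidence c = 1 + alpha * r,
    # so an unobserved cell still contributes with confidence 1.
    total = 0.0
    for u in range(n_users):
        for i in range(n_items):
            r = observed.get((u, i), 0.0)
            p = 1.0 if r > 0 else 0.0
            c = 1.0 + alpha * r
            total += c * (p - pred[u][i]) ** 2
    return total

# A prediction that fits the observed cells perfectly but guesses 1.0
# for the unobserved pair (1, 0):
pred = [[1.0, 0.0], [1.0, 1.0]]
print(explicit_loss(pred))  # 0.0 -- the explicit loss ignores the unobserved cell
print(implicit_loss(pred))  # 1.0 -- the implicit loss penalizes it (with c = 1)
```

So even a factorization that reproduces the observed scores exactly is penalized by the implicit objective for predicting a high preference on a no-display pair, which is one way the two models end up at different results.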
Thank you very much for your opinion :)
In our case, maybe it's dangerous to treat unobserved items as negative
interactions (although we could give them small confidence, I think they are
still not credible...).
I will do more experiments and give you feedback :)
Thank you ;)
I could not understand why, could you help me?
Thank you very much!
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/different-result-from-implicit-ALS-with-explicit-ALS-tp21823.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
Okay, I have brought this to the user@ list.
I don't think the negative pairs should be omitted…
If the scores of all of the pairs are 1.0, the result will be worse… I have tried…
Best Regards,
Sendong Li
On Feb 26, 2015, at 10:07 PM, Sean Owen so...@cloudera.com wrote:
Yes, I mean, do not generate a
+user
On Thu, Feb 26, 2015 at 2:26 PM, Sean Owen so...@cloudera.com wrote:
I think I may have it backwards, and that you are correct to keep the 0
elements in train() in order to try to reproduce the same result.
The second formulation is called 'weighted regularization' and is used for
Lisen, did you use all m-by-n pairs during training? Implicit model
penalizes unobserved ratings, while explicit model doesn't. -Xiangrui