Ayman Farahat
ayman.fara...@yahoo.com.invalid wrote:
How do you partition by product in Python?
The only API I see is partitionBy(50).
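For what it's worth, PySpark's partitionBy also accepts a partition function as its second argument, so keying the RDD by product id and supplying your own function gives a partition-by-product layout. A minimal sketch (the RDD names are illustrative, not from this thread); the partition function itself is plain Python and can be checked locally:

```python
# Sketch of partitioning by product in PySpark. partitionBy takes an
# optional partition function mapping a key to an int, e.g.:
#
#   by_product = ratings.map(lambda r: (r.product, r)) \
#                       .partitionBy(50, product_partitioner)
#
# (ratings / by_product are hypothetical names.) The partition function
# itself is ordinary Python:

def product_partitioner(product_id, num_partitions=50):
    """Route a product id to a partition index in [0, num_partitions)."""
    return hash(product_id) % num_partitions

# Every product id lands in a valid partition:
assert all(0 <= product_partitioner(p) < 50 for p in range(1000))
```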
On Jun 18, 2015, at 8:42 AM, Debasish Das debasish.da...@gmail.com wrote:
Also, in my experiments it's much faster to run blocked BLAS through cartesian
rather than rely on hyperthreading, but I will let Spark handle the threading.
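The blocked-BLAS idea can be sketched locally with NumPy (block counts and names here are illustrative): split the user and product factor matrices into blocks, pair every user block with every product block (which is what RDD.cartesian does in Spark), and score each pair with one matrix-matrix multiply (a BLAS dgemm under the hood) instead of many per-row dot products.

```python
import numpy as np

# Local sketch of the blocked approach: one dgemm per
# (user block, product block) pair instead of per-row dots.
rng = np.random.RandomState(0)
rank = 4
user_factors = rng.rand(6, rank)      # 6 users
product_factors = rng.rand(8, rank)   # 8 products

user_blocks = np.array_split(user_factors, 2)
product_blocks = np.array_split(product_factors, 2)

# "cartesian" of blocks, one BLAS-backed multiply per pair
scores_blocks = {}
for i, ub in enumerate(user_blocks):
    for j, pb in enumerate(product_blocks):
        scores_blocks[(i, j)] = ub.dot(pb.T)

# Stitch the blocks back together; the result matches the
# unblocked user x product score matrix.
full = np.vstack([np.hstack([scores_blocks[(i, j)] for j in range(2)])
                  for i in range(2)])
assert np.allclose(full, user_factors.dot(product_factors.T))
```

In Spark the per-pair multiply would live inside a map over userBlocks.cartesian(productBlocks); the stitching corresponds to collecting or reducing the scored blocks.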
On Thu, Jun 18, 2015 at 8:38 AM, Debasish Das debasish.da...@gmail.com
wrote:
We added SPARK-3066 for this. In 1.4 you should get the code to do BLAS
dgemm-based calculation.
On Thu, Jun 18, 2015 at 8:20 AM, Ayman Farahat
ayman.fara
Where do I do that?
Thanks
Sent from my iPhone
On Jun 27, 2015, at 8:59 PM, Sabarish Sasidharan
sabarish.sasidha...@manthan.com wrote:
Try setting the yarn executor memory overhead to a higher value like 1g or
1.5g or more.
Regards
Sab
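On YARN this is the spark.yarn.executor.memoryOverhead setting (property name as of Spark 1.x; the value is in megabytes, so 1536 is roughly the 1.5g Sab suggests). A hedged config sketch:

```python
# Config sketch (assumed property name for Spark 1.x on YARN;
# value is in MB, so 1536 ~= 1.5g):
from pyspark import SparkConf

conf = (SparkConf()
        .setAppName("als")
        .set("spark.yarn.executor.memoryOverhead", "1536"))

# Equivalent on the command line:
#   spark-submit --conf spark.yarn.executor.memoryOverhead=1536 ...
```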
On 28-Jun-2015 8:47 am, Ayman Farahat ayman.fara...@yahoo.com.invalid
wrote:
Hello;
I tried to adjust the number of blocks by repartitioning the input.
Here is how I do it (I am partitioning by users):
tot = newrdd.map(lambda l: (l[1], Rating(int(l[1]), int(l[2]), l[4])))
and check the numbers there. The Kryo
serializer doesn't help much here. You can try disabling it (though I don't
think it caused the failure). -Xiangrui
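Disabling Kryo just means falling back to Spark's default Java serializer; a config sketch (assuming the standard property name and serializer class):

```python
# Config sketch: to disable Kryo, either drop the spark.serializer line
# from spark-defaults.conf or set the default Java serializer explicitly:
from pyspark import SparkConf

conf = (SparkConf()
        .set("spark.serializer",
             "org.apache.spark.serializer.JavaSerializer"))
```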
On Fri, Jun 26, 2015 at 11:00 AM, Ayman Farahat ayman.fara...@yahoo.com
wrote:
Hello;
I checked on my partitions/storage and here is what
, but I think checkpointing
defaults to every 10 iterations? One notable thing is the crashes often
start on or after the 9th iteration, so it may be related to checkpointing.
But this could just be a coincidence.
Thanks!
On Fri, Jun 26, 2015 at 1:08 AM, Ayman Farahat ayman.fara
Hello;
I checked on my partitions/storage and here is what I have:
I have 80 executors, 5 GB per executor.
Do I need to set additional params, say cores?
spark.serializer org.apache.spark.serializer.KryoSerializer
# spark.driver.memory 5g
#
If you didn't set checkpointDir in SparkContext, the
checkpointInterval setting in ALS has no effect.
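In other words (a hedged setup sketch against the PySpark 1.3/1.4 MLlib API; the checkpoint path and tiny dataset are illustrative), set the checkpoint directory on the context before training:

```python
from pyspark import SparkContext
from pyspark.mllib.recommendation import ALS, Rating

sc = SparkContext(appName="als-checkpoint")
# Without this call, ALS's checkpointing (by default every 10
# iterations) silently does nothing:
sc.setCheckpointDir("hdfs:///tmp/als-checkpoints")  # illustrative path

ratings = sc.parallelize([Rating(0, 0, 4.0), Rating(0, 1, 2.0),
                          Rating(1, 1, 3.0)])
# More than 10 iterations, so checkpointing actually kicks in:
model = ALS.train(ratings, rank=2, iterations=12)
```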
Thanks!
On Fri, Jun 26, 2015 at 1:08 AM, Ayman Farahat ayman.fara...@yahoo.com
wrote:
was there any resolution to that problem?
I am also having that with PySpark 1.4:
380 million observations,
100 factors, and 5 iterations.
Thanks
Ayman
On Jun 23, 2015, at 6:20 PM, Xiangrui Meng men...@gmail.com wrote:
It shouldn't be hard to handle 1 billion ratings in 1.3. Just need
more
Thanks Sabarish and Nick.
Would you happen to have some code snippets that you can share?
Best
Ayman
On Jun 17, 2015, at 10:35 PM, Sabarish Sasidharan
sabarish.sasidha...@manthan.com wrote:
Nick is right. I too have implemented this way and it works just fine. In my
case, there can be even
On Thu, Jun 18, 2015 at 8:38 AM, Debasish Das debasish.da...@gmail.com
wrote:
We added SPARK-3066 for this. In 1.4 you should get the code to do BLAS
dgemm-based calculation.
On Thu, Jun 18, 2015 at 8:20 AM, Ayman Farahat
ayman.fara...@yahoo.com.invalid wrote:
Thanks Sabarish.
This is 1.3.1.
Ayman Farahat
--
View my research on my SSRN Author page:
http://ssrn.com/author=1594571
From: Nick Pentreath nick.pentre...@gmail.com
To: user@spark.apache.org
Sent: Tuesday, June 16, 2015 4:23 AM