Appreciate any suggestions in this regard.
Many Thanks,
Ghousia.
On Mon, Aug 18, 2014 at 4:05 PM, Ghousia wrote:
> But this would be applicable only to operations that have a shuffle phase.
>
> This might not be applicable to a simple Map operation where a record is
> mapped to a new value.
>
> Akhil Das wrote:
>
> spark.shuffle.memoryFraction bounds the memory used for in-memory maps
> during shuffles; beyond this limit, the contents will begin to spill to
> disk. If spills are often, consider increasing this value at the
> expense of spark.storage.memoryFraction.
>
> You can give it a try.
>
>
> Thanks
> Best Regards
>
>
> On Mon, Aug 18, 2014 at 12:21 PM, Ghousia wrote:
>
> On Mon, Aug 18, 2014 at 12:02 PM, Akhil Das wrote:
> Hi Ghousia,
>
> You can try the following:
>
> 1. Increase the heap size
> <https://spark.apache.org/docs/0.9.0/configuration.html>
> 2. Increase the number of partitions
> <http://stackoverflow.com/questions/21698443/spark
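
For reference, here is a minimal sketch of how the suggestions in this thread
(a larger heap, more partitions, and a higher spark.shuffle.memoryFraction)
might be applied when building the job. The memory size, fraction values,
partition count, and input path below are illustrative placeholders, not
values taken from the thread.

    import org.apache.spark.{SparkConf, SparkContext}

    // Illustrative values only; tune them for your own cluster and data size.
    val conf = new SparkConf()
      .setAppName("oom-tuning-sketch")
      .set("spark.executor.memory", "4g")          // 1. bigger heap per executor
      .set("spark.shuffle.memoryFraction", "0.4")  // more room for shuffle maps before spilling
      .set("spark.storage.memoryFraction", "0.4")  // give back some cache memory in exchange
    val sc = new SparkContext(conf)

    // 2. More (smaller) partitions keep each task's working set smaller.
    val input = sc.textFile("hdfs:///path/to/input")  // hypothetical input path
    val repartitioned = input.repartition(200)

As noted above, the shuffle-related settings only matter for operations that
actually have a shuffle phase.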
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/OutOfMemory-Error-tp12275.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
-- Forwarded message --
From: Ghousia
Date: Wed, Jun 18, 2014 at 5:41 PM
Subject: BSP realization on Spark
To: user@spark.apache.org
Hi,
We are trying to implement a BSP model in Spark with the help of GraphX.
One thing I came across is the Pregel operator in the Graph class. It
provides a way to define a vertex program, but nothing is mentioned about
barrier synchronization.
Any help in this regard is truly appreciated.
Many Thanks,
Ghousia.
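
On the barrier question: in GraphX's Pregel implementation the synchronization
between supersteps is implicit. Each superstep's messages are computed and
merged, and the vertex program applied, before the next superstep begins, so
there is no explicit barrier call to make. Below is a minimal sketch along the
lines of the single-source shortest paths example in the GraphX programming
guide; the tiny edge list, app name, and local master are made up for
illustration.

    import org.apache.spark.graphx._
    import org.apache.spark.{SparkConf, SparkContext}

    // Single-source shortest paths via graph.pregel. Each superstep
    // (sendMsg + mergeMsg + vprog) completes before the next one starts,
    // which is where the BSP barrier comes from.
    val sc = new SparkContext(new SparkConf().setAppName("pregel-sketch").setMaster("local[*]"))

    // Toy edge list, purely for illustration.
    val edges = sc.parallelize(Seq(Edge(1L, 2L, 1.0), Edge(2L, 3L, 2.0), Edge(1L, 3L, 5.0)))
    val sourceId: VertexId = 1L
    val graph = Graph.fromEdges(edges, Double.PositiveInfinity)
      .mapVertices((id, _) => if (id == sourceId) 0.0 else Double.PositiveInfinity)

    val sssp = graph.pregel(Double.PositiveInfinity)(
      (id, dist, newDist) => math.min(dist, newDist),   // vertex program
      triplet =>                                        // send messages along improving edges
        if (triplet.srcAttr + triplet.attr < triplet.dstAttr)
          Iterator((triplet.dstId, triplet.srcAttr + triplet.attr))
        else Iterator.empty,
      (a, b) => math.min(a, b))                         // merge messages

    sssp.vertices.collect().foreach(println)
    sc.stop()

If finer control over each superstep is needed, the same iterate-until-no-messages
loop can be written by hand on top of the lower-level message aggregation and
join operators, which is essentially what the Pregel operator does internally.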