The data set is not big: 56K x 9K. It does have long strings as column
names.

It fits very easily in pandas, which is also an in-memory tool, so I am not
sure memory is the issue here. If pandas can hold it easily and work on it
very fast, shouldn't Spark be able to handle it too?

On Tue, Sep 13, 2016 at 10:24 AM, neil90 [via Apache Spark User List] <
ml-node+s1001560n27707...@n3.nabble.com> wrote:

> I'm assuming the dataset you're dealing with is big, hence why you wanted
> to allocate your full 16 GB of RAM to it.
>
> I suggest running the python spark-shell as such "pyspark --driver-memory
> 16g".
>
> Also, if you cache your data and it doesn't fully fit in memory, you can
> do df.persist(StorageLevel.MEMORY_AND_DISK).
>




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Java-Heap-Error-tp27669p27708.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
