... bringing all this information here, first to let you know that I could
solve the issue raised here (and thank you for that), and secondly so that it
can help someone else who runs into the same issue.
>
>
> Thanks again!
>
> Arijit
>
________________________________
From: arijit chakraborty
Sent: Friday, May 12, 2017 8:41:26 PM
To: dev@systemml.incubator.apache.org
Subject: Re: Improve SystemML execution speed in Spark
Hi Niketan,
You are right. I was actually testing 2 separate pieces of code in the same
environment. Maybe that's why it was showing these high statistics.
________________________________
From: Niketan Pansare
Sent: Friday, May 12, 2017 8:04:41 PM
To: dev@systemml.incubator.apache.org
Subject: Re: Improve SystemML execution speed in Spark
Hi Arijit,
The second set of statistics is a bit surprising. Is it possible for you to share
the exact setup, maybe via git: a bash script (for example run.sh with ...
> ... sec 916
> -- 8)  rangeReIndex  0.010 sec  72
> -- 9)  createvar     0.010 sec  576
> -- 10) rmempty       0.007 sec  54
>
>
> I can see JVM GC time is high (it is pretty low in the above case) and append is
> taking time (even though we are not appending anything).
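For reference, instruction-level timings like the heavy hitters quoted above come from SystemML's statistics output. A minimal sketch of turning it on, assuming the SystemML Python MLContext API and an existing SparkContext `sc` (the DML string below is only a placeholder):

from systemml import MLContext, dml

ml = MLContext(sc)       # `sc` is assumed to be an already-created SparkContext
ml.setStatistics(True)   # print heavy-hitter instruction timings after each execute()

script = dml("X = rand(rows=100, cols=10); print(sum(X));")  # placeholder DML
ml.execute(script)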
___
From: arijit chakraborty
Sent: Friday, May 12, 2017 2:32:07 AM
To: dev@systemml.incubator.apache.org
Subject: Re: Improve SystemML execution speed in Spark
Hi Niketan,
Thank you for your suggestion!
I tried what you suggested.
## Changed it here:
from pyspark.sql
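The snippet is cut off here, so the exact change is not visible. As a rough sketch only (an assumption, not the original edit), a change starting with `from pyspark.sql` typically means building the Spark DataFrames through a SparkSession:

import pandas as pd
from pyspark.sql import SparkSession

# Reuse a single SparkSession for the whole script (settings are placeholders).
spark = SparkSession.builder.master("local[*]").appName("test").getOrCreate()

train_data = pd.read_csv("data1.csv")          # pandas DataFrame
train_df = spark.createDataFrame(train_data)   # pandas -> Spark DataFrame
train_df.printSchema()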
> ... The rest of the portions were almost instantaneous. The DML code part
> was taking time, and I could not figure out why that could be.
>
>
> Thanks again!
>
> Arijit
>
>
________________________________
From: Niketan Pansare
Sent: Thursday, May 11, 2017 1:33:15 AM
To: dev@systemml.incubator.apache.org
Subject: Re: Improve SystemML execution speed in Spark
Hi Arijit,
Can you please put timing counters around the code below to understand the 20-30
seconds you observe:
1. Creation of SparkContext:
sc = SparkContext("local[*]", "test")
2. Converting pandas to PySpark dataframe:
> train_data = pd.read_csv("data1.csv")
> test_data = pd.read_csv("data2.csv")
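A minimal sketch of the kind of timing counters being asked for, assuming Python's standard time module and an SQLContext-based pandas-to-Spark conversion (the CSV names are the ones from the quoted script; everything else is an assumption):

import time
import pandas as pd
from pyspark import SparkContext
from pyspark.sql import SQLContext

# 1. Time the creation of the SparkContext.
t0 = time.time()
sc = SparkContext("local[*]", "test")
print("SparkContext creation: %.2f sec" % (time.time() - t0))

# 2. Time the pandas -> PySpark DataFrame conversion.
sqlContext = SQLContext(sc)
train_data = pd.read_csv("data1.csv")
test_data = pd.read_csv("data2.csv")

t0 = time.time()
train_df = sqlContext.createDataFrame(train_data)
test_df = sqlContext.createDataFrame(test_data)
print("pandas -> PySpark conversion: %.2f sec" % (time.time() - t0))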
Hi,
I'm creating a process in SystemML and running it through Spark. I'm running
the code in the following way:
# Spark Specifications:
import os
import sys
import pandas as pd
import numpy as np
spark_path = r"C:\spark"  # raw string avoids backslash-escape surprises on Windows
os.environ['SPARK_HOME'] = spark_path
os.environ['HADOOP_HOME'] = spark_path
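The original message is cut off here by the archive. A minimal sketch of how such a Windows setup typically continues, assuming the SystemML Python MLContext API (the py4j zip name, paths, and DML file name are placeholders, not the original values):

# Make the bundled PySpark importable (the py4j version is specific to the Spark release).
sys.path.append(os.path.join(spark_path, "python"))
sys.path.append(os.path.join(spark_path, "python", "lib", "py4j-0.10.4-src.zip"))

from pyspark import SparkContext
from systemml import MLContext, dml

sc = SparkContext("local[*]", "test")
ml = MLContext(sc)

# Run the DML script (the file name is a placeholder).
with open("process.dml") as f:
    ml.execute(dml(f.read()))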