, and 1
column of labels. From this dataset, I split 80% for the training set and 20%
for the test set. The features are integer counts and the labels are binary (1/0).
thanks
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/mllib-performance-on-cluster-tp13290p13311.html
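The split described above can be sketched outside Spark; this is a minimal, hypothetical illustration only (inside Spark one would typically use RDD.randomSplit with weights [0.8, 0.2]), with synthetic integer-count features and binary labels standing in for the actual dataset:

```python
import random

# Synthetic stand-in for the dataset described above:
# integer-count features and a binary (1/0) label per row.
random.seed(0)
data = [([random.randint(0, 50) for _ in range(5)], random.randint(0, 1))
        for _ in range(1000)]

# Shuffle, then take the first 80% as the training set, the rest as test.
random.shuffle(data)
cut = int(0.8 * len(data))
train, test = data[:cut], data[cut:]

print(len(train), len(test))  # 800 200
```

Note that randomSplit in Spark splits probabilistically per record, so the resulting sizes are only approximately 80/20, unlike the exact slicing above.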
like to know
if there is something I need to be doing to optimize the performance on the
cluster or if others have also been getting similar results.
thanks
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/mllib-performance-on-cluster-tp13290.html
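Not stated in the thread, but the usual first checks for slow MLlib jobs on a cluster are to cache the training data before the iterative solver runs and to size executors explicitly. A hypothetical submission sketch (the master URL, memory and core figures, and application jar name are all made up for illustration):

```shell
# Hypothetical spark-submit invocation; adjust the master URL, memory,
# and core counts to the actual cluster. In the driver program, call
# trainingData.cache() before handing it to the MLlib trainer, since
# iterative algorithms re-read the input on every iteration.
spark-submit \
  --master spark://master:7077 \
  --executor-memory 12g \
  --total-executor-cores 48 \
  my-mllib-app.jar
```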
, with 16GB per node. According
to the application detail stats in the Spark UI, the total memory consumed
is around 95.5 GB.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/mllib-performance-on-cluster-tp13290p13299.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
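As a back-of-the-envelope check (not from the thread): if each node contributes 16 GB, the 95.5 GB figure reported in the Spark UI corresponds to roughly six nodes' worth of memory:

```python
# How many 16 GB nodes does the reported 95.5 GB of
# consumed memory correspond to?
total_gb = 95.5
per_node_gb = 16.0
print(total_gb / per_node_gb)  # ~5.97, i.e. about six nodes' worth
```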
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands