Hi,
My configurations are as follows:
SPARK_EXECUTOR_INSTANCES=4
SPARK_EXECUTOR_MEMORY=1G
But on my Spark UI it shows:
* Alive Workers: 1
* Cores in use: 4 Total, 0 Used
* Memory in use: 6.7 GB Total, 0.0 B Used
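As an aside, the same executor settings can also be passed programmatically through SparkConf instead of spark-env.sh. Below is a minimal Scala sketch; the app name and master URL are placeholders, and note that spark.executor.instances is, as far as I know, only honored on YARN, not by a standalone master:

import org.apache.spark.{SparkConf, SparkContext}

object ExecutorConfigSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("executor-config-sketch")    // placeholder app name
      .setMaster("spark://master:7077")        // placeholder master URL
      .set("spark.executor.instances", "4")    // counterpart of SPARK_EXECUTOR_INSTANCES (YARN)
      .set("spark.executor.memory", "1g")      // counterpart of SPARK_EXECUTOR_MEMORY
    val sc = new SparkContext(conf)
    // Confirm what the application actually received:
    println(sc.getConf.get("spark.executor.memory"))
    sc.stop()
  }
}

Also note that on the standalone master UI, "0 Used" just means no application is currently holding cores or memory; the totals describe the cluster's capacity, not a problem in themselves.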
Also, while running a Java program for Spark I am getting the following:
can't convert a source RDD from JVM to RRDD, you can't further use the SparkR RDD
API to apply transformations on it.
-Original Message-
From: madhvi.gupta [mailto:madhvi.gu...@orkash.com]
Sent: Wednesday, September 23, 2015 11:42 AM
To: Sun, Rui; user
Subject: Re: SparkR for accumulo
Hi Rui,
Can't we use the accumulo data RDD created from Java in Spark in SparkR?
1. There is no data source support for accumulo, so we can't
create a SparkR DataFrame on accumulo.
2. It is possible to create an RDD from accumulo via AccumuloInputFormat in Scala
(a sketch follows below). But unfortunately, SparkR does not support creating RDDs
from Hadoop files other than text files.
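For reference, a rough Scala sketch of the RDD route from point 2, using newAPIHadoopRDD with AccumuloInputFormat. The instance name, ZooKeeper hosts, table name, and credentials are placeholders, and the locations of the static helper methods can differ a bit across Accumulo versions:

import org.apache.accumulo.core.client.ClientConfiguration
import org.apache.accumulo.core.client.mapreduce.{AbstractInputFormat, AccumuloInputFormat, InputFormatBase}
import org.apache.accumulo.core.client.security.tokens.PasswordToken
import org.apache.accumulo.core.data.{Key, Value}
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.{SparkConf, SparkContext}

object AccumuloRDDSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("accumulo-rdd-sketch"))

    // A Hadoop Job object is used only as a carrier for the InputFormat configuration.
    // Note: from Scala the static helpers must be called on the classes that declare
    // them (AbstractInputFormat / InputFormatBase), not on AccumuloInputFormat itself.
    val job = Job.getInstance()
    AbstractInputFormat.setConnectorInfo(job, "user", new PasswordToken("secret"))
    AbstractInputFormat.setZooKeeperInstance(job,
      ClientConfiguration.loadDefault()
        .withInstance("myInstance")   // placeholder Accumulo instance name
        .withZkHosts("zk1:2181"))     // placeholder ZooKeeper quorum
    InputFormatBase.setInputTableName(job, "myTable")  // placeholder table name

    // Each record comes back as an Accumulo (Key, Value) pair.
    val rdd = sc.newAPIHadoopRDD(job.getConfiguration,
      classOf[AccumuloInputFormat], classOf[Key], classOf[Value])

    println(s"rows: ${rdd.count()}")
    sc.stop()
  }
}

As point 2 says, though, such an RDD of Accumulo (Key, Value) pairs lives on the JVM side only; SparkR has no public API to turn it into an R-side RDD.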
-Original Message-
From: madhvi.gupta
Hi,
I want to process accumulo data in R through SparkR. Can anyone help me
and let me know how to get accumulo data into Spark to be used in R?
--
Thanks and Regards
Madhvi Gupta