Re: How to install spark in spark on yarn mode

2015-04-30 Thread madhvi
Hi, you have to specify the worker nodes of the Spark cluster when configuring the cluster. Thanks Madhvi On Thursday 30 April 2015 01:30 PM, xiaohe lan wrote: Hi Madhvi, If I only install spark on one node, and use spark-submit to run an application, which are the Worker

Re: How to install spark in spark on yarn mode

2015-04-29 Thread madhvi
Hi, follow the installation instructions at the following link: http://mbonaci.github.io/mbo-spark/ You don't need to install Spark on every node. Just install it on one node, or you can install it on a remote system and make a Spark cluster. Thanks Madhvi On Thursday 30 April 2015 09:31 AM
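Based on the thread above, a minimal spark-submit invocation for YARN mode might look like the following sketch; the jar path, main class, and `HADOOP_CONF_DIR` value are illustrative placeholders, not taken from the thread.

```shell
# Point Spark at the Hadoop/YARN configuration (path is an example).
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Spark 1.x syntax: yarn-cluster runs the driver inside YARN,
# while yarn-client keeps the driver on the submitting machine.
./bin/spark-submit \
  --master yarn-cluster \
  --class com.example.MyApp \
  /path/to/my-app.jar
```

Because YARN distributes the application jars to the containers it allocates, Spark only needs to be installed on the machine you submit from, which is what the reply describes.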

Re: Serialization error

2015-04-28 Thread madhvi
Thank you Deepak. It worked. Madhvi On Tuesday 28 April 2015 01:39 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) wrote: val conf = new SparkConf() .setAppName(detail) .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer") .set("spark.kryoserializer.buffer.mb", arguments.get("buffersize").get

Re: Serialization error

2015-04-28 Thread madhvi
("spark.kryoserializer.buffer.max.mb", arguments.get("maxbuffersize").get) .set("spark.driver.maxResultSize", arguments.get("maxResultSize").get) .registerKryoClasses(Array(classOf[org.apache.accumulo.core.data.Key])) Can you try this? On Tue, Apr 28, 2015 at 11:11 AM, madhvi madhvi.gu...@orkash.com
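Reassembled from the two snippets above, the suggested Kryo configuration likely looked something like the following sketch; the app name and buffer values are placeholders (the original thread reads them from an `arguments` map that is not shown in full).

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.accumulo.core.data.Key

// Kryo cannot serialize Accumulo's Key out of the box, so it is
// registered explicitly. Buffer sizes below are example values.
val conf = new SparkConf()
  .setAppName("detail")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer.mb", "64")       // Spark 1.x property names;
  .set("spark.kryoserializer.buffer.max.mb", "512")  // later versions drop the .mb suffix
  .set("spark.driver.maxResultSize", "1g")
  .registerKryoClasses(Array(classOf[Key]))

val sc = new SparkContext(conf)
```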

Serialization error

2015-04-27 Thread madhvi
can be used with spark Thanks Madhvi - To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional commands, e-mail: user-h...@spark.apache.org

Re: Error in creating spark RDD

2015-04-23 Thread madhvi
:19 PM, Akhil Das ak...@sigmoidanalytics.com wrote: Change your import from mapred to mapreduce, like: import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat; Thanks Best Regards On Wed, Apr 22, 2015 at 2:42 PM, madhvi

Error in creating spark RDD

2015-04-22 Thread madhvi
InputFormat<K,V> I am using the following import statements: import org.apache.accumulo.core.client.mapred.AccumuloInputFormat; import org.apache.accumulo.core.data.Key; import org.apache.accumulo.core.data.Value; I cannot see what the problem is. Thanks Madhvi
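The fix suggested in the reply above is to import the InputFormat from the new Hadoop API package (`mapreduce`) rather than the old one (`mapred`), because `newAPIHadoopRDD` only accepts new-API input formats. A sketch, assuming a `SparkContext` and a Hadoop `Configuration` already populated with Accumulo connection settings:

```scala
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat // mapreduce, not mapred
import org.apache.accumulo.core.data.{Key, Value}
import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

// newAPIHadoopRDD requires an org.apache.hadoop.mapreduce.InputFormat;
// the mapred package implements the old API and will not type-check here.
def accumuloRDD(sc: SparkContext, hadoopConf: Configuration): RDD[(Key, Value)] =
  sc.newAPIHadoopRDD(
    hadoopConf,
    classOf[AccumuloInputFormat],
    classOf[Key],
    classOf[Value])
```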

Re: Running spark over HDFS

2015-04-21 Thread madhvi
On Tuesday 21 April 2015 12:12 PM, Akhil Das wrote: Your spark master should be spark://swetha:7077 :) Thanks Best Regards On Mon, Apr 20, 2015 at 2:44 PM, madhvi madhvi.gu...@orkash.com wrote: PFA screenshot of my cluster UI Thanks On Monday 20
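The advice above amounts to pointing the application at the standalone master URL shown in the master's web UI. A minimal sketch; `swetha` is the hostname from the thread, and the app name is a placeholder:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Use the exact spark:// URL shown at the top of the master's web UI;
// an HDFS or HTTP URL will not work as a master address.
val conf = new SparkConf()
  .setAppName("hdfs-example")
  .setMaster("spark://swetha:7077")

val sc = new SparkContext(conf)
```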

Spark and accumulo

2015-04-20 Thread madhvi
Hi all, Is there anything to integrate Spark with Accumulo, or to make Spark process data stored in Accumulo? Thanks Madhvi Gupta
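The integration path that emerges in the later threads is Accumulo's Hadoop InputFormat, configured on a Hadoop `Job` and then handed to Spark. A hedged sketch of the setup (the instance name, ZooKeeper hosts, credentials, and table name are invented for illustration; the API shown is the Accumulo 1.6-era `mapreduce` package):

```scala
import org.apache.accumulo.core.client.ClientConfiguration
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
import org.apache.accumulo.core.client.security.tokens.PasswordToken
import org.apache.hadoop.mapreduce.Job

// The Job object is only a carrier for the configuration that
// AccumuloInputFormat reads; its Configuration is later passed
// to sc.newAPIHadoopRDD.
val job = Job.getInstance()
AccumuloInputFormat.setZooKeeperInstance(job,
  new ClientConfiguration()
    .withInstance("myInstance")     // hypothetical instance name
    .withZkHosts("zk1:2181"))       // hypothetical ZooKeeper quorum
AccumuloInputFormat.setConnectorInfo(job, "root", new PasswordToken("secret"))
AccumuloInputFormat.setInputTableName(job, "mytable")
```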

Re: Running spark over HDFS

2015-04-20 Thread madhvi
. On Mon, Apr 20, 2015 at 3:05 PM, madhvi madhvi.gu...@orkash.com wrote: On Monday 20 April 2015 02:52 PM, SURAJ SHETH wrote: Hi Madhvi, I think the memory requested by your job, i.e. 2.0 GB is higher than what is available. Please request

Re: Running spark over HDFS

2015-04-20 Thread madhvi
On Monday 20 April 2015 02:52 PM, SURAJ SHETH wrote: Hi Madhvi, I think the memory requested by your job, i.e. 2.0 GB, is higher than what is available. Please request 256 MB explicitly while creating the Spark context and try again. Thanks and Regards, Suraj Sheth Tried the same but still

Re: Running spark over HDFS

2015-04-20 Thread madhvi
Hi, I did what you told me, but now it is giving the following error: ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up. The UI shows that the master is running. Thanks Madhvi On Monday 20 April 2015 12:28 PM, Akhil Das wrote

Re: Running spark over HDFS

2015-04-20 Thread madhvi
a screenshot of your cluster UI and the code snippet that you are trying to run? Thanks Best Regards On Mon, Apr 20, 2015 at 12:37 PM, madhvi madhvi.gu...@orkash.com wrote: Hi, I did what you told me, but now it is giving the following error: ERROR

Running spark over HDFS

2015-04-20 Thread madhvi
=2 export SPARK_EXECUTOR_MEMORY=1g I am running the Spark standalone cluster. In the cluster UI it shows all workers with allocated resources, but it is still not working. What other configurations need to be changed? Thanks Madhvi Gupta
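The snippet above is from `conf/spark-env.sh`. A sketch of the relevant standalone-mode settings, with example values (the variable whose name is cut off before "=2" is not recoverable from the snippet; `SPARK_WORKER_CORES` below is only a plausible guess, labeled as such):

```shell
# conf/spark-env.sh on each node (all values are examples)
export SPARK_WORKER_CORES=2       # hypothetical: cores each worker offers
export SPARK_EXECUTOR_MEMORY=1g   # memory per executor, as in the thread
export SPARK_MASTER_IP=swetha     # bind the master to a hostname workers can resolve
```

In standalone mode, the master URL the application connects to (spark://host:7077, as discussed in the earlier thread) must match the host the master actually binds to, which is why `SPARK_MASTER_IP` matters when "all masters are unresponsive".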