Fwd: Container preempted by scheduler - Spark job error

2016-06-02 Thread Prabeesh K.
Which version of Spark are you using? Is CapacityScheduler being used? Thanks. On Thu, Jun 2, 2016 at 1:32 AM, Prabeesh K. <prabsma...@gmail.com> wrote: Hi, I am using the below command to run a Spark job and I get an error like "Container preempted by scheduler"

Container preempted by scheduler - Spark job error

2016-06-02 Thread Prabeesh K.
Hi, I am using the below command to run a Spark job and I get an error like "Container preempted by scheduler". I am not sure if it's related to wrong usage of memory:

nohup ~/spark1.3/bin/spark-submit \
  --num-executors 50 \
  --master yarn \
  --deploy-mode cluster \
  --queue adhoc \
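The preview cuts the command off after --queue adhoc. As a rough sketch only, a fuller YARN cluster-mode submission with explicit memory settings might look like the following; every flag and name after --queue adhoc (memory sizes, jar name, log redirection) is an assumption for illustration, not from the original post:

```shell
# Hypothetical completion of the truncated command above.
# --executor-memory / --driver-memory values and my-job.jar are assumed.
nohup ~/spark1.3/bin/spark-submit \
  --num-executors 50 \
  --master yarn \
  --deploy-mode cluster \
  --queue adhoc \
  --executor-memory 4g \
  --driver-memory 2g \
  my-job.jar > spark-job.log 2>&1 &
```

Note that "Container preempted by scheduler" typically means YARN reclaimed the container to satisfy another queue's capacity, which is a scheduler-configuration issue rather than necessarily a memory-sizing one.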

Re: Spark + Jupyter (IPython Notebook)

2015-08-18 Thread Prabeesh K.
Refer to this post: http://blog.prabeeshk.com/blog/2015/06/19/pyspark-notebook-with-docker/ Spark + Jupyter + Docker On 18 August 2015 at 21:29, Jerry Lam chiling...@gmail.com wrote: Hi Guru, Thanks! Great to hear that someone tried it in production. How do you like it so far? Best Regards,

Re: Packaging Java + Python library

2015-04-13 Thread prabeesh k
Refer to this post: http://blog.prabeeshk.com/blog/2015/04/07/self-contained-pyspark-application/ On 13 April 2015 at 17:41, Punya Biswal pbis...@palantir.com wrote: Dear Spark users, My team is working on a small library that builds on PySpark and is organized like PySpark as well -- it has a

Re: How to learn Spark ?

2015-04-02 Thread prabeesh k
You can also refer to this blog: http://blog.prabeeshk.com/blog/archives/ On 2 April 2015 at 12:19, Star Guo st...@ceph.me wrote: Hi all, I am new here. Could you give me some suggestions for learning Spark? Thanks. Best Regards, Star Guo

Re: Beginner in Spark

2015-02-10 Thread prabeesh k
Refer to this blog http://blog.prabeeshk.com/blog/2014/10/31/install-apache-spark-on-ubuntu-14-dot-04/ for a step-by-step installation of Spark on Ubuntu. On 7 February 2015 at 03:12, Matei Zaharia matei.zaha...@gmail.com wrote: You don't need HDFS or virtual machines to run Spark. You can just

Re: Kestrel and Spark Stream

2014-11-18 Thread prabeesh k
You can refer to the following link: https://github.com/prabeesh/Spark-Kestrel On Tue, Nov 18, 2014 at 3:51 PM, Akhil Das ak...@sigmoidanalytics.com wrote: You can implement a custom receiver http://spark.apache.org/docs/latest/streaming-custom-receivers.html to connect to Kestrel and use it. I

Re: Unable to run a Standalone job

2014-06-05 Thread prabeesh k
Try the sbt clean command before building the app, or delete the .ivy2 and .sbt folders (not a good method). Then try to rebuild the project. On Thu, Jun 5, 2014 at 11:45 AM, Sean Owen so...@cloudera.com wrote: I think this is SPARK-1949 again: https://github.com/apache/spark/pull/906 I think this
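The advice above can be sketched as a sequence of commands; this is a hedged how-to sketch, and the cache-deletion step is the "not a good method" fallback the author warns about, since it forces every dependency to be re-downloaded and affects all sbt projects on the machine:

```shell
# Safer first step: clean this project's build output only.
sbt clean

# Last resort: remove the local ivy and sbt caches. Dependencies will be
# re-downloaded on the next build; this affects other projects too.
rm -rf ~/.ivy2 ~/.sbt

# Then rebuild the project.
sbt package
```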

Re: mismatched hdfs protocol

2014-06-04 Thread prabeesh k
For building Spark against a particular version of Hadoop, refer to http://spark.apache.org/docs/latest/hadoop-third-party-distributions.html On Thu, Jun 5, 2014 at 8:14 AM, Koert Kuipers ko...@tresata.com wrote: You have to build Spark against the version of Hadoop you are using. On Wed, Jun 4, 2014
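As a sketch of what that build looks like for Spark of this era (circa 1.0), the Hadoop version is selected at assembly time; the version number 2.2.0 below is an example, not taken from the thread, and the exact flags should be checked against the build documentation for your Spark release:

```shell
# sbt build: select the Hadoop version via an environment variable
# (Spark 1.x-era convention; 2.2.0 is an example value).
SPARK_HADOOP_VERSION=2.2.0 sbt/sbt assembly

# Maven equivalent: pass the Hadoop version as a property.
mvn -Dhadoop.version=2.2.0 -DskipTests clean package
```

A Spark assembly built against a mismatched Hadoop version is a common cause of "mismatched protocol" errors when talking to HDFS.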

Unable to execute saveAsTextFile on multi node mesos

2014-05-31 Thread prabeesh k
Hi, scenario: read data from HDFS, apply a Hive query on it, and write the result back to HDFS. Schema creation, querying, and saveAsTextFile are working fine in the following modes: - local mode - mesos cluster with a single node - spark cluster with multiple nodes. Schema creation and

Re: Announcing Spark 1.0.0

2014-05-30 Thread prabeesh k
Please update the http://spark.apache.org/docs/latest/ link On Fri, May 30, 2014 at 4:03 PM, Margusja mar...@roo.ee wrote: Is it possible to download a pre-built package? http://mirror.symnds.com/software/Apache/incubator/spark/spark-1.0.0/spark-1.0.0-bin-hadoop2.tgz - gives me 404 Best

java.lang.OutOfMemoryError while running Shark on Mesos

2014-05-22 Thread prabeesh k
Hi, I am trying to apply an inner join in Shark using 64MB and 27MB files. I am able to run the following queries on Mesos: - SELECT * FROM geoLocation1 - SELECT * FROM geoLocation1 WHERE country = 'US' But while trying an inner join as SELECT * FROM geoLocation1 g1 INNER JOIN

Re: Better option to use Querying in Spark

2014-05-06 Thread prabeesh k
at 11:22 AM, prabeesh k prabsma...@gmail.com wrote: Hi, I have seen three different ways to query data from Spark: 1. Default SQL support (https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/sql/examples/HiveFromSpark.scala) 2. Shark 3. Blink

Re: Compile SimpleApp.scala encountered error, please can any one help?

2014-04-12 Thread prabeesh k
Ensure there is only one SimpleApp object in your project; also check whether there is any copy of SimpleApp.scala. Normally the file SimpleApp.scala is in src/main/scala or in the project root folder. On Sat, Apr 12, 2014 at 11:07 AM, jni2000 james...@federatedwireless.comwrote: Hi I am a new Spark user

[BLOG] For Beginners

2014-04-07 Thread prabeesh k
Hi all, Here I am sharing a blog post for beginners about creating a standalone Spark Streaming application and bundling the app as a single runnable jar. Take a look and drop your comments on the blog page. http://prabstechblog.blogspot.in/2014/04/a-standalone-spark-application-in-scala.html