Re: Building Spark 2.X in Intellij

2016-06-22 Thread Praveen R
I had some errors, like the SqlBaseParser class missing, and figured out I needed to generate these classes from SqlBase.g4 using ANTLR 4. It works fine now. On Thu, Jun 23, 2016 at 9:20 AM, Jeff Zhang wrote: > It works well for me. You can try reimporting it into IntelliJ. > > On Thu, Jun 23, 2016 at 10:25
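A sketch of one way to regenerate those classes from the command line, assuming a Maven build of the Spark source tree (the module path and plugin wiring can differ between Spark versions, so treat the exact goal as an assumption):

```shell
# Regenerate the ANTLR-derived sources (SqlBaseParser, SqlBaseLexer, ...)
# from SqlBase.g4 in the catalyst module of a Spark checkout.
./build/mvn -DskipTests -pl sql/catalyst generate-sources

# Then, in IntelliJ, re-import the Maven project so the generated
# sources under target/generated-sources are added to the classpath.
```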

Broadcast variable in Spark Java application

2014-07-07 Thread Praveen R
I need a variable to be broadcast from the driver to executor processes in my Spark Java application. I tried using Spark's broadcast mechanism to achieve this, but had no luck. Could someone help me with this, perhaps by sharing some code? Thanks, Praveen R
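A minimal sketch of a broadcast variable in the Java API, assuming Java 8 lambdas and a local master for illustration (with older Spark versions the lambda would be an anonymous `Function` class):

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.broadcast.Broadcast;

public class BroadcastExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("BroadcastExample")
                .setMaster("local[2]"); // local master only for this sketch
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Create the broadcast variable on the driver; executors read it
        // read-only via value(), and Spark ships it to each node only once.
        List<Integer> lookup = Arrays.asList(1, 2, 3);
        Broadcast<List<Integer>> broadcastVar = sc.broadcast(lookup);

        JavaRDD<Integer> data = sc.parallelize(Arrays.asList(10, 20, 30));
        // Reference the broadcast handle inside the closure, not the raw list,
        // so the list is not re-serialized with every task.
        List<Integer> sums = data.map(x -> x + broadcastVar.value().get(0))
                                 .collect();

        System.out.println(sums); // [11, 21, 31]
        sc.stop();
    }
}
```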

Re: Shark resilience to unusable slaves

2014-05-23 Thread Praveen R
You might use bin/shark-withdebug to find the exact cause of the failure. That said, the easiest way to get the cluster running is to remove the dysfunctional machine from the Spark cluster (remove it from the slaves file). Hope that helps. On Thu, May 22, 2014 at 9:04 PM, Yana Kadiyska wrote: > Hi, I a
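A sketch of that workaround, assuming the standard standalone-cluster layout (the hostname is a placeholder, and the script paths can vary per install):

```shell
# conf/slaves lists one worker host per line; drop the bad one.
grep -v '^bad-worker-host$' conf/slaves > conf/slaves.tmp \
  && mv conf/slaves.tmp conf/slaves

# Restart the standalone cluster so the change takes effect.
./sbin/stop-all.sh
./sbin/start-all.sh
```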

Re: ERROR TaskSchedulerImpl: Lost an executor

2014-04-22 Thread Praveen R
I guess you need to limit the heap size. Add the line below to spark-env.sh and make sure to rsync it to all workers. SPARK_JAVA_OPTS+=" -Xms512m -Xmx512m " On Wed, Apr 23, 2014 at 4:55 AM, jaeholee wrote: > Ok. I tried setting the partition number to 128 and numbers greater than > 128, > and no
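A sketch of that change, with the heap values from the message; the install path in the rsync line is a placeholder:

```shell
# conf/spark-env.sh — cap the JVM heap at 512 MB (example values).
SPARK_JAVA_OPTS+=" -Xms512m -Xmx512m "

# Push the updated file to every worker listed in conf/slaves:
for h in $(cat conf/slaves); do
  rsync -av conf/spark-env.sh "$h":/path/to/spark/conf/
done
```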

Re: ERROR TaskSchedulerImpl: Lost an executor

2014-04-22 Thread Praveen R
Could you try setting the MASTER variable in spark-env.sh: export MASTER=spark://<master-host>:7077 For starting the standalone cluster, ./sbin/start-all.sh should work as long as you have passwordless access to all machines. Any errors here? On Tue, Apr 22, 2014 at 10:10 PM, jaeholee wrote: > No, I am not u
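A sketch of the one-time passwordless-SSH setup that start-all.sh relies on (user and hostname are placeholders):

```shell
# Generate a key on the master if one does not already exist:
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Install it on each worker so the start scripts can log in without a password:
ssh-copy-id user@worker-host

# Then, from the master, bring up the whole standalone cluster:
./sbin/start-all.sh
```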

Re: ERROR TaskSchedulerImpl: Lost an executor

2014-04-21 Thread Praveen R
Do you have the cluster deployed on AWS? Could you try checking whether port 7077 is accessible from the worker nodes? On Tue, Apr 22, 2014 at 2:56 AM, jaeholee wrote: > Hi, I am trying to set up my own standalone Spark, and I started the master > node and worker nodes. Then I ran ./bin/spark-shell, and I get t
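One way to run that check from a worker node (the master hostname is a placeholder; on AWS, a closed security group is a common cause of an unreachable port):

```shell
# Test whether the master's standalone port is reachable:
nc -zv master-host 7077

# Without netcat, telnet works too:
# telnet master-host 7077
```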

Re: Spark recovery from bad nodes

2014-04-21 Thread Praveen R
Please check my comment on the shark-users thread. On Tue, Apr 22, 2014 at 8:06 AM, rama0120 wrote: > Hi, > > I couldn't find any details regarding th

Re: Lost an executor error - Jobs fail

2014-04-14 Thread Praveen R
; have been able to execute just fine. Was this not the case? > > > On Mon, Apr 14, 2014 at 7:38 AM, Praveen R > wrote: > >> Configuration comes from spark-ec2 setup script, sets spark.local.dir to >> use /mnt/spark, /mnt2/spark. >> Setup actually worked for quite
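The spark.local.dir setting quoted above comes from the spark-ec2 setup; a sketch of how such a setting can look in spark-env.sh (the directory paths are from the message, the mechanism shown is an assumption and varies by Spark version):

```shell
# conf/spark-env.sh — scratch space for shuffle and spill files on each worker.
# Comma-separated directories spread the I/O across multiple disks.
export SPARK_JAVA_OPTS+=" -Dspark.local.dir=/mnt/spark,/mnt2/spark "
```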

Re: cannot exec. job: "TaskSchedulerImpl: Initial job has not accepted any resources"

2014-04-14 Thread Praveen R
Can you try adding this to your spark-env file and syncing it to all hosts: export MASTER="spark://hadoop-pg-5.cluster:7077" On Sat, Apr 12, 2014 at 6:50 PM, ge ko wrote: > Hi, > > I'm starting to use Spark and have installed Spark within CDH5 using > Cloudera Manager. > I set up one master (hadoop-pg-
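For reference, the spark-env.sh fragment being suggested, with the hostname taken from the thread (where the conf file lives depends on the CDH/Cloudera Manager layout):

```shell
# conf/spark-env.sh on every host — point shells and daemons
# at the standalone master, so jobs register with it and get resources.
export MASTER="spark://hadoop-pg-5.cluster:7077"
```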

Re: Lost an executor error - Jobs fail

2014-04-14 Thread Praveen R
? > > I think this is the reason of your error > > Wisely Chen > > > On Mon, Apr 14, 2014 at 9:29 PM, Praveen R > wrote: > >> Had below error while running shark queries on 30 node cluster and was >> not able to start shark server or run any jobs. >> &g

Lost an executor error - Jobs fail

2014-04-14 Thread Praveen R
er be marked dead in such a scenario, instead of making the cluster unusable, so the debugging can be done at leisure. Thanks, Praveen R

Re: Shark does not give any results with SELECT count(*) command

2014-03-26 Thread Praveen R
the result > which exist on bigdata003. while i run bin/shark on bigdata003, i can get > result. > > though it is the reason, i still can not understand why the result is on > bigdata003(master is bigdata001)? > > > > > 2014-03-25 18:41 GMT+08:00 Praveen R : > > Hi Qin

Re: Shark does not give any results with SELECT count(*) command

2014-03-25 Thread Praveen R
Hi Qingyang Li, Shark 0.9.0 uses a patched version of Hive 0.11, and using the configuration/metastore of Hive 0.12 could be incompatible. May I know the reason you are using hive-site.xml from the previous Hive version (to use an existing metastore?). You might just leave hive-site.xml blank otherwise. S
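A sketch of what a "blank" hive-site.xml looks like, under the assumption that with no properties set Hive falls back to its defaults (e.g. a local Derby metastore):

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Empty configuration: every Hive property takes its default value. -->
<configuration>
</configuration>
```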