Re: Re: Re: Re: Re: Re: Re: How big the spark stream window could be ?

2016-05-11 Thread Mich Talebzadeh
OK, you can see that process 10603, Worker, is running as the worker/slave registered with your cluster manager at spark://ES01:7077, exposing its web UI on port 8081 (webui-port 8081), which you can access through a browser. You also have process 12420 running as SparkSubmit; that is the JVM for the application you have submitted for this

Re: Re: Re: Re: Re: Re: Re: How big the spark stream window could be ?

2016-05-10 Thread 李明伟
[root@ES01 test]# jps
10409 Master
12578 CoarseGrainedExecutorBackend
24089 NameNode
17705 Jps
24184 DataNode
10603 Worker
12420 SparkSubmit
[root@ES01 test]# ps -awx | grep -i spark | grep java
10409 ?  Sl  1:52 java -cp

Re: Re: Re: Re: Re: Re: How big the spark stream window could be ?

2016-05-10 Thread Mich Talebzadeh
What does jps return?

jps
16738 ResourceManager
14786 Worker
17059 JobHistoryServer
12421 QuorumPeerMain
9061 RunJar
9286 RunJar
5190 SparkSubmit
16806 NodeManager
16264 DataNode
16138 NameNode
16430 SecondaryNameNode
22036 SparkSubmit
9557 Jps
13240 Kafka
2522 Master

and ps -awx | grep -i

Re: Re: Re: Re: Re: Re: How big the spark stream window could be ?

2016-05-10 Thread 李明伟
Hi Mich, From the ps command output I can find four processes: 10409 is the master and 10603 is the worker; 12420 is the driver program and 12578 should be the executor (worker). Am I right? So do you mean that 12420 is actually running both the driver and the worker role? [root@ES01 ~]# ps -awx | grep
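The PID-to-role mapping being worked out in this exchange can be sketched as a small helper. The role descriptions below are inferred from this thread and are an assumption, not official Spark terminology:

```shell
# Sketch: map a JVM process name from `jps` to the Spark role it plays
# in a standalone deployment (mapping inferred from this thread).
role_of() {
  case "$1" in
    Master) echo "standalone cluster manager" ;;
    Worker) echo "standalone worker daemon" ;;
    SparkSubmit) echo "driver JVM of a submitted application" ;;
    CoarseGrainedExecutorBackend) echo "executor launched by a Worker" ;;
    *) echo "not a Spark daemon" ;;
  esac
}

# Annotate sample jps output; on a live box, pipe real `jps` output in.
printf '%s\n' "10409 Master" "10603 Worker" "12420 SparkSubmit" \
              "12578 CoarseGrainedExecutorBackend" |
while read -r pid name; do
  echo "$pid $name -> $(role_of "$name")"
done
```

Under this reading, 12420 (SparkSubmit) is the driver JVM, while 12578 (CoarseGrainedExecutorBackend) is the separate executor process doing the actual work.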

Re: Re: Re: Re: Re: How big the spark stream window could be ?

2016-05-10 Thread Mich Talebzadeh
Hm, this is standalone mode. When you are running Spark in standalone mode, you only have one worker that lives within the driver JVM process that you start when you start spark-shell or spark-submit. However, since the driver-memory setting encapsulates the JVM, you will need to set the amount
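Because the driver JVM is already running by the time application code executes, the driver's heap is typically fixed at submit time rather than from inside the program. A minimal sketch of such a submission (the class name and jar are hypothetical, not from this thread):

```shell
# Sketch (assumed application name): fix the driver heap on the
# spark-submit command line, before the driver JVM starts.
spark-submit \
  --master spark://ES01:7077 \
  --driver-memory 2g \
  --class com.example.MyApp \
  myapp.jar
```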

Re: Re: Re: Re: How big the spark stream window could be ?

2016-05-10 Thread Mich Talebzadeh
Hi Mingwei, In your Spark conf settings, what are you providing for these parameters? *Are you capping them?* For example:

val conf = new SparkConf().
  setAppName("AppName").
  setMaster("local[2]").
  set("spark.executor.memory", "4G").
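One way to check whether a cap such as spark.executor.memory actually took effect is to look at the -Xmx flag in the running JVM's command line, which `ps` shows in full. A minimal sketch; the sample command line below is illustrative, not copied from the thread:

```shell
# Sketch: extract the -Xmx heap setting from a java command line,
# e.g. the output of `ps -p <pid> -o args=` for a PID found via jps.
xmx_of() {
  printf '%s\n' "$1" | grep -o -- '-Xmx[^ ]*'
}

# Illustrative command line standing in for real ps output:
xmx_of "java -cp /opt/spark/jars/* -Xmx4g org.apache.spark.deploy.SparkSubmit"
```

If no -Xmx appears, the JVM is running with its default heap, i.e. the cap was not applied.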