You will have to repartition after creating the DStream to utilize all cores. The direct stream gives Spark exactly the same number of partitions as the Kafka topic has, so with a single Kafka partition only one core does the processing.
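A minimal sketch of what that looks like (Spark 1.x Kafka direct API; the broker address, topic name, and the parallelism of 4 are placeholder assumptions for a 4-core box):

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object DirectStreamRepartition {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("DirectStreamRepartition")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Hypothetical broker/topic; substitute your own.
    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
    val topics = Set("events")

    // The direct stream creates one Spark partition per Kafka partition,
    // so a 1-partition topic yields RDDs with a single partition.
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    // Redistribute records across all available cores (4 on this machine).
    // Note: repartition incurs a shuffle, so only do it when the per-record
    // work is heavy enough to justify the extra network/serialization cost.
    val repartitioned = stream.repartition(4)

    repartitioned.foreachRDD { rdd =>
      println(s"partitions after repartition: ${rdd.partitions.size}")
      // ... your indexing logic here ...
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```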
Thanks
Best Regards

On Thu, Dec 3, 2015 at 9:42 AM, Charan Ganga Phani Adabala <char...@eiqnetworks.com> wrote:

> Hi,
>
> We have 1 Kafka topic. Using the direct stream approach in Spark, we are
> processing the data in that topic on a one-node R&D cluster to understand
> how Spark behaves.
>
> My machine configuration is 4 cores, 16 GB RAM, with 1 executor.
>
> My question is: how many cores are used for this job while it runs?
>
> The web console shows 4 cores in use.
>
> How are the cores used in the direct stream approach?
>
> Command to run the job:
>
> ./spark/bin/spark-submit --master spark://XX.XX.XX.XXX:7077 --class
> org.eiq.IndexingClient ~/spark/lib/IndexingClient.jar
>
> Thanks & Regards,
>
> Ganga Phani Charan Adabala | Software Engineer
> o: +91-40-23116680 | c: +91-9491418099
> EiQ Networks, Inc. <http://www.eiqnetworks.com/>