Re: OFFICIAL USA REPORT TODAY India Most Dangerous : USA Religious Freedom Report out TODAY

2020-04-29 Thread akshay naidu
Today, Indian nationals everywhere are mourning the demise of Irfan Khan, a true Indian Muslim. And this idiot Zahid Amin, or whoever created this bot (not sure if it's a bot or something), is spreading rumors about India. The rights given to Muslims in India are far more open than in any other Muslim-majority

Re: java vs scala for Apache Spark - is there a performance difference ?

2018-10-30 Thread akshay naidu
How about Python? Java vs Scala vs Python vs R: which is better? On Sat, Oct 27, 2018 at 3:34 AM karan alang wrote: > Hello, > - is there a "performance" difference when using Java or Scala for Apache > Spark? > > I understand there are other obvious differences (less code with Scala, > easier
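
As a rough rule of thumb (my framing, not from the thread): DataFrame operations are planned by Catalyst and perform about the same from Java, Scala, or Python, while Python UDFs and RDD lambdas pay a serialization round-trip to Python workers. A minimal PySpark sketch contrasting the two styles, with illustrative data:

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.functions import udf
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.appName("lang-perf-sketch").getOrCreate()
    df = spark.createDataFrame([("alice",), ("bob",)], ["name"])

    # Built-in function: optimized by Catalyst, same plan from any language.
    fast = df.select(F.upper(F.col("name")).alias("upper_name"))

    # Python UDF: each row is shipped to a Python worker and back.
    py_upper = udf(lambda s: s.upper(), StringType())
    slow = df.select(py_upper(F.col("name")).alias("upper_name"))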

Re: [Spark Optimization] Why is one node getting all the pressure?

2018-06-11 Thread akshay naidu
Try --num-executors 3 --executor-cores 4 --executor-memory 2G --conf spark.scheduler.mode=FAIR. On Mon, Jun 11, 2018 at 2:43 PM, Aakash Basu wrote: > Hi, > > I have submitted a job on a *4 node cluster*, where I see most of the > operations happening at one of the worker nodes while the other two are
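
For reference, a minimal sketch of a full spark-submit invocation using those flags; the class name and jar path are placeholders, not from the thread:

    # Hypothetical submission; class name and jar path are placeholders.
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 3 \
      --executor-cores 4 \
      --executor-memory 2G \
      --conf spark.scheduler.mode=FAIR \
      --class com.example.MyJob \
      /path/to/my-job.jar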

Re: Spark EMR executor-core vs Vcores

2018-02-26 Thread akshay naidu
Assigning all the cores alone won't serve the purpose; you'll also have to set the number of executors and the executor memory accordingly. On Tue, 27 Feb 2018, 12:15 AM Vadim Semenov wrote: > All used cores aren't getting reported correctly in EMR, and YARN itself > has no control over
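
One likely piece of background here (my assumption, not stated in the thread): YARN's CapacityScheduler defaults to DefaultResourceCalculator, which schedules containers by memory alone and ignores vCores, so EMR's reported core usage can look wrong. Switching the calculator in capacity-scheduler.xml makes YARN account for CPU as well:

    <!-- capacity-scheduler.xml: schedule by CPU as well as memory -->
    <property>
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>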

Re: sqoop import job not working when spark thrift server is running.

2018-02-24 Thread akshay naidu
> On 24. Feb 2018, at 13:47, akshay naidu <akshaynaid...@gmail.com> wrote: >> it sure is not able to get sufficient resources from YARN to start the >> containers. > that's right. It worked when I reduced executors from Thrift, but it also > reduced Thrift's performa

Re: sqoop import job not working when spark thrift server is running.

2018-02-24 Thread akshay naidu
> > it sure is not able to get sufficient resources from YARN to start the > containers. > That's right. It worked when I reduced executors from Thrift, but it also reduced Thrift's performance, and that is not the solution I am looking for. My sqoop import job runs just once a day, and thrift
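
One way out of that tradeoff, as a sketch rather than something proposed in the thread: run the Thrift Server with Spark dynamic allocation so it releases idle executors back to YARN, leaving room for the daily sqoop job. This assumes the external shuffle service is enabled on the NodeManagers; the executor counts are illustrative:

    # Sketch: a Thrift Server that shrinks to 1 executor when idle.
    sbin/start-thriftserver.sh \
      --master yarn \
      --conf spark.dynamicAllocation.enabled=true \
      --conf spark.shuffle.service.enabled=true \
      --conf spark.dynamicAllocation.minExecutors=1 \
      --conf spark.dynamicAllocation.maxExecutors=6 \
      --conf spark.dynamicAllocation.executorIdleTimeout=60s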

Re: sqoop import job not working when spark thrift server is running.

2018-02-20 Thread akshay naidu
Hello Vijay, appreciate your reply. > What was the error when you were trying to run the mapreduce import job when > the thrift server is running? It didn't throw any error; it just gets stuck at INFO mapreduce.Job: Running job: job_151911053 and resumes the moment I kill Thrift. Thanks. On Tue,

sqoop import job not working when spark thrift server is running.

2018-02-19 Thread akshay naidu
Hello, I was trying to optimize my Spark cluster, and I did so to some extent by making some changes in the yarn-site.xml and spark-defaults.conf files. Before the changes, the mapreduce import job was running fine alongside a slow thrift server; after the changes, I have to kill the thrift server to execute my

Re: Run Multiple Spark jobs. Reduce Execution time.

2018-02-14 Thread akshay naidu
A small hint would be very helpful. On Wed, Feb 14, 2018 at 5:17 PM, akshay naidu <akshaynaid...@gmail.com> wrote: > Hello Siva, > Thanks for your reply. > > Actually I'm trying to generate online reports for my clients. For this I > want the jobs to be executed faste

Re: Run Multiple Spark jobs. Reduce Execution time.

2018-02-14 Thread akshay naidu
g triggered. > > I would recommend configuring the slow-running job in a separate pool. > > Regards > Shiv > > On Feb 14, 2018, at 5:44 AM, akshay naidu <akshaynaid...@gmail.com> wrote:
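
For illustration, a minimal PySpark sketch of Shiv's suggestion; the pool name "slow_pool" and the input path are hypothetical, and the pool must be defined in fairscheduler.xml:

    # Assign the slow job to its own fair-scheduler pool (set per thread).
    spark.sparkContext.setLocalProperty("spark.scheduler.pool", "slow_pool")
    spark.read.parquet("/data/large_table").count()  # placeholder job

    # Send subsequent jobs from this thread back to the default pool.
    spark.sparkContext.setLocalProperty("spark.scheduler.pool", "default")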

Re: Run Multiple Spark jobs. Reduce Execution time.

2018-02-14 Thread akshay naidu
On Tue, Feb 13, 2018 at 4:43 PM, akshay naidu <akshaynaid...@gmail.com> wrote: > Hello, > I'm trying to run multiple Spark jobs on a cluster running on YARN. > The master is a 24GB server with 6 slaves of 12GB each. > > The fairscheduler.xml settings are: > > FAIR > 10 >

Run Multiple Spark jobs. Reduce Execution time.

2018-02-13 Thread akshay naidu
Hello, I'm trying to run multiple Spark jobs on a cluster running on YARN. The master is a 24GB server with 6 slaves of 12GB each. The fairscheduler.xml settings are: FAIR 10 2. I am running 8 jobs simultaneously; the jobs run in parallel, but not all of them. At a time, only 7 of them run simultaneously
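
The archive has stripped the XML tags from that snippet; a plausible reconstruction, assuming FAIR, 10, and 2 map to a single pool's schedulingMode, weight, and minShare, would be:

    <?xml version="1.0"?>
    <!-- Hypothetical fairscheduler.xml; pool name and tag mapping assumed. -->
    <allocations>
      <pool name="default">
        <schedulingMode>FAIR</schedulingMode>
        <weight>10</weight>
        <minShare>2</minShare>
      </pool>
    </allocations>

Spark picks this file up via spark.scheduler.allocation.file (or conf/fairscheduler.xml). As for only 7 of 8 jobs running at once, one common culprit (a guess, not confirmed in the thread) is YARN's yarn.scheduler.capacity.maximum-am-resource-percent cap on ApplicationMaster resources rather than the pool settings themselves.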

Re: Is Apache Spark-2.2.1 compatible with Hadoop-3.0.0

2018-01-08 Thread akshay naidu
page when you select Spark 2.2.1, it gives you an >> option to select the package type. In that, there is an option to select >> "Pre-Built for Apache Hadoop 2.7 and later". I am assuming it means that it >> does support Hadoop 3.0. >> >> http://spark.apache.org/do

Is Apache Spark-2.2.1 compatible with Hadoop-3.0.0

2018-01-06 Thread akshay naidu
Hello users, I need to know whether we can run the latest Spark on the latest Hadoop version, i.e., spark-2.2.1 (released on 1st Dec) and hadoop-3.0.0 (released on 13th Dec). Thanks.
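
One way to test the combination, untested and not from the thread: Spark's documented build flag for targeting a specific Hadoop version. There is no guarantee Spark 2.2.1 compiles against Hadoop 3.0.0, since it predates official Hadoop 3 support:

    # Hypothetical build against Hadoop 3.0.0; may fail to compile.
    ./build/mvn -Pyarn -Dhadoop.version=3.0.0 -DskipTests clean package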