Hi Fawze,

Yes, it is true that I am running in YARN mode; the 5 containers represent
4 executors and 1 application master.
But I was not expecting these details, as I am already aware of them. What I
want to know is the relationship between vCores (EMR YARN) and
executor-cores (Spark).


From my slave configuration I understand that only 8 threads are available on
the slave machine, which means at most 8 threads can run at a time.

Thread(s) per core:    8
Core(s) per socket:    1
Socket(s):             1


So I don't think it is valid to give --executor-cores 10 in my spark-submit.
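
For reference, this is roughly how I am checking it on one of the slaves (the
yarn-site.xml path is just the usual EMR default, so it may differ on 5.0.0):

   # threads the OS actually exposes on the slave
   lscpu | grep -E 'CPU\(s\)|Thread|Core|Socket'
   nproc

   # vcores the NodeManager advertises to YARN; as far as I understand, EMR
   # sets this in yarn-site.xml and it does not have to match the physical
   # thread count
   grep -A1 'yarn.nodemanager.resource.cpu-vcores' /etc/hadoop/conf/yarn-site.xml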

On Mon, Feb 26, 2018 at 10:54 AM, Fawze Abujaber <fawz...@gmail.com> wrote:

> It's recommended to use executor-cores of 5.
>
> Each executor here will utilize 20 GB, which means the Spark job will
> utilize 50 CPU cores and 100 GB of memory.
>
> You cannot run more than 4 executors because your cluster doesn't have
> enough memory.
>
> You see 5 containers because 4 are executors for the job and one is the
> application master.
>
> See the used memory and the total memory.
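>
> Roughly something like this (just a sketch; your-app.jar and any --class or
> --conf flags are placeholders):
>
>   # 4 x (20g + ~10% overhead) ~= 88 GB, close to the 88.88 GB used you see
>   spark-submit --master yarn --deploy-mode cluster \
>     --num-executors 4 \
>     --executor-cores 5 \
>     --executor-memory 20g \
>     your-app.jar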
>
> On Mon, Feb 26, 2018 at 12:20 PM, Selvam Raman <sel...@gmail.com> wrote:
>
>> Hi,
>>
>> spark version - 2.0.0
>> spark distribution - EMR 5.0.0
>>
>> Spark Cluster - one master, 5 slaves
>>
>> Master node - m3.xlarge - 8 vCores, 15 GiB memory, 80 GB SSD storage
>> Slave node - m3.2xlarge - 16 vCores, 30 GiB memory, 160 GB SSD storage
>>
>>
>> Cluster Metrics
>> Apps Submitted: 16   Apps Pending: 0   Apps Running: 1   Apps Completed: 15
>> Containers Running: 5
>> Memory Used: 88.88 GB   Memory Total: 90.50 GB   Memory Reserved: 22 GB
>> VCores Used: 5   VCores Total: 79   VCores Reserved: 1
>> Active Nodes: 5   Decommissioning Nodes: 0   Decommissioned Nodes: 0
>> Lost Nodes: 5   Unhealthy Nodes: 0   Rebooted Nodes: 0
>> I have submitted the job with the below configuration:
>> --num-executors 5 --executor-cores 10 --executor-memory 20g
>>
>>
>>
>> spark.task.cpus - by default 1
>>
>>
>> My understanding is that there will be 5 executors, each able to run 10
>> tasks at a time, with the tasks sharing the executor's total memory of 20g.
>> However, I can see only 5 vCores used, which means 1 executor instance uses
>> 20g + 10% overhead RAM (22 GB), 10 cores (number of threads), but only
>> 1 vCore (CPU).
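>>
>> Writing out my arithmetic (please check it):
>>
>>   concurrent tasks  = num-executors x executor-cores / spark.task.cpus
>>                     = 5 x 10 / 1 = 50
>>   container request = 20g executor memory + ~10% overhead ~= 22 GB each
>>   YARN, however, shows only 5 vCores used in total, i.e. 1 vCore per
>>   container rather than 10.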
>>
>> Please correct me if my understanding is wrong.
>>
>> How can I utilize the EMR vCores effectively? Will more vCores boost
>> performance?
>>
>>
>> --
>> Selvam Raman
>> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
>>
>
>


-- 
Selvam Raman
"லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
