[jira] [Created] (SPARK-21004) remove the never used declaration in function warnDeprecatedVersions

2017-06-07 Thread yangZhiguo (JIRA)
yangZhiguo created SPARK-21004:
--

 Summary: remove the never used declaration in function 
warnDeprecatedVersions
 Key: SPARK-21004
 URL: https://issues.apache.org/jira/browse/SPARK-21004
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 2.3.0
Reporter: yangZhiguo
Priority: Minor


In the function {warnDeprecatedVersions} of class SparkContext, the declaration
val javaVersion = System.getProperty("java.version").split("[+.\\-]+", 3)
is never used anywhere in the function, so it can be removed.
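
For context, a small sketch (not the actual Spark source) of what the dead expression computed: the JVM version string is split on '+', '.', or '-' into at most three pieces, and the result was then never read.

```scala
object SplitDemo {
  def main(args: Array[String]): Unit = {
    // The unused declaration split a version string such as "1.8.0_131"
    // on the characters '+', '.', or '-', keeping at most 3 pieces.
    val parts = "1.8.0_131".split("[+.\\-]+", 3)
    println(parts.mkString(","))   // 1,8,0_131
  }
}
```

Since the resulting array is never referenced, deleting the whole statement has no behavioral effect.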



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-21225) decrease the Mem using for variable 'tasks' in function resourceOffers

2017-06-27 Thread yangZhiguo (JIRA)
yangZhiguo created SPARK-21225:
--

 Summary: decrease the Mem using for variable 'tasks' in function 
resourceOffers
 Key: SPARK-21225
 URL: https://issues.apache.org/jira/browse/SPARK-21225
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 2.1.1, 2.1.0
Reporter: yangZhiguo
Priority: Minor


The function 'resourceOffers' declares a variable 'tasks' to store the tasks 
that have been allocated an executor. It is declared like this:
*{color:#d04437}val tasks = shuffledOffers.map(o => new 
ArrayBuffer[TaskDescription](o.cores)){color}*

But this code only considers the case of one task per core. If the user 
configures "spark.task.cpus" as 2 or 3, that much space is not needed. It can 
be modified as follows:

val tasks = shuffledOffers.map(o => new 
ArrayBuffer[TaskDescription](Math.ceil(o.cores*1.0/CPUS_PER_TASK).toInt))
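
To illustrate the sizing argument, here is a minimal standalone sketch (CPUS_PER_TASK mirrors the "spark.task.cpus" setting; the helper name is hypothetical, not Spark's API): the initial buffer capacity shrinks from one slot per core to one slot per schedulable task.

```scala
object TaskBufferSizing {
  // Hypothetical helper: how many tasks can run on an offer with the
  // given number of cores when each task needs cpusPerTask cores.
  def capacity(cores: Int, cpusPerTask: Int): Int =
    math.ceil(cores * 1.0 / cpusPerTask).toInt

  def main(args: Array[String]): Unit = {
    // With 8 cores: the original code always pre-allocates 8 slots,
    // but with spark.task.cpus = 2 only 4 tasks can ever fit.
    println(capacity(8, 1))  // 8
    println(capacity(8, 2))  // 4
    println(capacity(8, 3))  // 3
  }
}
```

The saving is only in pre-allocated ArrayBuffer capacity (the hint passed to the constructor), not in the number of tasks eventually scheduled.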






[jira] [Updated] (SPARK-21225) decrease the Mem using for variable 'tasks' in function resourceOffers

2017-06-27 Thread yangZhiguo (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yangZhiguo updated SPARK-21225:
---
Description: 
The function 'resourceOffers' declares a variable 'tasks' to store the tasks 
that have been allocated an executor. It is declared like this:
*{color:#d04437}val tasks = shuffledOffers.map(o => new 
ArrayBuffer[TaskDescription](o.cores)){color}*

But this code only considers the case of one task per core. If the user 
configures "spark.task.cpus" as 2 or 3, that much space is not needed. It can 
be modified as follows:

{color:#14892c}*val tasks = shuffledOffers.map(o => new 
ArrayBuffer[TaskDescription](Math.ceil(o.cores*1.0/CPUS_PER_TASK).toInt))*{color}

  was:
The function 'resourceOffers' declares a variable 'tasks' to store the tasks 
that have been allocated an executor. It is declared like this:
*{color:#d04437}val tasks = shuffledOffers.map(o => new 
ArrayBuffer[TaskDescription](o.cores)){color}*

But this code only considers the case of one task per core. If the user 
configures "spark.task.cpus" as 2 or 3, that much space is not needed. It can 
be modified as follows:

val tasks = shuffledOffers.map(o => new 
ArrayBuffer[TaskDescription](Math.ceil(o.cores*1.0/CPUS_PER_TASK).toInt))


> decrease the Mem using for variable 'tasks' in function resourceOffers
> --
>
> Key: SPARK-21225
> URL: https://issues.apache.org/jira/browse/SPARK-21225
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.1.0, 2.1.1
>Reporter: yangZhiguo
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The function 'resourceOffers' declares a variable 'tasks' to store the tasks 
> that have been allocated an executor. It is declared like this:
> *{color:#d04437}val tasks = shuffledOffers.map(o => new 
> ArrayBuffer[TaskDescription](o.cores)){color}*
> But this code only considers the case of one task per core. If the user 
> configures "spark.task.cpus" as 2 or 3, that much space is not needed. It 
> can be modified as follows:
> {color:#14892c}*val tasks = shuffledOffers.map(o => new 
> ArrayBuffer[TaskDescription](Math.ceil(o.cores*1.0/CPUS_PER_TASK).toInt))*{color}


