[GitHub] spark pull request #13915: [SPARK-16081][BUILD] Disallow using `l` as variab...

2016-06-28 Thread dongjoon-hyun
Github user dongjoon-hyun closed the pull request at:

https://github.com/apache/spark/pull/13915


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #13915: [SPARK-16081][BUILD] Disallow using `l` as variab...

2016-06-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/13915#discussion_r68544191
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/ColumnExpressionSuite.scala ---
@@ -528,7 +528,7 @@ class ColumnExpressionSuite extends QueryTest with SharedSQLContext {
 
   test("upper") {
    checkAnswer(
-      lowerCaseData.select(upper('l)),
+      lowerCaseData.select(upper('s)),
--- End diff --

Right, I see that now, thanks





[GitHub] spark pull request #13915: [SPARK-16081][BUILD] Disallow using `l` as variab...

2016-06-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/13915#discussion_r68544172
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala ---
@@ -60,29 +60,31 @@ private[sql] case class InMemoryTableScanExec(
   if buildFilter.isDefinedAt(lhs) && buildFilter.isDefinedAt(rhs) =>
   buildFilter(lhs) || buildFilter(rhs)
 
-    case EqualTo(a: AttributeReference, l: Literal) =>
-      statsFor(a).lowerBound <= l && l <= statsFor(a).upperBound
-    case EqualTo(l: Literal, a: AttributeReference) =>
-      statsFor(a).lowerBound <= l && l <= statsFor(a).upperBound
+    case EqualTo(a: AttributeReference, value: Literal) =>
+      statsFor(a).lowerBound <= value && value <= statsFor(a).upperBound
+    case EqualTo(value: Literal, a: AttributeReference) =>
+      statsFor(a).lowerBound <= value && value <= statsFor(a).upperBound

-    case LessThan(a: AttributeReference, l: Literal) => statsFor(a).lowerBound < l
-    case LessThan(l: Literal, a: AttributeReference) => l < statsFor(a).upperBound
+    case LessThan(a: AttributeReference, value: Literal) => statsFor(a).lowerBound < value
+    case LessThan(value: Literal, a: AttributeReference) => value < statsFor(a).upperBound

-    case LessThanOrEqual(a: AttributeReference, l: Literal) => statsFor(a).lowerBound <= l
-    case LessThanOrEqual(l: Literal, a: AttributeReference) => l <= statsFor(a).upperBound
+    case LessThanOrEqual(a: AttributeReference, value: Literal) => statsFor(a).lowerBound <= value
+    case LessThanOrEqual(value: Literal, a: AttributeReference) => value <= statsFor(a).upperBound

-    case GreaterThan(a: AttributeReference, l: Literal) => l < statsFor(a).upperBound
-    case GreaterThan(l: Literal, a: AttributeReference) => statsFor(a).lowerBound < l
+    case GreaterThan(a: AttributeReference, value: Literal) => value < statsFor(a).upperBound
+    case GreaterThan(value: Literal, a: AttributeReference) => statsFor(a).lowerBound < value

-    case GreaterThanOrEqual(a: AttributeReference, l: Literal) => l <= statsFor(a).upperBound
-    case GreaterThanOrEqual(l: Literal, a: AttributeReference) => statsFor(a).lowerBound <= l
+    case GreaterThanOrEqual(a: AttributeReference, value: Literal) =>
+      value <= statsFor(a).upperBound
+    case GreaterThanOrEqual(value: Literal, a: AttributeReference) =>
+      statsFor(a).lowerBound <= value

     case IsNull(a: Attribute) => statsFor(a).nullCount > 0
     case IsNotNull(a: Attribute) => statsFor(a).count - statsFor(a).nullCount > 0

     case In(a: AttributeReference, list: Seq[Expression]) if list.forall(_.isInstanceOf[Literal]) =>
-      list.map(l => statsFor(a).lowerBound <= l.asInstanceOf[Literal] &&
-        l.asInstanceOf[Literal] <= statsFor(a).upperBound).reduce(_ || _)
+      list.map(value => statsFor(a).lowerBound <= value.asInstanceOf[Literal] &&
--- End diff --

Oh OK, nevermind that





[GitHub] spark pull request #13915: [SPARK-16081][BUILD] Disallow using `l` as variab...

2016-06-27 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/13915#discussion_r68543958
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/ColumnExpressionSuite.scala ---
@@ -528,7 +528,7 @@ class ColumnExpressionSuite extends QueryTest with SharedSQLContext {
 
   test("upper") {
    checkAnswer(
-      lowerCaseData.select(upper('l)),
+      lowerCaseData.select(upper('s)),
--- End diff --

Yes, it's safe. It's due to the following change. 
```
- case class LowerCaseData(n: Int, l: String)
+ case class LowerCaseData(n: Int, s: String)
```
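A minimal standalone sketch (hypothetical toy code, not the Spark test itself) of why the test symbol had to change along with the field rename: a column reference like `'s` resolves by name against the schema derived from the case class, so once the field `l` is renamed to `s`, `upper('l)` would no longer resolve:

```scala
// Toy model of name-based column resolution (hypothetical; Spark's real
// resolution goes through the analyzer, not a match like this).
case class LowerCaseData(n: Int, s: String)

// Resolve a "column" of a row by field name, the way 'upper('s) resolves
// against the schema derived from LowerCaseData.
def select(row: LowerCaseData, column: String): Any = column match {
  case "n" => row.n
  case "s" => row.s
  case other => throw new IllegalArgumentException(s"Cannot resolve column: $other")
}

val row = LowerCaseData(1, "a")
println(select(row, "s").toString.toUpperCase)  // A
// select(row, "l") now fails: the field was renamed from l to s.
```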





[GitHub] spark pull request #13915: [SPARK-16081][BUILD] Disallow using `l` as variab...

2016-06-27 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/13915#discussion_r68543707
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala ---
@@ -60,29 +60,31 @@ private[sql] case class InMemoryTableScanExec(
   if buildFilter.isDefinedAt(lhs) && buildFilter.isDefinedAt(rhs) =>
   buildFilter(lhs) || buildFilter(rhs)
 
-    case EqualTo(a: AttributeReference, l: Literal) =>
-      statsFor(a).lowerBound <= l && l <= statsFor(a).upperBound
-    case EqualTo(l: Literal, a: AttributeReference) =>
-      statsFor(a).lowerBound <= l && l <= statsFor(a).upperBound
+    case EqualTo(a: AttributeReference, value: Literal) =>
+      statsFor(a).lowerBound <= value && value <= statsFor(a).upperBound
+    case EqualTo(value: Literal, a: AttributeReference) =>
+      statsFor(a).lowerBound <= value && value <= statsFor(a).upperBound

-    case LessThan(a: AttributeReference, l: Literal) => statsFor(a).lowerBound < l
-    case LessThan(l: Literal, a: AttributeReference) => l < statsFor(a).upperBound
+    case LessThan(a: AttributeReference, value: Literal) => statsFor(a).lowerBound < value
+    case LessThan(value: Literal, a: AttributeReference) => value < statsFor(a).upperBound

-    case LessThanOrEqual(a: AttributeReference, l: Literal) => statsFor(a).lowerBound <= l
-    case LessThanOrEqual(l: Literal, a: AttributeReference) => l <= statsFor(a).upperBound
+    case LessThanOrEqual(a: AttributeReference, value: Literal) => statsFor(a).lowerBound <= value
+    case LessThanOrEqual(value: Literal, a: AttributeReference) => value <= statsFor(a).upperBound

-    case GreaterThan(a: AttributeReference, l: Literal) => l < statsFor(a).upperBound
-    case GreaterThan(l: Literal, a: AttributeReference) => statsFor(a).lowerBound < l
+    case GreaterThan(a: AttributeReference, value: Literal) => value < statsFor(a).upperBound
+    case GreaterThan(value: Literal, a: AttributeReference) => statsFor(a).lowerBound < value

-    case GreaterThanOrEqual(a: AttributeReference, l: Literal) => l <= statsFor(a).upperBound
-    case GreaterThanOrEqual(l: Literal, a: AttributeReference) => statsFor(a).lowerBound <= l
+    case GreaterThanOrEqual(a: AttributeReference, value: Literal) =>
+      value <= statsFor(a).upperBound
+    case GreaterThanOrEqual(value: Literal, a: AttributeReference) =>
+      statsFor(a).lowerBound <= value

     case IsNull(a: Attribute) => statsFor(a).nullCount > 0
     case IsNotNull(a: Attribute) => statsFor(a).count - statsFor(a).nullCount > 0

     case In(a: AttributeReference, list: Seq[Expression]) if list.forall(_.isInstanceOf[Literal]) =>
-      list.map(l => statsFor(a).lowerBound <= l.asInstanceOf[Literal] &&
-        l.asInstanceOf[Literal] <= statsFor(a).upperBound).reduce(_ || _)
+      list.map(value => statsFor(a).lowerBound <= value.asInstanceOf[Literal] &&
--- End diff --

For this one, it is part of `buildFilter`, a `PartialFunction[Expression, Expression]`. So `||` here combines predicate `Expression`s rather than evaluating to a `Boolean`.
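A toy sketch of that point (simplified, hypothetical classes, not Spark's actual Catalyst types): the operands are expression trees, so `reduce(_ || _)` composes an `Or` expression instead of computing a boolean, which is why `list.exists(...)` would not be a drop-in replacement here:

```scala
// Hypothetical miniature of Catalyst predicate expressions.
sealed trait Expr {
  // `||` builds an Or node over two expressions -- it does not evaluate anything.
  def ||(other: Expr): Expr = Or(this, other)
}
case class Pred(name: String) extends Expr
case class Or(left: Expr, right: Expr) extends Expr

// reduce(_ || _) folds the per-literal predicates into a single Or tree,
// mirroring what buildFilter returns for the In(...) case.
val predicates = List(Pred("p1"), Pred("p2"), Pred("p3"))
val combined = predicates.reduce(_ || _)
println(combined)  // Or(Or(Pred(p1),Pred(p2)),Pred(p3))
```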





[GitHub] spark pull request #13915: [SPARK-16081][BUILD] Disallow using `l` as variab...

2016-06-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/13915#discussion_r68535713
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala ---
@@ -677,11 +677,11 @@ class DatasetSuite extends QueryTest with SharedSQLContext {
 
   test("runtime null check for RowEncoder") {
    val schema = new StructType().add("i", IntegerType, nullable = false)
-    val df = spark.range(10).map(l => {
-      if (l % 5 == 0) {
+    val df = spark.range(10).map(v => {
--- End diff --

PS I think the braces are redundant at the end of this line
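A minimal sketch of that point (plain Scala, no Spark): the lambda body can be a bare if-expression, so the surrounding braces add nothing:

```scala
// Equivalent to `.map(v => { if (v % 5 == 0) ... else ... })` without the
// redundant braces around the single if-expression body.
val f = (v: Long) => if (v % 5 == 0) -1L else v

println((0L until 10L).map(f))
```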





[GitHub] spark pull request #13915: [SPARK-16081][BUILD] Disallow using `l` as variab...

2016-06-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/13915#discussion_r68535638
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/ColumnExpressionSuite.scala ---
@@ -528,7 +528,7 @@ class ColumnExpressionSuite extends QueryTest with SharedSQLContext {
 
   test("upper") {
    checkAnswer(
-      lowerCaseData.select(upper('l)),
+      lowerCaseData.select(upper('s)),
--- End diff --

Is this change safe? I actually don't know what this change does





[GitHub] spark pull request #13915: [SPARK-16081][BUILD] Disallow using `l` as variab...

2016-06-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/13915#discussion_r68535509
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala ---
@@ -60,29 +60,31 @@ private[sql] case class InMemoryTableScanExec(
   if buildFilter.isDefinedAt(lhs) && buildFilter.isDefinedAt(rhs) =>
   buildFilter(lhs) || buildFilter(rhs)
 
-    case EqualTo(a: AttributeReference, l: Literal) =>
-      statsFor(a).lowerBound <= l && l <= statsFor(a).upperBound
-    case EqualTo(l: Literal, a: AttributeReference) =>
-      statsFor(a).lowerBound <= l && l <= statsFor(a).upperBound
+    case EqualTo(a: AttributeReference, value: Literal) =>
+      statsFor(a).lowerBound <= value && value <= statsFor(a).upperBound
+    case EqualTo(value: Literal, a: AttributeReference) =>
+      statsFor(a).lowerBound <= value && value <= statsFor(a).upperBound

-    case LessThan(a: AttributeReference, l: Literal) => statsFor(a).lowerBound < l
-    case LessThan(l: Literal, a: AttributeReference) => l < statsFor(a).upperBound
+    case LessThan(a: AttributeReference, value: Literal) => statsFor(a).lowerBound < value
+    case LessThan(value: Literal, a: AttributeReference) => value < statsFor(a).upperBound

-    case LessThanOrEqual(a: AttributeReference, l: Literal) => statsFor(a).lowerBound <= l
-    case LessThanOrEqual(l: Literal, a: AttributeReference) => l <= statsFor(a).upperBound
+    case LessThanOrEqual(a: AttributeReference, value: Literal) => statsFor(a).lowerBound <= value
+    case LessThanOrEqual(value: Literal, a: AttributeReference) => value <= statsFor(a).upperBound

-    case GreaterThan(a: AttributeReference, l: Literal) => l < statsFor(a).upperBound
-    case GreaterThan(l: Literal, a: AttributeReference) => statsFor(a).lowerBound < l
+    case GreaterThan(a: AttributeReference, value: Literal) => value < statsFor(a).upperBound
+    case GreaterThan(value: Literal, a: AttributeReference) => statsFor(a).lowerBound < value

-    case GreaterThanOrEqual(a: AttributeReference, l: Literal) => l <= statsFor(a).upperBound
-    case GreaterThanOrEqual(l: Literal, a: AttributeReference) => statsFor(a).lowerBound <= l
+    case GreaterThanOrEqual(a: AttributeReference, value: Literal) =>
+      value <= statsFor(a).upperBound
+    case GreaterThanOrEqual(value: Literal, a: AttributeReference) =>
+      statsFor(a).lowerBound <= value

     case IsNull(a: Attribute) => statsFor(a).nullCount > 0
     case IsNotNull(a: Attribute) => statsFor(a).count - statsFor(a).nullCount > 0

     case In(a: AttributeReference, list: Seq[Expression]) if list.forall(_.isInstanceOf[Literal]) =>
-      list.map(l => statsFor(a).lowerBound <= l.asInstanceOf[Literal] &&
-        l.asInstanceOf[Literal] <= statsFor(a).upperBound).reduce(_ || _)
+      list.map(value => statsFor(a).lowerBound <= value.asInstanceOf[Literal] &&
--- End diff --

Maybe while we're here we can fix this redundant expression too: `list.map(...).reduce(_ || _)` is less clear than simply `list.exists(...)`.





[GitHub] spark pull request #13915: [SPARK-16081][BUILD] Disallow using `l` as variab...

2016-06-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/13915#discussion_r68535004
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/AnalysisException.scala ---
@@ -43,7 +43,7 @@ class AnalysisException protected[sql] (
   }
 
   override def getMessage: String = {
-    val lineAnnotation = line.map(l => s" line $l").getOrElse("")
+    val lineAnnotation = line.map(li => s" line $li").getOrElse("")
--- End diff --

Maybe just `.map(" line " + _)...` to keep it even simpler
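A quick standalone sketch of the two equivalent forms (assuming only that `line` is an `Option[Int]`, as in `AnalysisException`); the point-free version avoids naming the bound value at all:

```scala
// Stand-in for the `line: Option[Int]` field of AnalysisException.
val line: Option[Int] = Some(43)

// Named-parameter form, as written in the diff.
val withParam = line.map(li => s" line $li").getOrElse("")
// Point-free form suggested in the review.
val pointFree = line.map(" line " + _).getOrElse("")

println(withParam == pointFree)  // true
println(pointFree)               // " line 43"
```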





[GitHub] spark pull request #13915: [SPARK-16081][BUILD] Disallow using `l` as variab...

2016-06-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/13915#discussion_r68534694
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/optim/WeightedLeastSquares.scala ---
@@ -200,7 +200,7 @@ private[ml] object WeightedLeastSquares {
     * Adds an instance.
     */
    def add(instance: Instance): this.type = {
-      val Instance(l, w, f) = instance
+      val Instance(label, w, f) = instance
--- End diff --

Feel free to just write out `weight` and `feature` too, while we're touching this, for consistency.





[GitHub] spark pull request #13915: [SPARK-16081][BUILD] Disallow using `l` as variab...

2016-06-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/13915#discussion_r68534330
  
--- Diff: examples/src/main/scala/org/apache/spark/examples/mllib/IsotonicRegressionExample.scala ---
@@ -55,7 +55,7 @@ object IsotonicRegressionExample {
     }
 
     // Calculate mean squared error between predicted and real labels.
-    val meanSquaredError = predictionAndLabel.map { case (p, l) => math.pow((p - l), 2) }.mean()
+    val meanSquaredError = predictionAndLabel.map { case (p, la) => math.pow((p - la), 2) }.mean()
--- End diff --

You could write these out a little more, like `pred` and `label`, in cases like this. If we're going to expand the names, we might as well go further.





[GitHub] spark pull request #13915: [SPARK-16081][BUILD] Disallow using `l` as variab...

2016-06-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/13915#discussion_r68534264
  
--- Diff: core/src/test/scala/org/apache/spark/util/ClosureCleanerSuite2.scala ---
@@ -415,9 +415,9 @@ class ClosureCleanerSuite2 extends SparkFunSuite with BeforeAndAfterAll with Pri
         (1 to x).map { y => y + localValue } // 2 levels
       }
     }
-    val closure3 = (k: Int, l: Int, m: Int) => {
+    val closure3 = (k: Int, v: Int, m: Int) => {
--- End diff --

`k` then `v` suggests "key, value" to me. Instead, use `k`, `m`, and `n`?





[GitHub] spark pull request #13915: [SPARK-16081][BUILD] Disallow using `l` as variab...

2016-06-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/13915#discussion_r68534165
  
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -900,7 +900,7 @@ private[spark] object JsonProtocol {
       .map(RDDOperationScope.fromJson)
     val callsite = Utils.jsonOption(json \ "Callsite").map(_.extract[String]).getOrElse("")
     val parentIds = Utils.jsonOption(json \ "Parent IDs")
-      .map { l => l.extract[List[JValue]].map(_.extract[Int]) }
+      .map { ids => ids.extract[List[JValue]].map(_.extract[Int]) }
--- End diff --

Could just be `.map(_.extract ...)`





[GitHub] spark pull request #13915: [SPARK-16081][BUILD] Disallow using `l` as variab...

2016-06-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/13915#discussion_r68534110
  
--- Diff: core/src/main/scala/org/apache/spark/api/python/PythonHadoopUtil.scala ---
@@ -122,7 +122,7 @@ private[python] class JavaToWritableConverter extends Converter[Any, Writable] {
     obj match {
       case i: java.lang.Integer => new IntWritable(i)
       case d: java.lang.Double => new DoubleWritable(d)
-      case l: java.lang.Long => new LongWritable(l)
+      case v: java.lang.Long => new LongWritable(v)
--- End diff --

Hm, tough one. I understand the logic behind avoiding `l` because it looks like `1`. Here, though, it feels OK to use `l`. `v` suggests `void`, another primitive type. Maybe `n`? Not sure. I'd argue we could leave instances like this alone, though then we'd have to remove the style rule.

