[GitHub] spark pull request: [SPARK-14338][SQL] Improve `SimplifyConditiona...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12122#issuecomment-204617933
  
@dongjoon-hyun Review complete. This matches Hive semantics, where `SELECT 
if(null, 1, 2)` returns 2. However, it also rules out a future simplification of 
`if(anything, TrueLiteral, FalseLiteral)` to `anything`: a null predicate must 
yield the false branch, whereas `anything` alone would yield null.

This requires human decision. @JoshRosen @sameeragarwal Please advise.
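The trade-off can be illustrated outside Spark. Below is a minimal sketch, in Python rather than Scala, of SQL's treatment of a NULL predicate (using `None` for NULL; the helper name `sql_if` is hypothetical):

```python
def sql_if(cond, when_true, when_false):
    # SQL's IF treats a NULL (here None) predicate as not-true,
    # so the false branch is taken.
    return when_true if cond is True else when_false

# Matches the Hive behavior cited above: IF(NULL, 1, 2) returns 2.
print(sql_if(None, 1, 2))  # 2

# Why if(x, TRUE, FALSE) cannot be simplified to x: for x = NULL the
# IF yields FALSE, but the "simplified" form would yield NULL.
x = None
print(sql_if(x, True, False))  # False
print(x)                       # None -- not the same value
```

This is why the simplification discussed here is only safe when the predicate is known to be non-nullable.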


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request: [SPARK-14338][SQL] Improve `SimplifyConditiona...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/12122#discussion_r58284189
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
 ---
@@ -33,6 +33,8 @@ object Literal {
 
   val FalseLiteral: Literal = Literal(false, BooleanType)
 
+  val NullLiteral: Literal = Literal(null, NullType)
--- End diff --

**Detected duplicate pattern.**

We should replace `Literal(null, NullType)` in L55 with `NullLiteral`.





[GitHub] spark pull request: [SPARK-14251][SQL] Add SQL command for printin...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12099#issuecomment-204589471
  
@andrewor14 Review complete. No major issues found.





[GitHub] spark pull request: [SPARK-14251][SQL] Add SQL command for printin...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/12099#discussion_r58275215
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/debug/package.scala ---
@@ -48,6 +48,25 @@ package object debug {
 // scalastyle:on println
   }
 
+  def codegenString(plan : SparkPlan): String = {
--- End diff --

**Detected minor style violation.**

Suggested improvement: remove whitespace before `:`
```
def codegenString(plan: SparkPlan): String = {
```





[GitHub] spark pull request: [SPARK-14251][SQL] Add SQL command for printin...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/12099#discussion_r58275150
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -237,15 +237,18 @@ case class ExplainCommand(
 logicalPlan: LogicalPlan,
 override val output: Seq[Attribute] =
   Seq(AttributeReference("plan", StringType, nullable = true)()),
-extended: Boolean = false)
+extended: Boolean = false,
+codegen: Boolean = false)
   extends RunnableCommand {
 
   // Run through the optimizer to generate the physical plan.
   override def run(sqlContext: SQLContext): Seq[Row] = try {
 // TODO in Hive, the "extended" ExplainCommand prints the AST as well, 
and detailed properties.
 val queryExecution = sqlContext.executePlan(logicalPlan)
-val outputString = if (extended) queryExecution.toString else 
queryExecution.simpleString
-
+var outputString = if (extended) queryExecution.toString else 
queryExecution.simpleString
+if (codegen) {
+  outputString = codegenString(queryExecution.executedPlan)
--- End diff --

**Detected unnecessary var.**

Suggested improvement:
```
val outputString = if (codegen) {
  codegenString(queryExecution.executedPlan)
} else if (extended) {
  queryExecution.toString
} else {
  queryExecution.simpleString
}
```





[GitHub] spark pull request: [SPARK-14316][SQL]StateStoreCoordinator should...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12100#issuecomment-204540086
  
@andrewor14 Review complete. LGTM.





[GitHub] spark pull request: [SPARK-14316][SQL]StateStoreCoordinator should...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12100#issuecomment-204539569
  
LGTM





[GitHub] spark pull request: Backport [SPARK-11327] [MESOS] Dispatcher does...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12101#issuecomment-204523898
  
LGTM





[GitHub] spark pull request: [SPARK-13992] Add support for off-heap caching

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/11805#issuecomment-204522195
  
@andrewor14 Review complete. In the future we should add tests to ensure 
buffers are released. @JoshRosen please file a JIRA for this issue.

LGTM.





[GitHub] spark pull request: [SPARK-529] [sql] Modify SQLConf to use new co...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/11570#issuecomment-204514705
  
@andrewor14 Review complete. No major issues found.





[GitHub] spark pull request: [SPARK-529] [sql] Modify SQLConf to use new co...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/11570#discussion_r58247459
  
--- Diff: project/SparkBuild.scala ---
@@ -743,8 +744,18 @@ object TestSettings {
 parallelExecution in Test := false,
 // Make sure the test temp directory exists.
 resourceGenerators in Test <+= resourceManaged in Test map { outDir: 
File =>
-  if (!new File(testTempDir).isDirectory()) {
-require(new File(testTempDir).mkdirs())
+  var dir = new File(testTempDir)
+  if (!dir.isDirectory()) {
+val stack = new Stack[File]()
+while (!dir.isDirectory()) {
+  stack.push(dir)
+  dir = dir.getParentFile()
+}
+
+while (stack.nonEmpty) {
+  val d = stack.pop()
+  require(d.mkdir() || d.isDirectory(), s"Failed to create 
directory $d")
+}
--- End diff --

It does not seem to be related to SPARK-529. Is this introduced to make 
tests pass? Why is this necessary for this patch?





[GitHub] spark pull request: [SPARK-529] [sql] Modify SQLConf to use new co...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/11570#discussion_r58247995
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -36,418 +38,305 @@ import org.apache.spark.util.Utils
 object SQLConf {
 
   private val sqlConfEntries = java.util.Collections.synchronizedMap(
-new java.util.HashMap[String, SQLConfEntry[_]]())
+new java.util.HashMap[String, ConfigEntry[_]]())
 
-  /**
-   * An entry contains all meta information for a configuration.
-   *
-   * @param key the key for the configuration
-   * @param defaultValue the default value for the configuration
-   * @param valueConverter how to convert a string to the value. It should 
throw an exception if the
-   *   string does not have the required format.
-   * @param stringConverter how to convert a value to a string that the 
user can use it as a valid
-   *string value. It's usually `toString`. But 
sometimes, a custom converter
-   *is necessary. E.g., if T is List[String], `a, 
b, c` is better than
-   *`List(a, b, c)`.
-   * @param doc the document for the configuration
-   * @param isPublic if this configuration is public to the user. If it's 
`false`, this
-   * configuration is only used internally and we should 
not expose it to the user.
-   * @tparam T the value type
-   */
-  class SQLConfEntry[T] private(
-  val key: String,
-  val defaultValue: Option[T],
-  val valueConverter: String => T,
-  val stringConverter: T => String,
-  val doc: String,
-  val isPublic: Boolean) {
-
-def defaultValueString: String = 
defaultValue.map(stringConverter).getOrElse("")
-
-override def toString: String = {
-  s"SQLConfEntry(key = $key, defaultValue=$defaultValueString, 
doc=$doc, isPublic = $isPublic)"
-}
+  private def register(entry: ConfigEntry[_]): Unit = 
sqlConfEntries.synchronized {
--- End diff --

Is there a reason why this duplicate detection is done in SQL, but not in 
core? Does this logic not apply to other modules?





[GitHub] spark pull request: [SPARK-529] [sql] Modify SQLConf to use new co...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/11570#discussion_r58247569
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -36,418 +38,305 @@ import org.apache.spark.util.Utils
 object SQLConf {
 
   private val sqlConfEntries = java.util.Collections.synchronizedMap(
-new java.util.HashMap[String, SQLConfEntry[_]]())
+new java.util.HashMap[String, ConfigEntry[_]]())
 
-  /**
-   * An entry contains all meta information for a configuration.
-   *
-   * @param key the key for the configuration
-   * @param defaultValue the default value for the configuration
-   * @param valueConverter how to convert a string to the value. It should 
throw an exception if the
-   *   string does not have the required format.
-   * @param stringConverter how to convert a value to a string that the 
user can use it as a valid
-   *string value. It's usually `toString`. But 
sometimes, a custom converter
-   *is necessary. E.g., if T is List[String], `a, 
b, c` is better than
-   *`List(a, b, c)`.
-   * @param doc the document for the configuration
-   * @param isPublic if this configuration is public to the user. If it's 
`false`, this
-   * configuration is only used internally and we should 
not expose it to the user.
-   * @tparam T the value type
-   */
-  class SQLConfEntry[T] private(
-  val key: String,
-  val defaultValue: Option[T],
-  val valueConverter: String => T,
-  val stringConverter: T => String,
-  val doc: String,
-  val isPublic: Boolean) {
-
-def defaultValueString: String = 
defaultValue.map(stringConverter).getOrElse("")
-
-override def toString: String = {
-  s"SQLConfEntry(key = $key, defaultValue=$defaultValueString, 
doc=$doc, isPublic = $isPublic)"
-}
+  private def register(entry: ConfigEntry[_]): Unit = 
sqlConfEntries.synchronized {
+require(!sqlConfEntries.containsKey(entry.key),
+  s"Duplicate SQLConfigEntry. ${entry.key} has been registered")
--- End diff --

Would you mind adding a test case for this?
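For illustration, here is a minimal sketch of the kind of test being requested: register the same key twice and assert that the second registration fails. It is written in Python with a toy stand-in class (`ConfigRegistry` is hypothetical), not the actual Scala `SQLConf` API:

```python
class ConfigRegistry:
    """Toy stand-in for SQLConf's entry map with the duplicate guard."""
    def __init__(self):
        self._entries = {}

    def register(self, key, entry):
        # Mirrors: require(!sqlConfEntries.containsKey(entry.key), ...)
        if key in self._entries:
            raise ValueError(f"Duplicate SQLConfigEntry. {key} has been registered")
        self._entries[key] = entry

registry = ConfigRegistry()
registry.register("spark.sql.shuffle.partitions", 200)
try:
    registry.register("spark.sql.shuffle.partitions", 10)
    duplicate_rejected = False
except ValueError:
    duplicate_rejected = True
print(duplicate_rejected)  # True
```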





[GitHub] spark pull request: [SPARK-529] [sql] Modify SQLConf to use new co...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/11570#discussion_r58247308
  
--- Diff: project/SparkBuild.scala ---
@@ -743,8 +744,18 @@ object TestSettings {
 parallelExecution in Test := false,
 // Make sure the test temp directory exists.
 resourceGenerators in Test <+= resourceManaged in Test map { outDir: 
File =>
-  if (!new File(testTempDir).isDirectory()) {
-require(new File(testTempDir).mkdirs())
+  var dir = new File(testTempDir)
+  if (!dir.isDirectory()) {
+val stack = new Stack[File]()
+while (!dir.isDirectory()) {
+  stack.push(dir)
+  dir = dir.getParentFile()
+}
+
+while (stack.nonEmpty) {
+  val d = stack.pop()
+  require(d.mkdir() || d.isDirectory(), s"Failed to create 
directory $d")
+}
--- End diff --

Please add a short comment to explain this logic.





[GitHub] spark pull request: [SPARK-529] [sql] Modify SQLConf to use new co...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/11570#discussion_r58246946
  
--- Diff: 
core/src/main/scala/org/apache/spark/internal/config/ConfigBuilder.scala ---
@@ -136,10 +146,11 @@ private[spark] case class ConfigBuilder(key: String) {
 
   import ConfigHelpers._
 
-  var _public = true
-  var _doc = ""
+  private[config] var _public = true
+  private[config] var _doc = ""
+  private[config] var _onCreate: Option[ConfigEntry[_] => Unit] = None
--- End diff --

This variable would be more aptly named `onCreateCallback` or `creationCallback`.





[GitHub] spark pull request: [SPARK-529] [sql] Modify SQLConf to use new co...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/11570#discussion_r58246828
  
--- Diff: 
core/src/main/scala/org/apache/spark/internal/config/ConfigBuilder.scala ---
@@ -136,10 +146,11 @@ private[spark] case class ConfigBuilder(key: String) {
 
   import ConfigHelpers._
 
-  var _public = true
-  var _doc = ""
+  private[config] var _public = true
+  private[config] var _doc = ""
+  private[config] var _onCreate: Option[ConfigEntry[_] => Unit] = None
--- End diff --

It is strange to expose these private vars to callers. It is preferable to 
make these strictly `private` and introduce getters for them.





[GitHub] spark pull request: [SPARK-529] [sql] Modify SQLConf to use new co...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/11570#discussion_r58245146
  
--- Diff: 
core/src/main/scala/org/apache/spark/internal/config/ConfigBuilder.scala ---
@@ -136,10 +146,11 @@ private[spark] case class ConfigBuilder(key: String) {
 
   import ConfigHelpers._
 
-  var _public = true
-  var _doc = ""
+  private[config] var _public = true
+  private[config] var _doc = ""
--- End diff --

**Detected tightening visibility bound.** Good job!





[GitHub] spark pull request: [SPARK-529] [sql] Modify SQLConf to use new co...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/11570#issuecomment-204499186
  
deep-review this please





[GitHub] spark pull request: [SPARK-529] [sql] Modify SQLConf to use new co...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/11570#issuecomment-204498755
  
@andrewor14 @rxin Please advise.





[GitHub] spark pull request: [SPARK-14323] [SQL] fix the show functions by ...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12104#issuecomment-204498505
  
@andrewor14 Review complete. No major issues found.





[GitHub] spark pull request: [SPARK-14323] [SQL] fix the show functions by ...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/12104#discussion_r58244147
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
 ---
@@ -533,7 +533,7 @@ class SessionCatalog(
   def listFunctions(db: String, pattern: String): Seq[FunctionIdentifier] 
= {
 val dbFunctions =
   externalCatalog.listFunctions(db, pattern).map { f => 
FunctionIdentifier(f, Some(db)) }
-val regex = pattern.replaceAll("\\*", ".*").r
+val regex = pattern.replaceAll(".\\*", "\\*").replaceAll("\\*", ".*").r
--- End diff --

This is correct. A similar change is also needed in `listTables`.
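The intent of the change, sketched here in Python for brevity (the real code uses Java's `String.replaceAll`, whose first argument is a regex; this sketch simplifies that to literal string replacement, so it only approximates the normalization): fold a user-written `.*` back to `*`, then expand every `*` to the regex `.*`, so both wildcard styles match the same names.

```python
import re

def wildcard_to_regex(pattern):
    # Normalize ".*" to "*" first, then expand every "*" to the
    # regex ".*", so "foo*" and "foo.*" behave identically.
    normalized = pattern.replace(".*", "*")
    return re.compile(normalized.replace("*", ".*"))

functions = ["foo", "foobar", "bar"]
rx = wildcard_to_regex("foo*")
print([f for f in functions if rx.fullmatch(f)])   # ['foo', 'foobar']
rx2 = wildcard_to_regex("foo.*")
print([f for f in functions if rx2.fullmatch(f)])  # ['foo', 'foobar']
```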





[GitHub] spark pull request: [SPARK-14323] [SQL] fix the show functions by ...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/12104#discussion_r58243740
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -365,18 +364,20 @@ case class ShowFunctions(db: Option[String], pattern: 
Option[String]) extends Ru
 schema.toAttributes
   }
 
-  override def run(sqlContext: SQLContext): Seq[Row] = pattern match {
-case Some(p) =>
-  try {
-val regex = java.util.regex.Pattern.compile(p)
-sqlContext.sessionState.functionRegistry.listFunction()
-  .filter(regex.matcher(_).matches()).map(Row(_))
-  } catch {
-// probably will failed in the regex that user provided, then 
returns empty row.
-case _: Throwable => Seq.empty[Row]
-  }
-case None =>
-  sqlContext.sessionState.functionRegistry.listFunction().map(Row(_))
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val catalog = sqlContext.sessionState.catalog
+val database = db.getOrElse(catalog.getCurrentDatabase)
+pattern match {
+  case Some(p) =>
+try {
+  catalog.listFunctions(database, p).map{f: FunctionIdentifier => 
Row(f.name)}
+} catch {
+  // probably will failed in the regex that user provided, then 
returns empty row.
+  case _: Throwable => Seq.empty[Row]
--- End diff --

**Detected overly permissive catch.**

Suggested improvement (this also requires `import scala.util.control.NonFatal`):
```
case NonFatal(_) => Seq.empty[Row]
```
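The `NonFatal` pattern has a rough analogue in other languages; as a hedged illustration (Python, hypothetical helper name), catching `Exception` rather than `BaseException` swallows a malformed user regex while still letting fatal control-flow signals such as `KeyboardInterrupt` propagate:

```python
import re

def list_matching(names, user_pattern):
    # Analogue of catching NonFatal(_) instead of Throwable: re.error
    # (an Exception subclass) yields an empty result, while
    # BaseException signals like KeyboardInterrupt still propagate.
    try:
        rx = re.compile(user_pattern)
    except Exception:
        return []
    return [n for n in names if rx.fullmatch(n)]

print(list_matching(["abs", "avg"], "a.*"))  # ['abs', 'avg']
print(list_matching(["abs", "avg"], "a["))   # [] -- invalid regex swallowed
```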





[GitHub] spark pull request: [SPARK-14323] [SQL] fix the show functions by ...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/12104#discussion_r58243646
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -365,18 +364,20 @@ case class ShowFunctions(db: Option[String], pattern: 
Option[String]) extends Ru
 schema.toAttributes
   }
 
-  override def run(sqlContext: SQLContext): Seq[Row] = pattern match {
-case Some(p) =>
-  try {
-val regex = java.util.regex.Pattern.compile(p)
-sqlContext.sessionState.functionRegistry.listFunction()
-  .filter(regex.matcher(_).matches()).map(Row(_))
-  } catch {
-// probably will failed in the regex that user provided, then 
returns empty row.
-case _: Throwable => Seq.empty[Row]
-  }
-case None =>
-  sqlContext.sessionState.functionRegistry.listFunction().map(Row(_))
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val catalog = sqlContext.sessionState.catalog
+val database = db.getOrElse(catalog.getCurrentDatabase)
+pattern match {
+  case Some(p) =>
+try {
+  catalog.listFunctions(database, p).map{f: FunctionIdentifier => 
Row(f.name)}
+} catch {
+  // probably will failed in the regex that user provided, then 
returns empty row.
+  case _: Throwable => Seq.empty[Row]
+}
+  case None =>
+catalog.listFunctions(database, ".*").map{f: FunctionIdentifier => 
Row(f.name)}
--- End diff --

**Detected style violation.**

Suggested improvement:
```
catalog.listFunctions(database, ".*").map { f => Row(f.name) }
```





[GitHub] spark pull request: [SPARK-14323] [SQL] fix the show functions by ...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/12104#discussion_r58243620
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -365,18 +364,20 @@ case class ShowFunctions(db: Option[String], pattern: 
Option[String]) extends Ru
 schema.toAttributes
   }
 
-  override def run(sqlContext: SQLContext): Seq[Row] = pattern match {
-case Some(p) =>
-  try {
-val regex = java.util.regex.Pattern.compile(p)
-sqlContext.sessionState.functionRegistry.listFunction()
-  .filter(regex.matcher(_).matches()).map(Row(_))
-  } catch {
-// probably will failed in the regex that user provided, then 
returns empty row.
-case _: Throwable => Seq.empty[Row]
-  }
-case None =>
-  sqlContext.sessionState.functionRegistry.listFunction().map(Row(_))
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val catalog = sqlContext.sessionState.catalog
+val database = db.getOrElse(catalog.getCurrentDatabase)
+pattern match {
+  case Some(p) =>
+try {
+  catalog.listFunctions(database, p).map{f: FunctionIdentifier => 
Row(f.name)}
--- End diff --

**Detected style violation.**

Suggested improvement:
```
catalog.listFunctions(database, p).map { f => Row(f.name) }
```





[GitHub] spark pull request: [SPARK-6717][ML] Clear shuffle files after che...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/11919#issuecomment-204490237
  
@mengxr @jkbradley Please advise.





[GitHub] spark pull request: [SPARK-14100][ML] Merging Estimator and Model:...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/11983#issuecomment-204489694
  
@mengxr Please advise.





[GitHub] spark pull request: [CORE][SPARK-14178]DAGScheduler should get map...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/11986#issuecomment-204489599
  
@markhamstra @kayousterhout Please advise.





[GitHub] spark pull request: [SPARK-14234] [CORE] Executor crashes for Task...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12031#issuecomment-204488984
  
@andrewor14 @zsxwing Please advise.





[GitHub] spark pull request: [SPARK-14287] isStreaming method for Dataset

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12080#issuecomment-204488338
  
Please advise @marmbrus @tdas.





[GitHub] spark pull request: [SPARK-14124] [SQL] [FOLLOWUP] Implement Datab...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12081#issuecomment-204488209
  
Please advise @andrewor14 @yhuai.





[GitHub] spark pull request: [SPARK-14294] [SQL] native support alter table...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12086#issuecomment-204488022
  
**Detected duplicate issue.**

@bomeng Please close this PR because the parent issue is a duplicate.





[GitHub] spark pull request: [SPARK-14323] [SQL] fix the show functions by ...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12104#issuecomment-204486585
  
LGTM





[GitHub] spark pull request: [SPARK-14323] [SQL] fix the show functions by ...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12104#issuecomment-204486650
  
@rxin @andrewor14 Please advise.





[GitHub] spark pull request: [SPARK-14320][SQL] Make ColumnarBatch.Row muta...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/12103#discussion_r58240859
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/vectorized/ColumnarBatchSuite.scala ---
@@ -756,4 +756,33 @@ class ColumnarBatchSuite extends SparkFunSuite {
   }}
 }
   }
+
+  test("mutable ColumnarBatch rows") {
+(MemoryMode.ON_HEAP :: MemoryMode.OFF_HEAP :: Nil).foreach { memMode => {
--- End diff --

**Detected minor style violation.**

Suggested improvement:
```
(MemoryMode.ON_HEAP :: MemoryMode.OFF_HEAP :: Nil).foreach { memMode =>
  ...
}
```
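For context, a minimal standalone sketch (plain strings stand in for `MemoryMode` values; `ClosureStyle` is a made-up demo object, not Spark code) showing that the flagged style and the suggested style behave identically — the outer `{ memMode => ... }` braces already delimit the closure body, so the inner block is redundant:

```scala
// Demo object with hypothetical names; both loops do the same work.
object ClosureStyle extends App {
  val modes = "ON_HEAP" :: "OFF_HEAP" :: Nil
  val seen = scala.collection.mutable.ListBuffer[String]()

  // Flagged style: `{ m => { ... } }` nests a redundant block in the closure.
  modes.foreach { m => {
    seen += m
  }}

  // Suggested style: the body after `=>` is already a block expression.
  modes.foreach { m =>
    seen += m
  }

  // Both loops visit the same elements in the same order.
  assert(seen.toList == List("ON_HEAP", "OFF_HEAP", "ON_HEAP", "OFF_HEAP"))
}
```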





[GitHub] spark pull request: [MINOR][SQL] Update usage of `debug` by removi...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12094#issuecomment-204478200
  
@nongli @davies Please advise.





[GitHub] spark pull request: Modified NAMESPACE to allow user to access lap...

2016-04-01 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12111#issuecomment-204477716
  
**Missing JIRA detected.**

Please file one and attach it to the title of this PR. For further instructions, see https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark.





[GitHub] spark pull request: [SPARK-14270][SQL] whole stage codegen support...

2016-03-31 Thread DeepSparkBot
Github user DeepSparkBot commented on a diff in the pull request:

https://github.com/apache/spark/pull/12061#discussion_r58147400
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects.scala ---
@@ -214,6 +216,14 @@ case class NewInstance(
 
   override def children: Seq[Expression] = arguments
 
+  override lazy val resolved: Boolean = childrenResolved && !{
+// If the class to construct is an inner class, we need to get its outer pointer, or this
+// expression should be regarded as unresolved.
+// Note that static inner classes (e.g., inner classes within Scala objects) don't need
+// outer pointer registration.
+outerPointer.isEmpty && cls.isMemberClass && !Modifier.isStatic(cls.getModifiers)
+  }
--- End diff --

**Detected style violation.**

Suggested improvement:
```
override lazy val resolved: Boolean = {
  // If the class to construct is an inner class, we need to get its outer pointer,
  // or this expression should be regarded as unresolved.
  // Note that static inner classes (e.g., inner classes within Scala objects)
  // don't need outer pointer registration.
  val innerClassWithoutOuterPointer =
    outerPointer.isEmpty && cls.isMemberClass && !Modifier.isStatic(cls.getModifiers)
  childrenResolved && !innerClassWithoutOuterPointer
}
```
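To illustrate the check itself, here is a hedged, standalone sketch (the names `Outer`, `Inner`, `Holder`, and `missingOuterPointer` are invented for the demo, not Spark types) of how `cls.isMemberClass && !Modifier.isStatic(cls.getModifiers)` separates inner classes that need an outer pointer from statically nested ones:

```scala
import java.lang.reflect.Modifier

class Outer {
  class Inner // non-static member class: instances need an enclosing Outer
}

object Holder {
  class Nested // nested in a Scala object: compiled as a static member class
}

object OuterPointerDemo extends App {
  // True when constructing `cls` would require an outer instance
  // that has not been supplied (hypothetical helper for illustration).
  def missingOuterPointer(cls: Class[_], outerSupplied: Boolean): Boolean =
    !outerSupplied && cls.isMemberClass && !Modifier.isStatic(cls.getModifiers)

  assert(missingOuterPointer(classOf[Outer#Inner], outerSupplied = false))
  assert(!missingOuterPointer(classOf[Outer#Inner], outerSupplied = true))
  assert(!missingOuterPointer(classOf[Holder.Nested], outerSupplied = false))
}
```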





[GitHub] spark pull request: [SPARK-14287] isStreaming method for Dataset

2016-03-31 Thread DeepSparkBot
Github user DeepSparkBot commented on the pull request:

https://github.com/apache/spark/pull/12080#issuecomment-204181165
  
LGTM

