Github user willb commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1359#discussion_r14783468
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringOperations.scala ---
    @@ -207,3 +210,64 @@ case class StartsWith(left: Expression, right: Expression) extends StringCompari
     case class EndsWith(left: Expression, right: Expression) extends StringComparison {
       def compare(l: String, r: String) = l.endsWith(r)
     }
    +
    +/**
    + * A function that takes a substring of its first argument starting at a given position.
    + * Defined for String and Binary types.
    + */
    +case class Substring(str: Expression, pos: Expression, len: Expression) extends Expression {
    +  
    +  type EvaluatedType = Any
    +  
    +  def nullable: Boolean = true
    +  def dataType: DataType = {
    +    if (str.dataType == BinaryType) str.dataType else StringType
    +  }
    +  
    +  def references = children.flatMap(_.references).toSet
    +  
    +  override def children = str :: pos :: len :: Nil
    +  
    +  def slice[T, C <% IndexedSeqOptimized[T,_]](str: C, startPos: Int, sliceLen: Int): Any = {
    +    val len = str.length
    +    // Hive and SQL use one-based indexing for SUBSTR arguments but also accept zero and
    --- End diff --
    
    Hive supports 0-based indexing in the same way as this patch does. I agree that supporting both conventions this way is ugly (from both an interface and an implementation perspective), but it seems likely that people in the wild are depending on this behavior, don't you think?
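    
    For reference, here is a minimal, self-contained sketch of the compatibility behavior in question (not the patch's actual code; `zeroBasedStart` and `substr` are hypothetical names): start positions 1 and 0 both select the first character, and a negative position counts back from the end of the string, as Hive does.
    
        object SubstrIndexing {
          // Normalize a SQL/Hive SUBSTR start position to a zero-based slice index.
          // Positions 1 and 0 both refer to the first character; a negative position
          // counts back from the end of the string.
          def zeroBasedStart(pos: Int, length: Int): Int =
            if (pos < 0) math.max(length + pos, 0)
            else if (pos == 0) 0
            else pos - 1
        
          def substr(s: String, pos: Int, len: Int): String = {
            val start = zeroBasedStart(pos, s.length)
            s.slice(start, start + math.max(len, 0))
          }
        
          def main(args: Array[String]): Unit = {
            println(substr("Spark SQL", 1, 5))  // "Spark" -- one-based start
            println(substr("Spark SQL", 0, 5))  // "Spark" -- zero-based start also accepted
            println(substr("Spark SQL", -3, 3)) // "SQL"   -- negative position counts from the end
          }
        }
    
    Callers relying on either convention keep working under this mapping, which is why dropping one of them seems risky.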

