Github user OopsOutOfMemory closed the pull request at:
https://github.com/apache/spark/pull/3909
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user OopsOutOfMemory commented on the pull request:
https://github.com/apache/spark/pull/3909#issuecomment-68985811
This bug will be fixed in #3926
---
Github user OopsOutOfMemory commented on the pull request:
https://github.com/apache/spark/pull/3909#issuecomment-68985021
@chenghao-intel Thanks for working on this :)
---
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/3909#issuecomment-68983892
@OopsOutOfMemory , I've updated the code to fix the long keyword issue at
#3926, can you review that for me?
---
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/3909#issuecomment-68979911
@OopsOutOfMemory it seems some other parsers have the same bug. I've created
#3924 to refactor the code first, and will create another PR for the bug fix.
Github user OopsOutOfMemory commented on the pull request:
https://github.com/apache/spark/pull/3909#issuecomment-68975221
@chenghao-intel @marmbrus
I added a test suite to reproduce this exception. Could you have a look at it? :)
Also, I think this is not a hack, since `Keyword` is alw…
Github user OopsOutOfMemory commented on a diff in the pull request:
https://github.com/apache/spark/pull/3909#discussion_r22567269
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SparkSQLParser.scala
---
@@ -66,7 +66,13 @@ class SqlLexical(val keywords: Seq[String]
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/3909#issuecomment-68960150
I can confirm this is a bug when the keyword is too long; however, this fix
seems a little hacky to me. Sorry, @OopsOutOfMemory, I need more time to
investigate.
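For context, the blow-up behind the "keyword is too long" problem can be sketched in isolation. The snippet below is an illustrative reimplementation, not the actual Spark code: enumerating every upper/lower-case spelling of an n-letter keyword yields 2^n variants, which explodes as keywords get longer.

```scala
object CaseVersions {
  // Enumerate every upper/lower-case spelling of `s`.
  // For an n-letter keyword this produces 2^n entries, so a long
  // keyword generates an enormous list of variants.
  def allCaseVersions(s: String): List[String] =
    if (s.isEmpty) List("")
    else allCaseVersions(s.tail).flatMap { rest =>
      List(s"${s.head.toLower}$rest", s"${s.head.toUpper}$rest")
    }

  def main(args: Array[String]): Unit = {
    val vs = allCaseVersions("set")
    println(vs.size)            // 8 variants for a 3-letter keyword
    println(vs.contains("SET")) // the all-caps spelling is among them
  }
}
```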
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/3909#discussion_r22563282
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SparkSQLParser.scala
---
@@ -66,7 +66,13 @@ class SqlLexical(val keywords: Seq[String]
Github user OopsOutOfMemory commented on the pull request:
https://github.com/apache/spark/pull/3909#issuecomment-68862582
@chenghao-intel
Any suggestions?
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3909#issuecomment-68845914
Can one of the admins verify this patch?
---
GitHub user OopsOutOfMemory opened a pull request:
https://github.com/apache/spark/pull/3909
[SPARK-5009][SQL][Bug Fix] allCaseVersions leads to stack overflow.
Currently, we use the `allCaseVersions` function to match all possible case
versions of each `Keyword` that the user passes into SQL…