Github user smola commented on the pull request:
https://github.com/apache/spark/pull/9938#issuecomment-166673431
I will not have time to finish this before the 1.6 release. Closing this for now.
---
If your project is set up for it, you can reply to this email and have your
reply
Github user smola closed the pull request at:
https://github.com/apache/spark/pull/9938
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/9938
[WIP][SPARK-11855][SQL] Add compatibility methods to catalyst.
- Added backwards compatibility methods to Catalog,
UnresolvedStar, UnresolvedRelation, TableIdentifier.
You can merge this pull
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/9935
[SPARK-11780][SQL] Add type aliases for backwards compatibility.
Added type aliases in org.apache.spark.sql.types for classes
moved to org.apache.spark.sql.catalyst.util.
You can merge this pull
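The type-alias technique in the PR above can be sketched in plain Scala. This is a minimal, hypothetical model (the package and class names below are made up, not Spark's real ones): after a class moves to a new package, an alias left at the old location keeps code written against the old import compiling.

```scala
// Hypothetical sketch of backwards-compatible type aliases.
object newpkg {
  // The class now lives here after the move.
  class ArrayData(val values: Array[Int])
}

object oldpkg {
  // Backwards-compatibility alias at the old location:
  // old imports and annotations keep compiling.
  type ArrayData = newpkg.ArrayData
}

// An old call site keeps working unchanged:
val data: oldpkg.ArrayData = new newpkg.ArrayData(Array(1, 2, 3))
```

Because a `type` alias to a concrete class supports `new`, old code can even keep constructing the class through the old path.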
Github user smola commented on a diff in the pull request:
https://github.com/apache/spark/pull/8746#discussion_r45063997
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/sources/DDLTestSuite.scala ---
@@ -113,4 +113,23 @@ class DDLTestSuite extends DataSourceTest with
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/8746#issuecomment-144693706
@sabhyankar Great! The implementation looks good. Could you add a test case
for it?
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/7015#issuecomment-115342910
This does not solve the problem. Calling the initialization method
concurrently is OK. The failure happens when one thread calls this
initialization while another thread
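The race described above (concurrent calls to the init method are fine; the failure is a reader observing state while another thread is still mid-initialization) is commonly avoided by building the state fully and then publishing it in one atomic step. A minimal sketch of that pattern, with made-up names, not Spark's actual code:

```scala
import java.util.concurrent.atomic.AtomicReference

// Hypothetical sketch: publish fully-built state through a single
// AtomicReference so readers see all-or-nothing, never a partial map.
object SafeInit {
  private val state = new AtomicReference[Map[String, Int]](null)

  def initialize(): Unit = {
    // Build the complete value first...
    val built = Map("a" -> 1, "b" -> 2)
    // ...then publish it in one step; only the first caller wins.
    state.compareAndSet(null, built)
  }

  def lookup(key: String): Option[Int] =
    Option(state.get).flatMap(_.get(key))
}
```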
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/6853#issuecomment-113169112
@marmbrus Yes, this patch is meant just to delay the check until check
analysis. The reason is that just because the ResolveReferences rule cannot
resolve the plan, that does
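The "delay the check until check analysis" idea can be modeled in a few lines of plain Scala (toy classes, not Catalyst's real ones): the resolution rule is best-effort and never throws, leaving unresolved plans untouched so later rules can still fire, and only a final check phase raises an error for anything still unresolved.

```scala
// Toy model of soft resolution plus a separate check-analysis phase.
sealed trait Plan { def resolved: Boolean }
case class Unresolved(name: String) extends Plan { val resolved = false }
case class Resolved(name: String) extends Plan { val resolved = true }

object Analyzer {
  private val catalog = Set("t1", "t2")

  // Resolution rule: best effort, never throws.
  def resolve(p: Plan): Plan = p match {
    case Unresolved(n) if catalog(n) => Resolved(n)
    case other => other // leave it for a later rule, or the final check
  }

  // Check phase: the single place that raises analysis errors.
  def checkAnalysis(p: Plan): Unit =
    require(p.resolved, s"unresolved plan: $p")
}
```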
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/6326#issuecomment-113166469
@marmbrus Done.
Github user smola commented on a diff in the pull request:
https://github.com/apache/spark/pull/6326#discussion_r32732289
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -228,7 +228,12 @@ class SqlParser extends AbstractSparkSQLParser
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/6853
[SQL][SPARK-7088] Fix analysis for 3rd party logical plan.
The ResolveReferences analysis rule no longer throws when it cannot resolve
references in a self-join.
You can merge this pull request into a
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/6853#issuecomment-112716142
This is still a work in progress, since I have not tested yet.
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/6463#issuecomment-111050410
@rxin Thank you! I'm closing this PR.
Github user smola closed the pull request at:
https://github.com/apache/spark/pull/6463
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/6463#issuecomment-106565794
I'm working on a simpler solution that involves neither macros nor reflection.
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/6463#issuecomment-106525540
This is still a work in progress.
@rxin I would like some feedback about how to continue with the
implementation. In ExpressionBuilders I added some helper
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/6463
[SPARK-7886][SQL] Add built-in expressions to FunctionRegistry.
- ExpressionBuilders is provided with helpers to create a function builder
for each Expression.
- Built-in functions removed
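The registry-of-builders idea can be sketched as follows. This is a hypothetical toy model, not Spark's actual FunctionRegistry API: a function name maps to a builder that turns argument expressions into one expression, so built-ins are looked up by name instead of being hard-coded in the parser.

```scala
// Toy expression tree and a name -> builder registry.
sealed trait Expr
case class Lit(v: Double) extends Expr
case class Call(name: String, args: Seq[Expr]) extends Expr

object FunctionRegistry {
  type Builder = Seq[Expr] => Expr

  private var builders = Map.empty[String, Builder]

  // Register one builder per function name (case-insensitive lookup).
  def register(name: String)(b: Builder): Unit =
    builders += (name.toLowerCase -> b)

  def lookup(name: String, args: Seq[Expr]): Expr =
    builders.getOrElse(name.toLowerCase,
      sys.error(s"undefined function: $name"))(args)
}
```

A usage sketch: `FunctionRegistry.register("abs")(args => Call("abs", args))` lets the analyzer later resolve `ABS(x)` by name.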
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/6327
[SPARK-7724] [SQL] Support Intersect/Except in Catalyst DSL.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/smola/spark feature/catalyst-dsl-set
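A DSL for set operators like the one this PR adds can be modeled in plain Scala (hypothetical plan classes, not Catalyst's real ones): an implicit class gives every plan `intersect` and `except` methods that build the corresponding logical node.

```scala
// Toy logical plan with set-operation nodes.
sealed trait Plan
case class Table(name: String) extends Plan
case class Intersect(left: Plan, right: Plan) extends Plan
case class Except(left: Plan, right: Plan) extends Plan

object dsl {
  // Implicit ops so plans compose infix: a intersect b, a except b.
  implicit class PlanOps(val left: Plan) extends AnyVal {
    def intersect(right: Plan): Plan = Intersect(left, right)
    def except(right: Plan): Plan = Except(left, right)
  }
}
```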
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/6326
[SPARK-6740][SQL] Fix NOT operator precedence.
NOT has lower precedence than comparison operators.
You can merge this pull request into a Git repository by running:
$ git pull https
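The precedence fix above can be illustrated with a tiny recursive-descent sketch (not Spark's parser; identifiers and grammar are made up): with NOT at the loosest precedence level, `NOT a = b` parses as `NOT (a = b)`, not `(NOT a) = b`.

```scala
// Minimal expression tree for the demo.
sealed trait Expr
case class Id(name: String) extends Expr
case class Eq(l: Expr, r: Expr) extends Expr
case class Not(e: Expr) extends Expr

object MiniParser {
  def parse(input: String): Expr =
    parseNot(input.trim.split("\\s+").toList)._1

  // not-expr := "NOT" not-expr | comparison   (NOT is the loosest level)
  private def parseNot(ts: List[String]): (Expr, List[String]) = ts match {
    case "NOT" :: rest =>
      val (e, rem) = parseNot(rest)
      (Not(e), rem)
    case _ => parseComparison(ts)
  }

  // comparison := identifier [ "=" identifier ]
  private def parseComparison(ts: List[String]): (Expr, List[String]) = {
    val l = Id(ts.head)
    ts.tail match {
      case "=" :: more => (Eq(l, Id(more.head)), more.tail)
      case rest => (l, rest)
    }
  }
}
```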
Github user smola commented on a diff in the pull request:
https://github.com/apache/spark/pull/6319#discussion_r30826883
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/package.scala
---
@@ -19,6 +19,7 @@ package
Github user smola closed the pull request at:
https://github.com/apache/spark/pull/6177
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/6177#issuecomment-104030852
@marmbrus Sure. Thanks!
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/6177
[SPARK-7566][SQL] Add type to HiveContext.analyzer
This makes HiveContext.analyzer overrideable.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
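Why an explicit type annotation makes a member overrideable can be shown in plain Scala (the `Analyzer`/`Context` names below are stand-ins, not Spark's real classes): without an annotation, `lazy val analyzer = new Analyzer { ... }` gets the anonymous refinement type inferred, so a subclass cannot override it with a plain `Analyzer`; declaring the member as `: Analyzer` makes overriding legal.

```scala
class Analyzer { def name: String = "base" }

class Context {
  // The explicit ": Analyzer" annotation is the point of the PR:
  // the declared type is Analyzer, not the anonymous subclass.
  lazy val analyzer: Analyzer = new Analyzer { override def name = "default" }
}

class CustomContext extends Context {
  // Subclasses can now swap in their own analyzer.
  override lazy val analyzer: Analyzer = new Analyzer {
    override def name = "custom"
  }
}
```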
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/6086#issuecomment-102295560
@rxin Ok. Here's the PR for branch-1.3:
https://github.com/apache/spark/pull/6177
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/6086#issuecomment-101999128
@rxin Any chance to merge this to branch-1.3 too?
Github user smola commented on a diff in the pull request:
https://github.com/apache/spark/pull/6122#discussion_r30306982
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -131,7 +181,7 @@ private[sql] abstract class SparkStrategies
Github user smola commented on a diff in the pull request:
https://github.com/apache/spark/pull/6122#discussion_r30306837
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -1429,3 +1293,95 @@ class SQLContext(@transient val sparkContext:
SparkContext
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/6086
[SPARK-7566][SQL] Add type to HiveContext.analyzer
This makes HiveContext.analyzer overrideable.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
Github user smola commented on a diff in the pull request:
https://github.com/apache/spark/pull/5483#discussion_r28703116
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DataTypeParser.scala ---
@@ -27,7 +28,7 @@ import org.apache.spark.sql.catalyst.SqlLexical
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/5469#issuecomment-94496369
@marmbrus Maybe this and https://github.com/apache/spark/pull/5468 can be
left for an alternative SQL dialect implementation in Spark that aims for
completeness and
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/5469#issuecomment-92106383
@marmbrus My goal is to have a parser as compatible as possible with
standard SQL. *SELECT [ALL | DISTINCT]* is the standard syntax since SQL'92 and
it's implem
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/5483
[SPARK-6874][SQL] Support standard SQL syntax for array declaration.
E.g. BIGINT ARRAY[1000]
More info:
- http://savage.net.au/SQL/sql-2003-2.bnf.html#array%20type
- http
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/5472
[SPARK-6863] Fix formatting on SQL programming guide.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/smola/spark fix/sql-docs
Alternatively you
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/5271#issuecomment-91831288
It seems there was a git fetch error on Jenkins.
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/5469
[SPARK-6741][SQL] Add support for SELECT ALL syntax.
https://issues.apache.org/jira/browse/SPARK-6741
You can merge this pull request into a Git repository by running:
$ git pull https
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/5468
[SPARK-6744][SQL] Add support for CROSS JOIN syntax.
https://issues.apache.org/jira/browse/SPARK-6744
You can merge this pull request into a Git repository by running:
$ git pull https
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/5271
[SPARK-6611] Add support for INTEGER as synonym of INT.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/smola/spark features/integer-parse
Github user smola closed the pull request at:
https://github.com/apache/spark/pull/3645
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/3645#issuecomment-69654075
@pwendell Thanks! #3893 is good for me. I'm closing this PR.
Github user smola commented on the pull request:
https://github.com/apache/spark/pull/3645#issuecomment-69569760
@pwendell Right. The problem is that there is no way to force the use of a
given IP (ignoring reverse lookups or any other hostname/IP detection
mechanisms).
I
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/3645
[SPARK-4799] Use IP address instead of local hostname in ConnectionManager
See https://issues.apache.org/jira/browse/SPARK-4799
Spark fails when a node hostname is not resolvable by
GitHub user smola opened a pull request:
https://github.com/apache/spark/pull/2472
Fix Java example in Streaming Programming Guide
"val conf" was used instead of "SparkConf conf" in Java snippet.
You can merge this pull request into a Git repository by runn