Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/8963#issuecomment-154125587
A decimal value with precision 7 and scale 0 must raise an out-of-range
error when stored in a decimal column that has precision 7 and scale 2. The
number has 7 digits
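The overflow rule above can be sketched with a small hypothetical helper (not Spark's actual check): a value fits DECIMAL(p, s) only if, after rescaling to s fractional digits, its total digit count does not exceed p. A 7-digit integer rescaled to 2 fractional digits needs 9 digits, which overflows DECIMAL(7, 2).

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalFitCheck {
    // Returns true if value can be stored in DECIMAL(precision, scale)
    // without overflow: after rounding to the target scale, the total
    // digit count must not exceed the declared precision.
    static boolean fits(BigDecimal value, int precision, int scale) {
        BigDecimal rescaled = value.setScale(scale, RoundingMode.HALF_UP);
        return rescaled.precision() <= precision;
    }

    public static void main(String[] args) {
        // 1234567 has precision 7, scale 0.  Rescaled to scale 2 it
        // becomes 1234567.00 (9 digits), which overflows DECIMAL(7, 2).
        System.out.println(fits(new BigDecimal("1234567"), 7, 2));  // false
        // 12345.67 (precision 7, scale 2) fits exactly.
        System.out.println(fits(new BigDecimal("12345.67"), 7, 2)); // true
    }
}
```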
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/9352#issuecomment-153823011
Thanks for addressing the SQL injection concerns, Suresh. LGTM.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/9162#issuecomment-153814208
I think these are fine defaults. The work on SPARK-10849 gives fine-grained
control to power users. LGTM. Thanks.
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/9419#issuecomment-153159484
Thanks for the ongoing discussion, Xiao and Herman. At the very least, it
would be useful to capture Herman's analysis in the code comment which explains
why w
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/9202#issuecomment-152661995
Thanks for the quick response, Michael. Simply parenthesizing the query
will result in non-Standard syntax which an ANSI-compliant database will
reject
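The ANSI rule at issue is that a parenthesized subquery used as a table reference (a derived table) must carry a correlation name; a bare "(SELECT ...)" is non-standard. A minimal sketch of an ANSI-conformant wrapper (the helper and names are hypothetical, for illustration only):

```java
public class DerivedTableExpr {
    // Wraps a user query as an ANSI-conformant derived table.  A bare
    // "(SELECT ...)" is rejected by standard-compliant databases such as
    // Postgres and Derby; the correlation name ("AS t") is mandatory.
    static String asTableReference(String query, String alias) {
        return "(" + query + ") AS " + alias;
    }

    public static void main(String[] args) {
        System.out.println(asTableReference("SELECT * FROM people WHERE age > 21", "t"));
        // prints: (SELECT * FROM people WHERE age > 21) AS t
    }
}
```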
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/9202#issuecomment-152638711
Thanks for that feedback, Michael. To answer your question:
MA> I haven't looked closely at the implementation, but one high level
question is whether
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/9202#issuecomment-152542748
Any further refinements which I should make to this patch? Thanks.
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/9345#issuecomment-152355615
Hi Christian,
The patch looks good to me. I would recommend putting the restoration of
the logging state inside a finally block. I don't know if that
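The try/finally pattern recommended above, sketched with java.util.logging (the logger name and levels are placeholders): the original level is restored even if the code under test throws.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class RestoreLogLevel {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("org.example.test");
        Level saved = logger.getLevel();   // remember the original state
        logger.setLevel(Level.OFF);       // quiet expected noise during the test
        try {
            // ... the code under test runs here ...
        } finally {
            logger.setLevel(saved);       // restored even on exception
        }
        System.out.println(logger.getLevel() == saved); // true
    }
}
```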
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/9202#issuecomment-151608061
Thanks for the advice about how to improve this patch, Sean. Hopefully, my
last commit addresses your concerns. I have made the following changes:
1
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/9202#issuecomment-150267546
Thanks for the review comments, Sean.
I did not polish the Java code in SqlIdentifierUtil, and I didn't translate
it into Scala. This was in the interes
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/9202#issuecomment-150043741
Thanks, Josh. My last commit should address that long line problem.
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/9202#issuecomment-150032606
The following style tests ran cleanly for me:
build/sbt scalastyle
What other style tests should I run?
Thanks,
-Rick
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/9202#issuecomment-150025478
According to the log for the build/test cycle, a git pull request
succeeded...
GitHub pull request #9202 of commit
b3b845de9960e003637d26fecaf8dccaa0206f59
GitHub user rick-ibm opened a pull request:
https://github.com/apache/spark/pull/9202
[SPARK-10857] [SQL] Block SQL injection vulnerabilities under
DataFrameWriter.jdbc().
@marmbrus
@rxin
@joshrosen
@srowen
This patch attempts to address both
https
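The core defense against this class of injection, sketched as a hypothetical helper (a simplification, not the patch's actual SqlIdentifierUtil code): treat table and column names as SQL delimited identifiers, doubling any embedded quote character, so a malicious name cannot break out of its quoted context.

```java
public class SqlIdentifierUtilSketch {
    // Quotes a SQL identifier by wrapping it in double quotes and
    // doubling any embedded double quote, following the SQL Standard's
    // rules for delimited identifiers.
    static String quoteIdentifier(String name) {
        return "\"" + name.replace("\"", "\"\"") + "\"";
    }

    public static void main(String[] args) {
        System.out.println(quoteIdentifier("people"));
        // prints: "people"
        System.out.println(quoteIdentifier("x\"; DROP TABLE people; --"));
        // the doubled quotes keep the payload inert inside the identifier
    }
}
```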
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/8982#issuecomment-146943701
My latest commit (d56897e) adjusts the Option usage per Reynold and
Michael's advice. I believe that I have addressed all issues with this pull
request. We can f
Github user rick-ibm commented on a diff in the pull request:
https://github.com/apache/spark/pull/8982#discussion_r41650794
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala ---
@@ -287,3 +288,30 @@ case object MsSqlServerDialect extends JdbcDialect
Github user rick-ibm commented on a diff in the pull request:
https://github.com/apache/spark/pull/8982#discussion_r41649928
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala ---
@@ -287,3 +288,30 @@ case object MsSqlServerDialect extends JdbcDialect
Github user rick-ibm commented on a diff in the pull request:
https://github.com/apache/spark/pull/8982#discussion_r41649765
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala ---
@@ -464,5 +476,6 @@ class JDBCSuite extends SparkFunSuite with
Github user rick-ibm commented on a diff in the pull request:
https://github.com/apache/spark/pull/8982#discussion_r41649575
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala ---
@@ -287,3 +288,30 @@ case object MsSqlServerDialect extends JdbcDialect
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/8983#issuecomment-146280859
I don't find any guidance in the Standard for what should be done if the
left side of the IN operator is an untyped NULL literal. Technically, there is
no such
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/8983#issuecomment-146271123
Hi Michael,
Postgres and Derby raise an error if the expressions in the IN list can't
be implicitly cast to a common type. MySQL is more forgiving
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/8983#issuecomment-146244545
According to my reading of the SQL Standard,
NULL IN (expr1, ...)
should always evaluate to NULL. Here is my reasoning:
The 2011 SQL
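The reasoning above can be sketched with SQL's three-valued logic, using Java's boxed Boolean with null standing in for UNKNOWN (a simplification of the Standard's rules): NULL IN (expr1, ...) unrolls to NULL = expr1 OR NULL = expr2 OR ..., every comparison against NULL is UNKNOWN, and an OR over UNKNOWNs is UNKNOWN.

```java
public class ThreeValuedIn {
    // null stands for SQL UNKNOWN.  Any comparison with NULL is UNKNOWN.
    static Boolean eq(Integer a, Integer b) {
        if (a == null || b == null) return null;
        return a.equals(b);
    }

    // IN is a chain of ORs: TRUE wins outright; otherwise the result is
    // UNKNOWN if any operand was UNKNOWN, else FALSE.
    static Boolean in(Integer lhs, Integer... list) {
        boolean sawUnknown = false;
        for (Integer item : list) {
            Boolean cmp = eq(lhs, item);
            if (Boolean.TRUE.equals(cmp)) return true;
            if (cmp == null) sawUnknown = true;
        }
        return sawUnknown ? null : false;
    }

    public static void main(String[] args) {
        System.out.println(in(null, 1, 2, 3)); // null: NULL IN (...) is UNKNOWN
        System.out.println(in(2, 1, 2, 3));    // true
        System.out.println(in(4, 1, 2, 3));    // false
    }
}
```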
Github user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/8982#issuecomment-145921778
Tests seem to have passed cleanly. Any comments? Suggestions for
improvements? Thanks.
GitHub user rick-ibm opened a pull request:
https://github.com/apache/spark/pull/8982
[SPARK-10855] [SQL] Add a JDBC dialect for Apache Derby
@marmbrus
@rxin
This patch adds a JdbcDialect class, which customizes the datatype mappings
for Derby backends. The patch also
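The dialect mechanism described above can be illustrated with a generic, hypothetical registry (a simplification; Spark's real JdbcDialect API has more hooks): each dialect claims JDBC URLs by prefix, so a Derby dialect handles URLs beginning with jdbc:derby: and supplies backend-specific type mappings. The CLOB mapping below is an assumption for illustration, not necessarily what the patch chose.

```java
import java.util.ArrayList;
import java.util.List;

public class DialectRegistry {
    // A minimal stand-in for a JDBC dialect: claims URLs by prefix and
    // maps one logical type to a backend-specific SQL type name.
    interface Dialect {
        boolean canHandle(String url);
        String typeNameForString();
    }

    static final List<Dialect> dialects = new ArrayList<>();

    // Returns the first registered dialect that claims the URL.
    static Dialect dialectFor(String url) {
        for (Dialect d : dialects) {
            if (d.canHandle(url)) return d;
        }
        throw new IllegalArgumentException("no dialect for " + url);
    }

    public static void main(String[] args) {
        dialects.add(new Dialect() {
            public boolean canHandle(String url) { return url.startsWith("jdbc:derby:"); }
            public String typeNameForString() { return "CLOB"; }
        });
        System.out.println(dialectFor("jdbc:derby:memory:testdb").typeNameForString());
        // prints: CLOB
    }
}
```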