Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16550#discussion_r95871812
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -835,6 +835,187 @@ public UTF8String translate(Map
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/16145
[MINOR][CORE][SQL] Remove explicit RDD and Partition overrides
## What changes were proposed in this pull request?
I **believe** that I _only_ removed duplicated code (that adds
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/16144
[MINOR][CORE][SQL][DOCS] Typo fixes
## What changes were proposed in this pull request?
Typo fixes
## How was this patch tested?
Local build. Awaiting the official
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16475
I partially agree, @srowen. The reason for the change was the `blockId.isShuffle`
condition that both methods use to do their shuffle-specific handling. The
change might not be the most correct one
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16475
Proposed the changes since it made it easier to understand the role of
`getBlockData` vs `getLocalBytes` and, in the end, `ShuffleBlockResolver`. I'm
not saying it should be accepted, but I'd
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/16475
[MINOR][CORE] Remove code duplication (so the interface is used instead)
## What changes were proposed in this pull request?
Removed code duplication and used the interface instead
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16492#discussion_r95065415
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala ---
@@ -238,7 +238,7 @@ class StreamSuite extends StreamTest
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16492#discussion_r95065464
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamTest.scala ---
@@ -235,7 +235,10 @@ trait StreamTest extends QueryTest
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16481#discussion_r95065596
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -494,8 +500,13 @@ case class DataSource
Github user jaceklaskowski closed the pull request at:
https://github.com/apache/spark/pull/16475
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16475
Closed as per @rxin's request.
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
The reason for the update to `scala-xml_2.11-1.0.5.jar` was that once I
updated ScalaTest I got the issue from Jenkins that the dependency list
changed. That's when I was told about `./dev
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
For reference: [scala-xml
releases](https://github.com/scala/scala-xml/releases)
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16309#discussion_r92784014
--- Diff: pom.xml ---
@@ -714,7 +714,7 @@
org.scalacheck
scalacheck_${scala.binary.version}
-1.12.5
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
I'm testing the changes locally with the following:
```
export SCALACTIC_FILL_FILE_PATHNAMES=yes
export SBT_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m"
sb
```
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17409
[SQL][MINOR] Fix for typo in Analyzer
## What changes were proposed in this pull request?
Fix for typo in Analyzer
## How was this patch tested?
local build
You
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17417
Hey @srowen, I'd appreciate you looking at the changes again and your comments
(or a merge). Thanks!
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17417
I'm going to merge the two PRs with your comments applied (i.e. excluding
changes that are not necessarily doc-only). Thanks a lot for your time, Sean.
Much appreciated.
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17434
You'd asked, I delivered, @srowen
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17417
[SQL][DOC] Use recommended values for row boundaries in Window's scal…
…adoc
## What changes were proposed in this pull request?
Use recommended values for row
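For context, the "recommended values" in question are the named frame-boundary constants rather than raw long literals. A minimal, Spark-free sketch mirroring the constants that `Window.scala` defines (the numeric values come from the Spark source; the object name here is illustrative, not Spark's):

```scala
// Illustrative stand-in for org.apache.spark.sql.expressions.Window's
// frame-boundary constants; in Spark these back rowsBetween/rangeBetween.
object WindowBoundaries {
  val unboundedPreceding: Long = Long.MinValue // Window.unboundedPreceding
  val unboundedFollowing: Long = Long.MaxValue // Window.unboundedFollowing
  val currentRow: Long = 0L                    // Window.currentRow
}

// The scaladoc recommendation is to prefer the named constants, e.g.
//   Window.rowsBetween(Window.unboundedPreceding, Window.currentRow)
// over magic numbers such as rowsBetween(Long.MinValue, 0).
```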
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17434
[SQL][DOC][MINOR] Squashing a typo in from_json function
## What changes were proposed in this pull request?
Just squashing a typo in `from_json` function
## How
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r108035464
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/Window.scala ---
@@ -22,7 +22,7 @@ import org.apache.spark.sql.Column
import
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r108035475
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/Window.scala ---
@@ -113,12 +113,12 @@ object Window {
* Creates
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r108035549
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/Window.scala ---
@@ -113,12 +113,12 @@ object Window {
* Creates
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r108035498
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/Window.scala ---
@@ -131,9 +131,9 @@ object Window {
* import
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r108777037
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala
---
@@ -75,7 +75,6 @@ case class
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r10842
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -60,7 +60,7 @@ import org.apache.spark.util.Utils
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r108776915
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/BypassMergeSortShuffleWriter.java
---
@@ -52,16 +52,15 @@
* This class implements
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r108777513
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ExpressionParserSuite.scala
---
@@ -26,7 +26,8 @@ import
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17434#discussion_r108632590
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/sort/SortShuffleManager.scala ---
@@ -82,13 +82,13 @@ private[spark] class SortShuffleManager
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17434#discussion_r108633245
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -323,7 +323,7 @@ class SparkSession private(
* // |-- age
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17434#discussion_r108632961
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -492,7 +492,7 @@ class AstBuilder extends
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17434#discussion_r108634808
--- Diff: core/src/main/java/org/apache/spark/memory/MemoryConsumer.java ---
@@ -60,8 +60,6 @@ protected long getUsed
Github user jaceklaskowski closed the pull request at:
https://github.com/apache/spark/pull/17434
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17434
Closing as it was merged into https://github.com/apache/spark/pull/17417.
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17417
Executed `cd docs && SKIP_PYTHONDOC=1 SKIP_RDOC=1 jekyll serve` to check
the changes and they've seemed fine. I had to fix some extra javadoc-related
places to pleas
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17434#discussion_r108665073
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -492,7 +492,7 @@ class AstBuilder extends
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17337
[SQL][MINOR] Fix scaladoc for UDFRegistration
## What changes were proposed in this pull request?
Fix scaladoc for UDFRegistration
## How was this patch tested
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17657
[TEST][MINOR] Replace repartitionBy with distribute in
CollapseRepartitionSuite
## What changes were proposed in this pull request?
Replace non-existent `repartitionBy
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17712#discussion_r112634044
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -47,12 +47,20 @@ case class UserDefinedFunction
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17712#discussion_r112634273
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/UDFSuite.scala ---
@@ -256,10 +256,12 @@ class UDFSuite extends QueryTest
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17712#discussion_r112634952
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/UDFSuite.scala ---
@@ -256,10 +256,12 @@ class UDFSuite extends QueryTest
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17712#discussion_r112692504
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -47,20 +47,31 @@ case class UserDefinedFunction
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17712#discussion_r112692885
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -47,20 +47,31 @@ case class UserDefinedFunction
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17727
[SQL][MINOR] Remove misleading comment (and tags do better)
## What changes were proposed in this pull request?
Misleading comment removed (and tags do a better job to express
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17727
Fair enough. Let's do it here. Quoting directly from the code:
> Converts a logical plan into zero or more SparkPlans. This API is
exposed for experimenting with the query plan
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16061#discussion_r104205883
--- Diff: kubernetes/README.md ---
@@ -0,0 +1,21 @@
+# Pre-requisites
+* maven, JDK and all other pre-requisites for building Spark
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16061#discussion_r104206163
--- Diff: kubernetes/pom.xml ---
@@ -0,0 +1,54 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLS
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16061#discussion_r104206250
--- Diff: kubernetes/pom.xml ---
@@ -0,0 +1,54 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLS
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17670
I think the change should rather be [here](ResolveTableValuedFunctions)
where the built-in table-valued function `range` is resolved.
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/18642
[MINOR][REFACTORING] KeyValueGroupedDataset.mapGroupsWithState uses
flatMapGroupsWithState
## What changes were proposed in this pull request?
Refactored
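The refactoring pattern named in the PR title — defining the map variant as a thin wrapper over the flatMap variant — can be sketched on plain Scala collections (the names and shapes below are an analogy, not Spark's actual `KeyValueGroupedDataset` signatures):

```scala
// Plain-collections analogy for the refactoring: mapGroups delegates to
// flatMapGroups by wrapping each single result in an Iterator.
def flatMapGroups[K, V, U](data: Map[K, Seq[V]])(f: (K, Iterator[V]) => Iterator[U]): Seq[U] =
  data.toSeq.flatMap { case (k, vs) => f(k, vs.iterator) }

def mapGroups[K, V, U](data: Map[K, Seq[V]])(f: (K, Iterator[V]) => U): Seq[U] =
  flatMapGroups(data) { (k, vs) => Iterator(f(k, vs)) } // one result per group
```

For example, `mapGroups(Map("a" -> Seq(1, 2)))((_, vs) => vs.sum)` produces one summed value per group; the duplicated grouping logic lives only in the flatMap variant.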
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18642
@zsxwing @tdas A friendly reminder to give the change a review. I'd
appreciate it. Thanks.
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18642
@zsxwing @tdas Could you review the change and let me know what you think?
I'd appreciate it. Thanks.
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18523#discussion_r125397518
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/VectorAssembler.scala ---
@@ -113,12 +113,12 @@ class VectorAssembler @Since("
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18509#discussion_r125353689
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/EventTimeWatermarkExec.scala
---
@@ -81,7 +81,7 @@ class
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/18509
[SS][MINOR] Make EventTimeWatermarkExec explicitly UnaryExecNode
## What changes were proposed in this pull request?
Making EventTimeWatermarkExec explicitly UnaryExecNode
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18539
I think that `ConsoleSink` was the only one with this mysterious name. We
could however have another JIRA to _somehow_ unify how options are printed out
for sources and sinks. I don't think
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/18539
[SPARK-21313][SS] ConsoleSink's string representation
## What changes were proposed in this pull request?
Add `toString` with options for `ConsoleSink` so it shows nicely in query
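The gist — a descriptive `toString` built from the sink's options so a streaming query's description reads well — can be sketched without Spark (the class name and option keys below are illustrative, not Spark's actual `ConsoleSink`):

```scala
// Minimal sketch of the SPARK-21313 idea: render the sink together with
// its options instead of the default Object#toString.
final class ConsoleSinkLike(options: Map[String, String]) {
  override def toString: String = {
    val opts = options.map { case (k, v) => s"$k=$v" }.mkString(", ")
    s"ConsoleSink[$opts]"
  }
}
```

With e.g. `new ConsoleSinkLike(Map("numRows" -> "20"))`, the query plan would show `ConsoleSink[numRows=20]` rather than a bare class name and hash.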
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18509#discussion_r125571708
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/EventTimeWatermarkExec.scala
---
@@ -81,7 +81,7 @@ class
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18523
Thanks @facaiy for the changes. I wonder if the code could `collect` all
the columns with incorrect type in one go (rather than reporting issues column
by column until a user fixed all
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18509#discussion_r125874046
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/EventTimeWatermarkExec.scala
---
@@ -81,7 +81,7 @@ class
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18539
If you know how to display `ForeachWriter` that's passed in to
`ForeachSink` nicely, let me know. `getClass.getName` didn't convince me and so
I left it out. It'd be very helpful to see what
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17727
Is the comment correct then? I don't think so. What about improving it? I
don't mind if we stop discussing it either. It's a tiny change after all (and
I don't want to drag it along and waste
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17801
Are the errors (that led to `fails to generate documentation`) after my
change? They look very weird to me.
```
[error]
/home/jenkins/workspace/SparkPullRequestBuilder/core/target
```
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17801
[MINOR][SQL][DOCS] Improve unix_timestamp's scaladoc (and typo hunting)
## What changes were proposed in this pull request?
* Docs are consistent (across different `unix_timestamp
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17917#discussion_r115711771
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaRelation.scala
---
@@ -143,4 +143,6 @@ private[kafka010] class
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16960
I think that the commit has left
[numGeneratedRows](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/basicPhysicalOperators.scala#L344
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16960
I'll have a look at this this week and send a PR unless you beat me to it
:) Thanks @ala!
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17917
https://cloud.githubusercontent.com/assets/62313/25960541/879096ce-3677-11e7-900f-09bd5f200a00.png
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17917
[SPARK-20600][SS] KafkaRelation should be pretty printed in web UI
## What changes were proposed in this pull request?
User-friendly name of `KafkaRelation` in web UI (under Details
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17904
WFM. Thanks @ajbozarth!
```
$ git fetch origin pull/17904/head:17904
$ gco 17904
$ ./build/mvn -Phadoop-2.7,yarn,mesos,hive,hive-thriftserver -DskipTests clean install
```
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/18026
[SPARK-16202][SQL][DOC] Follow-up to Correct The Description of
CreatableRelationProvider's createRelation
## What changes were proposed in this pull request?
Follow-up to SPARK
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/18074
[DOCS][MINOR] Scaladoc fixes (aka typo hunting)
## What changes were proposed in this pull request?
Minor changes to scaladoc
## How was this patch tested?
Local
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18074#discussion_r118436041
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/RelationalGroupedDataset.scala ---
@@ -35,12 +35,13 @@ import
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18074#discussion_r118788857
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/window/WindowExec.scala
---
@@ -153,19 +153,24 @@ case class WindowExec
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/15575
I'm late with this, but just leaving it for future code reviewers...
I think the change took the most extreme path where even such simple
`outputPartitioning` as the one
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18074
Hey @srowen could you review the changes again and accept possibly? Thanks!
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18144
@cloud-fan I don't understand why that would be an issue...ever. The API is
not consistent and I often run into it.
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18074#discussion_r119168394
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/window/WindowExec.scala
---
@@ -153,12 +153,13 @@ case class WindowExec
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18347#discussion_r122616876
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -465,6 +465,8 @@ case class DataSource
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18347#discussion_r122617147
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -465,6 +465,8 @@ case class DataSource
Github user jaceklaskowski closed the pull request at:
https://github.com/apache/spark/pull/17727
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18144
@cloud-fan If consistency means removing (not adding), I'm fine. Either way,
consistency is the ultimate goal (as I myself am running into this discrepancy
far too often).
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/19261
@gatorsmile Dunno, but the logical operator does.
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/19261
@rxin @gatorsmile Let me ask you a very similar question then: why does
the `CurrentDate` operator have the optional timezone parameter? What's the purpose?
Wouldn't that answer your questions
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19261#discussion_r139309272
--- Diff: python/pyspark/sql/functions.py ---
@@ -793,12 +793,12 @@ def ntile(n):
# -- Date/Timestamp functions
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19261#discussion_r139309246
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2508,6 +2508,14 @@ object functions {
def current_date
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19261#discussion_r139309261
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2508,6 +2508,14 @@ object functions {
def current_date
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19056#discussion_r135989439
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/socket.scala
---
@@ -130,16 +130,7 @@ class TextSocketSource(host
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/19112
Hey @HyukjinKwon, as the only committer who's been involved in this PR,
could you review it again and possibly merge to master? Thanks
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/19261
OK I feel convinced that you feel convinced Spark SQL should not offer this
as part of the public API. Thanks for being with me for so long and patient to
explain the things. Thanks
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19407#discussion_r142022819
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -269,7 +269,7 @@ final class DataStreamWriter[T
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19056#discussion_r135610992
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/socket.scala
---
@@ -126,16 +128,17 @@ class TextSocketSource(host
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19056#discussion_r135610234
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -39,6 +39,16 @@ abstract class Optimizer
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19056#discussion_r135610632
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/socket.scala
---
@@ -126,16 +128,17 @@ class TextSocketSource(host
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/19095
[SPARK-21886][SQL] Use SparkSession.internalCreateDataFrame to create…
… Dataset with LogicalRDD logical operator
## What changes were proposed in this pull request
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19112#discussion_r136750244
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/progress.scala ---
@@ -200,7 +202,7 @@ class SourceProgress protected[sql
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/19112
[SPARK-21901][SS] Define toString for StateOperatorProgress
## What changes were proposed in this pull request?
Just `StateOperatorProgress.toString` + few formatting fixes
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19112#discussion_r136726289
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/progress.scala ---
@@ -200,7 +202,7 @@ class SourceProgress protected[sql
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19112#discussion_r136726445
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/progress.scala ---
@@ -177,11 +179,11 @@ class SourceProgress protected[sql