squito commented on a change in pull request #25943:
[WIP][SPARK-29261][SQL][CORE] Support recover live entities from KVStore for
(SQL)AppStatusListener
URL: https://github.com/apache/spark/pull/25943#discussion_r333716820
##
File path: core/src/main/scala/org/apache/spark/status/s
huaxingao commented on a change in pull request #26064:
[SPARK-23578][ML][PYSPARK] Binarizer support multi-column
URL: https://github.com/apache/spark/pull/26064#discussion_r333725521
##
File path: python/pyspark/ml/feature.py
##
@@ -65,7 +65,8 @@
@inherit_doc
-class
huaxingao commented on a change in pull request #26064:
[SPARK-23578][ML][PYSPARK] Binarizer support multi-column
URL: https://github.com/apache/spark/pull/26064#discussion_r333727044
##
File path:
mllib/src/test/scala/org/apache/spark/ml/feature/BinarizerSuite.scala
##
@
huaxingao commented on issue #26064: [SPARK-23578][ML][PYSPARK] Binarizer
support multi-column
URL: https://github.com/apache/spark/pull/26064#issuecomment-540787575
I have a general question: What are the criteria for adding multi-column
support to an ML algorithm? Right now `Bucketizer`
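The multi-column form under discussion applies one threshold per input column. Those semantics can be sketched in plain Python (illustrative only; the real feature lives in `pyspark.ml.feature.Binarizer`, and the column/threshold names here are invented):

```python
def binarize(values, threshold):
    # Map each value to 1.0 if it exceeds the threshold, else 0.0.
    return [1.0 if v > threshold else 0.0 for v in values]

# Multi-column form: one threshold per input column, applied column-wise.
columns = {"a": [0.1, 0.8, 0.5], "b": [2.0, -1.0, 3.5]}
thresholds = {"a": 0.5, "b": 0.0}

out = {name: binarize(vals, thresholds[name]) for name, vals in columns.items()}
assert out == {"a": [0.0, 1.0, 0.0], "b": [1.0, 0.0, 1.0]}
```

Note the strict `>` comparison: a value exactly equal to the threshold maps to 0.0, which is why 0.5 in column "a" binarizes to 0.0 above.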
srowen commented on a change in pull request #26064: [SPARK-23578][ML][PYSPARK]
Binarizer support multi-column
URL: https://github.com/apache/spark/pull/26064#discussion_r333737897
##
File path: mllib/src/main/scala/org/apache/spark/ml/feature/Binarizer.scala
##
@@ -69,66
srowen commented on a change in pull request #26064: [SPARK-23578][ML][PYSPARK]
Binarizer support multi-column
URL: https://github.com/apache/spark/pull/26064#discussion_r333737235
##
File path: mllib/src/main/scala/org/apache/spark/ml/feature/Binarizer.scala
##
@@ -69,66
planga82 opened a new pull request #26084: [SPARK-29433][WebUI] Fix tooltip
stages table
URL: https://github.com/apache/spark/pull/26084
### What changes were proposed in this pull request?
In the Web UI, in the Stages table, the tooltips of the Input and Output
columns are not correct.
Act
rdblue commented on a change in pull request #25955: [SPARK-29277][SQL] Add
early DSv2 filter and projection pushdown
URL: https://github.com/apache/spark/pull/25955#discussion_r333747933
##
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DataSo
rdblue commented on issue #26006: [SPARK-29279][SQL] Merge SHOW NAMESPACES and
SHOW DATABASES code path
URL: https://github.com/apache/spark/pull/26006#issuecomment-540814215
+1 from me as well
This is an automated message from the Apache Git Service.
vanzin commented on a change in pull request #26058: [SPARK-10614][core] Add
monotonic time to Clock interface.
URL: https://github.com/apache/spark/pull/26058#discussion_r333755039
##
File path: core/src/main/scala/org/apache/spark/util/Clock.scala
##
@@ -21,7 +21,14 @@ p
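The Clock change above adds a monotonic time source. The motivation can be illustrated with Python's stdlib clocks (an analogy for the concept, not Spark's `Clock` API): wall-clock time can jump under NTP corrections or manual changes, while a monotonic clock only moves forward, which is what elapsed-time and timeout logic should rely on.

```python
import time

# time.time() is wall-clock and may jump; time.monotonic() cannot go
# backwards, so differences between readings are safe elapsed times.
t1 = time.monotonic()
time.sleep(0.01)
t2 = time.monotonic()

assert t2 >= t1          # guaranteed by the monotonic contract
elapsed = t2 - t1
assert elapsed >= 0.0    # never negative, even if the wall clock was adjusted
```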
JoshRosen commented on issue #26076: [SPARK-29419][SQL] Fix Encoder
thread-safety bug in createDataset(Seq)
URL: https://github.com/apache/spark/pull/26076#issuecomment-540824919
> That sounds like a good idea to me. Can we do that first?
I'll prototype this. If I get it working then
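The PR above fixes an `Encoder` thread-safety bug in `createDataset(Seq)`. The general class of bug (shared mutable serializer state raced across threads) and the copy-per-thread style of fix can be sketched in Python; the `Encoder` class here is a hypothetical stand-in, not Spark's:

```python
import copy
import threading

class Encoder:
    # Hypothetical encoder with an internal reusable buffer: the buffer is
    # the shared mutable state that makes one instance unsafe across threads.
    def __init__(self):
        self.buf = []

    def encode(self, row):
        self.buf.clear()
        self.buf.extend(str(row))
        return "".join(self.buf)

shared = Encoder()
results = {}

def worker(tid, row):
    # The fix pattern: give each thread its own copy instead of sharing.
    enc = copy.deepcopy(shared)
    results[tid] = enc.encode(row)

threads = [threading.Thread(target=worker, args=(i, i * 11)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert results == {0: "0", 1: "11", 2: "22", 3: "33"}
```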
dbtsai opened a new pull request #26085: [SPARK-29434] [Core] Improve the
MapStatuses Serialization Performance
URL: https://github.com/apache/spark/pull/26085
### What changes were proposed in this pull request?
Instead of using GZIP for compressing the serialized `MapStatuses`, ZStd
p
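The (truncated) description proposes replacing GZIP with ZStd for compressing serialized `MapStatuses`. The compress/decompress round trip being swapped looks roughly like this in Python, using stdlib `gzip` as the stand-in codec (ZStd would need a third-party binding such as `zstandard`, and the record layout below is invented for illustration, not Spark's actual `MapStatus` format):

```python
import gzip
import pickle

# Invented stand-in for serialized MapStatuses: per-executor shuffle block sizes.
statuses = [("exec-%d" % i, [j % 7 for j in range(100)]) for i in range(50)]

raw = pickle.dumps(statuses)
compressed = gzip.compress(raw, compresslevel=6)

# The driver-side round trip: decompression must restore the exact records.
assert pickle.loads(gzip.decompress(compressed)) == statuses
assert len(compressed) < len(raw)  # repetitive size data compresses well
```

Swapping the codec only changes the `compress`/`decompress` pair; the serialization and the round-trip invariant stay the same, which is why such a change can be benchmarked in isolation.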
dbtsai commented on issue #26085: [SPARK-29434] [Core] Improve the MapStatuses
Serialization Performance
URL: https://github.com/apache/spark/pull/26085#issuecomment-540829740
ping @dongjoon-hyun @holdenk @viirya
dbtsai commented on a change in pull request #26085: [SPARK-29434] [Core]
Improve the MapStatuses Serialization Performance
URL: https://github.com/apache/spark/pull/26085#discussion_r333769537
##
File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
##
@@
dbtsai edited a comment on issue #26085: [SPARK-29434] [Core] Improve the
MapStatuses Serialization Performance
URL: https://github.com/apache/spark/pull/26085#issuecomment-540829740
ping @dongjoon-hyun @holdenk @viirya @tgravescs
--
gatorsmile commented on a change in pull request #24922: [SPARK-28120][SS]
Rocksdb state storage implementation
URL: https://github.com/apache/spark/pull/24922#discussion_r333772305
##
File path: sql/core/pom.xml
##
@@ -147,6 +147,12 @@
mockito-core
test
gatorsmile commented on issue #25721: [WIP][SPARK-29018][SQL] Implement Spark
Thrift Server with its own code base on PROTOCOL_VERSION_V9
URL: https://github.com/apache/spark/pull/25721#issuecomment-540845787
@AngersZh @wangyum Could you address the comment and move it forward?
--
HyukjinKwon commented on issue #26045: [SPARK-29367][DOC] Add compatibility
note for Arrow 0.15.0 to SQL guide
URL: https://github.com/apache/spark/pull/26045#issuecomment-540849488
Merged to master.
HyukjinKwon closed pull request #26045: [SPARK-29367][DOC] Add compatibility
note for Arrow 0.15.0 to SQL guide
URL: https://github.com/apache/spark/pull/26045
MaxGekk commented on a change in pull request #26085: [SPARK-29434] [Core]
Improve the MapStatuses Serialization Performance
URL: https://github.com/apache/spark/pull/26085#discussion_r333788013
##
File path: core/src/test/scala/org/apache/spark/MapOutputTrackerSuite.scala
MaxGekk commented on a change in pull request #26055: [SPARK-29368][SQL][TEST]
Port interval.sql
URL: https://github.com/apache/spark/pull/26055#discussion_r333788660
##
File path: sql/core/src/test/resources/sql-tests/inputs/postgreSQL/interval.sql
##
@@ -0,0 +1,330 @@
+-
BryanCutler commented on issue #26045: [SPARK-29367][DOC] Add compatibility
note for Arrow 0.15.0 to SQL guide
URL: https://github.com/apache/spark/pull/26045#issuecomment-540857594
Thanks @HyukjinKwon and others for reviewing!
--
skonto commented on issue #26075: [WIP][K8S] Spark operator
URL: https://github.com/apache/spark/pull/26075#issuecomment-540858668
@dongjoon-hyun @holdenk There are many operator projects out there, but it
does not make sense for this to be part of the core project. Same for the
Flink project. Btw in
skonto edited a comment on issue #26075: [WIP][K8S] Spark operator
URL: https://github.com/apache/spark/pull/26075#issuecomment-540858668
@dongjoon-hyun @holdenk @jkremser There are many operator projects out
there but does not make sense to be part of the core project. Same for the
Flin
skonto edited a comment on issue #26075: [WIP][K8S] Spark operator
URL: https://github.com/apache/spark/pull/26075#issuecomment-540858668
@dongjoon-hyun @holdenk @jkremser @liyinan926 There are many operator
projects out there but does not make sense to be part of the core project. Same
skonto edited a comment on issue #26075: [WIP][K8S] Spark operator
URL: https://github.com/apache/spark/pull/26075#issuecomment-540858668
@dongjoon-hyun @holdenk @jkremser @liyinan926 There are many operator
projects out there but it does not make sense to make them part of the core
proj
AngersZh commented on issue #25721: [WIP][SPARK-29018][SQL] Implement Spark
Thrift Server with its own code base on PROTOCOL_VERSION_V9
URL: https://github.com/apache/spark/pull/25721#issuecomment-540861739
@gatorsmile Thanks for your attention.
Since there have been a lot of chan
HyukjinKwon commented on issue #25914: [SPARK-29227][SS]Track rule info in
optimization phase
URL: https://github.com/apache/spark/pull/25914#issuecomment-540863339
ok to test
AmplabJenkins removed a comment on issue #25914: [SPARK-29227][SS]Track rule
info in optimization phase
URL: https://github.com/apache/spark/pull/25914#issuecomment-534461235
Can one of the admins verify this patch?
liucht-inspur commented on issue #25994: [SPARK-29323][WEBUI] Add tooltip for
The Executors Tab's column names in the Spark history server Page
URL: https://github.com/apache/spark/pull/25994#issuecomment-540864157
cc @wangyum
--
HyukjinKwon commented on issue #25961: [SPARK-29284][SQL] Adaptive query
execution works correct…
URL: https://github.com/apache/spark/pull/25961#issuecomment-540865669
@hn5092, please identify which PR fixed this, and see if it's feasible to
backport.
HyukjinKwon closed pull request #25961: [SPARK-29284][SQL] Adaptive query
execution works correct…
URL: https://github.com/apache/spark/pull/25961
HyukjinKwon commented on a change in pull request #25398: [SPARK-28659][SQL]
Use data source if convertible in insert overwrite directory
URL: https://github.com/apache/spark/pull/25398#discussion_r333802719
##
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/Spark
HyukjinKwon edited a comment on issue #25398: [SPARK-28659][SQL] Use data
source if convertible in insert overwrite directory
URL: https://github.com/apache/spark/pull/25398#issuecomment-540870091
If you only target to fix Hive ser/de to respect compression, why don't you
set Hive compress
HyukjinKwon commented on issue #25398: [SPARK-28659][SQL] Use data source if
convertible in insert overwrite directory
URL: https://github.com/apache/spark/pull/25398#issuecomment-540870091
If you only target to fix Hive ser/de to respect configuration, why don't
you set Hive compression p
HyukjinKwon closed pull request #19955: [SPARK-21867][CORE] Support async
spilling in UnsafeShuffleWriter
URL: https://github.com/apache/spark/pull/19955
wangyum commented on a change in pull request #26055: [SPARK-29368][SQL][TEST]
Port interval.sql
URL: https://github.com/apache/spark/pull/26055#discussion_r333806629
##
File path: sql/core/src/test/resources/sql-tests/inputs/postgreSQL/interval.sql
##
@@ -0,0 +1,330 @@
+-
liucht-inspur edited a comment on issue #25994: [SPARK-29323][WEBUI] Add
tooltip for The Executors Tab's column names in the Spark history server Page
URL: https://github.com/apache/spark/pull/25994#issuecomment-540864157
cc @wangyum We are waiting for Spark's Jenkins to test, can you help t
dongjoon-hyun commented on issue #26085: [SPARK-29434] [Core] Improve the
MapStatuses Serialization Performance
URL: https://github.com/apache/spark/pull/26085#issuecomment-540880356
Thank you for pinging me, @dbtsai . I'll take a look tomorrow.
advancedxy commented on a change in pull request #26085: [SPARK-29434] [Core]
Improve the MapStatuses Serialization Performance
URL: https://github.com/apache/spark/pull/26085#discussion_r333810252
##
File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
##
advancedxy commented on a change in pull request #26085: [SPARK-29434] [Core]
Improve the MapStatuses Serialization Performance
URL: https://github.com/apache/spark/pull/26085#discussion_r333809995
##
File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
##
advancedxy commented on a change in pull request #26085: [SPARK-29434] [Core]
Improve the MapStatuses Serialization Performance
URL: https://github.com/apache/spark/pull/26085#discussion_r333809807
##
File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
##
HyukjinKwon commented on issue #24405: [SPARK-27506][SQL] Allow deserialization
of Avro data using compatible schemas
URL: https://github.com/apache/spark/pull/24405#issuecomment-540881369
ok to test
HyukjinKwon commented on issue #24405: [SPARK-27506][SQL] Allow deserialization
of Avro data using compatible schemas
URL: https://github.com/apache/spark/pull/24405#issuecomment-540881610
@giamo mind updating the PR? Looks like we're getting closer to merge.
--
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333811076
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/CypherSession.scala
##
@@ -0,0 +1,2
AmplabJenkins removed a comment on issue #24405: [SPARK-27506][SQL] Allow
deserialization of Avro data using compatible schemas
URL: https://github.com/apache/spark/pull/24405#issuecomment-531894129
Can one of the admins verify this patch?
--
wangyum commented on issue #25994: [SPARK-29323][WEBUI] Add tooltip for The
Executors Tab's column names in the Spark history server Page
URL: https://github.com/apache/spark/pull/25994#issuecomment-540881448
Sorry @liucht-inspur This has nothing to do with authorization:
http://apache-
HyukjinKwon edited a comment on issue #24405: [SPARK-27506][SQL] Allow
deserialization of Avro data using compatible schemas
URL: https://github.com/apache/spark/pull/24405#issuecomment-540881610
@giamo mind updating PR? Sorry for my late response. Looks like we're
getting closer to merge.
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333811998
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/RelationshipFrame.scala
##
@@ -0,0
liucht-inspur commented on issue #25994: [SPARK-29323][WEBUI] Add tooltip for
The Executors Tab's column names in the Spark history server Page
URL: https://github.com/apache/spark/pull/25994#issuecomment-540883751
Ok, I see. We will wait quietly. Thank you all!
---
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333812890
##
File path:
graph/api/src/test/java/org/apache/spark/graph/api/JavaPropertyGraphSuite.java
##
@@ -0
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333813221
##
File path:
graph/api/src/test/scala/org/apache/spark/graph/api/PropertyGraphSuite.scala
##
@@ -0,0
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333814508
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/CypherResult.scala
##
@@ -0,0 +1,42
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333814464
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/CypherResult.scala
##
@@ -0,0 +1,44
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333814596
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/CypherSession.scala
##
@@ -0,0 +1,2
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333814713
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/CypherSession.scala
##
@@ -0,0 +1,2
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333814672
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/CypherSession.scala
##
@@ -0,0 +1,1
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333814769
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/GraphElementFrame.scala
##
@@ -0,0
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333814743
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/CypherSession.scala
##
@@ -0,0 +1,2
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333814802
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/GraphElementFrame.scala
##
@@ -0,0
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333814839
##
File path: graph/api/src/main/scala/org/apache/spark/graph/api/NodeFrame.scala
##
@@ -0,0 +1,39 @@
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333814876
##
File path: graph/api/src/main/scala/org/apache/spark/graph/api/NodeFrame.scala
##
@@ -0,0 +1,39 @@
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333814998
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/PropertyGraph.scala
##
@@ -0,0 +1,1
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333814906
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/NodeFrameBuilder.scala
##
@@ -0,0 +
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333814921
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/NodeFrameBuilder.scala
##
@@ -0,0 +
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333815075
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/PropertyGraph.scala
##
@@ -0,0 +1,1
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333815049
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/PropertyGraph.scala
##
@@ -0,0 +1,1
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333815169
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/RelationshipFrame.scala
##
@@ -0,0
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333815189
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/RelationshipFrame.scala
##
@@ -0,0
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333815222
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/RelationshipFrameBuilder.scala
##
@
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333815301
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/RelationshipFrameBuilder.scala
##
@
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333815461
##
File path:
graph/api/src/test/scala/org/apache/spark/graph/api/PropertyGraphSuite.scala
##
@@ -0,0
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333815564
##
File path:
graph/api/src/test/scala/org/apache/spark/graph/api/PropertyGraphSuite.scala
##
@@ -0,0
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333815513
##
File path:
graph/api/src/test/scala/org/apache/spark/graph/api/PropertyGraphSuite.scala
##
@@ -0,0
shivusondur commented on a change in pull request #25561:
[SPARK-28810][DOC][SQL] Document SHOW TABLES in SQL Reference.
URL: https://github.com/apache/spark/pull/25561#discussion_r333816964
##
File path: docs/sql-ref-syntax-aux-show-tables.md
##
@@ -18,5 +18,90 @@ license
cloud-fan commented on issue #19424: [SPARK-22197][SQL] push down operators to
data source before planning
URL: https://github.com/apache/spark/pull/19424#issuecomment-540890347
Usually a data source scans its data incrementally. So when the query has a
limit, Spark stops consuming the iterator
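cloud-fan's (truncated) point above is the classic lazy-iterator argument: when the query has a limit, the consumer stops pulling, so an incremental source never produces the remaining rows. A minimal Python sketch (a toy model of the idea, not Spark's pushdown machinery):

```python
import itertools

produced = 0

def incremental_scan(n):
    # Simulates a data source that yields rows lazily, one at a time.
    global produced
    for i in range(n):
        produced += 1
        yield {"id": i}

# A LIMIT-5 consumer pulls only what it needs; the scan stops early.
rows = list(itertools.islice(incremental_scan(1_000_000), 5))

assert len(rows) == 5
assert produced == 5  # only 5 of the 1,000,000 rows were ever generated
```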
zhengruifeng commented on issue #25929: [SPARK-29116][PYTHON][ML] Refactor py
classes related to DecisionTree
URL: https://github.com/apache/spark/pull/25929#issuecomment-540890576
retest this please
liyinan926 commented on issue #26075: [WIP][K8S] Spark operator
URL: https://github.com/apache/spark/pull/26075#issuecomment-540893571
Agreed with what @skonto said. This doesn't feel like something that is
necessarily part of core Spark.
--
viirya commented on a change in pull request #26085: [SPARK-29434] [Core]
Improve the MapStatuses Serialization Performance
URL: https://github.com/apache/spark/pull/26085#discussion_r333778200
##
File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
##
@@
viirya commented on a change in pull request #26085: [SPARK-29434] [Core]
Improve the MapStatuses Serialization Performance
URL: https://github.com/apache/spark/pull/26085#discussion_r333821336
##
File path: core/src/test/scala/org/apache/spark/MapOutputTrackerSuite.scala
#
HyukjinKwon commented on issue #25333: [SPARK-28597][SS] Add config to retry
spark streaming's meta log when it met error
URL: https://github.com/apache/spark/pull/25333#issuecomment-540898145
retest this please
HyukjinKwon commented on a change in pull request #25333: [SPARK-28597][SS] Add
config to retry spark streaming's meta log when it met error
URL: https://github.com/apache/spark/pull/25333#discussion_r333822084
##
File path:
sql/core/src/test/scala/org/apache/spark/sql/execution/st
AngersZh commented on issue #26053: [SPARK-29379][SQL] SHOW FUNCTIONS show
'!=', '<>', 'between', 'case'
URL: https://github.com/apache/spark/pull/26053#issuecomment-540937912
> Keeping it consistent sounds fine for now but I think we should fix this
ambiguity between functions and op