gatorsmile commented on issue #25416: [SPARK-28330][SQL] Support ANSI SQL:
result offset clause in query expression
URL: https://github.com/apache/spark/pull/25416#issuecomment-539861146
cc @cloud-fan
This is an automated message from the Apache Git Service.
TomokoKomiyama opened a new pull request #26065: [SPARK-29404][DOCS] Add an
explanation about the executor color changed in sql documentation
URL: https://github.com/apache/spark/pull/26065
### What changes were proposed in this pull request?
Add an explanation about changing the
shahidki31 commented on issue #26038: [SPARK-29235][ML][Pyspark]Support
avgMetrics in read/write of CrossValidatorModel
URL: https://github.com/apache/spark/pull/26038#issuecomment-539857077
Thanks @zhengruifeng I will add metrics for `TrainValidationSplitModel` too.
-
cloud-fan commented on issue #26006: [SPARK-29279][SQL] Merge SHOW NAMESPACES
and SHOW DATABASES code path
URL: https://github.com/apache/spark/pull/26006#issuecomment-539853093
LGTM except one code style nit. We may need to wait for a few days until
Jenkins is back online.
--
cloud-fan commented on a change in pull request #26006: [SPARK-29279][SQL]
Merge SHOW NAMESPACES and SHOW DATABASES code path
URL: https://github.com/apache/spark/pull/26006#discussion_r332842678
##
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/connector/catalog/Looku
LantaoJin commented on issue #25960: [SPARK-29283][SQL] Error message is hidden
when query from JDBC, especially enabled adaptive execution
URL: https://github.com/apache/spark/pull/25960#issuecomment-539848420
Retest this please.
---
LantaoJin commented on issue #25960: [SPARK-29283][SQL] Error message is hidden
when query from JDBC, especially enabled adaptive execution
URL: https://github.com/apache/spark/pull/25960#issuecomment-539844688
retest this please.
---
dilipbiswal commented on issue #26042: [SPARK-29092][SQL] Report additional
information about DataSourceScanExec in EXPLAIN FORMATTED
URL: https://github.com/apache/spark/pull/26042#issuecomment-539844257
cc @cloud-fan
huaxingao commented on issue #25929: [SPARK-29116][PYTHON][ML] Refactor py
classes related to DecisionTree
URL: https://github.com/apache/spark/pull/25929#issuecomment-539840347
OK. I will add _single_leading_underscore to the classes you mentioned in
the comments. Thanks!
---
itsvikramagr edited a comment on issue #24922: [SPARK-28120][SS] Rocksdb state
storage implementation
URL: https://github.com/apache/spark/pull/24922#issuecomment-539838772
> 1. we using flatMapGroupsWithState, it cause it fail at begining
Will update the PR with the fix
> 2.
itsvikramagr commented on issue #24922: [SPARK-28120][SS] Rocksdb state
storage implementation
URL: https://github.com/apache/spark/pull/24922#issuecomment-539838772
> 1. we using flatMapGroupsWithState, it cause it fail at begining
Will update the PR with the fix
> 2. Rocksdb ch
zhengruifeng commented on issue #25929: [SPARK-29116][PYTHON][ML] Refactor py
classes related to DecisionTree
URL: https://github.com/apache/spark/pull/25929#issuecomment-539837239
@huaxingao Yes, I can reproduce your case.
The 'private' classes can only be imported explicitly. I guess t
huaxingao commented on issue #25929: [SPARK-29116][PYTHON][ML] Refactor py
classes related to DecisionTree
URL: https://github.com/apache/spark/pull/25929#issuecomment-539830362
@zhengruifeng
Thanks for your comments.
I didn't add _single_leading_underscore for classes that are used
shivusondur commented on a change in pull request #25561:
[SPARK-28810][DOC][SQL] Document SHOW TABLES in SQL Reference.
URL: https://github.com/apache/spark/pull/25561#discussion_r332826762
##
File path: docs/sql-ref-syntax-aux-show-tables.md
##
@@ -18,5 +18,86 @@ license
dongjoon-hyun edited a comment on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539820403
Nope. Why do you collect all? It's up to your configuration.
Back to the beginning, I fully understand y
LantaoJin commented on issue #25971: [SPARK-29298][CORE] Separate block manager
heartbeat endpoint from driver endpoint
URL: https://github.com/apache/spark/pull/25971#issuecomment-539820817
Thanks for the explanation @cloud-fan .
---
dongjoon-hyun commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539820403
Nope. Why do you collect all? It's up to your configuration.
Back to the beginning, I fully understand your clu
dongjoon-hyun commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539820493
I don't see any number from you so far here. :)
LantaoJin commented on a change in pull request #25960: [SPARK-29283][SQL]
Error message is hidden when query from JDBC, especially enabled adaptive
execution
URL: https://github.com/apache/spark/pull/25960#discussion_r332823271
##
File path:
sql/hive-thriftserver/src/main/scala/o
yuecong edited a comment on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539819286
> BTW, the following cover some gigantic clusters, but not all cases. There
is a different and cheaper approach like
imback82 commented on a change in pull request #26006: [SPARK-29279][SQL] Merge
SHOW NAMESPACES and SHOW DATABASES code path
URL: https://github.com/apache/spark/pull/26006#discussion_r332823038
##
File path: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
yuecong commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539819286
> BTW, the following cover some gigantic clusters, but not all cases. There
is a different and cheaper approach like `Feder
yuecong edited a comment on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539818338
> * `storage.tsdb.retention.time`:
https://prometheus.io/docs/prometheus/latest/storage/#operational-aspects
dongjoon-hyun commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539818709
BTW, the following cover some gigantic clusters, but not all cases. There is
a different and cheaper approach like `F
yuecong commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539818338
> * `storage.tsdb.retention.time`:
https://prometheus.io/docs/prometheus/latest/storage/#operational-aspects
Thanks
dongjoon-hyun commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539817155
Sorry for the misleading naming. I meant the following.
- `storage.tsdb.retention.time`:
https://prometheus.io/doc
LantaoJin commented on a change in pull request #25960: [SPARK-29283][SQL]
Error message is hidden when query from JDBC, especially enabled adaptive
execution
URL: https://github.com/apache/spark/pull/25960#discussion_r332819911
##
File path:
sql/hive-thriftserver/src/main/scala/o
yuecong commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-53981
> Second, you can use `Prometheus` TTL feature, @yuecong . Have you try that?
could you share the link on this one? w
yuecong commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539813010
> That is a general issue on Apache Spark monitoring instead of this PR,
isn't it? So, I have three questions for you.
>
yaooqinn commented on a change in pull request #25648: [SPARK-28947][K8S]
Status logging not happens at an interval for liveness
URL: https://github.com/apache/spark/pull/25648#discussion_r332818335
##
File path:
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/dep
cloud-fan commented on issue #26048: [SPARK-29373][SQL] DataSourceV2: Commands
should not submit a spark job
URL: https://github.com/apache/spark/pull/26048#issuecomment-539811954
thanks, merging to master!
cloud-fan closed pull request #26048: [SPARK-29373][SQL] DataSourceV2: Commands
should not submit a spark job
URL: https://github.com/apache/spark/pull/26048
dongjoon-hyun edited a comment on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539811514
This PR doesn't collect new metrics, only exposing the existing one. So, the
following is not about this PR. I
advancedxy commented on a change in pull request #26058: [SPARK-10614][core]
Add monotonic time to Clock interface.
URL: https://github.com/apache/spark/pull/26058#discussion_r332817520
##
File path: core/src/main/scala/org/apache/spark/util/Clock.scala
##
@@ -21,7 +21,14
advancedxy commented on a change in pull request #26058: [SPARK-10614][core]
Add monotonic time to Clock interface.
URL: https://github.com/apache/spark/pull/26058#discussion_r332817884
##
File path: core/src/main/scala/org/apache/spark/util/Clock.scala
##
@@ -36,19 +43,23
dongjoon-hyun commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539811514
This PR doesn't collect new metrics, only exposing the existing one. So, the
following is not about this PR.
> If
firestarman commented on a change in pull request #25983:
[SPARK-29327][MLLIB]Support specifying features via multiple columns
URL: https://github.com/apache/spark/pull/25983#discussion_r332817984
##
File path: mllib/src/test/scala/org/apache/spark/ml/PredictorSuite.scala
#
viirya commented on issue #20935: [SPARK-23819][SQL] Fix InMemoryTableScanExec
complex type pruning
URL: https://github.com/apache/spark/pull/20935#issuecomment-539811088
I think it is fine as his last response was more than a year ago.
yuecong commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539810831
Do the metrics for the Spark application disappear after the application
finishes? I guess the answer is no. If driver keep
dongjoon-hyun commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539810783
@yuecong . That is a general issue on Apache Spark monitoring instead of
this PR, isn't it? So, I have three question
yuecong commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539809234
> 1. Please see this PR's description. The metric name is **unique** with
cadinality 1 by using labels,
`metrics_executor_
dongjoon-hyun edited a comment on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-53980
Hi, @yuecong . Thank you for review.
1. That was true in the old Prometheus plugin. So, Apache Spark 3.0.0
dongjoon-hyun commented on a change in pull request #26060: [SPARK-29400][CORE]
Improve PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#discussion_r332816528
##
File path:
core/src/main/scala/org/apache/spark/status/api/v1/PrometheusResource.scala
yuecong commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539808598
> 1. That was true in the old Prometheus plugin. So, Apache Spark 3.0.0
exposes this Prometheus metric on the driver port,
dongjoon-hyun edited a comment on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-53980
Hi, @yuecong . Thank you for review.
1. That's true in the old Prometheus plugin. So, Apache Spark 3.0.0 exp
dongjoon-hyun commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-53980
@yuecong .
1. That's true in the old Prometheus plugin. So, Apache Spark 3.0.0 exposes
this Prometheus metric on t
viirya commented on a change in pull request #26060: [SPARK-29400][CORE]
Improve PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#discussion_r332815572
##
File path:
core/src/main/scala/org/apache/spark/status/api/v1/PrometheusResource.scala
###
dongjoon-hyun closed pull request #26062:
[SPARK-29401][CORE][ML][SQL][GRAPHX][TESTS] Replace calls to .parallelize
Arrays of tuples, ambiguous in Scala 2.13, with Seqs of tuples
URL: https://github.com/apache/spark/pull/26062
-
yuecong edited a comment on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539804591
@dongjoon-hyun Thanks for fixing this.
I have several questions on this.
1. Short-lived metrics
As Prom
dongjoon-hyun closed pull request #26061: [SPARK-29392][CORE][SQL][STREAMING]
Remove symbol literal syntax 'foo, deprecated in Scala 2.13, in favor of
Symbol("foo")
URL: https://github.com/apache/spark/pull/26061
yuecong commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539804591
@dongjoon-hyun Thanks for fixing this.
I have several questions on this.
1. Short-lived metrics
As Prometheus
dongjoon-hyun commented on issue #26061: [SPARK-29392][CORE][SQL][STREAMING]
Remove symbol literal syntax 'foo, deprecated in Scala 2.13, in favor of
Symbol("foo")
URL: https://github.com/apache/spark/pull/26061#issuecomment-539804605
Merged to master.
dongjoon-hyun commented on issue #26060: [SPARK-29400][CORE] Improve
PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539803942
Hi, @srowen , @dbtsai , @HyukjinKwon .
Could you review this PR, please?
-
zhengruifeng opened a new pull request #26064: [SPARK-23578][ML][PYSPARK]
Binarizer support multi-column
URL: https://github.com/apache/spark/pull/26064
### What changes were proposed in this pull request?
Binarizer support multi-column by extending
`HasInputCols`/`HasOutputCols`/`HasTh
kiszk commented on issue #20935: [SPARK-23819][SQL] Fix InMemoryTableScanExec
complex type pruning
URL: https://github.com/apache/spark/pull/20935#issuecomment-539795529
@pwoody @HyukjinKwon @viirya May I take over this since he did not respond
for a long time?
---
kiszk commented on a change in pull request #26045: [SPARK-29367][DOC] Add
compatibility note for Arrow 0.15.0 to SQL guide
URL: https://github.com/apache/spark/pull/26045#discussion_r332807321
##
File path: docs/sql-pyspark-pandas-with-arrow.md
##
@@ -219,3 +219,14 @@ Not
gatorsmile commented on issue #26051: [SPARK-24640][SQL] Return `NULL` from
`size(NULL)` by default
URL: https://github.com/apache/spark/pull/26051#issuecomment-539793779
@MaxGekk Could you submit a follow-up PR to update the migration guide?
---
imback82 commented on issue #26048: [SPARK-29373][SQL] DataSourceV2: Commands
should not submit a spark job
URL: https://github.com/apache/spark/pull/26048#issuecomment-539793532
I double-checked this. `V2TableWriteExec.writeWithV2` returns
`sparkContext.emptyRDD`. In this case, `DAGSchedu
beliefer commented on issue #25416: [SPARK-28330][SQL] Support ANSI SQL: result
offset clause in query expression
URL: https://github.com/apache/spark/pull/25416#issuecomment-539793201
@dongjoon-hyun @HyukjinKwon Could you help me to review this PR?
AmplabJenkins removed a comment on issue #26053: [SPARK-29379][SQL]SHOW
FUNCTIONS show '!=', '<>' , 'between', 'case'
URL: https://github.com/apache/spark/pull/26053#issuecomment-539790518
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #26053: [SPARK-29379][SQL]SHOW
FUNCTIONS show '!=', '<>' , 'between', 'case'
URL: https://github.com/apache/spark/pull/26053#issuecomment-539790526
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https:
AmplabJenkins commented on issue #26053: [SPARK-29379][SQL]SHOW FUNCTIONS show
'!=', '<>' , 'between', 'case'
URL: https://github.com/apache/spark/pull/26053#issuecomment-539790518
Merged build finished. Test PASSed.
AmplabJenkins commented on issue #26053: [SPARK-29379][SQL]SHOW FUNCTIONS show
'!=', '<>' , 'between', 'case'
URL: https://github.com/apache/spark/pull/26053#issuecomment-539790526
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab
SparkQA commented on issue #26053: [SPARK-29379][SQL]SHOW FUNCTIONS show '!=',
'<>' , 'between', 'case'
URL: https://github.com/apache/spark/pull/26053#issuecomment-539789913
**[Test build #111929 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111929/testR
AmplabJenkins removed a comment on issue #24851: [SPARK-27303][GRAPH] Add Spark
Graph API
URL: https://github.com/apache/spark/pull/24851#issuecomment-539789010
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #24851: [SPARK-27303][GRAPH] Add Spark
Graph API
URL: https://github.com/apache/spark/pull/24851#issuecomment-539789015
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenki
AmplabJenkins commented on issue #26062: [SPARK-29401][CORE][ML] Replace calls
to .parallelize Arrays of tuples, ambiguous in Scala 2.13, with Seqs of tuples
URL: https://github.com/apache/spark/pull/26062#issuecomment-539788946
Merged build finished. Test PASSed.
-
AmplabJenkins commented on issue #26062: [SPARK-29401][CORE][ML] Replace calls
to .parallelize Arrays of tuples, ambiguous in Scala 2.13, with Seqs of tuples
URL: https://github.com/apache/spark/pull/26062#issuecomment-539788951
Test PASSed.
Refer to this link for build results (access r
AmplabJenkins commented on issue #24851: [SPARK-27303][GRAPH] Add Spark Graph
API
URL: https://github.com/apache/spark/pull/24851#issuecomment-539789015
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/
AmplabJenkins commented on issue #24851: [SPARK-27303][GRAPH] Add Spark Graph
API
URL: https://github.com/apache/spark/pull/24851#issuecomment-539789010
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #26062: [SPARK-29401][CORE][ML]
Replace calls to .parallelize Arrays of tuples, ambiguous in Scala 2.13, with
Seqs of tuples
URL: https://github.com/apache/spark/pull/26062#issuecomment-539788946
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #26062: [SPARK-29401][CORE][ML]
Replace calls to .parallelize Arrays of tuples, ambiguous in Scala 2.13, with
Seqs of tuples
URL: https://github.com/apache/spark/pull/26062#issuecomment-539788951
Test PASSed.
Refer to this link for build results
SparkQA removed a comment on issue #24851: [SPARK-27303][GRAPH] Add Spark Graph
API
URL: https://github.com/apache/spark/pull/24851#issuecomment-539749285
**[Test build #111924 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111924/testReport)**
for PR 2485
zhengruifeng commented on issue #25909: [SPARK-29224]Implement Factorization
Machines as a ml-pipeline component
URL: https://github.com/apache/spark/pull/25909#issuecomment-539788608
@mob-ai Thanks for this work!
But before you continue, I guess you can refer to the previous discussion
[SP
SparkQA removed a comment on issue #26062: [SPARK-29401][CORE][ML] Replace
calls to .parallelize Arrays of tuples, ambiguous in Scala 2.13, with Seqs of
tuples
URL: https://github.com/apache/spark/pull/26062#issuecomment-539741136
**[Test build #111923 has
started](https://amplab.cs.berke
SparkQA commented on issue #24851: [SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#issuecomment-539788379
**[Test build #111924 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111924/testReport)**
for PR 24851 at com
SparkQA commented on issue #26062: [SPARK-29401][CORE][ML] Replace calls to
.parallelize Arrays of tuples, ambiguous in Scala 2.13, with Seqs of tuples
URL: https://github.com/apache/spark/pull/26062#issuecomment-539788014
**[Test build #111923 has
finished](https://amplab.cs.berkeley.edu/
beliefer commented on issue #25963: [SPARK-28137][SQL] Add Postgresql function
to_number.
URL: https://github.com/apache/spark/pull/25963#issuecomment-539787262
@dongjoon-hyun @wangyum Could you help me to review this PR?
AmplabJenkins removed a comment on issue #26041: [SPARK-29403][INFRA][R] Uses
Arrow R 0.14.1 in AppVeyor for now
URL: https://github.com/apache/spark/pull/26041#issuecomment-539785681
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #25984: [WIP][SPARK-29308][BUILD] Fix
incorrect dep in dev/deps/spark-deps-hadoop-3.2
URL: https://github.com/apache/spark/pull/25984#issuecomment-539785727
Merged build finished. Test PASSed.
---
AmplabJenkins removed a comment on issue #26041: [SPARK-29403][INFRA][R] Uses
Arrow R 0.14.1 in AppVeyor for now
URL: https://github.com/apache/spark/pull/26041#issuecomment-539785687
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://ampl
AmplabJenkins removed a comment on issue #25984: [WIP][SPARK-29308][BUILD] Fix
incorrect dep in dev/deps/spark-deps-hadoop-3.2
URL: https://github.com/apache/spark/pull/25984#issuecomment-539785731
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins commented on issue #25984: [WIP][SPARK-29308][BUILD] Fix
incorrect dep in dev/deps/spark-deps-hadoop-3.2
URL: https://github.com/apache/spark/pull/25984#issuecomment-539785731
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https
AmplabJenkins commented on issue #25984: [WIP][SPARK-29308][BUILD] Fix
incorrect dep in dev/deps/spark-deps-hadoop-3.2
URL: https://github.com/apache/spark/pull/25984#issuecomment-539785727
Merged build finished. Test PASSed.
---
AmplabJenkins commented on issue #26041: [SPARK-29403][INFRA][R] Uses Arrow R
0.14.1 in AppVeyor for now
URL: https://github.com/apache/spark/pull/26041#issuecomment-539785681
Merged build finished. Test PASSed.
AmplabJenkins commented on issue #26041: [SPARK-29403][INFRA][R] Uses Arrow R
0.14.1 in AppVeyor for now
URL: https://github.com/apache/spark/pull/26041#issuecomment-539785687
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.be
SparkQA commented on issue #25984: [WIP][SPARK-29308][BUILD] Fix incorrect dep
in dev/deps/spark-deps-hadoop-3.2
URL: https://github.com/apache/spark/pull/25984#issuecomment-539784367
**[Test build #111928 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111
SparkQA commented on issue #26041: [SPARK-29403][INFRA][R] Uses Arrow R 0.14.1
in AppVeyor for now
URL: https://github.com/apache/spark/pull/26041#issuecomment-539784395
**[Test build #111927 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111927/testReport)