GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/19095
[SPARK-21886][SQL] Use SparkSession.internalCreateDataFrame to create Dataset with LogicalRDD logical operator
## What changes were proposed in this pull request?
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/19089
Logs are back with the change. Thanks (and don't mess it up again fixing STS :))
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19056#discussion_r135989439
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/socket.scala
---
@@ -130,16 +130,7 @@ class TextSocketSource(host
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19056#discussion_r135610992
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/socket.scala
---
@@ -126,16 +128,17 @@ class TextSocketSource(host
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19056#discussion_r135610632
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/socket.scala
---
@@ -126,16 +128,17 @@ class TextSocketSource(host
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19056#discussion_r135610234
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -39,6 +39,16 @@ abstract class Optimizer
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18642
@zsxwing @tdas Could you review the change and let me know what you think?
I'd appreciate it. Thanks.
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18642
@zsxwing @tdas Your friendly reminder to give the change a nice review. I'd
appreciate it. Thanks.
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/18642
[MINOR][REFACTORING] KeyValueGroupedDataset.mapGroupsWithState uses
flatMapGroupsWithState
## What changes were proposed in this pull request?
Refactored
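The refactoring relationship can be sketched with plain Scala collections (the helper names below are illustrative, not the actual Spark `KeyValueGroupedDataset` API): a mapGroups-style call delegates to a flatMapGroups-style one by wrapping each group's single result in an `Iterator`.

```scala
// Dependency-free sketch of the delegation idea. `flatMapGroups` and
// `mapGroups` here operate on an in-memory Map, standing in for the
// real grouped-dataset operators.
def flatMapGroups[K, V, U](groups: Map[K, Seq[V]])(f: (K, Iterator[V]) => Iterator[U]): Seq[U] =
  groups.toSeq.flatMap { case (k, vs) => f(k, vs.iterator) }

// mapGroups is the special case where each group yields exactly one row,
// so it can simply wrap the result in an Iterator and delegate.
def mapGroups[K, V, U](groups: Map[K, Seq[V]])(f: (K, Iterator[V]) => U): Seq[U] =
  flatMapGroups(groups) { (k, vs) => Iterator(f(k, vs)) }

val sums = mapGroups(Map("a" -> Seq(1, 2), "b" -> Seq(3)))((k, vs) => (k, vs.sum))
```

The same shape applies to the stateful variants: a one-result-per-group operator can be a thin wrapper over its multi-result sibling.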
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18539
If you know how to display `ForeachWriter` that's passed in to
`ForeachSink` nicely, let me know. `getClass.getName` didn't convince me and so
I left it out. It'd be very help
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18509#discussion_r125874046
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/EventTimeWatermarkExec.scala
---
@@ -81,7 +81,7 @@ class
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18539
I think that `ConsoleSink` was the only one with this mysterious name. We
could however have another JIRA to _somehow_ unify how options are printed out
for sources and sinks. I don't
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18523
Thanks @facaiy for the changes. I wonder if the code could `collect` all
the columns with incorrect type in one go (rather than reporting issues column
by column until a user fixed all
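The suggestion above can be sketched as follows (hypothetical column and type names, not the actual `VectorAssembler` code): validate every column first, then report all offenders in a single error message instead of failing on the first bad column.

```scala
// Sketch: collect every column whose type differs from the expected one,
// then build one combined report instead of erroring column by column.
val expectedType = "double"
val schema = Map("age" -> "double", "name" -> "string", "tags" -> "array")

val badColumns = schema.collect {
  case (col, tpe) if tpe != expectedType => s"$col ($tpe)"
}.toSeq.sorted

val report =
  if (badColumns.isEmpty) "OK"
  else s"Columns with incorrect type: ${badColumns.mkString(", ")}"
```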
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/18539
[SPARK-21313][SS] ConsoleSink's string representation
## What changes were proposed in this pull request?
Add `toString` with options for `ConsoleSink` so it shows nicely in
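The idea can be sketched with a minimal stand-in class (illustrative, not Spark's actual `ConsoleSink`): overriding `toString` to include the sink's options gives the web UI something more useful than the default class-name@hash rendering.

```scala
// Toy sink that renders its options in toString, e.g. for display in a UI.
case class SinkWithOptions(name: String, options: Map[String, String]) {
  override def toString: String =
    s"$name[${options.map { case (k, v) => s"$k=$v" }.mkString(",")}]"
}

val sink = SinkWithOptions("ConsoleSink", Map("numRows" -> "10", "truncate" -> "false"))
```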
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18509#discussion_r125571708
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/EventTimeWatermarkExec.scala
---
@@ -81,7 +81,7 @@ class
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18523#discussion_r125397518
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/VectorAssembler.scala ---
@@ -113,12 +113,12 @@ class VectorAssembler @Since("
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18509#discussion_r125353689
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/EventTimeWatermarkExec.scala
---
@@ -81,7 +81,7 @@ class
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/18509
[SS][MINOR] Make EventTimeWatermarkExec explicitly UnaryExecNode
## What changes were proposed in this pull request?
Making EventTimeWatermarkExec explicitly UnaryExecNode
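The intent can be sketched with a toy node hierarchy (illustrative traits, not Spark's actual `SparkPlan` classes): extending a unary trait makes the single-child contract explicit instead of implied by an ad-hoc `children` override.

```scala
// Minimal tree-node hierarchy: UnaryNode pins children to exactly one child.
trait TreeNode { def children: Seq[TreeNode] }

trait UnaryNode extends TreeNode {
  def child: TreeNode
  final def children: Seq[TreeNode] = child :: Nil
}

case class Leaf() extends TreeNode { def children = Nil }

// Declaring the operator as UnaryNode documents (and enforces) one child.
case class Watermark(child: TreeNode) extends UnaryNode

val node = Watermark(Leaf())
```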
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18347#discussion_r122617147
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -465,6 +465,8 @@ case class DataSource
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18347#discussion_r122616876
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -465,6 +465,8 @@ case class DataSource
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18144
@cloud-fan If consistency means removing (not adding), I'm fine with that. Either
way, consistency is the ultimate goal (I myself run into this discrepancy
far too often).
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18074#discussion_r119168394
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/window/WindowExec.scala
---
@@ -153,12 +153,13 @@ case class WindowExec
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18144
@cloud-fan I don't understand why that would ever be an issue. The API is
not consistent and I often run into it.
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/18074
Hey @srowen, could you review the changes again and possibly accept them? Thanks!
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18074#discussion_r118788857
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/window/WindowExec.scala
---
@@ -153,19 +153,24 @@ case class WindowExec
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/15575
I'm late with this, but just leaving it for future code reviewers...
I think the change took the most extreme path where even such simple
`outputPartitioning` as the o
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/18074#discussion_r118436041
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/RelationalGroupedDataset.scala ---
@@ -35,12 +35,13 @@ import
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/18074
[DOCS][MINOR] Scaladoc fixes (aka typo hunting)
## What changes were proposed in this pull request?
Minor changes to scaladoc
## How was this patch tested?
Local
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/18026
[SPARK-16202][SQL][DOC] Follow-up to Correct The Description of
CreatableRelationProvider's createRelation
## What changes were proposed in this pull request?
Follow-up to
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17917
https://cloud.githubusercontent.com/assets/62313/25960541/879096ce-3677-11e7-900f-09bd5f200a00.png
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16960
I'll have a look at this this week and send a PR unless you beat me to it
:) Thanks @ala!
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17917#discussion_r115711771
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaRelation.scala
---
@@ -143,4 +143,6 @@ private[kafka010] class
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16960
I think that the commit has left
[numGeneratedRows](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/basicPhysicalOperators.scala#L344
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17904
WFM. Thanks @ajbozarth!
```
$ git fetch origin pull/17904/head:17904
$ gco 17904
$ ./build/mvn -Phadoop-2.7,yarn,mesos,hive,hive-thriftserver -DskipTests
clean install
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17917
[SPARK-20600][SS] KafkaRelation should be pretty printed in web UI
## What changes were proposed in this pull request?
User-friendly name of `KafkaRelation` in web UI (under Details
Github user jaceklaskowski closed the pull request at:
https://github.com/apache/spark/pull/17727
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17801
Are the errors (that led to `fails to generate documentation`) caused by my
change? They look very weird to me.
```
[error]
/home/jenkins/workspace/SparkPullRequestBuilder/core/target
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17801
[MINOR][SQL][DOCS] Improve unix_timestamp's scaladoc (and typo hunting)
## What changes were proposed in this pull request?
* Docs are consistent (across different `unix_time
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17727
Is the comment correct then? I don't think so. What about improving it? I
don't mind if we stop discussing it either. It's a tiny change after all (and
don't want to dra
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17727
Fair enough. Let's do it here. Quoting directly from the code:
> Converts a logical plan into zero or more SparkPlans. This API is
exposed for experimenting with the query
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17727
[SQL][MINOR] Remove misleading comment (and tags do better)
## What changes were proposed in this pull request?
Misleading comment removed (and tags do a better job to express
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17712#discussion_r112692885
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -47,20 +47,31 @@ case class UserDefinedFunction
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17712#discussion_r112692504
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -47,20 +47,31 @@ case class UserDefinedFunction
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17712#discussion_r112634952
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/UDFSuite.scala ---
@@ -256,10 +256,12 @@ class UDFSuite extends QueryTest with
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17712#discussion_r112634044
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -47,12 +47,20 @@ case class UserDefinedFunction
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17712#discussion_r112634273
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/UDFSuite.scala ---
@@ -256,10 +256,12 @@ class UDFSuite extends QueryTest with
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17670
I think the change should rather go into `ResolveTableValuedFunctions`,
where the built-in table-valued function `range` is resolved.
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17657
[TEST][MINOR] Replace repartitionBy with distribute in
CollapseRepartitionSuite
## What changes were proposed in this pull request?
Replace non-existent `repartitionBy` with
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r10842
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -60,7 +60,7 @@ import org.apache.spark.util.Utils
* The
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r108777513
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ExpressionParserSuite.scala
---
@@ -26,7 +26,8 @@ import
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r108777037
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala
---
@@ -75,7 +75,6 @@ case class
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r108776915
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/BypassMergeSortShuffleWriter.java
---
@@ -52,16 +52,15 @@
* This class implements
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17417
Executed `cd docs && SKIP_PYTHONDOC=1 SKIP_RDOC=1 jekyll serve` to check
the changes and they seemed fine. I had to fix some extra javadoc-related
places to
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17434#discussion_r108665073
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -492,7 +492,7 @@ class AstBuilder extends
Github user jaceklaskowski closed the pull request at:
https://github.com/apache/spark/pull/17434
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17434
Closing as it was merged into https://github.com/apache/spark/pull/17417.
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17434#discussion_r108634808
--- Diff: core/src/main/java/org/apache/spark/memory/MemoryConsumer.java ---
@@ -60,8 +60,6 @@ protected long getUsed
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17434#discussion_r108633245
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -323,7 +323,7 @@ class SparkSession private(
* // |-- age
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17434#discussion_r108632961
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -492,7 +492,7 @@ class AstBuilder extends
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17434#discussion_r108632590
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/sort/SortShuffleManager.scala ---
@@ -82,13 +82,13 @@ private[spark] class SortShuffleManager
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17417
I'm going to merge the two PRs with your comments applied (i.e. excluding
changes that are not strictly doc-only). Thanks a lot for your time, Sean.
Much appreciated.
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17434
You asked, I delivered @srowen
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17417
Hey @srowen Would appreciate your looking at the changes again and comments
(or merge). Thanks!
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17434
[SQL][DOC][MINOR] Squashing a typo in from_json function
## What changes were proposed in this pull request?
Just squashing a typo in `from_json` function
## How was this
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r108035549
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/Window.scala ---
@@ -113,12 +113,12 @@ object Window {
* Creates a
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r108035498
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/Window.scala ---
@@ -131,9 +131,9 @@ object Window {
* import
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r108035475
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/Window.scala ---
@@ -113,12 +113,12 @@ object Window {
* Creates a
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/17417#discussion_r108035464
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/Window.scala ---
@@ -22,7 +22,7 @@ import org.apache.spark.sql.Column
import
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17417
[SQL][DOC] Use recommended values for row boundaries in Window's scaladoc
## What changes were proposed in this pull request?
Use recommended values fo
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17409
[SQL][MINOR] Fix for typo in Analyzer
## What changes were proposed in this pull request?
Fix for typo in Analyzer
## How was this patch tested?
local build
You
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17337
[SQL][MINOR] Fix scaladoc for UDFRegistration
## What changes were proposed in this pull request?
Fix scaladoc for UDFRegistration
## How was this patch tested
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16061#discussion_r104206250
--- Diff: kubernetes/pom.xml ---
@@ -0,0 +1,54 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16061#discussion_r104206163
--- Diff: kubernetes/pom.xml ---
@@ -0,0 +1,54 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16061#discussion_r104205883
--- Diff: kubernetes/README.md ---
@@ -0,0 +1,21 @@
+# Pre-requisites
+* maven, JDK and all other pre-requisites for building Spark
Github user jaceklaskowski closed the pull request at:
https://github.com/apache/spark/pull/17042
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/17042
Makes sense. Thanks @srowen!
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/17042
[CORE][MINOR] Fix scaladoc
## What changes were proposed in this pull request?
Minor change to scaladoc of `HeartbeatReceiver` (the method is certainly
not for tests only
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16812#discussion_r99935530
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -139,6 +140,20 @@ class CSVSuite extends
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16812#discussion_r99935349
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchemaSuite.scala
---
@@ -73,6 +73,12 @@ class
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16812#discussion_r99934599
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -110,7 +110,11 @@ private[csv] class
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16550#discussion_r95871812
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -835,6 +835,187 @@ public UTF8String translate(Map
dict
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16481#discussion_r95065596
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -494,8 +500,13 @@ case class DataSource
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16492#discussion_r95065464
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamTest.scala ---
@@ -235,7 +235,10 @@ trait StreamTest extends QueryTest with
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/16492#discussion_r95065415
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala ---
@@ -238,7 +238,7 @@ class StreamSuite extends StreamTest
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16475
Closed as per @rxin's request.
Github user jaceklaskowski closed the pull request at:
https://github.com/apache/spark/pull/16475
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16475
Partially agree, @srowen. The reason for the change was the `blockId.isShuffle`
condition that both methods use for their shuffle-specific handling. The
change might not be the most correct one
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16475
Proposed the changes since they make it easier to understand the role of
`getBlockData` vs `getLocalBytes` and, in the end, `ShuffleBlockResolver`. I'm
not saying it should be accepted, bu
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/16475
[MINOR][CORE] Remove code duplication (so the interface is used instead)
## What changes were proposed in this pull request?
Removed code duplication and used the interface instead
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
For reference: [scala-xml
releases](https://github.com/scala/scala-xml/releases)
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
The reason for the update to `scala-xml_2.11-1.0.5.jar` was that once I
updated ScalaTest, Jenkins reported that the dependency list had
changed. That's when I was told about `
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
The tests run locally on my laptop finished after `7431 s`, which is
2 hours (!)
```
[error] (sql/test:test) sbt.TestsFailedException: Tests unsuccessful
[error
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
@srowen Please help as I'm stuck with the `OutOfMemoryError: GC overhead
limit exceeded` error. Should Jenkins run the tests with 6g?
What's even more interesting is that
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
Rebasing with master to trigger tests on Jenkins...(hoping this time they
pass)
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
I'm testing the changes locally with the following:
```
export SCALACTIC_FILL_FILE_PATHNAMES=yes
export SBT_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m"
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
Hey @srowen, any idea about the following error? I'd appreciate any hints
to help me fix it.
```
[info] - can use a custom recovery mode factory (57 milliseconds)
Exce
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
Think the latest test failures are somehow related to:
```
Please set the environment variable SCALACTIC_FILL_FILE_PATHNAMES to yes at
compile time to enable this feature
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
Learnt about `./dev/test-dependencies.sh --replace-manifest` just now.
(Where's this all described?)
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
Just learnt about `export SPARK_TESTING=1` to avoid some test failures.
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
Had to introduce changes to the tests given "Expired deprecations" in
[ScalaTest 3.0.0](http://www.scalatest.org/release_notes/3.0.0).