Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13258
@zsxwing, yeah, let me update this in the next day or so - I was
waiting for https://github.com/apache/spark/pull/13431. Thanks for the reminder!
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13515#discussion_r65812444
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -366,7 +366,7 @@ object SparkSubmit
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/13518
[SPARK-15472][SQL] Add support for writing in `csv`, `json`, `text` formats
in Structured Streaming
## What changes were proposed in this pull request?
This patch adds support for writing
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/13258
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/13518
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13518
Jenkins retest this please
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/13518
GitHub user lw-lin reopened a pull request:
https://github.com/apache/spark/pull/13518
[WIP][SPARK-15472][SQL] Add support for writing in `csv`, `json`, `text`
formats in Structured Streaming
## What changes were proposed in this pull request?
This patch adds support
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/13518
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/13575
[SPARK-15472][SQL] Add support for writing in `csv`, `json`, `text` formats
in Structured Streaming
## What changes were proposed in this pull request?
This patch adds support for writing
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13575
@marmbrus @tdas @zsxwing , would you mind taking a look? Thanks!
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13575#discussion_r66466310
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVRelation.scala
---
@@ -143,39 +146,99 @@ object CSVRelation extends
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13575#discussion_r66467191
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/text/TextFileFormat.scala
---
@@ -120,24 +109,31 @@ class TextFileFormat
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13575#discussion_r66466672
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JsonFileFormat.scala
---
@@ -146,16 +173,53 @@ class JsonFileFormat
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13575#discussion_r66466502
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVRelation.scala
---
@@ -143,39 +146,99 @@ object CSVRelation extends
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13575#discussion_r66467095
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
---
@@ -488,7 +488,12 @@ private[sql] class
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13575#discussion_r66465201
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -246,7 +247,12 @@ case class DataSource
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/13518
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13597#discussion_r66596861
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -572,8 +573,13 @@ final class DataFrameWriter[T] private[sql](ds
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13597
@marmbrus @cloud-fan @zsxwing , would you mind taking a look? Thanks! :-)
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13595
@tdas @zsxwing , would you mind taking a look? Thanks! :-)
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/13595
[MINOR][SQL] Standardize 'continuous queries' to 'streaming
Datasets/DataFrames'
## What changes were proposed in this pull request?
This patch does some replacing (since `streaming
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13595#discussion_r66588898
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -433,8 +433,7 @@ final class DataFrameWriter[T] private[sql](ds:
Dataset
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13595#discussion_r66589007
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataFrameReaderWriterSuite.scala
---
@@ -371,66 +371,80 @@ class
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13595
@zsxwing @tdas, sure, this can wait. Thanks!
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13606
@srowen, the [streaming programming guide -
accumulators-and-broadcast-variables](https://github.com/apache/spark/blob/1e2c9311871968426e019164b129652fd6d0037f/docs/streaming-programming-guide.md
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/13507
[SPARK-15765][SQL][Streaming] Make continuous Parquet writing consistent
with non-continuous Parquet writing
## What changes were proposed in this pull request?
Currently there are some
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13507
@liancheng @tdas @zsxwing would you mind taking a look? Thanks!
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/13597
[SPARK-15871][SQL] Add `assertNotPartitioned` check in `DataFrameWriter`
## What changes were proposed in this pull request?
Sometimes it doesn't make sense to specify partitioning
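The PR title above describes a guard against specifying partitioning where it makes no sense. A minimal, Spark-free sketch of such a check (the class and method names here are hypothetical stand-ins, not the actual `DataFrameWriter` code):

```scala
// Hypothetical sketch of an assertNotPartitioned-style guard: refuse an
// operation when partitioning columns have been specified on the writer.
class SketchWriter {
  private var partitioningColumns: Option[Seq[String]] = None

  def partitionBy(cols: String*): SketchWriter = {
    partitioningColumns = Some(cols)
    this
  }

  def assertNotPartitioned(operation: String): Unit = {
    if (partitioningColumns.isDefined) {
      throw new IllegalArgumentException(
        s"'$operation' does not support partitioning")
    }
  }
}
```

With this sketch, `new SketchWriter().partitionBy("year").assertNotPartitioned("jdbc")` throws, while a writer without `partitionBy` passes the check silently.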
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/11996
@devaraj-kavali @kayousterhout this is good to have, but I just wonder if
this would cause resources to leak? E.g. when the task is in the middle of
releasing resources in a `finally` block -- like
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/13683
[SPARK-15518][Core][Follow-up] Rename LocalSchedulerBackendEndpoint ->
LocalSchedulerBackend
## What changes were proposed in this pull request?
This patch is a follow-up to ht
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13683
@rxin would you mind taking a look? Thanks!
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13595
Thanks!
This patch introduced a compilation error because `DataFrameReader.text`'s
return type had been changed back to `DataFrame` very recently, and I should
have noticed this and updated
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13518
Jenkins retest this please
GitHub user lw-lin reopened a pull request:
https://github.com/apache/spark/pull/13518
[SPARK-15472][SQL] Add support for writing in `csv`, `json`, `text` formats
in Structured Streaming
## What changes were proposed in this pull request?
This patch adds support
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13518
Jenkins retest this please
GitHub user lw-lin reopened a pull request:
https://github.com/apache/spark/pull/13518
[WIP][SPARK-15472][SQL] Add support for writing in `csv`, `json`, `text`
formats in Structured Streaming
## What changes were proposed in this pull request?
This patch adds support
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13518
Jenkins retest this please
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/12981
@srowen, there are no other examples that need an update :-)
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/13258#issuecomment-222035774
@zsxwing sure let's add an abstract layer. I'll rebase and do this in the
next two days or so. Thanks for the review! :-)
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/12981
@rxin sure, I'll resolve the conflicts very soon; and once the Java APIs
are updated in the next couple of days, I'll update the Java examples
accordingly.
Thank you for bringing
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/13685
[SPARK-15963][CORE] Catch `TaskKilledException` correctly in
Executor.TaskRunner
## What changes were proposed in this pull request?
Currently in
[Executor.TaskRunner](https://github.com
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13685
I couldn't come up with good syntax to express something like
```scala
case e @ (_: TaskKilledException) | (_: InterruptedException if task.killed) =>
  ...
```
So this pa
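Scala indeed rejects a guard attached to a single alternative inside a pattern alternation; a guard can only apply to the whole `case`. A runnable sketch of the usual workaround, splitting the conditions into separate cases (the exception and task classes here are local stubs, not Spark's actual `TaskKilledException` or `Task`):

```scala
// Local stand-ins for Spark's types, just for illustration.
class TaskKilledException extends RuntimeException
class Task { @volatile var killed: Boolean = false }

// A guard cannot be scoped to one alternative of `a | b`, so the two
// conditions are expressed as separate cases sharing one outcome.
def classify(e: Throwable, task: Task): String = e match {
  case _: TaskKilledException                 => "killed"
  case _: InterruptedException if task.killed => "killed"
  case _                                      => "other"
}
```

The duplication of the right-hand side is the price of splitting the cases; when the shared body is large, extracting it into a helper method keeps the match compact.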
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13685
Hi @squito thanks for the comments!
> how you'd have a TaskKilledException, but without setting the task to
`killed`
This can be reproduced when a task gets
killed ([Executor#L
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13518
Jenkins retest this please
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/13575
GitHub user lw-lin reopened a pull request:
https://github.com/apache/spark/pull/13518
[WIP][SPARK-15472][SQL] Add support for writing in `csv`, `json`, `text`
formats in Structured Streaming
## What changes were proposed in this pull request?
This patch adds support
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13518
Jenkins retest this please
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/13518
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13652
Hi @davies, two tests still fail after this patch when I build locally:
```
- from UTC timestamp *** FAILED ***
"2016-03-13 0[2]:00:00.0" did not equal "2016-03-
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13685
Addressed all comments. @squito would you take another look? Thanks!
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13685#discussion_r68165895
--- Diff: core/src/test/scala/org/apache/spark/executor/ExecutorSuite.scala
---
@@ -0,0 +1,123 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13705
I personally feel that it would be great if we can also support writing in
`csv`, `json`, `txt` formats in Structured Streaming for the 2.0 release (I'd
like to submit patches for `json`, `txt` very
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/13705
[SPARK-15472][SQL] Add support for writing in `csv` format in Structured
Streaming
## What changes were proposed in this pull request?
This patch adds support for writing in `csv` format
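At the core of any csv sink sits the conversion of one row into a delimited line. A small, Spark-free sketch of that step (a hypothetical helper, not the code in this PR), quoting fields that contain the delimiter, quotes, or newlines:

```scala
// Convert one row of string fields into a single CSV line, RFC-4180 style:
// fields containing the delimiter, a quote, or a newline are wrapped in
// quotes, and embedded quotes are doubled.
def toCsvLine(row: Seq[String], delimiter: String = ","): String =
  row.map { field =>
    if (field.contains(delimiter) || field.contains("\"") || field.contains("\n"))
      "\"" + field.replace("\"", "\"\"") + "\""
    else field
  }.mkString(delimiter)
```

In a real sink this per-row formatting is only half the work; the other half is the transactional file-commit protocol that makes a streaming write exactly-once, which is what these PRs wire the formats into.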
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/13685
Thanks, @squito @markhamstra !
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11626#issuecomment-194672903
@rxin thanks !
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/11634
[SPARK-13618][STREAMING][WEB-UI] Make Streaming web UI page display
rate-limit lines on statistics graph - Part 3
## What changes were proposed in this pull request?
(Please fill
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/11633
[SPARK-13618][STREAMING][WEB-UI] Make Streaming web UI page display
rate-limit lines on statistics graph - Part 2
## What changes were proposed in this pull request?
(Please fill
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11470#issuecomment-194874328
@zsxwing
Points taken -- indeed there's no need to display a rate-limit line when an
`InputDStream` instance is not _under rate control_: I've added a field
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11633#issuecomment-194881019
All three parts are now ready for review. I've also drafted a design doc
(please see [Spark-13618](https://issues.apache.org/jira/browse/SPARK-13618)),
hopefully it can
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11634#issuecomment-194881087
All three parts are now ready for review. I've also drafted a design doc
(please see [Spark-13618](https://issues.apache.org/jira/browse/SPARK-13618)),
hopefully it can
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11470#issuecomment-194880816
All three parts are now ready for review. I've also drafted a design doc
(please see [Spark-13618](https://issues.apache.org/jira/browse/SPARK-13618)),
hopefully it can
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/11650
[STREAMING][MINOR] Fix a duplicate "be" in comments
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/lw-lin/spark typo
Alternative
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11650#issuecomment-195299373
@rxin @zsxwing Would you take a look when you have time? Thanks!
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11650#issuecomment-195655635
Sure. @rxin thank you for your review and patient guidance!
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/11626
Enable test: o.a.s.streaming.JobGeneratorSuite "Do not clear received…
## How was this patch tested?
unit test
You can merge this pull request into a Git repository by ru
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11626#issuecomment-194639133
Local tests passed, so maybe we can enable this again for 2.0.0. @rxin
@zsxwing would you mind taking a look please? Thanks!
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11650#issuecomment-195335583
Sure, I'll close this for now. Thanks for your time!
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/11650
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11643#issuecomment-195148951
All three parts are now ready for review. I've also drafted a design doc
(please see [Spark-13618](https://issues.apache.org/jira/browse/SPARK-13618)),
hopefully it can
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/11643
Display rate limit on streaming web ui part 3
## What changes were proposed in this pull request?
This PR makes Streaming web UI display rate-limit lines in the statistics
graph
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/11634
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/11645#discussion_r55794434
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStore.scala
---
@@ -0,0 +1,462 @@
+/*
+ * Licensed
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/11645#discussion_r55794534
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStore.scala
---
@@ -0,0 +1,462 @@
+/*
+ * Licensed
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/11645#discussion_r55794512
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStore.scala
---
@@ -0,0 +1,462 @@
+/*
+ * Licensed
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11633#issuecomment-203804570
This PR has many conflicts to resolve, so I'm closing this for now and will
re-open later, thanks.
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11470#issuecomment-203804602
This PR has many conflicts to resolve, so I'm closing this for now and will
re-open later, thanks.
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/11470
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/11643
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11643#issuecomment-203804471
This PR has many conflicts to resolve, so I'm closing this for now and will
re-open later, thanks.
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/11633
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/12126#issuecomment-209216092
@rxin would you mind taking a look, or should I close this PR? Thank you!
:-)
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/12323#issuecomment-209201130
@zsxwing thank you for the review & merging ! :-)
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/12126#issuecomment-209217089
Jenkins retest this please
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/12323
[SPARK-14556][SQL] Code clean-ups for package
o.a.s.sql.execution.streaming.state
## What changes were proposed in this pull request?
- `StateStoreConf.**max**DeltasForSnapshot
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/12323#issuecomment-208715736
@srowen @zsxwing would you mind taking a look at this? Thanks! :-)
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/12323#discussion_r59318208
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/HDFSBackedStateStoreProvider.scala
---
@@ -161,24 +163,27 @@ private[state
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/12323#discussion_r59317846
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStoreRDD.scala
---
@@ -22,12 +22,12 @@ import scala.reflect.ClassTag
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/12323#discussion_r59317875
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/HDFSBackedStateStoreProvider.scala
---
@@ -506,7 +512,6 @@ private[state
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/12323#discussion_r59317781
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStoreConf.scala
---
@@ -26,12 +26,11 @@ private[streaming] class
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/12145
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/12174#issuecomment-206107803
@srowen thanks for the fix.
However, instead of allowing users to call these obsolete constructors and
improving the error messages, I'm inclined to guide users
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/12035#issuecomment-206123094
@srowen sure, thank you for this review, :-)
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/12035
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/11845
[SPARK-14025][STREAMING][WEBUI] Fix streaming job descriptions on the event
line
## What changes were proposed in this pull request?
Removed the extra `...` for each streaming job's
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11845#issuecomment-198710234
Actually we've intentionally escaped the description for the event line, so
that it will be rendered as plain text; please see
https://github.com/apache/spark/blob
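The escaping mentioned above amounts to replacing the HTML-significant characters before the description reaches the timeline markup. A minimal illustration of that idea (a stand-in helper, not Spark's actual escaping code):

```scala
// Replace the HTML-significant characters so a job description renders as
// plain text in the timeline tooltip instead of being parsed as markup.
// The '&' must be replaced first so later entities are not double-escaped.
def escapeForTimeline(desc: String): String =
  desc.replace("&", "&amp;")
      .replace("<", "&lt;")
      .replace(">", "&gt;")
      .replace("\"", "&quot;")
```

The ordering matters: escaping `&` last would turn an already-produced `&lt;` into `&amp;lt;`, which is exactly the kind of double-escaping bug this discussion is trying to avoid.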
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11845#issuecomment-198708658
Besides, the blue/green bar in the event line itself is clickable,
linking to the specific job page. The `` thing is superfluous, let's
figure out how to remove
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11845#issuecomment-198695846
@andrewor14 @zsxwing would you mind taking a look at this when you have
time? Thanks!
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11845#issuecomment-198707142
@srowen thanks for looking at this!
I believe job descriptions were intended to contain only plain text at
first, but HTML was introduced for streaming
Github user lw-lin commented on the pull request:
https://github.com/apache/spark/pull/11845#issuecomment-198718502
Ah, I guess the reason is:
- the blue box in the event timeline is itself clickable for both
streaming and non-streaming jobs, so it's unnecessary for it to contain any