[jira] [Updated] (SPARK-32869) Ignore deprecation warnings for sbt

2020-09-13 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32869:
---
Parent: SPARK-25075
Issue Type: Sub-task  (was: Bug)

> Ignore deprecation warnings for sbt
> ---
>
> Key: SPARK-32869
> URL: https://issues.apache.org/jira/browse/SPARK-32869
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Minor
>
> If we build Spark with scala-2.13 and sbt, some deprecation warnings are 
> treated as fatal.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32869) Ignore deprecation warnings for sbt

2020-09-13 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32869:
---
Description: If we build Spark with scala-2.13 and sbt, some deprecation 
warnings are treated as fatal.  (was: If we build Spark with scala-2.13 and 
sbt, some deprecated warnings are treated as fatal.)

> Ignore deprecation warnings for sbt
> ---
>
> Key: SPARK-32869
> URL: https://issues.apache.org/jira/browse/SPARK-32869
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Minor
>
> If we build Spark with scala-2.13 and sbt, some deprecation warnings are 
> treated as fatal.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32869) Ignore deprecation warnings for sbt

2020-09-13 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32869:
--

 Summary: Ignore deprecation warnings for sbt
 Key: SPARK-32869
 URL: https://issues.apache.org/jira/browse/SPARK-32869
 Project: Spark
  Issue Type: Bug
  Components: Build
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


If we build Spark with scala-2.13 and sbt, some deprecation warnings are treated 
as fatal.
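
For illustration only (an assumed shape of the fix, not the actual patch): under 
Scala 2.13, sbt can exempt the deprecation category from -Xfatal-warnings via 
-Wconf.
{code:java}
// Hypothetical build.sbt setting (not the actual Spark change): silence the
// deprecation category so -Xfatal-warnings no longer promotes it to an error.
scalacOptions ++= Seq(
  "-Xfatal-warnings",
  "-Wconf:cat=deprecation:s" // "s" = silent
)
{code}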



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32843) Add an optimization rule to combine range repartition and range operation

2020-09-10 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32843:
--

 Summary: Add an optimization rule to combine range repartition and 
range operation
 Key: SPARK-32843
 URL: https://issues.apache.org/jira/browse/SPARK-32843
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


If all operations between the range repartition and the range operation are 
OrderPreservingUnaryNode, and the ordering of the repartition and the range 
operation is the same, they can be combined.
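
A sketch of the target pattern (my illustration and my reading of "range 
operation" as the Range operator, not taken from the ticket): Range already 
emits ids in ascending order, so a range repartition with the same ordering, 
separated only by order-preserving unary nodes, is a candidate for combining.
{code:java}
// Illustrative candidate (assumed): Range outputs id in ascending order and
// the repartition below range-partitions by id ascending, with only an
// order-preserving Project in between.
val df = spark.range(1, 100)        // Range operator, ordered by id ASC
  .select($"id")                    // OrderPreservingUnaryNode
  .repartitionByRange(10, $"id")    // same ordering as the Range output
df.explain(true)
{code}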



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32822) Change the number of partitions to zero when a range is empty with WholeStageCodegen disabled or fallen back

2020-09-08 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32822:
---
Summary: Change the number of partitions to zero when a range is empty with 
WholeStageCodegen disabled or fallen back  (was: Change the number of partition 
to zero when a range is empty with WholeStageCodegen disabled or falled back)

> Change the number of partitions to zero when a range is empty with 
> WholeStageCodegen disabled or fallen back
> 
>
> Key: SPARK-32822
> URL: https://issues.apache.org/jira/browse/SPARK-32822
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
>
> If WholeStageCodegen takes effect, the number of partitions of an empty range 
> will be changed to zero, but it isn't changed when WholeStageCodegen is 
> disabled or fallen back.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32822) Change the number of partitions to zero when a range is empty with WholeStageCodegen disabled or fallen back

2020-09-08 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32822:
---
Description: If WholeStageCodegen takes effect, the number of partitions of an 
empty range will be changed to zero, but it isn't changed when 
WholeStageCodegen is disabled or fallen back.  (was: If WholeStageCodegen 
effects, the number of partition of an empty range will be changed to zero. But 
it doesn't changed when WholeStageCodegen is disabled or falled back.)

> Change the number of partitions to zero when a range is empty with 
> WholeStageCodegen disabled or fallen back
> 
>
> Key: SPARK-32822
> URL: https://issues.apache.org/jira/browse/SPARK-32822
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
>
> If WholeStageCodegen takes effect, the number of partitions of an empty range 
> will be changed to zero, but it isn't changed when WholeStageCodegen is 
> disabled or fallen back.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32822) Change the number of partitions to zero when a range is empty with WholeStageCodegen disabled or fallen back

2020-09-08 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32822:
--

 Summary: Change the number of partitions to zero when a range is 
empty with WholeStageCodegen disabled or fallen back
 Key: SPARK-32822
 URL: https://issues.apache.org/jira/browse/SPARK-32822
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


If WholeStageCodegen takes effect, the number of partitions of an empty range 
will be changed to zero, but it isn't changed when WholeStageCodegen is 
disabled or fallen back.
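
A minimal spark-shell repro sketch (assumed from the description above, not 
taken from the ticket):
{code:java}
// With whole-stage codegen enabled, an empty Range collapses to zero
// partitions; with it disabled, it does not (per the report).
spark.conf.set("spark.sql.codegen.wholeStage", true)
spark.range(1, 1).rdd.getNumPartitions   // 0
spark.conf.set("spark.sql.codegen.wholeStage", false)
spark.range(1, 1).rdd.getNumPartitions   // non-zero before this change
{code}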



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32820) Remove redundant shuffle exchanges inserted by EnsureRequirements

2020-09-08 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32820:
--

 Summary: Remove redundant shuffle exchanges inserted by 
EnsureRequirements
 Key: SPARK-32820
 URL: https://issues.apache.org/jira/browse/SPARK-32820
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


Redundant repartition operations are removed by the CollapseRepartition rule, 
but EnsureRequirements can insert another HashPartitioning or RangePartitioning 
immediately after the repartition, leaving adjacent ShuffleExchanges in the 
physical plan.

 
{code:java}
val ordered = spark.range(1, 100).repartitionByRange(10, $"id".desc).orderBy($"id")
ordered.explain(true)

...

== Physical Plan ==
*(2) Sort [id#0L ASC NULLS FIRST], true, 0
+- Exchange rangepartitioning(id#0L ASC NULLS FIRST, 200), true, [id=#15]
   +- Exchange rangepartitioning(id#0L DESC NULLS LAST, 10), false, [id=#14]
      +- *(1) Range (1, 100, step=1, splits=12){code}
{code:java}
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", 0)
val left = Seq(1,2,3).toDF.repartition(10, $"value")
val right = Seq(1,2,3).toDF
val joined = left.join(right, left("value") + 1 === right("value"))
joined.explain(true)

...

== Physical Plan ==
*(3) SortMergeJoin [(value#7 + 1)], [value#12], Inner
:- *(1) Sort [(value#7 + 1) ASC NULLS FIRST], false, 0
:  +- Exchange hashpartitioning((value#7 + 1), 200), true, [id=#67]
:     +- Exchange hashpartitioning(value#7, 10), false, [id=#63]
:        +- LocalTableScan [value#7]
+- *(2) Sort [value#12 ASC NULLS FIRST], false, 0
   +- Exchange hashpartitioning(value#12, 200), true, [id=#68]
      +- LocalTableScan [value#12]{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32774) Don't track docs/.jekyll-cache

2020-09-01 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32774:
--

 Summary: Don't track docs/.jekyll-cache
 Key: SPARK-32774
 URL: https://issues.apache.org/jira/browse/SPARK-32774
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


I noticed that docs/.jekyll-cache can sometimes be created, and it should not 
be tracked.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-32119) ExecutorPlugin doesn't work with Standalone Cluster and Kubernetes with --jars

2020-09-01 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17188808#comment-17188808
 ] 

Kousuke Saruta commented on SPARK-32119:


Yeah, it's a bug fix so we may have a chance to backport this fix to 3.0.1.
I'll make a backport PR.

> ExecutorPlugin doesn't work with Standalone Cluster and Kubernetes with --jars
> --
>
> Key: SPARK-32119
> URL: https://issues.apache.org/jira/browse/SPARK-32119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.0.1, 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Major
> Fix For: 3.1.0
>
>
> ExecutorPlugin can't work with Standalone Cluster and Kubernetes when a jar 
> which contains plugins, and files used by the plugins, are added by the 
> --jars and --files options of spark-submit.
> This is because jars and files added by --jars and --files are not loaded on 
> Executor initialization.
> I confirmed it works with YARN because jars/files are distributed via the 
> distributed cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32773) The behavior of listJars and listFiles is not consistent between YARN and other cluster managers

2020-09-01 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32773:
---
Description: 
Jars/files specified with --jars / --files options are listed by sc.listJars 
and listFiles except when we run apps on YARN.
If we run apps not on YARN, those files are served by the embedded file server 
in the driver and listJars/listFiles list the served files.
But with YARN, such files specified by the options are not served by the 
embedded file server so listJars and listFiles don't list them.

  was:
Jars/files specified with --jars / --files options are listed by sc.listJars 
and listFiles except when we run apps on YARN.

If we run apps not on YARN, those files are served by the embedded file server 
in the driver and listJars/listFiles list the served files.

But with YARN, such files specified by the options are not served by the 
embedded file server so listJars and listFiles don't list them.


> The behavior of listJars and listFiles is not consistent between YARN and 
> other cluster managers
> 
>
> Key: SPARK-32773
> URL: https://issues.apache.org/jira/browse/SPARK-32773
> Project: Spark
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Major
>
> Jars/files specified with --jars / --files options are listed by sc.listJars 
> and listFiles except when we run apps on YARN.
> If we run apps not on YARN, those files are served by the embedded file 
> server in the driver and listJars/listFiles list the served files.
> But with YARN, such files specified by the options are not served by the 
> embedded file server so listJars and listFiles don't list them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32773) The behavior of listJars and listFiles is not consistent between YARN and other cluster managers

2020-09-01 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32773:
---
Description: 
Jars/files specified with --jars / --files options are listed by sc.listJars 
and listFiles except when we run apps on YARN.

If we run apps not on YARN, those files are served by the embedded file server 
in the driver and listJars/listFiles list the served files.

But with YARN, such files specified by the options are not served by the 
embedded file server so listJars and listFiles don't list them.

  was:
Jars/files specified with --jars/--files options are listed by sc.listJars and 
listFiles except when we run apps on YARN.

If we run apps not on YARN, those files are served by the embedded file server 
in the driver and listJars/listFiles list the served files.

But with YARN, such files specified by the options are not served by the 
embedded file server so listJars and listFiles don't list them.


> The behavior of listJars and listFiles is not consistent between YARN and 
> other cluster managers
> 
>
> Key: SPARK-32773
> URL: https://issues.apache.org/jira/browse/SPARK-32773
> Project: Spark
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Major
>
> Jars/files specified with --jars / --files options are listed by sc.listJars 
> and listFiles except when we run apps on YARN.
> If we run apps not on YARN, those files are served by the embedded file 
> server in the driver and listJars/listFiles list the served files.
> But with YARN, such files specified by the options are not served by the 
> embedded file server so listJars and listFiles don't list them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32773) The behavior of listJars and listFiles is not consistent between YARN and other cluster managers

2020-09-01 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32773:
---
Summary: The behavior of listJars and listFiles is not consistent between 
YARN and other cluster managers  (was: The behavior of listJars and listFiles 
is not consistent with YARN and other cluster managers)

> The behavior of listJars and listFiles is not consistent between YARN and 
> other cluster managers
> 
>
> Key: SPARK-32773
> URL: https://issues.apache.org/jira/browse/SPARK-32773
> Project: Spark
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Major
>
> Jars/files specified with --jars/--files options are listed by sc.listJars 
> and listFiles except when we run apps on YARN.
> If we run apps not on YARN, those files are served by the embedded file 
> server in the driver and listJars/listFiles list the served files.
> But with YARN, such files specified by the options are not served by the 
> embedded file server so listJars and listFiles don't list them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32773) The behavior of listJars and listFiles is not consistent with YARN and other cluster managers

2020-09-01 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32773:
--

 Summary: The behavior of listJars and listFiles is not consistent 
with YARN and other cluster managers
 Key: SPARK-32773
 URL: https://issues.apache.org/jira/browse/SPARK-32773
 Project: Spark
  Issue Type: Bug
  Components: YARN
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


Jars/files specified with --jars/--files options are listed by sc.listJars and 
listFiles except when we run apps on YARN.

If we run apps not on YARN, those files are served by the embedded file server 
in the driver and listJars/listFiles list the served files.

But with YARN, such files specified by the options are not served by the 
embedded file server so listJars and listFiles don't list them.
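
For illustration (an assumed spark-shell session, not taken from the ticket), 
the inconsistency would show up like this:
{code:java}
// Launched with: spark-shell --master spark://... --jars /path/to/extra.jar
// On non-YARN cluster managers the jar is served by the driver's embedded file
// server and appears in the listing; on YARN it goes through the distributed
// cache and is missing from it.
sc.listJars()    // e.g. Seq("spark://driver-host:port/jars/extra.jar")
sc.listFiles()
{code}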



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32772) Reduce log messages for spark-sql CLI

2020-09-01 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32772:
---
Description: 
When we launch the spark-sql CLI, too many log messages are shown and it's 
sometimes difficult to find the result of a query.

So I think it's better to reduce log messages, as the spark-shell and pyspark 
CLIs do.

  was:
When we launch spark-sql CLI, too many log messages are shown and it's 
sometimes difficult to find the result of query.

So I think it's better to suppress log like spark-shell and pyspark CLI.


> Reduce log messages for spark-sql CLI
> -
>
> Key: SPARK-32772
> URL: https://issues.apache.org/jira/browse/SPARK-32772
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Minor
>
> When we launch the spark-sql CLI, too many log messages are shown and it's 
> sometimes difficult to find the result of a query.
> So I think it's better to reduce log messages, as the spark-shell and pyspark 
> CLIs do.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32772) Reduce log messages for spark-sql CLI

2020-09-01 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32772:
---
Summary: Reduce log messages for spark-sql CLI  (was: Suppress log for 
spark-sql CLI)

> Reduce log messages for spark-sql CLI
> -
>
> Key: SPARK-32772
> URL: https://issues.apache.org/jira/browse/SPARK-32772
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Minor
>
> When we launch the spark-sql CLI, too many log messages are shown and it's 
> sometimes difficult to find the result of a query.
> So I think it's better to suppress logs, as the spark-shell and pyspark CLIs 
> do.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32772) Suppress log for spark-sql CLI

2020-09-01 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32772:
--

 Summary: Suppress log for spark-sql CLI
 Key: SPARK-32772
 URL: https://issues.apache.org/jira/browse/SPARK-32772
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


When we launch the spark-sql CLI, too many log messages are shown and it's 
sometimes difficult to find the result of a query.

So I think it's better to suppress logs, as the spark-shell and pyspark CLIs do.
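
As a rough illustration of the intent (my sketch, not the actual change), the 
same quietness can be approximated manually at session start:
{code:java}
// Lowering the log level by hand approximates what spark-shell and the
// pyspark CLI already do for their consoles.
spark.sparkContext.setLogLevel("WARN")
{code}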



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32771) The example of expressions.Aggregator in Javadoc / Scaladoc is wrong

2020-09-01 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32771:
--

 Summary: The example of expressions.Aggregator in Javadoc / 
Scaladoc is wrong
 Key: SPARK-32771
 URL: https://issues.apache.org/jira/browse/SPARK-32771
 Project: Spark
  Issue Type: Bug
  Components: docs
Affects Versions: 3.0.0, 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


There is an example of expressions.Aggregator in the Javadoc and Scaladoc, as 
follows.
{code:java}
val customSummer =  new Aggregator[Data, Int, Int] {
  def zero: Int = 0
  def reduce(b: Int, a: Data): Int = b + a.i
  def merge(b1: Int, b2: Int): Int = b1 + b2
  def finish(r: Int): Int = r
}.toColumn(){code}
But this example doesn't work because it doesn't define bufferEncoder and 
outputEncoder.
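
A working variant would also define the two encoders, e.g. (my sketch; 
Encoders.scalaInt is one obvious choice for an Int buffer and output):
{code:java}
import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator

case class Data(i: Int)

val customSummer = new Aggregator[Data, Int, Int] {
  def zero: Int = 0
  def reduce(b: Int, a: Data): Int = b + a.i
  def merge(b1: Int, b2: Int): Int = b1 + b2
  def finish(r: Int): Int = r
  // The two members the documented example is missing:
  def bufferEncoder: Encoder[Int] = Encoders.scalaInt
  def outputEncoder: Encoder[Int] = Encoders.scalaInt
}.toColumn
{code}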



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32610) Fix the link to metrics.dropwizard.io in monitoring.md to refer to the proper version

2020-08-13 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32610:
--

 Summary: Fix the link to metrics.dropwizard.io in monitoring.md to 
refer to the proper version
 Key: SPARK-32610
 URL: https://issues.apache.org/jira/browse/SPARK-32610
 Project: Spark
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 3.0.0, 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


There are links to metrics.dropwizard.io in monitoring.md, but the link targets 
refer to version 3.1.0 of the library, while we use 4.1.1.

Now that users can create their own metrics using the dropwizard library, it's 
better to fix the links to refer to the proper version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32538) Use local time zone for the timestamp logged in unit-tests.log

2020-08-05 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32538:
---
Summary: Use local time zone for the timestamp logged in unit-tests.log  
(was: Timestamp logged in unit-tests.log is fixed to Ameria/Los_Angeles.)

> Use local time zone for the timestamp logged in unit-tests.log
> --
>
> Key: SPARK-32538
> URL: https://issues.apache.org/jira/browse/SPARK-32538
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0, 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Minor
>
> SparkFunSuite fixes the default time zone to America/Los_Angeles, so the 
> timestamp logged in unit-tests.log is also based on the fixed time zone.
> It's confusing for developers whose time zone is not America/Los_Angeles.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32538) Timestamp logged in unit-tests.log is fixed to America/Los_Angeles.

2020-08-05 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32538:
--

 Summary: Timestamp logged in unit-tests.log is fixed to 
America/Los_Angeles.
 Key: SPARK-32538
 URL: https://issues.apache.org/jira/browse/SPARK-32538
 Project: Spark
  Issue Type: Bug
  Components: Tests
Affects Versions: 3.0.0, 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


SparkFunSuite fixes the default time zone to America/Los_Angeles, so the 
timestamp logged in unit-tests.log is also based on the fixed time zone.

It's confusing for developers whose time zone is not America/Los_Angeles.
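
The mechanism, as I understand it (a sketch of an assumption, not the exact 
suite code): pinning the JVM default time zone also pins the timestamps the 
logging layer writes.
{code:java}
// Assumed simplification of what SparkFunSuite does: once the JVM default time
// zone is pinned, log timestamps in unit-tests.log follow it too.
import java.util.TimeZone
TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"))
{code}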



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32525) The layout of monitoring.html is broken

2020-08-03 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32525:
--

 Summary: The layout of monitoring.html is broken
 Key: SPARK-32525
 URL: https://issues.apache.org/jira/browse/SPARK-32525
 Project: Spark
  Issue Type: Bug
  Components: Documentation
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


The layout of monitoring.html is broken because two HTML tags are left unclosed 
in monitoring.md.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32119) ExecutorPlugin doesn't work with Standalone Cluster and Kubernetes with --jars

2020-07-31 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32119:
---
Description: 
ExecutorPlugin can't work with Standalone Cluster and Kubernetes when a jar 
which contains plugins, and files used by the plugins, are added by the --jars 
and --files options of spark-submit.

This is because jars and files added by --jars and --files are not loaded on 
Executor initialization.
I confirmed it works with YARN because jars/files are distributed via the 
distributed cache.

  was:
ExecutorPlugin can't work with Standalone Cluster (maybe with other cluster 
manager too except YARN. ) 
 when a jar which contains plugins and files used by the plugins are added by 
--jars and --files option with spark-submit.

This is because jars and files added by --jars and --files are not loaded on 
Executor initialization.
 I confirmed it works with YARN because jars/files are distributed as 
distributed cache.


> ExecutorPlugin doesn't work with Standalone Cluster and Kubernetes with --jars
> --
>
> Key: SPARK-32119
> URL: https://issues.apache.org/jira/browse/SPARK-32119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.0.1, 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Major
>
> ExecutorPlugin can't work with Standalone Cluster and Kubernetes when a jar 
> which contains plugins, and files used by the plugins, are added by the 
> --jars and --files options of spark-submit.
> This is because jars and files added by --jars and --files are not loaded on 
> Executor initialization.
> I confirmed it works with YARN because jars/files are distributed via the 
> distributed cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32119) ExecutorPlugin doesn't work with Standalone Cluster and Kubernetes with --jars

2020-07-31 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32119:
---
Summary: ExecutorPlugin doesn't work with Standalone Cluster and Kubernetes 
with --jars  (was: ExecutorPlugin doesn't work with Standalone Cluster)

> ExecutorPlugin doesn't work with Standalone Cluster and Kubernetes with --jars
> --
>
> Key: SPARK-32119
> URL: https://issues.apache.org/jira/browse/SPARK-32119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.0.1, 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Major
>
> ExecutorPlugin can't work with Standalone Cluster (maybe with other cluster 
> managers too, except YARN) when a jar which contains plugins, and files used 
> by the plugins, are added by the --jars and --files options of spark-submit.
> This is because jars and files added by --jars and --files are not loaded on 
> Executor initialization.
> I confirmed it works with YARN because jars/files are distributed via the 
> distributed cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32462) Don't save the previous search text for datatable

2020-07-27 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32462:
---
Description: 
DataTable is used in the stage page and executors page for pagination and for 
filtering tasks/executors by search text.

In the current implementation, the search text is saved, so if we visit the 
stage page for a job, the previous search text is filled into the textbox and 
the task table is filtered.

I'm sometimes surprised by this behavior, as the stage page lists no tasks 
because they are filtered by the previous search text.

  was:
DataTable is used in stage-page and executors-page for pagination and filter 
tasks/executors by search text.

In the current implementation, the keyword is saved so if we visit stage-page 
for a job, the previous search text is filled in the textbox and the task table 
is filtered.

I'm sometimes surprised by this behavior as the stage-page lists no tasks 
because tasks are filtered by the previous search text.


> Don't save the previous search text for datatable
> -
>
> Key: SPARK-32462
> URL: https://issues.apache.org/jira/browse/SPARK-32462
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Minor
>
> DataTable is used in the stage page and executors page for pagination and for 
> filtering tasks/executors by search text.
> In the current implementation, the search text is saved, so if we visit the 
> stage page for a job, the previous search text is filled into the textbox and 
> the task table is filtered.
> I'm sometimes surprised by this behavior, as the stage page lists no tasks 
> because they are filtered by the previous search text.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32462) Don't save the previous search text for datatable

2020-07-27 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32462:
--

 Summary: Don't save the previous search text for datatable
 Key: SPARK-32462
 URL: https://issues.apache.org/jira/browse/SPARK-32462
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


DataTable is used in the stage page and executors page for pagination and for 
filtering tasks/executors by search text.

In the current implementation, the keyword is saved, so if we visit the stage 
page for a job, the previous search text is filled into the textbox and the 
task table is filtered.

I'm sometimes surprised by this behavior, as the stage page lists no tasks 
because they are filtered by the previous search text.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-32236) Local cluster should shutdown gracefully

2020-07-09 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta resolved SPARK-32236.

Resolution: Duplicate

> Local cluster should shutdown gracefully
> 
>
> Key: SPARK-32236
> URL: https://issues.apache.org/jira/browse/SPARK-32236
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Minor
>
> Almost every time we call sc.stop in local-cluster mode, exceptions like the 
> following will be thrown.
> {code:java}
> 20/07/09 08:36:45 ERROR TransportRequestHandler: Error while invoking 
> RpcHandler#receive() for one-way message.
> org.apache.spark.rpc.RpcEnvStoppedException: RpcEnv already stopped.
> at 
> org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:167)
> at 
> org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:150)
> at 
> org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:691)
> at 
> org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:253)
> at 
> org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:111)
> at 
> org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:140)
> at 
> org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:53)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
> at 
> io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
> at 
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
> at 
> org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:102)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
> at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
> at 
> io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
> at 
> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> The reason is that the asynchronously sent RPC message KillExecutor from the 
> Master can be processed after the message loop in the Worker has stopped.

[jira] [Created] (SPARK-32236) Local cluster should shutdown gracefully

2020-07-08 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32236:
--

 Summary: Local cluster should shutdown gracefully
 Key: SPARK-32236
 URL: https://issues.apache.org/jira/browse/SPARK-32236
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


Almost every time we call sc.stop in local-cluster mode, exceptions like the 
following will be thrown.
{code:java}
20/07/09 08:36:45 ERROR TransportRequestHandler: Error while invoking 
RpcHandler#receive() for one-way message.
org.apache.spark.rpc.RpcEnvStoppedException: RpcEnv already stopped.
at 
org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:167)
at 
org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:150)
at 
org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:691)
at 
org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:253)
at 
org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:111)
at 
org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:140)
at 
org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:53)
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at 
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at 
org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:102)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at 
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at 
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
{code}
The reason is that the asynchronously sent RPC message KillExecutor from the 
Master can be processed after the message loop in the Worker has stopped.
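
A repro sketch (assumed, based on the description above):
{code:java}
// Launched with: spark-shell --master "local-cluster[2,1,1024]"
// Stopping the context frequently logs RpcEnvStoppedException because the
// Master's asynchronous KillExecutor message can arrive after the Worker's
// message loop has already stopped.
sc.stop()
{code}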



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32200) Redirect to the history page when accessed to /history without application id

2020-07-08 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32200:
---
Affects Version/s: (was: 3.0.1)
   3.0.0

> Redirect to the history page when accessed to /history without application id
> 
>
> Key: SPARK-32200
> URL: https://issues.apache.org/jira/browse/SPARK-32200
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.0.0, 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Minor
>
> In the current master, when we access /history on the HistoryServer without 
> an application id, status code 400 is returned.
> I wonder if it's better to redirect to the history page instead, for better 
> UX.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32200) Redirect to the history page when accessed to /history without application id

2020-07-08 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32200:
---
Affects Version/s: (was: 3.0.0)

> Redirect to the history page when accessed to /history without application id
> 
>
> Key: SPARK-32200
> URL: https://issues.apache.org/jira/browse/SPARK-32200
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Minor
>
> In the current master, when we access /history on the HistoryServer without 
> an application id, status code 400 is returned.
> I wonder if it's better to redirect to the history page instead, for better 
> UX.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32214) The type conversion function generated in makeFromJava for the "other" type uses a wrong variable.

2020-07-07 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32214:
--

 Summary: The type conversion function generated in makeFromJava 
for the "other" type uses a wrong variable.
 Key: SPARK-32214
 URL: https://issues.apache.org/jira/browse/SPARK-32214
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.0.0, 2.4.6, 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


`makeFromJava` in `EvaluatePython` creates a type conversion function for some 
Java/Scala types.

For the `other` (catch-all) type, the parameter of the conversion function is 
named `obj`, but `other` is mistakenly used rather than `obj` in the function 
body.
{code:java}
case other => (obj: Any) => nullSafeConvert(other)(PartialFunction.empty) {code}
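
Presumably the fix is simply to use the parameter (my reading of the report, 
not the merged patch):
{code:java}
case other => (obj: Any) => nullSafeConvert(obj)(PartialFunction.empty)
{code}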



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32200) Redirect to the history page when accessed to /history without application id

2020-07-06 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32200:
--

 Summary: Redirect to the history page when accessed to /history 
without application id
 Key: SPARK-32200
 URL: https://issues.apache.org/jira/browse/SPARK-32200
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 3.0.1, 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


 

In the current master, when we access /history on the HistoryServer without an 
application id, status code 400 is returned.

I wonder if it's better to redirect to the history page instead, for better UX.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-32153) .m2 repository corruption can happen on Jenkins-worker4

2020-07-06 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152220#comment-17152220
 ] 

Kousuke Saruta commented on SPARK-32153:


Thanks [~shaneknapp]!

> .m2 repository corruption can happen on Jenkins-worker4
> ---
>
> Key: SPARK-32153
> URL: https://issues.apache.org/jira/browse/SPARK-32153
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.0.1, 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Shane Knapp
>Priority: Critical
>
> Build tasks on Jenkins-worker4 often fail with dependency problems.
> [https://github.com/apache/spark/pull/28971#issuecomment-652570066]
> [https://github.com/apache/spark/pull/28971#issuecomment-652611025]
> [https://github.com/apache/spark/pull/28971#issuecomment-652690849] 
> [https://github.com/apache/spark/pull/28971#issuecomment-652611025]
> [https://github.com/apache/spark/pull/28942#issuecomment-652842960]
> [https://github.com/apache/spark/pull/28942#issuecomment-652835679]
> These can be related to .m2 corruption.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32177) Remove the weird line from near the Spark logo on mouseover in the WebUI

2020-07-05 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32177:
---
Affects Version/s: (was: 3.0.0)

> Remove the weird line from near the Spark logo on mouseover in the WebUI
> 
>
> Key: SPARK-32177
> URL: https://issues.apache.org/jira/browse/SPARK-32177
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Minor
>
> In the webui, the Spark logo is on the top right side.
> When we move the mouse cursor over the logo, a weird underline appears near 
> the logo.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32177) Remove the weird line from near the Spark logo on mouseover in the WebUI

2020-07-05 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32177:
--

 Summary: Remove the weird line from near the Spark logo on 
mouseover in the WebUI
 Key: SPARK-32177
 URL: https://issues.apache.org/jira/browse/SPARK-32177
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 3.0.0, 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


In the webui, the Spark logo is on the top right side.
When we move the mouse cursor over the logo, a weird underline appears near the logo.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32175) Fix the order between initialization for ExecutorPlugin and starting heartbeat thread

2020-07-05 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32175:
--

 Summary: Fix the order between initialization for ExecutorPlugin 
and starting heartbeat thread
 Key: SPARK-32175
 URL: https://issues.apache.org/jira/browse/SPARK-32175
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 3.0.0, 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


In the current master, the heartbeat thread in an executor starts after plugin 
initialization, so if the initialization takes a long time, no heartbeat is 
sent to the driver and the executor will be removed from the cluster.
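
Schematically (the method names below are hypothetical, not the exact ones in 
Executor), the fix swaps the two steps:
{code:java}
// Illustrative ordering only: start heartbeats before plugin initialization
// so a slow plugin cannot get the executor marked dead.
startDriverHeartbeater()             // keep the executor registered first
val plugins = initExecutorPlugins()  // may take a long time
{code}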



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32153) .m2 repository corruption can happen on Jenkins-worker4

2020-07-02 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32153:
---
Description: 
Build tasks on Jenkins-worker4 often fail with dependency problems.
[https://github.com/apache/spark/pull/28971#issuecomment-652570066]
[https://github.com/apache/spark/pull/28971#issuecomment-652611025]
[https://github.com/apache/spark/pull/28971#issuecomment-652690849] 
[https://github.com/apache/spark/pull/28971#issuecomment-652611025]
[https://github.com/apache/spark/pull/28942#issuecomment-652842960]
[https://github.com/apache/spark/pull/28942#issuecomment-652835679]

These can be related to .m2 corruption.

 

  was:
Build tasks on Jenkins-worker4 often fail with dependency problems.
 [https://github.com/apache/spark/pull/28971#issuecomment-652570066]
 [https://github.com/apache/spark/pull/28971#issuecomment-652611025]
 [https://github.com/apache/spark/pull/28971#issuecomment-652690849]
 [https://github.com/apache/spark/pull/28942#issuecomment-652832012
https://github.com/apache/spark/pull/28971#issuecomment-652611025 
|https://github.com/apache/spark/pull/28942#issuecomment-652832012] 
[https://github.com/apache/spark/pull/28942#issuecomment-652842960] [
 |https://github.com/apache/spark/pull/28942#issuecomment-652832012] 
[https://github.com/apache/spark/pull/28942#issuecomment-652835679] 
[|https://github.com/apache/spark/pull/28942#issuecomment-652832012] 

These can be related to .m2 corruption.


> .m2 repository corruption can happen on Jenkins-worker4
> ---
>
> Key: SPARK-32153
> URL: https://issues.apache.org/jira/browse/SPARK-32153
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.0.1, 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Shane Knapp
>Priority: Critical
>
> Build tasks on Jenkins-worker4 often fail with dependency problems.
> [https://github.com/apache/spark/pull/28971#issuecomment-652570066]
> [https://github.com/apache/spark/pull/28971#issuecomment-652611025]
> [https://github.com/apache/spark/pull/28971#issuecomment-652690849] 
> [https://github.com/apache/spark/pull/28971#issuecomment-652611025]
> [https://github.com/apache/spark/pull/28942#issuecomment-652842960]
> [https://github.com/apache/spark/pull/28942#issuecomment-652835679]
> These can be related to .m2 corruption.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32153) .m2 repository corruption can happen on Jenkins-worker4

2020-07-02 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32153:
---
Description: 
Build tasks on Jenkins-worker4 often fail with dependency problems.
[https://github.com/apache/spark/pull/28971#issuecomment-652570066]
 [https://github.com/apache/spark/pull/28971#issuecomment-652611025
https://github.com/apache/spark/pull/28971#issuecomment-652690849
https://github.com/apache/spark/pull/28942#issuecomment-652832012
|https://github.com/apache/spark/pull/28971#issuecomment-652611025] 
[https://github.com/apache/spark/pull/28942#issuecomment-652842960]
[https://github.com/apache/spark/pull/28942#issuecomment-652835679]

 

These can be related to .m2 corruption.

  was:
Build tasks on Jenkins-worker4 often fail with dependency problems.
[https://github.com/apache/spark/pull/28971#issuecomment-652611025]

[https://github.com/apache/spark/pull/28942#issuecomment-652842960]

These can be related to .m2 corruption.


> .m2 repository corruption can happen on Jenkins-worker4
> ---
>
> Key: SPARK-32153
> URL: https://issues.apache.org/jira/browse/SPARK-32153
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.0.1, 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Shane Knapp
>Priority: Critical
>
> Build tasks on Jenkins-worker4 often fail with dependency problems.
> [https://github.com/apache/spark/pull/28971#issuecomment-652570066]
>  [https://github.com/apache/spark/pull/28971#issuecomment-652611025
> https://github.com/apache/spark/pull/28971#issuecomment-652690849
> https://github.com/apache/spark/pull/28942#issuecomment-652832012
> |https://github.com/apache/spark/pull/28971#issuecomment-652611025] 
> [https://github.com/apache/spark/pull/28942#issuecomment-652842960]
> [https://github.com/apache/spark/pull/28942#issuecomment-652835679]
>  
> These can be related to .m2 corruption.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32153) .m2 repository corruption can happen on Jenkins-worker4

2020-07-02 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32153:
---
Description: 
Build tasks on Jenkins-worker4 often fail with dependency problems.
 [https://github.com/apache/spark/pull/28971#issuecomment-652570066]
 [https://github.com/apache/spark/pull/28971#issuecomment-652611025]
 [https://github.com/apache/spark/pull/28971#issuecomment-652690849]
 [https://github.com/apache/spark/pull/28942#issuecomment-652832012
https://github.com/apache/spark/pull/28971#issuecomment-652611025 
|https://github.com/apache/spark/pull/28942#issuecomment-652832012] 
[https://github.com/apache/spark/pull/28942#issuecomment-652842960] [
 |https://github.com/apache/spark/pull/28942#issuecomment-652832012] 
[https://github.com/apache/spark/pull/28942#issuecomment-652835679] 
[|https://github.com/apache/spark/pull/28942#issuecomment-652832012] 

These can be related to .m2 corruption.

  was:
Build tasks on Jenkins-worker4 often fail with dependency problems.
 [https://github.com/apache/spark/pull/28971#issuecomment-652570066]
https://github.com/apache/spark/pull/28971#issuecomment-652611025
 [https://github.com/apache/spark/pull/28971#issuecomment-652690849]
 [https://github.com/apache/spark/pull/28942#issuecomment-652832012]
|https://github.com/apache/spark/pull/28971#issuecomment-652611025 
[https://github.com/apache/spark/pull/28942#issuecomment-652842960]
 [https://github.com/apache/spark/pull/28942#issuecomment-652835679]|

 

These can be related to .m2 corruption.


> .m2 repository corruption can happen on Jenkins-worker4
> ---
>
> Key: SPARK-32153
> URL: https://issues.apache.org/jira/browse/SPARK-32153
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.0.1, 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Shane Knapp
>Priority: Critical
>
> Build tasks on Jenkins-worker4 often fail with dependency problems.
>  [https://github.com/apache/spark/pull/28971#issuecomment-652570066]
>  [https://github.com/apache/spark/pull/28971#issuecomment-652611025]
>  [https://github.com/apache/spark/pull/28971#issuecomment-652690849]
>  [https://github.com/apache/spark/pull/28942#issuecomment-652832012
> https://github.com/apache/spark/pull/28971#issuecomment-652611025 
> |https://github.com/apache/spark/pull/28942#issuecomment-652832012] 
> [https://github.com/apache/spark/pull/28942#issuecomment-652842960] [
>  |https://github.com/apache/spark/pull/28942#issuecomment-652832012] 
> [https://github.com/apache/spark/pull/28942#issuecomment-652835679] 
> [|https://github.com/apache/spark/pull/28942#issuecomment-652832012] 
> These can be related to .m2 corruption.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32153) .m2 repository corruption can happen on Jenkins-worker4

2020-07-02 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32153:
---
Description: 
Build tasks on Jenkins-worker4 often fail with dependency problems:
 [https://github.com/apache/spark/pull/28971#issuecomment-652570066]
 [https://github.com/apache/spark/pull/28971#issuecomment-652611025]
 [https://github.com/apache/spark/pull/28971#issuecomment-652690849]
 [https://github.com/apache/spark/pull/28942#issuecomment-652832012]
 [https://github.com/apache/spark/pull/28942#issuecomment-652842960]
 [https://github.com/apache/spark/pull/28942#issuecomment-652835679]

 

These can be related to .m2 corruption.

  was:
Build tasks on Jenkins-worker4 often fail with dependency problems:
 [https://github.com/apache/spark/pull/28971#issuecomment-652570066]
 [https://github.com/apache/spark/pull/28971#issuecomment-652611025]
 [https://github.com/apache/spark/pull/28971#issuecomment-652690849]
 [https://github.com/apache/spark/pull/28942#issuecomment-652832012]
 [https://github.com/apache/spark/pull/28942#issuecomment-652842960]
 [https://github.com/apache/spark/pull/28942#issuecomment-652835679]

 

These can be related to .m2 corruption.


> .m2 repository corruption can happen on Jenkins-worker4
> ---
>
> Key: SPARK-32153
> URL: https://issues.apache.org/jira/browse/SPARK-32153
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.0.1, 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Shane Knapp
>Priority: Critical
>
> Build tasks on Jenkins-worker4 often fail with dependency problems:
> [https://github.com/apache/spark/pull/28971#issuecomment-652570066]
> [https://github.com/apache/spark/pull/28971#issuecomment-652611025]
> [https://github.com/apache/spark/pull/28971#issuecomment-652690849]
> [https://github.com/apache/spark/pull/28942#issuecomment-652832012]
> [https://github.com/apache/spark/pull/28942#issuecomment-652842960]
> [https://github.com/apache/spark/pull/28942#issuecomment-652835679]
>  
> These can be related to .m2 corruption.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-32153) .m2 repository corruption can happen on Jenkins-worker4

2020-07-02 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17150048#comment-17150048
 ] 

Kousuke Saruta commented on SPARK-32153:


[~shaneknapp] Could you look into this?

> .m2 repository corruption can happen on Jenkins-worker4
> ---
>
> Key: SPARK-32153
> URL: https://issues.apache.org/jira/browse/SPARK-32153
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.0.1, 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Shane Knapp
>Priority: Critical
>
> Build tasks on Jenkins-worker4 often fail with dependency problems:
> [https://github.com/apache/spark/pull/28971#issuecomment-652611025]
> [https://github.com/apache/spark/pull/28942#issuecomment-652842960]
> These can be related to .m2 corruption.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32153) .m2 repository corruption can happen on Jenkins-worker4

2020-07-02 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32153:
---
Issue Type: Bug  (was: Improvement)

> .m2 repository corruption can happen on Jenkins-worker4
> ---
>
> Key: SPARK-32153
> URL: https://issues.apache.org/jira/browse/SPARK-32153
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.0.1, 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Shane Knapp
>Priority: Critical
>
> Build tasks on Jenkins-worker4 often fail with dependency problems:
> [https://github.com/apache/spark/pull/28971#issuecomment-652611025]
> [https://github.com/apache/spark/pull/28942#issuecomment-652842960]
> These can be related to .m2 corruption.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32153) .m2 repository corruption can happen on Jenkins-worker4

2020-07-02 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32153:
--

 Summary: .m2 repository corruption can happen on Jenkins-worker4
 Key: SPARK-32153
 URL: https://issues.apache.org/jira/browse/SPARK-32153
 Project: Spark
  Issue Type: Improvement
  Components: Project Infra
Affects Versions: 3.0.1, 3.1.0
Reporter: Kousuke Saruta
Assignee: Shane Knapp


Build tasks on Jenkins-worker4 often fail with dependency problems:
[https://github.com/apache/spark/pull/28971#issuecomment-652611025]

[https://github.com/apache/spark/pull/28942#issuecomment-652842960]

These can be related to .m2 corruption.
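
For reference, a typical remedy for a corrupted local Maven repository is to purge the suspect artifacts and let the next build re-resolve them; a minimal sketch, with illustrative paths:
{code:java}
# Maven leaves *.lastUpdated marker files behind for failed/partial downloads.
find ~/.m2/repository -name "*.lastUpdated" -delete

# Or drop the cached tree for a suspect group entirely and force re-resolution.
rm -rf ~/.m2/repository/org/apache/spark
mvn -U dependency:resolve
{code}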



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-32119) ExecutorPlugin doesn't work with Standalone Cluster

2020-06-30 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148790#comment-17148790
 ] 

Kousuke Saruta commented on SPARK-32119:


Yeah, I know it works with extraClassPath, but as you mention, it's a limitation.
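
For illustration, a minimal sketch of the two setups (the plugin class and paths are hypothetical; spark.plugins, --jars, --files and spark.executor.extraClassPath are standard Spark options):
{code:java}
# Fails on Standalone: my-plugin.jar is fetched only after the executor
# (and therefore its plugins) has already been initialized.
$ spark-submit --master spark://master:7077 \
    --conf spark.plugins=com.example.MyExecutorPlugin \
    --jars my-plugin.jar --files plugin.conf app.jar

# Workaround with extraClassPath: works, but the jar must be pre-deployed
# to every worker node.
$ spark-submit --master spark://master:7077 \
    --conf spark.plugins=com.example.MyExecutorPlugin \
    --conf spark.executor.extraClassPath=/opt/plugins/my-plugin.jar app.jar
{code}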

> ExecutorPlugin doesn't work with Standalone Cluster
> ---
>
> Key: SPARK-32119
> URL: https://issues.apache.org/jira/browse/SPARK-32119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.0.1, 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Major
>
> ExecutorPlugin doesn't work with Standalone Cluster (and probably with other
> cluster managers except YARN) when a jar which contains plugins, and files
> used by the plugins, are added via the --jars and --files options of
> spark-submit.
> This is because jars and files added by --jars and --files are not loaded
> during Executor initialization.
> I confirmed it works with YARN because jars/files are distributed via the
> distributed cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-32119) ExecutorPlugin doesn't work with Standalone Cluster

2020-06-29 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148257#comment-17148257
 ] 

Kousuke Saruta commented on SPARK-32119:


[~lucacanali] Thanks, I see this is a known limitation. Is there any plan to 
eliminate it?

> ExecutorPlugin doesn't work with Standalone Cluster
> ---
>
> Key: SPARK-32119
> URL: https://issues.apache.org/jira/browse/SPARK-32119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.0.1, 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Major
>
> ExecutorPlugin doesn't work with Standalone Cluster (and probably with other
> cluster managers except YARN) when a jar which contains plugins, and files
> used by the plugins, are added via the --jars and --files options of
> spark-submit.
> This is because jars and files added by --jars and --files are not loaded
> during Executor initialization.
> I confirmed it works with YARN because jars/files are distributed via the
> distributed cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32119) ExecutorPlugin doesn't work with Standalone Cluster

2020-06-28 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32119:
---
Affects Version/s: 3.0.1

> ExecutorPlugin doesn't work with Standalone Cluster
> ---
>
> Key: SPARK-32119
> URL: https://issues.apache.org/jira/browse/SPARK-32119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.0.1, 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Major
>
> ExecutorPlugin doesn't work with Standalone Cluster (and probably with other
> cluster managers except YARN) when a jar which contains plugins, and files
> used by the plugins, are added via the --jars and --files options of
> spark-submit.
> This is because jars and files added by --jars and --files are not loaded
> during Executor initialization.
> I confirmed it works with YARN because jars/files are distributed via the
> distributed cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-32119) ExecutorPlugin doesn't work with Standalone Cluster

2020-06-28 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147427#comment-17147427
 ] 

Kousuke Saruta commented on SPARK-32119:


Sorry, it's just a mistake. I've modified it.

> ExecutorPlugin doesn't work with Standalone Cluster
> ---
>
> Key: SPARK-32119
> URL: https://issues.apache.org/jira/browse/SPARK-32119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Major
>
> ExecutorPlugin doesn't work with Standalone Cluster (and probably with other
> cluster managers except YARN) when a jar which contains plugins, and files
> used by the plugins, are added via the --jars and --files options of
> spark-submit.
> This is because jars and files added by --jars and --files are not loaded
> during Executor initialization.
> I confirmed it works with YARN because jars/files are distributed via the
> distributed cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32119) ExecutorPlugin doesn't work with Standalone Cluster

2020-06-28 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32119:
---
Issue Type: Bug  (was: Improvement)

> ExecutorPlugin doesn't work with Standalone Cluster
> ---
>
> Key: SPARK-32119
> URL: https://issues.apache.org/jira/browse/SPARK-32119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Major
>
> ExecutorPlugin doesn't work with Standalone Cluster (and probably with other
> cluster managers except YARN) when a jar which contains plugins, and files
> used by the plugins, are added via the --jars and --files options of
> spark-submit.
> This is because jars and files added by --jars and --files are not loaded
> during Executor initialization.
> I confirmed it works with YARN because jars/files are distributed via the
> distributed cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32119) ExecutorPlugin doesn't work with Standalone Cluster

2020-06-28 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32119:
---
Description: 
ExecutorPlugin doesn't work with Standalone Cluster (and probably with other
cluster managers except YARN) when a jar which contains plugins, and files used
by the plugins, are added via the --jars and --files options of spark-submit.

This is because jars and files added by --jars and --files are not loaded
during Executor initialization.
I confirmed it works with YARN because jars/files are distributed via the
distributed cache.

  was:
ExecutorPlugin can't work with Standalone Cluster (maybe with other cluster 
manager too except YARN. ) 
when a jar which contains plugins and files used by the plugins are added by 
--jars and --files option with spark-submit.

This is because jars and files added by --jars and --files are not loaded on 
Executor initialization.
I confirmed it works **with YARN because jars/files are distributed as 
distributed cache.


> ExecutorPlugin doesn't work with Standalone Cluster
> ---
>
> Key: SPARK-32119
> URL: https://issues.apache.org/jira/browse/SPARK-32119
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Major
>
> ExecutorPlugin doesn't work with Standalone Cluster (and probably with other
> cluster managers except YARN) when a jar which contains plugins, and files
> used by the plugins, are added via the --jars and --files options of
> spark-submit.
> This is because jars and files added by --jars and --files are not loaded
> during Executor initialization.
> I confirmed it works with YARN because jars/files are distributed via the
> distributed cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32119) ExecutorPlugin doesn't work with Standalone Cluster

2020-06-28 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32119:
--

 Summary: ExecutorPlugin doesn't work with Standalone Cluster
 Key: SPARK-32119
 URL: https://issues.apache.org/jira/browse/SPARK-32119
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


ExecutorPlugin doesn't work with Standalone Cluster (and probably with other
cluster managers except YARN) when a jar which contains plugins, and files used
by the plugins, are added via the --jars and --files options of spark-submit.

This is because jars and files added by --jars and --files are not loaded
during Executor initialization.
I confirmed it works with YARN because jars/files are distributed via the
distributed cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-31871) Display the canvas element icon for sorting column

2020-06-17 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta resolved SPARK-31871.

Resolution: Fixed

This issue is resolved in [https://github.com/apache/spark/pull/28799].

> Display the canvas element icon for sorting column
> --
>
> Key: SPARK-31871
> URL: https://issues.apache.org/jira/browse/SPARK-31871
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, Web UI
>Affects Versions: 2.4.3, 2.4.4, 2.4.5, 2.4.6
>Reporter: liucht-inspur
>Assignee: liucht-inspur
>Priority: Minor
> Fix For: 2.4.7
>
>
> On the History Server page and the Executor page, the sorting icon cannot be
> displayed when a column header is clicked, because the canvas element image
> path is wrong. To improve the user experience, the incorrect path is fixed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-31871) Display the canvas element icon for sorting column

2020-06-17 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta reassigned SPARK-31871:
--

Assignee: liucht-inspur

> Display the canvas element icon for sorting column
> --
>
> Key: SPARK-31871
> URL: https://issues.apache.org/jira/browse/SPARK-31871
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, Web UI
>Affects Versions: 2.4.3, 2.4.4, 2.4.5
>Reporter: liucht-inspur
>Assignee: liucht-inspur
>Priority: Minor
>
> On the History Server page and the Executor page, the sorting icon cannot be
> displayed when a column header is clicked, because the canvas element image
> path is wrong. To improve the user experience, the incorrect path is fixed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31871) Display the canvas element icon for sorting column

2020-06-17 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31871:
---
Affects Version/s: 2.4.6

> Display the canvas element icon for sorting column
> --
>
> Key: SPARK-31871
> URL: https://issues.apache.org/jira/browse/SPARK-31871
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, Web UI
>Affects Versions: 2.4.3, 2.4.4, 2.4.5, 2.4.6
>Reporter: liucht-inspur
>Assignee: liucht-inspur
>Priority: Minor
> Fix For: 2.4.7
>
>
> On the History Server page and the Executor page, the sorting icon cannot be
> displayed when a column header is clicked, because the canvas element image
> path is wrong. To improve the user experience, the incorrect path is fixed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31871) Display the canvas element icon for sorting column

2020-06-17 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31871:
---
Fix Version/s: 2.4.7

> Display the canvas element icon for sorting column
> --
>
> Key: SPARK-31871
> URL: https://issues.apache.org/jira/browse/SPARK-31871
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, Web UI
>Affects Versions: 2.4.3, 2.4.4, 2.4.5
>Reporter: liucht-inspur
>Assignee: liucht-inspur
>Priority: Minor
> Fix For: 2.4.7
>
>
> On the History Server page and the Executor page, the sorting icon cannot be
> displayed when a column header is clicked, because the canvas element image
> path is wrong. To improve the user experience, the incorrect path is fixed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-32000) Fix the flaky testcase for partially launched task in barrier-mode.

2020-06-17 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta reassigned SPARK-32000:
--

Assignee: (was: Kousuke Saruta)

> Fix the flaky testcase for partially launched task in barrier-mode.
> ---
>
> Key: SPARK-32000
> URL: https://issues.apache.org/jira/browse/SPARK-32000
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, Tests
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Priority: Major
>
> I noticed sometimes the testcase for SPARK-31485 fails.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32000) Fix the flaky testcase for partially launched task in barrier-mode.

2020-06-17 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32000:
---
Description: I noticed sometimes the testcase for SPARK-31485 fails.  (was: 
I noticed sometimes the testcase for SPARK-31485 fails.
The reason should be related to the locality wait. 

If the scheduler waits for a resource offer which meets the preferred location
for a task until the time limit of the process-local level, but no resource can
be offered at that locality level, the scheduler will give up the preferred
location. In this case, such a task can be assigned to an off-preferred location.

In the testcase for SPARK-31485, there are two tasks and only one task is
supposed to be assigned in one scheduling round, but both tasks can be assigned
in the situation mentioned above, and the testcase will fail.)

> Fix the flaky testcase for partially launched task in barrier-mode.
> ---
>
> Key: SPARK-32000
> URL: https://issues.apache.org/jira/browse/SPARK-32000
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, Tests
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Major
>
> I noticed sometimes the testcase for SPARK-31485 fails.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-31996) Specify the version of ChromeDriver and RemoteWebDriver which can work with guava 14.0.1

2020-06-16 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta reassigned SPARK-31996:
--

Assignee: (was: Kousuke Saruta)

> Specify the version of ChromeDriver and RemoteWebDriver which can work with 
> guava 14.0.1
> 
>
> Key: SPARK-31996
> URL: https://issues.apache.org/jira/browse/SPARK-31996
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Priority: Minor
>
> SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
> to be upgraded to work with the upgraded HtmlUnit.
> After upgrading Selenium, ChromeDriver and RemoteWebDriver are implicitly
> upgraded because of the dependency, and the implicitly upgraded modules can't
> work with guava 14.0.1 due to an API incompatibility, so we need to run
> ChromeUISeleniumSuite with a guava version specified like
> -Dguava.version=25.0-jre.
> {code:java}
> $ build/sbt -Dguava.version=25.0-jre 
> -Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
> -Dtest.default.exclude.tags= "testOnly 
> org.apache.spark.ui.UISeleniumSuite"{code}
> It's a little inconvenient, so let's use an older version which can work
> with guava 14.0.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-32000) Fix the flaky testcase for partially launched task in barrier-mode.

2020-06-16 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-32000:
---
Issue Type: Bug  (was: Improvement)

> Fix the flaky testcase for partially launched task in barrier-mode.
> ---
>
> Key: SPARK-32000
> URL: https://issues.apache.org/jira/browse/SPARK-32000
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, Tests
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Major
>
> I noticed sometimes the testcase for SPARK-31485 fails.
> The reason should be related to the locality wait. 
> If the scheduler waits for a resource offer which meets the preferred
> location for a task until the time limit of the process-local level, but no
> resource can be offered at that locality level, the scheduler will give up
> the preferred location. In this case, such a task can be assigned to an
> off-preferred location.
> In the testcase for SPARK-31485, there are two tasks and only one task is
> supposed to be assigned in one scheduling round, but both tasks can be
> assigned in the situation mentioned above, and the testcase will fail.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-32000) Fix the flaky testcase for partially launched task in barrier-mode.

2020-06-16 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-32000:
--

 Summary: Fix the flaky testcase for partially launched task in 
barrier-mode.
 Key: SPARK-32000
 URL: https://issues.apache.org/jira/browse/SPARK-32000
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core, Tests
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


I noticed sometimes the testcase for SPARK-31485 fails.
The reason should be related to the locality wait. 

If the scheduler waits for a resource offer which meets the preferred location
for a task until the time limit of the process-local level, but no resource can
be offered at that locality level, the scheduler will give up the preferred
location. In this case, such a task can be assigned to an off-preferred location.

In the testcase for SPARK-31485, there are two tasks and only one task is
supposed to be assigned in one scheduling round, but both tasks can be assigned
in the situation mentioned above, and the testcase will fail.
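
For context, the wait is governed by spark.locality.wait and its per-level variants; a sketch of pinning it in a test so the scheduler does not fall back to off-preferred locations within the test window (values are illustrative):
{code:java}
import org.apache.spark.SparkConf

// Keep the locality wait far longer than the test's scheduling window, so a
// task with a process-local preference is never offered to another executor.
val conf = new SparkConf()
  .setAppName("barrier-locality-test")
  .set("spark.locality.wait", "30s")         // default is 3s
  .set("spark.locality.wait.process", "30s") // per-level override
{code}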



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31996) Specify the version of ChromeDriver and RemoteWebDriver which can work with guava 14.0.1

2020-06-15 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31996:
---
Description: 
SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
to be upgraded to work with the upgraded HtmlUnit.

After upgrading Selenium, ChromeDriver and RemoteWebDriver are implicitly
upgraded because of the dependency, and the implicitly upgraded modules can't
work with guava 14.0.1 due to an API incompatibility, so we need to run
ChromeUISeleniumSuite with a guava version specified like
-Dguava.version=25.0-jre.
{code:java}
$ build/sbt -Dguava.version=25.0-jre 
-Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
-Dtest.default.exclude.tags= "testOnly 
org.apache.spark.ui.UISeleniumSuite"{code}
It's a little inconvenient, so let's use an older version which can work with
guava 14.0.1.

  was:
SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
to be upgraded to work with the upgraded HtmlUnit.

After upgrading Selenium, chrome-driver and remote-driver are implicitly
upgraded because of the dependency, and the implicitly upgraded modules can't
work with guava 14.0.1 due to an API incompatibility, so we need to run
ChromeUISeleniumSuite with a guava version specified like
-Dguava.version=25.0-jre.
{code:java}
$ build/sbt -Dguava.version=25.0-jre 
-Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
-Dtest.default.exclude.tags= "testOnly 
org.apache.spark.ui.UISeleniumSuite"{code}
It's a little inconvenient, so let's use an older version which can work with
guava 14.0.1.


> Specify the version of ChromeDriver and RemoteWebDriver which can work with 
> guava 14.0.1
> 
>
> Key: SPARK-31996
> URL: https://issues.apache.org/jira/browse/SPARK-31996
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>    Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
>
> SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
> to be upgraded to work with the upgraded HtmlUnit.
> After upgrading Selenium, ChromeDriver and RemoteWebDriver are implicitly
> upgraded because of the dependency, and the implicitly upgraded modules can't
> work with guava 14.0.1 due to an API incompatibility, so we need to run
> ChromeUISeleniumSuite with a guava version specified like
> -Dguava.version=25.0-jre.
> {code:java}
> $ build/sbt -Dguava.version=25.0-jre 
> -Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
> -Dtest.default.exclude.tags= "testOnly 
> org.apache.spark.ui.UISeleniumSuite"{code}
> It's a little inconvenient, so let's use an older version which can work
> with guava 14.0.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31996) Specify the version of ChromeDriver and RemoteWebDriver which can work with guava 14.0.1

2020-06-15 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31996:
---
Summary: Specify the version of ChromeDriver and RemoteWebDriver which can 
work with guava 14.0.1  (was: Specify the version of chrome-driver and 
remote-driver which can work with guava 14.0.1)

> Specify the version of ChromeDriver and RemoteWebDriver which can work with 
> guava 14.0.1
> 
>
> Key: SPARK-31996
> URL: https://issues.apache.org/jira/browse/SPARK-31996
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
>
> SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
> to be upgraded to work with the upgraded HtmlUnit.
> After upgrading Selenium, chrome-driver and remote-driver are implicitly
> upgraded because of the dependency, and the implicitly upgraded modules can't
> work with guava 14.0.1 due to an API incompatibility, so we need to run
> ChromeUISeleniumSuite with a guava version specified like
> -Dguava.version=25.0-jre.
> {code:java}
> $ build/sbt -Dguava.version=25.0-jre 
> -Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
> -Dtest.default.exclude.tags= "testOnly 
> org.apache.spark.ui.UISeleniumSuite"{code}
> It's a little inconvenient, so let's use an older version which can work
> with guava 14.0.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31996) Specify the version of chrome-driver and remote-driver which can work with guava 14.0.1

2020-06-15 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31996:
---
Description: 
SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
to be upgraded to work with the upgraded HtmlUnit.

After upgrading Selenium, chrome-driver and remote-driver are implicitly
upgraded because of the dependency, and the implicitly upgraded modules can't
work with guava 14.0.1 due to an API incompatibility, so we need to run
ChromeUISeleniumSuite with a guava version specified like
-Dguava.version=25.0-jre.
{code:java}
$ build/sbt -Dguava.version=25.0-jre 
-Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
-Dtest.default.exclude.tags= "testOnly 
org.apache.spark.ui.UISeleniumSuite"{code}
It's a little inconvenient, so let's use an older version which can work with
guava 14.0.1.

  was:
SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
to be upgraded to work with the upgraded HtmlUnit.

After upgrading Selenium, ChromeDriver and RemoteDriver are implicitly upgraded
because of the dependency, and the upgraded ChromeDriver can't work with guava
14.0.1 due to an API incompatibility, so we need to run ChromeUISeleniumSuite
with a guava version specified like -Dguava.version=25.0-jre.
{code:java}
$ build/sbt -Dguava.version=25.0-jre 
-Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
-Dtest.default.exclude.tags= "testOnly 
org.apache.spark.ui.UISeleniumSuite"{code}
It's a little inconvenient, so let's use an older ChromeDriver which can work
with guava 14.0.1.


> Specify the version of chrome-driver and remote-driver which can work with 
> guava 14.0.1
> ---
>
> Key: SPARK-31996
> URL: https://issues.apache.org/jira/browse/SPARK-31996
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>    Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
>
> SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
> to be upgraded to work with the upgraded HtmlUnit.
> After upgrading Selenium, chrome-driver and remote-driver are implicitly
> upgraded because of the dependency, and the implicitly upgraded modules can't
> work with guava 14.0.1 due to an API incompatibility, so we need to run
> ChromeUISeleniumSuite with a guava version specified like
> -Dguava.version=25.0-jre.
> {code:java}
> $ build/sbt -Dguava.version=25.0-jre 
> -Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
> -Dtest.default.exclude.tags= "testOnly 
> org.apache.spark.ui.UISeleniumSuite"{code}
> It's a little inconvenient, so let's use an older version which can work
> with guava 14.0.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31996) Specify the version of chrome-driver and remote-driver which can work with guava 14.0.1

2020-06-15 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31996:
---
Summary: Specify the version of chrome-driver and remote-driver which can 
work with guava 14.0.1  (was: Specify the version of ChromeDriver and 
RemoteDriver which can work with guava 14.0.1)

> Specify the version of chrome-driver and remote-driver which can work with 
> guava 14.0.1
> ---
>
> Key: SPARK-31996
> URL: https://issues.apache.org/jira/browse/SPARK-31996
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
>
> SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
> to be upgraded to work with the upgraded HtmlUnit.
> After upgrading Selenium, ChromeDriver and RemoteDriver are implicitly
> upgraded because of the dependency, and the upgraded ChromeDriver can't work
> with guava 14.0.1 due to an API incompatibility, so we need to run
> ChromeUISeleniumSuite with a guava version specified like
> -Dguava.version=25.0-jre.
> {code:java}
> $ build/sbt -Dguava.version=25.0-jre 
> -Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
> -Dtest.default.exclude.tags= "testOnly 
> org.apache.spark.ui.UISeleniumSuite"{code}
> It's a little inconvenient, so let's use an older ChromeDriver which can
> work with guava 14.0.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31996) Specify the version of ChromeDriver and RemoteDriver which can work with guava 14.0.1

2020-06-15 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31996:
---
Description: 
SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
to be upgraded to work with the upgraded HtmlUnit.

After upgrading Selenium, ChromeDriver and RemoteDriver are implicitly upgraded
because of the dependency, and the upgraded ChromeDriver can't work with guava
14.0.1 due to an API incompatibility, so we need to run ChromeUISeleniumSuite
with a guava version specified like -Dguava.version=25.0-jre.
{code:java}
$ build/sbt -Dguava.version=25.0-jre 
-Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
-Dtest.default.exclude.tags= "testOnly 
org.apache.spark.ui.UISeleniumSuite"{code}
It's a little inconvenient, so let's use an older ChromeDriver which can work
with guava 14.0.1.

  was:
SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
to be upgraded to work with the upgraded HtmlUnit.

After upgrading Selenium, ChromeDriver is implicitly upgraded because of the
dependency, and the upgraded ChromeDriver can't work with guava 14.0.1 due to
an API incompatibility, so we need to run ChromeUISeleniumSuite with a guava
version specified like -Dguava.version=25.0-jre.
{code:java}
$ build/sbt -Dguava.version=25.0-jre 
-Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
-Dtest.default.exclude.tags= "testOnly 
org.apache.spark.ui.UISeleniumSuite"{code}
It's a little inconvenient, so let's use an older ChromeDriver which can work
with guava 14.0.1.


> Specify the version of ChromeDriver and RemoteDriver which can work with 
> guava 14.0.1
> -
>
> Key: SPARK-31996
> URL: https://issues.apache.org/jira/browse/SPARK-31996
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>    Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
>
> SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
> to be upgraded to work with the upgraded HtmlUnit.
> After upgrading Selenium, ChromeDriver and RemoteDriver are implicitly
> upgraded because of the dependency, and the upgraded ChromeDriver can't work
> with guava 14.0.1 due to an API incompatibility, so we need to run
> ChromeUISeleniumSuite with a guava version specified like
> -Dguava.version=25.0-jre.
> {code:java}
> $ build/sbt -Dguava.version=25.0-jre 
> -Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
> -Dtest.default.exclude.tags= "testOnly 
> org.apache.spark.ui.UISeleniumSuite"{code}
> It's a little inconvenient, so let's use an older ChromeDriver which can
> work with guava 14.0.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31996) Specify the version of ChromeDriver and RemoteDriver which can work with guava 14.0.1

2020-06-15 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31996:
---
Summary: Specify the version of ChromeDriver and RemoteDriver which can 
work with guava 14.0.1  (was: Specify the version of ChromeDriver which can 
work with guava 14.0.1)

> Specify the version of ChromeDriver and RemoteDriver which can work with 
> guava 14.0.1
> -
>
> Key: SPARK-31996
> URL: https://issues.apache.org/jira/browse/SPARK-31996
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
>
> SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
> to be upgraded to work with the upgraded HtmlUnit.
> After upgrading Selenium, ChromeDriver is implicitly upgraded because of the
> dependency, and the upgraded ChromeDriver can't work with guava 14.0.1 due to
> an API incompatibility, so we need to run ChromeUISeleniumSuite with a guava
> version specified like -Dguava.version=25.0-jre.
> {code:java}
> $ build/sbt -Dguava.version=25.0-jre 
> -Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
> -Dtest.default.exclude.tags= "testOnly 
> org.apache.spark.ui.UISeleniumSuite"{code}
> It's a little inconvenient, so let's use an older ChromeDriver which can
> work with guava 14.0.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31996) Specify the version of ChromeDriver which can work with guava 14.0.1

2020-06-15 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31996:
---
Description: 
SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
to be upgraded to work with the upgraded HtmlUnit.

After upgrading Selenium, ChromeDriver is implicitly upgraded because of the
dependency, and the upgraded ChromeDriver can't work with guava 14.0.1 due to
an API incompatibility, so we need to run ChromeUISeleniumSuite with a guava
version specified like -Dguava.version=25.0-jre.
{code:java}
$ build/sbt -Dguava.version=25.0-jre 
-Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
-Dtest.default.exclude.tags= "testOnly 
org.apache.spark.ui.UISeleniumSuite"{code}
It's a little inconvenient, so let's use an older ChromeDriver which can work
with guava 14.0.1.

  was:
SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
to be upgraded to work with the upgraded HtmlUnit.

After upgrading Selenium, ChromeDriver is explicitly upgraded because of the
dependency, and the upgraded ChromeDriver can't work with guava 14.0.1 due to
an API incompatibility, so we need to run ChromeUISeleniumSuite with a guava
version specified like -Dguava.version=25.0-jre.
{code:java}
$ build/sbt -Dguava.version=25.0-jre 
-Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
-Dtest.default.exclude.tags= "testOnly 
org.apache.spark.ui.UISeleniumSuite"{code}
It's a little inconvenient, so let's use an older ChromeDriver which can work
with guava 14.0.1.


> Specify the version of ChromeDriver which can work with guava 14.0.1
> 
>
> Key: SPARK-31996
> URL: https://issues.apache.org/jira/browse/SPARK-31996
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
>
> SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
> to be upgraded to work with the upgraded HtmlUnit.
> After upgrading Selenium, ChromeDriver is implicitly upgraded because of the
> dependency, and the upgraded ChromeDriver can't work with guava 14.0.1 due to
> an API incompatibility, so we need to run ChromeUISeleniumSuite with a guava
> version specified like -Dguava.version=25.0-jre.
> {code:java}
> $ build/sbt -Dguava.version=25.0-jre 
> -Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
> -Dtest.default.exclude.tags= "testOnly 
> org.apache.spark.ui.UISeleniumSuite"{code}
> It's a little inconvenient, so let's use an older ChromeDriver which can
> work with guava 14.0.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31996) Specify the version of ChromeDriver which can work with guava 14.0.1

2020-06-15 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-31996:
--

 Summary: Specify the version of ChromeDriver which can work with 
guava 14.0.1
 Key: SPARK-31996
 URL: https://issues.apache.org/jira/browse/SPARK-31996
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


SPARK-31765 upgraded HtmlUnit for a security reason, and Selenium also needed
to be upgraded to work with the upgraded HtmlUnit.

After upgrading Selenium, ChromeDriver is explicitly upgraded because of the
dependency, and the upgraded ChromeDriver can't work with guava 14.0.1 due to
an API incompatibility, so we need to run ChromeUISeleniumSuite with a guava
version specified like -Dguava.version=25.0-jre.
{code:java}
$ build/sbt -Dguava.version=25.0-jre 
-Dspark.test.webdriver.chrome.driver=/path/to/chromedriver 
-Dtest.default.exclude.tags= "testOnly 
org.apache.spark.ui.UISeleniumSuite"{code}
It's a little inconvenient, so let's use an older ChromeDriver which can work
with guava 14.0.1.
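
A sketch of what such pinning could look like on the sbt side (the coordinates and version are illustrative; the actual change belongs in Spark's build definition):
{code:java}
// Pin the Selenium driver modules to a release known to work with guava 14.0.1.
dependencyOverrides ++= Seq(
  "org.seleniumhq.selenium" % "selenium-chrome-driver" % "3.141.59",
  "org.seleniumhq.selenium" % "selenium-remote-driver" % "3.141.59"
)
{code}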



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31983) Tables of structured streaming tab show wrong result for duration column

2020-06-14 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31983:
---
Fix Version/s: 3.0.1

> Tables of structured streaming tab show wrong result for duration column
> 
>
> Key: SPARK-31983
> URL: https://issues.apache.org/jira/browse/SPARK-31983
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Web UI
>Affects Versions: 3.0.0
>Reporter: Rakesh Raushan
>Assignee: Rakesh Raushan
>Priority: Major
> Fix For: 3.0.1, 3.1.0
>
>
> Sorting results for the duration column in tables of the Structured Streaming
> tab are sometimes wrong because we are sorting on string values: consider
> "3ms" and "12ms", where "12ms" sorts first lexicographically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-31983) Tables of structured streaming tab show wrong result for duration column

2020-06-14 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17135323#comment-17135323
 ] 

Kousuke Saruta commented on SPARK-31983:


Resolved for 3.0 by [https://github.com/apache/spark/pull/28823].

> Tables of structured streaming tab show wrong result for duration column
> 
>
> Key: SPARK-31983
> URL: https://issues.apache.org/jira/browse/SPARK-31983
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Web UI
>Affects Versions: 3.0.0
>Reporter: Rakesh Raushan
>Assignee: Rakesh Raushan
>Priority: Major
> Fix For: 3.0.1, 3.1.0
>
>
> Sorting results for the duration column in tables of the Structured Streaming
> tab are sometimes wrong because we are sorting on string values: consider
> "3ms" and "12ms", where "12ms" sorts first lexicographically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31971) Add pagination support for all jobs timeline

2020-06-11 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31971:
---
Description: 
If there are lots of jobs, the rendering performance of the all-jobs timeline
can go down significantly. This issue is reported in SPARK-31967.
 For example, the following operation can take >40 sec.
{code:java}
(1 to 1000).foreach(_ => sc.parallelize(1 to 10).collect) {code}
Although it's not the fundamental solution, pagination can mitigate the issue.

  was:
If there are lots of jobs, the rendering performance of the all-jobs timeline
can go down significantly. This issue is reported in SPARK-31967.
For example, the following operation can take >40 sec.
{code:java}
(1 to 301).foreach(_ => sc.parallelize(1 to 10).collect) {code}
Although it's not the fundamental solution, pagination can mitigate the issue.


> Add pagination support for all jobs timeline
> 
>
> Key: SPARK-31971
> URL: https://issues.apache.org/jira/browse/SPARK-31971
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.0.1, 3.1.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Major
>
> If there are lots of jobs, the rendering performance of the all-jobs timeline
> can go down significantly. This issue is reported in SPARK-31967.
>  For example, the following operation can take >40 sec.
> {code:java}
> (1 to 1000).foreach(_ => sc.parallelize(1 to 10).collect) {code}
> Although it's not the fundamental solution, pagination can mitigate the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31971) Add pagination support for all jobs timeline

2020-06-11 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31971:
---
Description: 
If there are lots of jobs, the rendering performance of the all-jobs timeline
can go down significantly. This issue is reported in SPARK-31967.
For example, the following operation can take >40 sec.
{code:java}
(1 to 301).foreach(_ => sc.parallelize(1 to 10).collect) {code}
Although it's not the fundamental solution, pagination can mitigate the issue.

  was:
If there are lots of jobs, the rendering performance of the all-jobs timeline
can go down significantly. This issue is reported in SPARK-31967.

Although it's not the fundamental solution, pagination can mitigate the issue.


> Add pagination support for all jobs timeline
> 
>
> Key: SPARK-31971
> URL: https://issues.apache.org/jira/browse/SPARK-31971
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.0.1, 3.1.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Major
>
> If there are lots of jobs, the rendering performance of the all-jobs timeline
> can go down significantly. This issue is reported in SPARK-31967.
> For example, the following operation can take >40 sec.
> {code:java}
> (1 to 301).foreach(_ => sc.parallelize(1 to 10).collect) {code}
> Although it's not the fundamental solution, pagination can mitigate the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31971) Add pagination support for all jobs timeline

2020-06-11 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-31971:
--

 Summary: Add pagination support for all jobs timeline
 Key: SPARK-31971
 URL: https://issues.apache.org/jira/browse/SPARK-31971
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 3.0.1, 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


If there are lots of jobs, the rendering performance of the all-jobs timeline
can go down significantly. This issue is reported in SPARK-31967.

Although it's not the fundamental solution, pagination can mitigate the issue.
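
As a sketch of the mitigation (names are illustrative, not the actual patch): render only one page of jobs at a time instead of feeding the whole job history to the timeline.
{code:java}
// Illustrative paging helper: only the selected page of jobs is handed to the
// timeline renderer, keeping the DOM and vis-timeline item count bounded.
case class Job(id: Int)

def pageOf(jobs: Seq[Job], pageNum: Int, pageSize: Int = 100): Seq[Job] =
  jobs.slice((pageNum - 1) * pageSize, pageNum * pageSize)
{code}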



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-31967) Loading jobs UI page takes 40 seconds

2020-06-11 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133109#comment-17133109
 ] 

Kousuke Saruta edited comment on SPARK-31967 at 6/11/20, 9:51 AM:
--

For now, I'm considering a paging solution like StagePage does.


was (Author: sarutak):
For now, I'm considering a solution like StagePage does.

> Loading jobs UI page takes 40 seconds
> -
>
> Key: SPARK-31967
> URL: https://issues.apache.org/jira/browse/SPARK-31967
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 3.0.1
>Reporter: Gengliang Wang
>Priority: Blocker
> Attachments: load_time.jpeg, profile.png
>
>
> In the latest master branch, I find that the job list page becomes very slow.
> To reproduce in local setup:
> {code:java}
> spark.read.parquet("/tmp/p1").createOrReplaceTempView("t1")
> spark.read.parquet("/tmp/p2").createOrReplaceTempView("t2")
> (1 to 1000).map(_ =>  spark.sql("select * from t1, t2 where 
> t1.value=t2.value").show())
> {code}
> After that, open the live UI: http://localhost:4040/
> The loading time is about 40 seconds.
> If we comment out the function call for `drawApplicationTimeline`, then the 
> loading time is around 1 second.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-31967) Loading jobs UI page takes 40 seconds

2020-06-11 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133109#comment-17133109
 ] 

Kousuke Saruta commented on SPARK-31967:


For now, I'm considering a solution like StagePage does.

> Loading jobs UI page takes 40 seconds
> -
>
> Key: SPARK-31967
> URL: https://issues.apache.org/jira/browse/SPARK-31967
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 3.0.1
>Reporter: Gengliang Wang
>Priority: Blocker
> Attachments: load_time.jpeg, profile.png
>
>
> In the latest master branch, I find that the job list page becomes very slow.
> To reproduce in local setup:
> {code:java}
> spark.read.parquet("/tmp/p1").createOrReplaceTempView("t1")
> spark.read.parquet("/tmp/p2").createOrReplaceTempView("t2")
> (1 to 1000).map(_ =>  spark.sql("select * from t1, t2 where 
> t1.value=t2.value").show())
> {code}
> After that, open the live UI: http://localhost:4040/
> The loading time is about 40 seconds.
> If we comment out the function call for `drawApplicationTimeline`, then the 
> loading time is around 1 second.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-31967) Loading jobs UI page takes 40 seconds

2020-06-11 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133049#comment-17133049
 ] 

Kousuke Saruta commented on SPARK-31967:


This issue may be related:

[https://github.com/visjs/vis-timeline/issues/379]

> Loading jobs UI page takes 40 seconds
> -
>
> Key: SPARK-31967
> URL: https://issues.apache.org/jira/browse/SPARK-31967
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 3.0.1
>Reporter: Gengliang Wang
>Priority: Blocker
> Attachments: load_time.jpeg, profile.png
>
>
> In the latest master branch, I find that the job list page becomes very slow.
> To reproduce in local setup:
> {code:java}
> spark.read.parquet("/tmp/p1").createOrReplaceTempView("t1")
> spark.read.parquet("/tmp/p2").createOrReplaceTempView("t2")
> (1 to 1000).map(_ =>  spark.sql("select * from t1, t2 where 
> t1.value=t2.value").show())
> {code}
> After that, open the live UI: http://localhost:4040/
> The loading time is about 40 seconds.
> If we comment out the function call for `drawApplicationTimeline`, then the 
> loading time is around 1 second.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-31967) Loading jobs UI page takes 40 seconds

2020-06-11 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133039#comment-17133039
 ] 

Kousuke Saruta commented on SPARK-31967:


I'll investigate.

> Loading jobs UI page takes 40 seconds
> -
>
> Key: SPARK-31967
> URL: https://issues.apache.org/jira/browse/SPARK-31967
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 3.0.1
>Reporter: Gengliang Wang
>Priority: Blocker
> Attachments: load_time.jpeg, profile.png
>
>
> In the latest master branch, I found that the job list page has become very slow.
> To reproduce in a local setup:
> {code:java}
> spark.read.parquet("/tmp/p1").createOrReplaceTempView("t1")
> spark.read.parquet("/tmp/p2").createOrReplaceTempView("t2")
> (1 to 1000).map(_ =>  spark.sql("select * from t1, t2 where 
> t1.value=t2.value").show())
> {code}
> After that, open the live UI: http://localhost:4040/
> The loading time is about 40 seconds.
> If we comment out the function call for `drawApplicationTimeline`, then the 
> loading time is around 1 second.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31941) Handling the exception in SparkUI for getSparkUser method

2020-06-10 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31941:
---
Fix Version/s: 2.4.7
   3.1.0
   3.0.1

> Handling the exception in SparkUI for getSparkUser method
> -
>
> Key: SPARK-31941
> URL: https://issues.apache.org/jira/browse/SPARK-31941
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.4.6, 3.0.0, 3.1.0
>Reporter: Saurabh Chawla
>Assignee: Saurabh Chawla
>Priority: Minor
> Fix For: 3.0.1, 3.1.0, 2.4.7
>
>
> After SPARK-31632, SparkException is thrown from def applicationInfo():
> {code:java}
>   def applicationInfo(): v1.ApplicationInfo = {
> try {
>   // The ApplicationInfo may not be available when Spark is starting up.
>   
> store.view(classOf[ApplicationInfoWrapper]).max(1).iterator().next().info
> } catch {
>   case _: NoSuchElementException =>
> throw new SparkException("Failed to get the application information. 
> " +
>   "If you are starting up Spark, please wait a while until it's 
> ready.")
> }
>   }
> {code}
> Whereas the caller of this method, def getSparkUser in the Spark UI, does not 
> handle SparkException in its catch block:
> {code:java}
>   def getSparkUser: String = {
> try {
>   Option(store.applicationInfo().attempts.head.sparkUser)
> 
> .orElse(store.environmentInfo().systemProperties.toMap.get("user.name"))
> .getOrElse("")
> } catch {
>   case _: NoSuchElementException => ""
> }
>   }
> {code}
> So, on using this method (getSparkUser), the application can error out.
> So either we should throw
> {code:java}
>  throw new NoSuchElementException("Failed to get the application information. 
> " +
>   "If you are starting up Spark, please wait a while until it's 
> ready.")
> {code}
> or else add a case to catch SparkException in getSparkUser:
> case _: SparkException => ""
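For illustration, a minimal sketch of the second option, catching SparkException 
(assuming org.apache.spark.SparkException and the UI's store are in scope; not 
necessarily the shape of the merged patch):

{code:java}
// Hedged sketch: handle SparkException alongside NoSuchElementException,
// since the store may throw either while Spark is still starting up.
def getSparkUser: String = {
  try {
    Option(store.applicationInfo().attempts.head.sparkUser)
      .orElse(store.environmentInfo().systemProperties.toMap.get("user.name"))
      .getOrElse("")
  } catch {
    case _: NoSuchElementException | _: SparkException => ""
  }
}
{code}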



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-31941) Handling the exception in SparkUI for getSparkUser method

2020-06-10 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17130373#comment-17130373
 ] 

Kousuke Saruta commented on SPARK-31941:


Fixed in https://github.com/apache/spark/pull/28768

> Handling the exception in SparkUI for getSparkUser method
> -
>
> Key: SPARK-31941
> URL: https://issues.apache.org/jira/browse/SPARK-31941
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.4.6, 3.0.0, 3.1.0
>Reporter: Saurabh Chawla
>Assignee: Saurabh Chawla
>Priority: Minor
> Fix For: 3.0.1, 3.1.0, 2.4.7
>
>
> After SPARK-31632, SparkException is thrown from def applicationInfo():
> {code:java}
>   def applicationInfo(): v1.ApplicationInfo = {
> try {
>   // The ApplicationInfo may not be available when Spark is starting up.
>   
> store.view(classOf[ApplicationInfoWrapper]).max(1).iterator().next().info
> } catch {
>   case _: NoSuchElementException =>
> throw new SparkException("Failed to get the application information. 
> " +
>   "If you are starting up Spark, please wait a while until it's 
> ready.")
> }
>   }
> {code}
> Whereas the caller of this method, def getSparkUser in the Spark UI, does not 
> handle SparkException in its catch block:
> {code:java}
>   def getSparkUser: String = {
> try {
>   Option(store.applicationInfo().attempts.head.sparkUser)
> 
> .orElse(store.environmentInfo().systemProperties.toMap.get("user.name"))
> .getOrElse("")
> } catch {
>   case _: NoSuchElementException => ""
> }
>   }
> {code}
> So, on using this method (getSparkUser), the application can error out.
> So either we should throw
> {code:java}
>  throw new NoSuchElementException("Failed to get the application information. 
> " +
>   "If you are starting up Spark, please wait a while until it's 
> ready.")
> {code}
> or else add a case to catch SparkException in getSparkUser:
> case _: SparkException => ""



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31941) Handling the exception in SparkUI for getSparkUser method

2020-06-10 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31941:
---
Affects Version/s: 3.0.0
   2.4.6

> Handling the exception in SparkUI for getSparkUser method
> -
>
> Key: SPARK-31941
> URL: https://issues.apache.org/jira/browse/SPARK-31941
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.4.6, 3.0.0, 3.1.0
>Reporter: Saurabh Chawla
>Assignee: Saurabh Chawla
>Priority: Minor
>
> After SPARK-31632, SparkException is thrown from def applicationInfo():
> {code:java}
>   def applicationInfo(): v1.ApplicationInfo = {
> try {
>   // The ApplicationInfo may not be available when Spark is starting up.
>   
> store.view(classOf[ApplicationInfoWrapper]).max(1).iterator().next().info
> } catch {
>   case _: NoSuchElementException =>
> throw new SparkException("Failed to get the application information. 
> " +
>   "If you are starting up Spark, please wait a while until it's 
> ready.")
> }
>   }
> {code}
> Whereas the caller of this method, def getSparkUser in the Spark UI, does not 
> handle SparkException in its catch block:
> {code:java}
>   def getSparkUser: String = {
> try {
>   Option(store.applicationInfo().attempts.head.sparkUser)
> 
> .orElse(store.environmentInfo().systemProperties.toMap.get("user.name"))
> .getOrElse("")
> } catch {
>   case _: NoSuchElementException => ""
> }
>   }
> {code}
> So, on using this method (getSparkUser), the application can error out.
> So either we should throw
> {code:java}
>  throw new NoSuchElementException("Failed to get the application information. 
> " +
>   "If you are starting up Spark, please wait a while until it's 
> ready.")
> {code}
> or else add a case to catch SparkException in getSparkUser:
> case _: SparkException => ""



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-31941) Handling the exception in SparkUI for getSparkUser method

2020-06-10 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta resolved SPARK-31941.

Resolution: Fixed

> Handling the exception in SparkUI for getSparkUser method
> -
>
> Key: SPARK-31941
> URL: https://issues.apache.org/jira/browse/SPARK-31941
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.1.0
>Reporter: Saurabh Chawla
>Assignee: Saurabh Chawla
>Priority: Minor
>
> After SPARK-31632, SparkException is thrown from def applicationInfo():
> {code:java}
>   def applicationInfo(): v1.ApplicationInfo = {
> try {
>   // The ApplicationInfo may not be available when Spark is starting up.
>   
> store.view(classOf[ApplicationInfoWrapper]).max(1).iterator().next().info
> } catch {
>   case _: NoSuchElementException =>
> throw new SparkException("Failed to get the application information. 
> " +
>   "If you are starting up Spark, please wait a while until it's 
> ready.")
> }
>   }
> {code}
> Whereas the caller of this method, def getSparkUser in the Spark UI, does not 
> handle SparkException in its catch block:
> {code:java}
>   def getSparkUser: String = {
> try {
>   Option(store.applicationInfo().attempts.head.sparkUser)
> 
> .orElse(store.environmentInfo().systemProperties.toMap.get("user.name"))
> .getOrElse("")
> } catch {
>   case _: NoSuchElementException => ""
> }
>   }
> {code}
> So, on using this method (getSparkUser), the application can error out.
> So either we should throw
> {code:java}
>  throw new NoSuchElementException("Failed to get the application information. 
> " +
>   "If you are starting up Spark, please wait a while until it's 
> ready.")
> {code}
> or else add a case to catch SparkException in getSparkUser:
> case _: SparkException => ""



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-31941) Handling the exception in SparkUI for getSparkUser method

2020-06-10 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta reassigned SPARK-31941:
--

Assignee: Saurabh Chawla

> Handling the exception in SparkUI for getSparkUser method
> -
>
> Key: SPARK-31941
> URL: https://issues.apache.org/jira/browse/SPARK-31941
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.1.0
>Reporter: Saurabh Chawla
>Assignee: Saurabh Chawla
>Priority: Minor
>
> After SPARK-31632, SparkException is thrown from def applicationInfo():
> {code:java}
>   def applicationInfo(): v1.ApplicationInfo = {
> try {
>   // The ApplicationInfo may not be available when Spark is starting up.
>   
> store.view(classOf[ApplicationInfoWrapper]).max(1).iterator().next().info
> } catch {
>   case _: NoSuchElementException =>
> throw new SparkException("Failed to get the application information. 
> " +
>   "If you are starting up Spark, please wait a while until it's 
> ready.")
> }
>   }
> {code}
> Whereas the caller of this method, def getSparkUser in the Spark UI, does not 
> handle SparkException in its catch block:
> {code:java}
>   def getSparkUser: String = {
> try {
>   Option(store.applicationInfo().attempts.head.sparkUser)
> 
> .orElse(store.environmentInfo().systemProperties.toMap.get("user.name"))
> .getOrElse("")
> } catch {
>   case _: NoSuchElementException => ""
> }
>   }
> {code}
> So, on using this method (getSparkUser), the application can error out.
> So either we should throw
> {code:java}
>  throw new NoSuchElementException("Failed to get the application information. 
> " +
>   "If you are starting up Spark, please wait a while until it's 
> ready.")
> {code}
> or else add a case to catch SparkException in getSparkUser:
> case _: SparkException => ""



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Issue Comment Deleted] (SPARK-30119) Support pagination for spark streaming tab

2020-06-07 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-30119:
---
Comment: was deleted

(was: This issue is resolved in https://github.com/apache/spark/pull/28439)

> Support pagination for  spark streaming tab
> ---
>
> Key: SPARK-30119
> URL: https://issues.apache.org/jira/browse/SPARK-30119
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.1.0
>Reporter: jobit mathew
>Assignee: Rakesh Raushan
>Priority: Minor
> Fix For: 3.1.0
>
>
> Support pagination for spark streaming tab



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Reopened] (SPARK-30119) Support pagination for spark streaming tab

2020-06-07 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta reopened SPARK-30119:


> Support pagination for  spark streaming tab
> ---
>
> Key: SPARK-30119
> URL: https://issues.apache.org/jira/browse/SPARK-30119
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.1.0
>Reporter: jobit mathew
>Assignee: Rakesh Raushan
>Priority: Minor
> Fix For: 3.1.0
>
>
> Support pagination for spark streaming tab



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-30119) Support pagination for spark streaming tab

2020-06-06 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta resolved SPARK-30119.

Fix Version/s: 3.1.0
   Resolution: Fixed

This issue is resolved in https://github.com/apache/spark/pull/28439

> Support pagination for  spark streaming tab
> ---
>
> Key: SPARK-30119
> URL: https://issues.apache.org/jira/browse/SPARK-30119
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.1.0
>Reporter: jobit mathew
>Assignee: Rakesh Raushan
>Priority: Minor
> Fix For: 3.1.0
>
>
> Support pagination for spark streaming tab



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-30119) Support pagination for spark streaming tab

2020-06-06 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta reassigned SPARK-30119:
--

Assignee: Rakesh Raushan

> Support pagination for  spark streaming tab
> ---
>
> Key: SPARK-30119
> URL: https://issues.apache.org/jira/browse/SPARK-30119
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.1.0
>Reporter: jobit mathew
>Assignee: Rakesh Raushan
>Priority: Minor
>
> Support pagination for spark streaming tab



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31886) Fix the wrong coloring of nodes in DAG-viz

2020-06-01 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-31886:
--

 Summary: Fix the wrong coloring of nodes in DAG-viz
 Key: SPARK-31886
 URL: https://issues.apache.org/jira/browse/SPARK-31886
 Project: Spark
  Issue Type: Bug
  Components: Web UI
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


In the Job Page and Stage Page, nodes which are associated with "barrier mode" 
in the DAG-viz will be colored pale green.

But, with some types of jobs, nodes which are not associated with the mode will 
also be colored.

You can reproduce with the following operation.
{code:java}
sc.parallelize(1 to 
10).barrier.mapPartitions(identity).repartition(1).collect() {code}
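For illustration, a hedged sketch of the intended condition (the names and the 
CSS class here are illustrative, not the actual rendering code):

{code:java}
// Hedged sketch, not the actual fix: tag a DAG-viz node as barrier only when
// that node's own RDD is barrier, so nodes that come after the repartition in
// the reproduction above don't inherit the pale-green coloring.
case class RDDOperationNode(id: Int, name: String, barrier: Boolean)

def makeDotNode(node: RDDOperationNode): String = {
  val clazz = if (node.barrier) "barrier-rdd" else ""
  s"""${node.id} [label="${node.name}" class="$clazz"]"""
}
{code}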



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31882) DAG-viz is not rendered correctly with pagination.

2020-05-31 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31882:
---
Description: 
Because the DAG-viz for a job fetches link URLs for each stage from the stage 
table, rendering can fail with pagination.

You can reproduce this issue with the following operation.
{code:java}
 sc.parallelize(1 to 10).map(value => (value 
,value)).repartition(1).repartition(1).repartition(1).reduceByKey(_ + 
_).collect{code}
And then, visit the corresponding job page.
There are 5 stages, so set the page size to show fewer than 5 stages in the paged table.
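For illustration, a hedged sketch of one way around the problem: build each 
stage link from its ID and the UI base path instead of scraping hrefs out of the 
(possibly paginated) stage table. The helper name is illustrative:

{code:java}
// Hedged sketch: derive the stage page URL directly, so DAG-viz rendering
// doesn't depend on which rows happen to be visible in the paged table.
def stagePageLink(basePath: String, stageId: Int, attemptId: Int = 0): String =
  s"$basePath/stages/stage/?id=$stageId&attempt=$attemptId"

// stagePageLink("http://localhost:4040", 3)
// => "http://localhost:4040/stages/stage/?id=3&attempt=0"
{code}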

  was:
Because DAG-viz for a job fetches link urls for each stage from the stage 
table, rendering can fail with pagination.

You can reproduce this issue with the following operation.
{code:java}
 sc.parallelize(1 to 10).map(value => (value 
,value)).repartition(1).repartition(1).repartition(1).reduceByKey(_ + 
_).collect {code}


> DAG-viz is not rendered correctly with pagination.
> --
>
> Key: SPARK-31882
> URL: https://issues.apache.org/jira/browse/SPARK-31882
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 3.1.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Major
>
> Because the DAG-viz for a job fetches link URLs for each stage from the stage 
> table, rendering can fail with pagination.
> You can reproduce this issue with the following operation.
> {code:java}
>  sc.parallelize(1 to 10).map(value => (value 
> ,value)).repartition(1).repartition(1).repartition(1).reduceByKey(_ + 
> _).collect{code}
> And then, visit the corresponding job page.
> There are 5 stages, so set the page size to show fewer than 5 stages in the paged table.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31882) DAG-viz is not rendered correctly with pagination.

2020-05-31 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-31882:
--

 Summary: DAG-viz is not rendered correctly with pagination.
 Key: SPARK-31882
 URL: https://issues.apache.org/jira/browse/SPARK-31882
 Project: Spark
  Issue Type: Bug
  Components: Web UI
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


Because DAG-viz for a job fetches link urls for each stage from the stage 
table, rendering can fail with pagination.

You can reproduce this issue with the following operation.
{code:java}
 sc.parallelize(1 to 10).map(value => (value 
,value)).repartition(1).repartition(1).repartition(1).reduceByKey(_ + 
_).collect {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Re: Build time limit in PR builder

2020-05-28 Thread Kousuke Saruta

Thanks all. It's very helpful!

- Kousuke

On 2020/05/29 3:31, shane knapp ☠ wrote:

https://github.com/apache/spark/pull/28666

On Thu, May 28, 2020 at 11:20 AM shane knapp ☠ <skn...@berkeley.edu> wrote:

i'll get a PR put together now.

On Thu, May 28, 2020 at 8:26 AM Hyukjin Kwon <gurwls...@gmail.com> wrote:

I remember we were able to cut the time down pretty considerably in the past. 
For example, I investigated 
(https://github.com/apache/spark/pull/21822#issuecomment-407295739)
and fixed some of it before, e.g. 
https://github.com/apache/spark/pull/23111. Maybe we could skim again to reduce 
the build/testing time.

But I agree with increasing it for now to unblock PRs. 
Don't forget to fix it here too: 
https://github.com/apache/spark/blob/master/dev/run-tests-jenkins.py#L201 :-).

On Fri, May 29, 2020 at 12:14 AM shane knapp ☠ <skn...@berkeley.edu> wrote:

On Thu, May 28, 2020 at 7:16 AM Sean Owen <sro...@gmail.com> wrote:

What else can we do, I suppose?

there's not much else we can do.  i'll add 30m to the timeout.

shane
-- 
Shane Knapp

Computer Guy / Voice of Reason
UC Berkeley EECS Research / RISELab Staff Technical Lead
https://rise.cs.berkeley.edu



-- 
Shane Knapp

Computer Guy / Voice of Reason
UC Berkeley EECS Research / RISELab Staff Technical Lead
https://rise.cs.berkeley.edu



--
Shane Knapp
Computer Guy / Voice of Reason
UC Berkeley EECS Research / RISELab Staff Technical Lead
https://rise.cs.berkeley.edu




[jira] [Comment Edited] (SPARK-31764) JsonProtocol doesn't write RDDInfo#isBarrier

2020-05-27 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118242#comment-17118242
 ] 

Kousuke Saruta edited comment on SPARK-31764 at 5/28/20, 2:34 AM:
--

As you say, it's more proper to mark the label "bug" rather than "improvement", 
so I think it's better for this to go to 3.0.

(I don't remember why I marked the label "bug". Maybe it's a mistake.)

I'll make a backporting PR.


was (Author: sarutak):
As you say, it's more proper to mark the label "bug" rather than "improvement" 
so I think it's better to go to 3.0.

(I don't remember why I mark the label "bug". Maybe, it's a mistake.)

I'll make a backporting PR.

> JsonProtocol doesn't write RDDInfo#isBarrier
> 
>
> Key: SPARK-31764
> URL: https://issues.apache.org/jira/browse/SPARK-31764
> Project: Spark
>  Issue Type: Bug
>      Components: Spark Core
>    Affects Versions: 3.1.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Major
> Fix For: 3.1.0
>
>
> JsonProtocol reads RDDInfo#isBarrier but doesn't write it.
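For illustration, a minimal json4s sketch of the missing write side; the 
"Barrier" field name is an assumption mirroring what the read side would look up:

{code:java}
// Hedged sketch, not the actual patch: write isBarrier along with the other
// RDDInfo fields so a JSON round trip preserves it.
import org.json4s.JsonDSL._
import org.json4s.jackson.JsonMethods._

case class RDDInfo(id: Int, name: String, isBarrier: Boolean)

def rddInfoToJson(info: RDDInfo) =
  ("RDD ID" -> info.id) ~
  ("Name" -> info.name) ~
  ("Barrier" -> info.isBarrier)   // the field that was read but never written

// compact(render(rddInfoToJson(RDDInfo(0, "example", isBarrier = true))))
// => {"RDD ID":0,"Name":"example","Barrier":true}
{code}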



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-31764) JsonProtocol doesn't write RDDInfo#isBarrier

2020-05-27 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118242#comment-17118242
 ] 

Kousuke Saruta edited comment on SPARK-31764 at 5/28/20, 2:34 AM:
--

As you say, it's more proper to mark the label "bug" rather than "improvement" 
so I think it's better to go to 3.0.

(I don't remember why I mark the label "bug". Maybe, it's a mistake.)

I'll make a backporting PR.


was (Author: sarutak):
As you say, it's more proper to mark the label "bug" rather than "improvement" 
so I think it's better to go to 3.0.

I'll make a backporting PR.

> JsonProtocol doesn't write RDDInfo#isBarrier
> 
>
> Key: SPARK-31764
> URL: https://issues.apache.org/jira/browse/SPARK-31764
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.1.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Major
> Fix For: 3.1.0
>
>
> JsonProtocol reads RDDInfo#isBarrier but doesn't write it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-31764) JsonProtocol doesn't write RDDInfo#isBarrier

2020-05-27 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118242#comment-17118242
 ] 

Kousuke Saruta edited comment on SPARK-31764 at 5/28/20, 2:33 AM:
--

As you say, it's more proper to mark the label "bug" rather than "improvement" 
so I think it's better to go to 3.0.

I'll make a backporting PR.


was (Author: sarutak):
As you say, it's more proper to mark the label "bug" rather than "improvement" 
so I think it's better to go to 3.0.

> JsonProtocol doesn't write RDDInfo#isBarrier
> 
>
> Key: SPARK-31764
> URL: https://issues.apache.org/jira/browse/SPARK-31764
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.1.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Major
> Fix For: 3.1.0
>
>
> JsonProtocol reads RDDInfo#isBarrier but doesn't write it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31764) JsonProtocol doesn't write RDDInfo#isBarrier

2020-05-27 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31764:
---
Issue Type: Bug  (was: Improvement)

> JsonProtocol doesn't write RDDInfo#isBarrier
> 
>
> Key: SPARK-31764
> URL: https://issues.apache.org/jira/browse/SPARK-31764
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>    Assignee: Kousuke Saruta
>Priority: Major
> Fix For: 3.1.0
>
>
> JsonProtocol reads RDDInfo#isBarrier but doesn't write it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-31764) JsonProtocol doesn't write RDDInfo#isBarrier

2020-05-27 Thread Kousuke Saruta (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118242#comment-17118242
 ] 

Kousuke Saruta commented on SPARK-31764:


As you say, it's more proper to mark the label "bug" rather than "improvement" 
so I think it's better to go to 3.0.

> JsonProtocol doesn't write RDDInfo#isBarrier
> 
>
> Key: SPARK-31764
> URL: https://issues.apache.org/jira/browse/SPARK-31764
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.1.0
>    Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Major
> Fix For: 3.1.0
>
>
> JsonProtocol reads RDDInfo#isBarrier but doesn't write it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-31642) Support pagination for spark structured streaming tab

2020-05-23 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta resolved SPARK-31642.

Fix Version/s: 3.1.0
 Assignee: Rakesh Raushan
   Resolution: Fixed

> Support pagination for  spark structured streaming tab
> --
>
> Key: SPARK-31642
> URL: https://issues.apache.org/jira/browse/SPARK-31642
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.1.0
>Reporter: jobit mathew
>Assignee: Rakesh Raushan
>Priority: Minor
> Fix For: 3.1.0
>
>
> Support pagination for spark structured streaming tab



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31804) Add real headless browser support for HistoryServer tests

2020-05-23 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-31804:
--

 Summary: Add real headless browser support for HistoryServer tests
 Key: SPARK-31804
 URL: https://issues.apache.org/jira/browse/SPARK-31804
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


The motivation is the same as for SPARK-31756.

In the current master, there is a test case for HistoryServer which uses Ajax, 
so we need the same support for HistoryServer tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31765) Upgrade HtmlUnit >= 2.37.0

2020-05-19 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-31765:
--

 Summary: Upgrade HtmlUnit >= 2.37.0
 Key: SPARK-31765
 URL: https://issues.apache.org/jira/browse/SPARK-31765
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


Recently, a security issue which affects HtmlUnit was reported.

[https://nvd.nist.gov/vuln/detail/CVE-2020-5529]

According to the report, arbitrary code can be run by malicious users.

HtmlUnit is used for tests, so the impact might not be large, but it's better 
to upgrade it just in case.
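For illustration, a hedged sbt-style sketch of the kind of bump (Spark's own 
build manages this dependency elsewhere; 2.37.0 is just the minimum version from 
the summary above):

{code:java}
// Hedged sketch: pin the test-only HtmlUnit dependency at or above 2.37.0.
libraryDependencies += "net.sourceforge.htmlunit" % "htmlunit" % "2.37.0" % Test
{code}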



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31764) JsonProtocol doesn't write RDDInfo#isBarrier

2020-05-19 Thread Kousuke Saruta (Jira)
Kousuke Saruta created SPARK-31764:
--

 Summary: JsonProtocol doesn't write RDDInfo#isBarrier
 Key: SPARK-31764
 URL: https://issues.apache.org/jira/browse/SPARK-31764
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 3.1.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


JsonProtocol reads RDDInfo#isBarrier but doesn't write it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31756) Add real headless browser support for UI test

2020-05-19 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31756:
---
Description: 
In the current master, there are two problems with UI testing.

1. Lots of tests, especially JavaScript-related ones, are done manually.

Appearance is better confirmed by our eyes, but logic should ideally be tested 
by test cases.

2. Compared to real web browsers, HtmlUnit doesn't seem to support JavaScript 
well enough.

I previously added a JavaScript-related test for SPARK-31534 using HtmlUnit, 
which is a simple library-based headless browser for testing.

The test I added works somehow, but a JavaScript-related error is shown in 
unit-tests.log.
{code:java}
=== EXCEPTION START 
Exception class=[net.sourceforge.htmlunit.corejs.javascript.JavaScriptException]
com.gargoylesoftware.htmlunit.ScriptException: Error: TOOLTIP: Option 
"sanitizeFn" provided type "window" but expected type "(null|function)". 
(http://192.168.1.209:60724/static/jquery-3.4.1.min.js#2)
        at 
com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine$HtmlUnitContextAction.run(JavaScriptEngine.java:904)
        at 
net.sourceforge.htmlunit.corejs.javascript.Context.call(Context.java:628)
        at 
net.sourceforge.htmlunit.corejs.javascript.ContextFactory.call(ContextFactory.java:515)
        at 
com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine.callFunction(JavaScriptEngine.java:835)
        at 
com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine.callFunction(JavaScriptEngine.java:807)
        at 
com.gargoylesoftware.htmlunit.InteractivePage.executeJavaScriptFunctionIfPossible(InteractivePage.java:216)
        at 
com.gargoylesoftware.htmlunit.javascript.background.JavaScriptFunctionJob.runJavaScript(JavaScriptFunctionJob.java:52)
        at 
com.gargoylesoftware.htmlunit.javascript.background.JavaScriptExecutionJob.run(JavaScriptExecutionJob.java:102)
        at 
com.gargoylesoftware.htmlunit.javascript.background.JavaScriptJobManagerImpl.runSingleJob(JavaScriptJobManagerImpl.java:426)
        at 
com.gargoylesoftware.htmlunit.javascript.background.DefaultJavaScriptExecutor.run(DefaultJavaScriptExecutor.java:157)
        at java.lang.Thread.run(Thread.java:748)
Caused by: net.sourceforge.htmlunit.corejs.javascript.JavaScriptException: 
Error: TOOLTIP: Option "sanitizeFn" provided type "window" but expected type 
"(null|function)". (http://192.168.1.209:60724/static/jquery-3.4.1.min.js#2)
        at 
net.sourceforge.htmlunit.corejs.javascript.Interpreter.interpretLoop(Interpreter.java:1009)
        at 
net.sourceforge.htmlunit.corejs.javascript.Interpreter.interpret(Interpreter.java:800)
        at 
net.sourceforge.htmlunit.corejs.javascript.InterpretedFunction.call(InterpretedFunction.java:105)
        at 
net.sourceforge.htmlunit.corejs.javascript.ContextFactory.doTopCall(ContextFactory.java:413)
        at 
com.gargoylesoftware.htmlunit.javascript.HtmlUnitContextFactory.doTopCall(HtmlUnitContextFactory.java:252)
        at 
net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:3264)
        at 
com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine$4.doRun(JavaScriptEngine.java:828)
        at 
com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine$HtmlUnitContextAction.run(JavaScriptEngine.java:889)
        ... 10 more
JavaScriptException value = Error: TOOLTIP: Option "sanitizeFn" provided type 
"window" but expected type "(null|function)".
== CALLING JAVASCRIPT ==
  function () {
      throw e;
  }
=== EXCEPTION END {code}
 

I tried to upgrade HtmlUnit to 2.40.0 but, what is worse, the test stopped 
working even though it runs without error on real browsers like Chrome, Safari 
and Firefox.
{code:java}
[info] UISeleniumSuite:
[info] - SPARK-31534: text for tooltip should be escaped *** FAILED *** (17 
seconds, 745 milliseconds)
[info]   The code passed to eventually never returned normally. Attempted 2 
times over 12.910785232 seconds. Last failure message: 
com.gargoylesoftware.htmlunit.ScriptException: ReferenceError: Assignment to 
undefined "regeneratorRuntime" in strict mode 
(http://192.168.1.209:62132/static/vis-timeline-graph2d.min.js#52(Function)#1){code}
 

 To resolve those problems, it's better to support a real headless browser for 
UI tests.
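For illustration, a minimal sketch of driving a real headless browser from a 
test, assuming selenium-java and a local chromedriver are available (not the 
actual suite):

{code:java}
// Hedged sketch: run real Chrome without a display so page JavaScript executes
// exactly as it would in a user's browser.
import org.openqa.selenium.chrome.{ChromeDriver, ChromeOptions}

val options = new ChromeOptions()
options.addArguments("--headless")

val driver = new ChromeDriver(options)
try {
  driver.get("http://localhost:4040/jobs/")
  assert(driver.getTitle.nonEmpty)   // the page (and its JS) actually loaded
} finally {
  driver.quit()
}
{code}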

  was:
In the current master, there are two problems for UI test.

1. Lots of tests especially JavaScript related ones are done manually.

Appearance is better to be confirmed by our eyes but logic should be tested by 
test cases ideally.

 

2. Compared to the real web browsers, HtmlUnit doesn't seem to support 
JavaScriopt enough.

I added a JavaScript related test before for SPARK-31534 using HtmlUnit which 
is simple library based headless browser for test.

The test I added works 

[jira] [Updated] (SPARK-31756) Add real headless browser support for UI test

2020-05-19 Thread Kousuke Saruta (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated SPARK-31756:
---
Description: 
In the current master, there are two problems for UI test.

1. Lots of tests especially JavaScript related ones are done manually.

Appearance is better to be confirmed by our eyes but logic should be tested by 
test cases ideally.

 

2. Compared to the real web browsers, HtmlUnit doesn't seem to support 
JavaScriopt enough.

I added a JavaScript related test before for SPARK-31534 using HtmlUnit which 
is simple library based headless browser for test.

The test I added works somehow but some JavaScript related error is shown in 
unit-tests.log.
{code:java}
=== EXCEPTION START 
Exception class=[net.sourceforge.htmlunit.corejs.javascript.JavaScriptException]
com.gargoylesoftware.htmlunit.ScriptException: Error: TOOLTIP: Option 
"sanitizeFn" provided type "window" but expected type "(null|function)". 
(http://192.168.1.209:60724/static/jquery-3.4.1.min.js#2)
        at 
com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine$HtmlUnitContextAction.run(JavaScriptEngine.java:904)
        at 
net.sourceforge.htmlunit.corejs.javascript.Context.call(Context.java:628)
        at 
net.sourceforge.htmlunit.corejs.javascript.ContextFactory.call(ContextFactory.java:515)
        at 
com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine.callFunction(JavaScriptEngine.java:835)
        at 
com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine.callFunction(JavaScriptEngine.java:807)
        at 
com.gargoylesoftware.htmlunit.InteractivePage.executeJavaScriptFunctionIfPossible(InteractivePage.java:216)
        at 
com.gargoylesoftware.htmlunit.javascript.background.JavaScriptFunctionJob.runJavaScript(JavaScriptFunctionJob.java:52)
        at 
com.gargoylesoftware.htmlunit.javascript.background.JavaScriptExecutionJob.run(JavaScriptExecutionJob.java:102)
        at 
com.gargoylesoftware.htmlunit.javascript.background.JavaScriptJobManagerImpl.runSingleJob(JavaScriptJobManagerImpl.java:426)
        at 
com.gargoylesoftware.htmlunit.javascript.background.DefaultJavaScriptExecutor.run(DefaultJavaScriptExecutor.java:157)
        at java.lang.Thread.run(Thread.java:748)
Caused by: net.sourceforge.htmlunit.corejs.javascript.JavaScriptException: 
Error: TOOLTIP: Option "sanitizeFn" provided type "window" but expected type 
"(null|function)". (http://192.168.1.209:60724/static/jquery-3.4.1.min.js#2)
        at 
net.sourceforge.htmlunit.corejs.javascript.Interpreter.interpretLoop(Interpreter.java:1009)
        at 
net.sourceforge.htmlunit.corejs.javascript.Interpreter.interpret(Interpreter.java:800)
        at 
net.sourceforge.htmlunit.corejs.javascript.InterpretedFunction.call(InterpretedFunction.java:105)
        at 
net.sourceforge.htmlunit.corejs.javascript.ContextFactory.doTopCall(ContextFactory.java:413)
        at 
com.gargoylesoftware.htmlunit.javascript.HtmlUnitContextFactory.doTopCall(HtmlUnitContextFactory.java:252)
        at 
net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:3264)
        at 
com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine$4.doRun(JavaScriptEngine.java:828)
        at 
com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine$HtmlUnitContextAction.run(JavaScriptEngine.java:889)
        ... 10 more
JavaScriptException value = Error: TOOLTIP: Option "sanitizeFn" provided type 
"window" but expected type "(null|function)".
== CALLING JAVASCRIPT ==
  function () {
      throw e;
  }
=== EXCEPTION END {code}
 

I tried to upgrade HtmlUnit to 2.40.0 but what is worse, the test become not 
working even though it works on real browsers like Chrome, Safari and Firefox 
without error.
{code:java}
[info] UISeleniumSuite:
[info] - SPARK-31534: text for tooltip should be escaped *** FAILED *** (17 
seconds, 745 milliseconds)
[info]   The code passed to eventually never returned normally. Attempted 2 
times over 12.910785232 seconds. Last failure message: 
com.gargoylesoftware.htmlunit.ScriptException: ReferenceError: Assignment to 
undefined "regeneratorRuntime" in strict mode 
(http://192.168.1.209:62132/static/vis-timeline-graph2d.min.js#52(Function)#1){code}
 

 To resolve those problems, it's better to support headless browser for UI test.

  was:
In the current master, there are two problems for UI test.

1. Lots of tests especially JavaScript related ones are done manually.

Appearance is better to be confirmed by our eyes but logic should be tested by 
test cases ideally.

 

2. Compared to the real web browsers, HtmlUnit doesn't seem to support 
JavaScriopt enough.

I added a JavaScript related test before for SPARK-31534 using HtmlUnit which 
is simple library based headless browser for test.

The 
