[GitHub] spark pull request #20867: [SPARK-23759][UI] Unable to bind Spark2 history s...

2018-03-22 Thread felixalbani
Github user felixalbani closed the pull request at:

https://github.com/apache/spark/pull/20867


---




[GitHub] spark pull request #20867: [SPARK-23759][UI] Unable to bind Spark2 history s...

2018-03-21 Thread mgaido91
Github user mgaido91 commented on a diff in the pull request:

https://github.com/apache/spark/pull/20867#discussion_r176068898
  
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -330,12 +330,13 @@ private[spark] object JettyUtils extends Logging {
          -1,
          connectionFactories: _*)
        connector.setPort(port)
+      connector.setHost(hostName)
        connector.start()

        // Currently we only use "SelectChannelConnector"
        // Limit the max acceptor number to 8 so that we don't waste a lot of threads
        connector.setAcceptQueueSize(math.min(connector.getAcceptors, 8))
--- End diff --

I think this line should also be moved before the start.
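
For context, a minimal sketch of the ordering this review converges on, assuming Jetty's ServerConnector API as used by JettyUtils (newBoundConnector is a hypothetical helper, not the actual Spark code):

    import org.eclipse.jetty.server.{Server, ServerConnector}

    object ConnectorSketch {
      // All connector configuration (port, host, accept queue size) happens
      // before start(), since Jetty binds the server socket when the
      // connector starts.
      def newBoundConnector(server: Server, hostName: String, port: Int): ServerConnector = {
        val connector = new ServerConnector(server)
        connector.setPort(port)
        connector.setHost(hostName) // must come before start(), per SPARK-23759
        connector.setAcceptQueueSize(math.min(connector.getAcceptors, 8))
        connector.start()
        connector
      }
    }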


---




[GitHub] spark pull request #20867: Spark 23759

2018-03-20 Thread felixalbani
GitHub user felixalbani reopened a pull request:

https://github.com/apache/spark/pull/20867

Spark 23759

## What changes were proposed in this pull request?

This pull request fixes the SPARK-23759 issue.

The problem was caused by the connector.setHost(hostName) call being placed after connector.start().

## How was this patch tested?

The patch was tested after a build and deployment. It requires the SPARK_LOCAL_IP environment variable to be set in spark-env.sh.
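
For reference, a minimal sketch of the spark-env.sh setting this relies on (the address below is illustrative only):

    # conf/spark-env.sh
    export SPARK_LOCAL_IP=10.1.2.3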

Please review http://spark.apache.org/contributing.html before opening a 
pull request.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/felixalbani/spark SPARK-23759

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/20867.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #20867


commit f9d1293991b34a80b4fa11a846812e3a79e1493f
Author: bag_of_tricks 
Date:   2018-03-20T22:46:55Z

Solution to SPARK-23759 is to setHost before starting the connector

The fix calls setHost before starting the connector.

I ran a few tests and was able to confirm the binding happens as expected.




---




[GitHub] spark pull request #20867: Spark 23759

2018-03-20 Thread felixalbani
Github user felixalbani closed the pull request at:

https://github.com/apache/spark/pull/20867


---




[GitHub] spark pull request #20867: Spark 23759

2018-03-20 Thread felixalbani
GitHub user felixalbani opened a pull request:

https://github.com/apache/spark/pull/20867

Spark 23759

## What changes were proposed in this pull request?

This pull request fixes the SPARK-23759 issue.

The problem was caused by the connector.setHost(hostName) call being placed after connector.start().

## How was this patch tested?

The patch was tested after a build and deployment. It requires the SPARK_LOCAL_IP environment variable to be set in spark-env.sh.

Please review http://spark.apache.org/contributing.html before opening a 
pull request.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/felixalbani/spark SPARK-23759

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/20867.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #20867


commit edcd9fbc92683753d55ed0c69f391bf3bed59da4
Author: Shixiong Zhu 
Date:   2017-07-11T03:26:17Z

[SPARK-21369][CORE] Don't use Scala Tuple2 in common/network-*

## What changes were proposed in this pull request?

Remove all usages of Scala Tuple2 from common/network-* projects. 
Otherwise, Yarn users cannot use `spark.reducer.maxReqSizeShuffleToMem`.

## How was this patch tested?

Jenkins.

Author: Shixiong Zhu 

Closes #18593 from zsxwing/SPARK-21369.

(cherry picked from commit 833eab2c9bd273ee9577fbf9e480d3e3a4b7d203)
Signed-off-by: Wenchen Fan 

commit 399aa016e8f44fea4e5ef4b71a9a80484dd755f8
Author: Xingbo Jiang 
Date:   2017-07-11T13:52:54Z

[SPARK-21366][SQL][TEST] Add sql test for window functions

## What changes were proposed in this pull request?

Add sql test for window functions; also remove unnecessary test cases in
`WindowQuerySuite`.

## How was this patch tested?

Added `window.sql` and the corresponding output file.

Author: Xingbo Jiang 

Closes #18591 from jiangxb1987/window.

(cherry picked from commit 66d21686556681457aab6e44e19f5614c5635f0c)
Signed-off-by: Wenchen Fan 

commit cb6fc89ba20a427fa7d66fa5036b17c1a5d5d87f
Author: Eric Vandenberg 
Date:   2017-07-12T06:49:15Z

[SPARK-21219][CORE] Task retry occurs on same executor due to race condition with blacklisting

There's a race condition in the current TaskSetManager where a failed task
is added for retry (addPendingTask) and can asynchronously be assigned to an
executor *prior* to the blacklist state update (updateBlacklistForFailedTask);
the result is that the task might re-execute on the same executor. This is
particularly problematic if the executor is shutting down, since the retry task
immediately becomes a lost task (ExecutorLostFailure). Another side effect is
that the actual failure reason gets obscured by the retry task, which never
actually executed. There are sample logs showing the issue in
https://issues.apache.org/jira/browse/SPARK-21219

The fix is to change the ordering of the addPendingTask and
updateBlacklistForFailedTask calls in TaskSetManager.handleFailedTask.
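
Schematically, a simplified, self-contained sketch of that reordering with stubbed types (the real method carries much richer task and failure state):

    import scala.collection.mutable.ArrayBuffer

    class TaskSetBlacklistSketch {
      def updateBlacklistForFailedTask(host: String, execId: String, index: Int): Unit = ()
    }

    class TaskSetManagerSketch(blacklist: Option[TaskSetBlacklistSketch]) {
      private val pendingTasks = ArrayBuffer.empty[Int]

      def handleFailedTask(index: Int, host: String, execId: String): Unit = {
        // Record the failure first, so the executor/host is blacklisted for
        // this task before any scheduler thread can re-offer it...
        blacklist.foreach(_.updateBlacklistForFailedTask(host, execId, index))
        // ...and only then make the task eligible for rescheduling
        // (stands in for addPendingTask(index)).
        pendingTasks += index
      }
    }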

Implemented a unit test that verifies the task is blacklisted before it is
added to the pending tasks. Ran the unit test without the fix and it fails.
Ran the unit test with the fix and it passes.

Please review http://spark.apache.org/contributing.html before opening a 
pull request.

Author: Eric Vandenberg 

Closes #18427 from ericvandenbergfb/blacklistFix.

## What changes were proposed in this pull request?

This is a backport of the fix to SPARK-21219, already checked in as 96d58f2.

## How was this patch tested?

Ran TaskSetManagerSuite tests locally.

Author: Eric Vandenberg 

Closes #18604 from jsoltren/branch-2.2.

commit 39eba3053ac99f03d9df56471bae5fc5cc9f4462
Author: Kohki Nishio 
Date:   2017-07-13T00:22:40Z

[SPARK-18646][REPL] Set parent classloader as null for ExecutorClassLoader

## What changes were proposed in this pull request?

`ClassLoader` will preferentially load classes from `parent`. Only when
`parent` is null or the parent's load fails will it call the overridden
`findClass` function. To avoid the potential issues caused by loading classes
with an inappropriate class loader, we should set the `parent` of `ClassLoader`
to null, so that we can fully control which class loader is used.
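
Roughly, an illustrative sketch of that delegation rule (not the actual ExecutorClassLoader):

    // With a null parent, loadClass skips parent delegation (beyond the
    // bootstrap loader) and falls through to findClass, so this loader
    // fully controls how classes are resolved.
    class IsolatedLoaderSketch(bytesFor: String => Array[Byte])
        extends ClassLoader(null) {
      override def findClass(name: String): Class[_] = {
        val bytes = bytesFor(name)
        defineClass(name, bytes, 0, bytes.length)
      }
    }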

This is a take-over of #17074; the primary author of this PR is taroplus.

Should close #17074 after this PR gets merged.

## How was this patch tested?

Add test case in