[GitHub] spark pull request #20586: Branch 2.1

2018-02-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/20586


---




[GitHub] spark pull request #20586: Branch 2.1

2018-02-12 Thread zhuge134
GitHub user zhuge134 opened a pull request:

https://github.com/apache/spark/pull/20586

Branch 2.1

## What changes were proposed in this pull request?

(Please fill in changes proposed in this fix)

## How was this patch tested?

(Please explain how this patch was tested. E.g. unit tests, integration 
tests, manual tests)
(If this patch involves UI changes, please attach a screenshot; otherwise, 
remove this)

Please review http://spark.apache.org/contributing.html before opening a 
pull request.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/spark branch-2.1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/20586.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #20586


commit 21afc4534f90e063330ad31033aa178b37ef8340
Author: Marcelo Vanzin 
Date:   2017-02-22T21:19:31Z

[SPARK-19652][UI] Do auth checks for REST API access (branch-2.1).

The REST API has a security filter that performs auth checks
based on the UI root's security manager. That works fine when
the UI root is the app's UI, but not when it's the history server.

In the SHS case, all users would be allowed to see all applications
through the REST API, even if the UI itself wouldn't be available
to them.

This change adds auth checks for each app access through the API
too, so that only authorized users can see the app's data.

The change also modifies the existing security filter to use
`HttpServletRequest.getRemoteUser()`, which is used in other
places. That is not necessarily the same as the principal's
name; for example, when using Hadoop's SPNEGO auth filter,
the remote user has the realm information stripped, which then
matches the user name registered as the owner of the application.

I also renamed the UIRootFromServletContext trait to a more generic
name since I'm using it to store more context information now.

Tested manually with an authentication filter enabled.

Author: Marcelo Vanzin 

Closes #17019 from vanzin/SPARK-19652_2.1.
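The per-app check described above can be sketched as follows. This is an illustrative sketch only; `strip_realm` and `can_view` are hypothetical names, not Spark's actual API:

```python
def strip_realm(principal):
    """SPNEGO principals look like 'alice@EXAMPLE.COM'; the remote
    user seen by the servlet filter has the realm stripped."""
    return principal.split("@", 1)[0]

def can_view(remote_user, app_owner, view_acls, admin_acls):
    """Allow access only if the requester is the app's owner or is
    listed in the view/admin ACLs; an empty remote user is rejected."""
    if not remote_user:
        return False
    return (remote_user == app_owner
            or remote_user in view_acls
            or remote_user in admin_acls)

print(strip_realm("alice@EXAMPLE.COM"))          # -> alice
print(can_view("alice", "alice", set(), set()))  # -> True
print(can_view("mallory", "alice", set(), set()))  # -> False
```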

commit d30238f1b9096c9fd85527d95be639de9388fcc7
Author: actuaryzhang 
Date:   2017-02-23T19:12:02Z

[SPARK-19682][SPARKR] Issue warning (or error) when subset method "[[" 
takes vector index

## What changes were proposed in this pull request?
The `[[` method is supposed to take a single index and return a column. 
This differs from base R, which accepts a vector index. We should check for 
this and issue a warning or error when a vector index is supplied (which is 
quite likely given the behavior in base R).

Currently I issue a warning message and take only the first element of 
the vector index. We could change this to an error if that's better.

## How was this patch tested?
new tests

Author: actuaryzhang 

Closes #17017 from actuaryzhang/sparkRSubsetter.

(cherry picked from commit 7bf09433f5c5e08154ba106be21fe24f17cd282b)
Signed-off-by: Felix Cheung 
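The warn-and-take-first behavior described in that commit can be sketched like this; `select_column` is a hypothetical name used for illustration, not SparkR's actual implementation:

```python
import warnings

def select_column(columns, index):
    """Mimic the described `[[` behavior: a single index returns one
    column; a vector (list/tuple) index triggers a warning and only
    its first element is used."""
    if isinstance(index, (list, tuple)):
        warnings.warn("vector index supplied; only the first element is used")
        index = index[0]
    return columns[index]

cols = {"name": ["a", "b"], "age": [1, 2]}
print(select_column(cols, "age"))            # -> [1, 2]
print(select_column(cols, ["age", "name"]))  # warns, -> [1, 2]
```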

commit 43084b3cc3918b720fe28053d2037fa22a71264e
Author: Herman van Hovell 
Date:   2017-02-23T22:58:02Z

[SPARK-19459][SQL][BRANCH-2.1] Support for nested char/varchar fields in ORC

## What changes were proposed in this pull request?
This is a backport of the following two commits: 
https://github.com/apache/spark/commit/78eae7e67fd5dec0c2d5b1853ce86cd0f1ae & 
https://github.com/apache/spark/commit/de8a03e68202647555e30fffba551f65bc77608d

This PR adds support for ORC tables with (nested) char/varchar fields.

## How was this patch tested?
Added a regression test to `OrcSourceSuite`.

Author: Herman van Hovell 

Closes #17041 from hvanhovell/SPARK-19459-branch-2.1.

commit 66a7ca28a9de92e67ce24896a851a0c96c92aec6
Author: Takeshi Yamamuro 
Date:   2017-02-24T09:54:00Z

[SPARK-19691][SQL][BRANCH-2.1] Fix ClassCastException when calculating 
percentile of decimal column

## What changes were proposed in this pull request?
This is a backport of the following commit: 
https://github.com/apache/spark/commit/93aa4271596a30752dc5234d869c3ae2f6e8e723

This PR fixes the ClassCastException shown below:
```
scala> spark.range(10).selectExpr("cast (id as decimal) as x").selectExpr("percentile(x, 0.5)").collect()
java.lang.ClassCastException: org.apache.spark.sql.types.Decimal cannot be cast to java.lang.Number
  at org.apache.spark.sql.catalyst.expressions.aggregate.Percentile.update(Percentile.scala:141)
  at
```
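The failure comes from treating a decimal value as a plain numeric type during the percentile update. A minimal sketch of the idea, coercing decimals to float before aggregating (illustrative only, not Spark's actual `Percentile` implementation):

```python
from decimal import Decimal

def percentile(values, p):
    """Naive linear-interpolation percentile that coerces Decimal
    inputs to float first, avoiding the Number-cast failure."""
    nums = sorted(float(v) for v in values)
    if not nums:
        return None
    k = (len(nums) - 1) * p
    lo, hi = int(k), min(int(k) + 1, len(nums) - 1)
    return nums[lo] + (nums[hi] - nums[lo]) * (k - lo)

print(percentile([Decimal(i) for i in range(10)], 0.5))  # -> 4.5
```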