GitHub user Sherry302 opened a pull request:

    https://github.com/apache/spark/pull/14659

    [SPARK-16757] Set up Spark caller context to HDFS

    ## What changes were proposed in this pull request?
    
    1. Pass `jobId` to `Task`.
    2. Invoke the Hadoop caller context APIs.
    
    A new function `setCallerContext` is added to `Utils`. It invokes the 
`org.apache.hadoop.ipc.CallerContext` API to set up Spark caller contexts, 
which are then written into `hdfs-audit.log`.
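
    For context, here is a minimal sketch of what such a helper could look 
like; this is an assumption for illustration, not the exact patch code. 
Reflection is used so the build still succeeds against Hadoop versions that 
predate the `CallerContext` API:

    ```scala
    import scala.util.control.NonFatal

    object CallerContextSketch {
      // Sketch (assumed, not the exact patch code) of a Utils.setCallerContext
      // helper. Reflection keeps it compiling against Hadoop versions where
      // org.apache.hadoop.ipc.CallerContext does not exist yet.
      def setCallerContext(context: String): Unit = {
        try {
          val contextClass = Class.forName("org.apache.hadoop.ipc.CallerContext")
          val builderClass = Class.forName("org.apache.hadoop.ipc.CallerContext$Builder")
          val builder = builderClass.getConstructor(classOf[String]).newInstance(context)
          val callerContext = builderClass.getMethod("build").invoke(builder)
          // CallerContext.setCurrent is a static method, hence the null receiver.
          contextClass.getMethod("setCurrent", contextClass).invoke(null, callerContext)
        } catch {
          case NonFatal(_) => // Hadoop on the classpath predates caller contexts; skip.
        }
      }
    }
    ```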
    
    For applications in YARN client mode, `org.apache.hadoop.ipc.CallerContext` 
is invoked in `Task` and in the YARN `Client`. For applications in YARN cluster 
mode, it is invoked in `Task` and in `ApplicationMaster`.
    
    The Spark caller contexts written into `hdfs-audit.log` are the application 
name (`{spark.app.name}`) and `JobID_stageID_stageAttemptId_taskID_attemptNumber`.
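
    As an illustration of the task-side string, a hypothetical helper (the 
names and exact wiring here are illustrative, not necessarily the patch's) 
could assemble the context that shows up in the audit-log example below:

    ```scala
    object TaskCallerContextSketch {
      // Hypothetical helper; parameter names mirror the audit-log fields.
      def taskCallerContext(jobId: Int, stageId: Int, stageAttemptId: Int,
                            taskId: Long, attemptNumber: Int): String =
        s"JobId_${jobId}_StageID_${stageId}_stageAttemptId_${stageAttemptId}" +
          s"_taskID_${taskId}_attemptNumber_$attemptNumber"

      def main(args: Array[String]): Unit =
        // Prints: JobId_0_StageID_0_stageAttemptId_0_taskID_0_attemptNumber_0
        println(taskCallerContext(0, 0, 0, 0L, 0))
    }
    ```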
    
    ## How was this patch tested?
    Manual tests against Spark applications in YARN client mode and YARN 
cluster mode, checking that the Spark caller contexts are written into the 
HDFS `hdfs-audit.log` successfully. (This assumes audit logging with caller 
contexts is enabled on the NameNode, e.g. `hadoop.caller.context.enabled=true`.)
    
    For example, run SparkKMeans in YARN client mode: 
    `./bin/spark-submit  --master yarn --deploy-mode client --class 
org.apache.spark.examples.SparkKMeans 
examples/target/original-spark-examples_2.11-2.1.0-SNAPSHOT.jar 
hdfs://localhost:9000/lr_big.txt 2 5`
    
    Before:
    Records in `hdfs-audit.log` contain no Spark caller context.
    
    After:
    Records in `hdfs-audit.log` contain the Spark caller contexts.
    (_Note: the Spark caller context below was set because the Hadoop caller 
context API was invoked in the YARN `Client`_)
    `2016-07-21 13:52:30,802 INFO FSNamesystem.audit: allowed=true ugi=wyang (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/lr_big.txt dst=null perm=null proto=rpc callerContext=SparkKMeans running on Spark`
    (_Note: the Spark caller context below was set because the Hadoop caller 
context API was invoked in `Task`_)
    `2016-07-21 13:52:35,584 INFO FSNamesystem.audit: allowed=true        
ugi=wyang (auth:SIMPLE)        ip=/127.0.0.1        cmd=open        
src=/lr_big.txt        dst=null        perm=null        proto=rpc        
callerContext=JobId_0_StageID_0_stageAttemptId_0_taskID_0_attemptNumber_0`

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/Sherry302/spark callercontextSubmit

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/14659.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #14659
    
----
commit ec6833d32ef14950b2d81790bc908992f6288815
Author: Weiqing Yang <yangweiqing...@gmail.com>
Date:   2016-08-16T04:11:41Z

    [SPARK-16757] Set up Spark caller context to HDFS

----

