This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
     new d5b903e  [SPARK-32035][DOCS][EXAMPLES] Fixed typos involving AWS Access, Secret, & Sessions tokens
d5b903e is described below

commit d5b903e38556ee3e8e1eb8f71a08e232afa4e36a
Author: moovlin <richjoer...@gmail.com>
AuthorDate: Thu Jul 9 10:35:21 2020 -0700

    [SPARK-32035][DOCS][EXAMPLES] Fixed typos involving AWS Access, Secret, & Sessions tokens
    
    ### What changes were proposed in this pull request?
    I resolved some inconsistencies in the AWS environment variable names. They are fixed in the documentation as well as in the examples. I grepped through the repo to try to find any more instances, but nothing else turned up.
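A search along these lines (illustrative only, not the exact command used) can flag the stale short names without matching the canonical ones:

```shell
# Hypothetical demo file standing in for the Spark tree; the pattern matches
# the stale names (AWS_ACCESS_KEY / AWS_SECRET_KEY) but not the canonical
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY, because grep -E requires the
# literal "AWS_" immediately before the short name.
mkdir -p /tmp/grep-demo
printf 'export AWS_SECRET_KEY=x\nexport AWS_SECRET_ACCESS_KEY=y\n' > /tmp/grep-demo/sample.sh
grep -En 'AWS_(ACCESS_KEY|SECRET_KEY)=' /tmp/grep-demo/sample.sh
# prints only: 1:export AWS_SECRET_KEY=x
```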
    
    ### Why are the changes needed?
    
    As previously mentioned, there is a JIRA request, SPARK-32035, which encapsulates all the issues. But, in summary, the naming of items was inconsistent.
    
    ### Does this PR introduce _any_ user-facing change?
    
    Correct names:
    AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY
    AWS_SESSION_TOKEN
    These are the same names that AWS uses in their libraries.
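For reference, a minimal session using the canonical names looks like this (the values are placeholders, never real credentials):

```shell
# Placeholder values only; never commit real credentials.
export AWS_ACCESS_KEY_ID=AKIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=example-secret
# AWS_SESSION_TOKEN is only needed for temporary (STS) credentials.
export AWS_SESSION_TOKEN=example-token
# Child processes such as spark-submit inherit these:
printenv AWS_ACCESS_KEY_ID
# prints: AKIAEXAMPLE
```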
    
    However, looking through the Spark documentation and comments, I see that these are not denoted correctly across the board:
    
    docs/cloud-integration.md
    106:1. `spark-submit` reads the `AWS_ACCESS_KEY`, `AWS_SECRET_KEY` <-- both different
    107:and `AWS_SESSION_TOKEN` environment variables and sets the associated authentication options
    
    docs/streaming-kinesis-integration.md
    232:- Set up the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_KEY` with your AWS credentials. <-- secret key different
    
    
    external/kinesis-asl/src/main/python/examples/streaming/kinesis_wordcount_asl.py
    34: $ export AWS_ACCESS_KEY_ID=<your-access-key>
    35: $ export AWS_SECRET_KEY=<your-secret-key> <-- different
    48: Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_KEY <-- secret key different
    
    core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
    438: val keyId = System.getenv("AWS_ACCESS_KEY_ID")
    439: val accessKey = System.getenv("AWS_SECRET_ACCESS_KEY")
    448: val sessionToken = System.getenv("AWS_SESSION_TOKEN")
    
    
    external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala
    53: * $ export AWS_ACCESS_KEY_ID=<your-access-key>
    54: * $ export AWS_SECRET_KEY=<your-secret-key> <-- different
    65: * Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_KEY <-- secret key different
    
    
    external/kinesis-asl/src/main/java/org/apache/spark/examples/streaming/JavaKinesisWordCountASL.java
    59: * $ export AWS_ACCESS_KEY_ID=[your-access-key]
    60: * $ export AWS_SECRET_KEY=<your-secret-key> <-- different
    71: * Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_KEY <-- secret key different
    
    These were all fixed to match names listed under the "correct names" heading.
    
    ### How was this patch tested?
    
    I built the documentation using Jekyll and verified that the changes were present and accurate.
    
    Closes #29058 from Moovlin/SPARK-32035.
    
    Authored-by: moovlin <richjoer...@gmail.com>
    Signed-off-by: Dongjoon Hyun <dongj...@apache.org>
    (cherry picked from commit 9331a5c44baa79998625829e9be624e8564c91ea)
    Signed-off-by: Dongjoon Hyun <dongj...@apache.org>
---
 docs/cloud-integration.md                                             | 2 +-
 docs/streaming-kinesis-integration.md                                 | 2 +-
 .../org/apache/spark/examples/streaming/JavaKinesisWordCountASL.java  | 4 ++--
 .../src/main/python/examples/streaming/kinesis_wordcount_asl.py       | 4 ++--
 .../org/apache/spark/examples/streaming/KinesisWordCountASL.scala     | 4 ++--
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/docs/cloud-integration.md b/docs/cloud-integration.md
index b455cd4..e50ff58 100644
--- a/docs/cloud-integration.md
+++ b/docs/cloud-integration.md
@@ -101,7 +101,7 @@ for talking to cloud infrastructures, in which case this module may not be neede
 Spark jobs must authenticate with the object stores to access data within them.
 
 1. When Spark is running in a cloud infrastructure, the credentials are usually automatically set up.
-1. `spark-submit` reads the `AWS_ACCESS_KEY`, `AWS_SECRET_KEY`
+1. `spark-submit` reads the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`
 and `AWS_SESSION_TOKEN` environment variables and sets the associated authentication options
 for the `s3n` and `s3a` connectors to Amazon S3.
 1. In a Hadoop cluster, settings may be set in the `core-site.xml` file.
diff --git a/docs/streaming-kinesis-integration.md b/docs/streaming-kinesis-integration.md
index 0685cd8..fc21e3d 100644
--- a/docs/streaming-kinesis-integration.md
+++ b/docs/streaming-kinesis-integration.md
@@ -200,7 +200,7 @@ To run the example,
 
 - Set up Kinesis stream (see earlier section) within AWS. Note the name of the Kinesis stream and the endpoint URL corresponding to the region where the stream was created.
 
-- Set up the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_KEY` with your AWS credentials.
+- Set up the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` with your AWS credentials.
 
 - In the Spark root directory, run the example as
 
diff --git a/external/kinesis-asl/src/main/java/org/apache/spark/examples/streaming/JavaKinesisWordCountASL.java b/external/kinesis-asl/src/main/java/org/apache/spark/examples/streaming/JavaKinesisWordCountASL.java
index 626bde4..5a21253 100644
--- a/external/kinesis-asl/src/main/java/org/apache/spark/examples/streaming/JavaKinesisWordCountASL.java
+++ b/external/kinesis-asl/src/main/java/org/apache/spark/examples/streaming/JavaKinesisWordCountASL.java
@@ -56,7 +56,7 @@ import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionIn
  * Example:
  *      # export AWS keys if necessary
  *      $ export AWS_ACCESS_KEY_ID=[your-access-key]
- *      $ export AWS_SECRET_KEY=<your-secret-key>
+ *      $ export AWS_SECRET_ACCESS_KEY=<your-secret-key>
  *
  *      # run the example
 *      $ SPARK_HOME/bin/run-example   streaming.JavaKinesisWordCountASL myAppName  mySparkStream \
@@ -67,7 +67,7 @@ import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionIn
  *
  * This code uses the DefaultAWSCredentialsProviderChain to find credentials
  * in the following order:
- *    Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
+ *    Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
  *    Java System Properties - aws.accessKeyId and aws.secretKey
 *    Credential profiles file - default location (~/.aws/credentials) shared by all AWS SDKs
 *    Instance profile credentials - delivered through the Amazon EC2 metadata service
diff --git a/external/kinesis-asl/src/main/python/examples/streaming/kinesis_wordcount_asl.py b/external/kinesis-asl/src/main/python/examples/streaming/kinesis_wordcount_asl.py
index 777a332..5370b79 100644
--- a/external/kinesis-asl/src/main/python/examples/streaming/kinesis_wordcount_asl.py
+++ b/external/kinesis-asl/src/main/python/examples/streaming/kinesis_wordcount_asl.py
@@ -32,7 +32,7 @@
   Example:
       # export AWS keys if necessary
       $ export AWS_ACCESS_KEY_ID=<your-access-key>
-      $ export AWS_SECRET_KEY=<your-secret-key>
+      $ export AWS_SECRET_ACCESS_KEY=<your-secret-key>
 
       # run the example
       $ bin/spark-submit --jars \
@@ -45,7 +45,7 @@
 
   This code uses the DefaultAWSCredentialsProviderChain to find credentials
   in the following order:
-      Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
+      Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
       Java System Properties - aws.accessKeyId and aws.secretKey
      Credential profiles file - default location (~/.aws/credentials) shared by all AWS SDKs
      Instance profile credentials - delivered through the Amazon EC2 metadata service
diff --git a/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala b/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala
index d97ab74..c78737d 100644
--- a/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala
+++ b/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala
@@ -51,7 +51,7 @@ import org.apache.spark.streaming.kinesis.KinesisInputDStream
  * Example:
  *      # export AWS keys if necessary
  *      $ export AWS_ACCESS_KEY_ID=<your-access-key>
- *      $ export AWS_SECRET_KEY=<your-secret-key>
+ *      $ export AWS_SECRET_ACCESS_KEY=<your-secret-key>
  *
  *      # run the example
 *      $ SPARK_HOME/bin/run-example  streaming.KinesisWordCountASL myAppName  mySparkStream \
@@ -62,7 +62,7 @@ import org.apache.spark.streaming.kinesis.KinesisInputDStream
  *
  * This code uses the DefaultAWSCredentialsProviderChain to find credentials
  * in the following order:
- *    Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
+ *    Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
  *    Java System Properties - aws.accessKeyId and aws.secretKey
 *    Credential profiles file - default location (~/.aws/credentials) shared by all AWS SDKs
 *    Instance profile credentials - delivered through the Amazon EC2 metadata service


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
