GitHub user SparkQA commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2298#discussion_r17220865
  
    --- Diff: docs/storage-openstack-swift.md ---
    @@ -0,0 +1,152 @@
    +---
    +layout: global
    +title: Accessing OpenStack Swift from Spark
    +---
    +
    +Spark's support for the Hadoop InputFormat API allows it to process data in OpenStack Swift
    +using the same URI formats as in Hadoop. You can specify a path in Swift as input through a
    +URI of the form <code>swift://container.PROVIDER/path</code>. You will also need to set your
    +Swift security credentials, through <code>core-site.xml</code> or via
    +<code>SparkContext.hadoopConfiguration</code>.
    +The current Swift driver requires Swift to use the Keystone authentication method.
    +
    +# Configuring Swift for Better Data Locality
    +
    +Although not mandatory, it is recommended to configure Swift's proxy server with the
    +<code>list_endpoints</code> middleware for better data locality. More information is
    +[available here](https://github.com/openstack/swift/blob/master/swift/common/middleware/list_endpoints.py).
    +
    +
    +# Dependencies
    +
    +The Spark application should include the <code>hadoop-openstack</code> dependency.
    +For example, for Maven, add the following to the <code>pom.xml</code> file:
    +
    +{% highlight xml %}
    +<dependencies>
    +  ...
    +  <dependency>
    +    <groupId>org.apache.hadoop</groupId>
    +    <artifactId>hadoop-openstack</artifactId>
    +    <version>2.3.0</version>
    +  </dependency>
    +  ...
    +</dependencies>
    +{% endhighlight %}
    +
    +
    +# Configuration Parameters
    +
    +Create <code>core-site.xml</code> and place it inside the <code>/spark/conf</code> directory.
    +There are two main categories of parameters that need to be configured: the declaration of the
    +Swift driver, and the parameters required by Keystone.
    +
    +Hadoop is configured to use the Swift file system via the following property:
    +
    +<table class="table">
    +<tr><th>Property Name</th><th>Value</th></tr>
    +<tr>
    +  <td>fs.swift.impl</td>
    +  <td>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</td>
    +</tr>
    +</table>
    +
    +Additional parameters are required by Keystone (v2.0) and should be provided to the Swift driver.
    +These parameters are used to authenticate with Keystone in order to access Swift. The following
    +table lists the mandatory Keystone parameters. <code>PROVIDER</code> can be any name.
    +
    +<table class="table">
    +<tr><th>Property Name</th><th>Meaning</th><th>Required</th></tr>
    +<tr>
    +  <td><code>fs.swift.service.PROVIDER.auth.url</code></td>
    +  <td>Keystone Authentication URL</td>
    +  <td>Mandatory</td>
    +</tr>
    +<tr>
    +  <td><code>fs.swift.service.PROVIDER.auth.endpoint.prefix</code></td>
    +  <td>Keystone endpoints prefix</td>
    +  <td>Optional</td>
    +</tr>
    +<tr>
    +  <td><code>fs.swift.service.PROVIDER.tenant</code></td>
    +  <td>Tenant</td>
    +  <td>Mandatory</td>
    +</tr>
    +<tr>
    +  <td><code>fs.swift.service.PROVIDER.username</code></td>
    +  <td>Username</td>
    +  <td>Mandatory</td>
    +</tr>
    +<tr>
    +  <td><code>fs.swift.service.PROVIDER.password</code></td>
    +  <td>Password</td>
    +  <td>Mandatory</td>
    +</tr>
    +<tr>
    +  <td><code>fs.swift.service.PROVIDER.http.port</code></td>
    +  <td>HTTP port</td>
    +  <td>Mandatory</td>
    +</tr>
    +<tr>
    +  <td><code>fs.swift.service.PROVIDER.region</code></td>
    +  <td>Keystone region</td>
    +  <td>Mandatory</td>
    +</tr>
    +<tr>
    +  <td><code>fs.swift.service.PROVIDER.public</code></td>
    +  <td>Indicates whether all URLs are public</td>
    +  <td>Mandatory</td>
    +</tr>
    +</table>
    +
    +For example, assume <code>PROVIDER=SparkTest</code> and Keystone contains user <code>tester</code>
    +with password <code>testing</code> defined for tenant <code>test</code>. Then
    +<code>core-site.xml</code> should include:
    --- End diff --
    
    here again
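The diff is truncated right after "should include:", but the parameter table above pins down the shape of that file. A sketch of a matching `core-site.xml` follows — the tenant/username/password are the sample values named in the diff, while the auth URL, port, and region values are placeholders, not from the PR:

```xml
<configuration>
  <property>
    <name>fs.swift.impl</name>
    <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.auth.url</name>
    <!-- placeholder Keystone v2.0 endpoint -->
    <value>http://127.0.0.1:5000/v2.0/tokens</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.tenant</name>
    <value>test</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.username</name>
    <value>tester</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.password</name>
    <value>testing</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.http.port</name>
    <!-- placeholder port -->
    <value>8080</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.region</name>
    <!-- placeholder region -->
    <value>RegionOne</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.public</name>
    <value>true</value>
  </property>
</configuration>
```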
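Every Keystone property in the table follows the single pattern `fs.swift.service.PROVIDER.*`, with only `PROVIDER` varying. A small illustration of that naming convention — the helper function is hypothetical, not part of `hadoop-openstack`; `SparkTest` matches the example provider in the diff:

```python
# The Keystone parameter suffixes listed in the table above.
KEYSTONE_KEYS = [
    "auth.url", "auth.endpoint.prefix", "tenant", "username",
    "password", "http.port", "region", "public",
]

def swift_property_names(provider):
    """Return the fully-qualified Hadoop property names for one provider.

    Hypothetical helper: it only demonstrates the fs.swift.service.PROVIDER.*
    naming convention used by the Swift driver's configuration.
    """
    return ["fs.swift.service.%s.%s" % (provider, key) for key in KEYSTONE_KEYS]

for name in swift_property_names("SparkTest"):
    print(name)
```

Because the provider name is embedded in each key, several Swift providers can be configured side by side in the same `core-site.xml` without their settings colliding.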

