Github user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/455#discussion_r13521516
  
    --- Diff: docs/programming-guide.md ---
    @@ -378,11 +378,88 @@ Some notes on reading files with Spark:
     
     * The `textFile` method also takes an optional second argument for 
controlling the number of slices of the file. By default, Spark creates one 
slice for each block of the file (blocks being 64MB by default in HDFS), but 
you can also ask for a higher number of slices by passing a larger value. Note 
that you cannot have fewer slices than blocks.
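    +
    +As a minimal sketch (the file name is hypothetical), a larger number of slices can be
    +requested like this:
    +
    +{% highlight python %}
    +>>> distFile = sc.textFile("data.txt", 100)   # ask for at least 100 slices
    +{% endhighlight %}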
     
    -Apart reading files as a collection of lines,
    +Apart from reading files as a collection of lines,
     `SparkContext.wholeTextFiles` lets you read a directory containing 
multiple small text files, and returns each of them as (filename, content) 
pairs. This is in contrast with `textFile`, which would return one record per 
line in each file.
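    +
    +As a small sketch (the directory path is hypothetical), the resulting
    +(filename, content) pairs behave like any other RDD of tuples:
    +
    +{% highlight python %}
    +>>> files = sc.wholeTextFiles("path/to/text/dir")
    +>>> files.mapValues(len).collect()    # (filename, length of content) pairs
    +{% endhighlight %}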
     
    -</div>
    +### SequenceFile and Hadoop InputFormats
    +
    +In addition to reading text files, PySpark supports reading 
[SequenceFile](http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapred/SequenceFileInputFormat.html)
 
    +and any arbitrary 
[InputFormat](http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/InputFormat.html).
    +
    +#### Writable Support
    +
    +PySpark SequenceFile support loads an RDD of key-value pairs within Java, converts
    +Writables to base Java types, and pickles the resulting Java objects using
    +[Pyrolite](https://github.com/irmen/Pyrolite/). The following Writables are
    +automatically converted:
    +
    +<table class="table">
    +<tr><th>Writable Type</th><th>Python Type</th></tr>
    +<tr><td>Text</td><td>unicode str</td></tr>
    +<tr><td>IntWritable</td><td>int</td></tr>
    +<tr><td>FloatWritable</td><td>float</td></tr>
    +<tr><td>DoubleWritable</td><td>float</td></tr>
    +<tr><td>BooleanWritable</td><td>bool</td></tr>
    +<tr><td>BytesWritable</td><td>bytearray</td></tr>
    +<tr><td>NullWritable</td><td>None</td></tr>
    +<tr><td>ArrayWritable</td><td>list of primitives, or tuple of 
objects</td></tr>
    +<tr><td>MapWritable</td><td>dict</td></tr>
    +<tr><td>Custom Class conforming to Java Bean conventions</td>
    +    <td>dict of public properties (via JavaBean getters and setters) + 
__class__ for the class type</td></tr>
    +</table>
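    +
    +As a purely hypothetical illustration of the last row (the class name, fields and path
    +are invented), a SequenceFile whose values are instances of a JavaBean-style class
    +`com.example.Person` with `getName()` and `getAge()` accessors would come back roughly as:
    +
    +{% highlight python %}
    +>>> sc.sequenceFile("path/to/sequencefile/of/people").values().first()
    +{u'__class__': u'com.example.Person', u'name': u'Alice', u'age': 42}
    +{% endhighlight %}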
    +
    +#### Loading SequenceFiles
    +
    +Similarly to text files, SequenceFiles can be loaded by specifying the path. The key and
    +value classes can be specified, but for standard Writables this is not required.
    +
    +{% highlight python %}
    +>>> rdd = sc.sequenceFile("path/to/sequencefile/of/doubles")
    +>>> rdd.collect()         # this example has DoubleWritable keys and Text 
values
    +[(1.0, u'aa'),
    + (2.0, u'bb'),
    + (2.0, u'aa'),
    + (3.0, u'cc'),
    + (2.0, u'bb'),
    + (1.0, u'aa')]
    +{% endhighlight %}
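    +
    +If needed, the key and value Writable classes can be given explicitly by their fully
    +qualified names (a sketch with a hypothetical path):
    +
    +{% highlight python %}
    +>>> rdd = sc.sequenceFile("path/to/sequencefile",
    +    "org.apache.hadoop.io.Text", "org.apache.hadoop.io.IntWritable")
    +{% endhighlight %}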
    +
    +#### Loading Arbitrary Hadoop InputFormats
    +
    +PySpark can also read any Hadoop InputFormat, for both 'new' and 'old' 
Hadoop APIs. If required,
    +a Hadoop configuration can be passed in as a Python dict. Here is an 
example using the
    +Elasticsearch ESInputFormat:
    +
    +{% highlight python %}
    +$ SPARK_CLASSPATH=/path/to/elasticsearch-hadoop.jar ./bin/pyspark
    +>>> conf = {"es.resource" : "index/type"}   # assume Elasticsearch is 
running on localhost defaults
    +>>> rdd = sc.newAPIHadoopRDD("org.elasticsearch.hadoop.mr.EsInputFormat",\
    +    "org.apache.hadoop.io.NullWritable", 
"org.elasticsearch.hadoop.mr.LinkedMapWritable", conf=conf)
    +>>> rdd.first()         # the result is a MapWritable that is converted to 
a Python dict
    +(u'Elasticsearch ID',
    + {u'field1': True,
    +  u'field2': u'Some Text',
    +  u'field3': 12345})
    +{% endhighlight %}
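    +
    +For InputFormats that read from a file path there are corresponding file-based methods.
    +As a hedged sketch (the input directory is hypothetical, and it assumes the
    +`newAPIHadoopFile` method that accompanies `newAPIHadoopRDD`), reading plain text through
    +the new-API `TextInputFormat` might look like:
    +
    +{% highlight python %}
    +>>> rdd = sc.newAPIHadoopFile("path/to/input/dir",
    +    "org.apache.hadoop.mapreduce.lib.input.TextInputFormat",
    +    "org.apache.hadoop.io.LongWritable", "org.apache.hadoop.io.Text")
    +>>> rdd.first()    # keys are byte offsets into the file, values are lines of text
    +(0, u'first line of the file')
    +{% endhighlight %}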
     
    +Note that this approach works well when the InputFormat simply depends on a Hadoop
    +configuration and/or input path, and the key and value classes can easily be converted
    +according to the above table.
    +
    +If you have custom serialized binary data (like pulling data from 
Cassandra / HBase) or custom
    +classes that don't conform to the JavaBean requirements, then you will 
first need to 
    +transform that data on the Scala/Java side to something which can be 
handled by Pyrolite's pickler.
    +A [Converter](api/scala/index.html#org.apache.spark.api.python.Converter) 
trait is provided 
    +for this. Simply extend this trait and implement your transformation code 
in the ```convert``` 
    +method. The ensure this class is packaged into your Spark job jar and 
included on the PySpark 
    --- End diff --
    
    Typo: "The ensure".
    
    Also, I'd mention that they need to include both the `Converter` implementation and any 
    dependencies required to read from the storage system (e.g. `hbase-client`), since those 
    aren't actually packaged with Spark by default, except in the examples.
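    
    For reference, the Python-side call for such a custom setup might look roughly like the 
    sketch below. This is hedged: the converter class names are placeholders for a user's own 
    `Converter` implementations, the HBase classes come from `hbase-client`/`hbase-server` 
    rather than Spark, and it assumes converter classes can be passed by name via 
    `keyConverter`/`valueConverter` arguments as in the released PySpark API.
    
    ```python
    # Requires the user's Converter classes and the HBase client jars on the classpath;
    # none of these ship with Spark itself (outside of the examples).
    conf = {"hbase.zookeeper.quorum": "localhost",
            "hbase.mapreduce.inputtable": "my_table"}
    rdd = sc.newAPIHadoopRDD(
        "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
        "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "org.apache.hadoop.hbase.client.Result",
        keyConverter="com.example.MyRowKeyConverter",         # placeholder converter
        valueConverter="com.example.MyHBaseResultConverter",  # placeholder converter
        conf=conf)
    ```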

