GitHub user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/455#discussion_r13521493
  
    --- Diff: docs/programming-guide.md ---
    @@ -378,11 +378,88 @@ Some notes on reading files with Spark:
     
     * The `textFile` method also takes an optional second argument for controlling the number of slices of the file. By default, Spark creates one slice for each block of the file (blocks being 64MB by default in HDFS), but you can also ask for a higher number of slices by passing a larger value. Note that you cannot have fewer slices than blocks.
     
    -Apart reading files as a collection of lines,
    +Apart from reading files as a collection of lines,
     `SparkContext.wholeTextFiles` lets you read a directory containing multiple small text files, and returns each of them as (filename, content) pairs. This is in contrast with `textFile`, which would return one record per line in each file.
     
    -</div>
    +### SequenceFile and Hadoop InputFormats
    --- End diff ---
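
For anyone skimming the diff, the two read paths it documents look roughly like this in Scala (a minimal sketch for the spark-shell, where `sc` is already provided; the HDFS paths are placeholder assumptions):

```scala
// One record per line; the optional second argument asks for more
// slices (partitions) than the default of one per HDFS block.
val lines = sc.textFile("hdfs://namenode/data/big.txt", 10)

// One record per file: (filename, content) pairs for a directory
// of many small text files.
val files = sc.wholeTextFiles("hdfs://namenode/data/small-files")

println(s"line records: ${lines.count()}, file records: ${files.count()}")
```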
    
    Further up in this guide there is a statement that says:
    
    ```
    The current API is limited to text files, but support for binary Hadoop InputFormats is expected in future versions.
    ```
    
    Given this patch, it probably makes sense to remove that :)
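
For context, the capability that makes that statement obsolete can already be exercised along these lines (a sketch; the path and the (String, Int) key/value types are assumptions):

```scala
// Hypothetical path; assumes a SequenceFile of (Text, IntWritable) records.
// Spark's implicit WritableConverters let us read them as (String, Int).
val counts = sc.sequenceFile[String, Int]("hdfs://namenode/data/counts.seq")
counts.take(5).foreach { case (word, n) => println(s"$word -> $n") }
```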

