GitHub user brennonyork commented on the pull request:

    https://github.com/apache/spark/pull/3561#issuecomment-66163505
  
    @JoshRosen I'm confident we can support the `hdfs://` URI scheme. I'll 
check whether, given an `hdfs://` URI, Spark already holds a Hadoop 
`Configuration` object representing that connection; if not, we can always 
create one.
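
    If a `SparkContext` is live, it already exposes the Hadoop configuration 
it was built with via `sc.hadoopConfiguration`, so a minimal sketch could 
look like this (the helper name `hdfsPathExists` is just illustrative):

    ```scala
    import java.net.URI

    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.spark.SparkContext

    // Illustrative helper: resolve the FileSystem implementation for the
    // URI's scheme (hdfs://, file://, ...) against the context's Hadoop
    // Configuration, then check that the path actually exists.
    def hdfsPathExists(sc: SparkContext, uri: String): Boolean = {
      val fs = FileSystem.get(new URI(uri), sc.hadoopConfiguration)
      fs.exists(new Path(uri))
    }
    ```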
    
    Also, can you help me understand why the tests failed? I'm seeing:
    
    `[error] (streaming/test:test) sbt.TestsFailedException: Tests unsuccessful`
    
    That message isn't very helpful on its own, and given all the recent 
talk on the dev list, I'm wondering whether it's the patch that fails or a 
timing / sync issue (`./dev/run-tests` finishes without failures on my OS X 
machine).

