Github user steveloughran commented on the issue:

    https://github.com/apache/spark/pull/12004
  
    Has anyone had a chance to review this? Is there more clarification needed, 
or some specific aspect of the patch which needs changing? 
    
    Without this it is near-impossible to get a consistent set of hadoop, 
aws-sdk and jackson artifacts onto the classpath to work with Amazon or Azure 
cloud infrastructure. This patch fixes the dependency problem, adds a new POM 
which downstream projects can import to pick up a consistent Spark build, and 
includes the tests to verify that everything works.
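    To illustrate the "import downstream" part: a minimal sketch of what a 
consuming project's POM could look like. The module name and coordinates here 
are hypothetical placeholders, not necessarily the patch's actual artifact IDs.

```xml
<!-- Sketch only: artifact coordinates are illustrative placeholders. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <!-- hypothetical name for the cloud-integration POM in this patch -->
      <artifactId>spark-cloud_2.11</artifactId>
      <version>${spark.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

    Importing a POM this way pins the hadoop, aws-sdk and jackson versions in 
one place, so downstream builds cannot accidentally mix incompatible releases.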
    
    If you think this isn't the right place for Spark/cloud integration 
tests, that's something I can keep off to one side.
    
    But the rest of the patch, the incorporation of a consistent and functional 
set of dependencies needed to restore s3:// and s3n:// and to add s3a:// and 
wasb://, has to go here.
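    For context, once a consistent set of jars is on the classpath, using the 
restored filesystems is just configuration. A minimal sketch of 
spark-defaults.conf entries, assuming hadoop-aws and a matching aws-sdk jar 
are present; the credential values are placeholders:

```properties
# Sketch: route Hadoop filesystem options through Spark's spark.hadoop.* prefix.
# Credential values below are placeholders.
spark.hadoop.fs.s3a.access.key   YOUR_ACCESS_KEY
spark.hadoop.fs.s3a.secret.key   YOUR_SECRET_KEY
```

    With that in place, a path such as s3a://some-bucket/data would resolve 
through the S3A filesystem; without matching jar versions it fails at class 
loading instead.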
    
    What do I have to do to get this into a state ready for merging?

