That error is a jar conflict; you most likely have multiple versions of the
hadoop/AWS jars on the classpath. First make sure you can access AWS S3
itself with s3a; then set the endpoint configuration and try to access the
custom storage.
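
A quick way to confirm the conflict is to check which jar each class in
your stack trace was loaded from. A minimal sketch for the spark-shell
(the class names are taken from your trace; nothing else is assumed):

    // Print the jar each class was loaded from, to spot duplicate or
    // mismatched hadoop / AWS SDK versions on the classpath.
    for (name <- Seq(
        "com.amazonaws.services.s3.transfer.TransferManagerConfiguration",
        "org.apache.hadoop.fs.s3a.S3AFileSystem")) {
      val cls = Class.forName(name)
      println(name + " -> " + cls.getProtectionDomain.getCodeSource.getLocation)
    }

If the two locations point at versions that don't belong together, that's
the culprit.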

Thanks
Best Regards

On Mon, Jul 27, 2015 at 4:02 PM, Schmirr Wurst <schmirrwu...@gmail.com>
wrote:

> No, with s3a I get the following error:
> java.lang.NoSuchMethodError:
> com.amazonaws.services.s3.transfer.TransferManagerConfiguration.setMultipartUploadThreshold(I)V
> at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:285)
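
(That particular NoSuchMethodError is usually a version mismatch rather
than a missing jar: hadoop-aws 2.6.0 was built against aws-java-sdk 1.7.4,
where setMultipartUploadThreshold takes an int, while later SDK releases
changed the parameter to a long. If the classpath check above turns up a
newer aws-java-sdk jar, try pinning aws-java-sdk 1.7.4 next to
hadoop-aws-2.6.0.)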
>
> 2015-07-27 11:17 GMT+02:00 Akhil Das <ak...@sigmoidanalytics.com>:
> > So you are able to access your AWS S3 with s3a now? What is the error
> > that you are getting when you try to access the custom storage with
> > fs.s3a.endpoint?
> >
> > Thanks
> > Best Regards
> >
> > On Mon, Jul 27, 2015 at 2:44 PM, Schmirr Wurst <schmirrwu...@gmail.com>
> > wrote:
> >>
> >> I was able to access Amazon S3, but for some reason the endpoint
> >> parameter is ignored, and I'm not able to access the storage from my
> >> provider:
> >>
> >> sc.hadoopConfiguration.set("fs.s3a.endpoint","test")
> >> sc.hadoopConfiguration.set("fs.s3a.awsAccessKeyId","")
> >> sc.hadoopConfiguration.set("fs.s3a.awsSecretAccessKey","")
> >>
> >> Any idea why it doesn't work?
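
One thing to check in that snippet: s3a does not read the
awsAccessKeyId/awsSecretAccessKey property names (those are the s3n-style
ones); in Hadoop 2.6 it reads fs.s3a.access.key and fs.s3a.secret.key, and
fs.s3a.endpoint needs a resolvable host rather than a bare label like
"test". A minimal sketch, where the host, bucket and credentials are
placeholders:

    // Property names s3a actually reads in Hadoop 2.6; all values are placeholders.
    sc.hadoopConfiguration.set("fs.s3a.endpoint", "storage.example.com")
    sc.hadoopConfiguration.set("fs.s3a.access.key", "YOUR_ACCESS_KEY")
    sc.hadoopConfiguration.set("fs.s3a.secret.key", "YOUR_SECRET_KEY")

    // Read through the s3a scheme; bucket and path are hypothetical.
    val lines = sc.textFile("s3a://my-bucket/some/path.txt")
    println(lines.count())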
> >>
> >> 2015-07-20 18:11 GMT+02:00 Schmirr Wurst <schmirrwu...@gmail.com>:
> >> > Thanks, that is what I was looking for...
> >> >
> >> > Any idea where I have to store and reference the corresponding
> >> > hadoop-aws-2.6.0.jar? I'm getting:
> >> >
> >> > java.io.IOException: No FileSystem for scheme: s3n
> >> >
> >> > 2015-07-20 8:33 GMT+02:00 Akhil Das <ak...@sigmoidanalytics.com>:
> >> >> Not in the URI, but you can specify it in the hadoop configuration.
> >> >>
> >> >> <property>
> >> >>   <name>fs.s3a.endpoint</name>
> >> >>   <description>AWS S3 endpoint to connect to. An up-to-date list is
> >> >>     provided in the AWS Documentation: regions and endpoints. Without
> >> >>     this property, the standard region (s3.amazonaws.com) is assumed.
> >> >>   </description>
> >> >> </property>
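
(Filled in for a custom endpoint, that property would look like the sketch
below; the host is a placeholder for your provider's endpoint:

    <property>
      <name>fs.s3a.endpoint</name>
      <value>storage.example.com</value>
    </property>

The same setting can be made from code with
sc.hadoopConfiguration.set("fs.s3a.endpoint", "storage.example.com").)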
> >> >>
> >> >>
> >> >> Thanks
> >> >> Best Regards
> >> >>
> >> >> On Sun, Jul 19, 2015 at 9:13 PM, Schmirr Wurst <schmirrwu...@gmail.com>
> >> >> wrote:
> >> >>>
> >> >>> I want to use Pithos. Where can I specify that endpoint, is it
> >> >>> possible in the URL?
> >> >>>
> >> >>> 2015-07-19 17:22 GMT+02:00 Akhil Das <ak...@sigmoidanalytics.com>:
> >> >>> > Could you name the storage service that you are using? Most of
> >> >>> > them provide an S3-like REST API endpoint for you to hit.
> >> >>> >
> >> >>> > Thanks
> >> >>> > Best Regards
> >> >>> >
> >> >>> > On Fri, Jul 17, 2015 at 2:06 PM, Schmirr Wurst
> >> >>> > <schmirrwu...@gmail.com>
> >> >>> > wrote:
> >> >>> >>
> >> >>> >> Hi,
> >> >>> >>
> >> >>> >> I wonder how to use S3-compatible storage in Spark.
> >> >>> >> If I'm using the s3n:// URL scheme, it will point to Amazon; is
> >> >>> >> there a way I can specify the host somewhere?
> >> >>> >>
> >> >>> >>
> >> >>> >>
> >> >>> >
> >> >>>
> >> >>>
> >> >>
> >
> >
>
