Since you're running in standalone mode, can you try it using Spark 1.5.1
please?
On Thu, Dec 31, 2015 at 9:09 AM Steve Loughran <ste...@hortonworks.com>
wrote:

>
> > On 30 Dec 2015, at 19:31, KOSTIANTYN Kudriavtsev <
> kudryavtsev.konstan...@gmail.com> wrote:
> >
> > Hi Jerry,
> >
> > I want to run different jobs on different S3 buckets - different AWS
> > creds - on the same instances. Could you shed some light on whether it's
> > possible to achieve this with hdfs-site?
> >
> > Thank you,
> > Konstantin Kudryavtsev
> >
>
>
> The Hadoop s3a client doesn't have much (anything?) in the way of support
> for multiple logins.
>
> It'd be possible to do it by hand: create a Hadoop Configuration object,
> fill it with the credentials, and set "fs.s3a.impl.disable.cache" = true to
> make sure you aren't getting an already-cached filesystem instance.
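> 
> A rough, untested sketch of that approach (the fs.s3a.access.key /
> fs.s3a.secret.key property names are the standard s3a ones; the bucket and
> key values below are just placeholders):
> 
>   import java.net.URI
>   import org.apache.hadoop.conf.Configuration
>   import org.apache.hadoop.fs.FileSystem
> 
>   // a fresh Configuration carrying one job's credentials
>   val conf = new Configuration()
>   conf.set("fs.s3a.access.key", "<access key for this job>")
>   conf.set("fs.s3a.secret.key", "<secret key for this job>")
>   // bypass the FS cache so you don't get an instance created with
>   // somebody else's credentials
>   conf.setBoolean("fs.s3a.impl.disable.cache", true)
>   val fs = FileSystem.get(new URI("s3a://some-bucket/"), conf)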
>
> I don't know how you'd hook that up to Spark jobs. Maybe try setting the
> credentials and that fs.s3a.impl.disable.cache flag in your Spark context
> to see if together they get picked up.
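> 
> Something along these lines (again untested; same placeholder values):
> 
>   // push this job's credentials through the SparkContext's Hadoop configuration
>   sc.hadoopConfiguration.set("fs.s3a.access.key", "<access key for this job>")
>   sc.hadoopConfiguration.set("fs.s3a.secret.key", "<secret key for this job>")
>   sc.hadoopConfiguration.setBoolean("fs.s3a.impl.disable.cache", true)
>   val lines = sc.textFile("s3a://some-bucket/some/path")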
>
