Re: fs.s3a.endpoint not working

2016-01-11 Thread Phillips, Caleb
Hi All,

Just wanted to send this out again since there was no response
(admittedly, originally sent in the midst of the US holiday season) and it
seems to be an issue that continues to come up (see, e.g., the email from
Han Ju on Jan 5).

If anyone has successfully connected Hadoop to a non-AWS S3-compatible
object store, it’d be very helpful to hear how you made it work. The
fs.s3a.endpoint configuration directive appears non-functional at our site
(with Hadoop 2.6.3).
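
For reference, the relevant core-site.xml at our site looks roughly like
this (the endpoint hostname and credentials below are placeholders for
our actual values):

  <property>
    <name>fs.s3a.endpoint</name>
    <!-- our non-AWS S3-compatible store; placeholder hostname -->
    <value>objects.example.gov</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <value>OUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>OUR_SECRET_KEY</value>
  </property>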

--
Caleb Phillips, Ph.D.
Data Scientist | Computational Science Center

National Renewable Energy Laboratory (NREL)
15013 Denver West Parkway | Golden, CO 80401
303-275-4297 | caleb.phill...@nrel.gov






On 12/22/15, 1:39 PM, "Phillips, Caleb" wrote:

>Hi All,
>
>New to this list. Looking for a bit of help:
>
>I'm having trouble connecting Hadoop to an S3-compatible (non-AWS) object
>store.
>
>This issue was discussed, but left unresolved, in this thread:
>
>https://mail-archives.apache.org/mod_mbox/spark-user/201507.mbox/%3CCA+0W_au5es_flugzmgwkkga3jya1asi3u+isjcuymfntvnk...@mail.gmail.com%3E
>
>And here, on Cloudera's forums (the second post is mine):
>
>https://community.cloudera.com/t5/Data-Ingestion-Integration/fs-s3a-endpoint-ignored-in-hdfs-site-xml/m-p/33694#M1180
>
>I'm running Hadoop 2.6.3 with Java 1.8.0_65 on a Linux host. Using
>Hadoop, I'm able to connect to S3 on AWS and, e.g., list/put/get files.
>
>However, when I point the fs.s3a.endpoint configuration directive at my
>non-AWS S3-compatible object store, it appears to still point at (and
>authenticate against) AWS.
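>
>For example, with the endpoint set as above, even a simple listing
>(bucket name is a placeholder) still talks to AWS rather than to our
>store:
>
>  hadoop fs -ls s3a://our-bucket/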
>
>I've checked and double-checked my credentials and configuration using
>both Python's boto library and the s3cmd tool, both of which connect to
>this non-AWS data store just fine.
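>
>For example, the boto equivalent (hostname and keys are placeholders for
>our site's values) connects without any trouble:
>
>  from boto.s3.connection import S3Connection, OrdinaryCallingFormat
>
>  # Point boto directly at the non-AWS endpoint; path-style
>  # addressing avoids DNS-style bucket hostnames.
>  conn = S3Connection(aws_access_key_id='OUR_ACCESS_KEY',
>                      aws_secret_access_key='OUR_SECRET_KEY',
>                      host='objects.example.gov',
>                      calling_format=OrdinaryCallingFormat())
>  print(conn.get_all_buckets())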
>
>Any help would be much appreciated. Thanks!
>
>--
>Caleb Phillips, Ph.D.
>Data Scientist | Computational Science Center
>
>National Renewable Energy Laboratory (NREL)
>15013 Denver West Parkway | Golden, CO 80401
>303-275-4297 | caleb.phill...@nrel.gov
>



Re: fs.s3a.endpoint not working

2016-01-11 Thread Billy Watson
One of the threads suggested using core-site.xml. Did you try putting
your configuration in there?

One thing I've noticed is that the AWS interaction is handled by an
underlying library (jets3t for the older s3/s3n connectors; I believe the
s3a connector in 2.6+ uses the AWS SDK for Java), and when I was
spelunking through the Hadoop code, I kept running into walls at the
boundary of that library.
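
If you haven't already, it might also be worth cranking up client-side
logging to see which host the connector actually contacts. Assuming the
2.6+ s3a path (which, as far as I know, goes through the AWS SDK),
something like this in log4j.properties:

  # Surface s3a connector activity and the underlying AWS SDK
  # requests, including the endpoint each request is sent to.
  log4j.logger.org.apache.hadoop.fs.s3a=DEBUG
  log4j.logger.com.amazonaws=DEBUG

That should at least confirm whether your fs.s3a.endpoint setting is
being picked up at all.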

William Watson
Software Engineer
(904) 705-7056 PCS



Running MPI jobs on hadoop yarn cluster

2016-01-11 Thread Ravikant Dindokar
Hello Hadoop Users,

I want to know: are there any tools available for running MPI jobs on a
Hadoop YARN cluster?

Thanks
Ravikant