According to the stack trace you pasted, your software seems to have been modified to use some EMR RPC calls, which are not included in OSS Hadoop [1]. I
can't say anything precise, but it looks like some reporting API
calls to Amazon's EMR service, and your job seems to be accessing that
RPC with unauthenticated requests.
Hi Kota/John/Andrew,
Thanks for your suggestions.
So this is what I've tried, without success.
- *jets3t.properties file*
s3service.s3-endpoint=
s3service.s3-endpoint-http-port=8080
s3service.disable-dns-buckets=true
s3service.s3-endpoint-virtual-path=/
httpclient.proxy-autodetect=false
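For completeness, the Hadoop side also needs S3 credentials for the s3n:// filesystem; a minimal core-site.xml sketch (the key values below are placeholders, not taken from this thread):

```xml
<!-- core-site.xml fragment: credentials for Hadoop's s3n:// filesystem.
     The access/secret key values are placeholders for your Riak CS keys. -->
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>RIAK-CS-ACCESS-KEY</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>RIAK-CS-SECRET-KEY</value>
</property>
```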
I tried Hadoop MapReduce against Riak CS, and it actually worked with
the latest 1.5 beta package. Hadoop relies on jets3t for S3 connectivity,
so if MapR uses vanilla jets3t it will work. I believe so because MapR
works on EMR (which usually extracts data from S3).
Technically, you can add several options
This blog post on configuring S3 clients to work with CS may be useful:
http://basho.com/riak-cs-proxy-vs-direct-configuration/
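As a rough illustration, the proxy-style setup that post contrasts with direct endpoint configuration looks like this in jets3t.properties (the proxy hostname and port below are placeholders):

```properties
# Proxy-style jets3t configuration: route S3 traffic through an HTTP proxy
# sitting in front of Riak CS. Host and port are placeholders.
httpclient.proxy-autodetect=false
httpclient.proxy-host=riak-cs-proxy.local
httpclient.proxy-port=8080
```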
Sent from my iPhone
On Jul 31, 2014, at 2:53 PM, Andrew Stone wrote:
Hi Charles,
AFAIK we haven't ever tested Riak CS with the MapR connector. However, if
MapR works with S3, you should just have to change the IP to point to a
load balancer in front of your local Riak CS cluster. I'm unaware of how to
change that setting in MapR though. It seems like a question for
Hi,
I would like to use MapR with Riak CS for Hadoop MapReduce jobs. My code
currently refers to objects using s3n:// URLs.
I'd like the Hadoop code on MapR to point to the Riak CS
cluster using the S3 URL.
Is there a proxy or hostname setting in Hadoop to be able to route this traffic?
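Since stock Hadoop's s3n:// support is built on jets3t, one common way to do this kind of routing is to drop a jets3t.properties file into Hadoop's conf directory, where it is picked up from the classpath. A sketch, in which the endpoint hostname, port, and conf path are all placeholders:

```shell
# Sketch: point Hadoop's s3n:// filesystem at a Riak CS endpoint.
# jets3t reads jets3t.properties from the classpath, so placing it in
# Hadoop's conf directory redirects S3 calls to the endpoint below.
# Hostname, port, and the conf path are placeholders.
HADOOP_CONF="${HADOOP_CONF:-./conf}"
mkdir -p "$HADOOP_CONF"
cat > "$HADOOP_CONF/jets3t.properties" <<'EOF'
s3service.s3-endpoint=riak-cs.local
s3service.s3-endpoint-http-port=8080
s3service.disable-dns-buckets=true
s3service.https-only=false
EOF
echo "wrote $HADOOP_CONF/jets3t.properties"
```

Path-style bucket access (`disable-dns-buckets=true`) matters here because Riak CS is typically reached by hostname, not per-bucket DNS.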