I am having a similar issue when I try to create a 22-server Hadoop cluster.
Here is the stack trace:
Bootstrapping cluster
Configuring template
Starting 22 node(s) with roles [tt, dn]
Configuring template
Starting 1 node(s) with roles [jt, nn]
starting nodes, completed: 0/22, errors: 1, rate: 630ms/op
java.util.concurrent.ExecutionException: org.jclouds.http.HttpResponseException: command: POST https://servers.api.rackspacecloud.com/v1.0/541520/servers?format=json&now=1296521767871 HTTP/1.1 failed with response: HTTP/1.1 413 Request Entity Too Large; content: [{"overLimit":{"message":"Too many requests...","code":413,"retryAfter":"2011-01-31T18:56:11.091-06:00"}}]
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
        at java.util.concurrent.FutureTask.get(FutureTask.java:83)
        at org.jclouds.concurrent.FutureIterables$1.run(FutureIterables.java:121)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Caused by: org.jclouds.http.HttpResponseException: command: POST https://servers.api.rackspacecloud.com/v1.0/541520/servers?format=json&now=1296521767871 HTTP/1.1 failed with response: HTTP/1.1 413 Request Entity Too Large; content: [{"overLimit":{"message":"Too many requests...","code":413,"retryAfter":"2011-01-31T18:56:11.091-06:00"}}]
        at org.jclouds.rackspace.cloudservers.handlers.ParseCloudServersErrorFromHttpResponse.handleError(ParseCloudServersErrorFromHttpResponse.java:76)
        at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:70)
        at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:193)
        at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:163)
        at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:132)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        ... 3 more
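
For what it's worth, the 413 body above carries a "retryAfter" timestamp, so a
client could wait until that instant before retrying the create-server POST
instead of giving up. A minimal sketch of the idea in plain Java, with no
jclouds involvement (the class and method names here are made up for
illustration):

------
import java.text.SimpleDateFormat;
import java.util.Date;

public class RetryAfterSketch {

    // Parses a timestamp like the "retryAfter" above
    // ("2011-01-31T18:56:11.091-06:00") and sleeps until that instant.
    static void waitUntil(String retryAfter) throws Exception {
        // SimpleDateFormat's "Z" expects "-0600", so drop the colon
        // from the zone offset before parsing.
        int n = retryAfter.length();
        String normalized = retryAfter.substring(0, n - 3) + retryAfter.substring(n - 2);
        Date resume = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ").parse(normalized);
        long waitMs = resume.getTime() - System.currentTimeMillis();
        if (waitMs > 0) {
            Thread.sleep(waitMs);
        }
    }

    public static void main(String[] args) throws Exception {
        waitUntil("2011-01-31T18:56:11.091-06:00");
        System.out.println("safe to retry the create-server request now");
    }
}
------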

________________________________________
From: ext Paolo Castagna [castagna.li...@googlemail.com]
Sent: Monday, January 31, 2011 3:04 PM
To: whirr-user@incubator.apache.org
Subject: Re: Request Limit Exceeded using Whirr 0.4.0-incubating-SNAPSHOT with 1 jt+nn, 10 dn+tt

Lars George wrote:
> Hi Paolo,
>
> I had that same error a few times on a very slow connection and using the
> default AMIs. Could you try what I am using here
>
> https://github.com/larsgeorge/whirr/blob/WHIRR-201/services/hbase/src/test/resources/whirr-hbase-test.properties#L27
>
> Plus the medium or even large instance size, just to confirm whether that
> works or fails.

Ok, I can confirm I do not have problems using:

whirr.hardware-id=m1.large
whirr.image-id=us-east-1/ami-da0cf8b3
whirr.location-id=us-east-1

I do not know whether it is m1.large or us-east-1/ami-da0cf8b3 that actually
makes the difference.
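
One way to isolate it would be to launch the same cluster twice, changing a
single property per run, along these lines (assuming Whirr falls back to a
sensible default for whichever of the two properties is omitted):

------
# run 1: large instance size, default image
whirr.hardware-id=m1.large
whirr.location-id=us-east-1

# run 2: default size, explicit image
whirr.image-id=us-east-1/ami-da0cf8b3
whirr.location-id=us-east-1
------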

... also, to have the necessary jars copied into cli/target/lib I needed
to run mvn install package -Ppackage instead of simply mvn package
-Ppackage.

I will probably run more experiments tomorrow.

Thank you all,
Paolo

>
> Lars
>
> On Jan 28, 2011, at 21:59, Paolo Castagna <castagna.li...@googlemail.com> wrote:
>
>> Hi,
>> I am a Whirr newbie and have only recently started using it.
>> Also, I am not sure if the problem I am experiencing is related to
>> Whirr, jclouds or Amazon.
>>
>> This is my Whirr properties file:
>>
>> ------
>> whirr.service-name=hadoop
>> whirr.cluster-name=myhadoopcluster
>> whirr.location-id=eu-west-1
>> whirr.instance-templates=1 jt+nn, 10 dn+tt
>> whirr.provider=ec2
>> whirr.identity=********************
>> whirr.credential=****************************************
>> whirr.private-key-file=${sys:user.home}/.ssh/castagna
>> whirr.public-key-file=${sys:user.home}/.ssh/castagna.pub
>> whirr.hadoop-install-runurl=cloudera/cdh/install
>> whirr.hadoop-configure-runurl=cloudera/cdh/post-configure
>> ------
>>
>> This is what I do (i.e. I am trying to start up a Hadoop cluster with
>> 10 datanodes/tasktrackers):
>>
>> $ svn co https://svn.apache.org/repos/asf/incubator/whirr/trunk/ whirr
>> $ cd whirr
>> $ mvn package -Ppackage
>> $ ./bin/whirr version
>> Apache Whirr 0.4.0-incubating-SNAPSHOT
>> $ ./bin/whirr launch-cluster --config
>> /home/castagna/Desktop/hadoop-whirr.properties
>>
>> These are the errors I see on the console and in the whirr.log file:
>>
>> ------
>> Cannot retry after server error, command has exceeded retry limit 5: [request=POST https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1]
>> << problem applying options to node(eu-west-1/i-8709ecf1): org.jclouds.aws.AWSResponseException: request POST https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1 failed with code 503, error: AWSError{requestId='6f0236b0-12c0-49b7-b21b-aab969b9be26', requestToken='null', code='RequestLimitExceeded', message='Request limit exceeded.', context='{Response=, Errors=}'}
>>    at org.jclouds.aws.handlers.ParseAWSErrorFromXmlContent.handleError(ParseAWSErrorFromXmlContent.java:80)
>>    at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:72)
>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:193)
>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:163)
>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:132)
>>    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>    at java.lang.Thread.run(Thread.java:619)
>> ------
>>
>> The same happens when connecting to Amazon via Elasticfox while Whirr is running.
>>
>> Is someone else experiencing a similar problem?
>> Is it my Amazon account, or are Whirr or jclouds being too aggressive?
>>
>> I have already tried different regions (i.e. whirr.location-id=us-west-1
>> and us-east-1), but I experience the same problem.
>> If I try with whirr.instance-templates=1 jt+nn, 4 dn+tt everything is
>> fine, no errors.
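>>
>> (Since 4 dn+tt work and 10 fail, I suspect the parallel node-creation
>> requests simply exceed a per-account API request rate. If so, throttling
>> or exponential backoff on the client side might help; below is a rough,
>> hypothetical sketch of that pattern, where call.run() stands in for
>> whatever request is being throttled:)
>>
>> ------
>> import java.util.Random;
>>
>> public class BackoffSketch {
>>
>>     interface Call<T> { T run() throws Exception; }
>>
>>     // Retries a throttled call with exponentially growing, jittered waits.
>>     static <T> T withBackoff(Call<T> call, int maxAttempts) throws Exception {
>>         Random random = new Random();
>>         long delayMs = 500;
>>         for (int attempt = 1; ; attempt++) {
>>             try {
>>                 return call.run();
>>             } catch (Exception e) { // in practice, catch only the rate-limit error
>>                 if (attempt == maxAttempts) throw e;
>>                 Thread.sleep(delayMs + random.nextInt(250)); // add jitter
>>                 delayMs *= 2; // double the wait before the next attempt
>>             }
>>         }
>>     }
>> }
>> ------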
>>
>> Thanks in advance for your help (and thanks for sharing Whirr with the 
>> world),
>> Paolo
