Hey Oren,

The Cloud Servers REST API returns a "hostId" for each server that indicates 
which physical host it is on.  I'm not sure whether you can see it from the 
control panel, but a quick curl session should get you the answer.
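
Something along these lines should show it (the auth endpoint, paths, and
header names below are from memory, so double-check them against the API docs
and substitute your own credentials):

  # Request a token; the response headers should include X-Auth-Token and
  # X-Server-Management-Url:
  curl -s -D - \
    -H "X-Auth-User: YOUR_USERNAME" \
    -H "X-Auth-Key: YOUR_API_KEY" \
    https://auth.api.rackspacecloud.com/v1.0

  # List your servers in detail; each entry should carry a hostId field:
  curl -s \
    -H "X-Auth-Token: YOUR_TOKEN" \
    -H "Accept: application/json" \
    YOUR_MANAGEMENT_URL/servers/detail

If two of your servers report the same hostId, they're sitting on the same
physical host.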

Thanks,
Stu

-----Original Message-----
From: "Oren Benjamin" <o...@clearspring.com>
Sent: Monday, July 19, 2010 10:30am
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: Cassandra benchmarking on Rackspace Cloud

Certainly I'm using multiple cloud servers for the multiple client tests.  
Whether or not they are resident on the same physical machine, I just don't 
know.

   -- Oren

On Jul 18, 2010, at 11:35 PM, Brandon Williams wrote:

On Sun, Jul 18, 2010 at 8:45 PM, Oren Benjamin <o...@clearspring.com> wrote:
Thanks for the info.  Very helpful in validating what I've been seeing.  As for 
the scaling limit...

>> The above was single node testing.  I'd expect to be able to add nodes and 
>> scale throughput.  Unfortunately, I seem to be running into a cap of 21,000 
>> reads/s regardless of the number of nodes in the cluster.
>
> This is what I would expect if a single machine is handling all the
> Thrift requests.  Are you spreading the client connections to all the
> machines?

Yes - in all tests I add all nodes in the cluster to the --nodes list.  The 
client requests are in fact being dispersed among all the nodes, as evidenced by 
the intermittent TimedOutExceptions in the log, which show up against the 
various nodes in the input list.  Could it be a result of all the virtual nodes 
being hosted on the same physical hardware?  Am I running into some connection 
limit?  I don't see anything pegged in the JMX stats.
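
For example, a read run spread across three nodes would look roughly like the
following; only --nodes comes from the thread above, and the other flags and
addresses are illustrative and may differ between stress.py versions:

  python stress.py --nodes=10.0.0.1,10.0.0.2,10.0.0.3 \
      --operation=read --num-keys=1000000 --threads=50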

It's unclear whether you're using multiple client machines for stress.py, but a 
limit of 24k/21k reads/s for a single quad-proc client machine is normal in my 
experience.

-Brandon


