Right, graphing those sorts of variables has always been part of our test plan. 
What I’ve done so far were just some pilot tests, and I realize now that I 
wasn’t very clear on that point. I wanted to get a rough idea of where the 
Redis driver stood in case any obvious bugs needed to be fixed before 
performing more extensive testing. As it turns out, I did find one bug, which 
has since been fixed.

Regarding latency, saying that it “is not important” is an exaggeration; it is 
definitely important, just not the only thing that is important. I have spoken 
with a lot of prospective Zaqar users since the inception of the project, and 
one of the common threads was that latency needed to be reasonable. For the use 
cases where they see Zaqar delivering a lot of value, requests don't need to be 
as fast as, say, ZMQ, but they do need something that isn’t horribly slow, 
either. They also want HTTP, multi-tenancy, auth, durability, etc. The goal is 
to achieve reasonable latency given those constraints and also, obviously, to 
deliver all of that at scale.

In any case, I’ve continued working through the test plan and will be 
publishing further test results shortly.

> graph latency versus number of concurrent active tenants

By tenants, do you mean that in the sense of OpenStack tenants/project IDs, or 
in the sense of “clients/workers”? In the latter case, the pilot tests I’ve 
done so far used multiple clients (though I didn’t graph against client 
count); in the former case, only one “project” was used.
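
To make the distinction concrete, here is a rough sketch of what the two 
readings look like at the HTTP level. It is illustrative only; the endpoint 
URL, header names, auth handling, and project IDs are assumptions for the sake 
of the example, not the actual test configuration:

    import time
    import uuid
    import requests
    from concurrent.futures import ThreadPoolExecutor

    # Assumed local Zaqar endpoint with noauth-style headers; a real run would
    # point at the deployment under test and use Keystone tokens instead.
    ZAQAR_URL = "http://localhost:8888/v1/queues/perftest/messages"

    def post_message(project_id, client_id):
        headers = {
            "Client-ID": client_id,      # Zaqar expects a Client-ID header (UUID)
            "X-Project-Id": project_id,  # stands in for an OpenStack project/tenant
            "Content-Type": "application/json",
        }
        body = [{"ttl": 300, "body": {"event": "ping"}}]
        start = time.perf_counter()
        requests.post(ZAQAR_URL, json=body, headers=headers)
        return time.perf_counter() - start  # observed request latency in seconds

    def run(num_workers, num_projects, requests_per_worker=100):
        # num_workers concurrent clients spread across num_projects project IDs.
        projects = ["project-%d" % i for i in range(num_projects)]

        def worker(i):
            client_id = str(uuid.uuid4())  # one client identity per worker
            project_id = projects[i % num_projects]
            return [post_message(project_id, client_id)
                    for _ in range(requests_per_worker)]

        with ThreadPoolExecutor(max_workers=num_workers) as pool:
            results = pool.map(worker, range(num_workers))
        return [latency for batch in results for latency in batch]

    # "clients/workers" reading:   run(num_workers=20, num_projects=1)
    # "tenants/projects" reading:  run(num_workers=20, num_projects=20)

Sweeping num_projects with num_workers held fixed would produce the 
latency-versus-tenants graph, while sweeping num_workers within a single 
project produces the latency-versus-concurrent-clients graph.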

From: Joe Gordon <joe.gord...@gmail.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Friday, September 12, 2014 at 1:45 PM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

If zaqar is like Amazon SQS, then the latency for a single message and the 
throughput for a single tenant are not important. I wouldn't expect anyone who 
has latency-sensitive workloads or needs massive throughput to use zaqar, as 
those people wouldn't use SQS either. The consistency of the latency (it 
shouldn't change under load) and zaqar's ability to scale horizontally matter 
much more. What would be great to see is some other things benchmarked instead:

* graph latency versus number of concurrent active tenants
* graph latency versus message size
* How throughput scales as you scale up the number of assorted zaqar 
components. If one of the benefits of zaqar is its horizontal scalability, 
let's see it.
* How does this change with message batching?
