Dave,

This is a new app being developed against the Bluemix Messaging Hub 0.9 beta 
service, so no migration from 0.8 is involved.

With batching working for us (having fixed our own coding bug), we’re quite 
pleased with the numbers we’re seeing.

Our mindset is that by using a cloud service, we don't have to invest the time, 
or maintain the skills, to install, operate, tune, patch, or upgrade the server 
ourselves.

With batching working, we expect to have ample throughput for our project's 
needs.  Without batching, keeping up with the flow rate would have been 
marginal for messages in the 50-150 byte range.
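
For what it's worth, the batching behaviour is just the stock 0.9 producer 
settings.  The relevant knobs look roughly like this (a fragment only; the 
serializers and the Message Hub auth/SSL properties are left out, and the 
broker list is a placeholder):

Properties props = new Properties();
props.put("bootstrap.servers", "<brokers from the Message Hub credentials>");
// wait up to 10 ms so individual sends get grouped into one request
props.put("linger.ms", "10");
// upper bound on a batch, in bytes (16384 is the client default)
props.put("batch.size", "16384");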

I did a single timing run this morning on my laptop, and then another by 
deploying the identical Docker image to Bluemix Containers.

Of course, having the client in the same cloud facility as the service was 
faster than our local WiFi configuration. :)

For 100,000 77-byte text records, with linger.ms=10:

Laptop Producer:   17,437 ms
Laptop Consumer:    3,146 ms

Bluemix Producer:   6,659 ms
Bluemix Consumer:   2,184 ms
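
In case it helps with your comparison, the timing itself is nothing fancy. 
Here's a minimal sketch of the kind of loop I ran (the topic name and payload 
are made up, and the Message Hub security properties are again omitted):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerTiming {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers",
            "<brokers from the Message Hub credentials>");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("linger.ms", "10");

        // 77-byte dummy payload as a stand-in for our real records
        String payload = new String(new char[77]).replace('\0', 'x');

        KafkaProducer<String, String> producer =
            new KafkaProducer<String, String>(props);

        long start = System.currentTimeMillis();
        for (int i = 0; i < 100000; i++) {
            producer.send(new ProducerRecord<String, String>("t1", payload));
        }
        producer.flush();  // wait until every batched record has been sent
        long elapsed = System.currentTimeMillis() - start;
        producer.close();

        System.out.println(elapsed + " ms, ~"
            + (100000L * 1000 / elapsed) + " msg/sec");
    }
}

The flush() is there so the elapsed time covers the batches actually leaving 
the client rather than just being queued.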

Again, all the usual disclaimers: one run each, no client tuning, and the 
Bluemix service is still in beta, so IBM's configuration is best treated as an 
untuned dev/test environment.  I can't assess any multi-tenancy impact at this 
point.

Dave, this is about 15K msg/sec with a single producer instance in Bluemix.  
Our design needs ~1K msg/sec for message ingestion (8GB/day).  Headroom looks 
good in our case.
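
(Back of the envelope: 8 GB/day is roughly 93 KB/sec, which at 77-150 bytes per 
message works out to somewhere around 600-1,200 msg/sec, hence the ~1K figure.)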

The next design challenge is cloud harvesting and longer-term persistence from 
the Messaging Hub service for use with Spark (per the Lambda Architecture).

Gary


> On Dec 14, 2015, at 12:50 PM, Dave Ariens <dari...@blackberry.com> wrote:
> 
> Gary,
> 
> I was asking last week on the dev list regarding performance in 0.9.x and how 
> best to achieve optimal message throughput and find your results rather 
> interesting.
> 
> Is producing 7142 msg/sec a fairly typical rate for your test environment (I 
> realize you're just using your laptop, though)?  Are you able to share your 
> peak message rates in your target/production environment under peak load?
> 
> Also--have you noticed any increase/decrease moving from 0.8.x to 0.9.x (if 
> applicable)?
> 
> I'm attempting to compare producing and consuming rates using Kafka (0.8.x 
> and 0.9.x) and our own in-house low overhead library and would be interested 
> in learning how others are faring.
> 
<snipped>
