On Wed, Aug 29, 2012 at 8:13 PM, Jonathan Tripathy <[email protected]> wrote:
> On 30/08/2012 01:09, Jonathan Tripathy wrote:
>>
>> Hi Everyone,
>>
>> I'm using bcache with a RAID1 pair of SSDs (for the cache) with a
>> MD-RAID10 spindle array for the backing device. On top of this is LVM. This
>> setup is used with the Xen Hypervisor. Bcache is formatted with a sector
>> size of 512 bytes.
>>
>> If I use an LV for a Linux DomU, I get fantastic disk performance using
>> fio (about 23k random write). However, when I use IOMeter in a Windows HVM
>> DomU (with GPLPV drivers installed), my avg IOPS is around 4000. I am using
>> the "default" Access Specification. Am I doing something wrong? Changing the
>> number of workers doesn't seem to help.
>>
>> Any advice is appreciated.
>>
>> Thanks
>>
> Actually nvm, I forgot to enable the disk target for each of the workers.
> Now I'm getting an avg IOPS of about 34k.
>
> But this does leave me with a question: is the number of "workers" in
> IOMeter akin to "IO Depth" in fio?

I've not used IOMeter, but I would assume that the number of "workers" would
be similar to the number of "jobs" in fio. IO depth/queue depth is the
number of IO requests that are queued for processing at any one time. So, if
you've got 4 workers, each keeping one IO operation queued at all times,
your effective IO depth would be 4, while the IO depth for each job/worker
would be 1. That's my understanding at least.
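To make the distinction concrete, here is a minimal fio job file sketching the two knobs: 4 jobs (roughly IOMeter's workers), each keeping 1 IO in flight, for an effective queue depth of 4. The device path is a placeholder; point it at whatever you're actually testing.

```ini
; Hypothetical example, not a tuned benchmark config.
[global]
ioengine=libaio
rw=randwrite
bs=4k
direct=1
runtime=30
time_based

[workers]
numjobs=4        ; number of concurrent jobs ("workers" in IOMeter terms)
iodepth=1        ; IOs kept in flight per job; effective depth = 4 x 1
filename=/dev/vg0/testlv   ; placeholder target, replace with your LV
```

Raising iodepth instead of numjobs queues more IOs from a single submitter, which can behave differently on some storage stacks even at the same effective depth.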

-davidc
--
To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
