Hi Kazum,
I think it is ready to be pushed into master if you don't find any
fatal deficiencies.
NOTE:
You also need to apply the following QEMU patch to issue flush requests; not
all kernels will issue flush requests properly with upstream QEMU. For
example, Debian squeeze will *not* issue flush requests via QEMU to sheepdog
[the Linux kernel has had different implementations of the flush request, and
we work with the new one].
Ubuntu Server and RHEL 6 work as expected.
TODOs:
- cache object quota and reclaim
v4 -> v5:
- add control to disable cache
v3 -> v4:
- fix collie vdi list: don't operate on the cache for it.
- change the flush operation from an IO request to a LOCAL request
v2 -> v3:
- refactor the object cache code; cache_object_fd is removed
- fix an oc->dirty_rb race
- do not propagate errors to guests for the flush operation
- implement vdi delete operation
v1 -> v2:
- free entry when add_to_dirty_rb_and_list() fails.
- use mutex instead of spin lock.
Object cache caches data and vdi objects on the local node. It sits at a
higher level than the backend store. This extra cache layer translates gateway
requests into local requests, largely reducing network traffic and greatly
improving IO performance.
Dirty objects are flushed to cluster storage when the guest OS issues a
'sync' request.
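
To make the write-back behaviour concrete, below is a tiny self-contained
model of the idea. None of the names are the real sheep functions, and the
two in-memory arrays merely stand in for the local cache and the cluster
store:

#include <stdio.h>
#include <string.h>

#define NR_OBJS  4
#define OBJ_SIZE 16

static char cluster_store[NR_OBJS][OBJ_SIZE]; /* backend store in the cluster */
static char local_cache[NR_OBJS][OBJ_SIZE];   /* object cache on the local node */
static int dirty[NR_OBJS];                    /* objects that still need flushing */

static void cache_write(int oid, const char *data)
{
        /* Writes only touch the local copy; no network traffic yet. */
        strncpy(local_cache[oid], data, OBJ_SIZE - 1);
        dirty[oid] = 1;
}

static const char *cache_read(int oid)
{
        /* Reads are served entirely from the local cache. */
        return local_cache[oid];
}

static void cache_flush(void)
{
        /* Triggered by a 'sync' from the guest: push dirty objects back. */
        for (int oid = 0; oid < NR_OBJS; oid++) {
                if (dirty[oid]) {
                        memcpy(cluster_store[oid], local_cache[oid], OBJ_SIZE);
                        dirty[oid] = 0;
                }
        }
}

int main(void)
{
        cache_write(1, "guest data");
        printf("read from cache:     \"%s\"\n", cache_read(1));
        printf("cluster before sync: \"%s\"\n", cluster_store[1]);
        cache_flush();
        printf("cluster after sync:  \"%s\"\n", cluster_store[1]);
        return 0;
}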
The initial version concentrates on simplicity. Its design is
straightforward: just add one layer in front of the backend store, so the
gateway requests (from the guest) take a new path:

old path: IO req -> sheep cluster
new path: IO req -> object cache on local node -> sheep cluster
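
As a rough sketch of this layering (again with made-up names, not the actual
sheep request handlers), a gateway request is simply routed through the cache
when it is enabled:

#include <stdbool.h>
#include <stdio.h>

struct request {
        unsigned long oid; /* object id carried by the gateway request */
};

/* Old path: forward the request to the sheep cluster. */
static int forward_to_cluster(struct request *req)
{
        printf("oid %lu -> sheep cluster\n", req->oid);
        return 0;
}

/* New path: let the object cache on the local node handle it. */
static int handle_in_object_cache(struct request *req)
{
        printf("oid %lu -> object cache on local node\n", req->oid);
        return 0;
}

static int dispatch(struct request *req, bool cache_enabled)
{
        if (cache_enabled)
                return handle_in_object_cache(req);
        return forward_to_cluster(req);
}

int main(void)
{
        struct request req = { .oid = 42 };

        dispatch(&req, false); /* old path */
        dispatch(&req, true);  /* new path */
        return 0;
}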
Later we might consider other features, e.g., cache quotas for different
VDIs.
How To Use It:
The cache mode is controlled by QEMU's -drive option.
To enable the cache: qemu --enable-kvm -drive file=sheepdog:your_vm,cache=writeback
To disable the cache:
1) qemu --enable-kvm -drive file=sheepdog:your_vm,cache=none
or
2) qemu --enable-kvm -drive file=sheepdog:your_vm
Thanks,
Yuan
--
sheepdog mailing list
[email protected]
http://lists.wpkg.org/mailman/listinfo/sheepdog