Some code seems licensed under the GPLv2, some under the LGPL.

Best regards,
- Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein (via Tom White)

----- Original Message -----
> From: OZAWA Tsuyoshi <[email protected]>
> To: [email protected]; [email protected]
> Cc:
> Sent: Friday, September 23, 2011 6:08 AM
> Subject: [announce] Accord: A high-performance coordination service for
> write-intensive workloads
>
> Hi,
>
> I am sending this to the zookeeper-users and hbase-users MLs, since there
> may be cluster developers there interested in participating in this project.
>
> I am pleased to announce the initial release of Accord, yet another
> coordination service like Apache ZooKeeper.
> As you know, ZooKeeper is the de facto standard coordination kernel at
> present. Accord provides ZK-like features as a coordination service.
> Concretely speaking:
> - Accord is a distributed, transactional, and fully-replicated (no SPoF)
>   key-value store with strong consistency.
> - Accord can scale out to tens of nodes.
> - Accord servers can handle tens of thousands of clients.
> - Changes made by one client's write request can be notified to the
>   other clients.
> - Accord detects clients joining and leaving, and notifies the other
>   clients of the joined/left client information.
>
> There are, however, some problems with ZK:
> - ZK cannot handle write-intensive workloads well. ZK forwards all write
>   requests to a master server, which may become a bottleneck under
>   write-intensive workloads.
> - ZK is optimized for disk-persistence mode, not for in-memory mode.
>   ZOOKEEPER-866 shows that ZK has another bottleneck outside disk
>   persistence, even though there is a need for fully-replicated storage
>   with both strong consistency and low latency.
>   https://issues.apache.org/jira/browse/ZOOKEEPER-866
> - Limited transaction APIs. ZK can only issue write operations (write,
>   del) in a transaction (multi-update).
>
> These restrictions limit the capability of the coordination kernel.
> Accord solves these problems:
> 1. Accord uses the Corosync Cluster Engine as a total-order messaging
>    infrastructure instead of Zab, the atomic broadcast protocol ZK uses.
>    The engine enables any server to accept and process requests.
> 2. Accord supports in-memory mode.
> 3. More flexible transaction support. Not only write and del operations,
>    but also cmp, copy, and read operations are supported within a
>    transaction.
>
> These differences in the core engine (1 and 2) let Accord avoid the
> master bottleneck. Benchmarks demonstrate that the write-operation
> throughput of Accord is much higher than that of ZooKeeper
> (up to 20 times better throughput in persistent mode, and up to 18 times
> better throughput in in-memory mode).
>
> The high-performance kernel broadens the range of possible applications.
> Assumed applications include, for instance:
> - A distributed lock manager whose lock operations arrive at high
>   frequency from thousands of clients.
>   I have the lock manager for HBase in mind in particular. The
>   coordination service enables HBase to update multiple rows with ACID
>   properties. HBase can act as a distributed DB with ACID properties
>   until the coordination service becomes the bottleneck. The new
>   coordination kernel, Accord, can handle 18 times better throughput
>   than ZK. As a result, Accord can dramatically improve the scalability
>   of HBase with ACID properties.
> - Metadata management services for large-scale distributed storage,
>   including HDFS, Ceph, Sheepdog, etc.
>   A replicated master can be implemented easily.
> - A replicated message queue or logger (for instance, a replicated
>   RabbitMQ), and so on.
>
> Other distributed systems can use Accord's features easily because
> Accord provides general-purpose APIs (read/write/del and more flexible
> transactions).
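[Editor's note: the cmp/write/del/copy/read transaction semantics described above can be modeled as below. This is a toy, single-process sketch for illustration only; the function name, operation tuples, and abort behavior are my assumptions, not Accord's actual client API.]

```python
def run_txn(store, ops):
    """Apply a list of ops atomically to a dict-backed store.

    Ops are tuples: ('cmp', key, expected), ('write', key, value),
    ('del', key), ('copy', src, dst), ('read', key).
    If any cmp guard fails, nothing is applied and None is returned;
    otherwise mutations are applied and collected reads are returned.
    """
    # First pass: verify every cmp guard against the current store.
    for op in ops:
        if op[0] == 'cmp' and store.get(op[1]) != op[2]:
            return None  # transaction aborts; no op takes effect
    # Second pass: apply mutations and collect reads, in order.
    reads = {}
    for op in ops:
        kind = op[0]
        if kind == 'write':
            store[op[1]] = op[2]
        elif kind == 'del':
            store.pop(op[1], None)
        elif kind == 'copy':
            store[op[2]] = store[op[1]]
        elif kind == 'read':
            reads[op[1]] = store.get(op[1])
    return reads

store = {'leader': 'node-a', 'epoch': 1}
# Atomically hand over leadership, but only if node-a still holds it.
result = run_txn(store, [
    ('cmp', 'leader', 'node-a'),
    ('write', 'leader', 'node-b'),
    ('write', 'epoch', 2),
    ('read', 'epoch'),
])
print(store['leader'], result)  # node-b {'epoch': 2}
```

The cmp guard is what ZK's write-only multi-update lacks per the text above: it lets a client express "apply these writes only if the store still looks like X" in a single atomic round trip.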
>
> More information, including getting-started docs, benchmarks, and API
> docs, is available from our project page:
> http://www.osrg.net/accord
>
> and all code is available from:
> http://github.com/collie/accord
>
> Please try it out, and let me know about any opinions or problems.
>
> Best regards,
> OZAWA Tsuyoshi
> <[email protected]>
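[Editor's note: the distributed-lock-manager use case mentioned in the announcement boils down to a compare-guarded write over the key-value store. The sketch below is a toy single-process model; key names and helper functions are invented for illustration, and a real deployment would replicate the store and notify waiting clients of releases.]

```python
UNLOCKED = ''  # sentinel value meaning "no owner holds the lock"

def try_acquire(store, lock_key, owner):
    """Acquire the lock iff it is currently free.

    In the real service this check-then-write would be one atomic
    cmp+write transaction; here a plain dict stands in for the store.
    """
    if store.get(lock_key, UNLOCKED) == UNLOCKED:
        store[lock_key] = owner
        return True
    return False

def release(store, lock_key, owner):
    """Release the lock; only the current holder may release it."""
    if store.get(lock_key) == owner:
        store[lock_key] = UNLOCKED
        return True
    return False

store = {}
assert try_acquire(store, 'locks/table1', 'client-7')
assert not try_acquire(store, 'locks/table1', 'client-9')  # already held
assert release(store, 'locks/table1', 'client-7')
assert try_acquire(store, 'locks/table1', 'client-9')      # now free again
```

Since each acquire/release is a single guarded transaction, lock throughput is bounded by the coordination kernel's write throughput, which is exactly why the announcement's claimed 18x write-throughput improvement over ZK would matter for a high-frequency lock manager.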
