On Wed, Mar 23, 2011 at 1:47 PM, Anurag Priyam <[email protected]> wrote:
[...]
> Am I right in understanding this? And if this be the case, what happens
> when d2.yeban.in goes down? Is that taken care of by some kind of
> master-slave replication?

Adding some more context to my question:

I had played around with MongoDB for a while, and it has a totally
different approach to sharding. It's something like this:

- There are chunks (a contiguous range of data) for each collection.
- If a chunk grows above 200MB, it's split into two.
- These chunks are then migrated to different shards.
- Then there is metadata about these shards - things like what
information is stored on each shard. So, if you want to shard a
collection of users based on name, the metadata would store
information like: {name: Andrew} to {name: Anurag} is available on
shard2.
- Requests first get routed to the so-called config servers where such
metadata is stored. From there the right shard to query is determined
(a rough sketch of that lookup is below).
- Failover is provided by master-slave replication.
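
To make that concrete, here is a rough Python sketch of how range
metadata lets you route a query. It is my own toy illustration, not
Mongo's actual code; the chunk bounds and shard names are made up:

  import bisect

  # metadata kept on the config servers, sorted by chunk lower bound
  chunk_bounds = ["Andrew", "Anurag", "Brian"]   # lower bound of each chunk
  chunk_shards = ["shard1", "shard2", "shard3"]  # shard that owns that chunk

  def shard_for(name):
      # pick the last chunk whose lower bound is <= name
      i = bisect.bisect_right(chunk_bounds, name) - 1
      return chunk_shards[max(i, 0)]

  print(shard_for("Anurag"))   # -> shard2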

So, at first I was thinking along similar lines. Will I have to
partition data for each shard like Mongo does? If so, how do I handle
the metadata?

Googling around, I learned about memcached's way, and that Drizzle
would prefer something like that[1]. You see, there are quite
different concepts involved. So, I wanted to clarify, particularly
the failure handling mechanism.

I find memcached's way more elegant. It does away with storing all
the metadata, and with config servers. I don't need to bother with
updating metadata or keeping the config servers consistent (maybe the
overhead is small); I just have to do the hashing right. Maybe Mongo's
way is best suited to its architecture. I wouldn't know much; I did
not dig into its code.
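
For contrast, here is an equally rough sketch of the hashing approach
(consistent hashing, the way memcached clients usually do it). The
shard names and the number of points per shard are just for
illustration:

  import hashlib, bisect

  shards = ["d1.yeban.in", "d2.yeban.in", "d3.yeban.in"]

  def h(key):
      return int(hashlib.md5(key.encode()).hexdigest(), 16)

  # place each shard at several points on a hash ring
  ring = sorted((h("%s-%d" % (s, i)), s) for s in shards for i in range(100))
  points = [p for p, _ in ring]

  def shard_for(key):
      # walk clockwise from the key's hash to the next shard point
      i = bisect.bisect(points, h(key)) % len(ring)
      return ring[i][1]

  print(shard_for("user:anurag"))

The nice property is that if d2.yeban.in goes down, only the keys that
hashed to its points get remapped to the neighbouring shards on the
ring; everything else stays put.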

-- 
Anurag Priyam
http://about.me/yeban/
