Hi.

> Could you share the way in which it didn't quite work? Would be valuable
> information for the community.
>

The idea is to have a Xen VM dedicated to the NN, and maybe to the SNN as
well, running on top of DRBD, as described here:
http://www.drbd.org/users-guide/ch-xen.html

The VM would be monitored by Heartbeat, which would restart it on another
node if it fails.
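For reference, the relevant part of the Xen domU config looks roughly like
this (the domain name "nn-vm" and the DRBD resource name "r-namenode" are
made-up examples; the drbd: disk syntax is the one from the DRBD users guide
linked above):

```
# Illustrative Xen domU config fragment -- names are examples only
name   = "nn-vm"
memory = 2048
# The drbd: prefix makes Xen's block-drbd helper promote/demote the
# resource itself, which is what is supposed to allow live migration
disk   = [ 'drbd:r-namenode,xvda,w' ]
```

Live migration should then be just `xm migrate --live nn-vm <target-host>`.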

I wanted to go that way because I thought it's ideal for a small cluster,
since the node can then be re-used for other tasks. Once the cluster grows
large enough, the VM could be live-migrated to a dedicated machine with
minimal downtime.

The problem is that it didn't work as expected. Xen over DRBD is just not
as reliable as described. The most basic operation, live domain migration,
works in only about 50% of cases. Most often the migration leaves the DRBD
device in read-only state, meaning the domain can't be cleanly shut down,
only killed. This, in turn, often leads to NN metadata corruption.



>
> Always good to learn how to recover metadata :) You can do fire drills like
> this on a pseudodistributed cluster too - probably good for any ops people
> out there who haven't tried it before.
>
>
By the way, several times I managed to break the SNN's main checkpoint as
well. In those cases, I manually replaced the checkpoint with the contents
of the "previous" directory.
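For the archives, the manual rollback I did was essentially the following,
sketched here on a scratch directory (the real paths are whatever your
dfs.name.dir / fs.checkpoint.dir point to; the file contents are dummies):

```shell
# Demo on a scratch layout -- NOT a real dfs.name.dir; adjust paths
NAME_DIR=$(mktemp -d)/name
mkdir -p "$NAME_DIR/current" "$NAME_DIR/previous"
echo corrupt > "$NAME_DIR/current/fsimage"
echo good    > "$NAME_DIR/previous/fsimage"

# 1) keep the (possibly corrupt) checkpoint around, just in case
cp -a "$NAME_DIR/current" "$NAME_DIR/current.broken"

# 2) replace the checkpoint with the contents of "previous"
rm -rf "$NAME_DIR/current"
cp -a "$NAME_DIR/previous" "$NAME_DIR/current"

cat "$NAME_DIR/current/fsimage"   # now reads: good
```

Obviously the NN/SNN have to be stopped before doing anything like this.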

Is it intended that such copying has to be done manually? I mean, if the NN
can't importCheckpoint from the SNN, shouldn't it offer to import the
previous one?

And another question while we're at it: if the metadata is rolled back to
the last stable checkpoint, what happens to the blocks on the DataNodes that
were created after that checkpoint? Will the DataNodes eventually erase
them, or will they just be left there forever?



>
> >
> > Are there any other approaches which will make the NameNode
> > highly-available?
> >
> >
> I think this discussion came up last week on the list. Check the archives.
>
>
Could you tell me the subject of that discussion on the list?


> Also, if we're speaking about this, is it possible to use the config
> directory from NFS, to have a single configuration for all the nodes?
>
>

> Yes, it should work fine, but you'll really be kicking yourself when your
> NFS server is down and thus the entirety of your Hadoop cluster won't start
> either :) I'd recommend rsync, personally. Keep things simple :)
>
>
Good idea :).

Thanks again.
