Nick,

The two URLs (which in fact point to the same article) are interesting.
However, the setup this guy describes looks fairly simple, and mostly
"home lab" oriented as far as I understand it. In my case, I need
something to put into production, and production needs something proven.
After some months of investigating DRBD, I now have the strong feeling
that DRBD IS proven. First, because of the huge number of setups all over
the world; if the product were bad, we would know by now. Second, because
many industrialized solutions have been built on it, some certified by
various actors. It wouldn't have had that success if it had serious
shortcomings. I didn't know about GlusterFS. Glad to know that it exists,
but if the objective is real production, I think it would be wise to let
it "dry" a bit and see whether it gains more traction in the future. You
can't trust a product solely because its features look great, right?
Confirming that feeling: if you scroll to the end of the article, in the
comments area, there's an interesting remark from someone who obviously
belongs to the development team, who says more or less that iSCSI is not
the right technology for large-scale GlusterFS deployments, because it has
side effects that can lead to data corruption (he doesn't say so
explicitly, but the effects he describes clearly point that way!). Also,
the IET target this guy uses has some known shortcomings, in performance
as well as functionality, especially in virtualization environments (I ran
into a couple of problems with round-robin access under VMware, for
instance). I'm surprised he doesn't know about them and doesn't propose
something more serious like SCST. And if you choose NFS to export your
data, it's very likely that you will lose important features such as
VMware Thin Provisioning; or rather, you can still use it, but it won't
save any space, too bad... This makes me feel that this guy, as honest as
his enthusiasm may be (and I believe it is), mainly relies on the
announced features rather than on real experience with the product itself.
And that's what makes me prudent. Talking about production, I mean.
Also, my understanding was that ZFS's license (CDDL) was incompatible with
the GPL, and as far as I know there was nothing available for our common
GNU/Linux distributions. I'm surprised to see that this has changed; maybe
the Oracle effect. If so, that might be of interest, but in that case
nothing prevents you from using ZFS with DRBD, right? At least that's my
understanding...
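To illustrate what I mean by combining the two: a ZFS volume (zvol) can in
principle serve as the backing block device for a DRBD resource. A rough
sketch, assuming a ZFS-on-Linux port is installed; the pool name, disk,
and sizes below are invented for the example:

```shell
# Create a pool and a 100G zvol to act as DRBD's backing device
# ("tank" and /dev/sdb are placeholders, adjust to your hardware):
zpool create tank /dev/sdb
zfs create -V 100G tank/drbd0

# Then, in the DRBD resource definition, point "disk" at the zvol:
#   disk /dev/zvol/tank/drbd0;
```

That way DRBD handles the replication and ZFS handles the local storage
features, each doing what it is proven at. I haven't run this combination
in production myself, so take it as a direction, not a recipe.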
So to summarize: it's interesting to know that this product exists, but
there is no rush to implement it. There is tons of DRBD cluster experience
available everywhere, plus official support; from what I can see, there is
very little of either for GlusterFS. You have tons of references
implementing DRBD and Pacemaker. How many companies do you know
implementing GlusterFS? In my case, none. And you probably got very few
answers to your question "Any feedback regarding to IET+GlusterFS or ZFS,
or other setups", simply because almost nobody can really tell...
DRBD and Pacemaker are admittedly a bit complex, but clustering data
access IS a complex (and sensitive!) task by nature, and you get nothing
for nothing. The best advice I can give you is to spend the time it takes
to master these products; you won't regret it! I'd rather suffer a bit and
trust my data than trust something that looks simple and smart, and then
have to explain to my customers that their data is definitely gone...
20 years in IT have made a prudent guy of me! :)
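And honestly, the DRBD part of the complexity is manageable. For what it's
worth, a minimal two-node resource definition looks something like this
(hostnames, devices, and addresses below are made up for the example; the
real work is in the Pacemaker integration around it):

```
resource r0 {
    protocol C;                  # synchronous replication: a write completes
                                 # only once it has reached both nodes
    on alpha {
        device    /dev/drbd0;
        disk      /dev/sdb1;     # local backing block device
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on bravo {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

Protocol C is the one you want for data you can't afford to lose, since
the primary never acknowledges a write the secondary hasn't received.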
But if your goal is not production, only a test platform or something
simple that doesn't need a specific performance level or tortured failure
scenarios, then why not go with GlusterFS... In the end, it's mainly a
matter of use case...
And unfortunately, no, at the moment I don't know of any other serious
data clustering offerings, except from array manufacturers or dedicated
vendors such as DataCore or FalconStor, but those cost an arm and a leg,
and I have personally seen very bad situations with such products, with
official support people unable to explain why the thing had blown up and
unable to say when it would be back in production. VMware introduced a
(paid) Virtual Storage Appliance as of vSphere 5.0, which offers a cluster
of replicated data volumes, but honestly it doesn't compare to the
DRBD/Pacemaker feature set: many gaps, many limitations. Maybe it will get
better in the future, but at the moment it's not competitive. I'm
pro-VMware, so I'd like to say it's good, but objectively I can't! That's
why I was so happy to discover DRBD clusters last year, and why I've spent
so much time these last few months understanding how they work and trying
to master them. Not that easy, you're right, but I know I won't regret it!

Best regards,

Pascal.

-----Original Message-----
From: drbd-user-boun...@lists.linbit.com
[mailto:drbd-user-boun...@lists.linbit.com] On Behalf Of Nick Khamis
Sent: Sunday, October 30, 2011 02:05
To: linux...@lists.linux-ha.org; drbd-user@lists.linbit.com
Subject: [DRBD-user] DRBD Alternatives (Apples vs. Apples)

As I look for other native file replication solutions
(i.e., network RAID 1), I found:

http://t.co/jeo6QoH5, and
http://www.bauer-power.net/2011/08/roll-your-own-fail-over-san-cluster.html

With that in mind, has anyone else here looked into alternatives? An
apples-to-apples comparison, so no talk of SANs+LUNs. Any feedback
regarding IET+GlusterFS, ZFS, or other setups is greatly appreciated.

Looking forward to getting your feedback,

Nick.
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
