Re: [Gluster-users] Meta

2013-01-21 Thread Jeff Darcy

On 01/20/2013 12:09 PM, Whit Blauvelt wrote:

Or is it healthy for everyone to be aware of the alternatives, whether for
their own use or as the competition? For general use, all the major distros
are great, and arguing over which is or isn't enterprise class only
reveals who is a fool for marketing campaigns. But for a specific use case,
there are times when a particular distro has a clear-cut advantage.


Just to be clear, it was the "based on FUD" part that raised my ire, not 
the "recommended an alternative" part.  I'm pretty sure I've recommended 
Ceph myself on this list, where I've felt it might be a better fit.  I 
go out of my way to praise XtreemFS, because I don't think they get the 
recognition they deserve for a fine project.  Directing people toward 
more suitable alternatives is IMO constructive, and therefore quite 
suitable for the list.


On the other hand, "GlusterFS failed," without details, is not 
constructive.  IMO neither is suggesting an alternative that has its own 
rather serious problems or limitations without warning people of those. 
It can lead them to follow one mistake with a worse one.  The goal 
should be for people to help each other solve problems.  To me it seemed 
that the post which triggered this would solve neither GlusterFS's 
problems nor the user's, and quite likely wasn't even intended to.  Had 
it done either, or at least shown some possibility, I would have 
responded differently.


I hope that helps clarify where the line is, at least for me.



[Gluster-users] [OFF] Re: Meta

2013-01-21 Thread Papp Tamas

On 01/21/2013 01:23 PM, Jeff Darcy wrote:

I think this conversation is already way off-topic :)


Just to be clear, it was the "based on FUD" part that raised my ire, not the 
"recommended an alternative" part.  I'm pretty sure I've recommended Ceph 
myself on this list, where I've felt it might be a better fit.


Well, actually, is there a comparison between the two systems? Pros/cons, usage 
scenarios, stability, real use cases, etc.?



To be more on-topic: are there situations where GlusterFS is definitely not 
recommended? Will there be changes in the future?


Thanks,
tamas


Re: [Gluster-users] [OFF] Re: Meta

2013-01-21 Thread Jeff Darcy

On 01/21/2013 07:35 AM, Papp Tamas wrote:

Well, actually, is there a comparison between the two systems? Pros/cons, usage
scenarios, stability, real use cases, etc.?


There are two that I know of.

http://hekafs.org/index.php/2012/11/trying-out-moosefs/ (that's me)
http://blog.tinola.com/?e=13 (someone I don't know)

TBH I wouldn't read too much into the performance tests in the second.  Those 
mostly favor GlusterFS, but they're pretty simplistic single-threaded tests 
that I don't think reflect even the simplest real-world scenarios.  My tests 
weren't exactly exhaustive either, but IMNSHO they at least give a better 
picture of what to expect when using each system as it was designed to be used.
Still, some of the other author's non-performance points are good.



To be more on-topic: are there situations where GlusterFS is definitely not
recommended? Will there be changes in the future?


There are definitely some sore spots when it comes to performance. 
Synchronous random small-write performance with replication (a common need when 
hosting virtual images or databases) has historically been one.  If you're 
using kvm/qemu you can avoid the FUSE overhead by using the qemu driver, and in 
that case I think we're very competitive.  Otherwise people with those 
workloads might be better off with Ceph.  The other big pain point is directory 
operations.  Again because of FUSE, things like large directory listings or 
include/library searches can be pretty painful - though be wary of jumping to 
conclusions there, because I've found that even Ceph's kernel-based client 
seems to have anomalies in that area too.  We're working on some fixes in this 
area, but I don't know when they'll reach fruition.
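
To illustrate the qemu-driver point above - this is only a rough sketch, assuming 
a QEMU build with the GlusterFS block driver (roughly 1.3 or later); the host, 
volume, and image names are placeholders - accessing an image over libgfapi 
instead of a FUSE mount looks something like:

   # create an image directly on the volume, no FUSE mount involved
   qemu-img create -f qcow2 gluster://server1/vmvol/vm1.qcow2 20G

   # boot a guest against it; cache=none avoids double-caching on the host
   qemu-system-x86_64 -enable-kvm -m 2048 \
       -drive file=gluster://server1/vmvol/vm1.qcow2,if=virtio,cache=none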


As always, the real answer depends on details.  I think we win big on initial 
setup and flexibility (built-in feature set and potential to add features 
yourself).  I will be first to admit that debugging and tuning can be pretty 
miserable, but AFAICT that is true for *every* distributed filesystem of the 
last twenty years.  I'm hoping we can raise the bar on that some day, as we did 
for initial setup.  Meanwhile, the important thing is to consider one's own 
specific needs and evaluate performance in that context.  All a general 
comparison can really do is tell you which candidates you should test.
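
As a rough sketch of that kind of targeted test - assuming fio is available (any 
similar tool works) and the volume is FUSE-mounted at a placeholder path 
/mnt/home0 - a synchronous random small-write run, matching the workload 
mentioned above, might look like:

   # 4 KB synchronous random writes against the mounted volume
   fio --name=randwrite-sync --directory=/mnt/home0 \
       --rw=randwrite --bs=4k --size=1g \
       --direct=1 --sync=1 --numjobs=4 --group_reporting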




[Gluster-users] Big problem

2013-01-21 Thread Mario Kadastik
Hi,

I had a 4 x 3 GlusterFS volume distributed over 6 servers (2 bricks from each). I 
wanted to move to a 4 x 2 volume by removing two nodes. The initial config is here:

http://fpaste.org/txxs/

I asked on the Gluster IRC channel how to do it and got this command, which I then 
proceeded to run:

gluster volume remove-brick home0 replica 2 192.168.1.243:/d35 
192.168.1.240:/d35 192.168.1.243:/d36 192.168.1.240:/d36

Having read the gluster help output, I ascertained that I should probably add 
"start" to the end to have it gracefully check everything (without "start" it did 
warn me of possible data loss). However, the result was that it started rebalancing 
and immediately reconfigured the volume into 6 x 2 replica sets, so now I have a 
HUGE mess:

http://fpaste.org/EpKG/
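
Concretely, this is what I ran and then how I checked on and aborted the operation 
(3.3-era CLI; the status/stop forms are as I understood them from the help output, 
so the exact syntax may differ):

   # the command above, with "start" appended
   gluster volume remove-brick home0 replica 2 192.168.1.243:/d35 \
       192.168.1.240:/d35 192.168.1.243:/d36 192.168.1.240:/d36 start

   # checking migration progress, and stopping it once I saw the 6 x 2 layout
   gluster volume remove-brick home0 replica 2 192.168.1.243:/d35 \
       192.168.1.240:/d35 192.168.1.243:/d36 192.168.1.240:/d36 status
   gluster volume remove-brick home0 replica 2 192.168.1.243:/d35 \
       192.168.1.240:/d35 192.168.1.243:/d36 192.168.1.240:/d36 stop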

Most processes failed and directory listings show duplicate entries:

[root@wn-c-27 test]# ls
ls: cannot access hadoop-fuse-addon.tgz: No such file or directory
ls: cannot access hadoop-fuse-addon.tgz: No such file or directory
etc  hadoop-fuse-addon.tgz  hadoop-fuse-addon.tgz
[root@wn-c-27 test]# 

I urgently need help recovering from this state. It seems gluster now has me in a 
huge mess and it will be tough to get out of it. As soon as I noticed this I 
stopped the remove-brick with the "stop" command, but the mess remains. Should I 
force the remove-brick? Should I stop the volume and stop gluster and manually 
reconfigure it back to 4 x 3, or how else can I recover to a consistent 
filesystem? This is users' /home, so a huge mess is NOT a good thing. And because 
of the 3x replication there is no separate backup right now either...

Mario Kadastik, PhD
Researcher

---
  Physics is like sex, sure it may have practical reasons, but that's not why 
we do it 
 -- Richard P. Feynman
