Use the Linux software RAID first. Gluster is best used between
servers as an additional level of RAID. If I remember your setup
correctly, it would be best to do a Linux software RAID 5, then mirror
with another server for redundancy.
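A rough sketch of that layout, with placeholder device names (/dev/sdb through /dev/sde) and hostnames (server1, server2). The volume-creation step uses the `gluster volume create` CLI, which arrived around 3.1; on the 3.0-era releases in this thread you would generate vol files instead:

```shell
# RAID 5 across four local disks (device names are placeholders)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
mkfs.ext3 /dev/md0
mount /dev/md0 /export/brick1

# then mirror the two servers' bricks with a replicated gluster volume
# (3.1+ CLI syntax; on 3.0 you would generate vol files instead)
gluster volume create mirrored replica 2 \
    server1:/export/brick1 server2:/export/brick1
gluster volume start mirrored
```

This gives disk-failure protection inside each box from the RAID 5, and server-failure protection from the gluster replica.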
-Mic
On 9/24/2010 3:51 AM, Jeremy Enos wrote:
Hardware
's a step in the right direction.
-Mic
Daniel Maher wrote:
On 06/17/2010 04:05 PM, Mickey Mazarick wrote:
Let me know if anyone tries this; we can help with the first half
(getting gluster into an initrd).
That would be very interesting ; if you'd be willing to share your
notes on this pro
We have a similar setup booting from gluster, where it loads the OS into
a ramdrive, but there are concerns as you write logs etc., since it can
start to eat up your RAM.
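One way to keep logs from eating the ramdrive, sketched here with placeholder paths: cap the tmpfs size and bind-mount the log directory back onto persistent (gluster-backed) storage:

```shell
# cap the ramdrive so runaway logs cannot consume all memory
mount -t tmpfs -o size=512m tmpfs /mnt/ramroot

# keep /var/log off the ramdrive by bind-mounting it from the
# gluster-backed store (paths are placeholders)
mount --bind /mnt/gluster/logs /mnt/ramroot/var/log
```

With the size cap in place, a full log partition fails writes instead of exhausting RAM.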
There is an article from Slashdot that had an interesting approach (look
under "setting up storage", halfway down):
http://blo
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Mickey Mazarick
Sent: Monday, May 03, 2010 2:43 PM
To: Lakshmipathi
Cc: Gluster Users
Subject: Re: [Gluster-users] server ver 3.0.4 crashes
It turns out I had a previous client version (2.09) r
hmipathi.G
- Original Message -
From: "Tejas N. Bhise"
To: "Mickey Mazarick"
Cc: "Gluster Users"
Sent: Wednesday, April 28, 2010 9:46:07 PM
Subject: Re: [Gluster-users] server ver 3.0.4 crashes
Hi Mickey,
Please open a defect in bugzilla. Someone from the dev tea
Did a straight install and the ibverbs instance will crash after a
single connection attempt. Are there any bugs that would cause this
behavior?
All the log tells me is:
pending frames:
frame : type(2) op(SETVOLUME)
patchset: v3.0.4
signal received: 11
time of crash: 2010-04-28 10:41:08
confi
g the iscsi lun to just act as a second
gluster server.
There has been talk about a delayed-write mirror before, for offsite and
slower mirroring; maybe a dev can tell us how far out that is.
Currently every write would have to occur before the client could continue.
-Mickey Mazarick
Marcu
Can you tell us a little more about your setup? I'm running many
hundreds of VMs on our cluster, but I found InfiniBand is necessary if
you have any large amount of I/O (databases, lots of drive access, etc.).
You may simply be saturating your I/O if you only have a single gigabit
interface to your
Sorry the mail daemon just batched me the rest of this conversation and
I see this is already done. Please ignore.
-Mic
I had some difficulty getting OFED 1.3 working on kernel 2.6.27 about 6
months back. It took some patching, but I did find that you needed to
have SRQ enabled for it to work. The ibv_srq_pingpong test app was a
good test for whether it would work with gluster or not.
I also had to upgrade
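For reference, the SRQ sanity check mentioned above runs like this (hostname is a placeholder); if the pingpong completes and reports bandwidth, shared-receive-queue support is working:

```shell
# on the server node: listen using a shared receive queue
ibv_srq_pingpong

# on the client node: connect to the server and run the exchange
ibv_srq_pingpong server1
```

If this fails while plain ibv_rc_pingpong works, the SRQ path in the stack is the likely culprit.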
Just a note: we initially tried to set up our storage network with bonded
4-port gigE connections per client and storage node, and it was still
~1/3 the speed of InfiniBand. There also appears to be more overhead in
unwrapping data from packets, even with jumbo frames set.
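For anyone trying the same comparison, a bonded 4-port setup like the one described would look roughly like this. Interface names are placeholders, and the LACP (802.3ad) mode is an assumption; the original posting does not say which bonding mode was used:

```shell
# 2010-era ifenslave style; interface names are placeholders
modprobe bonding mode=802.3ad miimon=100
ip link set bond0 up
ifenslave bond0 eth0 eth1 eth2 eth3
ip addr add 10.0.0.10/24 dev bond0
ip link set bond0 mtu 9000   # jumbo frames
```

Note that 802.3ad hashes each flow onto a single slave, so one client-to-server stream still tops out at 1 Gb/s, which is consistent with the numbers above.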
We did see about a 50
Just a note, we have seen a pretty significant increase in speed from
this latest 2.03 release. Doing a test read over AFR we are seeing
speeds between 200-320 MB a second (over InfiniBand, ib-verbs).
This is with direct I/O disabled too. Oddly, putting performance
translators on the clients made
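A crude way to reproduce that kind of sequential-read measurement; the mount point is a placeholder, and dd prints the throughput when it finishes:

```shell
# write a test file through the mount, then time a sequential read;
# drop the page cache first so the read actually hits the servers
dd if=/dev/zero of=/mnt/gluster/readtest bs=1M count=1024 conv=fsync
echo 3 > /proc/sys/vm/drop_caches   # requires root
dd if=/mnt/gluster/readtest of=/dev/null bs=1M
rm /mnt/gluster/readtest
```

Without the cache drop, the second dd mostly measures local RAM, not the cluster.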
Have you seen a distributed parallel fault-tolerant file system that
doesn't take a serious hit doing mmaps or direct I/O?
This is a serious question: I've installed Lustre for comparison recently
and it didn't measure up, but I'm wondering what other DPFT filers anyone
has tried, and for what applications.
01-main on /mnt/gluster/main1 type ext3
(rw,user_xattr)
We just unmounted and did a "mount -a" for gluster to see it.
Thanks!
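For what it's worth, the remount can also be captured in /etc/fstab so "mount -a" (and boot) picks the gluster volume up automatically; the vol-file path and mount point here are placeholders using the 3.0-era native-client syntax:

```shell
# /etc/fstab entry for the gluster mount (paths are placeholders)
/etc/glusterfs/main1.vol  /mnt/gluster/main1  glusterfs  defaults,_netdev  0  0
```

The _netdev option keeps the mount from being attempted before the network is up.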
-Mickey Mazarick
--
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
Is there a way to check if a file has a posix lock on it? I know you are
locking sections of a file but is there a way of listing which files
have locks on them for an entire cluster (I *think* /proc/locks only
applies to the local machine)?
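A partial answer for a single node, sketched with awk (and yes, /proc/locks is per-machine, so you would have to run this on every server and brick): each POSIX lock line carries the holder's pid and a MAJOR:MINOR:INODE field that identifies the locked file:

```shell
# list POSIX locks on this node: pid and the dev:inode of the locked file
awk '$2 == "POSIX" {print "pid=" $5, "file=" $6, "range=" $7 "-" $8}' /proc/locks
# the MAJOR:MINOR:INODE field can then be mapped to a path with, e.g.:
#   find /mnt/gluster/main1 -xdev -inum 54321
```

There is no single cluster-wide view here; aggregating the per-node output over ssh is left as an exercise.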
Thanks!
-Mickey Mazarick