Hello all, new to the list here. Figured I'd hop in to get some insight/opinion
on the hardware requirements of Gluster in our environment.
We've been testing Gluster as our shared storage technology for our new Cloud
product we're going to be launching. The primary role of the Gluster
infrastructure
I'm having problems removing directories; if I do a mv or a rm
I'll get an error like this:
[00:57:57] [r...@clustr-01 /]# rm -rf /mnt/glusterfs/bhl/
rm: cannot remove directory `/mnt/glusterfs/bhl': Transport endpoint is not connected
EdWyse on IRC suggested I run getfattr -m "" on a few
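A hedged sketch of first-line triage for that error (all paths are stand-ins): "Transport endpoint is not connected" usually means the FUSE client process died or lost its connection, so the mount itself is the first thing to check.

```shell
MNT=/mnt/glusterfs
# On a wedged FUSE mount every access fails with ENOTCONN
# ("Transport endpoint is not connected"), so even a plain stat reveals it.
if ! stat "$MNT" >/dev/null 2>&1; then
    echo "mount looks dead: lazy-unmount and remount"
    # umount -l "$MNT"                                # detach the dead mount
    # glusterfs -f /etc/glusterfs/client.vol "$MNT"   # remount (volfile path is an assumption)
fi
```

The getfattr pass would then be run against the backend brick directories on each server, e.g. `getfattr -m "" -d -e hex /export/brick/bhl` (brick path is an assumption), to inspect the gluster xattrs directly.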
Heh sure although we took a circuitous route to get to where we are.
To start with I recommend using DRBL (Diskless Remote Boot in Linux). It's
a well-maintained project and will save you a lot of the initial headaches
of creating initrds, compiling, etc.
http://drbl.sourceforge.net/
It's NFS-based but
Here are the notes from our Debian setup. Since these notes are mildly out
of context, hopefully they simply give you a starting point. Rsync is
the tool we use to slurp the Debian image. You manage gluster as you
would on a normal install. However, you manage the configs and such on
a copy at (X) l
On 06/17/2010 04:05 PM, Mickey Mazarick wrote:
Let me know if anyone tries this; we can help with the first half
(getting gluster into an initrd).
That would be very interesting; if you'd be willing to share your notes
on this procedure, I'm sure I wouldn't be the only one to thank you for
We have a similar setup booting from gluster where it loads the OS into
a ramdrive, but there are concerns as you write logs etc., since they can
start to eat up your RAM.
There is an article from Slashdot that has an interesting approach (look
under "setting up storage", halfway down):
http://blo
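For the first half Mickey mentions (getting gluster into an initrd), a Debian initramfs-tools hook is one way in. This is only a sketch under the assumption of initramfs-tools; the binary paths are assumptions and would need checking on the target system:

```shell
#!/bin/sh
# /etc/initramfs-tools/hooks/glusterfs (sketch; paths are assumptions)
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in prereqs) prereqs; exit 0;; esac
. /usr/share/initramfs-tools/hook-functions
# Pull the client binary, the FUSE userland helper, and the fuse
# kernel module into the generated initrd.
copy_exec /usr/sbin/glusterfs /sbin
copy_exec /usr/bin/fusermount /bin
manual_add_modules fuse
```

`update-initramfs -u` would then rebuild the image; a matching boot script still has to mount the volume as the root filesystem.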
On 06/17/2010 01:15 PM, Benjamin Hudgens wrote:
Hello Jan,
Our company took the approach of slurping our OS into a ram drive and
then mounting file system points from Gluster. The OS becomes
expendable. In our case (large amounts of dumb storage machines) this
is okay. We were itching to get away
Hello Jan,
Our company took the approach of slurping our OS into a ram drive and
then mounting file system points from Gluster. The OS becomes
expendable. In our case (large amounts of dumb storage machines) this
is okay. We were itching to get away from NFS. Boot time is slow while
it reads d
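One way to square the expendable-OS-in-RAM approach with the logs-eating-RAM concern raised earlier is to leave the root in the ramdrive but bind growing state onto the Gluster mount. A sketch with an assumed server name, volume, and a hypothetical hostname `node01`:

```
# /etc/fstab sketch (server name, volume, and paths are assumptions)
server1:/shared             /mnt/glusterfs  glusterfs  defaults,_netdev  0 0
/mnt/glusterfs/logs/node01  /var/log        none       bind              0 0
```

The per-host directory has to exist on the volume before the bind mount, and each node needs its own directory so logs from different machines don't collide.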
Hi,
I've only recently started playing with glusterfs. My setup consists of
two servers (noriko and kumiko), each with twelve 1 TB disks, raided
together in RAID 10.
The systems have CentOS-5.5 installed, and I have
installed glusterfs-3.0.4-1 (client, common and server).
I have generated t
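For two servers like noriko and kumiko on glusterfs 3.0.x, a replicated client volfile looks roughly like this (the brick/export names are assumptions; glusterfs-volgen can generate the equivalent):

```
volume noriko
  type protocol/client
  option transport-type tcp
  option remote-host noriko
  option remote-subvolume brick
end-volume

volume kumiko
  type protocol/client
  option transport-type tcp
  option remote-host kumiko
  option remote-subvolume brick
end-volume

volume mirror
  type cluster/replicate
  subvolumes noriko kumiko
end-volume
```

Because the client connects to both protocol/client subvolumes directly, cluster/replicate keeps serving reads and writes if one of the two servers goes away.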
On 06/17/2010 12:08 PM, Daniel Maher wrote:
On 06/17/2010 12:00 PM, Jan wrote:
I could easily setup netboot the traditional way using NFS, but I would
not have any failover/ha for that. As I understand, NFS on the gluster
storage platform (gsp) does not provide failover in case the first
server
On 06/17/2010 12:00 PM, Jan wrote:
I could easily setup netboot the traditional way using NFS, but I would
not have any failover/ha for that. As I understand, NFS on the gluster
storage platform (gsp) does not provide failover in case the first
server crashes. Failover-functionality for some dat
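For reference, the "traditional way" Jan describes pins the root to a single nfsroot server in the PXE config, which is exactly the single point of failure in question. A sketch with made-up addresses and paths:

```
# pxelinux.cfg/default (addresses and paths are hypothetical)
LABEL debian-diskless
  KERNEL vmlinuz
  APPEND initrd=initrd.img root=/dev/nfs nfsroot=10.0.0.1:/srv/nfsroot ip=dhcp ro
```

Replacing that single nfsroot with a gluster client that reads a replicate volfile naming both servers is what removes the dependency on any one machine.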
On 06/17/2010 11:37 AM, Daniel Maher wrote:
On 06/17/2010 11:08 AM, Jan wrote:
- enables (diskless) Linux-Clients to boot over the network (debian,
some "real servers", some virtual ones)
Is this possible with glusterfs? Has anybody tried it? I understand it
is not that easy with a FUSE-filesystem
On 06/17/2010 11:08 AM, Jan wrote:
- enables (diskless) Linux-Clients to boot over the network (debian,
some "real servers", some virtual ones)
Is this possible with glusterfs? Has anybody tried it? I understand it
is not that easy with a FUSE-filesystem, and there does not seem to be an
out-of-t
Hello,
I am just looking around for a cheap, flexible, and redundant
SAN solution and started reading about glusterfs.
My network is based on gigabit ethernet and I have some 16 servers
connected via InfiniBand.
After reading some documentation I have a few questions and hopefully
someone