Re: [Gluster-users] Exporting nfs share with glusterfs?

2010-04-16 Thread carlopmart

Please, any response?

carlopmart wrote:

Then, if I use NFSv4, can this work?

Burnash, James wrote:

NFS v4 supports extended attributes


--
CL Martinez
carlopmart {at} gmail {d0t} com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Exporting nfs share with glusterfs?

2010-04-16 Thread Burnash, James
I have not deployed this configuration - I was simply correcting the earlier
statement that NFS doesn't support extended attributes.

I would be interested to hear if anyone is implementing NFSv4 on top of Gluster.
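If anyone wants a quick sanity check of extended attribute support on a
given mount, the standard attr tools work; a minimal probe (the path is
a placeholder):

touch /mnt/test/probe
setfattr -n user.test -v works /mnt/test/probe
getfattr -n user.test /mnt/test/probe

(Gluster itself keeps its metadata in trusted.* attributes on the
backend, which need root to inspect.)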

James



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] novice kind of question.. replication(raid)

2010-04-16 Thread pawel eljasz

dear all, I just subscribed and started reading the docs,
but I'm still not sure I've got the hang of it all.
is GlusterFS for something simple like:

box a            box b
/some_folder     /some_folder

so /some_folder on both boxes would contain the same data?

if yes, does setting up only the servers suffice, or is the client side
needed too?
can someone share a simplistic config that would work for the above
simple design?


cheers
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] novice kind of question.. replication(raid)

2010-04-16 Thread RW
This is basically the config I'm using to replicate
a directory between two hosts (RAID 1 if you like ;-) ).
You need both server and client, even if both are on the same
host:

##
# glusterfsd.vol (server):
##
volume posix
  type storage/posix
  option directory /some_folder
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.bind-address ...
  option transport.socket.listen-port 6996
  option auth.addr.locks.allow *
  subvolumes locks
end-volume

#
# glusterfs.vol (client):
#
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host ip_or_name_of_box_a
  option remote-port 6996
  option remote-subvolume locks
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host ip_or_name_of_box_b
  option remote-port 6996
  option remote-subvolume locks
end-volume

volume replicate
  type cluster/replicate
  # optional, but useful if the workload is mostly reads
  # !!!use different values on box a and box b!!!
  # option read-subvolume remote1   # on box a
  # option read-subvolume remote2   # on box b
  subvolumes remote1 remote2
end-volume

#
# /etc/fstab
#
/etc/glusterfs/glusterfs.vol /some_folder  glusterfs  noatime  0  0

noatime is optional of course. Depends on your needs.
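For testing without fstab, the manual equivalent should be something
like this (assuming the mount.glusterfs helper shipped with your
packages):

mount -t glusterfs /etc/glusterfs/glusterfs.vol /some_folder

# or by invoking the client binary directly:
glusterfs -f /etc/glusterfs/glusterfs.vol /some_folder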

- Robert


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] snmp

2010-04-16 Thread David Christensen
Given that ssh access is disabled by default (and I understand why
that is), I would be interested in learning what the options are for
enabling something like net-snmp on the server. Between snmp and ssh
access there would be a starting point for getting data.


David Christensen



On Apr 16, 2010, at 2:35 AM, Daniel Maher dma+glus...@witbe.net wrote:


On 04/16/2010 03:01 AM, Bryan McGuire wrote:

Hello,

Is there a way to monitor a gluster platform server via snmp?


Given that you can configure snmpd to trigger and report the results
of more or less anything, the answer is theoretically « yes ». The
real question is whether you can gather the data you want reported in
the first place.



--
Daniel Maher dma+gluster AT witbe DOT net
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] DHT, pre-existing data unevenly distributed

2010-04-16 Thread Dan Bretherton
I have been using DHT to join together two large filesystems (2.5TB)
containing pre-existing data.  I solved the problem of ls not seeing
all the files by doing an rsync --dry-run from the individual brick
directories to the glusterfs-mounted volume.  I am using
glusterfs-3.0.2 and option lookup-unhashed yes for DHT on the
client.  All seemed to be well until the volume started to get nearly
full: despite also using option min-free-disk 10%, one of the
bricks became 100% full, preventing any further writes to the whole
volume.  I managed to get going again by manually transferring some
data from one server to the other, making the two more evenly
balanced, but I would like to find a more permanent solution.  I would
also like to know if this sort of thing is supposed to happen with DHT
and pre-existing data, in the situation where the data is not evenly
distributed across the bricks.  I have included my client volume file
at the bottom of this message.
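For reference, the dry run I mean looks roughly like this; the brick and
mount paths are placeholders for your own layout:

# run on each server: -n (--dry-run) copies nothing, but the lookups it
# triggers on the destination make DHT notice files that so far exist
# only on this brick
rsync -avn /export/brick1/ /mnt/glusterfs/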

I tried using the unify translator instead, even though it is
supposedly now obsolete, but glusterfs crashed (segfault) when I tried
to mount the volume.  I thought perhaps unify was no longer supported
in 3.0.2, so I didn't pursue that option any further.  However, if
unify turns out to be better than DHT for pre-existing data situations
I will have to find out what went wrong.  Should I be using the unify
translator instead of DHT for pre-existing data that is unevenly
distributed across bricks?  And if I can continue with DHT, can I stop
using option lookup-unhashed yes at some point?

Regards,
Dan Bretherton

## Client vol file
volume romulus
type protocol/client
option transport-type tcp
option remote-host romulus
option remote-port 6996
option remote-subvolume brick1
end-volume

volume perseus
type protocol/client
option transport-type tcp
option remote-host perseus
option remote-port 6996
option remote-subvolume brick1
end-volume

volume distribute
type cluster/distribute
option min-free-disk 10%
option lookup-unhashed yes
subvolumes romulus perseus
end-volume

volume io-threads
  type performance/io-threads
  #option thread-count 8  # default is 16
  subvolumes distribute
end-volume

volume io-cache
type performance/io-cache
option cache-size 1GB
subvolumes io-threads
end-volume

volume main
  type performance/stat-prefetch
  subvolumes io-cache
end-volume


-- 
Mr. D.A. Bretherton
Reading e-Science Centre
Environmental Systems Science Centre
Harry Pitt Building
3 Earley Gate
University of Reading
Reading, RG6 6AL
UK

Tel. +44 118 378 7722
Fax: +44 118 378 6413
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] novice kind of question.. replication(raid)

2010-04-16 Thread Jenn Fountain
Jumping on this thread with a relevant (I think) question - I am new to
gluster as well.

Where do you typically work with the files - local or gluster mount? I.e.:
/repl/export - local; /mnt/glusterfs - gluster mount.

Would you work with the files on /repl/export and then copy them (automated
via a script, or can gluster automate this?) to /mnt/glusterfs so they
replicate, or work with them on /mnt/glusterfs and have them replicate?
Sorry for the novice question, but I am a novice.

-Jenn





On Apr 16, 2010, at 10:01 AM, RW wrote:

 
 many thanks Robert for your quick reply,
 I still probably am missing/misunderstanding the big picture here; what
 about this:

 box a                      box b
 /dir_1                     /dir_1
   ^                          ^
 services locally           services locally
 read/write to /dir_1       read/write to /dir_1
 
 This is basically the setup I described with my config files.
 /dir_1 (or /some_folder in your former mail) is the client mount.
 Everything you copy in there will be replicated to box a and
 box b. It doesn't matter if you do the copy on box a or b.
 But you need a different location for glusterfsd (the GlusterFS
 daemon) to store the files locally. This could be /opt/glusterfsbackend
 for example. You need this on both hosts and you need the mounts
 (client) on both hosts.
 
 - can all these local services/processes, whatever these might be,
 not know about mounting and all this stuff?
 
 You need to copy glusterfsd.vol to both hosts, e.g. to /etc/glusterfs/.
 Then you start glusterfsd (on Gentoo this is /etc/init.d/glusterfsd
 start). Now you should see a glusterfsd process on both hosts.
 You also copy glusterfs.vol to both hosts. As you can see in my
 /etc/fstab, I supply the glusterfs.vol file as the filesystem
 and glusterfs as the type. You now mount GlusterFS as you would
 with every other filesystem. If you now copy a file to /some_folder
 on box a, it will automatically be replicated to box b, and after
 that it will immediately be available on box b. The replication
 is done by the client (the mountpoint in your case, if that
 helps you understand better). The servers basically only provide the
 backend services to store the data somewhere on a brick (host).
 In my example above this was /opt/glusterfsbackend.
 
 - and do the servers between themselves make sure (resolve conflicts, etc.)
 that the content of dir_1 on both boxes is the same?
 
 Most of the time ;-) There are situations where conflicts can
 occur, but in this basic setup they're rare. You have to monitor
 the log files. But GlusterFS provides self healing, which means
 that if a backend (host) goes down, the files generated on the
 good host - while the bad host is down - will be copied to the failed
 host when it is up again. But this will not happen immediately.
 This is the magic part of GlusterFS ;-)
 
 - so whatever happens (locally) on box_a is replicated (through the
 servers) on box_b and vice versa -
 is that possible with GlusterFS, or do I need to be looking for something else?

 As long as you copy the files into the glusterfs mount (in your
 case /some_folder), the files will be copied to box b if you
 copy them on box a, and vice versa.
 
 and your configs - do both files, glusterfsd.vol and glusterfs.vol, go to
 both box_a and box_b?
 
 Yes.
 
 does mount need to be executed on both boxes as well?
 
 Yes.
 
 - Robert
 
 
 thanks again Robert
 
 
 

Re: [Gluster-users] novice kind of question.. replication(raid)

2010-04-16 Thread RW
You would normally always work with /mnt/glusterfs (the glusterfs mount),
because every change will immediately be replicated,
and that is normally what you want.

Maybe there is some confusion about what is meant by locally. Basically
in this setup everything is local ;-) If you look in the
glusterfsd.vol file you see an option directory. This directory
has to exist on both hosts if you want a RAID 1 setup, which
means that your data will be stored on both backends, which in
turn means that the data will be duplicated on different hosts.
This will save your data if one host explodes ;-) This is
what glusterfsd will do for you. It stores the data in the
directory specified in option directory in glusterfsd.vol.
This directory is really local to every backend. But you would
normally not make any changes in this directory. Strictly speaking:
Do NOT change anything there.

But something has to do the replication, and this is what
the client/mount will do for you. If you mount glusterfs.vol
on /mnt/glusterfs e.g. on both hosts, you get a GlusterFS
mount. In our case it is a replicated mount. As you see in
volume replicate, the option subvolumes remote1 remote2
will do the magic. It basically says: if someone copies a
file to /mnt/glusterfs, store it on remote1 and remote2 in
directory /opt/glusterfsbackend (to get back to my example
below). So in our case it is not glusterfsd that replicates the
data; the client/mount does it.

As long as you can live with some inaccuracy you can do
read-only things like find, du, ... in the backend directory
/opt/glusterfsbackend. This will be much faster. But don't
change anything there (I know someone will bash me for
this ... ;-) ).
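To make it concrete, a rough sketch (paths from my example; adjust to
your own setup):

# work through the mount - this is what gets replicated to both hosts:
cp report.pdf /mnt/glusterfs/

# read-only peeking at the backend is tolerated (fast, but may be stale):
du -sh /opt/glusterfsbackend

# never write to the backend directly - the replicate translator
# won't know about it:
# cp report.pdf /opt/glusterfsbackend/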

- Robert


Re: [Gluster-users] snmp

2010-04-16 Thread Bryan McGuire

Hello

I guess this is what I was getting at with my original question.

Since ssh is disabled, how would one go about configuring the Gluster
platform to respond to snmp requests?


Bryan McGuire
Senior Network Engineer
NewNet 66

918.231.8063
bmcgu...@newnet66.org




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Self heal with VM Storage

2010-04-16 Thread Justice London
After the self heal finishes it sort of works. Usually this destroys InnoDB
if you're running a database. Most often, though, it also causes some
libraries and similar to not be properly read by the VM guest, which means
you have to reboot it to fix this. It should be fairly easy to
reproduce... just shut down a storage brick (any configuration... it doesn't
seem to matter). Make sure of course that you have a running VM guest (KVM,
etc.) using the gluster mount. You'll then turn off (unplug, etc.) one of the
storage bricks and wait a few minutes... then re-enable it.
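Roughly the sequence I mean, with placeholder names (guest image and
hosts are hypothetical):

virsh list                         # confirm a KVM guest is running off the gluster mount
ssh root@brick2 poweroff           # take one storage brick offline (or pull its cable)
sleep 300                          # let live data accumulate on the surviving brick
# bring brick2 back up, then access the image from the client side;
# that access kicks off self heal and the guest stalls until it finishes:
md5sum /mnt/glusterfs/guest1.img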

Justice London
jlon...@lawinfo.com

-Original Message-
From: Tejas N. Bhise [mailto:te...@gluster.com] 
Sent: Thursday, April 15, 2010 7:41 PM
To: Justice London
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Self heal with VM Storage

Justice,

Thanks for the description. So, does this mean
that once the self heal is over, the guest
eventually starts to work fine?

We will reproduce this in-house and get back.

Regards,
Tejas.
- Original Message -
From: Justice London jlon...@lawinfo.com
To: Tejas N. Bhise te...@gluster.com
Cc: gluster-users@gluster.org
Sent: Friday, April 16, 2010 1:18:36 AM
Subject: RE: [Gluster-users] Self heal with VM Storage

Okay, but what happens when a brick shuts down and is added back to the
cluster? This would be after some live data has been written to the other
bricks.

From what I was seeing, access to the file is locked. Is this not the case?
If file access is being locked, it will obviously cause issues for anything
trying to read/write to the guest at the time.

Justice London
jlon...@lawinfo.com

-Original Message-
From: Tejas N. Bhise [mailto:te...@gluster.com] 
Sent: Thursday, April 15, 2010 12:33 PM
To: Justice London
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Self heal with VM Storage

Justice,

From posts from the community on this user list, I know
that there are folks who run hundreds of VMs out of
gluster.

So it's probably more about the data usage than the
generic viability concern you raised in your post.

Gluster does not support databases, though many people
use them on gluster without much problem.

Please let me know if you see some problem with unstructured
file data on VMs. I would be happy to help debug that problem.

Regards,
Tejas.


- Original Message -
From: Justice London jlon...@lawinfo.com
To: gluster-users@gluster.org
Sent: Friday, April 16, 2010 12:52:19 AM
Subject: [Gluster-users] Self heal with VM Storage

I am running gluster as a storage backend for VM storage (KVM guests). If
one of the bricks is taken offline (even for an instant), on bringing it
back up it runs the metadata check. This causes the guest both to stop
responding until the check finishes and to ruin data that was in flight
(SQL data, for instance). I'm guessing the file is locked while being
checked. Is there any way to fix this? Without a fix, I'm not certain how
viable gluster will be, or can be, for VM storage.

 

Justice London

jlon...@lawinfo.com


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




Re: [Gluster-users] GSP nodes on different networks

2010-04-16 Thread Bala.JA


Hi,

It's not yet supported directly.  However, it's possible to provision the
servers on one subnet and then move some of them to a different subnet by
editing the network settings by hand (the 3.0.4 release will have full
network configuration functionality).


Thanks,

Regards,
Bala


Diego Zuccato wrote:

Hi all.

I couldn't yet find a definitive answer to this question: can Gluster
Storage Platform handle two servers on different subnets (connected by a
100Mbit link) with both nodes acting as active NFS servers?

The question arises because the installer asks you to specify a range
of IPs on the same subnet the main node is on...


If it's possible, can someone please point me in the right direction?

TIA!



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] novice kind of question.. replication(raid)

2010-04-16 Thread RW
See my answers in the text.

 Robert, thank you ever so much for clarifying the picture,

 but I still wonder why I need all this, because to me that seems like
 first-aid functionality
 that should be there in any network distributed fs..
 so I wonder, is it possible with glusterfs to get the following:

 have the server (backend) working as a daemon on two (or any number of) boxes,
 have these server(s) on the box(es) watching over a local tree (folder),
 and basically these servers (backends) would be syncing with each other,
 and would be doing it only to ensure that the content of this tree is
 the same on all boxes

Puh... I don't know if I understand you correctly, but it looks like
you're looking for a filesystem which requires central storage
(a SAN), like GFS/GFS2 (Red Hat) or OCFS (Oracle Cluster File System).
GFS or GFS2 can also be used as a local filesystem. GFS/GFS2 is closer
to what you've described above.

 server_1 --- server_2 --- server_3
     |            |            |
 /watch_me    /watch_me    /watch_me
 
 so no mounts; a process changes something in this local /watch_me on
 server_1, and server_1 propagates (obviously working through the logic)
 the change to the other servers, and vice versa

 is it possible, maybe by introducing the client part of the config into
 glusterfsd.vol, to have it like this? without a client having to
 mount/configure replication?

Well, if I haven't missed something, then the short answer should be: no.
Since the glusterfsd daemons (backend) are only responsible for storing
the data locally (besides some other things, of course), you need a
mount point, because the magic of distribution/replication lies in the
client (configuration).

But I can show you a configuration where (almost) no mount is needed,
though I doubt that it will help you. We're using GlusterFS where we
have a central CMS (content management system). On this CMS host we
have a GlusterFS mount which replicates the uploaded pictures to 8
other hosts. On each of these 8 hosts glusterfsd is running, of course,
and it stores the files locally on each host. The 8 hosts
run Apache webservers which deliver these pictures to the web browsers
out there. This scenario is very practical if you need to distribute
files from a central location to many other hosts. Important to note
here is that you really only read the files and do not modify them
(besides on the host which has the CMS, of course). Changes on the
backends won't be replicated, and you would probably get strange results
over time.
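In vol-file terms that is just a replicate volume with eight children
on the CMS host; a trimmed sketch along the lines of my earlier config
(hostnames are made up):

volume web1
  type protocol/client
  option transport-type tcp
  option remote-host web1
  option remote-subvolume locks
end-volume

# ... web2 through web8 defined the same way ...

volume pictures
  type cluster/replicate
  subvolumes web1 web2 web3 web4 web5 web6 web7 web8
end-volume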

 other than that, glusterfs feels cool. for the last two days I was
 fiddling with Coda, but in the end it crashes way too often, at least
 Fedora's rpm does. yet there is (was) a problem with glusterfs for me
 too, if anybody uses Fedora:
 https://bugzilla.redhat.com/show_bug.cgi?id=555728

I've had problems on Gentoo up until version 3.0.2; 3.0.2 was the
first version which worked quite well for us. There are still some
issues left, but I haven't tested 3.0.4 yet.

 ps. is it really as the docs say, that glusterfs won't work on slow and
 flaky networks? is 1GbE the minimum?

I would definitely recommend 1GbE. If you need a filesystem for
slow and flaky networks (over WAN), maybe you should have a look at AFS
(http://en.wikipedia.org/wiki/Andrew_File_System). It is more
complicated to set up, though, and I wouldn't compare GlusterFS and AFS
directly.

- Robert


 cheers
 
 

Re: [Gluster-users] novice kind of question.. replication(raid)

2010-04-16 Thread Vikas Gorur

 server_1 --- server_2 --- server_3
     |            |            |
 /watch_me    /watch_me    /watch_me
 
 so no mounts; a process changes something in this local /watch_me on
 server_1, and server_1 propagates (obviously working through the logic)
 the change to the other servers, and vice versa

 is it possible, maybe by introducing the client part of the config into
 glusterfsd.vol, to have it like this? without a client having to
 mount/configure replication?


This is easy to do. Have a directory named /backend on all three servers.
Run a GlusterFS server there which simply exports this backend directory.
Write a client volume file which has replication in it and has these three
servers as its children. Using this client volume file, mount on /watch_me
on all three servers.

Now when a process writes to /watch_me on any server, the changes are
propagated to all three servers.

To understand how to write volume files, you can try this command and take
a look at the files it generates. This will generate files for a 2-server
replicate setup, but you can easily extend it to three servers.

$ glusterfs-volgen --raid=1 --name=testvolume server_1:/backend server_2:/backend

(you can give an IP address instead of a name like server_1).
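Extending to three servers should just mean adding a third
protocol/client volume to the generated client file and listing it as a
child of the replicate volume, roughly (an untested sketch, names as
above):

volume replicate
  type cluster/replicate
  subvolumes server_1 server_2 server_3
end-volume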

--
Vikas Gorur
Engineer - Gluster, Inc.
+1 (408) 770 1894
--



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] snmp

2010-04-16 Thread Vikas Gorur

On Apr 16, 2010, at 7:56 AM, Bryan McGuire wrote:

 Hello
 
 I guess this is what I was getting at with my original question.
 
 Since ssh is disabled, how would one go about configuring the Gluster
 platform to respond to snmp requests?

You can ssh to a platform node using:

username: gluster
password: glusteradmin

and then do sudo -s to get a root shell.
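So, concretely (the snmpd part depends on what the platform image ships
with, so treat that as a guess):

ssh gluster@your-platform-node   # password as above
sudo -s                          # root shell
# from here, enabling net-snmp depends on the underlying distro, e.g.:
# /etc/init.d/snmpd start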

--
Vikas Gorur
Engineer - Gluster, Inc.
+1 (408) 770 1894
--







___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Exporting nfs share with glusterfs?

2010-04-16 Thread Vikas Gorur

On Apr 16, 2010, at 3:27 AM, Burnash, James wrote:

 I have not deployed this configuration - I was simply correcting the earlier
 statement that NFS doesn't support extended attributes.

Right. I was referring to NFSv3 when I said it doesn't support extended 
attributes.

 I would be interested to hear if anyone is implementing NFSv4 on top of Gluster.

Gluster should theoretically work on an NFSv4 backend. We would be
interested to hear about it as well if anyone gets it working.

--
Vikas Gorur
Engineer - Gluster, Inc.
+1 (408) 770 1894
--







___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] snmp

2010-04-16 Thread David Christensen
Vikas,

Are you sure that the password you provided is the default one?  I
just tried it and it's failing.

David Christensen

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] snmp

2010-04-16 Thread Harshavardhana


David,

   if you have changed the password from the webui, then ssh uses that
same password.


Regards

--
Harshavardhana
Gluster Inc - http://www.gluster.com
+1(408)-480-1730

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] snmp

2010-04-16 Thread David Christensen
I'm in. Thanks!



 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Replicated vs distributed

2010-04-16 Thread Bryan McGuire
Could someone explain to the new person the difference between replicated and
distributed volumes?

Thanks Bryan

Sent from my iPad
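
For anyone else wondering: both translators appear in configs earlier in
this digest. cluster/replicate keeps a full copy of every file on each
subvolume (RAID 1 style), while cluster/distribute hashes file names
across its subvolumes so each file lives on exactly one. Minimal
client-side sketches of the two:

volume replicated
  type cluster/replicate      # every file stored on both children
  subvolumes remote1 remote2
end-volume

volume distributed
  type cluster/distribute     # each file hashed to one child
  subvolumes remote1 remote2
end-volume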
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Self heal with VM Storage

2010-04-16 Thread Tejas N. Bhise
Thanks, that will help reproduce internally. 

Regards,
Tejas.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Gluster and 10GigE - quick survey in the Community

2010-04-16 Thread Tejas N. Bhise
Dear Community Users,

In an effort to harden Gluster's strategy with 
10GigE technology, I would like to request the following
information from you -

1) Are you already using 10GigE with either the Gluster
   servers or clients ( or the platform ) ?

2) If not currently, are you considering using 10GigE with
   Gluster servers or clients ( or platform ) in the future ?

3) Which NIC make and driver, and on which OS, are you using or
   considering using 10GigE with Gluster?

4) If you are already using this technology, would you like to 
   share your experiences with us ?

Your feedback is extremely important to us. Please write to me soon.

Regards,
Tejas.

tejas at gluster dot com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users