Re: [Gluster-users] Trouble mounting NFS

2011-12-27 Thread Bryan McGuire
Problem was solved by turning off the NFS server installed on my Ubuntu boxes. 
Just forgot to do that.

Bryan McGuire
Senior Network Engineer
NewNet 66

918.231.8063
bmcgu...@newnet66.org



On Dec 27, 2011, at 8:01 AM, Adam Tygart wrote:

> Bryan,
> 
> If your mount command is resorting to version 4 of the nfs protocol by
> default, you need to force version 3.
> 
> Try this: mount -t nfs -o vers=3,tcp 192.168.1.100:/test-vol /mnt/glusterssd
> 
> --
> Adam
> 
> On Mon, Dec 26, 2011 at 10:54, Bryan McGuire  wrote:
>> Hello,
>> 
>> I have a small distributed setup trying to test NFS.
>> Volume Name: test-vol
>> Type: Distribute
>> Status: Started
>> Number of Bricks: 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: ubuntu3:/ssdpool/gluster
>> Brick2: ubuntu:/ssdpool/gluster
>> Options Reconfigured:
>> nfs.disable: off
>> auth.allow: 192.*
>> 
>> I am trying to mount via NFS from my CentOS 5.7 box using the following 
>> command.
>> mount -t nfs 192.168.1.100:/test-vol /mnt/glusterssd
>> 
>> and I get the following
>> mount: 192.168.1.100:/test-vol failed, reason given by server: Permission 
>> denied
>> 
>> How do I allow my client permission to mount via NFS?
>> 
>> Bryan McGuire
>> Senior Network Engineer
>> NewNet 66
>> 
>> 918.231.8063
>> bmcgu...@newnet66.org
>> 
>> 
>> 
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Trouble mounting NFS

2011-12-27 Thread Bryan McGuire
Hello,

I have a small distributed setup trying to test NFS.
Volume Name: test-vol
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ubuntu3:/ssdpool/gluster
Brick2: ubuntu:/ssdpool/gluster
Options Reconfigured:
nfs.disable: off
auth.allow: 192.*

I am trying to mount via NFS from my CentOS 5.7 box using the following command.
mount -t nfs 192.168.1.100:/test-vol /mnt/glusterssd

and I get the following
mount: 192.168.1.100:/test-vol failed, reason given by server: Permission denied

How do I allow my client permission to mount via NFS?

Bryan McGuire
Senior Network Engineer
NewNet 66

918.231.8063
bmcgu...@newnet66.org



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Trouble mounting via NFS

2011-12-26 Thread Bryan McGuire
Hello,

I have a small distributed setup trying to test NFS.
Volume Name: test-vol
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ubuntu3:/ssdpool/gluster
Brick2: ubuntu:/ssdpool/gluster
Options Reconfigured:
nfs.disable: off
auth.allow: 192.*

I am trying to mount via NFS from my CentOS 5.7 box using the following command.
mount -t nfs 192.168.1.100:/test-vol /mnt/glusterssd

and I get the following
mount: 192.168.1.100:/test-vol failed, reason given by server: Permission denied

How do I allow my client permission to mount via NFS?

Bryan McGuire
Senior Network Engineer
NewNet 66

918.231.8063
bmcgu...@newnet66.org



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Infiniband

2011-01-09 Thread Bryan McGuire
Hello,

I am looking into GlusterFS as a high availability solution for our email 
servers. I am new to Infiniband but find it could possibly provide us with the 
necessary speed. 

Could someone describe what I would need in the way of Infiniband hardware / 
software to complete the following.

Two to 4 front end email servers with each being a client and server for the 
GlusterFS file system performing replication of the data.

I think I would need the necessary Infiniband cards in each server along with 
an Infiniband switch. But do not have any background to determine which or even 
if this is correct. 

Thanks in advance.

Bryan McGuire
Senior Network Engineer
NewNet 66

918.231.8063
bmcgu...@newnet66.org



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Errors while rsyncing

2010-07-10 Thread Bryan McGuire

Hello,

I am rysncing an email storage to a replicated Glusterfs setup (Two  
nodes). Two Dell servers with two quad core 2.9 Ghz processors and 32  
Gigs of ram, gigabit network switch between nodes and client.


From the client 192.168.1.15:
time rsync -rzva --delete --exclude 'subdirs.dat' --exclude  
'size.dat' /fs/mail/* 192.168.1.17:/mnt/mail


Attached are the vol files.

I am receiving the following errors in etc-glusterfs- 
glusterfsd.vol.log on both nodes. Could someone shed some light on the  
problem and a possible remedy? The rsync is excruciating slow.


[2010-07-10 14:15:50] E [posix.c:654:posix_setattr] posix1: setattr  
(lstat) on /fs/gluster/agra.k12.ok.us/c/cstine/Deleted Messages/ 
201007021652241566.imap failed: No such file or directory
[2010-07-10 14:15:51] E [posix.c:654:posix_setattr] posix1: setattr  
(lstat) on /fs/gluster/agra.k12.ok.us/c/cstine/Deleted Messages/ 
201007021337499580.imap failed: No such file or directory
[2010-07-10 14:15:51] E [posix.c:654:posix_setattr] posix1: setattr  
(lstat) on /fs/gluster/agra.k12.ok.us/c/cstine/Deleted Messages/ 
201007021337449577.imap failed: No such file or directory
[2010-07-10 14:15:51] E [posix.c:654:posix_setattr] posix1: setattr  
(lstat) on /fs/gluster/agra.k12.ok.us/c/cstine/Deleted Messages/ 
201007021337199574.imap failed: No such file or directory
[2010-07-10 14:15:52] E [posix.c:654:posix_setattr] posix1: setattr  
(lstat) on /fs/gluster/agra.k12.ok.us/c/cstine/Deleted Messages/ 
201007021121127979.imap failed: No such file or directory
[2010-07-10 14:15:52] E [posix.c:654:posix_setattr] posix1: setattr  
(lstat) on /fs/gluster/agra.k12.ok.us/c/cstine/Deleted Messages/ 
201007021120327955.imap failed: No such file or directory
[2010-07-10 14:15:52] E [posix.c:654:posix_setattr] posix1: setattr  
(lstat) on /fs/gluster/agra.k12.ok.us/c/cstine/Deleted Messages/ 
201007021119457931.imap failed: No such file or directory
[2010-07-10 14:15:52] E [posix.c:654:posix_setattr] posix1: setattr  
(lstat) on /fs/gluster/agra.k12.ok.us/c/cstine/Deleted Messages/ 
201007020213553213.imap failed: No such file or directory




Bryan McGuire


## file auto generated by /bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.17:/fs/mail 
192.168.1.16:/fs/mail

volume posix1
  type storage/posix
  option directory /fs/gluster
end-volume

volume locks1
type features/locks
subvolumes posix1
end-volume

volume brick1
type performance/io-threads
option thread-count 8
subvolumes locks1
end-volume

volume server-tcp
type protocol/server
option transport-type tcp
option auth.addr.brick1.allow *
option transport.socket.listen-port 6996
option transport.socket.nodelay on
subvolumes brick1
end-volume## file auto generated by /bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.17:/fs/mail 
192.168.1.16:/fs/mail

# RAID 1
# TRANSPORT-TYPE tcp
volume 192.168.1.17-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.17
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume 192.168.1.16-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.16
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume mirror-0
type cluster/replicate
option read-subvolume 192.168.1.16
subvolumes 192.168.1.17-1 192.168.1.16-1
end-volume

volume readahead
type performance/read-ahead
option page-count 4
subvolumes mirror-0
end-volume

volume iocache
type performance/io-cache
option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 
's/[^0-9]//g') / 5120 ))`MB
option cache-timeout 1
subvolumes readahead
end-volume

volume quickread
type performance/quick-read
option cache-timeout 20
option max-file-size 1024kB
subvolumes iocache
#subvolumes mirror-0
end-volume

volume writebehind
type performance/write-behind
option cache-size 12MB
subvolumes quickread
end-volume

volume statprefetch
type performance/stat-prefetch
subvolumes writebehind
end-volume





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Question

2010-07-05 Thread Bryan McGuire

Hello,

I really don't know how to ask this question but I will give it a try.

New two node, replicated setup, with data existing on one node.
What happens when setting up glusterfs when one of the shared bricks  
already contains data?

Will that data be replicated to the other brick?
If so when will the replication take place? Can I force the  
replication to the other brick?


Bryan McGuire
Senior Network Engineer
NewNet 66

918.231.8063
bmcgu...@newnet66.org



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Running Gluster client/server on single process

2010-06-21 Thread Bryan McGuire

Hello,


Any reasons behind commenting out the above two translators?


I was having issues with speed when dealing with small files. Craig  
Carl indicated that the adjustments might help.

Craig, are these adjustments still needed on 3.0.5?

Bryan

On Jun 20, 2010, at 11:19 PM, Vijay Bellur wrote:


On Monday 21 June 2010 06:26 AM, Bryan McGuire wrote:


I have done the following in order to test with 3.0.5 release  
candidate. Please correct me if I am wrong.


Unmounted storage on both servers.
Stopped glusterfsd.
Downloaded glusterfs-3.0.5rc6.tar.gz from 
http://ftp.gluster.com/pub/gluster/glusterfs/qa-releases/
Extracted
./configure
make
make install
Started glusterfsd
mounted storage on both servers.


Your installation does look right.

<<<>>>

## file auto generated by /bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs  
192.168.1.16:/fs


volume posix1
 type storage/posix
 option directory /fs/gluster
end-volume

volume locks1
   type features/locks
   subvolumes posix1
end-volume

volume brick1
   type performance/io-threads
   option thread-count 8
   subvolumes locks1
end-volume

volume server-tcp
   type protocol/server
   option transport-type tcp
   option auth.addr.brick1.allow *
   option transport.socket.listen-port 6996
   option transport.socket.nodelay on
   subvolumes brick1
end-volume


<<<>>>

## file auto generated by /bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs  
192.168.1.16:/fs


# RAID 1
# TRANSPORT-TYPE tcp
volume 192.168.1.16-1
   type protocol/client
   option transport-type tcp
   option remote-host 192.168.1.16
   option transport.socket.nodelay on
   option transport.remote-port 6996
   option remote-subvolume brick1
end-volume

volume 192.168.1.15-1
   type protocol/client
   option transport-type tcp
   option remote-host 192.168.1.15
   option transport.socket.nodelay on
   option transport.remote-port 6996
   option remote-subvolume brick1
end-volume

volume mirror-0
   type cluster/replicate
   subvolumes 192.168.1.15-1 192.168.1.16-1
end-volume

#volume readahead
#type performance/read-ahead
#option page-count 4
#subvolumes mirror-0
#end-volume

#volume iocache
#type performance/io-cache
#option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed  
's/[^0-9]//g') / 5120 ))`MB

#option cache-timeout 1
#subvolumes readahead
#end-volume

volume quickread
   type performance/quick-read
   option cache-timeout 1
   option max-file-size 1024kB
  # subvolumes iocache
   subvolumes mirror-0
end-volume

volume writebehind
   type performance/write-behind
   option cache-size 4MB
   subvolumes quickread
end-volume

volume statprefetch
   type performance/stat-prefetch
   subvolumes writebehind
end-volume






The configuration files do look fine.
Any reasons behind commenting out the above two translators?

Regards,
Vijay


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Running Gluster client/server on single process

2010-06-20 Thread Bryan McGuire

Tejas,

I have done the following in order to test with 3.0.5 release  
candidate. Please correct me if I am wrong.


Unmounted storage on both servers.
Stopped glusterfsd.
Downloaded glusterfs-3.0.5rc6.tar.gz from 
http://ftp.gluster.com/pub/gluster/glusterfs/qa-releases/
Extracted
./configure
make
make install
Started glusterfsd
mounted storage on both servers.

Do I need to make any changes to my configuration files?

<<<>>>

## file auto generated by /bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs  
192.168.1.16:/fs


volume posix1
  type storage/posix
  option directory /fs/gluster
end-volume

volume locks1
type features/locks
subvolumes posix1
end-volume

volume brick1
type performance/io-threads
option thread-count 8
subvolumes locks1
end-volume

volume server-tcp
type protocol/server
option transport-type tcp
option auth.addr.brick1.allow *
option transport.socket.listen-port 6996
option transport.socket.nodelay on
subvolumes brick1
end-volume


<<<>>>

## file auto generated by /bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs  
192.168.1.16:/fs


# RAID 1
# TRANSPORT-TYPE tcp
volume 192.168.1.16-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.16
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume 192.168.1.15-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.15
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume mirror-0
type cluster/replicate
subvolumes 192.168.1.15-1 192.168.1.16-1
end-volume

#volume readahead
#type performance/read-ahead
#option page-count 4
#subvolumes mirror-0
#end-volume

#volume iocache
#type performance/io-cache
#option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed  
's/[^0-9]//g') / 5120 ))`MB

#option cache-timeout 1
#subvolumes readahead
#end-volume

volume quickread
type performance/quick-read
option cache-timeout 1
option max-file-size 1024kB
   # subvolumes iocache
subvolumes mirror-0
end-volume

volume writebehind
type performance/write-behind
option cache-size 4MB
subvolumes quickread
end-volume

volume statprefetch
type performance/stat-prefetch
subvolumes writebehind
end-volume



Bryan McGuire





On Jun 20, 2010, at 12:59 PM, Tejas N. Bhise wrote:


Hi Bryan,

3.0.5 should be out soon. If you want to do some testing before it's  
officially out, you can try the latest release candidate. You don't  
need to patch at this stage. Let me know if you know how to get the  
release candidates and use them.


Regards,
Tejas.

- Original Message -
From: "Bryan McGuire" 
To: "Tejas N. Bhise" 
Cc: "gluster-users" 
Sent: Sunday, June 20, 2010 7:46:55 PM
Subject: Re: [Gluster-users] Running Gluster client/server on single  
process


Tejas,

Any idea when 3.0.5 will be released? I am very anxious for these
patches to be in production.

On another note, I am very new to Gluster let alone Linux. Could you,
or someone else, give me some guidance (How to) in applying the
patches. I would like to test them for now?

Bryan McGuire


On May 19, 2010, at 8:06 AM, Tejas N. Bhise wrote:


Roberto,

We recently made some code changes we think will considerably help
small file performance -

selective readdirp - http://patches.gluster.com/patch/3203/
dht lookup revalidation optimization - http://patches.gluster.com/patch/3204/
updated write-behind default values - http://patches.gluster.com/patch/3223/

These are tentatively scheduled to go into 3.0.5.
If its possible for you, I would suggest you test them in a non-
production environment
and see if it  helps with distribute config itself.

Please do not use in production, for that wait for the release which
these patches go in.

Do let me know if you have any questions about this.

Regards,
Tejas.


- Original Message -
From: "Roberto Franchini" 
To: "gluster-users" 
Sent: Wednesday, May 19, 2010 5:29:47 PM
Subject: Re: [Gluster-users] Running Gluster client/server on single
process

On Sat, May 15, 2010 at 10:06 PM, Craig Carl 
wrote:

Robert -
 NUFA has been deprecated and doesn't apply to any recent
version of
Gluster. What version are you running? ('glusterfs --version')


We run 3.0.4 on ubuntu 9.10 and 10.04 server.
Is there a way to mimic NUFA behaviour?

We are using gluster to store Lucene indexes. Indexes are created
locally from milions of small files and then copied to the storage.
I tried read this little files from gluster but was too slow.
So maybe a NUFA way, e.g. prefer

Re: [Gluster-users] Running Gluster client/server on single process

2010-06-20 Thread Bryan McGuire

Tejas,

Any idea when 3.0.5 will be released? I am very anxious for these  
patches to be in production.


On another note, I am very new to Gluster let alone Linux. Could you,  
or someone else, give me some guidance (How to) in applying the  
patches. I would like to test them for now?


Bryan McGuire


On May 19, 2010, at 8:06 AM, Tejas N. Bhise wrote:


Roberto,

We recently made some code changes we think will considerably help  
small file performance -


selective readdirp - http://patches.gluster.com/patch/3203/
dht lookup revalidation optimization - http://patches.gluster.com/patch/3204/
updated write-behind default values - http://patches.gluster.com/patch/3223/

These are tentatively scheduled to go into 3.0.5.
If its possible for you, I would suggest you test them in a non- 
production environment

and see if it  helps with distribute config itself.

Please do not use in production, for that wait for the release which  
these patches go in.


Do let me know if you have any questions about this.

Regards,
Tejas.


- Original Message -
From: "Roberto Franchini" 
To: "gluster-users" 
Sent: Wednesday, May 19, 2010 5:29:47 PM
Subject: Re: [Gluster-users] Running Gluster client/server on single  
process


On Sat, May 15, 2010 at 10:06 PM, Craig Carl   
wrote:

Robert -
  NUFA has been deprecated and doesn't apply to any recent  
version of

Gluster. What version are you running? ('glusterfs --version')


We run 3.0.4 on ubuntu 9.10 and 10.04 server.
Is there a way to mimic NUFA behaviour?

We are using gluster to store Lucene indexes. Indexes are created
locally from milions of small files and then copied to the storage.
I tried read this little files from gluster but was too slow.
So maybe a NUFA way, e.g. prefer local disk for read, could improve  
performance.

Let me know :)

At the moment we use dht/replicate:


#CLIENT

volume remote1
type protocol/client
option transport-type tcp
option remote-host zeus
option remote-subvolume brick
end-volume

volume remote2
type protocol/client
option transport-type tcp
option remote-host hera
option remote-subvolume brick
end-volume

volume remote3
type protocol/client
option transport-type tcp
option remote-host apollo
option remote-subvolume brick
end-volume

volume remote4
type protocol/client
option transport-type tcp
option remote-host demetra
option remote-subvolume brick
end-volume

volume remote5
type protocol/client
option transport-type tcp
option remote-host ade
option remote-subvolume brick
end-volume

volume remote6
type protocol/client
option transport-type tcp
option remote-host athena
option remote-subvolume brick
end-volume

volume replicate1
type cluster/replicate
subvolumes remote1 remote2
end-volume

volume replicate2
type cluster/replicate
subvolumes remote3 remote4
end-volume

volume replicate3
type cluster/replicate
subvolumes remote5 remote6
end-volume

volume distribute
type cluster/distribute
subvolumes replicate1 replicate2 replicate3
end-volume

volume writebehind
type performance/write-behind
option window-size 1MB
subvolumes distribute
end-volume

volume quickread
type performance/quick-read
option cache-timeout 1 # default 1 second
#  option max-file-size 256KB# default 64Kb
subvolumes writebehind
end-volume

### Add io-threads for parallel requisitions
volume iothreads
type performance/io-threads
option thread-count 16 # default is 16
subvolumes quickread
end-volume


#SERVER

volume posix
type storage/posix
option directory /data/export
end-volume

volume locks
type features/locks
subvolumes posix
end-volume

volume brick
type performance/io-threads
option thread-count 8
subvolumes locks
end-volume

volume server
type protocol/server
option transport-type tcp
option auth.addr.brick.allow *
subvolumes brick
end-volume
--
Roberto Franchini
http://www.celi.it
http://www.blogmeter.it
http://www.memesphere.it
Tel +39.011.562.71.15
jabber:ro.franch...@gmail.com skype:ro.franchini
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] slow rsync

2010-06-04 Thread Bryan McGuire

Hello,

I have a working Glusterfs setup with two nodes mirroring their  
bricks. Both are server and client.


I am trying to copy our email data (around 400 GB) from /fs/mail to / 
mnt/glusterfs/mail (gluster mount) in order to test the setup before  
production.


Using rsync the first time to 29 hours.

I am currently performing a second rsync as I would like to get up to  
date data for testing.


rsync -rva --delete /fs/mail/* /mnt/glusterfs/mail

This is still taking forever!  Any advice, or is their a faster way to  
get this accomplished?


Could I use /fs/mail as the source for glusterfs in one node and sync  
to the second node?


Bryan McGuire
Senior Network Engineer
NewNet 66

918.231.8063
bmcgu...@newnet66.org





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] mnt-mail.log - memory ???

2010-05-12 Thread Bryan McGuire
Thanks for that, it seems to be working. I can not determine if its  
faster yet as I have noticed something else.


The gluster servers are running at or above 98% memory usage. I have  
32 Gigs of memory so this seems a bit out of bounds.


I have read about a memory leak does this apply to this situation?  
link here http://comments.gmane.org/gmane.comp.file-systems.gluster.user/2701


Or is this just what gluster does when copying a lot of small files?

Bryan McGuire
Senior Network Engineer
NewNet 66

918.231.8063
bmcgu...@newnet66.org





On May 11, 2010, at 11:48 PM, Craig Carl wrote:


Bryan -
Sorry about that. You still need a "subvolumes" value, it should  
be the translator next up in the list, mirror-0. So -


volume quickread
type performance/quick-read
option cache-timeout 10
option max-file-size 64kB
subvolumes mirror-0
end-volume





- Original Message -----
From: "Bryan McGuire" 
To: "Craig Carl" 
Cc: gluster-users@gluster.org
Sent: Tuesday, May 11, 2010 6:45:44 PM GMT -08:00 US/Canada Pacific
Subject: Re: [Gluster-users] mnt-mail.log

Done. but now have this error in /var/log/glusterfs/mnt-mail.log

[2010-05-11 20:40:21] E [quick-read.c:2194:init] quickread: FATAL:
volume (quickread) not configured with exactly one child
[2010-05-11 20:40:21] E [xlator.c:839:xlator_init_rec] quickread:
Initialization of volume 'quickread' failed, review your volfile again
[2010-05-11 20:40:21] E [glusterfsd.c:591:_xlator_graph_init]
glusterfs: initializing translator failed
[2010-05-11 20:40:21] E [glusterfsd.c:1394:main] glusterfs: translator
initialization failed.  exiting

I did change the option max-file-size but when I received the errors I
put it back to 64kb.

New glusterfs.vol

## file auto generated by /bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs
192.168.1.16:/fs

# RAID 1
# TRANSPORT-TYPE tcp
volume 192.168.1.16-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.16
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume 192.168.1.15-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.15
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume mirror-0
type cluster/replicate
subvolumes 192.168.1.15-1 192.168.1.16-1
end-volume

#volume readahead
#type performance/read-ahead
#option page-count 4
#subvolumes mirror-0
#end-volume

#volume iocache
#type performance/io-cache
#option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed
's/[^0-9]//g') / 5120 ))`MB
#option cache-timeout 1
#subvolumes readahead
#end-volume

volume quickread
type performance/quick-read
option cache-timeout 10
option max-file-size 64kB
#subvolumes iocache
end-volume

volume writebehind
type performance/write-behind
option cache-size 4MB
subvolumes quickread
end-volume

volume statprefetch
type performance/stat-prefetch
subvolumes writebehind
end-volume




Bryan McGuire


On May 11, 2010, at 7:37 PM, Craig Carl wrote:


Bryan -
 Your server vol file isn't perfect for large rsync operations. The
changes I'm recommending will improve your rsync performance if you
are moving a lot of small files. Please backup the current file
before making any changes. You should comment out "readahead" and
"iocache". In the quickread section change the "option cache-
timeout" to 10 and change the "max-file-size" to the size of the
largest file of which you have many, rounded up to the nearest
factor of 4.
 After you have made the changes across all the storage nodes
please restart Gluster and measure the throughput again.

Thanks,

Craig



- Original Message -
From: "Bryan McGuire" 
To: "Craig Carl" 
Cc: gluster-users@gluster.org
Sent: Tuesday, May 11, 2010 2:31:48 PM GMT -08:00 US/Canada Pacific
Subject: Re: [Gluster-users] mnt-mail.log

Here they are,


For msvr1 - 192.168.1.15

## file auto generated by /bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs
192.168.1.16:/fs

volume posix1
 type storage/posix
 option directory /fs
end-volume

volume locks1
   type features/locks
   subvolumes posix1
end-volume

volume brick1
   type performance/io-threads
   option thread-count 8
   subvolumes locks1
end-volume

volume server-tcp
   type protocol/server
   option transport-type tcp
   option auth.addr.brick1.allow *
   option transport.socket.listen-port 6996
   option transport.socket.nodelay on
   subvolumes brick1
end-volume


## file auto generated by /bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1

Re: [Gluster-users] mnt-mail.log

2010-05-11 Thread Bryan McGuire

Done. but now have this error in /var/log/glusterfs/mnt-mail.log

[2010-05-11 20:40:21] E [quick-read.c:2194:init] quickread: FATAL:  
volume (quickread) not configured with exactly one child
[2010-05-11 20:40:21] E [xlator.c:839:xlator_init_rec] quickread:  
Initialization of volume 'quickread' failed, review your volfile again
[2010-05-11 20:40:21] E [glusterfsd.c:591:_xlator_graph_init]  
glusterfs: initializing translator failed
[2010-05-11 20:40:21] E [glusterfsd.c:1394:main] glusterfs: translator  
initialization failed.  exiting


I did change the option max-file-size but when I received the errors I  
put it back to 64kb.


New glusterfs.vol

## file auto generated by /bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs  
192.168.1.16:/fs


# RAID 1
# TRANSPORT-TYPE tcp
volume 192.168.1.16-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.16
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume 192.168.1.15-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.15
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume mirror-0
type cluster/replicate
subvolumes 192.168.1.15-1 192.168.1.16-1
end-volume

#volume readahead
#type performance/read-ahead
#option page-count 4
#subvolumes mirror-0
#end-volume

#volume iocache
#type performance/io-cache
#option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed  
's/[^0-9]//g') / 5120 ))`MB

#option cache-timeout 1
#subvolumes readahead
#end-volume

volume quickread
type performance/quick-read
option cache-timeout 10
option max-file-size 64kB
#subvolumes iocache
end-volume

volume writebehind
type performance/write-behind
option cache-size 4MB
subvolumes quickread
end-volume

volume statprefetch
type performance/stat-prefetch
subvolumes writebehind
end-volume




Bryan McGuire


On May 11, 2010, at 7:37 PM, Craig Carl wrote:


Bryan -
  Your server vol file isn't perfect for large rsync operations. The  
changes I'm recommending will improve your rsync performance if you  
are moving a lot of small files. Please backup the current file  
before making any changes. You should comment out "readahead" and  
"iocache". In the quickread section change the "option cache- 
timeout" to 10 and change the "max-file-size" to the size of the  
largest file of which you have many, rounded up to the nearest  
factor of 4.
  After you have made the changes across all the storage nodes  
please restart Gluster and measure the throughput again.


Thanks,

Craig



- Original Message -
From: "Bryan McGuire" 
To: "Craig Carl" 
Cc: gluster-users@gluster.org
Sent: Tuesday, May 11, 2010 2:31:48 PM GMT -08:00 US/Canada Pacific
Subject: Re: [Gluster-users] mnt-mail.log

Here they are,


For msvr1 - 192.168.1.15

## file auto generated by /bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs
192.168.1.16:/fs

volume posix1
  type storage/posix
  option directory /fs
end-volume

volume locks1
type features/locks
subvolumes posix1
end-volume

volume brick1
type performance/io-threads
option thread-count 8
subvolumes locks1
end-volume

volume server-tcp
type protocol/server
option transport-type tcp
option auth.addr.brick1.allow *
option transport.socket.listen-port 6996
option transport.socket.nodelay on
subvolumes brick1
end-volume


## file auto generated by /bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs
192.168.1.16:/fs

# RAID 1
# TRANSPORT-TYPE tcp
volume 192.168.1.16-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.16
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume 192.168.1.15-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.15
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume mirror-0
type cluster/replicate
subvolumes 192.168.1.15-1 192.168.1.16-1
end-volume

volume readahead
type performance/read-ahead
option page-count 4
subvolumes mirror-0
end-volume

volume iocache
type performance/io-cache
option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed
's/[^0-9]//g') / 5120 ))`MB
option cache-timeout 1
subvolumes readahead
end-volume

volume quickread
type performance/quick-read
option cache-timeout 1
option max-file

Re: [Gluster-users] mnt-mail.log

2010-05-11 Thread Bryan McGuire

Here they are,


For msvr1 - 192.168.1.15

## file auto generated by /bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs  
192.168.1.16:/fs


volume posix1
  type storage/posix
  option directory /fs
end-volume

volume locks1
type features/locks
subvolumes posix1
end-volume

volume brick1
type performance/io-threads
option thread-count 8
subvolumes locks1
end-volume

volume server-tcp
type protocol/server
option transport-type tcp
option auth.addr.brick1.allow *
option transport.socket.listen-port 6996
option transport.socket.nodelay on
subvolumes brick1
end-volume


## file auto generated by /bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs  
192.168.1.16:/fs


# RAID 1
# TRANSPORT-TYPE tcp
volume 192.168.1.16-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.16
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume 192.168.1.15-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.15
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume mirror-0
type cluster/replicate
subvolumes 192.168.1.15-1 192.168.1.16-1
end-volume

volume readahead
type performance/read-ahead
option page-count 4
subvolumes mirror-0
end-volume

volume iocache
type performance/io-cache
option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed  
's/[^0-9]//g') / 5120 ))`MB

option cache-timeout 1
subvolumes readahead
end-volume

volume quickread
type performance/quick-read
option cache-timeout 1
option max-file-size 64kB
subvolumes iocache
end-volume

volume writebehind
type performance/write-behind
option cache-size 4MB
subvolumes quickread
end-volume

volume statprefetch
type performance/stat-prefetch
subvolumes writebehind
end-volume


For msvr2 192.168.1.16

## file auto generated by /bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs  
192.168.1.16:/fs


volume posix1
  type storage/posix
  option directory /fs
end-volume

volume locks1
type features/locks
subvolumes posix1
end-volume

volume brick1
type performance/io-threads
option thread-count 8
subvolumes locks1
end-volume

volume server-tcp
type protocol/server
option transport-type tcp
option auth.addr.brick1.allow *
option transport.socket.listen-port 6996
option transport.socket.nodelay on
subvolumes brick1
end-volume


## file auto generated by /bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs  
192.168.1.16:/fs


# RAID 1
# TRANSPORT-TYPE tcp
volume 192.168.1.16-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.16
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume 192.168.1.15-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.15
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume mirror-0
type cluster/replicate
subvolumes 192.168.1.15-1 192.168.1.16-1
end-volume

volume readahead
type performance/read-ahead
option page-count 4
subvolumes mirror-0
end-volume

volume iocache
type performance/io-cache
option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed  
's/[^0-9]//g') / 5120 ))`MB

option cache-timeout 1
subvolumes readahead
end-volume

volume quickread
type performance/quick-read
option cache-timeout 1
option max-file-size 64kB
subvolumes iocache
end-volume

volume writebehind
type performance/write-behind
option cache-size 4MB
subvolumes quickread
end-volume

volume statprefetch
type performance/stat-prefetch
subvolumes writebehind
end-volume






Bryan McGuire
Senior Network Engineer
NewNet 66

918.231.8063
bmcgu...@newnet66.org





On May 11, 2010, at 4:26 PM, Craig Carl wrote:


Bryan -
  Can you send your client and server vol files?

Thanks,

Craig

--
Craig Carl
Sales Engineer
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Office - (408) 770-1884
Gtalk - craig.c...@gmail.com
Twitter - @gluster

----- Original Message -
From: "Bryan McGuire" 
To: gluster-users@gluster.org
Sent: Tuesday, May 11, 2010 2:12:13 PM GMT -08:00 US/Canada Pacific
Subject: [Gluster-users] mnt-mail.log

Hello,

I have Glusterfs 3.0.4 setup in a two node replication. It appears to
be working just fine. Although I am using rsync to move over 350 Gig
of email files

[Gluster-users] mnt-mail.log

2010-05-11 Thread Bryan McGuire

Hello,

I have Glusterfs 3.0.4 setup in a two node replication. It appears to  
be working just fine. Although I am using rsync to move over 350 Gig  
of email files and the process is very slow.


I have noticed the following in the file /var/log/gluserfs/mntl- 
mail.log... could someone explain what the lines mean. Thanks


[2010-05-11 15:41:51] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs- 
fuse: LOOKUP(/_outgoing/retry/201005100854105298-1273614110_8.tmp)  
inode (ptr=0xa235c70, ino=808124434, gen=5468694309383486383) found  
conflict (ptr=0x2aaaea26c290, ino=808124434, gen=5468694309383486383)
[2010-05-11 15:46:53] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs- 
fuse: LOOKUP(/_outgoing/retry/201005101016464462-1273614395_8.tmp)  
inode (ptr=0x2aaaf07f5550, ino=808124438, gen=5468694309383486385)  
found conflict (ptr=0x82e4420, ino=808124438, gen=5468694309383486385)
[2010-05-11 15:46:53] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs- 
fuse: LOOKUP(/_outgoing/retry/201005100830599960-1273614395_8.tmp)  
inode (ptr=0x2aaac01da520, ino=808124430, gen=5468694309383486381)  
found conflict (ptr=0x60d7a90, ino=808124430, gen=5468694309383486381)
[2010-05-11 15:46:53] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs- 
fuse: LOOKUP(/_outgoing/retry/201005101417175132-1273614396_8.tmp)  
inode (ptr=0x2aaaf07f5550, ino=808124446, gen=5468694309383486389)  
found conflict (ptr=0x8eb16e0, ino=808124446, gen=5468694309383486389)
[2010-05-11 15:51:53] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs- 
fuse: LOOKUP(/_outgoing/retry/201005100749045904-1273614665_8.tmp)  
inode (ptr=0x1ec11ee0, ino=808124420, gen=5468694309383486379) found  
conflict (ptr=0x2aaaea26bd30, ino=808124420, gen=5468694309383486379)




Bryan

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Question

2010-05-07 Thread Bryan McGuire

Hello,

I want to use the replication ability of  GulsterFS as a storage for  
our mail servers and have the following.


Two CentOS boxes - Gluster 3.0.4 installed on both. Each box is a  
server and a client so we can load balance the email traffic between  
the two servers.


Everything works fine except when one server goes down. The other  
seems to loose connectivity to the shared storage. I do an ls /mnt/ 
mail and it just hangs and never responds.


Is this by design or am I missing something?

Bryan McGuire


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] New setup - Transport endpoint is not connected

2010-05-06 Thread Bryan McGuire
on transport-type tcp
 38:   option auth.addr.brick.allow *
 39:   option auth.addr.brick-ns.allow *
 40:   subvolumes brick brick-ns
 41: end-volume
 42:

+ 
--+
[2010-05-06 18:14:39] D [glusterfsd.c:1382:main] glusterfs: running in  
pid 5400
[2010-05-06 18:14:39] D [transport.c:123:transport_load] transport:  
attempt to load file /usr/local/lib/glusterfs/3.0.4/transport/socket.so
[2010-05-06 18:14:39] D [name.c:553:server_fill_address_family]  
server: option address-family not specified, defaulting to inet/inet6
[2010-05-06 18:14:39] D [io-threads.c:2841:init] brick-ns: io-threads:  
Autoscaling: off, min_threads: 8, max_threads: 8
[2010-05-06 18:14:39] D [io-threads.c:2841:init] brick: io-threads:  
Autoscaling: off, min_threads: 8, max_threads: 8
[2010-05-06 18:14:39] N [glusterfsd.c:1408:main] glusterfs:  
Successfully started


Bryan McGuire
Senior Network Engineer
NewNet 66

918.231.8063
bmcgu...@newnet66.org





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Upgrading

2010-05-06 Thread Bryan McGuire

Hello,

I am in the testing stages of GlusterFS. I have installed 3.03 on  
CentOS 5.4. I see now that version 3.04 is available. Is the upgrade  
as simple as unmounting, stopping, and running the new RPM. If not  
could someone point me to the documentation please. Thank You.


Bryan

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Replicated vs distributed

2010-04-16 Thread Bryan McGuire
Could someone explain to the new person the difference between replicated and 
distributed volumes

Thanks Bryan

Sent from my iPad
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] snmp

2010-04-16 Thread Bryan McGuire

Hello

I guess this is what I was getting at with my original question.

Since ssh is disabled how would one go about configuring Gluster  
platform to respond to snmp requests?


Bryan McGuire
Senior Network Engineer
NewNet 66

918.231.8063
bmcgu...@newnet66.org


On Apr 16, 2010, at 8:16 AM, David Christensen wrote:

Given that ssh access is disabled by default, which I understand why  
that is, I would be interested in learning what the options are for  
enabling something like net-snmp on the server.  Between snmp and  
ssh access there would be a starting point for getting data.


David Christensen



On Apr 16, 2010, at 2:35 AM, Daniel Maher   
wrote:



On 04/16/2010 03:01 AM, Bryan McGuire wrote:

Hello,

Is there a way to monitor a gluster platform server via snmp?


Given that you can configure snmpd to trigger and report the  
results of more or less anything, the answer is theoretically « yes  
».  The real question is whether you can gather the data you want  
reported in the first place.



--
Daniel Maher 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] snmp

2010-04-15 Thread Bryan McGuire

Hello,

Is there a way to monitor a gluster platform server via snmp?

Bryan McGuire
Senior Network Engineer
NewNet 66

918.231.8063
bmcgu...@newnet66.org





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] New Install help

2010-04-13 Thread Bryan McGuire

Thanks for your immediate responses.

I seem to have it working now.
I changed to a mirroring two drives in separate nodes. And also  
changed the volume name to "volume1".

Don't know which one fixed it or if neither did? But it works now.

Bryan McGuire
Senior Network Engineer
NewNet 66

918.231.8063
bmcgu...@newnet66.org





On Apr 13, 2010, at 2:22 PM, Bryan McGuire wrote:


Hello,

I'm very new to Gluster but have managed to install the Gluster  
Platform. I have created a volume called vol1 that is a mirror of  
two drives in a node.


I am trying to mount the volume from a client using glusterfs.

Here is the command I am using:
mount -t glusterfs 192.168.1.17:vol1-tcp /mnt/glusterfs

I get the following:
Error while getting volume file from server 192.168.1.17

Please guide me in the correct direction.

Bryan McGuire


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] New Install help

2010-04-13 Thread Bryan McGuire

Hello,

I'm very new to Gluster but have managed to install the Gluster  
Platform. I have created a volume called vol1 that is a mirror of two  
drives in a node.


I am trying to mount the volume from a client using glusterfs.

Here is the command I am using:
mount -t glusterfs 192.168.1.17:vol1-tcp /mnt/glusterfs

I get the following:
Error while getting volume file from server 192.168.1.17

Please guide me in the correct direction.

Bryan McGuire


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users