[Gluster-users] how to optimize performance on server side

2010-12-22 Thread Gotwalt, P.
Hi,

What is the best practice for configuring the storage bricks?

I have 4 nodes, each with 4 disks. I want the best possible performance, so
I will make a stripe across these 4 servers. But how do I get the best
performance out of each server?

1 - Make a software stripe (with md RAID0) over all the internal disks on
each node, export that as one brick, and then:
volume create performance-volume stripe 4 node1:/bigdisk node2:/bigdisk
node3:/bigdisk node4:/bigdisk

2 - Let Gluster do the striping: make each disk its own brick and let
glusterfs stripe across all 16:
volume create performance-volume stripe 16 node1:/disk1 node1:/disk2 ...
node4:/disk3 node4:/disk4
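For what it's worth, option 1 spelled out end-to-end would look roughly like the sketch below; the device names (/dev/sdb..sde), filesystem, and mount point are assumptions, so adjust them for your hardware:

```shell
# On each of the 4 nodes: build a RAID0 md device from the 4 internal disks
# (device names /dev/sdb..sde and the /bigdisk mount point are assumptions)
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
mkdir -p /bigdisk
mount /dev/md0 /bigdisk

# Then, from any one node, stripe across the 4 single-brick servers:
gluster volume create performance-volume stripe 4 \
    node1:/bigdisk node2:/bigdisk node3:/bigdisk node4:/bigdisk
```

Option 2 trades md's fixed chunk interleaving for Gluster's stripe translator across all 16 bricks; which wins usually depends on the workload, so benchmarking both layouts is the safest answer.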

Any best practices?

Peter Gotwalt
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] NFS problems with 3.1.1

2010-12-22 Thread Kon Wilms
I'm seeing I/O errors on clients that mount a 3.1.1 volume via the Gluster
native NFS server. The clients are running NGINX.

Any advice appreciated! Perhaps my access mode is incorrect for NFS?

Throughput is about 20Mbps sustained, but the system benchmarked at about
600-900Mbps, so raw bandwidth shouldn't be the problem.

Errors along the lines of:
2010/12/22 08:09:32 [alert] 12206#0: *5807333 sendfile() failed (5:
Input/output error) while sending
which corresponds to this entry in nfs.log:
[2010-12-22 16:07:36.937671] I [dht-common.c:369:dht_revalidate_cbk]
pool-dht: subvolume pool-client-1 returned -1 (Invalid argument)

(one system logs in GMT, the other in PST, hence the differing timestamps)

- NFS in fstab on clients:
10.2.16.51:/pool /gfs1 nfs rw,bg,rsize=8192,wsize=8192,timeo=14,noatime,intr,soft,retrans=6 0 0
- GFS configuration:
2 nodes running Ubuntu 10.04.1 LTS
1 brick per node, consisting of ext4 on LVM over two 2TB drives in RAID1
(2TB usable per node)
- GFS create line:
gluster volume create pool transport tcp 10.2.16.51:/pool/raw
10.2.16.52:/pool/raw
- Access mode:
client 1 accesses nfs on node 1
client 2 accesses nfs on node 2
- Interfaces
3 gige nics on each node bonded in bond mode 6
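For what it's worth, sendfile() returning EIO is a known rough edge when nginx serves files from NFS mounts; a workaround often suggested (an assumption here, not verified against this exact setup) is to make nginx fall back to ordinary read()/write():

```nginx
# nginx.conf, inside the http {} (or server {}) block:
# avoid sendfile() on NFS-backed document roots
sendfile off;
```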

Cheers
Kon


Re: [Gluster-users] Gluster 3.1 newbie question

2010-12-22 Thread Daniel Müller
Thanks for the hint,
I changed network.ping-timeout to "5", but the behavior is only slightly
different. I would expect the same behavior from gluster as I get with
drbd.

[r...@ctdb1 ~]# gluster volume info all

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5


What about network.frame-timeout? Can I adjust that parameter so the
cluster reacts quickly when a node is down?

[r...@ctdb1 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.132.57
Uuid: 9c52b89f-a232-4f20-8ff8-9bbc6351ab79
State: Peer in Cluster (Connected)

Or is it in my /etc/glusterfs/glusterd.vol:
[r...@ctdb1 glusterfs]# cat glusterd.vol
volume management
type mgmt/glusterd
option working-directory /etc/glusterd
option transport-type tcp,socket,rdma
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
end-volume



---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Original Message-
From: Jacob Shucart [mailto:ja...@gluster.com] 
Sent: Tuesday, 21 December 2010 18:40
To: muel...@tropenklinik.de; 'Daniel Maher'; gluster-users@gluster.org
Subject: RE: [Gluster-users] Gluster 3.1 newbie question

Hello,

Please don't write to /glusterfs/export directly; writing to the backend
bricks bypasses Gluster and is not supported.  There is a ping timeout
which controls how long Gluster will wait on writes for a node that went
down.  By default this value is very high, so please run:

gluster volume set samba-vol network.ping-timeout 15

Then mount your Gluster volume somewhere and try writing to it.  You will 
see that it will pause for a while and then resume writing.
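A quick way to observe this behavior (the mount point below is an assumption; the volume name and server address are from this thread):

```shell
# mount the volume via the native client
mount -t glusterfs 192.168.132.56:/samba-vol /mnt/glusterfs

# write continuously, then stop glusterd on the other peer;
# writes should stall for roughly network.ping-timeout seconds, then resume
while true; do date >> /mnt/glusterfs/heartbeat.log; sleep 1; done
```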

-Jacob

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Daniel Müller
Sent: Tuesday, December 21, 2010 7:29 AM
To: muel...@tropenklinik.de; 'Daniel Maher'; gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.1 newbie question

Hm, previously I did not use the mount point of the volume. I wrote
directly into /glusterfs/export, and gluster did not hang while the other
peer restarted.
But the files I wrote in the meantime are not replicated.
Is there a command to get them replicated to the other node?
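For what it's worth, the commonly cited recipe for 3.1 is to trigger replicate self-heal by stat()ing every file from a client mount (not from the brick directory); the mount point below is an assumption:

```shell
# walk the whole volume from the FUSE mount to trigger self-heal on replicate
find /mnt/glusterfs -noleaf -print0 | xargs --null stat > /dev/null
```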


-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Daniel Müller
Sent: Tuesday, 21 December 2010 16:07
To: 'Daniel Maher'; gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.1 newbie question

Even with the volume started it behaves the same; perhaps I missed something:
[r...@ctdb1 ~]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export

I created the volumes like:
gluster volume create samba-vol replica 2 transport tcp 
192.168.132.56:/glusterfs/export 192.168.132.57:/glusterfs/export

Both are mounted:

# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
glusterfs#192.168.132.56:/samba-vol on /mnt/glusterfs type fuse 
(rw,allow_other,default_permissions,max_read=131072)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)




-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Daniel Maher
Sent: Tuesday, 21 December 2010 15:21
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.1 newbie question

On 12/21/2010 02:54 PM, Daniel Müller wrote:

> I have built a two-peer gluster setup on CentOS 5.5 x64
> My Version:
> glus