Re: [Gluster-users] reloading client config at run-time

2010-04-19 Thread Tejas N. Bhise
Not today, but with dynamic volume management, it will be possible in a future 
release. It will help add and remove servers, migrate data, change the 
configuration and replication count etc on the fly - its part of our 
virtualization and cloud strategy.


Regards,
Tejas.

- Original Message -
From: "D.P." 
To: gluster-users@gluster.org
Sent: Monday, April 19, 2010 10:48:28 PM
Subject: [Gluster-users] reloading client config at run-time

is it possible?

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] reloading client config at run-time

2010-04-19 Thread Count Zero
I was told on the IRC channel that it's being planned in a future release... ;-)

On Apr 19, 2010, at 10:18 AM, D.P. wrote:

> is it possible?
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] reloading client config at run-time

2010-04-19 Thread D.P.
is it possible?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] unix sockets and client

2010-04-19 Thread pawel eljasz

dear users
having this simplistic config on client:

volume vmail
type protocol/client
option transport-type unix
option transport.unix.connect-path /var/run/glusterfsd.d/vmail.sock
option remote-subvolume vmail
end-volume

these are being logged:
option transport.unix.connect-path not specified for address-family unix

I surely am missing something, but what?

regards
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Force usage of local store

2010-04-19 Thread pawel eljasz

hi,
have you seen this, it might be helpful I think
http://gluster.com/community/documentation/index.php/GlusterFS_1.3_High_Availability_Storage_with_GlusterFS

On 18/04/10 21:27, Kelvin Westlake wrote:

Hi Guys



I have 2 servers setup in RAID 1, also each have a client to the
store/volume - How can I ensure each client uses the local copy of
files?



Thanks

Kelvin




This email with any attachments is for the exclusive and confidential use of 
the addressee(s) and may contain legally privileged information. Any other 
distribution, use or reproduction without the senders prior consent is 
unauthorised and strictly prohibited. If you receive this message in error 
please notify the sender by email and delete the message from your computer.

Netbasic Limited registered office and business address is 9 Funtley Court, 
Funtley Hill, Fareham, Hampshire PO16 7UY. Company No. 04906681. Netbasic 
Limited is authorised and regulated by the Financial Services Authority in 
respect of regulated activities. Please note that many of our activities do not 
require FSA regulation.

   



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
   
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster and 10GigE - quick survey in the Community

2010-04-19 Thread Burnash, James
Sorry - didn't read the entire message. My bad - comments inline below:

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Burnash, James
Sent: Monday, April 19, 2010 9:44 AM
To: 'Tejas N. Bhise'; gluster-users@gluster.org; gluster-de...@nongnu.org
Subject: Re: [Gluster-users] Gluster and 10GigE - quick survey in the Community

Yes, I am currently configuring Glusterfs servers using 10Gb Ethernet 
connecting to (ultimately) hundreds of clients on 1Gb Ethernet.

Current HP Proliant platform for both clients and servers, running CentOS  5.2 
on the clients and 5.4 on the servers

James



DISCLAIMER:
This e-mail, and any attachments thereto, is intended only for use by the 
addressee(s) named herein and may contain legally privileged and/or 
confidential information. If you are not the intended recipient of this e-mail, 
you are hereby notified that any dissemination, distribution or copying of this 
e-mail, and any attachments thereto, is strictly prohibited. If you have 
received this in error, please immediately notify me and permanently delete the 
original and any copy of any e-mail and any printout thereof. E-mail 
transmission cannot be guaranteed to be secure or error-free. The sender 
therefore does not accept liability for any errors or omissions in the contents 
of this message which arise as a result of e-mail transmission.
NOTICE REGARDING PRIVACY AND CONFIDENTIALITY Knight Capital Group may, at its 
discretion, monitor and review the content of all e-mail communications. 
http://www.knight.com
-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Tejas N. Bhise
Sent: Friday, April 16, 2010 11:48 PM
To: gluster-users@gluster.org; gluster-de...@nongnu.org
Subject: [Gluster-users] Gluster and 10GigE - quick survey in the Community

Dear Community Users,

In an effort to harden Gluster's strategy with
10GigE technology, I would like to request the following
information from you -

1) Are you already using 10GigE with either the Gluster
   servers or clients ( or the platform ) ?

Yes, I am currently configuring Glusterfs servers using 10Gb Ethernet 
connecting to (ultimately) hundreds of clients on 1Gb Ethernet.

2) If not currently, are you considering using 10GigE with
   Gluster servers or clients ( or platform ) in the future ?

N/A

3) Which make, driver and on which OS are you using or considering
   using this 10GigE technology with Gluster ?

Current HP Proliant platform for both clients and servers, running 
CentOS  5.2 on the   clients and 5.4 on the servers. Driver 
/lib/modules/2.6.18- 164.el5/kernel/drivers/net/ixgbe/ixgbe.ko

4) If you are already using this technology, would you like to
   share your experiences with us ?

Sure. Currently doing a head to head evaluation / pilot of Gluster 
3.0.4 vs. Lustre 1.8.1. 6 storage bricks with 37.5 raw TB of attached 
storage each. Looking to measure   read, write, and mixed I/O using 1 to 7 
clients (in the prototype configuration) with   all machines sitting on the 
same VLAN. Backbone of network is 10Gb.

More to come as results become available.


Your feedback is extremely important to us. Please write to me soon.

Regards,
Tejas.

tejas at gluster dot com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster and 10GigE - quick survey in the Community

2010-04-19 Thread Burnash, James
Yes, I am currently configuring Glusterfs servers using 10Gb Ethernet 
connecting to (ultimately) hundreds of clients on 1Gb Ethernet.

Current HP Proliant platform for both clients and servers, running CentOS  5.2 
on the clients and 5.4 on the servers

James



DISCLAIMER:
This e-mail, and any attachments thereto, is intended only for use by the 
addressee(s) named herein and may contain legally privileged and/or 
confidential information. If you are not the intended recipient of this e-mail, 
you are hereby notified that any dissemination, distribution or copying of this 
e-mail, and any attachments thereto, is strictly prohibited. If you have 
received this in error, please immediately notify me and permanently delete the 
original and any copy of any e-mail and any printout thereof. E-mail 
transmission cannot be guaranteed to be secure or error-free. The sender 
therefore does not accept liability for any errors or omissions in the contents 
of this message which arise as a result of e-mail transmission.
NOTICE REGARDING PRIVACY AND CONFIDENTIALITY Knight Capital Group may, at its 
discretion, monitor and review the content of all e-mail communications. 
http://www.knight.com
-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Tejas N. Bhise
Sent: Friday, April 16, 2010 11:48 PM
To: gluster-users@gluster.org; gluster-de...@nongnu.org
Subject: [Gluster-users] Gluster and 10GigE - quick survey in the Community

Dear Community Users,

In an effort to harden Gluster's strategy with
10GigE technology, I would like to request the following
information from you -

1) Are you already using 10GigE with either the Gluster
   servers or clients ( or the platform ) ?

2) If not currently, are you considering using 10GigE with
   Gluster servers or clients ( or platform ) in the future ?

3) Which make, driver and on which OS are you using or considering
   using this 10GigE technology with Gluster ?

4) If you are already using this technology, would you like to
   share your experiences with us ?

Your feedback is extremely important to us. Please write to me soon.

Regards,
Tejas.

tejas at gluster dot com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] upgrade 2.0.2 to 2.0.9?

2010-04-19 Thread Robert Minvielle
glusterfs-server.vol:

volume posix
  type storage/posix  
  option directory /space/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 16 
  subvolumes locks
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 500MB  
  option flush-behind off   
  subvolumes iothreads
end-volume

volume brick
  type performance/io-cache
  option cache-size 500MB
  option cache-timeout 1
  subvolumes writebehind
end-volume 

volume server
  type protocol/server
  option transport-type tcp/server
  option transport.socket.nodelay on   
  subvolumes brick
  option auth.addr.brick.allow 10.1.10.*,10.1.15.*,10.1.20.*,10.1.30.*,10.1.35.*
,10.1.85.*,10.1.90.*,10.1.95.5
end-volume

glusterfs-client.vol:

volume level42
  type protocol/client
  option transport-type tcp/client
  option remote-host level42
  option transport.socket.nodelay on
  option remote-subvolume brick   
end-volume

volume level43
  type protocol/client
  option transport-type tcp/client
  option remote-host level43
  option transport.socket.nodelay on
  option remote-subvolume brick
end-volume

volume level44
  type protocol/client
  option transport-type tcp/client
  option remote-host level44 
  option transport.socket.nodelay on
  option remote-subvolume brick
end-volume

volume level45
  type protocol/client
  option transport-type tcp/client
  option remote-host level45
  option transport.socket.nodelay on
  option remote-subvolume brick
end-volume

volume level46
  type protocol/client
  option transport-type tcp/client
  option remote-host level46
  option transport.socket.nodelay on
  option remote-subvolume brick
end-volume

volume level47
  type protocol/client
  option transport-type tcp/client
  option remote-host level47
  option transport.socket.nodelay on
  option remote-subvolume brick
end-volume

volume level48
  type protocol/client
  option transport-type tcp/client
  option remote-host level48
  option transport.socket.nodelay on
  option remote-subvolume brick
end-volume

volume level49
  type protocol/client
  option transport-type tcp/client
  option remote-host level49
  option transport.socket.nodelay on
  option remote-subvolume brick
end-volume

volume level50
  type protocol/client
  option transport-type tcp/client
  option remote-host level50
  option transport.socket.nodelay on
  option remote-subvolume brick
end-volume

volume level51
  type protocol/client
  option transport-type tcp/client
  option remote-host level51
  option transport.socket.nodelay on
  option remote-subvolume brick
end-volume

volume level52
  type protocol/client
  option transport-type tcp/client
  option remote-host level52
  option transport.socket.nodelay on
  option remote-subvolume brick
end-volume

volume level53
  type protocol/client
  option transport-type tcp/client
  option remote-host level53
  option transport.socket.nodelay on
  option remote-subvolume brick
end-volume

volume level54
  type protocol/client
  option transport-type tcp/client
  option remote-host level54
  option transport.socket.nodelay on
  option remote-subvolume brick
end-volume

volume level55
  type protocol/client
  option transport-type tcp/client
  option remote-host level55
  option transport.socket.nodelay on
  option remote-subvolume brick
end-volume


volume distribute
  type cluster/distribute
  option lookup-unhashed off
  subvolumes level42 level43 level44 level45 level46 level47 level48 level49 lev
el50 level51 level52 level53 level54 level55
end-volume

volume wb
  type performance/write-behind
  option cache-size 15MB
  subvolumes distribute
end-volume

volume ioc
  type performance/io-cache
  option cache-size 512MB
  subvolumes wb
end-volume

Some logs from one client... I had to go back to 2.0.2, so the server logs are 
gone...


Version  : glusterfs 2.0.9 built on Apr 17 2010 08:09:18
git: v2.0.9
Starting Time: 2010-04-18 07:35:23
Command line : glusterfs --log-level=DEBUG -f /etc/glusterfs/glusterfs-client.vo
l /data 
PID  : 11826
System name  : Linux
Nodename : thx1138
Kernel Release : 2.6.28-gentoo-r5
Hardware Identifier: x86_64

Given volfile:
/
...skipping
[2010-04-18 07:35:23] D [client-protocol.c:6130:init] level42: defaulting frame-
timeout to 30mins
[2010-04-18 07:35:23] D [client-protocol.c:6141:init] level42: defaulting ping-t
imeout to 10
[2010-04-18 07:35:23] D [transport.c:141:transport_load] transport: attempt to l
oad file //lib/glusterfs/2.0.9/transport/socket.so
[2010-04-18 07:35:23] D [socket.c:1410:socket_init] level42: enabling nodelay
[2010-04-18 07:35:23] D [transport.c:141:transport_load] transport: attempt to l
oad file //lib/glusterfs/2.0.9/transport/socket.so
[2010-04-18 07:35:23] D [socket.c:1410:socket_init] level42: enabling nodelay
[2010-04-18 07:35:

[Gluster-users] Permission Problems

2010-04-19 Thread Rafael Pappert
Hello List,

first of all my configuration:

I have 2 GlusterPlatform 3.0.3 Servers virtualized on VMWare Esxi 4. With one 
Volume exported as "raid 1".
I mounted the share with the GlusterClient 3.0.2 with the following /etc/fstab 
line:

/etc/glusterfs/client.vol /mnt/images   glusterfs   defaults   0   0

The client.vol looks like this:

# auto generated by /usr/bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /usr/bin/glusterfs-volgen --conf-dir=/etc/glusterfs --name=images --raid=1 
--transport=tcp --port=10002 --auth=192.168.1.168,192.168.1.167,* 
gluster2:/exports/sda2/images gluster1:/exports/sda2/images

# RAID 1
# TRANSPORT-TYPE tcp
volume gluster2-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.168
option transport.socket.nodelay on
option transport.remote-port 10002
option remote-subvolume brick1
end-volume

volume gluster1-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.167
option transport.socket.nodelay on
option transport.remote-port 10002
option remote-subvolume brick1
end-volume

volume mirror-0
type cluster/replicate
subvolumes gluster2-1 gluster1-1
end-volume

volume writebehind
type performance/write-behind
option cache-size 4MB
subvolumes mirror-0
end-volume

volume readahead
type performance/read-ahead
option page-count 4
subvolumes writebehind
end-volume

volume iocache
type performance/io-cache
option cache-size `grep 'MemTotal' /proc/meminfo  | awk '{print $2 * 0.2 / 
1024}' | cut -f1 -d.`MB
option cache-timeout 1
subvolumes readahead
end-volume

volume quickread
type performance/quick-read
option cache-timeout 1
option max-file-size 64kB
subvolumes iocache
end-volume

volume statprefetch
type performance/stat-prefetch
subvolumes quickread
end-volume

Everything seems to be ok, but I just can write as root user to the mounted 
volume. 

I tried to set the rights with the uid, gid options in the fstab. But no 
success.
Long story short, how can I mount the volume with "user rights".

Best regards,
Rafael.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Self heal with VM Storage

2010-04-19 Thread Pavan Sondur
Hi,
I've logged a bug to fix this issue and can be tracked at:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=831
The fix should be available for the upcoming 3.1 release.

Pavan

- Original Message -
From: "Tejas N. Bhise" 
To: "Justice London" 
Cc: gluster-users@gluster.org
Sent: Saturday, April 17, 2010 8:57:13 AM
Subject: Re: [Gluster-users] Self heal with VM Storage

Thanks, that will help reproduce internally. 

Regards,
Tejas.
- Original Message -
From: "Justice London" 
To: "Tejas N. Bhise" 
Cc: gluster-users@gluster.org
Sent: Friday, April 16, 2010 9:03:50 PM
Subject: RE: [Gluster-users] Self heal with VM Storage

After the self heal finishes it sort of works. Usually this destroys InnoDB
if you're running a database. Most often, though, it also causes some
libraries and similar to not be properly read in by the VM guest which means
you have to reboot it to fix for this. It should be fairly easy to
reproduce... just shut down a storage brick (any configuration... it doesn't
seem to matter). Make sure of course that you have a running VM guest (KVM,
etc) using the gluster mount. You'll then turn off(unplug, etc.) one of the
storage bricks and wait a few minutes... then re-enable it.

Justice London
jlon...@lawinfo.com

-Original Message-
From: Tejas N. Bhise [mailto:te...@gluster.com] 
Sent: Thursday, April 15, 2010 7:41 PM
To: Justice London
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Self heal with VM Storage

Justice,

Thanks for the description. So, does this mean 
that after the self heal is over after some time, 
the guest starts to work fine ?

We will reproduce this inhouse and get back.

Regards,
Tejas.
- Original Message -
From: "Justice London" 
To: "Tejas N. Bhise" 
Cc: gluster-users@gluster.org
Sent: Friday, April 16, 2010 1:18:36 AM
Subject: RE: [Gluster-users] Self heal with VM Storage

Okay, but what happens on a brick shutting down and being added back to the
cluster? This would be after some live data has been written to the other
bricks.

>From what I was seeing access to the file is locked. Is this not the case?
If file access is being locked it will obviously cause issues for anything
trying to read/write to the guest at the time.

Justice London
jlon...@lawinfo.com

-Original Message-
From: Tejas N. Bhise [mailto:te...@gluster.com] 
Sent: Thursday, April 15, 2010 12:33 PM
To: Justice London
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Self heal with VM Storage

Justice,

>From posts from the community on this user list, I know 
that there are folks that run hundreds of VMs out of 
gluster. 

So it's probably more about the data usage than just a 
generic viability statement as you made in your post.

Gluster does not support databases, though many people
use them on gluster without much problem. 

Please let me know if you see some problem with unstructured 
file data on VMs. I would be happy to help debug that problem.

Regards,
Tejas.


- Original Message -
From: "Justice London" 
To: gluster-users@gluster.org
Sent: Friday, April 16, 2010 12:52:19 AM
Subject: [Gluster-users] Self heal with VM Storage

I am running gluster as a storage backend for VM storage (KVM guests). If
one of the bricks is taken offline (even for an instant), on bringing it
back up it runs the metadata check. This causes the guest to both stop
responding until the check finishes and also to ruin data that was in
process (sql data for instance). I'm guessing the file is being locked while
checked. Is there any way to fix for this? Without being able to fix for
this, I'm not certain how viable gluster will be, or can be for VM storage.

 

Justice London

jlon...@lawinfo.com


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users